Without the proper governance policies, training programs and security controls, unsanctioned AI tools could become a huge risk for companies worldwide.
DeepSeek’s meteoric rise has taken the tech world by storm. With an advanced AI model that rivals leading competitors at a fraction of the cost, runs on less powerful hardware, and is openly accessible as open source, the newcomer has challenged established industry norms and prompted significant market reactions.
Risky business: But DeepSeek brings with it potential corporate risk if it is misused within organizations that haven’t implemented advanced data security controls. "Employees want to use AI because it makes them more productive, but if the recent DeepSeek cyber attacks teach us anything, it’s that these AI tools can be more vulnerable than we think," says Tim Morris, Chief Security Advisor at cybersecurity endpoint management company Tanium. "Without the proper governance policies, training programs and security controls, unsanctioned AI tools could become a huge risk for companies worldwide."
DeepSeek’s open source nature opens it up to exploration by adversaries and enthusiasts alike, says Chester Wisniewski, Director and Global Field CTO at Sophos, the British managed cybersecurity services company. "Like Llama, it can be played with and largely have the guardrails removed. This could lead to abuse by cybercriminals, although it’s important to note that running DeepSeek still requires far more resources than the average cybercriminal has."
Finding solutions: Addressing these security risks requires a multi-pronged approach. "Controls that block unapproved apps, use data loss prevention (DLP) to control data movement into approved apps, and leverage real-time user coaching to empower people to make informed decisions when using GenAI apps are currently among the most popular tools for limiting GenAI risk," says Ray Canzanese, Director of Netskope Threat Labs, the research arm of cloud security company Netskope.
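To make the idea concrete, here is a minimal sketch of what such controls might look like in code: block traffic to unapproved GenAI apps, run a simple DLP pattern check on outbound text, and coach the user rather than silently failing. The domain list, regex patterns, and policy messages are illustrative assumptions, not any vendor's actual configuration.

```python
import re

# Hypothetical allow list of sanctioned GenAI apps.
APPROVED_GENAI_DOMAINS = {"chat.approved-genai.example"}

# Toy DLP detectors; real products ship far richer classifiers.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-like number
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # card-number-like digits
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # credential-like string
]

def evaluate_upload(domain: str, text: str) -> str:
    """Return a policy decision for text leaving the device for a GenAI app."""
    if domain not in APPROVED_GENAI_DOMAINS:
        return "BLOCK: unapproved GenAI app"
    if any(p.search(text) for p in SENSITIVE_PATTERNS):
        # Real-time user coaching: warn instead of silently dropping the request.
        return "COACH: possible sensitive data detected; confirm before sending"
    return "ALLOW"

if __name__ == "__main__":
    print(evaluate_upload("chat.unknown-ai.example", "summarize this memo"))
    print(evaluate_upload("chat.approved-genai.example", "note: api_key=abc123"))
    print(evaluate_upload("chat.approved-genai.example", "draft a polite email"))
```

In practice these decisions run inside a secure web gateway or endpoint agent rather than application code, but the three-way allow/coach/block logic is the same.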
Having a resilient approach that provides visibility into unapproved AI usage by detecting LLMs and related scripts on employee devices is key, says Morris. "These tools can help IT departments track unauthorized downloads, flag suspicious activity, and ensure compliance with company policies."
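As an illustration of that kind of visibility, the sketch below walks a directory tree looking for local LLM artifacts, such as large model-weight files and well-known runtime binaries, and flags them for IT review. The extensions, file names, and size threshold are common conventions assumed for the example, not an exhaustive or vendor-specific detection list.

```python
from pathlib import Path

MODEL_EXTENSIONS = {".gguf", ".safetensors", ".ggml"}    # common weight formats
RUNTIME_NAMES = {"ollama", "llama-server", "koboldcpp"}  # illustrative runtimes

def scan_for_llm_artifacts(root: str) -> list[dict]:
    """Flag files under root that look like local LLM weights or runtimes."""
    findings = []
    for path in Path(root).rglob("*"):
        try:
            if not path.is_file():
                continue
            # Heuristic: model weights are typically hundreds of MB or more.
            if path.suffix.lower() in MODEL_EXTENSIONS and path.stat().st_size > 100_000_000:
                findings.append({"path": str(path), "reason": "large model-like file"})
            elif path.stem.lower() in RUNTIME_NAMES:
                findings.append({"path": str(path), "reason": "known LLM runtime name"})
        except OSError:
            continue  # skip unreadable files rather than abort the scan
    return findings

if __name__ == "__main__":
    for finding in scan_for_llm_artifacts(str(Path.home())):
        print(f"[flag] {finding['reason']}: {finding['path']}")
```

A production endpoint agent would add hash-based identification and report findings to a central console, but even a simple inventory like this gives IT a starting point for tracking unauthorized downloads.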
Due to its cost-effectiveness, we are likely to see various products and companies adopt DeepSeek, which potentially carries significant privacy risks, says Wisniewski. "As with any other AI model, it will be critical for companies to make a thorough risk assessment, which extends to any products and suppliers that may incorporate DeepSeek or any future LLM. They also need to be certain they have the right expertise to make an informed decision."
High stakes: Getting AI security right is vital for organizations if they want to reap the benefits of the technology. "The companies that can strike the right balance between innovation and security will thrive in the AI-powered future; those that can’t will continue flying blind without any IT visibility," says Morris.