Chinese AI startup DeepSeek is emerging as a competitive player in the generative AI space, but security experts are raising concerns about potential vulnerabilities in its platform.
A report from security firm Wiz first raised questions about DeepSeek's security, identifying a significant vulnerability: a publicly exposed database that leaked sensitive information, including chat histories and API secrets. Upon disclosure of the vulnerability, DeepSeek “promptly secured the exposure,” according to Wiz. Though details remain limited, researchers also noted concerns about how the company manages user information. Separate security testing by a joint team from Cisco and the University of Pennsylvania found that DeepSeek-R1’s safety mechanisms struggled against adversarial prompts, with researchers able to bypass its restrictions in multiple cases.
DeepSeek has also experienced service outages, further fueling discussions about the risks of relying on third-party AI services. While such issues are common for cloud-based AI providers, experts caution that any platform handling sensitive corporate data must demonstrate strong security measures.
"For privacy reasons, I would not recommend building on top of their cloud-hosted service offering," says Ruben Boonen, CNE Capability Development Lead with IBM X-Force Adversary Services. "There is a risk."
For companies considering AI integration, security protocols are a key factor. Many enterprise-focused AI providers offer local deployment options to reduce reliance on cloud services. IBM, for example, provides enterprise-grade AI solutions that can be hosted privately, while Meta has released open-source models that organizations can run within their data centers.
Boonen notes that DeepSeek's models can also be run locally, which sidesteps the concerns about its cloud-hosted service. However, he cautions that companies relying on DeepSeek's cloud-based services must be particularly mindful of potential vulnerabilities.
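For teams weighing that option, the sketch below shows roughly what local deployment can look like using the open-source Hugging Face Transformers library. It is a minimal illustration, not a vetted deployment recipe: the model identifier, prompt and generation settings are assumptions for the example, and production use would still require the usual hardening, access controls and capacity planning.

```python
# Minimal sketch: running a distilled DeepSeek model locally with Hugging Face
# Transformers, so prompts and outputs stay on your own infrastructure.
# The model ID, prompt and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed Hugging Face repo name

# Download (or load from a local cache) the tokenizer and model weights.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Build a chat-style prompt using the model's own chat template.
messages = [{"role": "user", "content": "Summarize our data-retention policy options."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response entirely on local hardware.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same pattern applies to other open-weight models, such as Meta's Llama family, and keeps sensitive prompts and outputs inside the organization's own environment rather than a third-party cloud.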
Part of the discussion around DeepSeek involves uncertainty over whether customers can opt out of data collection for AI training. While companies like OpenAI offer such options, it remains unclear whether DeepSeek provides similar assurances.
"In the case of OpenAI, you have some measure of trust that if you tell them not to train on your data, they actually don't do it," Boonen says. "I am not familiar with DeepSeek's opt-outs; even if they are outlined well, I would have less confidence in enforcing those."
Maryam Ashoori, Senior Director of Product Management for IBM watsonx, announced on LinkedIn that watsonx.ai now offers one-click deployment of DeepSeek's distilled models, allowing enterprises to "safely deploy these models on-premises or in the cloud of your choice."