Is DeepSeek Too Good to be True? (Spoiler Alert: Yes)

The good, bad, and worse news continues to roll in around DeepSeek

Is DeepSeek Too Good to be True? (Spoiler Alert: Yes)

Over the past week, DeepSeek has dominated tech and security headlines, with an unrelenting wave of revelations far beyond the typical “any publicity is good publicity” mantra. Let’s recap the remarkable chain of events (note: the Day numbers are approximate):

Day 0: DeepSeek Launches with a Bang

DeepSeek entered the market with bold claims of exceptional performance and efficiency, allegedly upending established AI paradigms. The startup, a scrappy Chinese company, claims it developed its app and model for under $6M—an astonishingly low figure compared to the tech giants. This news sent shockwaves through the stock market, pummeling major players like Nvidia.

Our take: There’s no denying DeepSeek has achieved efficiencies that could help democratize AI, and developers are flocking to its app and R1 model. Budget-conscious users are already seeing tangible benefits.

But remember, you usually get what you pay for. The market’s enthusiastic embrace of these unverified claims has overlooked significant risks. DeepSeek’s rapid shortcuts come with major implications—especially regarding security and privacy, the key obstacles to large-scale AI adoption.

Day 2: Rumors of IP Theft

Allegations arise that DeepSeek’s rapid success might be due to copying OpenAI’s models. Microsoft detected suspicious download activity, and OpenAI has accused DeepSeek of intellectual property theft.

Our take: The legal boundaries between imitation, model distillation, and outright theft are murky, and resolving these disputes could take years—likely long enough to render the outcome irrelevant.

Here’s an analogy: Imagine building a new sports car from scratch. It requires substantial investment in design, engineering, and production. Pioneers like OpenAI invested billions to reach the breakthroughs fueling today’s AI landscape. Could a competitor create a knockoff sports car for far less? Sure—if they borrow designs, use secondhand parts, slap on a shiny paint job, and skip safety checks. Just don’t expect it to handle well at high speeds or survive rough conditions.

If the accusations hold, this type of massive, suspicious API activity preventable. OWASP recently updated its Top Ten LLM Application Risks with LLM10:2025 – Unbounded Consumption. Tools like AppSOCAI already provide safeguards against this and other OWASP or MITRE AI risks.

Day 3: The Privacy and Security Alarms

With echoes of TikTok, a free, wildly popular AI app based in China has triggered major concerns about privacy, politics, and security. Security experts, politicians, and regulators have quickly taken action, banning the app’s use in numerous organizations. DeepSeek faces scrutiny under the EU AI Act and similar emerging regulations worldwide.

But confusion has arisen: What’s the difference between DeepSeek (the company), DeepSeek (the app), and DeepSeek (R1—the AI model)? While the app, hosted in China and rife with security issues, raises obvious concerns, downloading the R1 model and running it on secure platforms like AWS may seem safer—right?

Day 4: Data Breach

DeepSeek (the company) is hacked, exposing personal data of over 1 million users. That didn’t take long.

Day 5: Security Experts Test the R1 Model

AI security experts, including AppSOC start testing and Red Teaming versions of the DeepSeek R1 model. AppSOC tested over 6,400 prompts against the model for a range of model threats (jailbreaking, prompt injection, malware generation, hallucinations, supply chain issues, training data leaks, toxicity, and more) and the model failed more than 35% of all tests. In some categories, failure rates exceeded 90%. Suffice it to say, these results are unacceptable for any enterprise AI application, or any AI project that deals with personal information, sensitive data, or IP.

AppSOC AI Dashboard: DeepSeek-R1 Test Results

Our next blog will provide a comprehensive breakdown of these findings. For now, it’s clear that these failure rates are unacceptable for enterprise AI applications, especially those handling sensitive data, intellectual property, or personal information.

Our take: With over 1.25 million models available on Hugging Face alone—and countless others from reputable sources—there’s no reason to risk using DeepSeek R1 for critical projects. Ensure your AI projects are protected by implementing rigorous safeguards and screening processes to block risky or untrustworthy models.

Final Reminder: Secure your AI infrastructure, and don’t let flashy, budget-friendly promises lure you into compromising your data and security.