Chinese AI startup DeepSeek inadvertently exposed sensitive user data through unsecured databases.
DeepSeek, known for its DeepSeek-R1 large language model (LLM), left two ClickHouse database instances publicly accessible without authentication. Security researchers at Wiz discovered that these databases contained over a million log entries, including user chat histories, API keys, backend system details, and operational metadata. The exposure meant that anyone with knowledge of the database URLs could query and access sensitive information.
The unsecured databases, hosted at oauth2callback.deepseek.com:9000 and dev.deepseek.com:9000, allowed arbitrary SQL queries via a web interface. The ‘log_stream’ table stored internal logs dating back to January 6, 2025, containing user chat histories, API keys, backend service details, and other operational metadata.
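To illustrate the kind of access involved: ClickHouse exposes an HTTP interface that executes whatever SQL is passed in a `query` parameter, so an unauthenticated instance can be read with nothing more than a crafted URL. The sketch below builds such a URL for a hypothetical host (the real DeepSeek instances have since been secured); the hostname and port are illustrative assumptions, not the exposed endpoints.

```python
from urllib.parse import urlencode

def clickhouse_http_url(host: str, query: str, port: int = 8123) -> str:
    """Build a query URL for ClickHouse's HTTP interface.

    On an instance exposed without authentication, the server executes
    whatever SQL appears in the `query` parameter.
    """
    return f"http://{host}:{port}/?{urlencode({'query': query})}"

# Hypothetical host for illustration only.
url = clickhouse_http_url("db.example.com", "SHOW TABLES")
print(url)
```

Enumerating tables with `SHOW TABLES` and then reading them with ordinary `SELECT` statements is all the "exploitation" such an exposure requires, which is why Wiz rated the risk as severe.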
Wiz Research warned that the exposure posed a severe security risk, allowing potential attackers to retrieve plaintext passwords and proprietary data. Although it remains unclear whether malicious actors exploited this vulnerability, Wiz promptly reported the issue, and DeepSeek secured the databases soon after.
This incident underscores the need for rigorous cybersecurity practices, particularly at AI companies handling vast amounts of user data. Exposed chat logs raise significant privacy concerns, especially for businesses using AI tools for confidential operations.
Moreover, the exposure of backend details and API keys could have led to privilege escalation attacks, granting unauthorized access to DeepSeek’s internal network and potentially causing larger-scale breaches. This security lapse, coupled with DeepSeek’s recent struggles against persistent cyberattacks, raises concerns about the company’s preparedness against future threats.
To prevent similar incidents, AI companies must adopt proactive security measures:

- Require authentication on every database instance, including internal and development deployments.
- Keep database ports on private networks instead of exposing them to the public internet.
- Continuously scan the external attack surface for unintentionally exposed services.
- Avoid logging sensitive data such as plaintext chat content and API keys, and encrypt anything that must be retained.
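One small, automatable check along these lines is auditing database access-control configuration for accounts with no password. The sketch below parses a ClickHouse-style users.xml fragment and flags passwordless users; the sample XML and function name are illustrative assumptions, not DeepSeek's configuration.

```python
import xml.etree.ElementTree as ET

# Illustrative fragment mirroring the shape of a ClickHouse users.xml file.
SAMPLE_USERS_XML = """
<clickhouse>
  <users>
    <default><password></password></default>
    <analyst><password_sha256_hex>0f0f0f</password_sha256_hex></analyst>
  </users>
</clickhouse>
"""

def users_without_passwords(xml_text: str) -> list[str]:
    """Return user names defined with an empty <password> element."""
    root = ET.fromstring(xml_text)
    weak = []
    for user in root.find("users"):
        pw = user.find("password")
        if pw is not None and not (pw.text or "").strip():
            weak.append(user.tag)
    return weak

print(users_without_passwords(SAMPLE_USERS_XML))  # → ['default']
```

Running a check like this in CI, alongside external port scans, catches the "forgot to lock down the dev instance" class of mistake before researchers or attackers do.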