The Italian Competition Authority (AGCM) has initiated an investigation into Chinese artificial intelligence startup DeepSeek over concerns that the company may have misled consumers by failing to provide adequate warnings about the risk of false or misleading content generated by its AI models.
In a statement released on Monday, the AGCM said DeepSeek had not given users “sufficiently clear, immediate, and intelligible” notice that its AI systems could produce inaccurate, misleading, or fabricated information—commonly referred to as “hallucinations,” in which AI tools generate seemingly plausible but false responses to user queries.
The AGCM, which also enforces consumer protection laws, emphasized that users must be properly informed of the limitations and potential inaccuracies of AI-generated content, especially as such tools become more widely used in personal, educational, and professional contexts.
At the time of publication, DeepSeek had not responded to Reuters’ request for comment.
This marks the second Italian regulatory action involving DeepSeek this year. In February 2025, Italy’s data protection authority (Garante per la Protezione dei Dati Personali) ordered DeepSeek to block access to its chatbot for failing to comply with privacy requirements, including transparency about data usage and user rights.
The AGCM’s latest probe highlights growing regulatory scrutiny across Europe over the transparency, safety, and ethical use of generative AI technologies. Authorities have increasingly called for AI developers to implement robust safeguards and clear user guidance to prevent misuse and mitigate harm.
The investigation is ongoing, and the AGCM has not yet specified a timeline for its conclusions or potential sanctions.