Italy's data protection authority, Garante, has imposed a €15 million ($15.58 million) fine on OpenAI following an investigation into ChatGPT's data collection and privacy practices. The penalty comes after authorities found multiple violations of data protection regulations.
The investigation found that OpenAI failed to be transparent about its use of personal data for training ChatGPT and lacked a proper legal basis for collecting that data. Authorities also noted inadequate age verification, leaving minors exposed to inappropriate AI-generated content.
A key concern was OpenAI's failure to notify the authority of a security breach that occurred in March 2023. The company's training processes were also found to violate the European Union's General Data Protection Regulation (GDPR).
Beyond the monetary penalty, Garante has mandated OpenAI to launch a six-month public awareness campaign. The campaign must explain ChatGPT's operations, including its data collection methods, training models, and user rights. This initiative aims to help both users and non-users understand how their personal data might be used and how to exercise their privacy rights.
OpenAI has announced plans to appeal the decision, calling the fine "disproportionate." The company said the penalty is nearly 20 times the revenue it earned in Italy during the relevant period. While maintaining its commitment to user privacy, OpenAI emphasized its efforts to develop AI solutions that respect data protection requirements.
This case underscores the growing tension between AI technology advancement and regulatory compliance, particularly in regions with strict data protection laws like the European Union.