Italy's data protection watchdog Garante has levied a €15 million ($15.6 million) fine against OpenAI, citing privacy rule breaches related to its ChatGPT artificial intelligence service.
The regulatory body concluded that OpenAI processed users' personal data to train ChatGPT without proper legal basis and failed to meet transparency requirements in informing users about data collection practices. The investigation also revealed inadequate age verification systems to protect children under 13 from inappropriate AI-generated content.
Beyond the monetary penalty, OpenAI must launch a six-month media campaign across Italy to educate the public about ChatGPT's data collection methods, including how the platform uses information from both users and non-users to train its algorithms.
OpenAI plans to appeal the decision, calling it "disproportionate" and noting that the fine amounts to nearly twenty times its revenue in Italy during the period in question. The company stated that while it aims to work with privacy authorities worldwide, it believes the Garante's approach could hinder Italy's AI development goals.
This isn't the first clash between OpenAI and Italian regulators. In March 2023, Italy temporarily banned ChatGPT over similar privacy concerns, though service was restored a month later after OpenAI implemented required changes.
The €15 million penalty, while substantial, takes into account OpenAI's cooperation during the investigation. Under the EU's General Data Protection Regulation (GDPR), maximum fines can reach €20 million or 4% of global annual turnover, whichever is higher.
The Italian regulator's action reflects growing scrutiny of AI platforms' compliance with European data privacy laws, with Garante emerging as one of the EU's most active enforcement bodies in this space.