Italy Hits OpenAI with €15M Fine for Privacy Violations in ChatGPT Data Practices



Italy's data protection authority has imposed a 15 million euro ($15.58 million) fine on OpenAI, the company behind ChatGPT, following an investigation into its handling of personal data.

The Italian privacy watchdog Garante determined that OpenAI violated European Union privacy regulations by collecting and using personal information to train ChatGPT's algorithms without proper legal grounds or user consent.

According to Garante's findings, OpenAI failed to maintain transparency and did not adequately inform users about how their personal data would be utilized in the AI system's training process.

This penalty follows Garante's temporary ban on ChatGPT in Italy last year due to similar privacy concerns. The service was later reinstated after OpenAI implemented changes to address key issues, including giving users the option to decline the use of their personal data for algorithm training.

OpenAI, which is backed by Microsoft, has maintained that its practices comply with EU privacy laws, though the company has not yet commented on this latest fine.

The Italian regulator has emerged as one of the EU's most active authorities in examining AI platforms' adherence to the bloc's data protection standards. This enforcement action represents one of the most substantial penalties imposed on OpenAI for privacy violations to date.