ChatGPT vs Italian Supervisory Authority: who wins?
2023 saw a real technological boom: products based on artificial intelligence flooded the market. They handle a wide range of tasks: Midjourney generates images from text, Soundful lets you create music, and SlidesAI can prepare presentations for you. The most famous and popular of them, however, is ChatGPT.
ChatGPT (Generative Pre-trained Transformer) is an AI-based chatbot developed by the US company OpenAI that provides information and answers users' requests. The chatbot can generate an essay for you, create a new recipe, or even write program code.
Despite the proven practical benefits of AI in many fields, its use carries risks for users' privacy and data protection.
An instructive story of exactly this kind unfolded around ChatGPT in Italy, the first European country to ban the chatbot.
So what happened to ChatGPT in Italy, and did the story have a happy ending?
30 March 2023
The Italian Supervisory Authority (Italian SA) imposed an immediate temporary limitation on the processing of Italian users' data by OpenAI. In effect, this meant a ban on ChatGPT in Italy.
According to the Italian SA, a data breach affecting ChatGPT users' conversations and the payment information of subscribers to the service was reported on 20 March. An inquiry into the facts of the case was initiated, and the Italian SA identified several problems.
So, what are the reasons for banning ChatGPT in Italy?
| Problem identified | GDPR violation |
| --- | --- |
| No information is provided to users and data subjects whose data are collected by OpenAI | Art. 13 (information to be provided where personal data are collected from the data subject) |
| No legal basis underpins the massive collection and processing of personal data used to "train" the algorithms on which the platform relies | Art. 6 (lawfulness of processing) |
| The information made available by ChatGPT does not always match factual circumstances, so inaccurate personal data are processed | Art. 5 ("accuracy" principle) |
| The lack of any age verification mechanism exposes children to responses that are inappropriate to their age and awareness, even though the service is allegedly addressed to users aged above 13 according to OpenAI's terms of service | Art. 8 (conditions applicable to child's consent) |
The authority also noted a violation of Art. 25 GDPR (data protection by design and by default).
The Italian SA gives OpenAI 20 days to implement measures complying with the order; otherwise, a fine may be imposed.
4-8 April
- The Italian SA announces that OpenAI has agreed to collaborate to protect the interests of Italian users.
- OpenAI proposes measures to resolve the problems, and the Italian SA assesses them.
11 April
The Italian SA announces that it will lift the temporary limitation if OpenAI implements the following measures by 30 April 2023:
- Information
- Legal basis
- Exercise of data subjects' rights: OpenAI will add easily accessible tools enabling data subjects, including non-users, to exercise their rights
- Protection of children
- Awareness-raising campaign (by 15 May 2023)
27 April
OpenAI updates its privacy policy.
28 April
The Italian SA announces that OpenAI has reinstated the service in Italy with enhanced transparency and rights for European users and non-users.
What lessons can be learned for developing GDPR-compliant AI?
1. Use a GDPR-compliant dataset for training the model.
- Data must be collected specifically for the purpose of training AI algorithms.
- The legal basis for processing must be either consent or legitimate interest. Using data that were collected for the performance of a contract to train AI violates the GDPR.
- Such data must comply with the accuracy principle.
- The dataset must be diverse enough that the AI does not become biased or discriminate against anyone.
- Stipulate in the privacy policy what personal data are collected and how long they will be stored.
2. Provide data subjects with easily accessible tools for exercising their rights, in particular to object to the processing of their personal data or to obtain rectification of personal data that were generated incorrectly.
3. Set up a reliable system for verifying users' age.
4. Ensure the necessary security measures for personal data protection (e.g. pseudonymisation and encryption).
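As a minimal illustration of point 4, the sketch below pseudonymises a direct identifier with a keyed hash before a record enters a training dataset. Everything here (the key, the `pseudonymise` helper, the sample record) is a hypothetical example, not OpenAI's actual implementation; the point is that re-identification requires a secret key stored separately from the data.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice it must be stored separately from the
# dataset (key separation is what distinguishes pseudonymisation from plain
# hashing, which can be reversed by brute-forcing common identifiers).
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same pseudonym, so records can still
    be linked for analysis, but only the key holder can re-link pseudonyms
    to real identities.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Example record entering a training pipeline: the email is replaced,
# the non-identifying content is kept as-is.
record = {"user_id": "mario.rossi@example.com", "prompt": "Write me a recipe"}
safe_record = {**record, "user_id": pseudonymise(record["user_id"])}
```

Encryption at rest and in transit would complement this; pseudonymisation alone does not take the data outside the scope of the GDPR, it only reduces the risk if the dataset leaks.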
Conclusion
Based on the chronology of events, ChatGPT's ban in Italy had a happy ending for all parties: OpenAI implemented additional measures for users' data protection and reinstated the service in Italy, while the European supervisory authorities created a dedicated task force on ChatGPT.
At the same time, after the events in Italy, the European Commission began taking a more scrupulous approach to generative AI such as ChatGPT. In particular, it plans to add a provision to the AI Act requiring such companies to disclose any copyrighted material used to develop their systems.
Despite the potentially significant benefits of using AI in many fields, numerous legal issues remain, including protecting users against unlawful use of their personal data. We therefore recommend being mindful of the information you give the chatbot access to or share with it.