How AI affects the Privacy Policy and Terms of Use
Congratulations on the new feature in your application! Now that you have built it and started rolling it out (the release is coming soon! Or has it already happened?), an urgent question arises:
Do I need to change my Privacy Policy and my site agreement (Terms of Use / Terms of Service)?
And then, immediately, two more:
Who will make me?
What exactly should I change?
This article will help you understand what to do as soon as the software engineering work is finished, so that your use of AI stays healthy (and does not erode the trust of privacy-conscious users).
Let’s start with the Privacy Policy!
In fact, the GDPR itself requires you to disclose information about the use of AI, right in the text of several of its articles. For example, Articles 13 and 14 clearly state that the controller must disclose:
- personal data being processed (for example, personal data that is present in the training dataset);
- the purposes of the processing (i.e. for what purposes the AI with access to personal data will be used);
- recipients of personal data (if you use a third-party model rather than your own);
- the periods for which personal data is processed and stored;
- the logic of the AI's operation, if it is used to make automated decisions with legal or other serious consequences (for example, a credit refusal, the sale of services or goods, a doctor's appointment, etc.);
- sources of the data (if the model obtains the data not directly from the user but from another dataset or material).
Some other countries have similar requirements for disclosing information about processing.
We remind you that the GDPR is a technologically neutral regulation: it covers manual as well as semi- or fully automated processing. Artificial intelligence is no exception: the GDPR applies to it, as do other personal data protection laws (provided, of course, that personal data is actually processed in the AI's operation).
Moreover, if your model provider is located not in the EU but in a third country such as the USA or Ukraine, a Data Protection Impact Assessment (DPIA) may have to be conducted before the model is even allowed to access any personal data of your users. The DPIA can be very helpful here: it is where you describe in detail the possible risks, the available tools and the mitigation strategies, including the protection measures you take to ensure that nothing happens to Europeans' data in a third country.
In addition, it is better to describe the data subjects' rights, and how they can exercise them, in the privacy policy. Can they have their data removed from the AI model's training datasets? Can they correct inaccurate data? Can they get access to their data and a description of the operations performed on it (read: the purposes and legal grounds)? How can they ask a living person to step in and review an automated decision? All of this belongs, at minimum, in the privacy policy (and, even better, also in the design of personal accounts or as separate tools with quick access from the footer, for example).
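To make this concrete, here is a minimal sketch of how such requests might be routed in a backend. Everything in it is a hypothetical illustration, not a reference implementation: the RightsRequest type, the RequestKind names and the placeholder handlers are all invented for this example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class RequestKind(Enum):
    ACCESS = "access"          # a copy of the data, purposes, legal grounds
    RECTIFICATION = "rectify"  # correction of inaccurate data
    ERASURE = "erase"          # removal, including from training datasets
    HUMAN_REVIEW = "human"     # a living person reviews an automated decision


@dataclass
class RightsRequest:
    user_id: str
    kind: RequestKind
    received_at: datetime


def handle_rights_request(req: RightsRequest) -> str:
    """Route a data subject request to the relevant internal process.

    The handlers are placeholders; a real system would talk to the user
    database, the training-data pipeline and the support queue.
    """
    handlers = {
        RequestKind.ACCESS: f"export data and processing log for {req.user_id}",
        RequestKind.RECTIFICATION: f"open rectification ticket for {req.user_id}",
        RequestKind.ERASURE: f"erase {req.user_id} from app DB and training sets",
        RequestKind.HUMAN_REVIEW: f"queue decision on {req.user_id} for review",
    }
    return handlers[req.kind]


print(handle_rights_request(
    RightsRequest("user-42", RequestKind.ERASURE, datetime.now(timezone.utc))
))
```

The point of the enum is that every right promised in the policy maps to exactly one concrete, testable path in your system, which makes the promises auditable.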
Also remember: if you are using user data to train a model on the basis of their consent, give them instructions and a tool to withdraw that consent just as quickly, at any time.
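Here is a sketch of what "just as quickly" can mean in code, again with invented names (ConsentRecord, withdraw): withdrawal is a single call that timestamps the record, and anything downstream (such as the training pipeline) checks the active flag before using the data.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ConsentRecord:
    """A user's consent to the use of their data for model training."""
    user_id: str
    purpose: str = "model_training"
    granted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    withdrawn_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        # One call, no friction: withdrawal is as fast as granting consent.
        # A real system would also notify the training pipeline here so the
        # user's data is excluded from future training runs.
        self.withdrawn_at = datetime.now(timezone.utc)


record = ConsentRecord(user_id="user-42")
record.withdraw()
print(record.active)  # False: the data may no longer be used for training
```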
And now let’s move on to the public offer (Terms of Use)
Terms of Use, Terms of Service, ToS, ToU, EULA, Terms and Conditions… You will find many names hiding essentially the same thing: an agreement between you and the user regarding the provision of services (and the collection of fees, if applicable).
And here we go again: new information will have to be added here too!
For example, depending on the nature of your product, you can add clauses about:
- prohibited uses of your AI-powered product;
- copyright and related rights (if AI generates text, images, audio or other works protected by copyright);
- monetization models (if you charge for an AI feature, but the pricing model does not charge the same amount for every API call);
- compliance with fair competition laws (for example, not passing off AI output as the work of a specific human artist);
- warranties of product performance (including its AI components), as well as disclaimers and limitations of those warranties;
- obligations for users of your product (or model) to disclose information about the use of AI (and the AI itself) in their own privacy policies;
- restrictions and limits on the AI imposed by you, by the model provider or by the platform where the product is used (stock photo sites or marketplaces for illustrations and texts, for example); a minimal sketch of enforcing such limits in code follows this list;
- if still relevant by the time you read this article, export or other state-imposed restrictions (for example, Chinese censorship laws) on the use and/or sale of an AI product to other countries, etc.
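On the limits point above (and on metered monetization), here is a minimal sketch of how a per-user quota can be enforced. UsageMeter, monthly_quota and allow_call are hypothetical names invented for this illustration; a real system would persist the counters and reset them per billing period.

```python
from collections import defaultdict


class UsageMeter:
    """Track per-user AI calls against a monthly quota."""

    def __init__(self, monthly_quota: int) -> None:
        self.monthly_quota = monthly_quota
        self.calls: defaultdict[str, int] = defaultdict(int)

    def allow_call(self, user_id: str) -> bool:
        """Return True if the user may make another AI call this month."""
        if self.calls[user_id] >= self.monthly_quota:
            return False  # limit reached; point the user at the ToS clause
        self.calls[user_id] += 1
        return True


meter = UsageMeter(monthly_quota=2)
print(meter.allow_call("user-42"))  # True
print(meter.allow_call("user-42"))  # True
print(meter.allow_call("user-42"))  # False: quota exhausted
```

The practical benefit for the ToS is that the limits clause can cite exact numbers the product actually enforces, instead of vague "reasonable use" wording.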
The absence of these clauses can not only breed mistrust among more AI-savvy users but also flood the help desk (and the legal department) with questions and complaints from users who do not understand the legal status of the product (or who, if you run a B2B service, will assemble their own compliance puzzle in order to resell parts of your product inside their own).
Conclusions: what’s next?
We are still waiting for omnibus AI laws (such as the AI Act) in the USA and the EU before anyone can be sure they have accurately foreseen all legal risks. But even without them, there are many sources from which a knowledgeable lawyer can draw information and regulatory models to embed in the logic of your documents.
Don’t rely on luck alone! It is better to inform your users about the use of AI right away and give them all the legal levers to control their rights and data (and to understand the limitations of the technology and of your responsibility to them) than to be sorry in the courtroom.