Artificial intelligence and transparency – a data protection impossibility?

Artificial intelligence (AI) has become an indispensable part of our society, and its influence on our decisions is constantly increasing; sometimes it even takes decisions for us entirely. AI already makes independent decisions in the context of autonomous driving, disease detection, and voice assistants. Although it offers numerous advantages, headlines such as "Fired by the algorithm," "When algorithms refuse a home loan," or "Warning about racist and sexist AI" tarnish this positive image and, at the same time, highlight the dangers and risks that the use of AI can pose to the rights and freedoms of those affected. It is therefore not surprising that the principle of transparency is becoming increasingly important, and that those affected, in particular, want to know what makes the AI tick and how it makes its decisions.

The "brain" of artificial intelligence

AI applications comprise multiple processing steps, but at their core is the AI model, whose architecture and methods were created by the developer. Without subsequent training, the AI model can rightly be described as ignorant, as the model only acquires knowledge through training with sample data. The sample data consist of an input value, e.g., an image, a text, or other data, and a target value that the AI model is to predict from the input data: for example, whether a customer is creditworthy for a loan based on their information.

With each training run, the model learns by continually adjusting its parameters, also called model weights, and ideally delivers increasingly better predictions. The resulting AI consists of the AI model used and the trained model weights.
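The training process described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (the sample data and the single-neuron model are invented for this example, not taken from any real application): a tiny model predicts creditworthiness from two made-up input values and "learns" by nudging its weights after every training run.

```python
# Minimal sketch (hypothetical data): how an AI model acquires knowledge by
# adjusting its model weights over repeated training runs. Here, a single
# artificial neuron predicts creditworthiness from two invented input values
# (e.g., normalized income and debt ratio).
import math

# Sample data: (input values, target value) pairs.
# Target value 1.0 = creditworthy, 0.0 = not creditworthy. Values are made up.
samples = [
    ((0.9, 0.1), 1.0),
    ((0.8, 0.3), 1.0),
    ((0.2, 0.8), 0.0),
    ((0.3, 0.9), 0.0),
]

weights = [0.0, 0.0]  # the model weights before training: the model is "ignorant"
bias = 0.0
learning_rate = 0.5

def predict(x):
    """Weighted sum of the inputs, squashed to a probability between 0 and 1."""
    z = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1.0 / (1.0 + math.exp(-z))

# Each training run nudges the weights a little to reduce the prediction error.
for run in range(1000):
    for x, target in samples:
        error = predict(x) - target
        weights[0] -= learning_rate * error * x[0]
        weights[1] -= learning_rate * error * x[1]
        bias -= learning_rate * error

# After training, the knowledge sits entirely in the trained weights.
print(predict((0.85, 0.2)))  # resembles the creditworthy samples: close to 1
print(predict((0.25, 0.85)))  # resembles the non-creditworthy samples: close to 0
```

The point of the sketch is the last two lines: the same code that knew nothing before training now separates the two kinds of applicants, purely because the weights have been adjusted.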

In most cases, the AI model is modeled after the human brain in the form of neurons or neural networks. For very large models, experts also speak of deep learning; such a network often comprises several thousand neurons. Even if the AI model itself and the arrangement of the neurons are clearly visible, it usually remains a black box. This is because the trained model weights, and the decisions derived from them, are not always comprehensible.
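The black-box problem can be made concrete with a small sketch. All weights below are invented for illustration: every parameter of this tiny network is fully visible, yet the numbers alone do not explain *why* the network produces a particular prediction for a particular input.

```python
# Sketch (invented numbers): a tiny trained network with 2 inputs,
# 3 hidden neurons, and 1 output. Every weight is "clearly visible",
# but the reasoning behind a prediction is spread across all of them.
import math

hidden_weights = [[2.1, -1.3], [-0.7, 1.9], [0.4, 0.8]]  # 3 neurons x 2 inputs
hidden_bias = [0.1, -0.2, 0.05]
output_weights = [1.5, -2.2, 0.9]
output_bias = -0.3

def relu(z):
    """Standard activation: pass positive values, cut off negative ones."""
    return max(0.0, z)

def forward(x):
    """Run the input through the network and return a probability."""
    h = [relu(sum(w * v for w, v in zip(ws, x)) + b)
         for ws, b in zip(hidden_weights, hidden_bias)]
    z = sum(w * v for w, v in zip(output_weights, h)) + output_bias
    return 1.0 / (1.0 + math.exp(-z))

# The full list of parameters is available for inspection...
print(hidden_weights, output_weights)
# ...but the resulting number for a given applicant cannot be traced back
# to a human-readable reason such as "income too low".
print(forward((0.9, 0.1)))
```

Scaled up from 3 hidden neurons to several thousand, this is exactly the transparency problem the article describes: disclosure of the weights is not the same as comprehensibility of the decision.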

The principle of transparency

For this reason, it is necessary to look at the origins of the concept of transparency in data protection law. Only those who know who knows what about them, and when and on what occasion, can make independent decisions about the use of their data (cf. BVerfGE 65, 1 (43)). Conversely, those who have no knowledge of how their data is being processed are also unable to exercise their rights under Articles 15 to 22 GDPR for the protection of their personal data.

Furthermore, the idea of transparency is reflected in numerous places in the GDPR. The three most important standards are the disclosure and information obligations under Articles 13 to 15 GDPR, which also represent the central prerequisites for safeguarding the rights of data subjects. Only with sufficient information can the data subject make a truly informed decision regarding the processing of their personal data.

In summary, the principle of transparency describes the state of making the processing of data visible and understandable for “everyone”.

What does this mean in practice?

When AI is used, controllers face the challenge of complying with the principle of transparency. Violations of this obligation may result in a fine under Art. 83 GDPR. To avoid this risk, controllers must present the functioning of the AI in a comprehensible manner.

Practical implementation options:

Considering that AI is a black box, one option would be to disclose the AI model or the source code, including the model weights. After all, these are the linchpins of the AI application.
