AI vs. data protection

Are ChatGPT and Co. incompatible with the GDPR?

The Italian data protection authority recently announced a nationwide ban on the AI application ChatGPT due to privacy concerns, and German data protection authorities have initiated an investigation and sent a questionnaire to OpenAI. The Italian data protection authority has since lifted the ban and, in Germany as well, measures of this kind appear to be off the table for now. Nevertheless, a fundamental tension is evident between AI and data protection.

ChatGPT and Co.: fundamental privacy concerns?

The concerns expressed by the data protection authorities relate in particular to the questions of whether the processing of personal data in ChatGPT is compatible with the fundamental principles of the GDPR, whether this processing rests on a valid legal basis and whether data subjects are adequately informed. These concerns are not new and apply equally to other AI solutions which process personal data. Upon closer examination, it quickly becomes clear that the use of AI raises considerable challenges under data protection law if one holds to a strict interpretation of the GDPR:

  1. Accuracy of data processing
    False information may have grave consequences for data subjects. For this reason, the GDPR generally provides that only factually accurate personal data may be processed and that inaccurate personal data must be deleted or corrected without delay. But as things stand, when systems like ChatGPT are asked to formulate statements about specific people, the AI will frequently add inaccurate information, such as invented biographical details. This tendency, known as “hallucination,” conflicts with the principle of accuracy in data processing. The GDPR also gives data subjects a right to rectification in cases involving the processing of inaccurate data.
  2. Legal basis: prohibition subject to approval
    In accordance with the GDPR, all processing of personal data requires a legal basis. Accordingly, the question is often raised whether personal data may be processed at all in connection with the training of AI systems. After all, this process involves “feeding” AI systems with large amounts of information which is publicly available online, such as websites, publications and journal articles, including the personal data they contain. While text and data mining is expressly permitted under § 44b(1) of the German Copyright Act, in the absence of the data subject’s consent, the only possible legal basis for this processing under the GDPR is that of a legitimate interest (Article 6(1)(f) of the GDPR). Whether the controller’s interest in processing the data outweighs the data subject’s interest in preventing it must be examined on a case-by-case basis; the data protection authorities and the courts have yet to form a conclusive assessment on this question.
  3. Transparent information
    The GDPR requires those who process personal data to make the processing transparent for data subjects and to notify them accordingly. This duty to notify data subjects applies both where personal data is collected from the data subject directly (Article 13 of the GDPR) and where it is collected from third parties (Article 14 of the GDPR). The notification must be transparent and comprehensible and must be conveyed in clear and simple language. AI solutions like ChatGPT face several challenges in this regard. The use of AI is highly technical, so providing transparent and easily understandable information is a challenge for this reason alone. Moreover, the duty of notification generally extends to data subjects whose data was used to train the system, even if they have never interacted with the system at all. After discussions with the Italian data protection authority, OpenAI now provides much more extensive information about the processing of personal data and the functioning of ChatGPT.

Conformance of AI with data protection law

There is a potential conflict between data protection law and the “nature” of AI systems; after all, the much-vaunted “intelligence” of AI is based on the extensive processing of (personal) data. But it is also clear that permanently banning AI systems would be both unrealistic and unwise, unless Europe wants to abandon the development of AI systems entirely. Resolving conflicts involving the use of AI systems requires an interpretation of the GDPR which is technology- and innovation-friendly. Companies using AI systems would be well advised to conduct a data protection impact assessment (DPIA) in each individual case. In our experience, it is easier to demonstrate that a given use of AI conformed to data protection law if the specific risks associated with that use, as well as suitable measures to address those risks, are well documented.
