EU’s Taskforce Shares Preliminary Findings on OpenAI’s Accuracy and Privacy Practices
The European Data Protection Board’s ChatGPT Taskforce has offered a first look into its year-long scrutiny of OpenAI’s privacy compliance and the accuracy of ChatGPT’s responses. The taskforce noted that OpenAI has not done enough to address inaccurate responses from the generative AI tool.

- This week, the European Data Protection Board’s ChatGPT Taskforce offered a first look into its year-long scrutiny of OpenAI’s privacy compliance and the accuracy of ChatGPT’s responses.
- The ChatGPT Taskforce comprises national privacy watchdogs from European Union (EU) member states.
A ChatGPT Taskforce set up by the European Data Protection Board (EDPB) has published preliminary findings, noting that OpenAI has not done enough to address inaccurate responses from the generative AI tool.
The report of the work undertaken by the ChatGPT Taskforce is based on an investigation conducted by the EDPB, which comprises national privacy watchdogs from European Union (EU) member states.
The ChatGPT Taskforce laid out several aspects of its investigation, including the transparency and legality of OpenAI’s data collection practices, the lawfulness of using ChatGPT-generated outputs for training, fairness to users regarding their inputs via prompts, and the accuracy of outputs.
The ChatGPT Taskforce’s preliminary findings are based on more than 12 months of work to ascertain whether OpenAI’s consumer-facing generative AI chatbot complies with the EU’s privacy regulations.
“It has to be noted that the purpose of the data processing is to train ChatGPT and not necessarily to provide factually accurate information. As a matter of fact, due to the probabilistic nature of the system, the current training approach leads to a model which may also produce biased or made-up outputs. In addition, the outputs provided by ChatGPT are likely to be taken as factually accurate by end users, including information relating to individuals, regardless of their actual accuracy. In any case, the principle of data accuracy must be complied with,” EDPB noted.
“Although the measures taken in order to comply with the transparency principle are beneficial to avoid misinterpretation of the output of ChatGPT, they are not sufficient to comply with the data accuracy principle, as recalled above.”
The EDPB’s report reiterated that OpenAI cannot offload privacy risks under the General Data Protection Regulation (GDPR) onto users by adding clauses to its terms and conditions stating that data subjects are responsible for their chat inputs.
OpenAI is also obligated to apprise users that it is using content input through prompts for training. Further, OpenAI’s data collection, pre-processing, and training practices came under scrutiny because they “carry peculiar risks for the fundamental rights and freedoms of natural persons as ‘web scraping’ enables the automated collection and extraction of certain information from different publicly available sources on the Internet.”
OpenAI is expected to define precise collection criteria, ensuring specific data categories are not collected and certain sources, including public social media profiles, are excluded from data collection. The company is also expected to undertake measures to delete or anonymize personal data collected through web scraping before using it for training.
The EU’s AI Act is still months away from being enforced, which means any violations by OpenAI would be subject to GDPR penalties of up to 4% of annual global revenue. Under the AI Act, the prescribed penalties are up to €35 million or 7% of global revenue, whichever is greater.
OpenAI previously drew the ire of the Italian Data Protection Authority (DPA), which banned ChatGPT in April 2023 “until ChatGPT respects privacy.” Access to ChatGPT was restored after OpenAI complied with the Italian privacy watchdog’s list of demands. The ChatGPT Taskforce was set up after Italy highlighted its concerns over OpenAI’s service.
Following the tussle with the Italian DPA, OpenAI established its EU operations in Ireland, bringing it under the jurisdiction of the Irish Data Protection Commission.