As reported earlier, Google’s AI chatbot Bard has launched in the EU. We understand it did so after making enhancements to transparency and user controls, but the bloc’s privacy regulators remain watchful, and key decisions on how to apply the bloc’s data protection rules to generative AI are still to be made.
The Irish Data Protection Commission (DPC) said it will engage with Google on Bard following the launch. According to the DPC, Google has agreed to carry out a review and report back to the watchdog in three months (around mid-October). In the coming months, the AI chatbot will therefore face increased regulatory attention, if not a formal probe.
The European Data Protection Board (EDPB) also has a GDPR taskforce examining AI chatbots. The group has so far focused on OpenAI’s ChatGPT, but concerns about Bard are expected to be folded into the DPAs’ coordinated enforcement work.
Google improved transparency and user controls before Bard’s arrival. “We will continue our engagement with Google in relation to Bard post-launch,” stated DPC deputy commissioner Graham Doyle. “Google have agreed to carry out a review and provide a report to the DPC after three months of Bard becoming operational in the EU.”
“In addition, the European Data Protection Board set up a task force earlier this year, of which we are a member, to look at a wide variety of issues in this space,” he said.
A DPC request for more information had delayed Bard’s EU launch: the regulator said it had not been presented with a data protection impact assessment (DPIA), an essential compliance document for identifying and mitigating risks to fundamental rights. Failure to produce a DPIA on request is a regulatory red flag.
Google “proactively engaged with experts, policymakers and privacy regulators on this expansion” to mitigate regulatory risk in the EU, according to an official blog post.
A representative for the internet giant pointed to a number of transparency and user-control improvements made before launching Bard in the EU, including limiting access to Google Account holders aged 18 and over.
She added that a new Bard Privacy Hub makes it easier for users to find information about the privacy options available to them.
The Privacy Hub states that Google’s legal bases for Bard are performance of a contract and legitimate interests, with the latter appearing to cover the bulk of the processing. As the product develops, it may ask users to consent to the processing of their data for specific purposes.
The Hub also makes clear that Google only lets users delete their own Bard usage activity; there is no option to have data removed from the chatbot’s training set.
It provides an online form for reporting a problem or legal issue, and notes that users can request a correction of inaccurate information about them or object to the processing of their data (an objection right that EU law requires when processing relies on legitimate interests).
Another Google web form lets users request the removal of content under the company’s policies or applicable laws. Most obviously that covers copyright complaints, but Google also points users to this form to object to its processing of their data or to request a correction, so it is as close as Bard gets to a “delete my data from your AI model” option.
Google’s spokesperson also highlighted the controls users have over how long their Bard activity data is retained, and the ability to turn that retention off entirely.
“Users can also choose how long Bard stores their data with their Google Account—by default, Google stores their Bard activity in their Google Account for up to 18 months, but they can change this to three or 36 months,” the spokeswoman continued. “They can also turn this off completely and easily delete their Bard activity at g.co/bard/myactivity.”
Google’s emphasis on transparency and user control for Bard echoes the changes OpenAI made to ChatGPT under regulatory pressure from Italy’s data protection authority.
Earlier this year the Garante ordered a local suspension of ChatGPT and handed OpenAI a list of data privacy concerns to address, a move that drew widespread attention.
After working through that initial to-do list, OpenAI restored ChatGPT in Italy within a few weeks. The changes included adding privacy disclosures about the data processing used to build and train ChatGPT, letting users opt out of their data being used for AI training, and allowing Europeans to ask for their data to be deleted, including where the chatbot generated inaccurate information about them.
On child safety, OpenAI was also required to add an age gate and work on stronger age-assurance measures.
Italy also ordered OpenAI to remove references to performance of a contract as its legal basis for the processing; OpenAI relaunched ChatGPT in Italy relying on legitimate interests (LI) instead. Lawfulness of processing is one of the concerns the EDPB taskforce is examining.
OpenAI made those improvements to ChatGPT in response to the Italian DPA’s intervention, but the investigation has not been closed; a Garante spokesperson confirmed today that the probe is ongoing.
ChatGPT, unlike Google’s Bard, is also under investigation by several other EU DPAs.
OpenAI’s chatbot therefore arguably carries more regulatory risk and uncertainty than Google’s, which is not under formal DPC investigation yet.
The EDPB taskforce could reduce regulatory uncertainty if EU DPAs reach a common position on enforcing the GDPR against AI chatbots.
However, some authorities are already focusing on generative AI technologies themselves. OpenAI and Google both scrape publicly available web data to build large language models such as ChatGPT and Bard, and France’s CNIL published an AI action plan earlier this year that highlighted protection against scraping.
So the taskforce may not deliver unanimity among DPAs on how to handle generative AI chatbots.