On a support page, Google details the full list of 180 countries in which Bard is now available. The list spans the globe, but conspicuously omits every member of the European Union. It’s a big absence from what is otherwise a global expansion for Google’s AI.
Google hasn’t officially stated the reason, but it seems reasonable to believe it’s related to the GDPR. Just last month, Italy briefly banned ChatGPT over concerns that the AI couldn’t comply with the regulation. Google slyly hints at this as well, saying that further Bard expansions will be made “consistent with local regulations.”
In other words, Bard probably does things that run afoul of the stricter privacy regulations in the EU. Make of that what you will.
This is the main problem:
Who has access to my Bard conversations?
We take your privacy seriously and we do not sell your personal information to anyone. To help Bard improve while protecting your privacy, we select a subset of conversations and use automated tools to help remove personally identifiable information. These sample conversations are reviewable by trained reviewers and kept for up to three years, separately from your Google Account.
Please do not include information that can be used to identify you or others in your Bard conversations.
I would not go that far.
It is currently possible to use Bard without recording any activity: https://myactivity.google.com/product/bard?utm_source=bard
If Google has not seriously regressed in how they handle user data since I was there this past January, it is very unlikely the data will be used after you explicitly disable collection.
As for why it is not enabled in other jurisdictions? If I were to guess, probably someone was too slow in getting privacy approvals. Those take time. If ChatGPT could be re-enabled in Italy, I am pretty sure Bard will be available there soon.
(TL;DR: Don’t attribute to malice what can be explained by laziness.)
sukru,
It’s a whole new company now; everyone and everything you knew about them has changed.
Haha, j/k, but some day it will be true, and it’s probably already true for employees who were there before you. It’s like revisiting a house or university campus you used to live at; it can be an eerie experience.
It seems very likely to me that it’s the GDPR. You say it could take time, but I don’t know; laws like this are wide-reaching, and it’s conceivable that some services will just stay shut in unfriendly jurisdictions.
Alfman,
I have read they started shutting down on-campus cafes, including some I liked in the past. So, yes, when I eventually visit friends over there, I expect to see a different world.
(I have worked on projects that are more sensitive than Bard, and we already had support for the EU and other regions. I have no doubt Bard will eventually launch there.)
There are basically two parts to the privacy approval: the user-facing policy, and the internal processes to make sure it is met. The second part is pretty much standard at the moment (as of January, at least; again, I don’t expect regression). The first part might require some more back and forth with the attorneys. (I am speculating here.)
As far as I’m aware, GDPR doesn’t have a pre-approval process. Maybe there’s some internal process that requires time, though?
The GDPR includes provisions for erasure of personal data (‘the right to be forgotten’). Speculatively, that could be a problem for AI models like Bard, since once a model has been trained on someone’s data, removing that data’s influence is likely to be very challenging.
https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-to-erasure/
So it’s possible the GDPR makes it legally problematic to incorporate any data from any EU citizen into the model, independent of whether any particular individual has given consent. But this is purely speculation, of course.
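To make that concrete, here is a minimal, purely illustrative Python sketch (a toy least-squares model, nothing to do with Google’s actual systems) of why erasure after training is hard: the trained weights depend on every training record, so deleting a record from the source dataset leaves the learned parameters untouched until you retrain.

    # Illustrative toy only: shows that removing a record from the
    # training data does not remove its influence from a model that
    # has already been trained on it.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))   # 100 "users", 3 features each
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

    def train(X, y):
        # Ordinary least squares: the weights depend on *every* row.
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return w

    w_full = train(X, y)

    # "Erase" user 42 from the dataset...
    X_erased = np.delete(X, 42, axis=0)
    y_erased = np.delete(y, 42)

    # ...but the already-trained weights still reflect that user.
    # Honoring the erasure request means retraining from scratch.
    print(np.allclose(w_full, train(X_erased, y_erased)))  # almost surely False

For a toy model, retraining is trivial; for something the size of Bard, it is enormously expensive, which is presumably why the interaction with the right to erasure is so murky.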
flypig,
There are internal approval processes. Without going into detail, I can say they are extensive, to make sure all the relevant regulatory requirements are met.
As for “deletion of data”, that has already been solved… quite a while ago. Again, can’t go into details, but if you delete an activity from https://myactivity.google.com/, that will be reflected in personalized recommendations, pretty much immediately (there might be extremely minor delays, of course).
Thanks for clarifying and for taking the time to reply.
Unfortunately, that’s not a satisfactory answer for an end user like me. If a process is opaque and mysterious, and the incentives don’t align with my interests, I’m disinclined to trust it. I have no reason to.
But that doesn’t relate to Bard not being available in the EU. One of the weaknesses of the GDPR is that in many areas it assumes companies are trustworthy and acting in good faith.
flypig,
Yeah, that’s a good point. Without an independent ombudsman with actual investigatory powers, it’s down to the company to oversee itself. Unfortunately, representatives sometimes tell the public one thing but then do something else internally. Not too long ago, Google paid to settle claims of misleading and/or deceptive privacy practices.
https://www.reuters.com/legal/exclusive-google-pay-about-400-million-settle-location-tracking-lawsuit-sources-2022-11-14/
I’m pretty sure I was affected by this too; it was a truly despicable violation of user trust by Google. Just to be clear, I believe many employees really do care about ethics and users, and isolated incidents aren’t necessarily representative. But opaque corporate processes, where nobody independently represents user interests, are automatically a bad sign for accountability. The problem, though, is that this is business as usual: most employees work for companies with opaque operations. It’s just the way things are.
Actually…
https://www.theguardian.com/technology/2011/mar/30/google-privacy-reviews-buzz-ftc
Google already agreed to privacy audits for 20 years, starting in 2011. So that leaves another 8 years of direct regulatory checks.
(They messed up during the Buzz era. Since then, all those policies have been subject to government approval. They still occasionally mess up, but much less than other comparable companies.)
And…
Just to avoid confusion: collecting and using a lot of data, versus misusing data, are two separate concerns.
Yes. Google processes massive amounts of user data.
sukru,
I’m really curious about it. However, after several searches I couldn’t find any information about either the auditors or the scope of the audits. All I found were articles from the time saying there would be audits performed by external companies that Google itself would pay for. I can’t even find evidence that the audits happened at all, much less that they were done transparently. Do you know where to find this information? It’s been over a decade, and theoretically several reports should have been furnished by now. (And before anyone interjects: yes, I tried searching Google.)
Alfman,
One such resource: https://cloud.google.com/security/compliance/compliance-reports-manager
And a high level overview of some of the certifications: https://workspace.google.com/learn-more/security/security-whitepaper/page-5.html#independent-third-party-certifications
I am not sure whether these are exactly what you are looking for.
sukru,
I was referring specifically to the audit(s) that you mentioned. Are they not published? If not, then I’d say it’s ironic that even their privacy audits are not transparent.
It’s a change of topic, but obviously customers buying Google cloud services will have their own unique concerns and legal obligations for HIPAA, PCI compliance, etc. Given that OSNews is hosted at Google, these hosting certifications may be relevant to our privacy here on OSNews.