As generative AI models become more widely available, you may be integrating them into your apps. In line with Google’s commitment to responsible AI practices, we want to help ensure AI-generated content is safe for people and that their feedback is incorporated. Early next year, we’ll be requiring developers to provide the ability to report or flag offensive AI-generated content without needing to exit the app. You should utilize these reports to inform content filtering and moderation in your apps – similar to the in-app reporting system required today under our User Generated Content policies.
I like that this will be a system-wide requirement, which will slowly make it a common sight on Android, and thus, something users expect and know how to work with. In the same blog post announcing this new generative “AI” policy, Google also announced tighter rules around certain broad application permissions, limits on full-screen notifications, and more.
That’s none of your fucking business, Google. Just leave us alone instead of trying to police everything.
There will be a day when generative AI is just another tool in the toolbox, and there will be absolutely nothing anyone, including Google, can do about it. I understand that people will have strong opinions on the use of AI, but ultimately those opinions do not matter, because it’s not police-able. The technological restrictions needed to make such opinions enforceable would be far more dystopian than generative AI itself. One way or another, generative AI is coming; people may as well get used to it.