While generative AI is a step towards automation, it also poses a number of problems, not least authenticity.
How do you know if the reviews you read about a product are true and written by people who have actually tried the product?
That’s the battle Google intends to wage from August 28, 2023 with its new product review policy. The new policy provisions include restrictions on the use of automated content and artificial intelligence.
To better understand these provisions, this article looks at what the new policy contains, the penalties for violating it, and how to maintain authenticity online.
Follow along!
What to take away from the updated product review policies on automated and AI-generated content?
Google has modified its product review policies to address AI-generated and automated content. The updated policies state that reviews generated by AI or automation should be flagged as spam.
Here’s what the new rule states:
Automated content: we do not allow reviews generated primarily by an automated program or artificial intelligence application. If you have identified such content, please mark it as spam in your feed using the <is_spam> attribute.
Google
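In practice, this flag lives in the product reviews feed that a merchant or review aggregator submits to Google. The snippet below is a minimal, abbreviated sketch of how a suspect review might be flagged: `is_spam` is the attribute named in the policy, while the surrounding element names and values are illustrative and omit most of the fields a real feed requires.

```xml
<!-- Abbreviated sketch of a product reviews feed entry.
     Only is_spam comes from the policy text; the other
     elements and values are illustrative placeholders. -->
<review>
  <review_id>example-123</review_id>
  <content>Review text suspected of being AI-generated…</content>
  <!-- Flag the review as spam so it is excluded -->
  <is_spam>true</is_spam>
</review>
```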
To ensure compliance with these new policies, Google combines automatic and human evaluation methods. Machine learning algorithms are employed to support this effort, while specially trained experts handle more complex cases requiring context.
Measures taken against violations include disapproving the infringing reviews, issuing warnings, or suspending accounts in the event of repeated or serious infringements.
In addition, if images are reported as being in breach of the policy, content relating to those images will also be blocked.
Additional product rating guidelines
Google’s existing product rating policies aim to preserve the authenticity, legality and ethics of reviews on its platform.
The policies also prohibit reviews that promote unsafe products or dangerous acts, as well as reviews of potentially harmful or illegal products.
To protect reviewers, the policy prohibits sharing personal and confidential information, such as phone numbers, email addresses or URLs, in review content.
To ensure a clean and respectful review environment, Google’s policies prohibit obscene, profane, offensive language, violent or defamatory content and personal attacks.
Measures against such violations are severe, and include not only removal of the review but also reporting to law enforcement, particularly when the violation involves minors.
The search engine also stresses that reviews infringing copyright do not comply with its guidelines. The same applies to reviews involving plagiarism.
Google has stated in its updated policy that reviews must be submitted in the original language, with a translation option offered by Google for users.
Maintaining authenticity online
Google’s policy updates highlight the growing need for human-created content that is clearly distinguishable from AI-generated content.
This demonstrates the role of humans in review and rating systems, and limits the effectiveness of AI-generated reviews. These updates could therefore affect the way certain companies promote their products.
This commitment helps to ensure that the information and reviews in Google’s search results are reliable, which is essential for the trust of online businesses and consumers alike.
In a nutshell
Google’s product review policy updates now explicitly address AI-generated content. The search engine considers AI-generated reviews to be spam, and therefore a violation of its policy.
Google has taken steps to identify such automatically-generated content, and to impose sanctions ranging from blocking the review to deleting the user’s account.