Fake online reviews have long plagued the internet, but the advent of generative AI tools may amplify the problem.
Historically, fake reviews on platforms like Amazon and Yelp were crafted manually or incentivized through offers such as gift cards. Now, AI-powered tools let fraudsters churn them out at scale, complicating efforts to maintain trust and transparency in online marketplaces, the Associated Press reported.
Generative AI tools, like OpenAI’s ChatGPT, have given rise to a new wave of deceptive practices that threaten consumer trust.
Watchdog groups report a surge in AI-generated reviews, particularly during high-demand shopping seasons like the holidays. The Transparency Company, a watchdog group using AI detection software, found that nearly 14% of the reviews it analyzed in key sectors such as legal, medical, and home services were fake, per the AP. Of these, millions were suspected to be AI-generated, illustrating how quickly scammers have adopted the technology.
This issue extends beyond e-commerce, affecting industries as varied as hospitality, healthcare, and mobile app development.
Fraudulent reviews, designed to appear thoughtful and detailed, often boost the visibility of poorly reviewed apps or untrustworthy products. In August, the software company DoubleVerify observed a rise in AI-crafted app reviews that lured users into downloading malicious software. The Federal Trade Commission has also taken action, suing companies that use AI tools like Rytr to generate thousands of fake reviews for businesses ranging from garage repair services to counterfeit handbag sellers.
Identifying AI-generated reviews remains a significant challenge for both consumers and platforms.
Advanced algorithms can produce text indistinguishable from human writing, and some fake reviews have even earned high-ranking positions on major platforms. This is especially problematic on sites like Yelp, where fraudulent reviews are sometimes posted to achieve “Elite” badges, granting access to exclusive events and bolstering the credibility of scam profiles. While some AI-assisted reviews may reflect genuine sentiments, watchdogs warn that the misuse of these tools is eroding trust in online feedback.
Companies like Amazon, Yelp, and Trustpilot are racing to address the problem.
Amazon permits AI-assisted reviews as long as they reflect authentic experiences, while Yelp has adopted stricter guidelines requiring users to write their own content. These platforms also deploy advanced algorithms and investigative teams to detect and remove fake reviews. However, critics argue that their efforts fall short: advocacy groups like Fake Review Watch claim they can find thousands of fake reviews on any given day, despite tech companies' assurances of robust detection systems.
The FTC’s new rule banning fake reviews, implemented in October, represents a significant step forward. The regulation enables the agency to fine businesses and individuals engaging in review fraud. Still, experts caution that the fight against fake reviews is far from over.
As AI continues to evolve, maintaining trust in online reviews will depend on collaboration between tech companies, regulators, and vigilant consumers.