Google profits from AI content spam generated by ChatGPT and LLMs



Summary

Google can’t or won’t crack down on AI content in search, so content spammers take advantage. And Google makes money.

According to a study by Newsguard, a technology company that evaluates the quality of news and information on the web, 141 brands are likely unknowingly placing programmatic ads on low-quality AI-generated content sites that have “little or no human oversight.”

Newsguard classifies these sites as “Unreliable Artificial Intelligence-Generated News” (UAIN). In the last month alone, this category has grown from 49 to 217 tracked sites, it said. The tracking identifies about 25 new UAIN sites per week.

The sites are identified by telltale AI boilerplate left in published text, such as ChatGPT’s “As an AI language model”. Because this detection method is inherently imprecise, Newsguard assumes a high number of undetected sites.
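The phrase-matching approach described above can be sketched in a few lines. This is a minimal illustration, not Newsguard’s actual method; the phrase list is an assumption based on the example quoted in the article.

```python
# Hypothetical sketch of phrase-based detection of AI-generated pages:
# flag text containing boilerplate that chatbots emit when a request fails
# and that careless operators publish verbatim. The phrase list is an
# assumption; only "as an AI language model" is quoted in the article.
AI_TELLTALES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "as of my knowledge cutoff",
]

def looks_ai_generated(page_text: str) -> bool:
    """Return True if the text contains a known AI boilerplate phrase."""
    lowered = page_text.lower()
    return any(phrase in lowered for phrase in AI_TELLTALES)
```

As the article notes, this method only catches sites that publish raw model output unedited, which is why the true number of such sites is likely much higher than the count tracked.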


These sites use chatbots like ChatGPT to generate articles or rewrite existing articles from major publishers. The quality appears to be good enough to avoid detection by ad tech companies’ anti-spam detectors. One of the sites studied reportedly publishes more than 1,200 articles per day. Every article is used as advertising space.

Google profits from AI spam

Across 55 of the sites classified as UAIN, 141 established brands ran a total of 393 programmatic ads. The ads were served in the United States, Germany, France, and Italy.

356 of these 393 ads, or more than 90 percent, came from Google Ads, Google’s own service for displaying ads on websites. Google makes money on each ad served.

Advertisers can exclude specific sites from their campaigns, but building and maintaining such blocklists requires extensive research and ongoing effort.

With Google Ads and AdSense, Google controls both advertisers’ bids and the delivery of ads to the sites where they appear.


Even before today’s chatbots, spammers generated fake blogs with generic content about life advice or self-optimization that easily attracted thousands of readers, who even commented on them believing they were dealing with humans.

However, Newsguard’s research shows for the first time how the massive growth in this area is being driven by increasingly powerful and accessible language models.
