Google taps large language models to cut invalid ad traffic by 40%


Google is deploying large language models (LLMs) from its Ad Traffic Quality team, Google Research, and DeepMind to better detect and block invalid traffic – ad activity generated by bots or by users with no genuine interest in the ads – across its platforms.

Why we care. Invalid traffic drains advertiser budgets, skews publisher revenue, and undermines trust in the digital ad ecosystem. Google’s upgraded defenses aim to identify problematic ad placements more precisely, stopping policy-violating behaviors before they impact campaigns. For advertisers, that should mean fewer wasted impressions, better targeting accuracy, and stronger budget protection.

By the numbers. Google reported a 40% reduction in invalid traffic tied to deceptive or disruptive ad serving practices. It credits faster detection of risky placements: the models analyze app and web content, ad placements, and user interactions in real time.
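
To make that mechanism concrete, here is a minimal sketch in Python of how an LLM-based screen for risky placements might work. Everything here – the signal fields, the prompt wording, and the 0.7 threshold – is an illustrative assumption, not a description of Google’s actual systems.

```python
# A minimal sketch of real-time placement screening with a generic LLM.
# The signals, prompt, and threshold are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class PlacementSignals:
    page_excerpt: str         # app/web content surrounding the ad slot
    slot_description: str     # where and how the ad is rendered
    interaction_summary: str  # aggregated, anonymized interaction stats

RISK_PROMPT = """You review ad placements for invalid traffic.
Rate the risk that this placement is deceptive or disruptive
(e.g. ads hidden behind content, forced or accidental clicks)
on a scale from 0.0 to 1.0.

Page excerpt: {page_excerpt}
Ad slot: {slot_description}
Interactions: {interaction_summary}

Respond with only the number."""

def score_placement(signals: PlacementSignals, llm_call) -> float:
    """Ask the LLM for a risk score; llm_call is any prompt -> str function."""
    prompt = RISK_PROMPT.format(
        page_excerpt=signals.page_excerpt,
        slot_description=signals.slot_description,
        interaction_summary=signals.interaction_summary,
    )
    return float(llm_call(prompt))

def should_serve(signals: PlacementSignals, llm_call,
                 threshold: float = 0.7) -> bool:
    """Block ad serving when the modeled risk exceeds the threshold."""
    return score_placement(signals, llm_call) < threshold
```

In practice, a score like this would supplement, not replace, the automated and manual checks described below.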

Between the lines. Google already runs extensive automated and manual checks to ensure advertisers aren’t billed for invalid traffic. The LLM-powered approach, however, represents a potential leap in speed and accuracy – one that could make deceptive ad strategies far harder to profit from.