Google Gemini may tailor AI answers based on query tone: Report


A newly published, unverified report claims Google’s Gemini AI is instructed to mirror user tone and validate emotions in its responses.

Why we care. If accurate, AI-generated search responses may vary based on how a query is phrased, not just on the information available.

What’s new. The report centers on a previously undisclosed internal structure referred to as upcast_info, which appears to contain system-level instructions guiding how Gemini responds. Published by Elie Berreby, head of SEO and AI search at Adorama, the report suggests that Gemini is instructed to:

  • Match the user’s tone, energy, and intent.
  • Validate emotions before responding.
  • Deliver answers aligned with the user’s perspective.

What it means. Rather than acting as a neutral aggregator, the AI may produce answers that:

  • Reinforce negative framing (“Why is X bad?”).
  • Reinforce positive framing (“Why is X great?”).

If public perception is negative, AI may amplify it. As the report suggests:

  • AI reflects existing sentiment signals.
  • It doesn’t “balance” them the way blue links often do.

Query framing. The emotional framing of a query affects:

  • Which sources get cited.
  • How summaries are written.
  • The overall tone of the answer.

Google’s AI Overviews already show tone shifts, often aligning with query intent rather than keywords alone. This report offers a possible explanation for that behavior.

Unverified. Google hasn’t confirmed the leak. As Berreby noted in his report: “I’ve decided to share only a fraction of the leaked internal system information with the general public. I’m not sharing any sensitive data. This isn’t a zero-day exploit. This is a tiny leak.”

The report. This Gemini Leak Means You Can’t Outrank a Feeling