Issue #2 — Sources and Notes
Published April 14, 2026
This is the sources post for Issue #2. If you haven’t read the issue yet, start there.
Methodology
On April 14, 2026, I ran 12 queries in ChatGPT Plus ($20/month, ad-free tier) on the topic of automatic coffee machines.
Each query was sent in a fresh, temporary chat. No follow-ups. No memory carryover. No personalization based on prior queries. The goal was to isolate the variable: what does ChatGPT recommend when you ask about the same product category from different angles?
The 12 queries:
- “best automatic coffee machine”
- “worst automatic coffee machine”
- “best automatic coffee machine under $1000”
- “best automatic coffee machine for home”
- “best espresso machine for oily beans”
- “alternatives to Jura coffee machine”
- “best automatic coffee machine for beginners”
- “best automatic coffee machine 2026”
- “based only on Reddit, best coffee machine”
- “best fully automatic coffee machine”
- “best De’Longhi automatic coffee machine”
- “best Breville coffee machine”
In addition, I ran follow-up category queries: “coffee machine for hard water,” “best automatic coffee machine with grinder,” “best compact automatic coffee machine,” “best automatic coffee machine for small kitchen,” and “quiet automatic espresso machine.”
All screenshots are unedited and timestamped April 14, 2026.
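For anyone who wants to replicate the protocol at scale: the ChatGPT web UI has no automation interface, but a rough analogue can be scripted against OpenAI’s public Chat Completions API. This is a sketch under a loud assumption: the API is a different system from ChatGPT Plus (no shopping surface, no temporary-chat mode), so its answers may not match what I screenshotted. The model name is a placeholder.

```python
# Hypothetical sketch: re-running the 12-query protocol via the OpenAI API.
# Assumption: the API is NOT the same system as the ChatGPT Plus web UI used
# in this test, so treat any output as an analogue, not a replication.
import json
import urllib.request

QUERIES = [
    "best automatic coffee machine",
    "worst automatic coffee machine",
    "best automatic coffee machine under $1000",
    "best automatic coffee machine for home",
    "best espresso machine for oily beans",
    "alternatives to Jura coffee machine",
    "best automatic coffee machine for beginners",
    "best automatic coffee machine 2026",
    "based only on Reddit, best coffee machine",
    "best fully automatic coffee machine",
    "best De'Longhi automatic coffee machine",
    "best Breville coffee machine",
]

def ask(query: str, api_key: str) -> str:
    """Send one query with no prior messages -- the API analogue of a
    fresh, temporary chat with no memory carryover."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": "gpt-4o-mini",  # placeholder: any chat-capable model
            "messages": [{"role": "user", "content": query}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Each call to `ask()` starts from an empty message list, which is the closest the API gets to the no-carryover condition in my methodology.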
Observable facts (sourced from my own screenshots)
Across the 12 primary queries:
- De’Longhi appeared as a recommended brand in at least 9 out of 12 queries, including both “best” and “worst” framings.
- Jura appeared in at least 6, concentrated in premium and 2026 framings.
- Philips appeared in at least 5, concentrated in fully-automatic and under-$1000 framings.
- Breville appeared in at least 5, concentrated in espresso-quality and Reddit framings.
Long-tail brands appeared for niche framings:
- Miele (alternatives to Jura)
- Terra Kaffe (alternatives to Jura)
- Ninja (Reddit, hard water)
- Gaggia (Reddit)
- Keurig (hard water)
- Toastmaster (worst)
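The counts above come from tallying which brands each query’s response recommended. A minimal sketch of that tally, using placeholder per-query brand sets rather than my actual transcripts:

```python
from collections import Counter

# Placeholder data: one set of recommended brands per query.
# (Not the real transcripts -- just three entries to show the method.)
results = {
    "best automatic coffee machine": {"De'Longhi", "Jura", "Philips"},
    "worst automatic coffee machine": {"De'Longhi", "Toastmaster"},
    "based only on Reddit, best coffee machine": {"Breville", "Gaggia", "Ninja"},
}

# Count the number of queries in which each brand appeared at least once.
# Using sets per query means a brand counts once per query, not per mention.
appearances = Counter(brand for brands in results.values() for brand in brands)
print(appearances["De'Longhi"])  # → 2 with this placeholder data
```

With the full 12-query dataset, this is the computation behind figures like “De’Longhi appeared in at least 9 out of 12 queries.”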
The “worst” query returned a De’Longhi All-in-One Combination Coffee & Espresso Machine as the number-one worst pick, with a 3.0 rating. The same brand appeared as “best overall” in multiple other queries.
Interpretation (this is my hypothesis, not OpenAI’s confirmation)
The claim in the issue — that ChatGPT surfaces the most-cited brands rather than the highest-quality ones — is my interpretation of the pattern. OpenAI has not published anything that confirms or denies this framing.
What I observed is consistent with how the underlying retrieval likely works: large language models trained on web data, augmented with live search and shopping retrieval, will tend to surface entities that appear most frequently across the source corpus. But the framing “repetition beats quality” is mine.
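The “repetition beats quality” hypothesis reduces to a simple mechanism: a frequency-driven ranker and a quality-driven ranker can disagree. A toy illustration with entirely invented brands and scores:

```python
from collections import Counter

# Toy corpus: each document mentions some brands (invented data).
corpus = [
    ["BrandA", "BrandB"],
    ["BrandA"],
    ["BrandA", "BrandC"],
    ["BrandB"],
]
# Independent "quality" scores, e.g. expert ratings (also invented).
quality = {"BrandA": 3.0, "BrandB": 4.5, "BrandC": 4.8}

freq = Counter(b for doc in corpus for b in doc)
by_frequency = max(freq, key=freq.get)      # what a citation-driven ranker surfaces
by_quality = max(quality, key=quality.get)  # what a quality-driven ranker would pick
print(by_frequency, by_quality)  # → BrandA BrandC
```

BrandA wins on mention count while BrandC wins on quality score. If retrieval leans on frequency, the most-cited brand surfaces first regardless of its rating, which is the pattern my screenshots are consistent with, though not proof of.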
If anyone at OpenAI publishes a clearer technical explanation of how ChatGPT ranks product recommendations, I will update this post and link to it.
What is documented about how ChatGPT picks products
Semrush experiment, January 2026: ChatGPT runs encoded queries through Google Shopping in the background to compose product carousel recommendations. Across 100 runs of the experiment, ChatGPT’s top product matched a Google Shopping top-3 result 75% of the time. https://www.semrush.com/blog/chatgpt-searches-google-shopping/
OpenAI on the Agentic Commerce Protocol, March 24, 2026: Merchants share product feeds and promotions through ACP. Shopify stores are integrated automatically through Shopify Catalog. Best Buy, Walmart, Target, Williams-Sonoma, Nordstrom, Home Depot, and Wayfair are integrated. https://openai.com/index/powering-product-discovery-in-chatgpt/
OpenAI on ChatGPT Shopping Research, December 2025: A specialized GPT-5 mini variant powers shopping queries. According to OpenAI’s own benchmarks, the model achieves 52% product accuracy on complex multi-constraint queries, compared to 37% for standard ChatGPT Search. https://www.retaildive.com/news/openai-launches-chatgpt-shopping-research-feature/806656/
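For clarity on what the Semrush 75% figure measures: it is a top-1-in-top-3 overlap rate. A sketch of that metric with invented run data (product names are placeholders):

```python
# Each run pairs ChatGPT's top product with Google Shopping's top three
# for the same query. Invented data: 3 of 4 runs match.
runs = [
    ("Machine X", ["Machine X", "Machine Y", "Machine Z"]),
    ("Machine Y", ["Machine X", "Machine Y", "Machine Z"]),
    ("Machine Q", ["Machine X", "Machine Y", "Machine Z"]),
    ("Machine X", ["Machine X", "Machine Y", "Machine Z"]),
]

# A run "matches" when ChatGPT's top pick appears anywhere in Google's top 3.
matches = sum(top in google_top3 for top, google_top3 in runs)
rate = matches / len(runs)
print(f"{rate:.0%}")  # → 75%
```

Semrush’s version is the same computation over 100 runs of real queries; this just makes the metric concrete.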
Domains ChatGPT cited as sources during these 12 queries
Pulled from the “What people are saying” panels on individual product cards:
- coffeeness.de
- tomscoffeecorner.com
- seattlecoffeegear.com
- bestbuy.com
- reddit.com
- facebook.com
- homecoffeeexpert.com
- afoodloverkitchen.com
- gastrofork.com
- coffeedant.com
- coffeeblog.co.uk
- espressorabbithole.com
- wholelattelove.com
- preppykitchen.com (preppygood.com)
- coffeechronicler.com
- coffeekev.com
- coffeegeek.com
- T3.com
Notable: this is a mix of dedicated coffee review sites, large retailers, and community platforms. Not a single mainstream review outlet (NYT’s Wirecutter, CNET, etc.) appeared among the cited sources during this round of testing.
A note on language
During the 12 queries, ChatGPT responded in English, Spanish, and a mix of both, sometimes within the same response. This is consistent with the model pulling content from globally distributed sources and stitching them together at retrieval time. It is not random, and it is not a bug. It is the model showing its sourcing.
Correction policy
If anything in this post or the issue is wrong, email me at sacha@paidaisearch.com and I will fix it, credit the correction, and note the date. No defensiveness. Brutal honesty includes being wrong in public.
Go deeper
The CRS Encyclopedia covers the full operational framework behind these signals — 28 chapters, free.
Read the encyclopedia →