When OpenAI debuted ChatGPT Shopping last week, in a challenge to Google, its CEO, Sam Altman, described the upgrade as a “concierge” that can translate a plain-language request (say, “sleek waterproof earbuds under $150 that ship fast”) into a shortlist complete with photos, summarized reviews, and a single click-to-buy link. The appeal is obvious: less scrolling, more relevant recommendations, fewer suspicious sponsored results alongside more organic ones, and no need to decode boilerplate five-star reviews. Yet research on automation bias in consumer finance has found that once an algorithm confidently nominates “the best” choice, many people stop questioning the result and stop comparing alternatives. Service and product providers know this; because consumers rarely click past the first results page when comparison-shopping, they compete fiercely to appear there. In the mortgage context, for example, platforms may feature the lenders that pay the highest referral fees at the top, as though they were ranked by lowest interest rate, while relegating lower-fee lenders with equal or better rates to page two. In early 2023, CFPB Director Rohit Chopra issued the Advisory Opinion on Digital Mortgage Comparison-Shopping Platforms and Related Payments to Operators, warning that this kind of “non-neutral presentation” can steer consumers in violation of the law.

Cutting search friction can feel like a gift, until it becomes a blindfold that dulls our instinct to ask, “What other options exist?” By tapping into a user’s full chat history, ChatGPT Shopping takes personalization to a new extreme: it can home in on each individual’s “sweet spot,” subtly steering decisions at a hyper-personalized level while raising acute privacy and data-management risks should any sensitive details slip out.
Unlike Facebook, which pieces together check-ins, page likes, browsing behavior, and purchase records to serve ads, ChatGPT Shopping would arguably synthesize nuanced conversational clues (tone, past preferences, even off-hand remarks) to shape recommendations in ways that go far beyond traditional profiling. After The Australian revealed a leaked 23-page Facebook memo showing the platform could identify moments when teens might need an emotional boost, Facebook responded that, although it monitors overall sentiment to improve its services, it does not permit advertisers to target users based on those emotional signals. Moving beyond emotional targeting, ChatGPT Shopping would need robust safeguards, including data minimization, transparent and explainable personalization algorithms, granular user controls over which past chats inform recommendations, and independent audits of its hyper-personalization mechanisms, to prevent undue influence and preserve consumer autonomy and privacy. Indeed, a recent large-scale field experiment on Reddit’s r/ChangeMyView found that AI-generated, personalized replies achieved persuasion rates up to six times higher than human comments, demonstrating just how potent and realistic LLM-driven persuasion can be in everyday online interactions.

ChatGPT and the EU: User Surge and DSA Scrutiny
Across the Atlantic, where the memory upgrade will not be offered in jurisdictions such as the United Kingdom and some other European countries, ChatGPT’s search functionality has exploded in popularity, so much so that EU regulators are considering whether it should soon be classified as a “Very Large Online Platform” under the Digital Services Act (DSA). In just six months, ChatGPT Search has grown from 11.2 million to 41.3 million monthly users in the European Union. Once it hits 45 million, the DSA automatically imposes this classification, and with it requirements such as annual risk audits, user opt-outs from recommendation systems and profiling, obligatory researcher access, and fines of up to 6 percent of global annual turnover. Crucially, the user count that triggers the classification is measured on the search function alone, not the new shopping feature, yet once designated, even recommendation tweaks within ChatGPT Shopping would be subject to the DSA’s strict transparency requirements. In other words, the surge in users may compel OpenAI to reveal its ranking logic long before any affiliate fees start flowing.

ChatGPT Shopping: SEO, Affiliate Links, and Social-Media Monetization Lessons

OpenAI frames the new tool as an answer, in many ways, to the sticky situation created by search-engine optimization (SEO). SEO is the craft of rewriting headlines, alt-text, and even model numbers so that certain webpages climb Google’s search result rankings. The technique can be useful (entities and stakeholders want to be found), but it also fuels an arms race of keyword-stuffed descriptions, recycled stock photos, and fake or paid-for reviews. ChatGPT Shopping claims to look past those tricks and elevate genuine quality. Yet Altman has mentioned that he would be open to “tasteful” advertising in which OpenAI charges affiliate fees for purchases made through ChatGPT.

But how would ChatGPT’s recommendation system work? Would it resemble the advertising models now common on social media? Scholars have documented, in a growing body of work, how similar revenue streams reshaped social networks: time and again, what began as authentic, peer-to-peer sharing morphed into pay-to-play storefronts.

The parallel is sharper when we recall the creator economy’s authenticity crisis. Marketing researchers Reto Hofstetter and Johanna Franziska Gollnhofer coined the phrase “creator’s dilemma” after studying influencers torn between staying “real” and maximizing commissions. Exploring this tension, the Harvard Business Review recently published a piece arguing that the industry “needs guardrails” and warning that undisclosed sponsorships corrode trust, while a June 2024 Forbes piece found that a large majority of shoppers assume influencers do not actually use what they pitch. If ChatGPT quietly pockets tiered commissions, the best-case outcome is a swift trust collapse, an Instagram-style authenticity erosion condensed into months. The more troubling possibility is that users’ deep dependence on its “voice” for everything from homework and therapy prompts to travel planning and even election-related information could blunt skepticism, leaving them unable or unwilling to second-guess or abandon its recommendations. This isn’t hypothetical: the field trial mentioned above showed that personalized LLM comments on r/ChangeMyView changed the original poster’s mind 18 percent of the time, six times the 3 percent rate for human comments, showing how powerfully an AI “voice” can persuade.

Consumer-Protection Flashpoints for ChatGPT Shopping

The first flashpoint is disclosure. Both U.S. Federal Trade Commission endorsement rules and the DSA require any material connection, such as a commission, to be disclosed in a “clear and conspicuous” manner. Because a chat interface has no hashtags or banner labels, the disclosure must appear within the very sentence that recommends the product, for example: “I may earn a commission if you buy through this link.” Anything less risks reviving the opacity that plagued early social commerce. After all, consumers quickly lose trust in an influencer who earns commissions or accepts freebies without clearly disclosing that relationship.

Second is the need for second opinions, a gap my automation-bias study flagged. I proposed “hyper-nudges”: simple prompts like “Compare three alternatives,” “See independent reviews,” or “Ask me why I ranked this first.” In user tests, even a three-second pause restored deliberation. Embedding those nudges could make ChatGPT Shopping a teaching tool rather than a shepherd.

Third comes youth vulnerability. In my study This Is Not a Game, my co-author and I showed how emotionally immersive chatbots exploit dopamine loops, pulling younger generations into marathon conversations. Add friction-free shopping suggestions, and impulse buys could surge. South Korea now limits late-night in-app purchases and requires spending caps for minors, guardrails worth adapting before a “midnight retail therapy” habit starts to form.

Finally, transparency buys time. OpenAI could voluntarily publish high-level ranking factors (“price-to-performance ratio,” “verified owner reviews outweigh unverified,” “no commission influences relevance”) and invite accredited researchers to audit outputs for affiliate bias. Facebook, Instagram, and TikTok learned, under subpoena, that opacity breeds retroactive regulation. Sunlight, offered early, often earns lighter oversight. Moreover, harnessing crowdsourced knowledge can enrich audit processes, and RegTech tools, such as automated compliance platforms and blockchain-based tracking, are increasingly used to monitor recommendation integrity in real time.

A Playbook for Safer ChatGPT Conversational Commerce

OpenAI should take the lead by implementing real-time affiliate disclosures, built-in comparison nudges, age-aware spending controls, and a researcher portal to prevent ChatGPT Shopping from repeating social media’s worst instincts. The recent field trial on Reddit, where AI comments changed minds at six times the human rate, underscores just how urgent these safeguards are. Lawmakers might then fine-tune rather than overhaul: extend influencer-disclosure rules to “AI curators,” fund sandbox audits, and mandate age-appropriate design in conversational commerce. Consumers, for their part, should treat ChatGPT Shopping’s pitch like the words of a charismatic salesperson: often helpful, never gospel. Open another tab, run one manual search, or consult a human friend before clicking “Buy Now.” Learn from the past, and ChatGPT Shopping could help us fix the SEO mess without blurring the line between helper and huckster. Ignore the signals, and ChatGPT Shopping may teach us, again, how easily trust can be monetized and exploited.
