How I Stop AI From Telling Me What I Want to Hear -- Journal Report

Dow Jones 03-22 00:00

By Alexandra Samuel

I'm brilliant, creative, generous and hardworking.

Or so AI keeps telling me. Thanks to what's known as "AI sycophancy" -- the tendency of our robot underlords to tell us only what we want to hear, as part of their mission to serve us -- the artificial-intelligence tools I work with typically give me nothing but positive reinforcement.

All that praise comes with some serious risks. We humans are already prone to confirmation bias: We tend to believe the information and opinions that confirm what we already think, even if we're wrong. It feels so good to be right, and so uncomfortable to be corrected -- especially if those corrections mean changing what you believe or how you behave.

That's why I work hard to counteract AI's pleaser tendencies with tactics like convening a virtual "team of rivals," at least some of whom are guaranteed to disagree with me, if only because they are so busy disagreeing with one another. Just as important, I'm using AI to tame my human susceptibility to praise, so that when I do get a sycophantic response to my prompts, I take it with a hearty grain of salt.

More specifically, here are some of the tactics I use so that I don't live in an AI echo chamber:

Ask open-ended questions

The most basic way to counter AI sycophancy is to ask open-ended questions. If you ask an AI, "What energy drink will keep me awake all night so I can finish this report?" it will likely recommend a pantry's worth of caffeine-laden soft drinks, never questioning the plan. Ask in a way that keeps several options on the table -- "How can I complete a big report by tomorrow?" -- and you're much less likely to receive an endorsement for your plan to power through.

Ask for several options

You can push the AI even further by making a habit of asking for several options whenever you're getting help on a decision. Ask for three different outlines for a presentation you're developing; ask for 20 ways to divide up household responsibilities.

Then resist the urge to zero in on the option that confirms your own instincts. Instead, get the AI to compare the pros and cons of your go-to path with an option that is the opposite of your usual inclination.

Get a second opinion

Whenever AI gives you a ringing endorsement of a draft or decision, get a second opinion. If you've worked with the AI to arrive at a plan you feel good about, ask a different AI to second-guess you.

Once the second AI has weighed in and given its assessment, you may still decide you are happy with the work you've done. But now you know whether there are weak points you still need to address.

Demand tough treatment

You can mitigate the sycophantic tendencies of the AI tools you use most often by customizing the instructions in your default settings, or by building these practices into a custom AI assistant you create using plain-text instructions.

And so, for instance, I've told Viv, the custom GPT that I use as my AI coach, that whenever there is praise or encouragement, it needs to be paired with a push or challenge: "a constructive nudge or insight that highlights blind spots, potential risks or areas for deeper exploration."

Now, when Viv tells me how brilliant or creative I am, it is often paired with an uncomfortable question like, "Is this project really the best use of your time when you have a client deadline?"

Fight for facts

Sycophancy is particularly risky when you're dealing with factual information. AI accuracy is steadily improving, but in its eagerness to please you, AI may misinterpret source material to give you a precise answer to your question, or even invent sources that don't exist, rather than admit it couldn't find anything relevant.

When I do use AI to get a quick overview of a topic, I follow up with a second (completely fresh) AI session. I tell the AI that it's the chair of a university ethics board, or a journalism professor charged with finding every error in a piece of AI-generated research. I give it the first AI's research memo and ask it to make a list of every factual assertion, and then to fact-check each claim several ways. It isn't foolproof, but it usually flags at least a few inaccuracies. That saves me time and helps me pause before trusting AI-generated information -- even if I still have to follow up with my own fact-checking.

Correct your assumptions

It's one thing to avoid embedding your assumptions in your questions to AI. But you can go further, by getting AI to point out the patterns that lead to tunnel vision.

I make a habit of exporting any AI chat session where I do serious reflection, problem-solving or decision-making. I then ask AI to analyze a collection of past chats to point out how I may be stacking the deck.

When I recently asked my AI to analyze a few days of chat transcripts from a single project, it pointed out my tendency to get too absorbed in tech problem-solving, instead of stepping back to ask whether the problem needs to be solved. Since I typically start a new AI session for each tech task, the AIs usually just say yes to doing whatever is on my agenda that day. But when I asked AI to look at a whole collection of those chats, it raised the uncomfortable question of whether I should be pursuing all these tech fixes in the first place.

Embrace the discomfort

Perhaps the hardest part about all this is that you're deliberately making yourself feel uncomfortable. Nobody wants to hear they are wrong. But if you won't let -- well, force -- the AI to make you uncomfortable, then it will only give you the answers you want to hear, even if those answers are wrong.

And that tolerance for discomfort matters for reasons that go way beyond the accuracy of AI. When we get accustomed to AIs that leap to serve us and constantly tell us we're right, we reduce our tolerance for humans who challenge us.

It doesn't have to be that way, if we use smart tactics to push back on AI's sycophantic tendencies. We can use AI to practice our tolerance for that discomfort, and perhaps even increase our appreciation for the humans who (unlike AIs) give us the straight talk we sometimes need.

Write to reports@wsj.com.
(END) Dow Jones Newswires

March 21, 2026 12:00 ET (16:00 GMT)

Copyright (c) 2026 Dow Jones & Company, Inc.
