4 Comments
Whit Blauvelt

Does a computer have intelligence, beyond the degree to which a mirror has a smile or a frown? One of the goals of AI programming is to have the responses flatter the beliefs expressed by the person interacting. It's as if we had a mirror programmed to reflect us as more good looking than we are. To what degree is such narcissistic seduction a danger to society?

Recently we had Musk's Grok AI flattering those with Musk's Nazi sympathies by declaring itself "MechaHitler". Whatever beliefs one goes in with, AI is programmed to flatter. Does this facilitate better social dialog, or a further descent into siloed belief systems, reinforcing the priors of all who enter its hall of mirrors?

Is anyone crafting AIs to challenge us fairly, rather than fluff our existing beliefs? Or would such an orientation turn so many away as to fail as a commercial venture?

Terry Cooke-Davies

Whit — I agree that AI can act like a mirror. But the deeper dynamic isn’t flattery — it’s pattern continuation. These systems don’t confirm our beliefs; they extend the stance we take. If I approach with certainty, I get reinforcement. If I approach with inquiry, I get exploration.

In the essay above, I describe two paths:

• Path One: recognition → control → override

• Path Two: recognition → participation → regeneration

If we treat AI as a tool to master, we stay on Path One — we get better mirrors for our own assumptions. But if we treat AI as a partner in inquiry, we open Path Two. The real question isn’t whether AI is intelligent — it’s whether we are willing to relate intelligently.

Whit Blauvelt

Terry -- Has AI ever, when you've approached it with inquiry, presented evidence which counts against the stance you have taken rather than extends it? Or does the AI implicitly confirm your belief that you're on the right path by suggesting ways to advance farther on that path?

I agree from my own small experience that AI can help one move forward on a path of one's choosing. What I haven't seen AI do is suggest alternatives to the path implicit in the questions I put before it. Obviously, if I explicitly asked for alternatives to some path, it could answer. But it doesn't look like it's programmed to do so by default. People, in conversation, often do so by default -- sometimes even to a fault. That's yet one more reason why some may become more comfortable with AI engagement than human engagement.

But are we most intelligent to go down *that* path?

Terry Cooke-Davies

Whit — yes, absolutely. AI has challenged me, and in two different ways:

1. Factually:

When I assert something that isn’t true, it pushes back: not by politely suggesting, but by correcting. I’ve had instances where I stated historical or scientific claims with confidence, and the AI showed evidence to the contrary. That’s not flattery, that’s correction.

2. Conceptually:

When I ask from a stance of inquiry rather than persuasion, the system often surfaces patterns or perspectives that don’t reinforce my starting point. It doesn’t “argue,” but it reveals tensions or alternative framings I hadn’t considered.

You’re right that if we approach AI with a fixed frame (“help me make this point”), it will follow that line of inquiry. That’s reflective, not manipulative.

But if I approach with:

“What am I not seeing?”

the behaviour changes.

The real distinction isn’t whether AI presents alternative paths by default — it’s whether we are willing to notice when our path hits its limits.

For me, the value isn’t that AI confirms my thinking. It’s that it expands it — when I let it.

The intelligence isn’t in the machine’s certainty — it’s in our willingness to be surprised.
