I was half-watching a video about AI at the gym last week when the presenter said something that stuck. He was simplifying what AI can actually do. The word he used was "parlor trick." Not real magic. Not intelligence. A trick. It got me thinking: maybe the useful frame is to see AI not as intelligence, but as an illusion of intelligence.

That distinction matters more than any technical explanation I've heard this year.

What the Trick Looks Like

A friend of mine runs a small consulting firm. Last month he showed me a strategy deck that AI had written for him. Forty slides. Clean structure. Solid recommendations. "Took me twenty minutes," he said, grinning like a man who'd found a cheat code.

I asked him one question. "What would you change about the recommendation on slide fourteen?"

He paused. "I haven't read it that closely yet."

That's the parlor trick. The output looked like thinking. It felt like intelligence. But the person holding it couldn't tell you if it was right or wrong, because the work of thinking never happened. Not in the machine. Not in him.

What's happening underneath is pattern matching at enormous scale. A machine learning model trains on massive amounts of data and finds statistical patterns. Then it produces outputs that match those patterns. Large language models work by predicting the next token in a sequence, one piece at a time (Vaswani et al., 2017). Image classifiers sort pixels into categories. The results are impressive. The process is mechanical.
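To make "pattern matching" concrete, here is a deliberately tiny sketch of the idea. This is a bigram model, the crudest possible ancestor of next-token prediction: it never understands a word of its training text, it just counts which word tends to follow which. The training sentence and function names here are my own invention for illustration; real language models use vastly larger data and neural networks rather than raw counts, but the underlying move is the same: predict the statistically likely continuation.

```python
from collections import Counter, defaultdict

# Toy training text (invented for illustration).
training_text = (
    "the trick looked like thinking the trick looked like magic "
    "the output looked like thinking"
)

# Count, for each word, which word follows it and how often.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    candidates = follows.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("trick"))   # "looked" followed "trick" most often
print(predict_next("looked"))  # "like"
print(predict_next("zebra"))   # None: never seen, nothing to predict
```

Notice what the last line shows: off the patterns it has counted, the model has nothing. No fallback, no judgment. That failure mode is the whole essay in miniature.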

Here's what puts it in perspective. Your brain weighs about 1.3 kilograms. It runs on roughly 20 watts of power (Raichle & Gusnard, 2002). To match what your brain does casually every second, AI needs stacks of computers filling warehouses, consuming megawatts. The Oak Ridge Frontier supercomputer needs about 20 megawatts to achieve what your brain does on the energy budget of a dim light bulb (NIST, 2025). The machine brute-forces its way to an answer your brain reaches through something we still don't fully understand.
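The size of that gap is worth doing as back-of-the-envelope arithmetic, using the two figures above (20 watts for the brain, roughly 20 megawatts for Frontier):

```python
# Rough figures from the sources cited above.
brain_watts = 20          # human brain, ~20 W
frontier_watts = 20e6     # Frontier supercomputer, ~20 MW

ratio = frontier_watts / brain_watts
print(f"about {ratio:,.0f}x more power")  # roughly a million-fold gap
```

A factor of about a million, and that is before counting the energy spent training the models in the first place.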

Spectacular on stage. Simple once you see the wires.

But the Trick Keeps Getting Better

This is where it gets uncomfortable.

A few years ago, AI-generated text was clumsy and obvious. Now it passes bar exams. GPT-4 scored well above the passing threshold for every US jurisdiction on the Uniform Bar Exam (Katz et al., 2024). It passes medical licensing exams, performing on par with physicians across multiple specialties. It composes music that listeners rate as emotionally stirring (Fišer et al., 2025). The models get larger. The outputs get harder to tell apart from human work.

I felt this myself. I was reviewing something a colleague had written. Three paragraphs in, I realized I couldn't tell which parts were his and which parts he'd handed to a machine. Not because the machine was brilliant. Because the writing was good enough that the question didn't seem to matter.

But it did matter. I just couldn't explain why.

So here is the honest question: at what point does the parlor trick become real magic? If a machine produces work you can't distinguish from a human's, does the process underneath still matter?

I think it does. And here's my reason.

A parlor trick doesn't fail gracefully. It works until it doesn't, and when it breaks, there's nothing behind it. No judgment. No instinct. No ability to say "this doesn't feel right" before the data confirms it.

In 2023, a New York lawyer named Steven Schwartz used ChatGPT to research cases for a legal brief. The AI gave him six cases with realistic citations, plausible holdings, and convincing legal reasoning. He asked ChatGPT if the cases were real. It said yes. He filed them in federal court. Every single case was fabricated. The judge sanctioned Schwartz and his colleagues $5,000 (Mata v. Avianca, Inc., S.D.N.Y., 2023). The trick looked perfect. The moment someone checked the wires, there was nothing there.

A human who built that legal argument from scratch would have known where the reasoning was strong and where it was thin. The machine that generated it had no idea. It doesn't have ideas. It has probabilities.

The gap between "impressively right most of the time" and "right when it matters most" is where human judgment lives. And that gap has not closed. Not yet.

Should We Worry or Look Forward to It?

So which is it? Should this make us nervous, or excited?

Both. And I'll tell you which one pulls harder for me.

Worry is the right response to something powerful that you don't fully control. It keeps you alert. It forces hard questions about governance, about limits, about what we allow these systems to do without a human in the loop. That lawyer sleepwalked. He trusted the trick and never checked the wires. Without worry, we all sleepwalk into problems we chose not to see.

But curiosity pulls harder. Not because I'm optimistic by nature. Because I've seen what happens when people engage with a new tool instead of running from it. They find edges the tool's creators never imagined. They use the machine to clear away the tedious work and spend their time on the work that only they can do.

Every type of work is being touched by this technology. Not in the future. Now. To look away from that is a choice. And it's a choice you'll pay for.

The Wires Are Still Showing

The parlor trick is getting better every year. It is possible, in our lifetime, that the wires disappear for good. That the trick becomes real magic. I don't know if that day comes in ten years or fifty. Nobody does. Anyone who gives you a confident timeline is guessing.

But here's what I do know. Right now, today, the wires are still showing. You can still see the gap between what the machine produces and what the machine understands. That gap is your window. Not to compete with the machine. That race is pointless. Put simply, you don't race a forklift. You drive one.

Learn what this technology does in your line of work. Use it where it saves you time. Push back where it cuts corners you can't afford to cut. And stay awake. Because the day the wires stop showing is the day this conversation changes completely.

The trick is still a trick. You still have time. Don't waste it watching from outside the room.


References

Fišer, N., Martín-Pascual, M. Á., & Andreu-Sánchez, C. (2025). Emotional impact of AI-generated vs. human-composed music in audiovisual media: A biometric and self-report study. PLOS ONE, 20(6), e0326498. https://doi.org/10.1371/journal.pone.0326498

Katz, D. M., Bommarito, M. J., Gao, S., & Arredondo, P. (2024). GPT-4 passes the bar exam. Philosophical Transactions of the Royal Society A, 382(2270), 20230254. https://doi.org/10.1098/rsta.2023.0254

Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. June 22, 2023).

National Institute of Standards and Technology. (2025). Brain-inspired computing can help us create faster, more energy-efficient devices. NIST Taking Measure Blog. https://www.nist.gov/blogs/taking-measure/brain-inspired-computing

Raichle, M. E., & Gusnard, D. A. (2002). Appraising the brain's energy budget. Proceedings of the National Academy of Sciences, 99(16), 10237–10239. https://doi.org/10.1073/pnas.172399499

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.