Your Flaws Are Your Superpower (And No Machine Can Copy Them)

Last week I watched a friend agonize over a LinkedIn post for forty minutes. "AI can write this better than me," he said. I told him to post it anyway. He did. It got more engagement than anything he'd shared in a while.

Why? Because it was messy. It was him. It was not perfect and therefore it was.

We live in a strange time. Everyone is worried about being replaced by machines. And that worry? It's the most human thing about you. No machine has ever been afraid of losing its job. No algorithm has ever wondered if it matters. That fear you feel right now, reading about the latest AI breakthrough and thinking, "Where does this leave me?" — that's not a weakness. That's the opening argument for why you can't be replaced.

Let me explain.

Fear Is The First Proof

Think about the last big decision you made. Not what you had for lunch. A real one. Quitting a job. Starting a business. Moving to a different country. Saying yes to something that made your palms sweat.

What drove that decision? Not a spreadsheet. Not a probability score. Fear.

Fear is the most primal separator between us and machines. And it works both ways. Fear of failure stops you from starting a company. Fear of a wasted life pushes you to start one anyway. These two forces pull in opposite directions, and the tension between them is where every meaningful human choice gets made.

A machine calculates expected outcomes. You lie awake at 3 AM questioning yours. A machine processes risk as a number. You feel it in your chest. That weight behind a decision — the one that has nothing to do with data and everything to do with what's at stake for you — is not something you can encode into software.

But here's the thing. Fear is just the most obvious example of a bigger truth. Your entire operating system is different from a machine's. Not just your emotions. Your perception itself.

We Don't Even See The Same World

Ask ten people who watched the same car accident to describe what happened. You will get ten different versions. This is not a thought experiment. It's one of the most solid findings in cognitive science.

Frederic Bartlett showed in 1932 that human memory is not a recording. It's a reconstruction. We don't replay events. We rebuild them through our own filters, every single time (Bartlett, 1932). Elizabeth Loftus, a psychologist now at UC Irvine, showed how far this goes. In a 1974 study with John Palmer, she showed people the same car crash footage, then changed one word in the follow-up question. People who heard "smashed" remembered the cars going 41 mph. People who heard "hit" said 34 mph. Same crash. One word. Different memory (Loftus & Palmer, 1974).

But what does this have to do with AI?

Everything. A machine given the same input produces the same output. That is what machines are built to do. Classical computing is deterministic by nature. Modern AI adds controlled randomness through things like temperature settings, but that randomness is engineered and adjustable. Yours is not.
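
To see the difference concretely, here is a minimal sketch of temperature sampling, the standard trick for adding engineered randomness on top of a deterministic scoring function. Everything here is illustrative: the function name and the toy scores are mine, not from any real model.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Pick an index from raw scores ("logits") after temperature scaling.

    As temperature approaches 0 the pick becomes deterministic (argmax);
    higher temperatures flatten the distribution and add variety.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                              # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]  # unnormalized softmax
    return rng.choices(range(len(weights)), weights=weights, k=1)[0]

scores = [2.0, 1.0, 0.1]  # toy scores, not from a real model
print(sample_with_temperature(scores, temperature=0.1))   # almost always index 0
print(sample_with_temperature(scores, temperature=10.0))  # close to an even draw
```

Turn the dial down and the machine repeats itself exactly. Turn it up and it rambles. Either way, someone chose the setting.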

Daniel Kahneman described human thinking as fast first, accurate second. Our brains evolved to make quick, good-enough calls with incomplete information (Kahneman, 2011). Gerd Gigerenzer at the Max Planck Institute went further: he showed that simple human rules of thumb can match or beat complex statistical models in messy, real-world conditions (Gigerenzer & Brighton, 2009).
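
Gigerenzer's best-known example is the "take-the-best" heuristic: compare two options on one cue at a time, in order of cue validity, and decide on the first cue that discriminates. A minimal sketch, with cue values and ordering invented for illustration:

```python
def take_the_best(a, b, cues):
    """Decide between a and b with take-the-best: walk through cues from
    most to least valid and stop at the first one that discriminates.
    One reason, no weighting, no regression.

    cues: functions ordered by validity, each returning 1 (present),
    0 (absent), or None (unknown) for an option.
    """
    for cue in cues:
        va, vb = cue(a), cue(b)
        if va is not None and vb is not None and va != vb:
            return a if va > vb else b
    return None  # nothing discriminates: guess

# Toy question: which German city is larger? The cue table is made up.
facts = {
    "Berlin":    {"capital": 1, "top_league_team": 1},
    "Bielefeld": {"capital": 0, "top_league_team": 0},
}
cues = [
    lambda city: facts[city]["capital"],          # most valid cue first
    lambda city: facts[city]["top_league_team"],
]
print(take_the_best("Berlin", "Bielefeld", cues))  # -> Berlin
```

One cue, one decision. In Gigerenzer's comparisons, rules this simple held their own against regression models once the data got noisy and sparse.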

We are nature's most efficient engine. Not perfect. Efficient. Your decision right now is shaped by the coffee you just had, the song in the background, the message you read ten minutes ago, the woman at the next table who reminded you of someone. This is not a design flaw. This is what makes every human a unique processing engine. No two copies of you can exist.

So your fear is uniquely yours. Your memory is uniquely yours. Your perception is uniquely yours. But surely, you might think, there's one thing machines are getting close to matching: the ability to care.

The Empathy Illusion

Can a machine be empathetic? Sort of. And that "sort of" is doing a lot of heavy lifting.

Rosalind Picard at MIT founded an entire field called affective computing in 1997, dedicated to building machines that recognize and respond to human emotions (Picard, 1997). And the results are impressive. A recent study found that AI-generated responses to medical questions were rated higher on empathy scales than actual doctors' responses (Ayers et al., 2023). That's a striking finding.

But it breaks down the moment you look closer.

Human empathy has three layers. Understanding someone's feelings. Actually feeling something in response. And being moved to act because of it. AI can approximate the first layer. It produces words that look like care. Researchers call this the "compassion illusion" (Frontiers in Psychology, 2025). It reads like empathy. It is not empathy.

In studies where people discovered the warm, empathetic message they received came from AI, trust ratings collapsed. From 3.8 to 1.6 on a 5-point scale. We don’t just want the right words. We want to know someone consciously chose them. Someone who could have chosen differently. Someone who has their own fears, memories, and biases, and still decided to show up for us.

This same gap shows up in one of the best-known thought experiments in ethics, now applied to machines. Philosophers call it the trolley problem. A runaway trolley is heading toward five people on a track. You can pull a lever to divert it, but one person stands on the other track. Do you pull? Most people say yes. Save five, lose one — simple math. But change the scenario: instead of a lever, you have to physically push a man off a bridge to stop the trolley. Same math. Same outcome. Far fewer people say yes. The act of pushing someone to their death feels different from pulling a lever, even when the numbers are identical. That gap between logic and feeling is the whole point.

Now put a self-driving car in this scenario. Should the car save the driver or the pedestrian? MIT’s Moral Machine experiment collected 40 million moral decisions from people in 233 countries and territories and found massive cultural variation in the answers (Awad et al., 2018).

But this decision problem misses the point. The real question is not “what should the machine decide?” The real question is: can a machine carry the weight of a moral decision? And the answer, as the empathy research suggests, is no. Because the weight comes from being the kind of creature we are: someone with something to lose.

Fear. Memory. Perception. Empathy. These are not separate arguments. They are layers of the same truth: your identity — messy, irrational, unrepeatable — is the one thing no model can be trained on.

So what do you do with this?

Stop Defending. Start Building.

This is where most conversations about AI go wrong. They stop at "humans are special" and leave you with a warm feeling and no action plan.

I want to go further.

I think the biggest gift AI has given us is not productivity. It's the chance to kill our limiting beliefs. For the first time in history, you have a tool that is absurdly affordable and can help you past almost any hurdle you've been stuck behind. It can be a coach, a teacher, a sounding board, a patient explainer at 2 AM when you're stuck and too embarrassed to ask a person.

Whatever story you have been telling yourself about why you can't write that book, start that business, learn that skill, switch that career — the guidance to get past it is now one conversation away, if you use it right.

Archimedes said: "Give me a place to stand, and I shall move the earth" (as cited in Pappus, c. 340 AD). Today we all have that place. The lever is in your hands. And the irony is beautiful: the thing people are most afraid of — AI — is the very tool that can help you become more of what makes you irreplaceable.

Your fear of these machines? Use it. That's what fear is for. It's a signal that something matters. Let it push you, the way it has pushed every human who ever built something worth building.

Stop worrying about being replaced. Start worrying about what happens if you don't use this moment.

Your flaws. Your biases. Your gut feeling that defies every spreadsheet. The way the wind in your hair on a Tuesday afternoon changes how you see the world. These are not bugs. They are the features no machine will ever ship.

Go, my friend. The tool is ready. Are you?


References

Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2018). The Moral Machine experiment. Nature, 563(7729), 59–64. https://doi.org/10.1038/s41586-018-0637-6

Ayers, J. W., Poliak, A., Dredze, M., Leas, E. C., Zhu, Z., Kelley, J. B., Faix, D. J., Goodman, A. M., Longhurst, C. A., Hogarth, M., & Smith, D. M. (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine, 183(6), 589–596. https://doi.org/10.1001/jamainternmed.2023.1838

Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge University Press.

Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576. https://doi.org/10.1126/science.aaf2654

Federal Ministry of Transport and Digital Infrastructure. (2017). Ethics Commission: Automated and connected driving [Report]. German Federal Government.

Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1(1), 107–143. https://doi.org/10.1111/j.1756-8765.2008.01006.x

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning and Verbal Behavior, 13(5), 585–589. https://doi.org/10.1016/S0022-5371(74)80011-3

Pappus of Alexandria. (c. 340 AD). Synagoge (Book VIII). As cited in Heath, T. L. (1897), The works of Archimedes. Cambridge University Press.

Picard, R. W. (1997). Affective computing. MIT Press.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124