AI is soooo good at everything - except MY job, of course
Expertise bias: when we give AI a free pass outside our own lane, because we don’t know enough to spot the mistakes
This essay was written by one of my fave writers and thinkers, Sacha Judd. You might know her as SYSCA’s Chief Harry Styles Correspondent and generally excellent contributor, and I’m telling you now, you’re being CRAZY if you don’t already subscribe to her newsletter, What You Love Matters 𓆩♡𓆪
Here are some things I’ve asked ChatGPT recently:
“why is Toblerone so popular at airport duty-free stores?”
“give me a recipe for a chickpea tagine - I only have a can of chickpeas, button mushrooms, a courgette and some capsicum”
“where should I go for dinner near the ace hotel in sydney?”
The tagine was okay. The restaurant recs from my friends were better. I have no idea if it was right about Toblerone.
This all got me thinking about something called Gell-Mann amnesia. It’s a term coined by the author Michael Crichton (yes, the Jurassic Park guy). The idea is: you read a newspaper article (remember those?) about your own area of expertise, and you spot all the mistakes the journalist has made. You roll your eyes, turn the page… and then you believe everything else in the paper. You trust all the articles on topics you’re not an expert in. (Crichton named it after the physicist Murray Gell-Mann, joking that by dropping a famous name he gave the whole idea more weight than it deserved.)
It’s weird, though, right? If someone you knew lied to you about one thing, you’d normally doubt them on other things. But with the media, we forget as soon as we turn the page.
Expertise Bias
I think we’re living through the AI version of this right now. I’m calling it Expertise Bias (fake-ironic names are a waste of time).
Here’s how it works: Ask a lawyer if large language models are impressive, and they’ll tell you all the ways they get the law wrong. But those same lawyers? They’ll happily say it’s amazing for marketing copy, or writing lesson plans, or whatever else they don’t usually do. The graphic designer thinks it’s bad at design but “incredible” for writing. Writers will tell you AI prose is bland and soulless. But they’ll use it for travel itineraries or, I don’t know, plumbing advice. The trip planner thinks it’s bad at itineraries but “killer” at coding. And so on and so on.
We give AI a free pass outside our own lane, because we don’t have the expertise to spot the mistakes there. And that creates a kind of halo effect. AI must be good overall, because it’s apparently good somewhere.
The problem? This makes it really easy to overestimate what these tools can do. If everyone assumes “well, it’s bad at my thing, but great at yours,” you end up with a whole society convinced the tech is more capable than it really is. My fave example is from when I was writing about Shakespeare as the art that always seems to survive the apocalypse. I asked ChatGPT about the most unusual places Romeo and Juliet had ever been performed. It straight up made up 30 performances, each wilder than the last.
That’s how we get headlines like “AI doctors are here!” when the reality is that doctors using AI are getting worse at diagnosing. Or “AI can replace college!” when it’s actually eroding our thinking skills.
Test it on your turf
This is the first time in history a new technology can convincingly pretend to know everything. That’s both wild and a little bit dangerous. If you’ve grown up with the internet, you already know not to trust everything you read online. But AI feels different, because it’s so confident. That confidence is persuasive. It feels like authority, even when it’s just smooth-sounding nonsense. And here’s the kicker: the better AI gets at sounding right, the harder it gets to spot when it’s wrong. If you don’t have the expertise, you might never notice.
You don’t have to be an AI hater (like me), but at the very least you can:
Test it on your turf. Try it on something you know inside out. If it flubs the basics, assume it flubs elsewhere too.
Don’t outsource your workout. Ted Chiang has a great metaphor: essay writing is strength training for the brain. Using AI to do it for you is like bringing a forklift into the weight room.
Keep some things human on purpose. Not everything needs to be optimised. Some stuff should be messy, inefficient, and handmade — that’s where joy lives.
AI is like that one overconfident friend: great fun at parties, but maybe don’t trust them with directions to the airport. Expertise Bias is sneaky. It lets AI skate by on a reputation built from all the places no one’s double-checking. The risk isn’t just that it makes mistakes, it’s that we stop noticing when it does, because we’ve decided it must be “brilliant” in someone else’s domain.
So next time you catch yourself saying, “Oh, I’d never use it for my thing, but it’s amazing for X”… remember that someone else is saying exactly the opposite. And if we all keep doing that, we end up giving a pass to a tool that’s still learning while pretending it’s already an expert.