AI has become the new scapegoat. Its rise has made imperfection the hallmark of human authenticity and, conversely, perfection the suspicious signature of the machine.
Recently, “some dude I know” meticulously crafted an incredibly detailed report evaluating the capabilities of one product against another. The moment the Product Owner he works with saw the report and noticed an em dash (—), the dude was called out for “using AI too much” and for “just copying and pasting it without even editing it.”
Here’s the irony: the “dude” wrote the report entirely himself. No AI was involved. He is meticulous about correcting spelling mistakes in his reports, uses grammar tools to enhance readability—and (oh, that em dash!) he habitually uses the em dash, even setting an exception in his word processor to prevent it from being auto-corrected to a comma.
This incident highlights a perception issue surrounding AI.[1][2][3] There’s a growing tendency to attribute any polished, well-articulated, or even slightly unconventional writing style to artificial intelligence. It suggests a troubling lack of discernment, where human diligence and a nuanced writing style are treated as evidence of machine generation.
It’s rather a bind, isn’t it? The moment a human manages to produce work that’s truly polished – devoid of typos, grammatically unimpeachable, meticulously detailed – the immediate reaction isn’t applause, but a raised eyebrow and the inevitable whisper: “Did an AI write this? They just copied and pasted it without even editing it, didn’t they?” Our own hard-won diligence, the very trait we’ve strived for in writing and reporting, is now twisted into evidence against us.


It’s as if our long, arduous journey to achieve precision, clarity, and error-free output has ironically rendered us indistinguishable from the very tools designed to assist us. We’re now in a strange new world where demonstrating human effort might necessitate leaving in a few deliberate imperfections. A rogue comma, perhaps, or a charmingly misplaced apostrophe, just to prove we’re not secretly silicon-based wordsmiths. The “human touch” has paradoxically become synonymous with sloppiness, while flawless execution is now considered prima facie evidence of algorithmic assistance. And even if the “dude” had used AI, so what? This level of uninformed apprehension is particularly concerning.
This readiness to jump to conclusions, often without bothering to understand the underlying process, creates an unfair burden on individuals and underscores a societal misunderstanding of what AI actually is. In fact, I regard individuals who say things like “Our company shouldn’t use or leverage AI,” “You cannot trust anything AI says,” or “AI is prone to hallucinations” as somewhat unimaginative. What they’re often inadvertently revealing is a fundamental lack of understanding of how to use AI effectively.
Such a stance, to me, indicates not only a resistance to progress but also a glaring deficiency in technical adaptability and foresight. An inability or unwillingness to engage with LLMs, GPTs, and AI models signals a closed-mindedness that could severely limit an individual’s, and by extension an organisation’s, potential for innovation and competitive advantage. It’s a statement that speaks less to the limitations of AI and more to the speaker’s own comfort zone.
Solutions
How can we prevent this misunderstanding from happening?
- Read the entire context of a work or document instead of judging how “perfectly written” and detailed it is and jumping straight to the “it must be GPT” conclusion. Focus on the substance, not just the perceived polish.
- Educate ourselves on AI’s true capabilities and limitations. Understand that AI is a tool, and like any tool, its effectiveness and ethical implications are largely determined by the human hand that wields it.
- Foster a culture of open dialogue and experimentation with AI. Encourage responsible adoption and training, rather than outright rejection, to harness its potential benefits while mitigating risks. This means moving beyond fear and embracing informed engagement.
AI is incredibly great—if you know when and how to use it
AI’s value stems from its ability to amplify human capabilities and boost efficiency. It processes vast amounts of data, identifies patterns, and handles repetitive tasks with speed and accuracy beyond human reach. This isn’t about replacing people, but augmenting them. For example, GPTs can make an experienced programmer’s work faster and more efficient, but in the hands of a novice, they can lead to code bloat or broken logic. I’ve also read a comment from a Registered Nurse stating that GPTs have reduced her nursing-documentation burnout by 90%. She isn’t simply copying and pasting; she diligently reviews the generated content to ensure its accuracy and alignment with her intended meaning.
Finally, AI democratises knowledge and skills. It makes complex analytical tasks, once exclusive to specialists, accessible to a broader audience. Consider my anecdotal experience: I successfully planted a cherry tree, and it thrived, all because I followed AI-generated instructions. For a day, I was a professional arborist (apologies to actual arborists for momentarily stepping into their domain).
You hear claims that AI will eliminate jobs, create widespread misinformation, or even pose an existential threat. AI can’t eliminate your job; it should help you in your job (even when you’re looking for one). My comment about being an arborist wasn’t about stealing their jobs; it was about relieving them of trivial, simple work so they can focus on the bigger things that matter. It’s akin to getting a plaster from the medicine cabinet rather than calling 9-1-1 for EMTs to dress my boo-boo. Elon’s fantasy (ugh, Elon) of “AI wiping out the human race” feels borrowed from a Schwarzenegger blockbuster. It’s laughable. As someone who understands code, logic, and algorithms, I can tell anyone that the “self-aware” nonsense is pure fiction. The only true concern I have with the use of Artificial Intelligence models is the one shared by organisations that genuinely require privacy.[4]
These LLMs are often underestimated, blamed for problems and mistakes humans make after using them. But AI is not to blame. It is a tool, and like any tool, it is neither good nor bad; its impact depends on the user.
…oh, and did you notice what I wrote in my post’s excerpt? That was 100% written by me, but since I ended it with “It is essential…”, some blowhards will think I got ChatGPT working on that. 😏
1. Can We Really Tell If a Text Was Written by AI? ↩︎
2. AI-generated poetry is indistinguishable ↩︎
3. The increasing difficulty of detecting AI ↩︎
4. When handling classified government or military information, the use of AI is often discouraged to avoid feeding potentially sensitive data or code into a private AI company’s systems. This concern is easily mitigated by adopting open-source, self-hosted AI models (a minimal sketch follows these notes). ↩︎
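
For the curious, here’s what footnote 4’s mitigation can look like in practice. This is a minimal sketch, assuming a locally installed Ollama instance serving an open-weights model; the model name (“llama3”), the prompt, and the default local endpoint are illustrative assumptions, not a prescription for any particular deployment:

```python
# Minimal sketch: querying a self-hosted model so sensitive text never
# leaves your own machine or network. Assumes Ollama is installed locally
# and an open-weights model (here "llama3", an illustrative choice) has
# been pulled; Ollama serves a REST API on localhost:11434 by default.
import requests

def ask_local_model(prompt: str) -> str:
    # The request goes to localhost only; no third-party AI provider is involved.
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

print(ask_local_model("Summarise this internal report in three bullet points: …"))
```

The design point is simply that the model weights and the data stay on hardware you control, which addresses the privacy objection without giving up the tool.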