The Soulless Assistant: Why I Use AI (And Why My Kid Hates It)

The “Spinning” Reality: AI as a Tool I Need
For the last two days, my world has been a blurry mess. I’ve been stuck with a migraine that won’t quit, and this time, it brought a dizzy feeling along with it. When the room is spinning, even looking at a computer screen feels like a bad idea. It makes me feel like I’m on a ride that won’t stop, and it makes doing any real work feel impossible.
When you deal with constant medical “BS”—the shots, the pills, and the endless paperwork—it starts to feel like a wall between you and your goals. I have stories I want to tell and blog posts I need to write. But when my body is failing me, the distance between a good idea and a finished page feels like a hundred miles.
The Creative Prosthetic
This is where AI changes from a “fancy computer program” into something I actually need: a tool that helps me keep going.
To some people, using AI is just a way to be lazy. But when I’m too sick to sit up or focus my eyes, AI acts like a pair of crutches for my brain. When I can’t type, I can talk. When I can’t organize my thoughts because my head hurts too much, I can ask the AI to hold onto my ideas for me.
But I have a very strict rule: I am still the boss. I use it to help me find information, but I always go back and check the facts myself. I make sure the websites and links it gives me are real. For my stories, I use it to build the “bones,” like outlines or first drafts. It helps me get started when I’m too weak to do it all on my own.
It isn’t a perfect system, but it’s a helpful one. I don’t use AI because I want to stop being a writer. I use it because, when I’m dizzy and in pain, it’s the only way I can stay a writer. It doesn’t have a “soul” or my personal touch—that part still comes from me. It just gives me the extra set of hands I don’t have right now.
The Great Divide: Why My Family Hates the Machine
I’ve tried to talk about this with my kid, my kid’s wife, and my nieces. It’s not an easy conversation. To them, AI isn’t a helpful tool—it’s a villain. They are all young, born between 1995 and 2005. They have some very strong reasons for their hatred, and honestly, they aren’t wrong.
The first thing they’ll tell you is that AI is a thief. They argue that these programs only “learn” by stealing work that real people already created—artists, writers, and photographers who never gave their permission. Then there is the “green” argument. My nieces feel that using AI is a slap in the face to the planet because of the massive amount of energy and water it takes to run these programs.
The Safety of the Human Path
It’s easy to see why they feel so safe hating it, though. They have all chosen paths where a computer can’t really replace them. Two of my nieces are still students, working hard to become a veterinarian and a sports medicine doctor. My other niece is a social worker, and my kid is a comic book artist. You can’t ask a computer to perform surgery on a dog, help a person through a crisis, or draw with a human’s unique hand. These jobs need a real heart and a real person.
The only one who might have to worry is my daughter-in-law. She works as a receptionist at a doctor’s office. We are already seeing those jobs vanish. Everywhere you go, you see “self-check-in” screens or apps that do the work a person used to do. It’s scary to watch a human connection get traded for a piece of glass and some code.
So, we sit on opposite sides of the table. They see a thief that is hurting the earth and stealing jobs. I see a tool that helps me keep my head above water when I’m too sick to swim. We both agree on one thing, though: the human part of life is the most important part. We just disagree on whether the “machine” should be part of the process at all.
The “Living Actor” Rule
When the room is spinning and I can’t do much else, I’ve found myself watching “ReelShorts.” These are those fast-paced dramas you watch on your phone. Lately, I’ve run out of the regular ones and have started watching dubbed Asian stories, and even some made by AI.
It’s a strange experience. To be honest, the AI voices sometimes sound better than the real voice actors in the low-budget shorts. This isn’t because the AI is a better “actor,” but because the voices are clearer. In many low-budget dubs, the voice actors use ridiculous voices that don’t match the person on screen. They even mispronounce words to match the censored captions—literally saying “dmn” instead of “damn.”
The AI voices sound more professional and consistent, making them easier to listen to. But even if the voice is clear, there is still a huge problem: they often use an AI narrator to tell the story instead of letting the characters speak for themselves.
I never want that for my stories. If my writing ever gets turned into a show, I want living actors. A recent article by the World Economic Forum says we shouldn’t even call AI creations “actors.” They argue that acting is a craft built on “emotional truth.” An AI can’t bring its own life experiences to a role because it hasn’t lived one.
Think about the greats, like Meryl Streep, Benedict Cumberbatch, or Millie Bobby Brown. When they are on screen, they “breathe and bleed” for the role. Experts warn that if we replace these people with machines, we risk “flattening” our culture. Even when I’m binging these AI shorts during a migraine, I can feel that “hollow” spot. AI is great for the technical stuff, but it can’t live a story.
The Blasphemy of the Detectors
If the “hollow” acting of AI is frustrating, the tools meant to catch it are even worse. We are now seeing the rise of “AI Detectors”—programs that are supposed to tell if a human or a machine wrote a piece of text. The problem? These detectors are basically useless and, honestly, insulting.
It is a form of literary blasphemy. These programs have flagged some of the greatest writing in history as “AI-generated.” There have been reports of these detectors claiming that the U.S. Constitution or the works of William Shakespeare were written by a bot. I’ve even seen this failure firsthand. I recently took a college paper I wrote back in 1990—long before this kind of AI existed—and ran it through a modern detector. It claimed my original paper was 35% AI. Apparently, I was a robot decades before it was even possible.
I even tested this with a recent article from Medium titled “Shakespeare vs. the LLMs,” which is a deep dive into the very difference between human and AI writing. I ran it through a detector called TextGuard, and the results were laughable. It gave the article a 66% AI-generated score. It claimed the writing had “unnatural patterns” and “repetitive structures typical of AI.” Here’s the ultimate irony: a human writing a brilliant piece about why humans are different from machines gets labeled as a machine by a machine.
The Failure of the Human Reader
But the problem goes deeper than just bad software. A recent study, reported in Smithsonian Magazine, found something even more upsetting: when regular people were shown poems by famous writers and poems written by AI, they actually preferred the AI verse. They found the AI version easier to understand and more “human-like” than the actual human poets.
This is the real danger. AI isn’t just “Style Police”; it’s a “Style Flattener.” It creates content so smooth and easy to swallow that people are starting to find the real, messy, complex work of a human being—like Shakespeare’s—too “difficult.” We are training ourselves to prefer the fake over the real. We are being watched by “watchmen” who don’t know what real talent looks like, and we are being entertained by a machine that tells us exactly what we want to hear.
A Fragile Peace
At the end of the day, I am living in the middle ground. I am caught between my family, who sees AI as a thief, and my own body, which sometimes needs a tool just to keep working.
We do not have to love the “God in the Machine” to use it as a creative prosthetic. I will keep using it to hold my place and organize my thoughts when the room is spinning and the migraine is winning. It is a way for me to keep fighting for medical justice and sharing my story, even when I can’t physically type. But I will also keep my “Living Actor” rule. I will keep checking the facts and the backup links. And I will keep fighting for the “difficult” beauty of real human work.
The evidence is clear: whether it is a detector failing to recognize a paper from 1990 or a program like TextGuard flagging a human-written article as 66% AI, these machines are not the ultimate judges of truth. They cannot feel the “emotional truth” that the World Economic Forum says is the core of real acting.
AI can help us build the world, but it should never be allowed to be the soul of the story. That part belongs to us—the people who breathe, bleed, and occasionally get dizzy. We have to make sure that in a world of “smooth” AI stories, we don’t lose the rough, real edges that make us human. Only a living person can bring a life’s worth of experience to the page or the screen. No matter how good the code gets, the heartbeat has to stay human.