The Two Questions AI Can't Answer (But You Must)
Before the Prompt: Part Four
You spent weeks finding your reader question. You pushed past the surface-level answers. You sat with the discomfort until you found the real question your story explores.
Now you’re ready. You open ChatGPT: “I’m writing a story exploring how ordinary people become complicit in authoritarianism. Help me develop it.”
AI delivers: three pages of suggestions, character archetypes representing different forms of complicity, plot beats showing the gradual erosion of freedom, themes about moral compromise. All of it sounds smart. All of it feels…plausible.
But as you read through the suggestions, something nags at you.
This could be anyone’s story.
The suggestions are competent. They’re not wrong. But they’re generic. AI gave you a story ABOUT your reader question, not YOUR story exploring that question.
Here’s what’s missing: You told AI what you’re exploring. But you didn’t tell it how you’re exploring it or why you’re the right person to explore it.
Those two pieces—the emotional experience you’re creating and your unique perspective—separate your story from the thousand other stories exploring the same human question. They’re what keep AI from turning your reader question into a Wikipedia entry with characters.
Most writers assume their reader question is enough. It’s not.
You need two more answers before you prompt anything. They take five minutes to articulate. They’re the difference between AI helping you write YOUR story versus helping you write A story about your topic.
Let me show you what I mean.
Question 1: What Experience Am I Creating?
Most writers focus on what happens in their story. Plot points. Character arcs. Rising action. Climax. Resolution.
All of that matters. But there’s something more fundamental that most writers never think to ask:
What do I want readers to feel when they finish?
Not “what should they think about.” Not “what should they learn.” But what emotional experience am I creating?
This isn’t your reader question. Your reader question tells you what human struggle you’re exploring. The emotional experience tells you how readers should feel while you explore it.
Why This Stops Generic Stories
When you don’t specify the emotional experience, AI defaults to genre conventions. Thrillers should feel tense. Romance should feel satisfying. Literary fiction should feel contemplative.
These aren’t wrong. They’re just generic.
Two stories with identical reader questions can create completely different emotional experiences. One makes readers cry. The other makes them think. One leaves them unsettled. The other leaves them hopeful.
That difference is what makes a story memorable instead of forgettable.
What This Looks Like in Practice
With The Maker’s Heir, my reader question was “What makes a life matter?”
The generic emotional experience: Readers should feel sad about loss, then hopeful about meaning.
My emotional experience: Readers should feel the weight of loss, then the warmth of small moments that matter despite their brevity, then the bittersweetness of memory fading while love remains.
See the difference? The first is a feeling. The second is a journey with a specific texture.
This shaped everything about how I worked with AI.
When I asked for scene suggestions, I didn’t say “make it emotional.” I said “this scene needs to slow down so readers can experience the tender moment before it passes—like holding onto something you know you’re about to lose.”
When AI suggested big dramatic reveals about R’s nature, I rejected them. Not because they were bad suggestions. But because they didn’t serve the emotional experience I wanted to create. I needed understated moments, not fireworks.
My Dystopian Novel
My reader question: “How do ordinary people become complicit in authoritarianism?”
The generic emotional experience: Readers should feel angry at authoritarianism and inspired to resist.
That’s not what I want.
My emotional experience: Readers should feel uncomfortably complicit—questioning their own compromises and wondering where they’d draw the line.
I don’t want readers to feel superior to my characters. I want them to squirm while recognizing themselves in the characters’ choices.
When I ask AI for character motivations, I don’t want villains. I want people making understandable compromises. When I ask for plot beats, I don’t want clear moral lines. I want gradual erosion that feels justified.
Without this clarity:
AI suggests: “Your character discovers the surveillance and becomes a whistleblower.”
Why this fails: It lets readers off the hook. They cheer for the hero instead of examining their own complicity.
With this clarity:
AI suggests: “Your character discovers the surveillance and convinces himself it’s necessary for safety—while his sister makes the opposite choice.”
Why this works: Readers watch someone rationalize complicity while seeing the alternative. They have to ask themselves: “Which sibling would I be?”
Same reader question. Completely different emotional journey.
How to Articulate Your Emotional Experience
Before you open ChatGPT, complete this sentence:
“When readers finish this story, they should feel __________”
Good answers have specific texture:
“Haunted by their own capacity for complicity”
“The ache of beautiful things that can’t last”
“Grief mixed with gratitude for what remains”
Bad answers are too vague:
“Satisfied” (every story should do this)
“Emotional” (what kind of emotion?)
“Like they read a good story” (meaningless)
The test:
Can you describe the specific emotional texture? Not just “sad” but what kind of sadness? Not just “hopeful” but hope mixed with what else?
If you can’t articulate it, AI will fill in the blanks with genre defaults. And your story will feel like every other story in your genre.
When you’re clear about the emotional experience, AI suggestions that don’t serve it become obvious. You can redirect: “That’s a good scene, but it makes readers angry at the system. I need them to feel complicit. How do we adjust?”
Your reader question keeps you focused on the theme, and your emotional experience keeps you focused on how readers should feel while you explore it.
Both matter. Most writers only think about one.
But there’s one more question that ensures your story stays distinctly yours—not just emotionally resonant, but unmistakably written by YOU.
Question 2: What’s My Unique Angle?
Here’s the most challenging question:
What can you say about this reader question that AI—or another writer—can’t?
Most writers assume their reader question IS their unique angle. They think, “I’m exploring complicity in authoritarianism. That’s what makes my story mine.”
It’s not.
Hundreds of writers could explore that question. Dozens probably are right now. Some of them are better writers than you. Some have bigger platforms. Some will finish their books before you do.
So what makes YOUR exploration of that question worth reading?
The answer is the intersection of experiences, knowledge, and truth that only you bring to this question.
The Perspective Stacking Concept
Your unique angle comes from the combination of perspectives only you have.
Think of it as three layers:
1. Personal experience - What have you lived through that most writers haven’t?
2. Professional knowledge - What do you understand that took you years to learn?
3. Emotional truth - What are you trying to understand about yourself by writing this?
The intersection of these three is what makes your story yours.
My Dystopian Novel
Personal experience: I grew up watching people I respected make compromises I didn’t understand. I’ve seen good people rationalize bad systems. I’ve been fired for calling out unethical practices.
Professional knowledge: My cybersecurity background means I understand how surveillance systems work. Not sci-fi magic—real technology. I know how data gets weaponized, how systems expand incrementally.
Emotional truth: I’m terrified of becoming the person who stays silent. Who prioritizes safety over integrity. Who looks back and realizes I was complicit without meaning to be.
Another writer could explore complicity. But they won’t bring my technical understanding. They won’t have my specific experiences. They won’t be working through my particular fear.
The intersection is what makes it mine.
How to Find Yours
Answer three questions:
1. What lived experience am I bringing to this story that most writers won’t have?
Not “I’m a writer”—everyone reading this is a writer.
But:
“I worked in emergency medicine for ten years”
“I grew up in a fundamentalist religious community”
“I immigrated twice before I turned eighteen”
“I spent a decade in cybersecurity watching systems fail”
2. What do I understand about this topic that took me years to learn?
Your professional expertise.
Your hard-won insights.
The knowledge you didn’t get from a book.
For me:
How surveillance systems work.
How data gets collected, processed, and weaponized.
The technical reality behind the dystopian metaphors.
3. What emotional truth am I working through by writing this?
What keeps you up at night about this reader question?
What fear drives this story?
What are you trying to understand about yourself?
For me:
Would I be complicit?
When would I stop speaking up?
At what point does self-preservation become betrayal?
The test:
If another writer with a different background could tell this story the same way, your unique angle isn’t clear enough.
How This Changes AI Collaboration
Generic prompt: “Help me develop how the surveillance system works in my dystopian novel.”
Result: AI gives you sci-fi surveillance tropes. Cameras everywhere. Facial recognition. Thought police—the stuff every dystopian novel has.
Prompt with a unique angle: “I’m writing about surveillance with my cybersecurity background. The system should feel like a plausible extension of current technology—not sci-fi, but something that could exist with today’s tools. I want readers to recognize the surveillance methods from their lives, just taken further. What real surveillance techniques could be weaponized?”
Result: AI helps you use YOUR expertise. It suggests metadata analysis, social graph mapping, predictive policing algorithms. Real things that exist. Real things readers have heard about, made more invasive.
The technical plausibility comes from you. AI helps you explore the implications of what you already know.
When you’re clear about your unique angle, you recognize generic suggestions immediately. You know which parts to keep entirely human. You can’t be convinced to write outside your truth.
AI amplifies what you bring to the table. If you bring generic interest, AI amplifies that into generic competence. If you bring your unique angle, AI helps you explore it more deeply.
The 5-Minute Pre-Prompt Ritual
Before you open ChatGPT tomorrow, spend five minutes writing these three things down. I keep mine in a document called “Creative Intention” that I reference constantly.
1. My reader question is:
Write one sentence about the human struggle your story explores.
2. The emotional experience I want to create is:
Write one sentence about what readers should feel, not think.
3. My unique angle is:
Write 2-3 sentences about the intersection of your experience, knowledge, and emotional truth.
Here’s my complete example:
MY READER QUESTION IS:
How do ordinary people become complicit in authoritarianism?
THE EMOTIONAL EXPERIENCE I WANT TO CREATE IS:
Readers should feel uncomfortably complicit—questioning their own compromises and wondering where they’d draw the line.
MY UNIQUE ANGLE IS:
My cybersecurity background gives me insight into how surveillance systems work and how data weaponization happens gradually. I’m exploring political history through technical understanding, making the dystopia feel plausibly current rather than sci-fi. My fear of becoming the silent bystander—of prioritizing safety over speaking up—drives every character choice.
Keep this document open every time you work on your project. Load it into the LLM project instructions.
Before you ask AI for anything, read it. When AI gives you suggestions, test them against these three answers.
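If you work with an LLM through an API instead of the chat interface, the same habit translates directly: send your Creative Intention as the system prompt on every request. Here’s a minimal sketch, assuming the OpenAI Python client; the file name creative_intention.txt and the example user prompt are placeholders, not part of my actual setup.

```python
# Minimal sketch: load the Creative Intention document and send it as the
# system prompt, so the model always sees your three answers before it
# suggests anything. Assumes the OpenAI Python client and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# creative_intention.txt holds your reader question, emotional experience,
# and unique angle (hypothetical file name for this example).
with open("creative_intention.txt", encoding="utf-8") as f:
    creative_intention = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The Creative Intention rides along on every call, mirroring what
        # "project instructions" do in the chat interface.
        {"role": "system", "content": creative_intention},
        {
            "role": "user",
            "content": "Suggest three scene ideas that keep readers feeling "
                       "complicit rather than superior to the characters.",
        },
    ],
)

print(response.choices[0].message.content)
```

The point isn’t the tooling; it’s that your three answers travel with every request instead of living in your head.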
Five minutes now saves you hours of revision later. Most writers skip this step because they’re eager to start writing. But productivity without direction is just motion. You end up with competent but forgettable work.
Write down your three answers. Keep them visible. Use them ruthlessly.
What Changes Now
Over four posts, you’ve built your creative foundation:
Your reader question (WHAT you’re exploring)
Your emotional experience (HOW readers should feel)
Your unique angle (WHY you’re the right person to explore this)
Most writers open ChatGPT and type “Write me a chapter about X.” They get generic competence. Clean sentences. Forgettable stories.
You’re doing something different. You’re using AI to amplify your vision, not replace it.
But partnership requires clarity. You can’t collaborate if you don’t know what you’re building.
Next week: “Prompting Is Thinking: AI as Your Co-Conspirator.” I’ll show you how to transform this clarity into actual prompts. Not “write this for me” prompts, but “help me think through this” prompts. The difference between automation and amplification.
Before you open ChatGPT tomorrow, take five minutes. Answer three questions. Write them down.
Everything else starts there.
Send me your three answers: Reader question, emotional experience, unique angle. I’ll tell you if they’re strong enough to guide AI collaboration—or if you need to dig deeper.
And here are today’s books!
Deceived Mage
Betrayed by family, bound by magic—can Ella stop a deadly plan before it destroys everything she loves?
A dark two-year prequel to the Catenarian Chronicles
Born into a powerful ruling elite, Ella believes her destiny and birthright are clear—until she uncovers her father’s dark plan to secretly unleash a catastrophic Fusion Bomb that could wipe out her beloved hometown, blaming the attack on his political opponents.
Determined to stop the impending destruction, Ella must secretly forge an alliance with the very opponents her family seeks to frame.
The Maker’s Heir
A literary Sci-Fi novel about AI, family, and what makes us human
What if the next step in artificial intelligence... was empathy?
R wasn’t built to change the world. He was built in a workshop—an artificial intelligence designed not for innovation, but for quiet companionship. But when his creator, Jacob, suffers a life-threatening injury, R must leave behind the safety of his programming and step into a human world full of noise, mess, and meaning.
As R learns to care for Jacob’s household, navigate unexpected parenthood, and adapt to a complex, emotional family, he begins to question his place in the world. Can a machine understand love? What defines personhood? And if an AI can feel grief, compassion, and connection—does it deserve a future of its own?





