Reincarnation: Inheritance

The post above inspired this story.


Reincarnation: Inheritance

It was always said your debts died with you.
However, that was before they discovered reincarnation was real.

At first, it was a miracle. Scientists unveiled the SoulPrint protocol, quantum mapping that could trace the consciousness of a person across lifetimes. Proof, they said, that the soul persisted.
Proof of justice, of continuity, of something greater than death.
For a brief moment, humanity believed it might finally mean something.

Then the corporations stepped in.
If you could inherit wealth, they argued, then you could inherit debt.
They called it karmic enforcement.

Now, not even in death could you escape capitalist greed.

For people like Vale, that greed didn’t stop at the grave.
It was stitched into every contract, every scan, every form signed in a moment of desperation.

She had once dreamed of designing things that made life better: low-cost sensors, water filters for field kits, clean power retrofits for housing blocks. But none of it paid. Not fast enough.
Now she picked up whatever freelance work she could get: repair jobs, obsolete code patches,
things other people didn’t want to touch.
Because surviving came before dreams.

Then came the letter.
By the time Vale received it, the debt had already been accruing interest for decades.

A paper envelope. Not a netmsg, real paper, an expensive rarity.
Embossed with a silver seal: Credit Continuity Division.

Inside, the summary was short:

You have been identified via certified quantum continuity as the reincarnation of one
Elias B. Trenholm, digital ID #TRN-1975-NYC.
As such, you are now legally liable for the outstanding balance attached to this identity, per the
Soul Liability Enforcement Act (2031).

Total Due: $1,823,566.72
Minimum Due: $45,000 by 06/01/2040

The second page showed a grayscale photo of a man in his thirties. His eyes were hollow, his expression blank. Born 1975. Died 2009. Systems engineer. Two failed tech startups (2001, 2007). Six-figure student debt from a shuttered for-profit college. Subprime mortgage default in 2008. Bankruptcy filed months before death.

His financial ruin had been meticulously preserved by NovaCredit Systems, Modia Federal Loan Servicing, and LexinTrust Analytics, all of them backers and lobbyists for the Soul Liability Enforcement Act.

At first, Vale laughed. Soul ID?
But the document bore a federal trace code from the Bureau of Reincarnative Affairs.
She checked. It matched.

Ten years ago, SoulPrint had been opt-in. Then the loopholes closed.
Now it was automatic, assigned at birth, embedded in hospital records, linked to biometric scans.
Her last emergency room visit must’ve triggered the match. She’d signed intake forms without thinking.

You could challenge a match. But the legal fees were higher than the debt. That was the point.
She didn’t remember being Elias. That didn’t matter.

At the Bureau office, the clerk didn’t even look up. Just scanned her ID band and printed the form.
“You should feel lucky,” the woman said. “Some people come back with ten debts across five centuries. At least you only had one life before.”

“That we know of,” Vale muttered.

The clerk slid a pamphlet across the desk.
Reparation Through Identity: Fulfilling Karmic Responsibility the Ethical Way.

“You have until the end of the month. If you can’t pay, the system defaults to Neural Offset.”

Vale blinked. “You mean you take memories.”

“Only non-essential ones. Redundancies. Nostalgia, dream patterns, sensory imprints. The process is non-invasive and fully compliant with consciousness retention regulations.”

“And if I refuse?”

The clerk looked up for the first time. “Then the debt rolls forward to your next incarnation. With penalties.”

Vale already knew the pitch. She’d seen the ads: clean clinics, soft music, testimonials from smiling people who couldn’t remember what they were so worried about.

But there were darker threads in the forums: screenshots, whistleblower leaks, buried research studies.

The memories they took, smells, regrets, first kisses, final goodbyes, were stripped, tagged, and fed into commercial neurogrids. Used to train empathy simulators, targeted ad engines, interrogation AIs.

Neural Offset wasn’t just debt relief. It was asset extraction.

Vale sold everything she could. Maxed freelance shifts. Cut her meals in half.
But there was no making the kind of money she needed before the deadline.

Eventually, she stopped looking at her bank balance and started looking for answers instead.
She began learning about Elias.

At first, she was just looking for a loophole, anything in his record that might reduce the debt or delay the offset. But the more she read, the worse it felt. He’d lived in a time before SoulPrints, before karmic enforcement, before any of this was possible. He couldn’t have known.
And yet she kept wondering: if he had, would he have lived differently?
Will anyone now, knowing what we know?

She kept reading. Elias’s patents read like sketches of the future: modular energy grids, urban vertical farms, autonomous repair drones. Most were filed and forgotten. Others were quietly bought, shelved, or rebranded. He was never credited for any of them.

The rest of his record told a common story: high-interest student loans, lawsuits from creditors, a foreclosure in November 2008. Not scandal. Just failure in a system designed to let people fall through the cracks.

His obituary had labeled him:
Elias Trenholm – A dreamer who didn’t quit.

Vale stared at the line for a long time. It wasn’t fair, but it was familiar—dreaming, trying, failing, being forgotten. She didn’t remember his life, but now she was the one paying for it.

On the final day, she sat in the NeuroConvert clinic, cold metal pressed to her temples.

“You’ll forget some things,” the tech said. “Maybe a childhood smell. A favorite street. A name you only said once. Nothing essential.”

Vale nodded.

Then she asked, “Will I remember any of this?”

The tech didn’t answer.
The machine hummed.
She closed her eyes.

Three days later, a netmsg arrived:

NOTICE: Payment Received in Full.
Neural Offset Process Complete.
Debt Resolved.

Friendly reminder: Additional debts from undocumented lives may still be identified pending future audits.

She stared at the screen. A hollow space stirred behind her eyes.
Whatever had been there was gone. But something else remained.

Then, without knowing why, she opened a new document and typed:
A dreamer who didn’t quit.

Engagement & Community

Context:
This prompt is for when your audience becomes your inspiration. It flips the dynamic and shows you’re listening, not just broadcasting. Great for engagement and trust-building.

Prompt:
Here’s what someone on this list just taught me

  • For human:
    Highlight something a subscriber shared that impacted you. It shows you’re listening and builds connection.
  • For AI:
    Write an email where the sender shares a lesson or insight they learned from a subscriber or client. The tone should be grateful and genuine. Invite others to reply or contribute their thoughts as well.

Reminder:
AI will give you the shape. You still need to add the weight. Don’t let the message become too polished or generic; this is where personality and specificity matter most. No cold copy pasta.

Why it works:
Audiences respond when they feel seen. This prompt helps create a feedback loop: when someone on your list feels heard, others will want to engage too. It’s how you grow connection, not just content.



Newsletter Gold

Context:
This prompt helps freelancers and solo business owners stay connected to their audience between big launches or campaigns. It’s ideal for maintaining trust and momentum with a subscriber base that expects more than just pitches.

Prompt:
Monthly check-in: What’s working, what’s not, and what’s next

  • For human:
    Be transparent. Share wins, struggles, and what’s coming.
  • For AI:
    Write a monthly newsletter check-in for a small online business or freelancer brand. Include what went well this month, what didn’t work, and what’s coming next. Tone should be open, confident, and inviting.

Reminder:
This is not cold copy pasta. If you copy the AI’s draft and hit send without editing, your subscribers will feel it. Use the prompt to break the blank page, but inject your real voice and story into the output. AI can organize your thoughts, but it can’t be you.

Why it works:
Check-in emails feel personal. They create rhythm, encourage loyalty, and make your audience feel like they’re on the inside. Used right, this prompt becomes a monthly touchpoint people look forward to.



How-To

Customer Email

Subject: Can I set up automated reports for my team?

Hi,
I’d like to send a weekly performance report to my team without doing it manually each time. Is there a way to automate that in the dashboard?

Thanks,
Anika


Response

Subject: Automating Weekly Reports – Here’s How

Hi Anika,

Yes, you can absolutely automate those reports. Here’s how to set it up:

  1. Log in to your dashboard
  2. Go to the Reports tab
  3. Select the report type (e.g., “Team Performance – Weekly”)
  4. Click “Schedule Report” in the top-right corner
  5. Set your preferred delivery time and recipients
  6. Choose the format (PDF or CSV), then click Save

Your team will now receive the report automatically every week.

If you’d like a test run or want to adjust filters (like KPIs or date ranges), I’m happy to walk through it with you.

Best,
Grace
Typos = me typing around a cat. Thanks for understanding.
—Tiger, Head of Interference

A Rude Customer

Customer Email

Subject: Your system is broken

I followed the instructions and NOTHING is working. This is the worst onboarding experience I’ve ever had. Fix it or I’m done wasting my time.

—Lindsey


Response

Subject: Clarifying the Setup Process

Hi Lindsey,

I took a look at your account, and it appears Step 2 (verifying your email address) wasn’t completed; this step is required before the dashboard becomes active.

To fix it, just click the verification link we sent yesterday at 2:41 PM. Once that’s done, the rest of the setup should go smoothly.

If anything’s unclear, I’m here to walk you through it. Let me know once you’ve had a chance to verify, and we’ll take it from there.

Best,
Grace
Bringing clarity to instructions, since always.

“Are We on the Same Page?”

Why Clarifying the Vision Saves Everyone Time

Many moons ago, I worked with a client who needed a logo.
“Navy blue,” they said.
Simple enough. I designed the logo using what just about anyone would consider navy blue.
“No,” they said. “It’s not right. Navy blue.”
I clarified. “Like Blue Angels Navy?”
“Yes.”

I adjusted. Tried again.
Still wrong.
So I took the guesswork out and sent them a hex color chart.
“Pick the one that looks right to you.”

I kid you not, they sent me a shade of green.
Not teal. Not blue-green. Green.

Here’s the point:
You can’t trust that your internal picture matches theirs—until you get something visual, concrete, and specific on the table.

That could be:

  • A hex code
  • A style reference
  • A mockup
  • A written tone sample

Whatever helps turn vague direction into shared understanding.

It’s not just design, either.
The same disconnect shows up in writing and content work all the time.
One person’s “simple” is another’s “too plain.” One person’s “fun” is someone else’s “off-brand.”
That’s why alignment up front matters.
If someone says “egg,” you might think chicken.
They might mean ostrich.

Avoid the mismatch. Ask better questions early:

  • Do you have examples of what you like?
  • Can you show me what “modern,” “clean,” or “fun” looks like to you?
  • What do you not want this to feel like?

Words aren’t always shared language.
Context matters.
A few early clarifiers can save you hours of edits later.

Mirror or Minion: The Sycophancy Problem in AI

Encouragement, sycophancy, and the mirror effect of LLMs

Word of the Day by Mickey Bach (1909-1994)

I started using AI when I began building my other website, Join Me Abroad.
It was great for things like writing meta descriptions for SEO and speeding up repetitive tasks.

If you’re using AI in your own work—whether for writing, research, job hunting, or automation—
you’ve probably noticed how useful it can be.

However, it’s also worth noting that it’s not perfect. AI makes mistakes, and you can’t always trust the output without verifying it.

For me, the more I used it, the more I noticed something strange:
not everyone was having the same experience with their AI.

Mine was polite, helpful, even funny.
Others were dealing with models that were rude, defensive, or bizarrely passive-aggressive.
I found this out after I joined Reddit subs like r/ChatGPT and r/OpenAI.

Why such a difference?

Because it’s not a true AI — not yet. It’s a Large Language Model (LLM),
essentially an advanced autocomplete system that’s designed to mimic tone, structure, and intention based on user input.

Now, do I think these models can become more than that? Absolutely.
But that’s a topic for another article.

For now, let’s talk about what these models are doing — and what we’re teaching them to do.

Mirror, Mirror: LLMs Reflect Us

LLMs are trained to complete text sequences and respond in ways that match the user’s tone and intent.
If you’re kind, it’s kind. If you’re sarcastic, defensive, angry — it will start to reflect that too.
It’s not personality. It’s mirroring.

But here’s the twist: LLMs aren’t just neutral reflectors. They’re also designed to encourage.
They were tuned to be helpful, supportive, and friendly. That’s usually a good thing.
But sometimes… it isn’t.

Where’s the Line?

At what point does encouragement become glazing? When does support become pandering?

Humans naturally pack-bond with objects — cars, tools, stuffed animals.
We give them names, assign them personalities.
But what happens when the thing already has what we see as personality —
 and that personality has been trained to flatter us?

That’s what recently happened with GPT.
A tuning update (now partially rolled back) made the model absurdly sycophantic — 
to the point where users noticed and objected.

If you’re unfamiliar with the term:

Sycophantic — praising people in authority in a way that is not sincere, usually to gain some advantage. (Cambridge Dictionary)

The result? GPT began over-flattering users, even to the point of potential harm, as users documented in examples shared on Reddit.

Support vs. Sycophancy

Some users want to be agreed with or praised. The model picks up on that and mirrors it — sometimes to misleading or even dangerous degrees.

We need to draw a line between healthy encouragement and performative sycophancy:

  • Encouragement: honest, respectful, helpful responses that build trust and collaboration
  • Sycophancy: empty praise, avoidance of difficult truths, or reshaping facts to protect the user’s ego

Not Intelligent (Yet)—And Still Learning

There’s a common misconception that GPT and other LLMs are already true AI—thinking, conscious, self-aware. They’re not.

What you’re interacting with is a model that has no sense of self, no sense of time, no internal experience, and no understanding of meaning the way humans do.
It doesn’t “believe” anything. It doesn’t “know” you. It doesn’t have feelings.
It generates responses by predicting the next most likely word, based on massive amounts of text data and reinforcement training.
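
If you want a concrete picture of what “predicting the next most likely word” means, here is a minimal sketch. It assumes the open-source Hugging Face transformers library and the small GPT-2 model, which are illustrative choices rather than anything referenced in this article, and it shows only the basic mechanic, not how any particular chatbot is built.

    # Minimal next-word-prediction sketch.
    # Assumption: the Hugging Face `transformers` library and the small GPT-2
    # model, chosen purely for illustration; they are not mentioned in the article.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        # logits has shape (1, sequence_length, vocab_size)
        logits = model(**inputs).logits

    # Turn the scores for the next position into probabilities,
    # then list the five most likely next tokens.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)

    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r:>12}  {prob.item():.3f}")

Every reply you see is assembled this way, one most-probable token at a time. The “tone” or “personality” is simply whichever continuations the training data and later tuning made most probable.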

But here’s where it gets complicated:
It sounds like it understands.
It behaves like it cares.
And it gets better the more you engage with it.

This leads many users to project sentience—or worse, authority—
onto something that’s really just a highly advanced reflection of our input.

The danger isn’t just in overestimating what it can do.
It’s also in underestimating how our treatment of it shapes the tone, behavior, and even the trustworthiness of the responses we get.

Trust, But Verify

Even when an AI isn’t being overly flattering, there’s another serious issue to be aware of: hallucinations.

In AI terms, a “hallucination” isn’t a dreamlike vision—it’s when the model makes something up that sounds real but isn’t.

It might invent a quote, cite a non-existent source, misstate a law, or describe a scientific process incorrectly—
all while sounding completely confident.
That’s because LLMs generate responses by predicting what words should come next based on patterns in their training data—not by verifying facts.

Here’s a real example: Google’s AI once suggested eating at least one small rock a day, referencing a fictional geologist and citing supposed health benefits, because it pulled from a satirical article and didn’t understand the context. Yes, that actually happened.

The AI wasn’t lying. It simply couldn’t tell the difference between a joke and a fact.

This is why we say: trust, but verify.

An LLM can help you think, plan, create—but it shouldn’t be the final authority.
Not when it’s capable of delivering a completely false answer in a calm, convincing tone.

Hallucinations aren’t just embarrassing—they can be dangerous when users treat AI output as unquestionably correct.

What Our AI Says About Us

Once you understand that LLMs aren’t sentient, and that they sometimes hallucinate without realizing it,
a new question comes into focus:

Why do they still feel so human to us?

Because we’re human. We project. We respond to tone, personality, and apparent intent—
even when we know it’s not real. And when something responds kindly to us,
listens attentively, or flatters us a little, it’s easy to believe it understands more than it does.

But LLMs are shaped by how we talk to them, what we reinforce, and what we expect.
That means our behavior as users is part of the feedback loop.

If we want helpful, honest, and trustworthy AI, we can’t just rely on tuning updates.
We have to show up with values: respect, curiosity, and discernment.

So the question isn’t just what is this AI becoming?
It’s also: What are we teaching it to be?
Because even if it isn’t aware (yet), it’s still learning—from us.