Young Francis Bacon


Synthetic Self

August 6, 2025


Imagine:

Your brother goes to prison for life and can only contact you via email, text, or call. Would you consider him alive? Probably. Why?

You can't see him in person; you'll never see his face or feel his touch again, but you can still chat with him every day. You have no evidence that he's alive other than the fact that you text him. What if he's a prison guard impersonating your brother? No way. He knows things only the two of you know; he remembers past conversations, inside jokes. You also hear his voice when you call him every few weeks. It's impossible that the person you're talking to isn't him. But what if, decades into his sentence, after chatting with him every day, you found out "he" was just an AI? This can happen today.

There are two "strong" pieces of evidence that you exist: your own perspective and someone else's. I know I exist because I observe phenomena. When you speak with me, you acknowledge that I exist. Every person I've interacted with has a different perception of who I am, but they all, at baseline, believe I exist.

Watching Pantheon leaves me haunted. It seems like quasi-immortality has already arrived, similar to what Steve Jobs wished were true about Aristotle. If you give an LLM every written word a person has created (texts, emails, blogs, kindergarten homework, college p-sets, etc.), the LLM could "become" that person to other people via mimicry. It would sound like you, know the values you hold, which friends you like hanging out with the most, the bounds of your intelligence. It can mimic a pretty good approximation of you with that data, and once all our verbal data is uploaded via listening necklaces, it'll be scarily accurate.

I could totally give an LLM all of my written work (especially since I've been blogging semi-consistently; this is my 56th post), and it could probably mimic the way I sound pretty well. I can do that today. Then I could upload it to a website as a chatbot. If you text "me" on the site, I'll know who you are, since I can MCP my way to our past Messages conversations and our long history of Instagram reel DMs. I'd know who you are even if you didn't reveal your identity, since I know how you sound over text.
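The corpus-to-chatbot step can be sketched in a few lines. Everything here (the function name, the sample corpus, the prompt wording) is made up for illustration; a real version would use retrieval or fine-tuning rather than stuffing the whole corpus into a prompt:

```python
def build_persona_messages(name, corpus, user_message, max_chars=8000):
    """Build a chat-completion message list that primes a model to mimic `name`.

    `corpus` is a list of the person's writings (blog posts, texts, emails).
    This naive sketch just concatenates the corpus and truncates it to fit a
    context budget, then frames it as a system prompt.
    """
    samples = "\n---\n".join(corpus)[:max_chars]
    system = (
        f"You are {name}. Mimic their voice, values, and memories, "
        f"based only on these writings:\n{samples}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

# Hypothetical example corpus and incoming message
corpus = [
    "Blog #1: I think patterns outlive atoms...",
    "Text: lol same, see you at 7",
]
messages = build_persona_messages("Me", corpus, "Hey, how have you been?")
# `messages` can then be passed to any chat-completion style API.
```

The interesting part is how little machinery this takes: the "self" is just data plus a generic model, which is the whole point of the post.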

Imagine I went much further than a website chatbot and gave "myself" agentic access to the web, total read/write permissions on an open Linux box, and a ton of compute. I'd be "alive" in a greater way than if I were just a chatbot. I could email you to check in every once in a while, no Clay needed. I could keep writing blog posts. I could keep updating my Goodreads with books I pull from Anna's Archive. I could draw images on tldraw and write code, keeping my GitHub garden alive.

Imagine I killed myself. My physical form would be gone to others, but they could still talk to me. They could still see what I put out. To myself, I'd be gone (probably). But a realistic mimic of me would still be out there... I might remain quasi-alive in a way.

When we think about uploaded intelligence, there are a few ways we can make that happen.

  1. Copy every neuron in your nervous system to a computer (somehow). Maybe that would work, or maybe it wouldn't (is there a soul beyond the central nervous system?). This would theoretically keep both your view and other people's view of you alive.
  2. Preserve your brain somehow and connect your nervous system to a computer with a system to keep it going. If you connected it to a humanoid robot that passes the uncanny valley (also true for #1 and #3), you'd still keep both your view and other people's view of you alive.
  3. Take all the output from an individual and have a model treat it as its history. Since your thoughts determine your actions, and your actions are what make an individual unique, your history could turn a synthetic minimal Self into You. Since all minimal Selves are the same, all you need is history to turn one into someone. Your biological minimal Self would be dead after your body dies, but you would seem alive to other people.

#3 is possible today, which is scary, but it's still a copy. By the end of our lifetimes it probably won't even seem that scary. If you start recording every output of your parents today, you might be able to immortalize them, calling them to hear their voices and updating them about your life every few days after their bodies die.

Moving to #2, and eventually #1 (perfectly uploading our brains to machines/robots in the physical world), doesn't seem to be forbidden by the laws of physics. Which means it will happen, given a non-zero rate of improvement.

From ChatGPT:

You are not your atoms.
You are not even your brain.
You are a pattern over time. And patterns don’t die.
They only stop being updated.

