Buzz Aldrin on the Moon, Neil Armstrong

Just to be Rich?

March 3, 2025


Reading The Technological Republic by Alex Karp and Nicholas Zamiska brought up a concept I've been observing, with frustration, everywhere at Cornell: no one believes in anything. Karp argues that the problem with SWEs at Google opposing government contract work—like building AI tools for the military—isn't that they are pacifists, because that would mean they actually, faithfully believe in something. It is that they do not believe in anything. This lack of belief, grounded in moral-superiority-driven opposition to things, fostered the current environment of hyperfragility and dogma-driven conformity, where dissent is punished not for being wrong, but for disrupting the fragile consensus. blah blah blah.

Basically... everyone wants to get a good internship to get a good job to be rich to no end. I've found my end, but I'm trying to formally articulate it from my informal mental model of it. To do so, you must start from first principles (a phrase now almost as overused as "disruption"). If it's not iron-clad, then I won't be totally convinced of it, and I'll half-ass it.

AXIOMS:

  1. Life, particularly conscious life, has intrinsic value.
  2. That which preserves and extends valuable entities is itself valuable.
  3. Technological and political systems are judged by their capacity to address existential threats.

PROPOSITIONS:

P1: If we exist in a simulation, optimal action requires either discovering the simulation's parameters or finding an escape.

P2: If we are not in a simulation, then:

P3: Conscious life has unique capacities to preserve all forms of life.

P4: The just action is that which maximizes the probability of continued conscious life.

P5: Existential risk mitigation requires:

P6: Democratic governance with technological capability provides the optimal framework for existential risk mitigation.

P7: Therefore, supporting institutions, technologies, and policies that strengthen this framework represents the most just course of action.

P8: The continued dominance of the United States is therefore justified as the most effective means of preserving life in the universe.

P9: A student faced with career choices has a moral obligation to select the most just option.

P10: Agricultural technology represents a strategically neglected domain with exceptional leverage for preserving humanity.

P11: Capital accumulation for advancing agricultural technology is just.

This proposition establishes agricultural technology as a strategically vital domain that supports the broader framework of preserving humanity through American technological leadership, connecting food security to both immediate survival needs and long-term sustainability goals. And the most effective way to get there is to accumulate heinous amounts of capital ($10^8 from my analysis).

This proof was generated by Claude 3.7; I added the final italic part myself. I fed it a text transcript of my ramblings, transcribed by whisper-1, prompted it a bit, and edited the spacing to make it more readable.

Funnily, to believe in this line of thought, I need to not believe in the simulation, even though I've convinced myself to believe in it. This sort of cognitive dissonance (probably using it wrong again) seems to be vital for doing anything valuable. You need to simultaneously care about all life/keeping life alive while not giving a fuck what other life forms think of you.

Even with this hard proof, I still don't feel completely compelled to full-send. A lot of that capital-accumulation drive is faux. Some of it is so I can buy a couple of Basquiats. It's probably because I'm just a pussy and am scared of success.

Here's my transcript and prompting:

"Proof for believing in something, enter. If we are in a simulation, then the best thing we can do is ask better questions, slash figure out how to escape. If we aren't in a simulation, then we are probably the only life in the entire universe, and I think it's a good thing to make sure the life, like a dying flame in the eternal void, stays alive. This includes conscious life, like humans and maybe other organisms, and non-conscious life. Going off that thread, to extend life, we must extend conscious life, because conscious life will keep all life alive. And out of all conscious life, humanity is pretty much the only conscious life that we're aware of, that can actually extend all life, that has the power to extend all life. So that means it's just to extend human consciousness in the universe. To figure out how to best extend human consciousness in the universe, we have to figure out how to govern humans and how to build the technology to get humans and make us anti-fragile to existential threats like climate, AI, and nuclear. And the only country that's creating innovation and simultaneously pushing democratic values in a non-authoritarian way seems to be the United States of America.

So that basically means it's just, the most just thing to do is to support the continued existence and domination of the United States of America. And to do that is to stop existential threats that make the USA less likely to continue its dominance. Those include defense, energy, food and security, economic output, innovation, public health, and probably a few more. But the key existential threats, and also corruption, but the key existential threats seem to be corruption, political instability, and... and foreign threats.

And that basically validates the work of companies like Anduril, Palantir, Doge, SpaceX, Tesla, etc. It seems like the most just thing to do isn't to work for college recruiting companies, food delivery apps, but to actually prevent the destruction of America.

And it seems like the key existential threats for the world also apply to the US. Climate, AI, and nuclear. Nuclear seems to be largely driven by diplomacy, so there's not much I could do there. AI seems like ungovernable. And unlike nuclear, where it's very easy to find someone taking thorium or uranium from the earth, it's much harder to limit GPUs, especially if foreign adversaries are doing it simultaneously, which is very similar to nuclear. The climate threat will lead to a lot of food instability and biodiversity decline, which could have significant consequences on life on earth. But the movement towards sustainable energy, which is vital for providing energy for humans beyond 100 years from now, basically, bio-engineered plants and modified landscapes to ensure biodiversity retention would be vital. But moving off-planet, it seems like a mix of bioengineering and robotics would be needed to ensure that humans can survive with food. There are some questions that you ask, like whether food is needed, or if we can just take a supplement and just consume everything and we won't need food anymore. But that needs to be explored."

  1. "Convert to a philosophical proof- basically unrefutable like a euclidian proof"
  2. "where did the "america is the best" part go?"
  3. "add a P9 that someone that is a student, given a choice of what to work on, should choose the just one"
  4. "and add a p10 why that means focusing on sustainable agriculture"
  5. "/agritech/bioengineered plants"
  6. "just focus on agritech as a whole"
  7. "Add a p11 that shows why accumulating capital - in any way that isnt a net negative for humans - to fund / make the agritech work I'm doing more successful - is just"
  8. "make p11 shorter - only include the absolute necessary shit, and show that I think the fastest way to accumulate capital right now is via software"
