What the Hell Happened to Me (and to Silicon Valley)
On Truth, Power, and the Silicon Valley I Helped Build
This is, somewhat improbably, a philosophy blog.
Until recently, I was a tech executive—helping build Cash App, among other things. But I’ve harbored a not-so-secret secret my entire professional life: an obsession with reading, thought, and intellectual debate far outside my day job. My wife will roll her eyes if you ask her about it. I love these pursuits, and I think they made me a pretty damned good engineer.
Funny thing about speaking truth to power: the recruiters stop calling. Terrified, I suppose, of upsetting the new regime. That’s its own commentary. So here I am, apparently a philosopher now. I guess when the tech world decides you’re radioactive, all those years of midnight reading finally become your profession by default.
The irony isn’t lost on me. After decades of balancing intellectual pursuits with building financial technology, I’ve been involuntarily freed to focus entirely on questions of democracy, power, and truth. Though I suspect this isn’t exactly what my teenage self had in mind when dreaming of a life of philosophical inquiry.
To be fair, this tension had been building for some time. Those who’ve read my essays can likely trace the growing disquiet in my views about the industry I once called home. Not about my previous employer, mind you—I remain genuinely appreciative of their ethical standards and mission. My concerns run deeper, toward the fundamental transformation of Silicon Valley itself.
What troubles me is how seamlessly the industry shifted from disrupting power to consolidating it. I still admire the entrepreneurial spirit that defined Silicon Valley’s earlier era—that drive to build, to solve problems, to reshape markets through innovation. But something profound has changed. The industry that once prided itself on “moving fast and breaking things” now seems primarily interested in accumulating power and breaking democracy. Or, in the main, it has become suddenly indifferent to democracy’s fate after November of last year.
The transformation has been subtle but comprehensive. Where we once saw scrappy startups challenging established players, we now watch tech giants work to capture the machinery of state power itself. The entrepreneurial drive hasn’t disappeared so much as mutated.
I want to be absolutely clear: I am not anti-Silicon Valley. At my core, I remain a rational liberal who understands the critical importance of American technological leadership, particularly in an era of intensifying great power competition. The ability of American companies to innovate, to push boundaries in artificial intelligence, quantum computing, biotechnology—these aren’t just business concerns, they’re vital national interests. And if you haven’t noticed, I’m a very patriotic American.
But this brings me to artificial intelligence, where the stakes become existential in a very literal sense. While I lean toward technological optimism, I’ve found myself increasingly troubled by conversations with many of the brilliant minds driving AI development. It’s not just their technical theories that concern me—it’s their fundamental assumptions about ethics, human nature, and the meaning of consciousness itself.
Exile from the industry I helped build has given me a strange gift: the time to confront, with unflinching clarity, the existential questions we once relegated to late-night philosophy debates. And no question looms larger than this—what happens when the architects of artificial intelligence conceive of human values as mere optimization variables?
These aren’t abstract philosophical concerns. When the people designing potentially transformative AI systems approach human meaning as merely an optimization problem, or reduce ethics to utility calculations, we should worry. I’ve sat in rooms with some of the smartest people in AI, and while their technical brilliance is undeniable, their theories about human consciousness and meaning often feel dangerously reductive. They’re attempting to align artificial intelligence with human values while working from remarkably thin concepts of what human values actually are.
Let me be absolutely clear about something: Humans are not an optimization problem, and society is not a problem looking to be solved. Technology is tool, not telos. When we confuse technological capability with human destiny, we fundamentally misunderstand both. The purpose of technology is to serve human flourishing, not to reshape humanity according to its own logic. Yet increasingly, I watch brilliant minds reverse this relationship, treating human experience as raw material to be optimized rather than the very thing we’re trying to enhance.
The realpolitik truth is that America’s competitive advantage has always rested on our ability to balance dynamic markets with democratic governance. But we’re now watching this balance dissolve at precisely the moment when we’re developing technologies that require the deepest possible understanding of human values and meaning. The same tech leaders who view democratic institutions as impediments to efficiency are also making crucial decisions about AI alignment based on troublingly narrow conceptions of human nature.
This isn’t about choosing between technological leadership and democratic values—it’s about recognizing that they’re inextricably linked. The ethical frameworks we bring to AI development can’t be separated from the broader systems of democratic governance and human meaning they’ll ultimately affect. When we sacrifice this deeper understanding in the name of competition or efficiency, we risk creating systems that are technically impressive but fundamentally misaligned with human flourishing.
For a brief moment, it seemed like America’s tech giants had recognized the fundamental prisoner’s dilemma at the heart of AI development. When Musk called for government regulation of AI, it appeared to signal a crucial understanding: that unrestrained competition in developing transformative AI systems could lead to collectively disastrous outcomes, even if individual actors were pursuing rational strategies.
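For readers who want the game theory spelled out, here is a minimal sketch in Python, with invented payoff numbers, of the race-to-deploy version of that dilemma: each lab is better off racing no matter what the other does, so mutual racing is the only equilibrium, even though mutual restraint leaves everyone better off.

```python
# Toy payoff matrix for a two-lab AI race (all numbers are illustrative only).
# Strategies: "restrain" (accept oversight) or "race" (deploy without it).
# payoffs[(a, b)] = (payoff to lab A, payoff to lab B)
from itertools import product

payoffs = {
    ("restrain", "restrain"): (3, 3),  # shared safety, shared progress
    ("restrain", "race"):     (0, 4),  # the racer grabs the market
    ("race",     "restrain"): (4, 0),
    ("race",     "race"):     (1, 1),  # collectively worst: risk for all
}

strategies = ["restrain", "race"]

def is_nash(a, b):
    """(a, b) is a Nash equilibrium if neither lab gains by deviating alone."""
    pa, pb = payoffs[(a, b)]
    best_a = all(payoffs[(alt, b)][0] <= pa for alt in strategies)
    best_b = all(payoffs[(a, alt)][1] <= pb for alt in strategies)
    return best_a and best_b

for a, b in product(strategies, strategies):
    tag = "  <- Nash equilibrium" if is_nash(a, b) else ""
    print(f"A={a:8s} B={b:8s} payoffs={payoffs[(a, b)]}{tag}")
```

Running this marks only (race, race) as an equilibrium, even though mutual restraint at (3, 3) leaves both labs better off than (1, 1). That is exactly why binding external regulation, a mechanism that constrains both players at once, briefly looked like the rational ask.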
Yes, the same Musk who once advocated for AI regulation is now actively working to dismantle the very institutions that could provide meaningful oversight. Through DOGE, he’s participating in the systematic weakening of federal agencies while simultaneously developing AI systems that could fundamentally reshape human society. The apparent recognition of the prisoner’s dilemma has given way to something far more dangerous—the belief that private actors should control both AI development and the mechanisms meant to govern it.
The supreme irony is that the same person who warns us about AI’s existential risks now seems to believe that the solution is centralizing unprecedented power in his own hands. Because nothing says “responsible governance” quite like combining control over global communications, space technology, artificial intelligence, and Treasury payment systems in the hands of someone who can’t manage his own Twitter feed without veering into conspiracy theories. It would be almost comical if it weren’t so dangerous.
But I digress. This is, after all, a philosophy blog. Though perhaps watching someone attempt to rewrite reality through sheer force of will does bring us back to fundamental philosophical questions. What is truth when power believes it can negotiate with basic facts? What becomes of human meaning when those developing transformative technologies reduce consciousness to optimization problems? How do we maintain that two plus two equals four when those controlling our digital mirrors insist the answer should be more flexible?
These aren’t just abstract concerns for late-night debates anymore. They’ve become urgent practical questions as technology and power reshape our understanding of reality itself. So while I may have stumbled into all this through the back door of tech industry exile, it turns out these ancient questions about truth, power, and human nature have never been more relevant.
This is why I insist that liberalism should be, at its core, an epistemic project. Liberal democracy isn’t just a system for choosing leaders—it is a system for discovering what is true. Free speech, free inquiry, independent institutions—these are not simply values; they are the instruments through which societies uncover reality itself and, beyond that, sustain the rich tapestry of boundless meaning-making. When we lose these things, we do not merely risk tyranny—we lose our capacity to know what is real and to experience what is possible.
This is the part of my work I will never abandon. Because no matter how bitter the polemic, or sharp the contempt, it is always built upon this foundation: that liberty and truth are inseparable. That self-government is not merely a political arrangement, but a method for discovering what is real together.
When tech oligarchs and their enablers treat democratic institutions as outdated code to be rewritten, they’re not just attacking abstract principles—they’re betraying every person who died believing that government of the people, by the people, for the people would not perish from the earth.
So yes, my prose has grown more pointed, more contemptuous. Because in times like these, measured academic distance can become its own form of moral failure. Some betrayals deserve our contempt. Some sacrifices demand we speak with the full force of moral conviction about what was purchased with their blood.
And so what began as a side interest in political philosophy has become, through an interesting series of events, my primary occupation. At least for now. When you’re already radioactive for speaking truth to power, you might as well go all-in on defending the frameworks that make truth possible in the first place.
Because two plus two equals four.
And no power—not Musk’s, not Trump’s—will ever make it otherwise.
This is so interesting. ‘Tech is a tool, not a telos’ sums up very well one of the main things going wrong here.
What’s very peculiar, watching the current catastrophe unfold, is that one sees not so much the desire to build tools, or to use them, or even to make them ubiquitous in harmful ways (though there is that), as something much more like a full-blown fantasy, a kind of game being played. The system of government is a puzzle to be defeated within that game. The agencies are challenges, little nodes you bust open, and the opponent in the game is society itself (this society, but ultimately global society). Then, once you have busted all that up, you rebuild a new structure on top, replacing the pieces with things you control instead of things your enemy controls. Then you’ve completed the game. The vast complexity of an actual civilization, and the possibility of consequences, isn’t relevant, because you are the player—you are completely outside the game. The consequences aren’t yours; they are for others.
The people in charge of this, the termites chewing away at the wires, aren’t primarily the people who make things but the people who hire other people to make things, which they then sell. That might be why the idea of destroying so much of what other people have made, built by millions over generations and depended on by everyone in society, comes so naturally. If you are primarily a salesman and not a maker, you are unlikely to respect the work that has gone into anything someone else has made. Indeed, that work gets in your way, because you can’t make money off something somebody else has made—you need a space for the things you want to sell.
They are enamored of tech, but they are not impressed enough by science to refrain from destroying, in a few days, much of what the practice of science requires. They like to throw ideas out there—but they use ideas for selling, not for making. The truth of an idea isn’t of interest; only how people respond to it matters. These salesmen don’t have to build, so they don’t have to know the inner workings of things or think like engineers about what things were made for—and that makes it much easier to perceive the things that exist, including other people, as obstacles to defeat in a game, impediments to completion. Remove those things so you can sell your things instead.
I suppose this is why they don’t seem to understand how, in good science fiction, the futuristic gadgets are there to drive the narrative toward fine-grained details of the human situation. It’s the subjective beings that matter in the story, not the tech—which is a tool for exploring something human (or, if the character is another conscious entity, the value of that being’s subjectivity). That’s the whole point—the twists introduced by technology are of interest only if the subjectivities the story explores are valued for themselves. The tech oligarchs speak of their interest in technology like bad science fiction, where everything is perceived from the outside, characters are just an excuse to talk about gadgets, and the admiration goes to the objects, because they are excitingly futuristic.
But this led me to a scary thought. If Nazi mass murder reflected the efficiency and productivity norms of industrial manufacturing, and this crowd turned to mass murder (as the way they talk suggests might come to appeal to them), what form would it take? Possibly much of the destruction, and the potential death, will come simply from failing to understand most of the reality of a society while mucking around to open up the markets they want. But if they get caught up in an urge for further and further control, then I suppose the models in their heads will dictate what fate they want to inflict on all the inconvenient humans. It wouldn’t be industrial and efficient like the Nazi murder factories but something else, like cutting people off from the goods they need to survive—using some kind of algorithm to decide who is worthy of persistence.
I realize that thought’s a little far out, but these people genuinely seem far out—they imagine they can do things that they can’t possibly do, like live forever. They are like the Nazis in that they have invented deranged conceptions of the human body and of humanity itself that they seem driven to experiment with, and they don’t seem inhibited by the usual moral constraints.
“cutting people off from the goods they need to survive—using some kind of algorithm to decide who is worthy of persistence”
An AI trying to optimize society without moral constraints in its training set would find, like politicians without moral constraints, that non-participants in the GDP are most expendable: the disabled, the non-working elderly boomers, the poor who have already fallen through the defunded safety net.
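To make that concrete, here is a deliberately naive sketch in Python, with invented numbers, of what such an optimizer looks like: maximize measured output under a fixed support budget, and the people who produce nothing get only the leftovers, unless someone explicitly writes a survival floor into the objective.

```python
# A deliberately naive "optimize society" toy: allocate a fixed support budget
# across groups to maximize measured economic output. All numbers are invented.

groups = {
    # name: (economic output per unit of support, units of support needed)
    "working adults":      (1.5, 40),
    "disabled":            (0.0, 20),
    "non-working elderly": (0.0, 25),
    "poor, out of work":   (0.1, 15),
}

def allocate(budget, moral_floor=0.0):
    """Greedy output-maximizing allocation.

    moral_floor: fraction of each group's need that must be met first;
    this is the 'moral constraint' a pure output optimizer simply lacks.
    """
    # Meet the floor (zero by default), then spend what's left purely
    # by output-per-unit, highest contributors first.
    alloc = {name: moral_floor * need for name, (_, need) in groups.items()}
    budget -= sum(alloc.values())
    for name, (output, need) in sorted(groups.items(),
                                       key=lambda kv: -kv[1][0]):
        extra = min(need - alloc[name], max(budget, 0.0))
        alloc[name] += extra
        budget -= extra
    return alloc

print("No moral constraint:  ", allocate(60))
print("With a survival floor:", allocate(60, moral_floor=0.5))
```

In the first run the zero-output groups receive only whatever is left over; in the second they are guaranteed half their need before anything else is optimized. The difference between the two is not intelligence; it is a single constraint that someone had to decide to write down.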