My Recent Anxiety Isn’t About the Singularity, It’s About the Slow Fade AI Is Creating

Image: a man in a suit and tie asks what could possibly go wrong.

AI represents the ironic first fruits of a supposed intelligence revolution: not a golden age of creativity, but an industrial-scale engine for producing plausible, yet fundamentally hollow, content. The promise was automation and insight; the initial deliverable is often just noise with a glossy veneer.

Stepping back, this torrent of slop forces a sobering question: what if this is the main output? The staggering investments—hundreds of billions in chips, data centers, and engineering talent—begin to look less like a foundation for the future and more like history’s most expensive experiment. We are burning vast capital and planetary resources to create systems that excel at mimicry but falter at genuine understanding, that automate the creation of mediocrity while destabilizing creative professions. The experiment is whether machine intelligence can truly integrate into the fabric of human endeavor without dissolving its value. The early results suggest we may not be building a bright new world, but rather a fantastically costly machine for generating the digital equivalent of fast-food packaging: instantly produced, uniformly generic, and destined to be discarded.

After years of looking exactly the same, this blog finally got a fresh coat of paint. I’ve switched things over to a darker look: black background, light text, and a more code‑friendly feel that matches how I actually spend my time these days. If you’re reading this at night or in a dim room, it should be a little easier on the eyes than the old blinding white theme I set up years ago and never touched again.

Behind the scenes, I also moved the site off WordPress.com and over to a local provider. That gives me more control over backups, themes, and customization, and lets me treat this blog a bit more like the rest of my projects instead of something I only poke at every few years. Nothing dramatic is changing about the content, but the plumbing and paint are finally caught up with the present. If something looks weird or broken in the new setup, feel free to let me know—and thanks for still stopping by after all this time.

The Trillion-Dollar Lie: Ilya’s Testimony and the Altman Conundrum

Let’s be clear: we’ve always known that the builders of our world-class, society-altering infrastructure were flawed. The railroad barons, the telecom giants, the oil magnates—their ambitions were often matched only by their ruthlessness.

But in the age of AI, the stakes are different. We’re not just laying track or stringing cable; we’re building the potential substrate of all future human thought and society. The person steering that ship, we hope, would be held to a higher standard.

The recent testimony from Ilya Sutskever in the Elon Musk vs. OpenAI lawsuit shatters that hope and reveals a problem that is both mundane and existentially terrifying: the man at the helm of this transformation, Sam Altman, allegedly has a huge lying problem.

And what’s most alarming isn’t just the accusation, but the collective shrug from his defenders.

The Testimony: Not a Misunderstanding, but a Pattern

The legal documents are dry, but the content is explosive. Ilya Sutskever, OpenAI’s former Chief Scientist and a board member at the time, stated under oath that the board’s decision to fire Altman in November 2023 was due to a “breakdown in the trust and communications between the board and Mr. Altman.”

He didn’t say “a disagreement over strategy.” He didn’t cite “differing visions for AGI safety.” He cited a breakdown in trust. Specifically, the board could no longer trust Altman to be consistently honest with them.

This wasn’t about one lie. It was about a pattern—a “multiplicity of examples,” as one report put it—where Altman was allegedly not candid, making it impossible for the board to govern effectively. The very body tasked with ensuring OpenAI’s mission-aligned governance felt it had to launch a corporate coup to perform its duty, all because it couldn’t believe what its CEO was saying.

The Stakes: This Isn’t a Normal Startup

We need to pause and absorb the dissonance here.

On one hand, you have Sam Altman, the global ambassador for AI, courting trillions of dollars in investment and infrastructure spending from governments and corporations. He is shaping global policy, testifying before Congress, and making promises about building a future that is safe and beneficial for all of humanity. The fabric of our future society is, in part, being woven on his loom.

On the other hand, you have his own board—composed of mission-aligned experts like Ilya Sutskever and Helen Toner—concluding he is so fundamentally untrustworthy that he must be removed immediately for the good of the mission.

This isn’t a typical “move fast and break things” startup culture clash. This is the equivalent of the head of the International Atomic Energy Agency being fired by his own scientists for being loose with the facts about safety protocols. The potential consequences are not a failed app; they are, in the most extreme but not-unthinkable scenarios, catastrophic.

The Defense: “I Don’t Care, He Gets Shit Done”

Perhaps the most telling part of this whole saga is the nature of the defense for Sam Altman. As one observer aptly noted, you don’t see many people jumping to say, “He doesn’t have a huge lying problem.”

Instead, the defense maps almost perfectly to: “I don’t care, he gets shit done.”

The employee revolt that reinstated Altman, the support from major investors—it all signaled that the perceived ability to execute and create value (let’s be frank: monetary value) mattered more than a deficit of trust at the very top. The mission of “ensuring that artificial general intelligence benefits all of humanity” was, in a moment of crisis, subordinated to the cult of execution.

This is a devil’s bargain that Silicon Valley has made before, but never with a technology of this magnitude. We’ve accepted the “brilliant jerk” genius to give us our next social network or smartphone. Are we really willing to accept it for the technology that could redefine consciousness itself?

The Precedent We’re Setting

The message this sends is chilling. It tells future leaders in the AI space that transparency and consistent honesty are secondary to velocity and fundraising. It tells boards that if they try to hold a charismatic, high-value CEO accountable for a “pattern of lying,” they may be the ones who are ousted.

We are institutionalizing a dangerous precedent at the worst possible time.

The Ilya testimony isn’t just a juicy piece of corporate drama. It’s a stark warning. It suggests that the architect of our AI future operates in a cloud of alleged deception, and that a large portion of the ecosystem building that future is perfectly willing to look the other way.

The question is no longer if Sam Altman has a lying problem. The question, posed by his own chief scientist under oath, is whether we should care. And in our collective answer, we are deciding what kind of future we are truly building.

My Split Heart: Why I’m Defensive of the Linux That Saved Me

There’s a war going on inside me, and it’s fought in terminal commands and neural networks.

On one hand, I am euphoric. The gates have been blown wide open. For decades, the biggest barrier to entry for Linux wasn’t the technology itself—it was the gatekeeping, the assumed knowledge, the sheer terror of being a “moron” in a world of geniuses. You’d fumble with a driver, break your X server, and be met not with a helpful error message, but with a cryptic string of text that felt like the system mocking you.

But now? AI has changed the game. That same cryptic error message can be pasted into a chatbot and, in plain English, you get a step-by-step guide to fix it. You can ask, “How do I set up a development environment for Python on Ubuntu?” and get a coherent, working answer. The barrier of “having to already be an expert to become an expert” is crumbling. It’s a beautiful thing. I want to throw the doors open and welcome everyone in. The garden is no longer a walled fortress; it’s a public park, and I want to be the guy handing out maps.

But the other part of my heart, the older, more grizzled part, is defensive. It’s protective. It feels a pang of something I can’t fully explain when I see this new, frictionless entry.

Because Linux, for me, wasn’t frictionless. It was friction that saved my life.

I was a kid when I first booted into a distribution I’d burned onto a CD-R. It was clunky. It was slow. Nothing worked out of the box. But for a kid who felt out of place, who was searching for a sense of agency and control in a confusing world, it was a revelation. Here was a system that didn’t treat me like a consumer. It treated me like a participant. It demanded that I learn, that I struggle, that I understand.

Fixing that broken X server wasn’t just a task; it was a trial by fire. Getting a sound card to work felt like summiting a mountain. Every problem solved was a dopamine hit earned through sheer grit and persistence. I wasn’t just using a computer; I was communicating with it. I was learning its language. In a world that often felt chaotic and hostile, the terminal was a place of logic. If you learned the rules, you could make it obey. You could build things. You could break things, and more importantly, you could fix them.

That process—the struggle—forged me. It taught me problem-solving, critical thinking, and a deep, fundamental patience. It gave me a confidence that came not from being told I was smart, but from proving it to myself by conquering a system that asked no quarter and gave none. In many ways, the command line was my first therapist. It was a space where my problems had solutions, even if I had to dig for them.

So when I see AI effortlessly dismantling those very same struggles, I feel a strange, irrational bias. It’s the bias of a veteran who remembers the trenches, looking at new recruits with high-tech gear. A part of me whispers, “They didn’t earn their stripes. They don’t know what it truly means.”

I know this is a fallacy. It’s the “I walked uphill both ways in the snow” of our community. The goal was never the suffering; the goal was the empowerment. If AI can deliver that empowerment without the unnecessary pain, that is a monumental victory.

But my love for Linux is tangled up in that pain. It’s personal. It’s the technology that literally saved me by giving me a world I could control and a community I could belong to. I am defensive of it because it’s a part of my identity. I feel a need to protect its history, its spirit, and the raw, hands-on knowledge that feels sacred to me.

So here I am, split.

One hand is extended, waving newcomers in, thrilled to see the community grow and evolve in ways I never dreamed possible. “Come on in! The water’s fine! Don’t worry, the AI lifeguard is on duty.”

The other hand is clenched, resting protectively on the old, heavy textbooks and the logs of a thousand failed compile attempts, guarding the memory of the struggle that shaped me.

Perhaps the reconciliation is in understanding that the soul of Linux was never the difficulty. It was the freedom, the curiosity, and the empowerment. The tools are just changing. The spirit of a kid in a bedroom, staring at a blinking cursor, ready to tell the machine what to do—that remains. And if AI helps more people find that feeling, then maybe my defensive, split heart can finally find peace.

The gates are down. The garden is open. And I’ll be here, telling stories about the old walls, even as I help plant new flowers for everyone to enjoy.

Great domain planning, Microsoft

Why?

Microsoft, what on fucking earth are you doing?

How could you think this is a good idea:

https://tasks.microsoft.com → Outlook

https://tasks.office.com → Planner

Picture it:

“We’re really aligning the Tasks strategy under a unified vision of cross-platform productivity.”

“Great! So… two separate domains?”

“Exactly.”

Dozens of PMs, architects, designers, and engineers probably sat in Teams calls nodding at slides with flowcharts explaining why the Outlook Tasks experience needed to live under microsoft.com while Planner Tasks deserved its own shiny office.com home. Because, you know, user clarity.

Meanwhile every DevOps person on earth is just trying to figure out why half their integrations break depending on which URL someone fat-fingered into a webhook.

Somewhere there’s a PowerPoint deck titled “Unifying the Task Experience” that’s been in circulation since 2018.

The Hypocrisy of “Strengthening” 

The Leadership Immunity

The memo announcing Amazon’s 14,000 job cuts talks about “strengthening” the organization. So, if the business is so strong, why the massive layoff?

The answer is simple, and it has nothing to do with “strengthening culture” or “increasing ownership.” It’s about increasing shareholder value.

Layoffs, especially during periods of high profit, are a well-worn tactic to immediately boost stock prices and profit margins. By cutting 14,000 salaries, benefits, and overhead, Amazon can present a more favorable balance sheet to Wall Street. It’s a direct transfer of stability from employees to investors.

This is the heart of the hypocrisy: Framing a decision made for financial markets as a necessary step for innovation and customer obsession.

While the memo speaks of shared sacrifice and a leaner future, it’s vital to ask: who is actually sharing in the sacrifice?

The executive who signed this letter, Beth Galetti, is not feeling this “leanness.” According to Amazon’s own regulatory filings, her total compensation for 2023 was nearly $14 million, the vast majority of which came in the form of stock awards.

Let that sink in.

The executive presiding over the elimination of 14,000 jobs—jobs that provided mortgages, healthcare, and stability for families—was rewarded with a compensation package worth over 300 times the median Amazon employee’s pay.

This is the ultimate hypocrisy. The “tough decisions” are not being made by people who face any financial insecurity. Their multi-million dollar packages, tied directly to stock performance, incentivize short-term cost-cutting like mass layoffs. For them, “strengthening the organization” means boosting the metrics that directly inflate their own net worth.

AI’s Unintended Path to Self-Destruction

You feel it, don’t you? A low hum of unease beneath the surface of daily life. It’s there when you scroll through your phone, a feed of curated perfection alongside headlines of impending collapse. It’s there in conversations about work, the economy, the future. It’s a sense that the wheels are still turning, but the train is heading for a cliff, and everyone in the locomotive is just arguing over the music selection.

This isn’t just burnout. This isn’t just the news cycle. This is a collective, intuitive understanding that our world is barreling towards a future that feels… self-destructive. And nowhere is this feeling more acute than in our relationship with the breakneck rise of Artificial Intelligence.

We were promised a future of jetpacks and leisure, a world where technology would free us. Instead, we’re handed opaque algorithms that dictate our choices, social media that fractures our communities, and an AI arms race that feels less like progress and more like a runaway train with no one at the controls.

The path we’re on is not the only one. The feeling that something is wrong is the first, and most crucial, step toward changing course. It’s the proof that we haven’t yet surrendered our vision for a world that is not just smart, but also wise; not just connected, but also compassionate.

The future isn’t a destination we arrive at. It’s a thing we build, every day, with our attention, our choices, and our voices. Let’s start building one we actually want to live in.

Great, a new Battlefield game that makes me revisit Windows…

Let’s get one thing straight: I am not a Windows user.

My daily driver is a sane, rational operating system that treats me like a competent adult. I use it for work, for creativity, for everything. But once in a blue moon, the stars align in a way that requires me to boot into Windows. Maybe it’s a specific piece of hardware, a game with a draconian anti-cheat, or helping a less-technical family member.

It’s always a reluctant visit. A digital trip to a noisy, crowded mall after years of tranquility. And every single time, without fail, Microsoft finds a new, more aggressive way to make me regret my decision.

My latest foray into the world of the Blue Screen of Life™ was no different. I was greeted not by a welcome screen, but by a full-court press of psychological manipulation.

First, it’s the begging for a Microsoft account. The “Sign In” screen is giant and in-your-face, while the “Offline Account” or “Domain Join” option is now a ghost—a tiny, greyed-out link you have to scour the screen for. I’ve heard that on Windows 11 Home, they’ve even removed the Ethernet trick: you literally have to pretend you have no internet to access the basic human right of a local account.

Let that sink in. To exercise a fundamental choice over your own machine—the choice to keep your data local and your identity separate from a corporate cloud—you have to trick the operating system. Since when is my computer my adversary?

But it doesn’t stop there. Oh no. Once you’ve navigated the labyrinth and carved out your pathetic little local account, the onslaught begins.

“Get OneDrive!” “Back up to the cloud!” “Your files aren’t safe here!” “Don’t you want to be connected?”

It’s a constant, dripping faucet of anxiety-driven marketing. It’s in the setup. It’s in the file explorer. It’s a notification, a pop-up, a brightly colored button where the “Save” button should be. It’s the digital equivalent of a street vendor following you down the block, screaming in your ear about a timeshare.

Why is this so messed up?

Because it’s a blatant, cynical power grab. Microsoft isn’t just selling you an operating system anymore; they’re selling you a subscription to an ecosystem. Your data, your identity, your habits—that’s the product. A local account is a leak in their revenue stream. A user who isn’t tethered to their cloud is a user they can’t monetize as effectively.

They are systematically removing user agency and calling it a “feature.” They are framing the desire for privacy and local control as an archaic, difficult-to-access “legacy option,” like changing the BIOS or editing the registry.

This isn’t progress. This is enclosure. They are fencing off the digital commons of personal computing and telling us we have to pay a toll—in data, in dependency, in our very user identity—to simply use the machine we own.

I don’t want my operating system to be a service. I don’t want my files automatically synced to a server I don’t control. I just want to install a program, save a file to the hard drive I paid for, and be left the hell alone.

Every time I use Windows, this is what I’m reminded of. It’s not an operating system; it’s an advertisement with delusions of grandeur, desperate to handcuff you to its ecosystem before you can even get anything done.

So, congratulations, Microsoft. You’ve succeeded. You’ve made your platform so hostile to casual, privacy-minded users that my next blue-moon visit will be even more reluctant. And my main operating system? It looks better every single day.

Q-Day Isn’t Just Coming—It’s Underestimated

Why quantum computing’s inflection point is a far bigger deal than most assume, using Bitcoin as a case study.

For the past several months I’ve found myself thinking about “Q-Day”—the hypothetical moment when a large-scale, fault-tolerant quantum computer can break today’s public-key cryptography in practical timeframes. The term gets thrown around casually, but the analysis underneath it is almost always shallow. Industry conversations typically hover at the level of “someday RSA and ECC will be broken, we’ll swap in post-quantum crypto.” But if you model the actual math and network effects, the impact is far more systemic than most realize.

The Cryptographic Assumption Underneath Bitcoin

Bitcoin’s security rests on two primitives:

  • SHA-256 for proof-of-work hashing,
  • secp256k1 ECDSA for digital signatures and address control.

Most casual observers treat SHA-256 and ECDSA as equally “quantum resistant,” but they’re not. Grover’s algorithm only gives a quadratic speed-up for symmetric hashes (meaning SHA-256’s effective strength drops from 256 bits to ~128 bits—still enormous). Shor’s algorithm, on the other hand, annihilates ECDSA once a quantum machine has enough logical qubits and error-correction throughput. The moment Shor’s algorithm crosses that threshold, every unspent output on the blockchain tied to a public key rather than a hashed address becomes trivially spendable. That’s not a niche corner case—it’s a significant slice of historical Bitcoin, and the attack model is radically different from anything in classical cryptanalysis.
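
To make the asymmetry concrete, here’s a rough back-of-the-envelope sketch in Python. The numbers are illustrative only (real Shor cost estimates depend heavily on error-correction overhead, which this ignores); the point is the shape: Grover merely halves the exponent, while Shor collapses it to a polynomial.

```python
# Rough attack-cost comparison, assuming the textbook complexity results:
# Grover gives a quadratic speedup against an n-bit hash preimage search;
# Shor solves the elliptic-curve discrete log in roughly polynomial time.

n = 256  # SHA-256 output size
print(f"SHA-256 preimage, classical: ~2^{n} queries")
print(f"SHA-256 preimage, Grover:    ~2^{n // 2} queries "
      f"(effective {n // 2}-bit security -- still infeasible)")

k = 256  # secp256k1 key size
shor_scale = k ** 3  # polynomial scaling, often quoted as O(k^3) gate operations
print(f"ECDSA-{k} discrete log, Shor: on the order of k^3 = {shor_scale:,} "
      f"scaled operations -- a tractable number, not an astronomical one")
```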

“Q-Harvesting” and Retrospective Attacks

Even more underappreciated is harvest-now, decrypt-later economics. On a permissionless network like Bitcoin, every signature and public key is permanently recorded. A future attacker with quantum capabilities can retroactively derive the private key behind any exposed public key, sweeping old UTXOs or forging transactions at will. Unlike TLS or ephemeral session keys, Bitcoin signatures don’t vanish after a handshake—they’re immutable history. This is precisely the kind of environment where Q-Day isn’t just a forward risk; it’s a retroactive event horizon.
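
Here’s a minimal sketch of why hashed addresses shield a key until its first spend, assuming the standard P2PKH construction (the public key below is a made-up placeholder, and real addresses additionally add a version byte and Base58Check encoding):

```python
import hashlib

def hash160(pubkey: bytes) -> bytes:
    """HASH160(pubkey) = RIPEMD160(SHA256(pubkey)), as used in P2PKH outputs."""
    sha = hashlib.sha256(pubkey).digest()
    # Note: "ripemd160" availability depends on the local OpenSSL build.
    return hashlib.new("ripemd160", sha).digest()

# Dummy 33-byte compressed public key, purely for illustration.
pubkey = bytes.fromhex("02" + "11" * 32)

# Before the first spend, the chain commits only to the hash...
print("on-chain before spend:", hash160(pubkey).hex())
# ...but the spending transaction reveals the raw public key, forever.
print("on-chain after spend: ", pubkey.hex())
```

Shor needs the public key itself, so the harvestable set today is every old pay-to-public-key output and every address reused after its first spend.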

Migration Isn’t a Patch Tuesday

Post-quantum migration in a blockchain isn’t like rotating TLS certificates. To transition Bitcoin to a quantum-safe signature scheme (Dilithium, Falcon, SPHINCS+, etc.), you need to:

  • Introduce and standardize new script opcodes,
  • Achieve miner consensus on soft/hard forks,
  • Incentivize holders to proactively move coins to new address types,
  • Handle the “long tail” of lost keys or inactive wallets.

Any laggards become low-hanging fruit the moment a capable quantum adversary exists. The coordination problem dwarfs the mere cryptographic problem.

Why “Q-Day” Is a Bigger Deal Than Anyone’s Pricing In

The typical narrative assumes Q-Day is some distant, binary switch—“crypto breaks overnight.” In reality, we’ll likely see a gray zone where early quantum machines can break smaller elliptic-curve keys but not 2048-bit RSA, then gradually scale. The first usable machine doesn’t need to break all of ECDSA at once; it just needs to cherry-pick vulnerable addresses or chain states to create cascading trust failures. That asymmetric phase could destabilize systems long before a headline “RSA broken” moment.

The upshot is that the cost-benefit math of preparing early versus reacting late is inverted for public, immutable systems. For Bitcoin (and any long-lived ledger), the ROI on proactive migration is enormous, because the tail risk isn’t “some downtime,” it’s mass asset exfiltration at quantum speed.

What Can Actually Be Done Now

Technically, we’re not helpless:

  • Quantum-safe address formats can be introduced today and incentivized with lower fees or higher priority.
  • Hybrid signatures (ECDSA + PQC) could offer defense-in-depth during migration (see the sketch after this list).
  • Wallet UX could default to never revealing public keys until absolutely required (minimizing harvestable data).
  • Research funding into quantum-safe primitives optimized for constrained environments (hardware wallets, embedded nodes) is critical, not academic.
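
For the hybrid-signature item above, here’s a minimal sketch of the verification policy in Python. verify_ecdsa and verify_pqc are hypothetical stand-ins, not any real library’s API; the point is the AND-composition, which holds up as long as either scheme remains unbroken.

```python
from dataclasses import dataclass

@dataclass
class HybridSignature:
    ecdsa_sig: bytes  # classical component (e.g., secp256k1 ECDSA)
    pqc_sig: bytes    # post-quantum component (e.g., Dilithium or Falcon)

def verify_ecdsa(msg: bytes, sig: bytes, pub: bytes) -> bool:
    raise NotImplementedError("stand-in for a real ECDSA verifier")

def verify_pqc(msg: bytes, sig: bytes, pub: bytes) -> bool:
    raise NotImplementedError("stand-in for a real PQC verifier")

def verify_hybrid(msg: bytes, sig: HybridSignature,
                  ecdsa_pub: bytes, pqc_pub: bytes) -> bool:
    # AND-composition: a forger must break BOTH schemes, so during the
    # migration window the hybrid is as strong as the stronger of the two.
    return (verify_ecdsa(msg, sig.ecdsa_sig, ecdsa_pub)
            and verify_pqc(msg, sig.pqc_sig, pqc_pub))
```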

The bigger challenge is social, not mathematical: coordinating a global network with trillions of dollars at stake before the adversary is visible.

I keep circling back to the same conclusion: “Q-Day” isn’t a far-off curiosity—it’s a pricing error in the security model of every immutable public ledger. Bitcoin is the clearest illustration because of its permanence and economic weight, but the same logic applies to PKI, code-signing, IoT firmware updates, and even archived TLS traffic. The longer we treat quantum risk as tomorrow’s problem, the more we guarantee it becomes a retroactive catastrophe instead of a forward-looking migration.

If you’re in a position to influence protocol roadmaps or asset custody, the optimal time to act was yesterday. The second-best time is now.

An Open Question to Microsoft: Let Me Get This Straight…

Let’s rewind the tape for a second.

It’s March 2020. The world screeches to a halt. Offices empty out. A grand, unplanned, global experiment in remote work begins. We were told to make it work, and we did. We cobbled together home offices on kitchen tables, mastered the mute button, and learned that “I’m not a cat” is a valid legal defense.

And you know who thrived in this chaos? You, Microsoft.

While the world adapted, you didn’t just survive; you absolutely exploded. Your products became the very bedrock of this new, distributed world.

Teams became the digital office, the school, the family meeting space.
Azure became the beating heart of the cloud infrastructure that kept everything running.
Windows and Office 365 were the essential tools on every single one of those kitchen-table workstations.

And the market noticed. Let’s talk about the report card, because it’s staggering:

  • 2021: You hit $2 trillion in market cap for the first time.
  • 2024: You became only the second company in history to reach a $3 trillion valuation.
  • You’ve posted record-breaking profits, quarter after quarter after quarter, for four consecutive years.

Your stock price tripled. Your revenue soared. You, Microsoft, became the poster child for how a tech giant could not only weather the pandemic but emerge stronger, more valuable, and more essential than ever before.

All of this was achieved by a workforce that was, by and large, not in the office.

Which brings us to today. And the recent mandate. And the question I, and surely thousands of your employees, are asking:

Let me get this straight.

After four years of the most spectacular financial performance in corporate history…
After proving, unequivocally, that your workforce is not just productive but hyper-productive from anywhere…
After leveraging your own technology to enable this very reality and reaping trillions of dollars in value from it…
After telling us that the future of work was flexible, hybrid, and digital…

You are now asking people to return to the office for a mandatory three days a week?

What, and I cannot stress this enough, the actual fuck?

Where is the logic? Is this a desperate grasp for a sense of “normalcy” that died in 2020? Is it a silent, cynical ploy to drive voluntary attrition and trim the workforce without having to do layoffs? Is it because you’ve sunk billions into beautiful Redmond campuses and feel the existential dread of seeing them sit half-empty?

Because it can’t be about productivity. The data is in, and the data is your own stock price. The proof is in your earnings reports. You have a four-year, multi-trillion-dollar case study that says the work got done, and then some.

It feels like a profound betrayal of the very flexibility you sold the world. It feels like you’re saying, “Our tools empower you to work from anywhere! (Except, you know, from anywhere).”

You built the infrastructure for the future of work and are now mandating the past.

So, seriously, Microsoft. What gives? Is the lesson here that even with all the evidence, all the success, all the innovation, corporate America’s default setting will always, always revert to the illusion of control that a packed office provides?

It’s not just wild. It’s a spectacular disconnect from the reality you yourself helped create. And for a company that prides itself on data-driven decisions, this one seems driven by something else entirely.