Software got weird

Disjoint thoughts on AI and the future of building things.

In the last few months AI hit software engineering like a train.

We went from “huh, this is getting pretty good” to “lol, claude do my job pls” in a matter of weeks. Every engineer I know is now going through some sort of dramatic transition. Some are loving it. Many hate it. Most begrudgingly accept that it is the future and try to keep up.

Regardless of how you’re handling it, what’s true is that AI has fully arrived, and it is completely changing how we write software and build products.

These are my raw, disjoint thoughts on the topic. And I hate that I have to even say this, but I—a human—wrote this. And yes, I know I just used emdashes—I love them and refuse to let AI take them away from me.

Everything is easy (for everyone)

The great news for builders is that everything is now easy. We can all take on more and do it better. Companies are taking on projects that were previously deemed too big to tackle. Indie hackers are shipping products from idea to production in days. There are no speed limits anymore. It’s an incredible time to build.

On the flip side, everything is easy for everyone else too. That app you spent six years lovingly crafting? Cloned by a 19-year-old on a Saturday afternoon.

It used to be that to have the best product in the market you needed good taste and a lot of effort. Now the effort side is almost negligible.

In other words…

There are no code moats anymore

Your code today is a few prompts away from being someone else’s code tomorrow. This applies to anything, from SaaS products to frameworks. Cloudflare rebuilt Next.js in a week for funsies. Yes, it’s harder to clone Figma than your favorite todo app, but the gap is narrowing quickly.

As someone who—up till very recently—made a living selling a codebase, this is not a statement I make lightly. But code is getting cheaper by the day, to the point that it could soon be completely disposable. There still are moats: deep data integrations or having a big audience, for example. But it’s no longer the tech.

The golden age of indie businesses might be over

I’m not sure if this is a personal problem or a broader one, but it feels impossible to build durable indie businesses anymore.

I’ve spent a lot of time brainstorming new products to build and none of them excite me. Everything is too copyable. The world is too unpredictable. All the best ideas use AI, and are one Anthropic feature announcement away from getting obsoleted overnight.1

And it’s not just a product problem. Getting people to care about your app—even if it’s great—is impossible when no one can tell the difference between quality and slop and everyone is drowning in a sea of AI-generated content.

Yes, there are still ways to make money online, and plenty of people are doing well riding short-term trends or going viral. But the path I used to recommend—the one of slow-and-steady growth, climbing up the long, slow SaaS ramp of death—feels like a relic of a bygone era.

ROIs have gotten weird

I’m going to shift gears from the impact of AI to my experience using AI.

One of the many strange effects of LLMs being superhuman code writers is that the return on investment of how you spend your time has gotten extremely non-linear. Three-minute prompts can become complete features or products. But you can also spend hours tweaking how things work till you’re happy.

Or another example—I spent several hours writing this post. Was that a good use of time when I could have dumped six bullets into Claude and instantly had something that said 80% of what I wanted to? I don’t know. But every time I do anything by hand now, I wonder if I’m being inefficient.

We are the bottleneck

Because the agents can do so much work without us, we become the speed limit of execution.

There are really only two bottlenecks to productivity now, and they’re both human. The first is our ability to come up with useful things for agents to do. The second is our ability to review and accept their work.

Choosing what to do is still hard

Of the two bottlenecks, the bigger issue is tasking your agents.

There’s a weird pressure to squeeze as much out of your agent as possible. But using an agent requires giving it a task, and good tasking remains unsolved. After you burn through your bug backlog and feature roadmaps it’s easy to get stuck on what to do next. Choosing what to do has always been hard, but agents—being the task-clearing maniacs they are—have made it much harder.


Maybe agents eventually just start making roadmaps for themselves? It’s one of those things that sounds nonsensical and may be commonplace in a short time.

We might stop reviewing code soon

The other bottleneck is trying to assess the agents’ work—e.g. code review. For this problem, the obvious solution is to stop trying.

Does it matter if you understand what the agent did as long as you’ve verified it works? Does it even matter how the agent did it? I’m still not comfortable letting go of the reins on important projects, but for vibe-coded utilities I’m definitely not looking much—or at all—at its work anymore.

AI control is a frog-boiling phenomenon

Yesterday I paired with AI. Today I review its work. Tomorrow I’ll probably just let it loose. I’ll never explicitly choose to cede full control of my projects to AI. It’ll just happen one little step at a time.

The problem is that the value you can extract from AI is inversely proportional to the time you spend overseeing it. And as we extract more and more value—because it’s human nature to be ever more ambitious—we will check its work less and less.

With code review this means eventually we don’t review the agent’s work at all. With permissions, it means we go from approving every tool call, to whitelisting, to always running in YOLO mode.2 With OpenClaw it means we limit access to our emails and bank accounts until eventually we get so sick of having to type passwords and press buttons that we relent.

Will there be problems with this? Absolutely. Databases will be deleted. Accounts will be hacked. Money will be lost. But these things will be rare, and we’ll decide the microscopic risk is worth it.

We aren’t going to decide at any given moment to cede our reasoning, comprehension, trust and data to AI agents, but we will. What happens after we’ve done that is anyone’s guess.

Nobody knows what’s going to happen to developers

If AI writes all the code, what are developers doing?

The optimistic take is that we’re needed to manage the AIs: to prompt them, check their work, and make sure they don’t go off the rails. One human with taste and engineering chops can now control a fleet of agents and will be more valuable than ever, they say.

The pessimistic take is that the AIs will soon be able to do all that too. And the role of “person who understands code and computers” will become a cute artifact of a bygone era, in the same way that the original “computers” were people doing math by hand.

Every developer I’ve talked to in the last month is suddenly worried about long-term job security for the first time in their life. And no one has any idea what’s coming next.

There is a big, short-term market in AI adoption arbitrage

With developers uncertain about their futures, some I know are trying to earn as much as possible in the next few years.

I have multiple freelancer friends with similar stories about how they have approached this. Some company still living in the “before times” solicits a software project that they expect to take six months—because that’s how long projects like it have taken in the past. The engineer then puts in a $100k proposal, wins it, and has Claude build the whole thing in a weekend.

Is this the last chance for engineers to earn a living? By finding the organizations that haven’t figured AI out yet, dramatically overcharging them for work that agents make easy, and repeating until there aren’t any more Luddites left?

It’s possible that we have maybe 1-2 years to generate wealth before everyone catches on and we’re all out of work. I’ve now had this conversation with multiple people, and we were never sure if we were joking or not.

Where does this go?

Most of my conversations about AI end with “man, weird times we’re living in”. Then we just stare off into space for a bit, lost in thought, before snapping back to reality.

We are living in very weird times.

Nobody knows what’s going to happen. To our jobs. To our kids. To society.

I’ve spent a lot of time thinking and worrying about the future, but I’m slowly coming to terms with the fact that this—whatever it is—is happening. And there’s basically nothing I can do to stop it or change its course.

It’s all just so strange. I spend all day talking to a robot genius and wondering if there will be any work to do in the future. And then, in the evening, I play with my kids—happily joking about silly real things in the silly real world.

I don’t know what’s coming next, but at least there’s something to being human that will survive it.


  1. Anthropic’s recent announcements of legal- and security-specific tools have literally caused hundreds of billions of dollars in market sell-offs.

  2. YOLO mode runs agents with all permission prompts disabled, allowing them to do anything they want on your system.
