Futures of software engineering before AGI
I.
Is every software engineer going to lose their job very soon?
Will all software be written by AI agents very soon?
If you're a software engineer right now, or aspiring to become one, what should you do?
It's a hotly debated topic.
Engineers who have lived through previous hype cycles of no-code and low-code tools point at the past and claim that everything is fine - even as the data shows that computer science grads now have a higher unemployment rate than even philosophy majors (maybe for the first time ever), and even as layoffs continue to increase.
Almost all the top tech CEOs believe AI will be writing half, if not all, of the code by next year - and that soon, it'll be just AIs writing code. Some people also point to Jevons paradox, saying that if it's easier than ever to write software, we'll need more software engineers than ever, because our software needs will keep increasing.
Who should you listen to? What should you do?
In this essay, I'll attempt to answer these questions. I think there are two futures for software engineering as a whole.
Here's the first one -
II.
AI progress will continue, and AI agents will get so good at almost all types of programming that not using one will be seen as archaic.
You won't have to remember any syntax, nor worry about mastering any programming language.
You'll still have to learn programming - maybe even the old-school, hard way, writing code by hand like we used to - but just as you start using a calculator for more complicated calculations after learning your multiplication tables and basic math, you'll be using agents to do most of your work once you've learned the basics.
In this future, there won't be a bug that AI cannot fix - but there might be bugs that go unfixed because of the incompetence of the engineer handling the agent.
Recently, Cloudflare released an OAuth provider library whose entire commit history shows how they instructed Claude to write most of the code. These are two of the many prompts you can see in that commit history -
"I think it's not necessary to store clients_list. Workers KV has a list() function you can use instead, which would be better because it is not susceptible to consistency problems if multiple updates happen concurrently. Can you change this?"
"For performance, let's denormalize the grant information into the access token records, so that the grant doesn't need to be looked up separately on every request. There's one catch, though: it's important that revoking the grant also revokes all access tokens. So, we should change the token record keys to be `token:{userId}:{grantId}:{tokenId}`. That way, when we revoke a grant, we can also search for and delete all access tokens, by searching for token records with the right prefix."
If you don't understand a single thing mentioned in the prompts above, I don't blame you - but if you're using LLMs to code, your prompting has to look somewhat like the above in its specificity and exactness - at least right now, if you want to remain truly effective with them.
Being vague, just “vibe coding”, not including proper context - none of these things will get you the results you want.
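To see what the second prompt is actually asking for, here's a minimal sketch of the key scheme it describes - not Cloudflare's actual code; the `OAUTH_KV` binding, the `grant:` key format, and the `revokeGrant` helper are all illustrative assumptions:

```ts
// Sketch of prefix-keyed revocation in a Cloudflare Worker (illustrative,
// not Cloudflare's implementation). Assumes a KV namespace bound as
// OAUTH_KV and the KVNamespace type from @cloudflare/workers-types.

interface Env {
  OAUTH_KV: KVNamespace;
}

// Access tokens live under `token:{userId}:{grantId}:{tokenId}`, so every
// token issued under a grant shares a common key prefix.
async function revokeGrant(env: Env, userId: string, grantId: string): Promise<void> {
  // Delete the grant record itself (key format assumed for this sketch).
  await env.OAUTH_KV.delete(`grant:${userId}:${grantId}`);

  // Revoking the grant must also revoke its access tokens: list every key
  // with the shared prefix, paging through KV's cursor until complete.
  const prefix = `token:${userId}:${grantId}:`;
  let cursor: string | undefined;
  do {
    const page = await env.OAUTH_KV.list({ prefix, cursor });
    await Promise.all(page.keys.map((key) => env.OAUTH_KV.delete(key.name)));
    cursor = page.list_complete ? undefined : page.cursor;
  } while (cursor !== undefined);
}
```

Notice that the design decision - denormalize grant data into token records for read speed, then make the key structure carry the relationship so revocation stays correct - came from the human. The agent only had to carry it out.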
But one could argue that, with the right combination of agents set loose on the problem above, a novice engineer who knows nothing about OAuth libraries could perform just as well. They wouldn't have to prompt the agent as explicitly, but they would have to know how to meta-prompt the agents, have an abstract idea of what the right context to gather is, and, for a while, simply know some parts of the codebase - but over time, not reading the code will be common.
We see some glimpses of “not reading code” in vibe coding already - but in the future, reading code will seem like just something that wastes valuable time.
Code will be well-written, well-documented, bare for the world to see and marvel at as it evolves to do things it previously never could - but each such beautiful piece of code will connect to more places than previously thought possible or maintainable, imported by millions of different components, the whole far too vast for any single engineer to track or even care about.
While in the past we had debates about software architectures - are monorepos better, or are microservices better? - and we may well continue having those debates, new architectures will emerge where it's common for single files to span 10,000+ lines and for software projects to have hundreds of billions of lines of code. Yes, code is a liability, and yes, it's not about the lines of code written but about the problems those lines solve - but when you never have to touch the code at all, these problems aren't your problem.
In such a world, you'd imagine that software engineers would be out of jobs, as much of the mainstream media would have you believe, but that's not true. Humans will be needed to guide and orchestrate the agents to build specifically what we want built.
Sure, an AI agent watching over an army of similar agents might do a better job of prompting them - but who instructs the supervising agent? It could be agents all the way to the top, with agents running the whole company - but that is highly unlikely in the near future (5-10 years), because creating something that requires creativity and human input, by definition, involves humans somewhere in the loop.
In this glorious age of powerful software, more engineers will be needed than we can produce, and that will continue until we really get superintelligence that's capable of delivering everything from cures for all diseases to solutions to {insert problem here}. When that happens - when ASI is truly capable of delivering returns at that scale - you won't have to worry about your livelihood or anything else.
I think even then, in a future hard to imagine, a world of true abundance where AI can just one-shot create anything, there will still be people who want to create software of their own. Maybe it'll be to self-host a module in their brain-computer interface that's specific to them, maybe to create a virtual world more aligned with their community's mind-upload station, maybe to tweak the open-source robot OS that self-assembles the perfect vehicles for their next vacation to the moon - software will continue to exist, and engineers will continue to be useful.
Companies and corporations may not exist to employ engineers in this scenario - but I find that unlikely to be the case.
Humanity will always have goals, always have projects, and these projects will need people who know not only software, but the art of expressing creativity through delivering the products and services we need.
The skills that define being an engineer will change, but anyone who's been in software long enough knows that learning to solve leetcode problems and quickly picking up the syntax and patterns of new languages was never what being a software engineer was about. It was about solving problems - and a lot of other fields will face the same question software engineers are facing first: how to adapt from doing the thing to helping an agent do the thing.
This "helping to do the thing" is not just "prompting the thing", even though that's a large part of it. If you've ever asked multiple people for help, you know that some people can be exponentially better than others at effectively helping you with a task - and it'll be the same when helping agents.
Soon, labels like “software engineer” will start to feel like a temporary hat you wear during the day before focusing on other endeavours. If all you need is an hour to let some agents loose on different tasks that take them a week, then the rest of your time can be spent doing anything you want.
This future is bright, and as long as you're curious and open to change, we'll all have a great time.
The other future for software engineering looks slightly different -
III.
This future, which we'll see within 10 years if it comes to pass, is one where companies are finding it hard to justify training new models. Each previous generation of models was incrementally better, and we have AIs that can do far more things than they can today, but they still depend on human supervision. AIs can work for days and weeks, but they frequently hallucinate, and there have been no significant breakthroughs from the latest generation of models.
The AGI hype continues to be questioned. Those who doubt that AGI is even possible with language models gain more credibility, and companies start investing more resources in alternate approaches - but these approaches arrive with time horizons similar to quantum computing: maybe in decades, maybe this century, but definitely not soon.
This is the future where the same bottlenecks we see today still exist, and none have been removed. The data, the compute, the investment, the algorithmic breakthroughs - everything we still need to vastly accelerate progress is proving exponentially harder to crack than we imagined.
This is the future where the barrier to entry for software engineering is at its lowest, but because significant problems like continuous learning remain unsolved and context limits are still not infinite, frontier AIs are not cheap, and you definitely need as many engineers as you do right now, if not more. The same agent-orchestration skills that were valuable in the future I described above are valuable here, but engineers specialising in areas that AI just can't handle also remain valuable. These include languages like Rust, which many current AIs are worse at, or new frameworks and languages that only the next generation of AIs would be proficient in - so any team that wants to move fast with the latest and greatest has to hire engineers.
Even though this paints a more secure future for software engineers, I think it's not as bright a future for humanity as a whole. Living in this future doesn't mean that all is bleak - we'll certainly see progress continue, but it won't be as rapid: the journey through the singularity will just be slower, and what we imagine as weekly progress in the previous scenario will be progress we make in years.
In both scenarios outlined here, we'll need software engineers. We'll need them more in the second scenario, but at the end of the day, we must understand what a software engineer is.
IV.
To me, a software engineer is just someone who translates requirements into instructions. For a long time, this translation required you to learn a lot of things. From abstraction to inheritance, from knowing how loops work to navigating a server using nothing but a terminal, software engineers - just to translate what they wanted into something the computer could act on - needed to be able to write code that machines could understand.
Now, it's slowly becoming apparent that those same software engineers can just ask an agent to do exactly what they want, and watch as the agent tries (and sometimes fails) to do it. Over the years, they've seen the agent become more competent, more self-correcting, and more obedient as a whole. It's not perfect, and they still have to get their hands dirty - but they often look at what they're seeing and wonder how much time is truly left before they're not even needed.
In both scenarios I've outlined, they'll always be needed - in the first, for the human creativity which LLMs can't, by definition, replicate - and in the second, for the last 1-5%, which still needs engineers who know and understand every line of code.
While everyone can be an engineer, in both scenarios I've outlined, not everyone - not even a majority of people - will choose to be. Maybe this is because doing the things that software engineers do requires a certain kind of mindset - maybe it's a love for all things scientific and technical, maybe it's curiosity, maybe it's just the quick feedback loop - but I can certainly name dozens of things considered more fun than typing code all day, from gardening to woodwork, from gaming to reading. There will always be things people find more comfortable than writing software - which is another reason why there will always be demand for the people who find it a worthwhile craft.
I do hope to be wrong about this - because a world where everyone chooses to be an engineer, where everyone can make things, is a world of true abundance - one with software and robots and flying cars and everything we've dreamed about. But the reality is that not everyone will choose that, and that's fine. We also need artists and scientists and writers and doctors, and we'll continue to need professions that don't even exist yet.
V.
Finally, both scenarios I’ve outlined might seem totally impossible to you. That’s okay.
If you could go back in time a few decades, to when people imagined we'd all have flying cars, and told them that our buildings and cities look almost the same - not the Jetsons-esque utopia they expected - they'd be shocked and disappointed. Yes, the internet and smartphones would blow them away, but the fact that humans still haven't reached Mars, let alone colonised the solar system - when their generation went from Sputnik to the moon in just 12 years - would deeply disappoint them.
They saw the progress they were making, but they couldn't have realised how the rate of progress would change - how only some areas, in the land of bits, would keep progressing exponentially while progress in atoms stalled. In the same way, if progress stalls around AIs that still need human supervision and orchestration, it's hard for us to imagine what another AI winter might look like - just as hard, if not harder, as imagining what the other side of the superintelligence singularity looks like.
Regardless of which timeline we're in, it’s never a bad idea to learn to make things.
Whether it's the second scenario or the first, you'd need to learn the basics - just so you can either go deep on an esoteric area where LLMs need a lot of guidance, or learn how to engineer the agents and the context so that they can do what you want.
As a new engineer, you should try to do both - just so you have both the leverage of managing agents and the fulfilment of having expertise in a field that you love.
As a senior engineer, you probably don't need me to tell you this, but don't ignore AI - the same way you didn't ignore previous shifts in technology. Leverage its strengths now, and learn how to nudge agents along. Learn how to stop your mind from wandering as you watch LLMs do 70% of what you want. Learn not to be frustrated, and try to extrapolate whatever you're leveraging agents for right now into a future where the same task gets solved at a fraction of the cost, with minimal steering. If that doesn't excite you, try to find a workflow that does.
Thus, the formula for the future is mostly the same, and I think it applies to every profession and field, not just software engineering: keep an open mind and never shy away from learning new things. Master the basics. Recognize leverage and stay optimistic.
The future is bright, whichever way you look at it. Don’t listen to those who tell you otherwise.
I started this essay by asking whether every software engineer is going to lose their job very soon.
By now, you know that the answer is no.
Software engineers won’t lose their jobs, but it increasingly feels like they will start creating new ones.