Between 2014 and 2023, Computer Science bachelor's degrees in the US doubled from 50,000 to 112,000 annually during a brutal job market. The same surge happened in Singapore. These students face an unexpected reality: AI systems that automate code writing, memorize syntax, and increasingly handle entire projects autonomously.
To understand this shift, I surveyed power users of agentic AI tools—systems that perform multi-step, multi-file programming tasks without constant human prompting. This deliberately biased sample of early adopters previews what's possible as adoption spreads across the industry.
The results are striking. Almost all people reported that the majority of their code is written by an LLM. Many noted that they had become much more productive after the recent release of agentic AI tools in the first half of 2025, and a senior engineer who has been programming for almost 30 years even said he had written a production app with almost entirely prompts.
AI tools are changing the profession rapidly
LLMs were used primarily in a conversational format until recently and weren’t particularly good at autonomous programming, with even the best models originally solving less than 5% of real world software engineering tasks on SWE-Bench.
Most developers didn’t find that so useful. Nirant, a consultant who works with AI systems told me that the old GPT-3.5 only got about 10 to 20% of questions right, and John Loeber, the founder of an insurance company said that they were at best useful for small scripts that were less than 50 lines. Many coding assistants in the IDE did reduce the friction of copy-pasting, but they only helped cut about fifteen minutes a day on complex codebases according to Grant Slatton, a former senior engineer on AWS S3.
But the real breakthrough came when more intelligent models like Claude 3.5 Sonnet were combined with agentic tools that could work directly within code repositories. Agentic tools (like Claude Code and Cursor Agent) were outcome driven compared to chat-based tools which required specific instructions. Users could instruct these tools to perform multi-step, multi-file edits that weren’t possible before.
The capabilities quickly evolved to near-complete autonomy. Sidu, a founder of an AI agent startup who has been programming for 30 years, described a dramatic transformation using Claude Code, a terminal-based agent from Anthropic: "Everything was done by the agent—writing the spec, milestoning, sequencing the tasks and writing the code. This was the first time I've had all of it done autonomously." He tweeted his result out showing a complex Model Context Protocol server that he says would have taken weeks, but he did it in about two days with Claude Code.
Rishabh Srivastava, founder of Defog (YC W23), tweeted that he completed a moderately complex feature change in two hours using Claude Code—work that would have taken him a week or two normally. He linked to the actual pull request that Claude Code authored, complete with a detailed markdown document showing his inputs to the system.
Amruth, founder of a neuroscience startup, tracked the rapid change in his own workflow in a large codebase: "Earlier in 2025, it was 80% manual coding and 20% AI assistance. Right now it's 40% manual and 60% AI-generated code”. All fifteen of my interviewees reported large gains in productivity and expressed surprise that so much of it was happening so fast.
One might think that because of this level of automation, coding skills are becoming obsolete. They're not. Programming has always been the art of being specific. In the 1940s, programmers had to be specific about 1s and 0s. Then they became specific about assembly instructions to the CPU. Then about instructions to compilers in high-level languages. Now they must be specific about instructions to AI agents. The target has changed, but the need for precision hasn't.
But agentic tools present a new challenge: they're over-enthusiastic. Unlike traditional compilers that fail predictably when given imprecise instructions, AI agents will confidently make sweeping changes based on small prompts. They might hardcode test cases to appear correct, refactor entire file structures when asked for minor tweaks, or pursue elaborate solutions to simple problems. The game has fundamentally shifted from writing the code to ensuring the agent does the right thing.
This requires more precision and vigilance than traditional programming ever did. Developers now create detailed "meta instructions"—configuration files (like Cursor Rules and Claude.md) that provide specific guidelines to AI assistants that tell the AI how to behave when interpreting code and generating suggestions.
Sidu Ponappa exemplified both the power and the demands of this new paradigm. He tweeted: "I've been pulling all-nighters with claude code and Sonnet 4 for a week. I built this in 2 days and burned ~$150. I contributed less than 1% LoC - all AI written." The productivity gains were so dramatic that he chose to work through the night to take advantage of them. But, as he told me in an interview, this came with a requirement for "very aggressive active steering" because the AI "has a tendency to make subpar decisions and pursue them down rabbit holes...I can't back off for even a minute."
The shift from thinking about the code, which is much faster than writing the code explains why senior engineers say they’re so much more productive now because of AI. They have the skills to evaluate code, encourage useful directions and stop the wrong ones. This means that when they get an incredibly powerful AI tool that is at their beck and call, they can use it to the right ends. When they get several hundred lines of code at one shot to review, they can check it for bugs, trace edge cases, and evaluate the quality of the work that AI coding agents give them much better than junior engineers can. Programming itself is becoming less valuable, but knowing how to program is becoming more valuable by the day.
Satnam Singh, a former ACM SIGPLAN executive committee member and current Fellow at Groq, put it succinctly: 'You still need to be an expert engineer. Even if AI writes the code, someone has to understand it. It's just a magnifier for existing expertise.'
There are some important caveats to understand here. The first is that nearly every single person reported that AI was very good at greenfield projects, and not so good at working with large, complex existing codebases.
John Loeber explained this distinction clearly. At his company, a mature organization with a non-trivial engineering team, "there's a lot of internal complexity, and so it's harder to use AI agents on a mature codebase." The challenges include configuration, AWS integrations, and third-party APIs where "not everything is in publicly accessible docs." While AI excels at building from scratch, it struggles with the complexity of established enterprise systems.
The second is that on problems that are out of distribution compared to the LLM's training data, the models struggle significantly. Clement, who works on mechanistic interpretability research, explained that "the code doesn't exist in the wild, so it's very difficult for LLMs to work on things they weren't trained on, and the models perform poorly in these situations." For his research work, he has to provide extremely detailed context, including past papers and codebases with design patterns, to get the AI to understand what he's trying to accomplish.
What now for junior developers?
Smaller startups with newer codebases and AI-savvy senior engineers will dramatically reduce junior hiring. Before agentic AI, hiring multiple junior engineers was the fastest path to market, followed by scaling headcount to reach funding and revenue milestones. Now these startups can achieve the same speed with AI tools; writing more code, reviewing faster, and shipping to users faster. They need fewer people initially (potentially just one senior engineer) and fewer people as they scale to reach the same milestones. The economic logic is compelling: when AI provides the productivity boost that previously required a small team of juniors, mass junior hiring makes little sense. Some founders are even planning for future AI improvements, reasoning that if tools will be significantly better in 6-12 months, hiring additional people now is counterproductive.
The shift has been dramatic. John Loeber explained how fundamentally expectations have changed: "It's clear that the junior engineering skill set of 6 years ago doesn't add anything at all anymore. The bar is so much higher. In 2018, you could hire a smart CS grad who'd never done any programming, and the expectation would be that you'd get productive soon enough."
John also observed that "most teams are trying to stay as lean as possible. Lots of teams that could hire tons of people are wanting to cap it at a small number and they’re only going to hire if stuff's on fire." This reflects both the leverage AI provides and post-ZIRP caution about headcount, especially given how difficult it is to reduce staff once hired. The traditional junior hiring model, as Sidu put it, where you could "take a React course and do it," is "completely gone."
For larger companies (like FAANGs), this shift will hit them more slowly. Unlike resource-constrained startups racing to their next funding round, large companies have the luxury of hiring more people, thinking long-term about talent retention, and tackling challenging problems that AI agents can't yet solve. Grant, a former senior engineer at AWS's S3 team, observed that while large companies hire for competence, junior engineers rarely generate immediate business value equal to their salary cost. Their real value emerges after they learn the company's codebase, become economically productive, and advance through promotions over several years. Since larger firms already operate on this long-term investment model for junior talent, they may continue this approach longer than startups.
What might change however is the skill bar, and how fast they expect people to be productive. Eleanor, a former engineering manager at Microsoft and Google, emphasized how dramatically the timeline has compressed: "Now they care about direct impact …they want to know if you can be useful, maybe not on day 1, but on day 10." Expectations are rising, even if larger companies don't want to stop hiring junior developers. However, large companies have reasons to be more cautious about this transition. Their codebases present unique challenges: they're often too large for current context windows, built on custom ontologies that weren't part of AI training data, and filled with proprietary integrations and configurations. As one developer who asked to remain anonymous noted, AI tools haven't yet reached "takeoff" on the kind of complex legacy codebases that characterize enterprise systems. Moreover, Grant observed that large companies consistently lag behind in AI tool adoption: "Copilot didn't come to AWS till a year" after it was widely available elsewhere.
Caption: The author when thinking about this
What should you do?
For junior developers entering the market now, the math is straightforward but harsh. Over time as more people use better AI agents, there will be fewer opportunities available in the market, even as agentic AI takes longer to diffuse in larger companies.
But this doesn't mean all opportunities disappear. Junior developers with strong fundamentals who can effectively direct AI tools are more valued than before.
John Loeber noted this bifurcation: "Some juniors I know are really good with AI, and by their skillset and proactiveness in figuring out how to use these tools, these juniors are doing way more heavy lifting than a senior 5 years ago."
I outline two paths below, and the key factors that make them resistant or complementary to AI capabilities.
Path A: Code is just a means to an end
One thing multiple people noted in my interviews was that in the end, customers buy products and services to solve problems, and companies hire software engineers to write code that is a means to the end of solving problems.
Several interviewees highlighted how AI is fundamentally changing the relationship between product management and engineering. Rishabh put it most directly: "a lot more value is in being a builder/PM. Previously engineers would look at the issue given to them by the PM, but at this point if you can write that level of detail about the product level document, then you might as well give it to Claude Code."
This observation reveals a crucial insight: if the specification is detailed enough for an AI agent to implement correctly, why do you need a human implementer? The traditional handoff from PM to engineer could be a handoff from PM to AI. From a company's perspective, this shift is logical. Companies don't actually want to write code - they want to solve customer problems profitably. Code is just the mechanism. As John Loeber noted, "if it becomes cheaper and cheaper to produce the product/service, then more of the differentiation comes from customer acquisition" rather than technical execution.
This creates two crucial new skill areas that blend traditional PM and engineering roles:
Writing Specifications for AI Agents The ability to translate business problems into detailed, unambiguous requirements that AI can execute becomes critical. This isn't traditional PM work (which was often high-level) or traditional engineering work (which was implementation). It's a new hybrid skill: understanding business needs deeply enough to specify them with the precision that AI requires.
Design and Ideation The creative, strategic thinking that was traditionally a PM's domain becomes even more valuable. What should we build? What problems are worth solving? How should the user experience work? AI can implement ideas very well, but it can't generate business insights or creative product solutions from scratch.
The point of path A is to move where the bottleneck in creating value for customers is. When it was in writing code, then writing code was what was economically productive. Another possibility is that in the coming years, if product differentiation goes down, what becomes even more valuable than writing code is sales and marketing. The ability to sell and acquire customers could be more valuable than making the product that they want to buy.
Path B: So Good They Can’t Ignore You
The second viable path involves achieving world-class expertise in a specialized technical domain where you can direct AI capabilities while staying ahead of what AI can handle independently. This isn't about avoiding AI, but about becoming skilled enough to leverage it as a powerful amplifier while working on problems that remain beyond its reach.
Recent developments illustrate both the dramatic capabilities and sharp boundaries of current AI systems. In cybersecurity, the CyBench evaluation revealed that models like o1-preview and Claude 3.5 Sonnet can now solve professional-level Capture the Flag challenges that previously required expert human teams. These models successfully completed CTF tasks that took human teams up to 11 minutes to solve, handling complex vulnerability identification and exploit development across cryptography, web security, and reverse engineering.
But the results also expose a striking capability cliff. While AI excels at challenges in the sub-11-minute range, it hits a complete wall on harder problems. The most difficult task took human teams 24 hours and 54 minutes to solve - 136 times longer - and no AI model could make meaningful progress even with guidance. Similarly, security researcher Sean Heelan used o3 to discover a real zero-day vulnerability in the Linux kernel, but the process revealed significant limitations: a 1:50 signal-to-noise ratio and the need for his expert evaluation to distinguish genuine vulnerabilities from false positives.
Clement, who works on mechanistic interpretability research, describes a similar dynamic. AI has become incredibly useful for implementing research ideas and handling routine coding tasks, but struggles with the core creative work because "the code doesn't exist in the wild." His domain requires developing novel approaches to understanding neural network internals - work that's fundamentally out of distribution from typical AI training data. AI amplifies his productivity on implementation, but his expertise remains essential for framing research questions and evaluating what constitutes meaningful progress.
These examples point toward a sustainable strategy for Path B: become expert enough in a specialized domain to effectively direct AI on routine work while focusing your cognitive energy on frontier problems.
My basic idea is that technical domains naturally generate an endless supply of increasingly difficult problems. As AI becomes capable of solving today's hard problems, researchers and practitioners respond by tackling even more ambitious challenges. When CTF competitions notice that AI can solve their 11-minute problems, they'll design harder challenges that push beyond current AI capabilities. When AI can optimize existing systems or possibly one day understand neural networks like itself, then we will move on to harder problems that it cannot solve and cannot be trained for. There is the possibility that we will have superhuman AI researchers, but in that case, the world will have much more serious problems than finding employment.
General parting thoughts
Both of the paths I recognized above are explicitly demanding a higher level of quality and competence from computer science students and younger developers. At a more meta level, nearly every interviewee said that younger people have to be more agentic. It is not sufficient to wait for someone (an employer, or a university course) to give you a well-scoped task, complete it to their satisfaction and call it a day. Instead, you need to demonstrate that you can identify problems worth solving, build complete solutions from scratch, and take ownership of outcomes. Rishabh put it bluntly: "if someone hasn't actually built a complete project from scratch, it's a no go from the beginning." The cost of building has become so low that there's no excuse for not having substantial proof of work.
The bitter truth is that AI will lead a two-tier world. Those who do grasp the opportunities, combine their competence with initiative and learn to direct AI effectively will become dramatically more productive than previous generations. Given the rapid pace of change most specific career advice, including mine, is probably wrong. The only thing we know is that the bar is higher, and you have to move faster.
awesome very informative. Just what I needed
Amazing!