The grand prophecy of agentic programming goes something like this: in the not-so-distant future, we puny humans—limited as we are by our tiny working memory and limited cognitive abilities—will no longer build software “by hand.” Programmers will no longer write code as we once did in days of yore, keystroke by painstaking keystroke. Instead, we will orchestrate the work of software development, which will be carried out by willing hordes of semi-autonomous AI coding agents. We will write detailed specifications, and graciously dole out the work of programming to our untiring AI minions, guiding them ever forwards towards the successful delivery of bug fixes and new functionality. Programming will cease to be a craft of hands-on production; it will instead be a craft of management.

You could say that this prophecy has been fulfilled, and that this not-so-distant future has already arrived—it’s just not evenly distributed. After all, agentic programming tools already exist in abundance—in well-known commercial tools like Cursor, Copilot, and Claude Code, but also in wild Rube Goldberg-like open source orchestration systems with zany pop culture-inspired names like “Gas Town” and “Ralph Wiggum.” For a professional developer with money to burn on credits, an ever-evolving all-you-can-eat smorgasbord of options awaits. No, the uneven distribution has little to do with access, and far more to do with willingness and attitude: not everyone wants to be an AI manager.

[Image: an abstract, painterly landscape of a lush swamp or wetland rendered in rich greens, teals, and muted yellows; dense foliage and trees frame a murky waterway that recedes, blurred and impressionistic, into the background.]

As a manager of humans who make software—a manager of would-be AI managers, you might say—I’m both keenly interested in and sympathetic to this particular attitude. After all, my job involves understanding not just what my teams produce, but how they produce it, and more importantly, what keeps them happily producing it over months and years. Happy engineers ship; miserable engineers either quit quietly or polish their resumes. So if people are reluctant or outright resistant to adopting new tools and ways of working, it’s my job to understand why.

There are, of course, plenty of legitimate ethical concerns surrounding generative AI: the staggering energy and water consumption of data centers, the thorny questions concerning consent and provenance of training data, and the concentration of power in a tiny handful of corporations. Engineers raise these concerns. But when I dig a little deeper, something else often surfaces: what AI does to the feel of programming itself. And it is this—this change in the feel of the work—that I want to focus on in this essay.

From what I see, most of the conversation around agentic programming seems to ignore this topic. A lot of it is your typical capitalist/Taylorist discourse fixated on output and productivity—how much more you can do with these grand new tools, and how to operate them. But you can’t really separate output from experience. Anyone who has spent time with programmers knows that productivity is a second-order effect of flow state. A hacker in the zone, armed with the right tools, nerd-sniped by a challenge at just the right balance of difficulty and achievability, can conjure code at astonishing velocity.

Flow state in programming depends on the texture of the work itself. The specific challenges. The hours of uninterrupted focus. The deep-in-the-weeds feeling of wrestling with arcane syntax and implementation details while simultaneously trying to hold complex mental models in your head. In an absolutely brilliant article on the phenomenology of agentic coding, Novatorine calls the experience of this weird and wonderful texture (in a sly nod to Heidegger) “being-at-keyboard.”

We need to better understand what “being-at-keyboard” feels like under the agentic paradigm.


A couple years ago, Andrej Karpathy tweeted: “the hottest new programming language is English.” It’s a sticky quip, one that has steered much of the global conversation about AI-assisted programming. (Hey, it came from Karpathy!) On the surface it feels right—just as compilers allowed us to leave assembly behind in favor of compiled languages, AI allows us to climb up another rung on ye olde ladder of abstraction.

But agentic coding is about more than moving upwards in abstraction. The compiler gave us abstraction without ambiguity. You wrote C, and it became assembly, deterministically. The layers were clean, and you remained a programmer in the traditional sense of how we’ve always understood the word.

What’s happening with agentic coding might better be captured by a term coined by Venkatesh Rao: “oozification.” Oozification, as Rao describes it, is the tendency of technological systems to evolve from structures built of large, rule-heavy building blocks to ones composed of smaller, more fluid, less constrained components.

Imagine, if you will, the difference between a man-made plantation forest and a swamp. The forest has legible structure: tidy rows, canopy, understory, floor. The swamp is murkier, richer in evolutionary possibility, but also much harder to read. Oozification is the transformation of the forest into the swamp. The number of possibilities increases, while the number of certainties decreases, and that combination tends to make people downright nervous.

A natural language prompt doesn’t compile into code. Instead, it gets interpreted, completed, and sometimes second-guessed by a probabilistic system. Intent blurs into elaboration, and precise control gives way to fuzzy suggestion. It’s oozy and messy programming, and the role of the programmer blurs as well into something with unclear boundaries—part orchestrator, delegator, babysitter, designer, reviewer. People have always struggled to call software development honest-to-goodness “engineering,” and with the oozification of the practice, that highly-esteemed label has only become more ill-fitting.

Informal conversations with my own teams reveal attitudes that run the full gamut from zealous embrace to tepid acceptance to absolute mourning for a lost craft. Of course, the mourners are almost always those who appeared to love that indescribable feel of programming the most—the ones who’d spend extra time digging into the obscure internals of some library, keeping abreast of the latest developments in their favorite programming language, or finding new ways to write the same function more elegantly. For them, agentic coding threatens to eliminate the very texture of work that made programming worth doing in the first place.

We all know the difficulty of software development has never really been the writing of code. It has always been the figuring out of what to build and how to build it. Decades ago, Fred Brooks made this distinction in “No Silver Bullet”: essential complexity (the hard conceptual work of specification and design) versus accidental complexity (the incidental difficulties of syntax, boilerplate, and tooling). LLMs are remarkably good at reducing accidental complexity, but essential complexity remains irreducibly human. If anything, the skills it demands matter more now than they did before: clear thinking, precise communication, systems-level reasoning, and product-mindedness.

But this distinction offers little comfort to someone who spent years cultivating a deep fluency in, say, C++ or Rust, who took pride in knowing exactly what was happening at the lowest levels of operation, and why. The good news is that these skills still matter. Memory management is still a thing. AI will continue to do things poorly or wrong, either due to “hallucination” or imprecise specification.

Nevertheless, it’s clear at this point that we’re not going back to the way things were. At least, not if you’re a software developer employed by a capitalist enterprise of some sort. Writing and optimizing code by hand will still be necessary at times, but it will likely be done sparingly, an increasingly niche practice. The practice is oozifying, and it’s going to be very hard to stop the spread. As Kailash Nadh declares: “Software development, as it has been done for decades, is over.”

So how does anyone still find joy in a world of ooze?


In 1934 the German biologist Jakob von Uexküll coined the term umwelt to describe the perceptual world of an organism: the slice of reality that is meaningful to it given its particular sensory and cognitive apparatus. His famous example is the tick, a blind creature whose entire world reduces to the smell of butyric acid, the warmth of mammalian skin, and the feel of hair. Apologies, but I have to quote von Uexküll’s description of the tick from A Foray into the Worlds of Animals and Humans, both for its sheer beauty and grotesqueness:

The eyeless creature finds the way to its lookout [at the top of a tall blade of grass] with the help of a general sensitivity to light in the skin. The blind and deaf bandit becomes aware of the approach of its prey through the sense of smell. The odor of butyric acid, which is given off by the skin glands of all mammals, gives the tick the signal to leave its watch post and leap off. If it then falls onto something warm—which its fine sense of temperature will tell it—then it has reached its prey, the warm-blooded animal, and needs only use its sense of touch to find a spot as free of hair as possible in order to bore past its own head into the skin tissue of the prey.

In a lovely little essay called “Discovering Your Software Umwelt,” Rebecca Wirfs-Brock and her co-authors extend this obscure biosemiotic concept to the practice of programming, arguing that tools, languages, and paradigms shape a programmer’s sub-umwelt in much the same way that physiology shapes the tick’s. A Lisp programmer perceives and acts on problems differently than a C++ programmer, guided by different salient features and different functional possibilities. When the tools change, the umwelt must shift to accommodate them, and that shift takes time. As Wirfs-Brock writes: “When we encounter something unfamiliar in our environment or enter a new environment, our umwelt can no longer be a reliable guide. It may even mislead us.”

Agentic coding is precisely this kind of environmental disruption. The programmer’s umwelt, as it has existed for decades, is being reshaped. The salient features of the work are migrating from individual lines of code to the shape of a system, from implementation details to architectural intention, and from production to review and correction. The new umwelt foregrounds different things entirely, and requires that we pay attention in different ways. What are your agents doing right now? Do they have the right context? Did that last run drift from your intent, and if so, where? Answering these questions requires a different way of being. The current breed of agentic tooling, modeled as it is on the traditional IDE, is probably the wrong interface for this new umwelt. We’re trying to wade through a swamp still using tools designed for the forest.

Those I know who have found their footing in agentic work are the ones who got curious about this shift in perception rather than resisting it outright. They seem to have embraced this new texture of the work, forcing themselves to confront this new umwelt even when it feels rough, strange, and uncomfortable.

For them, the pleasure of a perfectly optimized function has given way to other pleasures. Sometimes this is the satisfaction of decomposing a sprawling problem into pieces an agent can actually execute. Or it might be the quiet thrill of recognizing that an agent’s output has a subtle architectural flaw before it cascades into something worse. These are still acts of taste, judgment, and skill.

Novatorine points to something else worth noting: agentic coding can actually improve certain conditions for achieving flow state. When an agent handles the lookup you’d otherwise alt-tab to Stack Overflow for, or generates the boilerplate that would have broken your concentration for twenty minutes, you get to stay in the high-level space of architecture and intent. The accidental complexity that used to yank you out of the zone recedes; the tooling stays, in Heideggerian terms, ready-to-hand. You move through the work rather than stumble over it. This strange inversion—the loss of direct control for the gain in smoothness—is a possible benefit of the umwelt shift.

Oozification means there is no clean before-and-after; that’s why it’s called ooze. You will still write code by hand sometimes. You will still have moments of being-at-keyboard in the old sense. But these moments will alternate with longer and longer stretches of reviewing, specifying, redirecting, and evaluating work that an agent produced. Attending to that texture, rather than dismissing it as a degraded version of what came before, is itself a form of craft.

After writing all this, I realize it sounds like I’m trying to sell you, dear reader, on this transformation. But that’s not my intent. If you can find a way to operate the old way, and that brings you joy, go for it! Odds are, however, it will be increasingly difficult to do so in exchange for wages.

As a manager I think part of my job right now is making it OK for engineers to talk about this shift honestly—especially the weirder, more personal stuff. What does the new work actually feel like? Where are you finding moments of satisfaction you didn’t expect? What still feels like a loss? The engineers who are handling this well aren’t doing it alone. They’re talking to each other, comparing notes, and sometimes (though they might not quite put it this way) finding that the new work has its own unexpected pleasures.

But the path forward, for those who choose to walk it, looks like the curiosity Wirfs-Brock describes: a willingness to sit with an unfamiliar environment long enough to discover what is significant in it. The hard, satisfying problems are still there. They’re just waiting to be perceived.