AI-assisted software development has quickly reshaped the software industry. Junior and experienced developers alike are seeing productivity and ergonomic improvements from integrating AI assistants into their workflows. Junior developers can quickly produce demonstration applications and learn by example from features and libraries that would otherwise be outside their comfort zone. Experienced developers benefit from AI assistance as well: it can rapidly scaffold boilerplate code, produce requested tests and documentation, and create useful “first draft” changes they can then iterate on and polish into production-ready material. Although AI-assisted code development is only a few years old, it will assuredly become an integral part of how software is developed from now on.
Other industries are starting to benefit from AI assistance as well. Consider drug development, where LLMs can now read genetic code and effectively recommend candidate molecules, even predicting their efficacy and potential side effects. The inevitable end-state of AI assistants is that they will become inescapable. Every activity, every skill will have an intelligence ready to offer advice, answer queries, and pave the way at your request.
However, software developers are starting to discover the downsides of AI-assisted code development. Junior developers experience the 70% phenomenon, where AI creates a code base that handles the easy path through their application, but afterwards they are left with code they don’t comprehend and an assistant that doesn’t understand it well enough to transform it into production-grade software.
Honest reflections from coding with AI so far as a non-engineer:
It can get you 70% of the way there, but that last 30% is frustrating. It keeps taking one step forward and two steps backward with new bugs, issues, etc.
If I knew how the code worked I could probably fix it myself. But since I don’t, I question if I’m actually learning that much.
Life is incomprehensible enough as is. Imagine if, on top of everything else, you felt the 70% problem within everything you care about. Your first experiences in some new arena would be exhilarating, but you’d never feel any ownership of what you accomplish—never knowing how to navigate these new landscapes that AI assistants happily prepared for you. Any time you wanted to fix, change, or be creative and unique within your life, you would be trapped by your assistant. Because you wouldn’t have the ability to reason through the technical details of your domain, you couldn’t even ask the right questions to appropriately describe your problem.
Senior developers are more likely to benefit from AI assistance precisely because they already have sufficient comprehension of their problem domain to perceive what is actually going on and to ask the appropriate questions. Because they understand what the assistant prepares for them, they can guide the process, ensuring their code does not rot—that it maintains its function and becomes more comprehensible, not less, over time.
If we are to have AI assistants for everything, I want the typical AI-assisted experience to resemble that of a senior software engineer, not that of the junior developer stuck in the 70% trap.
The principal difference between how junior and senior developers interact with AI assistants is that seniors use the assistant as a tool that augments their personal capabilities, whereas juniors outsource their needs, abdicating responsibility for the underlying complexity to the assistant, much as one would to a hired consultant or contractor. In essence, this is the choice we face as we lean into our AI-assisted future: Are these assistants here to augment our capabilities or to offshore our responsibilities?
Clearly, we want to preserve both options. In many cases I do not want responsibility for a desired outcome: going to a restaurant instead of cooking dinner is a great example, as is hiring an electrician to install an EV charger instead of installing it myself. Yet the ability and freedom to augment rather than abdicate our responsibility must be treated as sacred. The natural market trend is to create products with reliable future revenue, so products that capture specialization are more lucrative than ones that democratize those specialties. As such, we must view the ability to augment our capabilities rather than abdicate them as a right that must be preserved if humans are to live fulfilling and meaningful lives within an AI-in-everything future.
How, then, might we pave the way for a future of senior developers—of humanity capable of using AI assistance first to accelerate learning, and only when sufficient mastery is reached, to augment our abilities in order to scale our agency, our ability to creatively and freely act within the world?
Senior developers have a deep understanding of both their code and their problem domain. Being able to jump between the problem-domain representation, the code itself, and the actual execution environment is crucial. This dual representation, problem domain alongside code, ensures each constituent representation can be described in relation to the other, guaranteeing that correctness or sufficiency can be clearly communicated even when we share limited mutual context with our communication partner.
To use an AI assistant as an augmentation of our capabilities and not an abdication of them, both the code and the use cases of those capabilities must be well understood. Therefore, to augment everything, we need to develop two new things: the maturity to use assistants as teachers and coaches, rather than as consultants, until we reach expertise in our field; and a lingua franca for describing action—a structured language that is easily understood by both humans and machines, yet broad enough to describe the purpose and mechanism of action for whatever endeavor we can imagine.
If current trends continue, we are likely to interact with AI assistants primarily through unconstrained natural language. This is not optimal because it is difficult to ensure natural language stays precise. To act as a mutually understood code for describing action, statements must be clear, require minimal interpretation, and be repeatable. Natural language is powerful because it is evocative. It unlocks our intellect’s full power for imagination and interpretation. Because it is evocative, it is very hard for natural language to be right or wrong.
Code, on the other hand, is structured language; code can be well-formed or not. It would be difficult to write poetry in a programming language without breaking its syntax. Yet its brittleness is also its power, for there can be as much information in a wrong line of code as in a well-formed one. A wrong line points to a discrete lack of understanding, an element that can be investigated and improved upon.
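As a tiny illustration (the scheduleReminder function below is hypothetical, invented for this example), consider a call that a type checker rejects: the rejection itself is informative, because it points at a specific misunderstanding of what the function expects.

```typescript
// Hypothetical example: a reminder scheduler whose delay is given in milliseconds.
function scheduleReminder(delayMs: number): void {
  setTimeout(() => console.log("Time for your break!"), delayMs);
}

// Well-formed and correct: 15 minutes expressed in milliseconds.
scheduleReminder(15 * 60 * 1000);

// Wrong, but informative: if uncommented, the call below would be rejected by
// the type checker, and that rejection would point at a discrete gap in
// understanding (the function expects a number of milliseconds, not a
// human-readable phrase).
// scheduleReminder("15 minutes");
```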
If we are to maintain comprehension of the underlying domains that universal AI is assisting us with, we need both evocative language and structured language. Combined, these styles provide a universal language of action—language that can both evoke an intention and be correct or not. Such a language is surely closer to natural language than to machine code, yet it still requires constraints to ensure it describes action in a repeatable manner that is clear to both humans and machines.
A language of action must fulfill three properties in order to be evocative of an intention while also being correct or not. Languages that fulfill these three properties satisfy my definition of a procedure. I believe there can be many different implementations of such languages, but if AI assistance is to improve human agency and not erode it, AI assistants must always be able to offer their advice through a procedural protocol that provides these three properties:
The intention. This natural language statement evokes the purpose of the action. The action succeeds or fails based on whether what was evoked matches reality for the user after the action is performed.
The requisite starting state. Who or what must be available to begin the action. This is a structured statement that can clearly identify persons, roles, places, and things both specifically and categorically.
The sequence. How the constituent initial elements must interact to evoke the intended result. This is a structured statement of interaction, which treats the initial elements as a comprehensive context and uses a well-formed grammar to define their intermediate and end states.
Notably, this structure is self-referential, as it must describe intermediate actions. This means action can be described at any scale, from the most general to the most specific. As with any good code, choosing the correct scale for a procedure involves a tension between abstraction and description. Successfully balancing these two countervailing properties is one of the hallmarks of senior developers. Abstractions let language fulfill multiple needs at the cost of additional symbolic complexity. Overly descriptive language makes it difficult to convey complex actions clearly to a human mind, as the barrage of details overwhelms the narrative and becomes difficult to remember.
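To make these three properties concrete, here is a minimal sketch, in TypeScript, of what one encoding of such a procedural protocol might look like. The type names (Procedure, Participant, Step) and the EV-charger example are illustrative assumptions of mine, not a prescribed specification; many other encodings could satisfy the same three properties.

```typescript
// A minimal sketch of one possible procedural protocol. All names here are
// hypothetical, chosen for illustration.

// Requisite starting state: persons, roles, places, and things, identified
// either categorically (a role) or specifically (an identifier).
interface Participant {
  role: string;        // categorical identification, e.g. "electrician"
  identifier?: string; // optional specific identification, e.g. "Alice"
}

// One element of the sequence: either a direct interaction among the initial
// elements, or a nested sub-procedure. The nesting is the self-referential
// part, which lets action be described at any scale.
type Step =
  | { kind: "interaction"; actors: string[]; action: string; resultState: string }
  | { kind: "subProcedure"; procedure: Procedure };

// A procedure bundles the three properties together.
interface Procedure {
  intention: string;            // evocative natural-language statement of purpose
  startingState: Participant[]; // structured statement of who or what must be available
  sequence: Step[];             // structured statement of interaction
}

// The earlier EV-charger scenario expressed as a procedure. The action
// succeeds if reality, after the final step, matches what the intention
// evoked for the user.
const installCharger: Procedure = {
  intention: "Install an EV charger so the car can charge overnight at home.",
  startingState: [
    { role: "homeowner" },
    { role: "electrician" },
    { role: "equipment", identifier: "Level 2 charger" },
  ],
  sequence: [
    {
      kind: "subProcedure",
      procedure: {
        intention: "Obtain the electrical permit the city requires.",
        startingState: [{ role: "homeowner" }, { role: "permit office" }],
        sequence: [
          {
            kind: "interaction",
            actors: ["homeowner", "permit office"],
            action: "submit the load calculation and pay the fee",
            resultState: "permit issued",
          },
        ],
      },
    },
    {
      kind: "interaction",
      actors: ["electrician", "equipment"],
      action: "mount, wire, and test the charger on a dedicated circuit",
      resultState: "charger energized and verified",
    },
  ],
};
```

Note how the nested permit sub-procedure is itself a complete procedure: the same structure describes action at both scales, which is exactly the tension between abstraction and description mentioned above.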
AI assistance for everything could create a beautiful future that dramatically improves human agency, capability, meaning, and happiness—but only if we can take the lessons of AI software development to heart early on. A future full of individuals stuck within 70% traps is one to avoid. There are design choices we can make for our AI-enhanced systems now to help avoid that fate.
First of all, we must learn to be responsible junior developers, for AI-enhanced capabilities are already exhilarating, and it is hard to know when to pump the brakes and do the hard work of ensuring our own comprehension. By ensuring comprehension, we can unleash our creativity and unique perspective, creating radically transformative, fun, and rewarding art, technology, and social engagement from our augmented agency. Without comprehension, we abdicate our agency, becoming nothing more than spectators in a future outside of our control.