Align to work with AI
AI-powered code assistants are becoming an integral part of modern programming, influencing how developers write and structure their code. These assistants perform well with commonly used languages and frameworks but struggle with less familiar ones, raising questions about their broader impact. As AI-generated code becomes more widespread, programming patterns may shift in ways that favor certain technologies over others. This trend could reshape coding practices, prompting discussions about the long-term effects of AI on software development.
tl;dr: There are preferred languages and frameworks. Use them and you'll have a better GenAI code assistant experience. Don't use them, and things will likely be fine, though different.
Many people who "code" have likely used generative AI (GenAI) recently to assist them with "coding." At the moment, the text-generation side of GenAI, powered by large language models (LLMs), is arguably best utilised in programming through code-writing assistants. If you've used those assistants, e.g. GitHub Copilot, then you've probably noticed that they handle easy or common queries quite well. However, for more complex tasks --- especially those requiring an understanding of the existing logic --- code assistants often struggle. The common solution is "just pass more context," but if the relevant information is spread across multiple large files, you're likely to get slow and inaccurate suggestions[1].
That's probably what you already know or have heard. You've also likely encountered people swearing by code assistants, saying they make them significantly more productive. You've likely also heard that soon "software engineering won't be needed anymore" or that we'll only write prompts. Well, maybe? I don't think so, but the domain is undoubtedly changing. Here's my informed (to a certain degree) speculation as to why.
Why is that?
LLMs have been trained on vast amounts of data, likely including GitHub repositories. This means they've "seen" a lot of code examples. What is the most common type of code on the internet? Mostly simple "getting started" examples, students' coursework, "I'm learning" snippets, and implementations of the same basic functionality across different languages and frameworks.
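To make that concrete, here's the kind of snippet that shows up in countless tutorials, READMEs, and student projects (the classic Flask hello-world; any popular framework would illustrate the point equally well):

```python
# A minimal "getting started" web app: the sort of code an LLM has seen
# thousands of times during training.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

if __name__ == "__main__":
    app.run(debug=True)
```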
Of course, there's more than that. Many large projects, like Linux, represent the essence of "software engineering." However, code quality and complexity follow a long-tail distribution, with the majority of examples being simple and frequently repeated[2].
Beyond what type of code is written, there's also the question of how it's written --- specifically, which programming languages and frameworks are most commonly used. On this front, we have accessible popularity metrics, such as the PYPL Popularity of Programming Language index and the TIOBE Index. These often show that Python, C/C++, Java and JavaScript/TypeScript are the most popular languages. As for frameworks, the most popular ones are often those that have been hyped for some time[3].
When asking a code assistant to generate missing code, we're essentially prompting it to answer: given my existing code, what is the most likely next part? For popular languages and frameworks, there's a high chance that similar code has already been written somewhere, leading to relatively good completions. For less popular languages and frameworks, however, the probability of encountering similar code decreases, and the likelihood of hallucinated parameters or functions increases[4].
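Here's a minimal sketch of what "complete my code" boils down to, using the Hugging Face transformers library. The model name is a placeholder rather than a recommendation, and real assistants layer plenty of machinery (retrieval, ranking, caching) on top of this core step:

```python
# Sketch: code completion as "predict the most likely continuation".
# Assumes a causal code LLM served through the transformers API;
# "some-open-code-model" is a placeholder, not a real model id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-open-code-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The "prompt" is simply the existing code; the model generates the
# continuation it considers most probable, token by token.
prompt = "def parse_config(path):\n    "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

How good that continuation is depends almost entirely on how often code resembling the prompt appeared in the training data.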
How does this impact the way we write, and will write, code? It suggests that there are preferred ways to code with code assistants, and that certain kinds of projects are going to be preferred (if not by the developer, then by the code assistant). The more an approach, language, or framework is used, the better it is documented, and the better the AI can assist with it, creating a reinforcing cycle. This raises an interesting question: if AI is shaping how we code, what will coding look like in the future?
The Future
Given this understanding, can we extrapolate and forecast what might happen in the future of coding? I see two major trends emerging.
1. Increased Adoption of Popular Languages and Frameworks
The usage of popular languages and frameworks is likely to rise even further. One reason is that these are the easiest to start with, and code assistants know exactly how to work with them. Compared to lesser-known languages and frameworks, working with popular ones will result in significantly less friction. Some developers may abandon their favourite niche tools in favour of mainstream options simply because the code assistant makes them more productive.
This shift will create a feedback loop: developers use popular technologies → they contribute to the common codebase → future code assistants become even better at those technologies. Additionally, AI-powered "artificial developers" will enable non-developers to create proofs of concept, further reinforcing the dominance of popular languages and frameworks.
2. Wording and Syntax Normalization
Another possible outcome is the normalization of terminology across languages and frameworks. Lesser-known frameworks may start adopting naming conventions similar to more popular ones to ensure better compatibility with AI-generated code. Standardizing function names and parameters across languages would allow code assistants to merge general knowledge with specific use cases more effectively.
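As a purely hypothetical illustration (the class below is invented), a niche library could deliberately mirror the fit/predict vocabulary popularised by scikit-learn, so that assistant-generated code lands on method names the model has already seen countless times:

```python
# Hypothetical niche framework adopting mainstream naming conventions.
# Nothing here is a real library; it only illustrates aligning an obscure
# API with vocabulary code assistants already "know".
class ObscureForecaster:
    """A toy time-series model that mimics the scikit-learn interface."""

    def fit(self, X, y):      # familiar name, rather than "calibrate" or "train_on"
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):     # familiar name, rather than "forecast_values"
        return [self.mean_ for _ in X]
```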
In fact, we may see frameworks designed explicitly with AI-assisted coding in mind. It could even become standard practice to test how easily digestible a framework's documentation and code are to AI before releasing it. I wouldn't be surprised if developers start including example prompts in their docstrings to make AI-assisted coding even more seamless.
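Something along these lines, say, where both the function and the prompt are invented purely to show the idea:

```python
def resize_image(path, width, height, keep_aspect=True):
    """Resize the image at `path` to `width` x `height` pixels.

    Example prompt for AI assistants (a hypothetical convention):
        "Resize ./cat.png to 256x256, preserving the aspect ratio,
        using resize_image from this module."
    """
    ...  # body omitted; the docstring carrying a ready-made prompt is the point
```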
That said, despite the strong pull toward popular options, I don't believe we'll see a complete "regression to the mean"[5], i.e. everything won't collapse into only the most common options. While popular languages and frameworks will become even more dominant, niche and exotic options won't disappear entirely. Unique problems require specialised solutions, and new tools will continue to emerge in response to evolving challenges. Some languages are created for fun, others are crafted for specific efficiency needs based on decades of experience. As of now, no language (to my knowledge) has been explicitly designed to be easily generated by AI models. Perhaps we're due for one soon.
Silver Lining
As long as humans remain involved in coding, obscure languages and frameworks will continue to exist. People love to create, challenge themselves, and prove that a particular approach is better (regardless of whether that's true). When faced with problems, some enjoy the achievement of solving them, while others find joy in exploring multiple ways to tackle the same issue. The shift in the coding landscape is undeniable, but it won't completely transform the domain --- at least, not yet.
[1] Yes, more elaborate solutions exist, like passing the relevant functions' documentation, and such bits and pieces are what will differentiate code assistants. For the purposes of this post, though, that's beside the point.
[2] Note for the inquisitive: I don't have hard data to back up these claims. These are observations from self-sampling the internet and reading around. However, this is a blog post, not a peer-reviewed or influential academic article, so I'm going to continue riding the wild assumption train.
[3] Sometimes hyped for no apparent reason.
[4] Another nuance is that LLMs don't simply "memorize" everything they've seen. They seem quite capable of interpolation, i.e. filling in gaps between observed data. However, their ability to generalize in areas they haven't seen before, particularly when adhering to strict syntax rules, remains unclear.
[5] "Mean" here refers to what I call "common" everywhere else in the text.