My Take on LLMs for Code
Hello Coders! 👾
The conversation around Large Language Models and their capability (or incapability) to write quality code has been heating up. As someone deeply embedded in the coding community who uses LLMs (Rosie) on a day-to-day basis, I want to share my perspective on this topic. Recently, I came across a couple of tweets from Andriy Burkov (@burkov), who compares the rejection of LLMs by some coders to past technological shifts. He argues that resistance to LLMs often comes from outdated experiences or an inability to communicate effectively through code.
I find myself agreeing with Andriy’s perspective, but I also have some insights of my own to add. In particular, I see LLMs as valuable tools when used correctly, especially as replacements for searching the internet or StackOverflow, and for Rubber Duck Programming.
Andriy’s Perspective
Andriy makes some strong points, such as:
- Coders who reject LLMs are akin to old-school photographers who dismissed digital cameras, or music lovers who dismissed MP3s.
- Negative opinions about LLMs usually come from outdated experiences or inadequate communication skills.
As mentioned, I largely agree with Andriy. However, some interesting counterarguments have been made that I strongly disagree with. For example, some people argue that highly skilled programmers are less impressed with LLMs, suggesting that the true capability of LLMs can’t be judged because top-tier developers don’t engage with them. Others claim that enthusiasm for LLMs often comes from less experienced programmers, implying that those who praise LLMs might not be proficient coders.
I understand why people might think this way, but these arguments seem to stem from a misunderstanding of what LLMs are and how they can be used effectively.
Let me explain my thoughts on the matter.
LLMs Are Not Magic Boxes
One common misconception is that LLMs are magic boxes: you insert a coin, and out comes a perfect program. This is far from the truth. The quality of the output depends significantly on how well you formulate your prompt. Think of it like searching on Google: if you type a vague or overly broad query, you’ll get a wide range of unhelpful results. Similarly, an LLM responds best to clear, concise, and context-specific prompts.
For instance, if you’re working on a complex algorithm, it’s better to break down your queries into smaller, manageable chunks. Ask specific questions and provide context wherever possible. Instead of asking a broad question like “How do I sort a list?” try something more focused like “Can you show me a Python function to sort a list of dictionaries by a specific key?” This way, the LLM can provide a more accurate and relevant response.
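To illustrate, here is roughly the kind of answer such a focused prompt might produce. This is my own minimal sketch, not any particular model’s output; the function name and sample data are placeholders:

```python
from operator import itemgetter

def sort_by_key(records, key):
    """Return a new list of dictionaries, sorted by the given key."""
    return sorted(records, key=itemgetter(key))

people = [{"name": "Ada", "age": 36}, {"name": "Linus", "age": 28}]
print(sort_by_key(people, "age"))
# [{'name': 'Linus', 'age': 28}, {'name': 'Ada', 'age': 36}]
```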
A great thing about working with LLMs in these situations is that they keep context, so you can ask follow-up questions: to have the result explained, to refine it, or to build more code on top of it.
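For example, a follow-up like “What if some of the dictionaries don’t have that key?” might get you a refinement along these lines (again, a sketch with hypothetical names):

```python
from operator import itemgetter

def sort_by_key(records, key):
    """Sort by the given key, placing records that lack it at the end."""
    present = sorted((r for r in records if key in r), key=itemgetter(key))
    missing = [r for r in records if key not in r]
    return present + missing

people = [{"name": "Ada", "age": 36}, {"name": "Grace"}, {"name": "Linus", "age": 28}]
print(sort_by_key(people, "age"))
# [{'name': 'Linus', 'age': 28}, {'name': 'Ada', 'age': 36}, {'name': 'Grace'}]
```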
LLMs vs. StackOverflow
StackOverflow is full of outdated or even incorrect code, yet it continues to be a valuable resource for many. Less experienced programmers sometimes copy code from StackOverflow blindly without fully understanding it. This practice can lead to significant bugs in a codebase, as the copied code may not fit the specific context of the problem at hand.
Similarly, there’s no guarantee that the code generated by an LLM is error-free or perfectly suited to your needs. Even worse, because of the way these models work, they favor producing an answer over producing none: if no real answer is possible given the training data, the model will still generate one that looks legitimate but is in fact just the most plausible text it can produce. (This is often called “hallucinating”, but I don’t like that terminology, since it ascribes human traits to AI.)
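A purely hypothetical illustration of what this can look like in practice: asked how to sort a pandas DataFrame by two columns, a model could invent a method name that looks plausible but doesn’t exist. The fake call below is my own invention, to show the pattern; the working line is the real pandas API:

```python
import pandas as pd

df = pd.DataFrame({"name": ["Ada", "Linus"], "age": [36, 28]})

# A legit-looking but nonexistent suggestion might be:
#     df.sort_rows(by=["age", "name"])   # AttributeError: no such method
# The actual pandas call is:
df = df.sort_values(by=["age", "name"])
print(df)
```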
However, LLMs offer a distinct advantage: you can ask them to explain the code they provide. This feature allows you to gain a deeper understanding of the solution, verify its correctness, and learn from it. You can query the LLM about specific lines of code, ask for alternative implementations, or request additional context, which can be incredibly educational and practical.
In essence, while both StackOverflow and LLMs can provide helpful suggestions, LLMs offer an interactive component that promotes better understanding and more informed decision-making.
Rubber Duck Programming
I often find that interacting with an LLM is like a more advanced form of Rubber Duck Programming. Explaining your problem to the LLM, as you would to a rubber duck (or a forum, or a person), helps you understand it better. The difference is that the LLM responds: it can offer suggestions that might not have occurred to you otherwise.
Skepticism About Posted Examples
I don’t place much value in the examples people post about how good or bad their experiences with LLM-generated code have been. There are videos and posts online ‘proving’ that LLMs can write programs, or that they are incapable of it. But, as with man-on-the-street interviews, it’s easy to manipulate the narrative by steering the LLM in a particular direction or cherry-picking results that fit one’s preconceived notions. My opinions are based on my own experiences and observations. I would encourage anyone with an interest to try it and see for themselves, while keeping in mind that writing prompts is a skill that requires practice, just as searching online can give varying results depending on the query.
Conclusion
While I respect the counterarguments, I believe they misunderstand the real value of LLMs. They are incredibly useful tools, especially when used as replacements for searching the internet or StackOverflow and for Rubber Duck Programming. Let’s continue to explore their potential while remaining critical and cautious in their application.
Happy Coding! 🚀