Death of the Jr Dev - Review

The Article

The Death of the Junior Developer is an article written by Steve Yegge.

I recommend everyone watch the video below and subscribe to him. He is one of the best tech YouTubers. If you dislike him initially, as I did, you might come to appreciate his no-nonsense approach.

My Initial Response

The article is quite alarming, especially coming from an individual and company I hold in high regard. Hearing a well-respected senior developer say that AI is truly taking over programming was concerning.

You see, I have been programming professionally for nearly 20 years. I have always been an early adopter. At my workplace, I was among the first to experiment with generative AI when it emerged. I initially used it to assist with my writing and coding. My early adoption led to my inclusion on a team utilizing AI to develop tools for our internal systems. I think generative AI is wonderful and has enormous potential; however, I also see its drawbacks. I believe, as do most senior developers I have spoken with, that the current generation of AI poses no danger of replacing us.

The availability of new training data is decreasing, and in my opinion the AI's performance is not particularly impressive, even from the latest models. The most it can do is speed up a good developer, not replace them. It does not work well on large codebases or when handling completely new tasks it has never encountered before. I would also like to point out a fallacy: while it is true that most individual files are under 1,000 lines of code, huge systems have tens of thousands of files and millions of lines of code all working together, and you cannot feed that into any model. As a rough back-of-the-envelope figure, a few million lines of code works out to tens of millions of tokens, far more than any current context window can hold. You would likely need to train your own model on that codebase to get any real use out of it, and even then, I would remain skeptical. Welcome to the world of legacy monorepos.

The Breakdown

The author mentions that this is a speculative blog post but fails to disclose that it also serves as an advertisement for the new AI coding tool he is promoting. In my view, presenting this as an opinion piece while promoting a product raises concerns regarding FTC regulations on advertisement disclosures. Although I am not a lawyer and cannot say whether anything illegal occurred, this article left me with a negative impression.

The mere fact that it is on their blog should not automatically suggest that it is an advertising effort. People read blogs from tech companies because of the high-quality content they produce. Not every post on these blogs is an advertisement, for a very practical reason: no one wants to spend their time reading advertisements. We want to read articles that are factually correct, useful, and interesting. Fear-mongering does not align with these expectations. Therefore, I believe we have a reasonable expectation that if the goal of a specific article is to pitch a product, that fact will be clearly disclosed.

Why is this an issue at all?

The issue is that using fear-mongering tactics like this can cause unforeseen consequences. For example, it might convince people that there is no longer a need for junior developers. People who would shrug off hyperbole in an obvious advertisement will listen to a trusted voice in the community. How many managers will read this and decide that they can just use AI and don't need to hire a few more developers? How many junior developers will read this and give up on their dream before it even begins?

At some point, the promotion of AI combined with fear-mongering by respected members of the tech community with vested financial interests could lead to fewer developers rising through the ranks.

The likely aftermath

As with the rest of this article, this is purely my own opinion. It is based on my experience using AI heavily in my work, from ChatGPT 3.5 through ChatGPT 4, primarily on large legacy codebases.

Take a look at The Trough of Disillusionment by Tom McCallum.

I still believe we have not yet hit the peak of inflated expectations. At some point we will all see the limitations of the current version of AI and we will plummet into the trough of disillusionment. Some of us will stick it out and see that plateau of productivity.

However, given how slowly many company cultures move, and how much people hate to admit when they are wrong, we may see a huge shortage of developers by the time we realize we still need humans.

My feelings on the matter

I still believe that Steve Yegge is a fantastic developer. I think Sourcegraph is a great company filled with talented engineers. However, everyone makes mistakes and I believe this article was a huge one for them.

I agree with ThePrimeagen that this is a gatekeeping article. I don't like how many people it will likely scare away from programming just to sell a tool.

I do believe their approach with Cody has a lot of potential. I have played around with having multiple AIs work together to create programs in the past. It works surprisingly well. However, I do not believe it works well enough to say that we don't need junior developers anymore. In my experience, AI works well for smaller projects, especially greenfield projects, and only in languages it has extensive training data for.

My AI Coding experience

I mainly use AI coding for simple tasks, or to get an understanding of error messages without having to read them closely. I will not spend hours crafting a prompt to produce buggy code that I could write myself in an hour.

Zig

While trying to use it to write a new Zig program, I did not know how to convert a float to an int. Apparently ChatGPT-4o isn't sure either, and neither are Claude or Gemini. Oh, but they all think they can do it. After wasting 30 minutes trying to get the AI to do it, I spent 2 minutes of RTFM and got it done.

PHP

I was working in a legacy PHP Phalcon API project that spans 15 repos, all of which use a shared common library. The product has well over 10,000 files and millions of lines of code. I cannot give my exact use case here, but the AI could not get enough context to help me build out the feature I needed at all. Some people will say that it obviously needs to be refactored properly, and then it would be in small enough chunks! You are absolutely right; we just need to dedicate dozens of engineers to spending months, if not years, on that refactor while adding no new features.

Go

I am learning Go, and I tried to have ChatGPT-4o help me refactor my learning project: a command line tool for Windows that does what wc does on Linux. I explained that I wanted it to work exactly like the current version does on Linux. It could not understand how the flags should work, even after I explained it several times. It kept wanting to go back to the built-in flag package, which is not how the current wc program behaves. After arguing with it for an hour, I wrote it myself in 15 minutes. Then, during the refactor, it completely rewrote some very optimized code so that it took 10 times longer to run. It apologized and corrected the mistake when I pointed it out.
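To make the flag complaint concrete, here is a minimal sketch of the kind of argument handling wc needs. GNU wc accepts combined short options (wc -lw file.txt counts lines and words), which Go's built-in flag package cannot express, since it would read -lw as a single flag named "lw". This is only an illustration, not the code from my project; long options and the actual counting are left out.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	var countLines, countWords, countBytes bool
	var files []string

	for _, arg := range os.Args[1:] {
		if strings.HasPrefix(arg, "-") && arg != "-" {
			// Each letter after the dash is its own option, as in "wc -lw".
			for _, c := range arg[1:] {
				switch c {
				case 'l':
					countLines = true
				case 'w':
					countWords = true
				case 'c':
					countBytes = true
				default:
					fmt.Fprintf(os.Stderr, "invalid option -- '%c'\n", c)
					os.Exit(1)
				}
			}
		} else {
			// Anything that is not an option is treated as a file to count.
			files = append(files, arg)
		}
	}

	// With no options at all, wc defaults to lines, words, and bytes.
	if !countLines && !countWords && !countBytes {
		countLines, countWords, countBytes = true, true, true
	}

	// The actual counting is omitted from this sketch.
	fmt.Println("lines:", countLines, "words:", countWords, "bytes:", countBytes, "files:", files)
}
```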

Conclusion

The examples above are just a few instances where I have seen LLMs fail. Some might argue that I need to be a better prompt engineer. While that might be true, I have successfully crafted complex prompts in other contexts. It is possible, but I don't think it is a skill issue on my part, in the same way that asking ChatGPT-4o how many r's are in "strawberry" and getting the wrong answer is not a skill issue on my part. You see, these LLMs are built on what most people do, not on best practices. I have found the code they produce to be very mediocre most of the time. Their training data will give us back the most common practices, not the best practices. While the model makers may go in and clean that up as best they can, at that point you are trusting people you don't know and systems you don't understand with your code.

I do not believe that is a good idea. I would rather trust a junior engineer whom I have trained and can more easily monitor and guide. I can't truly correct ChatGPT when it makes a mistake; it will just keep making that same mistake over and over again, forcing me to fix it, at least until a new model comes out with a whole new host of possible errors I need to figure out.

Finally, debugging ChatGPT-generated code is harder than debugging my own code because it makes mistakes I typically wouldn't, and those unfamiliar little errors can be difficult to spot. You also can't simply ask ChatGPT to find and fix those errors, because even if it fixes them once, it is likely to slip back into them a few messages later.

An analysis of the potential consequences of AI advancements on junior software developers.

By The Edit, 2024-07-05