OK, let me give a little bit of context. I will turn 40 in a couple of months, and I have been a C++ software developer for more than 18 years. I enjoy coding, and I enjoy writing “good” code, readable and so on.
However, for the past few months I have become really afraid for the future of the job I love, given the progress of artificial intelligence. Very often I can’t sleep at night because of this.
I fear that my job, while not completely disappearing, will become a very boring one consisting of debugging automatically generated code, or that the job will disappear altogether.
For now, I’m not using AI. A few colleagues do, but I don’t want to because, one, it removes a part of the coding I like, and two, I have the feeling that using it is sawing off the branch I’m sitting on, if you see what I mean. I fear that in the near future, people not using it will be fired because management will see them as less productive…
Am I the only one feeling this way? I get the impression that all tech people are enthusiastic about AI.
AI lets us do more with less, just like any other tool. It’s no different than an electric drill or a powered saw. Perhaps in the future we will see more immersive game environments, because much of that immersive environment can be built with AI doing the grunt work.
Have you seen the shit code it confidently spews out?
I wouldn’t be too worried.
Well, I have seen it. I even code reviewed some of it without knowing. When I asked my colleague what had happened, he said, “I used ChatGPT. I’m not sure I understand exactly what this does, but it works.” I must confess that after the code review comments, not much was left of the original stuff.
If I am going to poke small holes in the argument, the exact same thing happens every day when coders google a problem and find a solution on Stack Exchange or the like and copy/paste it into the code without understanding what it does. Yes, it was written initially by someone who understood it, but the end result is the exact same. Code that was implemented without understanding the inner workings.
The difference being that googling the problem and visiting a page on stackoverflow costs 50-500 times less energy than using ChatGPT.
Really? I haven’t done the ChatGPT thing, but I know I have spent days searching for solutions to some of the more esoteric problems I run into. I can’t imagine that asking an AI and then debugging what it returns would be any more intensive, as long as the AI solution functioned well enough to be a starting point.
That’s the thing, how do you determine whether or not the “AI solution functions enough” without having a human review it?
The economics aren’t there because LLM outputs aren’t trustworthy, and the kind of expertise you’d need to validate them is functionally equivalent to that which could be employed to write the code in the first place.
“Generative AI” is an inefficient solution to a problem that’s already been solved by the existence of coding support forums like Stack Overflow. Sure, it can be neat to ask it for example code or a bedtime story, but once the novelty wears off all you’re left with is an expensive plagiarism machine that won’t even notice when it confidently lies to you.
I have a strong opinion that the problem is more one of people attempting to solve every problem with their shiny new hammer. AI, in its current incarnations, is very good at many things. When implemented properly, LLMs are great at filtering huge amounts of text data or performing semantic analysis. Stable Diffusion (SD) does produce images and can be directed.
LLMs are not a replacement for thought. SD is not a replacement for an artist. They are all tools for helping people do things.
I am designing a hypothetical LLM architecture for analyzing the relational structure of a story and mapping it out. I am hoping that it will be capable of generating a meaningful relationship network at the end. It is a very specific goal and a very specific structure. It won’t write a story; it won’t produce dialog; it won’t build a plot. What it will do is build a network of places and characters that can be used to make decisions for all of those things. I want something that helps with internal consistency of models doing other things. So if a GPT model were to write something, it could be fact-checked against the world network to see if what it is saying is reasonable.
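To make that concrete, here is a minimal sketch of the kind of world network I have in mind, in C++ just to pin it down (the triple-based representation and all names here are hypothetical, and the extraction of facts from the story text is assumed to happen upstream in the LLM):

```cpp
// Hypothetical sketch: a tiny "world network" of entities and relations
// that generated text could be checked against for consistency.
#include <iostream>
#include <set>
#include <string>
#include <tuple>

struct WorldNetwork {
    // (subject, relation, object), e.g. ("Alice", "lives_in", "Rivertown")
    std::set<std::tuple<std::string, std::string, std::string>> facts;

    void addFact(const std::string& subj, const std::string& rel, const std::string& obj) {
        facts.insert({subj, rel, obj});
    }

    // A claim counts as "reasonable" here only if it matches a stored fact.
    bool isConsistent(const std::string& subj, const std::string& rel, const std::string& obj) const {
        return facts.count({subj, rel, obj}) > 0;
    }
};

int main() {
    WorldNetwork world;
    // Facts the analysis pass would extract from the story text.
    world.addFact("Alice", "lives_in", "Rivertown");
    world.addFact("Alice", "sibling_of", "Bob");

    // A generated sentence claiming "Alice lives in Hilltop" would fail the check.
    std::cout << std::boolalpha
              << world.isConsistent("Alice", "lives_in", "Hilltop") << "\n";  // false
}
```

In practice the consistency check would need to be much fuzzier than exact triple matching, but that is the shape of the idea: a queryable structure that other models’ output can be validated against.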
Don’t worry. If you have even a quarter as much experience as you say, your job is safe, or you can find another one not working for an idiotic company that would invest in AI instead of engineers; let them fail.
Anyway, have a look at what AI can actually do for you and see just how secure your job is. Pointless worry.
As a fellow C++ developer, I get the sense that ours is a community with a lot of specialization that may be a bit more difficult to automate out of existence than web designers or what have you? There’s just not as large a sample base to train AIs on. My C++ projects have ranged from scientific modelling to my current task of writing drivers for custom instrumentation we’re building at work. If an AI could interface with the OS I wrote from scratch for said instrumentation, I would be rather surprised? Of course, the flip side to job security through obscurity is that you may make yourself unemployable by becoming overly specialized? So there’s that.
Honestly, if I was still working in C++ I would be more worried about the language being replaced than about AI.
I got started in C, worked for years in professional C++ development, and after having worked in other languages for a while I tried to go back to it for something just recently, and this was about my reaction.
.first and .second
const
OO constructs but manual memory management
Templates
Dude, just get me the fuck out.
Ironically, it is in understanding the nuances of language semantics and library usage where AI can help a fair bit.
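For example, the .first/.second gripe above versus the C++17 structured-binding form an assistant will usually point you to (a trivial sketch):

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, int> ages{{"Alice", 30}, {"Bob", 25}};

    // The style being complained about: pair members with opaque names.
    for (const auto& entry : ages) {
        std::cout << entry.first << " is " << entry.second << "\n";
    }

    // The C++17 structured-binding form, which reads much more clearly.
    for (const auto& [name, age] : ages) {
        std::cout << name << " is " << age << "\n";
    }
}
```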
It won’t replace coders as such. There will be devs who use AI to help them be more productive, and there will be unemployed devs.
AI is a really bad term for what we are all talking about. These sophisticated chatbots are just cool tools that make coding easier and faster, and for me, more enjoyable.
What the calculator is to math, LLMs are to coding, nothing more. Actual sci-fi-style AI, like self-aware code, would be scary if it were ever demonstrated to even be possible, which it has not been.
If you ever have a chance to use these programs to help you speed up writing code, you will see that they absolutely do not live up to the hype attributed to them. People shouting the end is nigh are seemingly exclusively people who don’t understand the technology.
Yeah, this is the thing that always bothers me. Due to the very nature of them being large language models, they can generate convincing language. Likewise, image “AI” can generate convincing images. Calling it AI is both a PR move for branding and an attempt to conceal the fact that it’s all just regurgitating bits of stolen copyrighted content.
Everyone talks about AI “getting smarter”, but by the very nature of how these types of algorithms work, they can’t “get smarter”. Yes, you can make them work better, but they will still only be either interpolating or extrapolating from the training set.
Haven’t we started using AGI, or artificial general intelligence, as the term to describe the kind of AI you are referring to? That self-aware, intelligent software?
Now AI just means reactive code designed to mimic certain behaviours, or even self-learning algorithms.
That’s true, and language is constantly evolving for sure. I just feel like AI is a bit misleading because it’s such a loaded term.
I get what you mean, and I think a lot of laymen do have these unreasonable ideas about what LLMs are capable of, but as a counterpoint, we have used the label AI to refer to very simple bits of code for decades, e.g. video game characters.
AI is the correct term. It’s the name of the field of study and anything that mimics intelligence is an AI.
Neural networks are a perfect example of an AI. What you actually code is very simple: a bunch of nodes that pass numbers forward through the system, applying weights to the values. Their capabilities once trained far exceed the simple code they run, and they seem intelligent.
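To illustrate, a deliberately tiny sketch of that forward pass (toy values I made up, not a real framework):

```cpp
#include <iostream>
#include <vector>

// One layer: each output node is a weighted sum of the inputs plus a bias,
// passed through a simple activation (ReLU here).
std::vector<double> forward(const std::vector<double>& inputs,
                            const std::vector<std::vector<double>>& weights,
                            const std::vector<double>& biases) {
    std::vector<double> outputs;
    for (size_t node = 0; node < weights.size(); ++node) {
        double sum = biases[node];
        for (size_t i = 0; i < inputs.size(); ++i) {
            sum += weights[node][i] * inputs[i];
        }
        outputs.push_back(sum > 0.0 ? sum : 0.0);  // ReLU
    }
    return outputs;
}

int main() {
    // Two inputs feeding two nodes.
    std::vector<double> inputs{1.0, 0.5};
    std::vector<std::vector<double>> weights{{0.2, -0.4}, {0.7, 0.1}};
    std::vector<double> biases{0.0, 0.1};

    for (double v : forward(inputs, weights, biases)) {
        std::cout << v << " ";
    }
    std::cout << "\n";
}
```

All of the apparent intelligence lives in the trained weight values, not in the code itself.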
What you are referring to is general AI.
It’s a misnomer, but if you want to pass off LLMs as “artificial intelligence” on a technicality of definition, you’d also have to include advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), etc.
Indeed you do.
Neural networks are some of the original AIs.
Yes, those are also examples of AI; see the relevant Wikipedia article.
AI is whatever hasn’t been done yet
We need better terms to specify exactly what we mean, e.g. a numeric scale of intelligence or maybe even something more complex like a radar chart.
I’ve never had to double check the results of my calculator by redoing the problem manually, either.
I am on the product side of things and have created some basic proof-of-concept tools with AI that my bosses wanted to sell off. No way, no how will I be able to service or maintain them. It’s incredibly impressive that I could even get this output.
I am not saying it won’t become possible, but I lack the fundamental knowledge and understanding to make anything beyond the most minor adjustments, and AI is still quite bad at only addressing specific issues or, god forbid, expanding code without fully rewriting the whole thing and breaking everything else.
For our devs I see it as a much-improved and less snide Stack Overflow and Google. The direct conversational nature really speeds things up with boilerplate code, and since they actually know what they are doing, it’s amazing. Not only that, but we had devs copy-pasting from online searches without fully understanding the snippets. Now the AI can explain it in context.
Our company uses AI tools as just that, tools to help us do the job without having to do the boring stuff.
Like, I can now just write a comment about state for a modal and it will auto-generate the repetitive code instead of me having to type const [isModalOpen, setIsModalOpen] = useState<boolean>(false);.
Or if I write something in one file, it can reason that I am going to be using it in the next file, so it can generate the code I would usually type. I still have to solve problems; it’s just that I can do it quicker now.
But this is OP’s point. People are getting fired from tech companies because they don’t need as many people anymore. Work is being done faster and cheaper by using AI.
This is a real danger in the long term. If the advancement of AI and robotics reaches a certain level, it can detach a big portion of the lower and middle classes from society’s flow of wealth and disrupt structures that have existed since the early industrial revolution. The educated common man stops being an asset. The whole world becomes a banana republic where only industry and government are needed, and there is an impassable gap between common people and the uncaring elite.
This is exactly what I see as the risk. However, the elites running industry are, on average, fucking idiots. So we have been seeing frequent cases of them trying to replace people whose jobs they don’t understand, with technology that even leading scientists don’t fully understand, in order to keep those wages for themselves, all in spite of those who do understand the jobs saying that it is a bad idea.
Don’t underestimate the willingness of upper management to gamble on things and inflict the consequences of failure on the workforce. Nor their willingness to switch to a worse solution, not because it is better or even cheaper but because it means giving less to employees, if they think that they can get away with it.
White collar never should have been getting paid so much more than blue collar, and I welcome seeing the shift balance out, so everyone wants to eat the rich.
The rich will have weapons and technology. I see a 1984-plus-Hunger Games scenario as more likely.
White collar never should have been getting paid so much more than blue collar
Actually I see that the other way around. Blue collar should have never been paid so much less than white collar.
If this follows the path of the industrial revolution, it’ll get way worse before it gets better, and not without a bunch of bloodshed
I think your job in your current form is likely in danger.
SOTA foundation models like GPT-4 and Gemini Ultra can write, execute, and debug code with special chain-of-thought prompting techniques, and large-scale process verification on synthetic data and RL search for correct outputs will make this 10x better. The silver lining is that I expect this to require an absolute shit ton of compute, constantly generating LLM output hundreds of times for each internal prompt over multiple prompts, and it may well take longer than an ordinary software engineer to run. I suspect early full-stack developer LLMs will mainly be used for a few very tedious coding tasks and SWEs will be cheaper for a fair length of time.
I expect it will be 2-3 years before this happens, so for that short period I expect workers to be “super-productive” by using LLMs in the coding process, but I expect the crossover point when the LLM becomes better is quite soon, perhaps in the next 5 years as compute requirements go down.
I don’t think software developers or engineers alone should be concerned. That’s just what people see all the time: Chat-GPT generating code, and they assume it means developers will be out of a job.
It’s true, I think that AI tools will be used by developers and engineers. This is going to mean companies will reduce headcounts when they realise they can do more with less. I also think it will make the role less valuable and unique (that was already happening, but it will happen more).
But, I also think once organisations realise that GPTx is more than Chat-GPT, and they can create their own models based on their own software/business practices, it will be possible to do the same with other roles. I suspect consultancy businesses specializing in creating AI models will become REALLY popular in the short to medium term.
Long term, it’s been known for a while we’re going to hit a problem with jobs being replaced by automation, this was the case before AI and AI will only accelerate this trend. It’s why ideas like UBI have become popular in the last decade or so.
I use AI heavily at work now. But I don’t use it to generate code.
I mainly use it instead of googling and skimming articles to get information quickly and allow follow up questions.
I do use it for boring refactoring stuff though.
In its current state it will never replace developers. But it will likely mean you need less developers.
The speed at which our latest juniors can pick up a new language or framework by leaning on LLMs is quite astounding. It’s definitely going to be a big shift in the industry.
At the end of the day our job is to automate things so tasks require less staff. We’re just getting a taste of our own medicine.
I mainly use it instead of googling and skimming articles to get information quickly and allow follow up questions.
I do use it for boring refactoring stuff though.
Those are also the main use cases I use it for.
Really good for getting a quick overview of a new topic, and also really good at proposing different solutions/algorithms when you describe your issue.
Doesn’t always respond correctly but at least gives you the terminology you need to follow up with a web search.
Also very good for generating boilerplate code. Like here’s a sample JSON, generate the corresponding C# classes for use with System.Text.Json.JsonSerializer.
Hopefully the hardware requirements will come down as the technology gets more mature or hardware gets faster so you can run your own “coding assistant” on your development machine.
That’s been my experience as well; it’s faster to write a query for a model than to google and go through a bunch of blogs or Stack Overflow discussions. It’s not always right, but that’s also true for stuff you find online. The big advantage is that you get a response tailored to what you’re actually trying to do, and like you said, if it’s incorrect, at least now you know what to look for.
And you can run pretrained models locally already if you have a relatively beefy machine. FauxPilot is an example. I imagine in a few years running local models is going to become a lot more accessible.