Maria Mateescu • Engineering Log

AI: A Prediction

Everyone, from their mother to their dog, seems to have an opinion on AI these days. Some CEOs are chasing it like gold, some engineers are disgruntled about its effect on the job market, and newscasters are making predictions left and right...

Allow me to add my voice to the pool. Not that anyone asked me to[0]. Not that my opinion is any more reliable than anyone else's.

We are witnessing the early stages of a shift. Overconfidence in, or misunderstandings of, AI's capabilities -- fueled by a recession and major corporations' attempts to save face -- have led to widespread layoffs justified by AI adoption. More and more people seem to believe that AI can write all their code flawlessly, allowing anyone to bring their ideas to life without ever having to learn how to code. The problem is, LLMs generate code that is only statistically likely to be correct based on patterns in their training data. Correctness is never guaranteed -- and if you check the fine print, their makers don't claim it is.

What we are already seeing is that 'statistically correct' does not mean 'actually correct'. Eventually that gap will lead to an error so significant that everyone panics. Then companies will start hiring humans again. Access to and trust in AI will decrease, but reading code will become a more important skill than writing it.

Why LLMs work

Before I go into this tirade, I will start off by saying that I love ChatGPT. I love running ideas by it, getting it to check things, and many other applications. That being said, it is also wrong, a lot -- often enough that I would not trust it with anything significant.

But when it does work, it works so quickly and wonderfully that one can't help but be in awe. It is a useful tool. Statistically speaking. To oversimplify why it works: it's for the same reason randomized algorithms work a lot of the time[1]. Sometimes a problem is so complex and difficult that we don't know how to solve it directly. So we throw some randomness at it, and sometimes we get a solution, and it works. If you want a fancy computer-science reason for why that is viable, one way to explain it is that coming up with a solution has a much higher complexity than verifying that a solution is correct.
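To make that gap between generating and verifying concrete, here is a minimal sketch in Python (my own toy example, nothing to do with any model's internals): checking whether a candidate divides a number is one cheap operation, while finding a divisor by blind guessing takes many attempts.

```python
import random

def verify_factor(n, candidate):
    # Verification is cheap: a range check and a single modulo operation.
    return 1 < candidate < n and n % candidate == 0

def guess_a_factor(n, attempts=1_000_000):
    # Generation is expensive: blindly propose candidates until one verifies.
    for _ in range(attempts):
        guess = random.randint(2, n - 1)
        if verify_factor(n, guess):
            return guess
    return None  # gave up within the attempt budget

print(guess_a_factor(101 * 103))  # usually 101 or 103; None only if very unlucky
```

The analogy to LLMs is loose, but the shape is the same: blindly producing an answer is easy to do badly, while someone who knows what correct looks like can check it quickly.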

LLMs generate different results for the same input due to probabilistic sampling; in other words, you are unlikely to get the same result twice. Would you trust a doctor who gave a different prescription every time someone came in with the exact same ailment and the same symptoms? While this can be prevented, a real understanding of an LLM's inner workings still escapes us, though there has been progress, and until that is solved we cannot claim its reasoning is reliable.
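For the curious, here is roughly what 'probabilistic sampling' looks like -- a minimal sketch with a made-up four-word vocabulary and made-up scores (real models work over tens of thousands of tokens, but the mechanism is the same in spirit):

```python
import math
import random

def sample_next_token(scores, temperature=1.0):
    # Softmax over raw scores, then draw one token at random.
    # Higher temperature flattens the distribution (more variety between runs);
    # temperature close to zero approaches greedy, repeatable decoding.
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores.keys()), weights=weights, k=1)[0]

# Hypothetical scores for the word after "The outage was caused by"
scores = {"a": 2.0, "the": 1.8, "an": 0.3, "cosmic": -1.5}
print([sample_next_token(scores, temperature=0.8) for _ in range(5)])
# Same input, different output on almost every run.
```

Run it twice and you will almost certainly get two different lists, which is exactly the point.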

I guess you can see what I am getting at. We are starting to trust LLMs more than we should, as we seem to be forgetting what they actually are -- or maybe those who do never knew in the first place. While LLMs are nowhere near as simple as a Monte Carlo machine, and their randomness is controlled, they are still predictive models built on probability distributions. In the absence of someone to validate their output, things will start to fail.

Why do I believe this?

Because it's already happening.

  1. I saw a post on LinkedIn about a founder who paid a freelancer to design her website and then went bankrupt because of a bug. (I cannot find and verify this story, but it does sound plausible.) The freelancer put the job through an LLM and delivered the project. The website worked, but there was a bug in the cloud setup that led to millions of dollars in cloud service fees. There was even a lawsuit to recover the money, and the central question was: who is liable -- the founder, the freelancer, or the LLM? Imagine this, but at a larger scale. Yes, AIs can multiply the good people create, but also the bad. We've seen AI mishandle simple questions like "which bank should I go to?" What happens when we trust it with critical infrastructure? Would you be happy with ChatGPT's confidence rate (96%) on how often your car doesn't crash?
  2. The CEO of Klarna recently posted about going back to human customer support, two years after decimating its customer service team. Klarna's attempt to replace human agents with AI chatbots led to customer dissatisfaction, proving that automated systems struggle with nuance, problem-solving, and empathy.
  3. One of the ways people benchmark AIs is against competitive programming questions. That's what they're being optimised against in their definition of problem-solving. Have you ever worked with a former competitive programmer? They're smart, but dear god do they reinvent the wheel every single time, because they are taught not to accept even the smallest of black boxes. And the competitive programming environment pushes one to constantly have to prove themselves[2]. Some learn to work with humans and develop people skills, some don't. That's what we're teaching the AI. Human ingenuity comes from the desire to do less. AI doesn't mind doing a lot of extra work every time. And there are a lot of cases where that ingenuity to do less is needed -- it is ultimately the core of software engineering.

And the thing is, I don't think people will even see the boost in productivity they are expecting from AI. I used to be a fairly productive engineer. The bottleneck wasn't how quickly I could write code; it never was[3]. AI will speed up proofs of concept, but when nuance and stability are needed, it will still be humans who provide them. There are many slower processes inside organisations. There will be time for a human engineer or two.

Bringing back the engineers

I touched on this briefly in my disambiguation post, but ultimately humans will still need to disambiguate for machines. Personally, I do not think LLMs are in a place where they can be trusted fully -- or likely ever will be, given their nature. They imagine, they create, not out of creativity but out of statistics. Among the things ChatGPT has imagined for me: sentences in an article I asked it to proofread, a physical address, immigration laws. Upon further verification, none of them ever existed. And you want it to write your production code? Trust but verify. And to verify, you need experts.

So, that sounds good for senior engineers and experts, but what about the rest? Honey, where do you think senior engineers come from? Stop training junior engineers now, and in 5 years we'll start wondering where all the senior engineers have gone. Plus, if you've worked with either, you'll notice there are certain things junior engineers have always been better at. They learn new frameworks faster, they are not yet jaded, and they are more willing to explore and ask the smart "stupid" questions.

Job market wise, I think things will get worse for a while before they get better. Companies will only want to hire people with a lot of experience, and cut costs along the way. They will only hire "senior" engineers and then wonder where all the senior engineers have gone, once those seniors have burnt out because they cannot work 60-hour weeks, or 24/7 like a machine.

The Hindenburg moment

While I don't know what the blow-up moment will be with AI, I do think that if things don't change, there is bound to be one. People are trusting it more and more, and forgetting along the way what it actually is.

The European Parliament gets a lot of shit in the media for slowing down progress on AI. But they're right on this one -- AI needs guardrails and regulation. And because regulations don't increase shareholder value, companies will never self-regulate; that pressure has to come externally. The real question is whether lawmakers have the ability to implement the right regulations.

Software engineering, unlike other fields of engineering, has historically lacked safety procedures. There are no rigorous standards like in civil, aerospace, or nuclear engineering. And without them? A Hindenburg-level disaster is just waiting to happen. But it will likely have to be even bigger, as it seems plane crashes are not enough these days.

About the Author

Maria is an ICF coach who combines their experience as a software engineer with their ability to build an open and honest environment for their clients in order to help people reach the transformative growth they know is possible through coaching.


[0] This is actually false; a few people asked me. Most notably, this one guy at the bar on NYE who couldn't understand that I didn't want to talk about work just because he did.

[1] This area of algorithmics started with the Monte Carlo method, which was even used in the creation of the atomic bomb. Monte Carlo methods use randomness to approximate solutions to complex problems, much like how LLMs generate responses.
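For a flavour of what a Monte Carlo method looks like, here is a minimal sketch of the classic pi estimate (a textbook example, nothing specific to LLMs): throw random points into the unit square and count how many land inside the quarter circle.

```python
import random

def estimate_pi(samples=1_000_000):
    # The fraction of random points in the unit square that fall inside
    # the quarter circle approximates pi / 4.
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi())  # close to 3.14159, but slightly different on every run
```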

[2] I can say this because I used to be one. I have first-hand experience of how misguided and potentially insufferable I was.

[3] Well, except when I was in operations fighting proverbial fires, but I would say that was an issue of time and context switching.

© 2025 Maria Mateescu, Built with Gatsby