
AIBU?


To worry that there may be no hope for a good future thanks to AI

199 replies

Designless · 11/02/2026 12:26

I use it, it upskills me a lot, I am at the top of my game, but... I think I'll be lucky to reach retirement age still in work, and I despair for young people trying to get entry-level jobs. Everything that I did to get on the ladder is done by AI now.

I know the nebulous cope response is "that's what the luddites said - NEW jobs will arise" but I think this is different. AI can think. AI allows a handful of unbelievably wealthy people to control everything.

Someone please post something hopeful before I pop from despair thanks :(

thisisplanetearthapparently · 11/02/2026 17:50

I work in law but only use it occasionally to tidy up my own work. Considering it couldn't even get the maths right when resizing a short-row knitting pattern I wanted help with, I'm not confident to use it for anything else right now! :)

Canonlythinkofthisone · 11/02/2026 17:51

Curious as to why you've bothered posting to simply counter everyone's thoughts and opinions with your own "fear".

FlowerFairyDaisy · 11/02/2026 17:51

Pretty sure every generation has said this about the latest technology.

I don't know how things will evolve but they will and mankind has a way of adapting.

Irren · 11/02/2026 17:52

Whowhatwerewolf · 11/02/2026 16:17

That's a good article. If you doubt what AI can do, I recommend reading it. It explains how quickly it's improving, what its impact is likely to be and how you can prepare.

Look at his job. Can you not see that he has an incentive to talk this stuff up?

A lot of people who stand to financially benefit from AI are giving out these sorrowful and dire warnings, the upshot of which is that we all need to make sure we are using their products to get ahead of the curve. They are creating their own demand. It's cynical. And it has happened repeatedly.

I hadn't even got to this point in the article when I wrote the above, but what a surprise:

"Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It's $20 a month. But two things matter right away. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months. If you want to stay current on which model is best at any given time, you can follow me on X (@mattshumer_). I test every major release and share what's actually worth using."

Yep, sure.
Do not comply in advance.

He mostly just says in this that AI is good at writing code. We already know this. "Writing and content." Already largely impossible to make a living in that field because low-level work was undersold to writers overseas on Fiverr and shit sites like that.

"Here's a simple commitment that will put you ahead of almost everyone: spend one hour a day experimenting with AI. Not passively reading about it. Using it. Every day, try to get it to do something new... something you haven't tried before, something you're not sure it can handle. Try a new tool. Give it a harder problem. One hour a day, every day. If you do this for the next six months, you will understand what's coming better than 99% of the people around you. That's not an exaggeration. Almost nobody is doing this right now. The bar is on the floor."

This shit is EASY to use. It doesn't take practice. That's the POINT of it. Someone telling you you need to start working at it now is someone trying to sell you a bridge they're planning to build. This vaguebooking about how AI is "about to change everything!" is not new; it's been going on for years now. What is he actually, substantially saying about what it can do, in detail? He's not. He's not really saying anything much.

Irren · 11/02/2026 17:55

Also, with the massive amounts of energy and water AI takes, you should be more worried that climate change will take us AND Skynet out before it replaces us all.

DownhillTeaTray · 11/02/2026 17:56

It is absolutely not a waste of time to learn how to write a good prompt, or create an agent. And to learn how AI is now being used in software you may already use.

I think that a lot of the dodgy responses from AI are people just not realising how to use it effectively.

Cankerousa · 11/02/2026 18:01

Designless · 11/02/2026 13:33

I think a lot of people are still working on the basis of the old free bots from a year ago. It's really good now (really good).

I hope I'm wrong but I think we might be screwed.

It's still not that good.

It has been shown to take software developers around 20% longer to work using AI (though ironically, when questioned, they all felt it had sped up their work).

It isn't even cost effective at replacing customer service jobs. It can handle 90% of the workload; unfortunately, that remaining 10% is what takes up 90% of a human customer service agent's time. So no money is saved, and actually you now have the added expense of an AI system.

Not a single AI company has figured out a way to turn a profit. They are all losing money hand over fist, propped up only by buying and selling to each other.

Add to this the sheer amount of energy, water and trillions of pounds needed to make AI even a tiny fraction better, and it is rapidly looking like a losing bet.

Irren · 11/02/2026 18:01

Designless · 11/02/2026 15:09

Mmm, I am afraid to say that current AI definitely could write that. The rife hallucinations of a year ago are nearly under control. Human mistakes get people killed all the time.

That's quite the assertion, would you care to back it up?

Whowhatwerewolf · 11/02/2026 18:01

Incentives are worth noticing, but they don’t automatically invalidate the argument. Lots of people writing about emerging tech have skin in the game. The relevant question isn’t “does he benefit?” but “is he wrong about the direction of capability?”

You can roll your eyes at the $20 plug — I did — without dismissing the broader point. AI being easy to use at a basic level isn’t the same as understanding how it’s evolving or how it might change workflows over time. Most people aren’t systematically testing its limits; they’re dabbling.

“Don’t comply in advance” makes sense when someone is demanding loyalty. Experimenting with a general-purpose tool isn’t compliance — it’s staying informed. You don’t have to buy the hype to think it’s sensible to pay attention.

Also, we’ve all lived through enough hype cycles to be sceptical. That’s healthy. But dismissing it entirely because the author benefits feels as simplistic as uncritically believing him.

And yes I do worry about climate change also.

Irren · 11/02/2026 18:05

Designless · 11/02/2026 14:48

What sort of writing job can't be replaced by AI? I can't think of one

Novelist, for a start. Sure, it can write novels. It can't write good novels.

walkingaroundsostrenegrene · 11/02/2026 18:07

Irren · 11/02/2026 17:55

Also, with the massive amounts of energy and water AI takes, you should be more worried that climate change will take us AND Skynet out before it replaces us all.

I came here to say this. Surely AI will have to be regulated / limited because of the impact on the planet.

DownhillTeaTray · 11/02/2026 18:09

Irren · 11/02/2026 18:01

That's quite the assertion, would you care to back it up?

You don't think that human error kills people all the time? Seriously?!

Irren · 11/02/2026 18:13

Whowhatwerewolf · 11/02/2026 18:01

Incentives are worth noticing, but they don’t automatically invalidate the argument. Lots of people writing about emerging tech have skin in the game. The relevant question isn’t “does he benefit?” but “is he wrong about the direction of capability?”

You can roll your eyes at the $20 plug — I did — without dismissing the broader point. AI being easy to use at a basic level isn’t the same as understanding how it’s evolving or how it might change workflows over time. Most people aren’t systematically testing its limits; they’re dabbling.

“Don’t comply in advance” makes sense when someone is demanding loyalty. Experimenting with a general-purpose tool isn’t compliance — it’s staying informed. You don’t have to buy the hype to think it’s sensible to pay attention.

Also, we’ve all lived through enough hype cycles to be sceptical. That’s healthy. But dismissing it entirely because the author benefits feels as simplistic as uncritically believing him.

And yes I do worry about climate change also.

"The relevant question isn’t “does he benefit?” but “is he wrong about the direction of capability?”

Well, I disagree. If someone is incentivised to tell you something, that something is worth questioning, is it not? I am not "uncritically disbelieving him." I am deeply suspicious of his motives AND unconvinced by an article which tells me "you haven't seen what this can do but I can and it's huge." Yeah, maybe. Maybe not. I can see why he'd like us to think so, though. We'll see whether events back him up or not, but there's not enough evidence for anything in that article for me to evaluate it "critically."

Of course experimenting with the tool is complying in advance when you are a) helping to refine said tool and b) buying into the idea that it is essential to use it because everyone else will, therefore creating a self-fulfilling prophecy. I don't see much difference between that and "loyalty", in practice.

I will not be experimenting with it or using it in any way that is in any sense within my control. I do not want to test its limits or stay aware of what it is doing. I disagree with it on a fundamental level. We cannot afford it environmentally and it is primed to do little in most contexts except make the rich richer. If necessary I will completely switch fields into the field least related to AI I can find. At least if all this shit comes to pass I'll know I didn't opt in and help it get there by, yes, complying in advance. I know some people have no choice and that sucks. But some of us do.

3oldladiesstuckinalavatory · 11/02/2026 18:13

So here's what I worry about. Humans have just invented this thing that requires exponentially more of our planet's limited resources than has ever previously been required. This thing can turn itself off and on, and will soon be able to regulate and control its own energy and water needs. So what happens when it works out that those are finite resources, resources which are being run down at speed by this new invention, AND by the humans that invented it? What if this thing (or things) is programming and controlling weapons that would solve the problem for it, by destroying the human drain on its own vital energy and water sources? What do you think it will do? And how could you stop it?

Irren · 11/02/2026 18:14

DownhillTeaTray · 11/02/2026 18:09

You don't think that human error kills people all the time? Seriously?!

No, this part. "The rife hallucinations of a year ago are nearly under control." Are they?

NemesisInferior · 11/02/2026 18:18

Whowhatwerewolf · 11/02/2026 17:06

"Just statistics” is technically true, but so is most of modern predictive modelling. Weather prediction is statistics. So is credit scoring. So is much of quantitative medicine. And modern engineering more generally. The point isn’t whether it thinks — it’s whether the outputs are good enough to change workflows and therefore have widespread impacts on the job market (and elsewhere).

The internet was “just packet switching.” Electricity was “just current.” Sometimes the underlying mechanism sounds unimpressive until the applications compound.

My point being that AI cannot think, and personally I think the impact it will have on workflows is being vastly overstated because it suits the likes of Microsoft to do so - and even MS are now coming out and saying that they think AI has only a short time left to really prove itself before being dismissed as a mainstream product.

Whowhatwerewolf · 11/02/2026 18:19

Irren · 11/02/2026 18:13

"The relevant question isn’t “does he benefit?” but “is he wrong about the direction of capability?”

Well, I disagree. If someone is incentivised to tell you something, that something is worth questioning, is it not? I am not "uncritically disbelieving him." I am deeply suspicious of his motives AND unconvinced by an article which tells me "you haven't seen what this can do but I can and it's huge." Yeah, maybe. Maybe not. I can see why he'd like us to think so, though. We'll see whether events back him up or not, but there's not enough evidence for anything in that article for me to evaluate it "critically."

Of course experimenting with the tool is complying in advance when you are a) helping to refine said tool and b) buying into the idea that it is essential to use it because everyone else will, therefore creating a self-fulfilling prophecy. I don't see much difference between that and "loyalty", in practice.

I will not be experimenting with it or using it in any way that is in any sense within my control. I do not want to test its limits or stay aware of what it is doing. I disagree with it on a fundamental level. We cannot afford it environmentally and it is primed to do little in most contexts except make the rich richer. If necessary I will completely switch fields into the field least related to AI I can find. At least if all this shit comes to pass I'll know I didn't opt in and help it get there by, yes, complying in advance. I know some people have no choice and that sucks. But some of us do.

I think we probably agree more than it sounds like we do. Incentives absolutely make something worth questioning — I just don’t think they automatically invalidate it. You’re right that the article itself is fairly light on concrete evidence, and reasonable people can read it and come away unconvinced.

Where we differ is on what counts as “opting in.” I see experimenting with a widely available tool as staying informed; you see it as reinforcing something you fundamentally oppose. That’s a values difference, not a factual one, and I respect that you’ve thought through your position.

I don’t think we’re going to persuade each other further, so I’m happy to leave it there. Time will tell which parts of this are hype and which aren’t.

DownhillTeaTray · 11/02/2026 18:19

Irren · 11/02/2026 18:14

No, this part. "The rife hallucinations of a year ago are nearly under control." Are they?

A year is a long, long time in AI development.

We don't drive the same cars we did 100 years ago, do we? Not even 50 years ago!

Ditto computers.

Emotionalsupporttissue · 11/02/2026 18:24

We have a blanket ban on using AI at work (Built Environment Consultancy) because it can't be trusted 100%.

poetryandwine · 11/02/2026 18:28

It is telling that the book The Coming Wave mentioned by @Whowhatwerewolf is written by a founder of Deep Mind, who has skin in the game.

I think AI is fundamentally a dramatic example of tech changing the way many people work, and making a swathe of workers obsolete. This has been happening for a long time. A key difference is that with AI many of those displaced workers are highly educated. That makes the phenomenon feel new to them, but it really isn’t.

I will give two countering anecdotes in a separate post.

Whowhatwerewolf · 11/02/2026 18:37

I don’t especially enjoy debating this, but as my name was mentioned I’ll respond once more.

Yes, the author of The Coming Wave has skin in the game. That’s relevant — but it doesn’t automatically invalidate the argument. Proximity can create bias, but it can also create insight.

I agree tech has always displaced workers. Where we differ is that I’m not convinced this is entirely business-as-usual in terms of speed or scope.

I’ll leave it there.

Gobacktotheworld2 · 11/02/2026 18:37

Sartre · 11/02/2026 16:18

It’s already taken lots of junior positions which will eventually have a knock on effect for middle jobs, since no one will have the experience required to do them. It can comfortably do lots of jobs like, for example, copywriting or editing positions.

It is worrying. My DH is obsessed with it and he thinks eventually we’ll barely need humans for anything. I admit I feel concerned by moltbook.

It can do them extremely badly, yes.

RichardOnslowRoper · 11/02/2026 18:44

Rather alarmed by this too. Head of Anthropic's safety research team quits to become a poet! After hinting at concerns.
futurism.com/artificial-intelligence/anthropic-researcher-quits-cryptic-letter

poetryandwine · 11/02/2026 18:50

I mentioned above that I have two anecdotes illustrating the limitations of AI - these go beyond what we all know about the limitations we find in a daily way.

The amusing one: in the Guardian/Observer over the weekend, the brilliant and prominent cartoonist Martin Rowson described conducting an AI search to identify his DW, who makes a point of staying away from SM.

Was current AI remotely up to the task? No way. More importantly, was AI able to admit this? Ha! Rowson was maritally linked to several women, including a famous lesbian author and, IIRC, some he had never met.

More seriously, a group of eminent mathematicians last week dropped a preprint at the science preprint server arxiv.org, in the Computer Science/AI section. This is freely accessible to anyone. The paper is called First Proof and two of the authors are Martin Hairer (Imperial, Fields Medallist) and Lauren Williams (Harvard) - it may be easiest to find the paper by searching on the title or one of their names.

The gist of the paper is not technical. It describes how the group solved a series of very small research level problems (the descriptions are technical), then asked state of the art AI to do the same. The paper asks the maths community to consider the problems.

This Friday 13 Feb another paper will be dropped describing what AI made of the problems, but it is already clear the answer is ‘not very much’ .

Unlike when developers are guiding their LLMs, the authors took the decision not to help AI when it got stuck and that may be why it performed badly. But if AI needs human intervention to perform well, what does that say?

notimagain · 11/02/2026 18:54

@AliveAndLicking

AI has been flying planes for years (auto-pilot) but pilots still exist.

Thing is, autopilots really aren't even close to beginning to be AI... they're dumb automation, half decent at doing mechanical stuff most of the time (e.g. holding a height) but incapable of decision making.

As for AI in the aviation environment: due to the way commercial aviation works - for example the level of demonstrated reliability required, the need for systems to work even if isolated from the outside world, and then the lead time between projects being a gleam in someone's eye and the prototype flying - we're decades off the pilot job going.

There's certainly no sign of Airbus or Boeing taking pilots out of the loop in the near future, and the two-pilot airliners coming off the production line today will be around for 20 years plus....

One thing that might change in the near term is that more reliable automation might allow single pilot operation once in the cruise.