
AIBU?


To worry that there may be no hope for a good future thanks to AI

199 replies

Designless · 11/02/2026 12:26

I use it, it upskills me a lot, I am at the top of my game, but... I think I'll be lucky to reach retirement age still in work, and I despair for young people trying to get entry-level jobs. Everything that I did to get on the ladder is done by AI now.

I know the nebulous cope response is "that's what the luddites said - NEW jobs will arise" but I think this is different. AI can think. AI allows a handful of unbelievably wealthy people to control everything.

Someone please post something hopeful before I pop from despair thanks :(

OP posts:
SpaceRaccoon · 11/02/2026 18:58

RichardOnslowRoper · 11/02/2026 18:44

Rather alarmed by this too. Head of Anthropic's safety research team quits to become a poet! After hinting at concerns.
futurism.com/artificial-intelligence/anthropic-researcher-quits-cryptic-letter

Guy's having a nervous breakdown.
The whole thing is very on-brand for Anthropic.

KitsyWitsy · 11/02/2026 19:14

As someone with a master's in AI, this thread is fucking hilarious. The scariest thing to me is how stupid people are. The ignorance in this thread is a great example. People thinking because some AI models have performed poorly, they will always remain that way so no need to worry? Come on. It is improving at a rapid rate and WILL take over many jobs.

MigGirl · 11/02/2026 19:17

Designless · 11/02/2026 13:33

I think a lot of people are still working on the basis of the old free bots from a year ago. It's really good now (really good).

I hope I'm wrong but I think we might be screwed.

Nope, I never used AI a year ago; I've literally just started to use it now. And while I think AI software that has been specifically programmed to do certain tasks is a lot better, the current large language models I've tried don't seem very good.

What you're talking about is the development of true independent AI (it actually has a specific name but I can't remember it right now) and not these large language models, which are limited to the input of data available on the Internet. They are trying to develop them but they aren't yet available. Only then will I be truly worried about AI.

Whowhatwerewolf · 11/02/2026 19:21

I don’t think people are stupid. I think a lot of this is fear. When something feels destabilising, it’s very human to default to status-quo bias or optimism bias as coping mechanisms. That doesn’t make people ignorant — it makes them human.

There’s a difference between not understanding something and protecting yourself emotionally from an uncertain future. Threads like this tend to blur the two.

Sofado · 11/02/2026 21:19

Irren · 11/02/2026 18:05

Novelist, for a start. Sure it can write novels. It can't write good novels.

That isn’t a job for most people. Novelists tend to have another job to pay the bills.

LorenzoCalzone · 11/02/2026 21:52

A couple of thoughts on this

If we ended up with 5 mega-rich people and their AI workforce... surely they would still need consumers with money to buy whatever info/service they are selling; they would need dentists to fix their teeth after gorging on millionaire shortbread, tailors to make their gold suits, medical staff to heal them. Humans need humans.

Yes, people buy mass-manufactured bread, but many seek out artisanal loaves - the same will happen with information: human-produced content will be the desirable premium.

Working life and education might have to be reconsidered. If work can be done in 4 days rather than 5 then make the 4 day week the norm, let people retire earlier to open up the workforce, redesign the curriculum to help us master relationships, art, being human instead of grammar rules and handwriting. These things aren't bleak, they are liberating.

Gobacktotheworld2 · 11/02/2026 22:04

Who decides what is a good use of other people's time, especially if we are to have so much leisure?

Grammar is bliss to some of us.

Handwriting is an art I never mastered but I still don't think it is worthless.

BridgetJonesDaiquiri · 11/02/2026 22:05

MigGirl · 11/02/2026 19:17

Nope, I never used AI a year ago; I've literally just started to use it now. And while I think AI software that has been specifically programmed to do certain tasks is a lot better, the current large language models I've tried don't seem very good.

What you're talking about is the development of true independent AI (it actually has a specific name but I can't remember it right now) and not these large language models, which are limited to the input of data available on the Internet. They are trying to develop them but they aren't yet available. Only then will I be truly worried about AI.

Artificial general intelligence / AGI - it’s what the big AI players are ultimately aiming for - AI with the same cognitive ability as humans. That is followed by artificial superintelligence (ASI), which surpasses humans’ cognitive abilities.

Also, it is not true to say that current AI is limited to what is available on the internet. Many eminent researchers/scientists are already testing these models and using them to hone their own scientific research. Their “input” is far more than just scraping what’s available on the internet, and if they’re excited by the output then that speaks volumes about some of these models’ capabilities. It is already revolutionising the life sciences, for example, and will continue to do so.

StandFirm · 11/02/2026 22:10

Designless · 11/02/2026 13:31

It can do pretty much every cognitive function you can imagine and it can instruct other AI. It's a depressing insight into how predictable people are - even our brain patterns.

I agree with you. I think the issue with AI is that if (and admittedly it's a big 'if' because we just don't know) it continues to improve at the rate it's currently doing, there is a real shock on the horizon. The industrial revolution was a brutal enough shock, but it was much more gradual than what we're looking at, and all the more junior jobs that don't require high expertise - but are a way for humans to train - are at risk. This compromises our future as a species because, simply put, we might lose our ability to think, and not training the next generation will cost us dearly. As AI is NOT actually intelligence, we might end up stagnating as a species if we let it take over too many areas. Intelligence is like a muscle. If you're bedridden for long enough, your muscles will atrophy. Same with brain power. There's a real evolutionary danger tied to AI.

Designless · 11/02/2026 22:22

MigGirl · 11/02/2026 19:17

Nope, I never used AI a year ago; I've literally just started to use it now. And while I think AI software that has been specifically programmed to do certain tasks is a lot better, the current large language models I've tried don't seem very good.

What you're talking about is the development of true independent AI (it actually has a specific name but I can't remember it right now) and not these large language models, which are limited to the input of data available on the Internet. They are trying to develop them but they aren't yet available. Only then will I be truly worried about AI.

No, I'm talking about AI instructing other AI

Currently happening

OP posts:
catinateacup · 11/02/2026 22:23

tfresh · 11/02/2026 13:38

People here are clearly not using current models or are dreaming. The level of investment in AI, and what it can do with latest models is mind blowing.

Many white collar industries are seriously under threat in the next 1-5 years. Yes it is scary and new jobs will take time to surface.

People need to remember it doesn't need to be perfect to be better than a human. Humans also make mistakes.

Take the role of a paralegal, for example. Reviewing documents and making notes about them. Who is going to be better at doing that 24 hours a day, 7 days a week? An LLM or a human?

A human, obviously. You have an over-rosy view of AI if you think it can review its own work.

In reality, AI looks great any time it’s in a field you don’t happen to be an expert in. When it’s in a field where you are, it’s obvious how far it is from being able to replicate what humans do.

AI — and yes, even the very latest models — is fundamentally a technology that rates plausibility over correctness. If you aren’t an expert in something, it sounds very plausible. If you are, it’s often simply wrong. Plausible, but still wrong.

It’s also — and I hate to say it, but it’s true, unfortunately — a technology that impresses people who aren’t high-skilled or great thinkers. It produces work at the mean — that’s how it works. So it looks very useful to those who aren’t particularly expert or high-skill in their fields. If you are used to working at a high-skill knowledge job, it just can’t do what humans can. As someone said on a recent thread about this — if you are used to high-skill knowledge work where you can write several thousand words of polished, expert prose a day, AI isn’t very useful. (It would take me more time prompting an AI and then editing it than just writing something myself - even using the latest models.) This seems implausible to those who don’t do these kinds of jobs; but it’s true. In law, for example, AI might be able to generate template documents at a paralegal level of skill, but any higher than that and the level of interpretive facility the job demands is way beyond what AI can replicate. Those jobs aren’t just churning out contracts - they require judgment.

Really — and ironically, given how overblown the rhetoric around AI is at the moment — the jobs that it’s actually likely to take are in tech, or areas like marketing — not in jobs that actually require human judgment and the combination of knowledge with people skills. If I were a coder, or wrote product descriptions for sales catalogues or websites, I’d be worried.

catinateacup · 11/02/2026 23:05

Designless · 11/02/2026 14:48

What sort of writing job can't be replaced by AI? I can't think of one

I’m an academic, and whilst AI can produce simulacra of academic writing, e.g. summaries of articles and books, they are always missing key elements that only the original text gives you.

One example might be if you compare reading Spark Notes (which has been around for yonks), to reading a Shakespeare play. There’s only so far a summary will get you. In the end, it isn’t remotely like reading the original text. The entire point is reading it for yourself. Lots of texts just aren’t like information dumps. You can’t adequately summarise them in bullet points, because they aren’t just transparent content points to start with. You can’t read Kant by reading an AI summary (well, you can read the summary, but it can’t replace the original). You can’t become an expert in Kant’s philosophy by only reading AI summaries of it. And AI doesn’t need to read Kant’s philosophy for itself. What can it do with it? Why would you want Harry Potter, for example, in a bullet-pointed summary written by Claude? What’s the point of that?

The audience of a piece of writing is important. What’s the use of producing endless writing if there are no humans to read it? What’s the point of AI writing website content for only AI to read? And why would humans only want to read content generated by AI? Would you really trust medical information that an AI produced that hadn’t been edited or vetted by a human doctor with professional experience? Would you read a magazine article about how to deal with teenage mental health issues written by an AI and take it as gospel? Or a memoir written entirely by AI? What would be the point?

In education we have been familiar for decades with the “computerz will take your jobs” schtick; and, guess what — it turns out that students don’t actually want to learn in big online MOOCs, or sit by themselves staring at screens all day, or read only computer-generated material. They crave the human connection and responsiveness that only teaching in person provides. Part of my job is to synthesise and convey complex information to other humans. They don’t want it to come via an AI prompt or an online course. They want a human to be able to understand what they are interested in and need to know, and explain it to them in person.

The other parts of my job are knowledge generation and writing, but AI isn’t even close to being able to read 20 academic books and not only synthesise them but then come up with new ideas about the field. All it does is attempt to simplify material (often oversimplify it). What it can’t remotely do is complicate that material and generate new thinking.

And even if it could — who would be the audience for it? If AI got rid of all the academics, who would read the product? Who would be excited by the ideas? Who would teach them to students? Much as some of the techbros imagine a world in which AI would talk to AI while all of us meatsuits just paint the nails of the tech oligarchs and provide sex services, it’s a silly idea, because without humans there will be no consumers for products, readers for novels, thinkers whose work we teach to students, clients who need lawyers to interpret their cases to them and represent them in court, and so on and so on.

Cankerousa · 12/02/2026 06:30

catinateacup · 11/02/2026 22:23

A human, obviously. You have an over-rosy view of AI if you think it can review its own work.

In reality, AI looks great any time it’s in a field you don’t happen to be an expert in. When it’s in a field where you are, it’s obvious how far it is from being able to replicate what humans do.

AI — and yes, even the very latest models — is fundamentally a technology that rates plausibility over correctness. If you aren’t an expert in something, it sounds very plausible. If you are, it’s often simply wrong. Plausible, but still wrong.

It’s also — and I hate to say it, but it’s true, unfortunately — a technology that impresses people who aren’t high-skilled or great thinkers. It produces work at the mean — that’s how it works. So it looks very useful to those who aren’t particularly expert or high-skill in their fields. If you are used to working at a high-skill knowledge job, it just can’t do what humans can. As someone said on a recent thread about this — if you are used to high-skill knowledge work where you can write several thousand words of polished, expert prose a day, AI isn’t very useful. (It would take me more time prompting an AI and then editing it than just writing something myself - even using the latest models.) This seems implausible to those who don’t do these kinds of jobs; but it’s true. In law, for example, AI might be able to generate template documents at a paralegal level of skill, but any higher than that and the level of interpretive facility the job demands is way beyond what AI can replicate. Those jobs aren’t just churning out contracts - they require judgment.

Really — and ironically, given how overblown the rhetoric around AI is at the moment — the jobs that it’s actually likely to take are in tech, or areas like marketing — not in jobs that actually require human judgment and the combination of knowledge with people skills. If I were a coder, or wrote product descriptions for sales catalogues or websites, I’d be worried.

I'm a coder, and having seen what the latest models are capable of, I'm not worried at all.

As you say, I am sure that to someone who knows little about coding it looks impressive. Most of the time it will spit out snippets that will work, at least temporarily, or until you need them for a large project. It takes longer to fix the crazy stuff it does than just writing it yourself.

What I am concerned about is tech-illiterate bosses believing the obvious investment hype, wasting senior developers' time and creating a drought of experience for newbies.

Many industries will soon begin to suffer due to the RAM and GPU shortage; even after this obvious bubble pops, those chips can't be used in normal work PCs.

AI will be usable for some hobbyists, but it is light years away from being profitable with our current tech.

walkingaroundsostrenegrene · 12/02/2026 08:35

KitsyWitsy · 11/02/2026 19:14

As someone with a master's in AI, this thread is fucking hilarious. The scariest thing to me is how stupid people are. The ignorance in this thread is a great example. People thinking because some AI models have performed poorly, they will always remain that way so no need to worry? Come on. It is improving at a rapid rate and WILL take over many jobs.

Will it also become less damaging to the environment? I fully admit I am ignorant about AI, but I want to learn more.

StandFirm · 12/02/2026 08:55

catinateacup · 11/02/2026 22:23

A human, obviously. You have an over-rosy view of AI if you think it can review its own work.

In reality, AI looks great any time it’s in a field you don’t happen to be an expert in. When it’s in a field where you are, it’s obvious how far it is from being able to replicate what humans do.

AI — and yes, even the very latest models — is fundamentally a technology that rates plausibility over correctness. If you aren’t an expert in something, it sounds very plausible. If you are, it’s often simply wrong. Plausible, but still wrong.

It’s also — and I hate to say it, but it’s true, unfortunately — a technology that impresses people who aren’t high-skilled or great thinkers. It produces work at the mean — that’s how it works. So it looks very useful to those who aren’t particularly expert or high-skill in their fields. If you are used to working at a high-skill knowledge job, it just can’t do what humans can. As someone said on a recent thread about this — if you are used to high-skill knowledge work where you can write several thousand words of polished, expert prose a day, AI isn’t very useful. (It would take me more time prompting an AI and then editing it than just writing something myself - even using the latest models.) This seems implausible to those who don’t do these kinds of jobs; but it’s true. In law, for example, AI might be able to generate template documents at a paralegal level of skill, but any higher than that and the level of interpretive facility the job demands is way beyond what AI can replicate. Those jobs aren’t just churning out contracts - they require judgment.

Really — and ironically, given how overblown the rhetoric around AI is at the moment — the jobs that it’s actually likely to take are in tech, or areas like marketing — not in jobs that actually require human judgment and the combination of knowledge with people skills. If I were a coder, or wrote product descriptions for sales catalogues or websites, I’d be worried.

So it looks very useful to those who aren’t particularly expert or high-skill in their fields. If you are used to working at a high-skill knowledge job, it just can’t do what humans can. As someone said on a recent thread about this — if you are used to high-skill knowledge work where you can write several thousand words of polished, expert prose a day, AI isn’t very useful.

Agree with this but my real concern is that you need to learn to walk before you run. The next generation of expert workers isn't born with those skills. They are built, at human speed and human scale, over a number of years. If we routinely bypass that stage because we use AI tools that are able to do the grunt work, what are we training the future human experts on? I think we focus too much on the limitations of AI rather than on those we are building for ourselves down the line. We keep thinking of what we train machines on but we have a terrible habit of not thinking of how we train the next human generation.

eurochick · 12/02/2026 09:02

A few people on this thread have suggested law, and litigation in particular, as AI-proof industries. I disagree and think we can add these to the areas under threat. As well as AI being used to produce documents used in litigation, there are already attempts to develop AI as the decision-maker in disputes.

There is already a question over what junior litigators can do. For years their main role was reviewing documents and preparing court bundles. E-disclosure took a lot of the grunt work out of document review. So their work became preparing research notes and first drafts of documents. These are tasks for which AI is already well-suited. You (currently at least) need senior lawyers to review everything and give strategic advice, but how are the senior lawyers of the future going to come through if there is no junior work left for humans?

OrdinaryMagicOfAcorns · 12/02/2026 09:10

I don’t believe ‘new jobs will arise’. Human needs are all being met. The only new jobs that are likely to keep being sourced are new ways of abusing women to provide for men’s pathetic, never-ending sexuality. And that’s only because they’re being sold the need for more and more sexuality. The BBC’s Century of the Self programme is always a good watch to learn how markets are created for completely unnecessary shit that benefits rich Americans primarily.

Olinguita · 12/02/2026 09:43

I do worry about this quite a bit. However, I think the wheels are starting to come off the AI hype train. There are real limits to what AI can do; companies that have invested heavily in AI aren't seeing the return on investment they would have expected; and I think we are about to run into serious physical constraints, e.g. strain on water resources and chips for data centres.
In my industry I see a whiff of desperation among c-level executives who are forcing AI on their employees in order to justify the huge amounts they have invested in it. I don't see much visionary thinking or leadership from these people. There is no grand ambition for what they actually want the AI to achieve. Their main motivation is not being overtaken by a competitor that uses AI. It's a purely defensive position. The biggest risk in the short term isn't AGI, it's corporate hubris around AI.
If you want to go down a rabbit hole check out the Better Offline Reddit and podcast by Ed Zitron. It will give you another perspective https://www.reddit.com/r/BetterOffline/
Also, be wary of AI doomers and people trying to scare you about the "terrifying" capabilities of AI and how we are all about to be rendered useless and obsolete, and ask whether the person writing this has some vested interest in scaring us. They are usually trying to sell something or impress investors.

BlooomUnleashed · 12/02/2026 10:34

It seems obvious that if there are no jobs, the bulk of humans will have no money, so who will be buying all the AI/robotics-created goods and services?

And in order to be wealthy in reality, not just on paper, wealthy people need products and services to buy, or their wealth is just metal, rocks and paper.

People who make products and services (with AI and robotics) need customers, or there is no point making stuff and offering services. A tiny handful of super-wealthy people can’t generate enough profit to make lots of competing businesses viable.

Limited demand = limited supply.

The vast swathes of unemployable, hungry and not-well-pleased humans then end up being a threat to the super wealthy AND the precious few who supply them with AI+robotics services and goods.

A gilded cage is not financial freedom. It’s just a nice prison.

Governments (unpaid, cos it’s not like they make anything or provide services people WILLINGLY pay for) can’t sustain 99% of the global population on the taxes of a tiny handful. Especially if the tiny handful choose not to co-operate.

So… I suppose things will evolve so new forms of employment replace old forms of employment. To keep the system stable enough to make being at the top of the pile attractive, fun and relatively free from threat of the angry mob.

Or not. And it all goes Mad Max.

I can see the potential for “actual human relationship/connection” services and “human made” non mass production items having added value due to being a status symbol. But that will still require a lot of human employment in some form, so demand and supply don’t dribble down into being an insignificant sized economy.

And in other news, I am working on making my veg patch much bigger and investigating quails. But only as an occasional hysterical displacement activity, because there is no way I can defend my garden if it all goes to shit. Most of the time I focus on Today. Cos the future is unknowable and I don’t want to lose today worrying about a version of the future I have no control over, and might never happen.

The Serenity Prayer is my friend 😬

JuliettaCaeser · 12/02/2026 10:39

That’s exactly my question. If there’s no market for these services because the majority don’t have jobs to pay for anything, and therefore governments have no tax revenue if there are no jobs for the majority... then what’s the plan?!

Not sure 100,000 people living like gods and everyone else desperately poor, idle, unfulfilled and furiously angry is a particularly desirable outcome - even for the chosen few.

BlooomUnleashed · 12/02/2026 10:54

JuliettaCaeser · 12/02/2026 10:39

That’s exactly my question. If there’s no market for these services because the majority don’t have jobs to pay for anything, and therefore governments have no tax revenue if there are no jobs for the majority... then what’s the plan?!

Not sure 100,000 people living like gods and everyone else desperately poor, idle, unfulfilled and furiously angry is a particularly desirable outcome - even for the chosen few.

Edited

Maybe the super wealthy and their handful of AI+robotics service/goods providers will all fuck off to a nice, big island surrounded by a robot army to keep us out.

And we’ll go back to growing food, making things from wool/silk/linen/wood etc, using shells and polished stones as currency.

I’m not saying that’s a great outcome. But I for one can’t pretend I wouldn’t be pleased if I could have a donkey and cart and not have to walk everywhere. (three driving lessons, three accidents, I prioritised The World Being Safer over me doggedly trying to get a licence)

I’ll miss starlink though. I bloody love it. Telecom Italia internet in my deeply rural hamlet was shite. On the bright side, my eyes will feel better and my internet addiction will be cured.

Off to Serenity Prayer again before my brain wastes my whole day off trying to imagine a future that scares me.

moderate · 12/02/2026 11:29

OrdinaryMagicOfAcorns · 12/02/2026 09:10

I don’t believe ‘new jobs will arise’. Human needs are all being met. The only new jobs that are likely to keep being sourced are new ways of abusing women to provide for men’s pathetic, never-ending sexuality. And that’s only because they’re being sold the need for more and more sexuality. The BBC’s Century of the Self programme is always a good watch to learn how markets are created for completely unnecessary shit that benefits rich Americans primarily.

The only new jobs that are likely to keep being sourced are new ways of abusing women to provide for men’s pathetic never ending sexuality.

Scratching my head at this. Why would men pay more for real women on OnlyFans when perfect facsimiles of women are at their personal beck and call for less? I think a loneliness epidemic is much more likely as the dopamine-hit psychology of online gaming finds its way into ever-more-ergonomic sex toys which learn the exact way you wish to be touched.

moderate · 12/02/2026 11:34

BlooomUnleashed · 12/02/2026 10:34

It seems obvious that if there are no jobs, the bulk of humans will have no money, so who will be buying all the AI/robotics-created goods and services?

And in order to be wealthy in reality, not just on paper, wealthy people need products and services to buy, or their wealth is just metal, rocks and paper.

People who make products and services (with AI and robotics) need customers, or there is no point making stuff and offering services. A tiny handful of super-wealthy people can’t generate enough profit to make lots of competing businesses viable.

Limited demand = limited supply.

The vast swathes of unemployable, hungry and not-well-pleased humans then end up being a threat to the super wealthy AND the precious few who supply them with AI+robotics services and goods.

A gilded cage is not financial freedom. It’s just a nice prison.

Governments (unpaid, cos it’s not like they make anything or provide services people WILLINGLY pay for) can’t sustain 99% of the global population on the taxes of a tiny handful. Especially if the tiny handful choose not to co-operate.

So… I suppose things will evolve so new forms of employment replace old forms of employment. To keep the system stable enough to make being at the top of the pile attractive, fun and relatively free from threat of the angry mob.

Or not. And it all goes Mad Max.

I can see the potential for “actual human relationship/connection” services and “human made” non mass production items having added value due to being a status symbol. But that will still require a lot of human employment in some form, so demand and supply don’t dribble down into being an insignificant sized economy.

And in other news, I am working on making my veg patch much bigger and investigating quails. But only as an occasional hysterical displacement activity, because there is no way I can defend my garden if it all goes to shit. Most of the time I focus on Today. Cos the future is unknowable and I don’t want to lose today worrying about a version of the future I have no control over, and might never happen.

The Serenity Prayer is my friend 😬

Edited

Money has worked for a while to allow for efficient commodification of labour.

If your nanobot army can defend your leisure complex built over a geothermal power source, what need have you for money?

catinateacup · 12/02/2026 11:42

StandFirm · 12/02/2026 08:55

So it looks very useful to those who aren’t particularly expert or high-skill in their fields. If you are used to working at a high-skill knowledge job, it just can’t do what humans can. As someone said on a recent thread about this — if you are used to high-skill knowledge work where you can write several thousand words of polished, expert prose a day, AI isn’t very useful.

Agree with this but my real concern is that you need to learn to walk before you run. The next generation of expert workers isn't born with those skills. They are built, at human speed and human scale, over a number of years. If we routinely bypass that stage because we use AI tools that are able to do the grunt work, what are we training the future human experts on? I think we focus too much on the limitations of AI rather than on those we are building for ourselves down the line. We keep thinking of what we train machines on but we have a terrible habit of not thinking of how we train the next human generation.

Yes — I completely agree with this. There’s a serious risk of deskilling ourselves by outsourcing to AI all the things we use to train ourselves how to think.

Compare AI to the calculator. A fantastic innovation - but you still have to train kids in how to do mental arithmetic and times tables to give them the cognitive bases for building higher level mathematical understanding. Nobody has to use a slide rule any more; but we now understand (after a somewhat rocky period in the 80s-2010s) that you do not get a population of people able to do high-level STEM thinking if they lack the basic building blocks of mathematical skill. With AI, we are in danger of losing sight of the fact that unless you start from the basics of teaching language skills and thinking across all disciplines, you will be left with a population who can’t work at higher skill levels, or check and correct AI output.

An important aspect of the debate is that AI curiously fits in very well with a certain reactionary impulse in contemporary politics, which regards education and knowledge work as suspicious, and would quite like to do away with higher education in particular.

This is evident both in the general “degrees are useless/young people should train as plumbers/too many kids go to university” prattle that circulates widely in the UK, and in the elitist techbro enthusiasm for the idea that AI will consign to the meatsuit labour-market bin all those annoyingly educated lawyers and teachers and civil servants and so on, who might point out the drawbacks of AI, or at the least resist the transformation of the labour market into a free-for-all money grab by tech workers and oligarchs.

All of it is profoundly individualist and capitalist in a particular kind of wild-west American (and Russian) form — anti-oversight, anti-regulation, anti-education.
It’s particularly telling that some of the people recently exiting AI companies are going off to do things like poetry degrees and sounding alarms about regulatory oversight of AI.

BlooomUnleashed · 12/02/2026 11:46

moderate · 12/02/2026 11:34

Money has worked for a while to allow for efficient commodification of labour.

If your nanobot army can defend your leisure complex built over a geothermal power source, what need have you for money?

Status

Not everybody is motivated to earn loads merely for the comfort it provides. They crave the status.

Without a tangible (on a day to day level) hierarchy that you can strive towards working your way up on, or show off being at the top of, some are going to lose their sense of purpose.

I spent a slightly surreal and brief time in the high society of another country. You’d think comfort, from any source, would be enough. But I noticed that a not insignificant number of the people there were more motivated by the competition than by the tangible luxury of their lives.

It’s also why I think a poster above has a point. Sexbots will take the work of those who just provide personalised sexual pleasure. But those who need to be loved, adored and admired will need humans, cos a bot won’t be “real” love/adoration/admiration.

Same for sexual sadists. They’ll need genuine human anguish to satisfy them. Faux bot anguish won’t scratch the itch.

< more Serenity Prayer required >