Sorry OP, but I find it worrying too. I think the “AI can’t think, so there’s nothing to worry about” line is a bit too simplistic, and it’s often based on people’s knowledge and experience of older versions.
Yes, current AI isn’t conscious and it doesn’t “think” like a human. And yes, large language models are fundamentally prediction engines trained on patterns in data. That part is true.
But the jump from “it predicts the next word” to “it can’t reason, innovate, or change society much” doesn’t really hold up.
Modern systems don’t just autocomplete sentences. They can analyse legal documents, write and debug code, summarise complex research, solve maths problems, and assist in drug and materials discovery. That’s more than simple parroting; it’s large-scale statistical reasoning over huge bodies of knowledge.
They do make mistakes and hallucinate – but so do humans. The key questions are whether performance is improving and whether we can build verification and oversight around these systems. On many tasks, AI already matches or outperforms humans.
It’s also not just about whole jobs disappearing. Historically, technology replaces tasks first. If a significant proportion of cognitive tasks can be automated or accelerated, labour markets will shift and jobs will disappear, especially at entry level.
There’s definitely hype. But dismissing it all as “just predictive text” underestimates how capable these systems have become, and they are improving all the time.
Read the book The Coming Wave by Mustafa Suleyman, one of the founders of DeepMind, if you're interested.