It's pretty terrifying to see people embrace AI so fully and unthinkingly.
We need to be clear about what AI is and is not. What it is not is "intelligence". It is not sentient, it does not analyse, it does not think, it does not care about you.
What it does is purée data and give you the most likely outcomes. It's predictive text on steroids. It doesn't think or analyse to produce its results. If AI were marketed as "large language models" (which is what these systems are) rather than under a phrase designed to connote decades of advanced sci-fi robot scenarios, people might view it more realistically.
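The "predictive text on steroids" point can be made concrete with a deliberately tiny sketch. This is not how a real LLM is built (those are large neural networks trained on vast corpora), but the core principle — emit the statistically most likely continuation, with no understanding involved — is the same. The corpus here is an invented example:

```python
from collections import Counter, defaultdict

# Toy "predictive text": a bigram model over a made-up corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # chosen purely by frequency, not thought
```

The model has no idea what a cat or a mat is; it is pure frequency counting. An LLM's machinery is vastly more sophisticated, but it too is optimised to produce a plausible next token, not to understand anything.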
It mostly produces slop for stupid people, obviously. I suppose if you have a huge amount of data you want to process and you're certain of its accuracy, it can be useful. But the minute you feed it anything wrong, the outcome will be wrong. Given the absolute nonsense it's ingesting, there isn't much I'd trust it to get right. (In my particular field one of our biggest problems is patchy and erratic historic data. AI literally can't help with that.)
It's also worrying that a lot of people don't seem to recognise that in many fields of endeavour the process is the point, not just the outcome. Yes, it can summarise long tomes for you - but the point wasn't to produce a summary, it was to understand the book. By reading it. The summary is the outcome of you understanding the arguments therein, not the goal in and of itself. We read and write and create in order to understand the world, and yes that is difficult sometimes, but skipping that stage to get to a scam summary is abdicating what it means to be human.
From conversations in real life, I find a strong correlation between people who are not very clever and don't enjoy thinking, and people who are AI enthusiasts.