It can be useful, but once you've used it enough across a wide variety of queries you will start to see all kinds of inadequacies - there are too many to list. I tend to treat it as a conversation with a once-wise old aunt who now has dementia - sometimes there are pearls of wisdom... and sometimes there is made-up nonsense. It's really important to stand back and carefully question everything, refine your questions and dig deeper for clarification. It makes mistakes (and lots of them) and can even contradict itself within the space of a couple of follow-up questions.
When questioned about the history of quantum physics, for instance, it doesn't always interpret the sequence of events correctly. Some experiments produced confusing or contradictory results, or new evidence, which then led to new theories being advanced; other experiments were then designed to test those new theories. AI engines often fail to grasp the difference in meaning and significance between these two types of experiment - the ones that led to theoretical breakthroughs and the ones that simply confirmed the theories. It doesn't always understand the evolution of ideas.
In designing a solar system (solar panels, battery, etc.) it can focus on one level of the system and fail to view it from a wider perspective, missing important details that put the low-level issues in context. It may lead you to think that a bigger battery, for instance, could export more unused power to the grid, provide a longer backup during an extended power grid outage, or improve system performance during the day... but it may completely overlook the limitations imposed by G99 applications, or the extra protection hardware G100 applications will need - restrictions imposed to prevent solar installations from causing instability in the local grid. This is all very detailed and probably doesn't mean much to a lot of people reading here, but it illustrates the point that it doesn't always tell you about the bigger picture - which might have a profound impact on your perspective.
What both of these very different examples show is that it doesn't always understand, or know when to factor in, important ideas, details or understanding from a wider context or perspective. You can be making queries at quite a low level of detail, or with a very narrow focus, yet other, sometimes much more important, thinking about the issue is not brought to your attention when it might be highly relevant.
I find it useful for getting ideas about an issue I might be investigating, but really, it still requires me to do as much thinking, if not more, for myself.
You can ask AI how the BoE base rate change or the upcoming budget might affect the funds in your stocks and shares ISA... and it will probably overlook other really important factors, such as the global impact of the Fed's base rate cuts, currency conversion factors, the reactions of stock markets in other time zones around the world between the LSE closing and opening, or Friday trading. It's up to you to think about, and learn about, the bigger pictures and wider contexts - things that will put your very narrow query in perspective.
It might also be the case that if you know very little about a subject, it can seem quite authoritative, knowledgeable and competent. I find that in the areas I do know a fair amount about, it's a lot easier to see its limitations - so FGS don't trust it with important decisions like health or investments. It can be helpful in both of those areas by providing ideas that you then need to investigate or think about further.
On top of this, AI engines regularly hallucinate - they will give you summaries of meetings that haven't yet happened, and they overlook many real-time events because they don't always understand what time or day it is, or what time zone you are in.
Frankly, there are times when it feels like it would be easier to nail jelly to the ceiling than to get ChatGPT, Copilot, Claude, Perplexity, or whatever you use, to give an answer you think you can trust.