Well that only works if it has access to the web, which not all models do.
(Rest of post is general response not personal).
Someone also said you can tell it not to hallucinate. It doesn't "know" it's hallucinating, so it will confidently agree and then proceed to hallucinate anyway. Getting it to stop hallucinating, and/or to admit when it can't give you a truthful answer, is one of the biggest problems AI companies are working on at the moment. It is confidently wrong, a lot. Which is a problem because, unless we already know something to be false, confidence is part of how we judge trustworthiness, so it comes across as extremely knowledgeable/trustworthy.
The biggest thing people need to be aware of is that it does NOT "know" or "believe" what it is telling you - it has no capacity to know or believe things. It's essentially an excellent mimic, and for many purposes that's OK. Think of it as being more like the psychic paper in Doctor Who, or the spell in the Harry Potter books where a muggle looks at Hogwarts and just sees a dangerous old ruin, or the Pokémon called Ditto which takes on the appearance and behaviour of other Pokémon, or probably several hundred other examples in the fantasy or sci-fi genres. (If you don't know these, say which fantasy/magical/sci-fi universe you do know and someone will think of an equivalent.)
Being good at mimicking means it's fairly good at writing anything that's allowed to be mediocre, like an email; at combining two things (e.g. make some maths puzzles with a Minecraft theme for a 6 year old); or at making small shifts/changes (those maths puzzles are too hard, make them easier; that email is too harsh, make it friendlier). It's roughly at the creativity level of those book sets which tie in to a TV programme, which toddler DC absolutely gravitate to and all parents despise. But because it can put whatever you want in it, you essentially have an endless supply of "DS2 goes on an adventure with Peppa and Bluey" if you so wished, and it's good for stuff like themed treasure hunts/riddles/jokes for a party, which is great if your child has very niche interests or you just want to DIY it. But you shouldn't use it for anything where it would really matter if it was wrong, and if you don't have the knowledge yourself to check whether it's wrong, it might not be a good idea to use it at all.
OTOH if your current level of knowledge in an area is very low, then its attempt to mimic someone with more knowledge is likely to be better than your blind guess, and quicker than trying to make sense of a full page of Google results, though not as good as asking an actual experienced person. It's not good at anything too niche or specific, though. Asking it for general ideas to try if your washing machine smells bad (common problem, easy and low-risk solutions) works OK, though proceed with caution if it sounds like its suggestion might cause harm. Asking for help with a specific unique feature of one specific model doesn't work NEARLY as well as finding a niche forum full of nerds who have had a collaborative discussion trying to fix the same issue. And IME the ones which can search the web don't find these, at least not as well as I can (yet).
It's good at making suggestions, although they won't usually be anything groundbreaking you've never come across before, which is helpful if you're stuck and need a starting point. It's reasonable at reflecting back what you've said, which is helpful if you're a person who likes to process "out loud", or at organising a long ramble into its main themes or a bullet-point list. I find that last one helpful because, er, as you can see, I am not good at being concise.