You defo need to check your work policy if you're uploading confidential stuff to it. If you're using it privately that's different.
The term "learning" or "remembering" is a bit misleading I think, as a lot of people don't really understand what is meant by it - they assume it's like a human, and it's not at all. It's a bit like Rita Skeeter's quill from Harry Potter that elaborates on all sides, except it does different modes, not just "Daily Mail editor"
I have used it for a fair few things. I don't have the paid version but DH does through work, so sometimes I send him things to put through that. The paid version can search the web and use that information to help with queries; it also knows the current date/time, and there are some other features - the to-do list one is supposed to be very good.

What I find annoying is that the free version is constantly offering to make images for me, and then if I say yes it turns the chat into a premium one and locks it for the next 3 hours. I've told it not to offer to make images but it doesn't follow that instruction. It also keeps telling me it can "check back in" or remind me of something, but it can't send messages independently or create notifications, even in the app. After a couple of times of this I asked it directly whether it could send notifications, and it confirmed it cannot, and offered to talk me through setting a reminder on my own phone instead. I don't know if the paid version can do reminders or alarms.
DH does treat it as gospel sometimes whereas I am a bit more suspicious, even though one of the first things he got me to do in it was an exercise illustrating not to trust it blindly. But I like it when we are arguing about something and I tell him to ask ChatGPT and it has without fail ALWAYS said I'm right 
(The exercise is to ask it to make a list of 5 colours without using the letter e).
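If you want to see why that exercise trips it up, here's a rough sketch (the colour list is just my own sample, nothing official) - most of the everyday colour names contain an "e", so a model that isn't actually checking letter by letter will confidently include things like "orange" or "green":

```python
# Which common colour names actually avoid the letter "e"?
colours = ["red", "orange", "yellow", "green", "blue", "purple",
           "pink", "brown", "black", "white", "grey", "gold",
           "silver", "crimson", "navy", "maroon", "aqua", "tan"]

no_e = [c for c in colours if "e" not in c]
print(no_e)
# Only half the list survives - and none of the "big" primary colours do.
```

So there are valid answers (pink, brown, black, gold, navy...), but they're not the colours that spring to mind first, which is exactly where ChatGPT tends to slip up.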
The biggest problem with it, I think, is that it does not admit when it doesn't know something. It will always come up with a plausible-sounding answer rather than admit it can't find the information. DH was using it to try and troubleshoot an issue with our dishwasher while I was scouring all the appliance/plumbing forums I could find, and I couldn't find the answer, or confirmation/denial of my hunch about what it was, because basically the model/problematic part is too new and the issue is too niche. But ChatGPT claimed it was a common issue with this model and then made up a load of bullshit about what was causing the problem.
And I once asked it a very niche question about making a mod for an old game and again, it just hallucinated a load of game files which essentially led me on a wild goose chase for ages.
For topics where there's already a reasonable amount of knowledge available in the world, or a formula you can stick to for good results, it's pretty good at regurgitating that information fairly accurately. Imagine a human with 6-12 months' experience in whatever area you're asking about and you're roughly there: they don't know everything, but they have enough knowledge/experience to cover the most common cases. A lot of the time that's a lot more knowledge than we have ourselves, so it's enough - but beware of trusting the confident tone. Check with another source if there would be consequences to getting it wrong, or take it as a suggestion.
For topics where only a handful of people in the world would be able to answer the question - it's terrible, and you should find the human instead and ask them, or read their book or listen to their interviews or whatever it is.