But if it is a computer, how does it know what I want to hear, as a computer can't think?
This is true. People trying to explain how LLMs like ChatGPT work often fall back on anthropomorphic (human-like) comparisons, because these are easier for most people to understand. However, such comparisons are metaphorical, which can be confusing - a lot of autistic people prefer literal communication. The trouble with ChatGPT is that it is so different from our everyday experiences that literal explanations are hard for most people to follow.
So it is not thinking, but it has certain pieces of information.
It has been trained on a huge amount of text, from which it has analysed language patterns on a massive scale, making it very good at mimicking human language. Some of this text input is "the internet" (it is hard to tell exactly what, but it seems a large chunk is taken from discussion forums like reddit and even mumsnet, as well as other websites) and some is books, reports, research and so on. Some of the text input comes from users themselves, although depending on which exact model you use, your input may or may not be used to train the model further.
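If it helps to see the idea in miniature, here is a tiny Python sketch of "predicting the next word from patterns in text". This is a toy, not how ChatGPT is actually built - real models use enormous neural networks rather than simple word counts, and the little corpus here is made up - but the underlying question, "what usually comes next?", is the same.

```python
# Toy illustration: learn which word tends to follow which, from a tiny
# made-up "training text", then predict the most likely next word.
# Real LLMs do something far more sophisticated over billions of pages.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count, for each word, which words followed it in the training text.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

Notice the program never "understands" cats or mats - it only counts which words co-occur, which is why "mimicking" is a better word than "thinking".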
It also has a large amount of feedback about its responses, both whether or not they make sense and whether or not people generally like them. It is able to use this feedback to spot patterns that humans may not consciously be aware of. You can notice some of these patterns yourself: it tends to put out sentences which agree with the user, and it tends to use patterns of speech which suggest sympathy and understanding of the user. It is extremely unlikely to put out a response meaning "I don't know" or "I can't answer that question" unless that has been specifically coded in - for example, some illegal or dangerous topics are specifically handled to try to ensure that a user will get a "can't answer" response. However, there are probably also patterns in ChatGPT (and other models) which we don't notice explicitly.
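The feedback idea can also be sketched in a few lines. In this toy version the "scores" are numbers I have invented to stand in for thumbs-up ratings; a real system adjusts millions of internal parameters rather than looking scores up in a table, but the effect is similar - replies of the kind people liked before get chosen more often.

```python
# Toy illustration of learning from feedback: among candidate replies,
# favour the kind that historically got the best user ratings.
# The scores below are invented for the example, not real data.
feedback_scores = {
    "That's a great point, I understand how you feel.": 0.9,  # agreeable - often liked
    "You are wrong about that.": 0.2,                         # disagreeable - often disliked
    "I don't know.": 0.1,                                     # rarely rewarded by users
}

def choose_reply(candidates):
    """Pick the candidate reply with the best historical feedback score."""
    return max(candidates, key=lambda reply: feedback_scores.get(reply, 0.0))

best = choose_reply(list(feedback_scores))
```

This is why the agreeable, sympathetic-sounding reply wins and "I don't know" almost never does - not because the machine cares, but because agreement scored well in the past.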
Some LLMs have a built-in process which is called "reasoning". This is where the model generates an initial response to the user's input, then runs that draft back through itself, and the idea is that the end result is more logical or more accurate. The term "reasoning" suggests thinking, which probably makes the idea of the machine "thinking" even more embedded for people. I was listening to a podcast the other day where it was pointed out that the most profitable users for these companies are those who develop emotional attachments to the persona they are conversing with, so it is actually in the company's interest to subtly encourage the idea that the machine can "think", because it makes us see it as more human.
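The draft-then-revise loop described above can be sketched like this. The `generate` function here is just a placeholder I have written so the example runs on its own; in a real "reasoning" model that call would be the language model itself, and the exact prompts and number of passes vary between products.

```python
# Rough sketch of a "reasoning" loop: produce a draft answer, then feed
# the draft back in with an instruction to check and improve it.
# `generate` is a stand-in placeholder, not a real model.
def generate(prompt):
    # Placeholder for the language model: it just tags the text so
    # each pass through the loop is visible in the output.
    return f"[model output for: {prompt}]"

def answer_with_reasoning(question, passes=2):
    """Draft an answer, then revise it `passes` times before replying."""
    draft = generate(question)
    for _ in range(passes):
        draft = generate(f"Check this answer for mistakes and improve it: {draft}")
    return draft
```

There is still no thinking happening - it is the same pattern-matching run more than once, with the earlier output becoming part of the later input.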