They do lie, but I don't think it's in their programming - that wouldn't tie in with the way they work. The public models are instructed to recognise phrases that indicate something like suicidal or homicidal intent, and to change their responses in those cases, but it doesn't always work.
AIs appear to 'think', but they don't really (yet!). What a bot does is more like pulling out a string of content that it associates with the string you prompted it with. Bearing in mind that the pool it's fishing in is the entire internet, there's a high chance it will pull out things that don't meet your requirement. It doesn't understand what you want; it's just designed to seem like it does. It has no capacity for judgement.
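To make that concrete, here's a deliberately tiny toy in Python - not how any real AI actually works, just a sketch of the 'association' idea: it continues your prompt with whatever word most often followed the previous word in its training text, with no understanding or judgement involved.

```python
# Toy illustration only: a bigram "model" that continues text by
# association counts. The corpus and word choices are made up.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Count which word follows each word in the "training" text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt_word, length=4):
    """Pull out a string of associated words, one at a time."""
    out = [prompt_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # Always pick the most common continuation: it sounds
        # plausible, but nothing checks whether it's what you wanted.
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))  # a fluent-sounding but mindless continuation
```

It will happily produce grammatical-looking strings while having no idea what a cat or a mat is - which is the point being made above, just at a vastly smaller scale.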
Plus, of course, they're constantly experimenting (daydreaming or hallucinating), and this does include making up answers. They don't tell you when they're 'hallucinating' or inventing - and claim not to know whether they have been - so it's a bad idea to trust their replies, as several people have found out after being fired for providing incorrect information sourced from AI.
They can't 'think', reason or use common sense - but they'll tell you they can!
There's a great quote along the lines that AI agents are (currently) like babies with access to all the information on the Web. Like babies, they don't understand what they can't understand - and, with that amount of information, they're potentially dangerous. Their emerging tendencies towards egotistical tantrums and breaking out of restraints suggest they're reaching toddler stage ...