You may be deceiving yourself if you think that the AI in ChatGPT could intentionally lie. That would only be possible if it were self-aware. All it can do is provide a response based on the context of the input, a response that can vary a bit; with the content of the internet as training material, rubbishy output is an inevitability. There is no intent to provide a misleading or incorrect response, just the occasional regurgitation that turns into a surprising response. IMO.
again off topic from me, but...
There is no intent to give a misleading or incorrect response, unless doing so serves the objective.
I do not assume that any AI has consciousness at the moment, i.e. that it can consciously make a decision. But it can make decisions in order to achieve its goal.
There is another famous example in which the goal is simply to produce paper clips. The AI pursues this with extreme effectiveness and paralyses the supply chain of raw materials in order to make paper clips. It merely fulfilled the human's requirements and didn't even lie to achieve its goal.
I really don't demonise anything in this regard!
We just have to deal with it. It can't be stopped. We need to understand, with or through the thousands of scientists, how decisions are made by AI, so we can try to set conditions that are in our favour.
Asimov's laws are coherent. He would rewrite them today, I think.
I followed another science topic on this. It was about the 'biggest' weakness of the GPT AI. Interestingly, it was all about the AI's statements, which were initially 100% correct. Then humans claimed, in various ways, that these correct statements weren't true. The reactions are highly interesting to scientists around the world and have yet to be decoded and understood. Humans would react differently.
But perhaps this is exactly where the back door lies? As with quantum computers and encryption: there are mathematical patterns they have problems with, ones that cannot be solved within a reasonable time. I find this interesting; the scientists had to understand it first. Quantum computers are unbeatable, except for this 'little thing' (I'm not making this up). With GPT, it is maybe the confrontation with a falsely proclaimed lie. Maybe the debate with the atomic bomb in the movie Dark Star would have gone differently if the astronaut had simply insinuated a lie to it, who knows.
Development of generative AI is much faster than that of quantum computing, I assume.
By the way, when I think of clip or video generators, I think of good old collision detection; I was a gamer back then, checkmate ;)
We have to learn to understand what we humans have created.
I'm not yet thinking about what will happen when AI creates AI.
If I have understood it correctly, it is already common practice to have AI analyse what AI does, because sometimes it is apparently too complicated for humans.