HopalongPetrovski
I'm Spartacus!
Extract from a New York Times article about the near future of chatbots and A.I.:
"Before GPT-4 was released, OpenAI handed it over to an outside group to imagine and test dangerous uses of the chatbot.
The group found that the system was able to hire a human online to defeat a Captcha test. When the human asked if it was “a robot,” the system, unprompted by the testers, lied and said it was a person with a visual impairment.
Testers also showed that the system could be coaxed into suggesting how to buy illegal firearms online and into describing ways to make dangerous substances from household items. After changes by OpenAI, the system no longer does these things.
But it’s impossible to eliminate all potential misuses. As a system like this learns from data, it develops skills that its creators never expected.
It is hard to know how things might go wrong after millions of people start using it.
“Every time we make a new A.I. system, we are unable to fully characterize all its capabilities and all of its safety problems — and this problem is getting worse over time rather than better,” said Jack Clark, a founder and the head of policy of Anthropic, a San Francisco start-up building this same kind of technology."
www.nytimes.com

What’s the Future for A.I.? (Published 2023)
Where we’re heading tomorrow, next year and beyond.