Quite like reading this guy's articles and thoughts. He did one on neuromorphic computing I posted like last year which included BRN.
Quite a well-credentialed gentleman.
This one's a take on ChatGPT.
Posted on April 4, 2023 by steve blank
The world is very different now. For man holds in his mortal hands the power to abolish all forms of human poverty and all forms of human life.
John F. Kennedy
Humans have mastered lots of things that have transformed our lives, created our civilizations, and might ultimately kill us all. This year we’ve invented one more.
Artificial Intelligence has been the technology right around the corner for at least 50 years. Last year a set of specific AI apps caught everyone's attention as AI finally crossed from the era of niche applications to the delivery of transformative and useful tools:

- DALL-E for creating images from text prompts
- GitHub Copilot as a pair-programming assistant
- AlphaFold to calculate the shape of proteins
- ChatGPT 3.5 as an intelligent chatbot

These applications were seen as the beginning of what most assumed would be domain-specific tools. Most people (including me) believed that the next versions of these and other AI applications and tools would be incremental improvements.
We were very, very wrong.
This year, with the introduction of ChatGPT-4, we may have seen the invention of something with the equivalent impact on society of explosives, mass communication, computers, recombinant DNA/CRISPR and nuclear weapons – all rolled into one application. If you haven't played with ChatGPT-4, stop and spend a few minutes to do so here. Seriously.
At first blush ChatGPT is an extremely smart conversationalist (and homework writer and test taker). However, this is the first time ever that a software program has become human-competitive at multiple general tasks. (Look at the links and realize there's no going back.) This level of performance was completely unexpected – even by its creators.
In addition to its outstanding performance on what it was designed to do, what has surprised researchers about ChatGPT is its emergent behaviors. That's a fancy term that means "we didn't build it to do that and have no idea how it knows how to do that." These are behaviors that weren't present in the small AI models that came before but are now appearing in large models like GPT-4. (Researchers believe this tipping point is a result of the complex interactions between the neural network architecture and the massive amounts of training data it has been exposed to – essentially everything that was on the Internet as of September 2021.)
(Another troubling potential of ChatGPT is its ability to manipulate people into beliefs that aren't true. While ChatGPT "sounds really smart," at times it simply makes things up, and it can convince you of something even when the facts aren't correct. We've seen this effect in social media when it was people who were manipulating beliefs. We can't predict where an AI with emergent behaviors may decide to take these conversations.)
But that’s not all.
Opening Pandora’s Box
Until now ChatGPT was confined to a chat box that a user interacted with. But OpenAI (the company that developed ChatGPT) is letting ChatGPT reach out and interact with other applications through an API (an Application Programming Interface). On the business side, that turns the product from an incredibly powerful application into an even more incredibly powerful platform that other software developers can plug into and build upon.
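For the developers in the audience, here is a minimal sketch of what "plugging in" looks like, using OpenAI's Python client as it existed in early 2023 (the pre-1.0 ChatCompletion interface). The use case, prompt, and helpdesk scenario are illustrative assumptions, not from the article.

```python
# Minimal sketch: another application calling ChatGPT through the API.
# Uses OpenAI's Python client as of early 2023 (pre-1.0 interface).
# The helpdesk use case, prompt, and model name are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # key kept out of source code

# Example: a product embedding ChatGPT to summarize a support ticket.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are an assistant embedded in a helpdesk product."},
        {"role": "user",
         "content": "Summarize this support ticket in one sentence: "
                    "'App crashes when I upload a photo larger than 10 MB.'"},
    ],
)

print(response["choices"][0]["message"]["content"])
```

A handful of lines like these is all it takes for any product to bolt ChatGPT's capabilities onto itself – which is exactly what turns an application into a platform.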
By exposing ChatGPT to a wider range of input and feedback through an API, developers and users are almost guaranteed to uncover new capabilities or applications for the model that were not initially anticipated. (The notion of an app being able to request more data and write code itself to do that is a bit sobering. This will almost certainly lead to even more new, unexpected, and emergent behaviors.) Some of these applications will create new industries and new jobs. Some will obsolete existing industries and jobs. And much like the invention of fire, explosives, mass communication, computing, recombinant DNA/CRISPR and nuclear weapons, the actual consequences are unknown.
Should you care? Should you worry?
First, you should definitely care.
Over the last 50 years I've been lucky enough to have been present at the creation of the first microprocessors, the first personal computers, and the first enterprise web applications. I've lived through the revolutions in telecom, life sciences, social media, etc., and watched as new industries, markets and customers were created literally overnight. With ChatGPT I might be seeing one more.
One of the problems with disruptive technology is that disruption doesn't come with a memo. History is replete with journalists writing about it and not recognizing it (e.g. the NY Times putting the invention of the transistor on page 46) or others not understanding what they were seeing (e.g. Xerox executives ignoring the invention of the modern personal computer with a graphical user interface and networking in their own Palo Alto Research Center). Most people have stared into the face of massive disruption and failed to recognize it because to them, it looked like a toy.
Others look at the same technology and recognize at that instant that the world will no longer be the same (e.g. Steve Jobs at Xerox). It might be a toy today, but they grasp what inevitably will happen when that technology scales, gets further refined and has tens of thousands of creative people building applications on top of it – they realize right then that the world has changed.
It’s likely we are seeing this here. Some will get ChatGPT’s importance instantly.
Others will not.
Perhaps We Should Take A Deep Breath And Think About This?
A few people are concerned about the consequences of ChatGPT and other AGI-like applications and believe we are about to cross the Rubicon – a point of no return. They've suggested a 6-month moratorium on training AI systems more powerful than ChatGPT-4. Others find that idea laughable.
There is a long history of scientists concerned about what they've unleashed. In the U.S., scientists who worked on the development of the atomic bomb proposed civilian control of nuclear weapons. Post-WWII, in 1946, the U.S. government seriously considered international control over the development of nuclear weapons. And until recently most nations agreed to a treaty on the nonproliferation of nuclear weapons.
In 1974, molecular biologists were alarmed when they realized that newly discovered genetic editing tools (recombinant DNA technology) could put tumor-causing genes inside of E. coli bacteria. There was concern that without any recognition of biohazards and without agreed-upon best practices for biosafety, there was a real danger of accidentally creating and unleashing something with dire consequences. They asked for a voluntary moratorium on recombinant DNA experiments until they could agree on best practices in labs. In 1975, the U.S. National Academy of Sciences sponsored what is known as the Asilomar Conference. Here biologists came up with guidelines for lab safety containment levels depending on the type of experiments, as well as a list of prohibited experiments (cloning things that could be harmful to humans, plants and animals).
Until recently these rules have kept most biological lab accidents under control.
Nuclear weapons and genetic engineering both had advocates for unlimited experimentation, unfettered by controls – "let the science go where it will." Yet even these minimal controls have kept the world safe from potential catastrophes for 75 years.
Goldman Sachs economists predict that 300 million jobs could be affected by the latest wave of AI. Other economists are just realizing the ripple effect that this technology will have. Simultaneously, new startups are forming, and venture capital is already pouring money into the field at an astounding rate that will only accelerate the impact of this generation of AI. Intellectual property lawyers are already arguing over who owns the data these AI models are built on. Governments and military organizations are coming to grips with the impact that this technology will have across the Diplomatic, Information, Military and Economic spheres.
Now that the genie is out of the bottle, it's not unreasonable to ask that AI researchers take 6 months and follow the model that other thoughtful and concerned scientists did in the past. (Stanford took down its version of ChatGPT over safety concerns.) Guidelines for use of this tech should be drawn up, perhaps paralleling the ones for genetic editing experiments – with Risk Assessments for the type of experiments and Biosafety Containment Levels that match the risk.
Unlike the moratoriums on atomic weapons and genetic engineering, which were driven by the concerns of research scientists without a profit motive, the continued expansion and funding of generative AI is driven by for-profit companies and venture capital.
Welcome to our brave new world.