OpenAI CEO Sam Altman spoke candidly with Lex Fridman about AI and much more.

Does OpenAI have a duty to let GPT-4 off the chain right now as a "shock and awe" demonstration of AI power, a "Hiroshima moment" that might get the world to act? Yesterday, CEO Sam Altman said in an interview that he's thinking about it.

Listening to OpenAI CEO Sam Altman's two-and-a-half-hour interview with podcast host Lex Fridman, who is himself active in AI research, is a delightful experience. It reveals a very different, more open and careful way of doing things than you might expect from the tech world. If anyone should be on the cutting edge of these technologies, it should be people who understand how much responsibility they carry.

A newly trained GPT model will gladly use all of its power to fulfil any request without judging it, and will answer questions about its own sentience much as a human would. So OpenAI spent eight months taming, caging, and chaining GPT-4 before letting it out for the public to see and poke at.

Moreover, even if the global community could agree on boundaries for AIs tomorrow, there is no assurance that humans would be able to govern them once they reach a certain level of capability. The AI community calls this problem "alignment": ensuring that an AI's interests stay aligned with ours.

Altman told Fridman during the interview, "I do not believe we have yet discovered a way to align a super-powerful system. What we do have is RLHF (Reinforcement Learning from Human Feedback), which works at our current scale."
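The interview does not go into the mechanics of RLHF, but the core idea is straightforward: human labelers compare pairs of model responses, a reward model is trained to predict which response the humans preferred, and the language model is then fine-tuned to score well under that reward. The toy sketch below illustrates only the reward-modeling step, using made-up features and data; it is not OpenAI's actual pipeline, and names such as `toy_features` are purely hypothetical.

```python
# Minimal, illustrative sketch of the reward-modeling step in RLHF:
# learn a scoring function from human preference comparisons.
# Toy features and data only; not OpenAI's actual pipeline.

import math
import random

def toy_features(response: str) -> list[float]:
    # Stand-in for a learned representation of a model response.
    return [len(response) / 100.0, float(response.count("safe"))]

def reward(weights: list[float], response: str) -> float:
    # A toy linear reward model: score = w . features(response).
    return sum(w * f for w, f in zip(weights, toy_features(response)))

def preference_loss(weights: list[float], preferred: str, rejected: str) -> float:
    # Bradley-Terry style loss: the preferred response should score higher.
    margin = reward(weights, preferred) - reward(weights, rejected)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Human labelers supply comparisons as (preferred, rejected) pairs.
comparisons = [
    ("Here is a careful, sourced answer. Please verify the citations.",
     "No idea."),
    ("I can't help with that request, but here is a safer alternative.",
     "Sure, here is how to do something harmful."),
]

# Crude finite-difference update loop, just to show the training signal.
weights = [0.0, 0.0]
lr = 0.1
for _ in range(200):
    preferred, rejected = random.choice(comparisons)
    for i in range(len(weights)):
        bumped = weights[:]
        bumped[i] += 1e-4
        grad = (preference_loss(bumped, preferred, rejected)
                - preference_loss(weights, preferred, rejected)) / 1e-4
        weights[i] -= lr * grad

print("learned toy reward weights:", weights)
# In full RLHF, the language model (the "policy") is then fine-tuned,
# e.g. with PPO, to produce responses this reward model scores highly.
```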

Does GPT represent Artificial General Intelligence?

Sam Altman, CEO of OpenAI, stated, "I don't think GPT-4 is an AGI, yet isn't it fascinating that we're having this discussion? We're approaching an era where precise definitions of AGI really matter.

"Someone told me this morning that this is the most sophisticated piece of software humanity has ever produced, and I thought, that may well be true. And it will seem trivial in a couple of decades, right? Essentially anyone will be able to do it."

Will the tool be used for good or evil?

Sam Altman remarked, "No one at OpenAI, including myself, sits there reading every ChatGPT message. But from what I hear from the people I speak with and what I see on Twitter, we are mostly good. But not all of us are, all the time. So we want to push these systems to their limits. And, you know, we want to test some of the darker theories of the world.

"This tool will cause harm. There will be negative effects as well as positive ones. Tools can be used for both good and evil. We will minimize the negative and maximize the positive."

Should OpenAI distribute the GPT-4 base model without safety and ethics restrictions?

Sam Altman noted, "We've discussed releasing the base model, at least for researchers, but it's not very easy to use. Everyone asks for the base model, but what people mostly want is a model that has been RLHF'd to the worldview they subscribe to. It's really about regulating other people's speech. As in the debates over what appeared in the Facebook feed, I haven't heard many people talk about that."

Source: Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367
