Here’s what the CEO of OpenAI, creator of ChatGPT, has to say about the “dangers of AI”

ChatGPT, created by the American company OpenAI, has seen several updates since its launch. The AI capabilities of the underlying technology have become so sophisticated that people have begun to wonder whether AI could replace jobs and be used to spread misinformation. OpenAI CEO Sam Altman has now said he is “a bit scared” of his company’s invention, but remains positive about the good it can do. In a conversation with ABC News, Altman said he believes AI technology carries real dangers, but that it could also be “the greatest technology mankind has ever developed” and improve people’s lives considerably.

“We have to be careful here. I think people should be happy that we’re a little scared of this,” Altman said. He added that if he were not afraid, “you should either not trust me or be very unhappy that I am doing this job”.

Altman said AI will likely replace some jobs in the near future, and he worries about how quickly that could happen. At the same time, he highlighted the upside: the technology will improve our lives.


“I think over a few generations, humanity has proven that it can adapt wonderfully to major technological changes,” Altman said, adding, “But if it happens in a single-digit number of years, some of these changes… That’s the part that worries me the most.”

“It’s going to eliminate a lot of current jobs, it’s true. We can create much better ones. Because of what AI can do for our lives, in terms of improving them, it will be the greatest technology mankind has yet developed,” Altman noted.

He encouraged people to use ChatGPT as a tool rather than a replacement, and also discussed the positive effects AI could have on education.

“We can all have an amazing educator in our pocket who is personalized for us, who helps us learn. Education is going to have to change,” he said.

Use of AI in disinformation

For Altman, a persistent problem with AI language models like ChatGPT is misinformation: the program can give users factually inaccurate information.

“The thing I try to warn people about the most is what we call the ‘hallucination problem’. The model will confidently state made-up things as if they were facts,” he said, adding that GPT-4, the latest language model, is more powerful than the one ChatGPT launched with.

“The right way to think about the models we create is as a reasoning engine, not a database of facts,” Altman said.

“They can also act as a factual database, but that’s not really what’s special about them – what we want them to do is something closer to reasoning, not memorizing,” he added.

The company’s top executive noted that the technology itself is incredibly powerful and potentially dangerous.
