Explainer: What is generative AI, the technology behind OpenAI’s ChatGPT?
March 17 (Reuters) – Generative artificial intelligence has become a buzzword this year, capturing the public’s fancy and sparking a rush among Microsoft Corp and Alphabet Inc to launch products built on technology they believe will change the nature of work.
Here’s everything you need to know about this technology.
WHAT IS GENERATIVE AI?
Like other forms of artificial intelligence, generative AI learns how to take action from past data. It creates brand-new content – text, images, even computer code – based on that training, instead of simply categorizing or identifying data like other forms of AI.
The most famous generative AI application is ChatGPT, a chatbot from the Microsoft-backed startup OpenAI launched late last year. The AI that powers it is known as a large language model because it takes a text prompt and writes a human-like response from it.
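For readers curious what “takes a text prompt and writes a human-like response” looks like in practice, below is a minimal sketch using OpenAI’s Python library as it existed around the time of this article (the pre-1.0 `openai` package). The model name, prompt, and environment variable are illustrative assumptions, not details from the story.

```python
# Minimal prompt-in, text-out sketch (assumes the pre-1.0 `openai` package
# and an API key provided by the user via an environment variable).
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # hypothetical key set by the user

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative choice of model
    messages=[
        {
            "role": "user",
            "content": "Draft a short, friendly reply to a customer asking about a refund.",
        }
    ],
)

# The model returns human-like text generated from the prompt.
print(response["choices"][0]["message"]["content"])
```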
GPT-4, a newer model announced by OpenAI this week, is “multimodal” because it can perceive not only text but also images. OpenAI’s president demonstrated on Tuesday how it could take a photo of a hand-drawn mockup for a website he wanted to build and, from there, generate a real one.
WHAT IS IT GOOD FOR?
Beyond demos, companies are already putting generative AI to work.
The technology is useful for creating a first draft of marketing copy, for example, although it may need cleaning up as it’s not perfect. One example is CarMax Inc (KMX.N), which used a version of OpenAI technology to summarize thousands of customer reviews and help buyers decide which used car to buy.
Generative AI can also take notes during a virtual meeting. It can draft and personalize emails, and it can create slide presentations. Microsoft Corp and Alphabet Inc’s Google each demonstrated these features in product announcements this week.
WHAT’S WRONG WITH THAT?
Nothing, although there are concerns about potential abuse of the technology.
School systems have become concerned about students handing in AI-written essays, undermining the hard work needed for them to learn. Cybersecurity researchers have also expressed concern that generative AI could allow bad actors, even governments, to produce far more disinformation than before.
At the same time, the technology itself is prone to errors. Factual inaccuracies confidently presented by the AI, called “hallucinations,” and responses that seem erratic, like professing love to a user, are all reasons companies have sought to test the technology before making it widely available.
IS IT ONLY GOOGLE AND MICROSOFT?
These two companies are at the forefront of research and investment in large language models, as well as the largest to integrate generative AI into widely used software such as Gmail and Microsoft Word. But they are not alone.
Big companies like Salesforce Inc (CRM.N) as well as smaller ones like Adept AI Labs are either creating their own competing AI or packaging technology from others to give users new powers through software.
HOW IS ELON MUSK INVOLVED?
He co-founded OpenAI with Sam Altman. But the billionaire left the startup’s board in 2018 to avoid a conflict of interest between OpenAI’s work and the AI research conducted by Tesla Inc (TSLA.O) – the electric vehicle maker he leads.
Musk has voiced concerns about the future of AI and fought for a regulator to ensure the technology’s development serves the public interest.
“It’s a pretty dangerous technology. I’m afraid I’ve done some things to accelerate it,” he said toward the end of Tesla Inc’s (TSLA.O) investor day event earlier this month.
“Tesla is doing some good things in AI, I don’t know, this one stresses me out, I don’t know what more to say about it.”
(This story has been refiled to correct the dateline to March 17)
Reporting by Jeffrey Dastin in Palo Alto, California and Akash Sriram in Bengaluru; Editing by Saumyadeb Chakrabarty