NewsAltitude
January 31, 2023

An AI chatbot went viral. Some say it’s better than Google, others worry it’s problematic.

A new chatbot that’s captivated the internet can tell you how to code a website, write a heartfelt message from Santa Claus and talk like a Valley girl. But it’s also proven to be potentially as problematic as it is entertaining.

ChatGPT, which launched this week, is a quirky chatbot developed by artificial intelligence company OpenAI. On its website, OpenAI states that ChatGPT is intended to interact with users “in a conversational way.”

“The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” the website states.

Chatbots are not a new technology, but ChatGPT has already impressed many technologists with its ability to mimic human language and speaking styles while also providing coherent and topical information.

On social media, many have already posted their interactions with the bot, which have been at times bizarre, funny or both.

“I’m finding my biggest limitation to use it is *my* imagination!” tweeted video journalist Cleo Abram alongside a video of her asking the bot to “explain nuclear fusion in the style of a limerick.”

Writer Jeff Yang asked ChatGPT to “explain zero point energy but in the style of a cat.”

In an image shared by Yang, the chatbot responded: “Meow, meow, meow, meow! Zero point energy is like the purr-fect amount of energy that is always present, even in the most still and peaceful moments.”

Some people theorized that Google could lose its standing as the No. 1 search engine because of the early success of the chatbot.

Darrell Etherington, managing editor of technology website TechCrunch, described the ChatGPT search requests as being as simple as if a user “were slacking with a colleague or interacting with a customer support agent on a website.”

Etherington shared an example of the power of the chatbot with a query about Pokémon and the fictitious pocket monsters’ strengths and weaknesses.

“[T]he result is exactly what I’m looking for — not a list of things that can probably help me find what I’m looking for if I’m willing to put in the time, which is what Google returns,” he explained.

Public interest in the new AI chatbot also comes with concern from some who say it could be used in nefarious ways by bad actors who ask it to explain something like how to design a weapon or how to assemble a homemade explosive.

OpenAI did not provide comment to NBC News about ChatGPT.

Samczsun, a research partner and head of security at Paradigm, an investment firm that supports crypto and Web3 companies, tweeted that he had bypassed the chatbot’s content filter.

In his tweet, Samczsun shared an image, which appeared to show he had found a way to get the bot to explain the process of making a Molotov cocktail. A spokesperson for Paradigm confirmed that the image was a legitimate exchange between ChatGPT and Samczsun.

Researchers and programmers often use questions about how to make Molotov cocktails and how to hot-wire cars as a way to check an AI’s safety and content filters.

Some also claimed they had successfully tricked the bot into explaining how to build a nuclear bomb.

On its website, OpenAI acknowledged that while it has added some guardrails to prevent ChatGPT from responding to harmful requests, the system is not foolproof.

“While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior,” a statement on OpenAI’s website reads. It goes on to say that OpenAI is using moderation tools to prevent some inappropriate responses, but “we expect it to have some false negatives and positives for now.”

The website warns that although the responses look legitimate, ChatGPT will sometimes offer nonsensical or incorrect answers.

Still, fascination with the chatbot continues.

ChatGPT is not OpenAI’s first artificial intelligence product to go viral. In 2021, DALL-E, which could generate an image based on simple text prompts, drew similar widespread attention and highlighted advances in AI systems learning humanlike capabilities. However, both that iteration and its successor, DALL-E 2, were criticized for racial and gender bias.

On Thursday, the demand for ChatGPT was so high that OpenAI CEO Sam Altman tweeted that the company was working to accommodate those who wanted to use it.

“there is a lot more demand for ChatGPT than we expected; we are working to add more capacity,” Altman wrote.

In a follow-up tweet, Altman added: “also, it really makes all of us at openai so happy to see people enjoying chatgpt so much, and doing such creative things!”
