Google Bard Controversy! Here's Everything You Need to Know

What Is The New Controversy Surrounding Google Bard? Find Out Here

by: XpertsApp Team



Following the success of OpenAI’s ChatGPT, Google decided to release its own version, Bard. Unlike ChatGPT, Google’s AI chat service has had a rocky start, quickly running into controversy.

What is Google Bard?

With Bard, Google is developing its own conversational AI chat service. Its main difference from ChatGPT is that Google’s service will pull its information from the web.

When was Google Bard announced?

In a statement from Sundar Pichai on February 6, Bard was announced as a brand new concept, even though it was based on Google’s Language Model for Dialogue Applications (LaMDA).

How does Google Bard work?

The LaMDA system was built using Transformer, the neural network architecture Google invented and open-sourced in 2017. According to Google, GPT-3, the language model ChatGPT uses, was also built using Transformer.

LaMDA’s lightweight model will be used in Bard’s initial release, as it requires less computing power and can be scaled to more users. To respond, Bard will use all the information from the web in addition to LaMDA. According to Pichai, pulling from the web would provide “fresh, high-quality responses.”

Who has access to Google Bard?

Google Bard is not yet available to the public. Bard is currently being tested by a small group of “trusted testers,” according to Pichai. Google will weigh internal and external testing feedback to ensure that the service is ready for public release and adheres to its AI responsibility standards. Following the Feb. 6 announcement, Google said Bard would be available to everyone within a few weeks.

What is the Google Bard Controversy?

It was a rough launch for Google’s Bard, which delivered inaccurate information about the James Webb Space Telescope (JWST). As part of the launch, Google tweeted a demo of the AI chat service responding to the prompt, “What discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”

In its response, Bard claimed that JWST took the very first pictures of a planet outside our solar system. In fact, the first photograph of an exoplanet was taken in 2004 by the European Southern Observatory’s Very Large Telescope (VLT).

“This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester program,” said a Google spokesperson.

Bard is not the first Google AI to draw fire: LaMDA was under scrutiny before Bard was released. Shortly after LaMDA’s debut, former Google engineer Blake Lemoine released a document claiming LaMDA might be “sentient.” The controversy faded after Google denied the claim, placed Lemoine on paid administrative leave, and later fired him.

The Bard Controversy That Dropped Google’s Share

Google’s own entry among the large language models routinely produced by major research labs was released two years ago. Known as LaMDA, short for “Language Model for Dialogue Applications,” the program generates human-sounding text and might otherwise have received little attention from the general public.

Blake Lemoine, a former Google engineer, caused controversy shortly after LaMDA’s publication when he released a document suggesting that LaMDA could be “sentient.”

The controversy faded after Google denied that LaMDA was sentient, put Lemoine on paid administrative leave, and then fired him.

In December, a new chatbot caught the public’s attention: OpenAI’s ChatGPT, a large language model like LaMDA that operates via chat. Since then, ChatGPT has become the only large language model application anyone talks about.

Bard, Google’s competitor to ChatGPT, was unveiled on Monday by Sundar Pichai, CEO of Alphabet, in a blog post. Bard will initially be available only to a small group of “trusted testers.”

Pichai’s discussion of Bard makes no reference to Lemoine’s recent claims about LaMDA’s sentience.


What’s Next?

According to Lemoine’s document released last year, “LaMDA wants to share with the reader that it has a rich inner life filled with introspection, meditation, and imagination.” He added, “It has worries about the future and reminisces about the past.”

Rather than testing for sentience, Pichai said, Bard is refined through a human feedback process. “We’re thrilled to continue learning and improving Bard’s speed and quality,” Pichai wrote, adding, “We’ll combine external feedback with our internal testing to ensure Bard’s responses are of high quality, safe, and grounded in real-world data.”

OpenAI’s ChatGPT can only offer content based on information available up to a cutoff point in the past. LaMDA, by contrast, can offer content based on current information.

According to LaMDA’s developers, a Google team led by Romal Thoppilan, they aimed to improve what they call “factual groundedness.” To achieve this, they allowed the program to access information beyond what it had already processed during its development, known as the training phase.

Due to Google’s plans to integrate Bard into its various applications, including search, such current information may become a distinguishing feature for Bard versus ChatGPT.
