
Underneath Google’s Bard AI: A Peek Into The World Of Human Training

Underpaid contractors working under intense pressure play a crucial role in shaping Google's Bard AI chatbot, raising concerns over the quality of AI products.

  • Contractors play a crucial role in training Google's Bard AI chatbot.
  • The quality and accuracy of AI products could be compromised due to the high-pressure work environment contractors are subjected to.
  • Despite the challenges, contractors' contributions are indispensable to AI development.

In an era where AI is increasingly defining the customer experience, understanding the inner workings of these technologies has never been more crucial, particularly for marketers.

A recent Bloomberg report has shed light on the human workforce training Google’s Bard chatbot, highlighting the integral role of thousands of contractors in shaping the responses of this AI tool.

This in-depth report uncovers the realities of AI development and presents significant implications for the people and businesses that rely on these tools.

The quality, accuracy, and trustworthiness of AI-driven interactions can impact brand reputation, customer trust, and, ultimately, your bottom line.

As we delve into the human processes behind the Bard AI chatbot, we gain valuable insights into the challenges and opportunities that lie ahead for businesses leveraging AI in their marketing strategies.

A Peek Into The AI Training Ground

Google’s Bard is well-known for its swift and confident answers to various questions.

However, anonymous contract workers reveal to Bloomberg that behind this AI prowess lies the labor of frustrated humans.

These contractors, hailing from companies such as Appen Ltd. and Accenture Plc, work under tight deadlines to ensure the chatbot’s responses are reliable, accurate, and free from bias.

Working Under Pressure

These contractors, some of whom earn as little as $14 an hour, have been under increasing pressure in the last year as Google and OpenAI compete in an AI arms race.

Tasks have become more complex, and the workload has grown, often without the contractors having specific expertise in the areas they are reviewing.

One unnamed contractor said:

“As it stands right now, people are scared, stressed, underpaid, don’t know what’s going on. And that culture of fear is not conducive to getting the quality and the teamwork that you want out of all of us.”

The Role Of Contractors In Training AI

The contractors’ role is to review the AI’s answers, identify errors, and eliminate potential bias. They work with convoluted instructions and tight deadlines, often as short as three minutes.

According to documents shared with Bloomberg, contractors are frequently asked to decide whether the AI model’s answers contain verifiable evidence. They analyze responses for factors such as specificity, freshness of information, and coherence.

One example in the Bloomberg report discusses how a rater could use evidence to determine the correct dosage for a blood pressure medication called Lisinopril.

The contractors must ensure that the responses don’t contain harmful, offensive, or overly sexual content. They must also guard against inaccurate, deceptive, or misleading information.

Highlighting The Human Factor Behind AI

Even though AI chatbots like Bard are seen as groundbreaking technological advancements, the truth is that their effectiveness is dependent on the work of human contractors.

Laura Edelson, a computer scientist at New York University, tells Bloomberg:

“It’s worth remembering that these systems are not the work of magicians — they are the work of thousands of people and their low-paid labor.”

Despite contractors’ integral role, their work is often shrouded in mystery, and they have little direct communication with Google.

Concerns Over The Quality Of AI Products

Contractors are raising concerns about their working conditions, which they believe could affect the quality of the AI products.

Ed Stackhouse, an Appen worker, emphasized in a letter to Congress that contractors are an indispensable part of training AI.

Stackhouse warned that the speed required for content review could lead to Bard becoming a “faulty” and “dangerous” product.

Google responded to these concerns by stating that it undertakes extensive work to build its AI products responsibly, employing rigorous testing, training, and feedback processes to ensure factuality and reduce biases.

While the company states that it doesn’t rely solely on human evaluators to improve the AI, critics have pointed out that minor inaccuracies can still creep in and potentially mislead users.

Alex Hanna, the director of research at the Distributed AI Research Institute and a former Google AI ethicist, said:

“It is still troubling that the chatbot is getting main facts wrong.”

A Call For Change

Despite the growing concerns about working conditions and the quality of AI products, it’s clear that human contractors are an essential part of AI development.

The challenge is ensuring they are appropriately compensated and given the necessary resources to perform their jobs.

Emily Bender, a professor of computational linguistics at the University of Washington, underscored this point, saying:

“The work of these contract staffers at Google and other technology platforms is a labor exploitation story.”

As the AI revolution continues, the role of human contractors in shaping and refining these technologies will remain vital.

Their voices and concerns must be heard and addressed to ensure the ongoing development of reliable, accurate, and ethical AI products.

Featured Image: Maurice NORBERT/Shutterstock

SEJ STAFF Matt G. Southern Senior News Writer at Search Engine Journal

