
Senate AI Insight Forum Considers Who’s Liable For AI Harm

The Senate AI Insight Forum discussed the development of a framework for regulating AI, including whom to hold liable when AI causes harm.


The U.S. Senate AI Insight Forum discussed solutions for AI safety, including how to identify who is at fault for harmful AI outcomes and how to impose liability for those harms. The committee heard a solution from the perspective of the open source AI community, delivered by Mozilla Foundation President Mark Surman.

Until now, the Senate AI Insight Forum has been dominated by the corporate gatekeepers of AI: Google, Meta, Microsoft, and OpenAI.

As a consequence, much of the discussion has come from their point of view.

The first AI Insight Forum, held on September 13, 2023, was criticized by Senator Elizabeth Warren (D-MA) for being a closed-door meeting dominated by the corporate tech giants who stand to benefit the most from influencing the committee findings.

Wednesday was the open source community's chance to offer its side of what regulation should look like.

Mark Surman, President Of The Mozilla Foundation

The Mozilla Foundation is a non-profit dedicated to keeping the Internet open and accessible. It was recently one of the contributors to a $200 million fund supporting a public interest coalition dedicated to promoting AI for the public good. The Mozilla Foundation also created Mozilla.ai, which is nurturing an open source AI ecosystem.

Mark Surman’s address to the Senate forum focused on four points:

  1. Incentivizing openness and transparency
  2. Distributing liability equitably
  3. Championing privacy by default
  4. Investing in privacy-enhancing technologies

Of those points, the one about the distribution of liability is especially interesting because it suggests a way forward for identifying who is at fault when AI causes harm and imposing liability on the culpable party.

The problem of identifying who is at fault is not as simple as it first seems.

Mozilla’s announcement explained this point:

“The complexity of AI systems necessitates a nuanced approach to liability that considers the entire value chain, from data collection to model deployment.

Liability should not be concentrated but rather distributed in a manner that reflects how AI is developed and brought to market.

Rather than just looking at the deployers of these models, who often might not be in a position to mitigate the underlying causes for potential harms, a more holistic approach would regulate practices and processes across the development ‘stack’.”

The development stack refers to the technologies that work together to create AI, including the data used to train the foundational models.

Surman’s remarks used the example of a chatbot offering medical advice based on a model created by another company and then fine-tuned by the medical company.

Who should be held liable if the chatbot offers harmful advice? The company that developed the technology or the company that fine-tuned the model?

Surman’s statement explained further:

“Our work on the EU AI Act in the past years has shown the difficulty of identifying who’s at fault and placing responsibility along the AI value chain.

From training datasets to foundation models to applications using that same model, risks can emerge at different points and layers throughout development and deployment.

At the same time, it’s not only about where harm originates, but also about who can best mitigate it.”

Framework For Imposing Liability For AI Harms

Surman’s statement to the Senate committee stresses that any framework developed to address which entity is liable for harms should take into account the entire development chain.

He notes that this includes not only every level of the development stack but also how the technology is used, the point being that liability should fall on whoever is best able to mitigate the harm at their point in what Surman calls the “value chain.”

That means if an AI product hallucinates (meaning it confidently generates false or fabricated information), the entity best able to mitigate that harm is the one that created the foundational model and, to a lesser degree, the one that fine-tunes and deploys the model.

Surman concluded this point by saying:

“Any framework for imposing liability needs to take this complexity into account.

What is needed is a clear process to navigate it.

Regulation should thus aid the discovery and notification of harm (regardless of the stage at which it is likely to surface), the identification of where its root causes lie (which will require technical advancements when it comes to transformer models), and a mechanism to hold those responsible accountable for fixing or not fixing the underlying causes for these developments.”

Who Is Responsible For AI Harm?

The Mozilla Foundation’s president, Mark Surman, raised excellent points about what the future of regulation should look like. He also discussed issues of privacy, which are important.

But of particular interest is the issue of liability and the approach he proposed for identifying who is responsible when AI goes wrong.

Read Mozilla’s official blog post:

Mozilla Joins Latest AI Insight Forum

Read Mozilla President Mark Surman’s Comments to the Senate AI Insight Forum:

AI Insight Forum: Privacy & Liability (PDF)

Featured Image by Shutterstock/Ron Adar
