AI Safety Institute of the United Kingdom “Must Establish Criteria Instead of Conducting Tests”

A company that is helping the government’s AI Safety Institute believes that the UK should focus on creating international standards for AI testing rather than attempting to conduct all of the screening on its own.

Faculty AI CEO Marc Warner warned that the government’s pioneering efforts in AI safety could put the recently formed institute “on the hook” for reviewing a plethora of AI models, the underlying technology of chatbots like ChatGPT.

Prior to last year’s global AI safety summit, which saw major tech companies pledge to work with the European Union (EU) and ten other countries—including the United Kingdom, France, Japan, and the United States—to test advanced AI models both before and after deployment, Rishi Sunak announced the establishment of the AI Safety Institute (AISI).

In this pact, the United Kingdom plays a leading role due to its pioneering efforts in AI safety, which the institute’s founding has highlighted.

The institute should take the lead in establishing global standards for testing artificial intelligence, according to Warner, whose London-based company has contracts with the UK institution that involve assisting it in testing AI models on whether they can be induced to violate their own safety norms.

UK’s AI Safety Institute

He emphasized the importance of setting norms for the global community instead of attempting to handle everything on its own. “I don’t think I’ve ever seen anything in government move as fast as this,” Warner said of the institution, adding that his company also works with the NHS on COVID-19 and the Home Office on fighting terrorism.

Still, “the technology is moving fast as well,” he chimed in. He advocated for the institution to establish norms that other organizations and governments might emulate, like “red teaming”—a practice where experts mimic the abuse of an AI model instead of handling it all alone.

According to Warner, the government may end up “red teaming everything” and face a backlog “where they don’t have the bandwidth to get to all the models fast enough” if problems persist. “They can set really brilliant standards such that other governments, other companies… can red team to those standards,” he said, alluding to the institute’s ability to hold itself to worldwide standards.

Therefore, it’s a far more feasible, far-sighted plan for the future of these objects’ security. Warner told the Guardian just before last week’s AISI testing program update that it could not test “all released models” and would instead concentrate on the most advanced systems.

Large AI firms are reportedly pressuring the UK government to expedite AI safety testing, according to a report in last week’s Financial Times. Google, Microsoft, OpenAI (the developer of ChatGPT), and Meta (Mark Zuckerberg’s company) are all parties to the voluntary testing agreement.

At the Bletchley Park summit, the United States also revealed an AI safety institution that would be a component of the testing program. Guidelines for watermarking AI-generated information were among the objectives outlined in the White House’s October executive order on AI safety, and this week, the Biden administration launched a consortium to help achieve these goals.

The US institute will serve as the home for the partnership, which comprises Meta, Google, Apple, and OpenAI. Governments worldwide “need to a play a key role” in evaluating AI models, according to the UK’s minister for science, innovation, and technology.

Efforts are being spearheaded by the UK’s groundbreaking AI Safety Institute, which is “raising the collective understanding of AI safety around the world” through reviews, research, and information sharing, according to a spokesman. “Policymakers around the world will be better informed about AI safety thanks to the institute’s work.”

If you found this piece interesting, you might also like these other tech articles:

Leave a Comment