Global AI safety is hindered by regulatory delays and indecision

Indecision and roadblocks are holding up cross-nation agreement on priorities and obstacles.

In November 2023, the United Kingdom published the Bletchley Declaration, an agreement to intensify global cooperation on AI safety with 28 countries, including the United States, China, and the European Union.

In May, the U.K. and the Republic of Korea followed up on that effort at the second global AI summit, securing commitments from 16 global AI tech firms to a set of safety outcomes building on that agreement.

In a separate statement accompanying the declaration, Britain said it “fulfils key summit goals by establishing shared agreement and responsibility regarding risks, opportunities, and a forward process for international cooperation on frontier AI research and safety, particularly through increased scientific collaboration.”

The European Union’s AI Act, the world’s first major AI law, was adopted in May. The Act includes enforcement powers and penalties, with fines of up to $38 million or 7% of global annual revenue for companies that violate it.
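
For a sense of how that penalty scales, the Act’s fine structure takes the greater of a flat cap and a share of worldwide revenue. Here is a minimal sketch using the figures cited above; note that the Act itself denominates the cap in euros and varies it by violation tier:

```python
def max_ai_act_fine(global_annual_revenue_usd: float,
                    flat_cap_usd: float = 38_000_000,
                    revenue_share: float = 0.07) -> float:
    """Upper bound on the fine: the greater of a flat cap and a share
    of worldwide annual revenue. Figures follow this article; the Act's
    actual caps are set in euros and differ by violation tier."""
    return max(flat_cap_usd, revenue_share * global_annual_revenue_usd)

# A company with $2B in global revenue: max($38M, 7% of $2B) = $140M.
print(f"${max_ai_act_fine(2_000_000_000):,.0f}")  # $140,000,000
```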

A bipartisan group in the U.S. Senate responded in Johnny-come-lately fashion, recommending that Congress draft an emergency AI spending bill of $32 billion. The senators also published a study saying that the U.S. must harness AI’s opportunities while addressing its risks.

“Governments must be actively involved in AI, especially when it concerns national security. We must harness the benefits of AI but be aware of its risks. To make informed decisions, governments must invest time and resources to become informed,” Thacker of AppOmni told TechNewsWorld.

AI Safety Is Essential for SaaS Platforms

AI safety grows more important by the day. Nearly every software product, including AI applications, is now built as a software-as-a-service (SaaS) application, Thacker noted, so maintaining the security and integrity of those platforms will be crucial.

That calls for robust security measures in SaaS applications. “Investing in SaaS security should be the top priority of any company that develops or deploys AI,” he said.

SaaS vendors are adding AI to everything they offer, which increases the risk, and he said government agencies should take that into account.

U.S. Response to AI Safety Needs

Thacker wants the U.S. government to take a more aggressive and deliberate approach to the current lack of AI safety standards. Still, he praised the commitment of the 16 major AI firms to prioritizing safety and responsible deployment.

It shows a growing awareness of AI risks and a willingness among companies to take steps to mitigate them. The real test, he said, will be how well these companies follow through on their commitments and how transparent they are about their safety practices.

His praise came with two caveats, however: he found no mention of aligning incentives or of consequences, both of which he said are very important.

In Thacker’s view, requiring AI companies to publish safety frameworks demonstrates accountability and gives insight into the depth and quality of their testing. Transparency enables public scrutiny.

He said it could also foster knowledge sharing across the industry and the development of best practices.

Thacker wants to see more legislative action in this area, but he expects a major change from the U.S. government to be difficult in the near term, given how slowly it usually acts.

He said he hopes the bipartisan group’s recommendations will at least spark a number of conversations.

AI Regulations Still Have Unknowns

Melissa Ruzzi, director of artificial intelligence at AppOmni, agreed that the Global AI Summit marked a significant step in safeguarding AI’s development, and that regulations are crucial.

She told TechNewsWorld that “before we can think about regulating, there is a great deal more research to be done.”

She added that it is crucial for companies in the AI sector to work together and to join voluntary initiatives around AI safety.

The first challenge is setting thresholds and objective measurements. Ruzzi said she does not believe we are ready to establish those for the AI industry as a whole.

Determining these will require more data and investigation, she said, and AI regulations must be able to keep pace with the technology without hindering it.
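
To make the thresholds problem concrete, here is a hypothetical sketch of what an objective safety gate might eventually look like. Every metric name, score, and threshold below is an invented placeholder; Ruzzi’s point is precisely that the industry has not yet agreed on what these values should be:

```python
SAFETY_THRESHOLD = 0.95  # invented placeholder; no agreed industry value exists

def passes_safety_gate(eval_scores: dict[str, float],
                       threshold: float = SAFETY_THRESHOLD) -> bool:
    """Pass only if every safety evaluation meets the threshold.
    Metric names and values here are purely illustrative."""
    return all(score >= threshold for score in eval_scores.values())

scores = {"refusal_accuracy": 0.97, "jailbreak_resistance": 0.91}
print(passes_safety_gate(scores))  # False: jailbreak_resistance < 0.95
```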

Defining AI Harm

David Brauchler, principal security consultant at NCC Group, suggested that as a first step in establishing AI guidelines, governments consider examining definitions of harm.

As AI becomes more mainstream, he said, classification may shift away from the computational capacity used to train a model, the standard adopted in the recent U.S. presidential executive order.

The focus could instead be on the harm AI can cause in the context of its implementation. He pointed out that several pieces of legislation already hint at this approach.

An AI system that controls traffic lights should incorporate far more safety protections than a shopping assistant, Brauchler told TechNewsWorld, even if it takes more computing power to train the latter.
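
A toy sketch of that distinction: under a harm-in-context rule, the regulatory tier follows what the system controls rather than the compute spent training it. The tiers, fields, and numbers below are illustrative and not drawn from any actual legislation:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal scrutiny"
    HIGH = "strict safety requirements"

@dataclass
class Deployment:
    name: str
    controls_physical_systems: bool  # e.g., traffic lights
    training_flops: float            # ignored under a harm-based rule

def classify_by_harm(d: Deployment) -> RiskTier:
    """Illustrative harm-based rule: tier follows deployment context,
    not the computational capacity used to train the model."""
    return RiskTier.HIGH if d.controls_physical_systems else RiskTier.MINIMAL

traffic = Deployment("traffic-light controller", True, 1e20)
shopper = Deployment("shopping assistant", False, 1e26)  # more compute, less harm

print(classify_by_harm(traffic).value)  # strict safety requirements
print(classify_by_harm(shopper).value)  # minimal scrutiny
```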

A clear picture of regulatory priorities for AI development and use is still lacking. Governments should put the real-world impact on people first as new technologies are implemented, he said, and legislation should not try to predict the future of a rapidly evolving technology.

When a concrete danger from AI emerges, the government will have real information to act on, he said; pre-legislating against such threats is likely to be a shot in the dark.

If we focus on preventing harm to individuals through impact-targeted legislation, he said, we do not have to guess how AI will change in the future.

Balancing Legislative and Governmental Oversight

Thacker sees regulating AI as a difficult balance between control and oversight. The result should neither suppress innovation with heavy-handed laws nor rely solely on companies’ self-regulation.

“I think a lightweight regulatory framework coupled with high-quality monitoring mechanisms is the best way to go. Governments should set guardrails, enforce compliance, and allow responsible development to proceed,” he reasoned.

Thacker draws parallels between the push for AI regulation and the dynamics surrounding nuclear weapons. He warned that countries achieving AI supremacy could gain significant economic and military advantages.

That prospect incentivizes nations to develop AI capabilities quickly, he said. Yet global cooperation on AI, while possible, will be harder to achieve than it was for nuclear weapons, because of the network effects of the internet and social media.