
PERSPECTIVES

Why We Need to Worry About AI





“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” So states an open letter from last May, signed by the CEO of OpenAI (the company behind ChatGPT), the CEO of Anthropic, the head of Google’s AI lab, and others, in a statement organized by the Center for AI Safety.


This warning bell was rung again last week as the United Nations announced the creation of a new AI Advisory Body to support the international community’s efforts to govern artificial intelligence. “Without entering into a host of doomsday scenarios, it is already clear that the malicious use of AI could undermine trust in institutions, weaken social cohesion and threaten democracy itself,” said UN Secretary-General António Guterres on October 26.

 

QUICK TAKES


  • 300 million full-time jobs could be lost to automation during the coming years, according to research published by Goldman Sachs.

  • Under new legislation, fines for non-compliance with the prohibition of certain AI practices have been raised to EUR 40 million or 7% of the total worldwide annual turnover of the offender, whichever is higher.

  • Automation technology has driven a 50 to 70% decrease in wages for some U.S. workers, reports Forbes.


 

The Controversial Adoption of AI

A report from Ceros outlining the risks and challenges that counterbalance the benefits of AI raises red flags for the technology’s future. “Generative AI has the potential to revolutionize many industries,” write the authors. “The technology can help organizations become much more efficient, create new business models, and even help in the discovery of life-saving medicines.” But the rapid adoption of the technology, they add, is also extremely controversial for several reasons. These include:

  • The huge impact on the workforce and society caused by automation of jobs

  • The potential for discrimination in how the technology is implemented

  • The risk of “bad actors” spreading misinformation unchecked through generative AI apps

  • Challenges around intellectual property, copyright and privacy resulting from using existing data — including text, audio, and video — to generate new data

“Right now, what we’re seeing is things like GPT-4 eclipse a person in the amount of general knowledge it has,” says Dr. Geoffrey Hinton, the AI pioneer who left Google earlier this year, in the Ceros report. “In terms of reasoning, it’s not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”


The World’s First Piece of Dedicated AI Legislation

In June, the European Parliament approved its draft of the EU AI Act, which is expected to come into force in 2025 after a final text is agreed at the end of this year. The act defines four risk categories:

  1. Minimal risk

  2. Limited risk

  3. High risk

  4. Unacceptable risk

Meanwhile, generative AI systems will need to meet the following requirements:

  • Transparency: Developers must disclose, for example, the foundation models that underpin their systems

  • Accountability: Developers must be able to explain how their systems work, along with their biases and limitations

  • Robustness: Systems must be able to withstand attack and misuse

  • Diversity: Systems must be inclusive and fair, and not discriminate against any group of people

As the Ceros report explains, however, the proposed legislation is facing a backlash from organizations claiming it will cost the European economy 31 billion euros over the next five years and reduce AI investments by nearly 20 percent. More than 150 executives from companies including Renault, Airbus, and Siemens warned that the EU AI Act would heavily regulate foundational models regardless of their use cases, “and companies developing and implementing such systems would face disproportionate compliance costs and disproportionate liability risks.”


“The potential harms of AI extend to serious concerns”

Now, the controversy over AI is expanding worldwide with the October 26 announcement that the United Nations is forming the AI Advisory Body.


“The new initiative will foster a globally inclusive approach, drawing on the UN’s unique convening power as a universal and inclusive forum on critical challenges,” writes the UN in a press release. “Bringing together experts from government, the private sector, the research community, civil society, and academia, the Body’s global, gender-balanced and interdisciplinary makeup will help it play a unique role in helping AI work for humanity. The Body’s immediate tasks include building a global scientific consensus on risks and challenges, helping harness AI for the Sustainable Development Goals, and strengthening international cooperation on AI governance. The Body will help bridge other existing and emerging initiatives on AI governance, and issue preliminary recommendations by end-2023, with final recommendations by summer 2024, ahead of the Summit of the Future.”


During the press conference on October 26, UN Secretary-General António Guterres described how AI had given him “the surreal experience of watching myself deliver a speech in flawless Chinese, despite the fact that I do not speak Chinese,” with lip movements that corresponded exactly to what he was saying.


Misinformation and disinformation, bias and discrimination, surveillance and invasion of privacy, fraud, and other violations of human rights are among the potential harms of AI raising serious concerns for the UN.


Six months after the open letter from AI leaders called for regulation as a “global priority,” the world is now taking action and, as Guterres put it, racing against the clock.




