AI applications are spreading rapidly and can have a major impact on our lives. How are policymakers responding?
From freelancers losing clients and graphic designers taking legal action to IT students questioning their career prospects and a Harvard professor falsely accused of sexual harassment by ChatGPT: there is no question that AI has arrived in society.
In particular, OpenAI’s ChatGPT is popular and, while still in its early stages, has the potential to grow into an ecosystem that puts even more pressure on other companies such as Google to bring riskier AI applications to market. Microsoft’s collaboration with OpenAI is bringing AI technology to millions of offices worldwide. Who is managing this change?
One organization for all AI
In an article for The Economist, Gary Marcus (formerly of NYU) and researcher Anka Reuel (PhD in computer science at Stanford) call for the “immediate development of a global, neutral, non-profit International Agency for AI (IAAI).”
This agency, they say, must be supported by governments, major tech companies, nonprofits, universities, and “society at large,” with the goal of “collaboratively finding governance and technical solutions to promote safe, secure, and peaceful AI technologies.”
The agency should address security risks such as the spread of misinformation, election interference, or even the development of novel deadly toxins, according to Marcus and Reuel. In addition to security, important issues include trustworthiness, transparency, explainability, interpretability, privacy, accountability, and fairness.
“In the past year alone 37 regulations mentioning AI were passed around the globe; Italy went so far as to ban ChatGPT. But there is little global co-ordination,” Marcus and Reuel lament. While each issue and industry needs its own policies, they argue, all of them require global oversight and technical innovation.
As an example of such global cooperation, Marcus and Reuel point to the International Atomic Energy Agency, which was created after World War II out of fear of nuclear weapons to promote the safe and peaceful use of nuclear technology and which holds inspection rights.
A milder example, they say, is the International Civil Aviation Organization, a global agency that advises states. Google CEO Sundar Pichai has likewise said that there should eventually be global guidelines for artificial intelligence.
“Given how fast things are moving, there is not a lot of time to waste. A global, neutral non-profit with support from governments, big business and society is an important start,” Reuel and Marcus write.