AI Across Borders: The Future of Global Governance

Our world is at an inflection point in the development of artificial intelligence. We’ve seen tremendous momentum in the past 18 months. With this momentum has come an expanded sense of AI’s potential to solve major global challenges—as well as increased attention paid to the potential risks of AI.  

Pursuing these benefits and mitigating these risks will take governance at multiple levels, including domestic and international. Because AI technology transcends borders, it’s essential that we bring global stakeholders together to collaborate on and pursue governance goals.   

AI is far from the first domain to require complex and evolving global governance. To learn more about the challenges and successes of international governance, we invited experts in six areas—the history of international institutions, civil aviation, climate change, particle physics research, nuclear power, and financial services—to participate in a workshop on Microsoft’s campus and to submit a case study.  

These case studies formed the basis of a book we released last month, Global Governance: Goals and Lessons for AI. Our work with these experts allowed us to home in on three core outcomes for international AI governance: globally significant risk governance, regulatory interoperability, and inclusive progress. 

Key desired international AI governance outcomes.

Each case study included in this book offers unique insights into how global institutions have played critical governance roles in their domains. The Intergovernmental Panel on Climate Change, for example, has helped strengthen global consensus around climate change research. The IAEA has helped prevent the use of nuclear weapons while providing assistance to nations using nuclear material for energy.  

An important foundation for any new intergovernmental organization is a shared definition of risk. In past climate negotiations, for example, identifying the level of global temperature rise that constituted "dangerous" interference with the climate system was a source of contention. Agreeing on a shared definition of global risk is crucial to aligning on science-based solutions.  

Lessons from the International Civil Aviation Organization (ICAO) 

Another important consideration is how international institutions incentivize or enforce the rules they establish. While the International Civil Aviation Organization (ICAO) lacks formal enforcement authority, it intensifies the consequences of rule-breaking through reputational mechanisms, a process that is complemented by domestic enforcement.  

For example, the United States and European Union have audit systems based on ICAO standards and can restrict air travel to their jurisdictions if countries receive poor ratings. Given the size of these economies, such ramifications can be extremely costly for a country’s airline industry. 

ICAO member states also can and do provide technical assistance to help ensure other states' compliance with standards. In 2021, when the Federal Aviation Administration (FAA) downgraded Mexico from Category 1 to Category 2, the FAA provided technical assistance to Mexico. In 2023, the FAA restored Mexico's Category 1 status.  

Lessons from the International Atomic Energy Agency (IAEA) 

Of all the institutions studied, the IAEA has the most enforcement power: its safeguards are legally binding for most states and include extensive monitoring and verification measures. This reflects the severity of the risk the IAEA is guarding against: nuclear catastrophe.  

States are willing to accept some limitations on their sovereignty in exchange for an orderly, peaceful nuclear regime. The IAEA also offers technical assistance to member states to facilitate their peaceful use of nuclear technology.  

In all realms covered in Global Governance: Goals and Lessons for AI, collaboration among different countries has helped foster important advances in scientific understanding, promoted information sharing and collaboration, and improved safety.  

These are outcomes we want to translate to AI, where international governance is already beginning to take shape. After the UK and US established AI Safety Institutes late last year, Japan, Singapore, Canada, and others embarked on similar processes. In April, the US and UK announced a Memorandum of Understanding to work together through these AI Safety Institutes on research, standards, and testing. The last chapter of Global Governance: Goals and Lessons for AI explores these and other recent developments in depth.  

The task of further developing an international governance system for a technology that’s always evolving is no small feat—but history holds many lessons, which is why we used this book to look back in order to move forward. If you’re interested in learning more, we hope you’ll download or order a physical copy of Global Governance: Goals and Lessons for AI.  

 
