The AI Code of Practice: Regulating Artificial Intelligence Across Borders

Navigating the Global Matrix: The Quest for a Cross-Border AI Code of Practice

Artificial intelligence, a force of transformative potential, is rapidly and irrevocably reshaping our world. From the algorithms that curate our news feeds to the complex systems that guide medical diagnoses and financial markets, AI is no longer a futuristic concept but a present-day reality with profound societal implications. Yet, as this technology weaves itself ever more deeply into the fabric of our lives, it operates within a digital realm that knows no borders, a reality that presents one of the most pressing challenges of our time: how to govern a global technology in a world of national laws.

The development of a coherent, cross-border "AI Code of Practice" has emerged as a critical necessity. Without shared principles and interoperable regulations, we risk a fractured digital future in which differing national rules recreate, at global scale, the problem already visible in the United States: "a patchwork of laws across 50 states that could complicate compliance and slow AI innovation." Such fragmentation could lead to regulatory arbitrage, stifle innovation, and, most importantly, fail to protect fundamental human rights from the potential harms of unchecked AI. The quest for a global AI governance framework is not merely a technical or legal challenge; it is a geopolitical and philosophical imperative that will define the future of international cooperation in the digital age.

The Dawn of AI Regulation: A World of Divergent Philosophies

The global conversation on AI regulation is characterized by a fascinating divergence of approaches, each rooted in distinct cultural, political, and economic philosophies. At the forefront of this regulatory landscape are three main players: the European Union, the United States, and China, each championing a model that reflects its core values.

The European Union has positioned itself as a global standard-setter with its comprehensive, rights-based approach. The landmark EU AI Act, which came into force in 2024, is the world's first comprehensive law on artificial intelligence. It is built on the precautionary principle, prioritizing societal safeguards and fundamental rights over unchecked innovation. This is operationalized through a risk-based framework that categorizes AI systems into four tiers:

  • Unacceptable Risk: These systems are deemed a clear threat to people and are banned. This includes manipulative AI, social scoring by governments, and real-time remote biometric identification in publicly accessible spaces, subject only to narrow law-enforcement exceptions.
  • High-Risk: This category includes AI systems used in critical sectors like healthcare, transportation, employment, and law enforcement. These systems are subject to stringent requirements, including rigorous risk assessments, high-quality data governance, human oversight, and transparency.
  • Limited Risk: AI systems like chatbots or deepfakes fall into this category and are subject to lighter obligations centered on transparency, ensuring users know when they are interacting with an AI or viewing AI-generated content.
  • Minimal Risk: The vast majority of AI applications, such as AI-enabled video games or spam filters, fall into this category and are largely unregulated.

This tiered approach reflects the EU's desire to create a legal framework that is both protective and proportionate. The EU's broader strategy also includes the General-Purpose AI (GPAI) Code of Practice, a voluntary framework designed to help companies comply with the AI Act's rules on transparency, copyright, and safety for powerful general-purpose models such as those underpinning ChatGPT and Gemini. The goal is to create a harmonized and trustworthy AI ecosystem that can serve as a global benchmark.
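To make the four-tier structure concrete, the sketch below shows how a compliance team might model the tiers and their headline obligations in code. It is a minimal, illustrative outline under stated assumptions: the tier names mirror the Act, but the keyword-based classify function, the AISystem type, and the obligation lists are simplified, hypothetical placeholders rather than a reading of the Act's annexes.

```python
from dataclasses import dataclass
from enum import Enum, auto


class RiskTier(Enum):
    UNACCEPTABLE = auto()  # banned outright, e.g. social scoring by governments
    HIGH = auto()          # permitted, but subject to strict obligations
    LIMITED = auto()       # mainly transparency obligations, e.g. chatbots, deepfakes
    MINIMAL = auto()       # largely unregulated, e.g. spam filters, video games


# Simplified, illustrative mapping of tiers to headline obligations.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "risk management system",
        "high-quality data governance",
        "human oversight",
        "technical documentation and logging",
    ],
    RiskTier.LIMITED: ["disclose that users are interacting with an AI or viewing AI-generated content"],
    RiskTier.MINIMAL: [],
}


@dataclass
class AISystem:
    name: str
    use_case: str  # e.g. "employment candidate screening", "customer service chatbot"


def classify(system: AISystem) -> RiskTier:
    """Toy keyword-based classifier; real classification requires legal analysis of the Act's annexes."""
    high_risk_domains = {"healthcare", "employment", "law enforcement", "transportation"}
    if "social scoring" in system.use_case:
        return RiskTier.UNACCEPTABLE
    if any(domain in system.use_case for domain in high_risk_domains):
        return RiskTier.HIGH
    if "chatbot" in system.use_case or "deepfake" in system.use_case:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    screener = AISystem(name="CVRanker", use_case="employment candidate screening")
    tier = classify(screener)
    print(tier.name, OBLIGATIONS[tier])  # HIGH ['risk management system', ...]
```

The value of framing the Act this way is that the regulatory question becomes an inventory question: an organization that cannot say which tier each of its systems falls into cannot begin to demonstrate compliance.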

In stark contrast to the EU's top-down, comprehensive regulation, the United States has adopted a more fragmented, "pro-innovation" approach. This strategy prioritizes flexibility and market-driven solutions, reflecting a general reluctance to impose heavy-handed regulations that could stifle technological advancement and economic growth. As of early 2025, there is no single, comprehensive federal AI law in the U.S. Instead, the landscape is a patchwork of sector-specific guidelines, state-level legislation, and executive actions.

Key elements of the U.S. approach include:

  • A Patchwork of State Laws: In the absence of federal legislation, states like California, Colorado, and Illinois have taken the lead in enacting their own AI-related laws. This has led to concerns about a fragmented regulatory environment that could create compliance nightmares for businesses operating nationwide.
  • Executive Orders: The executive branch has used its authority to shape AI policy. For instance, a 2025 executive order, "Removing Barriers to American Leadership in Artificial Intelligence," signaled a clear preference for deregulation to maintain a competitive edge.
  • Focus on Existing Laws: U.S. regulators often rely on existing laws, such as those related to consumer protection, anti-discrimination, and privacy, to address AI-related harms.
  • Public-Private Partnerships: The U.S. government has also emphasized collaboration with the private sector, as seen in initiatives like the Stargate Project, a massive AI infrastructure venture announced with White House backing.

This decentralized approach is rooted in a different philosophical tradition than the EU's. While the EU's collectivist-leaning model emphasizes the protection of society and fundamental rights, the U.S. approach aligns more with an individualist philosophy that champions personal autonomy and free-market principles.

China's approach to AI regulation is distinct from both the EU and the U.S. It is a state-driven model that prioritizes national security, social stability, and economic competitiveness. The Chinese government has implemented targeted and sometimes strict regulations, particularly in areas like deepfakes and algorithmic recommendations, reflecting a desire to maintain tight control over the information ecosystem. At the same time, China has heavily invested in its domestic AI industry, elevating companies like Baidu and Alibaba to the status of "national champions" to accelerate innovation and challenge U.S. dominance in the field. This approach reflects a collectivist philosophy where the interests of the state and societal harmony, as defined by the government, take precedence over individual freedoms.

Beyond these three major players, other nations are carving out their own unique regulatory paths:

  • The United Kingdom, having left the EU, is pursuing a "pro-innovation" approach similar to the U.S. Its strategy relies on empowering existing regulators to apply a set of cross-sectoral principles, such as safety, transparency, and fairness, in a flexible and context-specific manner.
  • Canada's proposed Artificial Intelligence and Data Act (AIDA) aims to take a risk-based approach, similar to the EU, with a focus on "high-impact" AI systems. However, as of early 2025, the act has not been enacted, and Canada is currently relying on a voluntary code of conduct for generative AI.
  • Brazil's Bill 2338/23, also known as the AI Legal Framework, is closely aligned with the EU's risk-based model, even prohibiting certain uses of AI, such as social scoring.
  • India is taking a "pro-innovation" and phased approach, currently focusing on self-regulation and sector-specific guidelines rather than a comprehensive AI law.
  • Japan is also charting a cautious, innovation-friendly course, relying on a combination of voluntary guidelines and existing laws to manage risks without stifling development.
  • Singapore has established itself as a leader in AI governance with its Model AI Governance Framework, which promotes a systematic and balanced approach to addressing risks while fostering innovation.
  • The United Arab Emirates (UAE) has launched an ambitious National Strategy for Artificial Intelligence 2031, aiming to become a global leader in AI by creating a fertile ecosystem for development and adoption across various sectors.

This global tapestry of regulations, while diverse, reveals a shared recognition of the need for some form of AI governance. The key challenge now is to find a way to bridge the gaps between these divergent approaches and create a more harmonized global framework.

The Quest for Global Harmony: The Role of International Bodies

In the absence of a single global regulator, a constellation of international organizations has stepped into the breach, working to foster dialogue, establish common principles, and promote interoperability between different national AI governance regimes. These bodies are playing a crucial role in shaping the global conversation on AI and laying the groundwork for a potential international code of practice.

The Organisation for Economic Co-operation and Development (OECD) has been a key player in this space. In 2019, it adopted the OECD AI Principles, the first intergovernmental standard on AI. These principles are built on five value-based pillars:

  1. Inclusive growth, sustainable development, and well-being.
  2. Respect for the rule of law, human rights, and democratic values.
  3. Transparency and explainability.
  4. Robustness, security, and safety.
  5. Accountability.

These principles, which have been endorsed by over 47 governments, provide a flexible yet comprehensive framework for developing trustworthy AI and have become a benchmark for national policies around the world.

UNESCO, the United Nations Educational, Scientific and Cultural Organization, has also made a significant contribution with its Recommendation on the Ethics of Artificial Intelligence, adopted by all 193 member states in 2021. This is the first-ever global standard on AI ethics, and it places a strong emphasis on protecting human rights and dignity. The Recommendation outlines ten core principles, including proportionality, safety and security, fairness and non-discrimination, and human oversight. It also provides concrete policy guidance in areas like data governance, gender equality, and the use of AI in education, culture, and health.

The Global Partnership on Artificial Intelligence (GPAI), launched in 2020 and hosted by the OECD, is another key initiative. GPAI brings together experts from science, industry, civil society, and government to bridge the gap between theory and practice on AI. It supports research and applied activities on AI-related priorities, with a focus on responsible AI, data governance, the future of work, and innovation.

The United Nations itself is taking an increasingly active role in AI governance. The Global Digital Compact, adopted at the Summit of the Future in 2024, represents a major step towards creating a comprehensive global framework for digital cooperation and AI governance. Key commitments of the Compact include:

  • Establishing an international scientific panel on AI to promote a better understanding of its risks and opportunities.
  • Initiating a global dialogue on AI governance to foster inclusive discussions among all stakeholders.
  • Developing a global network for AI capacity building to bridge the digital divide.

Finally, the G7 has also become a key forum for discussing AI governance. The Hiroshima AI Process, launched under Japan's presidency in 2023, is a G7 initiative aimed at promoting safe, secure, and trustworthy AI. It has produced a comprehensive policy framework that includes international guiding principles for all AI actors and a voluntary code of conduct for organizations developing advanced AI systems.

These international efforts, while not legally binding, are creating a growing consensus on the core principles that should underpin any AI code of practice. They are fostering a shared understanding of the challenges and opportunities of AI and providing a platform for collaboration and harmonization.

The Elephant in the Room: The Influence of Big Tech

No discussion of AI governance would be complete without acknowledging the immense influence of the private sector, particularly the handful of "Big Tech" companies that dominate the AI landscape. Companies like Google, Microsoft, Meta, and IBM are not just developing the most advanced AI systems; they are also actively shaping the policy discourse surrounding them.

This influence is wielded in several ways:

  • Setting De Facto Standards: Through their own internal ethical frameworks, such as Google's AI Principles and IBM's AI Ethics Board, these companies are setting precedents that often become industry-wide norms.
  • Lobbying and Policy Input: Big Tech companies invest heavily in lobbying efforts and are often invited to provide input on government regulations. This gives them a significant opportunity to advocate for policies that align with their business interests, which often favor self-regulation and a light-touch approach.
  • Funding Research: These companies are major funders of AI research, both in-house and at academic institutions. This allows them to shape the research agenda and influence the epistemic foundations of policymaking.
  • Dominating Standards Bodies: Research has shown that Big Tech companies have a significant presence in the technical committees that develop AI standards, giving them a powerful voice in shaping the technical implementation of regulations like the EU AI Act.

This outsized influence raises concerns about regulatory capture and whether the public interest is being adequately represented in the policymaking process. Critics argue that a lack of transparency in the interactions between governments and tech companies makes it difficult to assess the extent of this influence and ensure that AI regulation is not being watered down to serve corporate interests.

However, the role of Big Tech is not entirely negative. These companies possess unparalleled technical expertise and resources, and their collaboration is essential for developing effective and practical AI governance frameworks. Many are actively participating in multi-stakeholder initiatives like the GPAI and are contributing to the development of open-source tools and best practices for responsible AI. The challenge is to find a way to harness their expertise and innovative capacity while ensuring that AI governance remains a democratic and accountable process.

The Tangled Web of Cross-Border Enforcement

Even if a global consensus on an AI code of practice were to emerge, the practical challenges of enforcing it across borders would be immense. The decentralized and borderless nature of AI makes it difficult to apply national laws and hold actors accountable for harms that may occur in different jurisdictions.

Several key challenges stand in the way of effective cross-border enforcement:

  • Jurisdictional Ambiguity: When an AI system developed in one country causes harm in another, it can be difficult to determine which country's laws apply and which courts have jurisdiction. This is particularly true for decentralized AI systems that operate on a global network without a clear physical location.
  • Conflicting Legal Frameworks: As we have seen, different countries have adopted vastly different approaches to AI regulation. This can create a compliance nightmare for multinational companies and lead to legal uncertainty. For example, an AI application that is legal in the U.S. might be considered high-risk or even banned in the EU.
  • Data Protection and Privacy: The cross-border flow of data, which is essential for training and deploying many AI systems, is subject to a patchwork of data protection laws. The EU's GDPR, for example, imposes strict rules on the transfer of personal data outside the EU, which can create significant challenges for global AI companies. Recent cases, such as Italy's fine against the developer of the Replika chatbot for GDPR violations, highlight the real-world risks of deploying AI services across borders without a clear understanding of local privacy laws. (How such transfer rules can surface as concrete checks in a data pipeline is sketched after this list.)
  • Liability and Accountability: Determining who is responsible when an AI system fails or causes harm is another complex issue. Is it the developer, the user, the data provider, or the AI system itself? Establishing clear lines of liability is essential for ensuring that victims have access to redress, but this is a challenge that has yet to be fully resolved in any jurisdiction.
  • The "Pacing Problem": The rapid pace of technological development means that laws and regulations often struggle to keep up. By the time a new law is enacted, the technology may have already evolved, rendering the law obsolete. This makes it difficult to create future-proof regulations that can effectively govern AI in the long term.

These challenges underscore the need for greater international cooperation on enforcement. This could include developing mutual legal assistance treaties for AI-related cases, establishing international standards for evidence gathering in the digital realm, and creating forums for resolving cross-border AI disputes.

AI in Action: Sector-Specific Codes of Practice

While the debate over a universal AI code of practice continues, sector-specific guidelines and regulations are already emerging in areas where AI is having a significant impact. These sector-specific approaches provide valuable insights into the practical challenges and opportunities of AI governance.

In healthcare, AI is being used for everything from diagnosing diseases to personalizing treatment plans. This has led to the development of specific regulations for AI-based medical devices, as well as guidelines on data protection and patient consent. Case studies from the Middle East show how countries like Saudi Arabia and the UAE are creating specific governance mechanisms for the use of AI in the health sector. In Australia, case studies highlight the importance of obtaining informed consent before using generative AI to handle patient data and the need for practitioners to verify the accuracy of AI-generated transcripts.

In the financial sector, AI is being used for credit scoring, algorithmic trading, and fraud detection. This has raised concerns about algorithmic bias, data privacy, and the potential for AI to create new systemic risks. Regulators are grappling with how to adapt existing financial regulations to the age of AI and are calling for greater international cooperation to manage the cross-border risks of AI in finance.

In the realm of employment, AI is being used in recruitment, performance management, and even for making decisions about dismissals. This has raised concerns about discrimination, privacy, and the future of work. In the EU, the AI Act will have significant implications for how HR departments use AI, and employers will need to ensure that their AI systems comply with both employment law and data protection regulations. Trade unions and works councils are also becoming increasingly active in this space, demanding greater transparency and consultation on the use of AI in the workplace.

These sector-specific examples demonstrate that a one-size-fits-all approach to AI regulation is unlikely to be effective. A truly comprehensive AI code of practice will need to be flexible enough to accommodate the unique challenges and opportunities of different sectors, while still upholding a core set of universal principles.

The Road Ahead: Forging a Global AI Future

The path to a global AI code of practice is fraught with challenges, but it is a journey that we must undertake. The alternative is a fragmented and chaotic digital world where innovation is stifled, rights are unprotected, and the immense potential of AI is squandered.

So, what does the future hold? Several key trends are likely to shape the road ahead:

  • A Continued Battle of Ideas: The philosophical and geopolitical tensions between the EU's rights-based approach, the U.S.'s innovation-focused model, and China's state-driven strategy are likely to persist. The "Brussels effect," whereby the EU's regulations become de facto global standards, will continue to be a powerful force, but it will be challenged by the competing visions of other major powers.
  • The Rise of Agile Governance: Given the rapid pace of technological change, there is a growing recognition that traditional, slow-moving regulatory models are no longer fit for purpose. We are likely to see a move towards more agile and adaptive forms of governance, such as regulatory sandboxes, which allow for experimentation and learning in a controlled environment.
  • A Multi-Stakeholder Approach: It is clear that governments cannot and should not regulate AI in a vacuum. The future of AI governance will depend on close collaboration between governments, the private sector, academia, and civil society. Multi-stakeholder initiatives like the GPAI and the UN Global Digital Compact will become increasingly important forums for building consensus and developing shared solutions.
  • A Focus on Interoperability: Rather than striving for a single, uniform global law, the focus is likely to shift towards ensuring interoperability between different national and regional regulatory frameworks. This will involve developing common standards, mutual recognition agreements, and other mechanisms to reduce regulatory friction and facilitate cross-border innovation.
  • The Growing Importance of Socio-Technical Standards: As AI systems become more autonomous and intelligent, there is a growing need for new socio-technical standards that can govern the AI systems themselves, not just the organizations that develop them. These standards will need to address issues like interoperability, explainability, and the ability to align AI systems with human values.

Ultimately, the creation of a global AI code of practice will require a delicate balancing act. We must find a way to protect fundamental rights without stifling innovation, to foster international cooperation while respecting national sovereignty, and to create a regulatory framework that is both robust and adaptable. The task is monumental, but the stakes are even higher. The decisions we make today about how to govern AI will not only shape the future of technology but also the future of our societies, our economies, and our shared humanity. The quest for a global AI code of practice is, in essence, a quest to build a better future for all.
