How AI Regulation is Shaping up Around the Globe
Artificial intelligence (or “AI”) holds the potential to expand human learning and capability. However, the idea of machines taking over key human functions has sparked worry among numerous governments… and for good reason.
Some experts argue that the progress of artificial intelligence (AI) is outpacing regulatory efforts. Given the pace of development, it’s hard to argue that this is not at least partly true.
Some Regulation is Inevitable
Regulatory measures seem unavoidable, yet the shape and timing of such governance remains unclear. It’s also uncertain which global player will determine the course of this process: the US, the EU, or China.
Policymakers in each of these blocs are currently considering various approaches to AI regulation. These can broadly be divided into four subsets:
- AI-Specific Regulations (The AI Act, which we’ll discuss below)
- Data-Related Regulations (The General Data Protection Regulation)
- Updates to Existing Laws & Legislation (Antitrust & Anti-Discrimination Law)
- Sector-Specific Regulations
Unique regulatory frameworks have already started taking shape in Washington, Beijing, and Brussels. Each is founded on their own set of principles and motivations, though. Who will ultimately direct the conversation on AI implementation and use?
Europe: Setting the Pace
The EU has arguably been quickest to adopt regulations that uphold the rights-centric approach of its existing digital policies.
Under the EU’s proposed AI Act, which is expected to take effect in the next 12-24 months, AI developers that want to use data from the EU's 27 member states to train their algorithms will face regulatory restrictions, even when they operate beyond EU borders.
The AI Act implements a risk-based model that classifies AI applications into three risk categories: unacceptable, high risk, and low or minimal risk. This classification acknowledges that governmental social scoring tools and automated hiring tools possess different risks compared to AI usage in spam filters, for example.
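The tiering logic described above can be illustrated with a short sketch. This is a hypothetical mapping for explanation only: the tier names follow the AI Act, but the example use cases, the default tier, and the obligation summaries are assumptions, not legal guidance.

```python
# Hypothetical sketch of the AI Act's risk-based tiering.
# Tier names follow the Act; mappings and obligations are illustrative.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright (e.g., social scoring)
    HIGH = "high"                   # permitted, with strict obligations
    MINIMAL = "low_or_minimal"      # largely unregulated

# Illustrative mapping of AI use cases to tiers (assumed for this example).
USE_CASE_TIERS = {
    "government_social_scoring": RiskTier.UNACCEPTABLE,
    "automated_hiring": RiskTier.HIGH,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Summarize the compliance burden for a given use case."""
    # Unknown use cases default to HIGH as a conservative assumption.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    if tier is RiskTier.UNACCEPTABLE:
        return "prohibited"
    if tier is RiskTier.HIGH:
        return "conformity assessment, documentation, human oversight"
    return "transparency notice at most"

print(obligations("spam_filter"))  # transparency notice at most
```

The point of the tiered design is that compliance cost scales with potential harm: a spam filter carries almost no burden, while a hiring tool triggers the full set of obligations.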
Leading AI companies have urged legislators to accelerate AI regulation, arguing that rules are necessary to ensure user safety and maintain a competitive edge against foreign adversaries. At the same time, those same leaders are contesting data-privacy rules they perceive as unnecessarily restrictive, with OpenAI even briefly threatening to leave the European market.
“It’s an interesting Rorschach to figure out, you know, what is important to the EU versus what is important to the United States,” says Shaunt Sarkissian, founder and CEO at AI-ID. “If you look at all rules that come out of the EU, generally they tend to be very consumer privacy-oriented and less fixated on how this is going to be used in commerce.”
“There needs to be clear demarcation lines of what is considered generative and output-based AI and what is just running analytics on existing systems,” added Sarkissian.
The US: Still in the Early Days
The regulation of AI has recently become a hot-button issue in Washington, marked by legislative hearings and frequent press conferences, as well as the White House's announcement in July of voluntary AI safety pledges agreed to by seven tech giants.
A deeper dive, however, raises questions about how much these activities have actually shaped policy. The truth is that the impact has been modest so far, especially as the US seems to be following a market-centric approach to AI.
As mentioned above, leading AI companies have urged US legislators to accelerate AI regulation. To date, the federal government's main response is a handbook the White House published in October 2022, titled The Blueprint for an AI Bill of Rights. It offers guidelines for protecting American citizens' rights in the coming AI era, but ultimately relies on tech companies to self-regulate.
The US is merely at the start of what lawmakers and policy experts expect to be a long, difficult road toward legal structures for AI. It's true that there have been countless hearings, White House meetings with top tech executives, and speeches introducing AI bills. Even so, it's still premature to foresee even the rough outlines of future regulations.
The US is still in the early days of developing effective AI regulation. It could be many years before rules are set to protect consumers and mitigate the tech's threats to employment and security, or the spread of misinformation.
China: Coming Into Play
While the US seems to be following a market-centric approach, China predictably is following a more state-driven regulatory route.
China is currently leading the world in AI-driven surveillance and facial recognition tech. However, the nation trails behind others in developing advanced generative AI systems. This is partially due to Chinese laws that restrict the data used to train foundation models.
Some observers think that the shared worry of the US and EU over China's rising global digital influence could potentially foster closer transatlantic cooperation. This collaborative approach could balance the techno-optimism and innovation drive of the US against the user-focused privacy protections of Europe.
The coming years will see significant strides as distinct digital superpowers emerge and compete for control over the future of AI technology. Other countries contemplating their own AI legislation increasingly look to Washington, Brussels, and Beijing for guidance.
Regulation Taking Shape: What Should We Expect?
Developing an AI regulatory body is one option for the US going forward. However, this new agency could potentially become swayed by the very tech industry it's designed to oversee.
Instead, Congress may opt to endorse private and public adoption of the NIST risk management framework and pass bills like the Algorithmic Accountability Act. This action could enforce accountability similarly to the Sarbanes-Oxley Act and other regulations that transformed company reporting requirements. Congress also has the opportunity to implement extensive laws surrounding data privacy.
Drawing inspiration from the EU model, the US National Institute of Standards & Technology has crafted its own AI risk management framework. This was developed with substantial contributions from numerous stakeholders, including the Chamber of Commerce and the Federation of American Scientists. Various business and professional associations, tech companies, and think tanks also participated.
Federal entities have already introduced their own guidelines addressing some of the risks intrinsic to AI. The Equal Employment Opportunity Commission and the Federal Trade Commission each have their own proposals in place. Other agencies, such as the Consumer Product Safety Commission, also have significant roles to play.
The regulation of AI should be a collective effort involving academia, industry, policy professionals, and international agencies. This methodology has been compared to international organizations such as the European Organization for Nuclear Research and the Intergovernmental Panel on Climate Change. The way in which the internet has been managed by nongovernmental entities like nonprofits, civil society, and industry presents another case study.
Preparation is Key: 5 Tips to Navigate an AI-Enabled Market
Regardless of how AI regulation plays out, or which governing body assumes leadership, merchants and financial institutions have an opportunity to get well ahead of the matter before they are mandated to act.
Companies can best prepare for AI regulation through a Responsible AI (RAI) initiative. RAI focuses on accountability, transparency, privacy, security, fairness, and inclusiveness in algorithm development and usage. Seen as an “innovation enabler,” rather than a compliance issue, it can enhance AI performance and accelerate feedback.
We recommend that concerned parties:
#1 | Delegate RAI Leadership
Assign an RAI leader, often a Chief AI Ethics Officer, to guide the initiative. Effective RAI leaders are adept in policymaking, technical needs, and business requirements. They build a cross-functional team to design and direct RAI programs, ensuring compliance with future regulations while aligning with the company's broader values.
#2 | Establish an Ethical AI Framework
Instill RAI principles and policies into your corporate culture, forming a solid foundation for new regulatory requirements. For companies operating across regions, a framework emphasizing bias mitigation, robust privacy protection, and clear documentation can ease compliance with diverse regulations.
#3 | Include Humans in AI Processes
Current and proposed AI regulations require strong governance and human accountability. Feedback loops and escalation paths should be integral to any RAI program. Firms applying these practices are more likely to secure the trust of governments and customers.
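A minimal sketch of what a human-in-the-loop escalation path can look like in practice: automated decisions below a confidence threshold are queued for human review instead of being auto-applied. The threshold value and queue structure here are assumptions for illustration, not a prescribed design.

```python
# Sketch of a human-in-the-loop escalation path (illustrative only).
# Decisions the model is not confident about are routed to a review queue.
REVIEW_THRESHOLD = 0.90  # assumed cutoff; tune per use case and risk level

def route_decision(prediction: str, confidence: float, review_queue: list) -> str:
    """Auto-apply confident decisions; escalate the rest to a human."""
    if confidence >= REVIEW_THRESHOLD:
        return prediction
    # Below threshold: record the case for a human reviewer instead.
    review_queue.append({"prediction": prediction, "confidence": confidence})
    return "pending_human_review"

queue = []
print(route_decision("approve", 0.97, queue))  # approve
print(route_decision("decline", 0.62, queue))  # pending_human_review
print(len(queue))                              # 1
```

The escalation path doubles as a feedback loop: reviewed cases can be fed back into training data, which is exactly the kind of governance trail regulators are likely to ask for.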
#4 | Implement RAI Reviews & Tools
Monitor AI effects throughout the system's lifecycle to catch and resolve issues early. Embed core RAI principles into algorithm development processes. Also, conduct continuous end-to-end reviews of algorithms, business processes, and outcomes.
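One concrete form such a continuous review can take is an automated fairness check on model outcomes. The sketch below compares approval rates across two groups and flags the model for review when the gap exceeds a tolerance. The metric (a demographic-parity difference) and the 10% tolerance are assumptions chosen for illustration; real RAI programs typically use several metrics.

```python
# Illustrative RAI outcome review: flag a model when approval rates
# diverge across groups. Metric and tolerance are assumptions.

def approval_rate(decisions: list) -> float:
    """Fraction of approvals, where 1 = approved and 0 = declined."""
    return sum(decisions) / len(decisions)

def parity_gap(group_a: list, group_b: list) -> float:
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

def needs_review(group_a: list, group_b: list, tolerance: float = 0.10) -> bool:
    """True when the outcome gap exceeds the allowed tolerance."""
    return parity_gap(group_a, group_b) > tolerance

a = [1, 1, 1, 0, 1]   # 80% approved
b = [1, 0, 0, 0, 1]   # 40% approved
print(needs_review(a, b))  # True
```

Run on a schedule against production decisions, a check like this turns the "catch and resolve issues early" principle into an alert a team can act on before a regulator, or a customer, notices first.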
#5 | Engage in the RAI Ecosystem
The burgeoning RAI ecosystem offers insights into AI risks and rewards. It also encourages proactive addressing of societal concerns. Active participation can foster further collaboration.
Implementing RAI practices enhances AI performance, minimizes system failures, and fosters trust, which can simultaneously drive growth.
Instead of delaying due to regulatory uncertainty, companies should take proactive steps. The goal should be to play a part in shaping the regulatory landscape, rather than being overwhelmed by it.