AI Governance - The battle for control

“Until now the only people testing the safety of new AI models have been the very companies developing it,” said Rishi Sunak, the British Prime Minister. “We shouldn’t rely on them to mark their own homework, as many of them agree.”

This week saw the world’s major economic blocs all jostling for control of the governance of AI.

The UK government has just concluded its international AI summit, which aimed to address the potential “catastrophic and existential” risks of this frontier technology, with decision-makers and lawmakers from the US, China and the Kingdom of Saudi Arabia all in attendance, flanked by prominent figures from Big Tech.

The United States, although it attended the summit in the person of Vice President Kamala Harris, managed to upstage the UK’s efforts, with President Joe Biden signing his own 100-page executive order on AI and announcing a new AI safety institute to match the UK’s. Alex Krasodomski, a senior research associate at Chatham House who specialises in tech policy, said: “There is some tension there. It’s no accident that the US announced its executive order a few days before the UK summit.”

As both leaders face dwindling support in the run-up to election campaigns, they need to be seen to be leading the charge and getting ahead of this technology. The two countries are also taking differing approaches to AI: the US is more focused on near-term risks around job losses and fraud, while the UK is focused on the existential, long-term risks posed by frontier AI models, which are still “learning”.

The EU has gone its own way too, having introduced a rigorous AI Act in 2021 that should soon be passed into law. With these divergent macro approaches to AI, it is hard not to dismiss these policies as political grandstanding: Mr Sunak, for example, can now claim a diplomatic coup for not only getting China, the US and other world powers into the same room, but also for getting them to agree on a set of principles (however basic those principles are).

These moves should hopefully create a knock-on effect and encourage smaller economies to act on AI, especially as they can now point to the major economic blocs (such as the US and the EU) as waypoints for their own decisions.

The Bletchley Declaration, published by Britain and agreed with 28 countries and the EU, aims to boost global cooperation on artificial intelligence safety. Moving to a more micro level and focusing on the UK, it is difficult to see how the declaration will move the needle for now: there is no legal mechanism to enforce it, and the language is open-ended at best. On the positive side, it is the first step in building a set of guardrails for the AI industry, and it moves the focus onto the boardroom and how we engage with AI at all levels of our value chains.

With the FRC firmly kicking the can down the road on any AI-based updates to the UK’s Corporate Governance Code, and with the ongoing risk of Big Tech’s regulatory capture of the major economic blocs, it is all the more vital that businesses act with foresight, insight and oversight, both to ensure value for shareholders and stakeholders and to avoid the doomsday scenarios predicted at Mr Sunak’s summit.

With the new frontier AI models already in “learning” mode, and able to plan and problem-solve, it is important that businesses take a proactive approach rather than waiting for prompts. Boards and business leaders should be thinking about the following to ensure they are making informed decisions:

Business Applications:

Not only understanding how AI can be transformative for your industry, but also understanding the nuances specific to your own business, the frameworks required for parallel implementations, and the reality of upskilling your workforce and workflows.

Risk Factors:

A deep understanding of the legal, operational and reputational risks associated with deploying AI, the scenario plans required to mitigate those risks, and the real-world implications of the associated decision-making. The potential pitfalls of AI algorithms, specifically for your business, and the possibility of errors in the human-computer handoff.

The risks involved with intellectual property, security, deceptive trade practices, contracts, privacy and data protection, as well as a thorough understanding of the potential social and environmental consequences of AI and of training your own models.

Risk Mitigation:

The strategies to mitigate the risks of implementing AI, frameworks for accountability, governance and compliance, and responsible and trustworthy practices.

Regulatory and Policy Developments:

A framework for staying up to date with AI regulations across different jurisdictions, how those regulations affect command-and-control structures for your portfolio companies, and best practices for navigating this evolving landscape.

Our technology and digital transformation team is happy to answer any questions you may have, and can be reached at enquiries@beyondgovernance.com.

"We live in the most interesting times, and I think this is 80% likely to be good, and 20% bad, and I think if we're cognisant and careful about the bad part, on balance actually it will be the future that we want."

The CoSec point of view:

With great power comes great responsibility

Clearly, generative AI has amazing potential and applications: it will help people be more efficient and productive at work by augmenting human intelligence, and it could revolutionise many industries, including the world of corporate governance. The risks cannot be ignored, but neither should the opportunities and possibilities.
