AI and Corporate Governance

Reading Time: 3 minutes

Introduction

Are we taking AI seriously enough when considering its impact on corporate governance? It would be easy to miss the additional question that has appeared at the end of the current consultation on the Corporate Governance Code (the Code). Where did it come from, and who is thinking about it? As a reminder, the question is:

Q26: Are there any areas of the Code which you consider require amendment or additional guidance, in support of the Government’s White Paper on artificial intelligence?

I wondered who it was addressed to. IT professionals, perhaps, or people who knew something about AI. Then I read one of the proposed responses to the consultation:

“No, we consider it premature and unnecessary to make any amendment to the Code or related guidance at a time when the legislative provision on artificial intelligence in the UK (and more widely) is still emerging.  Any such amendment or guidance could be outdated in the near-term.  Also, artificial intelligence is just one of a number of areas of risk in relation to the use of technology (albeit one that is attracting a certain amount of attention at present given recent technological developments). None of these other areas are specifically called out in the Code or related guidance, both of which are intentionally high-level.”

Suddenly I realised: it's all of us. I am an AI novice, but even I find this draft response deeply uncomfortable. It is true that we don't have the white paper yet. But can we afford to ignore the subject until some convenient moment when all the facts are published and everything reaches a point of clarity? With an average of five years between iterations of the Corporate Governance Code, I don't believe we can. By then it may be too late.

Corporate Governance

Corporate governance practitioners need at least to start thinking about the impact AI may have on corporate governance, even if this only leads to some overarching principles that can then be developed in the supporting guidance, which is designed to be more flexible and pragmatic.

The areas where AI may affect corporate governance include every potential interface with stakeholders. Employees are the obvious starting point. Could workforce engagement mechanisms be provided by AI and reviewed by the designated non-executive director? The board might conclude that this was an effective mechanism: potentially all-encompassing (rather than involving only a sample of the workforce), up to date and cost-effective. What sort of information will be harvested from the workforce? We have already heard about movement tracking, facial expressions and mouse clicks. Will this data even enjoy data protection if it is deemed stateless, held in the jurisdiction of OpenAI?

Turning to shareholders, there are already concerns about how proxy advisors assess data on behalf of institutional voters. Although the Code is designed to work on a “comply or explain” basis, the proxy advisors do not. They claim not to use a box-ticking approach, but we know they are resource-constrained and allow only 24 hours between the publication of their draft report and the release of their final one, so in reality there is little time for a sensitive approach. It is likely they will use AI, and they are probably doing so already. We know, for example, that AI can analyse a document and summarise the number of positive and negative statements it contains.

Conclusion

As a tool, AI could bring many benefits. It could enable up-to-the-moment reporting on sustainability or risk management across a business. It could even make the newly worded Provision 30 (“…The board should provide in the annual report: a declaration of whether the board can reasonably conclude that the company’s risk management and internal control systems have been effective throughout the reporting period and up to the date of the annual report”) achievable for both executive and non-executive directors at the touch of a button.

What might the overarching principles say? As a suggestion, the Code could state that where there is reliance on data collated and assessed using generative AI technology, this must be done in a proportionate way, with reasonable levels of assurance. In particular, interactions with stakeholders for the purposes of the Code should require appropriate assurances provided by humans. Perhaps the Code should also require companies to monitor the adoption and development of AI within their own organisations and to maintain suitable AI policies to protect stakeholders.

There are more questions than answers here. To me, the only certainty is that we all need to engage with the subject. Maybe I will ask ChatGPT what it thinks.

Beyond Governance is encouraging feedback on this.  Have your say by emailing enquiries@beyondgovernance.com before 8 September 2023.
