Author: Niamh Libonatti-Roche
Date: 7th November 2023
AIPrivSec Briefing Note
Executive Summary
On 1st and 2nd November 2023, Bletchley Park, renowned as the home of the Second World War’s codebreakers, played host to the world’s first summit on AI. The summit was organised in response to the clear and imminent threat that unregulated and uncontrolled AI represents, a threat deserving of international attention and collaboration on the scale exemplified at Bletchley.
Its purposes were:
- to understand the potential risks associated with ‘Frontier AI’, including cybersecurity, biotechnology and the spread of misinformation, and the mitigations available for those risks
- to establish international agreement on the risks and opportunities presented by AI
Prior to the summit, the world’s economic superpowers – the UK, the USA, China and the EU – had not met publicly with Big Tech superpowers, such as Google, Meta and OpenAI (the company behind ChatGPT), to discuss the best way to control and regulate AI. The summit resulted in the signing of the first international agreement on AI by attendees.
This briefing note looks at the key takeaways from the summit and provides a short analysis of what they may mean from a regulatory perspective.
Outcomes
- All countries in attendance agreed to the Bletchley Declaration on AI safety, which sets out the “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”, as well as the huge opportunity that AI represents for humanity and the need for international collaborative action to realise these benefits and fully mitigate these risks.
- Delegates agreed that some form of higher-level understanding of the capabilities of AI needs to be sought. Consequently, on 2nd November all delegates agreed to support an independent report on the “State of the Science” of AI.
- Several countries and companies developing AI agreed to wide-scale, state-led testing of the next generation of models before they are released. Much of this testing is to be carried out by the AI Safety Institutes.
- The US made large-scale commitments to AI regulation. Vice President Kamala Harris said that “the US will establish rules and norms for AI with allies and partners that reflect the US’s democratic values and interests, including transparency, privacy, accountability, and consumer protections”.
- It was agreed that continued depoliticised collaboration, exemplified by the Bletchley Summit, was key to successfully achieving “control” over AI.
- There was little agreement between delegates on whether legislation is the most prudent way forward in achieving regulation of AI.
- Finally, it was agreed that AI safety policies will continue to be discussed at the forthcoming AI Safety Summits in the Republic of Korea and France over the course of the next year.
UK: AI Approach expanded
The UK set out its action plan for tackling the risks associated with AI in its ‘Pro-Innovation approach to AI’ White Paper (published in March 2023), which rejects the introduction of legislative measures to control the development and deployment of AI. However, the summit’s introduction of an agreed system, under which the UK AI Safety Institute undertakes mandatory pre-release safety checks on all next-generation AI models, is a welcome addition to the scheme.
The UK has taken this position to ensure that it benefits fully as a haven for AI companies concerned by the restrictions put in place by more stringent regimes.
One of those regimes, the EU, is moving forward with its AI Act, which will raise the bar for the protection expected of countries or companies that trade with the EU or process EU citizens’ data. As a result, just as with the GDPR, the EU standard will likely become an internationally adopted gold standard of legislation that countries and their companies must comply with in order to continue trading with the EU.
That reality suggests that the UK’s “haven” for AI companies may be short-lived if those companies also wish their products to be marketable outside of the UK, in Europe.
Key takeaways
- The Bletchley Declaration stands as a moment of international agreement on the risks and benefits of AI as well as the need for its control.
- The UK will not, for now, regulate AI beyond what has already been achieved: the agreement between Big Tech and the UK government that testing of next-generation AI tools will be independently undertaken prior to release. This will increase the level of assurance that the tools are safe before they become available for use.
- The EU AI Act will become the gold standard for regulating AI, and compliance will be required for businesses processing EU citizens’ data.
- All those currently developing AI tools have received a red flag that:
  - more consistent, higher-quality development and testing standards are to be expected.
  - attainment of those standards is likely to be assessed by an impartial external audit.
- AI business users and those deploying AI now know that regulation and mandatory testing are imminent possibilities.
What does this mean for businesses?
- It would be prudent for businesses to seek to understand and comply with existing gold standards for development, data privacy and information security (e.g., ISO/IEC 23053, ISO/IEC 27001 and the NIST AI Risk Management Framework).
- Businesses planning to use AI should also seek to understand the EU AI Act to ensure that its requirements do not affect business as usual.
- Businesses should consider the reputational and commercial impact of ‘AI gone wrong’ as a result of their own shortcomings when developing, deploying or using AI tools.
AIPrivSec
For more information on the UK’s Pro-Innovation approach, the EU AI Act or what these may mean for your company, or to access our white papers, get in touch.