A Framework For Countering The Risks Of Artificial Intelligence

Artificial Intelligence (AI) hits the headlines every day with its promises of new benefits for mankind, but also with concerns about new harms. Normal commercial investment and competition will ensure we get the new benefits, but the harms of AI are already evident and many are very difficult to deal with; ‘AI Safety’ is therefore an urgent priority.

AI is often discussed as if it were a single topic: it is not. This paper aims to define a framework of the various players in the AI landscape, the interactions between them, and the associated risks of harm from AI to various types of Users. The framework helps us define the different types of measures required to counter those risks. As shown in Figure 1, the players in the AI landscape are the Suppliers of AI apps, tools and components; their ‘innocent’ Users; the Aggressors who attack the Users; and the Defenders whose job is to protect all Users. Suppliers are shown as overlapping with Aggressors and Defenders because their AI products are also used by both of these groups, which range from nation states down to individuals. (‘Users’ includes not only people but also the hardware devices and software that may interact with AI products.) The AI landscape is thus the scene of multiple complex interactions between the various players, whose effects range from benefits and harms to many Users, through to rapidly evolving AI-assisted warfare between the other players.

Figure 1: The players in the AI Landscape

The topic of ‘AI Safety’ covers the risks of harm to three main classes of Users, shown in Figure 2, each of which requires different countermeasures. They are a) the physical and services infrastructure on which society relies, which may be disrupted by AI-enabled attacks, b) individual persons and institutions, and c) society in general. The risks of harm to the first two classes are already evident; the potential for harm to society in general seems to be under-appreciated but is growing.

Figure 2: The three classes of AI Users and the types of harm they may suffer.

Risks to our infrastructure from AI-assisted attacks can only be mitigated by major investments in strong cybersecurity measures. Since so much of the infrastructure on which we depend is of a global nature, international cooperation in this area is vital. (We regard as implausible the idea that advanced AI could ‘take control’ and pose an existential risk to humanity. However, advanced AI could well be weaponized to cause severe harm to life.)

Risks of harm to individual persons and institutions are complex and wide-ranging. Any product reliant on AI should be required to keep its users safe from harm in the same way as products in other industries, such as transport, food, and healthcare. AI Suppliers that launch unsafe products must be held accountable for the harms they cause. AI safety is too important to be left to free markets or to industry self-regulation.

The risk to society in general is that the online ‘infosphere’ becomes so polluted with misinformation, disinformation, fakes, hallucinations, and the like, much of it generated or mediated by AI, that we can no longer trust the infosphere for learning or for communication. A general breakdown of trust in the infosphere would be a disaster.

Implementing cybersecurity protection is straightforward in principle, though often complex in practice. Nevertheless, its importance is such that the owners of infrastructure assets whose disruption could harm large parts of society should be made legally liable for protecting those assets from cyber-attacks.

In contrast, protecting individual persons, institutions, and society in general against AI-enabled harms is inherently more complex, ranging from the need to protect children from harmful material to deciding on the limits of free expression. Only Governments can make the difficult decisions on the types of protective regulations needed.

Given the untrustworthy nature of much AI-generated material, pollution of the infosphere can only be mitigated by embedding a non-erasable ‘health warning’ watermark in all such material.
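To make the idea concrete, the Python sketch below shows one minimal way a Supplier might attach a machine-verifiable ‘AI-generated’ tag to text output. It is illustrative only: the key, tag format, and function names are invented for this example, and a detachable tag like this is not truly non-erasable. Robust schemes embed the signal in the content itself (for example, statistical token-level watermarks in generated text) or attach cryptographically signed provenance metadata, as in the C2PA content-credentials standard.

```python
import hmac
import hashlib

# Hypothetical signing key held by the AI Supplier (illustrative only).
SUPPLIER_KEY = b"example-supplier-signing-key"

TAG_PREFIX = "[AI-GENERATED provenance="

def tag_ai_content(text: str) -> str:
    """Append a machine-verifiable 'AI-generated' provenance tag to text."""
    digest = hmac.new(SUPPLIER_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"{text}\n{TAG_PREFIX}{digest}]"

def verify_tag(tagged: str) -> bool:
    """Return True only if a tag is present and matches the attached content."""
    body, _, tag_line = tagged.rpartition("\n")
    if not tag_line.startswith(TAG_PREFIX):
        return False  # tag missing or stripped
    claimed = tag_line[len(TAG_PREFIX):-1]
    expected = hmac.new(SUPPLIER_KEY, body.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

tagged = tag_ai_content("Example model output.")
print(verify_tag(tagged))                            # True: tag matches content
print(verify_tag(tagged.replace("model", "human")))  # False: content was altered
```

The verification step illustrates why such labels must be cryptographically bound to the content: a plain text label could simply be edited away, whereas here any alteration to the content invalidates the tag.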

The rush to develop even more advanced forms of AI risks making the current harms worse. (Has there ever been any other class of legal consumer products in history that is strongly suspected of causing a variety of harms, but that cannot be properly tested for safety except by letting consumers use them as the only way to find out what harms they actually cause?)

For a fuller discussion of the threats of AI and the necessary countermeasures, click here.

Charles Symons
Prometheus Endeavor
Reigate, England, May 2024
