A.I.: Essential requirements needed

Essential requirements – not just guidelines
for products and services embedding AI technology

Recently the High-Level Expert Group on Artificial Intelligence (AI HLEG), set up by the European Commission, delivered a Working Document presenting a “Draft” of its AI Ethics Guidelines.
A revised version is scheduled for delivery by the beginning of April 2019.
These Guidelines set out a framework for “Trustworthy AI” aimed at the concrete implementation and operationalisation of AI systems.
There is no hint in the Working Document that these Guidelines are relevant for the lawful “Placing on the Market” of products and services based on AI technology.
Hence the question arises: is conformity of AI systems to these Guidelines sufficient for placing them on the market in the Member States of the European Union?

I suggest that this is not the case.

The “Placing on the Market” of products and goods in the European Union is governed by a concept called “The New Approach”.
CEN, the European Committee for Standardization (Comité Européen de Normalisation, https://www.cen.eu/Pages/default.aspx), one of the three European Standardisation bodies recognized by EU law, provides a useful introduction to the New Approach: https://www.cen.eu/work/supportLegislation/Directives/Pages/default.aspx.
In short, the European Union adopts legislation (EU Directives) that defines essential requirements – in relation to safety and other aspects of public interest – which must be satisfied by products and services being placed on the market.
There is no reason why this should be different for products and services that embed AI-based technology.
So, essential requirements concern safety and other aspects of public interest.
Well-known examples of “safety” covered by specific EU Directives are:
• Electrical safety;
• Electromagnetic compatibility;
• Safety in a more general sense, as for example in the Toys Directive.

Excerpt from the article on essential safety requirements:

Toys, including the chemicals they contain, shall not jeopardise the safety or health of users or third parties when they are used as intended or in a foreseeable way, bearing in mind the behaviour of children.
Or, for chemicals:
• REACH is the European Regulation on Registration, Evaluation, Authorisation and Restriction of Chemicals. All manufacturers and importers of chemicals must identify and manage the risks linked to the substances they produce and market, and clarify whether their use poses a risk to human health or the environment.

Let us have a closer look at the Guidelines document.
Already in the EXECUTIVE GUIDANCE, the (ethical) principle of “Do No Harm” (Non-Maleficence) is stipulated.
The Principle of Non-Maleficence is further developed as follows (excerpt):
AI systems should not harm human beings. By design, AI systems should protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work. AI systems should not threaten the democratic process, freedom of expression, freedoms of identity, or the possibility to refuse AI services. At the very least, AI systems should not be designed in a way that enhances existing harms or creates new harms for individuals. Harms can be physical, psychological, financial or social. AI-specific harms may stem from the treatment of data on individuals (i.e. how it is collected, stored, used, etc.).
Avoiding harm may also be viewed in terms of harm to the environment and animals; thus the development of environmentally friendly AI may be considered part of the principle of avoiding harm.

Further on in the document, ten requirements for Trustworthy AI are defined:
1. Accountability
2. Data Governance
3. Design for all
4. Governance of AI Autonomy (Human oversight)
5. Non-Discrimination
6. Respect for (& Enhancement of) Human Autonomy
7. Respect for Privacy
8. Robustness
9. Safety
10. Transparency
For the purpose of the present contribution, let us have a look at requirement 9, Safety (excerpt):
Safety is about ensuring that the system will indeed do what it is supposed to do, without harming users (human physical integrity), resources or the environment. It includes minimizing unintended consequences and errors in the operation of the system. Processes to clarify and assess potential risks associated with the use of AI products and services should be put in place. Moreover, formal mechanisms are needed to measure and guide the adaptability of AI systems.
To avoid harm, data collected and used for training of AI algorithms must be done in a way that avoids discrimination, manipulation, or negative profiling. Of equal importance, AI systems should be developed and implemented in a way that protects societies from ideological polarization and algorithmic determinism.
Vulnerable demographics (e.g. children, minorities, disabled persons, elderly persons, or immigrants) should receive greater attention to the prevention of harm, given their unique status in society. Inclusion and diversity are key ingredients for the prevention of harm to ensure the suitability of these systems across cultures, genders, ages, life choices, etc. Therefore not only should AI be designed with the impact on various vulnerable demographics in mind, but the above-mentioned demographics should have a place in the design process (whether through testing, validating, or other means).

ANALYSIS
Many examples are known of AI systems causing harm. Car accidents are one prominent example. AI systems are good at image recognition, but who bears the responsibility for a wrong medical diagnosis? Errors may occur in AI-based financial transaction systems. Wrong conclusions drawn from facial recognition, e.g. as a function of racial characteristics, have been publicized.
Furthermore, there is one other important question:
Can we trust a machine (an AI system) when we do not know how it arrives at its decisions?
One of the most pressing problems of AI lies in its non-transparency: it seems – until now – not truly possible to explain why Deep Neural Networks (DNNs) take a decision or come to a result. The complexity of DNNs is very high. Machine Learning (ML) systems using DNNs are good at capturing correlation and poor at capturing causality.
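To make the correlation-versus-causality point concrete, here is a minimal sketch in Python (assuming NumPy and scikit-learn are available; the data, features, and parameters are all invented for illustration): a small neural network latches onto a spurious correlation in its training data and silently fails once that correlation breaks.

```python
# Minimal, hypothetical sketch: a classifier exploits a spurious
# correlation instead of the causal signal ("good on correlation,
# poor on causality"). All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 2000

# Training data: feature 0 carries a weak causal signal; feature 1
# merely correlates with the label (strongly) in this sample.
y_train = rng.integers(0, 2, n)
causal = y_train + rng.normal(0.0, 1.0, n)     # weak causal feature
spurious = y_train + rng.normal(0.0, 0.1, n)   # strong spurious feature
X_train = np.column_stack([causal, spurious])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# At deployment the spurious correlation no longer holds:
# feature 1 is now pure noise, unrelated to the label.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([y_test + rng.normal(0.0, 1.0, n),
                          rng.normal(0.0, 0.1, n)])

print("train accuracy:", clf.score(X_train, y_train))  # close to 1.0
print("test accuracy :", clf.score(X_test, y_test))    # near chance level
```

Nothing in the trained network announces that it depends on the spurious feature; an inspector sees only weights. This is exactly the kind of opacity that a conformity assessment against essential safety requirements would have to penetrate.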
Regulation cannot accept black-box models.
What is needed is:
• A binding legal instrument which defines essential requirements for AI systems;
• Standards which specify the essential requirements;
• An approval system with recognized bodies competent in AI.

CONCLUSION

1. For the nine requirements for Trustworthy AI other than safety, guidelines may perhaps be deemed sufficient.
2. For safety, however, essential requirements, inscribed in law, must be defined.
3. The black-box problem must be overcome. Otherwise, there is no way of guaranteeing safety.
As for all other products, the “Placing on the Market” has to be subject to conformity with essential requirements stipulated by an EU Regulation or Directive.
There is a lot of work ahead.
