
Trustworthy AI — Can Laws Build Trust in AI?

A lawyer and a psychologist discussing the role of laws and trust in the AI context

The European Commission's AI regulation proposal, formally a proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, was published in April 2021. Its explanatory memorandum explicitly aims to implement, among other things, an "ecosystem of trust" by proposing a legal framework for trustworthy AI, and the word trust appears repeatedly in various forms (14 "trust", 1 "trusted", 2 "trustful", 21 "trustworthy", 3 "trustworthiness", 6 "entrusted", 1 "entrusting"). This is somewhat surprising from a Swiss legal point of view.

Indeed, under Swiss law, trust (German: Vertrauen / Italian: fiducia / French: confiance) is not mentioned at all in, for example, the Swiss Civil Code, the Code of Obligations, or the Federal Product Liability Act, which constitute fundamental legal bases. However, this trend is starting to appear in Switzerland as well: the second key objective of the Digital Switzerland Strategy is guaranteeing security, trust, and transparency. Trust therefore seems to be becoming an important aspect of AI, and the system of governance (i.e., the structures and processes designed to ensure accountability, transparency, the rule of law, and broad-based participation), as well as the regulators applying it, seem to need to earn public trust (Sutcliffe & Brown, 2021). But what exactly are we talking about when we talk about trust?


The extensive use of the trust construct in a regulatory context has also drawn criticism. The topic of trust can be approached very differently depending on one's perspective. Some, such as Joanna Bryson, argue that AI is nothing to be trusted, as AI systems should never be presented as being responsible.

Others have questioned whether users can actually trust the product or system at all, since it is merely a proxy for the designer or developer who engineered it. Moreover, it is debated who can actually be perceived as trustworthy, which points to trustworthiness, the property of the trust receiver.


This could be integrity for humans or performance for machines. In an influential trust review, Hoff & Bashir state that human-automation trust can be viewed as a specific type of interpersonal trust in which the trustee (i.e., the trusted actor) is one step removed from the trustor. However, there are also arguments in favor of a direct trust relationship between humans and technology. For example, in the context of automated vehicles, it may indeed be the automation in use that is trusted in particular situations. The topic of trust is complicated, and neither side can be neglected: both views have valid arguments, and wherever you stand, trust should always be used with caution. From a human-AI interaction perspective, trust as a psychological construct is indispensable. In a regulatory context, however, trust is quite problematic.

In this article, we, a lawyer and a psychologist, first try to understand whether and how trust can be built through regulation. We then outline our own view, and finally conclude that trust is not an adequate term in the regulatory context, but is useful when communicating with the broader public.


According to Hilary Sutcliffe, Director of the Trust in Tech Governance Initiative, and Sam Brown, Director of Consequential, regulators of AI (such as governments and standard setters) need to implement three factors in order to earn public trust in their approach:

  1. Ensure effective enforcement (i.e. compelling observance of or compliance with a law, rule, or obligation),

  2. Explain what they do and communicate more about their role and

  3. Empower citizens and develop inclusive relationships with them.

The three factors mentioned above relate to a sort of system trust, i.e., trust in the governmental and legal system, and are already implemented in some ways in the legislative process. In our view, the keyword will be legal certainty, i.e., knowing what you can expect, and especially how the AI rules will be applied by judges. There must be a uniform and consistent application over time. Citizens must know what to expect and see that their case will be treated in the same way, and with the same result, as an identical case in another part of the country. If each state or canton applies the rules effectively, but in different ways to the same cases, trust in the system will be lost.

We therefore do not believe that rules can create trust merely by existing.


According to Daniel Hult, lecturer at the School of Business, Economics and Law at the University of Gothenburg, the government should in any case refrain from trying to create personal trust by means of legislation. He believes that a more feasible regulatory goal is to incentivize trustworthy behavior by societal actors (because they are more or less forced to act in a certain way), which might generate trust in the governmental and legal system as a positive side effect. He adds that if personal trust is nevertheless the chosen regulatory goal, then legislation is not a suitable regulatory technique to build it. Instead, less controlling regulatory techniques should be employed, e.g., programmes of voluntary regulation. Indeed, standards, best practices, and labels set by private associations are not mandatory, and precisely because of this, a company that chooses to adhere to them voluntarily opens the door to possible trust in its behavior.

Daniel Hult (2018) therefore agrees with the last two factors indicated by Hilary Sutcliffe and Sam Brown. Involving citizens in the process of regulation is certainly a less controlling regulatory technique. Rules, in particular mandatory ones, exclude any need for trust.

We second this: even if the legislator wished to build trust with rules, it would be wasted effort.


However, this has nothing to do with psychological trust in AI as an attitude of human beings, which is what this new trend in regulation is trying to achieve. Additionally, we argue that too many observers neglect the important difference between trust in AI and trustworthiness of AI.

In its Ethics Guidelines for Trustworthy AI, the European Commission has already defined comprehensively and very clearly which aspects are necessary for the creation of trustworthy AI (see references). According to these guidelines, trustworthy AI has three components, which should be met throughout the system's entire life cycle:

  1. It should be lawful, complying with all applicable laws and regulations;

  2. It should be ethical, ensuring adherence to ethical principles and values; and

  3. It should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.

The guidelines set out a framework for achieving trustworthy AI by listing requirements that AI systems should meet and by providing a concrete assessment list aimed at operationalizing these requirements, but they do not explicitly address the lawful AI component.

This component is covered by the European AI regulation proposal (see references), which sets out the requirements AI must meet in order to be lawful. In particular, the proposal defines common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market, which will be further operationalized through harmonized technical standards. The proposal also addresses the situation after AI systems have been placed on the market by harmonizing the way ex-post controls are conducted.

On the other hand, Switzerland's current strategic recommendation is mainly to adapt existing legislation (Christen et al., 2020) and to align with the EU's AI definitions. Indeed, while the EU has put forward a proposal for AI regulation, it should not be forgotten that we are not in a legislative vacuum: existing laws already provide rules governing AI applications. However, some definitions, which are exclusively human-centered, need to be updated to cover machine-generated actions. One example would be adapting the Federal Product Liability Act to this technology.

The EU has set an example and established important elements that Switzerland will carefully consider, as it did for the update of its data protection law. It will now be necessary to see how the EU's AI regulation proposal and the revised Swiss laws, once in force, will be applied and, more importantly, how they will be applied over time. We believe the challenge will lie in defining the first steps and coordinating their implementation. Knowledge of this technology not only by the regulator but also by all implicated stakeholders and actors, such as the judiciary, will also be important.


In a nutshell, the idea that trust in AI can simply follow trustworthy AI is an ideal, linear, and unfortunately unrealistic relationship; it is rather a mission or vision, nothing more and nothing less. "Should we trust AI?" (a normative question) and "Is this AI trustworthy?" (a technical question) are in fact nuanced but distinct questions. Perhaps these are days in which we are all better off with a zero-trust approach until a company or developer is able to prove its trustworthiness and thereby earn the user's trust.

We believe that the fundamental purpose of a law is still to establish standards, maintain order, resolve disputes, and protect liberties and rights, not to build personal trust, nor trust in AI per se. With a robust legal and judicial system in AI matters, a fuzzy feeling of trust in AI may be generated over time as a positive side effect.

Having a culture of trust would certainly not hurt, though. Wouldn't it be great if people could in fact blindly rely on AI systems, knowing that they perform reliably, that their developers have good intentions, that they are secure and safe, that they treat personal data well, and so on? But that time has certainly not arrived, and it remains open whether it ever will. Over time, however, people will better understand what AI is all about (or what it is not about), and until then legislation will protect the vulnerable (those who are unable to understand or defend themselves) as well as the blindly trusting tech-optimists from being fooled, and will ensure that some person can be held accountable if things go wrong.


This article was written by Prisca Quadroni-Renella, Swiss lawyer and founding Partner at AI Legal & Strategy Consulting AG and Legal Lead for Women in AI, in collaboration with Marisa Tschopp, researcher at scip AG and Chief Research Officer at Women in AI.


  • Daniel Hult (2018) Creating trust by means of legislation — a conceptual analysis and critical discussion, The Theory and Practice of Legislation, 6:1, 1–23, DOI: 10.1080/20508840.2018.1434934.

  • Ethics Guidelines for Trustworthy AI, High-Level Expert Group on Artificial Intelligence, 8 April 2019.

  • Explanatory memorandum of the Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 21 April 2021.

  • Christen M., Mader C., Abou-Chadi T., Bernstein A., Braun Binder N., Dell'Aglio D., Fábián L., George D., Gohdes A., Hilty L., Kneer M., Krieger-Lamina J., Licht H., Scherer A., Som C., Sutter P., Thouvenin F. (2020). Wenn Algorithmen für uns entscheiden: Chancen und Risiken der künstlichen Intelligenz. In TA-SWISS Publikationsreihe (Hrsg.): TA 72/2020. Zürich: vdf.



