
02/17/2026 | Press Release

AI as a specialist colleague: trust arises from the interaction between humans and machines

Trust in artificial intelligence (AI) does not arise from smart algorithms alone; it develops gradually when the systems are embedded in social interactions. This is the finding of a joint study by economist Prof. Dr. André Hanelt, Head of the Department of Management of Digital Transformation at the University of Kassel, and researchers from the Technical Universities of Darmstadt and Dortmund and Goethe University Frankfurt. The study examined financial advisors' trust in so-called robo-advisors, i.e. AI systems that interact with investors and manage portfolios automatically.

Image (private): Prof. Dr. André Hanelt, Head of the Department of Management of Digital Transformation at the University of Kassel.

The research focuses on a triangular constellation that has received little attention so far: AI systems, human experts and their clients. "We already know a lot about the factors that influence the trust of direct AI users. However, we know very little about how human experts, such as doctors or lawyers, develop trust in AI in their field of work when they do not use such technologies themselves but recommend them to their clients," says Hanelt. The researchers found that human experts evaluate AI systems above all by how strongly those systems appear to affect their relationships with their clients. "In contexts where a lot is at stake for people, such as money or health, the technical design or capabilities of AI systems matter less than the extent to which they are, and can be, embedded in social interactions," explains Hanelt. "If this is the case, trust can be built up step by step."

The case study is based on more than five years of research and comprises 89 interviews as well as extensive secondary material and observations at a large traditional bank that developed a robo-advisor and introduced it alongside its human financial advisors.

At the outset of the study, the experts were very skeptical about AI and kept it away from their clients. Trust grew over time once secure contexts were created: the experts received intensive, personal support from human AI advisors before and during client contact, and they were able to test the AI systems independently. This ultimately resulted in a kind of division of labor between the human experts and the AI systems, in which the experts remain highly relevant for their clients and their home companies.

According to the authors, this triangular constellation of AI, human experts and their clients will play an important role in the further spread of AI. In the future, human experts in such constellations will influence the scope, type and use of AI by the people for whom they bear professional responsibility, and will thus help shape the diffusion of such systems in more and more professional fields such as medicine, law or finance. The study therefore provides important insights for the effective integration of AI in contexts that have been, and will continue to be, strongly characterized by personal interaction between experts and their clients.

The study has been accepted for publication in the journal Management Information Systems Quarterly (MISQ): https://misq.umn.edu/misq/article-abstract/doi/10.25300/MISQ/2025/18041/3622/Opening-The-Network-of-Trust-How-Domain-Experts-in?redirectedFrom=fulltext

 

What does this mean in summary?

  • Human experts with professional responsibility for others (e.g. in financial advice) evaluate AI systems above all by how strongly the systems appear to influence the relationships with their clients.
  • The decisive factor is less the technical design or performance of the AI systems and more how well they are, and can be, embedded in social interactions. This allows trust to develop gradually.
  • Secure framework conditions, close support from human AI advisors and the opportunity to test the systems independently build a high level of trust over time and lead to a kind of division of labor between experts and AI systems.
  • As a result, experts remain highly relevant for their clients and their home companies.