Trust in Artificial Intelligence: A global study

The University of Queensland and KPMG partnered on a global study to understand trust and attitudes towards Artificial Intelligence.

This research examines the public’s trust and attitudes towards AI use, and expectations of AI management and governance, across 17 countries. The report provides timely, comprehensive global insights into the public’s trust and acceptance of AI systems, including: who is trusted to develop, use, and govern AI; the perceived benefits and risks of AI use; community expectations of the development, regulation, and governance of AI; and how organisations can support trust in their AI use. It also sheds light on how people feel about the use of AI at work, public understanding and awareness of AI, the key drivers of trust in AI systems, and how trust and attitudes to AI have changed over time.

Collectively, the survey insights provide evidence-based pathways for strengthening the trustworthy and responsible use of AI systems, and the trusted adoption of AI in society. These insights are relevant for informing responsible AI strategy, practice and policy within business, government, and NGOs, as well as informing AI guidelines, standards and policy at the international and intergovernmental level.

A clear pattern across the data is the stark difference across countries in people’s trust, attitudes and reported use of AI: people in Western countries are more wary of AI, and less convinced that the benefits outweigh the risks, than those in emerging economies (i.e. Brazil, India, China, and South Africa). Younger generations, the university educated, and those in managerial roles are also more trusting and accepting of AI.

The extensive findings are available in the Full Report with highlights presented below and in the Executive Summary.

Individual Country Insights are also available in a standalone report, summarising the highlights for each of the 17 countries included in the full report.

Key Findings

To what extent do people trust AI systems?

Three out of five people (61%) are either ambivalent or unwilling to trust AI. However, trust and acceptance depend on the AI application. For example, AI use in healthcare is more trusted than AI use for Human Resource purposes. People tend to have faith in the capability and helpfulness of AI systems, but are more sceptical of their safety, security, and fairness. Many people feel ambivalent about the use of AI, reporting optimism and excitement, coupled with fear and worry.

How do people perceive the benefits and risks of AI?

Most people (85%) believe AI will deliver a range of benefits, but only half believe the benefits of AI outweigh the risks. Three out of four people (73%) are concerned about the risks associated with AI, with cyber security rated as the top risk globally. Other risks of concern to the majority include loss of privacy, manipulation and harmful use, job loss and deskilling (especially in India and South Africa), system failure (particularly in Japan), erosion of human rights, inaccurate outcomes and bias.

Who is trusted to develop, use, and govern AI?

People have the most confidence in their national universities, research institutions and defence organisations to develop, use and govern AI in the best interests of the public (76-82%). People have the least confidence in governments and commercial organisations, with a third reporting low or no confidence in these entities to develop, use or govern AI. This is problematic given the increasing use of AI by government and business.

What do people expect of AI management, governance, and regulation?

There is strong global endorsement of the principles for trustworthy AI: 97% of people globally view these principles, and the practices that underpin them, as important for trust. These principles and practices provide a blueprint for organisations on what is required to secure trust in their use of AI. Most people (71%) believe AI regulation is necessary, with a majority holding this view in every country except India. People expect some form of external, independent oversight, yet only 39% believe current governance, regulations and laws are sufficient to protect people and make AI use safe.

How do people feel about AI at work?

A majority of people (55%) are comfortable with the use of AI at work to augment and automate tasks and inform managerial decision-making, as long as it is not used for human resource and people management purposes. People actually prefer AI involvement to sole human decision-making, but they want humans to retain control. Except in China and India, most people believe AI will eliminate more jobs than it creates.

How well do people understand AI?

Most people (82%) have heard of AI, yet about half (49%) are unclear about how and when it is being used. However, most (82%) want to learn more. What’s more, 68% of people report using common AI applications, but 41% are unaware AI is a key component in those applications.

What are the key drivers of trust?

Our modelling demonstrates that trust is central to the acceptance of AI and highlights four pathways to strengthen public trust in AI:

1. An institutional pathway consisting of safeguards, regulations, and laws to make AI use safe, and confidence in government and commercial organisations to develop, use and govern AI.

2. A motivational pathway reflecting the perceived benefits of AI use.

3. An uncertainty reduction pathway reflecting the need to address concerns and risks associated with AI.

4. A knowledge pathway reflecting people’s understanding of AI use and their efficacy in using digital technologies.

Of these drivers, the institutional pathway has the strongest influence on trust, followed by the motivational pathway. These pathways hold for all countries surveyed.
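
For readers who want to see how a driver analysis of this kind can be set up, the sketch below fits a simple two-step regression on simulated data: the four pathways predicting trust, then trust predicting acceptance. This is a hypothetical illustration in Python only, not the study’s actual model or data, and all variable names (institutional, motivational, uncertainty, knowledge, trust, acceptance) are assumptions made for the example.

    # Minimal illustrative sketch: NOT the study's actual model or data.
    # Simulates composite pathway scores and fits two OLS regressions.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    # Hypothetical composite scores for the four pathways (standardised).
    institutional = rng.normal(size=n)  # safeguards, laws, confidence in entities
    motivational = rng.normal(size=n)   # perceived benefits of AI
    uncertainty = rng.normal(size=n)    # concerns and risks addressed (reverse-scored)
    knowledge = rng.normal(size=n)      # understanding of AI, digital efficacy
    # Simulated trust, weighting the institutional pathway most heavily,
    # mirroring the reported ordering of the drivers.
    trust = (0.5 * institutional + 0.3 * motivational
             + 0.2 * uncertainty + 0.1 * knowledge
             + rng.normal(scale=0.5, size=n))
    # Step 1: regress trust on the four pathways.
    X = sm.add_constant(np.column_stack(
        [institutional, motivational, uncertainty, knowledge]))
    print(sm.OLS(trust, X).fit().params)  # intercept + four pathway coefficients
    # Step 2: trust, in turn, predicts acceptance of AI.
    acceptance = 0.7 * trust + rng.normal(scale=0.5, size=n)
    print(sm.OLS(acceptance, sm.add_constant(trust)).fit().params)

In practice the composite scores would be built from the relevant survey items rather than simulated, and the study’s own analysis used more sophisticated modelling than this two-step regression.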

How have attitudes changed over time?

We examined how attitudes towards AI have changed since 2020 in Australia, the UK, USA, Canada, and Germany. Trust in AI, as well as awareness of AI and its use in common applications, increased in each of these countries. However, there has been no change in the perceived adequacy of regulations, laws and safeguards to protect people from the risks of AI, nor in people’s confidence in entities to develop, use and govern AI.


How does Australia compare with other countries?

Australian attitudes towards AI generally mirror those of other Western countries such as the UK, Canada, and France, where fear and worry about AI are dominant emotions. Fewer than half of Australians trust and are comfortable with the use of AI at work, and only a minority of Australians believe the benefits of AI outweigh the risks.

There is a gap in perceptions across age and education in Australia: 42% of Gen X and Millennial Australians trust AI, compared with 25% of older generations. The pattern is similar for education, with 42% of the university educated trusting AI compared with 27% of those without a degree.

Finally, people in Australia and Japan report notably lower interest in learning about AI than people in other countries.

See the Individual Country Insights report for the Australian highlights.

How we conducted the research

We surveyed over 17,000 people using nationally representative samples from 17 countries: Australia, Brazil, Canada, China, Estonia, Finland, France, Germany, India, Israel, Japan, the Netherlands, Singapore, South Africa, South Korea, the United Kingdom, and the United States of America. These countries are leaders in AI activity and readiness within each global region.

We asked survey respondents about trust and attitudes towards AI systems in general, as well as AI use in the context of four domains where AI is rapidly being deployed and likely to impact many people: in healthcare, public safety and security, human resources, and consumer recommender applications.

 

Download Full Report

Download Global Executive Summary

Download Country Insights Report

Cite the report

Gillespie, N., Lockey, S., Curtis, C., Pool, J., & Akbari, A. (2023). Trust in Artificial Intelligence: A Global Study. The University of Queensland and KPMG Australia. https://doi.org/10.14264/00d3c94

Acknowledgements

This research was conducted by The University of Queensland (UQ), in collaboration with KPMG Australia. The UQ team led the design, conduct, analysis, and reporting of this research.

University of Queensland Researchers

Professor Nicole Gillespie, Dr Steve Lockey, Dr Caitlin Curtis and Dr Javad Pool

KPMG Advisors

Dr Ali Akbari, James Mabbott, Rita Fentener van Vlissingen, Jessica Wyndham, and Richard Boele

Funding

This research was supported by an Australian Government Research Support Package grant provided to the UQ AI Collaboratory, and by the KPMG Chair in Organisational Trust grant (ID 2018001776).

 

Project members

Professor Nicole Gillespie

Professor of Management & KPMG Chair in Trust
School of Business

Dr Steve Lockey

Postdoctoral Research Fellow
School of Business

Dr Caitlin Curtis

Research Fellow
Centre for Policy Futures
Affiliate Research Fellow, School of Public Health

Mr Javad Khazaei Pool

Research Associate
UQ Business School