Open Access

Effect of Artificial Intelligence on Social Trust in American Institutions

Andrew Collins and Jason Jeffrey Jones
Department of Sociology, Stony Brook University, Stony Brook, NY 11790, USA.

Abstract

In recent decades, social scientists have debated declining levels of trust in American institutions. At the same time, many American institutions are coming under scrutiny for their use of artificial intelligence (AI) systems. This paper analyzes the results of a survey experiment administered to a nationally representative sample to gauge the effect that the use of AI has on the American public’s trust in their social institutions, including government, private corporations, police precincts, and hospitals. We find that artificial intelligence systems were associated with significant trust penalties when used by American police precincts, companies, and hospitals. These penalties were especially strong for police precincts and, in most cases, were notably stronger than the trust penalties associated with the use of smartphone apps, implicit bias training, machine learning, and mindfulness training. Americans’ trust in institutions tends to be negatively affected by the use of new tools. While there is significant variation in trust across different pairings of institutions and tools, institutions that use AI generally suffer the largest loss of trust. American government agencies are a notable exception, receiving a small but puzzling boost in trust when associated with the use of AI systems.
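To illustrate the kind of comparison the abstract describes, the sketch below computes a survey-weighted "trust penalty" for one institution by taking the difference in mean trust ratings between respondents shown the AI condition and respondents shown a no-tool baseline. This is a minimal sketch, not the authors' analysis code: the column names (`institution`, `tool`, `trust`, `weight`), the condition labels, and the data file are hypothetical placeholders.

```python
# Minimal sketch (hypothetical data layout, not the authors' code): estimate an
# institution's survey-weighted "trust penalty" for AI, i.e. the difference in
# mean trust between the AI condition and a no-tool baseline.
import numpy as np
import pandas as pd

def weighted_mean(values, weights):
    """Survey-weighted mean of a trust rating."""
    return np.average(values, weights=weights)

def trust_penalty(df, institution, tool="AI", baseline="none"):
    """Weighted mean trust in the tool condition minus the baseline condition;
    a negative value indicates a trust penalty."""
    sub = df[df["institution"] == institution]
    treated = sub[sub["tool"] == tool]
    control = sub[sub["tool"] == baseline]
    return (weighted_mean(treated["trust"], treated["weight"])
            - weighted_mean(control["trust"], control["weight"]))

# Hypothetical usage:
# df = pd.read_csv("survey_responses.csv")
# for inst in ["police precinct", "company", "hospital", "government agency"]:
#     print(inst, round(trust_penalty(df, inst), 2))
```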

Journal of Social Computing
Pages 221-231
Cite this article:
Collins A, Jones JJ. Effect of Artificial Intelligence on Social Trust in American Institutions. Journal of Social Computing, 2023, 4(3): 221-231. https://doi.org/10.23919/JSC.2023.0022

Received: 29 July 2023
Revised: 16 November 2023
Accepted: 19 November 2023
Published: 30 September 2023
© The author(s) 2023.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
