Open Access

Unlearning Descartes: Sentient AI is a Political Problem

G. Hull
School of Data Science, UNC Charlotte, Charlotte, NC 28223, USA

Abstract

The emergence of Large Language Models (LLMs) has renewed debate about whether Artificial Intelligence (AI) can be conscious or sentient. This paper identifies two approaches to the topic and argues: (1) A “Cartesian” approach treats consciousness, sentience, and personhood as very similar terms, and treats language use as evidence that an entity is conscious. This approach, which has been dominant in AI research, is primarily interested in what consciousness is, and whether an entity possesses it. (2) An alternative “Hobbesian” approach treats consciousness as a sociopolitical issue and is concerned with the implications of labeling something sentient or conscious. This both enables a political disambiguation of language, consciousness, and personhood and allows regulation to proceed in the face of intractable problems in deciding if something “really is” sentient. (3) AI systems should not be treated as conscious, for at least two reasons: (a) treating the system as an origin point tends to mask competing interests in creating it, at the expense of the most vulnerable people involved; and (b) it will tend to hinder efforts at holding someone accountable for the behavior of the systems. A major objective of this paper is accordingly to encourage a shift in thinking. In place of the Cartesian question—is AI sentient?—I propose that we confront the more Hobbesian one: Does it make sense to regulate developments in which AI systems behave as if they were sentient?

Journal of Social Computing
Pages 193-204
Cite this article:
Hull G. Unlearning Descartes: Sentient AI is a Political Problem. Journal of Social Computing, 2023, 4(3): 193-204. https://doi.org/10.23919/JSC.2023.0020

Received: 01 July 2023
Revised: 27 October 2023
Accepted: 30 October 2023
Published: 30 September 2023
© The author(s) 2023.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
