Open Access

On the Existence of Robot Zombies and our Ethical Obligations to AI Systems

Computer Science Department, Stanford University, Stanford, CA 94305, USA

Abstract

As artificial intelligence algorithms improve, we will interact with programs that seem increasingly human. We may never know whether these algorithms are sentient, yet this quality is crucial to ethical considerations regarding their moral status. We will likely have to make important decisions without a full understanding of the relevant issues and facts. Given this ignorance, we ought to take seriously the prospect that some systems are sentient. It would be a moral catastrophe if we were to treat them as though they were not sentient when, in reality, they are.

Journal of Social Computing
Pages 270-274
Cite this article:
Hansen LR. On the Existence of Robot Zombies and our Ethical Obligations to AI Systems. Journal of Social Computing, 2023, 4(4): 270-274. https://doi.org/10.23919/JSC.2023.0023
Received: 01 July 2023
Revised: 30 November 2023
Accepted: 02 December 2023
Published: 31 December 2023
© The author(s) 2023.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
