Open Access

Evolution of Agents in the Case of a Balanced Diet

Jianran Liu, Wen Ji (✉)
Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China

Abstract

Agents always operate in an interactive environment, and over time their intelligence is shaped by that environment. An agent must coordinate its interactions with different environmental factors to reach its optimal intelligence state. We model an agent's interaction with the environment as an action-reward process in which the agent balances the rewards it receives from acting on various environmental factors. Drawing on the agent-environment interaction concept from reinforcement learning, this paper computes the optimal mode of interaction between an agent and its environment, with the aim of helping agents maintain their best intelligence state for as long as possible. For a concrete interaction scenario, we take food collocation as an example: the evolution process between an agent and its environment is constructed, and the advantages and disadvantages of the evolutionary environment are reflected in the agent's evolution status. A practical case study using dietary combinations demonstrates the feasibility of this interactive balance.
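As a rough illustration of the action-reward framing described in the abstract (a minimal sketch, not the authors' implementation), the following Python snippet treats food choices as actions and scores each meal by how closely the resulting nutrient intake matches a balanced target. All food items, nutrient values, and targets below are hypothetical placeholders.

```python
import random

# Hypothetical nutrient profiles (protein, carbs, fat) per food item.
FOODS = {
    "rice":    (2.7, 28.0, 0.3),
    "chicken": (27.0, 0.0, 3.6),
    "egg":     (13.0, 1.1, 11.0),
    "spinach": (2.9, 3.6, 0.4),
}

# Assumed per-meal targets for a "balanced" intake.
TARGET = (30.0, 40.0, 15.0)


def reward(intake):
    """Reward is higher (less negative) when intake is closer to the balanced target."""
    return -sum(abs(i - t) for i, t in zip(intake, TARGET))


def run_episode(policy, steps=3):
    """One meal: the agent picks `steps` foods and receives a balance reward."""
    intake = (0.0, 0.0, 0.0)
    for _ in range(steps):
        food = policy(intake)
        intake = tuple(i + n for i, n in zip(intake, FOODS[food]))
    return reward(intake)


def greedy_policy(intake):
    """Pick the food whose addition most improves the balance reward."""
    return max(
        FOODS,
        key=lambda f: reward(tuple(i + n for i, n in zip(intake, FOODS[f]))),
    )


def random_policy(intake):
    """Baseline: pick foods without regard to balance."""
    return random.choice(list(FOODS))


if __name__ == "__main__":
    random.seed(0)
    print("balance-seeking agent:", run_episode(greedy_policy))
    print("random agent (avg of 100):",
          sum(run_episode(random_policy) for _ in range(100)) / 100)
```

In this toy setting, the balance-seeking policy consistently earns a higher reward than the random baseline, mirroring the paper's idea that an agent's state improves when its interactions with environmental factors are kept in balance.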

International Journal of Crowd Science
Pages 1-6
Cite this article:
Liu J, Ji W. Evolution of Agents in the Case of a Balanced Diet. International Journal of Crowd Science, 2022, 6(1): 1-6. https://doi.org/10.26599/IJCS.2022.9100005


Received: 03 March 2021
Revised: 27 February 2022
Accepted: 28 February 2022
Published: 15 April 2022
© The author(s) 2022

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
