Autonomous machines (AMs) are poised to possess human-like moral cognition, yet their morality is often pre-programmed for safety. This raises the question of whether the morality intended by programmers aligns with the machines' actions during actual operation, a crucial consideration for a future society shared by humans and AMs. To investigate this, we use a micro-robot swarm in a simulated fire scenario in which 180 participants, including 102 robot programmers, complete moral questionnaires and take part in virtual escape trials. These exercises mirror common societal moral dilemmas. Our comparative analysis reveals a "morality gap" between programming presets and real-time operation, driven primarily by uncertainty about the future and heightened by external pressures, especially social punishment. This discrepancy suggests that operational morality can diverge from programmed intentions, underlining the need for careful AM design to foster a collaborative and efficient society.


With the increase in large-scale incidents in real life, crowd evacuation plays a pivotal role in ensuring the safety of human crowds during emergencies. The behavior patterns of crowds are well captured by existing crowd dynamics models; however, most related studies ignore how pedestrians perceive information. To address this issue, we develop a visual-information-based social force model that simulates the evacuation process interpretably from the perspective of visual perception. Numerical experiments indicate that evacuation efficiency and decision-making ability improve rapidly, within a small range, as unbalanced prior knowledge increases. The propagation of acceleration behavior triggered by emergencies is asymmetric because visual information is anisotropic. The model therefore effectively characterizes the effect of visual information on crowd evacuation and provides new insights into how individuals perceive information in complex scenarios.
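To illustrate the kind of dynamics such a model builds on, the sketch below implements the classic social force update (driving force toward a goal plus pairwise repulsion) with a Helbing-style anisotropic weight that attenuates the response to neighbors outside the pedestrian's forward view. It is a minimal illustration, not the authors' visual-information model: the parameter values, the weighting rule `w`, and the function `social_force` are assumptions chosen for clarity.

```python
import numpy as np

# Minimal sketch of a social-force step with an anisotropic (view-field) weight.
# Parameter values and the weighting rule are illustrative assumptions,
# not the exact formulation of the paper's visual-information model.

A, B = 2000.0, 0.08   # repulsion strength (N) and range (m), typical SFM values
TAU = 0.5             # relaxation time toward the desired velocity (s)
LAM = 0.3             # anisotropy: how strongly agents react to what is behind them
V0 = 1.4              # desired walking speed (m/s)
MASS = 80.0           # pedestrian mass (kg)
RADIUS = 0.3          # body radius (m)

def social_force(pos, vel, goal):
    """One social-force evaluation for N pedestrians (pos, vel: N x 2 arrays)."""
    n = len(pos)
    force = np.zeros_like(pos)

    # Driving force: relax toward the desired velocity pointing at the goal.
    to_goal = goal - pos
    e_goal = to_goal / np.linalg.norm(to_goal, axis=1, keepdims=True)
    force += MASS * (V0 * e_goal - vel) / TAU

    # Pairwise repulsion, scaled by how central neighbor j is in i's field of view.
    for i in range(n):
        heading = e_goal[i]
        for j in range(n):
            if i == j:
                continue
            d_vec = pos[i] - pos[j]
            d = np.linalg.norm(d_vec)
            n_ij = d_vec / d  # unit vector pointing from j toward i
            # Anisotropic visual weight: full response ahead, reduced behind.
            cos_phi = float(np.dot(-n_ij, heading))
            w = LAM + (1.0 - LAM) * (1.0 + cos_phi) / 2.0
            force[i] += w * A * np.exp((2 * RADIUS - d) / B) * n_ij
    return force

# Usage: explicit Euler integration of a few agents walking toward an exit.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 5.0, size=(5, 2))
vel = np.zeros((5, 2))
exit_point = np.array([10.0, 2.5])
dt = 0.05
for _ in range(200):
    acc = social_force(pos, vel, exit_point) / MASS
    vel += acc * dt
    pos += vel * dt
```

The anisotropic weight `w` is one simple way to encode that pedestrians respond mainly to stimuli within their visual field, which is the mechanism the abstract's asymmetric propagation of acceleration behavior rests on.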