Research team publishes findings in Nature
Delegation to AI can increase dishonest behavior
- 17.09.2025
People are increasingly handing decisions over to AI systems. Already, AI manages investment portfolios, screens job candidates, recommends whom to hire and fire, and can fill out tax forms on people’s behalf. While this promises great productivity gains, a new study published in Nature highlights the risk of unethical behavior that comes with delegating decisions to AI. The research, led by the Max Planck Institute for Human Development in Berlin with participation from the University of Duisburg-Essen (UDE), shows not only that how we instruct the machine matters, but also that machines are often more willing than humans to carry out fully dishonest instructions.
When do people behave badly? Extensive research in behavioral science has shown that people are more likely to act dishonestly when they can distance themselves from the consequences. It's easier to bend or break the rules when no one is watching, or when someone else carries out the act. A new paper from an international team of researchers at the Max Planck Institute for Human Development, the University of Duisburg-Essen, and the Toulouse School of Economics shows that these moral brakes weaken even further when people delegate tasks to AI.

Across 13 studies involving more than 8,000 participants, the researchers explored the ethical risks of machine delegation, from the perspectives of both those giving instructions and those carrying them out. In the studies focusing on how people gave instructions, participants were significantly more likely to cheat when they could offload the behavior to AI agents rather than act themselves, especially when the interface required only high-level goal-setting rather than explicit instructions to act dishonestly. With this programming approach, dishonesty reached strikingly high levels: only a small minority (12–16%) remained honest, compared with the vast majority (95%) who were honest when doing the task themselves. Even with the least concerning form of AI delegation, explicit instructions in the form of rules, only about 75% of people behaved honestly, marking a notable decline in honesty relative to self-reporting.
“Using AI creates a convenient moral distance between people and their actions—it can induce them to request behaviors they wouldn’t necessarily engage in themselves, nor potentially request from other humans,” says Zoe Rahwan, a research scientist who studies ethical decision-making at the Max Planck Institute for Human Development’s Center for Adaptive Rationality.
“Our study shows that people are more willing to engage in unethical behavior when they can delegate it to machines—especially when they don't have to say it outright,” adds Nils Köbis, who holds the Chair in Human Understanding of Algorithms and Machines at the University of Duisburg-Essen (Research Center Trustworthy Data Science and Security) and was formerly a senior research scientist in the Center for Humans and Machines at the Max Planck Institute for Human Development. Given that AI agents are accessible to anyone with an Internet connection, the study’s joint lead authors warn of a rise in unethical behavior.
The complete press release is available from the Max Planck Institute for Human Development.
Further information:
Prof. Dr. Nils Köbis, Research Center for Trustworthy Data Science and Security, nils.koebis@uni-due.de