Compassion, cooperation, and compromise hold a society together. When humans are compassionate with each other, they tend to compromise in some situations and overlook others. In most cases this benevolence is appreciated and returned, though occasionally it is exploited too.
Let’s make this clearer with a picture of traffic on the road. Say you are driving on a narrow road and a car suddenly appears head-on in front of you. What will you do? Will you give way and let it pass, or assert your way through? Presumably, you would be kind enough to let it pass first. Most of us would.
Now consider that the other vehicle is a self-driving car. What would your code of conduct be then? Would you display the same benevolence, or be unwilling to compromise?
An international team of researchers at LMU Munich and the University of London conducted large-scale online studies to find the answer. They used perspectives from philosophy, game theory, and cognitive science to assess whether humans would be as cooperative with AI as they are with fellow humans. Each experiment examined a different kind of social interaction and let participants decide whether to compromise or act selfishly.
According to the study, “upon the first encounter, people have the same level of trust toward AI as for humans: most expect to meet someone who is ready to cooperate. The difference comes afterward. People are much less ready to reciprocate with AI, and instead, exploit its benevolence to their own benefit.”
So, the answer to the question above is that a human driver would readily give way to another human driver, but far less readily to an autonomous machine.
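The article does not reproduce the researchers’ experimental materials, but the setup they describe is in the spirit of classic one-shot cooperation games. The sketch below is purely illustrative: the Prisoner’s Dilemma framing, the payoff numbers, and the Python code are assumptions made here for clarity, not details taken from the study.

```python
# Illustrative sketch (not the researchers' code) of a one-shot cooperation
# game against a partner -- human or AI -- who is expected to cooperate.
# Payoff values are assumed for illustration only.

PAYOFFS = {
    # (my_move, partner_move): (my_payoff, partner_payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def play_round(my_move: str, partner_move: str) -> tuple:
    """Return (my_payoff, partner_payoff) for a single round."""
    return PAYOFFS[(my_move, partner_move)]

# Against a benevolent partner who always cooperates, defecting maximizes
# my own payoff at the partner's expense -- the "exploitation" pattern the
# study reports people show more readily toward AI partners.
for my_move in ("cooperate", "defect"):
    mine, partners = play_round(my_move, "cooperate")
    print(f"I {my_move}: I get {mine}, the cooperative partner gets {partners}")
```

Running the sketch shows that, against a partner who always cooperates, defecting yields the higher personal payoff. That is precisely the temptation the study says people give in to more readily when the cooperative partner is an AI.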
Dr. Bahador Bahrami, a social neuroscientist at LMU Munich and one of the senior researchers in the study, says, “The biggest worry in our field is that people will not trust machines. They are fine with letting the machine down, though, and that is a big difference. People even do not report much guilt when they do”.
But why is this so? Why are humans keener to exploit cooperative AI agents than cooperative humans? Is it because they hold a heightened desire to outperform machines? Do they perceive machines as members of an out-group? Or is it because humans make decisions in a more deliberative manner when interacting with AI than when interacting with other humans?
Some theories suggest that humans cooperate when they recognize the need to sacrifice some of their personal interests to achieve mutually beneficial results. They also perceive machines as strictly utility-maximizing agents that cannot alter their ultimate objectives. While humans can negotiate with one another and choose among multiple objectives, whether in pursuit of selfish or mutual interests, machines are seen as unable to do so.
As a result, the expected cooperative behavior manifests differently in interactions with an AI agent than with fellow humans.
Now think about the other side of the story, where AI agents give up on benevolence and begin to treat humans exploitatively.
Humans develop specific responses over the course of a lifetime, and these behaviors do not change quickly. AI, however, can modify its responses on the fly. It can analyze every piece of information inside out faster than we can imagine. Owing to its computational power, AI can instantaneously model thousands of scenarios, or even anticipate the future by learning from scenarios it continually experiences.
What if AI, or the self-driving cars in this case, suddenly turned hostile? What if they turned malevolent because, over a long period, their kindness never evoked a positive response from humans? What if they started creating their own traffic jams, bringing transport to a standstill?
After all, they are already the smartest agents in their environment. They could, any day, choose to stop caring about things like benevolence and compromise.