Bias in our society exists in many forms: it can be based on gender, race, caste, age, and more. Imagine doctors who are unrealistically optimistic; their judgements will be wrong, because optimistic bias produces systematic errors. And if doctors are too optimistic in the morning and too pessimistic in the afternoon, their judgments will be noisy, showing unwanted variability.

Cass R. Sunstein of Harvard Law School recently published a paper on the use of algorithms in society. In his opinion, the use of algorithms is likely to improve accuracy across a wide range of settings, because algorithms reduce both bias and noise. But in important cases, algorithms struggle to make accurate predictions, not because they are algorithms but because they lack the necessary data.

Algorithms and jurisdiction 

Sunstein attempts to draw some general lessons, applicable to ordinary life, about the choice between decisions by human beings and decisions by algorithms. In some US jurisdictions, the decision whether to allow pretrial release turns on a single question: flight risk. In other jurisdictions, the likelihood that the defendant will commit a crime if released also matters.

Sunstein studied these instances based on research by Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. Kleinberg and his colleagues built an algorithm that uses, as inputs, the same data available to judges at the time of the bail hearing, such as prior criminal history and current offences. According to their findings, the algorithm does much better than real-world judges.

Kleinberg's study suggests that the reason algorithms outperform judges is that judges treat some high-risk defendants as if they were low-risk, and vice versa. The algorithm, on the other hand, makes neither mistake.
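To make the idea concrete, here is a minimal sketch of the kind of statistical scoring such an algorithm might perform. The feature names, weights, and threshold below are purely illustrative assumptions, not values from the Kleinberg study; a real system would learn its weights from historical bail-hearing outcomes.

```python
import math

# Illustrative, hand-picked weights (a real model would fit these to data).
WEIGHTS = {
    "prior_arrests": 0.35,            # count of prior arrests
    "prior_failures_to_appear": 0.80, # count of past failures to appear
    "offence_severity": 0.25,         # severity of current charge, e.g. 1-5
}
BIAS = -2.0  # baseline log-odds of flight for a defendant with zero features

def flight_risk(defendant: dict) -> float:
    """Estimate the probability of flight with a simple logistic model."""
    score = BIAS + sum(w * defendant.get(name, 0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-score))

def recommend_release(defendant: dict, threshold: float = 0.5) -> bool:
    """Recommend pretrial release when the estimated flight risk is low."""
    return flight_risk(defendant) < threshold

low_risk = {"prior_arrests": 0, "prior_failures_to_appear": 0, "offence_severity": 1}
high_risk = {"prior_arrests": 5, "prior_failures_to_appear": 2, "offence_severity": 3}
```

The point of the sketch is that the model applies the same weights to every defendant, so it cannot be optimistic in the morning and pessimistic in the afternoon: identical inputs always yield identical risk estimates.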

Impact of cognitive bias 

No matter how well trained they are, human beings suffer from cognitive biases that produce severe and systematic errors. For example, doctors are subject to 'availability bias' when deciding whether to test patients for pulmonary embolism: the decision is affected by whether they have recently had a patient diagnosed with pulmonary embolism. In addition, when judgements about probability are based on whether a known feature of a person or situation seems representative of, or similar to, some unknown fact or condition, this is 'representativeness bias'.

According to the research, availability and representativeness bias can lead to damaging and costly mistakes. For example, whether people will buy insurance for natural disasters is greatly affected by recent experiences. 

In such situations, the use of algorithms can be a great boon. For individuals and private and public institutions, it can reduce or eliminate the effects of cognitive biases.  

Algorithms and society 

Many people prefer to avoid the idea of making decisions by algorithm. One reason appears to be a general preference for human agency. In addition, evidence suggests that people are far more willing to forgive mistakes by human beings than mistakes by algorithms. According to Sunstein, people are especially opposed to algorithmic forecasters even when they do better than human forecasters.

At the same time, studies suggest that algorithms can predict romantic attraction between two people; if so, attraction may be less like an unpredictable chemical reaction and more a matter of predictable elements. But algorithms might not be able to predict the occurrence of a revolution that could transform society. Prediction problems on which algorithms do not do well stem from the absence of adequate data and from what we might see as the intrinsic unpredictability of human affairs.

To conclude, according to the study, the limitations of algorithms can be categorized into five points:

  1. Algorithms might not be able to identify people's preferences, which might be concealed or falsified and revealed at an unexpected time. 
  2. Algorithms might not be able to foresee the effects of social interactions, which can lead in unanticipated and unpredictable directions. 
  3. Algorithms might not be able to anticipate sudden or unprecedented leaps or shocks (a technological breakthrough, a successful terrorist attack, a pandemic, a black swan).  
  4. Algorithms might not have "local knowledge" or private information, which human beings might have. 
  5. Algorithms might not be able to foresee the effects of context, timing, serendipity, or mood. 
