Results for ""
Because AI tools are black boxes whose outcomes no human is directly accountable for, ethics in AI is a real concern. The problems with AI systems in recruitment, for instance, are by now notorious. Another disturbing example is systems that attempt to replace human writers, producing articles built on “imagined inferences”.
However, any large-scale decision-making machine is a black box, since humans cannot feasibly be part of every decision. One may audit such a system, but audits are infrequent, and they cannot prevent a large-scale decision-making system, AI or not, from making unethical decisions. This risk comes from scale and automation themselves.
Take price optimisation, for instance, which is based on demand trajectories, item roles, demand price elasticity, and objective functions (such as growth, profitability, or both). A store with limited competition will see lower price elasticity: its customers are less price sensitive because they lack options. If a system recommends higher prices on that basis, it may be unfair to those customers.
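To make the mechanics concrete, here is a minimal sketch assuming a constant-elasticity demand model, q(p) = q0 · (p/p0)^e; the cost and elasticity values are invented, not from any production system:

```python
# A minimal sketch of elasticity-based price optimisation, assuming a
# constant-elasticity demand model q(p) = q0 * (p / p0) ** e.
# All numbers below are illustrative.

def optimal_price(unit_cost: float, elasticity: float) -> float:
    """Profit-maximising price under constant elasticity (requires e < -1)."""
    if elasticity >= -1:
        raise ValueError("Demand must be elastic (e < -1) for a finite optimum.")
    # Standard markup rule: p* = c * e / (e + 1)
    return unit_cost * elasticity / (elasticity + 1)

cost = 10.0
print(optimal_price(cost, elasticity=-3.0))  # competitive area: 15.0
print(optimal_price(cost, elasticity=-1.5))  # low-competition area: 30.0
```

The optimiser recommends a higher price in the low-competition case purely because customers there have fewer options, which is exactly the ethical tension described above.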
A typical price-elasticity model and optimisation engine are not AI-based, so even a non-AI technology can be unethical, as the case above shows. Moreover, given the fundamentals of how markets function, ethics in decision systems, AI or not, may differ from our intuitive notion of “fairness”. In the example above, economics says that higher profits will eventually draw competitors near that store; until that happens, though, the people around the store pay more. The ethical stand here is fuzzy, since intervening threatens market freedom.
Consider an AI-based recruitment tool for shortlisting candidates. Because it is geared towards providing high-confidence results, it is automatically biased towards data available “at enough scale”. So, if some colleges are over-represented in the firm and this history is fed into the AI, it will keep recommending candidates from those colleges, overlooking deserving candidates from others, simply because there is more data to support that decision. Some may argue that the firm wants recruitment to be efficient and effective rather than fair. So, what would be ethical here, given the free market?
Again, consider the same model producing results biased by residential zip codes. We know incomes and education levels vary by zip code. But since education and skills will be evaluated in interviews anyway, do we need to factor in zip codes at all when shortlisting candidates? A simple audit, sketched below, can surface such skews.
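One way to audit a shortlisting model, assuming invented applicant data, is to compare shortlist rates across groups (colleges, zip codes) using the “four-fifths” rule of thumb from fair-hiring practice:

```python
# A rough audit sketch: compare shortlist rates across groups using the
# "four-fifths" rule of thumb. The applicant data is invented.
from collections import Counter

applicants = [  # (group, shortlisted)
    ("college_A", True), ("college_A", True), ("college_A", False),
    ("college_B", True), ("college_B", False), ("college_B", False),
]

totals, selected = Counter(), Counter()
for group, shortlisted in applicants:
    totals[group] += 1
    selected[group] += shortlisted

rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    # Flag any group shortlisted at under 80% of the best group's rate
    flag = "REVIEW" if rate < 0.8 * best else "ok"
    print(f"{group}: shortlist rate {rate:.0%} ({flag})")
```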
So what is ethical will remain debatable, but what we need in an AI system is transparency and the avoidance of “unplanned” bias. Here are some steps that can help.
Ensure clear data lineage - We should always know who or which system generated the data fed into the AI system.
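As a minimal sketch of what a lineage record could look like, with illustrative field names rather than any standard schema:

```python
# A minimal lineage record attached to every dataset fed to the model,
# so the origin of each input is always answerable. Fields are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class LineageRecord:
    dataset: str                    # logical name of the dataset
    produced_by: str                # person or upstream system that generated it
    source_systems: tuple[str, ...] # where the raw records originated
    created_at: datetime            # when this version was produced

record = LineageRecord(
    dataset="applicant_features_v3",
    produced_by="hr-etl-pipeline",
    source_systems=("ats_exports", "campus_portal"),
    created_at=datetime.now(timezone.utc),
)
print(record)
```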
Remove data bias - Historical biases in data must be discovered and removed; exploratory data analysis (EDA) can detect many of them. Surrogates, though, are trickier: data items that are not expressly unethical, such as zip codes in the case above, may still encode prior biases. For example, how busy medical clinics are may be used to decide where to open new clinics, but that can be flawed, since lower economic segments may face transport constraints and therefore visit clinics less often.
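As a sketch of such an EDA check, using invented hiring records and a chi-square test of association between zip code and outcome:

```python
# An EDA sketch for surrogate detection: test whether a nominally neutral
# field (zip code) is statistically associated with the historical outcome.
# The frame below is invented; in practice this runs on the training data.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "zip_code": ["10001"] * 50 + ["10002"] * 50,
    "hired":    [1] * 30 + [0] * 20 + [1] * 10 + [0] * 40,
})

table = pd.crosstab(df["zip_code"], df["hired"])
chi2, p_value, _, _ = chi2_contingency(table)
print(table)
print(f"p-value: {p_value:.4f}")  # small p => zip code encodes past outcomes
if p_value < 0.05:
    print("zip_code is associated with outcomes; treat it as a potential surrogate")
```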
Make AI explainable - Tools are available to analyse the patterns in AI judgements and the data behind them. They provide a good approximation of which parameters influenced a decision and to what extent. This helps spot undue influences, validate decisions against regulations, and detect any drift or bias in the AI model over time.
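As one example of such tooling, here is a sketch using scikit-learn's permutation importance on an illustrative model; the feature names are invented stand-ins:

```python
# A sketch of one widely available explainability check: permutation
# importance, which estimates how much each input influences predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["experience", "test_score", "referrals", "zip_code", "college"]

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")  # outsized weight on zip_code is a red flag
```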
Regulations & AI - As regulations change with time and geography, some AI models may become non-compliant. We need governance to identify those models that require updating when a given regulation changes, using metadata.
Maintain model consistency - Industrial AI models require consistency testing. Two runs on the same data may not produce identical results, but they cannot be allowed to differ drastically. Gen-AI additionally needs hallucination checks.
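A sketch of such a consistency test, using a stand-in scoring function with a small stochastic component; the tolerance is an illustrative policy choice:

```python
# Score the same data twice and require the runs to agree within a
# tolerance, rather than be bitwise identical.
import random

def score(record: dict) -> float:
    """Stand-in for a model call with a small stochastic component."""
    base = 0.6 * record["demand"] + 0.4 * record["margin"]
    return base + random.gauss(0, 0.01)

record = {"demand": 0.7, "margin": 0.5}
run_1 = score(record)
run_2 = score(record)

TOLERANCE = 0.05  # maximum acceptable gap between identical-input runs
assert abs(run_1 - run_2) <= TOLERANCE, "consistency check failed"
print(f"runs agree within {TOLERANCE}: {run_1:.3f} vs {run_2:.3f}")
```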
Avoid copyright and IP infringement - Copyright and privacy, particularly for images, must be protected, especially with Gen AI. Ideally, Gen AI should be used only to create first drafts as a starting point, not for final published content.
Create human intervention rules - While AI may recommend decisions, exception reviews should be built in. If, for instance, the system proposes a steep increase in an item's price owing to a supply chain blockage, a pricing manager could disallow it.
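A sketch of such an exception rule, with an invented escalation threshold:

```python
# AI price recommendations above a threshold change are routed to a human
# instead of being auto-applied. Threshold and routing are policy choices.
def apply_recommendation(current_price: float, recommended_price: float,
                         max_auto_increase: float = 0.10) -> str:
    """Auto-apply small changes; escalate large increases to a pricing manager."""
    change = (recommended_price - current_price) / current_price
    if change > max_auto_increase:
        return f"ESCALATE: +{change:.0%} increase needs manager approval"
    return f"AUTO-APPLY: price set to {recommended_price:.2f}"

print(apply_recommendation(100.0, 104.0))  # small change, auto-applied
print(apply_recommendation(100.0, 135.0))  # supply-shock spike, escalated
```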
Overall, as with any powerful tool, AI needs built-in safeguards. When businesses prioritise ethical concerns in their decision-making and promote responsible use of data and analytics through AI ethics, they not only strengthen their reputation as responsible and trustworthy entities but also reduce the dangers associated with data misuse.