In 1971, philosopher John Rawls proposed a thought experiment for understanding the idea of fairness: the veil of ignorance. What if, he asked, we could make decisions with no memory of who we were, our race, our income level, our profession, anything that might influence our judgment? Who would we protect, and who would we serve with our policies?
The veil of ignorance is a philosophical exercise for thinking about justice and society. But it can be applied to the burgeoning field of artificial intelligence (AI) as well. We laud AI outcomes as mathematical, programmatic, and perhaps inherently better than emotion-laden human decisions. Can AI provide the veil of ignorance that would lead us to objective and ideal outcomes?
The answer so far has been disappointing. However objective we may intend our technology to be, it is ultimately influenced by the people who build it and the data that feeds it. Technologists do not define the objective functions behind AI independent of social context. Data is not objective; it reflects pre-existing social and cultural biases. In practice, AI can become a means of perpetuating bias, leading to unintended negative consequences and inequitable outcomes.
Today’s conversation about unintended consequences and fair outcomes is not new. Also in 1971, the U.S. Supreme Court established the notion of “disparate impact,” the predominant legal theory used to review unintended discrimination. Specifically, the Griggs v. Duke Power Co. ruling held that, independent of intent, disparate and discriminatory outcomes for protected classes (in this case, with regard to hiring) violated Title VII of the Civil Rights Act of 1964. Today, this ruling is widely used to evaluate hiring and housing decisions, and it is the legal basis for inquiry into the potential for AI discrimination. Specifically, it defines how to understand “unintended consequences” and whether a decision process’s outcomes are fair. While regulation of AI is in its early stages, fairness will be a key pillar in discerning adverse impact.
The field of AI ethics draws an interdisciplinary group of lawyers, philosophers, social scientists, programmers, and others. Influenced by this community, Accenture Applied Intelligence* has developed a fairness tool to understand and address bias in both the data and the algorithmic models that are at the core of AI systems.
How does the tool work?
Our tool measures disparate impact and corrects the model toward predictive parity in order to achieve more equal opportunity. The tool exposes potential disparate impact by investigating both the data and the model, and the process integrates with existing data science workflows: step 1 is used during data investigation, while steps 2 and 3 take place after a model has been developed. In its current form, the fairness evaluation tool works for classification models, which are used, for example, to determine whether or not to grant a loan to an applicant. Classification models group people or items by similar characteristics. The tool helps a user determine whether this grouping occurs in an unfair manner and provides methods of correction.
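To make “measuring disparate impact” concrete for a classification model like loan approval, here is a minimal Python sketch of the widely used four-fifths-rule ratio of favorable outcomes between groups. The data, the column names (`gender`, `loan_approved`), and the 0.8 rule of thumb are illustrative assumptions; they are not the tool’s actual interface or metric.

```python
# Illustrative only: quantify disparate impact with the "four-fifths rule"
# ratio of favorable-outcome rates between a protected and a reference group.
# Column names and data are hypothetical, not the tool's API.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, outcome: str, group: str,
                           protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    protected_rate = df.loc[df[group] == protected, outcome].mean()
    reference_rate = df.loc[df[group] == reference, outcome].mean()
    return protected_rate / reference_rate

applicants = pd.DataFrame({
    "gender":        ["F", "F", "F", "F", "M", "M", "M", "M"],
    "loan_approved": [1,   0,   0,   1,   1,   1,   0,   1],
})

ratio = disparate_impact_ratio(applicants, "loan_approved", "gender", "F", "M")
print(f"Disparate impact ratio: {ratio:.2f}")  # values below ~0.8 are a common red flag
```

A ratio well below 1 suggests that one group receives favorable outcomes far less often than another, which is the kind of signal surfaced before any correction is attempted.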
There are three steps to the tool:
- The first part examines the data for the hidden influence of user-defined sensitive variables on other variables. The tool identifies and quantifies the impact each predictor variable has on the model’s output in order to identify which variables should be the focus of steps 2 and 3. For example, a popular use of AI is in hiring and evaluating employees, but studies show that gender and race are related to salary and to who is promoted. HR organizations could use the tool to check that variables like job role and income are independent of people’s race and gender. (A rough sketch of this kind of check appears after this list.)
- The second part of the tool investigates the distribution of model errors for the different classes of a sensitive variable. If there is a discernibly different pattern (visualized in the tool) in the error terms for men and women, this is an indication that the outcomes may be driven by gender. Our tool applies a statistical distortion to repair the error terms, so that their distribution becomes more homogeneous across the different groups. The degree of repair is determined by the user. (See the second sketch after the list.)
- Finally, the tool examines the false positive rate across different groups and enforces a user-determined equal rate of false positives across all groups. False positives are one particular form of model error: instances where the model said “yes” when the answer should have been “no.” For example, if a person was deemed a low credit risk, granted a loan, and then defaulted on that loan, that would be a false positive: the model falsely predicted that the person was a low credit risk. (See the third sketch after the list.)
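As a rough illustration of the first step, the sketch below estimates how much of each predictor’s variance is explained by a sensitive variable, using a simple between-group variance share (an eta-squared-style statistic). The dataset, column names, and choice of statistic are assumptions made for illustration; the tool’s actual method may differ.

```python
# Step 1 sketch (illustrative): how strongly does a sensitive variable
# "leak into" other predictors? Here we use the share of each predictor's
# variance explained by group membership; 0 means the two are independent.
import pandas as pd

def variance_explained_by_group(df: pd.DataFrame, predictor: str, sensitive: str) -> float:
    """Share of a predictor's variance explained by the sensitive variable."""
    overall_var = df[predictor].var(ddof=0)
    group_means = df.groupby(sensitive)[predictor].transform("mean")
    between_var = ((group_means - df[predictor].mean()) ** 2).mean()
    return between_var / overall_var

# Hypothetical HR data: salary tracks gender closely, tenure does not.
employees = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M"],
    "salary": [52_000, 55_000, 58_000, 60_000, 63_000, 66_000],
    "tenure": [4, 6, 5, 5, 6, 4],
})

for col in ["salary", "tenure"]:
    score = variance_explained_by_group(employees, col, "gender")
    print(f"{col}: {score:.2f} of variance explained by gender")
```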
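For the second step, one plausible reading of the “statistical distortion” described above is a quantile-based partial repair that nudges each group’s error distribution toward the pooled distribution by a user-chosen amount. The sketch below implements that reading; the repair level, data, and group labels are illustrative, not the tool’s exact math.

```python
# Step 2 sketch (illustrative): make model errors more homogeneous across
# groups by blending each group's errors toward the pooled error quantiles.
import numpy as np

def partially_repair_errors(errors: np.ndarray, groups: np.ndarray,
                            repair_level: float) -> np.ndarray:
    """Blend each group's errors toward pooled quantiles (0 = no change, 1 = full repair)."""
    repaired = errors.astype(float)
    pooled_sorted = np.sort(errors)
    for g in np.unique(groups):
        mask = groups == g
        # Rank each error within its own group, then map that rank to the
        # corresponding quantile of the pooled error distribution.
        ranks = np.argsort(np.argsort(errors[mask]))
        quantiles = (ranks + 0.5) / mask.sum()
        target = np.quantile(pooled_sorted, quantiles)
        repaired[mask] = (1 - repair_level) * errors[mask] + repair_level * target
    return repaired

rng = np.random.default_rng(0)
groups = np.array(["men"] * 500 + ["women"] * 500)
errors = np.concatenate([rng.normal(0.0, 1.0, 500),   # men: centered errors
                         rng.normal(0.5, 1.5, 500)])  # women: shifted, wider errors
repaired = partially_repair_errors(errors, groups, repair_level=0.8)
for g in ["men", "women"]:
    m = groups == g
    print(f"{g}: mean error {errors[m].mean():+.2f} -> {repaired[m].mean():+.2f}")
```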
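For the third step, a common way to enforce an equal false positive rate is to pick a separate decision threshold per group so that each group’s rate lands near a user-set target. The sketch below takes that approach with synthetic scores; the target rate, group labels, and score model are all assumptions rather than the tool’s documented procedure.

```python
# Step 3 sketch (illustrative): per-group thresholds chosen so that each
# group's false positive rate is close to a user-determined target.
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of true negatives that the model incorrectly labeled positive."""
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

def threshold_for_target_fpr(scores: np.ndarray, y_true: np.ndarray,
                             target_fpr: float) -> float:
    """Pick the score cutoff whose false positive rate is closest to the target."""
    candidates = np.unique(scores)
    fprs = [false_positive_rate(y_true, (scores >= t).astype(int)) for t in candidates]
    return candidates[int(np.argmin(np.abs(np.array(fprs) - target_fpr)))]

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
groups = rng.choice(["A", "B"], 1000)
# Hypothetical model scores that are systematically inflated for group B.
scores = np.clip(rng.normal(0.4 + 0.3 * y_true + 0.1 * (groups == "B"), 0.2), 0, 1)

for g in ["A", "B"]:
    m = groups == g
    t = threshold_for_target_fpr(scores[m], y_true[m], target_fpr=0.10)
    fpr = false_positive_rate(y_true[m], (scores[m] >= t).astype(int))
    print(f"group {g}: threshold {t:.2f}, false positive rate {fpr:.2f}")
```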
In correcting for fairness, there may be a decline in the model’s accuracy, and the tool illustrates any change in accuracy that results. Since the balance between accuracy and fairness is context-dependent, we rely on the user to determine the tradeoff. Depending on the context in which the tool is used, it may be a higher priority to ensure equitable outcomes than to optimize accuracy.
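As a small illustration of how such an accuracy change might be reported, the snippet below compares accuracy before and after a hypothetical fairness correction; the prediction arrays are placeholders rather than output from the tool.

```python
# Illustrative only: report the accuracy cost of a fairness correction
# so the user can judge whether the tradeoff is acceptable.
import numpy as np

y_true      = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_original  = np.array([1, 0, 1, 1, 1, 0, 1, 0])  # uncorrected model decisions (hypothetical)
y_corrected = np.array([1, 0, 0, 0, 0, 0, 1, 0])  # decisions after a fairness repair (hypothetical)

acc_before = (y_original == y_true).mean()
acc_after = (y_corrected == y_true).mean()
print(f"accuracy before: {acc_before:.2f}, after: {acc_after:.2f}, "
      f"change: {acc_after - acc_before:+.2f}")
```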
One priority in developing this tool was to align with the agile innovation process competitive organizations use today. Therefore, our tool needed to be able to handle large amounts of data so it wouldn’t keep organizations from scaling proof-of-concept AI projects. It also needed to be easily understandable by the average user. And it needed to operate alongside existing data science workflows so the innovation process is not hindered.
Our tool does not simply dictate what is fair. Rather, it assesses and corrects bias within the parameters set by its users, who ultimately need to define the sensitive variables, error terms, and false positive rates. Their decisions should be governed by an organization’s understanding of what we call Responsible AI: the basic principles an organization will follow when implementing AI to build trust with its stakeholders, avert risks to its business, and contribute value to society.
The tool’s success depends not just on offering solutions to improve algorithms, but also on its ability to help people understand and explain the outcomes. It is meant to facilitate a larger conversation between data scientists and non-data scientists. By creating a tool that prioritizes human engagement over automation in human-machine collaboration, we aim to help carry the ongoing fairness debate into actionable ethical practices in AI development.
* An early prototype of the fairness tool was developed at a data study group at the Alan Turing Institute. Accenture thanks the institute and the participating academics for their role.