Study Aims to Arm Artificial Intelligence with Fairness Factors

Posted: October 1, 2021

While some researchers consider “fairness” a deep-seated principle of human behavior, a new study will examine the long-term impacts of automated, machine-driven decision-making on equity.

The proposal, “FAI: Fairness in Machine Learning with Human in the Loop,” was awarded a three-year, $1 million grant from the National Science Foundation and Amazon, with Assistant Professor of ISE and Electrical and Computer Engineering Parinaz Naghizadeh serving as a co-principal investigator.

The study will be a collaboration with peers from the University of California, Santa Cruz, the University of Michigan and Purdue University. “We pooled our different disciplinary expertise together to apply for this grant,” Professor Naghizadeh says.

In their proposal, the investigators wrote, “While recent works have looked into the fairness issues raised by the use of AI [artificial intelligence] in the ‘short-term,’ the long-term consequences and impacts of automated decision-making remain unclear. The understanding of the long-term impact of a fair decision provides guidelines to policy-makers when deploying an algorithmic model in a dynamic environment and is critical to its trustworthiness and adoption.”

The project intends to examine the long-term impacts of decisions made by automated machine learning algorithms. “This knowledge will help design the right fairness criteria and intervention mechanisms throughout the life cycle of the decision-action loop to ensure long-term equitable outcomes,” the investigators wrote in their abstract.
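The “fairness criteria” the investigators mention are typically statistical conditions on an algorithm’s decisions. As a minimal illustration only (not the project’s own method), the sketch below computes one widely used short-term criterion, demographic parity, on hypothetical toy data:

```python
# Illustrative sketch of demographic parity, one common "short-term"
# fairness criterion. All data below are hypothetical toy values.

def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-decision rates between two groups.

    decisions: list of 0/1 outcomes (1 = favorable decision).
    groups: list of group labels, aligned with decisions.
    """
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Toy example: 1 = favorable decision (e.g., a loan approved)
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.75 - 0.25 = 0.5
```

A value of 0 would mean both groups receive favorable decisions at the same rate; the study’s focus is on how such static criteria behave over many rounds of interaction between people and algorithms.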

“One of my research interests is the study of economics of decision-making,” Professor Naghizadeh says. “To understand how to drive ethical, fair, and equitable decisions, we need to account for the economic goals of the decision makers as well.”

She says the study is “very much within the skillset of ISE students,” drawing on cost-benefit analysis, human factors, mathematical modeling and optimization.

A main question in the study is how humans react to algorithms, Professor Naghizadeh says. She gives an example of automated machines not understanding a person’s accent. “That’s where user experience comes in,” she says. “If users have a poor experience with an AI assistant, they are less likely to continue using it in the future. This affects the profit the company makes.”

Similar issues arise when algorithms are used in college admission decisions. In this case, “It’s not just about economics, but more about education opportunities – admission algorithms change students’ long-term education and career experiences,” Professor Naghizadeh says.

“We hope the research we do will provide a long-term view of how we can design algorithms and how we should use them. The goal we have is to end up with principles to develop more equitable AI systems.”


Story by Nancy Richison

Category: Faculty