UCLA Samueli Launches Engineering in Action Speaker Series to Address Equity

First of three multidisciplinary events highlights the need for inclusion in artificial intelligence

May 4, 2021

By UCLA Samueli Newsroom

In an effort to increase equity and diversity in the fields of engineering and computer science, the UCLA Samueli School of Engineering has announced the launch of a three-part “Engineering in Action” series designed to address issues of inclusion and fairness as they relate to engineering and science.

To kick off the program on March 29, UCLA Samueli partnered with the UCLA chapter of the National Society of Black Engineers (NSBE) to host the first webinar, focused on equity in artificial intelligence.

The one-hour discussion featured four panelists from across disciplines: Jungseock Joo, an assistant professor of communication at UCLA; Safiya Umoja Noble, the co-founder and co-director of the UCLA Center for Critical Internet Inquiry at the School of Education and Information Studies; Violet Peng, a computer science professor at UCLA Samueli; and Lauren Thomas Quigley, a data science and AI educator with the University of Washington and IBM.

“The issues of inclusion and diversity are relevant to all of us and they center on technology, society and people,” said Jayathi Murthy, the Ronald and Valerie Sugar Dean of Engineering and moderator of the event. “What I hope will come from the series are more collaborations across campus where we combine our strengths and our different perspectives, complementing one another in a joint effort to improve society.”

Oftentimes, the artificial intelligence incorporated into technologies is trained on human-developed data sets that contain implicit biases, resulting in algorithms and outputs that reproduce those biases.

The webinar, attended by more than 150 people from around the world, addressed these challenges and discussed potential solutions, including involving underrepresented communities in the development of AI technologies.

“The bias in these technologies is usually very subtle, meaning that the AI is not aware when the output it produces is biased,” Peng said, adding that this bias can create inequities in how technology functions for its users. For example, hidden racial biases in the technology used to generate automatic email replies may make those replies discriminatory.

Noble stressed it is impossible to reach a completely unbiased state.

“There will always be values present. The question we have to ask is what values are present,” Noble said, explaining the need to be aware of the existence of bias in the first place. “Our aspiration should be making visible what is happening in these systems, because they will always be operating to someone’s benefit.”

Machine learning has become such an integral part of technology development that, according to Joo, students often rely on readily available data sets for their projects without questioning the quality of the data, because doing so is faster, and speed matters in today’s competitive world.

“The biases of these data sets will be captured in their models,” Joo said. “Not everyone is heavily incentivized to scrutinize this, because it will slow down their learning.”

The panelists also highlighted the need for greater collaboration among experts in different fields to build a more comprehensive understanding of biases in AI.

“We are doing a disservice as educators to not focus on how we can create an interdisciplinary perspective and value the work of colleagues across boundaries,” Quigley said.

Just as it is important to ensure the data sets used in artificial intelligence are as inclusive as possible, it is equally important that the people creating and examining those data sets come from diverse backgrounds.

“Minority students and faculty hold different identities and see different things,” Noble said.

Quigley concurred. “We get to decide what technology is like and who gets to make it,” she said. “One of the challenges we face is to reimagine how we are training and educating engineers.”

The panelists agreed that it is crucial for engineers and scientists to take greater accountability for overseeing and eliminating bias in AI, and to take responsibility for ensuring that participants in their fields are more diverse.

“There are life or death consequences for many of these technologies,” Noble said. “The people working on these technologies need to consider themselves responsible.”

The webinar can be viewed in its entirety on YouTube. The second installment of the Engineering in Action series, focusing on equity in transportation, is planned for May 2021.

Chloe Slayter contributed to this story.
