collage of an aerial view of Tulane's campus with fiber connection light web

Building a Better AI

Tulane faculty seek to unlock the secrets of more powerful and useful artificial intelligence applications and to work with community partners to ensure AI’s fairness for those it impacts.

From the uptown campus of Tulane University to the health sciences campus downtown, researchers in such fields as sociology, economics, computer science and public health are trying to better understand, improve and apply artificial intelligence.  

They recognize all the good AI can do: the ways it can solve complex problems, improve health care and revolutionize industries. But they also worry about the many ways that AI algorithms, or the data they are trained on, can be biased, especially in areas such as hiring and criminal justice, where the cost of a wrong decision can be devastating.

That distrust is one of the reasons why scientists across the country — including those at Tulane University — are prioritizing research that leads to the design of AI systems that are fair, transparent and accountable. 

At Tulane, that work is being done by a multidisciplinary team of social scientists, designers, technologists, students and community partners through the Tulane Center of Excellence in Community-Engaged Artificial Intelligence (CEAI). 

Under the direction of Aron Culotta, an associate professor of computer science in the Tulane School of Science and Engineering, CEAI is one of five such Centers of Excellence created and funded by the Tulane Office of Research to mobilize experts from different fields of study across the university to focus on complex research challenges through an interdisciplinary lens.

Caryn Bell, Alessandra Bazzano and Patrick Button portraits
Caryn Bell, Alessandra Bazzano and Patrick Button


“There is a growing tech backlash concerned that AI may exacerbate existing disparities, widen the digital divide or otherwise result in a less just society,” Culotta said. “These trends indicate that AI will succeed only with the trust and support of the communities it affects, especially those from historically underserved groups.”

To that end, Culotta has enlisted five Tulane professors to serve as assistant directors — Nicholas Mattei, an assistant professor of computer science and expert in AI ethics; Andrea Boyles, a sociologist and race and gender scholar; Patrick Button, a Tulane economist with expertise in discrimination, particularly in employment and mortgage access; Alessandra Bazzano, an associate professor in the School of Public Health and Tropical Medicine who conducts research in maternal and child health; and Caryn Bell, who studies racial disparities in public health. 

In its first year, the center has made impressive strides. Its Aug. 1, 2023, progress report outlines a host of activities that Culotta and his assistant directors have undertaken, from partnerships with community nonprofits to research projects addressing issues such as school rating websites, the social innovation potential of AI for public health and fairness in the microlending industry.  

The center has launched a Distinguished Speaker Series, and in November, held a Gulf Coast Artificial Intelligence Social at the 36th Conference on Neural Information Processing Systems in New Orleans. The goal of the event was to raise awareness about the work happening in the region as it relates to coastal, climate, logistics and materials challenges.

A sampling of projects includes Bell’s study of how academics, public health institutions and the media talk about race and racism with regard to racial health inequities, and Button’s studies of discrimination in mortgage loan applications and access to mental health care. Simone Skeen, a PhD student in the School of Public Health and Tropical Medicine, led a study with Bazzano and Culotta on the use of machine learning to detect suicidal ideation from Reddit posts.


Mattei said that despite AI’s benefits, there is a pervasive public mistrust of the technology, and for good reason. “Often these concerns grow out of a sense that these technologies are applied to people and communities without their input.”

He cited the example of recidivism-prediction software analyzed in 2016 by ProPublica. The software attempted to predict whether defendants would reoffend once they were released on bail. 

“It was much more likely to predict that White defendants would not reoffend, so they were released more frequently and at lower bail prices,” Mattei said. “The outcome was that more Black people were being held on high bail or not released on bail at all.”

“If you can’t afford it, you end up sitting in jail for 30, 60 or 90 days, and then the charges are dropped,” Culotta said. “A lot can happen if you’re not working or not paying your bills.”
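The disparity Mattei describes is often measured as a gap in false positive rates: among people who did not reoffend, how often each group was wrongly flagged as high risk. Below is a minimal sketch of that calculation; the records and group labels are invented for illustration and are not ProPublica's data or the actual software's output.

```python
from collections import defaultdict

# Hypothetical records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, True),
    ("A", True, False), ("B", False, False),
]

def false_positive_rates(rows):
    """Per-group false positive rate: share of non-reoffenders
    who were nonetheless flagged as high risk."""
    flagged = defaultdict(int)    # non-reoffenders predicted high risk
    negatives = defaultdict(int)  # all non-reoffenders
    for group, predicted, reoffended in rows:
        if not reoffended:
            negatives[group] += 1
            if predicted:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rates(records))
```

On this toy data, group A's false positive rate (2/3) is double group B's (1/3): exactly the kind of gap that translates into one group being held on bail more often despite identical outcomes.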

Aron Culotta, Andrea Boyles and Nicholas Mattei portraits
Aron Culotta, Andrea Boyles and Nicholas Mattei

Stronger Community Connections

In April, the center held a workshop for academics and community partners to discuss emerging issues in AI and to identify risks on the horizon as AI’s impact continues to grow. While many of the researchers had existing relationships with some of the groups, the workshop enabled them to strengthen those connections and make new ones. 

“We now have several strong partnerships that will serve as the basis for research and proposals in year two,” Culotta said. “Additionally, we plan to create a Community Advisory Board to provide more regular interaction with project partners.”

Researchers teamed up with groups such as Court Watch NOLA, Eye on Surveillance and the city of New Orleans to study AI’s impact on their work and society in general. 

Boyles and Culotta are working with Eye on Surveillance to better understand the use of government surveillance tools such as facial recognition and whether the tools have an impact on crime. 

Boyles said the software illustrates how AI can expand racialized surveillance, stigmatization and criminalization, often without the knowledge of, and to the detriment of, Black people and other marginalized groups. “We need to further understand and counter everyday harms that may be exacerbated through computer technology,” she said.

Culotta is also working with Court Watch NOLA, a nonprofit group that trains volunteers to monitor and report on the efficiency of the New Orleans criminal justice system. Together with computer science seniors as part of their Capstone Service Learning course, they built a transparency dashboard to better monitor effectiveness and equity in New Orleans Magistrate Court. Culotta and Boyles are using that work to seek National Science Foundation funding for developing additional community-driven tools to monitor the New Orleans criminal court system. 

Button, executive director of Tulane’s new Connolly Alexander Institute for Data Science, is conducting research that seeks to use AI to detect discrimination, specifically as it relates to access to therapy appointments. 

“I’m doing an audit field experiment, a sort of ‘secret shopper’ study, where therapists get appointment request emails from prospective therapy patients,” Button said. “The requests are, on average, identical but from individuals with different names, which varies the perceived race, ethnicity and gender of the patient.”

He is using an AI tool called natural language processing (NLP) to detect potential subtle discrimination in how therapists respond to appointment requests by email based on the patient’s name. 

“NLP can help determine, for example, if therapists send less helpful or polite emails to Black or Hispanic prospective patients,” Button said. 
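One common way NLP detects such differences is by scoring the tone of each reply and comparing scores across patient names. The sketch below uses a toy word-count approach with made-up lexicons and replies; Button's actual pipeline is not described in the article, and real studies use far richer language models.

```python
import re

# Illustrative lexicons only; a real study would use validated
# sentiment or politeness models, not hand-picked word lists.
POLITE = {"happy", "glad", "welcome", "please", "thanks", "hope"}
CURT = {"unfortunately", "cannot", "no", "unavailable", "busy"}

def politeness_score(text: str) -> int:
    """Count polite cues minus curt cues; higher = warmer reply."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POLITE for w in words) - sum(w in CURT for w in words)

# Hypothetical therapist replies to two otherwise-identical requests
replies = {
    "patient_1": "I would be glad to see you. Please call my office, thanks!",
    "patient_2": "Unfortunately I am busy and cannot take new patients.",
}
scores = {pid: politeness_score(msg) for pid, msg in replies.items()}
print(scores)
```

Aggregated over many requests, systematically lower scores for replies to names perceived as Black or Hispanic would be evidence of the subtle discrimination Button describes.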

Bazzano is investigating new methodologies for community-driven AI, borrowing from public health, which has a rich history of community-engaged research. 

“Artificial intelligence is a technical and social innovation with great potential to positively impact public health, for example, enhancing disease prevention and detection, accelerating behavior change and enriching approaches to improving health,” Bazzano said. 

“However, there are also potential harms of applying AI in public health, which must be addressed to allow this innovation to have a net positive impact. We seek to interrogate and carefully consider these potential harms and to identify community-engaged solutions to fully realize the potential benefits of AI as a social innovation.”

Those solutions include appropriate safeguards to protect patient data and privacy when AI is used in health, as well as consideration of the real and perceived risks of privacy violations. “The use of sensitive health data and access to it by unknown individuals or organizations is a major concern for AI and other sociotechnical innovations,” Bazzano said.


AI as a Learning Tool

Students play a significant role in the center’s work, using it as an opportunity to complete their Senior Capstone Service Learning course, and in the process, gain valuable skills as they prepare to head to graduate school or the workforce.

Ila Keshishian, Marisa Long and Anna Schoeny, all of whom graduated between 2022 and 2023, are among the students who worked with Court Watch NOLA, restructuring the organization’s data workflow and designing an interactive data dashboard to host and communicate its findings and support additional data analysis. 

“We wanted Court Watch to be able to access their data in real time, so that they could understand trends rapidly and post graphics to their social media platforms,” said Keshishian, who uses data science principles in her job as a chemical engineer for Merck’s vaccine manufacturing site in Philadelphia. 

“The goal of the project was to clean up the data, extract additional information for the docket and create a web application to act as an internal docket dashboard for the staff at Court Watch,” said Schoeny, who spent the past summer as a Data Science Fellow with the Bureau of Justice Statistics Corrections Unit.

Long, a software engineer for Nationwide Insurance, said the group gave students the creative freedom to produce a quality and effective dashboard that included authentication, search input and output pages, and the ability to download raw data directly and explore data visualizations for future reports.

Under Mattei’s guidance, computer science students are also working with the city of New Orleans to improve dashboards around such city data as road construction, 311 calls, nuisance reports and saltwater intrusion. 

He said the center’s goal is to position Tulane as a leader in community-driven AI by leveraging its existing strengths in community outreach, service learning and public health. 

“Through our community programs, research and student projects in the first year, we laid the foundation for partnerships within the greater New Orleans community and beyond,” Mattei said. “Our goal moving forward is to deepen these partnerships and continue to build out best practices, research projects and community engagement to make positive, community-driven impacts both locally and nationally.” 
