Most of the public discourse around artificial intelligence (AI) policy focuses on one of two perspectives: how the government can support AI innovation, and how it can deter AI's harmful or negligent use. Yet there is also a role for government in making it easier to use AI beneficially, and in this niche the National Science Foundation (NSF) has found a way to contribute. Through a grant-making program called Fairness in Artificial Intelligence (FAI), the NSF is providing $20 million in funding to researchers working on difficult ethical problems in AI. The program, a collaboration with Amazon, has funded 21 projects in its first two years, with an open call for applications in its third and final year. This is an important endeavor, furthering a trend of federal support for the responsible advancement of technology, and the NSF should continue this line of funding for ethical AI.
WHAT IS THE NSF’S FAIRNESS IN AI PROGRAM?
The FAI program is an investment in what the NSF calls "use-inspired research," in which scientists tackle fundamental questions motivated by real-world challenges and pressing scientific limitations. Use-inspired research is an alternative to traditional "basic research," which seeks fundamental advances in scientific understanding without necessarily having a specific practical goal in mind. The NSF is better known for basic research in computer science, a field in which it provides 87% of all federal basic research funding. By comparison, the FAI program is a relatively small portion of the NSF's total investment in AI: since Amazon covers half of the $20 million cost, the NSF's share works out to roughly $3.3 million per year over the program's three years. In total, the NSF requested $868 million in AI spending, about 10% of its entire budget for 2021, and Congress approved every penny. Notably, this is a broad definition of AI spending that includes many applications of AI to other fields; funding for fundamental advances in AI itself is likely closer to $100 million to $150 million, by rough estimation.
The FAI program is specifically oriented towards the ethical principle of "fairness" (more on this choice in a moment). While this focus may seem unusual, the program is a continuation of prior government-funded research into the moral implications and consequences of technology. In the 1970s, the federal government began actively shaping bioethics research in response to public outcry following the Associated Press's reporting on the Tuskegee Syphilis Study. While the original efforts may have been reactionary, they precipitated decades of work towards improving the biomedical sciences. An extensive line of research into the ethical, legal, and social implications of genomics was launched alongside the Human Genome Project in 1990. Starting in 2018, the NSF funded 21 exploratory grants on the impact of AI on society, a precursor to the current FAI program. Today, it is possible to draw a rough trend line through these endeavors: the government's concern has expanded from pure science, to the ethics of the scientific process, and now to the ethical outcomes of the science itself. This is a positive development, and one worth encouraging.
A FOCUS ON FAIRNESS
NSF made a conscious decision to focus on fairness rather than other prevalent themes like "trustworthiness" or "human-centered design." Dr. Erwin Gianchandani, an NSF deputy assistant director, has described four categories of problems in FAI's domain, each of which can readily be tied to ongoing challenges facing AI. The first category concerns the many conflicting mathematical definitions of fairness and the lack of clarity around which are appropriate in which contexts. One funded project studied which fairness metrics people perceive as most appropriate for an algorithm making bail decisions, the same application in which the infamous COMPAS algorithm was used. The study found that survey respondents slightly preferred an algorithm with a consistent rate of false positives (how many people were unnecessarily kept in jail pending trial) across two racial groups over an algorithm that was equally accurate for both racial groups. Notably, this is the opposite of the COMPAS algorithm's behavior: COMPAS was fair in its total accuracy but produced more false positives for Black defendants.
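To make the tension between these two metrics concrete, the minimal sketch below computes both on invented data. The groups, outcomes, and predictions are hypothetical, not drawn from COMPAS or the funded study; the point is only that a model can satisfy one fairness definition while violating another.

```python
# Illustrative comparison of two fairness metrics on hypothetical data.
# The groups, outcomes, and predictions below are invented for this
# example; they are not drawn from COMPAS or from the funded study.

def false_positive_rate(y_true, y_pred):
    """Share of truly negative cases that were flagged as positive."""
    false_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return false_pos / negatives

def accuracy(y_true, y_pred):
    """Share of all cases classified correctly."""
    return sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)

# Hypothetical outcomes (1 = reoffended) and model flags (1 = high risk).
# The model is equally accurate for both groups, yet produces twice as
# many false positives for group B -- the pattern seen with COMPAS.
groups = {
    "A": ([0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 1, 1, 1, 1, 0]),
    "B": ([0, 0, 0, 0, 1, 1, 1, 1], [1, 1, 0, 0, 1, 1, 1, 1]),
}

for name, (y_true, y_pred) in groups.items():
    print(f"group {name}: accuracy={accuracy(y_true, y_pred):.2f}, "
          f"FPR={false_positive_rate(y_true, y_pred):.2f}")
```

Running this prints an accuracy of 0.75 for both groups but a false positive rate of 0.25 for group A and 0.50 for group B, which is exactly the kind of disagreement between metrics that survey respondents were asked to adjudicate.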
The second category, Gianchandani writes, is "to understand how an AI system produces a given result." The NSF sees this as directly related to fairness because giving end-users more information about an AI's decision empowers them to challenge that decision. This is an important point: by default, AI systems obscure the nature of a decision-making process and make it harder for an individual to interrogate that process. Perhaps the most novel project funded by NSF FAI attempts to test the viability of crowdsourcing audits of AI systems. In a crowdsourced audit, many individuals sign up for a tool (e.g., a website or web browser extension) that pools data about how those individuals were treated by an online AI system. By aggregating this data, the crowd can determine whether the algorithm is being discriminatory, a determination that would be functionally impossible for any individual user to make alone.
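The aggregation step might look something like the following sketch. The report format, group names, data, and the choice of a two-proportion z-test are illustrative assumptions, not details of the funded project.

```python
import math
from collections import defaultdict

# Each report pairs a user's self-reported group with how the online AI
# system treated them (here, a simple approved/denied outcome). The
# format, group names, and data are assumptions for illustration only.
reports = [
    ("group_1", True), ("group_1", True), ("group_1", False),
    ("group_1", True), ("group_2", False), ("group_2", False),
    ("group_2", True), ("group_2", False),
    # ...in practice, thousands of reports pooled by the audit tool
]

# Pool the individual reports into per-group approval counts.
approved = defaultdict(int)
total = defaultdict(int)
for group, outcome in reports:
    total[group] += 1
    approved[group] += int(outcome)

# Compare approval rates with a two-proportion z-test. The key point:
# no single user could run this check alone; it only works on the
# pooled data contributed by the crowd.
n1, n2 = total["group_1"], total["group_2"]
p1, p2 = approved["group_1"] / n1, approved["group_2"] / n2
p_pool = (approved["group_1"] + approved["group_2"]) / (n1 + n2)
z = (p1 - p2) / math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
print(f"approval rates: {p1:.2f} vs {p2:.2f} (z = {z:.2f})")
```

With enough pooled reports, a large z-score would flag a statistically significant disparity in how the system treats the two groups, giving users collective evidence that no individual's experience could provide.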
The third category seeks to use AI to make existing systems fairer, an especially important task as governments around the world continue to consider if and how to incorporate AI systems into public services. One project from researchers at New York University seeks, in part, to tackle the challenge of fairness when an algorithm is used in support of a human decision-maker. This is perhaps inspired by a recent evaluation of judges' use of algorithmic risk assessments in Virginia, which concluded that the algorithm failed to improve public safety and had the unintended effect of increasing the incarceration of young defendants. The NYU researchers have a similar challenge in mind: developing a tool to identify and reduce systemic biases in prosecutorial decisions made by district attorneys.