How to develop ethical standards for AI

Elizabeth Holmes convinced investors and patients that she had a prototype microsampling machine that could perform a wide range of relatively accurate tests using a fraction of the volume of blood normally required. She lied: her Edison and miniLab devices did not work. Worse still, the company knew they didn’t work, yet continued to provide patients with inaccurate information about their health, even telling healthy pregnant women they were having miscarriages and producing false positives on cancer and HIV screening tests.

But Holmes, who is due to report to prison by May 30, was convicted of defrauding investors; she was not convicted of defrauding patients. This is because the ethical principles for investor disclosure and the legal mechanisms used to take action against fraudsters like Holmes are well developed. They are not always well enforced, but the laws are on the books. In medical devices, things are murkier: to encourage innovation, legal standards give a wide berth to people trying to develop new technologies, with the understanding that sometimes even people who try their best get it wrong.

We are seeing a similar dynamic with the current AI regulation debate.
Lawmakers are grasping at straws over what to do, as doomsday predictions collide with breathless sales pitches about how AI technology will change everything. Either the future will be an algorithmically created panacea of happiness in which students will never be forced to write another paper, or we will all be reduced to radioactive rubble. (In which case, students still wouldn’t have to write term papers.) The problem is that new technologies without ethical and legal regulation can cause a lot of harm, and often, as was the case with Theranos patients, we don’t have a good way for people to recover their losses. At the same time, fast-moving technology is particularly difficult to regulate, with loose standards and more opportunities for fraud and abuse, whether through flashy startups like Holmes’s or sketchy cryptocurrency and NFT schemes.

Holmes is a useful case to think about in developing ethical standards for AI because she built a literal opaque box and claimed that people couldn’t look at or report on what was inside. Doing so, she said, would violate her intellectual property, even as the technology told healthy patients they were dying. We are seeing many of these same dynamics in the conversation around developing ethical standards and regulations for artificial intelligence.

Developing ethical standards to form the basis for AI regulation is a new challenge. But it’s a challenge for which we have the tools, and we can apply the lessons we’ve learned from failing to manage other technologies.

Like Theranos devices, AI technologies are opaque boxes, generally understood by their designers (at least as well as anyone understands them) but often not subject to outside scrutiny. Algorithmic accountability requires some degree of transparency; if a black box makes a decision that causes harm or has a discriminatory impact, we need to open it to find out whether those errors are attributable to an occasional blind spot, a systematic error in design, or (as in Holmes’s case) shameless fraud. This transparency is important both to prevent future harm and to determine accountability and responsibility for harm already done.

There is a lot of urgency around AI regulation. Both big AI companies and researchers are pressing lawmakers to act quickly, and while the proposals vary, they consistently include some transparency requirements. To avoid systemic problems and fraud, not even intellectual property law should stop big AI companies from showing how their technology works. Sam Altman’s recent congressional testimony on OpenAI and ChatGPT included a discussion of how the technology works, but it only scratched the surface. And while Altman seems eager to help craft regulation, he has also threatened to pull out of the European Union over the proposed AI regulations before the European Parliament.

In early May, the Biden administration announced progress on its proposal to address artificial intelligence; the most significant development was a commitment by major AI companies, including Alphabet, Microsoft, and OpenAI, to submit to “public evaluation,” which will put their technologies through independent testing and assess their potential impact. The assessment is not exactly “public” in the way that Altman’s testimony before Congress was; rather, access would be given to experts outside the companies to evaluate the technologies on the public’s behalf. If companies follow through on these commitments, experts will be able to detect problems before the products are deployed and widely used, in the hope of protecting the public from dangerous consequences. This is still an early-stage proposal: we don’t know who these experts will be or what powers they will have, and companies may not want to follow the rules, even ones they helped design. Still, it’s a step forward in setting the terms for more scrutiny of private technology.

The Biden administration’s broader proposal, the “Blueprint for an AI Bill of Rights,” identifies a variety of areas where we already know AI technologies do harm (facial recognition algorithms that misidentify Black people; social media algorithms that promote violent and sexual content) and adopts, in broad strokes, ethical principles to address those harms, which can then be codified into law and enforced. Among these principles are non-discrimination, safety, the right to be informed about the data these systems collect, and the right to opt out of an algorithmic service (and to have access to a human alternative).

The horror stories motivating these principles are widespread. Researchers, including Joy Buolamwini, have extensively documented problems with racial bias in algorithmic systems. Facial recognition software and autonomous driving systems trained overwhelmingly on data sets of white subjects fail to recognize or differentiate Black subjects. This poses obvious dangers, from someone being misidentified as a crime suspect based on faulty facial recognition to someone being hit by a self-driving car that can’t see Black people at night. People should not be subject to discrimination (or get hit by cars) because of biased algorithms. The Biden administration’s proposal holds that designers have an obligation to conduct pre-deployment testing.

This obligation is critical. Many technologies have error and failure rates; COVID tests, for example, have a false positive rate and a variety of complicating variables, which is why it’s important that technologies be tested to assess and reveal those failure rates. There is a difference between a false positive on an antigen test and the machine touted by Holmes, which simply did not work. Designers’ liability should correspond to what the designers actually did: if they adhered to best practices, they shouldn’t be held accountable; if they were grossly negligent, they should be. This is a principle of engineering and design ethics in every field, from medical tests to algorithms to oil wells.

There’s a long way to go between proposing an obligation and implementing a legal framework, but in an ideal world, the potential for discrimination would be addressed during the testing phase, with companies submitting their algorithms for an independent audit before going to market. As work like Buolamwini’s becomes part of the standard for developing and testing these technologies, companies that don’t test their algorithms for bias would be negligent. These testing standards should have legal implications and help establish when consumers injured by a product can recover damages from the company; that accountability is what was missing in the Theranos fraud case and is still missing from standards on medical tests and devices at startups.
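To make this concrete, here is a minimal sketch, in Python, of one check an independent audit might run before deployment: comparing a model’s false positive and false negative rates across demographic groups. The data, the group labels, and the disparity threshold below are hypothetical illustrations, not a prescribed methodology; real audits rely on established fairness toolkits and far more rigorous statistics.

```python
# Hypothetical sketch of one pre-deployment bias check: comparing a model's
# error rates across demographic groups. The data, groups, and threshold
# are illustrative assumptions, not an audit standard.

from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples, labels 0/1."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, truth, pred in records:
        c = counts[group]
        if truth == 0:
            c["neg"] += 1
            if pred == 1:
                c["fp"] += 1  # false positive: flagged when it shouldn't be
        else:
            c["pos"] += 1
            if pred == 0:
                c["fn"] += 1  # false negative: missed when it should be flagged
    return {
        g: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for g, c in counts.items()
    }

def flag_disparities(rates, max_ratio=1.25):
    """Flag groups whose error rate exceeds the best-performing group's by max_ratio.

    (If the best rate is zero, this simple ratio test is skipped for that metric.)
    """
    flags = []
    for metric in ("false_positive_rate", "false_negative_rate"):
        values = {g: r[metric] for g, r in rates.items()}
        best = min(values.values())
        for group, value in values.items():
            if best > 0 and value / best > max_ratio:
                flags.append((group, metric, value))
    return flags

# Illustrative usage with made-up evaluation results:
records = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 1, 1),
]
rates = error_rates_by_group(records)
print(rates)
print(flag_disparities(rates))
```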

Businesses, for their part, should endorse clear and well-founded standards for AI like those outlined in the Biden administration’s proposal, because doing so builds public trust. That trust is not absolute; if we find out that the toothpaste is contaminated, we’re going to look askance at the toothpaste company. But knowing that critical regulatory controls are in place helps establish that the products we use regularly are safe. Most of us feel safer because our doctors and lawyers have codes of ethics and because the engineers who build bridges and tunnels have professional standards. AI products are already embedded in our lives, from recommendation algorithms to scheduling systems to speech and image recognition. Making sure these systems don’t have severe and inappropriate biases is a bare minimum.

Algorithms, like medical tests, give us information we need to make decisions. We need regulatory oversight of algorithms for the same reason we needed it for Holmes’s boxes: if the information we get is produced by machines that make systematic errors (or worse, don’t work at all), then they can and will endanger the people who use them. If we know what the errors are, then we can work to prevent or mitigate the harms.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.