
Is Ethical A.I. Even Possible?

Video: Some of the top minds in tech and policy shared their outlooks for artificial intelligence and its applications at The New York Times’s New Work Summit/Leading in the Age of A.I. conference. Credit: Mike Cohen for The New York Times

HALF MOON BAY, Calif. — When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

“Clarifai’s mission is to accelerate the progress of humanity with continually improving A.I.,” read a blog post from Matt Zeiler, the company’s founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some have set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.

“We don’t want to see a commercial race to the bottom,” Brad Smith, Microsoft’s president and chief legal officer, said at the New Work Summit in Half Moon Bay, Calif., hosted last week by The New York Times. “Law is needed.”

The new ethics position at Clarifai never materialized. As this New York City start-up pushed further into military applications and facial recognition services, some employees grew increasingly concerned their work would end up feeding automated warfare or mass surveillance. In late January, on a company message board, they posted an open letter asking Mr. Zeiler where their work was headed.

Image: Building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder. Credit: John Hersey

A few days later, Mr. Zeiler held a companywide meeting, according to three people who spoke on the condition that they not be identified for fear of retaliation. He explained that internal ethics officers did not suit a small company like Clarifai. And he told the employees that Clarifai technology would one day contribute to autonomous weapons.

Clarifai specializes in technology that instantly recognizes objects in photos and video. Policymakers call this a “dual-use technology.” It has everyday commercial applications, like identifying designer handbags on a retail website, as well as military applications, like identifying targets for drones.

This and other rapidly advancing forms of artificial intelligence can improve transportation, health care and scientific research. Or they can feed mass surveillance, online phishing attacks and the spread of false news.

As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or someone with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.

All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.

As some Microsoft employees protest the company’s military contracts, Mr. Smith said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told the conference. “We have to stand by the people who are risking their lives.”

Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons would ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.

Google worked on the same Pentagon project as Clarifai, and after a protest from company employees, the tech giant ultimately ended its involvement. But like Clarifai, as many as 20 other companies have worked on the project without bowing to ethical concerns.

After the controversy over its Pentagon work, Google laid down a set of “A.I. principles” meant as a guide for future projects. But even with the corporate rules in place, some employees left the company in protest. The new principles are open to interpretation. And they are overseen by executives who must also protect the company’s financial interests.

“You functionally have situations where the foxes are guarding the henhouse,” said Liz Fong-Jones, a former Google employee who left the company late last year.

In 2014, when Google acquired DeepMind, perhaps the world’s most important A.I. lab, the company agreed to set up an external review board that would ensure the lab’s research would not be used in military applications or otherwise unethical projects. But five years later, it is still unclear whether this board even exists.

Google, Microsoft, Facebook and other companies have created organizations like the Partnership on A.I. that aim to guide the practices of the entire industry. But these operations are largely toothless.

The most significant changes have been driven by employee protests, like the one at Google, and pointed research from academics and other independent experts. After Amazon employees protested the sale of facial recognition services to police departments and various academic studies highlighted the bias that plagues these services, Amazon and Microsoft called for government regulation in this area.

“People are recognizing there are issues, and they are recognizing they want to change them,” said Meredith Whittaker, a Google employee and the co-founder of the AI Now Institute, a research institute that examines the social implications of artificial intelligence. But she also told the conference that this change was slow to happen, as the forces of capitalism continued to drive these companies toward greater profits.

Image: “We don’t want to see a commercial race to the bottom,” Brad Smith, Microsoft’s president and chief legal officer, said at The New York Times’s New Work Summit in California. “Law is needed.” Credit: John Hersey

Employees at Clarifai worry that the same technological tools that drive facial recognition will ultimately lead to autonomous weapons — and that flaws in these tools will open a Pandora’s box of problems. “We in the industry know that technology can be compromised. Hackers hack. Bias is unavoidable,” read the open letter to Mr. Zeiler.

Thousands of A.I. researchers from across the industry have signed a separate open letter saying they will oppose autonomous weapons.

The Pentagon has said that artificial intelligence built by the likes of Google and Clarifai has not been used for offensive purposes. And it is now building its own set of ethical principles, realizing it needs the support of industry, which has snapped up most of the world’s top A.I. researchers in recent years.

But many policy experts say they believe these principles are unlikely to hold any more influence than those laid down at the big corporations, especially because the Pentagon is motivated to keep pace with China, Russia and other international rivals as they develop similar technology. For that reason, some are calling for international treaties that would bar the use of autonomous weapons.

In their open letter, the Clarifai employees said they were unsure whether regulation was the answer to the many ethical questions swirling around A.I. technology, arguing that the immediate responsibility rested with the company itself.

“Regulation slows progress, and our species needs progress to survive the many threats that face us today,” they wrote, addressing Mr. Zeiler and the rest of the company. “We need to be ethical enough to be trusted to make this technology on our own, and we owe it to the public to define our ethics clearly.”

But their letter did not have the desired effect. In the days after Mr. Zeiler explained that Clarifai would most likely contribute to autonomous weapons, the employee who wrote the letter and was originally tapped to serve as an ethics adviser, Liz O’Sullivan, left the company.

Researchers and activists like Ms. Whittaker see this as a moment when tech employees can use their power to drive change. But they have also said this must happen outside tech companies as well as within.

“We need regulation,” Ms. Whittaker said, before name-dropping Microsoft’s chief legal officer. “Even Brad Smith says we need regulation.”

A version of this article appears in print on Page F2 of the New York edition with the headline: Promises to Keep.
