For years, tech’s most influential companies have faced pressure to build ethics checks into their software development process, especially regarding artificial intelligence.
As AI algorithms make their way into ever more services and products, from social media apps to bail recommendation software for judges, flaws in how AI is trained could affect every corner of society. For example, one risk assessment algorithm widely used in US courtrooms was found to wrongly flag black defendants as likely to reoffend at nearly twice the rate of white defendants, and judges consult those risk scores when setting bail and sentences.
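To make that concrete, here is a minimal sketch, with invented numbers, of the kind of audit that surfaced the courtroom finding: compare how often people who did not reoffend were nonetheless flagged high risk, broken out by group.

```python
# Minimal, hypothetical audit: compare false positive rates by group,
# i.e. how often people who did NOT reoffend were still flagged "high risk".
# All records below are invented for illustration.
from collections import defaultdict

# Each record: (group, flagged_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False), ("group_a", True,  True),
    ("group_a", True,  False), ("group_a", False, False),
    ("group_b", True,  True),  ("group_b", False, False),
    ("group_b", False, False), ("group_b", False, False),
]

stats = defaultdict(lambda: {"false_pos": 0, "did_not_reoffend": 0})
for group, flagged, reoffended in records:
    if not reoffended:
        stats[group]["did_not_reoffend"] += 1
        if flagged:
            stats[group]["false_pos"] += 1

for group, s in sorted(stats.items()):
    rate = s["false_pos"] / s["did_not_reoffend"]
    print(f"{group}: flagged-but-never-reoffended rate = {rate:.0%}")
```

A lopsided gap between the two rates, which this toy data produces by construction, is exactly the signature such audits look for.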
Tech giants are starting to create mechanisms for outside experts to help them with AI ethics—but not always in the ways ethicists want. Google, for instance, announced the members of its new AI ethics council this week—such boards promise to be a rare opportunity for underrepresented groups to be heard. It faced criticism, however, for selecting Kay Coles James, the president of the conservative Heritage Foundation. James has made statements against the Equality Act, which would protect sexual orientation and gender identity as federally protected classes in the US. Those and other comments would seem to put her at odds with Google’s self-presentation as a progressive and inclusive company. (Google declined Quartz’s request for comment.)
AI ethicist Joanna Bryson, one of the few members of Google’s new council who has an extensive background in the field, suggested that the inclusion of James helped the company make its ethics oversight more appealing to Republicans and conservative groups. Also on the council is Dyan Gibbens, who heads drone company Trumbull Unmanned and sat next to Donald Trump at a White House roundtable in 2017.
If James or others were to lobby against including transgender people in datasets, echoing her objections to the Equality Act, the effects could ripple through Google’s algorithms in subtle ways. Harvard researcher Latanya Sweeney, for example, found that racially associated names shaped the ads served alongside search results: searches for black-identifying first names like Latanya were far more likely to surface ads such as “Find Latanya Sweeney’s arrest records.” And something as simple as the gender of the voice given to AI-driven virtual assistants like Siri and Alexa can shape how a generation unconsciously thinks about gender.
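Sweeney’s method was essentially statistical: run many searches per name, record whether an arrest-suggestive ad appears, and compare rates across name groups. The sketch below simulates that style of audit end to end; the serve_ad function, the name lists, and the probabilities are all invented stand-ins for a real ad system, not her data.

```python
# Hypothetical re-creation of a Sweeney-style ad audit. serve_ad() is a
# toy stand-in for an ad system that has absorbed a skewed association;
# names and probabilities are illustrative only.
import random

random.seed(42)

BLACK_IDENTIFYING = ["Latanya", "DeShawn", "Tamika"]
WHITE_IDENTIFYING = ["Emily", "Geoffrey", "Katie"]

def serve_ad(name: str) -> bool:
    """Return True if an arrest-record ad is shown for this search."""
    skewed = name in BLACK_IDENTIFYING  # the bias being audited
    return random.random() < (0.60 if skewed else 0.10)

def arrest_ad_rate(names: list[str], searches_per_name: int = 500) -> float:
    """Fraction of simulated searches that returned an arrest-record ad."""
    hits = sum(
        serve_ad(name)
        for name in names
        for _ in range(searches_per_name)
    )
    return hits / (len(names) * searches_per_name)

print(f"black-identifying names: {arrest_ad_rate(BLACK_IDENTIFYING):.0%}")
print(f"white-identifying names: {arrest_ad_rate(WHITE_IDENTIFYING):.0%}")
```

In a real audit, serve_ad would be an actual search followed by an ad scrape; the rate comparison is the part that carries over.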
Meanwhile, as Quartz reported last week, Stanford’s new Institute for Human-Centered Artificial Intelligence launched without any significant number of people of color on its faculty, even though researchers of color have played key roles in creating the fields of AI ethics and algorithmic accountability.
Other tech companies are also seeking input on AI ethics, including Amazon, which this week announced a $10 million grant in partnership with the National Science Foundation. The funding will support research into fairness in AI.
To maintain rigid control of their operations, tech’s top companies have historically used legal loopholes and consolidated voting shares. We should welcome them ceding a little power on AI ethics. But how they do so should also be closely followed, as it could affect nearly all of us down the road.
A version of this essay first appeared in the weekend edition of the Quartz Daily Brief newsletter. Sign up for it here.
Dave Gershgorn, Quartz, March 30, 2019
https://ift.tt/2I19BnU