Algorithmic accountability

While software teams need more diversity to truly account for the range of scenarios an algorithm may have to handle, there is no cut-and-dried solution to every company’s algorithmic issues. Researchers have, however, proposed several methods to address algorithmic accountability.

Two areas are developing rapidly, related to the front end and the back end of the process, respectively, Barocas tells me. The front-end method involves ensuring certain values are encoded and implemented in the algorithmic models that tech companies build. For example, tech companies could ensure that concerns about discrimination and fairness are part of the algorithmic design process.

“Making sure there are certain ideas of fairness that constrain how the model behaves and that can be done upfront — meaning in the process of developing that procedure, you can make sure those things are satisfied,” Barocas says.
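To make the front-end idea concrete, here is a minimal sketch of what encoding a fairness constraint during development can look like: a toy logistic regression trained with a demographic-parity penalty folded into its loss. Everything here (the synthetic data, the penalty form, its weight lam) is an illustrative assumption, not a method Barocas or any particular company prescribes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: features X, labels y and a binary protected
# attribute a (e.g., group membership). All of this is assumed, not real.
n = 1000
X = rng.normal(size=(n, 3))
a = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * a + rng.normal(size=n) > 0).astype(float)

w = np.zeros(3)   # logistic-regression weights
lam = 2.0         # fairness-penalty weight (an assumed knob)
lr = 0.1          # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    p = sigmoid(X @ w)
    # Gradient of the standard logistic loss.
    grad = X.T @ (p - y) / n
    # Demographic-parity penalty: gap between the groups' mean scores.
    gap = p[a == 1].mean() - p[a == 0].mean()
    s = p * (1 - p)   # sigmoid derivative, used in the gap's gradient
    d_gap = (X[a == 1] * s[a == 1, None]).mean(axis=0) \
          - (X[a == 0] * s[a == 0, None]).mean(axis=0)
    # Descend on loss + lam * gap**2, constraining the model as it is built.
    w -= lr * (grad + lam * 2.0 * gap * d_gap)

p = sigmoid(X @ w)
print(f"final score gap between groups: {p[a == 1].mean() - p[a == 0].mean():.3f}")
```

The penalty weight acts as a knob: raising it trades some predictive accuracy for a smaller score gap between groups, which is exactly the kind of upfront constraint Barocas describes.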

On the back end, you could imagine that developers build systems and deploy them without being totally sure how they will behave, and without being able to anticipate the adverse outcomes they might generate. What you would do, Barocas says, is build the system, feed it a bunch of examples and see how it behaves.

Let’s say the system is a self-driving car, and you feed it examples of pedestrians (say, a white person versus a black person versus a disabled person). By analyzing how the system behaves across that variety of inputs, one could see whether the process is discriminatory. If the car stops only for white people and fails to stop for black and disabled people, there’s clearly a problem with the algorithm.

“If you do this enough, you can kind of tease out if there’s any type of systematic bias or systematic disparity in the outcome, and that’s also an area where people are doing a lot of work,” Barocas says. “That’s known as algorithmic auditing.”
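A minimal sketch of what such an audit can look like follows: treat the system as a black box, feed it examples drawn from different groups and compare outcome rates. The deployed_model function is a hypothetical stand-in for whatever system is under audit, and is deliberately biased here so the audit has something to find.

```python
from collections import defaultdict
import random

random.seed(0)

def deployed_model(example):
    # Hypothetical stand-in for the opaque system under audit; deliberately
    # biased toward group "A" so the audit has something to find.
    return 1 if example["group"] == "A" or random.random() < 0.3 else 0

# Build an audit set that covers each group of interest.
groups = ["A", "B", "C"]
audit_set = [{"group": g, "id": i} for g in groups for i in range(1000)]

# Probe the black box and record outcomes per group.
outcomes = defaultdict(list)
for example in audit_set:
    outcomes[example["group"]].append(deployed_model(example))

# Compare positive-outcome rates; a large gap flags possible systematic bias.
rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
for g, r in sorted(rates.items()):
    print(f"group {g}: positive-outcome rate {r:.2f}")
print(f"max disparity: {max(rates.values()) - min(rates.values()):.2f}")
```

In practice, auditors would use real inputs, such as the pedestrian examples above, and more nuanced statistics, but the structure is the same: probe, record, compare across groups.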

When people talk about algorithmic accountability, they are generally talking about algorithmic auditing, of which there are three different levels, Pasquale says.

“In terms of algorithmic accountability, a first step is transparency with respect to data and algorithms,” Pasquale says. “With respect to data, we can do far more to ensure transparency, in terms of saying what’s going into the information that’s guiding my Facebook feed or Google search results.”

That would mean, for example, enabling people to better understand what feeds their Facebook news feeds, their Google search results and suggestions, and their Twitter feeds.

“A very first step would be allowing them to understand exactly the full range of data they have about them,” Pasquale says.

The next step is something Pasquale calls qualified transparency, where outside parties inspect a system and see if there’s something untoward going on. The last, and perhaps most difficult, part is getting tech companies to “accept some kind of ethical and social responsibility for the discriminatory impacts of what they’re doing,” Pasquale says.

The fundamental barrier to algorithmic accountability, Pasquale says, is that until we “get the companies to invest serious money in assuring some sort of both legal compliance and broader ethical compliance with personnel that have the power to do this, we’re not really going to get anywhere.”

Pasquale says he is a proponent of government regulation and oversight and envisions something like a federal search commission to oversee search engines and analyze how they rank and rate people and companies.

Friedler, however, envisions an outside organization developing metrics that measure what it considers to be the problem. That organization could then publicize those metrics and its methodology.

“As with many of these sorts of societal benefits, it’s up to the rest of society to determine what we want to be seeing them do and then to hold them accountable,” Friedler tells me. “I also would like to believe that many of these tech companies want to do the right thing. But to be fair, determining what the right thing is is very tricky. And measuring it is even trickier.”
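As one illustration of what a published metric and methodology could look like, here is a sketch of a disparate-impact ratio computed from audit counts. The 0.8 threshold echoes the US Equal Employment Opportunity Commission’s “four-fifths” guideline; the group names and counts are made up for illustration, not drawn from any real audit.

```python
def disparate_impact(selected: dict, total: dict) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = {g: selected[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values())

# Hypothetical audit counts: applicants per group and approvals per group.
total = {"group_1": 400, "group_2": 350}
selected = {"group_1": 200, "group_2": 105}

ratio = disparate_impact(selected, total)
print(f"disparate-impact ratio: {ratio:.2f}")
# 0.8 echoes the EEOC four-fifths guideline; the threshold choice is itself
# a policy judgment of the kind Friedler says is tricky to determine.
print("flags concern" if ratio < 0.8 else "within guideline")
```

Publishing both the number and the procedure behind it is what would let the rest of society, in Friedler’s terms, decide what it wants to see and hold companies to it.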

Algorithms aren’t going to go away, and I think we can all agree that they’re only going to become more prevalent and powerful. But unless academics, technologists and other stakeholders determine a concrete process to hold algorithms and the tech companies behind them accountable, we’re all at risk.
