Vast quantities of data are being incorporated into predictive systems in an ever-broadening set of fields. In many cases these algorithms operate in the dark, and their use has consequences both intended and unintended. This talk will cover some of the fairness and accountability issues involved in controlling algorithms for media, policy, and policing.
Decision making is increasingly performed by intelligent algorithms, in areas ranging from search engine rankings to public policy. Algorithmic decision making spans applications as consequential as flagging potential terrorists, as in the United States’ no-fly list, and allocating police officers, as in predictive policing.
These systems are growing smarter as we develop better algorithms, and more expansive as they integrate more data. Government agencies and corporations are working out how best to convert the vast quantities of data collected on their citizens and customers into meaningful inferences and decisions through data mining and predictive modeling.
However, many of these systems rest on algorithms whose operation is closed to the public, constituting a new form of secrecy maintained by powerful entities. Whether intended or not, the impact of some of these systems can be profound.
This talk will cover some of the emerging issues of transparency and fairness raised by the widespread use of these systems. We need mechanisms for verifying how these systems operate. Are these algorithms discriminatory? Are they fair with respect to protected groups? What role can auditing and reverse engineering play? I'll discuss these questions, the current state of the field, and some paths forward.
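To make the auditing question concrete, here is a minimal sketch, in Python, of one common black-box approach: treat the system under audit as an opaque decision function, compare positive-decision rates across protected groups, and report the demographic parity gap and the disparate impact ratio behind the "four-fifths rule". Everything here (audit_selection_rates, opaque_model, the sample records) is a hypothetical illustration under assumed inputs, not a method the talk prescribes.

```python
from collections import defaultdict

def audit_selection_rates(predict, records, protected_key):
    """Black-box audit: compare positive-decision rates across groups.

    predict:       callable mapping a record (dict) to a 0/1 decision
    records:       list of dicts, each containing the protected attribute
    protected_key: name of the protected attribute, e.g. "gender"
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for rec in records:
        group = rec[protected_key]
        totals[group] += 1
        positives[group] += int(predict(rec))
    rates = {g: positives[g] / totals[g] for g in totals}

    # Demographic parity gap: difference between the best- and
    # worst-treated groups' positive-decision rates.
    parity_gap = max(rates.values()) - min(rates.values())
    # Disparate impact ratio: worst rate over best rate; the
    # "four-fifths rule" flags ratios below 0.8 as potentially
    # discriminatory.
    impact_ratio = min(rates.values()) / max(rates.values())
    return rates, parity_gap, impact_ratio


if __name__ == "__main__":
    # Stand-in for the closed system we can only query, not inspect.
    def opaque_model(rec):
        return 1 if rec["score"] > 600 else 0

    applicants = [
        {"score": 640, "gender": "F"}, {"score": 590, "gender": "F"},
        {"score": 700, "gender": "M"}, {"score": 610, "gender": "M"},
    ]
    rates, gap, ratio = audit_selection_rates(opaque_model, applicants, "gender")
    print(rates, gap, ratio)  # {'F': 0.5, 'M': 1.0} 0.5 0.5
```

Even a probe this simple illustrates the core idea behind external auditing: without access to the algorithm's internals, repeated queries over controlled inputs can still reveal disparate outcomes across protected groups.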