Imagine this: It’s morning and you are starting your daily routine—go to work, take the kids to school, and so on. You are running late. You lift the door handle of the car; recognizing your fingerprints, the car unlocks. You say, “To Helen’s school please, then Martin’s school, then to work.” The car doesn’t respond. It’s rush hour, and you need to get the kids to school on time and then rush to work.
Your car’s machine-learning algorithm predicts you are likely to get a fine or worse, despite your intention to drive cautiously when the kids are on board and stay under the speed limit. Almost like the “pre-crime” police units of the Tom Cruise sci-fi hit “Minority Report,” the algorithm uses police report data on parents who speed during rush hour when late for school or work. It ignores your law-abiding record. Fortunately, the personal digital assistant in your watch, sensing your car has been disabled, is already contacting the schools and your workplace with your estimated times of arrival and an excuse for your tardiness.
Does this seem like a distant future? An AI-based system named COMPAS has recently been used to forecast which criminals are most likely to re-offend; its risk assessments were followed by U.S. judges in courtrooms. When the algorithm was wrong, however, its errors fell unevenly across ethnic groups: it had a higher false-positive rate for Black defendants than for white ones. Black defendants were more often wrongly predicted to re-offend, just as the safety-conscious parent was wrongly flagged in the fictional scenario above.
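The disparity described above is usually surfaced by a group-wise audit of the classifier’s errors. The sketch below shows the mechanics with fabricated counts (they are illustrative only, not the actual COMPAS figures reported in the press):

```python
# Group-wise false-positive-rate audit, sketched with made-up numbers.
# A false positive here is a person who did NOT re-offend but was
# nevertheless labeled high risk by the algorithm.

def false_positive_rate(false_positives, true_negatives):
    """Share of non-re-offenders who were wrongly flagged as high risk."""
    return false_positives / (false_positives + true_negatives)

# Hypothetical confusion-matrix counts per group (non-re-offenders only).
audit = {
    "group_a": {"fp": 45, "tn": 55},   # 45 of 100 non-re-offenders flagged
    "group_b": {"fp": 23, "tn": 77},   # 23 of 100 flagged
}

for group, counts in audit.items():
    fpr = false_positive_rate(counts["fp"], counts["tn"])
    print(f"{group}: false-positive rate = {fpr:.0%}")
```

Equal overall accuracy can hide exactly this kind of gap, which is why the audit must be computed per group rather than over the whole population.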
Although it is not easy (if not impossible) to predict the future even loosely, we know that in the coming decade high-tech products, such as smart drones or driverless cars (so-called near-future technologies), are going to rely heavily on machine learning. Nevertheless, machine-learning algorithms will almost certainly harbor some form of implicit bias, e.g. cognitive, social, or racial. For example, Caliskan et al.’s paper “Semantics Derived Automatically from Language Corpora Contain Human-Like Biases,” published in the leading academic journal Science, described an autonomous intelligent agent that associated words like “parents” and “wedding” mainly with female names, while career-related words like “professional” and “salary” were associated with male names. Several studies exploring the stereotypes in data used to train AI suggest that the same flawed word-association strategy could end up in, say, a CV-screening service, with consequences for gender balance. Interestingly, Caliskan et al.’s experiment replicated the extensive evidence of bias found in previous studies involving human participants, perhaps reflecting questionable beliefs held in our society.
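The word-association finding rests on a simple mechanism: in a learned embedding space, biased training text leaves some words geometrically closer to female names than to male names. The toy sketch below fabricates 2-D vectors purely to show that mechanics; real experiments such as Caliskan et al.’s use high-dimensional embeddings trained on large corpora:

```python
# Toy sketch of an embedding-association test. The 2-D vectors are
# fabricated for illustration; they are NOT real trained embeddings.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Fabricated embeddings in which family words sit near a female name
# and career words sit near a male name, mimicking the reported bias.
vectors = {
    "wedding": (0.9, 0.1), "parents": (0.8, 0.2),
    "salary": (0.1, 0.9), "professional": (0.2, 0.8),
    "female_name": (0.95, 0.05), "male_name": (0.05, 0.95),
}

def association(word):
    """Positive: closer to the female name; negative: closer to the male name."""
    return cosine(vectors[word], vectors["female_name"]) - \
           cosine(vectors[word], vectors["male_name"])

for word in ("wedding", "parents", "salary", "professional"):
    print(word, round(association(word), 2))
```

A score skewed consistently by word category, as here, is the signature of the bias the paper measured; an unbiased embedding would show no systematic difference.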
The question, therefore, is: How can we uncover and mitigate bias in near-future technologies before such technologies become integrated into the fabric of society? Another way to put it: How can designers and engineers developing these near-future technologies ensure that they are not unwittingly discriminating against or excluding certain groups of people?
Algorithmic Social Injustice?
Algorithmic social justice—designing algorithms with fairness, transparency, and accountability in mind—can help expose, counterbalance, and remedy bias and exclusion in near-future technologies. Algorithmic social justice is gaining momentum. For instance, Not-Equal, a recent initiative funded by UKRI (UK Research and Innovation), aims to foster collaboration among academics, industry, and the general public to support social justice for a fairer future in the digital economy. Another example is Google’s research on AI, which responds to some of the main concerns society has expressed: technologies that rely heavily on machine learning can reinforce bias and should therefore be accountable to people.
Because machine-learning algorithms will almost certainly harbor some form of implicit bias (cognitive, social, racial, and so on), our quest might better be phrased as “avoiding algorithmic social injustice.”
Embedding principles of social justice in near-future technologies (e.g. self-driving cars) involves, for instance, counterbalancing unfairness in a product that does not yet exist, a task beyond the scope of more standard methods of scientific inquiry. For example, autonomous vehicles are expected to make streets safer by reducing speeding and human error. However, it’s not quite that simple. Driverless cars are still cars, and therefore affect urban space, liveability, noise, congestion, public health, and other foreseeable (and perhaps unsuspected) aspects of daily life.
Design fiction is an interdisciplinary method that allows participants (e.g. designers, engineers, and futurists) to create and reconfigure concepts into scenarios, helping machine-learning experts expose potential bias and reflect on mitigation strategies. By mixing logic and fiction, design fiction prototyping can reveal aspects of how a technology might be adopted. Design fiction prototypes thus become conversation starters for discussing the implications, ramifications, and effects of technology in the future.
Smart homes, smart drones, humanoid robots, and autonomous vehicles are considered near-future technologies, “future” products that have not yet fully come into being but can have a huge impact on society. For instance, a recent example of design fiction for a near-future technology is Slaughterbots, a 2017 arms-control advocacy video presenting a dramatized near-future scenario, where swarms of inexpensive micro drones use artificial intelligence and facial recognition to assassinate political opponents. The video was released onto YouTube by the Future of Life Institute and went viral, gaining more than two million views. The video, which was also screened at the November 2017 United Nations Convention on Certain Conventional Weapons meeting in Geneva, sparked a debate on near-future technologies at an international policy level, as well as amongst the general public.
However, the literature also points to some drawbacks of design fiction. These include the limited number of scenarios that can be explored; the lack of formal methods to aid the ideation process; and the unbounded process of producing a narrative that contains believable elements and incorporates intents, emotions, and technology. Whilst suspension of disbelief—the temporary acceptance as believable of events or characters that would ordinarily be seen as incredible—is an important tool for designers portraying future scenarios, even acclaimed futurists miss important innovations and their subsequent implications for society.
For instance, in 1997 Ted Lewis (former President and CEO of DaimlerChrysler Research and Technology, North America) and others wrote a piece in Scientific American, titled “www.batmobile.car,” summarizing scenarios that predicted GPS guidance and mapping but not autonomous cars. Not a single futurist involved in the article imagined driverless cars, even though the idea seems rather obvious now.
As Peter Denning wrote in “Don’t Feel Bad If You Can’t Predict the Future,” forecasts made by experts are no better than forecasts anybody else could make. Near futures might be different, though. Denning mentioned Peter Drucker, author of The New Realities (Routledge, 1989), whose chapter “When the Russian Empire Is Gone” predicted exactly what happened a year after the book’s publication. When asked what his method of forecasting was, Drucker replied that he made no forecasts; he simply looked at current realities and told people what their consequences would be. He analyzed economic trends, politicians’ statements, the media, and the moods of Soviet citizens to conclude that the Soviet Union would soon disappear.
Design fiction can help designers, engineers, and futurists see how people might react to different near futures, and then try to influence policies and trends to expose, counterbalance, and remedy bias in near-future technologies; in short, cultivating algorithmic social justice. But to do so, as Denning suggests, forecasts (e.g. design fictions) must evolve to include methods and tools that ensure models are validated, grounded in observable data, and that assumed recurrences fit the near-future technologies they intend to forecast. Therefore, design fiction workshops and the corresponding fictional design toolkits (e.g. design cards, videos, rapid prototyping tools) must address near-future technologies at different levels, such as design and critical thinking, across contexts and domains involving regulatory bodies, public engagement, and policymaking. For instance, design fictions using adversarial testing and journalistic-style investigation can help machine-learning experts think creatively about potential failure modes (unfair bias or exclusion) and design specific inputs to an algorithmic system to see whether they elicit problematic or undesirable results.
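One concrete form of such adversarial testing is a counterfactual probe: run the same input through a system twice, changed only in a protected attribute, and flag any disagreement. The sketch below is a minimal illustration; `screen_cv` is a hypothetical stand-in for a real CV-screening model, not any actual product:

```python
# Counterfactual probe for unfair bias: feed a model paired inputs that
# differ only in a protected attribute and check the decisions agree.

def screen_cv(cv):
    """Hypothetical placeholder model; a real probe would wrap a deployed system."""
    return "interview" if "chess" in cv["hobbies"] else "reject"

def counterfactual_probe(model, cv, attribute, values):
    """Run the model on copies of one CV that differ only in `attribute`."""
    outcomes = {}
    for value in values:
        variant = dict(cv, **{attribute: value})  # copy with one field changed
        outcomes[value] = model(variant)
    return outcomes

cv = {"name": "A. Candidate", "hobbies": "captain of the chess team",
      "gender": "unspecified"}
results = counterfactual_probe(screen_cv, cv, "gender", ["female", "male"])

# A fair model gives identical outcomes across the counterfactual pair.
assert len(set(results.values())) == 1, "decision depends on a protected attribute"
```

In a design fiction workshop, probes like this give machine-learning experts a concrete handle on the failure modes the fictional scenario dramatizes.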
Imagine if, by using design fictions, we could have equipped social networks with safeguard mechanisms to spot, for instance, trolls or fake news before they became hard-to-solve problems. Wouldn’t that sound like a better-engineered future?