Break the Digital Monoculture

INTERVIEW WITH KANTA DIHAL

We need to break the digital monoculture and challenge Big Tech in its relentless drive to remake our digital environment in its own image. Digital Earth conducted interviews with artists, technologists and activists who work towards building a pluriform and inclusive digital environment. Our main question: how can we break the digital monoculture and build a more humane digital future?


A sculpture depicting the distribution of the Buddha’s relics. Courtesy Los Angeles County Museum of Art/Wikimedia Commons.

Dr Kanta Dihal is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge. She leads two research projects, Global AI Narratives and Decolonizing AI, in which she explores intercultural public understanding of artificial intelligence as constructed by fictional and nonfictional narratives. Kanta’s work intersects the fields of science communication, literature and science, and science fiction. She has a PhD in science communication from the University of Oxford: in her thesis, ‘The Stories of Quantum Physics,’ she investigated the communication of conflicting interpretations of quantum physics to adults and children. She is co-editor of the books AI Narratives: A History of Imaginative Thinking About Intelligent Machines (Oxford University Press, 2020) and Imagining AI: How the World Sees Intelligent Machines (Oxford University Press, 2022) and has co-authored a series of papers on AI narratives with Dr Stephen Cave, including ‘The Whiteness of AI’ (Philosophy and Technology, 2020).

You can read more about Dihal’s work here.

You have been working extensively in recent years at the intersection of history, popular culture, artificial intelligence and critical race theory. How did you arrive at this exciting interdisciplinary field of work?

I started off in 2008 as a literature scholar at Leiden University, where I focused on postcolonialism. I continued into a research Master’s focused on narratives of scientific topics, which led to my PhD in science communication at the University of Oxford. There I focused on the communication of quantum physics: how a really difficult topic like that is explained to people with no physics background.

Now, as a senior researcher at Cambridge, I have moved on to AI, where I can bring everything together and ask how stories about complex scientific topics affect societies differently, depending on their status as former colonies.

With my co-author Stephen Cave, I’m following up on the research that we started with our paper ‘The Whiteness of AI’ (2020): the representation of artificial intelligence as ethnically white, the ideology expressed in visions of the future built around artificial intelligence, and the people those visions leave out. Narratives are disproportionately influential on the deployment of AI because the field has a strong narrative history that many other scientific fields lack. So this is an area where my background in working on narratives of science can make a big difference.

You have a global approach to the topic. How did that happen?

My artificial intelligence research initially came from looking at the narratives that are most prevalent in the US and the UK: the kinds of narratives that influence media stories here in the UK, where I’m based right now. We noticed how narrow they were, how they draw on a very small set of stories; it’s always the Terminator or Asimov’s laws.

We started thinking about alternatives and looking for visions that weren’t just Hollywood. As we went on, we decided to research this properly and look at how different parts of the world imagine life with intelligent machines, because the Hollywood narrative is so strong and is being pushed out to parts of the world that are not the subject of these Hollywood films. How do these Americanised perceptions clash with local perceptions? Of course, we found lots of alternative narratives that might be much more productive to use in discourse around AI across the world, and a much better alternative to the Terminator.

Sophia is “Hanson Robotics’ most advanced human-like robot”; according to the company, she “personifies our dreams for the future of AI. As a unique combination of science, engineering, and artistry, Sophia is simultaneously a human-crafted science fiction character depicting the future of AI and robotics, and a platform for advanced robotics and AI research.” Source: https://www.hansonrobotics.com/sophia/

Could you tell us a bit more about this history of AI narratives in Europe and the US?

The history of narratives about intelligent machines is ancient. It goes back to ancient Greece: the oldest reference we found is in the Iliad, where the Greek god Hephaestus created artificial women to help him out in his forge. What we’ve noticed is that these narratives are not only ancient, they have been prevalent and largely unchanged throughout history. Hephaestus and his female servants is a theme that continues to recur and still exists in various forms. You can see it now in 21st-century depictions like the film Her or the TV series Humans.

Another one that has been around since ancient Greece, also attributed to Hephaestus, is a creation called Talos: a bronze giant patrolling Crete and throwing boulders at pirates. In a sense, Talos is the first killer robot and the first artificial autonomous weapon system. In its embodiment as a bronze soldier it has recurred throughout history to the present day. We have fictional depictions of bronze knights in the Middle Ages, and then in the 20th and 21st centuries we have the Terminator: a killer robot that looks like a human. The human-shaped artificial weapon is literally nearly 3,000 years old.

Stephen Cave and I have identified four utopian and dystopian depictions of artificial intelligence that recur throughout history. That history is so old and so largely unchanged that these narratives influence how artificial intelligence is talked about today.

And what are those four depictions?

So in terms of the ‘four hopes’, the first is the hope for immortality: health and freedom from disease, and eventually things like mind uploading or the transfer of our minds into cyborg or artificial bodies.

The second is the hope for ease, or time: if you have an infinitely extended life, you don’t want to spend it doing all kinds of drudgery, and therefore robots will take care of all those tasks. Then there’s gratification, the hope that social interactions too will become automated: things like having artificial friends, lovers, or family members. And finally there’s power: the killer robot that defends us from evil, from anything that might threaten that kind of utopia.

But each of those four hopes taps into a fear. With the hope for immortality, there’s a fear of inhumanity: the idea that if we upload ourselves into the cloud, we will lose our personality and no longer be human. The flip side of ease is obsolescence: that robots will take over everything, that we will have no work and no purpose, and that we will be infinitely bored. With gratification, there’s the fear of alienation: that we will become essentially obsolete to each other. If everyone prefers interactions with robots, humans will no longer need each other, and again we lose an essential part of our humanity, the social side. And of course, the flip side of the desire for power, protection and security is being dominated by the robot.

Greek vase painting (c. 450 BC) depicting the death of Talos. Wikimedia Commons / Forzaruvo94

Could you tell us more about the ways that cultures outside the West have imagined intelligent machines?

In many parts of the world, intelligent machines have long been part of narrative traditions. We have narratives about intelligent, machine-like creations from ancient China and India, for example.

In ancient China, there was a story about a robot dressed up as a woman that fooled people into thinking it was human. In India, there was a story about silver robots guarding the cave where the body of the Buddha lay buried. So while the terminology is very much taken from Europe, the idea has always been there.

And to some extent, these narratives have evolved independently. The strongest independent evolution is in Japan, where there is a very different perception of what it means to live with intelligent machines. One shortcut explanation that people usually give is that it has to do with Japanese spiritualism: since everything is imbued with spirits, the border between humans, machines, animals and nonliving things is much vaguer and more blurred. In reality, it is much more complex than that. But what we have seen in Japan is that from the 20th century onwards, artificial intelligence has predominantly been depicted as positive, not as a threat.

What are the major differences between the AI narratives that you have studied around the world?

One important finding is just how much Western narratives have influenced very large parts of the world. That is partly because of older histories of cultural imperialism: the attempt to root out native cultures through re-education, or through deliberate attempts not to preserve things like writing systems, which left a gap that was forcibly filled with the narratives of the colonizer.

That gap persisted up to the moment of decolonization, which for many parts of the world came in the mid to late 20th century. So we did see, for instance, that the Terminator is a pretty much universally known figure. On the one hand, there are very strong attempts to restore the ‘lost’ narratives, like the ancient Chinese and ancient Indian ones. On the other hand, there is a push to come up with new narratives that represent people’s present lived experience, where that lived experience has nothing to do with the life depicted in a Hollywood film. Especially in Latin America and Sub-Saharan Africa, very strong new narratives are being developed with an explicitly decolonial agenda.


Still from “Doraemon: Nobita’s Chronicle of the Moon Exploration” (2019)

Do you have any examples of innovations or technologies that have been shaped by these different narratives and histories?

One major influence, especially in the West, is the idea that robots must look like humans. That comes from the long history of robots in narratives looking like humans, and to some extent being indistinguishable from them. It has really influenced the popular perception of robotics, even as robotics itself developed in all kinds of directions that are nowhere near human-like. Now we have robots in all major factories, and none of them look anything like a human.

But you get robots like Pepper, ASIMO and Sophia, which are not really that useful. They are gimmicky: they get a lot of views and a lot of likes on YouTube. There is a huge discrepancy between what people think robots are and should look like, and what robots that are successful at their job actually look like.

As mentioned, in Japan it’s much more blurred because there is much less anthropomorphism. One of the most famous depictions of an artificial intelligence in Japanese animation history is Doraemon, a blue robotic cat. When one of your most famous robots does not look like a human, it shapes expectations in a very different way.

With regard to your research into AI and whiteness, could you give us your interpretation of the whiteness of AI?

In the English-speaking West, this history goes back to the development of the concept of intelligence in the late 19th century. Stephen Cave has written an excellent paper on how the term developed, how intelligence was measured, and how that influenced what artificial intelligence became.

As people started to think about race in a purportedly scientific way – to be able to claim that the white man was the peak of civilization and that everyone else came below – various strategies were invented, including the measurement of mental ability. To prove that men were more intelligent than women, and that white people were more intelligent than people of colour, they had to jump through a number of artificial hoops.

Tests were created that were strongly tied to people’s environments and backgrounds, so people with the right kind of background were much better at answering the questions than people without the required educational or social background. The SAT, the US university admissions test, was developed in order to keep universities white. The test has since been modified, scrutinized and criticized, but it is still part of the US college admission system.

Intelligence at that time became something measurable in order to create hierarchies. About 50 years later, the term artificial intelligence was invented, and it came with all that baggage about what intelligence is and how you measure it. This is why AI benchmarks have always been so peculiar and so closely related to the hobbies of wealthy men: chess, complex board games, quizzes, and now video games. Skills considered more feminine, like social interaction or care work, were deemed completely irrelevant to what it means to be intelligent, whether artificial or human.

The idea is that AI will become more intelligent than humans. In order to gauge whether that’s the case, you have to measure it, and so you use the measures that have historically said that white men are the cleverest. To create an artificial intelligence that is cleverer than a human, what is actually measured is whether the artificial intelligence is cleverer than a white man. So the association of artificial intelligence with whiteness is due to the idea that whiteness is the ultimate level of intelligence.

James Bareham/Polygon | Source images: Orion Pictures, Paramount Pictures and Lightstorm Entertainment

You also mention decolonisation as a crucial tool. How do you implement decolonisation in practice?

When you’re looking at a system of narratives and a system of thinking that perpetuates ideas about colonialism, or ideas grounded in colonialist thought, one thing decolonial thought can help with is dismantling those thought structures. It makes visible where the narratives come from and what their consequences have historically been, and might yet become; that kind of awareness is a first step towards making sure these consequences are not perpetuated. Then we can bring in alternative ways of imagining and implementing certain technologies.

One hegemonic narrative is what we call techno-solutionism: the idea that problems can be solved with technology. One approach is for a rich person, country or company to create a technology and then give it, in a spirit of charity, to people, countries or areas that do not have the resources to build, implement or buy it themselves. The problem is that this makes the recipients extremely dependent on that technology, and also creates a structure of implicit or explicit debt. That kind of dependency structure goes back to the colonial period. It really shows the downside of what at first sight seems to be a generous, charitable project.

What is the role of artists, filmmakers and other creatives in countering hegemonic narratives?

As I mentioned, over the past decade or so there has been an extraordinary outburst of alternative narratives that think about artificial intelligence in new ways. Even the Terminator films are not what they used to be, and now give a very different view of the role of technology in the near future.

In films like Black Panther, Captain Marvel or Black Widow, we can see alternative narratives being very successful. Hopefully that has made people realize that shaking up these narratives does work and does have an effect, and that it creates alternative visions that are consumed and listened to.