Photo of mannequin doctor, nurses, & patient at Sydney Hospital Museum.

On the psychosocial dynamics of technological progress

By Oliver Damian

A myopic focus

Too often when we read media stories about technology, the focus is on the technology itself. Sure, a story may include the particular technology's effect on current events, and a sprinkling of prediction about the future. However, rarely is there coverage of the background historical, psychosocial, technological, and cultural environment that is inevitably intertwined with the technology.

One of today's hot topics is AI. There have been many media stories about how AI will soon replace human jobs. For example, John Detrixhe (2017) writes about a survey revealing that next year 75% of banks and financial firms will explore or implement AI technologies designed to mine insights from the mountains of data they hold. The story does mention that in addition to the tons of data now available for AI crunching (thanks to the online data trails billions of people leave every moment), other technologies like advances in GPUs and cloud computing make all this technologically feasible and financially viable. It then goes on to say that AI technologies like machine learning and natural language processing can crunch financial research data much faster than humans could, revealing patterns and correlations obscure to current human consciousness. This could enable banks to offer services that were previously too expensive to offer. It could increase the productivity of human bankers, and more human bankers could then be hired given that increased productivity. Or, alternatively, the AI could replace some human bankers entirely. An analyst was quoted as saying that perhaps 15% of banking's human research analyst jobs could be at risk.
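
To make the "crunching data much faster than humans" claim concrete, here is a toy sketch in Python of the kind of pattern mining the story gestures at. The synthetic data and the planted correlation are assumptions for illustration only, not anything from the Quartz story; real financial research pipelines are far more involved.

```python
# A toy sketch of machine-scale "insight mining": scanning many series
# for correlations a human analyst would take far longer to spot.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical stand-in for a bank's data: 200 daily return series.
data = pd.DataFrame(
    rng.normal(size=(500, 200)),
    columns=[f"series_{i}" for i in range(200)],
)
# Plant one hidden relationship so there is something to find.
data["series_1"] = 0.7 * data["series_0"] + 0.3 * data["series_1"]

# Exhaustive pairwise correlation scan: roughly 20,000 pairs, trivial
# for a machine, tedious for a human.
corr = data.corr().abs()
np.fill_diagonal(corr.values, 0.0)
pairs = corr.stack().sort_values(ascending=False)
print(pairs.head(3))  # the planted pair should surface at the top
```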

Consistent with this dystopian trope are stories of how, once AI reaches human-level intelligence, it will quickly evolve into a superhuman intelligence that could wipe out the human race. For example, Nick Bostrom (2017, p. vii) warned us: "If some day we build machine brains that surpass human brains in general intelligence, […] the fate of our species would depend on the actions of the machine superintelligence. […] Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed."

The discourses on the dystopic effects of AI are interesting. But what interests me more is why there is this current preponderance of dystopic tropes. Has it always been this way? Has this happened before? Or is this time different? Of course, why questions are the hardest: often there is not one possible answer but many.

Acceptance, denial or the fight against technological stagnation

Could it be, as PayPal co-founder, investor, and well-known contrarian Peter Thiel (Kristol 2014) asserts in Video 1, that apart from a narrow slice of innovation in information technology (the world of bits), we have simply been riding a wave of globalisation while experiencing a period of stagnation in technological innovation in the world of atoms? To Thiel, this could partly be because the psychosocial tendency to conform (a herd mentality) is stronger than before, as we are now living in a more globally connected, transparent world where: (1) it often seems more dangerous for people to express unconventional ideas; (2) there is a public record of the expression of these unconventional ideas; and (3) people are censoring themselves more than before.

Thiel mentions that an odd aspect of Silicon Valley is that many of its successful founders seem to suffer from mild forms of Asperger syndrome and are socially awkward. He thinks this should be interpreted as an indictment of American society, where any well-adjusted, normal person is deterred from unconventional thoughts very quickly. He says the personality type that is extremely bad at founding successful companies is someone who got their MBA from a place like Harvard University. To him, these MBA programs are hothouse environments where extremely extroverted individuals with no strong convictions of their own get together for two years, at the end of which they all look to each other for what to do. They inevitably end up trying to ride the last wave of innovation. Thiel mentioned a Harvard Business School study which found that the largest group of graduates systematically went into the wrong things. In 1989 they all went to Michael Milken, a year or two before he went to jail. They were not interested in technology except in 1999–2000, just before the dotcom bubble burst: the two worst years to go into it. In 2005–2007 they went for financially engineered subprime mortgages, which precipitated the global financial crisis.

Thiel shared the unconventional solution they found at PayPal to counteract internet payment fraud, which became PayPal's technological edge and led to its success. He said that the conventional approaches to fraud were either: (1) a pure human solution, where a team of human investigators investigated everything; or (2) a pure computer solution, where a supercomputer figured out everything on its own. Neither approach worked given the scale and complexity of the transactions. What worked for them was a hybrid solution in which they figured out the proper division of labour between humans and computers: the computer flagged suspicious items, and humans were trained to visualise and investigate the flagged transactions. This turned out to be a powerful technique, which Thiel then applied in the counter-terrorism and national-security space to help start Palantir Technologies. Similar to the payments space, there was: (1) a pure human solution in the CIA analyst, where no one knows what everyone else is doing; or (2) a pure computer solution in the NSA, where one collects all the data in the world but does not know what data one has. The hybrid approach of Palantir is vastly more powerful than either of these pure approaches.
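
As a rough illustration of that division of labour, here is a minimal sketch in Python. It is not PayPal's actual system: the transaction fields, scoring rules, and threshold are all invented for illustration. The point is only the shape of the pipeline, where the machine triages and the humans make the judgement calls.

```python
# A minimal sketch of the hybrid division of labour Thiel describes:
# the machine scores and flags, humans review only the flagged items.
# All field names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Transaction:
    txn_id: str
    amount: float
    country_mismatch: bool  # e.g. card country differs from IP country

def suspicion_score(txn: Transaction) -> float:
    """Crude stand-in for a trained fraud model's score."""
    score = 0.0
    if txn.amount > 5_000:
        score += 0.5
    if txn.country_mismatch:
        score += 0.4
    return score

def triage(txns: list[Transaction], threshold: float = 0.6) -> list[Transaction]:
    """Machine pass: keep only the items worth a human's attention."""
    return [t for t in txns if suspicion_score(t) >= threshold]

txns = [
    Transaction("t1", 120.0, False),
    Transaction("t2", 9_800.0, True),   # should be flagged
    Transaction("t3", 6_500.0, False),
]
for t in triage(txns):
    print(f"flag {t.txn_id} for human review")  # humans make the final call
```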

Thiel said that this hybrid approach is very under-explored in general because we think of computers as substitutes for humans. In reality, computers are complementary to us: computers are good at different things than people are. He said that the dominant narratives in our society right now are either: (1) Luddite, where we have to stop the computers from replacing us; or (2) utopian in a negative way, where the computers will replace us and that is a good thing. The much greater reality is complementarity. Smarter-than-human AI is still science fiction or fantasy at this stage of history. Thiel said we are so fixated on computers replacing people that we have not asked the question: how can people work better with computers?

As an early investor, Thiel talked about how Facebook has become the iconic success among recently founded Silicon Valley companies, which is why it receives a disproportionate focus. He warned against the fallacy of equating specific successes with general success. The fallacy goes: Facebook is a specific success, so it must signal some general success; Facebook is a great business, therefore it will solve all the world's problems. Thiel's preferred modality is to think that there are specific successes, but they may be symptomatic of general failure: Facebook will not solve all the world's problems, but it may still be a great business. He said we should not turn Facebook into a scapegoat for the lack of innovation elsewhere, and that the story of a specific success masking generalised failure is a hard one to tell.

Thiel believes that if we broaden the perspective and look back over the last 40–50 years, we are living in a world where there has been significant innovation in information technology, that is, computers (hardware and software), the internet, and the mobile internet, but much less in everything else. One of the reasons, according to Thiel, is that we live in a world where bits are quite unregulated while atoms are heavily regulated. He said that even if it is generally acknowledged that we need more STEM, it is still a bit rhetorical: to him, the only engineering that is really doing well at this point in time is computer engineering, and maybe petroleum engineering. Thiel goes further to say that all these new gadgets and devices dazzle us but also distract us from the way our larger surroundings are strangely old. We are on our cellphones while riding a 19th-century subway system in New York. In San Francisco, the housing stock looks like it is from the 50s and 60s, mostly quite decrepit, and that sort of thing is incredibly hard to change. Thiel said bits are making progress while atoms are strangely stuck.

Video 1—video recording of Peter Thiel interviewed by Bill Kristol

One explanation for this discrepancy in the progress of bits over atoms is a natural-limits-to-growth explanation: we already picked the low-hanging fruit in atoms during the Industrial Revolution, and it is now much harder to make incremental improvements beyond it. Thiel prefers the alternative explanation that the phenomenon is a cultural one. He believes there are still many areas where we could make progress if we really wanted to. To him, the question is whether there is an external reality that makes progress hard, or whether something in our culture makes us less ambitious, more risk-averse, and more scared to try to do things. He thinks there is a bias towards the natural-limits explanation over the cultural one because it exonerates us from the inter-generational responsibility for the innovation slow-down. Thiel believes there is a self-fulfilling aspect to technological progress.

Thinking you can do something is a necessary pre-condition to being able to do it. It may not be sufficient, but it surely is necessary. The Princeton mathematician Andrew Wiles, who solved Fermat's Last Theorem, worked on it by himself for 8 years, and solved it after 358 years of people trying. And, you know, maybe it was impossible. Maybe it was a fool's errand to spend time on that. But if you think you couldn't do it, you're never going to be the person to do it.

Thiel mentions that this cultural failure to imagine a different future can best be seen in science fiction movies. If we look at the science fiction films made in the last 25 years, they basically show technology that: (1) doesn't work; (2) is dystopian; and (3) kills people. One can choose between The Terminator, The Matrix, Avatar, or Elysium, none of which portray a radically different future that is better. He said the exceptions are the Star Trek retread movies, which are still a throwback to the 60s. To him, the Jetsons are a completely reactionary aesthetic at this point.

Thiel gives the example of nuclear technology as an area where progress could be made: we could be building much safer and cheaper nuclear reactors. To him, it would take a combination of political will and a belief that something like this could work; if one thinks it won't work, then it won't happen. He said that given this period of generalised stagnation, one can: (1) accept it; (2) deny it; or (3) fight it. To him, the modalities that dominate our culture today are acceptance and denial, while what is really needed is to fight the stagnation and decline. While acceptance and denial are opposites in some ways, they are actually very similar. At their core, they both say: (1) there's nothing to worry about; (2) there's nothing you can do; and (3) it doesn't really matter.

Historical antecedents: charting the socio-technical imagination

Genevieve Bell (2013) described her talk at Stanford University in Video 2 as being about our relationships with, through, and of technology, and about the construction of the socio-technological imagination. Her fast-paced journey through the last 500 years of history was indeed breathtaking. I loved that her focus was more on what motivates people than on technology per se. Her deep background in anthropology gave her a rich palette with which to explore the intersections between culture and technology.

Bell referred to William Gibson (author of Neuromancer and other iconic cyberpunk novels), who tweeted about a dream (or nightmare) of his that took place entirely in Google Maps Street View. She found it ironic that a man who helped shape our collective socio-technical imagination felt trapped by it.

She then referred to the video of a Furby talking to Apple's first-generation Siri in Video 3 (awarehead 2011). It appealed to her because it captured a particular moment in our relationship with technology. Bell said the video can be read as a genealogy of: (1) talking things; (2) Neanderthal-level objects versus human-level objects; and (3) talking and listening (the Furby just talks; Siri attempts to listen as well as talk). She sees this as the beginning of a move from human–computer interaction to human–computer relationships. It points to a promise: when a computer listens to us and to our needs, it could lead to computers taking care of us.

Bell then spoke about how, when she shared this insight with the engineers at Intel (where she works), their reaction was "No." When she probed further, they replied that she should know what happens when computers become intelligent enough to have a relationship with a human: death. The engineers feared that as soon as machines are smart enough to take care of us, they'll be smart enough to kill us. Bell said this pointed to another genealogy: a genealogy of anxiety about technology. She spent the next 18 months after this encounter trying to understand why it is OK for a machine to talk, still OK for a machine to listen, but when we have relationships with machines, it leads to death: from Furby to Siri to SkyNet.

Video 2—video recording of Stanford Seminar by Genevieve Bell

Bell traced it all the way back to the death of magic that came with the invention of three things. First was the watch (circa 1530), which shaped our relationship to time. Prior to the watch, time was a communal thing that people heard through the chiming of bells in Christian medieval church towers or saw in Islamic water clocks. The watch made time personal: time became something that accompanied you. This led to us being regulated by the time on our watches to fulfil time-based obligations.

The second was Galileo's telescope (circa 1609), which led to the de-centering of the Earth: humans realised that the Earth was not the centre of the universe.

The third was the microscope, which led to the exploration of our world by taking things apart. Suddenly, things that had previously appeared unknowable became, in some way, infinitely knowable. The confluence of these things started the Scientific Revolution: a different way of thinking about and knowing the world. Humans began to see patterns, to make hypotheses and test them. To Bell, Descartes' expression "I think therefore I am" was the nail in the coffin of magic. It marked the transition where the most important thing about being human became our cognitive capacity: our capacity for rational thought.

Video 3—video recording of a Furby talking to a Siri

After magic was killed, Bell said, there was a proliferation of people building objects. Using the gears and wind-up technology from making clocks and watches, people began building automatons. She talked about Jacques de Vaucanson, who made, among other things, an automaton that played the flute. To make the flute sound like it was played by a human, Vaucanson put skin (cowhide) on the automaton's fingers. But it was the Canard Digérateur, or Digesting Duck (1739), an automaton duck that walked, flapped its wings, ate, and defecated, that made Vaucanson famous. To make this possible he had to invent new technology; for example, he was the first to use rubber tubing. This was one of the earliest attempts to make life-like objects.

Bell said that making things life-like was a significant intellectual step for humans. De Vaucanson ended up destroying the Digesting Duck and moved on to more serious work, which eventually led to mechanised looms and punch cards. The making of things that looked real reached uncanny heights with Thomas Edison's talking doll in Video 4 (PBS NewsHour 2015), which according to Bell influenced Sigmund Freud's study of the "uncanny".

Bell then moved on to the significant year 1812, when there was a widespread backlash against mechanised looms. The men who destroyed the looms wanted a leader but knew that as soon as they had one, the leader would be arrested. So they invented one: General Ned Ludd. This was also the height of Romanticism, so, riding on the Robin Hood story, they chose Sherwood Forest as the place where General Ludd lived. To distribute Ludd's manifestos, they invented songs that served as coded instructions on where to meet and how to smash the mechanised looms. From this, the word "Luddite" has carried over to mean anti-technology to this day.

During the 18-month period before the Luddite movement dissipated, all England talked about were the consequences of introducing new machinery: (1) loss; (2) pollution (the "dark Satanic mills"); and (3) the replacement of human work, labour, and meaning.

The next piece of the story of where the fear of technology came from was when Lord Byron, his doctor, his girlfriend, and friends went travelling in Europe in 1816. They got stranded in Switzerland because of a volcanic eruption. Byron told his friends over dinner that he was frightfully bored and asked them to write stories for him. This marked a significant moment, according to Bell: three of the most important tropes of the horror genre were born. Byron's doctor wrote the first vampire story. Byron's best friend's half-sister wrote the first zombie story. And Byron's best friend's girlfriend, Mary Shelley, wrote Frankenstein.

Video 4—video recording of a Thomas Edison talking doll

Bell reminded us how interesting Mary Shelley was. She was the daughter of Mary Wollstonecraft, a pioneering advocate of women's rights in England. Mary Wollstonecraft's husband was a pre-eminent labour historian. So in 1812, when Mary Shelley was around 14 years old, the dining room conversation would have centred on the Luddites and the consequences of mechanisation. When they were still courting, Percy Shelley took Mary to the Egyptian Hall, where she saw early experiments in electricity: the vivisection of frogs and the making of frog limbs move with electric current. She saw how biological tissue could be taken apart and then put back together again, albeit not that well, leading to monstrosities. This could have inspired Mary Shelley to write Frankenstein.

To Bell, this story is deeply embedded in our consciousness: our playing with technology and life can lead to nightmares and our own demise.

Bell moved the story on to the Analytical Engine, conceived by Charles Babbage, who was inspired by mechanised looms. When Babbage met Lady Ada Lovelace, who happened to be Lord Byron's daughter, he discussed the Analytical Engine with her. Ada Lovelace then conceived that Babbage's machine could be programmed, and she is considered the world's first computer programmer. What is truly amazing is that she invented programming before computers as we know them today were invented. Bell then fast-forwarded to World War II at Bletchley Park, where the first computers were built to crack the Enigma cryptographic codes used by Nazi Germany. This leads to the complex story of Alan Turing, a very talented, socially awkward, and complex human. Despite his major contributions to computing, Turing was punished and ostracised from his field because of his homosexuality. Turing published his seminal paper "Computing Machinery and Intelligence", which posed the question: can a machine think? This paper eventually led to the idea of a "Turing Test" as a way to test whether we have created human-level machine intelligence. Essentially, the "Turing Test" is a way to ascertain whether a machine can think like a human, which links back to Descartes' "I think therefore I am": a Procrustean distillation of being human as having the capacity for rational thought.
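
For readers who want the protocol of Turing's paper made concrete, here is a deliberately minimal sketch of the imitation game. The canned replies and the coin-flip "judge" are placeholder assumptions of mine, not anything from the paper; the sketch only shows the shape of the test: a judge converses with hidden parties and must identify the machine.

```python
# A minimal sketch of the imitation game: a judge converses with two
# hidden parties and must decide which one is the machine.
import random

def machine_reply(prompt: str) -> str:
    # Stand-in for a conversational program.
    return "I enjoy a good puzzle as much as anyone."

def human_reply(prompt: str) -> str:
    # Stand-in for the human respondent.
    return "Honestly, that question takes me back to my childhood."

def imitation_game() -> bool:
    """Returns True if the judge fails to identify the machine ("A")."""
    parties = {"A": machine_reply, "B": human_reply}
    question = "Tell me about a memory you treasure."
    for label, respond in parties.items():
        print(f"{label}: {respond(question)}")
    guess = random.choice(["A", "B"])  # a real judge would weigh the answers
    return guess != "A"

print("machine passed:", imitation_game())
```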

Bell considered Turing's paper influential in the way we think about machine intelligence, particularly in science fiction. In the movie Blade Runner, based on Philip K Dick's novel "Do Androids Dream of Electric Sheep?", the replicants (human-like machines used as slave labour for outer-space colonies) thwart a system designed to prove whether they are human. The replicants do this not by having the capacity to think but by having: (1) emotions; (2) memories; and (3) a moral core. Philip K Dick is thought to have interpreted Turing's paper not as asking whether machines can think, but as asking the deeper question of what really makes us human.

Blade Runner reinforces the fear of machines: if thinking is what makes us human, and machines can think, then what does that say about us?

Bell then explained that this fear of thinking machines in the Western canon is the result of a multi-layered process involving science, science fiction, literary tropes, and political histories, all of which encode a particular set of ideas. Not all of these ideas are rational, but they have lived in our collective imagination in dramatic ways that continue to haunt us.

Bell then pivoted to say that the foregoing is a very particular history, as narrow as: (1) the West; (2) the Enlightenment; (3) the technologies of the West; and (4) the Western Industrial Revolution. She proceeded to look at whether, given the same set of technologies, the same anxieties arise in other cultures.

She said one can look at the scientific and engineering literature of Islam, such as the "Book of Ingenious Mechanical Devices", with its water clocks and other mechanised objects that did things. None of these aimed at simulacra or at copying real things the way de Vaucanson's Digesting Duck did, but they were beautiful objects nevertheless. For example, there was a peacock-shaped water vessel used to pour water for religious rituals.

Bell wanted to go further and look at exactly the same technology de Vaucanson used. She told the story of how a surplus of clocks from Europe became re-gifted items that reached all the way to China, where there was initial interest. After a while the Chinese became less interested and re-gifted the clocks onwards until they eventually reached Japan, where the clocks were promptly taken apart and reverse engineered.

The Japanese used the clock parts to make their own automatons, like the Karakuri in Video 5 (ryo000001 2012): little mechanical people that move gracefully to do things like pour tea or fire a bow and arrow. Unlike with de Vaucanson, the clockwork machinery was used not to recreate life-like objects or to copy things that already existed, but for curiosity, wonder, beauty, and aesthetic pleasure.

Video 5—video recording of a Japanese Karakuri automaton

According to Bell, the same trajectory of technology happened in Japan as in the West. Japan had mechanised looms, but there was no Luddite moment. She noticed how robots in present-day Japan are not feared as they are in the West; to the Japanese, robots are friends. This is deeply embedded in Japanese history, political discourse, iconography, and pop culture, such as in anime like Astro Boy. Technology is understood as part of the Japanese psyche and landscape, not as a replacement for it. Robots in Japan are in industry, homes, and schools.

To Bell, it was the same technology with the same origins, but different cultural discourses and stories. She then mentioned that even locally, not all technologies produce the same fear, and talked about the early history of electricity in America. In the early days, people drove from the Midwest literally to see the lights in New York.

Bell reminded us that electricity, when it was first introduced to homes, was a hard sell: it was an infrastructure with only the light bulb as its "killer app", and people already had light in their homes. A man with surplus electricity to sell used women to show that one could safely light a bulb by running the current through their bodies; this was a time when electricity was still distributed at low voltages. London had doorbell ladies who glowed with electricity when they stepped on a charged plate to open the door for visitors. To Bell, this was a time when technology was seen as something wonderful, a spectacle.

She then contrasted this with the story of radium, which was first hailed as eternal sunshine because it glowed all the time. People used it everywhere, in nail polish, on clock faces, in performances, until they discovered that the radiation it emitted was harmful to health.

Bell then looped back to say that when we encounter a technology for the first time, it is like encountering magic: a moment of wonder. She modified Arthur C Clarke's third law to say that any sufficiently advanced technology should deliver magic. It should aim to deliver that moment of wonder, surprise, and delight. It's not always about being practical, pragmatic, and solving problems.

She said that the drive to make technology as real and life-like as possible can lead to the state where, in robotics for example, robots become almost human but not quite: the "uncanny valley" that makes us feel very uncomfortable. Bell asks whether, instead of just aiming for efficiency or simulacra, we could aim to elicit delight and glee.

More importantly, Bell reminds us that it is not just about the things we design; it is also about the stories we tell about what we design and about what those designs will become. The worlds we shape through our actions, our words, and our visioning of the future are hugely important. It's not just about the work we do; it's also about the stories we tell. And the stories we tell should be a little about wonder, mystery, and magic.

Notes

Asperger Syndrome
A condition on the autism spectrum, generally higher functioning, characterised by difficulty in socialising and communicating effectively.
Dystopia
An undesirable state of society often marked by: environmental collapse; breakdown of social order; prevalence of crime, disease, poverty; or totalitarianism and lack of personal freedom.
Psychosocial
Refers to the interrelation of social factors and individual thought and behaviour.
Scapegoat
From the book of Leviticus in the Bible: a goat sent into the wilderness after the priest has laid the sins of the people upon it. In everyday usage refers to someone blamed for the faults, mistakes, and wrongdoing of others, mostly for reasons of expediency.
Trope
A commonly recurring motif, cliché, literary or rhetorical device used in discourse.

References

  1. awarehead 2011, Siri VS Furby, video recording, YouTube, viewed 20 October 2017, <https://youtu.be/18UmoIu8lII>.
  2. Bell, G. 2013, Stanford Seminar – Magical Thinking: Fear, Wonder & Technology, video recording, YouTube, viewed 20 October 2017, <https://youtu.be/5aKZwKFFDYw>.
  3. Bostrom, N. 2017, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, Oxford, UK.
  4. Detrixhe, J. 2017, 'Wall Street’s research jobs are the most likely to be upended by artificial intelligence', Future of Finance obsession in Quartz Blog, weblog, Quartz Media LLC, New York, viewed , <https://qz.com/1113633/wall-street-analyst-jobs-are-the-most-likely-to-be-disrupted-by-artificial-intelligence/>.
  5. Kristol, B. 2014, Peter Thiel on Innovation and Stagnation, video recording, YouTube, viewed , <https://youtu.be/F3EBfS9IcB4>.
  6. PBS NewsHour 2015, Edison's talking dolls: child's toy or stuff of nightmares, video recording, YouTube, viewed 20 October 2017, <https://youtu.be/_bgXH7U2Ja0>.
  7. ryo000001 2012, The most famous Japanese "Karakuri" automata that have [sic] made 200 years ago., video recording, YouTube, viewed 20 October 2017, <https://youtu.be/i5zYK9FxORI>.