Illustration created by Gebby Schell
In part 1 of this Primer on Digital Thinking last year, I introduced the fundamental distinction that digital thinking is based on counting (“How many?”) while analog thinking is based on measuring (“How much?”). Digital thinking works with data, including words, while analog thinking makes comparisons, typically through images.
In part 2, published a few weeks later, I described the distinction between explicit knowledge and tacit knowledge, which was made more than half a century ago by Michael Polanyi. For example, learning to tie one’s shoelaces (or a knot in general) is a form of tacit knowledge that is rarely achieved either by words or by images. The routine of going to your own home is a type of tacit knowledge that you convert to explicit knowledge when you give someone directions or draw a map.
Digital thinking often leads to a mindset that relies heavily on statistics, which in turn are used to predict future events based on past events. Perceiving a trend based on a set of data is very important for managing a large population and its infrastructure. However, the use of big data to establish an allegedly permanent pattern in an individual’s behavior can wreak havoc on a person’s life, especially when there’s no way to appeal.
California’s Three Strikes Law (passed in 1994 and modified by a ballot measure in November 2012) and the Zero Tolerance meme view an incident of bad behavior as evidence of an unchangeable tendency that is assumed to be an objective fact; interpretation by a human judge is thus irrelevant. And even when a federal judge intervenes to correct an error, vindication can be delayed by a state attorney general who believes that robotic adherence to procedural rules is essential to justice.
China’s Dang’an system uses big data and surveillance cameras to calculate a citizen’s social credit based not only on individual black marks (e.g., jaywalking, excessive video gaming, playing loud music on trains) but also on the behavior of one’s family and friends. If a person’s score is too low, some government services will be delayed (e.g., slower internet connection) or denied (e.g., getting a passport).
This past August, nearly a year after I published part 2, Mike Elgan described the extent to which such a system already exists in the US. For example, New York State’s Department of Financial Services released guidelines early this year that say an insurance company may use a person’s social media posts when evaluating that individual’s risk to calculate their insurance premium.
Elgan summed up the danger of America’s privatized version of a social credit system: “Crimes are punished outside the legal system, which means no presumption of innocence, no legal representation, no judge, no jury, and often no appeal. In other words, it’s an alternative legal system where the accused have fewer rights.”
So, what kind of people always behave politely and never offend anyone? Robots, of course. They would be perfect citizens because they never make anyone uncomfortable: they never utter microaggressions, never touch anyone inappropriately, never make jokes.
Welcoming passionless robots as exemplars of politically correct behavior dovetails with two other recent trends: re-education workshops for college men because “all men are potential rapists”; and the birth striker movement started by women who believe an ecological doomsday is near, and that it would therefore be cruel to bring new children into the world to live through it.
One popular re-education video called “Tea Consent” explains how offering someone a cup of tea is analogous to requesting consent for sex. The lack of spontaneity and the reliance on explicit rules instead of tacit signals are characteristic of programming a robot.
However, another video notes the inability to prove you followed the rules: “Tea Consent are guidelines to improve society, not a protection for the individual.” This observation resonates with my remark at the start of this essay: a data set that reveals patterns is helpful when managing a big population and its (moral) infrastructure, but believing that those patterns are set in stone means disregarding the fate of individuals.
The birth strikers believe that a key factor in ecological disaster is that the population of the world will keep increasing. The current panic about overpopulation goes back to 1798, when the Rev. Thomas Malthus published a book asserting that population grows geometrically while food supply only grows arithmetically. Malthus’s prediction of dire food shortage did not happen, but his influence remains pervasive in public policies worldwide.
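Malthus’s asymmetry is easy to see numerically. Here is a minimal sketch of the two growth patterns; the doubling rate and fixed increment are illustrative assumptions, not Malthus’s own figures:

```python
# Compare geometric (population) vs. arithmetic (food) growth.
# The rates below are illustrative assumptions, not Malthus's actual figures.

population = 100.0   # starting population, arbitrary units
food = 100.0         # starting food supply, same arbitrary units

for generation in range(1, 6):
    population *= 2   # geometric: multiplies by a constant each generation
    food += 100       # arithmetic: adds a constant each generation
    print(generation, population, food)

# The ratio food/population shrinks every generation --
# that widening gap is the shortage Malthus predicted.
```

After five generations the population has multiplied 32-fold while the food supply has merely grown 6-fold, which is why Malthus expected geometric growth to outrun any arithmetic gain, whatever the starting numbers.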
One advantage of the wide availability of data in the internet age is that one can update and/or re-analyze conclusions made in the past. In regard to world population growth, data indicate it is now leveling off and we may already be past the peak of 136 million births in 2015. Total births are likely to decline to 135 million in 2020, because the global birth rate is falling.
A more poetic rebuttal to the gloom and doom of birth strikers was voiced by Carl Sandburg in 1948, who praised life over inanimate objects: “A baby is God’s opinion that the world should go on. A baby, whether it does anything to you, represents life. If a bad fire should break out in this house and I had my choice of saving the library or the babies, I would save what is alive. Never will a time come when the most marvelous recent invention is as marvelous as a newborn baby.”
Yet it seems many people no longer share Sandburg’s belief in the innate value of a human baby. Some even see robots as a replacement for children, portrayed poignantly in Steven Spielberg’s AI (2001), which is a modernization of the Pinocchio story. More recently, Ben Young’s movie Extinction (2018) includes robot children who are adopted and seem to enjoy a normal family life, but the film ignores the fact that they never grow up and don’t need to go to school.
As human kids grow up, they develop a mind of their own. The question of whether artificial intelligence (AI) will “grow up” to develop an independent will is no longer confined to science fiction. The hype generated by researchers who feel that computers will save the world is equaled by the fear of “robots taking over” voiced by other researchers. In fact, machine learning is dependent on algorithmic input by humans, which means even the cleverest robot is essentially applying a pre-established set of rules (or meta-rules) to new situations.
Apps are made by humans who bring a set of assumptions to their work. For example, Accuweather showed a cloudy icon for my wife’s hometown in Central Java day after day, even though I saw completely blue skies. When I mentioned this to a Dutch neighbor, he said that TV weather reports in his country repeatedly err by displaying rain icons in Indonesia during the dry season. The explanation seems obvious: someone programmed a reading of 100% humidity (common in Indonesia early in the morning all year long) to equal clouds or rainfall. The digital mindset robotically assumes that a fact has only one possible interpretation.
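One can guess at the kind of rule involved. The sketch below is a hypothetical reconstruction; the function names and thresholds are my assumptions, not AccuWeather’s actual code:

```python
# Hypothetical reconstruction of the suspected bug: treating 100%
# relative humidity as if it guaranteed clouds or rain.
# All names and thresholds here are illustrative assumptions.

def icon_buggy(humidity_pct: float) -> str:
    # The suspected rule: saturation humidity -> cloudy icon,
    # regardless of what the sky actually looks like.
    return "cloudy" if humidity_pct >= 100 else "sunny"

def icon_better(humidity_pct: float, cloud_cover_pct: float) -> str:
    # A second measurement removes the single-interpretation assumption:
    # a humid tropical morning can still mean clear blue skies.
    return "cloudy" if cloud_cover_pct >= 50 else "sunny"

# A humid, clear morning in the tropics:
print(icon_buggy(100))        # "cloudy" -- the icon on the app
print(icon_better(100, 10))   # "sunny"  -- what the sky showed
```

The fix is not more data but a second interpretation: the same humidity reading is compatible with both a cloudy Dutch morning and a cloudless Javanese one.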
Experts still debate whether a robot’s ability to solve problems and figure out tasks proves it has intelligence. After all, when animals demonstrate such skills, we consider them to have rudimentary intelligence. Skepticism about AI is even stronger in regard to claims that robots are creative. In a long interview with Peter Robinson of the Hoover Institution about his book Life After Google, George Gilder asserts that human creativity is based on surprise. “If a machine surprises you, that’s because it has broken down.”
The word robot was coined by Josef Čapek and popularized by his brother Karel in the 1920 play R.U.R. It derives from the Czech robota, which means drudgery or menial labor. A century ago, dull repetition referred to factory work, such as the then-new application of an assembly line to the construction of automobiles.
In an article in The Huffington Post, Silicon Valley entrepreneur Martin Ford pointed out that repetitive work now includes many jobs other than working on an assembly line. “As specialized artificial intelligence applications (like IBM’s Watson for example) get better, ‘routine and repetitive’ may come to mean essentially anything that can be broken down into either intellectual or manual tasks that tend to get repeated. Keep in mind that it’s not necessary to automate entire jobs: if 50% of a worker’s tasks can be automated, then employment in that area can fall by half.”
Breaking down a human activity into a sequence of tasks that a robot can perform is an example of the Externalization process described in the SECI model of Nonaka and Takeuchi that I mentioned in part 2. Externalization involves converting tacit knowledge into explicit knowledge. The two co-authors describe how the observation of human bakers enabled the redesign of a bread-making machine that had not been kneading dough properly. A more quantitative example would be to determine how much force a robot should use when tightening a screw.
A key feature of self-driving cars is the ability to recognize objects on the road, especially humans or animals. Did you ever notice that the images for a website’s reCAPTCHA security check (a reverse Turing test whereby a computer presents images that you identify to prove you’re a human) are all outdoors? Traffic lights, signs, trees, bridges, trucks, etc. It’s not a coincidence: Google is using the reCAPTCHA input of millions of users to convert our tacit knowledge of objects into explicit knowledge to train their AI.
Having briefly reviewed how robots are able to replace human beings, I’d like to address the impact of automation on the economy. Martin Ford noted that “the job market is currently polarized: a great many of the middle-skill jobs that used to support a solid middle class lifestyle have been automated — leaving us with high skill/high wage jobs that require lots of education and training and lots of low skill jobs with very low wages.” A report by the Economic Policy Institute reveals that productivity rose steadily from 1973 to 2013 but hourly compensation remained almost flat for nonsupervisory workers. During the Clinton era, nearly everyone enjoyed a rise in real annual wages, but since the dawn of this century wages for the bottom 90% have been flat.
Ford warned about the impact of automation on consumerism: if it becomes harder for a human to get a decent job, who can afford to go shopping? In an article about one company’s expectation that its human workers be as fast-paced and tireless as its robots, Marshall Auerback asked a similar question: “if the output of each worker … is rising so strongly but the ability to purchase (the real wage) is lagging badly behind, how does economic growth sustain itself?”
In this light, a guaranteed minimum income starts to seem attractive to the business sector, because retail sales may collapse without it. So far, consumer spending has been propped up by credit card debt, which is an even more serious problem in the UK than in the US. However, the Australian economist Bill Mitchell favors job guarantees instead of a guaranteed income. In his blog, he explained that the term “gainful employment” dates back nearly 200 years: “The gainful worker was effectively considered to be engaged in activities that advanced private profit rather than societal well-being. Other activities, particularly public sector employment held a lower ‘status’ and in many situations are not considered productive at all.”
Mitchell titled that blog entry “Work is important for human well-being.” For the final part of this essay, let’s explore how the digital mindset has molded our view of the nature of human society and self-worth during the past two centuries. This time period corresponds closely with the emergence of the field of Economics, which has become pre-eminent in the 21st century even though it was not considered important when the Nobel prizes began to be awarded at the dawn of the 20th century.
Jeremy Bentham founded the philosophy of utilitarianism early in the 19th century, based on the axiom that “it is the greatest happiness of the greatest number that is the measure of right and wrong.” Quantifying happiness on a large scale was therefore a key part of his moral compass. In his new book The Unnameable Present, Roberto Calasso writes: “Bentham was searching among human feelings and impulses for something that could be measured. He found it in utility. But that was not enough. Everything had to be measured. And Bentham discovered that it could be, if related to utility. As for utility itself, it could be measured in money.” Calasso sums up by saying that the field of Economics is based on this “false foundation.”
Efforts to quantify human work and improve productivity have emphasized automation for decades, but the focus has now shifted toward robotics and AI. In an article in The Atlantic, Martin Ford explained why a robot might find it easier to replace a radiologist than a housekeeper: “A housekeeping robot would need to be able to recognize hundreds or even thousands of objects that belong in the average home and know where they belong. In addition, it would need to figure out what to do with an almost infinite variety of new objects that might be brought in from outside.”
Perhaps a more significant obstacle when designing a robot to perform housekeeping is that it wouldn’t reduce costs much. A robot that can build a car or drive a truck can save a company a lot of money. But tidying one’s living space — traditionally a key part of a woman’s contribution to family life — is not considered “productive” because the effort is not assigned a monetary value. Worse, it is now considered inferior to materialistic pursuits, as indicated by a popular dichotomy that even feminists echo: “work vs stay at home.” In fact, women who stay at home do a lot of work and should be respected for it! (The same goes for men who perform home repair, housework, etc. without getting paid.)
The contribution of housewives to the economy has become a hot topic in India. “Until few decades ago, women were expected to stay at home, and those who wanted to work were often stigmatized. Today it’s mostly the other way round. In many societies work in the productive economy is valued far more than work in the domestic household economy. As discussed earlier, by the productive economy, we mean the production of services and goods that people will pay for or buy. By the term domestic household economy, we mean cooking, cleaning, ironing, emotional nurturing, care for children and the elderly. When these things are done within the family, these are not paid for. As discussed earlier, society tracks the productive economy in many ways. When we worry about joblessness we often think of jobs in the productive economy.”
The India Infoline article quotes the Governor of the Reserve Bank of India: “If mother A went to look after the children of mother B and mother B went to look after the children of mother A, and they each paid each other an equal amount, GDP would go up by the sum of the two salaries. But would the economy be better off?” Alternatively, if they each pay a babysitter, that creates a job; on the other hand, if they take turns watching each other’s children for free, then there is effort and caring but no economic gain.
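The Governor’s thought experiment reduces to simple accounting. A minimal sketch (the salary figure is an arbitrary illustrative assumption):

```python
# The RBI Governor's thought experiment as accounting:
# identical childcare, opposite effects on measured GDP.
# The salary figure is an arbitrary illustrative assumption.

salary = 1000  # what each mother pays the other per month

# Scenario 1: mothers A and B each pay the other to watch the children.
gdp_paid_swap = salary + salary   # two market transactions are recorded

# Scenario 2: they take turns for free -- same care, no transaction.
gdp_free_swap = 0                 # GDP registers nothing

print(gdp_paid_swap, gdp_free_swap)  # 2000 0
```

The children receive exactly the same care in both scenarios; only the measured economy changes, which is the Governor’s point about what GDP does and does not count.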
Where do we go from here? Wendell Berry and other spiritually inclined writers have suggested that we appreciate the earth and respect all living creatures. Perhaps we can begin by appreciating each other and finding ways to relate to our fellow humans that are neither robotic nor materialistic. I wish I could offer more specific advice than that. Use your brain to question authority. Open your heart and listen to your fellow humans. As Berry wrote in his poem “Manifesto: The Mad Farmer Liberation Front”: “So, friends, every day do something / that won’t compute.”
Additional reading and viewing:
Gracy Olmstead’s rebuttal of birth strikers in a Sept 19 op ed in the New York Times. Quoting Wendell Berry’s comment on interconnectedness from his book The Body and the Earth, she warns against “deciding that we can take human life out of the equation, thus severing the potential hope and life of the unborn from our hopes for worldwide healing.”
An article about the history of modern birth control and its eugenic influences.
In addition to linking creativity and surprise, George Gilder talks about the “eschaton” (final achievement) of technology as a static viewpoint that runs counter to human creativity. And he points out how cloud computing now requires huge installations with massive cooling (like The Dalles in Oregon), similar to 19th century factories.
Humans Need Not Apply, a video about how automation is replacing jobs.
A BBC excerpt from the 2018 documentary The Cleaners, about people in Manila who are contracted to moderate videos uploaded to the internet.