The Anti Anti Technological Unemployment Manifesto
Posted: December 9, 2012
What is technological unemployment and why does it matter?
Technological unemployment is the theory that technology can cause permanent, structural unemployment. Critics of this theory hold the opposite view: although technology can displace workers in the short run, in the long run they will be reabsorbed into the economy as the market finds new jobs for them. (1)
Mainstream interest in this theory has skyrocketed since 2008; search interest in the term has increased more than a hundred-fold since the beginning of the century. (2) It is not hard to understand why. More and more people are beginning to suspect that technological unemployment is partly responsible for one of the worst recessions in history: approximately 12 million Americans and half a million Canadians lost their jobs in the Great Recession of 2008. Of course, recessions have always created joblessness, so this may not be particularly surprising, but the scale is. The figure represents a 5.7-point jump in the unemployment rate, the largest since the Second World War. And although nominal unemployment rates have since improved (7.3% as of October 2013), there is evidence that they do not truly reflect the job market, because unemployment rates do not count part-time workers or those who have given up looking for jobs. (3)(4)(5)
Since we are talking about technological unemployment, a much better measure of joblessness than the unemployment rate is simply the percentage of the population that is employed. The following figure shows that since the Great Recession, the employment-to-population ratio has dropped from 63% to 58%, and has not recovered since. As a result, many economists are beginning to call this phenomenon the 'jobless recovery'. (6)(7)
Technological unemployment attributes much of this joblessness to the incredible pace at which computers and other labor-saving technologies are taking over tasks originally assumed to be possible only for humans.
The capabilities of artificial intelligence are a story that is difficult to tell through mere statistics alone, since this progress is not merely quantitative but qualitative as well. As a result, the best way to introduce the reader to the capabilities of automation is simply to tell three different narratives of AI progress in recent history, in order to build a more accurate mental model of what machines could potentially do in the future.
The Chess playing algorithm
In 1988, Garry Kasparov, the best chess player in the world, predicted that no computer program would ever be able to defeat a human grandmaster in the game. He then famously declared that “If any grandmaster has difficulties playing computers, [he] would be happy to provide [his] advice”. Kasparov was horribly mistaken. (8)(9)
This prediction was invalidated the same year it was made, when chess grandmaster Bent Larsen was defeated by Deep Thought, a program created at Carnegie Mellon University. And although Kasparov himself defeated Deep Thought in his own match, it was less than a decade later that he famously lost a game to Deep Thought's successor, IBM's Deep Blue, in Philadelphia in 1996, and then lost the full rematch the following year. This news greatly shocked the world, for it had previously been thought impossible for a computer, regardless of processing power, to defeat the top human chess player. (8)(9)
This story is not yet over. At the peak of his career, Kasparov achieved an Elo rating of 2851, the highest human rating ever recorded until 2013. Recent chess engines, such as Rybka, have Elo ratings of over 3200. Deep Blue was a supercomputer the size of half a room, containing 30 top-of-the-line processors running in parallel; Rybka can run on a standard laptop. Kasparov's defeat marked the end of human superiority in chess: since then, no human has been capable of defeating a top chess engine. (9)(10)(11)
The Jeopardy! supercomputer

Watson is a supercomputer designed to play the trivia game show Jeopardy!, in which contestants win prize money by answering questions on a wide variety of topics. As these questions may involve puns, wordplay, obscure references, and jokes, it is often difficult to figure out precisely what is being asked. In short, merely competing on the game show had previously been assumed to require human levels of cognitive ability. (12)
As it turned out, this assumption was badly mistaken. Inspired by the success of Deep Blue, IBM took on a different challenge, one that critics called "an impossible task". (12) By 2008, Watson's sophisticated AI and pattern-matching algorithms allowed it to regularly defeat Jeopardy! champions. In 2011, Watson successfully challenged and defeated the two most accomplished human contestants in the history of Jeopardy!. One of the two, Ken Jennings, appended to his final written answer: "I for one welcome our new computer overlords." (13)(14)
Reinventing the Car
Finally, we come to the example of cars. In the first half of the last decade, many predictions were made about the capabilities of AI, as economists and other futurologists attempted to distinguish tasks that can be done by AI from tasks that cannot.
In 2004, economists Frank Levy and Richard Murnane argued that although mundane tasks such as performing arithmetic or repetitive labor can easily be automated, tasks involving intelligence and pattern recognition, where the rules are more complex, can never be. Driving, they claimed, fits this category. Their predictions led to a widespread belief among many in the field of AI, including those in the US government, that driving could never be automated. (15)(16)
As with the above examples, the pessimists initially seemed vindicated. The United States Department of Defense offered millions of dollars in prize money for a driverless vehicle that could navigate an unpopulated desert, in an event called the DARPA Grand Challenge. The first competition, in 2004, was an utter failure: the best vehicle couldn't make it even eight miles into the course, and took hours to do so. For a while, this really did seem like an impossible task. (15)(16)
However, technology continued to advance. By 2010, a span of merely five years, Google had succeeded in building cars that could drive thousands of miles on actual roads without any human involvement at all. It now plans to release those cars for full commercial use in 2016, usable anywhere and in all weather conditions. Technology has once again won. (15)(17)(18)(19)
The truly interesting news is that this new technology stands to displace a huge portion of the working population, possibly as much as 5%. The trucking industry alone employs around 3% of the entire workforce; add in taxi and bus drivers, chauffeurs, delivery services, and transportation companies, and you get an idea of how big an impact this will have. (20)
The moral of these stories is not merely that technology is becoming increasingly capable of doing human-level tasks. It is that we humans have consistently underestimated both the scope and the speed at which technology advances. One common reason history is taught is to educate future generations about the mistakes of the past, so that we can rely on past experience to make better decisions today.
Historically speaking, when claims about what technology could do were presented, we ridiculed them as outlandish and impossible. More often than not, this gut instinct was wrong. When estimating the possibility of a task being automated, we usually rely on the availability heuristic: we try to picture in our minds what the process of automation would involve. Since we have not automated the task yet, we usually draw a blank, which leads us to evaluate the task as impossible, despite the fact that it may become a lot easier once we spend a few more years trying to solve the problem. (21)
Instead, a far more accurate way of evaluating the likelihood that a task can be automated is to take the outside view. When we look back at historical examples of tasks that were claimed to be impossible to automate, we realize that those predictions were very likely to be wrong, and we should calibrate in consideration of this fact.
The March of Automation
The stories told above are merely three of many. Many other tasks have been automated, and many more will be. Author and futurologist Martin Ford has estimated that up to 40% of jobs could be automated in the coming decades. To anyone even remotely interested in technological unemployment, this should be worrying, and it should come as no surprise that it has caused skyrocketing interest in the topic! (20)
Critics can point out that these narratives are merely weak evidence for technological unemployment. They would be correct. The three examples given are anecdotal evidence of the power of automation, and could well have been cherry-picked.
If I had ended my argument here, I would be attacking a straw man, because no technological-unemployment skeptic actually believes that jobs are not being displaced by technology; we have already seen plenty of cases of this occurring. The question in dispute is whether such displacements are permanent or temporary. The stories above were told not as part of the main argument, but to change the belief that certain industries or jobs can never be automated away. This is important because anyone who evaluates technological unemployment under false premises about what technology cannot do will very likely come to false conclusions as well!
This is dangerous. If technological unemployment is real and we fail to recognize it in time, we will waste our resources on poor solutions that do not address the root of the problem. Unemployment rates will continue to grow, driving billions of people into joblessness and poverty, all while leading to great political and economic instability. Human society as we know it will collapse. This is not hyperbole: history has shown that when the livelihoods of large groups of people are threatened, they often resort to drastic, desperate, and violent solutions. The consequences would be cataclysmic. (23)
For this reason, it is imperative that we begin evaluating the plausibility of technological unemployment today, and bring the conversation to the mainstream. I hope to argue for its case in this essay.
A history of Luddites
Although I say we should begin talking about it "today", the concept of technological unemployment is by no means new. In the midst of the industrial revolution of the 19th century, a group of textile artisans calling themselves the Luddites protested against new labor-saving machinery such as the stocking frame and the loom. The movement famously smashed and destroyed machinery in the belief that it was responsible for the hardships suffered by the working class. The Luddites argued that this new technology threatened their livelihood and employment. (24)
Since then, history has associated the concept of technological unemployment with the Luddites. However, they were not the only ones who argued for it. In the 19th century, the infamous Karl Marx warned that the inherent crises of capitalism, its simultaneously destructive and constructive nature, would result in massive overproduction. He believed this overproduction would lead to widespread unemployment as the bourgeoisie cut back their labor force to compensate for increased productivity: the tendency of the rate of profit to fall. He used this to argue that capitalism contained within itself the seeds of its own destruction, and for the inevitability of a socialist economy. (25)(26)
Even arguably the most famous economist in history, John Maynard Keynes himself, argued strongly for it. In his essay 'Economic Possibilities for our Grandchildren' (1930), he wrote: "We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come—namely, technological unemployment". He believed that since we were displacing labor faster than we could find new uses for it, future generations would see massive unemployment. However, he did not see this as a necessary tragedy, for he saw it as a way to dramatically improve the living conditions of individuals, and believed that future generations might only work 15 hours a week. (27)
Albert Einstein, although not an economist, was a believer in technological unemployment as well. He declared that “Technological progress frequently results in more unemployment rather than in an easing of the burden of work for all”. Incidentally, like Karl Marx, he used this to justify the necessity of a socialist economy, where goods are distributed based on need, rather than ‘for profit’. (28)
However, at present it appears as though all these people were wrong! Far from harming the working class, labor-saving technologies like the loom have been responsible for increased productivity and quality of life. Unemployment today is not significantly higher than it was three centuries ago, despite our advances in modern technology. Nothing catastrophic happened when our economy shifted from 95% agricultural employment to 3%, or when the horse-and-buggy industry was put out of business. Most economists are united in stating that new technology does not affect long-term unemployment, a rare thing for a field reputed to be unable to agree on anything. (29)(30)
As a result, in modern times the “Luddite Fallacy” is used pejoratively to disparage anyone who questions the possibility that technological unemployment might exist. (31)(32)
After all, under conventional neoclassical economic assumptions, technological unemployment shouldn't happen. The textbook definition of economics is the study of how to allocate limited resources among agents with unlimited wants. Under such assumptions, if productivity in one sector of the economy increases, diminishing marginal utility will cause the market to demand less labor in that sector, but the economy will simply readjust: workers displaced from the more productive sector will move to whichever sector offers the highest marginal utility. (33)
Consider the example of the hot dog. If it takes 2 units of labor to produce a hot dog and 1 unit to produce a bun, a simplified economy with 30 units of labor will produce 10 hot dogs and 10 buns. If automation makes one sector more efficient, say it now takes only 1 unit of labor to produce a hot dog, then the economy simply shifts: the new equilibrium produces 15 hot dogs and 15 buns instead. In theory, this makes complete sense. (34)
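The arithmetic of this toy economy can be sketched in a few lines. This is a minimal illustration of the readjustment story, assuming (as the example does) that hot dogs and buns are consumed in matched pairs and all labor is re-employed:

```python
def equilibrium_pairs(labor_supply, cost_hot_dog, cost_bun):
    """Number of hot-dog-and-bun pairs produced when all labor is
    employed and the goods are consumed one-for-one."""
    # Each pair costs (cost_hot_dog + cost_bun) units of labor.
    return labor_supply // (cost_hot_dog + cost_bun)

# Before automation: 2 units per hot dog, 1 per bun, 30 units of labor.
print(equilibrium_pairs(30, 2, 1))  # 10: ten hot dogs and ten buns

# After automation halves the labor cost of a hot dog:
print(equilibrium_pairs(30, 1, 1))  # 15: displaced labor is reabsorbed
```

The point of the sketch is that, under these assumptions, no labor is left idle; the freed-up workers simply raise total output.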
The only well-known exception to this pattern, Jevons' paradox, in which increased efficiency in a sector stimulates so much additional demand that employment in it grows, also increases rather than decreases employment. As a result, improved productivity should raise standards of living rather than cause structural unemployment. (35)
So commonly is this believed that its inverse is considered both an insult and a fallacy: the lump of labor fallacy, the contention that the total amount of work available to laborers is fixed. On this view, if productivity increases, the hours laborers can work will decrease, resulting in greater unemployment. Since we have not seen widespread permanent unemployment so far, the contention appears clearly false, and the hot dog example seems to back this up: there is no reason temporarily displaced workers couldn't be rehired in a different sector of the economy where they contribute the greatest marginal good. (36)(37)(38)
Or is this really true? Past performance does not guarantee future results, and our models of unemployment may not be correct. Just because widespread unemployment due to technology has not yet occurred does not mean it never will, and just because we have a logical-sounding explanation for why technological unemployment is impossible does not mean that explanation reflects reality. I believe it can occur.
This puts me against the mainstream consensus of academic economists, which in the great majority of cases is not a good position to be in. Contrary to the tenets of populism, expert consensus is strong evidence that a position is correct: the vast majority of those who disagree with the opinions of experts have been proven wrong time and time again. Expertise is called expertise for a reason; these people spend more time than anyone else studying the issue. Chess experts tend to win games, and medical experts diagnose conditions better than laymen. As an aspiring economist myself, I agree with the vast majority of positions known to be economic consensus, such as tariffs and price ceilings being bad, and floating exchange rates and free immigration being good. (40)(41)(42)(43)
Therefore, to assert such a claim, I must shoulder a great burden of proof to counteract the existing a priori evidence against technological unemployment. I hope I can. I will not only argue that technological unemployment will result, but also explain why such unemployment has not occurred despite centuries of technological advancement.
Technological Unemployment in our time.
Technology has a tendency to advance at an accelerating, rather than linear, rate, and this exponential growth is surprisingly steady. In retrospect, this should be obvious: technology is capability-enhancing. The more technology and science we have, the greater our capacity for further scientific research. Isaac Newton remarked upon this when he said, "If I have seen further it is only by standing on the shoulders of giants." This is a virtuous cycle: each advance feeds back into an ever-increasing rate of progress, which inspires further advances, and so on. This phenomenon is illustrated in the logarithmic graph below, created by Ray Kurzweil in his book The Singularity is Near (2005), a compilation of fifteen independent lists of 'paradigm shifts' considered to be the largest scientific advancements in human history. (43)
However, critics may quickly point out that even these independent lists of historical paradigm shifts are subjective and may be biased toward recent inventions. This may be true. Fortunately, this is not the only way to measure technological change; other, more objective measures show similar trends. Probably the best is simply GDP per capita. Since we are talking about the pace at which labor-saving technology is invented, the average quantity of goods and services produced per person, which is what GDP per capita measures, is a sensible metric. The charts below show that United States GDP per capita follows a relatively smooth exponential curve, and that world GDP per capita follows the same trend, albeit with a kink at the start of the industrial revolution.
The problem with most people's mental model of technological progress is that it is linear. Most people assume that the next fifty years of technological change will be roughly equal in size to the last fifty.
However, we know this is untrue. If GDP per capita doubles on average every 15 years, the amount of technological progress in the next fifty years will be roughly ten times that of the last fifty (a factor of 2^(50/15) ≈ 10). Because our heuristics are linear, to intuitively grasp the year 2050 we would have to imagine the progress we would naively expect by roughly the year 2500. This is an important point because it explains why the next few decades of innovation are likely to be significantly different from the last few centuries. We will come back to this point shortly. (44)
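The compounding behind this claim is simple to check. A back-of-the-envelope sketch, taking the essay's 15-year doubling time as the assumption:

```python
# Growth factor implied by a quantity that doubles every `doubling_time`
# years, compounded over a `horizon`-year window.
doubling_time = 15.0   # years per doubling (the essay's assumption)
horizon = 50.0         # years ahead

factor = 2 ** (horizon / doubling_time)
print(round(factor, 1))  # ~10.1x as much progress as an equal past window
```

Note how sensitive the result is to the doubling time: at a 35-year doubling (closer to long-run US real GDP per capita growth of about 2% a year), the same 50-year window gives a factor of only about 2.7.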
A falsifiable hypothesis.
Is there evidence that current models of unemployment are wrong? The answer is yes: the main arguments given against technological unemployment can be falsified.
In order to elaborate, let me first propose an alternate model of how technological unemployment will affect the working demographic. It is sometimes assumed that technological unemployment will affect all types of people equally; critics then point to the ability of some people to retain jobs as evidence that all people can retain jobs. This is untrue. Permanent displacement caused by technology is not spread equally across the economy: those with lower cognitive ability are the most vulnerable. For some people this is a sensitive topic, and many are tempted to deny that objective differences in intelligence exist, much as both the rich and the poor like to underplay differences in wealth. However, differences in cognitive ability are so essential to the argument that they simply cannot be left out.
The best statistical measure of general intelligence, or g, is the intelligence quotient (IQ). For the purposes of this essay, IQ and differences in cognitive ability will be used interchangeably.
Why the low-IQ are more vulnerable.
Automation does not displace jobs or industries at random. The easiest tasks to automate are displaced first. As mentioned earlier, tasks involving repetition and easily quantifiable rules can be automated most easily, while jobs involving complex, flexible, non-routine creative thinking are far harder. The former category includes many jobs associated with the less intelligent, such as factory-line work, while the latter is associated with jobs commonly thought of as belonging to the intelligent, well-educated elite, such as professors or politicians. Rare exceptions do exist, such as chess grandmasters (who, as we have seen, can easily be beaten by computers) or housekeepers (whose work is notoriously difficult to automate due to its demanding object-recognition requirements), but they are the exception rather than the norm. (45)
Compounding this effect is the fact that, with very rare exceptions at the highest end of intelligence, any job that can be done by a person with an IQ of X can be done by a person with an IQ of X+1, and so on. By induction, we can therefore say that displacement will always affect the less intelligent more than those gifted with superior cognitive ability.
One could object that
(1) The theory of multiple intelligences refutes this argument — since people can have different levels of aptitude for different fields.
(2) This argument ignores the effects of specialization and training. In other words, someone with a lower intelligence could perform a job that someone with a higher intelligence is unable to; since the former might have had education and training in a job that the latter did not.
Neither objection invalidates the statement. Firstly, the theory of multiple intelligences has very little empirical evidence behind it: children who are good at mathematics also tend to be good at English, music, or kinesthetics. (46)(47)
Secondly, even if it were true, the same argument would apply within each specific type of intelligence, since some people must be objectively worse off than others even when all categories of intelligence are taken into account.
Next, (2) is invalid because we are specifically discussing technological unemployment. In this context we can ignore the effects of education or training, because technological unemployment is a claim about structural unemployment, not the temporary kind that can be alleviated through retraining.
How can we tell if this is really true?
Well, we can look at the relationship between intelligence and unemployment. Since job ability is "backwards compatible" with higher intelligence, and automation is more likely to affect the less intelligent, those with lower IQs should have higher unemployment rates. Indeed, this is the case. IQ correlates negatively with unemployment, with a coefficient of -0.73, and the average IQ of an unemployed person is 81. The lowest IQ bracket (below 75) has an average unemployment rate of 12%, compared to 7% at the median and 2% for those with an IQ above 125. (48)(49)
Furthermore, nations with lower average IQs also tend to have worse unemployment rates, even after adjusting for the effects of economic stability and GDP on education and nutrition, both of which boost IQ. (50)
According to the classical assumptions of economics, this makes no sense! Although the low-IQ are less capable of performing certain jobs, such as being professors or doctors, in theory this should make no difference to unemployment rates, since these people are perfectly capable of finding jobs in other industries. And yet here is hard data contradicting the classical model: those with lower cognitive ability are far more likely to be unemployed!
Some may object that exogenous factors are at play. For instance, perhaps the lowest IQ bracket encompasses those who suffer from mental illness, or are otherwise not competent enough to work. But this objection is precisely the point I am trying to prove! The classical economic arguments against technological unemployment assume that everyone has useful labor to contribute. The argument for technological unemployment is that this is not true: useful labor exists on a sliding scale, and the advancement of technology ensures that those at its far end will be left behind by technological change!
So we conclude that those unlucky enough to have lower cognitive ability are destined for permanent unemployment. They are not, however, the only ones at risk. Since automation is only useful for processes with easily quantifiable rules, many hold the false belief that complex, high-status jobs requiring strong cognitive ability can never be automated away. This is untrue because, as the example of driving shows, complex systems are merely an emergent phenomenon of quantifiable rules; the difficulty lies only in finding out what those rules are.
How quickly can we expect the advancement of automation to replace each marginal worker? Unfortunately, statistics on how low-IQ unemployment has changed over time are not available. However, a good proxy is simply the level of education.
Firstly, education is already known to be a good proxy: it correlates strongly with IQ (in both directions!), and there is a definite relationship between the adoption of a technology and the education necessary to use it. In other words, there is a minimum threshold of IQ or education required to use any piece of technology, and individuals who fall below this threshold have great difficulty adapting. (51)(52)(53)(54)
This is bad news, because over the last century unemployment rates at lower levels of education have increased significantly. Currently more than half of all high school dropouts above the age of 25 are without work, and this discounts those who have given up looking. We can hypothesize that this happens for the same reason that low-IQ individuals drop out of the workforce. (55)
This is a problem because the proportion of people earning degrees has stagnated, which suggests a hardwired limit to the maximum educational attainment that sections of the population can reach. This can be seen in the figure below, released by the US Census Bureau, where the high school graduation rate has capped at around 85% for both sexes: since 1977 for males and since 1997 for females. (56)
The worse news is that this stagnation of human capability is not limited to education. There seems to be a hardwired limit to human intelligence as well. Some readers may be familiar with the 'Flynn effect', the observed phenomenon that intelligence test scores have been increasing by roughly 5 to 25 points per generation, a growth that has been remarkably linear and consistent across multiple countries. This growth is not a statistical artifact but the result of changing environmental factors such as better nutrition, schooling, decreased rates of infectious disease, and iodine supplementation. (57)(58)(59)
Unfortunately, there is evidence that the Flynn effect is coming to an end. In Norway, the rate of IQ increase dropped from 3 points per decade in the 1950s to 1.3 points in the last decade. In Australia, a measure of general IQ in children aged 6-11 has shown no increase since 1975. In the UK, tests carried out since the 1980s have actually shown a decline in average IQ of 2 points. Suffice it to say, the low-hanging fruit of IQ gains seems to have been plucked. (59)(60)(61)
So if human capital growth is starting to decline while technology grows at an ever-accelerating rate, what does this mean for unemployment? The answer depends on the economic model being used. The assumptions one uses to produce economic predictions and models matter, because small shifts in axioms can radically change the conclusions drawn.
In most (but not all) of mainstream economics, labor-saving technology is modeled as a complement to labor rather than a substitute. A complementary good is one that is more useful when used with another product, and therefore rises in demand when the other product is plentiful. A substitute good, on the other hand, is one capable of replacing another product, so its existence reduces demand for that product, and vice versa. (62)(63)(64)(65)(66)(67)
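The complement/substitute distinction can be made concrete with two toy production functions. These particular functional forms (Cobb-Douglas for the complement case, linear for the substitute case) are my own illustrative choices, not from the essay's sources: when capital complements labor, adding machines raises the marginal product of labor, and with it the wage firms will pay; when machines are perfect substitutes, labor's marginal product is a flat ceiling that more capital never lifts.

```python
def mpl_complement(capital, labor, alpha=0.5):
    """Marginal product of labor under Cobb-Douglas production
    Y = K^alpha * L^(1-alpha), where capital complements labor."""
    return (1 - alpha) * capital ** alpha * labor ** (-alpha)

def mpl_substitute(capital, labor, b=1.0):
    """Marginal product of labor when output is Y = a*K + b*L and
    machines can do exactly what workers do: a constant, regardless
    of how much capital accumulates."""
    return b

L = 100.0
print(mpl_complement(100.0, L))  # 0.5  : baseline wage ceiling
print(mpl_complement(400.0, L))  # 1.0  : 4x the machines doubles it
print(mpl_substitute(400.0, L))  # 1.0  : more machines never raise it
```

In the substitute case, once machine costs fall below that flat marginal product, the worker is simply replaced.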
And this is not an unreasonable modeling choice. Throughout most of human history, technology has acted as a complement to human labor: by increasing the productivity of the worker, it increases demand for labor, since each marginal unit of labor now produces more utility. (68)
However, just because something has historical precedent does not mean it will remain that way forever. Historical increases in productivity raised net wages because the skill required to use each new labor-saving technology still fell below the maximum difficulty workers could adjust to. This effect fails when a worker falls below the minimum cognitive ability required to use that level of technology. Such a worker is forced back to the last labor-saving device he is capable of using, and in a market with greater supply, the new technology now acts as a substitute for him: demand for his labor falls. Some readers may accept this conclusion but point out that it does not necessitate widespread unemployment, only a serious fall in wages until an equilibrium of prices is reached.
This is incorrect. There is a minimum salary below which workers will not accept jobs, and not unreasonably: a minimum level of income is necessary to provide for basic necessities and the means of survival. Food costs money, so any wage offered below this threshold will be rejected by the worker, since accepting it is unsustainable and they would begin to lose money. There is therefore a limit to how far wages can fall, below which unemployment will start to occur. (69)(70)(71)(72)
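This wage-floor mechanism can be sketched as a toy decision rule. All of the numbers here are hypothetical, chosen only to illustrate the argument: as automation pushes down the wage a given worker can command, employment persists until the offer crosses the subsistence threshold, at which point the worker rationally exits.

```python
def employment_outcome(offered_wage, subsistence_wage):
    """A worker accepts a job only if it at least covers subsistence;
    below that, working is a net loss and the offer is refused."""
    if offered_wage >= subsistence_wage:
        return "employed"
    return "unemployed"

SUBSISTENCE = 15.0  # hypothetical daily cost of basic necessities

# Wages a low-skill worker can command as automation advances:
for offered in (40.0, 20.0, 10.0):
    print(offered, employment_outcome(offered, SUBSISTENCE))
# 40.0 employed / 20.0 employed / 10.0 unemployed
```

The qualitative point is that wages cannot adjust downward without bound, so at some level of substitution the adjustment shows up as unemployment rather than as lower pay.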
The only reason technological unemployment has not yet occurred is that, so far, human capital growth has outpaced technology. However, we have seen that the former is slowing down while the latter is speeding up due to exponential growth. This is aptly pictured in the graph below: although computer technology starts out slow, eventually the two lines cross, and that is the moment technological unemployment begins. (72)
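The crossover dynamic can be shown with a back-of-the-envelope calculation. The growth rates and starting indices below are purely illustrative assumptions of mine, not estimates from any cited source; the point is only that an exponential curve eventually overtakes a slowing linear one:

```python
# Hedged toy model: slow, roughly linear human-capital growth versus
# exponential growth in machine capability, and the period at which
# the two curves cross.

human = 100.0    # index of average worker capability
machine = 1.0    # index of machine capability
year = 0
while machine < human:
    human += 0.5        # linear human-capital growth per period
    machine *= 1.25     # exponential growth in computing per period
    year += 1
print(f"curves cross after {year} periods")  # prints 22 with these numbers
```

Changing the assumed rates moves the crossing point earlier or later, but as long as one curve is exponential and the other is not, a crossing always arrives.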
In order to evaluate our theory, we must form testable predictions about what would follow if it were true. As it happens, the evidence seems to fit.
If machines complement labor, then demand for labor should rise as automation increases. If, on the other hand, machines are substitutes for some fraction of the population, then wages for that demographic should stagnate or even fall as demand for those workers goes down. Statistics on this are available. Throughout most of history, the former has been the case: statistical measures of capital, such as GDP or labor productivity, have correlated strongly with worker wages, and every technological breakthrough in worker productivity has raised standards of living across the board. Since the 1980s, however (incidentally, the same time university degree growth began to stagnate), this trend has changed, in a phenomenon referred to as ‘the great decoupling’. Despite great increases in both GDP and productivity, worker wages have actually stagnated. This trend can be seen in the graph below. (73)
A final objection to all I have said so far is not a criticism of these arguments, but an appeal to other explanations for the decline in employment.
Some examples of these objections: government-imposed barriers to entry, high tax rates, the difficulty of retraining, a lack of innovation, and legislative bureaucracy have all been blamed for preventing jobs from returning. While all of these factors likely play some part in our current level of unemployment, this objection overlooks the fact that the existence of one factor does not rule out the others. It is entirely possible that addressing any of these factors would improve our unemployment rates, while technological unemployment still adds to those rates on top of them. (74)(75)
The Road Ahead
If technological unemployment is real, and I have made a strong case that it is, then we need to start discussing solutions right now. That it has not yet happened in any significant way does not mean it never will. The first step to solving a problem is acknowledging that it is a problem, and that is what I hope people begin to do, starting with an extensive discussion of technological unemployment in mainstream economics instead of dismissing it as a mere “luddite fallacy”.
Although throughout this paper I have made references to what mainstream economists believe, it is important to note that not all economists are technological unemployment deniers; the concept does have present-day advocates. Jeremy Rifkin argued for it in The End of Work, Martin Ford in The Lights in the Tunnel, and Erik Brynjolfsson and Andrew McAfee in Race Against the Machine. Hopefully, more economists will begin to take this theory seriously.
Once we acknowledge the problem, we can begin to propose solutions, and a number of suggestions already exist. It is important to note that one unacceptable solution would be the knee-jerk reaction of stifling technological advancement in the name of helping the jobless. Although no doubt well intentioned, such measures only diminish wellbeing in the long run, and this is the primary reason the Luddites were wrong. Technology is a tool that could lift billions of people out of the poverty trap and away from a Malthusian crisis; not to use it would be a waste.
A more reasonable solution is a basic income. Although this may sound ludicrous at first, analysis shows it is far more plausible than commonly assumed. Economists typically considered right wing, such as Friedman and Hayek, have strongly supported it, and basic income experiments in several parts of the world, such as Panthbadodiya, India, and Dauphin, Manitoba, have resulted in the government actually saving money in the long run thanks to better health and education outcomes. A full analysis of basic income is beyond the scope of this paper, so we will leave thinking about it as an exercise to the reader. (76)(77)
(7) Race Against The Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy, Erik Brynjolfsson (2011)
(8) Feng-hsiung Hsu, Thomas Anantharaman, Murray Campbell, and Andreas Nowatzyk, “A Grandmaster Chess Machine,” Scientific American, October 1990.
(9) The Signal and the Noise, Nate Silver (2012)
(14) The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, Martin Ford. (2009)
(15) The New Division of Labor: How Computers Are Creating the Next Job Market Frank Levy & Richard J. Murnane
(21) Schwarz, Bless, Strack, Klumpp, Rittenauer-Schatka & Simons, 1991, “Ease of retrieval as information: Another look at the availability heuristic.” Journal of Personality and Social Psychology, 61(2), 195–202.
(23) Collapse: How Societies Choose to Fail or Succeed, Jared Diamond (2011)
(24) Palmer, Roy, 1998, The Sound of History: Songs and Social Comment, Oxford University Press, p. 103
(36) Why economists dislike a lump of labor, Tom Walker, Review of Social Economy, 2007, Vol 65, Issue 3 Page 279-291.
(43) Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology, Viking Adult, 2005, p. 19.
(45) The New Division of Labor: How Computers Are Creating the Next Job Market, Frank Levy (2004)
(59) David F Marks (2010). “IQ variations across time, race, and nationality:an artifact of differences in literacy skills”.
(60) Cotton, S. M.; Kiely, P. M.; Crewther, D. P.; Thomson, B.; Laycock, R.; Crewther, S. G. (2005). “A normative and reliability study for the Raven’s Colored Progressive Matrices for primary school aged children in Australia”.
(72) Lights in the Tunnel, Martin Ford.