  • Our Neglected Physical Economy and the Alleged End of Work

    For many years America has been guilty of hollowing out the physical, intellectual and human capital needed to compete effectively in the industries which deliver physical solutions.

    Why is this?

    Beginning in the 1990s, there was a widespread perception that the digital techno-economic revolution would lead to a world in which only virtual things really mattered. Physical things became an afterthought.

    And the implication was that those physical activities could be done by anybody, anywhere. In that virtual world, outsourced digital supply chains magically produced the food, energy, infrastructure and devices we needed to live our lives on-line.

    In such a world, Americans were supposed to design things and provide services to each other.

    To the extent that physical products were still needed, the consensus assumed that “someone else, somewhere else” would reliably turn those designs into finished products; these would magically show up exactly when and where our businesses and households needed them.

    That’s how Taiwan Semiconductor Manufacturing Company became the indispensable pivot-point of the global economy.

    An important corollary to this axiom was the assumption that as technology-driven productivity soared and demand was satiated, we’d have an ever-growing surplus of labor, especially outside the STEM sector. So, it’s not surprising that the 21st century has seen the resurgence of what we call the “end-of-work myth.”

    How is all of this related? And why does it make a huge difference in terms of public policy, investment opportunities and our quality of life?

    Consider the facts.

    As economist Alexander Field lucidly documented in his book A Great Leap Forward: 1930s Depression and U.S. Economic Growth, America’s productivity by 1929 had jumped by two-thirds compared to 1919. Field elucidated the cause as not one invention but the confluence of three trends:

    1) the machines that allowed the electrification of industries, 2) the advances in knowledge of underlying production processes and 3) the emergence of new materials with the advent of modern chemistry.

    That’s important because there are clear parallels between the 1920s confluence of three sets of analogue trends and the 2020s confluence of corresponding digital trends.

    The parallel 2020s trends relate to 1) the machines that allow access to and manipulation of information on a practically unlimited scale, 2) the accumulation of granular real-time information about nearly “everything” and 3) the emergence of new materials enabled by our new understanding of nanotech and biotech.

    As shown in a chart in the printable issue, the ebb and flow of productivity growth is neither constant nor linear. Enormous surges like we saw in the 1920s reappeared in the 1960s and the Dot-Com era. Meanwhile, the huge productivity slump in the 1930s was echoed in the Carter and Obama years.

    The message is clear: we have ups and downs in innovation, which lead to ups and downs in productivity growth. Those “ups” have emerged when complementary trends appear with respect to machines, materials and knowledge. And, as Trends subscribers know, all three such trends are primed to make the coming decade a time of extraordinary opportunity, much like the 1920s.

    Highly respected technology futurist Mark Mills examined these and other crucial relationships in his most recent book, The Cloud Revolution.

    He says, “In looking at what awaits us in the 2020s, it’s clear that innovations have shattered what we implicitly and explicitly think about the limits to growth.” 

    Yet, according to Mills, “Many claim that the acceleration of AI will entail a permanent elimination of myriad jobs, from factory workers to fast food employees.

    “A coterie of economists predicts that automation, robots, and especially AI, portend the end of work in huge portions of the economy. The unemployment levels will demand, we’re told, that society put in place a ‘universal basic income’ for those who are, permanently, unemployable.”

    “Though it’s now framed in a novel way, this, of course, is neither a new argument nor a new solution. In fact, it goes back at least to ancient times. Fears of machines displacing human labor took full flower in its modern form in the 1930s, when industrial automation was rapidly expanding, taking society past its very long history of craft production into the era of mass production.”

    And while the GDP and productivity slumps of the 1930s didn’t last, the fear of technology-driven unemployment persisted. “Anxieties over automation resurfaced in the 1960s when…President Johnson would follow with ‘a Blue-Ribbon commission.’”

    [It] concluded that technology did not threaten employment, [but] nonetheless recommended an “insurance policy” against such a possibility, proposing that the government create “a guaranteed minimum income for each family.”

    Later, the “end of work” crowd found another reason to worry when the mass-production paradigm was roiled by the first stage of the Digital Techno-Economic Revolution. As Mills notes, fast forward to 1976, when Wang Laboratories introduced the first practical word processor. Word processing - first as stand-alone machines, then absorbed into PCs - quickly supplanted the old corporate “typing pool” as well as most secretarial jobs, which, at that time, were mostly held by women.

    This clerical labor-saving technology, it bears noting, came on the scene around the same time that a huge wave of women was entering the workforce. Yet again, as the data show, neither general nor female unemployment soared.

    The same pattern followed the introduction of spreadsheet and computer-graphics programs, which eliminated many number-crunching and drafting jobs. Those new labor-saving tools, along with many other similar technological advances of the 1980s and 1990s, were contemporaneous with overall U.S. employment growth in those decades.

    Looking ahead, Mills argues in The Cloud Revolution that a convergence of technologies will drive an economic boom over the coming decade, one that historians will characterize as the “Roaring 2020s.” It will come not from any single big invention, but from the confluence of radical advances in three primary technology domains: microprocessors, materials, and machines. Microprocessors are increasingly embedded in everything. Materials, from which everything is built, are emerging with novel, almost magical capabilities. And machines, which make and move all manner of stuff, are undergoing a complementary transformation based largely on data and software. Accelerating and enabling all of this is the Cloud, history’s biggest infrastructure, which is itself based on the building blocks of next-generation microprocessors and artificial intelligence.

    Mills sums this up by observing that, “episodic recessions were and are inevitable. And blaming unemployment and economic downturns on labor-saving automation - rather than poor governance, incompetence, shortsightedness, or other human failings - is a very old tradition.”

    However, we know two things about the effect of the continual flow of technology changes that have occurred since the late nineteenth century.

    The first is that profound advances in technology have led to greater productivity, by being “labor saving,” which in turn has boosted the overall economy so much that real per capita wealth in the U.S. has grown ten-fold.

    The second is that despite all the “labor saving,” about 95 percent of willing and able people have, on average, continued to be employed over that entire 150-year period, episodically fluctuating due to cyclical recessions.

    If labor-saving technology were a net job destroyer, the unemployment rate should have been continually rising over all that history. But it wasn’t.
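    That 150-year record is easier to appreciate in compound-growth terms. A minimal sketch, using the text’s round figures of a ten-fold rise over roughly 150 years, shows that the implied average real growth rate is only about 1.5 percent per year:

```python
# Back-of-the-envelope check, using the round figures from the text:
# a ten-fold rise in real per capita wealth over roughly 150 years.
growth_factor = 10.0
years = 150

# Solve growth_factor = (1 + r) ** years for the annual rate r.
annual_rate = growth_factor ** (1 / years) - 1
print(f"Implied average real growth: {annual_rate:.2%} per year")
```

    Even that modest-sounding annual rate, compounded for a century and a half, multiplies wealth ten times over.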

    MIT economist David Autor has been particularly eloquent on the apparent paradox of the continued rise in employment despite inexorable advances in labor-reducing technologies, observing that, with regard to the prospects for employment growth, “the fundamental threat is not technology per se but misgovernance.”

    Of course, where and how most people are employed has changed over time, with agricultural jobs almost vanishing, manufacturing shrinking and services exploding.

    Nevertheless, we’re now told that the technologies of AI and robotics will do the same thing to the service and manufacturing industries that industrial technology did to agriculture.

    However, this analogy between factories and farms is fallacious! The error starts by ignoring a central fact: the consumption of “manufactured goods” is not constrained by the same physical realities as the consumption of food. In mature economies, food demand and food production rise roughly along with population growth.

    Only in under-nourished emerging markets is there a significant potential for increase in demand; but even in these there is only a two-fold difference in per capita calorie intake between wealthy and subsistence diets. Meanwhile, demand for manufactured items can grow as fast as wealth; i.e., far faster than a population grows.

    As incomes rise, people buy more products that create comfort, convenience and entertainment. And innovators continually create new demands by inventing new products, a feature impossible in agriculture.

    This core difference between food and fabricated things is clearly visible in the data. Agricultural consumption has closely tracked population growth over the past half-century in America, both rising about 80 percent. Meanwhile, the consumption of industrial goods has increased about 300 percent.
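    That divergence comes from small annual differences compounding over decades. Treating the text’s figures as 50-year totals (an assumption for illustration), the implied annual growth rates work out to roughly 1.2 percent for food versus 2.8 percent for industrial goods:

```python
# Implied annual growth rates from the text's half-century totals:
# agricultural consumption up ~80% (1.8x), industrial goods up ~300% (4x).
years = 50
ag_annual = 1.80 ** (1 / years) - 1     # roughly 1.2% per year
goods_annual = 4.00 ** (1 / years) - 1  # roughly 2.8% per year
print(f"Food: {ag_annual:.2%} per year; industrial goods: {goods_annual:.2%} per year")
```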

    Even without considering myriad yet-to-be-commercialized or yet-to-be-invented products, we know that dramatic growth in global demand is possible. That’s especially true for life-changing items such as air conditioners, cars, computers, appliances and other so-called “luxuries,” as costs are driven down. 

    In many cases, potential demand is 10 to 100 times today’s level. For instance, billions of people live in countries where the share of homes with air conditioning or even automobiles is 10 percent or lower, compared to nearly 90 percent in America.

    That means the coming demise of physical products has been greatly exaggerated. Obviously, soaring untapped demand for existing and new products is going to come on-line as technology makes them ever more capable and inexpensive.

    Furthermore, the infrastructure required to support the exponential growth of the virtual economy involves enormous amounts of hardware and gigawatts of power to run it. Interestingly, the way we account for things hides this from view.

    That’s because when a manufacturer or utility outsources things ranging from pure research to design to data centers to supply chain management to marketing support to certain elements of contract manufacturing, the jobs and economic activity show up as “business services,” rather than manufacturing.

    This gives the false impression that manufacturing is dwindling faster than it really is. However, that’s not the whole story, because the U.S. has in fact been ceding its leading-edge manufacturing capabilities to offshore suppliers primarily located in Taiwan, Germany, South Korea and Japan.

    So don’t be fooled. Surging productivity and falling product costs in the age of AI and robotics will lead to more, not less, work. The question is whether Americans will be doing that work. And much of that will be determined by whether Americans are willing and able to do it.

    Interestingly, research implies that only a small fraction of America’s jobs in the 2020s will be in STEM, per se. There is a skills shortage in America. But the primary shortages are found in the skilled trades, ranging from machine operators to technicians and welders, where half a million openings go unfilled each year.

    STEM jobs overall, which include much more than scientific and engineering jobs, still constitute only about 6 percent of the total workforce. In fact, despite the recent rush to encourage every grade-schooler and retiree to learn to write software code, there are still fewer people employed as coders than as farmers and agricultural workers.

    In fact, America is not facing a deficit of STEM–educated graduates. It is true that there is intense demand for and a shortage of people with certain specific degrees - especially in data analytics, machine learning, and AI. But, overall, America produces each year roughly 50 percent more STEM graduates than there are STEM job openings. As a result, more than 11 million Americans today have a STEM degree but are employed in a non-STEM job.

    Furthermore, we also know from history that engineers will strive to make technology not only better and cheaper, but also easier to operate by non-experts. We can see particular success in this regard with the intuitive software many people use casually today, accomplishing computational feats that only a handful of experts could have performed in earlier times.

    In the future, AI - and even coding - will become increasingly easy for laypeople to use. That will help more people become “knowledge workers” and give them what coders call “natural-language” access to expertise anywhere, at any time.

    And AI is now also bringing greater productivity to writing code for new AI software itself. One new company touts an AI-based automated coding system that can produce critical software 10 to 100 times faster than a human working alone.

    The system is reducing expert labor hours needed to create AI, while simultaneously democratizing the use of AI by non-experts.

    None of this obviates the fact that society is migrating toward an era of ambient computing in which every business and job will have increasingly knowledge-centric features and thus a collateral need for knowledge-capable workers.

    As Mark Mills observes, “when it comes to STEM skills, some pundits seem to conflate three related but different issues: 1) the role that STEM workers will play in propelling the infrastructure of the new era, 2) the number and kinds of other jobs that an expanded economy will generate, and 3) the fact that the combination of AI & the cloud upskills everyone.”
     
    The post-coronavirus economy has accelerated attention to all of these issues. Policymakers around the world are again focused on re-shoring jobs and supply chains associated with essential industries, healthcare not least among them. But when it comes to reinvigorating manufacturing itself, we’re told that this won’t lead to significant gains in employment because automation and information tech will take those jobs.

    Instead, the claim goes, we’ll see a continuation of the trend of recent decades toward declining factory employment in modern economies. But the data don’t support this outlook. For instance, from 2010 to just before the COVID shutdowns, both employment and output in the U.S. manufacturing sector actually increased.

    Meanwhile, the enormous quantity of physical resources dedicated to enabling the virtual economy continued to grow at an accelerating yet underappreciated rate. That includes unimaginably large datacenter networks connected by unprecedented bandwidth to tens of billions of increasingly capable devices.

    Rather than leading to fewer jobs and less production, the refinement, maintenance and enabling of this capability will require more of both.

    Given this trend, we offer the following forecasts for your consideration.

    First, between now and 2032 the price-performance of information technology will improve 1000-fold, dramatically improving our quality of life.

    That will lead to far lower prices and exponentially greater functionality. And that cheap, powerful technology will be available to more people in more places. Applications that were once thought impossible will become commonplace and ubiquitous.
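    As a quick sanity check on this forecast (taking “now” to mean roughly 2022, a ten-year horizon, which is our assumption), a 1000-fold gain implies price-performance doubling about once a year:

```python
# What annual improvement does a 1000-fold gain over ten years imply?
overall_factor = 1_000.0  # forecast improvement by 2032
years = 10                # assumed horizon: roughly 2022 to 2032
annual_multiple = overall_factor ** (1 / years)
print(f"Implied annual multiple: {annual_multiple:.2f}x")  # roughly 2x, i.e. doubling yearly
```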

    Second, the cloud-enabled explosion in price-performance will require ever-increasing investments in information infrastructure.

    U.S. business already spends roughly $1 trillion a year on “information infrastructure,” and there is every reason to believe this pace will accelerate throughout the coming decade. Information infrastructure includes cell towers, data centers, fiber networks, satellites and many other things.

    And we have to recognize that all of these are physical entities which require raw materials extraction, manufacturing processes, logistical support and installation. That means employing lots of people who are neither hapless baristas nor electrical engineering PhDs.

    Third, aided by AI and robotics, the number and variety of new and useful molecules will increase at an explosive rate, opening up new product and service opportunities.

    Just since we wrote Ride the Wave, the cumulative number of substances available to researchers has more than tripled. Meanwhile, an under-appreciated opportunity lies in recycling molecules as a pathway to efficiently creating next-generation solutions.

    Automated chemical design, synthesis and testing are becoming increasingly cost-effective. Whether we’re talking about cures for cancer and dementia, building flying cars or realizing the potential of low-cost space vehicles, advancing the frontier of chemistry is key. The result will be new companies and better lives.
     
    Fourth, the surging productivity and falling product costs enabled by AI and robotics will lead to more, rather than less work.

    That certainly includes the production of cloud-enabled devices and the building of information infrastructure. Already there are roughly four Internet-connected devices for each human being on the planet. And this is just the start.

    As noted earlier, even without considering myriad yet-to-be-invented products, dramatic growth in global demand is possible, with potential demand for life-changing items such as air conditioners, cars and appliances running 10 to 100 times today’s levels across much of the world. The physical economy is alive and well.

    Fifth, Americans will be the ones doing much of the well-paying work in the physical economy of the 2020s if we make the right policy decisions.

    The pandemic and the invasion of Ukraine have made us realize that physical things are still very important.

    That means it’s not enough for Americans to design things and provide services to each other. And we can’t simply assume that “someone else, somewhere else” will reliably turn those designs into finished products which will magically show up exactly when and where our businesses and households need them.

    In order to minimize risk, reduce costs, increase flexibility and increase effectiveness, physical manufacturing will be re-shored and reintegrated. Fortunately, the United States already has a critical mass in terms of raw materials, intellectual property and consumer demand found nowhere else. The big problem is a shortfall in terms of middle-skilled personnel prepared to work in these industries.

    Resolving this problem will mean forgetting about dysfunctional solutions such as “universal basic income” and focusing on repositioning America’s human capital to address the realities of the Roaring 2020s.

    That means providing Americans with the opportunity to gain the skills they need to perform real jobs in the real world. Beginning in the next Congress, expect to see a new combination of regulatory reforms, tax incentives and targeted support programs that can help remove barriers and kick-start the process.

    Resource List:
    1. AEIdeas. June 7, 2022. Bret Swanson. Robots and Good Jobs on the Other Side of “COVID-flation.”

    2. AEIdeas. February 12, 2022. Bret Swanson. Hard Industries, Hard Work, and Big Opportunities.

    3. Encounter Books. November 2, 2021. Mark Mills. The Cloud Revolution: How the Convergence of New Technologies Will Unleash the Next Economic Boom and A Roaring 2020s.

    4. AEIdeas. November 29, 2021. Bret Swanson. How the Cloud Powers Moore’s Law, and More.

    5. RealClearPolicy.com. January 07, 2022. Bronwyn Howell. Does Artificial Intelligence Really Reduce Jobs? A Historic Perspective.

    6. AEIdeas. January 12, 2022. Shane Tewes. How Can Technology Help the Supply Chain? Highlights from My Conversation with Glenn Richey.