
Shane Greenstein on Jobs, Inequality, Financial Crises, and the Future of the Internet

May 2018. GrowthPolicy’s Devjani Roy interviewed Shane Greenstein, the Martin Marshall Professor of Business Administration at the Harvard Business School and co-chair of the Harvard Business School Digital Initiative, on jobs, inequality, financial crises, and the future of the Internet.

Links: Shane Greenstein’s faculty page at Harvard Business School | How the Internet Became Commercial: Innovation, Privatization, and the Birth of a New Network (Princeton University Press, 2015) | NBER research page | Harvard Business School Digital Initiative

Growthpolicy.org: Where will the jobs of the future come from?

Shane Greenstein: I am a technology economist at heart, so let me give an economic answer that stresses jobs created by advances in technology. Many jobs are created when a new technology becomes scalable, firms deploy it widely, and users adopt it in a wide set of circumstances. There tend to be good jobs associated with both the transitional building and deployment of such businesses, as well as with their regular operations. These types of jobs are especially common in electronics, information, chemicals, medicine, and other science-based activities.

I expect these jobs to continue to grow. All hype aside, the U.S. tends to do this well, especially in comparison to most other countries.

More narrowly, a few technologies on the horizon appear to have the capacity to create many jobs. Among them are the deployment of new uses for AI, such as neural networks. Others are applications of big data to distribution and operations, 5G for wireless data transmission, VR/AR for a wide set of applications, support services for many new “autonomous services,” and a range of complementary activities around all of those. The medical field also holds the potential for a range of new jobs affiliated with the deployment and use of precision medicine and related forms of immunotherapy that use a person’s unique biological makeup. These are just examples, but enough to give you the idea.

Growthpolicy.org: What should we do about economic inequality?

Shane Greenstein: Economic inequality has many complex causes. I have studied only a small part of its causes—principally how technology contributes to regional inequality in the US. Let me speak about that.

Start with a simple fact: some regions are richer than others. How does technology interact with that fact? More precisely, does the deployment and diffusion of technologies make existing regional inequality better or worse? I did some work with Chris Forman (at Cornell University) and Avi Goldfarb (at the University of Toronto), which examined that question during the deployment of the commercial Internet, i.e., during the first buildout of the Internet in U.S. business in the late 1990s.

The answer is subtle, and neither simple nor intuitive. The Internet comparatively benefited areas that were already doing well. It is important to stress the “comparative” aspect because little evidence suggests the first generation of the Internet made any region worse off. Rather, the evidence suggests technology-led growth enabled rich areas to gain more than other areas, even when everyone was growing. Rich areas pulled further away from less-rich areas, making regional inequality worse.

I have learned a few lessons from this work. Among them, I avoid simple platitudes about inequality and technology. New technology contains the potential to alleviate suffering in the human condition, but this does not happen easily or spread in a pre-ordained pathway. Worsening inequality also does not have a single cause and will not be changed with simple policy prescriptions about technology. And, most of all, the hype in public U.S. political conversation about this topic—which presently gets its fuel from witty resentments, old tropes, and Twitter feeds—is often misguided. It confuses the consequences of technology with the consequences of international trade and dozens of other topics and has little relationship to the deeper causes.

Growthpolicy.org: How should we prevent the next financial crisis?

Shane Greenstein: This is a complex topic. There is no way to give a short answer to this question. Sorry, here goes a long one.

Financial crises come in different forms. When I wrote a history of the recent past, How the Internet Became Commercial, I found myself studying the type of crisis that arose during the dot-com bust. Self-dealing and dishonest financial reporting played a role in making that situation worse. Yet, to be clear, the lessons from 2008 differed in some substantial ways because the causes also differed. So let me focus on lessons from the 1990s, the case I have studied.

Two things happened in the dot-com boom of the 1990s. One of them was ok. There was plenty of speculation in the late 1990s. The ultimate level of value for many new businesses was unknown until the businesses generated revenue and developed mature operations at scale. Entrepreneurs conducted experiments (of sorts). These involved considerable learning from experience, but lacked the hallmark of many experiments, namely, deliberately arranged control and treatment. Said simply, the experimentation did not take place in a laboratory. It took place in a market with actual customers and involved quite a lot of risk on the part of entrepreneurs and businesses. Investors had to wait for those experiments to mature before resolving their uncertainty. In other words, some part of the run-up in stock values in the 1990s reflected speculative activity and would have arisen in any honest economy.

A second thing happened, however, and it was not ok. The arguments of the late 1990s became distorted away from a healthy place. When the situation is healthy, facts and competing visions vie for primacy in the prevailing view. In contrast, during the 1990s, some participants tipped these arguments in directions that wasted money, because some participants did not voice the bad news they knew. That made the dot-com bubble worse—it lasted longer and encouraged investments that should not have occurred.

Here are two egregious examples. Enron represented its trading of broadband futures as a material success when the division was, in fact, bringing unrealized revenue forward as if it were guaranteed. That was an accounting trick for looking better than reality merited: it assumed an optimistic future and booked gains before they happened, without accounting for risk. This accounting assumption was not publicly acknowledged, so outside evaluators had no idea it was used, and did not discount appropriately. Eventually the reality of the markets proved the assumption invalid, and it became an outright and unreported lie. In the meantime, however, plenty of analysts used those reports to affirm a viewpoint that trading in broadband futures could succeed (when it did not) and continued to invest in national backbone assets and technologies as if they had value (when they had less). As it turned out, this was but one of many questionable accounting tricks undertaken by Enron, which eventually paid the ultimate price for its ways, and went bankrupt.

In another well-known example, WorldCom publicly represented its activities as an actual success when, in fact, it was not. In that case, the chief financial officer literally cheated on the books by not properly accounting for expenses. The CEO had acted as a charming face for the firm, which had undertaken numerous mergers and acquisitions, restructuring the telephone and data networking industries. He and the CFO had shown numbers that reinforced the impression that their previous mergers had found a magic that had eluded others. There had always been skeptics, but these skeptics did not have the upper hand in the public conversations, and their skeptical questions were kept at bay for a couple of years by the dishonest accounting. The fraud was not exposed for several years, and, remarkably, would not have been discovered except for the actions of an honest internal employee who came across some issues during a routine internal auditing job, and pursued the fraud secretly. As it turned out, plenty of analysts used WorldCom’s numbers as evidence that mega mergers in communications markets had more value than was true, in fact. It took a while for the consensus to realize that these newfangled approaches to creating value did not create much value at all.

In both examples, the lying hurt the stockholders and employees of the company, and in both cases, had the truth been known sooner, the prevailing view of analysts would have changed. We cannot know by how much, but we know the direction in which it would have changed: the values of some closely related speculative business activity would have been lowered. It is also possible that the redirection of investment would have raised the value of other unrelated speculative business activity, though today there is no way to know. In any event, the scale of the fraud affiliated with these two examples alone was so large that there can be no doubt the absence of either one would have led to less wasted money.

The book contains a longer discussion of more examples. There were many questionable Wall Street practices that raised similar questions.

How to stop the abuses affiliated with those causes? The key is not to skimp on auditing functions, and not to suppress bad news. Users, investors, and the vast majority of firms would be better off with access to all information, both fact-based and subjective assessments. We had a financial system that allowed for some amount of self-dealing, as well as conflicts of interest. That led to a subtle form of suppression of bad news, and it gave financial investing in technology an optimistic bias.

Some have argued that the events in 2008, though not as connected to speculative technology investment, had some of the same root institutional causes—namely, the incentives for poor auditing, and the potential for self-dealing. I will let others sort that out.

It has been very interesting to watch the most recent generation of technologies in that light. There are still some of the same biases against bad news, and today there is more effort to fight those biases. But I suspect we won’t know how well the fight is going until we reach the end of this present round of speculative investment—in blockchain, and artificial intelligence, and immunotherapies. When it is over, we will be able to look back and assess the accuracy of information in the public discussion. As an interim assessment, the reporting issues affiliated with Theranos, where there was quite a lot of fraud, suggest issues remain.

Growthpolicy.org: In the introduction to your latest book, How the Internet Became Commercial, you note that “a crucial question of this book might be rephrased as ‘What role does innovation and commercialization play in creative destruction?’” Have you come closer to answers to this question and, if so, in what way(s)?

Shane Greenstein: The book stresses the role of “Innovation from the edges” as an important and underappreciated contributor to creative destruction—which, incidentally, I continue to regard as a positive and vital feature of a dynamic growing market economy. Three features enable innovation from the edges. Succinctly stated, these relate to place, power, and perception. More elaborately, there are benefits to society when previously peripheral firms acquire enough power to implement a business that reflects their perception about how to generate value. Their point of view about the source of value may differ from the prevailing consensus, which the leading firms had supported. It is good for society to sample the variety of points of view.

Since publishing the book, I have learned a number of lessons about how to explain this thesis. Two phrases I use today are “specialists” and “outsiders,” though neither appears in the book. “Specialist” reflects the simple fact that no firm in technology provides everything. Most services involve a range of partners. The phrase “outsider” is a linguistic simplification for describing a firm with a point of view that differs from the consensus, which usually supports leading firms.

As illustration, consider Google, whose origins I discuss in Chapter 13. From its founding and continuing into the present, Google is a search specialist. To deliver its service to your home, Google must partner with every data carrier around the world, both wireline broadband firms and wireless carriers supporting smartphones. In addition, it must partner with other firms using web technology, smartphone makers, ad exchange operators, content delivery network providers, browser and web server makers, and dozens of others. Moreover, though it may be dominant and visible in its specialty today, this achievement was not a foregone conclusion when it was founded in 1998. It had a point of view that differed from the consensus, and in multiple aspects. For many years it was regarded by analysts as a minor player in the experience of web users, and not a particularly valuable company. Even the leading players in the industry would not license Google’s technology, though they were given the opportunity. Though it doggedly pursued its vision from 1998 onward, for many years the consensus regarded Google’s views as outside the mainstream.

So, said succinctly, I now explain one of the key lessons of my book with different language. Instead of suggesting ways to encourage innovation from the edges, I say, in addition, that policy should foster markets which support specialists who are outsiders. The audience seems to appreciate the main point more readily when I put it that way.

Growthpolicy.org: Given your expertise and wide-ranging knowledge of the evolution of the internet, what, according to you, does the end of Net Neutrality in June 2018 imply for the future of the internet?

Shane Greenstein: This is a misunderstood topic in public discussion. As above, witty observations, old tropes, and Twitter feeds are not particularly useful. I do not know how to give a short answer to this question, so here goes.

Net neutrality policy grew out of a long history. Typically, communications providers are required to serve all potential customers without discrimination; to be transparent about pricing and service quality; and, sometimes, to limit their lines of business. Typically, these mandates are imposed as conditions for the rights to hold an exclusive license or other government-granted monopolies. In this sense, net neutrality regulation is nothing new in principle; it is an application of old principles to broadband Internet networks.

Aligning with these historical trends, the principal issues in the U.S. arise from the presence of monopoly in access networks—namely, the selective competitiveness of regional and local markets for data access over broadband channels, which are usually supplied by firms whose primary business is local telephone service or cable television service. In some of these locations there are competitive forces, and in some there are not. It is not magic, but over time, in the competitive locations things get better faster in comparison to the locations where competitive pressures are not present.

Do we see competitive U.S. broadband markets in all locations? Look, the answer to that is just no. Competitive situations are rare, and that is so by every available measure. It is not difficult to understand why. Entry is limited in many markets by franchise restrictions, by laws forbidding new entrants, by limited access to rights-of-way, and by the inherent expense of building out capacity when users sign up for multiyear contracts. It is no secret what results: there is a lack of choice over broadband providers in many locations, in both business and household markets. Users rarely have more than two choices in broadband access markets, and these markets look nothing like competitive markets elsewhere.

To be sure, there are legitimate debates to have about how many markets approach de facto monopolies, how many approach some degree of competitive conditions, and which places are which. We also see a small amount of “cord-cutting,” in which users punish cable television providers by solely viewing entertainment over one or two local broadband options. But these are secondary questions in comparison to the elephant in the room: the presence of monopoly in most locations.

Why the pessimism? Year after year we see the same things: cable companies have the lowest customer satisfaction ratings. Moreover, the consumer price indices for cable television and Internet access have not declined in a decade, and though I believe that probably gives too pessimistic a view, it is extraordinarily unusual for an electronic good and service to have that record. Even wireless access prices have declined in the last decade. And when industry lobbyists say—accurately—that speeds have gone up, I respond that this is not indicative of anything. Even monopolies will improve their service when technology improves, as it has in this industry. More to the point, the recent experience tells us nothing about what would have happened had more firms faced competitive conditions; I continue to believe that many more lines would have improved, sooner and in many locations, had those locations lived with competitive markets.

What does a government do when competitive discipline is lacking? One approach is to promulgate net neutrality rules aimed to prevent distortions affiliated with lack of competition in access markets. That said, there are legitimate debates about which distortions really matter, and whether these amount to much. In some locations the organizations have professional managers that resist the temptation to exploit the monopoly power, and in other locations poorly managed organizations cannot seem to respond effectively to even the simplest competitive incentives.

Moreover, the actual experience with these distortions in the U.S. is thin, because broadband markets are quite young, and there were ad hoc rules preventing distortionary behavior in the recent past. In addition, other developed countries have organized their markets much differently than the U.S., making it difficult to transport lessons from other places.

Accordingly, one side says, oversimplifying, that we have few examples because regulators did their job. This side asked to codify the previous generation of ad hoc rules. The other side says, again oversimplifying, that firms are generally well-behaved and the likelihood of distortions has been exaggerated. They assert that the absence of rules will not make things worse. That is a long way of saying: it is rather unclear what will happen next.

For the rest of any forecast, the details matter. Most of what we call “net neutrality” contained several elements—rules against blocking, rules requiring transparency, rules requiring equal access to all content providers, and rules preventing discriminatory pricing. These were about setting up a “path” for preventing future distortions, looking to prevent what had been rather rare events up until now. The present chairman of the FCC sought to take away almost all of those rules, giving quite a lot of discretion to carriers.

My own view is that four other factors will make it difficult for carriers to have full discretion in the near term. First, the U.S. carriers do not want to come out looking bad in comparisons with other countries, and that will restrain some of their behavior. Second, carriers do not want to overcommit to one administrative regime, only to find that the next regime punishes them. Third, the U.S. high-tech community watches every carrier and loves catching firms “in the act.” No firm wants to be the target of that scrutiny. Lastly, the courts have been asked to review the decision. Though the courts tend to defer to the judgment of agencies, they also require a certain amount of due process in the decision making. It is not clear this FCC has done enough to clear that legal hurdle in this instance.

I am as curious as anyone. I will watch with interest.

Growthpolicy.org: One of your most-cited research papers makes the case for neutrality on Wikipedia, arguing that despite the heterogeneity in contributors’ ideological slants, these slants are not reflected in the articles they edit, with slants becoming less extreme over time. Have your views on this evolved or changed in any way given the changes in U.S. politics since the paper was first published?

Shane Greenstein: To address this, we have to start by describing Wikipedia’s unusual system, which most people do not understand. Wikipedia gives enormous discretion to its contributors. It couples that discretion with explicitly stated aspirational principles about the site’s goals. Among its many goals, a key principle is “neutral point of view,” or NPOV for short. Simplifying for purposes of this explanation, NPOV can be summarized as “Assert facts, including facts about opinions—but do not assert the opinions themselves.” When participants contribute to the site they must accept the aspirations, and, rather than argue about their own views, they argue about whether they have represented the views of others accurately. That works remarkably well because there are editors and participants committed to those principles. I do not see that changing unless editors and participants at Wikipedia grow tired of enforcing the principles.

Professor Feng Zhu and I have studied this system, most recently with the considerable help of a graduate student, Yuan Gu. Our evidence suggests this system works remarkably well, even in very challenging situations, such as debates over contestable knowledge—where topics involve subjective, unverifiable, and controversial information. And it seems to work rather well with most science and engineering, where topics are objective, verifiable, and uncontroversial. Quite frankly, the biggest issue for Wikipedia is manpower. There simply aren’t enough people making contributions to keep the site growing as fast as the demands readers make of it. (Look, if you benefit from Wikipedia, then give something back. It is easy. Just stop and contribute.)

It surprises me that more of the media platforms do not adopt similar models that stress aspirational principles. For example, it would have been rather easy for Reddit or Twitter or Facebook to state without reservation that they will remove factually false news from unverified sources, and I have been surprised at how slowly and reluctantly some of those platforms have waded into processes resembling Wikipedia’s. It is as if they perceive severe difficulties implementing it. Call me cynical, but some of the reluctance appears to come from the fact that the fix is not really a cool engineering problem, or it costs money, or it angers a fringe of users. In Facebook’s case, it finally seems to have dawned on management that the reputation for the platform was becoming damaged by the behavior of a few polluters of the conversation. That put the entire site at risk.

As for Wikipedia, I worry that it could fall victim to the same shenanigans that shook up other platforms. That is not cynical; it is realistic. The staffs of many corporations and politicians already have been caught in the act of editing pages about those corporations and politicians. And we have only seen the ones who got caught, which implies there must be others. More to the point, I would be surprised if the Russians did not have an outfit trying to change some of the political articles in the English-language Wikipedia. It also would not be surprising to find the Koch brothers doing the same, as well as the National Rifle Association, the Sierra Club, Mothers Against Drunk Driving, and every determined crank with a computer account. I think Wikipedia’s manpower issues may make it difficult to respond to deliberate and disingenuous editing.

Growthpolicy.org: In your research on the economics of online attention, one of your critical findings is that income plays an important role in determining the allocation of time spent on the Internet. You offer evidence for persistent attention inferiority—namely, that higher income households spend less total time online per week. What are the implications of this for policy makers?

Shane Greenstein: It is simple: economic accountants should start paying attention to how much time users spend online, and some of the properties of how they allocate time. It is an error to solely focus on spending. Some of the most important contributions of these technologies to societal growth generate no revenue.