Building a better data economy

It’s “time to wake up and do a better job,” says publisher Tim O’Reilly—from getting serious about climate change to building a better data economy. And the way to build a better data economy, he argues, is through data commons—data as a common resource—not the way the giant tech companies are acting now: not just keeping data to themselves, but profiting from our data and causing us harm in the process.

“When companies are using the data they collect for our benefit, it’s a great deal,” says O’Reilly, founder and CEO of O’Reilly Media. “When companies are using it to manipulate us, or to direct us in a way that hurts us, or that enhances their market power at the expense of competitors who might provide us better value, then they’re harming us with our data.” And that’s the next big thing he’s researching: a specific type of harm that happens when tech companies use data against us to shape what we see, hear, and believe.

It’s what O’Reilly calls “algorithmic rents”: the use of data, algorithms, and user interface design to control who gets what information and why. Unfortunately, one only has to look at the news to see the rapid spread of misinformation on the internet tied to unrest in countries across the world. Cui bono? We can ask who profits, but perhaps the better question is “who suffers?” According to O’Reilly, “If you build an economy where you’re taking more out of the system than you’re putting back or that you’re creating, then guess what, you’re not long for this world.” That matters because users of this technology need to stop thinking about the worth of their individual data and start asking what it means when a very few companies control that data, even when it’s more valuable in the open. After all, there are “consequences of not creating enough value for others.”

We’re now approaching a different idea: what if it’s actually time to start rethinking capitalism as a whole? “It’s a really great time for us to be talking about how do we want to change capitalism, because we change it every 30, 40 years,” O’Reilly says. He clarifies that this is not about abolishing capitalism, but that what we have isn’t good enough anymore. “We actually have to do better, and we can do better. And to me better is defined by increasing prosperity for everyone.”

In this episode of Business Lab, O’Reilly discusses the evolution of how tech giants like Facebook and Google create value for themselves and harm for others in increasingly walled gardens. He also discusses how crises like covid-19 and climate change are the necessary catalysts that fuel a “collective decision” to “overcome the massive problems of the data economy.”

Business Lab is hosted by Laurel Ruma, editorial director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next.

This podcast episode was produced in partnership with Omidyar Network.

Show notes and links

“We need more than innovation to build a world that’s prosperous for all,” by Tim O’Reilly, Radar, June 17, 2019

“Why we invested in building an equitable data economy,” by Sushant Kumar, Omidyar Network, August 14, 2020

“Tim O’Reilly – ‘Covid-19 is an opportunity to break the current economic paradigm,’” by Derek du Preez, Diginomica, July 3, 2020

“Fair value? Fixing the data economy,” MIT Technology Review Insights, December 3, 2020

Full transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is the data economy. More specifically—democratizing data, making data more open, accessible, and controllable by users. And not just for tech companies and their customers, but also for citizens and even government itself. But what does a fair data economy look like when a few companies control your data?

Two words for you: algorithmic rent.

My guest is Tim O’Reilly, the founder, CEO, and chairman of O’Reilly Media. He’s a partner in the early-stage venture firm O’Reilly AlphaTech Ventures. He’s also on the boards of Code for America, PeerJ, Civis Analytics, and PopVox. He recently wrote the book WTF?: What’s the Future and Why It’s Up to Us. If you’re in tech, you’ll recognize the iconic O’Reilly brand: pen-and-ink drawings of animals on technology book covers. And you likely picked up one of those books to help build your career, whether as a designer, software engineer, or CTO.

This episode of Business Lab is produced in association with Omidyar Network.

Welcome, Tim.

Tim O’Reilly: Glad to be with you, Laurel.

Laurel: Well, let’s first mention to our listeners that in my previous career, I was fortunate enough to work with you and for O’Reilly Media. And this is a great time to have this conversation, because of all of those trends that you saw coming down the pike way before anyone else—open source, web 2.0, government as a platform, the maker movement. We can frame this conversation with a topic that you’ve been talking about for a while—the value of data and open access to data. So in 2021, how are you thinking about the value of data?

Tim: Well, there are a couple of ways I’m thinking about it. And the first is, the conversation about value is pretty misguided in a lot of ways. People say, ‘Well, why don’t I get a share of the value of my data?’ And of course, the answer is you do get a share of the value of your data. When you trade your data to Google for email and search and maps, you’re getting quite a lot of value. I actually did some back-of-the-napkin math recently. What’s the average revenue per user? Facebook’s annual revenue per user worldwide is about $30. That’s $30 a year. Now, the profit margin is about 25%. So that means they’re making $7.50 per user per year. So should you get a share of that? Do you think that the $1 or $2 that you might, at the most extreme, be able to claim as your share of that value is what Facebook is worth to you?

And in a similar way, if you look at Google, it’s a slightly bigger number. Their average profit per user is about $60. So, OK, let’s just say you got a quarter of that: $15 a year. That’s $1.25 a month. You pay 10 times that for your Spotify account. So effectively, you’re getting a pretty good deal. So the question of value is the wrong question. The question is, is the data being used for you or against you? And I think that’s really the question. When companies are using the data for our benefit, it’s a great deal. When companies are using it to manipulate us or to direct us in a way that hurts us or that enhances their market power at the expense of competitors who might provide us better value, then they’re harming us with our data.
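
For readers who want to sanity-check the napkin math, here is a minimal sketch in Python using the rounded figures quoted above; the inputs are O’Reilly’s illustrative estimates, not audited financials.

```python
# Napkin math from the conversation above, using O'Reilly's rounded,
# illustrative figures (not audited financials).

def annual_user_cut(revenue_per_user, profit_margin, user_share):
    """Hypothetical yearly payout per user if a platform shared its profit."""
    return revenue_per_user * profit_margin * user_share

# Facebook: ~$30 revenue per user per year at ~25% margin
# -> ~$7.50 of profit per user per year, even before any split.
facebook_profit = annual_user_cut(30.0, 0.25, 1.0)   # $7.50

# Google: ~$60 of profit per user per year; a quarter of that is $15/year.
google_cut = 60.0 * 0.25                             # $15.00 per year
per_month = google_cut / 12                          # $1.25 per month

print(f"Facebook profit per user per year: ${facebook_profit:.2f}")
print(f"Google hypothetical user cut: ${google_cut:.2f}/year, ${per_month:.2f}/month")
```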

And that’s where I’d like to move the conversation. In particular, I’m focused on a particular class of harm that I’ve started calling algorithmic rents. When you think about the data economy, data is used to shape what we see and hear and believe. This became very obvious to people in the last U.S. election. Misinformation in general, advertising in general, is increasingly guided by data-enabled algorithmic systems. And the question that I think is fairly profound is, are those systems working for us or against us? If they have turned extractive, where they’re basically working to make money for the company rather than to give benefit to the users, then we’re getting screwed. And so, what I’ve been trying to do is to document and track and establish this concept of control of the algorithm as a way of controlling who gets what and why.

And I’ve been focused less on the user end of it and more on the supplier end of it. Let’s take Google. Google is this intermediary between us and literally millions or hundreds of millions of sources of information. And they decide which ones get the attention. For the first decade and a half of Google’s existence, and still in many areas that are noncommercial, which is probably about 95% of all searches, they used the tools of what I have called collective intelligence. Everything from ‘What do people actually click on?’ to ‘What do the links tell us?’ to ‘What is the value of links and PageRank?’ All these things give us the result that they really think is the best thing we’re looking for. So back when Google IPO’ed in 2004, they attached an interview with Larry Page in which he said, ‘Our goal is to help you find what you want and go away.’

And Google really operated that way. Even their advertising model was designed to satisfy user needs. Pay-per-click was: we’ll only charge the advertiser if you actually click on the ad, meaning that you were interested in it. They had a very positive model, but I think in the last decade they really decided that they needed to allocate more of the value to themselves. So if you look at a Google search result in a commercially valuable area, you can contrast it with the Google of 10 years ago, or you can contrast it with a noncommercial search today. You will see that if it’s commercially valuable, most of the page is given up to one of two things: Google’s own properties or advertisements. What we used to call “organic search results” are, on the phone, often on the second or third screen. Even on a laptop, they might be a little thing that you see down in the corner. The user-generated, user-valuable content has been superseded by content that Google or advertisers want us to see. That is, they’re using their algorithm to put in front of us not the data they think is best for us, but the data they think is best for them. Now, I think there’s another thing. Back when Google was first founded, in the original Google search paper that Larry and Sergey wrote while they were still at Stanford, they had an appendix on advertising and mixed motives, in which they argued that an advertising-funded search engine couldn’t be fair. And they spent a lot of time trying to figure out how to counter that when they adopted advertising as their model, but, I think, eventually they lost.

So too with Amazon. Amazon used to take hundreds of different signals into account to show you what they really thought were the best products for you, the best deal. And it’s hard to believe that that’s still the case when you do a search on Amazon and almost all of the results are sponsored: advertisers saying, no, us, take our product. Effectively, Amazon is using their algorithm to extract what economists call rents from the people who want to sell products on their site. And it’s very interesting: the concept of rents has really entered my vocabulary only in the last couple of years. There are really two kinds of rents, and both of them have to do with a certain kind of power asymmetry.

And the first is a rent that you get because you control something valuable. Think of the ferryman in the Middle Ages, who basically said, yeah, you’ve got to pay me if you want to cross the river here or pay a bridge toll. That’s what people would call rents. It was also the local warlord being able to tell all the people who were working on “his lands” that you have to give me a share of your crops. That kind of rent, which comes as a result of a power asymmetry, I think is what we’re seeing here.

There’s another kind of rent that I think is also really worth thinking about, which is when something grows in value independent of your own investment. I haven’t quite come to grips with how this applies in the digital economy, but I’m convinced that it does, because the digital economy is not unlike other human economies. Think about land rents. When you build a house, you’ve put in capital and labor, you’ve made an improvement, and there’s an increase in value. But let’s say that 1,000, or in the case of a city, millions of other people also build houses; the value of your house goes up because of this collective activity. And that value you didn’t create—or rather, you co-created with everyone else. When government collects taxes and builds roads and schools and infrastructure, again, the value of your property goes up.

And that interesting question, of communally created value being allocated to a private company instead of to everybody, is, I think, another piece of this question of rents. I don’t think the right question is, how do we get our $1 or $2 or $5 share of Google’s profit? The right question is, is Google creating enough common value for all of us, or are they keeping the increase that we create collectively for themselves?

Laurel: So no, it’s not just monetary value, is it? We were just speaking with Parminder Singh from IT for Change about the value of data commons. Data commons has always been part of the idea of the good part of the internet, right? People come together and share what they have as a collective, and then you can go off and find new learnings from that data and build new products. This really spurred the entire building of the internet—this collective thinking, this collective intelligence. Is it the increasingly intelligent algorithmic possibilities that are starting to destroy the data commons, or is it more a human behavior, a societal change, or perhaps both?

Tim: Well, both, in a certain way. I think one of the big ideas that I’m going to be pushing for the next decade or two (unless I succeed, as I haven’t with some past campaigns) is to get people to understand that our economy is also an algorithmic system. We have this moment now where we’re so focused on big tech and the role of algorithms at Google and Amazon and Facebook and app stores and everything else, but we don’t take the opportunity to ask ourselves, how does our economy work like that too? And I think there are some really powerful analogies between, say, the incentives that drive Facebook and the incentives that drive every company: the way those incentives are expressed. Just like we could ask, why does Facebook show us misinformation?

What’s in it for them? Is it just a mistake or are there reasons? And you say, “Well actually, yeah, it’s highly engaging, highly valuable content.” Right. And you say, “Well, is that the same reason why Purdue Pharma gave us misinformation about the addictiveness of OxyContin?” And you say, “Oh yeah, it is.” Why would companies do that? Why would they be so antisocial? And then you go, oh, actually, because there’s a master algorithm in our economy, which is expressed through our financial system.

Our financial system is now primarily about stock price. And you go, OK, companies have been told for the last 40 years, going back to Milton Friedman, that their prime directive, the only responsibility of a business, is to increase value for its shareholders. And then that got embodied in executive compensation and in corporate governance. We literally say humans don’t matter, society doesn’t matter. The only thing that matters is to return value to your shareholders. And the way you do that is by increasing your stock price.

So we have built an algorithm into our economy that is clearly wrong, just like Facebook’s focus on ‘let’s show people things that are more engaging’ turned out to be wrong. The people who came up with both of these ideas thought they were going to have good outcomes, but when Facebook has a bad outcome, we say, you guys need to fix that. When our tax policy, our incentives, our corporate governance come out wrong, we go, “Oh well, that’s just the market.” It’s like the law of gravity. You can’t change it. No. And that’s really why my book was subtitled What’s the Future and Why It’s Up to Us. We have made choices as a society that are giving us the outcomes we are getting. We baked them into the system, into the rules, the fundamental underlying economic algorithms. Those algorithms are just as changeable as the algorithms used by a Facebook or a Google or an Amazon, and they’re just as much under the control of human choice.

And I think there’s an opportunity, instead of demonizing tech, to use it as a mirror and say, “Oh, we actually need to do better.” And I think we see this in small ways. We’re starting to realize, oh, when we build an algorithm for criminal justice and sentencing, we go, “Oh, it’s biased because we fed it biased data.” We’re using AI and algorithmic systems as a mirror to see more deeply what’s wrong in our society. Like, wow, our judges have been biased all along. Our courts have been biased all along. When we built the algorithmic system and trained it on that data, it replicated those biases, and we go, really, that’s what we’ve been doing all along. And in a similar way, there’s a challenge for us to look at the results of our economy as the results of a biased algorithm.

Laurel: And that really just puts an exclamation point on other societal issues, right? If racism is baked into society and it’s part of what we’ve known as a country in America for generations, how is that surprising? With this mirror we can see, right, so many things coming down our way. And I think 2020 was one of those seminal years that proved to everyone that the mirror was absolutely reflecting what was happening in society. We just had to look in it. So when we think about building algorithms, building a better society, changing that economic structure, where do we start?

Tim: Well, I mean, obviously the first step in any change is a new mental model of how things work. If you think about the progress of science, it comes when we actually have, in some instances, a better understanding of the way the world works. And I think we are at a point where we have an opportunity. There’s this wonderful line from a guy named Paul Cohen. He’s a professor of computer science now at the University of Pittsburgh, but he used to be the program manager for AI at DARPA. We were at one of these AI governance events at the American Association for the Advancement of Science and he said something that I just wrote down and I’ve been quoting ever since. He said, “The opportunity of AI is to help humans model and manage complex interacting systems.” And I think there’s an amazing opportunity before us in this AI moment to build better systems.

And that’s why I’m particularly sad about this point of algorithmic rents. Take, for example, the apparent turn of Google and Amazon toward cheating in systems that they used to run as fair brokers. Because they have shown us that it is possible to use more and more data, better and better signals, to manage a market. There’s this idea in traditional economics that, in some sense, money is the coordinating function of what Adam Smith called the “invisible hand.” As people pursue their self-interest in a world of perfect information, everybody’s going to figure out what their self-interest is. Of course, it’s not actually true, but in the theoretical world, let’s just say it is true that people will say, “Oh yeah, that’s what that’s worth to me, that’s what I’ll pay.”

And this whole question of “marginal utility” is all around money. The thing that’s so fascinating to me about Google organic search is that it’s the first large-scale example I think we have. When I say large scale, I mean global scale, as opposed to, say, a barter marketplace. It’s a marketplace with billions of users that was entirely coordinated without money. And you say, “How can you say that?” Because of course Google was making scads of money, but they were running two marketplaces in parallel. And in one of them, the marketplace of organic search—you remember the 10 blue links, which is still what Google does on a noncommercial search—you have hundreds of signals, PageRank and full-text search, now done with machine learning.

You have things like the long click and the short click. If somebody clicks on the first result and comes right back and clicks on the second link, then comes right back and clicks on the third link, and then goes away, [Google] thinks, “Oh, it looks like the third link was the one that worked for them.” That’s collective intelligence: harnessing all that user intelligence to coordinate a market so that you literally have, for billions of unique searches, the best result. And all of this is coordinated without money. And then off to the side, [Google] had, well, if this is commercially valuable, then maybe some advertising search. And now they’ve kind of preempted that organic search whenever money is involved. But the point is, if we’re really looking to say, how do we model and manage complex interacting systems, we have a great use case. We have a great demonstration that it’s possible.
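
To make the long click and short click idea concrete, here is a hypothetical sketch in Python, not Google’s actual ranking code: a click followed by a quick return to the results page counts against a result, while the last click of a session counts in its favor. The 30-second threshold and the scoring weights are invented for illustration.

```python
# Hypothetical sketch of the "long click / short click" signal described
# above -- not Google's actual ranking code. A click followed quickly by a
# return to the results page ("short click") suggests a poor result; the
# last click with no return ("long click") suggests the result that worked.

from collections import defaultdict

SHORT_CLICK_SECONDS = 30  # invented threshold; a real system would tune this

def score_clicks(sessions):
    """sessions: list of [(url, dwell_seconds or None if last click), ...]"""
    scores = defaultdict(float)
    for session in sessions:
        for url, dwell in session:
            if dwell is None or dwell >= SHORT_CLICK_SECONDS:
                scores[url] += 1.0   # long click: user seemed satisfied
            else:
                scores[url] -= 0.5   # short click: user bounced back
    return dict(scores)

# One session like the one O'Reilly describes: quick bounces off the first
# two results, then the user settles on the third and goes away.
sessions = [[("result1.example", 5), ("result2.example", 8),
             ("result3.example", None)]]
print(score_clicks(sessions))  # result3.example ends up with the best score
```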

And now I start saying, ‘Well, what other kinds of problems can we solve that way?’ Look at a group like Carla Gomes’ Institute for Computational Sustainability out of Cornell University. They’re basically saying, well, let’s look at various kinds of ecological factors. Let’s take lots and lots of different signals into account. For example, they did a project with a Brazilian power company to help them decide where to site dams based not just on what would generate the most power, but on what would disrupt the fewest communities and affect endangered species the least. And they were able to come up with better outcomes than the normal ones. The Institute also did this amazing project with California rice growers, where they realized that if the farmers could adjust the timing of when they released water into the rice paddies to match up with the migration of birds, the birds acted as natural pest control in the paddies. Just amazing stuff that we could start to do.
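
For a sense of what weighing many signals at once can look like computationally, here is a toy multi-objective sketch in Python. The site names and numbers are invented, and this is not the Institute’s actual model; it simply keeps the candidate sites that no other site beats on every objective (the Pareto set).

```python
# Toy illustration of multi-objective siting decisions -- purely
# hypothetical data. Each candidate site is scored on several objectives;
# we keep the Pareto-optimal set (sites not beaten on every objective).

candidate_sites = {
    # name: (power_GWh, communities_disrupted, species_at_risk)
    "site_a": (120, 4, 2),
    "site_b": (100, 1, 1),
    "site_c": (90, 5, 3),   # dominated: site_a is better on all three counts
    "site_d": (140, 6, 4),
}

def dominates(x, y):
    """x dominates y: at least as good on all objectives, better on one.
    Power is maximized; disruption and species risk are minimized."""
    better_or_equal = x[0] >= y[0] and x[1] <= y[1] and x[2] <= y[2]
    strictly_better = x[0] > y[0] or x[1] < y[1] or x[2] < y[2]
    return better_or_equal and strictly_better

pareto = {name: s for name, s in candidate_sites.items()
          if not any(dominates(o, s)
                     for o in candidate_sites.values() if o != s)}
print(pareto)  # site_c drops out; the rest are trade-offs to weigh
```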

And I think there’s an enormous opportunity. This is part of what I mean by the data commons, because many of these things are going to be enabled by interoperability. One of the things that’s so different between the early web and today is the presence of walled gardens. Facebook, for example, is a walled garden. Google is increasingly a walled garden. More than half of all Google searches begin and end on Google properties. The searches don’t go out anywhere on the web. The web was this triumph of interoperability. It was the building of a global commons. And that commons has been walled off by every company trying to say, ‘Well, we’re going to try to lock you in.’ So the question is, how do we get focus on interoperability and lack of lock-in, and move this conversation away from ‘Oh, pay me some money for my data when I’m already getting services’? No—services that actually give back to the community, and create that community value, are far more interesting to me.

Laurel: Yeah. So, breaking down those walled gardens, or maybe I should say creating doors, so that data that should belong to the public can be extracted. How do we actually start rethinking data extraction and governance as a society?

Tim: Yeah. I mean, I think there are several ways that happens, and they’re not exclusive—they kind of come all together. People will look at, for example, the role of government in dealing with market failures. And you could certainly argue that what’s happening in terms of the concentration of power by the platforms is a market failure, and that perhaps antitrust might be appropriate. You can certainly say that the work the European Union has been leading on privacy legislation is an attempt by government to regulate some of these misuses. But I think we’re in the very early stages of figuring out what a government response ought to look like. And I think it’s really important for individuals to continue to push the boundaries of deciding what we want out of the companies that we work with.

Laurel: When we think about those choices we need to make as individuals, and then as part of a society; for example, Omidyar Network is focusing on how we reimagine capitalism. And when we take on a large topic like that—you and Professor Mariana Mazzucato at University College London are researching that very kind of challenge, right? So when we are extracting value out of data, how do we think about reapplying that value in a form of capitalism that everyone can still connect to and understand? Is there actually a fair balance where everyone gets a little bit of the pie?

Tim: I think there is. And this has sort of been my approach throughout my career: to assume that, for the most part, people are good, and not to demonize companies, not to demonize executives, and not to demonize industries. But to ask ourselves, first of all, what are the incentives we’re giving them? What are the directions that they’re getting from society? But also, to have companies ask themselves, do they understand what they’re doing?

So if you look back at my advocacy 22 or 23 years ago, whenever it was, about open source software, it was really focused on this: you could look at the free software movement as it was defined at the time as kind of analogous to a lot of the current privacy efforts or the regulatory efforts. It was like, we’re going to use a legal solution. We’re going to come up with a license to keep these bad people from doing this bad thing. I and other early open source advocates realized that, no, actually we just have to tell people why sharing is better, why it works better. And we started telling a story about the value that was being created by releasing source code for free, having it be modifiable by people. And once people understood that, open source took over the world, right? Because we were like, ‘Oh, this is actually better.’ And in a similar way, I think there’s a kind of ecological thinking, ecosystem thinking, that we need to have. I don’t just mean in the narrow sense of ecology. I mean literally business ecosystems, the economy as an ecosystem. For Google, the health of the web should matter more than their own profits.

At O’Reilly, we’ve always had this slogan: “create more value than you capture.” And it’s a real problem for companies. For me, one of my missions is to convince companies: no, if you’re creating more value for yourself, for your company, than you’re creating for the ecosystem as a whole, you’re doomed. And of course, that’s true in the physical ecology, where humans are basically using up more resources than we’re putting back, passing off all these externalities to our descendants. That’s obviously not sustainable. And I think the same thing is true in business. If you build an economy where you’re taking more out of the system than you’re putting back or that you’re creating, then guess what, you’re not long for this world. Whether that’s because you’re going to enable competitors, or because your customers are going to turn on you, or just because you’ll lose your creative edge.

These are all consequences. And I think we can teach companies that these are the consequences of not creating enough value for others. And not only that, but who you have to create value for, because I think Silicon Valley has been focused on thinking, ‘Well, as long as we’re creating value for users, nothing else matters.’ And I don’t believe that. If you don’t create value for your suppliers, for example, they’re going to stop being able to innovate. If Google is the only company that is able to profit from web content, or takes too big a share, guess what, people will just stop creating websites. Oh, guess what, they went over to Facebook. Take Google: actually, their best weapon against Facebook was not to build something like Google+, which was an attempt to build a rival walled garden. It was to make the web more vibrant, and they didn’t do that. So Facebook’s walled garden outcompeted the open web, partly because, guess what, Google was sucking out a lot of the economic value.

Laurel: Speaking of economic value, and when data is the product: Omidyar Network defines data as something whose value does not diminish. It can be used to make judgments about third parties that weren’t involved in the original collection of the data. Data can be more valuable when combined with other datasets, which we know. And data should have value to all parties involved. Data doesn’t go bad, right? We can keep using this unlimited product. And I say we, but the algorithms can make decisions about the economy for a very long time. So if you don’t step in and start thinking about data in a different way, you’re actually sowing the seeds for the future and how it’s used as well.

Tim: I think that’s absolutely true. I will say that I don’t think it’s true that data doesn’t go stale. It obviously does go stale. In fact, there’s this great quote from Gregory Bateson that I’ve remembered probably for most of my life now, which is, “Information is a difference that makes a difference.” And when something is known by everyone, it’s no longer valuable, right? So it’s literally that ability to make a difference that makes data valuable. So I guess what I would say is, no, data does go stale, and it has to keep being collected, it has to keep being cultivated. But the second part of your point, that the decisions we make now will have ramifications far in the future, I completely agree with. Everywhere you look in history, we have to think forward in time and not just backward in time, because the consequences of the choices we make will be with us long after we’ve reaped the benefits and gone home.

I guess I’d just say, I believe that humans are fundamentally social animals. I’ve recently gotten very interested in the work of David Sloan Wilson, who’s an evolutionary biologist. One of his great sayings is, “Selfish individuals outcompete altruistic individuals, but altruistic groups outcompete selfish groups.” And in some ways, the history of human society is a history of advances in cooperation among larger and larger groups. The way I’d sum up where we were with the internet—those of us who were around in the early optimistic period were saying, ‘Oh my God, this is this amazing advance in distributed group cooperation’, and it still is. You look at things like global open source projects. You look at things like the universal information sharing of the world wide web. You look at the progress of open science. There are so many areas where that is still happening, but there is this counterforce that we need to wake people up to: making walled gardens, trying to lock people in, trying to impede the free flow of information, the free flow of attention. These are basically counter-evolutionary acts.

Laurel: So speaking about this moment in time, you recently said that covid-19 is a big reset of the Overton window and the economy. What is so different right now that we can take advantage of?

Tim: Well, the concept of the Overton window is this notion that the set of possibilities that seem acceptable is framed, sort of like a window, and somebody can change that. For example, former President Trump changed the Overton window about what kind of behavior was acceptable in politics, in a bad way, in my opinion. And in a similar way, when companies display this monopolistic, user-hostile behavior, they move the Overton window in a bad way. When we come to accept, for example, this massive inequality, we’re moving the Overton window to say it’s OK for a small number of people to have huge amounts of money while other people get less and less of the pie.

But all of a sudden, we have this pandemic, and we think, ‘Oh my God, the whole economy is going to fall down. We’ve got to rescue people or there’ll be consequences.’ And so we suddenly say, ‘Well, actually, yeah, we do need to spend the money.’ We need to develop vaccines in a big hurry. We have to shut down the economy, even though it’s going to hurt businesses. We were worried it was going to hurt the stock market; it turned out it didn’t. But we did it anyway. And I think we’re entering a period in which the kinds of things that covid makes us do—reevaluating what we can do, the things where we said, ‘Oh no, you couldn’t possibly do that’—are going to change. I think climate change is doing that. It’s making us go, holy cow, we’ve got to do something. And I do think there’s a real opportunity when circumstances tell us that the way things have been needs to change. And if you look at big economic systems, they typically change around some devastating event.

Basically, the period of the Great Depression and then World War II led to the revolution that gave us the postwar prosperity, because everybody was like, ‘Whoa, we don’t want to go back there.’ So with the Marshall Plan, we said we’re going to actually build the economies of the people we defeated, because, of course, after World War I, they had crushed Germany down, which led to the rise of populism. They realized that they had to do something different, and we had 40 years of prosperity as a result. There’s a kind of algorithmic rot that happens not just at Facebook and Google, but in economic planning too: the system they had built, which created an enormous shared prosperity, had a side effect called inflation. And inflation was really, really high, and interest rates were really, really high, in the 1970s. And they went, ‘Oh my God, this system is broken.’ And they came back with a new system, which focused on crushing inflation and increasing corporate profits. And we kind of ran with that, and we had some go-go years, and now we’re hitting the crisis, where the consequences of the economy that we built for the last 40 years are failing pretty provocatively.

And that’s why I think it’s a really great time for us to be talking about how we want to change capitalism, because we change it every 30, 40 years. It’s a pretty big change-up in how it works. And I think we’re due for another one. And it shouldn’t be ‘abolish capitalism,’ because capitalism has been this incredible engine of productivity. But boy, if anybody thinks we’re done with it, that we have perfected it, they’re crazy. We actually have to do better, and we can do better. And to me, better is defined by increasing prosperity for everyone.

Laurel: Because capitalism is not a static thing or an idea. So in general, Tim, what are you optimistic about? What are you thinking about that gives you hope? How are you going to man this army to change the way that we are thinking about the data economy?

Tim: Well, what gives me hope is that people fundamentally care about each other. What gives me hope is the fact that people have the ability to change their minds and to come up with new beliefs about what’s fair and about what works. There’s a lot of talk about how we’ll overcome problems like climate change because of our ability to innovate. And yeah, that’s also true, but more importantly, I think we’ll overcome the massive problems of the data economy because we will have come to a collective decision that we should. Because, of course, innovation happens not as a first-order effect but as a second-order effect: what are people focused on? We’ve been focused for quite a while on the wrong things. And I think one of the things that, in an odd way, gives me optimism is the rise of crises like pandemics and climate change, which are going to force us to wake up and do a better job.

Laurel: Thank you for joining us today, Tim, on the Business Lab.

Tim: You’re very welcome.

Laurel: That was Tim O’Reilly, the founder, CEO, and chairman of O’Reilly Media, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab; I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. The show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.
