A Sustainable Future: Professor Anton Korinek, University of Virginia, on AI's Power to Reshape Labour Productivity and Inequality

Listen to Jason Mitchell and Professor Anton Korinek, University of Virginia, discuss the comparative advantage humans will bring to an AI-powered economy.

 

In a world of artificially generated content and ideas, where is the comparative advantage for humans? Listen to Jason Mitchell and Professor Anton Korinek, University of Virginia, discuss how to think through the economic—and specifically labour productivity—implications of generative AI, what AI could potentially mean for the last 40 years of wage inequality, and why it’s critical we rethink traditional forms of learning given the impact that AI could have on education.

Recording date: 17 August 2023

Anton Korinek

Anton Korinek is a Professor in the Department of Economics and at the Darden School of Business at the University of Virginia as well as a David M. Rubenstein Fellow at the Brookings Institution. His current research and teaching analyse the implications of Artificial Intelligence for business, for the economy, and for the future of society. He is also a Research Associate at the NBER, at the CEPR and at the Oxford Centre for the Governance of AI, and he is an editor of the Oxford Handbook of AI Governance.

 

Episode Transcript

Note: This transcription was generated using a combination of speech recognition software and human transcribers and may contain errors. As a part of this process, this transcript has also been edited for clarity.

 

Jason Mitchell

Welcome to the podcast, Professor Anton Korinek. It's great to have you here and thank you for taking the time.

Anton Korinek

Thank you for having me, Jason. I very much look forward to this conversation on what I think is one of the most important topics to talk about these days.

Jason Mitchell

I can't agree with you more, and I've been prepping and really looking forward to this conversation for a while. So let's start with some level setting. I've asked this question in an earlier podcast with my colleague who's more of an AI generalist, but it feels particularly relevant for you as an economist who studies the productivity effects of AI.

Jason Mitchell

So the question, to start out: does generative AI represent a new age of enlightenment or a new industrial revolution?

Anton Korinek

That is a good question. I would say it's not so much a new industrial revolution as a new cognitive revolution. So if you look at the Industrial Revolution, what it was about was the automation of physical tasks, of essentially building machines to change the world around us, to build things, to produce things in factories and so on. But what advances in AI, and in particular the most recent advances in generative AI, are doing is instead performing cognitive tasks.

Anton Korinek

They are not physical machines, but machines of mind. So I would call it a new cognitive revolution. And yeah, as to the question of whether they will lead to a new age of enlightenment, I think that's very much up to us humans. I think it carries both so much potential and also so many dangers of abuse. And I think we have to use our enlightenment to use AI in ways that can deepen enlightenment.

Jason Mitchell

Historically speaking, there's a clear legacy of fear around technological disruption with regard to labor markets. And I'll be honest, it's something I'm fascinated by, this aspect of economic history. So let me cite a few examples for context. In 1927, the US Secretary of Labor first voiced concerns about automation and job loss. In 1930, John Maynard Keynes, who we all know about, coined the term, quote, technological unemployment, end quote. In 1940, the fear of robots and automation was strong enough that Senator Joseph O'Mahoney proposed a tax on automation.

Jason Mitchell

In 1964, a US government commission convened to examine the dangers of automation and job loss. And in the 1980s, the Nobel laureate Wassily Leontief predicted mass unemployment owing to automation. So given the estimates out there, and I'm sure you're aware of it, I'm thinking about the Eloundou et al. paper, which cites that almost 50% of the workforce could eventually see AI perform half or more of their job tasks.

Jason Mitchell

I guess my question is, how do you think about generative AI either as a job killer or simply a productivity boost against these historical anxieties?

Anton Korinek

Yeah, so it's definitely all of this. It's going to be a job killer in certain circumstances. It's definitely going to be a productivity boost and I think the fears that you were describing and we can go all the way back to the beginning of the Industrial Revolution when the Luddite movement started, the fears are justified. But at the same time, it's not all doom and gloom.

Anton Korinek

So let me walk through it in a couple of steps. I think the first thing we need to realize is that whenever you have a technological revolution, there are some losers, and those are hit hard. And then society at large broadly gains from the greater productivity. So if you are a worker who is fired because of a new innovation, because of some automated machine or process, that's not a lot of consolation.

Anton Korinek

But society at large benefits from the productivity gains that we obtain from more and more automation. The second thing I want to say is that not all automation and not all technological advances are the same. So there are some innovations that tend to complement human advantages a little bit more, and others that tend to substitute for human workers, to just take away their jobs without leading to tremendous productivity gains.

Anton Korinek

And as a society at large, we are much better off from the innovations that complement workers. But now, as my last point on this question, let me fast-forward a little bit further into the future. And let me say that I think in principle it may be possible for AI to automate everything that we humans are doing at work.

Anton Korinek

So that would require what people call an artificial general intelligence, and that would be a game changer. That would completely change these regularities that we have known from the past and from the Industrial Revolution. So if we do really achieve artificial general intelligence, then I think all labor will be redundant, and the only reason for us to continue to work would be because we really enjoy it.

Anton Korinek

But for productive purposes, machines could do it just as well, and machines would probably cost a fraction of what humans do.

Jason Mitchell

So, I mean, not to jump ahead, because I do want to put a pin in AGI, or artificial general intelligence, but do you think we ultimately realize that vision of Keynes in terms of a 15-hour working week and, you know, the anxiety he had about what people will do with all their leisure time outside of work?

Anton Korinek

Yeah, how about a zero-hour work week? But you're right, let's not jump ahead too far. So one of the interesting things about Keynes's prediction in 1930 is that he was absolutely right about the pace of productivity gains, but he was just wrong about how societies decided to spend those gains. Keynes thought, well, as people become more productive, they will choose to work less and will choose to enjoy more leisure time.

Anton Korinek

But instead, people in some areas work even harder now than they did in the 1930s, spending it all on more consumption. And I think one of the reasons is that humans also have this very competitive streak. They want to outdo others. They want to be more successful at their jobs than others. And that makes them work hard and work to earn more money, even though ultimately it's a rat race, and perhaps we would all be much happier if we just worked 15 hours a week.

Anton Korinek

And I guess we are already at the stage, in advanced countries where we do have significant wealth, where we could afford to do that.

Jason Mitchell

It's interesting, you immediately remind me of that quote: comparison is the thief of joy. Let's talk about the impact of generative AI from a macroeconomic perspective, given the AI expectations out there. For example, I'm thinking about the McKinsey estimates, where generative AI could add between $2.6 trillion and $4.4 trillion in value to the global economy. In your Brookings report, "Machines of Mind: The Case for an AI-Powered Productivity Boom," which I want to spend a lot of time on

Jason Mitchell

in this podcast, you propose scenarios of 10 to 20% productivity gains for cognitive workers based on the current and the next generation of models. Basically, that's twice the productivity growth over the next ten years compared with what we've seen over the last two decades. So can you talk about the underlying assumptions of that?

Anton Korinek

Yeah. So I should emphasize, I could imagine those gains that you just described even if we have no further advances in AI, even if we just deploy the models that we currently have throughout the economy. And the reason is the following: roughly two-thirds of the workforce are engaged in what we broadly call cognitive work, white-collar work.

Anton Korinek

And there are some early estimates that for the average cognitive worker, these new large language models can deliver productivity gains between 10 and 20%. And if you multiply two-thirds of the economy getting 10 or 20% more productive, that automatically gives you a significant productivity gain. Now, what's important is that cognitive workers are also the ones who are engaged in research and development, in developing new innovations.

Anton Korinek

So if all the innovators are suddenly 10 or 20% more productive, you can also expect that they are going to come up with new innovations even faster and drive productivity growth even further through that mechanism. And so based on these two channels, cognitive workers being more productive on the one hand, and innovators delivering faster innovation on the other, we expect the productivity gains in the next two decades.

Anton Korinek

They're going to be twice as fast as what we had in the past two decades, and that's not even reliant on future generations of AI systems that may be even more amazing. So if you add in, let's say, the next iteration of AI systems, of large language models or whatever your favorite type of system in your sector is, I could well imagine that we will outpace those estimates and that growth will really take off in the next decade or two.
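The back-of-the-envelope arithmetic behind these estimates can be made explicit. A minimal sketch, using only the figures quoted above (a two-thirds cognitive share of the workforce and 10 to 20% individual gains); the decade horizon and the division are illustrative assumptions, not figures from the conversation:

```python
# Aggregate effect of LLM-driven gains for cognitive workers,
# using the two-thirds share and 10-20% gains quoted above.
cognitive_share = 2 / 3  # share of workforce doing cognitive work

for gain in (0.10, 0.20):
    aggregate = cognitive_share * gain  # one-off economy-wide boost
    annual = aggregate / 10             # spread evenly over a decade
    print(f"{gain:.0%} per worker -> {aggregate:.1%} aggregate, "
          f"~{annual:.2%} per year over ten years")
```

Against a baseline of roughly 1% annual productivity growth (an assumed figure for illustration), an extra 0.7 to 1.3 percentage points per year is what makes "twice as fast as the past two decades" plausible.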

Jason Mitchell

It seems so, particularly with the estimates of these models' compute doubling every six months. But I'm curious about what that means for AGI, or artificial general intelligence. What's its potential and its productivity growth curve? And maybe frame or do some scene setting around the scenarios, since there's clearly some disagreement of opinion on this?
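For context, a doubling every six months compounds very quickly. A small illustration of the claim in the question (the six-month doubling time is the figure cited here, not an independently verified statistic):

```python
# Growth factor implied by a doubling every six months:
# two doublings per year, i.e. a factor of 4 each year.
doublings_per_year = 2

for years in (1, 3, 5):
    factor = 2 ** (doublings_per_year * years)
    print(f"after {years} year(s): {factor:,}x the starting level")
```

At that pace, five years of doublings multiply the starting level by about a thousand, which is why the scenario range below runs from very near-term to multi-decade.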

Anton Korinek

Yeah, I think you're right, some scene setting is really important when it comes to the topic of AGI. And I should add, I'm an economist, you know, I'm not an AI expert directly, but I listen to the world's leading experts, and even there there is a lot of disagreement. Two of the, if you want to call them, godfathers of AI predict that artificial general intelligence may happen very soon.

Anton Korinek

Geoff Hinton is on record as saying that it may materialize within five years or 20 years or anything in between. And then a third one, Yann LeCun, says that it's ridiculous to worry about artificial general intelligence. So I think the range of potential scenarios is really broad, and the best thing that we can do is something that is very dear to economists, which is a portfolio approach.

Anton Korinek

I think we all need to be prepared for a range of different scenarios. One of them is that AGI may happen very soon, very quickly, possibly even within five years. And another one is that it may take a lot longer, and that we will have generative AI systems that still haven't reached that level within, let's say, three decades.

Anton Korinek

I think all of these scenarios are plausible, and we should be at some level prepared for them, in particular in the sense that we shouldn't be completely surprised if they suddenly materialize. So now, you asked about the productivity effects of AGI, and in some sense they would be absolutely tremendous. Artificial general intelligence means that machines can perform literally everything that humans can perform on the cognitive side.

Anton Korinek

I think the advances in robotics and so on that are going on right now are so fast that the physical side of automation is not that far behind. It turns out that if you couple these large language models, for example, with robots, they are suddenly a lot more capable than they were before, and that's why we're seeing record advances in robotics.

Anton Korinek

What it implies for productivity is that we could produce a lot more for a lot less. And I guess the big downside is, if everything can be produced so cheaply and without human labor input, what are we humans going to live on? That's going to present a really huge distributive challenge to our society. Right now we live in a world where the vast majority of us derive our income from labor.

Anton Korinek

We work, we get paid for it, or we work as entrepreneurs and we obtain income from that. But once our labor isn't particularly valuable anymore, because AGI is there, we're going to have to find a new way of distributing this fantastic amount of wealth that AI could potentially produce. Otherwise, if we don't, we would have mass starvation and instability and tremendous misery.

Anton Korinek

And that would be such a shame because there's so much opportunity, so much potential from AI. And even if we just distributed a small fraction of the potential gains, we could all be much better off.

Jason Mitchell

It's such an interesting point. I'm not trying to go into a more abstract space, but I think these distribution curves are really important, if you could describe the two examples that I think you've historically talked about. I guess what I mean is, do we increasingly retreat deeper into the long-tail distribution of infinite tasks, towards more complexity, as humans?

Jason Mitchell

Or is there kind of a ceiling, you know, a finite maximum complexity of tasks that the human brain can perform, which AGI ultimately replicates? It feels like a very stark kind of binary outcome that we're sort of facing.

Anton Korinek

Yeah, that's indeed the million, or maybe I should say trillion, dollar question. So to rephrase the question: when we think about automation in the past, we have always automated some things, and then we humans have retreated into the remaining tasks that have not yet been automated. And one of the main reasons why we couldn't automate everything in the past is because our machines were just too dumb.

Anton Korinek

Now we are suddenly at a stage where our machines are a lot smarter and are coming closer and closer to what we humans can do. But so far, they still can't do everything. And so this trillion-dollar question is: will machines eventually be able to perform all those cognitive tasks that we humans can perform? Or will there always be a tail left for us humans that only we can perform?

Anton Korinek

And if that's the case, then it means there's always going to be scope for human work. There's always going to be value in the work that we humans do, because if you automate everything else, then the non-automated bits just become more and more valuable. That's the lesson of the Industrial Revolution: if you automate some things and make them really cheap, then the remaining tasks that we still need to perform to create the outputs that we want become more and more valuable.

Anton Korinek

So again, in that scenario where humans can always retreat into new work tasks, we will always be better off in the end. But in the other scenario, where the machines can do everything we do, human labor will be displaced, and then labor will no longer have value because machines would be able to do it much more cheaply.

Jason Mitchell

Can we talk about this in the context of labor markets, how these scenarios reverberate across them? What are the potential disruptions, especially given that labor is the main economic determinant for how we distribute income, frankly? What are the historical parallels we can draw upon, which have seen supernormal productivity gains over small periods and subsequent labor market disruptions?

Jason Mitchell

I think you've talked about the agricultural mechanization of the 1920s and thirties, for instance.

Anton Korinek

Yeah, that was indeed a period in which we saw very rapid disruption of jobs, and the jobs that were disrupted at the time were those of small farmers who were essentially replaced by tractors. And so in the past we have always seen these, I want to say, pockets of automation: for example, the farming sector being disrupted. Or if you go back to the 1980s and 1990s, a lot of industrial production was automated, and workers in those sectors lost out.

Anton Korinek

And in both of those episodes, we could see that the affected workers really suffered. So if you were a farmer who lost their farm during the Great Depression, that was certainly no fun. And it also wasn't fun if you lost your factory job in the American Rust Belt in the 1980s. Workers who were affected by that type of technological disruption really experienced lasting income losses.

Anton Korinek

And, if you want, lasting scars in the labor market. Now, all those examples obviously affected only limited parts of the economy. And so the danger of AGI is that it may affect all human labor. And that means the farmer who loses their job, or the factory worker who loses their job, could no longer switch to something new, retrain, and move to a different area of the country where there may be more work, because AGI would be everywhere, and it could do things more cheaply and more effectively anywhere in the country, anywhere in the world.

Anton Korinek

So it would really be without precedent. And I think that's what makes it so hard to imagine, and also so hard for a lot of people to take seriously. I myself sometimes struggle to really believe that this is possible. And if you look at historical precedents, they would suggest, no, it's impossible, we've never seen this type of complete disruption of the labor market.

Anton Korinek

But then again, if you listen to leading AI experts and you just look at the ongoing trend of automation and you think about the limited cognitive capacity of our brains, our brains are amazing, but they're still limited. Sometimes it feels like it's inescapable that at some point machines might be able to perform all that which our brain can do.

Anton Korinek

And then from that perspective, it seems it's just a question of time.

Jason Mitchell

It's easy to get a little bit depressed on this, but to draw on David Ricardo, what is the human comparative advantage in an AI context? You've talked about the need to shift from content generation to content discrimination, but doesn't that eventually get whittled away with better trained models in the future?

Anton Korinek

Yeah, I think both parts of that are absolutely right. So right now we are at the stage where AI can produce a lot, and if you as a human can harness that and edit it a little bit, add your own colors to it, that can make you a lot more productive. And that means you can leverage your comparative advantage.

Anton Korinek

But you're right, as these systems become better and better, it's not clear that that will remain a comparative advantage. And in some ways, if machines can do everything far cheaper than us, then comparative advantage isn't really a very useful concept anymore, because it no longer makes sense for us humans to engage in those activities that machines can conduct at a fraction of the cost that it would take us to do them.

Jason Mitchell

That's got huge implications. Robert Solow, the economist, observed in the Solow paradox, which is, as I'm sure you know, the productivity paradox, that, quote, you can see the computer age everywhere but in the productivity statistics, end quote. And frankly, I mean, he said that in, I think, the late eighties, I think 1989, pre the real ramp-up in the age of the Internet.

Jason Mitchell

And so my question to you is, will we see AI in the productivity statistics over the next five years? And as an add-on, how much of the generative AI productivity question is frankly a measurement problem, in that the standard productivity statistics don't really reflect it discretely and we don't know what to look for or where to look? At what point do we start identifying these productivity gains at a macro level?

Anton Korinek

Yeah, I do think that we will see the impact of AI on productivity within the next five years. You see, there's always a little bit of a lag, because it takes organizations, corporations, and individuals time to adopt these new technologies. But with generative AI, I think that time lag is actually going to be shorter, because we can all already use it.

Anton Korinek

We can just use the same computers everyone already has; we don't need to invest in new hardware. We just need to adapt our organizational processes. And that is, I think, the main challenge right now and the main reason why we don't see an immediate productivity impact: organizations still need to essentially wrap their heads around how to best use and deploy these tools.

Anton Korinek

I think there is also something to what you said, that we won't necessarily measure all of it right away, and that's because of the peculiarities of how we measure output and productivity. For a lot of the cognitive workforce, output is actually not measured by how much they produce, but by how much they are paid. So, for example, if you look at me as a professor, I show up in productivity statistics through my salary. The measurement doesn't capture whether

Anton Korinek

I wrote more papers or fewer papers in a given year, and if my salary goes up, then that shows up as an increase in output in the statistics. And a lot of the output of generative AI is of a cognitive nature, of a digital nature that makes it very hard to measure. So I think we will see productivity gains.

Anton Korinek

Some of them will show up in the statistics, some of them will be kind of hidden from official statistics, but we will be able to see them, and that means they will still make us better off.

Jason Mitchell

How do you think about AI's inequality problem? And I accept that inequality feels like a loaded term; I feel like we need to dimensionalize it in a couple of different ways. The last several decades have been characterized by wage polarization, specifically the rise in the skill premium. I'm referring to David Autor's work, which I know you're very familiar with.

Jason Mitchell

In other words, wages for high skilled workers have gone up, while wages for less skilled workers have stagnated. And so how does this wage dynamic play out with generative A.I.? Do the greater productivity gains of generative AI benefit cognitive workers, or do they help lift up less skilled workers on the lower end of the wage distribution?

Anton Korinek

So there are a couple of really interesting early results that suggest that less skilled workers, less experienced workers, may actually benefit more from generative AI than more highly skilled workers. Now, I'm a little bit concerned about these results, because an alternative interpretation is that generative AI is essentially devaluing the value of skill, the value of experience, because the computer can just substitute for that.

Anton Korinek

So although it is an encouraging headline to see that generative AI reverses this trend in the skill premium and helps lesser skilled workers more, if the sole reason is that it reduces the value of the more highly skilled, then that's not particularly good news.

Jason Mitchell

That's an interesting take on it; I had not thought about that. You know, it kind of brings to mind two issues, right? I mean, there's this reversal in the polarization effect that we've seen over the last couple of decades. But there's also this larger question about the balance of power between capital and labor and how that plays out with the advent of AGI.

Anton Korinek

That's right. I think we would actually be in urgent need of reversing the trends that we've seen in the past few decades. All the signs point towards generative AI, as a tool that's essentially powered by a lot of capital, shifting the balance further away from labor. But I think there is also some hope, which is that if these generative AI tools can do so many things so cheaply, then hopefully we will find a way of distributing some of those gains, and hopefully we will be able to make sure that even those workers who really do become redundant over time will not be worse off, but will in the end, and I should say not "they" but "we," right, because it would affect you and me as well, once we have accepted our fate, actually be happier, and follow Keynes's suggestion to work less and enjoy leisure more.

Jason Mitchell

To expand this question around inequality: what do you think AI represents in terms of the Global North and Global South divide? There have already been some troubling signs that generative AI is creating an underclass, a subaltern of Mechanical Turk-like crowd workers, of annotators and labelers, who label the data used to train these LLMs, these large language models.

Jason Mitchell

In some ways it feels like AI regulation in this sense is essentially about labor regulation. But how do you think about these marginal workers, and how do they figure into this question around inequality? My background is political economy, so I'm drawn to this question of dependency theory, right? I mean, do we see kind of a new age of dependency theory, where the Global South, under generative AI or an AGI model, is kind of captured in a constant state of underdevelopment by the Global North?

Anton Korinek

I've been wondering about that myself, and honestly, I can't quite make up my mind, because there are forces in both directions. So if it is true that generative AI devalues skills, first of all, then that would suggest that the Global South is actually not losing as much as the North. And it would suggest that there is going to be some degree of equalization between laborers in the Global North and the Global South. But of course, capital is the other component.

Anton Korinek

That other component is what gives me much more pause. So if the capital that develops all these systems is all owned by the Global North, then that would of course carry the potential to increase inequality quite a bit. Putting the two together, I'm not sure whether I'm more worried about the Global North or the Global South. I think both are in for quite some disruption.

Anton Korinek

I don't think the Global South will necessarily be the one to lose out.

Jason Mitchell

Is it possible they'd be the winner out of this? I mean, this idea of a reversal of polarization, the fact that lesser skilled workers see a catch-up, right? Kind of a rising tide relative to higher skilled workers?

Anton Korinek

Yeah. So if you wanted to make the case for that, we could say, well, workers in the Global South can now access these highly capable tools to their advantage. They can access the educational opportunities afforded by these tools, they can access the productivity benefits, and maybe that actually makes them better off compared to where they would otherwise be.

Anton Korinek

And certainly it won't hurt them as much as workers in the Global North. But as we emphasized before, so much depends on how the gains to capital, to the owners of the AI systems, are handled, and whether there really will be redistribution. That's really the crux of the matter. If we find a way of distributing some of the gains, and it just needs to be a sliver, in an equitable way, then everybody could be better off. If we don't find that way, it would be a really depressing outcome.

Jason Mitchell

It's so interesting. I wanted to switch lanes a little bit. You've written about the AI alignment problem in your paper "Aligned with Whom? Direct and Social Goals for AI Systems." As you write, more robust implementation addresses direct alignment problems. But how do we address social alignment problems as AI systems increasingly impose externalities on society and the general interest?

Jason Mitchell

I guess I'm trying to open this up to this kind of externality problem that you increasingly touch on. How do you see governance and norms evolving given the absence of a global regulatory framework?

Anton Korinek

Yeah, ultimately the question of distribution that we have been discussing is part of the larger question of how we align AI with society at large. And the alignment problem, as it is usually discussed among machine learning researchers, focuses more on how do I get the AI to do X and not unintentionally do Y instead: things like, how do we get AI to perform its work and not unintentionally kill humanity, and things like that?

Anton Korinek

That's what the research community in this area focuses on. But I think another really important question, and that's what I call the social alignment problem, is how do we make sure these AI systems don't only do what we tell them to do in a really narrow technical sense, but also that they don't wreak havoc with the fabric of society.

Anton Korinek

How do we govern them? How do we integrate them into our governance systems? And, to be honest, how do we make sure they don't wreak havoc with our system of income distribution, displacing workers without compensating the losers? That was the question we've just been discussing, and it is one part of this larger question. And it does very much seem like we really need a global solution to this problem.

Anton Korinek

On my optimistic days, I can see two potential solutions that may be able to get us there, and maybe it would be a combination of the two. One of them is that some of the leading labs, some of the companies working on artificial general intelligence, are quite aware of the responsibilities that they are incurring when they develop AGI. And they are thinking about things like how we would resolve the distributive challenge if a lot of workers, or maybe even all workers, become unemployed. And if they manage to develop AGI systems, then I have some hope that they'll also manage to solve this distributive problem and set up a scheme that allows for the distribution of some small

Anton Korinek

share of the benefits of AGI. So it's a small share, but potentially still large enough to make sure that all the workers who are displaced are not worse off. The second potential solution that I envision on my optimistic days is that a critical mass of governments around the world will be able to get together and form a global organization that oversees the most advanced AI systems and makes sure that the benefits of these systems are distributed to humanity at large, so that all humans who are alive will benefit from the tremendous opportunities and tremendous economic growth these systems may produce.

Anton Korinek

Obviously, there's no guarantee that we get there. And I also have less optimistic days on which I feel quite gloomy about the challenge that we're facing.

Jason Mitchell

I want to push you on this a little bit more. Are we essentially setting ourselves up for a "Facebook fail" scenario, but one that I worry is compounded by the multitude of AI systems and the externalities that they produce?

Anton Korinek

So it would be the Facebook fails on absolute steroids. I think Facebook's failures were kind of the tip of an iceberg of all the damages and dangers and externalities that social media has imposed on humanity. But something like an advanced form of generative AI or artificial general intelligence could impose dangerous externalities that are just orders of magnitude larger, as we saw from the letter from the Center for AI Safety that was signed by hundreds of experts in the AI space.

Anton Korinek

And I should say I also signed the letter myself. There is a growing number of people who are really concerned about the existential risks emanating from such systems. So yes, we very much have to worry about something like the Facebook fails, where we develop systems that intentionally or unintentionally wreak massive amounts of havoc, but the amount of havoc would just be orders of magnitude greater.

Jason Mitchell

What's your view on the environmental negative externalities of AI? It's been a little bit frustrating. I mean, there have been some articles, but given the lack of transparency, it's been hard to understand, I guess, the true impact that it could have.

Anton Korinek

Yeah. So my understanding is that right now the environmental impact actually isn't all that large yet. At the moment, the total amount of power consumed by server farms in the U.S., for example, is less than 2% of our total power consumption. And if you compare it to something like air travel, it almost pales in comparison.

Anton Korinek

I think it is fair to be concerned that if we continue doubling the amount of compute that we put into our cutting-edge systems, then in a couple of years we may have a significantly larger environmental footprint from AI, and at that point we probably should start to worry quite seriously about the environmental impacts of that. But that's again the optimist in me now.

Anton Korinek

I hope that AI is going to be able to help us address these problems and really do better.

Jason Mitchell

How do you see the market structure evolving? It feels like we're drifting a little bit towards regulation, but do frontier models like OpenAI's ChatGPT and GPT-4 and Google DeepMind's PaLM 2 essentially become natural monopolies that ultimately attract antitrust attention?

Anton Korinek

Yeah, this is a question that has been very much on my mind in recent months. I'm just about to come out with a comprehensive paper on this question, and what my coauthor and I are arguing is that this landscape of cutting-edge foundation models is very much a natural monopoly kind of landscape.

Anton Korinek

So these systems, they require massive amounts of investment to produce, but then once you have them, you can actually operate them relatively cheaply. In that sense they are like, for example, utility networks, which have similar economics: it's massively expensive to build them, but then they are relatively cheap to run. And what you see in markets where you have this phenomenon is that there is a very strong force towards concentration, because if you are the first one who develops one of those systems, or one of the first two or three, because there's always a little bit of scope for differentiation, then you can just charge sufficiently low prices to keep other entrants

Anton Korinek

out of the market. And if you have such a natural monopoly structure, there are kind of two angles that regulators really need to look out for. The first is that monopolies tend to charge excessive prices and to extract rents from the rest of society. We're not seeing this right now when it comes to large language models, or generative AI more broadly.

Anton Korinek

But we do have to keep an eye out for monopoly behavior in this sector. And the second angle that regulators need to pay attention to is that if there is no competition, a lot of the other benefits of competition are missing. Normally, competition is what ensures that the market produces what we want. If you are, let's say, a car company that produces cars that nobody wants, then you're going to go bankrupt.

Anton Korinek

And competition is a very strong force that steers producers towards fulfilling consumers' wishes. If you are in a natural monopoly position, on the other hand, then that competitive force is blunted. That means that companies operating these monopolies have very little incentive to actually listen to the consumer, very little incentive to do things like watching out for consumers' privacy, watching out for information security, watching out to refrain from bias

Anton Korinek

and discrimination, and so on, because the competitive forces of the market just aren't pushing them that way. And that means that there is a role for regulators to step in to ensure that there are no egregious violations of what consumers would want them to do. So having these natural monopolies really makes it important that regulators pay careful attention to that sector.

Jason Mitchell

That's interesting. What role do you see the open-source LLMs currently being developed playing in the near and medium term? I mean, they're not really competitive with commercial offerings, despite what some people might say, and it's doubtful that gap will close in the future. I guess my point is that the future will look very different in a world where open-source models are the standard versus a world where the only viable tools are offered commercially and exist as natural monopolies.

Anton Korinek

I think that's right. And I should say that I have very mixed feelings towards open-source models in this space. As an economist, generally, I'm very much in favor of open source because, as you say, it carries the potential of making these models accessible to anyone essentially at cost, which is very, very low. And the economics would look very different if you have one or two big monopolies charging consumers for their use versus if you have equally capable open-source models that can do the same thing and essentially give us access to amazing capabilities at close to zero cost.

Anton Korinek

So in that economic sense, I am a big fan of open source. However, there is another aspect of what I want to call the frontier AI systems that gives me a little bit of pause, and that is that as these systems become more and more capable, the risks of abuse, the risks of them doing unintended things, the risks of accidents are also rising.

Anton Korinek

I think if we open source the most advanced and most capable systems one or two generations from now, that to me really poses significant safety risks. And that's why I would be very cautious about that. Right now, my inclination would be to say that we want to impose a significant level of regulation on these types of models, and doing that with open source would be very difficult.

Jason Mitchell

As an educator, how do you see generative AI impacting universities? It seems clear that educators need to, I guess, rethink what they want to teach students, given the strengths of AI. I'm thinking back to your paper, "Language Models and Cognitive Automation for Economic Research." Obviously it's geared towards economists and their research, but what does it reveal about the areas of disruption for cognitive work?

Jason Mitchell

What are your recommendations about the capabilities to focus on? Personally, I assumed that gen AI would be primarily focused on mundane tasks, so for me it was quite interesting that it can be so powerful at the ideation level.

Anton Korinek

Yeah, working on this paper has been a really eye-opening experience for me. I wrote the first version in February 2023, when we only had GPT-3.5. The AI system that we all had access to back then, ChatGPT running on GPT-3.5, could do more than I expected, and I was actually fairly impressed. I thought, hmm, I really have to make an effort to incorporate these tools into my workflows, because I can see that they're tremendously effective and really allow me to be more productive.

Anton Korinek

More recently, I revised the paper to incorporate all the advances that had occurred between February and July 2023, and I was honestly blown away. First of all, we are all aware that GPT-4 is significantly more powerful than GPT-3.5, but it can really do lots of the tasks that I as an economic researcher engage in on a daily basis.

Anton Korinek

And as you emphasized, it has become really useful for brainstorming for me. It has become really useful in giving me feedback, offering me alternative perspectives, and so on. It's also really good at writing computer code, and one of the tools on offer is Code Interpreter, which allows the language model to both write code and execute it to do something that you want it to do.

Anton Korinek

So, for example, for me as an economist, that means developing charts or running data analyses and so on, and it is already really capable of doing that. So I think it saves economic researchers, or I should really say cognitive workers at large, a tremendous amount of time if we deploy these systems strategically for the tasks that they're good at.

Anton Korinek

Now, what does this all imply for university education? I think that's a question that so many schools around the country and around the world are really struggling with these days, and I want to answer it in two parts. I should say I don't really have deep answers to it, but the first way of thinking about it is: what should students learn,

Anton Korinek

given the tools that we currently have available? My answer would definitely be that they should learn to use these tools as best as they can and use them to be more productive in the work that they're doing, because when they graduate, they will have these tools at their disposal in their jobs, and using them is going to make them better at their jobs.

Anton Korinek

So I think educators who say, oh, we should forbid these tools and we should employ detectors that make sure that nobody ever uses words generated by ChatGPT, they're missing the boat. We absolutely want to prepare students to use these tools, and we want them to learn how to use them as effectively and as productively as possible. But then the second part of the question is: what do we anticipate on the horizon, and how good are those tools going to be one year from now, five years from now?

Anton Korinek

And this brings us back to the question we discussed earlier: how close are we to AGI? If AGI is possible, if AGI is on the horizon, then what should we teach students from that perspective? I think one answer to the question is that we have to make peace with the fact that maybe our cognitive skills are not quite as important as we thought they were a year ago.

Anton Korinek

Maybe they can be supplanted by machines, and maybe that will be possible sooner than we thought. And then what is it that remains of us humans? How can we make sure that we still live our full humanity, even though we may not be useful as cognitive workers? I think one of my tasks as a professor, as an educator, is to make sure to develop that human perspective in both myself and my students as we face the progressive automation of cognitive tasks.

Anton Korinek

Because that's the one part that really remains. Even when we may no longer be useful on the job, we will still remain humans. And I hope we will all be better humans when we are freed from the need to work. That's, again, on the critical condition that our material needs are taken care of.

Jason Mitchell

So last question what's next for you? You're incredibly prolific. I've really enjoyed a lot of the papers and research that you've done, so I'm wondering what you're working on within that spectrum of economic and governance research related to AI. 

Anton Korinek

Right now, my main focus is on preparing for different scenarios for artificial general intelligence. I think we as a society are utterly unprepared, and we need to think about it. We need to make the best preparations we can, because the challenge that we're facing is formidable. And I think there are a whole range of aspects to the problem, some of which I hope I as an economist can make a small contribution to.

Anton Korinek

So, thinking about what it really implies for labor markets, what it really implies for our social safety nets and how we can reform and update them. That's the area I hope to contribute to: ultimately making sure that some of the gains are distributed across society. How will we adapt our systems of taxation, which rely to a very large extent on labor right now?

Anton Korinek

And if we want to be able to distribute resources to workers who lose their jobs, we will need tax revenue for that. But then there are also many other questions outside of the domain of economics that are so important for people to think about. How will we govern these systems? That's what we referred to earlier.

Anton Korinek

There's also the actual AI alignment problem. How will we make sure that our democratic institutions will survive artificial intelligence, especially if there are massive disruptions in labor markets and maybe other aspects of society? So I think there are really so many important questions that we need to think about, and we don't know how much time we have left.

Jason Mitchell

Super, super interesting. It's been fascinating to discuss how to think through the economic, and specifically labor productivity, implications of generative AI and AGI over the long term, what AI could potentially mean for the last 40 years of wage polarization and inequality, and why it's critical we rethink traditional forms of learning given the impact that AI could have on education.

Jason Mitchell

Anton, thank you so much. This has been a fascinating conversation.

Anton Korinek

Thank you very much as well.
