A Sustainable Future: Dr. Mike Kollo, Evolved Reasoning, On the Fear of Gen AI and What Comes After

Is AI your co-worker or your job replacement? Listen to Jason Mitchell discuss with Dr. Mike Kollo, CEO of Evolved Reasoning, about how to think through the implications of generative AI.



Recording date: July 2023

Dr. Mike Kollo

Dr. Mike Kollo is CEO of Evolved Reasoning, a firm that helps organisations create effective strategies to de-risk AI adoption. Mike brings a strong technical foundation in statistical AI systems. He has led global research teams at BlackRock, Fidelity and AXA Rosenberg, as well as HESTA, an Australian superannuation industry fund. More recently, he has transitioned to the development of AI systems for institutional investment programmes, while continuing to teach courses in quantitative techniques and analysis at the London School of Economics and Imperial College.


Episode Transcript

Note: This transcription was generated using a combination of speech recognition software and human transcribers and may contain errors. As a part of this process, this transcript has also been edited for clarity.

Jason Mitchell:

Welcome to the podcast, Mike Kollo. It's great to have you here and thank you for taking the time today.

Dr. Mike Kollo:

Hi, Jason. It's absolutely a pleasure.

Jason Mitchell:

Excellent. It is indeed. It's been a long time coming given our past conversations around this, so I'm really looking forward to it. So, Mike, let's start out with some scene setting. I think it's pretty important in this area.

Let's start with two questions. First, what is generative artificial intelligence or generative AI? And second, does it represent a new Age of Enlightenment or a new industrial revolution?

Dr. Mike Kollo:

So, the easy questions to begin with, hey, Jas? All right. Let's give it a shot. So, what is generative AI? Generative AI, I suppose in a more formal technical definition, is one branch of artificial intelligence that probably most people in financial services haven't really heard of or moved into.

And I suppose it's one area that, for some of us who have worked in the field, started at the early stages of natural language processing and essentially went on to become a form of artificial intelligence which focuses on transformers. That's the T in GPT, which stands for Generative Pre-trained Transformer.

Within ChatGPT and much of the rest of generative AI as well, the transformers are essentially encoding and decoding information, usually with a view to translating text, or in this particular case completing text, as well as things like image generation and now, increasingly, voice and music generation as well.

So, the reason that it's called generative AI, more than anything else, I suppose, is because it has this feature in it: the way that it produces information is to take the next incremental best step, whether that's the next note, the next word or the next sentence that it creates.

But it's still very much in the realm of unsupervised models, machine learning models and neural networks, probably with a sense of recurrent neural networks on top as well. And certainly reinforcement learning as well. I think in terms of a technical definition, it lives in that space as a branch of AI.
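Mike's "next incremental best step" can be sketched as a toy next-token sampler. The probability table below is invented purely for illustration and stands in for a transformer's learned distribution, which in reality scores a vocabulary of many thousands of tokens against the entire preceding context rather than just the last word.

```python
import random

# Toy stand-in for a learned next-token distribution.
# All tokens and probabilities here are invented for illustration.
NEXT_TOKEN_PROBS = {
    "the":    {"cat": 0.5, "dog": 0.3, "market": 0.2},
    "cat":    {"sat": 0.7, "ran": 0.3},
    "dog":    {"ran": 0.6, "sat": 0.4},
    "market": {"moved": 1.0},
    "sat":    {"<end>": 1.0},
    "ran":    {"<end>": 1.0},
    "moved":  {"<end>": 1.0},
}

def generate(start, rng):
    """Generate text one 'incremental best step' at a time."""
    tokens = [start]
    while tokens[-1] in NEXT_TOKEN_PROBS:
        dist = NEXT_TOKEN_PROBS[tokens[-1]]
        # Sample the next token in proportion to its probability.
        nxt = rng.choices(list(dist), weights=dist.values())[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the", random.Random(0)))
```

The same loop, with a vastly richer model behind the distribution, is what produces the next note or next sentence Mike describes.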

I think within the conceptual realm of AI, it certainly has made a huge splash within the realm of conversation and text and the way that it's able to handle information embedded in language. And I think, flowing on to your next question now, does it represent a new Age of Enlightenment or a new industrial revolution?

I would like to think that it represents a very important component of both of those things. So, I would like to think that whereas before our view of AI might have been a little bit more technical and a bit more mathematical and statistical, and the future belonged to people from a Python or programming background, with STEM backgrounds, with statistics and so on.

And we very much viewed the future as a problem to solve, a reduced-form problem that we had to put together and solve to create more efficient use of data and better outcomes for everything from production lines to other things as well. I think this AI puts a very human face on top of it.

And I think it enables us, almost like a mask, to deliver a lot of information, content, products, services or just engagement to human beings in a much more human way. So, for example, if we'd had this conversation a year ago, we would've been talking about customer preferences and trying to forecast those.

I think with generative AI, the level of engagement that we can achieve will be orders of magnitude greater than what we've been able to achieve previously with a combination of chatbots and clever UX and UI designs, in terms of how close we'll be able to get to these conversations and how close we can get to people. To the point where we can basically hack, I suppose, and some of the fear and the opportunity comes from here as well, people's beliefs and their trust, and maybe their emotions, with these digital agents.

So, I think it is a very human face of the next sequence toward those things that you talked about, the new Age of Enlightenment and new industrial revolution. And therefore, I think it'll be a very public and very important component of that journey.

Jason Mitchell:

Let's pick up on the fear. Because I think that's really important, with fear and uncertainty playing two big factors in this space. What in your mind is the precautionary principle of generative AI? We've seen some big names in AI like Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, even Sam Altman all warn about the existential risks of AI more broadly.

Even a survey at the Yale CEO Summit found that 42% of CEOs think AI could destroy humanity in the next five to 10 years. And I noted on your own website, you cite a University of Queensland finding: "In recent surveys, only 40% of Australians say they trust AI in the workplace." I guess it raises a lot of questions. How do we mitigate this risk? What risk management or risk protocols do we need?

Dr. Mike Kollo:

I think it's a fascinating topic and it's one that has really gripped the international media over the last six months, some of which I agree with and some of which I don't. So, this will be slightly controversial, but hopefully useful as well. The part that I definitely agree with is that these systems are an order of magnitude more powerful than what we've seen before.

The things that they're able to do in terms of engagement, conversation, and the ease with which they're able to engage with us as human beings make them a lot more relatable and a lot more, I suppose, sticky, as most of us have seen with the number of users that logged onto ChatGPT in the first week or the first month of its life.

So, we can see that this is spreading like wildfire, especially among young people. We've seen enormous adoption across a range of different industries and even a lot of adoption among working folk as well. I think the primary fear that I've seen really comes from two areas.

One is an existential one and one is a more practical one. The existential one speaks to this question around superintelligence, and the idea that if a system is more intelligent than us or better than us at something, won't it want to just kill us or control us, in the same way that we have tended to kill, control and subjugate the species on this planet that we have found to be less intelligent than ourselves?

And so, I suppose it's a cautionary tale that we've taken from our own history books and we've thought, "Oh, no, if we make another entity that's better than us, but it's still taught by us on some level, doesn't that mean that it will do that?"

And I think those of us that have thought about this from more of an existentialist or consciousness standpoint, and, I suppose, about the difference between, for example, intelligence and will, and all of those elements, will tell you that the one doesn't necessitate the other.

You can have a superintelligent system like a calculator that is super good at doing maths. It doesn't mean that it has a desire or a will or an internal world. And so, I think in a similar way, AI and superintelligence alone do not mean that it could be dangerous. In the wrong hands is another story. And that's really a question of: we've just discovered uranium, and the world powers are a little bit afraid of it, and rightfully so. Who's going to make bombs and who's going to make power reactors out of it, and what will be the consequences?

And so, I think a lot of the existentialist threat is about so-called bad actors using these systems. But it's not so obvious that these systems can simply be picked up and used that way. If all it took to break into our deepest cybersecurity systems was a re-hacking of existing publicly available code, then you might say we're probably not in a very secure place anyway.

And so, I suspect that some of this is conceptual and not that pragmatic and realistic, but there is a conversation to be had here: if we continue to go down this technology route, over time we will cede more and more control and decision-making to algorithms.

And I suppose there are two camps. One camp would say, "Oh, my gosh, we never want to do that because we're humans and we're the best." For people like me, I'm a little bit more pragmatic.

And I think actually there's a lot of systems and problems that we as human beings have to deal with that are so difficult now that our brains, I suppose our mammal brains, are just not suited to them. And so, complexity is already getting the better of us.

I think that fear of existentialism and what happens with superintelligence is really what's driving a lot of the headlines, but it's not a practical fear. It's a bit of an amorphous and nebulous fear in the sense that it just hovers around you and makes you feel uncomfortable, but you don't really know what to do about it.

I think the second fear is a lot more practical, which is just about job loss. So, it's the age-old problem of: well, there's going to be change. This technology is going to come along and do part of the job that I'm doing today. And therefore, my workplace might decide to get rid of me because of that. And we had a similar thing with self-driving cars. They didn't quite work out the way we thought they were going to. But we certainly have had that along the way every single time that we've used any system, intelligent or not, to automate parts of the workforce.

And I suppose because ChatGPT has taken this technology and put it in the hands of so many people around the world, I think it's just become very, very obvious that its capabilities already significantly eat into a part of the workforce, the part that's in white-collar work.

And therefore, I think people who are using this system earnestly are thinking to themselves, "Gee, whiz, this thing can do what I do day to day, so I better change it up. I better do something else." And again, that fear, that sense of the world might not be such a nice place for me in the future because my skills may not be so valued is a legitimate fear. It's just probably, again, a little bit flared up in recent press.

Jason Mitchell:

I definitely want to come back to the implications around jobs, labor markets and productivity. But first, let's open it up a little bit more. Let's talk about the potential. I recall in the 1930s, Keynes predicted that the rate of technological innovation and productivity increases would make a 15-hour working week possible.

In fact, he worried that the biggest problem that future generations would face is what to do with all their leisure time, when the reality is that leisure time per capita at least hasn't dramatically changed much in the last century. Proponents of AI, even the likes of Bill Gates seem to enthusiastically echo Keynes' sentiments. What do you think, is it naive to think that generative AI could help fulfill this vision or is it simply too disruptive?

Dr. Mike Kollo:

I really like this line of reasoning, but I think it fails to integrate capitalism, which is unfortunate because that's the context these technologies are developed in. So, let me take a step back here. Let's say you have a capitalist system. And in the capitalist system, you obviously have inequality of outcome.

You might have equality of opportunity in some countries, maybe more so than others, but you certainly have an inequality of outcome based upon talent, luck, capability and so on. And that inequality of outcome means that there's the rich and the poor, there's the winners and there's the losers. And this is the dominant way that we manage our societies and base our economies today. So, any technological advantage that enters that system by nature gets dissolved and integrated into that system. It's not a geopolitical or existentialist threat to that system; it's not as if AI says, "Hey, let's be more socialist."

Or "Let's be more monarchist or anarchist or any of those things." It basically enters the system as a capability. And the system, in this case capitalism, picks it up, makes more profit from it, and distributes those profits in much the same way that it has distributed them in the past.

And so, while you might have things like open source, which I think has been a wonderful thing from the beginning of the internet, that means that AI evolution and technological advancement is distributed and made accessible to a much wider range of people.

So, the equality of opportunity, I think, is maintained because of that. The equality of outcome is not. So, there are fewer winners and there are many losers. There's one Spotify, not a thousand. There's a handful of technology majors at the top, not hundreds and thousands. And while there's constant competition, and maybe competition could become more available with different open-source language models, it's more than likely that this technology will simply get integrated into those majors; if nothing else, we've seen that from Microsoft already. So, I don't think that a technology by itself will change how the outcomes of these systems are produced or given out to different kinds of people. I think these technologies, generative AI and others, will come in and create efficiencies.

And those efficiencies will continue to service capitalism and, if anything, make it stronger as well. And I think that does have a lot of implications for equality and equality of outcome as well.

And let me just pause on this very quickly. One of the interesting assumptions behind the point you made about 15-hour working weeks is that the outcome of a particular technological evolution gets distributed across a workforce, or across a set of people, rather than contained among just a few people at the top. Now, the way AI has gone so far is that, because of the digital scale achievable with a clever piece of software, the value can be captured within a single company.

Again, I use Spotify because I just mentioned it a moment ago: the value proposition is distributed globally. So, Spotify became a global powerhouse with 50 developers. That didn't necessarily create equality; it created a very strong concentration of value, that is Spotify, in one country, in one location, for a very, very small number of people. And so, again, if we thought the world was going to a place where we could produce food and clothing and entertainment for everybody on the planet and share those in a more equal way, then you might say, "Well, therefore, we don't need to work as much."

Because simply by using technology and so on, we'd be sharing the benefits much more equally. I don't see that happening in the capitalist system that we're currently in.

If anything, I see more of that concentration, I suppose, of the earnings in the big companies and the tech majors and so on, even more so as systems become more complex and require more specialized data and expertise.

Jason Mitchell:

It's interesting, yeah. Because the big question sitting on most people's minds is: what's the workforce impact of generative AI? And I'm specifically framing this around generative AI, not AI more broadly. But is it a white-collar killer or a productivity boost to mundane tasks like report generation across different platforms? I'm thinking of areas like email copywriting and formatting.

Dr. Mike Kollo:

I think it's a great question. Look, I used to work for a wonderful company called Fathom, which was trying to quantitatively calculate the impact of different technologies on the workforce. So, we framed it as either automating or augmenting.

So, augmenting technologies would essentially create efficiencies of a few hours per worker per week, or maybe a little bit more. And automating technologies would take a significant chunk of the worker's capability and put it onto a machine.

Importantly, with augmentation technologies, control still remained in the worker's hands. So, an augmenting technology made what you do faster, but ultimately it was still your work, and you signed it and said, "This is mine." An automating technology, I suppose, took a task and said, "Well, it doesn't matter that it's been done by human hands or signed off by a human; there's no risk behind it, so we'll just put this through an algorithm and make it automatic." So, I think that with generative AI, certainly some of the lowest-hanging fruit that we see today is exactly what you said.

So, report generation, distilling information, taking information and moving it around, et cetera. I think in the medium to long term there's a much bigger potential here, which some people have referred to as a platform killer, whereby you might ask the question: why do we currently use computers with funny boxes and clicking mice and squares and circles, when really what we want to do is just solve a problem?

We just want to say to somebody, "Hey, can you just take care of this thing for me?" In the computer world, we've transformed that into boxes and circles and containers and so on. But in the future, this technology might move it right back to a conversation, right back to the idea that you can speak to your computer; in fact, you'd prefer to speak to your handheld device, your laptop, whatever it is. And that device will not only be able to understand the context of what you're asking for and how you're asking for it, but also start to action it much more intelligently. And I think we've certainly seen early indications of that from the products that have been put out by Google and by Apple and so on, but those are still very much execution-based products. I'm asking those systems to make an appointment, to set an alarm, to show me a recipe or something like that.

I think the next generation of these digital agents will be a lot more integrated. And then, in terms of the workforce impact, what that means is that anything to do with white-collar work, which ultimately is about moving and shuffling around information contained within language, in the guise of things like reports and products and so on, will get an enormous shakeup from the fact that these tools will push that information around and understand and integrate it a lot more. Even if you take nothing else but emails.

Let me give you a quick example. Even if you take emails: typically, people in the [inaudible 00:20:20] industry will spend somewhere in a three to four-hour day writing, reading and receiving information by email. And a lot of them, if you ask them whether these emails are all vital and essential and really important, will probably say, "No, not really."

And I think the whole point is that we as human beings communicate with some level of efficiency, but not a great level of efficiency, because it requires nuance and context and some understanding and so on. And so, written communication often is misleading and difficult, and that's why people tell you, "Just pick up the phone. Just make a call. Just talk to that person." I think this technology could make that a lot more efficient, and thereby, in aggregate, white-collar work will get a significant shakeup.

Jason Mitchell:

Yeah, this is super interesting. It reminds me of that quote from the economist Robert Solow, who said, "You can see the computer age everywhere but in the productivity statistics." And admittedly, that was back in 1989. But I guess the question I'm wondering is: will we actually see AI or generative AI in the productivity statistics over the next five to 10 years?

The economist Anton Korinek, who by the way will be on an upcoming episode, is at the Centre for the Governance of AI and the University of Virginia. He's got a paper suggesting that generative AI will lead to 10% to 20% productivity gains over the next decade, leading to strong positive supply shocks to the economy and ultimately deflationary effects, potentially in the form of wage pressure. In fact, there's another paper, by Tyna Eloundou and co-authors, that estimates that almost 50% of the workforce could eventually see AI perform half or more of their job tasks. What do you think about that in terms of what you just said?

Dr. Mike Kollo:

It's a very interesting, scientific way of looking at it. So, a lot of these papers, certainly including the first one you mentioned, take the methodology of segmenting jobs into tasks. That's certainly what we did at Fathom. So, every job has tasks, typically sourced from something like O*NET, which is a database of these. Each job breaks down into, let's say, 10 to 12 activities that a person with that title would be doing per day.

And then, what they do is they say: all right, out of those 10 to 12 activities that a person is doing per day, how many could be augmented or even replaced by a technology like generative AI?

So, there was a paper by MIT that showed that, let's say, 35% of writing tasks can be eliminated or essentially made faster by ChatGPT-like systems.

So, you might say, "Right, how much of your time do you spend doing writing tasks? If I apply that 35%, I can reduce that." And I think that's roughly where these statistics come in: about 10% to 20% efficiency gains for white-collar work. And then there's this uncomfortable decision the authors need to make, which is: what will companies do when they get 10% efficiency gains?

Will they reduce the team from 100 to 90 people and do the same amount of work? Or will they have those 100 people doing 110 people's work, I suppose with the view that there is more demand for their services? And this is where you start to run into industrial organization.

That's the field of economics that looks at how companies compete, how they make products and services, how they price those products and services and so on. So, you can't really make statements about productivity gains flowing into GDP or economic activity until you understand whether those productivity gains will be used to make more widgets and more things.

And therefore, as you say, reduce inflation through reduced pricing, because there's more supply going to the market; or whether your industry is more price-inelastic, or it's simply not your business strategy to lower prices or make more goods or services, and therefore you happily just let go of those 10 people and continue to make the same amount of things, just with 90 people and a higher profit margin.
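The task-based arithmetic Mike walks through (split a job into task categories, apply a per-task exposure fraction, aggregate, then face the headcount decision) can be reproduced as a back-of-envelope sketch. Every task weight and exposure fraction below is an invented assumption, loosely echoing the roughly 35% writing-task figure cited in the conversation.

```python
# Back-of-envelope task-exposure calculation.
# task: (share of the working week, fraction generative AI could speed up)
# All numbers are illustrative assumptions, not measured values.
tasks = {
    "writing reports": (0.30, 0.35),   # cf. the ~35% writing figure cited
    "email and admin": (0.25, 0.30),
    "analysis":        (0.25, 0.10),
    "meetings":        (0.20, 0.00),
}

# Aggregate efficiency gain: weighted sum of per-task exposures.
efficiency_gain = sum(share * exposure for share, exposure in tasks.values())
print(f"aggregate efficiency gain: {efficiency_gain:.1%}")

team = 100
# Scenario A: same output, smaller team.
print(f"same output needs ~{round(team * (1 - efficiency_gain))} people")
# Scenario B: same team, more output.
print(f"same team produces ~{team * (1 + efficiency_gain):.1f} people's worth of output")
```

Which scenario a firm chooses is, as Mike says, an industrial-organization question that the arithmetic alone cannot settle.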

Jason Mitchell:

What impact do you see for the finance sector in quantitative investing, which obviously is a big area of your focus? Generative AI models are better at supporting document summarization and translation tasks. But there seem to be fundamental barriers to applying future AGI models to the decision-making process, in terms of compliance and comfort with the integrity of the actual underlying data. Not to mention the fact that large language models aren't primarily analytic and quantitative tools; they're probably better suited to qualitative tasks.

Dr. Mike Kollo:

I think it's a great point. Being a quant, and I suppose having spent so many years in the quantitative field, I've got a real appreciation for how difficult forecasting really is. Anybody who's worked in this field has, in some sense, had the hardest job of probably any of the applications of these statistical systems, which is to know the future. And so, when I look at most of machine learning, I think it's not that useful for quantitative forecasting directly.

And for generative AI, similarly, it's probably not the core of an alpha engine, though it certainly is very good at understanding language and information contained in language. So, I think for the parts of the investment process that relate to getting sentiment information, or financial information that's contained in language, or certainly multi-language analysis, it could play a very important part. But what you're really doing with these models is disentangling, or translating, a bunch of text into a quantitative signal, a piece of information, a data point, and then you're making investment decisions with that in the process. However, I think the element that could really change the game for quant investing is actually not in the alpha generation but in the humanization of that entire field.

So, if you think about how quantitative finance is seen today, or even financial services, it comes across as a little bit dry, very technical and a somewhat removed industry. It's not something that people talk about at dinner tables. It's not something that's discussed on CNBC or Bloomberg or the major channels. And the narratives and stories that quantitative managers tell seem a bit alien to many investors.

They talk about data cleanliness as you just mentioned. They talk about algorithmic integrity. They talk about speed of execution. They talk about slippage and so on. And these are not common topics that other people would talk about. And so, I think large language models could provide a really compelling face to this industry by again, making it very engagement driven, making essentially personas that investors can very readily engage with.

These personas would be knowledgeable about quantitative finance, and about that particular fund and what it's doing, but more importantly, they would be a lot more relatable. They would be a lot more educational, a lot softer in some ways. They would use things like analogies and similes to explain difficult, complex concepts. And in that context, I think they could bring back a lot more attention to quantitative finance, which, at least in my view, having been in the industry, has stalled over the last decade.
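The first half of Mike's answer, translating text into a quantitative signal that the investment process then consumes, can be sketched as follows. The keyword scorer is a deliberately crude stand-in for what would in practice be an LLM or other NLP model; all word lists, names and thresholds here are made up for illustration.

```python
# Toy "text to quantitative signal" pipeline: language in, number out,
# and the investment process consumes the number.
# The keyword lists are invented stand-ins for a learned sentiment model.
POSITIVE = {"beat", "growth", "upgrade", "record"}
NEGATIVE = {"miss", "downgrade", "loss", "warning"}

def sentiment_signal(text: str) -> float:
    """Map a headline to a score in [-1, 1]."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def position_from_signal(score: float, threshold: float = 0.5) -> str:
    """Turn the data point into a (toy) portfolio decision."""
    if score >= threshold:
        return "overweight"
    if score <= -threshold:
        return "underweight"
    return "neutral"

headline = "Company posts record growth and earnings beat"
score = sentiment_signal(headline)
print(score, position_from_signal(score))
```

The point of the sketch is the shape of the pipeline, not the scoring rule: swap the stub for a real model and the downstream decision logic is unchanged.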

Jason Mitchell:

Can you put what you just said in the context of a quote of yours that I found really provocative: fund managers need quants, but quants don't necessarily need fund managers. So, what happens when you introduce AI, and ultimately AGI, into this context? Who needs whom, and for what?

Dr. Mike Kollo:

It's a great question. So, my quote there was really about this idea that the world at large is falling in love with data and quantitative ways of making decisions with data, whether you're looking at customers, whether you're looking at integrated data, or whether, in this particular case, you're looking at generative AI.

What's interesting about that is that quantitative investing actually did something that very few industries ever did. The pioneers of quantitative investing, probably in the late 80s and mid 90s, took a data-driven task called investing, from beginning to end, and put it all onto a system that ran beautifully and smoothly and made mistakes like anything else, but over a very difficult topic area, in this case investment management.

And so, quantitative management was actually well ahead of its time in doing that, where many other industries still struggled with automation and how they were going to do it autonomously and so on. So, I feel like the rest of the world is catching up now. And whereas before, quants were somewhat relegated to the back corners of trading desks and investment management firms, seen as specialists who were difficult to deal with and so on, the world at large now needs a lot more quantitative professionals to go and do this work.

Will generative AI change that? I think generative AI to some extent is enabling people with non-technical skills to reach for data. It certainly can write a bit of code to pick up data for you, summarize it, help you understand it, maybe even talk to you a bit about it. Ultimately, though, its understanding and its capabilities are still somewhat narrow, and I think it's still at a point where most people go, "That's interesting, but now I'm going to get a specialist to help me understand it."

So, I still think that within the world of quantitative investing and quant skills, we'll see growth. I suppose it'll be growth more in understanding data, or a problem, or how to solve a problem with data. So, it's a more rounded, thoughtful engagement, rather than "can you just write me some SQL code and execute a particular task?" For me, it's much bigger. With AGI, or, I suppose, a multitask, more generalized intelligence, I think you enter a whole new world. And I'm not one of these people who believes that there has to be a single system here.

I think it's more likely that we'll have multiple systems linked into each other, almost like a network. And these multiple systems will have different specializations and capabilities and focus areas and training datasets and so on. So, I think at that point, as an investor, you'll have a lot more capability to reach across multiple areas. But I think the one thing that gives us a natural moat in the investment management space, which a lot of other industries don't have, is that the underlying problem is still extremely difficult.

If you have great signal generation, if you have a wonderful investment process, you might have a 51%, maybe at the extreme a 55%, probability of getting something right, and a 45% probability of getting it wrong. There are many, many other industries where humans are currently overseeing processes which, if automated or put into the hands of more sophisticated intelligent systems, could see success rates of 70%, 80%, 90%, equivalent to or higher than humans, a bit like with translation. In which case the problem is effectively solved, and people go, "Yeah, I'm pretty much done; I don't need to think about it anymore."

Whereas it feels like with investing, because of the slightly game-theoretic nature of it, I don't think it'll ever be "solved." And so, it'll continue to have that race for alpha, the intellectual race for better investing and so on. So, I suspect that quants and thoughtful people will continue to use these AI tools within the investment space and have demand for them. But the rest of the world, I think, will come sharply up that automation curve, and for a lot of industries whose problems can largely be solved through automation, I think the demand for people who understand how to use these systems, create them and explain them will rise equally sharply.

Jason Mitchell:

Let's talk about bias. Large language models, or LLMs, are pre-trained through self-supervised learning on tremendous amounts of data, but that inevitably results in the development of an internal representation of the world, or worldview. What bias problems does this represent for business models that are trying to scale globally? Algorithms are likely to produce customized views for different people. But what are the logic problems when those views conflict with the dominant worldview or truth of that system?

Dr. Mike Kollo:

It's a great question. And I think there are a couple of different ways of addressing it. So, I think firstly, all systems have a degree of bias. This is where, I suppose, language betrays us a little bit, because there are many definitions of bias. But I would suggest to you that all systems that are trained on existing historic data are reflections of that existing historic world and the norms of that world.

And so, therefore, we carry those norms with us into that dataset. And to some extent, we can jiggle around these capabilities, but we have to be very careful when we reach into these systems and try to essentially social-engineer the outcomes.

So, I'll give you a quick example. If I create not a sophisticated AI system but a very basic one, whereby I give loans to wealthy people and my loans are directly proportionate to the person's bank account balance, then you may turn around to me and say, "That system has a bias towards middle-aged white men, globally." And you'd be right, but that's not the causal reason the system is doing what it's doing. And so, confusing causation with association is another reason why we might see biases where there are none, or where none are intended.

Of course, the solution to that simplistic problem I just threw up isn't to make the system give out more loans to minorities or different genders and so on, because that's actually not an economically sound decision, and you overburden people with debt in that case. So, again, I think we need to be very careful to define bias here, separating causation from association. But let's say that we are a bit more subtle in that definition.
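The toy lending rule Mike describes can be sketched in a few lines. This is a minimal, hypothetical illustration (the numbers, group labels and function names are invented, not from the conversation): the rule never reads any demographic attribute, yet its outputs still end up associated with group membership because balance correlates with it in the historic data.

```python
# A minimal sketch of the toy lending rule described above: loan size is
# directly proportional to bank balance, and no demographic attribute is
# ever consulted. All figures and group labels here are hypothetical.

def loan_offer(balance: float, rate: float = 0.5) -> float:
    """Offer a loan proportional to the applicant's bank balance."""
    return balance * rate

# Hypothetical applicants: balance happens to correlate with group
# membership in the historic data, even though the rule never reads it.
applicants = [
    {"group": "A", "balance": 90_000},
    {"group": "A", "balance": 80_000},
    {"group": "B", "balance": 30_000},
    {"group": "B", "balance": 20_000},
]

def mean_offer(group: str) -> float:
    """Average loan offered to a group -- a measure of associational bias."""
    offers = [loan_offer(a["balance"]) for a in applicants if a["group"] == group]
    return sum(offers) / len(offers)

# Group A receives larger loans on average -- an association with group
# membership -- but the causal driver is balance, not the group itself.
print(mean_offer("A"))  # 42500.0
print(mean_offer("B"))  # 12500.0
```

An audit that only compares the two printed averages would flag this system as biased; only by inspecting the rule itself can you see that balance, not group, is the causal input, which is exactly the causation-versus-association distinction being made here.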

Let's say that by bias, what we're really talking about is things like politeness. I ask a system to write a polite email to my client. And that term, politeness, might mean very different things in, for example, Saudi Arabia than it does in Germany, than it does in Australia, than it does in Japan. The way that language is used and the way that context is used in each of these countries matters a great deal.

I think with systems like ChatGPT and others, there are enormous amounts of embedded context in these systems. Those 175 billion parameters contain a huge amount of social context that has been read in, particularly through language. I would say the way it has learned the Japanese language or the German language has been by integrating the societal norms that are contained within the language. And so, in that structure, language is not only the carrier of knowledge, but it's also the carrier of social norms and socially accepted norms, I suppose.

And I think it's harder for us to understand that speaking English, because we can name four or five countries that speak English natively but have very different social norms. But I think as soon as you start to go into different languages, very small ones, Hungarian for example, with 9 million people and very specific social norms, the biases within the Hungarian language cluster will be very different from the biases within the German cluster or the English-speaking cluster. And then, obviously, across them, they'll also have different ones as well. So, I've yet to see anybody go through that with a fine-tooth comb and say, "All right, let's think about bias in a much more global and sophisticated way," where we really start to think about nuances within language and capability.

I have obviously seen the work that's been done around certain minority groups, especially for US training data. I think those are excellent pieces of work pointing out that these things are biased. And I do believe in a reporting system that all algorithms should probably carry, which would talk about the level of bias. And I think the European legislation that recently came out, which classifies certain AI systems as high risk, certainly talked about the idea of the intended use of these technologies being a big determinant of whether something was in fact regarded as a high-risk system. And therefore, because you're trying to do something that's high risk, like, for example, vetting people for jobs or for certain kinds of financial support, you have to make sure that your data is well represented and well understood and so on. But ultimately, you also have to understand causation, which I think has been the Achilles' heel of this industry for some time.

Jason Mitchell:

What does something like ChatGPT mean for higher levels of integration of AI into our thought processes and decision-making? It almost seems like large language models are transforming the fundamental notion of a tool from something that has traditionally been execution-driven to something that is now contemplation-oriented. The idea almost reminds me of Yuval Harari's concern in Sapiens that online stores like Amazon are already profoundly shaping our notions of choice and free will. So, how do you see LLMs expanding these exchanges into personalized algorithms and ongoing conversations, with the potential for behavioral errors?

Dr. Mike Kollo:

It's a great question. So, Kissinger, along with two other authors whose names escape me right now, but very important ones, wrote a wonderful opinion piece, I believe in a journal early in the year, in which he said that this technology seeks to reshape the way the human cognitive function works, which hasn't been shaken up since the invention of the printing press. And that's a very big call to make.

That's 500 years of evolution there. But I think what he's talking about, and I do tend to agree with this, is that the reality today is that if I want to find a piece of information, I've got two choices. If I want to find out, for example, a particular date in history, I'll go to the computer, type in a particular question, and I'll get a very specific answer. As soon as I want to find out something that's a bit more controversial, where maybe there are multiple ways of doing something or multiple views of something,

I now need to read through two or three or four people or articles, and I need to think about those articles, compare them and make hypotheses as to which one's right or which one's wrong, but certainly engage in a lot of critical thinking. And that critical thinking is a muscle that I'm using to discriminate between good and bad arguments, between strong and weak arguments, between the arguments that I like and those I don't.

If I'm good at this stuff, I'll try not to think about my own biases. I'll try to think about the facts and the scientific proof behind each of those arguments. However, if I end up turning to a single voice of reason in the future, a large language model that I trust, I will ask it to please tell me the arguments for and against this particular issue. I will get a very measured, pre-baked set of reasons for both of those sides, and probably even a probabilistic truth answer at the end of it.

And I will take that and I will believe it, and I'll have very little reason to push back on it, or very little reason to make my own investigations, to go out there and really think for myself: what is the truth? How do I find the truth? How do I think about the truth? Because essentially, this other system has already served up a very nice truth to me. Now, today, you might say, "Well, we've got some of that with Twitter or social media, where I have a group of friends who all believe what I believe."

And so, therefore, they just keep saying the same thing again and again, whether it's political or otherwise, and therefore I'm just reading the same facts and believing them. But that's not the same thing. So, Brexit is a good example. Brexit is one of those topics everybody has a strong opinion on.

But when a lot of people get asked exactly what the reasons behind their views are, they're a little bit more vague, because they made the decision based upon a small number of data points, or perhaps intuition, or perhaps conversations with neighbors and so on, but they haven't necessarily thought through the entirety of that complex issue and then weighed the arguments for and against.

So, they've already made a decision, but maybe more intuitively. Whereas, I think, in the future of large language models, we could be thinking that we've made a decision based upon very cognitively sophisticated data points, which have been presented to us as the pros and cons of a complex topic, because we never created those ourselves. We never went through the process, often a painful process, of creating that worldview and understanding it and so on. I think that's really what he's talking about there.

What we're talking about here is that every time in history a new technology has come along and essentially replaced, or mostly replaced, the need to do a task, we as human beings have tended to forget how to do that task over time.

Jason Mitchell:

I want to change lanes and talk about regulation. How do we regulate generative AI, or AI more broadly, if there's no real consensus about how to regulate it, especially when tech regulation has a history of lagging the industry? The EU AI Act represents the world's first comprehensive AI law. But there has been considerable pushback from the likes of OpenAI, Google and Microsoft; indeed, Sam Altman at OpenAI has even talked about leaving the EU if the company can't comply with regulations. So, what would you suggest?

Dr. Mike Kollo:

Oh, this is a hot topic. Hot, hot, hot topic. Look, a couple of things. Firstly, I think there should be a societal conversation. I do believe that people need to be around the table. I do believe we need to have more information and more transparency as to what's happening with these systems and these technologies. What I'm less convinced of is that the right thing to do is to throw it over to government to solve for us.

I'm not necessarily anti-government, broadly speaking. But I have noticed over my years that for more nuanced and complex issues that require more domain experience, a centralized public-sector body like a government isn't necessarily the ideal savior to go to. So, you've got this new technology that's just emerged in generative AI. It's very sophisticated. A lot of things that we used to think were true are not really true anymore with these systems, including explainability and similar attributes.

And suddenly, what we're saying is, "Hey, dear government, please solve this for us so that all of our future generations will be safe and happy and equal."

That's putting a lot of pressure on government. So, the EU regulation, I think, has been sitting on the shelf for about two years. So, maybe this has prompted them to take it off the shelf, dust it off, clean it up and make it better. But certainly, these topics, I think, are really hard for government to solve. And they touch on a lot of questions around data privacy, around societal equality and so on. If you ask the average person, which I've been doing relentlessly, what they think about AI, as you mentioned before, they're afraid of it. But then, when you dig underneath it and say, "Why?" they say, "Well, is it ethical?" And you say to them, "Okay, but how do you think about that? You don't ask me whether a bank is ethical or an airline is ethical."

How do you think AI is unethical? And most of the time, you get back to a very simple topic of: I think I'll lose my job. I think somebody is going to try to manipulate me, trying to fool me and trying to dispossess me. So, I think we have laws today for protecting people's rights, consumer protection laws. We have laws today that mean that if someone tries to call up my bank and pretend to be me, they're not going to get access to my bank account. The fact that we now have AI with which that person can do a much better job of mimicking my voice, through large language models or other kinds of systems, doesn't, I think, necessarily change these protections or these ideas.

So, I suppose my sense is that we need to improve people's understanding of these technologies from the ground up. We need to teach it more at schools. We need to teach it more to executives, to boards of directors. Yes, boards of directors, people in their 50s and 60s, need to get across this technology. It's not enough to say that these individuals grew up and acquired their business skills in a very different generation. And yes, this AI is big enough and important enough to get across in schools, absolutely.

And then, I think that consumer protections, whether they're for children or against fraud or other illegal activity, need to incorporate, and they have already, but they need to continue to incorporate, the elements or tools whereby AI can be used essentially to achieve these bad outcomes. And ultimately, if we've blocked all the terrible outcomes that AI could be used for in our society, and we're now living in that gray zone of, okay, this is not illegal, this is just maybe a new way of learning, a new way of educating people, then we need to start having a little bit more open societal conversations, like: how do we feel about an AI teaching our children about history? Do we feel good or bad about that?

And so, I think it's those conversations I would love to have in the open. So far, and I do want to point this out, we haven't had just about any of those anywhere in the world. What we've had instead is very interesting, sticky products that have slowly crept into our everyday lives underneath our attention, almost hitting our subconscious, to the point where we all derive positive benefits from the fact that, I don't know, Google Maps gets us faster to our destination. But we don't really ask questions about what's involved in the AI behind these products anymore.

Jason Mitchell:

It's interesting. It almost feels like AI regulation is, ironically, in good part about labor regulation. But let me push back, because it almost feels like there are two sides to that. There are obviously protections against job loss. But there's also this point around the mechanical Turk factor, which publications like Time Magazine and The Verge have recently written about: the annotators and taskers who label much of the data used to train large language models. Is a lot of that essentially creating a permanent underclass, or subclass, of crowd workers who are subsisting on below-living wages?

Dr. Mike Kollo:

I think that's a really good question. It goes back a little bit to my question of what's acceptable in the context of capitalism. If we think about ride-sharing companies like Uber, the criticism that they've received in the past regarding their drivers being on contracts, not having the rights and protections of employees and so on, was not really a statement about technology. It was a statement about labor unions and labor rights, and how these platforms enabled, I suppose, a new kind of worker to come into existence.

The gig economy was seen as a really great thing. People could do things casually. But then, of course, when it became too mainstream, it became not very protected, not very nice. As for this idea of labeling and information: data labeling has been around ever since machine learning took off, and was certainly popular well before ChatGPT. My sense is that with regard to language, we now have enough critical mass to get people to use a system that's useful, and in using that system, to provide feedback to the creators that improves the system. So, I think we've passed the critical threshold, but it is well known.

And I did read that article with great interest, that a lot of the benefit of ChatGPT over previous systems was derived from feedback provided by very low-paid workers, feedback that went into a particular process called reinforcement learning.

But I would put this to you as well: had ChatGPT not become successful, and had those investments of half a million dollars or whatever gone to waste, should those people have been badly affected as a result? And so, the weird thing about capital is that in this particular case, there was a piece of venture capital money that was allocated to this company.

The company took a risk by going to the open labor market and choosing people to do a particular job for it, or by picking up data that nobody was really complaining about at the time. They then created something amazing and, most importantly, valuable from it. And because it's so valuable, I feel like the world is not only paying attention, but all of these hands have come out as well, and all of these different questions have come out. Well, maybe we should sue you for this. Maybe we should sue you for that. Maybe we should ask for more money here and there. Whereas had ChatGPT not been so commercially successful, I wonder how many of these things would have come out.

So, when there's money involved and I'm thinking about ethical questions, I always take a moment to wonder: if money wasn't involved, would this still be such a big ethical deal?

Jason Mitchell:

Let's finish off with one last question. How do you see open-source large language models currently being developed, evolving in the near and medium term? They're not really competitive with commercial offerings right now, despite what some would claim. Do you expect the gap to close? Frankly, the future will look very different in a world where open-source models are standard versus a world where the only viable tools are offered commercially.

Dr. Mike Kollo:

It's a great question, and I think that reflects what's currently on offer. So, I suppose you've got a class of models which have been pre-trained using available language data, and these are of a good standard but not a fantastic standard. And then you've got, as you say, a couple of commercially available models that have had either more data or better methodology applied to them to make them a little bit better for a particular use case. I think you'll see a lot more of that in the coming years, whereby more specialist large language model providers will create models for financial services, for healthcare, and for various other industries that require their own use cases and language.

And I think a lot of these will obviously be commercial, because they'll have commercial outcomes and uses. I still believe, however, that for education and for these more societally focused industries, you will have sophisticated open-source large language models, which will certainly be available to a variety of people around the world. I think the second reason why the open-source world has a very good chance here is that you can always use the commercial models to train your open-source model. And you might say, "Well, the terms and conditions don't allow that," and that's certainly true. But don't forget, the world is a big place. And there are many, many countries that will happily do that and not be under the same scrutiny that the US regulator might apply in its own jurisdiction. So, I think the so-called genie's out of the bottle. I think there's a certain capability that's been released, and now there's going to be a mushrooming of these types of models. I don't think there's going to be an intellectual moat available to many companies anymore.

I think over the coming years, as that leaked Google note suggested about the intellectual moat, it'll be more about productization and more about trust. And I think the commercial value will be captured by brands that you'll be happy to put in front of your customers, because you'll know that the digital agent representing your product to that customer has been vetted, is not going to say anything silly, is well understood and so on.

But I still think there'll be plenty of other models that will be available and useful for a variety of other use cases that are much less commercial. So, again, more in the social space, for example, charity and education and so on. The final point I want to make here, I think, is that there is a very strong push to realize the social value of these models, which we haven't so much touched on here. And that is basically our ability to put a powerful educator or a powerful therapist or a powerful positive role model in front of people, with good information and content that helps those individuals, whether they come from impoverished backgrounds or otherwise. That is the civilization-changing capability of this technology. And I certainly hope, and I trust, that many people recognize that, and we'll see a lot of that application as well as the commercial ones.

Jason Mitchell:

That's a really good way to end. So, it's been fascinating to discuss how to think through the implications of generative AI, what large language models like ChatGPT mean for the workplace, and why our focus needs to shift towards understanding the new areas of growth, industry and expertise that these systems open up. So, I'd really like to thank you for your time and insights. I'm Jason Mitchell, Head of Responsible Investment Research at Man Group, here today with Mike Kollo, CEO of Evolved Reasoning. Many thanks for joining us on A Sustainable Future, and I hope you'll join us on our next episode. Mike, thanks so much. This has been fantastic.

Dr. Mike Kollo:

Thank you, Jason. Thank you very much.
