
How AI innovation can drive 10X growth in enterprises | David Shrier interview


Head over to our on-demand library to view sessions from VB Transform 2023. Register Here


David Shrier is making a number of predictions about AI, some encouraging and some scary. And they’re worth paying attention to, like one about how AI innovation will drive 10 times greater growth for enterprises.

Shrier, who did a fireside chat with me at an AI event in San Francisco at ServiceNow, is a globally recognized expert on technology-driven innovation. He is a professor of practice in AI & Innovation with Imperial College Business School, and is a Visiting Scholar in the Department of Engineering at MIT.

And his Visionary Future venture studio invests in a portfolio of university-related spinouts spanning cognitive technologies, new financial architectures and sustainability, and is in the process of launching three new AI businesses over the next 90 days. Visionary Future published a report dubbed Artificial Intelligence Leadership Playbook.

David also has worked with over 100 governments on technology policy & regulation, and served on the parliamentary advisory committee for the EU AI Act. He has published eight books in the past eight years. His ninth book, Basic AI: A Human Guide to Artificial Intelligence, will be released by Little Brown and Harvard Business Publishing in January 2024.


David’s latest hack is ChatDave.AI (http://chatdave.ai), an LLM-based model that ingested about 600,000 words of his writing — essentially all of his books — on AI, cyber security, digital identity and blockchain.
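A chatbot like ChatDave is typically built with retrieval-augmented generation: split the author's writing into chunks, index them, and retrieve the most relevant chunks as context for each query. The article doesn't describe ChatDave's internals, so this is only a minimal, stdlib-only sketch of the retrieval step; the corpus, chunk size, and function names are all invented for illustration:

```python
import math
import re
from collections import Counter

def chunk_text(text, chunk_size=50):
    """Split a body of text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def tf_vector(text):
    """Lowercased term-frequency vector for one chunk."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    qv = tf_vector(query)
    return sorted(chunks, key=lambda c: cosine(qv, tf_vector(c)),
                  reverse=True)[:k]

# Toy corpus standing in for 600,000 words of books and articles.
corpus = ("Blockchain systems depend on distributed consensus. "
          "Digital identity lets users prove who they are online. "
          "Cyber security protects systems from attack.")
chunks = chunk_text(corpus, chunk_size=8)
top = retrieve("how does digital identity work", chunks, k=1)
```

In a production system the retrieved chunks would be prepended to the prompt of a large language model; a real deployment would also use learned embeddings rather than raw term frequencies.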

In our fireside chat, Shrier started out with a discomforting thought, as he said that the importance of generative AI is “both less than people say it is and more than people realize it is.” While some talk of the elimination of jobs is incendiary (like the British Telecom CEO saying he’ll fire 42% of staff and replace them with AI), Shrier believes generative AI will drive some very real changes in workforce and society.

Whether you’re predicting the hype will be true or false, you have to be paying attention. Shrier said that if you look at mentions of AI on earnings calls, it has skyrocketed in the last nine months. You should take that with a grain of salt.

On the other hand, his long-term view of AI is shocking. He said, “I’m going to make a forecast for you and say that by 2032, we’re going to see close to 10 percent lift in global GDP due to a combination of generative AI and older versions of AI. About $11.8 trillion of increase in global GDP by 2032 due to AI. The bear case forecast is $1.7 trillion, to give you an idea of the spread.”

He noted a colleague at Imperial studied a 2,200-person company and broke down the tasks the workers do. The analysis showed that 30% to 67% of those jobs could be replaced by AI. And the good and bad news? Quantum AI is going to take computing for AI to an incredible new level, Shrier said.

“The workforce of the future is a critical problem. AI is going to hit us in five to seven years with the same intensity that took the industrial revolution 150 years,” Shrier said.

Here’s an edited transcript of our fireside chat. And thanks to Shuchi Rana of ServiceNow for hosting us.

David Shrier and Dean Takahashi at ServiceNow AI event.

VentureBeat: I’m happy to be here with David Shrier, who came all the way from London for this. He’s a globally recognized expert on technology-driven innovation. He holds an appointment as a professor of practice in AI and innovation at Imperial College Business School. I’ll leave the rest of the introduction to him.

David Shrier: I’ve been doing corporate innovation now for more than two decades. Many of you who are interested now in enterprise innovation, you have to read Dean’s book on the Xbox. It’s one of the best case studies I’ve ever read on the gory details of how enterprise innovation happens.

But I spend a lot of time in academia, looking at trends and trying to think about how AI and other disruptive technologies are going to impact the world. I teach students. We have a class on AI startups at Imperial College. Imperial, if you don’t know it, is an excellent engineering university. We have more than a thousand AI researchers. We’re doing a lot in the field. I also run a venture studio. We build AI businesses and financial infrastructure businesses. We’ve got our sleeves rolled up in the middle of this mess.

VentureBeat: Our session here is about AI. We’ve heard a lot of fear and rumor about generative AI. Where do you think we are in terms of ground truth or things that you know to be true?

Shrier: I’d say that it’s both less than people say it is and more than people realize it is. We’re at this moment where there’s a lot of hype. We’re near the apex of the Gartner hype cycle. When you see the CEO of British Telecom going on the airwaves and saying he’ll fire 42% of his staff and replace them with AI, you sense a bit of froth. At the same time, there are some very real changes that are going to happen in the workforce and society, and generative AI is part of that.

The reason why it’s particularly interesting is that the prior waves of AI innovation affected things like manufacturing jobs and lower-level service jobs. Recently McDonald’s replaced a lot of cashiers with computer screens. But generative AI is starting to replace McKinsey consultants, Goldman Sachs bankers, Microsoft software engineers. A lot of white collar professions that were insulated from the effects of AI automation are now under threat.

Dean Takahashi and David Shrier speak to a crowd at ServiceNow in SF.

VentureBeat: After some of the trends we’ve seen that did end up being overhyped, they may have set everyone up to disbelieve anything that follows. Metaverse, blockchain, cryptocurrency. Is AI going to rise and collapse like those trends and leave us worse off? Or do you see something fundamentally different?

Shrier: I wrote my first AI software program in 1991. I was waiting a long time for people to care. But it’s important to remember that tech forecasting is subject to massive dispersal of outcomes. I’ll give two examples to illustrate why we really should pay attention to AI. In 2009, two major forecasts were established by the tech forecasting companies. First, they said that in five years, cloud was going to be a big thing. It was going to be $14 billion of annual revenue. In fact it ended up being closer to $40 billion. They got it wrong, and cloud was much bigger than people expected. On the other hand they said that virtual reality was going to be $162 billion worth of market. It ended up being $20 billion.

People are forecasting where things are going to go with AI. I’m absolutely certain they’re getting it wrong. We just don’t know in which direction. But there is something fundamentally different. What’s happening now is the accumulation of several generations of technology development. What’s different about what we see with AI now is we’re building on prior waves of automation, robotic process automation, machine learning, and data science. Now you have some applications that are coming to bear when two other trends are converging: high-performance compute and better networks. Now suddenly these AI technologies can be adopted quickly, and they’re more powerful than they ever used to be.

VentureBeat: What are some of the short-term impacts, now that this seems to work?

Shrier: For one thing, I have a new punch line for a joke. I invite all of you to go to ChatDave.AI. It’s a real website. I loaded nine of my last books into a large language model and threw it out there. I’d be curious to hear what you have to say about it.

But aside from the novelty factor, there’s going to be a lot of overreaction. If you look, for example, at mentions of AI on earnings calls, it has skyrocketed in the last nine months. A lot of CEOs feel compelled to do something, or be seen to do something, and so they’re probably going to fire more people than they should, because they want to be seen to be realizing the benefits of AI cost savings. They’ll get rid of a lot of institutional knowledge, and in the near term, meaning one to two years, a lot of companies will falter because they let go of a lot of their most valuable IP. Longer term, they’ll start to get their arms around it, but in the near term we’re going to see a lot of confusion.

VentureBeat: What impact do you think AI is going to have on the economy and society in the next five to 10 years?

Shrier: As we get smarter about how we use it, we’re going to see some fairly significant gains. Bearing in mind what I just said about the dispersal of tech forecasts, I’m going to make a forecast for you and say that by 2032, we’re going to see close to 10 percent lift in global GDP due to a combination of generative AI and older versions of AI. About $11.8 trillion of increase in global GDP by 2032 due to AI. The bear case forecast is $1.7 trillion, to give you an idea of the spread.

A plan for action on AI requires a lot of players.

VentureBeat: I remember McKinsey had their very bold report on the metaverse, that it would be a $5 trillion economy by 2030.

Shrier: Part of how we get to these numbers is we actually look at jobs. I have a colleague at Imperial who did a fairly in-depth study. He worked with a Fortune 10 company and he took 2,200 job descriptions, broke them down into tasks, and mapped them to 32 AI technologies. He was able to fairly granularly figure out what you could replace with an AI. Depending on how aggressive you are about adoption, it was anywhere from 30% to 67% of the workers at this fairly industrial company. There is some logic behind it. It’s not simply guessing.
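The approach Shrier describes, decomposing job descriptions into tasks and checking each task against a catalog of AI capabilities, can be sketched roughly as follows. The job titles, tasks, and capability list below are invented stand-ins (the actual study used 2,200 job descriptions and 32 technologies):

```python
# Hypothetical task-level automation analysis, loosely modeled on the
# study Shrier describes: break jobs into tasks, check each task
# against a catalog of AI capabilities, estimate the replaceable share.

AI_CAPABILITIES = {  # invented stand-in for the "32 AI technologies"
    "document summarization", "invoice processing",
    "demand forecasting", "visual inspection",
}

jobs = {  # invented job descriptions broken down into tasks
    "operations analyst": ["demand forecasting", "vendor negotiation",
                           "document summarization"],
    "quality inspector": ["visual inspection", "equipment repair"],
}

def automatable_share(jobs, capabilities):
    """Fraction of all listed tasks covered by some AI capability."""
    tasks = [t for ts in jobs.values() for t in ts]
    covered = sum(1 for t in tasks if t in capabilities)
    return covered / len(tasks)

share = automatable_share(jobs, AI_CAPABILITIES)
print(f"{share:.0%} of tasks could plausibly be automated")
```

The real analysis would weight tasks by time spent and match them to capabilities fuzzily rather than by exact string, which is how the study could report a range (30% to 67%) depending on how aggressive the adoption assumptions are.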

VentureBeat: I was thinking about some things I’ve heard more specifically in the gaming space. An Israeli startup told me they hired 10 AI engineers, very senior people, and they got going really fast on making their games. Normally they would surround those people with junior engineers to help them, but instead they gave them AI assistants. That doesn’t sound good for people who are graduating from college right now, looking for jobs in game development.

On the other hand, those people graduating from college now can use AI to become, in a way, one-man bands. They could bypass all of the infrastructure out there – studios and publishers – and just publish their games directly to wherever they’re going. In that sense, that’s the upside. It could create a lot of opportunity. What do you extrapolate from these kinds of small details about what could happen?

Shrier: The games example is a good metaphor for broader changes in the construct of enterprise. Today, you’re still not able to replace a senior developer with Copilot or another kind of AI system, but you can replace a bunch of junior developers. One model of management is, you have 10% of your organization that are your A players, and then you have a lot of B players who help fill out what they do. You can’t run your organization with 10,000 A players because they’ll all be fighting with each other. But with the application of these AI systems, you can compress the layer below your A players. You can have an organization that has the fighting weight of a 10,000-person company with only 1,000 employees.

It does have profound implications for the labor market. It also has profound implications for competitiveness and capital intensity. If you want to build a company, you no longer need 10,000 people to compete on a global scale.

An AI leadership playbook from Visionary Future.

VentureBeat: What’s an enterprise’s path to practical AI right now?

Shrier: The first thing is literacy. A lot of these decisions and announcements are being made without a strong enough understanding of what AI can and can’t do or how to manage it. You can’t just set it and forget it with these AI systems. The models will drift. You need to manage what you do with AI. AI security is another nightmare that no one wants to talk about. There are all sorts of interesting ways that you can attack AI systems and there’s insufficient security surrounding them. Better literacy, for sure, is one thing that corporations need.
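Model drift, which Shrier flags as a reason you can’t “set it and forget it,” can be monitored with simple distribution checks on incoming data. One common heuristic is the population stability index (PSI) between the training-time and live distributions of a feature; the data, bin count, and the conventional 0.2 alert threshold below are illustrative, not a prescription:

```python
import math

def psi(expected, actual, bins=5):
    """Population stability index between two samples of one feature.

    Bin edges come from the expected (training) sample; a small epsilon
    keeps empty bins from producing log(0).
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1  # bin index
        eps = 1e-4
        return [max(c / len(xs), eps) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [i / 100 for i in range(100)]                # uniform on [0, 1)
live_ok = [i / 100 for i in range(100)]              # same distribution
live_shifted = [0.9 + i / 1000 for i in range(100)]  # mass piled at top

assert psi(train, live_ok) < 0.1        # stable: no action needed
assert psi(train, live_shifted) > 0.2   # drifted: review or retrain
```

A governance council of the kind Shrier recommends would set the thresholds and decide what happens when an alert fires (review, retraining, or rollback).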

The second thing that I recommend is benchmarking and diagnostics. Figure out your current state of play. In a lot of organizations I work with, they don’t know what they have. They don’t know where they have AI in their business and what it’s doing. There’s no AI governance. Which brings me to the third recommendation, which is to institute an AI governance council, so that you stay on top of what’s happening in your enterprise. Finally, once you’ve gotten smarter, figured out what you’re doing, and put some governance on top of it, build your AI strategy so that you can project forward three to five years and build your business for the future.

VentureBeat: What are you worried about? What do we all need to learn more about?

Shrier: These are systems that we as human beings are designing, but not enough people are consciously aware of that in terms of how the algorithms are developed and how the data that trains those models is constructed. We begin to introduce a lot of bias into these AI systems. It’s unintentional, but it ends up having society-scale impact. One of the more famous examples was in 2016, when Google first released their image recognition system, which the primarily young male engineers, aged 20-32, trained on a database of primarily young males of western European descent, aged 20-32. “Oh, this database looks good.” They trained the model and the model was terrible about recognizing anyone who wasn’t 28 years old or a white male. There were some fairly embarrassing headlines.

That was one of the egregious examples, but this happens all the time. It happens a lot more than people are aware of. It’s important, when you implement these systems, you have a lot of consciousness around how you’re training the model, what unintended consequences it could have, and what you’re going to do to correct for it.

VentureBeat: You can look at how English is the default language for just about any chat AI system.

Shrier: Right. A language not spoken by the majority of the world’s population.

VentureBeat: How can people get up to speed very quickly on generative AI? How do you become literate?

Shrier: There’s a lot that’s going on in the blogosphere. I have a new book coming out, but as you’ve pointed out, publishing cycles being what they are, it’s not coming until January. But in the meantime there is a lot of good content online from reputable sources that can get you up the curve and keep you apprised of activity in the space.

ServiceNow had awesome tacos at the event per @ChefGreen510 on Instagram.

VentureBeat: You need to read the news on a daily basis.

Shrier: It’s happening that fast. I have a book from 2021 on AI, and most of it’s good, but it doesn’t talk about generative AI. There are a lot of statements in it that are completely wrong. Things like, “Management consultants are relatively safe from AI automation.” Oops.

VentureBeat: There is the Terminator scenario out there that everyone knows about. But how do we avoid making really stupid mistakes with AI?

Shrier: It’s helpful to take a systems thinking approach. A lot of people tend to focus just on the myopic task in front of them and not look at the bigger picture. If you’re an engineer working on an AI model, how is it being used? There are a lot of Meta engineers who’ve left and said, “I wish I’d known what I was building. I deeply regret it now.” Senior executives have gone on record with similar statements. But they could have known.

This gets back to the idea of having awareness around, what is the use case for the AI? What data is being used to train it? There’s a handful of questions you can ask that could help avoid a Terminator-like scenario. These are things that we are building. AI isn’t just happening to us. We’re making it. A lot of people in this room here are making it. We’re building the stuff. Let’s make it good.

VentureBeat: How do we also avoid paralysis when all of this stuff is changing so quickly?

Shrier: Particularly in innovation industries, it’s better to make a decision under the Pareto principle, 80-20. If it’s the wrong decision, make another decision. I see a lot of companies eaten alive because they sit and wait for the perfect analysis. By the time they have the perfect analysis, they’re Polaroid.

VentureBeat: How do we get this technology more evenly distributed to the masses?

Shrier: This is an interesting one, because on the one hand, mobile networks are connecting everyone. That’s part of how ChatGPT got 100 million users in six weeks. I know usage is down, but Threads got 100 million users in five days, I think it was? That’s something that I’m calling flash growth. We have these widely distributed networks. Smartphones are cheaper and cheaper. You can have a $25 HTC handset in Africa. It improves the onramps.

On the other hand, the backend compute is still too expensive. I think it was costing OpenAI something like 15 cents per query to ChatGPT until they tweaked the model because it was too expensive. Did everyone notice that it got a little dumber? That’s because it was too expensive when it was smart. If that’s true for, let’s say, affluent consumers, what do we do for the rest of the world? That’s something where we need to be working on pathways to affordable AI. Right now we don’t have a good answer.

VentureBeat: Do you think the infrastructure is going to keep up with all these queries we’re throwing at it? I wrote about Cerebras Systems launching a supercomputer. They build giant wafers as their processors, 400 cores on a single processor. They’re feeding that data in from 70,000 AMD Epyc processors. That’s just one machine that they think will help us keep up. Does our demand exceed what we have in terms of infrastructure? Will we melt down the planet while we’re building all this tech?

Shrier: I have good news and bad news. The good news is quantum AI. The bad news is I’m not sure when. We’re nearing some tech breakthroughs that could solve some of the compute demand challenges, but we don’t know when we’ll get them. In the meantime there are also some supply chain issues. We were making a bunch of chips in China, and then that became geopolitically risky. We shifted our supply chain to Taiwan, which turned out to be also geopolitically risky. Now we’re trying to shift it again. There have been some challenges in the global supply chain for hardware, but we’re starting to work through that.

Image representing quantum computing.

Question: Are you seeing any AI aside from generative AI that’s impactful and exciting?

Shrier: I’m working on a few, actually. One I’m very excited about is in the domain of computational chemistry. We use a digital twin as a control system in a chemical process to pull carbon directly out of factory waste and turn it into food-grade baking soda. That’s carbon utilization through AI. Another one is predicting future prices of traded securities using a hybrid of human and AI systems. We’ve figured out a way, commercializing some MIT research, to tweak prediction markets so they don’t suck. That’s two examples, neither of which are generative.

Question: You work at an intersection of industry, academia, and regulation. How do you see those three coming together?

Shrier: In the near term, unfortunately, poorly. We’re hoping to fix that. Some colleagues of mine and I are trying to put something together called the Trusted AI Institute. This spans Imperial College, Oxford, MIT, and the University of Edinburgh, as well as the OECD, the World Economic Forum, and a number of corporates. We’re trying to bring together a dialogue so that we don’t have a big mess.

Right now more than 80 governments are looking to regulate AI, and they’re all going in 80 different directions. I was on the advisory committee for the EU AI Act. That was well-reasoned, but it didn’t really take generative AI into account. They’re having to tweak it after the fact and figure out how to apply it. If we bring together all of the stakeholders, including industry and enterprise that are going to be impacted by these regulations, and put them in dialogue with the regulators, we hopefully get better regulation coming out.

This is going to be regulated. Don’t kid yourselves. This is not going to be a complete free market. Governments saw what happened with social media and they’re not happy about that. They saw what happened with cryptocurrency and they’re not happy about that. They’re getting pretty activist around AI. It’s incumbent on us to talk to them before they do something that limits innovation.

David Shrier of Imperial College.

Question: You’ve said that you run a venture studio in London. Is there anything specific to that venture studio model that allows AI innovation better than just running a startup or deploying your capital through other VC models?

Shrier: Our venture studio is a mix of passive VC investment and co-creation. We have an 81% IRR on a vintage 2020 fund – or it’s not a fund, but a pool of capital – because of that co-creation model. By engaging closely with management we’ve been able to generate superior returns. I don’t recommend it for everyone. We’re able to do it because we’re experienced operators. I’ve raised more than $600 million as an entrepreneur and taken a company through IPO. That’s different from someone who worked at McKinsey or Goldman Sachs and then became a venture investor. They may be a very good venture investor, but they don’t have the same operational background.

Adobe is capitalizing on continued interest in Terminator movies to show off its new AR app Aero, bringing a Rev9 model to life in augmented reality.
Hollywood’s portrayals of AI-controlled Terminator robots once seemed like pure science fiction, but AI and robotics have been catching up with James Cameron’s legendary nightmares.

Question: Creative people like screenwriters have had their own pushback against AI adoption. Do you think it will be a step function for certain industries? Will they leapfrog through this because of the cost advantage? Or will it be more of a gradual linear curve across industries?

Shrier: This question of adoption by industry–some industries like Hollywood are up in arms and striking in protest over AI. Other industries may seek to adopt it more rapidly. This is exactly why we’re putting the Trusted AI Institute together. The workforce of the future is a critical problem. AI is going to hit us in five to seven years with the same intensity that took the industrial revolution 150 years. Think about what the industrial revolution gave us. It gave us trains, telegraph, telephone, combustion engine, the Russian Revolution, World War I, and World War II. There’s a lot of upheaval that played out with all this technology innovation. We’re about to see a similar scale of change happen in less than a decade.

It’s going to be messy. There are ways we can try to ameliorate the impact, but what’s going on in Hollywood you can easily envision happening in other industries as well. People are correctly feeling threatened by these systems. In the same week that the SAG strike was announced, a startup here in San Francisco released a full episode of South Park that was entirely generated by AI.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.


