Episode 61

Power in the Age of AI with Author Paul Scharre

Paul Scharre, Vice President and Director of Studies at the Center for a New American Security (CNAS), joins Carolyn and Mark to dive into his newest book, Four Battlegrounds: Power in the Age of Artificial Intelligence. From the first time he recognized the power AI could hold, to the ways AI may put us on a path to global peace, Paul offers valuable insight and perspective on the field of artificial intelligence and machine learning.

Key Topics

  • [01:44] About Paul Scharre
  • [02:50] When Paul Scharre recognized the power of AI
  • [07:17] The four elements of the battlegrounds
  • [12:57] Paul Scharre's take on the technological divide in the United States, and how we can solve it
  • [20:10] The U.S.'s standing compared to nation-state adversaries
  • [26:18] Establishing globally agreed upon AI guardrails
  • [31:45] The exponential growth of AI
  • [42:12] Top requirements to achieve global peace

Quotable Quotes

On Paul's main focus when working at the Pentagon: "How can we use robotics to help create more distance between our service members and threats?" - Paul Scharre

Role of humans in AI: "Having data and computing hardware, having chips alone, doesn't get you to some meaningful AI tool. You also need the human talent" - Paul Scharre

On adversary AI advancement: "Fundamentally, both the US and China are going to have access to AI technology, to robust AI ecosystems, big tech companies, startups within each country, and the bigger challenge is going to be: How does the military take this technology, work with its civilian AI scientists, and then translate this into useful military applications?" - Paul Scharre

About Our Guest

Paul Scharre is the Vice President and Director of Studies at the Center for a New American Security. Prior to this role and becoming an award-winning author, Scharre worked in the Office of the Secretary of Defense (OSD) where he played a leading role in establishing policies on unmanned and autonomous systems and emerging weapons technologies. He led the Department of Defense (DoD) working group that drafted DoD Directive 3000.09, establishing the department’s policies on autonomy in weapon systems. He also led DoD efforts to establish policies on intelligence, surveillance, and reconnaissance programs and directed energy technologies.

Transcript

Carolyn Ford:

Welcome to Tech Transforms, sponsored by Dynatrace. I'm Carolyn Ford. Each week, Mark Senell and I talk with top influencers to explore how the US government is harnessing the power of technology to solve complex challenges and improve our lives.

Hi. Thanks for joining us on Tech Transforms. I'm Carolyn Ford here with Mark Senell. Hi, Mark.

Mark Senell:

Hey, Carolyn.

Carolyn Ford:

Today we get to talk to Paul Scharre about his latest book, Four Battlegrounds. In it, he argues a new industrial revolution has begun, and like the Industrial Revolution, artificial intelligence will touch every aspect of our lives and cause profound disruptions in the balance of global power, especially among the AI superpowers: China, the US and Europe. Four Battlegrounds defines this struggle through four key elements: data, computing power, talent and institutions.

The fourth battleground, institutions, is maybe the most critical. The ultimate global leader in AI will have institutions that effectively incorporate AI into their economy, society, and especially their military. I found Four Battlegrounds engaging and sometimes terrifying. It's truly a picture of how AI is transforming warfare, global security and the future of human freedom, and what it will take for democracies to remain at the forefront of the world order. Paul Scharre, welcome to Tech Transforms.

Paul Scharre:

Thank you. Thanks for having me on the show.

Mark Senell:

Welcome, Paul.

Carolyn Ford:

[inaudible]

Paul Scharre:

Sure. I currently work at the Center for a New American Security. We're a Washington DC-based think tank that focuses on national security issues. We're a bipartisan organization, so we have Democrats and Republicans working together on staff, even in today's polarized times in Washington. We think that's really essential for putting forward pragmatic and principled bipartisan national security solutions.

Been here for about nine years. Prior to this, I worked at the Pentagon in the Office of the Secretary of Defense in the Bush and Obama administrations, where I worked on emerging technology issues, connecting strategy and planning to the budgetary nuts and bolts of how the Pentagon actually gets things done. Prior to that, served in the US Army. I was an Army ranger, did a number of tours overseas to Iraq and Afghanistan early in the wars.

Mark Senell:

Thank you for your service.

Paul Scharre:

Well, thank you.

Carolyn Ford:

[inaudible]

Paul Scharre:

It was in the summer of [inaudible].

We're driving down the road and we've come across a roadside bomb, an improvised explosive device, or IED, as the military likes to call them with the acronym. Now, we saw it first, which is the preferred way of finding them, rather than just running into it. So, we stopped and they called up the bomb disposal team, the explosive ordnance disposal unit, to come defuse the bomb, which I thought was great because I'd seen among the units that I was with a variety of tactics, one of which was leaning outside of the vehicle and shooting at the bomb. So, calling the professionals seemed like a better tactic from my standpoint than just hanging out of the vehicle and popping off a couple rounds at it.

We had to wait a while, because these bomb disposal teams were in really high demand at the time, and they showed up in their big armored vehicle, the MRAPs, these big armored vehicles they drive around in. I'm waiting for this bomb tech to come out in the big suit that they wear, like if folks had seen The Hurt Locker, the bomb disposal suit that they wear. And I'm like, "Well, this is interesting. Let's see what this looks like in practice." Instead, out comes this little robot. And I was like, "Oh, that makes a ton of sense. Have the robot defuse the bomb. Why do you want to be on top of this bomb with your face up in it snipping the wire? That's crazy, right? Have the robot go do that."

Then the more I started thinking about it, I was like, "There's a lot of stuff in war that's really dangerous that you could have robots do." So, when I left the military and I went to work at the Pentagon, that was one of the first things that I went to work on, was how can we use robotics to help create more distance between our service members and threats? There's lots of different ways to do that on the ground, in the air, undersea. I worked on a lot of issues surrounding robotics and autonomous systems inside the military, worked at the Pentagon, and I've continued to work on those issues since I left the government. That was the topic of my first book, Army of None, which was on autonomous weapons, weapons that are making their own decisions about whom to kill.

As I was wrapping up that book, one of the things that really blew me away was all of the progress that we've seen in artificial intelligence. It was clear that the ground was shifting beneath our feet. While I thought, like a lot of experts in the defense space, that the real hot area was robotics, in fact we've seen this explosion in artificial intelligence, which is related, but there are all these other applications beyond robotics, like we're seeing with ChatGPT, that are really groundbreaking and I think are having tremendous impact on national power and global power. That's what this newest book is about.

Carolyn Ford:

Well, thank you for talking us through that story, because the way you present pretty much every chapter of the book is through a story like that, which helped me stay engaged and understand. You did a really good job of distilling a lot of really complicated information, and in Four Battlegrounds you break it down into four key elements that define the struggle for AI dominance among those superpowers.

[inaudible]

Paul Scharre:

One of the motivating questions behind this was: if artificial intelligence is like another industrial revolution, because it's a general-purpose technology much like electricity or the internal combustion engine, then how might artificial intelligence change global power? We saw that the first and second industrial revolutions changed global power in these really profound ways, where countries that industrialized faster raced ahead in terms of national economic power, and then by extension military power. They were able to translate that economic power to military advantage, and we saw that this shifted the global balance of power in Europe and globally. It enabled the United States, for example, to be a global military power in World War II by taking all of the US economic power and turning its factories to churn out tanks and airplanes.

But one of the things that the Industrial Revolution did was change the key metrics of power. Coal and steel production became key inputs of national power. Oil became this geostrategic resource that countries were willing to fight wars over. So, the question in my mind was, "Well, what is that in an age of AI? How would you begin to measure national AI power, and what are going to be the key things that countries should be competing in in an age of artificial intelligence to stay ahead?" That's what led to these four battlegrounds.

If you look at artificial intelligence as a technology, it has three key technical inputs: data, computing hardware and algorithms. Innovations like ChatGPT are algorithms that are trained on data using computing hardware. You need all three of those things to make this work. Well, the algorithms are the hardest part to control, because it's just math. If a paper is published online, then others know what that new technique is, so it's very hard to get a national advantage from algorithms.

But data and computing hardware are really important areas of competition, and whether you're a company looking to capitalize on AI technology or you're a country trying to stay in the forefront, thinking about how do you get smart about using the data that you have, acquiring data, cleaning it, getting it ready to train machine-learning systems, that's an important component of AI power, as is the computing hardware side: these very specialized chips, graphics processing units or GPUs that are used to train these AI models.

Mark Senell:

Like quantum computing?

Paul Scharre:

And the most advanced AI systems... What's that?

Mark Senell:

Like quantum computing?

Paul Scharre:

Well, what's interesting is quantum computing is very, very powerful, but at the moment it's not directly connected to a lot of these cutting-edge machine-learning systems. Things like ChatGPT use massive amounts of computing power. They're using thousands of these specialized chips running for weeks or months at a time, but not, at the moment, quantum computing. That's used for other things, but it's still more in the research space, in terms of trying to get to fundamental advances in quantum computing.

But having data and computing hardware, having chips alone, doesn't get you to some meaningful AI tool. You also need the human talent, and there's a really fierce competition for human talent globally. And you need the institutions, the organizations that can take these inputs of data, computing hardware and talent, and then turn them into useful applications. When you look historically, the institutions turn out to be really critical for who stays ahead in these technological revolutions.

Mark Senell:

Could you explain that piece of it a little bit more? Because that to me is fascinating. Are you talking governments as an institution, or are you breaking that down?

Carolyn Ford:

Universities.

Paul Scharre:

All of the above: governments, companies, universities, research labs, the networks between them. To give an example, airplanes were invented in the United States, but by the time you get to World War II, the US has no meaningful advantage in aircraft technology because it's proliferated among all of the industrial powers at the time. They all have access to aircraft. The question for militaries at the time is, what do you do with an airplane? The advantage in World War II of using air power doesn't actually come down to the technology, it comes down to the institutions that exist inside different countries to take this technology and transform it into some useful military input.

The British, for example, built aircraft carriers first. They were the first to do that. But then they fell behind the United States and Japan in aircraft carriers, not because they didn't have access to aircraft technology, they did, but because of internal bureaucratic squabbles within the British military about who was going to be responsible for air power. That's what caused them to fall behind, and a lesson that comes up repeatedly throughout the history of technology, and particularly military adoption, is that bureaucracy and culture matter a tremendous amount in terms of taking these raw technologies and turning them into actual meaningful advantage.

Mark Senell:

[inaudible]

Paul Scharre:

[inaudible] in the United States. In the [inaudible].

Well, since then, government spending on science and technology has declined pretty steadily, and the US private sector has stepped up to fill the gap. Overall national spending on research and development is still pretty high in the United States, but the balance of where it's coming from has really changed, and now the government plays a secondary role. So, one of the challenges the US military has faced is that technology like AI is coming out of the commercial sector; in some ways it's the opposite of stealth technology, which came out of secret defense labs. It's coming out of the commercial space, so the military has to import this technology, and that's been a real challenge. There have been some media headlines about tech employees saying they don't want to work with the US military. There was a backlash a few years ago. We saw employees at Google, Microsoft, Amazon all write letters saying they didn't want to work with the military.

The reality is, all those companies are still working with the military. Even Google, which discontinued its work on Project Maven, one of the DOD's AI projects, a few years ago, is going back to working with the Defense Department now. The bigger obstacles are internal issues with the defense acquisition system; all of the red tape that exists makes it hard for companies to work with the Defense Department. When I talk to companies, the big tech companies can weather some of these concerns. They can build out their compliance architectures to comply with all the red tape the government has. But for small startups, it can be lethal to their ability to innovate. That's actually, I think, the biggest challenge the government has in trying to stay at the forefront of this technology.

Mark Senell:

This came up a year ago at the Billington event, Carolyn, the whole acquisition issue. All these great technology startups just said, "Look, I can't even deal with the government. It's too hard to do business with them."

Carolyn Ford:

[inaudible]

Paul Scharre:

Well, I think it was a big mistake at the time. I think that's a great point. There was a big blow-up when a few years ago it came out that Google had been working on the Defense Department's Project Maven, which was the DOD's first project really capitalizing on AI coming out of the deep learning revolution, and using this for image processing for drone video feeds. It still is really the DOD's biggest flagship AI project. We haven't seen a lot of other major AI successes since then. I think that's a problem for the department, because that's five years old.

But at the time when it came out, I think it was a bit of a surprise. I was surprised that Google was involved in a project like this, just because of their brand and how they position themselves as a company. But a number of Google employees were really upset about this. They wrote this open letter protesting, and it became this big crisis for the company. And the DOD basically was radio silent on the whole thing. They weren't engaged in explaining what they were doing, what Project Maven is, what it's used for. And one of the challenges was that it was being used to process video feeds coming off of drones. Well, when people hear "drones," they think of drone strikes, and then it generated all this controversy of, "They're weaponizing AI and they're using AI to kill people."

None of that was true. It was using AI to process this imagery to make better sense of it. It wasn't doing targeting, it was helping humans, the intel analysts, process this information better and faster and more accurately, and get better situational awareness about what's going on. Whether you support US counterterrorism policies or oppose them, presumably having the humans making decisions be better informed would be a better thing, so they're less likely to have accidents and make mistakes. But the DOD didn't engage on this, and then sometimes the DOD actually would say things that I think were really not helpful.

Carolyn Ford:

On the autonomous [inaudible].

Paul Scharre:

Right. And part of the problem here is there is this big cultural divide. The way that the military talks about these issues works within a context of war, but maybe doesn't land very well in the civilian sector. At the time, the Secretary of Defense, Jim Mattis, was really focused on lethality. It's like, "We've got to increase lethality." This became the buzzword across the Defense Department, so everyone's talking about, "My program is lethal. Look at how lethal we're being," so they can appeal to the secretary and make sure their budget doesn't get cut or whatever.

Mark Senell:

They shouldn't say these things in public.

Paul Scharre:

Well, it doesn't resonate. People are like, "What?" They would use it to apply to all these random things. One senior defense official referred to their cloud computing initiative as increasing lethality, which is like, "What are you talking about? We're dropping servers on people?" That doesn't make any sense. But within the DOD context-

Mark Senell:

Not very effective.

Paul Scharre:

That's not going to be an effective tactic. But then you get people protesting it who were like, "Well, the DOD's war cloud," and, well, the DOD kind of encouraged that. So, I think that cultural divide could be a problem.

Mark Senell:

[inaudible]

Carolyn Ford:

I know. Who's winning?

Paul Scharre:

I think the US has perhaps a narrow lead in artificial intelligence for now, but China is a major global powerhouse in artificial intelligence. They have leading AI companies. They're slightly behind some of the US labs at the frontier of AI development, in terms of the most cutting-edge systems like ChatGPT or its successor GPT-4, but they're behind by a matter of months.

Mark Senell:

That's not much.

Paul Scharre:

The bigger challenge for militaries... What's that?

Mark Senell:

That's not much.

Paul Scharre:

It's not much, right. It's not like a decade, right?

Mark Senell:

Yeah.

Paul Scharre:

It's maybe 9, 10, 12, 18 months. It's not five years behind, for sure. And then the bigger challenge for militaries is to import this technology, and the Chinese military has the same opportunities and challenges that the US military has. They both have access to leading AI companies; Chinese companies like Baidu, Alibaba, Tencent, SenseTime and iFlytek are leading global AI firms. And there are still institutional barriers inside China toward military modernization, just like there are here in the US, but on that front it's a pretty level playing field, which I think is certainly not what the US is looking for.

Carolyn Ford:

You say that institutions, of your four categories, are probably the most important. Am I misquoting you?

Paul Scharre:

I don't know, I think they're all important. I think it depends on what we're trying to talk about. If you want military advantage, I'd say it's the most important.

Carolyn Ford:

Why? And how is it playing out for China?

Paul Scharre:

I think why is because fundamentally, both the US and China are going to have access to AI technology, to robust AI ecosystems, big tech companies, startups within each country, and the bigger challenge is going to be how does the military take this technology, work with its civilian AI scientists, and then translate this into useful military applications.

There are some things that are different about the ecosystem inside China. The government overall plays a much, much larger role in the economy and in funding science and technology inside China than it does in the United States. When I visited the offices of iFlytek, a Chinese voice AI company, I spoke with their executives. They said that about half of their revenue comes from the government. That's not true for a major US tech company. The government is frankly a small part of the business for a lot of US tech companies.

That, I think, has a number of different effects, in terms of government spending boosting the AI ecosystem inside China and changing how companies respond to government investment and government incentives. Certainly the kind of pushback you got from tech employees here in the US, you're not going to get that in China. If tech employees write a letter criticizing the government, they're going to go to jail. So, you don't have that opportunity there.

Now, at the end of the day, I'm not sure that that's the biggest barrier to adoption here in the US anyway. I think the bigger barriers are things like the defense acquisition system, which is deeply flawed. We need to be able to move faster. I guess the good news here is the Chinese system is also sclerotic and broken and flawed, so that's a place where it's a pretty level playing field, but we've got to find ways to innovate faster on the institutions so that we can take this technology and turn it to useful military advantage.

Mark Senell:

I wonder if they're tapping the defense industrial base that we have here in the US as well. It's not direct government to industry, but technology to industry to government, so they're kind of doing it on the government's behalf.

Paul Scharre:

And both China and the United States are looking to find ways to better tap into the innovation happening in their commercial ecosystems. In some ways, China is looking to the US for ideas about how to do this. A few years ago, the DOD created the Defense Innovation Unit, the Defense Department's outpost in Silicon Valley, with the goal of tapping into innovation happening in tech firms there. It's had successes; I think it's had trouble scaling across an $800 billion enterprise, but it's been able to bring in some new companies and bring them in for the DOD.

Well, China's doing the same thing. After the US did this, they created a rapid response small group in Shenzhen, a major tech hub inside China, with the goal of tapping into commercial innovation inside China. In fact, this organization was called China's DIU by commentators inside China. So, we see lots of instances of this kind of parallel innovation, whether it's in the specific technology or in organizational solutions.

Carolyn Ford:

[inaudible]

Paul Scharre:

Yeah, DIU.

Carolyn Ford:

DIU. I recall there were some other things that you mentioned about putting some guardrails around AI, and just globally agreeing on rules as a world. Is anybody really going to agree on the rules, or is AI like mutually assured destruction?

Mark Senell:

I can't wait to hear this answer.

Paul Scharre:

Maybe the danger is, it's a little of both. I think it depends on what kinds of AI applications we're talking about, because AI has a whole wide range of applications. On things like synthetic media, the use of AI to generate video, audio and images, deepfakes, that's a place where it's very much the Wild West. We're seeing lots of innovation: some of it's really cool, really creative, seeing some of these AI-generated images. Some of them are spooky and weird. Some of them are definitely problematic, whether it's people using AI to create revenge porn, pasting people's faces onto the bodies of porn stars, or people using them to potentially do political manipulation. All of these things are problematic.

There was one case where an AI-generated voice was actually already used to commit fraud, where someone called a company and used an AI-generated voice to simulate the voice of the CEO and told them, "Hey, we need to make this urgent bank transfer." The person on the line heard the sound of somebody they recognized, "Oh, it's the boss," and they made the bank transfer. That's a place where I don't know that we're ready as a society for all of the disruption that some of this AI-generated media is going to bring.

We see, for example, colleges freaking out about students using AI-generated essays. It's just the tip of the iceberg of how we live in a world where a lot of the things that we're used to seeing and hearing and believing could now be fake, could be fake in ways that are very convincing to people. And how do we think about provenance for media? How do we think about discerning what's real and what's not? There are, I think, important regulatory solutions: labeling synthetic media, requiring watermarking of synthetic media so you can go back and actually detect whether it was AI-generated or not.

Things like a Blade Runner law, and this is one of my favorite names for a solution. A few years ago, California passed a Blade Runner law, named after the movie Blade Runner, where in the movie they have these synthetic humans. The idea of a Blade Runner law is basically that if you're talking to a bot, an AI system, it has to disclose that it's a bot, so that if a company calls you and it's an AI talking to you, they have to tell you that it's an AI. That's a bot disclosure requirement. I think that's a sensible thing in a world where we have these kinds of technologies: you should know if you're talking to a human or to an AI. That's how science fiction works-

Mark Senell:

Well, criminals aren't going to do that.

Paul Scharre:

Criminals aren't going to do it. That's right. Well, and that gets into a difficult problem with a lot of these things, which is how you manage this technology, particularly given how widely available AI is. Last year, this company Stability AI released an image-generating model called Stable Diffusion, and the model that they trained had two key safety features. One, it would not generate certain types of content. If you wanted to generate an image of child pornography, it would not generate that. It had a filter. And it had a watermark that was embedded, so any image that it generated would have this hidden digital watermark so you could tell it was generated by an AI. These are both really important features.

Well, they released it open source, which meant the code is available for anybody. The first thing people did was strip off the filter and the watermarking, right?

Mark Senell:

Yeah.

Paul Scharre:

So, I think these are some genuine challenges we're going to face in society.

Carolyn Ford:

Artificial intelligence is proving to be, as we've discussed, really hard to control, more so than previous technologies. Why do you think that is? And we've already touched on this, but are we going to get a handle on it?

Paul Scharre:

I don't know whether we're going to be able to solve these problems. I'm optimistic, but I think that the concerns are real.

[inaudible]

Mark Senell:

10 billion-fold? In the number of dollars?

Paul Scharre:

10 billion times improvement.

Carolyn Ford:

Since [inaudible]?

Paul Scharre:

That's remarkable. Since [inaudible].

The pace of progress is really rapid, and that's what's causing so many AI scientists to raise an alarm saying, "Hey, we don't actually know where this is going. We don't know what we're going to see 12 months from now, much less five years from now." And we've continued to see an increasing number of AI scientists saying, "We need to slow down. We need to take a pause. We need to pay more attention to safety," because right now, no one knows how to make these systems safe.

There are some techniques for trying to do that, but they're not reliable. For example, for this latest model, GPT-4, it can do a whole wide range of things. It can write computer code, it can write poetry, it can play chess, it can synthesize chemical compounds. While a lot of these things are great and fun and valuable, some of them could be used to cause harm. It can be used for cyberattacks, and that's one big concern, that this is going to accelerate the potential for cyberattacks.

Some scientists from Carnegie Mellon also demonstrated that it could be used to synthesize chemical weapons. That's a concern, where-

Mark Senell:

Yes, that's a concern.

Paul Scharre:

Right? That's not okay. And in fact, these tools can be used to generate novel toxins that no one has developed before, and AI scientists have done this in the past. So, no one knows how to reliably put in guardrails to make the systems not do these harmful things, and that's one of many risks. There are many. But I do think that we need to take these risks seriously, and it's worth all of us taking a breather and saying, "Let's make sure these systems are safe," rather than just rushing to deploy them and potentially causing harm.

Mark Senell:

Well, it sure feels like we're in for some scary times and things that are going to happen before things get good, because it seems like individuals move a lot faster than society and governments, and there are always going to be people pushing the envelope on this stuff. At least it just feels like we're probably going to have some bumps.

Carolyn Ford:

And Mark, when we [inaudible].

Mark Senell:

I know.

Carolyn Ford:

I'm sorry, it feels like we're really close. Tell me I'm wrong, Paul.

Paul Scharre:

I don't know that I'm going to say you're wrong. I think that it's worth being concerned, and I think there are some things that seemed like science fiction 10 years ago that don't seem like science fiction now. I'm going to throw out some sci-fi references here for those who maybe follow these things. The movie Her a few years ago, where Joaquin Phoenix has got this AI girlfriend, that's basically real now. I remember when that came out thinking, "That's wild. Maybe someday that'll happen, but this is going to be a long way away." No, you can have chatbots and AI voice assistants; there's a company, Replika, that has these things that people use as girlfriends, and it's weird, but that's not science fiction anymore, that's real. A lot of the stuff in the plot of Blade Runner, no, but the idea that you can have synthetic AI entities that are interacting with people, yes; synthetic media is pretty compelling and it's pretty powerful. People aren't building physical androids that are convincing, but digital avatars that are, yes.

Mark Senell:

What about Cyberdyne Systems or something like that, where they are controlling military capabilities?

Paul Scharre:

[inaudible] certainly working on [inaudible].

One of the movies that I think about a lot is Ex Machina, where they have this female android. At the end, spoiler alert, spoiler alert, it doesn't go well for the humans, as you probably know if you're going to watch an AI film, and it turns on them. One of the things that I think is compelling about that particular film is it shows the dangers of anthropomorphizing these systems. I think actually that's a valuable thing to hold onto, which is that we're primed to have this mental model of personhood in our head that we use to interact with other people. That's how we can have a conversation, that's how we talk to strangers. We have this model in our minds of what another person is like, and we can interact with them.

We often project that model onto other non-human entities, onto our pets. People name their Roombas. That's fine when it's your dog or your cat, but with these AI systems, I do think that can be harmful. We can see this with things like ChatGPT, which is specifically designed to tap into this tendency to anthropomorphize systems; they've trained the system to act as this chatbot. You can interact with it, and it adopts this persona, and it's chatting with you. But what's crazy is it's not a human, it's actually a language model that's generating text, simulating being a chatbot.

That leads to some weird behaviors, and we've seen it go off the rails. There was an article in the New York Times a couple months ago where this New York Times reporter was talking to this bot, and it said that it was in love with him and that he should leave his wife and come be with the chatbot. It's really strange stuff. It has to do with some of the underlying architectures of what's going on, which under the surface is this weird, alien form of intelligence. It's a huge, inscrutable black box. But it doesn't think like humans at all, and that's something I think we need to keep in mind when we're talking about how we are going to use these systems and interact with them.

Mark Senell:

You address that pretty in-depth, don't you, in your book, where you talk about what happens when they don't start acting like we think they should act, like you call it alien, you know-

Paul Scharre:

Yeah.

Mark Senell:

... form. But when they, "Well, we thought it was going to act like this, but it doesn't act like that at all."

Paul Scharre:

These AI systems can be very brittle, they can do strange and surprising things, and some of it's just that it doesn't work. It just fails, because maybe the situation in which you're using it is novel, it's not in the training data.

But sometimes the AI systems do things that are quite clever, but maybe not what you intended. One of my favorite examples of this is there was an AI that was used to play Tetris. It was trained to play the Nintendo game Tetris. It wasn't very good at Tetris. One of the reasons why, as it turns out, is that you only get feedback in Tetris, in your score, when you clear a line, so you have to have some understanding of how you place the bricks to clear a line. It doesn't get very good immediate feedback, so this AI model was not particularly good. I'm not saying Tetris is that hard. People can train AI systems to play Tetris. But this particular one, no good at Tetris, it was just stacking the blocks directly on top of one another. It was terrible. But one of the things it learned to do was really quite clever: it learned to pause the game just before the last brick fell so that it would never lose.

Carolyn Ford:

Jeez.

Paul Scharre:

And that blows me away, right?

Carolyn Ford:

Yeah. But this is the way we're training these models. We're training them to win the game, so when it figures out how to win, we shouldn't be surprised.

Mark Senell:

It's WarGames.

Carolyn Ford:

It is WarGames.

Paul Scharre:

That's the thing.

Carolyn Ford:

The only way to win is not to play.

Paul Scharre:

Another good reference. Another good one, another good one. This comes up again and again, that these systems find these surprising and creative ways to hack things, some of which is good, some of which is not good. I think one of the lessons here is that intelligence is powerful. Humans have conquered the globe because of our intelligence. We don't have claws, we don't have sharp teeth, we don't have armor. We're not big and strong compared to other animals, but we're smarter, and now we're able to create these machines that have some aspect of intelligence. They don't have the ability to do all of the things that humans can do yet, although we are starting to see systems like GPT-4 that can do a number of different things that humans can do, and controlling this type of technology turns out to be really hard.

Carolyn Ford:

I told you this book terrified me a little bit, and I loved it.

[inaudible]

Paul Scharre:

Absolutely. There is a global contest underway for how we use artificial intelligence. China is pioneering a very dystopian vision of using AI for internal surveillance and repression and human rights abuses. Half of the world's one billion surveillance cameras are in China. They're using AI tools like facial recognition, gait recognition, voice recognition to monitor and surveil their citizens. And China is actively exporting this model around the world. There are at least 80 countries that have Chinese police and surveillance technology. China is working with other countries on exporting its laws and norms behind how to use this technology, the social software that underpins how AI is used.

This is all very troubling, because it has profound implications for individual liberty and human freedom. I think it's really important that democratic countries get together to push back on this creeping tide of techno-authoritarianism, to present an alternative model for how to use AI, to have a framework for using AI and other digital technologies in a way that protects individual privacy, protects freedom, that we can present to the world and say, "Here's another way to use this technology that doesn't threaten human freedoms," and then engage with other countries to shape how they're adopting it. I think the stakes are very high to make sure that we're using this technology in a way that's consistent with democratic values and protects privacy and freedom.

Carolyn Ford:

Democratic countries come together, work together, I'm just going to totally paraphrase you here, to push against evil.

Paul Scharre:

That's right, and I want to see more democratic countries come together on a whole variety of things, on laws, on technical standards for how AI is used, to make sure that it's being used in a way that doesn't threaten individual freedom and privacy.

Carolyn Ford:

Mark, do you have any last questions for Paul?

Mark Senell:

No, that's a perfect way to end it, because it's pretty profound and it's positive. I think that's a great way to end it.

Carolyn Ford:

I agree. For our listeners, read the book, because Paul actually goes into detail of how to achieve this, of how democratic countries need to come together, and how we can push against the evil. Paul, I want to give you the last word here. Do you have anything else you would like to say before we end?

Paul Scharre:

Well, I guess what I'd say is I'm excited by the fact that there's been so much attention on AI in the news and so many more people engaging on this topic. I'll talk to people and they'll say, "I heard a bunch of AI scientists are worried about AI. Should we be worried?" And I'm like, "Well, maybe a little bit." Or people are interested or concerned about tools like ChatGPT. I just would say I think that's really critically important, because we all have a stake in this world that we're building. So, we shouldn't be leaving it up to just tech companies or governments or experts; we all need to have a voice in this and be engaged, and I do think that we're going to have much better outcomes as a society if we're able to bring together a whole diverse set of voices for how we use this technology.

Carolyn Ford:

I love that. We cannot just let ourselves be assimilated like the Borg.

Paul Scharre:

Exactly, exactly.

Mark Senell:

This has been wonderful, Paul. I really appreciate your insight today on a lot of really thought-provoking topics.

Carolyn Ford:

Thank you very much.

Mark Senell:

Thank you so much.

Paul Scharre:

Well, thank you. Thank you, Mark and Carolyn for having me on and for this discussion. I really appreciate it.

Mark Senell:

I don't even know if we have time for the fun question. This has been so interesting to talk about.

Carolyn Ford:

The whole thing has been fun. So, we're going to let you go, Paul. Thank you so much again for taking the time. Thank you, listeners. Share this episode, smash that like button, and we will talk to you next time on Tech Transforms.

Thanks for joining Tech Transforms, sponsored by Dynatrace. For more Tech Transforms, follow us on LinkedIn, Twitter and Instagram.

About the Podcast

Tech Transformed
Tech Transforms has a new home, visit us here https://techtransforms.fireside.fm/

About your hosts


Carolyn Ford

Carolyn Ford is a passionate leader, doer, adventurer, guided by her father's philosophy: "leave everything and everyone better than you found them."
She brings over two decades of marketing experience to the intersection of technology, innovation, humanity, and the public good.

Carolyn Ford is passionate about connecting with people to learn how the power of technology is impacting their lives and how they are using technology to shape the world. She has worked in high tech and federal-focused cybersecurity for more than 15 years. Prior to co-hosting Tech Transforms, Carolyn launched and hosted the award-winning podcast "To The Point Cybersecurity".