Remarks by APNSA Jake Sullivan on AI and National Security

National Defense University
Washington, D.C.

MR. SULLIVAN: Good morning, everyone. And thanks so much for that introduction, Lieutenant Colonel Grewal. And I also want to thank the National War College for bringing us all together today. And I want to thank my colleagues from across the intelligence community and DOD, as well as from the NSC, who have really put their blood, sweat, toil, and tears into producing this National Security Memorandum on Artificial Intelligence that we’re rolling out today.

Most importantly, though, I want to thank all of you for allowing me to be here to say a few words this morning. It’s truly an honor for me to be here. And, in fact, there’s a reason I wanted to address this specific group of leaders.

More than 75 years ago, just a few months after the Second World War ended, then-General Dwight Eisenhower wrote a letter to his fellow military leaders. All around them, the world was changing. Nazi Germany had fallen. Nations were rebuilding. The Cold War was just beginning. And people everywhere were reckoning with the horrors of the Holocaust.

It was a new era, one that demanded new strategies, new thinking, and new leadership.

So, General Eisenhower pitched an idea: a National War College. He didn’t know where it would be or what exactly it would look like, but he knew America needed a school whose primary function would be, as he wrote, quote, “to develop doctrine rather than to accept and follow prescribed doctrine.” Develop, not accept and follow.

That idea has guided this institution ever since. In the aftermath of the Second World War, it led your forebears to reimagine our decision-making apparatus, including the establishment of the National Security Council. Thanks for that. (Laughter.)

During the Cold War, it led them to develop new strategies to advance our national security, from containment to détente and beyond.

And throughout the global war on terror, your predecessors pioneered new thinking and new tactics that have helped keep our nation safe.

Now it’s your turn.

We’re in an age of strategic competition in an interdependent world, where we have to compete vigorously and also mobilize partners to solve great challenges that no one country can solve on its own.

In this age, in this world, the application of artificial intelligence will define the future, and our country must once again develop new capabilities, new tools, and, as General Eisenhower said, new doctrine, if we want to ensure that AI works for us, for our partners, for our interests, and for our values, and not against us.

That’s why I’m proud to announce that President Biden has signed a National Security Memorandum on Artificial Intelligence. This is our nation’s first-ever strategy for harnessing the power and managing the risks of AI to advance our national security.

So, today I want to talk to you about what’s brought us to this moment and how our country needs all of you to help us meet it.

Like many of you here at the War College, I’ve had to grapple with AI and its implications for national security since I became National Security Advisor: with what makes it so potentially transformative, and with what makes it different from other technological leaps our country has navigated before, from electrification to nuclear weapons to space flight to the Internet.

And I’ve seen three key things in particular.

First, the sheer speed of the development of artificial intelligence. The technical frontier of AI continues to advance rapidly — more rapidly than we’ve seen with other technologies.

Let’s just take protein folding as an example. Discovering a protein’s structure, or how it folds, is essential for understanding how it interacts with other molecules, which can solve fundamental puzzles in medicine and accelerate the development of treatments and cures. Up until 2018, humanity had collectively discovered the structure of around 150,000 proteins, largely through manual efforts, sometimes after years of painstaking work using advanced microscopes and x-rays.

Then, Google DeepMind showed that AI could predict the structure of a protein without any wet lab work. By 2022, four years later, that same team released predicted structures for almost every protein known to science, hundreds of millions in all.

Just a few weeks ago, the scientists involved won a Nobel Prize.

Now, imagine that same pace of change in the realms of science that impact your work as national security leaders every day.

Imagine how AI will impact areas where we’re already seeing paradigm shifts, from nuclear physics to rocketry to stealth, or how it could impact areas of competition that may not have yet matured, that we actually can’t even imagine, just as the early Cold Warriors could not really have imagined today’s cyber operations.

Put simply, a specific AI application that we’re trying to solve for today in the intelligence or military or commercial domains could look fundamentally different six weeks from now, let alone six months from now, or a year from now, or six years from now. The speed of change in this area is breathtaking.

This is compounded by huge uncertainty around AI’s growth trajectory, which is the second distinctive trait.

Over the last four years, I’ve met with scientists and entrepreneurs, lab CEOs and hyperscalers, researchers and engineers, and civil society advocates. And throughout all of those conversations, there’s clear agreement that developments in artificial intelligence are having a profound impact on our world.

But opinions diverge when I ask them, “What exactly should we expect next?” There’s a spectrum of views. At one end, some experts believe we’ve barely kicked off the AI revolution, that AI capabilities will continue to grow exponentially, building on themselves to unlock paths we didn’t know existed, and that this could happen fast, well within this decade. And if they’re right, we could be on the cusp of one of the most significant technological shifts in human history.

At the other end of the spectrum is the view that AI development isn’t in a growth spurt, that it has plateaued or soon will, or at least that the pace of change will slow considerably and more dramatic breakthroughs are further down the road.

Experts who believe this aren’t saying AI won’t be consequential, but they argue that the last-mile work of applying the capabilities that are already here is what will matter most, not just now but for the foreseeable future.

These views are vastly different, with vastly different implications.

Now, innovation has never been predictable, but the degree of uncertainty in AI development is unprecedented. The size of the question mark distinguishes AI from many other technological challenges our government has had to face and make policy around. And that is our responsibility.

As National Security Advisor, I have to make sure our government is ready for every scenario along the spectrum. We have to build a national security policy that will protect the American people and the American innovation ecosystem, which is so critical to our advantage, even if the opportunities and challenges we face could manifest in fundamentally different ways. We have to be prepared for the entire spectrum of possibilities of where AI is headed in 2025, 2027, 2030, and beyond.

Now, what makes this even more difficult is that private companies are leading the development of AI, not the government. This is the third distinctive feature.

Many of the technological leaps of the last 80 years emerged from public research, public funding, public procurement. Our government took an early and critical role in shaping developments, from nuclear physics and space exploration to personal computing to the Internet.

That’s not been the case with most of the recent AI revolution. While the Department of Defense and other agencies funded a large share of AI work in the 20th century, the private sector has propelled much of the last decade of progress. And in many ways, that’s something to celebrate. It’s a testament to American ingenuity, to the American innovation system that American companies lead the world in frontier AI. It’s America’s special sauce. And it’s a good thing that taxpayers don’t have to foot the full bill for AI training costs, which can be staggeringly high.

But those of us in government have to be clear-eyed about the implications of this dynamic as both stewards and deployers of this technology.

Here, two things can be true at the same time.

On the one hand, major technology companies that develop and deploy AI systems, by virtue of being American, have given America a real national security lead, a lead that we want to extend. And they’re also going head-to-head with PRC companies like Huawei to provide digital services to people around the world. We’re supporting those efforts, because we want the United States to be the technology partner of choice for countries around the world.

On the other hand, we need to take responsible steps to ensure fair competition and open markets; to protect privacy, human rights, civil rights, civil liberties; to make sure that advanced AI systems are safe and trustworthy; to implement safeguards so that AI isn’t used to undercut our national security.

The U.S. government is fully capable of managing this healthy tension, as long as we’re honest and clear-eyed about it. And we have to get this right, because there is probably no other technology that will be more critical to our national security in the years ahead.

Now, when it comes to AI and our national security, I have both good news and bad news. The good news is that thanks to President Biden and Vice President Harris’s leadership, America is continuing to build a meaningful AI advantage.

Here at home, President Biden signed an executive order on the development and use of AI, the most comprehensive action that any country in the world has ever taken on AI.

We’ve worked to strengthen our AI talent, hardware, infrastructure, and governance. We’ve attracted leading researchers and entrepreneurs to move to and remain in the United States. We’ve unleashed tens of billions of dollars in incentives to catalyze domestic leading-edge chip production. We’ve led the world in issuing guidance to make sure that AI development and use is safe, secure, and trustworthy.

And as we’ve done all of this, we’ve scrutinized AI trends, not just frontier AI, but also the AI models that will proliferate most widely and rapidly around the world. And we’re working to enhance American advantages across the board.

But here’s the bad news: Our lead is not guaranteed. It is not pre-ordained. And it is not enough to just guard the progress we’ve made, as historic as it’s been. We have to be faster in deploying AI in our national security enterprise than America’s rivals are in theirs. They are in a persistent quest to leapfrog our military and intelligence capabilities. And the challenge is even more acute because they are unlikely to be bound by the same principles and responsibilities and values that we are.

The stakes are high. If we don’t act more intentionally to seize our advantages, if we don’t deploy AI more quickly and more comprehensively to strengthen our national security, we risk squandering our hard-earned lead.

Even if we have the best AI models but our competitors are faster to deploy, we could see them seize the advantage in using AI capabilities against our people, our forces, and our partners and allies. We could have the best team but lose because we didn’t put it on the field.

We could see advantages we built over decades in other domains, like space and undersea operations, be reduced or eroded entirely by AI-enabled technology.

And for all our strengths, there remains a risk of strategic surprise. We have to guard against that — which is why I’m here today.

Our new National Security Memorandum on AI seeks to address exactly this set of challenges. And as rising national security leaders, you will be charged with implementing it with no time to lose.

So, in the balance of my remarks, I want to spend a few minutes explaining the memorandum’s three main lines of effort: securing American leadership in AI, harnessing AI for national security, and strengthening international AI partnerships.

First, we have to ensure the United States continues to lead the world in developing AI. Our competitors also know how important AI leadership is in today’s age of geopolitical competition, and they are investing huge resources to seize it for themselves. So we have to start upping our game, and that starts with people.

America has to continue to be a magnet for global scientific and tech talent. As I noted, we’ve already taken major steps to make it easier and faster for top AI scientists, engineers, and entrepreneurs to come to the United States, including by removing friction in our visa rules to attract talent from around the world.

And through this new memorandum, we’re taking more steps, streamlining visa processing wherever we can for applicants working with emerging technologies. And we’re calling on Congress to get in the game with us and staple more green cards to STEM diplomas, as President Biden has been pushing to do for years.

So, that’s the people part of the equation.

Next is hardware and power. Developing advanced AI systems requires large volumes of advanced chips, and keeping those AI systems humming requires large amounts of power.

On chips, we’ve taken really significant steps forward. We passed the CHIPS and Science Act, making a generational investment in our semiconductor manufacturing, including the leading-edge logic chips and the high-bandwidth memory chips needed for AI.

We’ve also taken decisive action to limit strategic competitors’ access to the most advanced chips necessary to train and use frontier AI systems with national security implications, as well as the tools needed to make those chips.

The National Security Memorandum builds on this progress by directing all of our national security agencies to make sure that those vital chip supply chains are secure and free from foreign interference.

On power, the memorandum recognizes the importance of designing, permitting, and constructing clean energy generation facilities that can serve AI data centers so that the companies building world-leading AI infrastructure build as much as possible here in the United States in a way that is consistent with our climate goals.

One thing is for certain: If we don’t rapidly build out this infrastructure in the next few years, adding tens or even hundreds of gigawatts of clean power to the grid, we will risk falling behind.

Finally, there’s funding for innovation. This fiscal year, federal funding for non-defense R&D declined significantly. And Congress still hasn’t appropriated the science part of the CHIPS and Science Act, even while China is increasing its science and technology budget by 10 percent year over year. That could mean critical gaps in AI R&D.

We want to work with Congress to make sure this and the other requirements within the AI National Security Memorandum are funded. And we’ve received strong bipartisan signals of support for this from the Hill. So, it’s time for us to collectively roll up our sleeves on a bicameral, bipartisan basis and get this done.

And we also have to be aware that our competitors are watching closely, not least because they would love to displace our AI leadership. One playbook we’ve seen them deploy again and again is theft and espionage. So, the National Security Memorandum takes this head on. It establishes addressing adversary threats against our AI sector as a top-tier intelligence priority, a move that means more resources and more personnel will be devoted to combating this threat.

It also directs people across government, like so many of you, to work more closely with private sector AI developers to provide them with timely cybersecurity and counter-intelligence information to keep their technology secure, just as we’ve already worked to protect other elements of the U.S. private sector from threats to them and to our national security.

The second pillar focuses on how we harness this advantage, and make it an enduring advantage, to advance our national security.

As National Security Advisor, I see how AI is already poised to transform the national security landscape. And from where you sit, as warfighters, as diplomats, as intelligence officers, I’m sure you’re seeing it too. Some change is already here: AI is reshaping our logistics, our cyber vulnerability detection, and how we analyze and synthesize intelligence. Some change we see looming on the horizon, including AI-enabled applications that will transform the way our military trains and fights. But some change, as I said earlier, we truly cannot predict, in both the form it will take and how fast it will come.

Bottom line: Opportunities are already at hand, and more soon will be, so we’ve got to seize them quickly and effectively, or our competitors will seize them first.

That means all of us in the national security enterprise have to become much more adept users of AI. It means we need to make significant technical, organizational, and policy changes to ease collaboration with the actors that are driving this development. And the National Security Memorandum does just that. It directs agencies to propose ways to enable more effective collaboration with non-traditional vendors, such as leading AI companies and cloud computing providers.

In practice, that means quickly putting the most advanced systems to use in our national security enterprise soon after they’re developed, just as many in private industry are doing. We need fast adoption of these systems, which iterate and advance every few months.

Next, today’s AI systems are more generally capable than the bespoke and narrow tools that dominated prior AI. And this general capability is a huge advantage. But the flipside is they cost much more to train and run. So we’re pushing agencies to use shared computing resources to accelerate AI adoption, lower cost, and learn from one another as they responsibly address a wide range of threats, from nuclear security to biosecurity to cybersecurity.

And I emphasize that word, “responsibly.” Developing and deploying AI safely, securely, and, yes, responsibly, is the backbone of our strategy. That includes ensuring that AI systems are free of bias and discrimination.

This is profoundly in our self-interest. One reason is that even if we can attract AI talent or foster AI development here in the United States, we won’t be able to lead the world if people do not trust our systems. And that means developing standards for AI evaluations, including what makes those systems work and how they might fail in the real world. It means running tests on the world’s most advanced AI systems before they’re released to the public. And it means leading the way in areas like content authentication and watermarking so people know when they’re interacting with AI, as opposed to interacting with, for example, a real human.

To do all of that, we have to empower and learn from a full range of AI firms, experts, and entrepreneurs, which our AI Safety Institute is now doing on a daily basis.

Another reason we need to focus so much on responsibility, safety, and trustworthiness is a little bit counterintuitive. Ensuring security and trustworthiness will actually enable us to move faster, not slow us down. Put simply, uncertainty breeds caution. When we lack confidence about safety and reliability, we’re slower to experiment, to adopt, to use new capabilities, and we just can’t afford to do that in today’s strategic landscape.

That’s why our memorandum establishes the first-ever government-wide framework of AI risk management commitments in the national security space: commitments like refraining from uses that depart from our nation’s core values, avoiding harmful bias and discrimination, maximizing accountability, and ensuring effective and appropriate human oversight.

As I said, preventing misuse and ensuring high standards of accountability will not slow us down; it will actually do the opposite. And we’ve seen this before with technological change.

During the early days of the railroads, for example, the establishment of safety standards enabled trains to run faster thanks to increased certainty, confidence, and compatibility.

And I also want to note we’re going to update this framework regularly. This goes back to the uncertainty I mentioned earlier. There may be capabilities or novel legal issues that just haven’t emerged yet. We must and we will ensure our governance and our guardrails can adapt to meet the moment, no matter what it looks like or how quickly it comes.

Finally, we need to do all of this in lockstep with our partners, which is the third pillar of our memorandum.

President Biden often says we’re going to see more technological change in the next 10 years than we saw in the last 50. He’s right. And it doesn’t just apply to our country, but to all countries.

And when it comes to AI specifically, we need to ensure that people around the world are able to seize the benefits and mitigate the risks. That means building international norms and partnerships around AI.

Over the last year, thanks to the leadership of President Biden and Vice President Harris, we’ve laid that foundation. We developed the first-ever International Code of Conduct on AI with our G7 partners. We joined more than two dozen nations at the Bletchley and Seoul AI summits to outline clear AI principles.

We released our Political Declaration on the Military Use of AI, which more than 50 countries have endorsed, to outline what constitutes responsible practices for using AI in the military domain.

And we sponsored the first-ever U.N. General Assembly Resolution on AI, which passed unanimously, including with the PRC, I might add, as a co-sponsor.

It makes clear that, as I said, we can both seize the benefits of AI for the world and advance AI safety.

Let me take just a moment to speak about the PRC specifically.

Almost a year ago, when President Biden and President Xi met in San Francisco, they agreed to a dialogue between our two countries on AI risk and safety. And this past May, some of our government’s top AI experts met PRC officials in Geneva for a candid and constructive initial conversation.

I strongly believe that we should always be willing to engage in dialogue about this technology with the PRC and with others to better understand risks and counter misperceptions.

But those meetings do not diminish our deep concerns about the ways in which the PRC continues to use AI to repress its population, spread misinformation, and undermine the security of the United States and our allies and partners.

AI should be used to unleash possibilities and empower people. And nations around the world, especially developing economies, want to know how to do that. They don’t want to be left behind, and we don’t want that either.

Our national security has always been stronger when we extend a hand to partners around the world. So, we need to get the balance right: protecting cutting-edge AI technologies on the one hand, while promoting AI technology adoption around the world on the other.

Protect and promote. We can, we must, and we are doing both.

So let me briefly preview for you a new global approach to AI diffusion: how AI can spread around the world responsibly, in a way that enables AI for good while protecting against downside risks.

This new global approach complements the memorandum we’re rolling out today, and it grows out of extended conversations in the Situation Room and with allies, industry, and partners over the last year.

The finer details will come out later, but I can say now that this approach will give the private sector more clarity and predictability as companies plan to invest hundreds of billions of dollars globally.

This includes how our government will manage the export of the most advanced chips necessary to develop frontier models; how we will ensure broad access to substantial AI computing power that lies behind the bleeding edge but could nonetheless transform health, agriculture, and manufacturing around the world; how we will help facilitate partnerships between leading American AI firms and countries around the world that want to be part of the AI revolution; and how we will set safety and security standards for these partnerships to ensure we effectively protect against risks while unleashing new opportunities.

These partnerships are critical. They’re fundamental to our leadership. We know that China is building its own technological ecosystem with digital infrastructure that won’t protect sensitive data, that can enable mass surveillance and censorship, that can spread misinformation, and that can make countries vulnerable to coercion.

So, we have to compete to provide a more attractive path, ideally before countries go too far down an untrusted road from which it can be expensive and difficult to return. And that’s what we’re doing.

We’ve already developed new partnerships that will support economic progress, technological innovation, and indigenous AI ecosystems, from Africa to Asia to the Middle East and beyond. And we’re going to keep at it, with a clear and rigorous approach to AI diffusion.

Now, I do want to make sure I leave time for our conversation, so let me just close with this:

Everything I just laid out is a plan, but we need all of you to turn it into progress. We need you, and leaders across every state and every sector, to adopt this technology to advance our national security and to do it fast.

We need you to ensure that our work aligns with the core values that have always underpinned American leadership.

And as President Eisenhower said, we need you to constantly update and develop our AI doctrine in the years ahead.

It will be hard. It will require constant thinking, constant rethinking, constant innovation, constant collaboration, and constant leadership. But with the past as our proof, I know that everyone in this room and all across our country is up for it. And together, we will win the competition for the 21st century.

So, thank you, and I look forward to the conversation. (Applause.)
