Majority Leader Schumer Delivers Remarks To Launch SAFE Innovation Framework For Artificial Intelligence At CSIS

Washington, D.C. – Senate Majority Leader Chuck Schumer (D-NY) joined the Center for Strategic and International Studies (CSIS) to speak on SAFE Innovation in the AI Age and outline his vision for how the Senate can harness AI’s potential and protect our society from its potential harms. You can view a one-pager on the SAFE Innovation Framework here. Below are Senator Schumer’s remarks as prepared for delivery, which can also be viewed here:

It is wonderful to be here at CSIS, and it is a pleasure to be in a room full of so many leaders from the world of innovation, tech, business, academia, labor, civil rights, and the arts.

Friends, we come together at a moment of revolution. Not one of weapons or political power but a revolution in science and understanding that will change humanity.

It has been said that what the locomotive and electricity did for human muscle a century and a half ago, Artificial Intelligence is doing for human knowledge today, as we speak. But the effect of AI will be more profound, and dramatic change will certainly occur over a much shorter period of time.

The idea of AI is not new. Supercomputers with human-like behavior have long been with us in movies, science fiction, and art. But now, what once lived only in our imaginations exists in our day to day lives. 

Thanks to remarkable innovations in computing power, in the speed of our semiconductors, in the size of our data sets, and in fields like Machine Learning and Neural Networks, we can say with confidence that the age of AI is here and here to stay.

And we are still just at the beginning. Some experts predict that in just a few years the world could be wholly unrecognizable from the one we live in today. 

That is what AI is: World-altering.

Change at such blistering speed may seem frightening to some—but if applied correctly, AI promises to transform life on Earth for the better. It will reshape how we fight disease, tackle hunger, manage our lives, enrich our minds, and ensure peace.

But there are real dangers too: job displacement, misinformation, a new age of weaponry, and the risk of being unable to manage this technology altogether.

We have no choice but to acknowledge that AI’s changes are coming, and in many cases are already here. We ignore them at our own peril. Many want to ignore AI because it’s so complex. But with AI, we cannot be ostriches sticking our heads in the sand.

The question is: what role do Congress and the federal government have in this new revolution? Are we capable of playing a proactive role in promoting AI’s growth? Can Congress work to maximize AI’s benefits, while protecting the American people—and all of humanity—from its novel risks?

I think the answer to these questions is an emphatic yes. It must be. Because if the government doesn’t step in, who will fill its place?

Individuals and the private sector can’t do the work of protecting our country. Even if many developers have good intentions, there will always be rogue actors, unscrupulous companies, and foreign adversaries that seek to harm us. Companies may not be willing to insert guardrails on their own, certainly not if their competitors won’t be forced to do so.

That is why we’re here today: I believe that Congress must join the AI revolution, and we need your help.

AI is unlike anything Congress has dealt with before. It’s not like labor or healthcare or defense where Congress has a long history we can work off of. Experts aren’t even sure which questions policymakers should be asking. In many ways, we’re starting from scratch. But Congress is up to the challenge.

The last two years in Congress have been the most successful of the last thirty years: historic infrastructure legislation, the largest clean energy package ever, CHIPS and Science, and the American Rescue Plan. Many of these were done with bipartisan support, under my leadership as Majority Leader.

So don’t count Congress out!

I know many of you have spent months calling on us to act. I hear you loud and clear. Many of my colleagues—from both sides of the aisle—hear you loud and clear.

So today, I want to outline a two-part proposal to move us forward on AI: one part on framework, one part on process.

First, Congress needs a framework for action…what should our framework be? What issues within AI should we look at to prepare legislation?

After months of talks with over 100 AI developers, executives, scientists, researchers, workforce experts, and advocates, this morning I’d like to share my proposed framework for action.

I call it the SAFE Innovation Framework for AI policy.

The SAFE Innovation Framework. I call it that because Innovation must be our north star. The U.S. has always been a leader in innovating on the greatest technologies that shape the modern world.

But if people think AI innovation is not done safely, if there are not adequate guardrails in place, it will stifle or even halt innovation altogether.

So it is SAFE innovation that we must seek.

Second, Congress will also need to invent a new process to develop the right policies to implement our framework. AI moves so quickly, changes at near-exponential speed, and has so little legislative history behind it that a new process is called for. Traditional Committee hearings play an essential role, but on their own they won’t suffice. We will need help from creators, innovators, and experts in the field.

That is why later this year, I will invite the top AI experts to come to Congress and convene a series of first-ever AI Insight Forums, for a new and unique approach to developing AI legislation. I will talk a little more about these forums in a moment, but let’s return to the framework first.

Let me repeat: our framework must never lose sight of what must be our north star—innovation.

America is by nature a country of innovators: We produced over 590,000 patent applications in 2021, and 60% of the top 100 companies by market capitalization are American. It was America that revolutionized the automobile. We were the first to split the atom, to land on the moon, to unleash the internet, and to create the microchip that made AI possible.

AI could be our most spectacular innovation yet, a force that could ignite a new era of technological advancement, scientific discovery, and industrial might.

So we must come up with a plan that encourages—not stifles—innovation in this new world of AI, and that means asking some important questions:

One: what is the proper balance between collaboration and competition among the entities developing AI?

Two: how much federal intervention, on the tax and spending side, must there be? Is federal intervention to encourage innovation necessary at all, or should we let the private sector develop on its own?

Three: what is the proper balance between private AI systems and open AI systems?

And finally: how do we ensure innovation and competition are open to everyone, not just a few big, powerful companies? The government must play a role in ensuring open, free, and fair competition.

In short: the first issue we must tackle is encouraging, not stifling, innovation.

But if people don’t think innovation can be done safely, that will slow AI’s development and even prevent us from moving forward. So my SAFE Innovation framework calls for Security, Accountability, protecting our Foundations, and, lastly, Explainability, one of the most important and most difficult technical issues in all of AI.

First comes Security—for our country, for American leadership, and for our workforce. We do not know what Artificial Intelligence will be capable of two years from now, fifty years from now, one hundred years from now. In the hands of foreign adversaries—especially autocracies—or domestic rogue groups interested in extortionist financial gain or political upheaval, the dangers of AI could be extreme.

We need to do everything we can to instill guardrails that make sure these groups cannot use our advances in AI for illicit and bad purposes.

But we also need security for America’s workforce, because AI—particularly generative AI—is already disrupting the way tens of millions of people make a living. At greatest risk are those who live paycheck to paycheck: millions of low-income workers, many from communities of color, could be displaced. Trucking, manufacturing, and energy production could be next. And rest assured, those with college educations and advanced degrees will be impacted too.

AI will reshape the knowledge economy—impacting workers in sales, marketing, coding, software development, banking, law, and other skilled occupations. Many assumed these jobs would always be safe, but that is not the case. The erosion of the middle class—already one of America’s most serious problems—could get much worse with AI if we ignore it and don’t take measures to prevent job loss or the misdistribution of income.

Globalization is a good cautionary tale. Many heralded it as a turning point for prosperity and growth. Decades later most people agree that globalization, on balance, probably increased wealth, but at the cost of tens of millions of jobs shipped overseas. While some communities flourished, others were hollowed out, and remain so even to this day.

Congress was far too slow to aid Americans who needed help with these changes. Let us not repeat the same mistakes when it comes to AI.

To prevent that from happening, we will need everyone at the table: workers, businesses, educators, researchers. This is going to be a huge challenge, and all of us must be part of the solution.

AI policies must also promote Accountability.

Otherwise what will stop companies from using AI to track our kids’ movements, inundate them with harmful advertisements, or damage their self-image and mental health? What’s to stop a shady business from using AI to exploit people with addictions, or financial problems, or serious mental illnesses? How do we make sure AI isn’t used to exploit workers or encourage racial bias in hiring?

And how can we protect the IP of our innovators, our content creators, our musicians and writers and artists? Their ideas are their livelihoods. So when someone uses another person or another company’s IP, we need accountability to ensure they get their due credit and compensation.

Without guardrails in place regulating how AI is developed, audited, and deployed—and without making clear that certain practices should be out of bounds—we risk living in a total free-for-all, which nobody wants.

Nor do we want a future where AI eats away at America’s Foundations.

On its own, AI neither supports nor opposes the causes of human liberty, civil rights, or justice. If we don’t program these algorithms to align with our values, they could be used to undermine our democratic foundations, especially our electoral process.

If we don’t set the norms for AI’s proper uses, others will. The Chinese Communist Party, which has little regard for the norms of democratic governance, could leap ahead of us and set the rules of the game for AI. Democracy could enter an era of steep decline.

And there is a more immediate problem: AI could be used to jaundice and even totally discredit our elections, as early as next year. We could soon live in a world where political campaigns regularly deploy totally fabricated—yet totally believable—images and footage of Democratic or Republican candidates, distorting their statements and greatly harming their election chances. Chatbots can now be deployed at a massive scale to target millions of individual voters for political persuasion.

Once damaging misinformation is sent to a hundred million homes, it is hard to put the genie back in the bottle. What if foreign adversaries embrace this technology to interfere in our elections?

This is not about imposing one viewpoint, but about ensuring people can engage in democracy without outside interference. This is one of the reasons we must move quickly. We should develop the guardrails that align with democracy and encourage the nations of the world to use them. Without taking steps to make sure AI preserves our country’s foundations, we risk the very survival of our democracy.

Finally, Explainability: one of the thorniest and most technically complicated issues we face—but perhaps the most important of all.

Explainability is about transparency. When you ask an AI system a question and it gives you an answer—perhaps an answer you weren’t expecting—you want to know where that answer came from.  You should be able to ask “why did AI choose this answer, over some other answer that could have also been a possibility?” And it should be done in a simple way, so all users can understand how these systems come up with answers.

Congress should make this issue a top priority, and companies must take the lead in helping us solve this problem. Because without explainability, we may not be able to move forward.

If the user of an AI system cannot determine the source of the sentence or paragraph or idea—and can’t get some explanation of why it was chosen over other possibilities—then we may not be able to accomplish our other goals of accountability, security, or protecting our foundations.

Explainability is thus perhaps the greatest challenge we face on AI. Even the experts don’t always know why these algorithms produce the answers they do. It’s a black box.

No everyday user of AI will understand the complicated and ever-evolving algorithms that determine what AI systems produce in response to a question or task.

And of course, those algorithms represent the highest level of intellectual property for AI developers. Forcing companies to reveal their IP would be harmful, it would stifle innovation, and it would empower our adversaries to use them for ill.

Fortunately the average person does not need to know the inner workings of these algorithms. But we do need to require companies to develop a system where, in simple and understandable terms, users understand why the system produced a particular answer and where that answer came from.

This is very complicated work. And here we will need the ingenuity of the experts and companies to come up with a fair solution that Congress can use to break open AI’s black box.

Innovation first, with Security, Accountability, Foundations, Explainability.

These are the principles I believe will ensure that AI innovation is SAFE and responsible and has the appropriate guardrails.

If we proceed with these priorities in mind, I think Congress can help ensure AI works for humanity’s good. I think we can go a long way towards keeping people safe.  

Now let me share my second proposal: a new legislative approach for translating this framework into legislative action.

Later this fall, I will convene the top minds in artificial intelligence here in Congress for a series of AI Insight Forums to lay down a new foundation for AI policy.

We need the best of the best sitting at the table: the top AI developers, executives, scientists, advocates, community leaders, workers, national security experts – all together in one room, doing years of work in a matter of months. The panels will include people of differing views, including some skeptics. We want the experts, in each subject where we have questions and problems, to sit around the table, debate the major challenges, and forge consensus about the way to go. Opposing views will be welcome, even encouraged, because this issue is so new that we must put all ideas on the table.

Our jobs as legislators will be to listen to the experts and to learn as much as we can so we can translate these ideas into legislative action.

Each Insight Forum will focus on the biggest issues in AI, including:

  • Asking the right questions
  • AI innovation
  • Copyright & IP
  • Use-cases & risk management
  • Workforce
  • National security
  • Guarding against doomsday scenarios
  • AI’s role in our social world
  • Transparency, explainability & alignment
  • and Privacy & liability

These Insight Forums are the first of their kind.

They have to be the first of their kind, because AI moves so quickly, will change our world so dramatically, is so deeply complex, and lacks the legislative history in Congress that other issues have.

If we take the typical path—holding Congressional hearings with opening statements and each member asking questions five minutes at a time, on different issues—we simply won’t be able to come up with the right policies.

By the time we act, AI will have evolved into something new. This will not do. A new approach is required. These AI Insight Forums can’t and won’t replace the activity already happening in Congress on AI.

Our committees must continue to be the key drivers of Congress’ AI policy response, continue to hold hearings on legislation, and build bipartisan consensus. But hearings won’t be enough. We need an all-of-the-above approach, because that’s what AI’s complexities and speed demands.

And this must be done on a bipartisan basis. AI is one issue that must lie outside the typical partisan fights of Congress. The changes AI will bring will not discriminate between left or right or center. They will come for all of us and thus demand attention from all of us.

To deepen this spirit of bipartisanship, I’ve established a group of Senators to lead on this issue: Senators Heinrich, Young, Rounds, and myself. I thank them for their work.

And I reiterate that while this is my framework and my vision for Congressional action, I hope the same spirit of collaboration that we’ve seen so far will propel us forward in the months ahead.

We will also rely on our Committee Chairs to help us develop the right proposals: Chairs Cantwell, Peters, Klobuchar, Warner, and Durbin, as well as their Republican Ranking Members.

Last week, I asked each of the committee chairs to reach out across the aisle to identify and explore areas where we can get to work on AI in committee.

We also need all members who’ve spoken on AI to join us: Senators Bennet, Thune, Blumenthal, Blackburn, Hawley and many others.

No question about it: this is all exceedingly ambitious.

We must exercise humility as we proceed. We are going to work very hard to come up with comprehensive legislation. Because this is so important, we are going to do everything we can to succeed.

But success is not guaranteed. AI is unlike anything that we’ve dealt with before, and it may be exceedingly difficult for legislation to tackle every single issue.

Again, humility is the key word.

But even if we can only find partial solutions and build a degree of consensus on some of AI’s many challenges, we must pursue them. And like many great undertakings in our nation’s history, we must move ahead with bipartisanship and cooperation. We must cast aside ideological hang-ups and political self-interest. That is the only way our efforts will succeed.

In 1963, President Kennedy said in Frankfurt, Germany, that “Time and the world do not stand still. Change is the law of life. And those who look only to the past or the present are certain to miss the future.”

What was true sixty years ago is even truer today: Change is the law of life.

Because of AI, change is happening to our world as we speak, in ways both wondrous and startling. There are many who think we are in over our heads. There are those who fear AI’s immense power and conclude it’s better to turn back, to go no further down this unknown road. We all know it’s not that simple or that easy. The AI revolution is going to happen, with us or without us.

If we can promote innovation, but make sure that it is safe—if America leads the way—the future will be far better, brighter, and safer than if it happens without us.

I do not know of any other instance in human history when we reached new heights, uncovered new truths, or mastered new innovations only for us to turn back.

It’s in our nature to press ahead. We are, as President Theodore Roosevelt said, the ones in the arena.

So friends, let us not turn back. Let us not look away. Instead, let us forge ahead, determined and unafraid, to lay a foundation for the next era of human advancement.

Thank you all, and I look forward to working with you on this great effort very, very soon. 

###