A security loophole in Biden’s AI executive order

Nov 30, 2023
 
POLITICO Digital Future Daily

By Derek Robertson

Presented by Google & BCG


President Joe Biden signs a new executive order guiding his administration's approach to artificial intelligence. (Photo by Chip Somodevilla/Getty Images)

As the gears start turning to implement President Joe Biden’s immense executive order on AI, questions have been percolating in the tech world: Yes, it’s long and sweeping, but does it focus on the right things?

Two computer science professors, Swarat Chaudhuri of the University of Texas at Austin and Armando Solar-Lezama of MIT, wrote us with their concerns about flaws in the order that might hinder our ability to improve safety and cybersecurity in an increasingly AI-driven world.

A year to the day after ChatGPT launched, we invited them to elaborate on their concerns with the White House approach to AI in a guest essay.

The Biden administration’s AI executive order sets new standards for the safety and security of artificial intelligence, and specifically calls out security risks from “foundation models,” the general-purpose statistical models trained on massive datasets that power AI systems like ChatGPT and DALL-E.

As researchers, we agree the safety and security concerns around these models are real.

But the approach in the executive order has the potential to make those risks worse, by focusing on the wrong things and closing off access to the people trying to fix the problem.

Large foundation models have shown an astounding ability to generate code, text and images, and the executive order considers scenarios where such models, like the AI villain in last summer's "Mission: Impossible," create deadly weapons, perform cyberattacks and evade human oversight. The order's response is to impose a set of reporting requirements on foundation models whose training takes more than a certain (very large) amount of computing power.
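For a sense of scale: the order pegs "very large" at more than 10^26 integer or floating-point operations of training compute. A quick back-of-the-envelope sketch, using the common rule of thumb that training a dense transformer costs roughly 6 × parameters × training tokens; the model sizes below are our illustrative picks, not figures from the order:

```python
# Rough scale of the executive order's reporting cutoff (10^26 operations),
# using the standard 6 * parameters * training-tokens heuristic for
# transformer training compute. Model sizes are illustrative assumptions.
THRESHOLD = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb total training compute for a dense transformer."""
    return 6 * params * tokens

runs = [
    ("GPT-3-scale run (175B params, 300B tokens)", 175e9, 300e9),
    ("7B-param model on 2T tokens", 7e9, 2e12),
]
for name, params, tokens in runs:
    flops = training_flops(params, tokens)
    status = "above" if flops > THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} ops ({status} the threshold)")
```

Both runs land two to three orders of magnitude under the cutoff, which is why the reporting requirements touch only a handful of frontier-scale training efforts.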

 

A message from Google:

Artificial intelligence can significantly bolster climate-related adaptation and resilience initiatives. Our new report with Boston Consulting Group (BCG) shows that AI is already delivering improved predictions to help adapt to climate change. For example, Google Research has been working on a flood forecasting initiative, which uses advanced AI and geospatial analysis to provide real-time flooding information up to seven days in advance. Learn more here.

 

The specific focus on the risks of the largest models, though well-intentioned, is flawed in three major ways. First, it’s inadequate: by focusing on large foundation models, it overlooks the havoc smaller models can wreak. Second, it’s unnecessary: we can build targeted mechanisms for protecting ourselves from the bad applications. Third, it represents a regulatory creep that could, in the long run, end up favoring a few large Silicon Valley companies at the expense of broader AI innovation.

FraudGPT, a malicious AI service already available on the dark web, is a good illustration of the shortcomings of the Biden approach. Think of FraudGPT as an evil cousin of ChatGPT: While ChatGPT has built-in safety guardrails, FraudGPT excels at writing malicious code that forms the basis of cyberattacks.

To build a system like FraudGPT, you would start with a general-purpose foundation model and then "fine-tune" it using additional data — in this case, malicious code downloaded from seedy corners of the internet. The foundation model itself doesn't have to be a regulation-triggering behemoth. You could build a significantly more powerful FraudGPT completely under the radar of Biden’s executive order. This doesn't make FraudGPT benign.
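Some rough arithmetic behind "completely under the radar," reusing the 6 × parameters × tokens rule of thumb from above; the model and corpus sizes here are our assumptions, purely for illustration:

```python
# How much compute does the fine-tuning step itself add? Same
# 6 * params * tokens heuristic; a 7B-param base model and a
# 1B-token fine-tuning corpus are illustrative assumptions.
THRESHOLD = 1e26
finetune_flops = 6 * 7e9 * 1e9   # ~4.2e+19 operations

print(f"fine-tuning compute: ~{finetune_flops:.1e} ops")
print(f"fraction of the reporting cutoff: {finetune_flops / THRESHOLD:.0e}")
# Well under a millionth of the threshold the order watches for.
```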

Just because one can build models like FraudGPT and sneak them under the reporting threshold doesn't mean that cybersecurity is a lost cause. In fact, AI technology may offer a way to strengthen our software infrastructure.

Most cyberattacks work by exploiting bugs in the programs being attacked. In fact, the world's software systems are, to an embarrassing degree, full of bugs. If we could make our software more robust overall, the threat posed by rogue AIs like FraudGPT, or by human hackers, could be minimized.
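For readers who want to see the kind of bug at issue, here is the textbook example, SQL injection, in which untrusted input is spliced straight into a database query. A generic sketch, not code from any real system:

```python
# The classic injectable bug: untrusted input becomes part of the SQL
# itself. Textbook illustration only; table and data are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"   # attacker-controlled string

# Vulnerable: string formatting lets the input rewrite the query logic.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()
print("vulnerable query returned:", rows)    # returns every row

# Robust: parameterized queries keep data and code separate.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)  # returns nothing
```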

This may sound like a tall order, but the same technologies that make rogue AIs such a threat can also help create secure software. There's an entire sub-area of computer science called "formal verification" that focuses on methods to mathematically prove a program is bug-free. Historically, formal verification has been too labor-intensive and expensive to be broadly deployed, but new foundation-model-based techniques for automatically solving mathematical problems can bring those costs down.
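To make "mathematically prove" concrete, here is a minimal sketch with the Z3 SMT solver (the z3-solver Python package), run on a standard textbook bug of our choosing: a 32-bit absolute-value routine that fails on exactly one input. Rather than testing samples, the solver reasons about all 2^32 inputs at once:

```python
# A minimal formal-verification sketch with the Z3 SMT solver
# (pip install z3-solver). We ask whether a textbook absolute-value
# routine on 32-bit integers can ever return a negative result.
from z3 import BitVec, If, Solver, sat

x = BitVec("x", 32)             # a symbolic 32-bit integer: all inputs at once
abs_x = If(x >= 0, x, -x)       # the routine under scrutiny

s = Solver()
s.add(abs_x < 0)                # search for any input violating "abs(x) >= 0"

if s.check() == sat:
    m = s.model()
    print("bug found: abs fails for x =", m[x].as_signed_long())
else:
    print("proved: abs(x) >= 0 for every 32-bit input")
```

Z3 surfaces the single counterexample, -2^31, whose negation overflows; rule that input out (or widen the arithmetic) and the same query comes back unsatisfiable, a machine-checked proof that no violating input exists.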

 

GET A BACKSTAGE PASS TO COP28 WITH GLOBAL PLAYBOOK: Get insider access to the conference that sets the tone of the global climate agenda with POLITICO's Global Playbook newsletter. Authored by Suzanne Lynch, Global Playbook delivers exclusive, daily insights and comprehensive coverage that will keep you informed about the most crucial climate summit of the year. Dive deep into the critical discussions and developments at COP28 from Nov. 30 to Dec. 12. SUBSCRIBE NOW.

 
 

To its credit, the executive order does acknowledge the potential of AI technology to help build secure software. This is consistent with other positive aspects of the order, which call for addressing specific problems such as algorithmic discrimination or the potential risks posed by AI in health care.

By contrast, the order's requirements on large foundation models do not respond to a specific harm. Instead, they respond to a narrative that focuses on potential existential dangers posed by foundation models, and on how a model is created rather than how it is used.

Focusing too tightly on the big foundation models also poses a different kind of security risk.

The current AI revolution was built on decades of decentralized, open academic research and open-source software development. And solving difficult, open-ended problems like AI safety or security also requires an open exchange of ideas.

Tight regulations around the most powerful AI models could, however, shut this off and leave the keys to the AI kingdom in the hands of a few Silicon Valley companies.

Over the past year, companies like OpenAI and Anthropic have feverishly warned the world about the risks of foundation models while developing those very models themselves. The subtext is that they alone can be trusted to safeguard foundation model technology.

Looking ahead, it’s reasonable to worry that the modest reporting requirements in the executive order may morph into the sort of licensing requirements for AI work that OpenAI CEO Sam Altman called for last summer. Especially as new ways to train models with limited resources emerge, and as the price of computing goes down, such regulations could start hurting the outsiders — the researchers, small companies, and other independent organizations whose work will be necessary to keep a fast-moving technology in check.

 

 
Kissinger, the AI pundit?


As you wade through the barrage of assessments of Henry Kissinger’s legacy (he died this week, age 100), it’s worth remembering his late-life interest in AI.

In 2021, the former statesman co-authored, with Google mogul Eric Schmidt and the computer scientist Daniel Huttenlocher, a book modestly titled "The Age of AI: And Our Human Future," warning that AI could disrupt civilization and would require a global response.

Although it wasn’t always kindly received — Kevin Roose, in the Times, called it “cursory and shallow in places, and many of its recommendations are puzzlingly vague” — Kissinger did not let go of the subject. He recorded lengthy videos on AI, and this spring, at a sprightly 99, proclaimed in a Wall Street Journal op-ed that generative AI presented challenges “on a scale not experienced since the beginning of the Enlightenment” - an observation that gave the U.S. business elite a wake-up call.

As recently as last month, Kissinger co-wrote an essay in Foreign Affairs on “The Path to AI Arms Control,” with Harvard’s Graham Allison.

It's hard to know exactly what Kissinger wrote himself, or what motivated this final intellectual chapter; we did email one of his co-authors, who didn't respond by press time. (He was reportedly introduced to the topic by Eric Schmidt at a Bilderberg conference.) But it's not hard to imagine that as a persuasive, unorthodox thinker often accused of inhumanity, Kissinger saw an alien new thought process that was even more unorthodox, even less human, potentially even more persuasive, and he wanted people to know it was time to worry. — Stephen Heuser

 

SUBSCRIBE TO CALIFORNIA CLIMATE: Climate change isn’t just about the weather. It's also about how we do business and create new policies, especially in California. So we have something cool for you: A brand-new California Climate newsletter. It's not just climate or science chat, it's your daily cheat sheet to understanding how the legislative landscape around climate change is shaking up industries across the Golden State. Subscribe now to California Climate to keep up with the changes.

 
 
Tweet of the Day

Tweet by @TopNotchQuark aka Quarked up Shawty: Spotify wrapped is to the chronically online what Myers Briggs is to your average tech bro.

via @TopNotchQuark on Twitter

The Future in 5 Links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com) and Daniella Cheslow (dcheslow@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

A message from Google:

Delivering improved predictions to help adapt to climate change is one of three key areas where we’re developing AI to accelerate climate action.

Floods are the most common natural disaster, causing thousands of fatalities and disrupting the lives of millions every year. Since 2018, Google Research has been working on our flood forecasting initiative, which uses advanced AI and geospatial analysis to provide real-time flooding information so communities and individuals can prepare for and respond to riverine floods. Our Flood Hub platform is available in more than 80 countries, providing forecasts up to seven days in advance for 460 million people. With the help of AI, we hope to bring flood forecasting to every country and cover more types of floods.

Learn more here about how we’re building AI that can drive innovation forward, while at the same time working to mitigate environmental impacts.

 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 

 
