How Congress can benefit from Schumer’s AI Insight Forums

Calls for Congress to do something about artificial intelligence have reached a fever pitch in Washington. Today, technology industry bosses, labor leaders and other luminaries will gather for the first of several so-called “AI Insight Forums” convened by Senate Majority Leader Charles E. Schumer, D-N.Y., to try to provide some answers.
Here are four modest suggestions for improving the odds that this political process leads to meaningful progress.
First, get the right people in the room. Schumer has invited a who’s who of technology bigwigs, including Elon Musk, Bill Gates and the CEOs of OpenAI, Microsoft, Meta, IBM and Alphabet, to attend the first of his closed-door meetings. Summoning the tech sector’s top brass to Washington is good political theater, but it isn’t enough to deliver effective policy. As Meredith Whittaker, an AI expert and frequent tech industry skeptic, put it: “this is the room you pull together when your staffers want pictures with tech industry AI celebrities. It’s not the room you’d assemble when you want to better understand what AI is, how (and for whom) it functions, and what to do about it.”
In future meetings, Congress should prioritize talking to engineers, ethicists, AI researchers, policy experts and others who are steeped in the day-to-day work of trying to build safe, ethical and trustworthy AI. Understanding how different policy levers can be brought to bear on AI risks, or where quirks of the technology are likely to impose limits on effective policy, will be essential. That requires input from the working level.
The conversation should also be broader. Along with more representatives from academia and other corners of civil society, Schumer should invite a wider cross-section of businesses to participate. This should include representatives of banks, pharmaceutical companies, manufacturers and other businesses that will be among the biggest users of AI, not just the leading tech companies.
Second, focus on concrete problems. There are plenty of risks to worry about with AI, and policymakers need to prioritize. Concerns that powerful AI could slip out of human control or lead to widespread job losses tend to dominate popular discussions of AI on social media.
Lawmakers shouldn’t ignore these more speculative concerns, but they should focus first on risks with clearly identified mechanisms for harm, where policy has the best chance of making a difference. These include the risk that poorly designed or malfunctioning AI systems could damage people’s economic prospects or harm critical infrastructure, and the risk that bad actors could use powerful, commercially available AI tools to design new toxins or conduct damaging cyberattacks.
Third, use existing tools before inventing new ones. It’s a myth that AI is an unregulated free-for-all in the United States. Since 2019, the government’s strategy has been to apply existing laws to AI, while asking executive branch agencies to develop new rules or guidance where needed. Regulating AI in medical devices doesn’t require Congress to act; the Food and Drug Administration is already doing it. This bottom-up approach can be frustratingly slow and uneven, but it is also sensible, because how and where a technology is being used matters a great deal when talking about its risks.
Congress should first make sure that government institutions that are already grappling with AI have the expertise, legal authority and motivation they need to do their jobs. It could then focus new legislation on gaps that aren’t covered by existing laws. A national personal data protection law would be an obvious place to start.
Congress should also consider strengthening existing structures for coordinating AI policy across the federal government. This could include boosting funding for the National Institute of Standards and Technology, which has been doing groundbreaking work on managing AI risks on a shoestring budget, or empowering the White House Office of Science and Technology Policy and the National Science and Technology Council to play a more prominent role in setting the tone and direction of U.S. policy on AI and other complex technology challenges.
Such interventions may not attract the same headlines as creating a “new agency” to oversee AI, as some lawmakers and technology leaders have proposed. But they could help improve the management of AI risks while avoiding needless duplication.
Finally, and perhaps most importantly, Congress should recognize that regulating AI is a problem that will probably never be fully solved. The risks of AI, and government’s ability to respond to them, will change as the underlying technology, uses of AI, business models and societal responses to AI evolve. Policies will need to be revisited and revised accordingly. Whatever Congress decides to do about AI in the coming months, it should aim for a flexible approach.
Kevin Allison is the president and head of research at Minerva Technology Policy Advisors, a consulting firm focused on the geopolitics and policy of artificial intelligence, and a senior advisor at Albright Stonebridge Group.