
Ideas to Regulate AI Safety


Noting that these come from Luke Muehlhauser, who is not speaking for Open Philanthropy in any official capacity:

  1. Software export controls. Control the export (to anyone) of “frontier AI models,” that is, models with very general capabilities above some threshold, or (more simply) models trained with a compute budget above some threshold (for example, as much compute as $1 billion can buy today). This will help limit the proliferation of the models likely to pose the greatest risk. [A rough illustrative calculation of such a threshold follows the list.] Also restrict API access in some ways, as API access can potentially be used to generate an optimized dataset sufficient to train a smaller model to reach performance similar to that of the larger model.
  2. Require hardware security features on cutting-edge chips. Security features on chips can be leveraged for many useful compute governance purposes, for example, to verify compliance with export controls and domestic regulations, monitor chip activity without leaking sensitive IP, limit usage (for example, via interconnect limits), or even intervene in an emergency (for example, remote shutdown). These features can be achieved through firmware updates on already-deployed chips, although some would be more tamper-resistant if implemented on the silicon itself in future chips.
  3. Track stocks and flows of cutting-edge chips, and license large clusters. Chips above a certain capability threshold (e.g., the one used for the October 2022 export controls) should be tracked, and a license should be required to assemble large masses of them (as required to train frontier models cost-effectively). This would improve the government’s visibility into potentially dangerous concentrations of compute. And without this, other aspects of an effective compute governance regime can be rendered moot through the use of undeclared compute.
  4. Track and require a license to develop frontier AI models. This would improve government visibility into the development of potentially dangerous AI models and allow greater control over their proliferation. Without this, other policies such as the information security requirements below are difficult to implement.
  5. Information security requirements. Require that frontier AI models be subject to extra-stringent information security protections (including cyber, physical, and personnel security), including during model training, to limit the unintended proliferation of dangerous models.
  6. Testing and evaluation requirements. Require that frontier AI models be subject to extra-stringent safety testing and evaluation, including some evaluation by an independent auditor meeting certain criteria. [footnote in the original]
  7. Fund specific genres of alignment, interpretability, and model evaluation R&D. Note that if the genres are not specified well enough, such funding can effectively widen (rather than narrow) the gap between cutting-edge AI capabilities and the available methods for alignment, interpretability, and evaluation. See, for example, here for one possible model.
  8. Fund R&D in defensive information security, again to help limit the unintended proliferation of dangerous models. Even broadly targeted funding would help, but there are many ways to direct this funding toward pipelines more likely to benefit the development and deployment of frontier AI models.
  9. Create a narrow antitrust safe harbor for collaboration on AI safety and security. Frontier-model developers would be more likely to collaborate usefully on AI safety and security work if such collaboration were more clearly permitted under antitrust rules. Careful scoping of the policy would be needed to preserve the basic goals of antitrust policy.
  10. Require certain types of AI incident reporting, similar to incident reporting requirements in other industries (e.g., aviation) or to data breach notification requirements, and similar to some vulnerability disclosure regimes. Many incidents would not need to be reported publicly but could be kept confidential within a regulatory body. The goal is to allow regulators, and perhaps others, to track certain kinds of harms and close calls from AI systems, in order to keep track of where the dangers lie and to develop mitigation mechanisms quickly.
  11. Clarify the liability of AI developers for concrete AI harms, especially clear physical or financial harms, including those resulting from negligent security practices. A new framework for AI liability should in particular address the risks from frontier models carrying out actions. The goal of clear liability is to incentivize greater investment in safety, security, and so on by AI developers.
  12. Create means for rapid shutdown of large compute clusters and training runs. One type of “kill switch” that could be useful in an emergency is a non-networked power cutoff for large compute clusters; as far as I know, most data centers don’t have this. On-chip remote shutdown mechanisms (mentioned above) could also help, although they are vulnerable to disruption by cyberattack. Various additional measures could be required for compute clusters and training runs beyond particular thresholds.
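
To make the compute-budget threshold in #1 concrete, here is a minimal back-of-the-envelope sketch in Python. The rental price and effective throughput figures below are illustrative assumptions of mine, not numbers from Muehlhauser’s post, and a real threshold would have to be pinned down by regulators in dollar or FLOP terms.

```python
# Back-of-the-envelope: roughly how much training compute could $1 billion buy?
# All figures are illustrative assumptions, not numbers from the original post.

BUDGET_USD = 1e9                 # the "$1 billion" compute budget from idea #1
PRICE_PER_GPU_HOUR_USD = 2.0     # assumed cloud rental price for a modern accelerator
EFFECTIVE_FLOP_PER_SEC = 4e14    # assumed sustained per-accelerator throughput

gpu_hours = BUDGET_USD / PRICE_PER_GPU_HOUR_USD
total_flop = gpu_hours * 3600 * EFFECTIVE_FLOP_PER_SEC

print(f"GPU-hours purchasable: {gpu_hours:.1e}")
print(f"Approximate training compute: {total_flop:.1e} FLOP")
# With these assumptions the threshold lands near 7e26 FLOP, well above most
# published estimates of frontier training runs as of 2023.
```

Under different price or utilization assumptions the FLOP figure shifts by an order of magnitude or so, which is presumably part of why the dollar-denominated version is offered as the simpler rule.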

I’m okay with some of these, as long as they’re applied liberally; for example, new editions of the iPhone require regulatory consent, but that hasn’t slowed progress much. That may or may not be the case for #3 through #6; I can’t tell how strict the standard would be or who exactly is to make the call. Maybe I don’t understand #2, but it sounds to me like a proposal for a complete surveillance society, at least as far as computing is concerned: I’m opposed! Furthermore, it would drive a lot of activity underground, and in the meantime the proposal itself would damage the EA brand. Hopefully the country will rise up against such ideas, or, perhaps more likely, they will simply die a quiet death. (And to think that they are based on fears that haven’t even been modeled. And I guess I can’t bring a computer from Mexico and use it here?) I’m not sure what “restrict API access” means in practice (restrict it for whom? everyone who might be a Chinese spy? And Luke, are you in favor of banning all open source? Do we really want to drive all of that underground?), but I’m probably opposed. I am opposed to placing liability for a general-purpose technology on the technology provider (#11), and I hope to write more about this soon.

Finally, is Luke a closet accelerationist? The status quo does a lot to drive progress in AI, often through military and government R&D and public universities, yet there is no talk of eliminating those programs. Why so many regulations while the government subsidies get off scot-free!? How about, while we’re at it, banning any more Canadians from coming to the United States? (Canadians are notable for their contributions to AI.) After all, the security of our nation, and indeed the world, is at stake. Canada is a very nice country, and since 1949 it has even contained Newfoundland, so this seems like less of an imposition than monitoring all of our computer activity, right? It would be even easier to shut down all high-skilled immigration. Any takers for that?



