A call for better access to AI systems for independent testing

I've become convinced that independent testing of AI systems by third parties, such as journalists, civil society organizations, and academic researchers, is currently our best hope for surfacing problems and holding companies accountable.

Yet this kind of testing carries legal risk: researchers can have their accounts suspended, or even be sued, for violating terms of service.

This week, I'm happy to have signed an open letter, written by a group of AI researchers, calling on organizations to adopt voluntary safe harbor protections for good faith testing of generative AI systems.

The letter's main points:

  • Independent evaluation is necessary for public awareness, transparency, and accountability of high-impact generative AI systems.

  • Currently, AI companies’ policies can chill independent evaluation.

  • AI companies should provide basic protections and more equitable access for good faith AI safety and trustworthiness research.

Please sign the letter if you agree and share it with your network!
