Amazon joins Biden effort for AI safety

Dan Berthiaume
Senior Editor, Technology
Amazon has agreed to follow AI safety principles.

Amazon has joined six other high-tech companies in committing to the safe, secure and transparent development of artificial intelligence (AI).

At a recent White House meeting with President Biden, Amazon, Google, Meta, Microsoft, OpenAI, Anthropic and Inflection pledged to follow a set of voluntary commitments to ensure what the Biden administration is terming the “three principles” guiding the future development of AI: safety, security, and trust. 

“At Amazon, we are committed to continued collaboration with the White House, policymakers, the technology industry, researchers, and the AI community to advance the responsible and secure use of AI,” Amazon said in a corporate blog post. “As one of the world’s leading developers and deployers of AI tools and services, Amazon supports these voluntary commitments to foster the development of AI that is safe, responsible, and trustworthy. We are dedicated to driving innovation on behalf of our customers while also establishing and implementing the necessary safeguards to protect consumers and customers.”

Highlights of the steps Amazon and the other companies voluntarily pledged to take include:

  1. Committing to internal and external adversarial-style testing (also known as "red-teaming") of models or systems in areas including misuse, societal risks, and national security concerns such as biosecurity, cybersecurity, and other safety areas.
  2. Working toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards.
  3. Developing and deploying mechanisms, such as provenance tracking and watermarking, that enable users to determine whether audio or visual content is AI-generated.
  4. Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.
  5. Incentivizing third-party discovery and reporting of issues and vulnerabilities.
  6. Publicly reporting model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks such as effects on fairness and bias.
  7. Prioritizing research on societal risks posed by AI systems, including on avoiding harmful bias and discrimination and protecting privacy.
  8. Developing and deploying frontier AI systems to help address society’s greatest challenges.

Amazon Web Services (AWS), Amazon's cloud services platform, recently launched a limited preview of Amazon Bedrock, a new generative AI service. According to Amazon, Bedrock's foundation models can be customized to fit the workflows of specific industries (including retail) and perform a range of tasks, including writing blog posts, generating images, solving math problems, engaging in dialog, and answering questions based on a document. Developers are being provided with API-based access to Bedrock foundation models.
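As a minimal sketch of what that API-based access might look like from a developer's side (the model ID, request field names, and the `bedrock-runtime` boto3 client here are illustrative assumptions; the preview API's actual schema may differ):

```python
import json

def build_request(prompt, max_tokens=256):
    """Serialize a prompt into the JSON body a model-invocation call would send.
    Field names are illustrative assumptions, not a confirmed Bedrock schema."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": max_tokens},
    })

def invoke_bedrock(prompt, model_id="amazon.titan-text-express-v1"):
    """Sketch of invoking a Bedrock foundation model via boto3.
    Requires AWS credentials and a region where Bedrock is available;
    the model ID above is a hypothetical example."""
    import boto3  # third-party AWS SDK, not part of the standard library
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=model_id, body=build_request(prompt))
    return json.loads(response["body"].read())

# Building the request payload alone needs no AWS account:
payload = build_request("Draft a short blog post about reusable water bottles.")
print(payload)
```

The payload-building step is separated out so the request format can be inspected and tested without live AWS credentials.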

AWS is also investing $100 million in the AWS Generative AI Innovation Center, a new program to help its customers build and deploy generative AI solutions.