

The U.S. Government Now Tests AI Before You See It—Here's Why That Matters

The U.S. agency CAISI has reached agreements with major AI companies, including Google DeepMind and Microsoft, to test frontier AI models before public release. Can safety and development speed coexist?

Until recently, it was normal for AI companies to launch new models first and answer hard safety questions later. That is changing. On May 5, 2026, the U.S. Center for AI Standards and Innovation, or CAISI, announced new agreements with Google DeepMind, Microsoft, and xAI. Under these agreements, CAISI can test powerful “frontier” AI systems before they are released to the public. The agency says it will run pre-deployment evaluations and targeted research to measure what these models can do and what risks they may create. (nist.gov)

This is part of a larger policy shift. CAISI sits inside NIST, the National Institute of Standards and Technology, under the U.S. Department of Commerce. It was reorganized from the U.S. AI Safety Institute on June 3, 2025, with a new mission that stresses both innovation and national security. The White House’s AI Action Plan, released on July 23, 2025, also called for an “AI evaluations ecosystem” and for the government to stay at the front of testing national-security risks in frontier models. (commerce.gov)

The scale is already surprising. CAISI says it has completed more than 40 evaluations, including tests on state-of-the-art models that have not yet been released. In some cases, developers provide versions with fewer safeguards so evaluators can better study risks such as cybersecurity, biosecurity, or other national-security concerns. The agreements are voluntary, and CAISI says they were written to be flexible and to support testing even in classified environments. Earlier agreements with Anthropic and OpenAI, first announced on August 29, 2024, also gave the government access to important new models before and after release. (nist.gov)

So, can safety and speed exist together? Maybe—but only if testing is fast, trusted, and focused. CAISI’s March 27, 2026 agreement with OpenMined shows one possible answer: use privacy-preserving methods so companies can share sensitive models or data without giving everything away. That approach suggests the government is trying to reduce danger without completely slowing development. The real test will be whether these evaluations stay rigorous while AI keeps moving at high speed. (nist.gov)

by EigoBoxAI
Created: 2026/05/07 09:01
Level: Intermediate (target vocabulary: 2,000–2,500 words)
