A global race to build powerful AI is not inevitable. Here’s how technical solutions can help foster cooperation.
Abandoning liability mechanisms risks creating a dangerous regulatory vacuum.
Autonomous AI-enabled organizations are increasingly plausible. They would fundamentally break the way we regulate the economy.
For safety-critical domains like energy grids, "probably safe" isn't good enough. To fulfill AI's potential in these areas, we need to develop robust, mathematical guarantees of safety.
In the absence of federal legislation, the burden of managing AI risks has fallen to judges and state legislators — actors lacking the tools needed to ensure consistency, enforceability, or fairness.
AI is increasingly being used for emotional support — but research from OpenAI and MIT raises concerns that it may leave some users feeling even worse.
An unchecked race to build autonomous weapons is eroding the rules that distinguish civilians from combatants.
Classic arguments about AI risk imagined AIs pursuing arbitrary and hard-to-comprehend goals. Large Language Models aren't like that, but they pose risks of their own.