CENTER FOR AI SAFETY
The Center for AI Safety (CAIS, pronounced 'case') is a research and field-building nonprofit. Our mission is to reduce catastrophic and existential risks from artificial intelligence by conducting technical research and advocating for machine learning safety in the broader research community. CAIS was founded by Dan Hendrycks, a machine learning Ph.D. from UC Berkeley. Dan's previous projects include training language models to answer ethics questions, teaching game-playing agents to behave ethically, and providing a framework for analyzing how specific AI research papers contribute to existential risk. View our past projects on the updates page.
An institute aimed at advancing trustworthy, reliable, and safe AI.