FUTURE OF HUMANITY INSTITUTE - Key Persons
Joar is a DPhil student in the Department of Computer Science, supervised by Professor Alessandro Abate.
His research is on machine learning and how to make AI systems safe and reliable. His current thesis title is "Safe Reinforcement Learning", but his broader interests range from cognitive science to formal epistemology. In the past he has done research in areas such as philosophical decision theory, constrained reinforcement learning, computational learning theory, inductive logic programming, and active learning with neural networks.
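For readers unfamiliar with the term, constrained reinforcement learning (one of the areas listed above) is commonly formalised as a constrained Markov decision process. The following is the standard textbook objective, given only as an illustrative sketch and not as a statement of Joar's own formulation:

\[
\max_{\pi}\; \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\Big]
\quad \text{subject to} \quad
\mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, c(s_t, a_t)\Big] \le d,
\]

where \(r\) is the reward, \(c\) is a cost signal encoding the safety requirement, \(\gamma\) is the discount factor, and \(d\) is the permitted cost budget.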
Before starting his DPhil he completed the BA and MCompPhil in Computer Science and Philosophy at Oxford, graduating top of his year.
Angela is a Research Scholar at the Future of Humanity Institute. She holds a BS in psychology from the Universidad de los Andes. She has worked or interned for institutions such as the Colombian Police, Innovations for Poverty Action, and Generation Pledge. Her interests include how to improve policy and institutional decision-making from a longtermist perspective.
Ben Garfinkel's research concerns the security implications of artificial intelligence, trends in international peace and conflict, and the methodological challenge of forecasting and reducing technological risks. He is also the Acting Director of the Centre for the Governance of AI, a research and field-building organisation focused on the risks and opportunities posed by artificial intelligence.
Ben earned degrees in Physics and in Mathematics and Philosophy from Yale University before beginning a DPhil in International Relations at the University of Oxford. He has a strong belief in the value of cross-disciplinary learning and collaboration.
Job Titles:
- Research Associate
- Research Associate / Research Scholars Programme
Carl Shulman is a Research Associate at the Future of Humanity Institute, Oxford University, where his work focuses on the long-run impacts of artificial intelligence and biotechnology. He is also an Advisor to the Open Philanthropy Project. Previously, he was a Research Fellow at the Machine Intelligence Research Institute and held positions at Clarium Capital Management and Reed Smith LLP. He attended New York University School of Law and holds a degree in philosophy from Harvard University.
Carla Zoe is a Research Scholar at FHI and a Research Affiliate at the Centre for the Study of Existential Risk (University of Cambridge). She works on the limitations of deep learning, on methodologies for AI research agendas and on estimating the risks of technologies.
She previously worked on foundational research in biochemical neurobiology at LMU Munich and computational ethology at the Institute of Neuroinformatics at ETH Zurich. As an undergraduate, she focused on experimental psychology, life science, and neuroscience. She has presented philosophical research on AI control at the workshop Decision Theory & the Future of AI in Munich, attended the International Rationality Summer School, and started an AI Alignment reading group in Zurich.
She has received a research grant from the Berkeley Existential Risk Initiative, a Max Weber Stipend for academic excellence, and an LMU research award, and studied on Semester at Sea with a scholarship.
David A. Dalrymple (also known as "davidad") came to the Future of Humanity Institute to develop an idealized mathematical model of normative deliberation, analogous to Bayesian updating as a model of descriptive deliberation, or to expected-utility maximization as a model of instrumental deliberation. He believes modern algorithmic and mathematical frameworks, such as category theory and optimal transport, are ripe to be applied to advance the frontier on deep metaphilosophical questions, especially where AI is concerned.
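For context, the two formal analogues mentioned above have standard textbook forms, reproduced here only for illustration (they are not davidad's proposed model, which targets normative deliberation):

\[
P(h \mid e) = \frac{P(e \mid h)\, P(h)}{P(e)}
\qquad \text{(Bayesian updating, as a model of descriptive deliberation)}
\]
\[
a^{*} = \arg\max_{a}\ \sum_{o} P(o \mid a)\, U(o)
\qquad \text{(expected-utility maximisation, as a model of instrumental deliberation)}
\]

The project described above seeks a comparably crisp formal object for normative deliberation.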
David has backgrounds in theoretical computer science, applied mathematics, software engineering, and neuroinformatics. In 2008 he was the youngest person to receive a graduate degree from MIT, and he went on to study biophysics at Harvard. His neuroscience work has been funded directly by technology leaders such as Larry Page and Peter Thiel in their personal capacities. David has also worked in machine learning and software performance engineering at major tech companies and startups alike, and co-invented the top-40 cryptocurrency Filecoin with Protocol Labs, a decentralized technology firm where David continues to guide R&D strategy.
Cassidy is a Research Scholar, Acting Co-Lead of the Biosecurity Research Group at the Future of Humanity Institute, and a DPhil candidate at the University of Oxford under the supervision of Professor Michael Bonsall. Her research centres on global catastrophic biological risks and the threats posed by advanced biotechnology, with a focus on answering fundamental questions about novel pathogens with pandemic potential.
Cassidy is a qualified doctor in Australia and has previously worked in hospital and laboratory-based medicine, human biosecurity, and communicable disease public health. She completed her undergraduate studies in Neuroscience and Developmental Biology and her medical degree at the University of Queensland, and holds a Master of Public Health from the University of Melbourne.
Bridging Health and Security Sectors to Address High-Consequence Biological Risks (Cassidy Nelson, co-author with Michelle Nalabandian of NTI)
Job Titles:
- Professor of International Relations, Department of Politics & International Relations, University of Oxford
- Professor of International Relations, University of Oxford
Duncan researches problems of international cooperation and institutions, including international law and international organizations, with an emphasis on institutional design. His current projects focus on multi-partner governance of transnational production and the emergence of informal international organizations (such as the G20) as distinctive forms of international governance. He is co-founder and editor of the journal International Theory.
Dr. Drexler is widely known for his seminal studies of advanced nanosystems and scalable atomically precise manufacturing (APM), a prospective technology using arrays of nanoscale devices to guide chemically-reactive molecular encounters, thereby structuring matter from the bottom up. His 1981 paper in the Proceedings of the National Academy of Sciences established the fundamental principles of APM, and his 1992 book, Nanosystems: Molecular Machinery, Manufacturing, and Computation, presented a deeper analysis of key physical principles, devices, and systems for implementing APM capabilities.
Dr. Drexler's current research explores prospects for advanced AI technologies from the perspective of structured systems development, potential applications, and global implications. Key considerations in this work include advances in AI-enabled automation of AI research and development, and the potential role of thorough automation in accelerated development of comprehensive AI services.
I am a climate physicist interested in stepping back and thinking afresh about how we approach climate change. What are the essential elements of the problem? What can we learn by examining it as a potential existential risk? What might reliable long-term solutions look like? Are we on the path towards them in our research and our policy? Why and why not?
I came to be a research fellow at FHI after a PhD in Atmospheric Physics at Imperial College London, where I worked to re-examine the unorthodox theory that the climate might self-organise to maximise its entropy production rate. This gave me an interesting angle on the ways we make sense of the Earth system, as well as some experience in highly speculative research that diverges from the status quo. Other key experiences that shaped my approach were the internships I did with the UK Met Office and at the UNFCCC during that time, and the sometimes-surprising insights gleaned in outreach conversations.
Outside of the office, I spend a lot of time outdoors: hiking, running, climbing, swimming, cycling, skiing, camping and sitting around fires. I find that there is a groundedness, simplicity and perspective that weaves back from this into my work. Understanding and connecting to the stories, motivations and concerns of the broadest set of people I can also feels essential for making sense of such a pervasive and emotionally complicated topic as climate change and our response to it. After all, to be a scientist or a philosopher spotlighting difficult societal choices, we have to bring much more than just our education or academic experiences to bear.
Job Titles:
- Acting Co-Lead
- Acting Head of Biosecurity Research Group
Gregory Lewis is a DPhil Scholar at the Future of Humanity Institute, where he investigates long-run impacts and potential catastrophic risks from advancing biotechnology. He is a DPhil student in Michael Bonsall's mathematical ecology group. Previously, he was an academic clinical fellow in public health medicine, where he won the O'Brien prize, and before that a junior doctor. He holds a master's in public health (with distinction) and a medical degree, both from Cambridge University. Prior to reading medicine, he represented Great Britain in the International Biology Olympiad.
Pandemics are not the only thing keeping biosecurity experts up at night (Toby Ord and Gregory Lewis | The Times)
Hannah holds a Bachelor of Arts in Biochemistry from Wellesley College; her Senior Honors Thesis was "Investigating how proline linkers enhance antimicrobial peptide hybrid activity".
Job Titles:
- Research Associate
- Research Associate with the Centre for the Governance of AI
Helen is a Research Associate with the Centre for the Governance of AI, and Director of Strategy at the Center for Security and Emerging Technology. She previously worked as a Senior Research Analyst at the Open Philanthropy Project, where she focused on policy and strategy issues related to progress in machine learning and artificial intelligence, including consulting with governments and policymakers as well as advising on grants to scholars working on AI policy issues. She also hired and managed a team to handle the operational side of Open Philanthropy's scale-up from making $10 million in grants per year to over $200 million. Helen graduated from the University of Melbourne in 2014 with a BSc in Chemical Engineering and a DipLang in Arabic.
Job Titles:
- Associate Professor in the Faculty of Philosophy
- Principal Investigator, Population Ethics: Theory and Practice
Hilary Greaves is Associate Professor in the Faculty of Philosophy, and Principal Investigator of the Population Ethics: Theory and Practice project.
Hjalmar's current research focuses primarily on understanding robustness in machine learning using formal verification. He has previously worked on conceptions of agency in arguments for AI risk, safe reinforcement learning, and inductive bias in deep learning.
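As a rough illustration of the kind of property such verification targets (a generic formulation, not necessarily the specific properties studied in Hjalmar's work): given a classifier \(f\), an input \(x\) with correct label \(y\), and a perturbation budget \(\epsilon\), one asks whether

\[
\forall\, \delta \ \text{such that}\ \lVert \delta \rVert_{\infty} \le \epsilon:\quad \arg\max_{i} f_{i}(x + \delta) = y,
\]

i.e. whether the model's prediction provably cannot be changed by any sufficiently small perturbation of the input.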
He is a DPhil student supervised by Marta Kwiatkowska at the University of Oxford Computer Science Department, supported by the Future of Humanity Institute's DPhil Scholarship. He previously earned an MSc in Computer Science from Oxford University and a BSc in Mathematics from Stockholm University.
James is a Research Fellow at the Future of Humanity Institute. He completed his PhD at the MRC Laboratory of Molecular Biology. His research interests include risks from engineering biology, in particular whether and how we can choose to invest in technologies so as to reduce overall risk.
Jan is a PhD candidate in the Centre for Doctoral Training in Autonomous Intelligent Machines and Systems (AIMS CDT), and a member of the FHI DPhil Scholars program. His current research interests include AI safety (in particular, interpretability of machine learning algorithms), applications of AI in medicine and biomedical research, and clinical research on cognitive enhancement.
Jan studied medicine at the University of Erlangen-Nuremberg and the University of Wuerzburg, Germany. Alongside his studies, he worked as a research assistant in neuroscience, immunology, and global health. After graduating from medical school, he completed a one-year master's degree in Operational Research with Data Science at the University of Edinburgh and worked with Prof. Amos Storkey on deep learning-based approaches to medical image analysis.
Jan is working on long-term technical problems of robust and beneficial artificial intelligence. Previously he was a PhD student with Marcus Hutter and wrote his dissertation on general reinforcement learning.
Janvi holds a Bachelor of Science in Biomedical Sciences from the University of Warwick and a Master's degree in Biotechnology and Pharmacology (Therapeutics) from the University of Cambridge. She is an iGEM alumna and worked as an intern at the Biological Weapons Convention for six months prior to starting her DPhil. Thesis title: "Integrating structural and genomic information to understand cross-species transmission and evolution of emerging viruses".
Jennifer is interested in several topics at the intersection of AI capabilities and safety, including interpretability of machine learning algorithms and applications of physics to machine learning. Before joining FHI, she obtained a PhD in physics from the University of Chicago and was a postdoc at the Institute for Advanced Study in Princeton, New Jersey.
Jonas is a Researcher at the Future of Humanity Institute and a medical student at the University of Oxford. He previously completed a BA Hons in Medical Sciences with a focus on viral and bacterial disease, as well as vaccinology.
Jonas' research interests include the dual-use potential of life sciences research and biotechnology, as well as fast-response countermeasures, including vaccine platforms.
Job Titles:
- Associate Professor of International Relations, University of Oxford
Lewis is a DPhil Affiliate at the Future of Humanity Institute and a DPhil candidate in Computer Science at the University of Oxford. He is interested in how a combination of the logical and statistical paradigms within AI can be used to help create safe, explainable, and provably beneficial technologies. His current research explores game theory, formal methods, and machine learning; the working title of his thesis is "Rational Synthesis in Evolutionary Games". Before coming to Oxford he completed a Bachelor's degree in mathematics and philosophy at the University of Warwick, and a Master's degree in AI at the University of Edinburgh.
Luca Righetti is a Research Scholar at the Future of Humanity Institute. His research interests include the governance of emerging technologies, improving institutional decision making, and causal inference.
Before coming to FHI, Luca completed a BA (Hons) in Economics at the University of Cambridge and worked as a Research Assistant for the Oxford Martin School. He also co-hosts the podcast Hear This Idea, which showcases new thinking in philosophy, the social sciences, and effective altruism.
Maria graduated with a first-class MPhys from the University of Oxford in 2020. She is now doing a DPhil co-supervised by the Mathematical Physics group of the Mathematical Institute and Prof. Vlatko Vedral's Frontiers of Quantum Physics group in the Department of Physics. Her DPhil research is in reframing quantum thermodynamics using Constructor Theory. She is also highly active in quantum science communication and outreach. At FHI, she is interested in exploring the ultimate limits of technology and the long-term consequences of Constructor Theory.
Job Titles:
- Research Assistant to the Director
Matthew works with Nick Bostrom on his ongoing research. From 2018 to 2020 he worked as a researcher and project manager on Toby Ord's book, The Precipice: Existential Risk and the Future of Humanity. Previously, he worked as an equities analyst at Sanford C. Bernstein, researching global cement markets. He has a B.A. in Philosophy from the University of Cambridge.
Job Titles:
- Professor of Machine Learning, University of Oxford
Michael A Osborne (DPhil Oxon) works to develop machine intelligence in sympathy with societal needs. His work in Machine Learning has been successfully applied in diverse contexts, from aiding the detection of planets in distant solar systems to enabling self-driving cars to determine when their maps may have changed due to roadworks. Dr Osborne also has deep interests in the broader societal consequences of machine learning and robotics. His work on the significance of machine learning and robotics to future labour markets has resulted in both sustained coverage in most major media venues (e.g. interviews on BBC Newsnight and a cover feature in The Economist) and policy impact (including presenting oral evidence to the House of Commons Science and Technology Committee). Dr Osborne is the Dyson Associate Professor in Machine Learning, a co-director of the Oxford Martin programme on Technology and Employment, an Official Fellow of Exeter College, and a co-director of the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems, all at the University of Oxford.
Dr. Montague is a Senior Scholar at the Johns Hopkins Center for Health Security, a Research Scientist in the Department of Environmental Health and Engineering at the Johns Hopkins Bloomberg School of Public Health and a Research Associate at FHI.
Dr. Montague has characterised potential biological existential risks, mostly artificial in nature, and designed approaches to mitigating those risks. This has included work on the feasibility of indoor agriculture driven by artificial light and on addressing the attribution problem.
Nick is a Research Analyst at GiveWell working on the Open Philanthropy Project. He earned a PhD in Philosophy from Rutgers University in 2013 and then worked as a Research Fellow at the Future of Humanity Institute.
He is particularly interested in ethical issues related to the interests of future generations, and the impact of science and technology on future generations.
Rationing and rationality: the cost of avoiding discrimination. (Beckstead, N. & Ord, T. 2013. In N. Eyal et al. (eds.), Inequalities in Health: Concepts, Measures, and Ethics, pp. 232-239. Oxford University Press.)
Job Titles:
- Director
- Founding Director, Professor
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. Bostrom is the most-cited professional philosopher under the age of 50. He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument (2003) and the concept of existential risk (2002). His academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been interviewed more than 1,000 times by various media. He has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.
Bostrom, N. (2008). Drugs can be used to treat more than disease.
Foreword. (Bostrom, N. 2008. In I. V. Sledzevsky & V. Prajd (eds.), Alexei Turchin's Structure of the Global Catastrophe: Risks of Human Extinction in the XXI Century. Dialogues about the Future series, Vol. 2, p. 22.)
Ethical principles in the creation of artificial minds. (Bostrom, N. 2007. Analysis and Metaphysics, Vol. 6, pp. 141-143.)
Ethical and political challenges to the prospect of life extension. (Bostrom, N. 2007. World Demographics Association Proceedings 2006.)
In the great silence there is great hope. (Bostrom, N. 2007.)
Bostrom, N. (2006). A short history of transhumanist thought. Analysis and Metaphysics, 5(1-2), 63-95.
Do we live in a computer simulation? (Bostrom, N. 2006. New Scientist, 192(2579))
Growing up: Human nature and enhancement technologies. (Bostrom, N. (2006). In E. Mitchell (Ed.), Tomorrow's people: the challenge to human nature)
A history of transhumanist thought. (Bostrom, N. 2005. Journal of Evolution and Technology, 14(1), 1-25)
Recent developments in the ethics, science, and politics of life extension. (Bostrom, N. 2005. Ageing Horizons, 3(2005), 28-33)
In defence of posthuman dignity. (Bostrom, N. 2005. Bioethics, 19(3), 202-214.)
Astronomical waste: The opportunity cost of delayed technological development. (Bostrom, N. 2003. Utilitas, 15(03), 308-314)
Bostrom, N. (2003). Human genetic enhancements: a transhumanist perspective. The Journal of Value Inquiry, 37(4), 493-506.
Bostrom, N., & Cirkovic, M. M. (2003). The doomsday argument and the self-indication assumption: reply to Olum. The Philosophical Quarterly, 53(210), 83-91.
The simulation argument: Why the probability that you are living in the Matrix is quite high. (Bostrom, N. 2003. The Times Higher Education Supplement)
Cortical integration: How to store complex representations in long-term memory. (Bostrom, N. 2000)
The doomsday argument is alive and kicking. (Bostrom, N. 1999. Mind, 108(431), 539-551)
Ondrej Bajgar is a DPhil Affiliate at the Future of Humanity Institute and also a DPhil candidate in the CDT in Autonomous Intelligent Machines and Systems in the Department of Engineering Science. His research focuses on learning and enforcing robust safety constraints on the behaviour of autonomous systems. He has also been examining whether human rights could form a value foundation on which safety constraints and the associated governance structures could be built. He studied mathematics at the University of Warwick, and before joining FHI, he worked as an AI researcher for IBM Watson, mainly in the areas of text understanding, dialogue systems, and the methodology of evaluating machine learning architectures. Before starting his DPhil, he spent two years at FHI as a Senior Research Scholar. He has also been involved in running Summer Academy Discover, helping high school students find a meaningful future direction.
Job Titles:
- Research Associate
- Alignment Research Center
Paul Christiano is a researcher at OpenAI and a research associate at the Future of Humanity Institute, working on AI alignment. Paul recently completed a PhD in the theory of computing group at UC Berkeley.
He is interested in technical questions bearing on the long-term impact of artificial intelligence, and writes about some of them here.
Job Titles:
- Head of Oxford's Mathematical Ecology Research Group
- Professor of Mathematical Biology, University of Oxford
Mike Bonsall is head of Oxford's Mathematical Ecology Research Group (MERG), and supervises two of FHI's DPhil students who are also in MERG: Andrew Snyder-Beattie and Greg Lewis. The focus of this work within MERG is on studying existential risk from biotechnology using the tools of ecology and evolution.
Mike is a population biologist and has research interests across a range of disciplines including biodiversity, ecology, evolution, health and economics.
His research combines quantitative and empirical approaches to address cross-disciplinary questions such as the evolution of parental care and cannibalism, evaluating the cost-effectiveness of different strategies for disease control, approaches for combining ecological and evolutionary information to assess biodiversity, the dynamics of stem cell systems, and the role of uncertainty in the dynamics of metapopulations. He is a member of the DEFRA Advisory Committee on Releases to the Environment and has worked with the WHO, FNIH and the EU in developing guidance frameworks for the use of novel biotechnological approaches for pest and vector control.