Mission AI: Preparing for the Fight Against the Malicious Use of Artificial Intelligence (a research analysis workshop)
Event Website & full details: bit.ly/nomaliciousai
About This Workshop
In February 2017, twenty-six renowned experts on AI safety, drones, cybersecurity, lethal autonomous weapon systems, & counter-terrorism, from 14 institutions in academia, civil society, & industry, gathered for a special 2-day workshop at the University of Oxford to discuss the malicious use of Artificial Intelligence (AI) by rogue nations, dictators, hackers, terrorists & other criminals, & how companies & governments should prepare for these coming threats.
The result of this meeting was the publication of an extensive report earlier this year, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, & Mitigation, which has since been widely circulated & cited. It's the only report of its kind that tackles the possibility of threats by human beings who would intentionally manipulate AI for malicious purposes. It's a battle plan, of sorts, for identifying & combating an enemy we can't yet see.
Report Executive Summary:
"Artificial intelligence & machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed & can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, & proposes ways to better forecast, prevent, & mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers & defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed."
What does this mean for you? How will this all impact your life & the decisions you make at work & in your personal life as a consumer? How will the government, big tech companies, & industries protect us from these real threats? What does this important report tell us about how we should all prepare for the coming global battle against the malicious use of AI by criminals, rogue governments, & our enemies who will weaponize it?
What We'll Cover In This Workshop
This workshop will be a deep dive into the first 30 pages of this compelling 100-page report. Guest speaker Amauche Emenari (graduate student, Massachusetts Institute of Technology (MIT), Center for Brains, Minds & Machines, specializing in neural circuits for intelligence) will provide a comprehensive overview, related literature, & interactive discussions on the following sections of the report:
Introduction to the Report (why it was produced, an overview of the authors (26 experts) & the institutions behind the report, how the report is being used, etc.)
Scope of Material Covered in the Entire Report
General Framework for AI & Security Threats
Security-Relevant Properties of AI
Prerequisites & Preparation
No prerequisites. This workshop is for anyone who wants to level up their AI knowledge & gain a more nuanced understanding of how experts across multiple fields are defining AI security threats, & their suggestions for what businesses & governments should do to prepare for (and ultimately thwart) threats by humans using AI & machine learning maliciously. This workshop is ideal for product managers, developers & engineers, marketers, tech journalists, investors, entrepreneurs, students, & teachers.
About the Speaker
I am a second-year graduate student in Computational Neuroscience in the Brain & Cognitive Sciences Department at MIT. I currently work as a member of the Synthetic Neurobiology Group, specializing in using biological & machine learning techniques to build structural models of the central nervous system. I am interested in understanding how the interaction of complex neural circuits leads to thoughts, perceptions, learning, memory, & behavior, & in developing devices to interpret & influence brain function. Prior to arriving at MIT, I designed & developed neuroscience games at the National Institutes of Health. I have also worked in industry as a software developer on Apple's iPhone software team & on Microsoft's Cloud Infrastructure team. I was born & raised in Washington, D.C. I earned my bachelor's degree in Biomedical Engineering & Computer Science from Duke University & my master's degree in Biomedical Engineering from Boston University.