With Seth Baum (Exec Dir., Global Catastrophic Risk Institute).
Wed, Nov 15, 2017 @ 06:00 PM   $20   Bushwick, 119 Ingraham St
EVENT DETAILS
The Goal of this Think Tank:

"I think human extinction will probably occur, & technology will likely play a part in this. [We could possibly create] a fleet of artificial intelligence-enhanced robots capable of destroying mankind." -- Elon Musk

The goal of this event is to give anyone interested in this topic an opportunity to participate in a conversation on the moral & ethical consequences of developing Artificial Intelligence (AI), & to propose ideas on how we can develop AI safely & beneficially. Ask questions, get answers, connect with other people.

This is a non-technical workshop. Anyone & everyone can participate: marketers, advertisers, developers, engineers, product managers, investors, data analysts, students, policy wonks, aliens from Mars -- all are welcome!
The Discussion: What is AI?

The truth is nobody knows for certain what AI is or, more importantly, what it's becoming. Ask a scientist, a technologist, a corporate executive, & an ethicist to define AI & they will each likely have very different definitions & understandings of the technology.

So, where does this leave the rest of us?

It's up to each individual person to come to their own understanding of AI, & to help define its potential. That's where Tech 2025 comes in. Join us as we launch the Tech 2025 Think Tank (the people's think tank) AI series, to explore the ideas, ethics, & technologies powering artificial intelligence now & into the future in an inclusive & open environment. Each Think Tank event is an exploratory adventure into the unknown of emerging technologies & ourselves as we define & redefine what AI is & what it should be.

Every event begins with the question: What is AI? This is done so that we can constantly remind ourselves that the definition of AI is not rigid & absolute; rather, its meaning is fluid, constantly changing as the technology develops, & should be subject to analysis, critique, & rigorous debate. We are all contributing to defining this powerful new technology as we use it in our homes, at work, & for entertainment each & every day.
The Thought Exercise: How can we develop safe AI?

Earlier this year (January 5-8), in Asilomar, California, more than 100 Artificial Intelligence experts gathered for the Beneficial AI 2017 Conference, hosted by the non-profit Future of Life Institute, whose scientific board of advisors includes Elon Musk, Larry Page, & Stephen Hawking, among others. The attendees at this exclusive event included researchers from academia & industry, as well as thought leaders in economics, technology, law, ethics, & philosophy -- some of the greatest minds in science & technology working on AI.

The purpose of the conference: to discuss & create guiding principles of "beneficial AI" to keep us from developing AI technologies that can eventually harm (and maybe even destroy) humanity.

Their solution was a mutually agreed upon document called the Asilomar AI Principles -- a list of 23 guidelines they suggest all AI researchers, scientists, lawmakers, & tech companies should follow when developing & implementing AI. The document has since been signed by more than 3,542 science & tech influencers (see the full list of signatories HERE).

Read & download the Asilomar AI Principles document HERE. Also, you may want to watch THIS VIDEO of conference attendees discussing the future of AI on a panel that included Elon Musk, Stuart Russell, Ray Kurzweil, Sam Harris, Nick Bostrom, & others.

The Rundown

The incomparable Dr. Seth Baum, Executive Director of the Global Catastrophic Risk Institute, joins us once again to guide us in exploring the intent, meaning, & implications of the Asilomar AI Principles, & whether having a set of existing guidelines can really safeguard humanity from AI catastrophe, or even be enforceable in this fractured, complicated world. Dr. Baum's expertise in global catastrophes & risk will give us unique insight into this topic & guide us toward alternate ways of thinking about AI risk. What questions should we be asking of researchers, tech companies, the government, & ourselves about these safeguards? And who are these 100 AI experts guiding our future?

Put your thinking cap on, connect with other people who are as interested in this topic as you are, & meet Dr. Seth Baum (this might be your only chance to talk to a global apocalypse expert!). Plus, this is an interactive think tank (we have group exercises & discussions -- thinking is nice, but thinking & doing is best!).

Dr. Baum's research focuses on risk & policy analysis of catastrophes that could destroy human civilization, such as global warming, nuclear war, & runaway artificial intelligence. Baum received a Ph.D. in Geography from Pennsylvania State University & completed a post-doctoral fellowship with the Columbia University Center for Research on Environmental Decisions.