With Seth Baum (Executive Dir., Global Catastrophic Risk Institute).
Tuesday, July 11, 2017 at 06:00 PM    Cost: $20
Galvanize, 315 Hudson St, 2nd Fl
 
DESCRIPTION
The Skynet Is Falling: 23 Guidelines to Avoid an AI Apocalypse (according to experts)

Event website: http://bit.ly/23aiprincipals

About This Workshop

[Photo: AI experts attending the Beneficial AI Conference in January, including Elon Musk, seated in the second row.]

On January 5-8 of this year, in Asilomar, California, more than 100 artificial intelligence experts gathered for the Beneficial AI 2017 Conference (a follow-up to the 2015 AI Safety conference in Puerto Rico), hosted by the non-profit Future of Life Institute, whose scientific board of advisors includes Elon Musk & Stephen Hawking, among others. The experts in attendance at this exclusive event included researchers from academia & industry, & thought leaders in economics, technology, law, ethics, & philosophy.

The purpose of the 3-day conference? To discuss and create guiding principles of 'beneficial AI.' Here is a portion of the joint statement from the organizers of the event:

'We, the organizers, found it extraordinarily inspiring to be a part of the BAI 2017 conference, the Future of Life Institute's second conference on the future of artificial intelligence. Along with being a gathering of endlessly accomplished & interesting people, it gave a palpable sense of shared mission: a major change is coming, over unknown timescales but across every segment of society, & the people playing a part in that transition have a huge responsibility & opportunity to shape it for the best.

In planning the Asilomar meeting, we hoped both to create meaningful discussion among the attendees, & also to see what, if anything, this rather heterogeneous community actually agreed on. We gathered all the [AI research] reports we could & compiled a list of scores of opinions about what society should do to best manage AI in coming decades.'

What this group of 100+ experts came up with is a mutually agreed-upon set of 'Asilomar AI Principles' -- a list of 23 guidelines that they suggest AI researchers, scientists, lawmakers & tech companies should follow to ensure safe, ethical & beneficial use of AI. In short, guidelines to keep us from creating AI/robots that will destroy humanity.

The guidelines have since been covered widely in the media & signed by 3,542 AI/robotics researchers. See the full list of signatories HERE.

'I think human extinction will probably occur, & technology will likely play a part in this. [We could create] a fleet of artificial intelligence-enhanced robots capable of destroying mankind.' --Elon Musk
What Will This Workshop Cover?
JOIN US for this special, interactive workshop & discussion, featuring guest presenter Dr. Seth Baum, Executive Director of the Global Catastrophic Risk Institute, as we explore the intent, meaning, & implications of these AI guidelines, & whether having a set of existing guidelines can really safeguard humanity from an AI catastrophe.

Dr. Baum's expertise in global catastrophes & risk will offer unique insight into this topic & guide us on alternate ways of thinking about AI risk & the questions we should be asking of researchers, tech companies & the government about these safeguards. Additionally, who are the 100+ AI experts guiding our future? What can we learn about them that will help us understand how our future is being shaped through AI?

You won't want to miss this workshop!

After the presentation, we will have our popular interactive group experience, where you will get the opportunity to explore & answer some of the challenging problems in this space with others who are just as intrigued by this topic as you are!

Prerequisite

No technical experience required. This is a non-technical workshop meant for everyone: marketers, advertisers, developers, engineers, product managers, investors, data analysts, students, policy wonks -- all are welcome!

In addition to reading the Asilomar AI Principles, you may want to watch the following video of conference participants on a panel discussing the future of AI, including Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, & Jaan Tallinn. Video here: https://www.youtube.com/watch?v=h0962biiZa4

Agenda
  • 6pm - 6:15pm - Sign in, enjoy lite bites & beverages
  • 6:15 - 6:20 - Introduction, Tech 2025 announcements, sponsor acknowledgement
  • 6:20 - 6:45 - Presentation by guest instructor Dr. Seth Baum
  • 6:45 - 7:50 - Interactive group exercises & problem-solving
  • 7:50 - 8pm - Final thoughts & Q&A
Guest Presenter

Dr. Seth Baum is the Executive Director of the Global Catastrophic Risk Institute, a nonprofit think tank that he co-founded in 2011.

Dr. Baum's research focuses on risk & policy analysis of catastrophes that could destroy human civilization, such as global warming, nuclear war, & runaway artificial intelligence.

Baum received a Ph.D. in Geography from Pennsylvania State University & completed a post-doctoral fellowship with the Columbia University Center for Research on Environmental Decisions.

Follow him on Twitter @SethBaum & on Facebook: http://www.facebook.com/sdbaum

 
 
 
 