DETAILS
A free, public workshop; no technical background needed.
What would it actually mean for machines to become smarter than humans, not just at chess or writing, but at everything? For decades, scientists and researchers have debated the idea of superintelligent AI: systems that could outperform humans across nearly all cognitive tasks. What once sounded like distant science fiction is now increasingly discussed as a real possibility within our lifetimes.
This workshop is a guided, interactive introduction to the idea of superintelligence and the problem of loss of control: what happens if we create systems that are more capable than we are, but whose goals don't perfectly align with human values?
Over the course of the workshop, we'll explore:
- Superintelligence: What it means for an AI system to surpass human intelligence across domains, and how this differs from today's AI.
- Power and control: Why advanced AI could wield enormous economic, political, and strategic influence, and why just turning it off may not be realistic.
- Misalignment risks: How systems pursuing the wrong objectives could cause harm even without malicious intent.
- Societal choices: How decisions made today, by researchers, companies, and governments, may shape whether AI becomes a tool we control or a force we struggle to contain.
What you'll get:
- A clear, non-technical introduction to core ideas from AI safety and existential risk research.
- Live demonstrations of current AI capabilities, to ground abstract concerns in real systems.
- Small-group discussions and thought experiments that connect AI risks to everyday life, institutions, and incentives.
- A better mental model for evaluating headlines, hype, and claims about AI progress.
Format and vibe:
This is not a lecture; it is an interactive workshop designed to help people think clearly about a topic that is increasingly important and moving very quickly.
Whether you're new to AI, casually interested, or deeply unsure what to believe, you'll leave with a clearer picture of why many researchers take superintelligence seriously, and of what's actually at stake if we get it wrong.