Doors at 6:00 PM, discussion from 6:30-7:30, networking 7:30-8:00.
Venue: 501 2nd Street, Suite 202, San Francisco, CA 94107
This event will feature a talk by Jakob Uszkoreit of Google Brain about self-attention mechanisms & their applications. All levels of experience & comfort with the reading material are welcome.
ABSTRACT:
Self-attention has been shown to be an efficient way of learning representations of language, competitive in quality with recurrent & convolutional neural networks while potentially much more efficient at training time. This talk will cover two specific applications of self-attention: an approach for building very fast Siamese networks for comparing short pieces of text, & the Transformer, a fast-training, high-quality, & potentially more interpretable architecture for tasks ranging from machine translation to image generation.
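For attendees who would like a concrete picture before the talk, here is a minimal sketch of the scaled dot-product self-attention operation at the core of the Transformer, written in plain NumPy. The function & variable names (self_attention, Wq, Wk, Wv) & the toy shapes are illustrative assumptions, not code from the talk or from Google Brain.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention (illustrative sketch).
    # X: (seq_len, d_model) token representations.
    # Wq, Wk, Wv: (d_model, d_k) learned projection matrices.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise similarities, scaled by sqrt(d_k)
    weights = softmax(scores, axis=-1)        # each position attends over all positions
    return weights @ V                        # weighted sum of values

# Toy usage: a "sentence" of 4 tokens with width-8 representations.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (4, 8)

Because every position attends to every other position through a single pair of matrix multiplications, the whole sequence is processed in parallel rather than step by step as in a recurrent network, which is where the training-time efficiency mentioned above comes from.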
ABOUT THE SPEAKER:
Jakob Uszkoreit currently works on the research team of Google Brain, where he develops neural network architectures for generating text, images & other modalities for tasks such as machine translation & image super-resolution. Earlier, Jakob led a team in Google Research developing neural network models of language that learn from weak supervision at very large scale. Today, these models are used in Search, Search Ads & the Google Assistant. Before that, Jakob started the group that designed & implemented the semantic parser behind today's Google Assistant, after working on various aspects of Google Translate in its earlier years.
Jakob believes that the ability to harness weak supervision across multiple modalities, at large scale, & in virtual or physical embodiments will be the key to building machines that 'understand' the world around them, including language, & interact seamlessly with their users & their environment.
Special thanks to our friends at Spoke for providing space for this event.