[ SF Open Source AI Week ]
PyLadies SF
With Aishwarya Ramasethu (AI Engineer, Prediction Guard).
AutoKitteh, To Be Announced, San Francisco
Thu, Oct 23, 2025 @ 5:30 PM
FREE
DETAILS
Join us for an exciting evening of talks & networking sponsored by AutoKitteh!
Tentative Agenda:
5:30 pm: Networking
6:00 pm: Sponsor Presentation by AutoKitteh
6:20 pm: Community Announcements
6:30 pm: Advanced Safeguarding for AI Inference Pipelines, Aishwarya Ramasethu
Explore practical methods for integrating privacy-preserving techniques into AI applications. Learn about the risks large language models (LLMs) pose during inference, & discover safeguards such as differential privacy, federated learning, & homomorphic encryption. This workshop will feature live demonstrations & hands-on exercises.
7:15 pm: Talk (TBD)
7:45 pm: Networking
8:30 pm: Event ends
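For a flavor of one technique on the agenda, here is a minimal, illustrative sketch of differential privacy via the Laplace mechanism (the query, dataset, and epsilon below are assumptions for illustration, not material from the workshop):

```python
import math
import random

def dp_count(values, threshold, epsilon=1.0):
    """Count items above a threshold, adding Laplace noise for epsilon-DP.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so the noise scale is 1 / epsilon.
    """
    true_count = sum(1 for v in values if v > threshold)
    # Sample Laplace(0, 1/epsilon) noise via the inverse-CDF method.
    u = random.random() - 0.5
    noise = -(1 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Illustrative use: a noisy count hides any single record's presence.
salaries = [52_000, 61_000, 75_000, 98_000, 120_000]
noisy = dp_count(salaries, threshold=70_000, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the noisy answer is unbiased, so repeated queries average toward the true count (which is itself a privacy risk the workshop's budget-tracking discussion would address).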
We are seeking additional speakers. Please DM us your talk proposals.
ABOUT THE SPEAKER
I'm Aishwarya, an AI Engineer at Prediction Guard. I work at the intersection of privacy, safety, & large language models (LLMs), with a focus on designing systems that are practical, grounded in real-world use, & safe.
My work spans building LLM pipelines with safety risks in mind, as well as fine-tuning open models for downstream tasks. Currently, I'm focused on evaluating LLMs across multiple dimensions for enterprise use cases: addressing pre-deployment risks, deployment-time issues, & online risk monitoring.
I've contributed to research in this space & am part of MLCommons, a community-driven AI safety initiative. I've also explored a range of privacy-preserving techniques, including differential privacy & federated learning.