DETAILS
Most teams are still overpaying for inference on problems a 4B model could handle. So we're hosting a dinner about it.
We're gathering a small group of AI infrastructure leaders in San Francisco: engineers and founders working on inference optimization, model distillation, and on-device deployment.
No presentations, off the record.
Hosted by Distil Labs, whose platform trains task-specific small language models from as few as 10 examples. Their fine-tuned SLMs match frontier models at 10-100x lower inference cost.
Sit-down dinner & drinks
Off-the-record conversation
Private dining room, SF
Admission is by approval only.
Approved guests will receive a calendar invite.