Physical AI Hack
Founders SF Lab, 2 Marina Blvd, B300, San Francisco
Jan 31 (Sat), 2026 @ 09:00 AM
FREE
DETAILS
Physical AI Hack 2026 is a hands-on hackathon for people who want to see AI actually work on real robots. We'll be hosting it at Founders Inc, a space where ambitious founders, builders, & operators come together to grow & scale real startups.
This is not a simulation-only event or a paper exercise. Teams will build, fine-tune, & deploy models on real robotic platforms & watch their systems succeed or fail in the physical world.
We're designing the hack around simple, high-signal tasks that are easy to understand, easy to benchmark, & surprisingly hard for robots.
Think real tasks that look simple until a robot has to do them:
Puzzle & shape insertion. Trivial for humans, brutal for robots. A clean benchmark for vision (shape recognition), action (pick & place), & precise alignment.
Plugging in chargers. A real household task that reveals how difficult fine insertion & depth perception actually are.
Pouring liquid into a cup. Inspired by coffee-making robots, where small depth errors quickly turn into spills instead of success.
These challenges are intentionally chosen because progress is visible, measurable, & hard to fake. We're still adding tasks & are very open to ideas. If there's a real-world manipulation problem you think belongs here, we want to hear it.
What you'll work with
Solo Tech provides the base VLM & VLA models, along with the fine-tuning workflow, so teams can focus on learning & iteration instead of setup.
World Intelligence provides 50+ hours of multimodal egocentric data, including 2D video, depth, IMU, & audio, collected from the same task families used in the hack.
Oli Robotics provides imitation-learning data & tooling for robotics tasks, including demonstrations related to automated coffee-making.
You'll also have access to real robots on site, including Unitree G1, Open Droid R1D2/R2D3, Open Duck Mini & LeRobot SO-101/LeKiwi, so improvements aren't theoretical. You'll see them in action.
What teams can explore
Teams are free to choose their own technical approach. Possible directions include:
Transfer learning & fine-tuning of VLM & VLA models on task-specific data (a minimal sketch follows this list).
Closed-loop policies that improve alignment & execution through feedback (see the second sketch below).
Generalization across task variations, such as new shapes or layouts.
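For the fine-tuning direction, here's a minimal behavior-cloning sketch in PyTorch. Everything in it is illustrative: PolicyNet, the dataset shapes, & the hyperparameters are made-up stand-ins, not Solo Tech's actual models or workflow. It only shows the general shape of the idea: freeze a pretrained backbone, then fine-tune a small action head on task-specific demonstrations.

    # Minimal behavior-cloning fine-tune of a pretrained policy.
    # PolicyNet, the dataset fields, & the hyperparameters are all
    # illustrative stand-ins, not Solo Tech's actual stack.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    class PolicyNet(nn.Module):
        """Toy stand-in for a pretrained VLA backbone + action head."""
        def __init__(self, obs_dim=512, act_dim=7):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU())
            self.action_head = nn.Linear(256, act_dim)  # e.g. 7-DoF arm deltas

        def forward(self, obs):
            return self.action_head(self.backbone(obs))

    # Fake "demonstrations": (observation embedding, expert action) pairs.
    obs = torch.randn(256, 512)
    act = torch.randn(256, 7)
    loader = DataLoader(TensorDataset(obs, act), batch_size=32, shuffle=True)

    model = PolicyNet()
    # Freeze the backbone & fine-tune only the action head on task data.
    for p in model.backbone.parameters():
        p.requires_grad = False
    opt = torch.optim.AdamW(model.action_head.parameters(), lr=1e-4)

    for epoch in range(5):
        for o, a in loader:
            loss = nn.functional.mse_loss(model(o), a)
            opt.zero_grad()
            loss.backward()
            opt.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")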
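And for the closed-loop direction, a toy example of why feedback beats open-loop replay on insertion tasks: the controller re-measures the misalignment every step & corrects against it, so small perception errors get absorbed instead of accumulating. The simulated "camera" & "robot" here are hypothetical stand-ins for whatever perception & control interfaces your platform exposes.

    # Toy closed-loop insertion: a proportional controller drives the
    # end-effector toward the socket using a noisy perception estimate.
    # The simulated perception & motion below are hypothetical stand-ins
    # for a real platform's camera & control APIs.
    import numpy as np

    rng = np.random.default_rng(0)
    ee_pos = np.array([0.05, -0.03, 0.02])   # end-effector offset from socket (m)
    GAIN, TOL = 0.5, 0.002                   # proportional gain, 2 mm tolerance

    for step in range(50):
        # Perception: noisy estimate of the remaining misalignment.
        error = ee_pos + rng.normal(0.0, 0.001, size=3)
        if np.linalg.norm(error) < TOL:
            print(f"aligned after {step} steps")
            break
        # Action: step against the measured error instead of replaying
        # a fixed open-loop trajectory.
        ee_pos -= GAIN * error
    else:
        print("failed to align within 50 steps")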