ABOUT THE ROLE
Peloton is looking for a Data Engineer to build our Data Warehouse & Data Pipelines. You will work with multiple teams of passionate & skilled data engineers, architects & analysts responsible for building batch & streaming data pipelines that process terabytes of data daily & support all of the analytics, business intelligence, data science & reporting needs across the organization.
Peloton is a cloud-first engineering organization with all of our data infrastructure in AWS, leveraging EMR, AWS Glue, Redshift, S3 & Spark. You will interact with many business teams, including marketing, sales, supply chain, logistics, finance & partners, to scale Peloton's data infrastructure for future strategic needs.
RESPONSIBILITIES:
- Understand the data needs of different stakeholders across multiple business verticals, including Finance, Marketing, Logistics, Product, etc.
- Develop the vision & map the strategy to provide proactive solutions & enable stakeholders to extract insights & value from data.
- Understand end-to-end data interactions & dependencies across complex data pipelines & data transformations, & how they impact business decisions.
- Design best practices for big data processing, data modeling & warehouse development throughout the company.
MUST HAVES:
- Familiarity with at least one of the following programming languages: Python, Java.
- Comfort with the Linux operating system & command-line tools such as Bash.
- Familiarity with REST for accessing cloud-based services.
- Excellent knowledge of databases such as PostgreSQL & Redshift.
- Experience with Git, GitHub, JIRA & Scrum.
- 2+ years building data warehouses & data pipelines, or 3+ years in data-intensive engineering roles.
- Experience with big data architectures & data modeling to efficiently process large volumes of data.
- Background in ETL & data processing, with the know-how to transform data to meet business goals.
- Experience developing large data processing pipelines on Apache Spark.
- Experience with Python or Java programming languages.
- Strong understanding of SQL & working knowledge of using it (preferably PostgreSQL & Redshift) for various reporting & transformation needs.
- Excellent communication, adaptability & collaboration skills.
- Experience running Agile methodology & applying it to data engineering.
NICE TO HAVES:
- Familiar with AWS ecosystem, including RDS, Redshift, Glue, Athena, etc.
- Experience with Apache Hadoop, Hive, Spark & PySpark.
Founded in 2012, Peloton is a global interactive fitness platform that brings the energy & benefits of studio-style workouts to the convenience & comfort of home. We use technology & design to bring our Members immersive content through the Peloton Bike, the Peloton Tread, & Peloton Digital, which provide comprehensive, socially-connected fitness offerings anytime, anywhere. We believe in taking risks & challenging the status quo by continuously innovating & improving. Our team is made up of passionate brand ambassadors, & we know that together, we go far.
Headquartered in New York City, with offices, warehouses & retail showrooms in the US, UK & Canada, Peloton is changing the way people get fit. Peloton has been named to many prestigious industry lists, including Fast Company's Most Innovative Companies, CNBC's Disruptor 50, Crain's New York Business' Tech25 & Fast50, as well as TIME's Genius Companies. Visit www.onepeloton.com/careers to learn more about joining our team.