Peloton is looking for a Data Engineer to build our e-commerce data pipelines & strengthen the integrity of our e-commerce data models. You will work with multiple teams of passionate & skilled data engineers, architects, & analysts responsible for building batch & streaming data pipelines that process data daily & support all of the e-commerce reporting & ERP integration needs across the organization.
Peloton is a cloud-first engineering organization with all of our data infrastructure in AWS, leveraging EMR, AWS Glue, Redshift, S3, & Spark. You will interact with many business teams, including finance, analytics, & enterprise systems, & partner with them to scale Peloton's e-commerce data infrastructure for future strategic needs.
Help build a culture of quality
- Assume technical responsibility for new services & functionality, look out for opportunities for platform improvement, & work with engineers to scale our production systems.
- Identify & lead technical initiatives to build clean, robust, & performant data applications.
- Contribute to the adoption of software architecture & new technologies.
- Lead, coach, pair with, & mentor e-commerce data software engineers.
- Mentor data engineers from diverse backgrounds to nurture a culture of ownership, learning, automation, re-use, & engineering efficiency through the use of software design patterns & industry best practices.
- Engage in code reviews helping maintain our coding standards.
- Be a leader within your team & the organization.
Facilitate the on-time completion of large projects
- Understand the data needs of different stakeholders across multiple business verticals, including Business Intelligence, Finance, & Enterprise Systems.
- Develop the vision & map strategy to provide proactive solutions & enable stakeholders to extract insights & value from data.
- Understand end-to-end data interactions & dependencies across complex data pipelines & transformations, & how they impact business decisions.
- Design best practices for big data processing & data modeling.
- Lead architecture meetings & technical discussions with a focus on reaching consensus & best-practice solutions.
- Break down tasks for other engineers on the team & offer guidance when they are blocked.
- Achieve on-time delivery without compromising quality.
Qualifications
- 8+ years of relevant experience, including e-commerce & data engineering
- Good active listening skills & the ability to empathize with stakeholders & other engineers.
- Experience in a high-paced, high-growth environment working with deadlines & milestones.
- Comfortable with ambiguity; you enjoy figuring out what needs to be done.
- Senior-level proficiency in at least one modern programming language, & the ability to learn anything you don't already know to get the job done.
- Excellent time management skills.
- Have a solid understanding of clean data design principles.
- Experience mentoring engineers with a team-focused mentality for success.
- Excellent knowledge of databases such as PostgreSQL & Redshift.
- Experience with Git, GitHub, Jira, & Scrum.
- 2+ years building data warehouses & data pipelines, or 3+ years in data-intensive engineering roles.
- Experience with big data architectures & data modeling to efficiently process large volumes of data.
- Background in ETL & data processing; you know how to transform data to meet business goals.
- Experience developing large data processing pipelines on Apache Spark.
- Strong understanding of SQL & working knowledge of using it (preferably on PostgreSQL & Redshift) for various reporting & transformation needs.
- Experience with distributed systems, CI/CD tools (ex: Jenkins), & container orchestration (ex: Kubernetes).
- Familiarity with at least one of the following programming languages: Python, Java.
- Comfortable with the Linux operating system & command-line tools such as Bash.
- Familiarity with REST APIs for accessing cloud-based services.
- Excellent communication, adaptability, & collaboration skills.
- Experience running an Agile methodology & applying it to data engineering.
- Familiar with the AWS ecosystem, including RDS, Redshift, Glue, Athena, etc.
- Experience with Apache Hadoop, Hive, Spark, & PySpark.
Founded in 2012, Peloton is a global interactive fitness platform that brings the energy & benefits of studio-style workouts to the convenience & comfort of home. We use technology & design to bring our Members immersive content through the Peloton Bike, the Peloton Tread, & Peloton Digital, which provide comprehensive, socially-connected fitness offerings anytime, anywhere. We believe in taking risks & challenging the status quo by continuously innovating & improving. Our team is made up of passionate brand ambassadors, & we know that together, we go far.
Headquartered in New York City, with offices, warehouses & retail showrooms in the US, UK & Canada, Peloton is changing the way people get fit. Peloton has been named to many prestigious industry lists, including Fast Company's Most Innovative Companies, CNBC's Disruptor 50, Crain's New York Business' Tech25 & Fast50, as well as TIME's Genius Companies. Visit www.onepeloton.com/careers to learn more about joining our team.