Build the next generation of scalable data infrastructure
Our Data Infrastructure team is a small group focused on delivering major impact as they build the scalable data infrastructure that ingests, analyzes & links data resources across the organization. We're looking for individuals who are comfortable with autonomy, confident moving fast, & always looking for new ways to understand & improve the experience of our users (data scientists & data engineers). Data Infrastructure's core mission is to enable the most efficient & highest-quality feature generation in service of building the next generation of data intelligence products.
You will work on:
Building & maintaining the infrastructure behind highly scalable ETL pipelines & data-driven systems that contribute to a compounding linked data asset. You'll work closely with product owners, data scientists, & machine learning engineers, building the infrastructure to productionize analytic solutions at scale.
You'll be responsible for all phases of the development cycle: design, implementation, testing, & release. Leveraging your deep knowledge & experience in taking ideas from zero to completion, you'll create the platform that empowers product teams at Enigma to quickly explore, iterate, & deliver world-class data intelligence.
We're looking for someone who has:
- A strong background as a data engineer in building & shipping highly scalable distributed systems on cloud platforms (AWS/Azure/GCP) & database technologies (SQL/NoSQL/column-oriented data stores/distributed databases)
- Experience with the Big Data ecosystem (Hadoop/Hive/Spark/Airflow)
- A proven track record of leading & delivering large projects independently
- The proven ability to learn new technologies quickly
- Experience with Python (a plus)