Blink Health is a well-funded healthcare technology company on a mission to make prescription drugs more accessible & affordable for everyone. We're scaling up in a highly complex vertical to change the way Americans access the prescription drugs they need.
Our proprietary platform & supply chain allow us to offer everyone, whether they have insurance or not, remarkably low prices on over 15,000 medications. With the addition of telemedicine & home delivery for prescriptions, Blink is providing a life-changing experience for people all over the country & fixing how opaque, unfair & overpriced healthcare has become. We are a highly collaborative team of builders & operators who invent new ways of working in an industry that has historically resisted innovation. Join us!
About The Team
Blink Engineering strives to build trusted, highly observable, data-driven products to bring affordable, accessible healthcare to all Americans. We understand healthcare is the most complex system most of us will ever fix. We believe in solving this complexity through the use of simple, well-known technologies. We are a highly collaborative team that believes in owning outcomes over owning code & putting patients at the center of everything we do.
The Blink Health Data Engineering & Analytics team is a small team responsible for building the infrastructure, frameworks & tooling that enable data-driven decisions, & for building & maintaining our data warehouse for security & scale. This role is central to building & executing a robust, forward-looking data strategy for the company, & the successful candidate blends top-tier software engineering expertise with the ability to anticipate what we need to build for the future.
About the Role
As a Data Engineer, you will help build our next generation of data tools & frameworks, in addition to developing & maintaining data products & infrastructure. You will proactively assess production data warehouse (DW) support trends to determine & implement short- & long-term solutions, & design for data integrity, reliability & performance.
- You have 4+ years of hands-on experience & demonstrated strength with:
  - Python software development. You will be coding.
  - Building & maintaining robust & scalable data integration (ETL) pipelines using SQL, EMR, Python & Spark.
  - Writing complex, highly optimized SQL queries across large data sets.
  - Designing & maintaining columnar databases (e.g., Redshift, Snowflake).
  - Distributed data processing (Hadoop, Spark, Hive).
  - ETL with batch (AWS Data Pipeline, Airflow) & streaming (Kinesis) tools.
  - Integration & design for Business Intelligence tools (e.g., Looker, QuickSight).
  - Creating scalable data models for analytics.
- You have experience designing & refactoring large enterprise data warehouses & their associated ETL pipelines, with concrete examples of continuous improvement through automation & simplification across the DW environment, spanning both engineering & business reporting.
- Proven success communicating effectively across diverse disciplines (including product engineering, infrastructure, analytics, data science, finance, marketing, customer support, etc.) to gather requirements & explain data engineering strategy & decisions.
- Undergraduate or graduate degree in Computer Science.