Dropbox // cloud storage
Role Description

In this role you will build large, scalable analytics pipelines using modern data technologies. This is not a "maintain the existing platform" or "make minor tweaks to the current code base" kind of role. We are effectively building from the ground up & plan to leverage the most recent Big Data technologies. If you enjoy building new things without being constrained by technical debt, this is the job for you!

Our Engineering Career Framework is viewable by anyone outside the company & describes what's expected of our engineers at each of our career levels. Check out our blog post on this topic & more here.

Responsibilities

  • Help define company data assets (data models) & the Spark, SparkSQL & HiveSQL jobs that populate them
  • Help define & design data integrations & data quality frameworks, & evaluate open source/vendor tools for data lineage
  • Work closely with Dropbox business units & engineering teams to develop a long-term Data Platform architecture strategy that is efficient, reliable & scalable
  • Conceptualize & own the data architecture for multiple large-scale projects, while evaluating design & operational cost-benefit tradeoffs within systems
  • Collaborate with engineers, product managers, & data scientists to understand data needs, representing key data insights in a meaningful way
  • Design, build, & launch collections of sophisticated data models & visualizations that support multiple use cases across different products or domains
  • Optimize pipelines, dashboards, frameworks, & systems to facilitate easier development of data artifacts
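To give a flavor of the first responsibility above — SQL jobs that populate data models — here is a minimal, hedged sketch. It uses the stdlib sqlite3 module as a stand-in for SparkSQL/HiveSQL, and the table & column names (raw_events, fact_daily_usage) are invented for illustration, not Dropbox's actual schema:

```python
import sqlite3

# Illustrative only: SQLite stands in for SparkSQL/HiveSQL here, and the
# table/column names (raw_events, fact_daily_usage) are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_events (user_id INTEGER, event_date TEXT, bytes_used INTEGER);
    CREATE TABLE fact_daily_usage (event_date TEXT, user_count INTEGER, total_bytes INTEGER);
    INSERT INTO raw_events VALUES
        (1, '2025-01-01', 100),
        (2, '2025-01-01', 50),
        (1, '2025-01-02', 70);
""")

# A typical "populate the data model" job: aggregate raw events into a fact table.
conn.execute("""
    INSERT INTO fact_daily_usage (event_date, user_count, total_bytes)
    SELECT event_date, COUNT(DISTINCT user_id), SUM(bytes_used)
    FROM raw_events
    GROUP BY event_date
""")

rows = conn.execute("SELECT * FROM fact_daily_usage ORDER BY event_date").fetchall()
```

In a Spark/Hive setting the same aggregation would run as a distributed job writing to a warehouse table rather than an in-memory database.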

On-call work may be necessary occasionally to help address bugs, outages, or other operational issues, with the goal of maintaining a stable & high-quality experience for our customers.

Requirements

  • 5+ years of Spark, Python, Java, C++, or Scala development experience
  • 5+ years of SQL experience
  • 5+ years of experience with schema design, dimensional data modeling, & medallion architectures
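For context on the "medallion architectures" named above: data lands raw in a bronze layer, is cleaned into silver, & is aggregated into business-level gold tables. A minimal pure-Python sketch of that flow — the field names & quality rules are invented for illustration; in practice each layer would be a set of warehouse/Delta tables, not in-memory dicts:

```python
# Medallion layering sketch: bronze (raw) -> silver (cleaned) -> gold (aggregated).
# Field names and quality rules are invented for illustration.

def to_silver(bronze_rows):
    """Clean raw bronze records: drop malformed rows, normalize types."""
    silver = []
    for row in bronze_rows:
        if row.get("user_id") is None or row.get("bytes") is None:
            continue  # quality rule: discard incomplete records
        silver.append({"user_id": int(row["user_id"]), "bytes": int(row["bytes"])})
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned records into a business-level metric per user."""
    gold = {}
    for row in silver_rows:
        gold[row["user_id"]] = gold.get(row["user_id"], 0) + row["bytes"]
    return gold

bronze = [
    {"user_id": "1", "bytes": "100"},
    {"user_id": None, "bytes": "50"},  # malformed: dropped at the silver layer
    {"user_id": "1", "bytes": "25"},
    {"user_id": "2", "bytes": "10"},
]
gold = to_gold(to_silver(bronze))
```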
  • Experience with the Databricks platform & data lake architectures for large-scale data processing & analytics
  • Excellent strategic product thinking & communication skills, with the ability to influence product & cross-functional teams by identifying data opportunities that drive impact
  • BS degree in Computer Science or related technical field involving coding (e.g., physics or mathematics), or equivalent technical experience
  • Experience designing, building & maintaining data processing systems

Preferred Qualifications

  • 7+ years of SQL experience 
  • 7+ years of experience with schema design, dimensional data modeling, & medallion architectures
  • Experience with Airflow or other similar orchestration frameworks
  • Experience building data quality monitoring using Monte Carlo or similar tools
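The core idea behind orchestration frameworks like Airflow is declaring task dependencies & running tasks in a valid order. As a hedged, stdlib-only sketch of that idea — the task names are invented, & this is not the Airflow API (real DAGs add scheduling, retries, sensors, etc.):

```python
from graphlib import TopologicalSorter

# Orchestration in miniature: declare task dependencies, derive a run order.
# Task names are invented for illustration; this is not the Airflow API.
dag = {
    "load_silver": {"ingest_bronze"},       # load_silver depends on ingest_bronze
    "build_gold": {"load_silver"},
    "publish_dashboard": {"build_gold"},
}

run_order = list(TopologicalSorter(dag).static_order())
```

Because each task here has exactly one upstream dependency, the topological order is unique: ingest_bronze, load_silver, build_gold, publish_dashboard.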

Compensation

Poland Pay Range

© 2025 GarysGuide