Who we are
DoubleVerify is the leading independent provider of marketing measurement software, data & analytics that authenticates the quality & effectiveness of digital media for the world's largest brands & media platforms. DV provides media transparency & accountability to deliver the highest level of impression quality for maximum advertising performance. Since 2008, DV has helped hundreds of Fortune 500 companies gain the most from their media spend by delivering best-in-class solutions across the digital ecosystem, helping to build a better industry. Learn more at www.doubleverify.com.
Role Description
Help us build our analytics system, used by both internal teams & external clients, from the ground up. You will be a leading part of a high-performing platform data engineering team, which builds an online analytics platform providing insights & data for the world's largest brands & media platforms. This role involves working on high-scale distributed architecture, processing data at petabyte scale, working with numerous advertising data sources from major platforms such as Google, Meta, & TikTok, developing systems that export massive amounts of reports with thousands of data points daily, & building robust APIs using Python & Java. You will integrate complex tools like data catalogs (e.g., Atlan, OpenMetadata) & semantic layers (e.g., Looker, Cube.dev) & work with multiple data lakes such as Snowflake, BigQuery, & Databricks. Additionally, you will leverage native tables & newly supported open formats like Iceberg & Delta to ensure maximum flexibility & minimize go-to-market time for new data products.
Who You Are
You are a passionate & experienced software engineer looking to take on your next major technical projects. You enjoy learning the intricacies & nuances of a distributed system, then elegantly & cleanly designing new technical implementations to organically grow that system to its greatest potential. You can speak intelligently about the code choices made today, their tradeoffs, & their first- & second-order effects on how the system evolves. A constant learner, you're a continuous contributor to the improvement of the team's skill level. You've got the ability to take on individual assignments & complete them front to back. You pride yourself on good architecture & on writing & delivering high-quality code.
And most importantly, you enjoy sharing, reviewing, & teaching those practices to others.
What You Will Do
- Be the technical system owner, responsible for producing a long-term technical vision, code quality, performance & observability.
- Learn multiple complex systems that use numerous modern cutting-edge technologies, such as Looker, Snowflake & Airflow.
- Design technical implementations to grow these systems
- Design & implement systems responsible for high concurrency access to large data sets
- Identify gaps, deficiencies & inefficiencies in the system. Propose & implement solutions.
- Help oversee technical implementations written by the rest of the team, ensuring they are in line with the team's designs & aligned with DV best practices & agreed concepts, with an eye towards compatibility between features, designs, & implementation choices
- Be quick to fix issues that come up, & help to mentor & train others on the team
- Become a key contributor to feature scoping, technical implementation, & developer estimates
- Work with the Product Management team to understand requirements
- Be proactive about developer testing, & coding at all levels of a system of applications
- Test & optimize code developed both by you & by other team members
- Establish effective monitoring for automated system failure detection
- Continuously release your features using automated deployment tools & frameworks
Requirements
- At least 7 years of professional software engineering experience
- Proven experience with Python or other object-oriented languages (JS, Java, C#, etc.)
- Strong SQL proficiency, with the ability to suggest optimizations for query performance & cost efficiency
- Strong familiarity with REST APIs & web-based APIs
- Familiarity with core architecture principles of at-scale systems
- Experience with BI platforms such as Looker, Tableau, Power BI, etc.
- Familiarity with public cloud, such as GCP, AWS, Azure
- Excellent communication skills
- Experience with using task/build/automation tools in coordination with DevOps
- Bachelor's Degree or higher in Computer Science or a related field, or equivalent technical experience
- Previous experience managing & growing a large codebase over time is a big plus!
- Knowledge of Kubernetes & Terraform is not required, but is a plus!
- Previous experience as a team lead, principal engineer, or architect is not required, but is a plus.
The successful candidate's starting salary will be determined based on a number of non-discriminatory factors, including qualifications for the role, level, skills, experience, location, & balancing internal equity relative to peers at DV. The estimated salary range for this role, based on the qualifications set forth in the job description, is $118,000.00 - $235,000.00. This role will also be eligible for bonus/commission (as applicable), equity, & benefits. The range above reflects the expectations laid out in the job description; however, we are often open to a wide variety of profiles, & recognize that the person we hire may be more or less experienced than this job description as posted.
Not-so-fun fact: Research shows that while men apply to jobs when they meet an average of 60% of job criteria, women & other marginalized groups tend to apply only when they check every box. So if you think you have what it takes but you're not sure that you check every box, apply anyway!