DoubleVerify // digital media measurement software & analytics
 
Engineering, Full Time | New York | Posted: Tuesday, June 16, 2020
 
   
 
JOB DETAILS
 

You Will...

-Serve as an architect for proprietary systems, lead development process improvement initiatives that require architectural changes & expertise, & recommend required architectural, design, & implementation changes.
-Define the knowledge management process, establish design & coding guidelines & best practices, & mentor & coach junior developers.
-Conduct technical screenings, interview candidates, & hire engineering talent.
-Drive code reviews, ensure technical cohesion, write technical & functional design documents, lead technical discussions, provide guidance, & peer-review deliverables.
-Oversee integration & implementation with strategic partners & work closely with internal & external engineering teams to prioritize & scope the smooth rollout of proprietary products.
-Coordinate with product & engineering teams & represent the team in planning sessions.
-Develop & deploy microservices with a focus on APIs for data exchange & externalization.
-Use technologies including Kafka, Spark, Hadoop, Hive, Scala, & Kubernetes to implement highly scalable, fault-tolerant data pipelines that process, enhance, & distribute large volumes of data, & perform data analysis, including SQL query writing (a minimal pipeline sketch follows this list).
-Enhance & optimize data pipelines for various internal & external use cases & provide customers with insights in near real time.
-Manage one to three direct reports.
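
For illustration, here is a minimal Scala sketch of the kind of Kafka-to-Spark pipeline described above, using Spark Structured Streaming. The broker address, topic name, schema, & output paths are hypothetical placeholders, & the checkpoint is what provides the fault tolerance the role calls for; this is a sketch under those assumptions, not DoubleVerify's actual implementation.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch of a fault-tolerant Kafka -> Spark pipeline.
// Broker address, topic name, & output paths are hypothetical.
object ImpressionPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("impression-pipeline").getOrCreate()
    import spark.implicits._

    // Ingest: read a stream of raw events from Kafka.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "impressions") // hypothetical topic
      .load()

    // Enhance: cast the payload & aggregate event counts per minute,
    // with a watermark so late-arriving data is bounded.
    val counts = raw
      .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
      .withWatermark("timestamp", "5 minutes")
      .groupBy(window($"timestamp", "1 minute"))
      .count()

    // Distribute: write results to storage. The checkpoint lets the
    // stream resume from the last committed Kafka offsets after a failure.
    counts.writeStream
      .format("parquet")
      .option("path", "/data/impression-counts")            // placeholder path
      .option("checkpointLocation", "/checkpoints/impressions") // placeholder path
      .outputMode("append")
      .start()
      .awaitTermination()
  }
}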

Requirements:
-Bachelor's degree in computer science, information technology, or engineering; a master's degree in computer science, information technology, or engineering plus three years of experience in data engineering & distributed systems is also accepted.
-5 years of overall progressive experience in data engineering & distributed systems.
-This experience must include 3 years of experience in each of the following:
 (1) Kafka;
 (2) Hadoop;
 (3) Spark;
 (4) Kubernetes;
 (5) implementing highly scalable & fault-tolerant data pipelines to process large streams of data;
 (6) data analysis, including SQL query writing;
 (7) optimizing & enhancing existing data pipelines;
 (8) collaborating with engineers, architects, & IT team members to build on existing data platforms;
 (9) maintaining & managing existing data pipelines, including resolving production issues; &
 (10) automation to monitor data pipelines.

 
 
 