Who we are
DoubleVerify is the leading independent provider of marketing measurement software, data & analytics that authenticates the quality & effectiveness of digital media for the world's largest brands & media platforms. DV provides media transparency & accountability to deliver the highest level of impression quality for maximum advertising performance. Since 2008, DV has helped hundreds of Fortune 500 companies get the most from their media spend by delivering best-in-class solutions across the digital ecosystem, helping to build a better industry. Learn more at www.doubleverify.com.
The DevOps team is made up of engineers working cross-functionally to provide all of our product infrastructure & automation. You will be part of a talented team that sits at the center of multiple software engineering teams to provide standardized tools & streamlined processes.
- Work with peers to improve & expand our Kubernetes infrastructure
- Contribute to the design & implementation of new products & features, making sure they are all developed so they fit nicely in our Continuous Delivery framework & processes
- Design, deploy & manage data streaming & storage services such as Kafka, Spark & HDFS
- Create & support cloud (GCP) & local cluster environments & infrastructure
- You will work alongside the development teams to provision, automate, & tune multiple environments across datacenter & cloud platforms.
- Identifying bottlenecks, sniffing packets, & creating dashboards on the fly are key.
Who you are:
- Experience with Docker & container orchestration platforms (Kubernetes preferred)
- Ability to leverage application & system metrics, log events, & wire data to analyze performance
- Mastery of one or more configuration management frameworks (Ansible preferred)
- Hands on experience with distributed data stores & data streaming services like Spark or Kafka
- Scripting skills that support working with APIs & harvesting custom metrics (Python or Go)
- Good communication skills, a great personality, & a love for working collaboratively
- Experience working with GCP, AWS or other public cloud
- 4+ years of experience as a DevOps engineer
- 1+ years of leadership experience including managing direct reports
- 2+ years of experience in Linux environments
- 2+ years of hands-on experience with Python/Bash
- 2+ years of hands-on experience with Kubernetes
- 2+ years of experience with one of the following infrastructure automation tools: Ansible, Chef, or Puppet
- Excellent verbal & written communication skills - ability to effectively communicate with technical & non-technical stakeholders across all levels of the organization
- Experience with large scale production systems.
- Experience with cloud architectures such as GCP.
- Experience with CI/CD tools (Git, TeamCity/Jenkins, Ansible, Artifactory)
- Experience with DevOps tools: Git/GitHub, Atlassian Suite, TeamCity, Maven, or NuGet
- Experience with containerized environments, microservices, & distributed systems
- Experience with implementation of package management
Nice to have:
- Experience with Rancher.
- Experience with Kubernetes Operators
- Experience with Helm charts
- Experience with monitoring & metrics collection/processing (Prometheus/Grafana stack)
- Experience with logging & log analysis systems (ELK, Splunk)