The DevOps team is made up of engineers working cross-functionally to provide all of our product infrastructure & automation. You will be part of a small, talented team that sits at the center of multiple software engineering teams, providing standardized tools & streamlined processes.
We at DoubleVerify believe that giving our people a broad range of responsibilities results in the highest satisfaction for our engineers & a strong return on investment for the company. We want people who love the idea of working on tooling & system testing one week & fine-tuning Kubernetes internals the next. The ideal candidate embraces continuously evolving architecture & wants to play a role in designing how things are done here.
You will be working with our team to help architect, build, & support a high-volume, low-latency platform that processes several terabytes of data each day. You will have the freedom to innovate & take your projects from test to production in a short time.
What you'll do:
- Work with peers to improve & expand our Kubernetes infrastructure
- Contribute to the design & implementation of new products & features, making sure they fit nicely into our Continuous Delivery framework & processes
- Design, deploy & manage streaming services such as Kafka, Spark & HDFS
- Create & support cloud (GCP) & local cluster environments & infrastructure
- Work alongside the development teams to provision, automate, & tune multiple environments across datacenter & cloud platforms
- Identify bottlenecks, sniff packets, & create dashboards on the fly
Who you are:
- Previous experience in a DevOps role working closely with Software Engineers
- Experience with Docker & container orchestration platforms (Kubernetes preferred)
- Ability to leverage application & system metrics, log events, & wire data to analyze performance
- Mastery of one or more configuration management frameworks (Ansible preferred)
- Hands-on experience with distributed data stores & data streaming services like Spark, Kafka, etc.
- Scripting skills that support working with APIs & harvesting custom metrics (Python or Go is great!)
- Good communication skills, a great personality, & a love for working collaboratively
- Experience working with GCP or other public cloud
- 2+ years of experience in a Linux environment (4+ years is an advantage)
- 3+ years of experience as a DevOps engineer (4+ years is an advantage)
- Experience with large scale production systems.
- Experience with cloud architectures such as GCP.
- Hands-on experience in Python/Bash
- Experience with CI/CD tools (Git, TeamCity/Jenkins, Ansible, Artifactory)
- DevOps tools experience: Git/GitHub, Atlassian Suite, TeamCity, Maven, NuGet
- Experience in one of the following infrastructure automation tools: Ansible, Chef, Puppet (we use Ansible)
- Experience with containerized environments & microservices
- At least 2 years of hands-on experience with Kubernetes
- You must be fluent in English
- Experience with Linux system administration
- Experience implementing package management
Nice to have:
- Experience with Rancher.
- Experience with K8s Operators
- Experience with Helm charts
- Experience with monitoring & metrics collection/processing (Prometheus/Grafana stack)
- Experience with logging & log analysis systems (ELK, Splunk)