The Kinetica Active Analytics Platform combines streaming and historical data with location intelligence and machine learning-powered analytics. Organizations across automotive, energy, telecommunications, retail, healthcare, financial services, and beyond leverage the platform's GPU-accelerated computing power to build custom analytical applications that deliver immediate, dynamic insight. Kinetica has a rich partner ecosystem, including NVIDIA, Dell, HP, and IBM, and is privately held, backed by leading global venture capital firms Canvas Ventures, Citi Ventures, GreatPoint Ventures, and Meritech Capital Partners.
For more information and trial downloads, visit kinetica.com or follow us on LinkedIn and Twitter.
Kinetica seeks a Senior Software Engineer with Python and distributed systems experience. As a member of the ML Product Engineering Team, you'll build out the product in Python and integrate its other components (Kubernetes, Docker, our GPU-powered database, etc.) using Python bindings to deliver an overall product: a REST API. This is a Product Engineering role, so we are building a generic solution that works across many industries, use cases, clients, data varieties, data volumes, and data velocities (rather than building for a specific use case or client) -- a huge opportunity to make a mark on a fast-growing industry.
- Integrate a variety of components into an overall smooth-functioning product
- Bring work to a close -- not just exploring, but getting things to the finish line
- Research products and keep abreast of marketplace offerings and possibilities
- Work with commercial and open-source packages to find stacks that achieve required product features
- Work with a close-knit team to design and develop a release-quality commercial product
- Work with our broader engineering group to ensure products fit into the company's product lineup
- Work iteratively to hone proofs of concept for new product features and steadily merge development into the overall product
- Keep attuned to the marketplace and spot opportunities to expand functionality as new technical capabilities arise
- Keep attuned to customer use and actively improve the product experience to meet both current usage and future needs the customer may not even realize they have
- Technical degree in Computer Science, Operations Research, Statistics, Math, Physics, or equivalent
- 5+ years of Python development experience
- Proficiency working in Linux development environments
- Experience in structured work environments with automated testing and quality processes
- Experience developing REST APIs
- Familiarity with SQL and databases
- Familiarity with containerized Python applications (Docker specifically)
- Desire to work with large datasets
- Exposure to at least one open-source machine learning package (sklearn, TensorFlow, Caffe2, Torch, etc.)
- Strong communication skills, written and verbal, used in a fast-moving environment
- Familiarity with container orchestration (Kubernetes specifically)
- Strong communication skills as demonstrated by personal projects, technical blog postings, volunteer activities, etc.
- Interest in machine learning and data science
- Experience working with highly complex technical ecosystems (resource managers, containers, automated testing -- e.g., Mesos, Kubernetes, Docker)
- Familiarity with popular Python libraries (e.g., NumPy)
- Understanding of the data science ecosystem, both commercial and open source
- Participation in hackathons; hackathon wins or strong Kaggle leaderboard placements are even more impressive
- Experience working with computational systems (e.g., NumPy, Pandas, Spark)
Applicants are encouraged to share online project/code portfolios and any demonstrations of community participation (e.g., public GitHub, Stack Overflow, or Kaggle profiles, technical blog postings, etc.).
All your information will be kept confidential according to EEO guidelines.