Who are we?
At Intersection, we are at the forefront of the smart cities revolution. Our mission is to improve daily life in cities & public spaces, with products that bridge the digital & physical worlds by delivering connectivity, information & content to enrich our everyday journeys & elevate the urban experience.
We pair our human-centered methodology with cutting-edge technology to design, develop, deliver, & maintain unique products & experiences in public spaces that deliver value to advertisers, cities, & consumers. Whether partnering with urban transit systems to revolutionize commuting & travel, with cities to transform how they connect with residents & visitors, or private developers to create unforgettable experiences in neighborhoods & districts, our solutions are scalable platforms on which our clients can build the future.
Intersection is backed by Alphabet through its urban technology company Sidewalk Labs.
What is the Role?
As a data engineer at Intersection, you will help select & integrate the tools & frameworks required to provide requested capabilities & facilitate access to data for business stakeholders. You will design & implement a secure data pipeline architecture, build ETL processes that utilize the full line of AWS services, & monitor performance & advise on any infrastructure changes needed to improve it. You'll be an integral part of efforts to define policies for the data engineering group & to implement security best practices across the entire data engineering architecture. You will report to the VP, Engineering.
Your First 30 Days:
- Learn & understand Intersection's corporate, departmental & team goals.
- Develop a clear understanding of the product roadmap & current capabilities.
- Become familiar with Intersection's current tools & processes.
Your First 60 Days:
- Understand & get up to speed on the current architecture.
- Pair with team members on committed initiatives.
- Work collaboratively with internal teams to deliver required data to business stakeholders.
Your First 90 Days:
- Build & deploy new ETL pipelines using AWS Lambda & Step Functions, integrating with external APIs as needed.
- Complete database tasks & administration as required.
- Support Snowflake & Redshift data pipelines.
- Define & implement data retention policies across systems.
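To give a concrete flavor of the Lambda-based ETL work described above, here is a minimal sketch of a transform step that could sit inside a Step Functions workflow. The event shape, field names, & transform logic are illustrative assumptions, not Intersection's actual pipeline:

```python
def handler(event, context):
    """Hypothetical AWS Lambda ETL step: read records from the incoming
    Step Functions state, normalize a field, & return the transformed
    payload for the next state in the workflow."""
    records = event.get("records", [])
    # Example transform: trim whitespace & title-case a "city" field.
    transformed = [
        {**r, "city": r.get("city", "").strip().title()}
        for r in records
    ]
    # The returned dict becomes the input to the next Step Functions state.
    return {"records": transformed, "count": len(transformed)}
```

In practice, a state machine would chain steps like this (extract, transform, load) & handle retries & error states declaratively.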
You are awesome for the role because:
- You have a proficient understanding of distributed computing principles.
- You have production experience with tools such as Hadoop, AWS Kinesis, Kafka, AWS EMR, & Snowflake, including experience architecting pipelines with them.
- You have experience building stream-processing systems using solutions such as Storm, Spark, or Spark Streaming.
- You have experience integrating data from multiple data sources.
- You have experience with NoSQL databases, such as HBase, Cassandra, MongoDB, DynamoDB, CosmosDB.
- You have experience with relational databases, with a strong preference for PostgreSQL, Redshift & Snowflake.
- You have knowledge of various ETL techniques & frameworks, such as Flume.
- You have the proven ability to take data analyst requirements & translate them into a data pipeline that meets the analysts' needs.
- You have experience with Tableau, Looker, & data report writing.
Intersection is an Equal Opportunity Employer.