Data Infrastructure Engineer
San Francisco, CA

Job Description

We are a San Francisco-based team building self-driving semi trucks. In June we raised a $30 million Series B led by Sequoia, crossed 100,000 autonomous miles driven, and began moving freight daily between LA and Phoenix. Our engineering team is made up of experienced and highly talented people from companies like Mercedes, Volkswagen, Ford, Uber ATG, and Apple.

The infrastructure team builds the systems that support all of our engineering and operations, including data ingestion and processing, real-time vehicle communication, machine learning pipelines, simulation infrastructure, and much more.

Day-to-day Responsibilities:

* Build, deploy, and maintain the infrastructure that ingests data from our vehicles at centers across the country, spanning the hardware and operational processes as well as the software that powers the system
* Develop telemetry systems that allow our vehicles to stream video and data on demand over LTE
* Maintain the on-vehicle code responsible for data collection, monitoring and real-time communication
* Build scalable data pipelines that operate over petabytes of autonomous vehicle data to extract useful features, enable advanced queries, and power machine learning pipelines and simulation environments
* Maintain the software execution environment for our vehicles, including the host operating system, containerized environments, and their deployment procedures

Your Experience Might Include:

* Experience with big data and/or infrastructure
* BS, MS or PhD in Computer Science, Engineering, or equivalent real-world experience
* Significant experience with Python, C++, Go, or similar
* Experience working with relational and NoSQL databases
* Experience with Kafka, Hadoop, Spark, or other data processing tools
* Experience building scalable data pipelines
* Significant experience working with AWS and/or GCP
* Experience with Docker and Kubernetes or other container orchestration frameworks
* Proven ability to independently execute projects from concept to implementation
* Attention to detail and a passion for building scalable and reliable systems

Benefits

* Help revolutionize transportation as we know it!
