Job Description
At eero we handle billions of device data interactions every day, and we need engineers who can help us build scalable real-time data infrastructure for this growing stream of data. As a software engineer focused on our data infrastructure, you'll play a key role in defining and building data systems that handle everything from sensor readings such as device temperature, to mesh network performance metrics, to mobile device interaction data.
Our real-time data is critical to providing high-performance, adaptive networks for our customers and actionable operational insights for the company, which requires strong, scalable data infrastructure. We are a small company, but with a large data challenge! We choose technologies that scale with our current and future data growth, building a platform on Kafka, Spark, HBase, and Parquet on AWS S3, as well as our own low-latency, scalable in-house data infrastructure.
Our data is one of our biggest opportunities to continuously improve our products and the customer experience, and you can directly shape both that work and how eero's data team operates.
What you'll do:
Help build out our near real-time data infrastructure and platform
Create infrastructure to drive low-latency, Spark Streaming-based ETL pipelines
Extend our Data Lake storage platform
Enhance our custom differential data handling and complex event processing (CEP) platforms
Define APIs and data formats
Build performant real-time systems that support querying our petabytes of data
Work with cloud service, mobile, experience, and device teams
Maintain and extend our big data systems
Create infrastructure to drive our round-trip machine learning platform
Participate in data projects we are looking to open source to the data community
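To give a concrete flavor of the near real-time work described above, here is a minimal Python sketch of a tumbling-window aggregation over device temperature events — the kind of computation a streaming ETL pipeline performs. All names, the event shape, and the window size are illustrative; eero's production pipelines run on Kafka and Spark, not on toy code like this.

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # tumbling window size (illustrative choice)

def window_start(ts: float) -> int:
    """Align a timestamp to the start of its tumbling window."""
    return int(ts // WINDOW_SECONDS) * WINDOW_SECONDS

def aggregate_temperatures(events):
    """Average device temperature per (device_id, window).

    `events` is an iterable of dicts with keys: device_id, ts, temp_c
    (a stand-in for records consumed from a stream).
    Returns {(device_id, window_start): mean_temp_c}.
    """
    sums = defaultdict(lambda: [0.0, 0])  # key -> [running sum, count]
    for e in events:
        key = (e["device_id"], window_start(e["ts"]))
        acc = sums[key]
        acc[0] += e["temp_c"]
        acc[1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

events = [
    {"device_id": "eero-a", "ts": 10.0, "temp_c": 55.0},
    {"device_id": "eero-a", "ts": 30.0, "temp_c": 57.0},
    {"device_id": "eero-a", "ts": 70.0, "temp_c": 60.0},
    {"device_id": "eero-b", "ts": 15.0, "temp_c": 48.0},
]
print(aggregate_temperatures(events))
# {('eero-a', 0): 56.0, ('eero-a', 60): 60.0, ('eero-b', 0): 48.0}
```

In a real pipeline the same shape of computation would be expressed as a windowed aggregation in Spark Structured Streaming, with Kafka as the source and Parquet on S3 as the sink.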
We're looking for someone who has:
BS/MS/PhD in Computer Science, Engineering or a related subject
3+ years of proven software development experience with big data
Proven experience in backend server development
Experience with a strongly typed language such as Java, C++, C#, or Go
In-depth dynamic language experience in Python or Ruby
Solid knowledge of SQL/HQL, with the ability to tune and optimize query performance
Experience with Scala is a plus
Working knowledge of the general scaling and server development landscape, architectures, trends, and emerging technologies
Solid understanding of the full development life cycle
Experience with distributed big data technologies such as Spark, HBase, Kafka, Cassandra, Hadoop, Hive, Pig, or Scalding
Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.