Software Development Engineer - Data Operations
Playa Vista, CA

Job Description

Are you excited about high-performance big data work? This is a great opportunity for a junior/mid-level Big Data Engineer with both conceptual and hands-on experience in big data processing and streaming frameworks who wants to work at larger scale, be exposed to newer technologies, and grow their career.

We are looking for engineers who are thorough and agile, capable of breaking down and solving problems, and driven to get things done. On the Data Operations team you will work on real-world problems across a big data tech stack where reliability, accuracy, and speed are paramount, take end-to-end responsibility for your systems, and influence the direction of technology that impacts customers around the world.

As a member of our Data Services team, you will join a service group responsible for the continued expansion of our data processing projects. The ideal candidate is enthusiastic about the full spectrum of big data development, including data transport, data processing, and data warehouse/ETL integration, and is a quick learner and self-starter. This is a demanding role that requires hands-on experience developing big data processing applications deployed on Linux. You will be responsible for day-to-day operations as well as new development. We are seeking a candidate with solid software development life cycle skills who can build data services in Java (or Scala) and scripting languages such as Python. This position includes 24x7 production support.

What you will be doing:

* Design, develop, and support big data platform applications using our supported tech stack, including Hadoop MapReduce, Spark, Kafka, Druid, and ETL/data warehouse integrations
* Develop applications in Java/Scala, Spark, and related big data technologies, using scripting languages (Python, shell, etc.) to wrap application execution
* Participate in the design and implementation of the full data services cycle, from data transport and processing through ETL to data delivery for reporting
* Identify, troubleshoot and resolve production data integrity and performance issues

What we look for:

* Experience using Java 7 or above
* One or more of the following data processing technologies: Hadoop MapReduce, Spark, Kafka Streams, Flink, Storm, Apache Beam
* Kafka (preferred) OR one of the mainstream queue systems: ZeroMQ, RabbitMQ, ActiveMQ
* Working directly against RDBMS and data warehousing systems (strong SQL)
* PowerMock, Mockito or similar
* Writing tech specs and documentation
* Linux bash scripting

Great if you have:

* Experience with some of the following AWS tools: EMR, Kinesis, Firehose, Redshift, RDS, S3 API, Lambda, SQS
* Experience with Presto, Hive, Impala, or similar SQL-based engines for big data
* Experience with Redis, Cassandra, MongoDB, or similar NoSQL databases
* Scala, Python
* Experience with any of the following message / file formats: Parquet, Avro, Protocol Buffer

Extra perks we offer:

* Take time for yourself: Our vacation days are unlimited, and you get a week off around the 4th of July and the last 10 days of the year.
* Stay healthy: Choose from a variety of very low-cost medical, dental, and vision plans to cover you and your loved ones.
* Enjoy your stay: Each Rubicon Project office enjoys a variety of benefits, like daily catered lunches and a fully stocked kitchen with healthy snacks.
