Our problems are common yet complex. As our industry has evolved, the way our clients consume data has changed. Bloomberg's application teams face challenges of large-scale data storage, low-latency retrievals, high-volume requests and high availability over a distributed environment. We create standardized solutions to these problems by building core services and technology frameworks for enterprise-wide use.
The Bloomberg High Availability Timeseries Store (BHATS) ecosystem is a timeseries/time-related data store supporting storage of multi-temporal data sets and retrievals across multiple temporal dimensions. The platform serves periodic (daily, weekly, monthly, quarterly, etc.), date-specific, and potentially intraday (hourly or minute-level) data sets, with point-in-time, as-of-date, and as-reported use cases.
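To make the "point-in-time" and "as-of-date" terminology concrete, here is a minimal, illustrative sketch of a bitemporal lookup, where each observation carries both the date the value is for and the date it became known. All class and method names below are hypothetical and are not BHATS APIs:

```java
import java.time.LocalDate;
import java.util.Comparator;
import java.util.List;
import java.util.OptionalDouble;

// Illustrative sketch only; not a BHATS interface.
public class BitemporalSketch {
    // Two temporal dimensions per observation: the date the value is *for*
    // (validDate) and the date it was *recorded* (knowledgeDate).
    public record Observation(LocalDate validDate, LocalDate knowledgeDate, double value) {}

    public static final LocalDate Q1 = LocalDate.of(2023, 3, 31);
    // A value for Q1 that was later revised (a hypothetical data set).
    public static final List<Observation> GDP = List.of(
            new Observation(Q1, LocalDate.of(2023, 4, 27), 1.1),  // initial estimate
            new Observation(Q1, LocalDate.of(2023, 5, 25), 1.3)); // subsequent revision

    // As-of retrieval: the latest value for validDate that was already
    // known on or before asOf, i.e. what the data looked like at that time.
    public static OptionalDouble asOf(List<Observation> series, LocalDate validDate, LocalDate asOf) {
        return series.stream()
                .filter(o -> o.validDate().equals(validDate))
                .filter(o -> !o.knowledgeDate().isAfter(asOf))
                .max(Comparator.comparing(Observation::knowledgeDate))
                .map(o -> OptionalDouble.of(o.value()))
                .orElse(OptionalDouble.empty());
    }

    public static void main(String[] args) {
        // Point-in-time view: in early May only the initial estimate existed.
        System.out.println(asOf(GDP, Q1, LocalDate.of(2023, 5, 1)));
        // Current view: the later revision supersedes it.
        System.out.println(asOf(GDP, Q1, LocalDate.of(2023, 6, 1)));
    }
}
```

The same query against the same valid date returns different values depending on the knowledge date, which is exactly what as-reported and point-in-time workflows need.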
As an experienced Java developer, you will help us refresh and evolve many facets of our data and analytics infrastructure, using existing Bloomberg technologies such as the BAS Java framework as well as a new Kafka-based infrastructure that supports the various microservices comprising the business logic processing. Working knowledge of HDFS/HBase is a plus but not mandatory; we'll make sure you receive the necessary exposure and training.
In addition to building the core BHATS ingestion, retrieval and bulk operations workflows, another major project within our purview is integration with the Bloomberg Data Platform (BDP) for ingestion and the Bloomberg Query Language (BQL) for retrieval. If you're wondering: BDP is an initiative to standardize storage back-ends and structure data flows across our systems, improving discoverability and data provenance. BQL, meanwhile, is a distributed analytics framework that allows internal and external users to express complex data retrieval, analytics and screening criteria.
What's in it for you:
Many of Bloomberg's timeseries/time-related data workflows will eventually be clients of the BHATS ecosystem. This means you'll gain exposure to our financial, non-financial and client-specific data sets, and to how they're used across our client workflows, while ensuring we build high-performance, low-latency, and scalable software for these core infrastructure initiatives. Another benefit: many of these applications are built on top of open source technologies, so there are plenty of avenues to innovate and contribute to the open source community.
We'll trust you to:
* Take ownership of component(s) of the workflows supported within our ecosystem
* Interact with development teams across Bloomberg and understand their application requirements and access patterns
* Design and develop systems that meet our latency, volume, storage and scale expectations
* Participate in meetings to help influence architectural decisions
You'll need to have:
* 3+ years of experience programming in Java
* Experience developing, enhancing and maintaining high-throughput, low-latency Hadoop systems in a mission-critical production environment
* Familiarity with working in a Test-Driven and Agile development environment
* A BA, BS, MS or PhD in Computer Science, Engineering or a related technology field
We'd love to see:
* 3+ years of experience with Hadoop/HDFS/MapReduce
* 3+ years of experience with NoSQL data stores (preferably HBase or Cassandra)
* The ability to enhance and maintain mission-critical software in a fast-paced environment
* Experience with Spark, Kafka, ZooKeeper or Storm
Do you want to build systems that impact the whole company? Apply below!