Discord is one of the few places that combines who you play games with and how you play those games. We use this data to make Discord the best place to play games with your friends.
Join our Data Platform team to build a scalable data platform that powers features and informs teams. Help us build scalable distributed systems like you'd construct additional pylons in your base. You build those in your sleep, amirite?
Discord is a small group of passionate gamers whose mission is to bring people together around games. Diversity and inclusiveness are a critical part of how we get there. We believe that with diversity comes a better product, better decisions, and a better work environment. Everyone here is committed to making Discord representative of the world we want to live and play in.
What you'll be doing
Help us build a robust and scalable foundation for data, from logging & ingestion to streaming & batch pipelines, as well as the tools that every team uses to interface with it.
Build the platform that empowers our Data Science, Machine Learning & Analytics teams and powers product features used by millions of users every day.
Own and operate our entire data stack using modern technologies such as Apache Beam, Kafka, Pub/Sub and Airflow.
Work closely with our backend and frontend engineering teams to log & ingest data into our platform and produce a robust high-quality data feed.
Build efficient solutions on top of GCP and AWS using Python, Scala and Go.
What you should have
Minimum of 4 years' experience building scalable backend systems.
Experience working on, and deploying, large-scale systems in Python or Go, Scala/Java, or other similar languages.
Experience working with and managing varied forms of distributed data systems such as Kafka, Storm or Spark.
Love of working with high-volume, heterogeneous data and distributed systems.
Self-motivation and the ability to take a high-level goal and deliver shippable code.
Proven track record of working with petabyte-scale data infrastructure. Discord has plenty of it.
Experience working with varied data applications and databases, such as Hadoop, BigQuery, Spark or Redshift.