About
Job Description
Responsibilities
* Work with architects, other data engineers, and analysts to identify, engage, and integrate data sources for discovery and profiling, and, where necessary, define data services that empower business processes
* Design, build, and scale data pipelines across a variety of source systems and streams (internal, third-party, as well as cloud-based), distributed/elastic environments, and downstream applications and/or self-service solutions
* Operationalize data processing systems (DevOps)
* Develop solutions combining data blending, profiling, mining, statistical analysis, and machine learning to better define and curate models, test hypotheses, and deliver key insights
* Participate in development sprints, demos, and retrospectives, as well as release and deployment
* Collaborate in establishing and evolving development, testing, and documentation standards, as well as related code reviews
* Partner with business analysts, application engineers, data scientists, etc., leveraging the appropriate tools, solutions, and/or processes as part of their data mining, profiling, blending, and analytical activities
* Collaborate with NBCU's technology domains to lead and ensure development and delivery of analytics solutions
* Ensure policies and standards are followed across the organization
* Deliver services that meet internal KPIs and customer SLAs
* Ability to quickly build rapport and gain the respect and cooperation of both technology and business peers
* Able to quickly learn new technologies as they become prevalent and widely adopted, and committed to continuous self-development
* Driven to apply both proven and novel solutions to business problems until the issue is resolved
* Be able to deal with ambiguity and make quality decisions in a dynamic, fast-paced environment
* Customer-focused, action-oriented, and driven to achieve results in a positive manner, displaying ethical behavior and integrity and building trust at all times
* Strong teamwork, organizational and interpersonal skills; ability to communicate and persuade peers and thrive in a cross-functional matrix environment
Qualifications/Requirements
* 10+ years of progressive data application development experience, working in large scale/distributed SQL, NoSQL, and/or Hadoop environments
* 3+ years of experience modeling and implementing ETL/ELT on columnar MPP database technologies such as Snowflake, BigQuery, and/or Redshift
* Experience with streaming architectures (Kafka, Kinesis, Pub/Sub, etc.)
* Experience working in hybrid cloud/on-premises environments as well as multi-cloud
* Experience implementing scalable, distributed, and highly available systems using cloud technologies, such as Microsoft Azure, Amazon Web Services, and/or Google Cloud
* Experience building microservices topologies, including operational concerns such as resiliency, observability, discovery and routing, etc.
* Undergraduate degree in computer science or engineering, or with a focus on statistical analysis, or equivalent experience, highly desired
Desired Characteristics
* Analytical - You have experience in delivering self-service analytics solutions that promote data discovery
* Media-focused - Strong working knowledge of media, including traditional and direct-to-consumer
* Communicator - You have excellent verbal and written skills with the ability to communicate ideas effectively across all levels of the organization, both technical and non-technical
* Action-oriented - You're constantly solving new problems and regularly showing results with a positive attitude, always displaying ethical behavior and integrity and building trust
* Technologist - You love data and exploring new ways to do old things, and you bring the outside-the-box, pattern-based thinking required to build scalable technology solutions