WHO IS SPR?
SPR is a digital technology consultancy that develops elegant solutions to transform the way people do business. We're 300+ strategists, developers, designers, architects, consultants, thinkers, and doers in Chicago and Milwaukee. We work with 160 clients across 10 industries - everything from corporate finance and global logistics to local breweries and Chicago startups.
We think about the end users and rigorously apply the latest technologies and frameworks to address our clients' needs. We enable companies to do more with data, engage with other people, build disruptive solutions, and operate productively. To do this, we hire smart technologists and sharp business leaders who are excellent communicators and have an interest in working on multiple projects across industries.
SPR offers a great environment for employees to learn, to build systems that make an impact, and to tackle exciting challenges. With our office's "Maker Space", you can explore your IoT side and develop fun projects with 3D printing and CNC machining. We operate in a fun, casual work environment and offer great benefits, including a competitive salary, bonuses, generous vacation time, big fitness incentives, and medical/dental/vision insurance.
By joining the SPR team, you'll use your brain, work hard, and make an impact through your projects - and you'll be rewarded for it.
WHAT IS THE POSITION?
As a Data Engineer at SPR, you will build data pipeline solutions by designing, adopting, and applying big data strategies and architectures. You should have experience building and operating data pipelines (both streaming and batch, using both ETL and ELT architectures), as well as experience with large-scale system implementations that focus on complex data processing and analytics pipelines. You should understand data integration best practices and bring expertise in data integration, data transformation, data modeling, and data cleansing. The Data Engineer must also be able to demonstrate innovative approaches to complex problems that deliver industry-leading experiences for our clients.
* Experience designing and implementing innovative data integration solutions using Python with Spark clusters
* Familiarity with architectural patterns for data-intensive solutions
* Expertise in real-time streaming and in migrating batch-style data processing to streaming and micro-batch solutions
* Knowledge of RDBMS core principles (setup, tuning, and design), as well as newer unstructured data tools
* Familiarity with consulting and traditional application design
* Excellent written and verbal communication skills
* Solid problem-solving abilities in the face of ambiguity
* A hands-on approach and comfort leading by example
* Experience with Agile methodologies
* Excellent interpersonal and organizational skills
* Ability to manage your own time and work well both independently and as part of a team
TECHNOLOGIES WE USE
Cloud (Azure, AWS, Cloud Foundry, Heroku, Mesos, DC/OS) / RDBMS (SQL Server, PostgreSQL, Oracle, DB2) / NoSQL (MongoDB, RavenDB, DocumentDB, Cassandra, MariaDB, Riak) / Python (including Databricks) / Big Data (Cloudera and Hortonworks Hadoop distributions, including Hive, Pig, Sqoop, Spark) / Integration Tools (Apache NiFi, StreamSets, Azure Data Factory, AWS Glue, Talend) / ELK (Elasticsearch, Logstash, Kibana) / Machine Learning (Azure ML tooling, TensorFlow, AWS SageMaker, scikit-learn) / Data Visualization (Grafana, Kibana) / Microsoft PowerShell / AWS SDK / Fast Data (Apache Ignite/GridGain, Apache Geode/Pivotal GemFire)
EDUCATION & EXPERIENCE
* 3-5 years of professional experience
* BA or BS, preferably in Computer Science, Engineering or Science/Technology-based discipline
If this sounds like the kind of challenge you would be up for every day, we would love to hear from you.