DATA PRINCIPAL ARCHITECT
WHO IS SPR?
SPR is a digital technology consultancy that develops elegant solutions to transform the way people do business. We're 300+ strategists, developers, designers, architects, consultants, thinkers, and doers in Chicago and Milwaukee. We work with 160 clients in 10 unique industries - everything from corporate finance and global logistics to local breweries and Chicago startups.
We think about the end users and rigorously apply the latest technologies and frameworks to address our clients' needs. We enable companies to do more with data, engage with other people, build disruptive solutions, and operate productively. To do this, we hire smart technologists and sharp business leaders who are excellent communicators and have an interest in working on multiple projects across industries.
SPR offers a great environment for employees to learn, to build systems that make an impact, and to tackle exciting challenges. With our office's "Maker Space", you can explore your IoT side and develop fun projects with 3D printing and CNC machining. We operate in a fun, casual work environment and have great benefits including: competitive salary, bonuses, generous vacation time, big fitness incentives, and medical/dental/vision insurance.
By joining the SPR team, you'll be using your brain, working hard and making an impact through your projects - and you'll be rewarded for it.
WHAT IS THE POSITION?
As a Data Principal Architect at SPR, you must have extensive experience building and operating Big Data solutions, with a proven track record of driving business success by architecting, designing, adopting, and applying big data strategies and architectures. You should be experienced in large-scale system implementations, with a focus on complex data processing and analytics pipelines, and should demonstrate a deep understanding of data analytics best practices along with expertise in data modeling, data cleansing, data mining, machine learning, and data virtualization. The Data Principal Architect must be a consensus builder across a highly technical, cross-functional team, and must be able to demonstrate innovative approaches to complex problems that deliver industry-leading experiences for our clients.
* Motivated, self-starter with ability to learn quickly
* Expert-level SQL and Python skills (R is a strong plus)
* Experience in architecting and engineering innovative data analysis solutions
* Familiarity with architectural patterns for data-intensive solutions
* Expertise in real-time streaming and migrating batch-style data processing to streaming and micro-batch solutions
* Experience using distributed messaging systems to rewrite systems in place
* Knowledge of core RDBMS principles (setup, tuning, and design), as well as newer unstructured data tools
* Experience developing large scale, complex logical data models (along with physical implementations of the logical models)
* Understanding of Data Science terminology and high-level familiarity with Machine Learning concepts
* Familiarity with consulting and traditional application design
* Experience estimating technical solution builds and contributing to custom proposals
* Excellent written and verbal communication skills
* Solid problem-solving abilities in the face of ambiguity
* Must be a hands-on individual who is comfortable leading by example
* Experience with Agile Methodology
* Possess excellent interpersonal and organizational skills
* Able to manage your own time and work well both independently and as part of a team
* Experience with sales pursuits a must
* Amazon (AWS) and/or Azure experience a must
TECHNOLOGIES WE USE
* Cloud: Azure, AWS, Cloud Foundry, Heroku, Mesos, DC/OS
* RDBMS: SQL Server, PostgreSQL, Oracle, DB2
* NoSQL: MongoDB, RavenDB, DocumentDB, Cassandra, MariaDB, Riak
* Python (including Databricks)
* Big Data: Cloudera & Hortonworks Hadoop distributions, including Hive, Pig, Sqoop, Spark
* Integration Tools: Apache NiFi, Cloudera StreamSets, Azure Data Factory, AWS Glue, Talend
* ELK: Elasticsearch, Logstash, Kibana
* Machine Learning: Azure ML tooling, TensorFlow, AWS SageMaker, scikit-learn
* Data Visualization: Grafana, Kibana
* Microsoft PowerShell
* AWS SDK
* Fast Data: Apache Ignite / GridGain, Apache Geode / Pivotal GemFire
EDUCATION & EXPERIENCE
* Bachelor's Degree, preferably in Data Science, Analytics, Computer Science, Engineering, or another science/technology-based discipline
* 10+ years of professional experience
If this sounds like the kind of challenge you would be up for every day, we would love to hear from you.