1. 6 years of experience, with a minimum of 3 years in Big Data
2. Sound understanding of and experience with the Hadoop ecosystem (Cloudera). Able to understand and explore the constantly evolving tools within the Hadoop ecosystem and apply them appropriately to the problems at hand.
3. Experience in Spark, Scala, Python, and PySpark is mandatory
4. Experience in Hive, Impala, and Linux/Unix technologies is mandatory
5. Experience in Kafka and Flume is an advantage
6. Sound knowledge of relational databases (SQL) and experience with large SQL-based systems.