Main requirements:
University degree in Computer Science or equivalent experience
3-5 years of work experience
Familiarity with Big Data environments (e.g. Hadoop, AWS, EMR)
Professional experience with different database technologies, preferably RDBMS or NoSQL
Experienced in Data Mining, Machine Learning and Predictive Analytics
Deep knowledge of Hadoop and its ecosystem (e.g. Spark, Hive, Presto)
Expertise in Java, Scala or Python, as well as SQL
Solid experience in data modeling and resilient engineering
Nice to have:
Familiarity with complex software systems or Microservice architectures
Experience with streaming tools (e.g. Kinesis, Kafka)
Experience with AWS data environments
Practice in Clean Code, Test-Driven Development or Pair Programming