About the opportunity:

We are looking for Big Data Engineers to work in financial services.

Responsibilities:

  • Work alongside our consultants on a strategic Datahub platform: design and develop robust, scalable analytics processing applications, implement data integration pipelines, solve analytics use cases, and port the selected solutions to Spark and other big data tools
  • Develop and maintain a scalable cloud data platform, built on a data vault model, to store and process data, and play a lead role in driving the next evolution of our partner's Data and Analytics Platforms
  • Help our partner build a fully scalable, reliable data platform on cloud infrastructure, with improved storage and processing performance.
  • Take the lead in designing solutions that help our partners change the way they operate and enable them to grow
  • You’ll work in a fast-paced, agile environment with end-to-end accountability for developing, deploying and supporting your data assets, as well as creating templates, implementation methods and standards.
  • Depending on the project, the platform may still be new; there is exciting work to be done to enable and enhance it, and that is where you play a key role.

Qualifications:

Must have:

  • Solid experience in a programming language (Scala, Java or Python)
  • Experience with Big Data and Hadoop technologies, with a main focus on Spark, Hive, Spark SQL, Presto (or other query engines) and big data storage formats (such as Parquet, ORC and Avro)
  • Good data warehousing experience
  • Good understanding of data architecture principles, including data access patterns and data modelling
  • Good knowledge of the core AWS services (S3, IAM, EC2, ELB, CloudFormation)
  • Strong communication skills
  • Ability to work autonomously on your projects

Nice to have:

  • Experience on a data transformation project involving large data volumes
  • Experience reading and writing Spark transformation jobs in Java or Scala (preferably Java)
  • Experience working in a DevOps model in an Agile environment
  • Strong experience with data tools such as Talend, SSIS, DataStage or Informatica
  • You enjoy taking a requirement from scratch and delivering it on schedule
  • You have a proactive attitude and strong problem-solving skills
  • You are a team player and take joy in sharing your knowledge and data engineering best practices
  • Strong stakeholder-management skills
  • Knowledge or experience with Snowflake, Alteryx, RedShift, BigQuery or other Big Data tools that can enhance the journey of our customer

Our projects are mostly long-term, from 6 months to 2 years. During that time, we will discuss your career progression and the training you can undertake, and make sure you keep learning throughout your engagement.

Our Recruitment Process:

  • Apply online! The HR team carefully reviews your application within 3 days and contacts you if your profile matches one of our positions.
  • First meeting! You discuss your background and professional aspirations with HR, as well as Maltem and the opportunities we offer.
  • Challenge yourself in a technical interview with one of our experts. It is also an opportunity for you to get their feedback.

During the entire recruitment process, you have the opportunity to participate in events organised by Maltem and interact with consultants to learn more about us!

Employment type

Contractor / Permanent

Maltem