Data Engineer, Ireland-based.

Job summary
Permanent contract (CDI)
Dublin
Salary: Not specified
Start date: 5 May 2024
Fully remote
Experience: > 3 years
Education: Master's degree (Bac +5)
Skills & expertise
Agile methodologies
GitHub
Terraform
Kafka
PySpark
+3

Echo Analytics


The position

Job description

Our mission is to help organizations understand the offline world. The role is full-time and based in Dublin, Ireland.
What do we do?

We provide high-quality data that powers our customers' products. The data is sourced from a variety of partners that provide ultra-granular mobility data across industries such as advertising, mobility, smart cities, real estate, market research, and financial services.

We aggregate, refine, and use this data to build products that help our clients gain insight into how people move and behave in the real world.

Who are we searching for?

A Data Engineer who will help migrate our existing infrastructure into the cloud and build reliable, efficient and scalable data processing systems.

To be successful as a Data Engineer, you should have prior software development and data processing experience, be able to collaborate with team members, be a strategic problem-solver and be willing to learn new data technologies.

The opportunity

You will primarily work within the AWS cloud platform, using a variety of technologies including Python, Spark (PySpark), EMR, S3, EC2, and Lambda, along with orchestration and scheduling tools and modern deployment and management systems. You will also participate in data modelling and in the design of data flows, from implementation through to support in production.

This is a unique opportunity to join, build and grow a successful start-up leading its market, from the early stages.

Your role 
  • Develop new features in collaboration with the other engineers and the CTO.

  • Collect and process a wide variety of data sets.

  • Design datasets for external-facing consumers, optimizing for speed, consistency, cost, and efficiency.

  • Write complex SQL/PySpark operations to transform raw data.

  • Implement workflows and schedule them.

  • Manage data observability, scalability, transparency, and accuracy.

  • Ensure data is clean, consistent, and available.

  • Perform data quality checks, and build monitoring systems.

  • Investigate, test, and implement new tools, processes, and technologies on an ongoing basis.

  • Work closely with the sales team to provide data to our customers.

  • Develop tools to make data extraction efficient, flexible, and predictable in duration.

     

What we’re looking for

  • Fluent English speaker (French is a plus).

  • At least 3 years of experience with data technologies such as PySpark.

  • At least 3 years of experience developing and debugging in Python.

  • Comfortable working with AWS products such as EC2, S3, and EMR.

  • Experience with streaming and data-processing technologies such as Kafka, Spark, EMR, Athena, or RDS is a plus.

  • Strong experience with Agile methods.

  • Prior knowledge of GitHub CI/CD and infrastructure-as-code technologies such as Terraform.

  • Irish resident with the ability to work in Ireland without an employment permit.

       

Recruitment process

  • Pre-qualification with Raphaëlle (online)
  • Coding game
  • Technical interview (online)
  • Logic test and interview with the CEO (online)
  • Final step (onsite)
