**Company Name**
Groupe SEB
**Job Title**
DATA ENGINEER SPARK/SCALA (M/F)
**Job Description**
**Job Title**
Data Engineer – Spark / Scala
**Role Summary**
Design, develop, and optimize data pipelines on an AWS Data Lake to support digital, e‑commerce, and connected‑products initiatives. Collaborate with data experts and business teams to translate data requirements into scalable, maintainable solutions using Spark, Scala, and DevOps practices.
**Expectations**
- Deliver robust, scalable data pipelines that meet performance and reliability targets.
- Ensure data quality, lineage, and security across multi‑source ingestion.
- Actively participate in Agile ceremonies and contribute to continuous improvement of data engineering practices.
- Communicate complex technical concepts clearly to technical and non‑technical stakeholders.
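Not part of the posting itself, but as a rough illustration of the "ensure data quality across multi-source ingestion" expectation above, here is a minimal sketch in plain Scala. The record shape (string-keyed maps), field names, and tolerance threshold are all hypothetical; a production check would typically run against Spark DataFrames before data lands in the lake.

```scala
// Minimal data-quality gate: reject a batch when a required field is
// missing too often. Record shape and field names are hypothetical.
object QualityCheck {
  // Fraction of rows where `field` is absent or blank.
  def missingRate(rows: Seq[Map[String, String]], field: String): Double =
    if (rows.isEmpty) 0.0
    else rows.count(r => r.get(field).forall(_.trim.isEmpty)).toDouble / rows.size

  // Fails fast (halting the pipeline step) when missingness exceeds tolerance.
  def requireComplete(rows: Seq[Map[String, String]], field: String, tolerance: Double): Unit =
    require(
      missingRate(rows, field) <= tolerance,
      s"field '$field' exceeds missing-rate tolerance $tolerance"
    )
}
```

Failing fast on a quality breach, rather than silently propagating bad rows, is what keeps downstream marts and dashboards trustworthy.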
**Key Responsibilities**
- Design and implement ETL/ELT pipelines using Spark and Scala on AWS.
- Ingest, transform, and store data from heterogeneous sources into the AWS Data Lake.
- Optimize pipeline performance and resource utilization.
- Apply DevOps principles: CI/CD, infrastructure as code, monitoring, and automated testing.
- Work with data scientists, product owners, and domain experts to understand data needs and deliver actionable solutions.
- Maintain documentation of architecture, data models, and operational procedures.
- Participate in cross‑functional, international projects to provide data insights for marketing, e‑commerce, and connected products.
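To give a flavor of the transform-and-aggregate work described above, here is a minimal sketch using plain Scala collections; in the role itself this logic would be expressed over Spark DataFrames/Datasets on the AWS Data Lake. The `SaleRecord` shape, field names, and cleaning rules are illustrative assumptions, not taken from the posting.

```scala
// Hypothetical record ingested from heterogeneous sources.
case class SaleRecord(country: String, product: String, amountEur: Double)

object PipelineSketch {
  // Clean step: drop malformed rows (blank keys, negative amounts).
  def clean(rows: Seq[SaleRecord]): Seq[SaleRecord] =
    rows.filter(r => r.country.nonEmpty && r.product.nonEmpty && r.amountEur >= 0)

  // Aggregate step: total revenue per country, over cleaned rows.
  def revenueByCountry(rows: Seq[SaleRecord]): Map[String, Double] =
    clean(rows).groupBy(_.country).view.mapValues(_.map(_.amountEur).sum).toMap
}
```

The same clean/aggregate shape maps directly onto Spark (`filter`, `groupBy`, `agg`), which is why expressing it as small, testable functions first is a common habit in Spark/Scala teams.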
**Required Skills**
- Strong proficiency in Apache Spark and Scala.
- Experience with AWS data services (S3, Glue, Redshift, Lake Formation, Athena).
- Familiarity with DevOps tools (Git, Jenkins, Terraform, Docker, Kubernetes).
- Knowledge of data modeling, schema design, and data governance.
- Agile development experience and ability to work in iterative teams.
- Excellent written and spoken English.
- Strong analytical, problem‑solving, and communication skills.
**Required Education, Experience & Certifications**
- Minimum Master’s level (Bac+5) in Computer Science, Information Technology, or a related field.
- At least 3 years of professional data engineering experience.
- Relevant certifications (e.g., AWS Certified Data Analytics – Specialty, Databricks Certified Data Engineer) are an advantage.