Job Specifications
Data is a key part of AXA’s journey “from Payer to Partner” and of its transformation into a Data-Driven company. To succeed in this journey, the AXA Partners Data Strategy has been organised around six pillars: Enable Business Initiatives, Protect Data, Organize and Govern Initiatives, Manage Data, Empower People on Data, and Deliver and Operate Data Platforms.
Data Engineering is therefore the cornerstone of this transformation journey: it ensures strong IT Data foundations, implements sustainable Data Operations with a high level of service delivery, and enables analytical initiatives, ultimately delivering business value across AXA Partners.
AXA Partners has defined a decentralized operating model for data, appointing data teams in Business Units and geographies to deploy the data strategy close to business owners, while a Central Data Office manages the Data Platforms and the Big Data & Analytics frameworks, standards and guidelines.
As a Data Engineer within a Data Team, your main mission is to keep the Data Team’s assets and components operational and to enhance them, so that the Data Team Strategy’s ambitions can be met and customers are served with the right level of quality.
Your second mission, as a DataOps practitioner, is to implement and enhance the production data flows of the Data Lake, from extraction to business-actionable consumption.
Your third mission is to support, upskill and advise the other data fellows of your Data Team on processes and technical tasks, helping them accomplish their own data work.
You will be part of the overall Data Partners community and a key actor in the Data Engineer Network.
What You’ll Be Doing
Working with data providers (source owners) and data consumers (analysts, scientists, etc.), ensure that all relevant data is easily accessible, industrialized and documented in the Big Data environments, whatever the complexity of the transformations or the variety of data sources.
Take part in developing, enhancing and maintaining data applications on the Data Platforms
Liaise with the community of people involved in the extraction, integration, structuring and valorization of data (IT owners, Data Teams, business analysts and specialists such as Data Scientists)
Take part in improving and deploying data engineering standards, procedures, processes and operational guidelines around target data components managed by the central framework
Contribute to the definition of the data roadmap, in coordination with other teams and customers
Interact with data scientists, analysts and business stakeholders to define the data sourcing strategy for each use case
Manage interactions with data providers and data engineers from other data teams for Data Lake projects and BAU tasks
Support other data fellows: technical demonstrations, UAT, training, expertise
Share with the Central Data Office and the community on the evolution of data assets
Take part in strategic decisions on target data components with other members of the Data Family
Develop and automate ingestion flows, data curation and access layers
Define solution architectures and take part in the development and deployment of applications following central standards and guidelines
Contribute to the improvement of standards and Data Lake processes for the Data Family
Design, develop, test, promote and industrialize data ingestion flows, datamarts and core target data components
Implement tools and end-to-end monitoring to ensure the high availability, quality and reliability of production data processing
Maintain data processing in the Data Lake
Perform BAU and report regularly on this activity
Solve issues and problems
Support other technical data experts of your data team (data analysts, etc.) and data end users in the use of the Data Lake, applications and current platform assets
What You’ll Bring
Technical/Functional Knowledge, Skills and Abilities
You are familiar with the Big Data stack in a cloud environment (ADLS, Databricks, ADF, Azure DevOps, etc.)
You have a very solid background working with SQL databases and data warehouses.
You have a very good understanding of Spark processing
You are fluent in multiple development languages (SQL and Python are a must)
You have a good understanding of the software development process (Git, CI/CD, etc.)
You have good knowledge of BI systems (Spotfire or Power BI); broader BI skills are a plus
You show an entrepreneurial, team-oriented and can-do spirit, are able to work unsupervised, and have strong problem-solving skills, constantly striving for effective solutions
You bring creativity, an innovation mindset and the ability to properly take low-level technology stacks into account
You are customer-centric and react quickly to issues or changing requirements to deliver a high level of service
You work with an Agile mindset in an international, multicultural and complex environment (English is mandatory)
Education, Professional Qualifications and Experience
Degree in IT development with a Big Data major, or equivalent work experience