Hays

Data Platform Engineer

Hybrid

Toronto, Canada

Freelance

28-01-2026


Skills

Python SQL Big Data Data Governance Data Engineering Apache Kafka Apache Spark Databricks PySpark Unity Catalog Azure Azure DevOps GitHub GitHub Actions CI/CD DevOps Agile Analytics

Job Specifications

Job Description: Data Platform Engineer – Data Acquisition & MDM (Azure Databricks, Kafka, Collibra, Cloudera)

Location: Downtown Toronto, ON (Hybrid – 3–4 days onsite)

Type: Long-term Contract

Program: TDS Data Platform – Data Acquisition & Master Data Management

About the Role

The TDS Data Platform team is seeking a highly skilled Data Platform Engineer with strong hands-on experience in Azure Databricks, Kafka, Collibra, Cloudera, and Granular Computing. This role focuses on building, optimizing, and maintaining enterprise-grade data acquisition pipelines and MDM frameworks within a large-scale cloud environment.

You will contribute to the design and development of next-generation data integration capabilities, ensuring reliable, scalable, and governed data flows to support analytics, regulatory reporting, and business operations.

Key Responsibilities

Data Acquisition & Engineering

Design, build, and maintain scalable data ingestion pipelines using Azure Databricks, Apache Spark, and Kafka.
Implement real-time and batch data acquisition frameworks for structured, semi-structured, and unstructured datasets.
Optimize ETL/ELT pipelines to ensure high performance, quality, and reliability.
Integrate on-premises and cloud-based systems (Cloudera-to-Azure cloud transformations).

Master Data Management & Governance

Develop MDM data models, matching/merging logic, and golden record creation.
Configure and enforce data governance frameworks using Collibra.
Ensure data lineage, metadata accuracy, and compliance with enterprise standards.

Cloud & Platform Operations

Work within Azure-based environments to deploy and manage data platform components.
Collaborate with platform engineers on cluster performance, autoscaling, and cost optimization.
Troubleshoot data pipeline issues, optimize compute workloads, and improve operational SLAs.

Collaboration & Delivery

Work cross-functionally with Data Architects, Data Stewards, SMEs, and DevOps teams.
Participate in Agile ceremonies; provide estimates, deliver on sprints, and maintain documentation.
Support end-to-end data onboarding for internal business domains and external vendor sources.

Required Skills & Experience

Technical Must-Haves

5–10+ years of experience in Data Engineering or Big Data Platform roles.
Strong hands-on experience with:
Azure Databricks (PySpark, Delta Lake, ADLS, ADF)
Apache Kafka (real-time streaming & integrations)
Cloudera CDH/HDP ecosystem
Collibra Data Governance / Catalog
Granular Computing principles (data decomposition, granular data modeling)
Deep understanding of distributed computing, Spark optimization, partitioning, and cluster tuning.
Strong experience building complex ETL/ELT data pipelines.
Expertise in SQL, Python, and Cloud-native engineering patterns.

Nice-to-Have

Financial services or capital markets experience (TDS or similar domain).
Experience with Databricks Unity Catalog.
Knowledge of MDM tools (Informatica MDM, Reltio, or similar).
Familiarity with CI/CD (Azure DevOps, GitHub Actions).

About the Company

We are leaders in specialist recruitment and workforce solutions, offering advisory services such as learning and skill development, career transitions, and employer brand positioning. As the Leadership Partner to our customers, we invest in lifelong partnerships that empower people and businesses to succeed. We help you achieve your career goals and deliver your business needs by combining meaningful innovation with our global scale and insights. Last year we helped over 280,000 people find their next career.