Sigmaways Inc

Senior Data Engineer

Hybrid

Toronto, Canada

Senior

Freelance

11-09-2025

Skills

Python, Java, Scala, Unity Catalog, SQL, Big Data, Data Engineering, Apache Airflow, Monitoring, Problem-solving, Azure, AWS, Marketing, Agile, Analytics, GCP, Apache Spark, Databricks

Job Specifications

As a Senior Data Engineer, you will create and manage data and analytics solutions for B2B marketing, leveraging large-scale datasets to build the data environment and integrations that drive advanced omnichannel campaigns.

Must be a hands-on engineer with deep data engineering expertise and a passion for building scalable, high-impact solutions.

Responsibilities:

Develop and optimize algorithms, feature stores, analytical stores, and curated datasets to deliver high-quality, reliable data at scale.
Drive data-driven marketing capabilities by improving data quality, scalability, and efficiency.
Solve complex data challenges across multi-layered datasets by building pipelines, libraries, and frameworks.
Support deployed data applications and analytical models, troubleshooting and resolving data issues.
Ensure compliance with governance standards through data lineage, validation, quality checks, and classification.
Integrate diverse sources, including streaming, batch, real-time, and API-based data, to enrich insights and support decision-making.
Experiment with emerging tools and techniques to streamline pipeline development, testing, and operations.
Collaborate with cross-functional partners to prioritize business challenges and develop innovative, data-driven solutions.
Establish best practices for data engineering, including coding standards, peer reviews, and documentation.
Safeguard sensitive information by implementing robust data security and privacy measures.

Qualifications:

Bachelor's degree in Computer Science, Software Engineering, or a related field, or equivalent practical experience.
Expertise in data engineering with hands-on experience in Databricks as a primary development environment.
Experience in delivering multiple end-to-end data warehouse projects in Big Data ecosystems.
Proficient in Python, Java, and Scala, ideally with experience in backend software engineering.
Experience with orchestration tools such as Apache Airflow, Apache NiFi, Lakehouse Connect, Spark Python Data Source API, or similar platforms.
Skilled in performance tuning for schemas, SQL queries, ETL pipelines, and scripts.
Experience working in Agile environments, contributing to iterative development cycles.
Proven hands-on experience deploying data-driven and ML applications at scale, including ingestion, feature engineering, and monitoring.
Expertise in building cloud-native solutions using Databricks, Azure, AWS, or GCP.
Familiarity with Unity Catalog and Delta tables.
Strong analytical and problem-solving skills, with the ability to diagnose complex challenges and implement effective solutions.
Excellent communicator and collaborator, experienced in working with cross-functional, distributed teams.

About the Company

We are one of the region's fastest-growing, multi-award-winning full-lifecycle product engineering service providers. We collaborate with businesses to deliver talent, products, and services faster. Since 2006, we have partnered with pioneering start-ups, innovative enterprises, and the world's largest technology brands. We have utilized our fine-tuned product engineering processes to develop best-in-class solutions for customers in technology, e-commerce, retail, financial services, banking, and consumer products sectors ac...