Sigmaways Inc

www.sigmaways.com

4 Jobs

77 Employees

About the Company

We are one of the region's fastest-growing, multi-award-winning full-lifecycle product engineering service providers. We collaborate with businesses to deliver talent, products, and services faster. Since 2006, we have partnered with pioneering start-ups, innovative enterprises, and the world's largest technology brands. We have applied our fine-tuned product engineering processes to develop best-in-class solutions for customers in the technology, e-commerce, retail, financial services, banking, and consumer products sectors across North America, Europe, and Asia.

We are thrilled to be recognized by multiple media outlets as a fast-growing private company:
#19 in Silicon Valley (Silicon Valley Business Journal) http://bizj.us/ti1ad/i/10
#79 in the Bay Area (San Francisco Business Times) http://bizj.us/th0lj/i/22
#814 in the US (Inc Magazine) http://inc.com/profile/sigmaways

Listed Jobs

Company Name
Sigmaways Inc
Job Title
Senior Big Data Engineer
Job Description
**Job Title:** Senior Big Data Engineer

**Role Summary:** Design, build, and maintain scalable, next-generation data platforms using Spark, Hadoop, Hive, Kafka, and cloud services (AWS, Azure, Snowflake). Develop robust pipelines, optimize big data ecosystems, and collaborate across product, engineering, and data science teams to deliver actionable business insights.

**Expectations:**
- Minimum 7 years of experience designing, developing, and operating Big Data platforms including Data Lakes, Operational Data Marts, and Analytics Data Warehouses.
- Bachelor's degree in Computer Science, Software Engineering, or a related discipline.
- Proven proficiency in Spark, Hadoop, Hive, Kafka, and distributed data ecosystems.
- Strong background in ETL pipeline development with Hive, Spark, EMR, Glue, Snowflake, Cloudera/MR, NiFi.
- Solid understanding of SQL databases (PostgreSQL, MySQL/MariaDB).
- Deep knowledge of AWS and Azure cloud infrastructure, distributed systems, and reliability engineering.
- Experience with IaC and CI/CD (Terraform, Jenkins, Kubernetes, Docker).
- Good programming skills in Python and shell scripting.

**Key Responsibilities:**
- Design, develop, and support end-to-end data applications and platforms focused on Big Data/Hadoop, Python/Spark, and related technologies.
- Collaborate with leadership to conceptualize next-generation data products and contribute to the overall technical architecture.
- Work closely with product management, business stakeholders, engineers, analysts, and data scientists to engineer solutions that meet business needs.
- Own components from inception through production release, ensuring quality, security, maintainability, and cost-effectiveness.
- Recommend and enforce software engineering best practices with enterprise-wide impact.
- Lead continuous process improvements, troubleshoot production issues, and mentor peers on best practices.
- Stay current with emerging technologies and rapidly adopt new tools and approaches.

**Required Skills:**
- Expertise in Spark, Hadoop/MR, Hive, Kafka, and distributed data ecosystems.
- Hands-on experience building ingestion, validation, transformation, and consumption pipelines using Hive, Spark, EMR, Glue ETL/Catalog, Snowflake, Cloudera/Hadoop, NiFi.
- Strong SQL skills and experience with PostgreSQL and MySQL/MariaDB.
- Deep knowledge of AWS and Azure cloud services (compute, storage, networking, IAM, security).
- Proficiency with infrastructure-as-code (Terraform) and CI/CD pipelines (Jenkins).
- Containerization and orchestration skills (Docker, Kubernetes).
- Familiarity with REST APIs, data integration patterns, and microservices.
- Excellent programming skills in Python and shell scripting.
- Understanding of distributed systems, reliability engineering, and production best practices.

**Required Education & Certifications:**
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Professional certifications (e.g., AWS Certified Big Data – Specialty, Azure Data Engineer Associate) are a plus but not mandatory.
San Francisco Bay Area, United States
Hybrid
Senior
09-11-2025
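
For context, the posting above centers on the ingest, validate, transform, and consume pipeline pattern. Below is a minimal PySpark sketch of that pattern; it is illustrative only, not Sigmaways code, and the S3 paths, column names, and aggregation are hypothetical placeholders.

```python
# Minimal sketch of an ingest -> validate -> transform -> consume pipeline.
# All paths, column names, and the S3 bucket are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest: read raw JSON events from a (hypothetical) landing zone.
raw = spark.read.json("s3://example-datalake/landing/orders/")

# Validate: keep only records with required keys and sane amounts.
valid = raw.filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))

# Transform: daily revenue per region, a typical analytics-warehouse rollup.
daily = (
    valid.withColumn("order_date", F.to_date("created_at"))
         .groupBy("order_date", "region")
         .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Consume: write partitioned Parquet for downstream Hive/Glue/Snowflake readers.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-datalake/marts/daily_revenue/"
)
```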
Company Name
Sigmaways Inc
Job Title
Lead Integration Engineer
Job Description
**Job Title:** Lead Integration Engineer

**Role Summary:** Lead the design, development, and support of scalable, secure cloud-native integration solutions on AWS, leveraging serverless technologies, IaC (Terraform), and DevSecOps practices to meet Agile project objectives.

**Expectations:**
- Deliver high-impact integration workloads that improve platform reliability and accelerate cloud transformation.
- Actively participate in Scrum ceremonies, on-call rotations, and occasional travel (≤15%).
- Demonstrate end-to-end ownership from architecture through production support.

**Key Responsibilities:**
- Architect, develop, test, and deploy cloud solutions using Serverless, Lambda, RDS, EC2, and migration services.
- Implement IaC with Terraform; manage CI/CD pipelines (GitLab/Jenkins) and enforce security controls (IAM, encryption).
- Design and maintain messaging/integration interfaces via AWS API Gateway, MQ, and related technologies; virtualize legacy IBM ACE/WebSphere MQ where required.
- Troubleshoot production incidents, produce root-cause analyses, and recommend improvements.
- Draft and maintain technical documentation (use cases, scripts, architecture diagrams).
- Lead code reviews, patching, and upgrade activities, ensuring compliance with quality standards.
- Collaborate cross-functionally on continuous improvement initiatives.

**Required Skills:**
- *Cloud & Architecture* – AWS (Serverless, Lambda, RDS, EC2, storage, migration services), Terraform, IaC best practices
- *Integration* – Messaging and integration patterns, AWS API Gateway, MQ, MFT, IBM ACE (a plus)
- *Programming* – Node.js, Python, or Java; JSON/XML handling; modern design and testing practices
- *DevSecOps* – CI/CD pipelines (GitLab/Jenkins), alerting, metrics, monitoring dashboards, IAM, Kerberos, authorization, encryption
- *Methodologies* – Agile/Scrum, continuous improvement, incident handling, documentation

**Required Education & Certifications:**
- Bachelor's degree in Computer Science, Engineering, or a related field, **or** equivalent professional experience.
- Relevant cloud or integration credentials preferred (AWS Certified Solutions Architect, AWS Certified Developer, or equivalent).
San Francisco Bay Area, United States
Hybrid
Senior
09-12-2025
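
The posting above names serverless integration patterns built on Lambda and API Gateway. The sketch below illustrates one such pattern in Python; it is an assumption-laden illustration rather than the employer's actual stack: the handler follows the standard API Gateway proxy-integration event shape, and the queue URL environment variable is hypothetical.

```python
# Minimal sketch: API Gateway (proxy integration) -> Lambda -> SQS.
# The queue URL is a hypothetical placeholder supplied via an env var
# (which would typically be wired up by Terraform/IaC).
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["ORDER_QUEUE_URL"]  # hypothetical


def handler(event, context):
    """Validate a JSON payload and forward it to a queue."""
    try:
        payload = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    if "order_id" not in payload:
        return {"statusCode": 422, "body": json.dumps({"error": "order_id required"})}

    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(payload))
    return {"statusCode": 202, "body": json.dumps({"queued": payload["order_id"]})}
```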
Company Name
Sigmaways Inc
Job Title
Site Reliability Engineer
Job Description
**Job Title:** Site Reliability Engineer

**Role Summary:** Senior SRE responsible for continuously monitoring system health, performance, and capacity; diagnosing and resolving deep-stack issues; automating routine operations; and partnering with development teams to enhance application reliability and scalability in a fast-growth environment.

**Expectations:**
- Deliver 24/7 uptime for critical services.
- Maintain robust alerting and incident response processes.
- Drive automation initiatives that reduce manual toil.

**Key Responsibilities:**
- Monitor service health, performance metrics, alerts, and capacity across production.
- Perform deep-stack diagnostics and performance tuning of applications and infrastructure.
- Script and automate routine maintenance, scaling, and recovery tasks.
- Collaborate with developers to integrate reliability considerations into workflow and architectural decisions.
- Adapt quickly to changing priorities in a high-growth setting.

**Required Skills:**
- Kubernetes administration and troubleshooting.
- Strong proficiency with Unix/Linux environments.
- Monitoring and debugging of Kafka message queues.
- Python scripting for automation and tooling.
- API monitoring, diagnostics, and performance analysis.

**Required Education & Certifications:**
- Bachelor's degree in Computer Science or a related technical field.
- Minimum 5 years of relevant site reliability or DevOps experience.
- Relevant certifications (e.g., Certified Kubernetes Administrator, Linux Professional Institute Certification) are a plus but not mandatory.
Canada
Remote
Mid-level
18-01-2026
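
The posting above combines Kafka monitoring with Python automation. Below is one minimal way to report consumer-group lag using the kafka-python client; the broker address, topic, and group id are hypothetical, and a real deployment would feed the lag values into an alerting pipeline rather than print them.

```python
# Minimal sketch: report consumer-group lag per partition with kafka-python.
# Broker, topic, and group id are hypothetical placeholders.
from kafka import KafkaConsumer, TopicPartition

TOPIC = "orders"            # hypothetical topic
GROUP = "orders-processor"  # hypothetical consumer group

consumer = KafkaConsumer(
    bootstrap_servers="broker:9092",  # hypothetical broker
    group_id=GROUP,
    enable_auto_commit=False,         # read-only: never move the group's offsets
)

partitions = [TopicPartition(TOPIC, p) for p in consumer.partitions_for_topic(TOPIC)]
end_offsets = consumer.end_offsets(partitions)  # latest offset per partition

for tp in partitions:
    committed = consumer.committed(tp)  # last committed offset, or None
    lag = end_offsets[tp] - (committed or 0)
    # In production this value would feed metrics/alerting rather than stdout.
    print(f"partition={tp.partition} lag={lag}")

consumer.close()
```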
Company Name
Sigmaways Inc
Job Title
Data Engineer (Python, Spark, AWS)
Job Description
**Job Title:** Data Engineer (Python, Spark, AWS)

**Role Summary:** Design, build, and maintain scalable, secure data pipelines and supporting web application features that provide actionable cybersecurity risk visibility. Work cross-functionally in an Agile environment to develop new features, optimize existing systems, and ensure high reliability and performance across distributed data platforms.

**Expectations:**
- Deliver clean, maintainable, and well-documented code.
- Ensure data integrity, performance, and reliability at scale.
- Collaborate effectively with product, QA, and operations teams.
- Participate in on-call production support and troubleshooting.
- Continuously learn and apply emerging tools and best practices.

**Key Responsibilities:**
- Develop and optimize Python scripts and Spark jobs for large-scale data processing.
- Build and enhance web application features, including risk assessment workflows and data visualizations.
- Write efficient SQL queries against relational databases such as PostgreSQL.
- Integrate with RESTful APIs and front-end technologies (JavaScript, React, HTML, CSS).
- Contribute to architectural decisions and improve system reliability.
- Maintain code quality through peer reviews, unit/integration tests, and code style compliance.
- Deploy and manage infrastructure using Docker, AWS services, Terraform, and Kubernetes.
- Coordinate with QA to ensure comprehensive test coverage.
- Participate in continuous integration/continuous delivery pipelines and the on-call rotation.
- Stay current on industry trends and tools for data engineering and infrastructure.

**Required Skills:**
- 5+ years of professional Python experience, including scripting and data pipeline development.
- Proficiency with Spark, Hadoop, and AWS data services (EMR, Glue, S3).
- Deep understanding of SQL and experience with PostgreSQL.
- Strong command of JavaScript and ability to develop dynamic front-end interfaces.
- Familiarity with Elixir, Ruby, React, HTML, and CSS (full-stack exposure).
- Experience with Docker, Kubernetes, Terraform, and AWS infrastructure.
- Knowledge of NoSQL stores (MongoDB, Elasticsearch) and stream processing (Kafka).
- Solid grasp of software engineering fundamentals (data structures, design patterns, clean code).
- Hands-on experience with Agile (Scrum/Kanban) and DevOps practices.
- Ability to troubleshoot and resolve production incidents.

**Required Education & Certifications:**
- Bachelor's degree in Computer Science, Information Technology, Computer Engineering, or a related field; equivalent work experience accepted.
- Optional certifications (e.g., AWS Certified Solutions Architect, AWS Certified Data Analytics).
Toronto, Canada
Hybrid
Mid-level
29-01-2026
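
The posting above pairs Python with SQL against PostgreSQL to surface cybersecurity risk. The sketch below shows a parameterized query of that kind using psycopg2; the database schema, table name, connection string, and severity scale are all hypothetical.

```python
# Minimal sketch: a parameterized PostgreSQL query feeding a (hypothetical)
# risk-visibility feature, using psycopg2.
import psycopg2

DSN = "dbname=riskdb user=app host=localhost"  # hypothetical connection string

QUERY = """
    SELECT asset_id, severity, count(*) AS findings
    FROM vulnerability_findings          -- hypothetical table
    WHERE detected_at >= now() - interval '7 days'
      AND severity >= %(min_severity)s
    GROUP BY asset_id, severity
    ORDER BY findings DESC
    LIMIT %(limit)s;
"""


def top_risky_assets(min_severity: int = 7, limit: int = 20):
    """Return the assets with the most recent high-severity findings."""
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            # Parameters are bound by the driver, avoiding SQL injection.
            cur.execute(QUERY, {"min_severity": min_severity, "limit": limit})
            return cur.fetchall()


if __name__ == "__main__":
    for asset_id, severity, findings in top_risky_assets():
        print(asset_id, severity, findings)
```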