StitcherAI

stitcher.ai

1 Job

5 Employees

About the Company

StitcherAI provides an essential system of record for enterprise IT Finance teams striving to maximize the value of their IT investments. Tackling today's IT Finance challenges requires accurate, actionable, business-aligned data and an engagement model that enables alignment and action across the enterprise. Traditional FinOps and IT Finance tools don't deliver these results, prompting many companies to build their own solutions, which often carry risks and limitations. StitcherAI addresses these gaps with its AI-powered system of record for finance: it creates business-aligned IT Finance datasets and delivers critical data directly to stakeholders, tools, and business processes, enabling meaningful action. Connect with us to discover the future of IT Finance!

Listed Jobs

Company Name
StitcherAI
Job Title
Principal Engineer
Job Description
Role Summary: Lead data engineering for an AI-native FinOps platform, architecting scalable data pipelines and cloud-native services that deliver cost insights to enterprise users.

Expectations:
• Own problems end to end, from resolution through solution ownership, with a founder mindset
• Consistently deliver high-quality results while scaling both the product and the engineering team
• Communicate across disciplines and adapt swiftly to evolving priorities in a fast-moving startup

Key Responsibilities:
• Design and build high-performance data systems using Python/Rust and modern open-source tools
• Architect, develop, test, and deploy data pipelines with frameworks such as Polars, Temporal, and Airflow
• Integrate data from multiple sources (cloud platforms, SaaS APIs, storage formats) into a unified, actionable system
• Implement RESTful APIs, Docker/Kubernetes deployments, CI/CD pipelines, and observability for production environments
• Optimize performance through distributed aggregations, partitioning, clustering, and storage tuning

Required Skills:
• 5+ years building enterprise-scale data platforms
• 3+ years of hands-on experience in Python and/or Rust for cloud-native systems
• Deep knowledge of big data technologies (Spark, Hadoop, Hive) and data transformation libraries (Pandas, Polars)
• Expertise in data pipeline orchestration (Temporal, Airflow) and cloud integration (AWS/GCP/Azure SaaS APIs)
• Strong backend fundamentals: REST APIs, Docker, Kubernetes, CI/CD, observability, monitoring
• Optional: AI/ML experience (forecasting, anomaly detection, GenAI training/deployment), authentication/authorization knowledge, FinOps or cost-analytics background

Required Education & Certifications: None specified.
Toronto, Canada
On site
Senior
04-02-2026