Klaviyo

www.klaviyo.com

1 Job

2,717 Employees

About the Company

Klaviyo is the only CRM built for B2C brands. Powered by its built-in data platform and AI insights, Klaviyo combines marketing automation, analytics, and customer service into one unified solution. With your data all in one place, you can know, engage, and grow your audience like never before.

Klaviyo (CLAY-vee-oh) makes it easy for businesses to know their customers and grow faster, helping relationship-driven brands like Mattel, Glossier, Core Power Yoga, Daily Harvest, and 167,000+ others deliver 1:1 experiences at scale, improve efficiency, and drive revenue.

Listed Jobs

Company Name: Klaviyo
Job Title: Software Engineer

Job Description
**Job Title**
Software Engineer – Backend Data Ingestion

**Role Summary**
Design, implement, and optimize scalable, fault-tolerant data ingestion pipelines that process billions of events daily. Own end-to-end data workflows, ensuring high availability, low latency, and cost efficiency across real-time and batch systems. Collaborate with product, infrastructure, and data science teams to deliver reliable, actionable datasets that power analytics and AI features.

**Expectations**
- 4+ years of software engineering experience, including 2+ in data-intensive or distributed systems.
- Strong Python, SQL, and backend development skills.
- Experience with distributed data frameworks (Spark, Flink), streaming (Kafka, Pulsar), and orchestration (Airflow).
- Cloud-native pipeline design, preferably on AWS.
- Knowledge of data modeling, storage optimization, and governance.
- Ability to work in fast-paced, cross-functional environments.

**Key Responsibilities**
- Build and tune high-scale ingestion, batch, and streaming pipelines (Spark, Flink).
- Design and maintain real-time and batch ETL/ELT workflows with Airflow (see the sketch after this description).
- Implement fault tolerance, monitoring, and recovery for production data pipelines.
- Optimize compute and storage for performance and cost.
- Ensure data integrity, governance, and compliance across data stores (MySQL, S3, Redshift).
- Collaborate with product, infrastructure, and ML teams to deliver clean, timely data.
- Mentor peers and contribute to platform-wide technical direction.

**Required Skills**
- Python (Node.js or Java a plus)
- SQL and backend development frameworks
- Distributed data processing: Apache Spark, Apache Flink
- Streaming: Kafka, Apache Pulsar
- Workflow orchestration: Airflow
- Cloud platforms: AWS (EMR, Lambda, S3, Redshift)
- Kubernetes and container orchestration
- Data modeling, storage optimization, and governance best practices
- Strong problem-solving, collaboration, and communication skills

**Required Education & Certifications**
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- Relevant certifications in cloud or data engineering are a plus.
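For illustration, here is a minimal sketch of the kind of batch ETL workflow the responsibilities describe, written against Airflow 2.x's `DAG` and `PythonOperator` APIs. The DAG id, task ids, schedule, and the extract/transform/load helpers are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch of a daily batch ETL DAG, assuming Airflow 2.x.
# All names (dag_id, task ids, helper functions) are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events():
    """Hypothetical step: pull a day's worth of raw events (e.g. from S3)."""


def transform_events():
    """Hypothetical step: validate, dedupe, and reshape the raw events."""


def load_events():
    """Hypothetical step: load the cleaned events into a warehouse table."""


with DAG(
    dag_id="daily_event_ingestion",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_events)
    transform = PythonOperator(task_id="transform", python_callable=transform_events)
    load = PythonOperator(task_id="load", python_callable=load_events)

    # Linear dependency chain: extract, then transform, then load.
    extract >> transform >> load
```

In a production pipeline of the scale the posting mentions, each step would typically hand off through durable storage (e.g. S3) rather than in-process state, with retries and monitoring configured per task.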
Boston, United States
Hybrid
Junior
12-11-2025