- Company Name: Okta
- Job Title: Senior Data Engineer, Data Engineering (Auth0)
- Job Description:
**Job Title**
Senior Data Engineer – Data Engineering (Auth0)
**Role Summary**
Lead the design, development, and operation of scalable, secure data pipelines that power analytics and customer‑facing features for Okta’s Customer Identity Cloud. Collaborate closely with product, engineering, and machine‑learning teams to translate large‑scale data into usable insights, while mentoring junior engineers and driving continuous improvement of the data platform.
**Expectations**
- Build, maintain, and optimize robust data infrastructure that supports real‑time security, eventing, and analytics use cases.
- Advise on and adopt modern data technologies to accelerate delivery and improve engineering efficiency.
- Mentor and coach teammates, contributing to team growth and to excellence in operational support.
- Navigate ambiguity, prioritize effectively, and deliver high‑quality results in a fast‑changing environment.
**Key Responsibilities**
- Design and implement end‑to‑end data pipelines (ingestion, transformation, storage, serving); a brief sketch follows this list.
- Develop data models and schemas that support analytics and threat‑intelligence features.
- Ensure data security and privacy in all data handling processes.
- Build developer‑friendly, API‑driven interfaces (REST or gRPC) for internal and external consumption.
- Collaborate with Data Enablement and ML teams to integrate data solutions into customer‑facing products.
- Participate in the support rotation, diagnosing and resolving production incidents with a focus on reliability.
- Evaluate and recommend new tools, frameworks, and cloud services to improve platform performance and scalability.
- Produce and maintain technical documentation, standards, and best‑practice guidelines.
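
A minimal sketch of the ingestion → transformation → storage flow described in the first responsibility above, assuming PySpark with a Kafka source; the broker, topic name, event fields, and output path are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("auth-events-etl").getOrCreate()

# Ingestion: read a bounded batch of raw events from a hypothetical Kafka topic.
raw = (
    spark.read.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "auth-events")                 # placeholder topic
    .option("startingOffsets", "earliest")
    .option("endingOffsets", "latest")
    .load()
)

# Transformation: parse the JSON payload and keep only the fields analytics needs.
events = (
    raw.selectExpr("CAST(value AS STRING) AS json")
    .select(
        F.get_json_object("json", "$.user_id").alias("user_id"),
        F.get_json_object("json", "$.event_type").alias("event_type"),
        F.get_json_object("json", "$.timestamp").cast("timestamp").alias("event_time"),
    )
    .filter(F.col("event_type").isNotNull())
)

# Storage/serving: write date-partitioned Parquet that a warehouse or BI layer can query.
(
    events.withColumn("event_date", F.to_date("event_time"))
    .write.mode("append")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/auth_events/")  # placeholder path
)
```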
**Required Skills**
- 4+ years of software development experience; 2+ years with large‑scale data systems.
- Strong data modeling knowledge and hands‑on experience applying relational and/or columnar schemas.
- Proficiency in big‑data processing technologies (Apache Spark, Flink, Kafka, etc.) and ETL frameworks.
- Experience with cloud data platforms (AWS Redshift, Athena, Snowflake, GCP BigQuery, Azure Synapse).
- Solid programming skills in Python, Scala, or Java; SQL expertise on distributed query engines.
- Knowledge of data security and privacy best practices; familiarity with IAM and threat‑intel workflows.
- API design experience (REST, gRPC, OpenAPI); a brief serving‑API sketch follows this list.
- Containerization and orchestration skills (Docker, Kubernetes).
- Excellent written and verbal communication, problem‑solving, and mentoring abilities.
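
A minimal sketch of the kind of API‑driven serving interface the skills above reference, assuming FastAPI; the endpoint path, response model, and in‑memory stand‑in for a warehouse query are hypothetical.

```python
from datetime import date
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="auth-analytics-api")

class DailyLoginCount(BaseModel):
    event_date: date
    login_count: int

# Hypothetical in-memory stand-in for a warehouse-backed aggregate query.
_DAILY_LOGINS = {date(2024, 1, 1): 1200, date(2024, 1, 2): 1350}

@app.get("/v1/logins/daily", response_model=list[DailyLoginCount])
def daily_login_counts() -> list[DailyLoginCount]:
    """Return per-day login counts that a dashboard or customer-facing feature could consume."""
    return [
        DailyLoginCount(event_date=d, login_count=n)
        for d, n in sorted(_DAILY_LOGINS.items())
    ]
```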
**Required Education & Certifications**
- Bachelor’s degree in Computer Science, Computer Engineering, Data Science, or a related technical field.
- Professional certifications (e.g., AWS Certified Big Data – Specialty, Google Cloud Professional Data Engineer, SnowPro Core) are a plus but not mandatory.