- Company Name: Bolt Insight
- Job Title: Data Scientist – Analytics & Pipelines
- Job Description:
**Role Summary**
Build, maintain, and optimize end‑to‑end analytical pipelines that ingest, cleanse, and structure diverse research, commercial, behavioral, and product data. Deliver reproducible, high‑trust datasets and actionable insights to internal dashboards, client reports, and product teams. Focus on data preparation, quality assurance, and insight enablement rather than model training.
**Expectations**
- Own the analytical backbone of the platform, ensuring data reliability and consistency.
- Translate ambiguous stakeholder inquiries into clear metrics and analytical definitions.
- Deliver insights in a repeatable, scalable manner that aligns with business objectives.
- Collaborate effectively across engineering, product, research, and client teams.
**Key Responsibilities**
- Design and implement scalable ETL/ELT pipelines for multi‑source data (market research, commercial, behavioral, product).
- Write and maintain dbt models and SQL transformations; document data logic comprehensively.
- Deploy automated data quality checks, monitoring, and observability for warehouse/lakehouse infrastructure.
- Produce recurring insight artifacts: trend analyses, segment and cohort comparisons, distribution views, profiling reports, and thematic frequency tracking.
- Standardize insight definitions across studies to enable longitudinal comparability.
- Build self‑serve reporting layers and analytics tools; define and monitor KPI metrics for platform health and insight quality.
- Present findings clearly to both technical and non‑technical audiences.
- Partner with engineering to refine data architecture, performance, and observability.
- Support client engagements with robust, explainable data slices and evidence backing insights.
**Required Skills**
- 3+ years’ experience in data science, analytics, BI, or research data roles.
- Advanced SQL and Python proficiency; comfortable writing complex queries and scripts.
- Proven track record building end‑to‑end ETL/ELT pipelines and analytical datasets.
- Hands‑on experience with dbt or equivalent transformation tooling.
- Ability to work with messy, heterogeneous data from multiple sources.
- Strong analytical thinking and communication skills; adept at clarifying vague questions into rigorous outputs.
**Required Education & Certifications**
- Bachelor’s degree in Computer Science, Statistics, Data Science, or related quantitative field (or equivalent professional experience).
- Relevant certifications (e.g., SQL, dbt, AWS/Azure data services) are a plus but not mandatory.