Job Specifications
Who you are:
You're a hands-on Senior Data Engineer with strong Python skills and experience building robust data pipelines in cloud environments. You enjoy solving complex data problems, working cross-functionally, and building the infrastructure that powers analytics and product development. You're ready to take the next step in your career at a company where your work will directly impact the business and customer experience. Does this sound like you?
If so, keep reading and apply today!
What you'll do:
Design, build, and maintain reliable data pipelines to support internal analytics and customer-facing product features.
Architect, develop, and maintain robust data infrastructure using structured (Postgres) and unstructured (MongoDB) databases.
Write, optimize, and maintain extensive SQL queries and scripts to support complex data retrieval and analysis.
Design, develop, and manage ETL processes using APIs to integrate various data sources seamlessly into our analytics environment.
Perform database optimization, performance tuning, and capacity planning to ensure maximum efficiency and reliability.
Own data integrations across systems and ensure data is clean, validated, and well-modeled.
Collaborate closely with data scientists, analysts, and software engineers to ensure the right data is available in the right format at the right time.
Leverage modern data tools and cloud infrastructure to build scalable solutions that meet business and technical requirements.
Troubleshoot data issues and continuously improve pipeline performance and observability.
What you have:
Advanced proficiency in Python and SQL in a production environment, including query optimization and schema design.
Proven experience working programmatically with structured (Postgres) and unstructured (MongoDB) databases.
Strong background in database performance optimization and capacity planning.
Demonstrated expertise in developing and maintaining ETLs using APIs and automation frameworks.
Experience with cloud platforms such as AWS (especially services like Glue, Redshift, or S3).
Solid understanding of data modeling and best practices for data integrity and security.
Familiarity with Spark, Hive, Databricks, or similar big data tools.
Working knowledge of Terraform, Kubernetes, and CI/CD workflows.
Excellent communication and collaboration skills, with the ability to work cross-functionally and translate technical concepts to non-technical audiences.
Experience with orchestration tools like Prefect or Airflow.
Exposure to modern data workflows, notebooks, and warehouse modeling.
A track record of ensuring data quality and integrity through proactive monitoring, testing, and validation processes.
Why join Polly?
High Bar of Talent: Polly consistently performs in the top quartile of start-up companies, and we consider the people of Polly the engine helping us achieve success. Many candidates choose Polly because of the collaborative, smart, and fun people who work here. We strive to hire the best to continue to raise that bar.
About the Company
We didn't just set a new bar in product and pricing technology; we redefined it. Polly has pioneered the next generation of mortgage capital markets technology with its cutting-edge, data-driven platform. Its enterprise-grade solutions, including the industry's only cloud-native, commercially scalable product, pricing, and eligibility (PPE) engine and first-of-its-kind Polly/™ AI platform, empower the nation's top banks, credit unions, and mortgage lenders to increase profitability, automate workflows, and revolutionize t...