- Company Name: Visa
- Job Title: Machine Learning Engineer
- Job Description:
**Role Summary:**
Design, implement, and maintain end‑to‑end automated ML pipelines for Visa’s Global Data Office, ensuring seamless deployment, monitoring, and governance of models that power AI/ML products across the payments ecosystem. Work cross‑functionally with data scientists, product managers, software engineers, and business stakeholders to translate analytical insights into production‑ready solutions while prioritizing scalability, reliability, and responsible AI.
**Expectations:**
– 3+ years of AI/ML development and deployment experience (or 2+ years with an advanced degree).
– Advanced degree (MSc, MBA, PhD, etc.) in a quantitative discipline (Statistics, CS, Economics, Engineering).
– Demonstrated ability to build and run production‑grade pipelines in distributed environments.
**Key Responsibilities:**
1. Build, test, and deploy scalable ML pipelines (model training, validation, refit, recalibration, monitoring, and serving) using Spark, Python, Hive, Scala, Airflow, GitHub, and MLflow.
2. Collaborate with data scientists, engineers, product managers, and cross‑functional teams to design and deliver AI/ML solutions that meet business objectives.
3. Lead end‑to‑end model lifecycle management (concept, design, implementation, roll‑out, and maintenance), ensuring high quality and performance.
4. Apply model governance, monitor drift, and refine models continuously to uphold accuracy and compliance.
5. Remain current on state‑of‑the‑art algorithms and integrate relevant innovations into Visa’s products.
6. Design and maintain reliable, high‑performance distributed systems for large‑scale data ingestion, processing, and storage (Hadoop, Parquet, Avro, HBase, etc.).
7. Write production‑ready code in Linux/Unix environments using shell scripting and Jupyter Notebooks, and optionally leverage AWS for MLOps workflows.
**Required Skills:**
* Programming & Pipeline Tools: Spark, Scala, Python, Hive, Airflow, GitHub, MLflow.
* Big Data & Distributed Systems: Hadoop ecosystem (Spark, MLlib, GraphX); storage formats (Parquet, Avro) and HBase.
* Statistical & ML Techniques: Predictive modeling (regression, classification), clustering, PCA, factor analysis, decision trees, transformers, and large language models.
* Data Engineering: Large‑scale ingestion, processing, and storage across distributed systems.
* DevOps & Ops: Linux shell scripting, Jupyter, model drift monitoring and governance, model re‑training workflows.
* Cloud (optional): AWS MLOps services.
**Required Education & Certifications:**
* Bachelor’s Degree or higher in a quantitative field (Statistics, Computer Science, Engineering, Economics).
* 3+ years professional experience in AI/ML model development and deployment, or 2+ years with an advanced degree.
* No mandatory certifications, but knowledge of model governance and MLOps best practices is preferred.
Foster City, United States
Hybrid
Junior
25-11-2025