Job Specifications
Machine Learning Engineer | Python | PyTorch | Distributed Training | Optimization | GPU | Hybrid, San Jose, CA
Title: Machine Learning Engineer
Location: San Jose, CA
Responsibilities:
Productize and optimize models from Research into reliable, performant, and cost-efficient services with clear SLOs (latency, availability, cost).
Scale training across nodes/GPUs (DDP/FSDP/ZeRO, pipeline/tensor parallelism) and own throughput and time-to-train using profiling and optimization (a minimal DDP sketch follows this list).
Implement model-efficiency techniques (quantization, distillation, pruning, KV-cache, Flash Attention) for training and inference without materially degrading quality.
Build and maintain model-serving systems (vLLM/Triton/TGI/ONNX/TensorRT/AITemplate) with batching, streaming, caching, and memory management.
Integrate with vector/feature stores and data pipelines (FAISS/Milvus/Pinecone/pgvector; Parquet/Delta) as needed for production.
Define and track performance and cost KPIs; run continuous improvement loops and capacity planning.
Partner with MLOps on CI/CD, telemetry/observability, and model registries; partner with Scientists on reproducible handoffs and evaluations.
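For illustration of the distributed-training responsibility above, here is a minimal sketch of multi-GPU data-parallel training with PyTorch DistributedDataParallel, assuming launch via torchrun with the NCCL backend on NVIDIA GPUs. The toy model, dataset, and hyperparameters are placeholders, not part of this role's actual stack.

```python
# Minimal DDP training sketch; launch with:
#   torchrun --nproc_per_node=<num_gpus> train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and data; swap in the production model and dataset.
    model = torch.nn.Linear(128, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(1024, 128), torch.randint(0, 10, (1024,)))
    sampler = DistributedSampler(dataset)          # shards data across ranks
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)                   # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()                        # gradients all-reduced across ranks
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

In practice, FSDP or ZeRO sharding replaces plain DDP once model state no longer fits on a single GPU, but the launch and sampler mechanics stay the same.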
Educational Qualifications:
Bachelor's in Computer Science, Electrical/Computer Engineering, or a related field required; Master's preferred (or equivalent industry experience).
Strong systems/ML engineering background with exposure to distributed training and inference optimization.
Industry Experience:
3-5 years in ML/AI engineering roles owning training and/or serving in production at scale.
Demonstrated success delivering high-throughput, low-latency ML services with reliability and cost improvements.
Experience collaborating across Research, Platform/Infra, Data, and Product functions.
Technical Skills:
Familiarity with deep learning frameworks: PyTorch (primary), TensorFlow.
Exposure to large-model training techniques (DDP, FSDP, ZeRO, pipeline/tensor parallelism); distributed training experience a plus.
Optimization: experience profiling and optimizing code execution and model inference, including quantization (PTQ/QAT/AWQ/GPTQ), pruning, distillation, KV-cache optimization, and Flash Attention (a brief quantization sketch follows this list).
Scalable serving: autoscaling, load balancing, streaming, batching, caching; collaboration with platform engineers.
Data & storage: SQL/NoSQL, vector stores (FAISS/Milvus/Pinecone/pgvector), Parquet/Delta, object stores.
Ability to write performant, maintainable code.
Understanding of the full ML lifecycle: data collection, model training, deployment, inference, optimization, and evaluation.
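As one concrete example of the optimization skills listed above, here is a minimal sketch of post-training dynamic quantization in PyTorch, which converts Linear-layer weights to int8 for cheaper CPU inference. The toy model is a placeholder; a production model would need accuracy evaluation before and after quantization.

```python
# Minimal post-training dynamic quantization (PTQ) sketch in PyTorch.
import torch

# Placeholder model; replace with the real network under evaluation.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
).eval()

# Quantize only Linear layers to int8 weights; activations stay float.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(x).shape)  # same interface, smaller weights
```

Weight-only PTQ like this trades a small amount of accuracy for lower memory and latency; QAT, AWQ, or GPTQ are the usual next steps when that trade-off needs tightening.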
About the Company
Here at Enigma, we specialize in Generative AI recruitment, with a specific focus on Machine Learning and Software Engineering disciplines. With 20+ years of combined experience, we understand the intricacies of finding the perfect role as well as the right talent for your team.
But what sets Enigma apart? Our consultative approach. We don't just match candidates with job openings; we guide candidates, founders, and hiring managers through the recruitment process. Our value-added services go beyond traditional recruitment ...