- **Company Name:** Smith & Associates
- **Job Title:** AI/ML Engineer
- **Job Description:**
**Role Summary**
Design, develop, and deliver production‑grade AI features, including LLM‑powered applications, agentic AI workflows, and traditional machine‑learning models. Drive the full lifecycle from data preparation and prototyping through AWS deployment, monitoring, and iterative improvement.
**Expectations**
- Deliver end‑to‑end AI solutions that meet performance and reliability standards.
- Translate research papers and emerging techniques into operational prototypes.
- Collaborate closely with software engineering teams to integrate AI components into system architecture.
- Actively monitor model health, address drift, and refine models based on telemetry and user feedback.
**Key Responsibilities**
- Build and tune LLM features: prompt engineering, retrieval‑augmented generation (RAG) pipelines, plug‑ins, guardrails, and evaluation metrics.
- Design, prototype, and ship agentic AI systems (tool calls, planning, multi‑agent orchestration).
- Develop, validate, and deploy classification, regression, clustering, NLP, and CV models.
- Write clean, well‑tested Python code for data processing, modeling, and RESTful services.
- Package and deploy models on AWS (S3, Lambda, ECS/EKS, SageMaker) with CI/CD pipelines.
- Construct efficient data pipelines and interface with SQL/NoSQL databases and vector stores (FAISS, Pinecone, Milvus).
- Monitor deployed models for latency, drift, and accuracy; iterate based on analytics and user input.
- Produce documentation and communicate progress to cross‑functional teams.
**Required Skills**
- Strong knowledge of algorithms, data structures, and complexity analysis.
- Solid grasp of core ML concepts: bias/variance, feature engineering, cross‑validation, regularization, metrics.
- Experience with LLM fundamentals: tokenization, fine‑tuning, RAG, evaluation.
- Familiarity with agentic AI components: tool calling, planning, memory, multi‑agent orchestration.
- Understanding of Model Context Protocol (MCP) for context sharing and tool orchestration.
- Proficiency in Python with NumPy, pandas, scikit‑learn; experience in PyTorch or TensorFlow preferred.
- AWS fundamentals: IAM, S3, Lambda, ECS/EKS, SageMaker; equivalent experience with another cloud provider is acceptable.
- Database experience: SQL (efficient queries, normalization, indexing), NoSQL (e.g., DynamoDB), and vector stores (FAISS, Pinecone, Milvus).
- Version control (Git), testing, linting, code reviews.
- Basic linear algebra, probability, and calculus.
- Optional: Airflow/Prefect, message queues, API development (FastAPI/Flask), observability tools, security/privacy fundamentals.
**Required Education & Certifications**
- Bachelor’s or Master’s degree in Computer Science, Data Science, Electrical Engineering, or related field (or equivalent projects/internships).
- 0–2 years of professional experience (internships, open‑source contributions, or significant personal projects count).