Tror - AI for everyone

tror.ai

3 Jobs

54 Employees

About the Company

TROR is an artificial intelligence consultancy specialising in powerful, customized AI solutions for businesses. Our team of top AI experts takes pride in delivering cutting-edge consultancy, and years of experience across industries help us develop and implement bespoke AI solutions. Our on-demand AI products have helped over 100 companies drive transformational results.

Listed Jobs

Company Name
Tror - AI for everyone
Job Title
AI/ML Architect
Job Description
**Job Title**
AI/ML Architect

**Role Summary**
Lead the design and deployment of scalable, observable, and compliant AI/ML solutions that integrate seamlessly with clinical workflows and hospital data platforms. Own the full model lifecycle from development through drift monitoring, with a focus on HIPAA‑regulated environments and real‑time inference.

**Expectations**
- 8+ years of experience in AI/ML or data architecture roles.
- Proven track record of designing end‑to‑end ML pipelines and MLOps workflows in cloud environments.
- Strong understanding of clinical data standards (FHIR, HL7, SMART‑on‑FHIR) and experience with enterprise EHR platforms (e.g., Epic).
- Ability to architect and secure GenAI workloads, including LLMs, embeddings, and agentic systems.

**Key Responsibilities**
- Design and implement end‑to‑end ML pipelines using Airflow, MLflow, Vertex AI, or Azure ML.
- Build and deploy containerized inference services on Kubernetes (K8s) with Docker, ensuring observability through AppDynamics or similar.
- Own the model lifecycle: versioning, rollback, shadow testing, and drift monitoring.
- Architect GenAI pipelines (LLM inference, embeddings, RAG) with tools such as Vertex AI, Azure OpenAI, LangChain, and vector databases (FAISS, Pinecone, Weaviate, ChromaDB); see the retrieval sketch after this listing.
- Implement governance guardrails for agentic systems, including prompt‑injection protection, moderation, and output fallback.
- Integrate real‑time inference services with Epic via FHIR APIs, ensuring end‑to‑end audit trails and access controls.
- Enforce HIPAA‑compliant encryption, access controls, and audit logging across all services.
- Collaborate with cross‑functional teams to define and enforce MLOps best practices and compliance standards.

**Required Skills**
- Kubernetes, Docker, and container orchestration expertise.
- Deep familiarity with GCP Vertex AI, Azure ML, and Snowflake.
- Proficiency in building and monitoring ML pipelines (Airflow, MLflow).
- Experience with HIPAA‑regulated, real‑time model deployment and security controls.
- Knowledge of clinical data standards (FHIR, HL7, SMART‑on‑FHIR).
- Strong scripting/automation skills (Python, Bash, Terraform).
- Familiarity with GenAI tools (LLMs, embeddings), LangChain, and vector databases.

**Required Education & Certifications**
- Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field.
- Relevant certifications, such as Certified Kubernetes Administrator (CKA), GCP Professional AI Engineer, or Azure AI Engineer Associate, are preferred.
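By way of illustration, the retrieval step of a RAG pipeline can be sketched with FAISS alone. This is a minimal sketch under stated assumptions, not Tror's implementation: the `embed` function is a hypothetical stand-in for a managed embeddings endpoint (e.g., Vertex AI or Azure OpenAI), and the documents and 384-dimension vectors are purely illustrative.

```python
# Minimal RAG retrieval sketch using FAISS directly. embed() is a
# hypothetical placeholder; a real system would call a managed embedding
# model and layer on the HIPAA/audit controls described above.
import faiss
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    # Placeholder embedding: fixed pseudo-random vectors keep the sketch
    # runnable without a model endpoint. Not semantically meaningful.
    rng = np.random.default_rng(0)
    return rng.random((len(texts), 384), dtype=np.float32)

# 1. Index reference documents (illustrative, de-identified snippets).
docs = [
    "Sepsis screening requires a lactate measurement within three hours.",
    "Discharge summaries must be filed within 48 hours of discharge.",
]
index = faiss.IndexFlatL2(384)   # exact L2 search over 384-d vectors
index.add(embed(docs))

# 2. Retrieve the closest document for a query.
query = "How quickly should lactate be measured for sepsis?"
_, ids = index.search(embed([query]), 1)
context = docs[int(ids[0][0])]

# 3. Ground the LLM prompt in the retrieved context (LLM call omitted).
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The same three steps (index, retrieve, ground) carry over when FAISS is swapped for Pinecone, Weaviate, or ChromaDB; only the index client changes.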
Raritan, United States
Hybrid
Senior
18-11-2025
Company Name
Tror - AI for everyone
Job Title
Java/Python Flink Engineer
Job Description
**Job Title**
Java/Python Flink Engineer

**Role Summary**
Design, develop, test, and optimize real‑time streaming applications using Apache Flink (DataStream API) with Java or Python. Build scalable, fault‑tolerant, high‑throughput solutions that integrate with streaming platforms such as Apache Kafka and run on AWS infrastructure.

**Expectations**
- Minimum 10 years of professional software development experience.
- At least 3 years dedicated to stream processing with Apache Flink.
- Proven ability to deliver production‑ready, highly available, and low‑latency services.
- Strong grasp of distributed system principles (consistency, fault tolerance, stateful processing).
- Hands‑on experience with AWS big‑data services (e.g., EMR, S3, Kinesis).

**Key Responsibilities**
- Implement and maintain stream processing pipelines in Java or Python using Flink’s DataStream API.
- Design stateful, exactly‑once processing logic with checkpointing and savepoints; see the sketch after this listing.
- Integrate Flink jobs with Apache Kafka and other messaging systems for data ingestion and output.
- Tune performance, memory usage, and latency; adjust watermarking and window strategies.
- Monitor job health, troubleshoot failures, and conduct root‑cause analyses.

**Required Skills**
- Advanced proficiency in Java (preferred) or Python for enterprise‑level applications.
- Deep, production‑grade experience with Apache Flink (DataStream API, windowing, state processors).
- Solid knowledge of Apache Kafka or comparable streaming platforms.
- Familiarity with AWS services relevant to big data and streaming (EMR, S3, Kinesis, Lambda).
- Strong understanding of distributed systems concepts, event‑time vs. processing‑time semantics, and state consistency models.

**Required Education & Certifications**
- Bachelor’s degree in Computer Science, Software Engineering, or a related technical field.
- Professional certifications in big‑data analytics or cloud platforms are a plus but not mandatory.
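For orientation, here is a minimal, self-contained PyFlink sketch of the stateful pieces this role touches: checkpointing for state recovery and a keyed, incrementally reduced stream. The in-memory collection source and the sensor tuples are stand-ins so the example runs without a Kafka cluster; a production job would use a Kafka source instead.

```python
# Minimal PyFlink DataStream sketch: keyed stateful aggregation with
# checkpointing. The collection source and sensor data are illustrative
# stand-ins for a Kafka-backed stream.
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)
env.enable_checkpointing(10_000)  # snapshot state every 10 s

events = env.from_collection(
    [("sensor-1", 3), ("sensor-2", 7), ("sensor-1", 5)],
    type_info=Types.TUPLE([Types.STRING(), Types.INT()]),
)

# Keyed running sum: Flink keeps the per-key accumulator as managed state
# and includes it in every checkpoint, which is what makes recovery
# exactly-once with respect to state.
running_sums = (
    events
    .key_by(lambda e: e[0])
    .reduce(lambda a, b: (a[0], a[1] + b[1]))
)

running_sums.print()
env.execute("running-sum-sketch")
```

Event-time windowing and watermarks would slot in between `key_by` and the aggregation; they are omitted here to keep the sketch focused on state and checkpointing.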
Mountain View, United States
On site
Senior
03-12-2025
Company Name
Tror - AI for everyone
Job Title
Business Data Analyst with Salesforce and Healthcare Experience
Job Description
**Job Title**
Business Data Analyst – Salesforce & Healthcare

**Role Summary**
Act as the liaison between business stakeholders and technical teams, translating business requirements into data and analytics solutions. Leverage Salesforce, Snowflake, DBT, Power BI, and SQL Server to deliver insights and support data‑driven decision making within a healthcare environment.

**Expectations**
- 8–10 years of business analysis experience in data or analytics projects.
- Strong background in the pharmaceutical or healthcare domain.
- Proven expertise with Salesforce, Snowflake, Power BI, DBT, and SQL Server.
- Familiarity with Agile Scrum, Jira/Azure DevOps, and user story creation.

**Key Responsibilities**
- Elicit, document, and validate business requirements for data and reporting solutions.
- Define KPIs, metrics, and data sources in collaboration with stakeholders; see the query sketch after this listing.
- Translate business needs into functional specifications for Snowflake, DBT, Power BI, and SQL Server.
- Support data modeling, validation, and quality assurance with engineering and BI teams.
- Conduct gap analyses and propose data‑driven solutions to business challenges.
- Facilitate Agile ceremonies (backlog grooming, sprint planning, reviews).
- Draft user stories, acceptance criteria, and UAT plans.
- Ensure completed solutions meet business expectations and compliance standards.

**Required Skills**
- Business analysis, requirements gathering, and documentation.
- Salesforce administration and data architecture knowledge.
- Proficiency with Snowflake, DBT, Power BI, SQL Server, and SQL querying.
- Agile Scrum methodology and tools (Jira, Azure DevOps).
- Strong analytical, communication, and stakeholder management skills.
- Understanding of data governance, quality, and regulatory compliance.

**Required Education & Certifications**
- Bachelor’s degree in Computer Science, Data Analytics, Business Information Systems, or a related field (or equivalent experience).
- Salesforce certifications (e.g., Salesforce Certified Data Architecture & Management Designer) and/or related analytics certifications are preferred.
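As a concrete example of turning a KPI definition into something testable, the sketch below pulls a monthly new-accounts metric from Snowflake with the Python connector. The connection parameters, the PATIENT_ACCOUNTS table, and the created_date column are hypothetical placeholders, not an actual schema.

```python
# Hypothetical KPI check: new patient accounts per month in Snowflake.
# Table, columns, and connection details are illustrative placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",   # assumed warehouse/database/schema names
    database="HEALTHCARE_DB",
    schema="REPORTING",
)
try:
    cur = conn.cursor()
    # KPI: count of new accounts per month, e.g., to validate a Power BI
    # visual against the warehouse before sign-off.
    cur.execute(
        """
        SELECT DATE_TRUNC('month', created_date) AS month,
               COUNT(*) AS new_accounts
        FROM PATIENT_ACCOUNTS
        GROUP BY 1
        ORDER BY 1
        """
    )
    for month, new_accounts in cur.fetchall():
        print(month, new_accounts)
finally:
    conn.close()
```

A query like this doubles as a UAT acceptance check: the analyst agrees the SQL with stakeholders, and the BI team reproduces the same numbers in the dashboard.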
Morrisville, United States
On site
Senior
02-12-2025