team.blue

5 Jobs

2,772 Employees

About the Company

team.blue is an ecosystem of successful brands working together across regions to provide customers with everything they need to succeed online. More than 60 brands make up the group, and within them over 3,000 experts serve its 3.3+ million customers across Europe and beyond.

team.blue's brands are a mix of traditional hosting businesses, offering services ranging from domain names, email, shared hosting and e-commerce to server hosting solutions, and specialist SaaS providers offering adjacent products such as compliance, marketing tools and team collaboration products. This broad product offering makes the group a one-stop partner for online businesses and entrepreneurs across Europe.

Listed Jobs

Company Name
team.blue
Job Title
AI/ML - Senior Platform Engineer
Job Description
**Job Title:** Senior AI/ML Platform Engineer

**Role Summary:** Design and maintain scalable ML/AI platforms to support model deployment, LLM management, and ML lifecycle operations across multi-GPU environments.

**Expectations:**
- 4+ years in platform engineering, DevOps, or infrastructure roles
- 2+ years in ML/AI infrastructure or platform development

**Key Responsibilities:**
- Build and maintain scalable platforms for ML/AI deployment, with a focus on LLM inference optimization (latency, throughput)
- Develop automated deployment pipelines using containerization (Docker/Kubernetes) and CI/CD tools
- Implement monitoring (Prometheus/Grafana), logging, and alerting for ML workloads
- Manage GPU cluster resources for cost efficiency and performance
- Design disaster recovery and backup strategies for ML infrastructure
- Create self-service APIs/tools for data scientists to deploy models independently
- Automate infrastructure provisioning using Terraform/CloudFormation
- Implement model versioning, rollback, and A/B testing frameworks
- Collaborate with data science teams to optimize deployment workflows
- Enforce secure ML model serving practices with security teams

**Required Skills:**
- Cloud platforms: AWS, Azure, or GCP (GPU-enabled services)
- Programming: Python, Go/Java/Rust
- Containerization: Docker, Kubernetes (GPU scheduling/resource management)
- ML frameworks: PyTorch/TensorFlow, model serving (TorchServe/TensorFlow Serving)
- Infrastructure as Code: Terraform/CloudFormation
- MLOps practices, GPU computing (CUDA/multi-GPU), and distributed training
- Streaming data tools: Kafka/Kinesis/Pulsar

**Required Education & Certifications:**
- Bachelor’s degree in Computer Science/Software Engineering or related field
- Certifications not specified.
Ghent, Belgium
Remote
Senior
07-11-2025
Company Name
team.blue
Job Title
AI/ML - Platform Lead
Job Description
**Job Title:** AI/ML - Platform Lead

**Role Summary:** Lead the development and maintenance of an enterprise AI/ML platform to support AI/ML workloads across an organization spanning 22 European countries. Focus on architecting scalable infrastructure, driving adoption of GenAI technologies, and mentoring cross-functional teams to operationalize LLMs and AI/ML workflows.

**Expectations:** 6+ years of hands-on experience building AI/ML platforms; deep expertise in cloud infrastructure, AI/ML operations, and cross-functional collaboration. Proficiency in modern LLM toolchains, model optimization, and backend development required.

**Key Responsibilities:**
- Architect and implement scalable AI/ML infrastructure for GPU-accelerated workloads, including private cloud environments, Kubernetes, and observability frameworks.
- Build core GenAI application platforms to power LLM workflows (e.g., RAG, inference pipelines, vector databases).
- Design and optimize services for training/inference scalability, model versioning, and latency management in production LLMs.
- Drive cross-functional adoption of ML/AI tooling via reusable components, automation, and metrics-driven dashboards.
- Define technical roadmaps aligning innovation with compliance, fairness, and security standards for AI/ML systems.
- Mentor teams of engineers, scientists, and domain experts to operationalize LLMs, implement RAG systems, and integrate multi-modal data processing.

**Required Skills:**
- **Technical:** Kubernetes, Python (FastAPI), Go, Docker, Terraform; expertise in AIOps/MLOps practices, ETL pipelines, and observability frameworks.
- **AI/ML Specialization:** LLM toolchains (LangChain, LlamaIndex), model optimization (quantization, ONNX), RAG systems, vector databases (Qdrant, Pinecone), LLM inference engines (vLLM, TensorRT-LLM).
- **Architecture:** Microservices, event-driven systems (Kafka, SSE), and production-grade ML pipeline design.
- **Problem Solving:** Model fine-tuning, prompt engineering, semantic search, and latency-critical deployment strategies.

**Required Education & Certifications:** Bachelor’s or Master’s in Computer Science, Engineering, or equivalent; certifications in cloud (AWS/GCP/Azure) or AI/ML (e.g., TensorFlow, PyTorch) preferred but not mandatory.
Ghent, Belgium
Remote
Senior
07-11-2025
Company Name
team.blue
Job Title
Data Engineer
Job Description
**Job Title:** Data Engineer

**Role Summary:** Develop and maintain the central data warehouse (DWH) and scalable data pipelines to optimize data ingestion, processing, and integration for analytical and operational use. Focus on scalable, secure, and cost-efficient data architectures to support business strategies and analytics.

**Expectations:** Senior-level experience (7+ years) in data engineering roles, preferably in related industries (hosting, SaaS, or data-driven sectors). Advanced degree in Computer Science, STEM, or equivalent work experience. Demonstrated expertise in delivering high-performance, stable data solutions with measurable business impact.

**Key Responsibilities:**
- Design and maintain efficient, scalable data pipelines for structured/unstructured data.
- Build and optimize data models (dimensional, star/snowflake schemas) for analytical and operational needs.
- Integrate internal/external data sources (APIs, databases, systems) into the DWH.
- Architect and implement scalable data architectures aligned with governance and performance standards.
- Optimize data platforms for performance, security, and scalability (schema design, indexing, query optimization).
- Guide implementation of governance practices (data lineage, quality, metadata management).
- Translate business requirements (marketing, finance, M&A) into technical solutions.

**Required Skills:**
- Advanced SQL (RDBMS: SQL Server, Oracle, PostgreSQL, MySQL, MariaDB).
- Cloud data platforms: Databricks (PySpark, Delta Lake), Google BigQuery (partitioning, materialized views).
- Workflow orchestration: Airflow, dbt.
- Object storage systems (S3, ADLS, GCS); data formats (Parquet, ORC, Avro, JSON).
- Containerization (Docker, Kubernetes), CI/CD pipelines, and data versioning/schemas.
- Data warehouse modeling techniques (dimensional modeling, star/snowflake schema).
- Strong analytical and problem-solving skills with ability to manage multiple projects.

**Required Education & Certifications:**
- Advanced degree in Computer Science or STEM (Mathematics, Physics, Statistics, Engineering).
- Proven record of delivering data intelligence solutions with business value and efficient resource use.
- Valid work eligibility for the country of employment.
Paris, France
Remote
Senior
10-11-2025
Company Name
team.blue
Job Title
Data Scientist
Job Description
**Job Title:** Data Scientist

**Role Summary:** Leverage advanced analytics, machine learning, and statistical modeling to turn complex data into actionable business insights. Partner with cross‑functional teams to design, build, and operationalize data‑driven solutions that support strategic objectives and improve decision‑making.

**Expectations:**
- Deliver high‑impact analytical solutions aligned with business goals.
- Communicate findings clearly to both technical and non‑technical stakeholders.
- Continuously improve data science pipelines and stay current with emerging techniques.
- Contribute to the data science roadmap and prioritize projects based on organizational needs.

**Key Responsibilities:**
- Identify business challenges and develop data‑based strategies to address them.
- Build, deploy, and maintain predictive and machine‑learning models.
- Perform large‑scale data analysis, visualization, and reporting.
- Design and enhance end‑to‑end data science pipelines.
- Collaborate with product managers, engineers, and leadership to ensure solutions meet requirements.
- Mentor junior team members and share best practices.

**Required Skills:**
- Proficient in Python, R, or Scala; strong SQL skills.
- Experience with ML frameworks (TensorFlow, Scikit‑learn) and big‑data tools (Hadoop, Spark).
- Solid understanding of statistical methods and experimental design.
- Ability to translate technical results into business recommendations.
- Strong problem‑solving, analytical thinking, and communication skills.

**Required Education & Certifications:**
- Master’s or Ph.D. in Data Science, Statistics, Mathematics, Computer Science, or related field.
- 5+ years of professional data‑science experience with a proven record of solving business problems.
Ghent, Belgium
Remote
Mid level
21-01-2026