Basecamp Research

www.basecamp-research.com

1 Job

57 Employees

About the Company

Basecamp Research is an AI company dedicated to solving the most pressing challenges in the life sciences by exploring beyond known biology. The company discovers and designs proteins with artificial intelligence trained on its proprietary knowledge graph of nature, the largest and most diverse of its kind, to deliver biological solutions for pharma, food, and industrial applications. Understanding the full genetic, evolutionary, and environmental context of each protein allows Basecamp Research to design tailored proteins for specific applications without the need for expensive and time-consuming directed evolution campaigns. We're a team of explorers, scientists, and policy experts driven by our ambition to protect and learn from nature's diversity, whilst delivering life-changing breakthroughs to those who need them most. – BCR

Listed Jobs

Company Name
Basecamp Research
Job Title
Software Engineer
Job Description
Role Summary:
Design and operate production-level scientific and AI pipelines that enable high-throughput biological data processing, inference, and analysis. Own internal tooling (APIs, CLIs, dashboards) and contribute to scalable workflow orchestration on HPC and Kubernetes platforms.

Expectations:
- 1-5 years of relevant experience, or substantial projects in software, data, ML, or infrastructure engineering.
- Deliver robust, maintainable code from day one and grow into specialised areas.
- Demonstrate ownership, initiative, and the ability to collaborate across interdisciplinary teams.

Key Responsibilities:
- Develop and maintain data-processing, inference, and analysis workflows used by scientists and ML researchers.
- Contribute to containerised pipelines deployed on HPC and Kubernetes.
- Build and improve internal tools (APIs, CLIs, dashboards) that support biological and ML workflows.
- Extend orchestration with Dagster, Temporal, or similar tools to increase reproducibility and observability.
- Manage performance, logging, monitoring, and operational reliability in distributed systems.
- Collaborate with platform engineering on infrastructure, GPU scheduling, and cluster reliability.
- Partner with scientists to translate biological workflows into scalable, automated systems.
- Participate in technical design discussions, code reviews, and engineering practice improvements.

Required Skills:
- Proficiency in Python and Go.
- Docker, Kubernetes, and cloud-native development.
- Experience with workflow systems (Dagster, Temporal, Airflow).
- Linux systems and shell scripting.
- Performance tuning, logging, monitoring, and observability practices (Prometheus, Grafana, Datadog).
- Strong fundamentals, problem-solving skills, and a builder mindset.

Required Education & Certifications:
- Bachelor's degree in Computer Science, Computer Engineering, Software Engineering, or a related field, or equivalent experience.
- No mandatory certifications required.
London, United Kingdom
On site
Fresher
09-12-2025