Job Specification
About The AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10, and we work with frontier developers and governments globally.
We're here because governments are critical to making advanced AI go well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility, and international influence, this is the best place to shape both AI development and government action.
About The Team
Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product.
We build secure-by-design platforms, automated governance, and intelligence-led detection that protect our people, partners, models, and data. We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping: proportionate controls, low ego, and high ownership.
What You Might Work On
Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility); a verification sketch follows this list
Support strengthened identity, segmentation, secrets, and key management to create a defensible foundation for evaluations at scale
Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal
Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
Assess third-party services and hardware/software supply chains; introduce lightweight controls that raise the bar
Contribute to open standards and open source, and share lessons with the broader community where appropriate
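To give a flavour of the supply-chain work above, here is a minimal sketch of detached-signature verification for a build artefact, written in Python with the cryptography library. The file paths, key handling, and choice of Ed25519 are illustrative assumptions, not a description of AISI's actual tooling:

    import sys
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_artefact(artefact_path: str, sig_path: str, pubkey_bytes: bytes) -> bool:
        """Check a detached Ed25519 signature over an artefact before it is used."""
        public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)  # 32 raw bytes
        with open(artefact_path, "rb") as f:
            payload = f.read()
        with open(sig_path, "rb") as f:
            signature = f.read()
        try:
            public_key.verify(signature, payload)  # raises InvalidSignature on mismatch
            return True
        except InvalidSignature:
            return False

    if __name__ == "__main__":
        # Hypothetical invocation; a real pipeline would fetch the key from a trust store,
        # not from a file passed on the command line.
        artefact, sig, key = sys.argv[1], sys.argv[2], open(sys.argv[3], "rb").read()
        ok = verify_artefact(artefact, sig, key)
        print("verified" if ok else "REJECTED")
        sys.exit(0 if ok else 1)

In practice a check like this sits behind a paved road (e.g., enforced at artefact pull or deploy time) so researchers get verification by default rather than as an extra step.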
If you want to build security that accelerates frontier-scale AI safety research, and see your work land in production quickly, this is a good place to do it.
Role Summary
Build and maintain a modern, mission-aware detection engineering practice. You'll own AISI's threat model, define detections that reflect AISI-specific risks, and collaborate with DSIT's SOC to extend coverage and context. You'll focus on signal quality, not alert volume. You will extend coverage to AI/ML surfaces, instrumenting the model lifecycle and AI platforms so threats to model weights, data pipelines, GPU estates, and inference endpoints are visible, correlated, and actionable.
Responsibilities
Define and evolve AISI's threat model, working with platform, research, and policy teams
Write detection rules, correlation logic, and hunt queries tailored to AISI's risk surface
Ensure relevant signals are logged, routed, and contextualised appropriately
Maintain detection playbooks, triage documentation, and escalation workflows
Act as a liaison between AISI engineering and DSIT's central SOC
Evaluate detection gaps and propose new signal sources or telemetry improvements
Extend the threat model to AI/ML: data/feature pipelines, training/finetuning, evaluations/release gates, registries, GPUs, and inference services
Develop detections for AI-specific risks: model weight custody/exfil (e.g., anomalous KMS decrypts, S3 access), registry tampering, dataset poisoning, training pipeline/image compromise, GPU abuse/cryptomining, and inference abuse (prompt injection/data exfil patterns, anomalous RAG connector access); a detection sketch follows this list
Integrate AI platform telemetry (e.g., SageMaker/Bedrock logs, model registry events, provenance/attestation)
Define hunts and correlations that tie AI safety/evaluation signals (red-team hits, eval regressions, release gate overrides) to security events and insider/outsider activity
Author and rehearse AI-focused incident playbooks (weights leak, compromised model artefacts, inference abuse campaigns) with DSIT SOC
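As an illustration of the weight-custody item above, here is a minimal sketch that counts KMS Decrypt events per principal in CloudTrail and flags outliers. The lookback window and threshold are illustrative assumptions, and a production detection would run as detection-as-code in the SIEM rather than as an ad hoc script:

    from collections import Counter
    from datetime import datetime, timedelta, timezone

    import boto3

    LOOKBACK_HOURS = 24   # illustrative window
    THRESHOLD = 500       # illustrative per-principal baseline; tune per estate

    def anomalous_kms_decrypts():
        """Count CloudTrail KMS Decrypt events per principal and flag outliers."""
        client = boto3.client("cloudtrail")
        end = datetime.now(timezone.utc)
        start = end - timedelta(hours=LOOKBACK_HOURS)
        counts = Counter()
        paginator = client.get_paginator("lookup_events")
        pages = paginator.paginate(
            LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "Decrypt"}],
            StartTime=start,
            EndTime=end,
        )
        for page in pages:
            for event in page["Events"]:
                counts[event.get("Username", "<unknown>")] += 1
        # Surface only principals whose decrypt volume exceeds the baseline
        return [(who, n) for who, n in counts.most_common() if n > THRESHOLD]

    if __name__ == "__main__":
        for who, n in anomalous_kms_decrypts():
            print(f"ALERT: {who} issued {n} KMS Decrypt calls in {LOOKBACK_HOURS}h")

A static threshold is only a starting point; the interesting work is correlating spikes like this with S3 access to weight buckets, identity context, and release-gate activity to keep signal quality high.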
Profile Requirements
Strong understanding of detection-as-code, MITRE ATT&CK, log pipelines, and cloud signal sources
Able to navigate outsourced SOC relationships while owning internal threat understanding
Familiarity with AWS CloudTrail, GuardDuty, KMS, S3 access logs, EKS/ECS audit, custom log ingestion; exposure to SageMaker/Bedrock or equivalent a plus
Curious, methodical, and proactive mindset
Practical grasp of AI/ML attack surfaces and telemetry needs (model registries, weights custody, GPU/accelerator fleets, inference gateways, vector stores)
Familiarity with AI threat frameworks (e.g., MITRE ATLAS, OWASP Top 10 for LLMs) desirable
Key Competencies
Detection engineering mindset focused on signal quality and measurable coverage
Familiarity with MITRE ATT&CK and detection-as-code practices