Software Engineer - AI
TomTom
Software Engineering, Data Science
Amsterdam, Netherlands
Posted on Jan 7, 2026
We are looking for an experienced AI Governance Engineer to own the technical governance of generative AI and machine learning tools across the enterprise. This is a high-impact, hands-on role that sits at the intersection of software engineering, security, compliance, and AI. You will translate complex policies into practical guardrails, evaluate and onboard new AI capabilities safely, and ensure our developers can move fast without breaking things (or regulations).
You are the bridge between “AI looks cool” and “AI is safe, compliant, and cost-effective at enterprise scale.”
What you’ll do
- Design, implement, and maintain the technical governance framework for all GenAI/LLM tools and integrations.
- Evaluate, approve, onboard, monitor, and retire third-party and open-source AI tools (LLMs, vector databases, agents, orchestration platforms, etc.).
- Build and operate usage monitoring, cost tracking, compliance dashboards, and automated audit trails.
- Create lightweight, developer-friendly processes for tool requests, risk assessments, exceptions, and approvals.
- Partner with Legal, Security, Compliance, Procurement, and Engineering to translate policies into enforceable technical controls.
- Identify and mitigate AI-specific risks: prompt injection, data leakage, model misuse, training-data exposure, etc.
- Develop and deliver training, playbooks, workshops, and documentation that get used.
- Stay ahead of evolving regulations, new attack vectors, and model capabilities.
- Continuously optimize the balance between innovation velocity, risk, cost, and compliance.
What you’ll need
- Strong software engineering background with deep understanding of development lifecycles, CI/CD, APIs, integrations, and versioning.
- Hands-on knowledge of modern GenAI/ML stack: LLMs, embeddings, vector stores, RAG, model endpoints, agents, and orchestration tools.
- Solid experience with at least one major cloud provider (Azure or AWS) – especially AI/ML services, networking, IAM, and governance features.
- Proven ability to interpret security, privacy, data-classification, and responsible AI policies and turn them into practical technical controls.
- Strong grasp of AI-specific security risks (prompt injection, jailbreaks, data exfiltration, inversion attacks, etc.) and mitigation patterns.
- Experience implementing identity & access governance (RBAC/ABAC, least privilege, audit logging) for AI systems.
- Demonstrated skill in cross-functional collaboration and communicating complex technical and risk concepts to non-technical stakeholders.
- Comfort in building processes, dashboards, and lightweight automation from scratch.
What’s nice to have
- Prior prompt engineering or red-teaming experience.
- Familiarity with enterprise AI orchestration platforms.
- Experience with policies-as-code, audit automation, or infrastructure-as-code approaches to governance.
- Contributions to responsible AI frameworks or previous work in highly regulated industries.