Description & Requirements
We are a technology consulting firm building and operating next-generation AI supercompute infrastructure for the world's most ambitious organizations. As Architect of Platform Engineering, you will own the full stack, from bare metal and operating system up through cluster orchestration, job scheduling, and observability across engagements with leading enterprise and public sector clients pushing the frontier of AI adoption.
As a repeatedly awarded NVIDIA Consulting Partner of the Year in EMEA, we hold one of the deepest and most recognized NVIDIA partnerships in the region. This gives our engineers privileged access to adoption programs and NVIDIA's engineering teams.
You will work with technology and at a scale that most engineers won't encounter for years.
This role sits at the intersection of deep technical ownership and client-facing leadership. You will shape platform strategy for clients, embed within their teams, and deliver outcomes that define how large-scale AI infrastructure is built and run. You will work in close partnership with NVIDIA to bring cutting-edge GPU architecture and software capabilities directly to client environments.
We are looking for a hands-on technical Architect who thrives in entrepreneurial, high-velocity environments.
What We Expect:
- 8+ years of hands-on infrastructure and platform engineering experience, including full ownership of production systems
- Deep Kubernetes expertise: cluster architecture, control plane operations, custom controllers/operators, multi-tenancy, and large-scale fleet management
- Experience with Slurm or other HPC/AI workload schedulers: job queuing, fair-share scheduling, and MPI integration
- Strong Linux internals knowledge: kernel tuning, cgroups, namespaces, NUMA topology, hugepages, and storage subsystems
- Familiarity with high-speed networking: InfiniBand, RoCE, RDMA; tuning for distributed training workloads
- Infrastructure as Code fluency: Terraform, Ansible, Helm or equivalent
- Demonstrated ability to lead technical engagements with enterprise clients: translating ambiguous requirements into clear deliverables, managing stakeholders across seniority levels, and navigating complex organizational dynamics
- Entrepreneurial mindset: comfortable operating with autonomy and moving fast without sacrificing rigor
Bonus Experience:
- Proven experience managing NVIDIA GPU infrastructure: driver lifecycle, CUDA toolchain, MIG/MPS partitioning, NVLink/NVSwitch topologies, and GPUDirect RDMA
- Familiarity with NVIDIA Base Command Platform, DGX SuperPOD, or CSP GPU cloud deployments
- Experience with DCGM or other GPU profiling and telemetry tooling
- Prior consulting, professional services, or client delivery experience in an infrastructure or cloud practice
- Contributions to open-source platform tooling or CNCF ecosystem projects
Area of Responsibility:
- Cluster Orchestration. Design, deploy, and operate Kubernetes and Slurm clusters at scale across client environments; own the full lifecycle from provisioning to decommissioning, including upgrades, rollbacks, and capacity planning
- Operating System Layer. Own OS hardening, kernel tuning, driver management (NVIDIA CUDA, OFED, MIG), and node lifecycle automation across heterogeneous GPU fleets
- Monitoring & Observability. Build and evolve monitoring, alerting, and telemetry stacks (Prometheus, Grafana, DCGM Exporter, OpenTelemetry) to deliver deep visibility into cluster health, GPU utilization, and job performance
- Reliability Engineering. Define SLOs, drive postmortem culture, and lead incident response for production AI compute infrastructure; treat reliability as a systemic, architectural property, not just dashboards and runbooks
- Platform Strategy & Advisory. Translate complex technical requirements into platform roadmaps and architectural recommendations tailored to each client's business context and maturity level
You will define how some of the most demanding AI compute environments in Europe and beyond are built and operated, across a portfolio of clients rather than a single environment. No two engagements are the same. The problems here (GPU fleet reliability at scale, distributed training fault tolerance, enterprise-grade platform governance) are unsolved at the frontier, and you will have our firm's senior network behind you to tackle them. If you want broad impact, technical depth, and the autonomy to build a platform engineering practice rather than just execute within one, this is the role.
What We Offer:
- Flexible working hours
- Permanent employment or contract
- Medical and health insurance
- Multisport and other lifestyle benefits
- Language courses
- Friendly coworkers & team spirit
- Multiple geographies and clients
- Work for well-known brands
- Exposure to trailblazing business and technology projects
- A place in the first line of a digital transformation
- Everyday opportunities to influence how and where we do our business
- A development path to fit your needs
#LI-MB3
