Senior Engineering Manager – Akamai Inference Cloud

Company: Akamai
Location:
Seniority: Senior
Salary:
Skills: Management
Work model: Remote
Type of employment:
Do you thrive on building the future of AI infrastructure?

Are you ready to lead a world-class team at the intersection of AI and edge computing?

Join the Akamai Inference Cloud Team!

The Akamai Inference Cloud team is part of Akamai’s Cloud Technology Group. We build AI platforms for efficient, compliant, and high-performing applications. These platforms support customers in running inference models and empower developers to create advanced AI solutions effectively. #AIC

Partner with the best

As a Senior Engineering Manager, you will build and lead a high-performing team of platform and ML engineers. Your team will create a global AI inference platform that offers OpenAI-compatible endpoints and manages workloads across regions. This leadership role requires technical depth, architectural insight into AI platforms, and a proven track record of delivering AI/ML products at scale.

As a Senior Engineering Manager, you will be responsible for:

  • Building and scaling a world-class engineering team from the ground up, recruiting top talent in AI infrastructure and ML operations.
  • Leading the technical strategy for a global AI inference platform that is performant, compliant, economical, and explainable.
  • Ensuring availability, performance, scalability, and security within the Akamai Inference Cloud environment.
  • Designing global traffic orchestration for AI workloads and establishing platform standards and blueprints for production-grade AI applications.
  • Evaluating AI tools thoroughly while ensuring platform adherence to FedRAMP, GDPR, SOX, and various other regulatory standards.

Do what you love

To be successful in this role you will:

  • Have a track record of building and scaling high-performing engineering teams that have shipped successful AI/ML products.
  • Bring hands-on experience with AI inference optimization, model serving, and LLM deployment at scale with deep knowledge of inference frameworks (TensorRT, vLLM, TorchServe, Triton).
  • Implement containerization strategies for AI workloads, optimize for specific hardware, and demonstrate expertise in cloud-native technologies such as Kubernetes and globally distributed systems.
  • Demonstrate expertise in building highly available, low-latency platforms with strict SLOs and cost optimization strategies for compute-intensive AI workloads.
  • Have experience with AI application platforms (RAG, agents, fine-tuning).
  • Possess knowledge of AI safety and explainability frameworks.
  • Show familiarity with GPU infrastructure and hardware acceleration.

Build your career at Akamai

Our ability to shape digital life today relies on developing exceptional people like you. The kind that can turn impossible into possible. We’re doing everything we can to make Akamai a great place to work. A place where you can learn, grow and have a meaningful impact.

With our company moving so fast, it’s important that you’re able to build new skills, explore new roles, and try out different opportunities. There are so many different ways to build your career at Akamai, and we want to support you as much as possible. We have all kinds of development opportunities available, from programs such as GROW and Mentoring, to internal events like the APEX Expo and tools such as LinkedIn Learning, all to help you expand your knowledge and experience here.

Learn more

Not sure if this job is the right match for you, or want to learn more before you apply? Schedule a 15-minute exploratory call with the recruiter, who will be happy to share more details.