Help us use technology to make a big green dent in the universe!
Kraken powers some of the most innovative global developments in energy.
We’re a technology company focused on creating a smart, sustainable energy system. From optimising renewable generation and creating a more intelligent grid to enabling utilities to provide excellent customer experiences, our operating system for energy is transforming the industry around the world in a way that benefits everyone.
It’s a really exciting time in energy. Help us make a real impact on shaping a better, more sustainable future.
Kraken is the operating system for the energy transition. We help energy companies, utilities and system operators transform how they operate so the world can move faster towards a zero-carbon future. We build technology that solves real, messy problems at scale - across data, software and increasingly AI. Our teams move fast, take ownership, and care deeply about impact. AI is a key investment area for Kraken Technologies as we look to expand our existing capabilities. A crucial part of this is broadening the foundational infrastructure to enable teams across the organisation to use AI effectively to accelerate our mission.
You’ll work in the AI Foundations team which exists to enable AI across the entire company. We build the shared platforms, tooling and patterns that enable engineering & product teams to safely, reliably and efficiently use machine learning and generative AI across the business. This is not a research lab. This is a delivery-focused team that sits at the intersection of platform engineering, applied ML and developer enablement.
We’re hiring a Senior Software Engineer into a newly formed AI Foundations team. This is a growth hire to help Kraken engineers adopt new AI models and tools, and to build the internal tooling that makes AI/ML integration reliable, repeatable, and easy to use across teams. This role is focused on the day-to-day engineering of ML and AI (shipping and operating AI-enabled software) rather than building underlying models from scratch. You’ll work closely with the AI Foundations team, AI Foundry, customer-facing AI teams, and engineers across Kraken - providing guidance, building shared components, and accelerating AI adoption across the org.
What You'll Do
- Build and maintain Python services and tooling that support AI/ML use cases (e.g., APIs, integrations, automation, internal developer tools) and run reliably in production.
- Help engineers adopt new models and tools from an engineering perspective, sharing best practices, patterns, and practical guidance.
- Develop and evolve backend services (Django preferred), including business logic, ORM/data access patterns, admin tooling, and workflows.
- Operate in AWS: deploy, run, and support AI-enabled systems; make sensible architecture/cost tradeoffs; partner effectively with infra/DevOps stakeholders.
- Prototype and productionise LLM-powered features and integrations, using common LLM frameworks and MLOps tooling (see Tech Stack below).
- Improve observability and reliability using Datadog (metrics/logs/traces, dashboards/alerts) and help establish good monitoring practices as we scale.
- Communicate clearly across audiences: “talk tech to non-tech and vice versa,” produce strong documentation, and collaborate cross-functionally.
What Success Looks Like
- Engineers across Kraken can use new models and tools effectively, with clear engineering patterns, documentation, and reusable components.
- You’re actively involved in shipping and supporting AI/ML integration tooling and improving day-to-day engineering workflows around AI.
- Strong collaboration across AI Foundations, AI Foundry, and other engineering teams helps accelerate AI adoption across the organisation.
What We're Looking For
- Strong Python: senior/advanced capability designing components end-to-end, writing clean, idiomatic code, testing thoroughly, and debugging complex production issues.
- Solid software engineering fundamentals: system design, concurrency, code quality, testing strategy, maintainable architecture, and strong reasoning about tradeoffs.
- Cloud experience (AWS) running production services; comfortable owning reliability and scalability considerations and collaborating with platform/infra partners.
- Strong communication and collaboration across technical and non-technical stakeholders.
- Learning agility and drive: proven ability to ramp up quickly on new domains and tools and deliver in evolving AI environments.
Nice To Have
- Django experience (preferred) and strong backend engineering patterns (security, performance, maintainability).
- Familiarity with LLM frameworks and AI engineering tooling, such as Pydantic AI, LiteLLM, and LangChain, and with data/ML platforms like Databricks and MLflow.
- Experience with Datadog for observability and monitoring in production environments.
- Exposure to AWS Bedrock, which is part of our environment.