Software Engineer — GPU Networking & Distributed Systems
Baseten • San Francisco, California, United States
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.
At Baseten, we are building the global operating system for distributed, heterogeneous AI hardware. We believe that as LLM and multi-modal workloads scale, the network is the computer. We are looking for foundational engineers to lead our GPU Networking efforts, making RDMA a first-class building block in our infrastructure and unlocking the next generation of distributed inference optimizations.
THE OPPORTUNITY
Networking and compute are no longer separate disciplines; they are converging. The massive throughput of H100, B200, and NVL72 architectures enables and demands a new approach where communication is co-optimized alongside computation. We are entering an era where the network is an active accelerator, leveraging smart hardware offloads and direct interconnects to ensure that data movement operates at wire-speed.
In this role, you will go beyond network configuration to architect the software fabric that unifies thousands of GPUs into a cohesive operating system. While you will leverage the best of the open-source ecosystem, you won't be limited by it. Where off-the-shelf solutions stop, you will build from scratch, engineering the primitives required to co-optimize communication and compute for Disaggregated Serving, Wide Expert Parallelism (WideEP), and lightning-fast cold starts.
WHAT YOU'LL DO
Make RDMA First-Class: You will integrate RDMA/RoCE/InfiniBand capabilities directly into our inference stack, helping us move beyond TCP/IP to unlock order-of-magnitude improvements in bandwidth and latency (a minimal sketch follows this list).
Optimize Distributed Inference: You will implement and tune the networking layers necessary for efficient Disaggregated KV Cache Offload and WideEP, ensuring seamless communication across NVLink and InfiniBand for our MoE models.
Enable Serverless-Grade Startup Speeds for LLMs: You will work deeply with checkpointing and storage mechanisms to enable sub-10-second startup for trillion-parameter models.
Deep-Dive into Hardware: You will characterize and validate networking performance on bleeding-edge clusters (H100/H200, B200/B300, GB200/GB300 NVL72), writing the acceptance tests that ensure our hardware delivers peak achievable throughput and minimal latency.
Build Observability: You will design the tools that let us visualize packet flow, congestion, and effective bandwidth across the GPU interconnects, helping us diagnose complex distributed system behaviors.
Optimize Kernels: You will work with communication libraries (NCCL, NVSHMEM) and potentially write custom communication kernels to overlap compute and data transfer (see the second sketch after this list).
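
To give a flavor of the primitives involved, here is a minimal, illustrative sketch (not Baseten's actual code) of registering GPU memory with an RDMA-capable NIC via libibverbs, so the NIC can DMA directly to and from HBM (GPUDirect RDMA). It assumes a protection domain has already been created and that GPUDirect support (e.g., the nvidia-peermem module) is loaded; error handling is omitted.

    #include <infiniband/verbs.h>
    #include <cuda_runtime.h>

    // Register a freshly allocated GPU buffer with the NIC. With GPUDirect
    // RDMA, ibv_reg_mr can pin device memory, letting the NIC read and write
    // HBM without staging through host DRAM (the "beyond TCP/IP" data path).
    ibv_mr* register_gpu_buffer(ibv_pd* pd, size_t bytes) {
      void* gpu_ptr = nullptr;
      cudaMalloc(&gpu_ptr, bytes);  // allocate directly in device HBM
      return ibv_reg_mr(pd, gpu_ptr, bytes,
                        IBV_ACCESS_LOCAL_WRITE |
                        IBV_ACCESS_REMOTE_WRITE |
                        IBV_ACCESS_REMOTE_READ);
    }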
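
And a rough sketch of the compute/communication overlap mentioned above, assuming an already-initialized NCCL communicator and two CUDA streams; the function and buffer names here are hypothetical, chosen for illustration.

    #include <cuda_runtime.h>
    #include <nccl.h>

    // Launch an all-reduce (e.g., reducing tensor-parallel partial results)
    // on a dedicated communication stream so the NIC and copy engines move
    // data while the SMs keep executing independent kernels on the compute
    // stream; synchronize only at the point where the result is consumed.
    void overlapped_step(ncclComm_t comm, float* partial_out, size_t count,
                         cudaStream_t comm_stream, cudaStream_t compute_stream) {
      ncclAllReduce(partial_out, partial_out, count, ncclFloat, ncclSum,
                    comm, comm_stream);
      // ... enqueue independent compute kernels on compute_stream here ...
      cudaStreamSynchronize(comm_stream);  // wait only when results are needed
    }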
WHO YOU ARE
You have deep experience with high-performance networking protocols (InfiniBand, RoCE v2) and understand the physics of data movement.
You are fluent in C++ or Python, with the ability to bridge the gap between high-level logic and hardware. You have a deep understanding of the memory hierarchy in modern NVIDIA architectures (H100/Blackwell) and know how to optimize for it.
You like going deep. You aren't afraid to dive into TensorRT-LLM source code, write custom C++ / Python bindings, or debug NVLink topology issues.
You know when to use an off-the-shelf solution and when to build a custom one because the upstream tools (like standard Kubernetes networking) are too slow for our needs.
HIGHLY PREFERRED
Deep knowledge of NCCL, NVSHMEM, and UCX.
Experience with GPUDirect Storage (GDS) or high-performance filesystems like Weka or 3FS.
Familiarity with TensorRT-LLM, vLLM, or SGLang.
Experience running low-level benchmarks to "qualify" new hardware clusters.
WHY JOIN THE MODEL PERFORMANCE TEAM?
Bleeding Edge Hardware: We are preparing to bring Blackwell (B200/B300) and then Rubin architectures online. You will be one of the first engineers in the industry optimizing networking for NVL72/GB300 racks.
We go deep: We operate at every depth. Whether it’s tuning hardware interconnects, writing custom communication kernels, or designing distributed inference strategies, we work across the entire stack to deliver performance that goes above and beyond.
High Impact: The networking optimizations you build will directly enable features that no one else in the industry has fully mastered yet, like seamless multi-node WideEP and instant model hydration.
BENEFITS
Competitive compensation, including meaningful equity.
100% coverage of medical, dental, and vision insurance for employees and dependents.
Generous PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).
Paid parental leave.
Company-facilitated 401(k).
Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.
Apply now to embark on a rewarding journey shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative, forward-thinking team, we would love to hear from you.
At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.