Alice & Bob

Calibration Data Infrastructure Engineer

Alice & Bob • FR
Hybrid
Alice & Bob is developing the first universal, fault-tolerant quantum computer to solve the world’s hardest problems.
The quantum computer we envision building is based on a new kind of superconducting qubit: the Schrödinger cat qubit 🐈‍⬛. In comparison to other superconducting platforms, cat qubits have the astonishing ability to implement quantum error correction autonomously!

We're a diverse team of 140+ brilliant minds from over 20 countries, united by a single goal: to revolutionise computing with a practical fault-tolerant quantum machine. Are you ready to take on unprecedented challenges and contribute to revolutionising technology? Join us, and let's shape the future of quantum computing together!

The Calibration team automates the calibration of our cat‑qubit Quantum Processing Unit (QPU) to maximize performance and keep the processor in working condition. Automatic calibrations generate large volumes of data: calibration results, error logs, performance metrics, and hardware diagnostics. Today, answering simple questions such as “What is the success rate of nightly recalibrations over the last month, and where did they fail?” requires tedious manual log gathering. As Calibration Data Infrastructure Engineer, you will design and implement the data infrastructure that makes these questions trivial to answer. You will build systems to store, organize, and query calibration results, enabling meta‑analysis of time series across performance, hardware failures, and data analysis issues. Your work will empower the team to quantify calibration execution at multiple levels and improve the reliability of our QPU operations.
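To make the shape of the problem concrete, here is a minimal sketch of the kind of query this infrastructure should make trivial: the nightly recalibration success rate over the last month, with a breakdown of where failures occurred. The table and column names are illustrative assumptions, not an existing Alice & Bob schema, and an in-memory SQLite database stands in for a production store.

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE calibration_runs (
        run_id INTEGER PRIMARY KEY,
        started_at TEXT,        -- ISO-8601 timestamp
        stage TEXT,             -- e.g. 'readout', 'gate', 'stabilization'
        succeeded INTEGER       -- 1 = success, 0 = failure
    )
""")

# Seed a few example rows; the last one falls outside the 30-day window.
now = datetime(2024, 6, 30)
rows = [
    (1, (now - timedelta(days=1)).isoformat(), "readout", 1),
    (2, (now - timedelta(days=2)).isoformat(), "gate", 0),
    (3, (now - timedelta(days=3)).isoformat(), "gate", 1),
    (4, (now - timedelta(days=40)).isoformat(), "readout", 0),
]
conn.executemany("INSERT INTO calibration_runs VALUES (?, ?, ?, ?)", rows)

cutoff = (now - timedelta(days=30)).isoformat()

# Overall success rate in the window: AVG of the 0/1 success flag.
success_rate, = conn.execute(
    "SELECT AVG(succeeded) FROM calibration_runs WHERE started_at >= ?",
    (cutoff,),
).fetchone()

# Failure breakdown by calibration stage.
failures = conn.execute(
    "SELECT stage, COUNT(*) FROM calibration_runs "
    "WHERE started_at >= ? AND succeeded = 0 GROUP BY stage",
    (cutoff,),
).fetchall()

print(success_rate)  # 2 of the 3 in-window runs succeeded
print(failures)      # [('gate', 1)]
```

In production this would likely sit behind an API or dashboard rather than raw SQL, but the point stands: once results are stored with timestamps, stage labels, and outcomes, these operational questions reduce to one-line queries.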

Responsibilities:

  • Design and implement a robust data storage and retrieval system for calibration results, error logs, and performance metrics.
  • Develop pipelines to automatically collect, normalize, and index calibration outputs for easy querying and meta‑analysis.
  • Build tools and APIs that allow scientists and engineers to quickly answer operational questions (success rates, failure points, drift statistics).
  • Implement time‑series analysis frameworks to track calibration dynamics, detect anomalies, and generate reports.
  • Establish standards for data schemas, provenance, retention, and reproducibility of calibration results.
  • Provide visibility through automated reporting on calibration performance, hardware reliability, and analysis quality.
  • Mentor engineers and contribute to long‑term strategy for calibration data infrastructure.
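As one illustration of the time-series responsibilities above, a drift check on a calibration metric might start as simple as a rolling z-score. The function, window size, and threshold below are illustrative choices for a sketch, not team standards.

```python
from statistics import mean, stdev

def flag_anomalies(values, window=5, threshold=3.0):
    """Return indices where a value deviates more than `threshold`
    standard deviations from the mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A stable coherence-like metric with one sudden drop at index 8.
readings = [50.1, 49.9, 50.0, 50.2, 49.8, 50.1, 50.0, 49.9, 35.0, 50.1]
print(flag_anomalies(readings))  # [8]
```

A production version would pull from the time-series database and feed automated reports, but the core loop, comparing each new calibration result against its recent history, is the same.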

Requirements:

  • 5+ years of experience in backend engineering, data infrastructure, or DevOps with production systems.
  • Strong proficiency in Python and experience with data engineering frameworks (Pandas, SQLAlchemy, Spark, or equivalent).
  • Expertise in time‑series databases (TimescaleDB, InfluxDB, Prometheus) and log aggregation and visualization systems (ELK stack, Grafana, or similar).
  • Proven track record in designing scalable data pipelines and APIs for scientific or hardware‑related data.
  • Experience with observability stacks (metrics, logs, traces) and building dashboards for technical users.
  • Familiarity with statistical analysis and anomaly detection; ability to collaborate with scientists on model integration.
  • Strong understanding of CI/CD, testing, and reproducibility in scientific or hardware‑in‑the‑loop environments.
  • Excellent communication skills and ability to translate operational needs into technical solutions.