Onehouse

Staff Software Engineer, Distributed Data Systems (India)

Onehouse • IN
Java • C++ • Hybrid
About Onehouse
Onehouse is a mission-driven company dedicated to freeing data from data platform lock-in. We deliver the industry's most interoperable data lakehouse through a cloud-native managed service built on Apache Hudi. Onehouse enables organizations to ingest data at scale with minute-level freshness, centrally store it, and make it available to any downstream query engine and use case (from traditional analytics to real-time AI/ML).

We are a team of self-driven, inspired, and seasoned builders who have created large-scale data systems and globally distributed platforms that sit at the heart of some of the largest enterprises out there, including Uber, Snowflake, AWS, LinkedIn, Confluent, and many more. Fresh off a $35M Series B backed by Craft, Greylock, and Addition Ventures, we're now at $68M in total funding and looking for rising talent to grow with us and become future leaders of the team. Come help us build the world's best fully managed and self-optimizing data lake platform!

The Community You Will Join
When you join Onehouse, you're joining a team of passionate professionals tackling the deeply technical challenges of building a two-sided engineering product. Our engineering team serves as the bridge between the worlds of open source and enterprise: contributing directly to and growing Apache Hudi (already used at scale by global enterprises like Uber, Amazon, and ByteDance) while concurrently defining a new industry category - the transactional data lake. The Data Infrastructure team is the heartbeat of all of this. We live and breathe databases, building cornerstone infrastructure by working under Hudi's hood to solve incredibly complex optimization and systems problems.

The Impact You Will Drive:

  • As a foundational member of the Data Infrastructure team, you will productionize the next generation of our data tech stack by building the software and data features that process all of the data we ingest.
  • Accelerate our open source <> enterprise flywheel by working on the guts of Apache Hudi's transactional engine and optimizing it for diverse Onehouse customer workloads.
  • Act as a subject-matter expert (SME) to deepen our team's expertise on database internals, query engines, storage, and/or stream processing.

A Typical Day:

  • Design new concurrency control and transactional capabilities that maximize throughput for competing writers.
  • Design and implement new indexing schemes, specifically optimized for incremental data processing and analytical query performance.
  • Design systems that help scale and streamline metadata and data access from different query/compute engines.
  • Solve hard optimization problems to improve the efficiency (higher performance, lower cost) of distributed data processing algorithms running on a Kubernetes cluster.
  • Leverage data from existing systems to find inefficiencies, and quickly build and validate prototypes.
  • Collaborate with other engineers to implement, deploy, and safely roll out optimized solutions in production.

What You Bring to the Table:

  • Strong object-oriented design and coding skills (Java and/or C/C++, preferably on a UNIX or Linux platform).
  • Experience with inner workings of distributed (multi-tiered) systems, algorithms, and relational databases.
  • You embrace ambiguous/undefined problems with an ability to think abstractly and articulate technical challenges and solutions.
  • An ability to prioritize across feature development and tech debt with urgency and speed.
  • An ability to solve complex programming/optimization problems.
  • An ability to quickly prototype optimization solutions and analyze large/complex data.
  • Experience running production services at scale.
  • Robust and clear communication skills.

Nice to Haves (But Not Required):

  • Experience working with database systems, query engines, or Spark codebases.
  • Experience in optimization mathematics (linear programming, nonlinear optimization).
  • Publications on optimizing large-scale data systems in top-tier distributed systems conferences.
  • PhD with 2+ years of industry experience solving and delivering high-impact optimization projects.