Senior Software Engineer - Research
Lightricks • Haifa

Lightricks is an AI-first company building next-generation content creation technology for businesses, enterprises, and studios, with a mission to bridge the gap between imagination and creation. At our core is LTX-2, an open-source generative video model built to deliver expressive, high-fidelity video at unmatched speed. It powers both our own products and a growing ecosystem of partners through API integration.
The company is also known globally for pioneering consumer creativity through products like Facetune, one of the world’s most recognized creative brands, which helped introduce AI-powered visual expression to hundreds of millions of users worldwide. We combine deep research, user-first design, and end-to-end execution, from concept to final render, to bring the future of expression to all.
Why Join Us
We’re here to push the boundaries of what’s possible with AI and video - not for the buzz, but for the craft, the challenge, and the chance to make something genuinely new.
We believe in an environment where people are encouraged to think, create, and explore. Real impact happens when people are empowered to experiment, evolve, and elevate together.
At Lightricks, every breakthrough starts with great people and a collaborative mindset. If you're looking for a place that combines deep tech, creative energy, and a zero-buzzword culture, you might be in the right place.
What you will be doing
As an ML Software Engineer with a focus on low-level and CUDA-based optimizations, you will play a key role in shaping the design, performance, and scalability of Lightricks’ machine learning inference systems. You’ll work on deeply technical challenges at the intersection of GPU acceleration, systems architecture, and ML deployment.
Your expertise in CUDA, C/C++, and performance tuning will be crucial in enhancing runtime efficiency across heterogeneous computing environments. You’ll collaborate with designers, researchers, and backend engineers to build production-grade ML pipelines that are optimized for latency, throughput, and memory use, contributing directly to the infrastructure powering Lightricks' next-generation AI products.
This role is ideal for an engineer with strong systems-level thinking, deep familiarity with GPU internals, and a passion for pushing the boundaries of performance and efficiency in machine learning infrastructure.
Responsibilities
- Design and implement highly optimized GPU-accelerated ML inference systems using CUDA and low-level parallelism techniques
- Optimize memory, compute, and data flow to meet real-time or high-throughput constraints
- Improve the performance, reliability, and observability of our inference backend across diverse compute targets (CPU/GPU)
- Collaborate with cross-functional teams (including researchers, developers, and designers) to deliver efficient and scalable inference solutions
- Contribute to ComfyUI and internal infrastructure to improve the usability and performance of model execution flows
- Investigate performance bottlenecks at all levels of the stack, from Python down to kernel-level execution
- Navigate and enhance a large, complex, production-grade codebase
- Drive innovation in low-level system design to support future ML workloads
Your Skills and Experience
- 5+ years of experience in high-performance software engineering
- Advanced proficiency in CUDA, C/C++, and Python, especially in production environments
- Deep understanding of GPU architecture, memory hierarchies, and optimization techniques
- Proven track record of optimizing compute-intensive systems
- Strong system architecture fundamentals, especially around performance, concurrency, and parallelism
- Ability to independently lead deep technical investigations and deliver clean, maintainable solutions
- Collaborative and team-oriented mindset, with experience working across functional teams
Preferred Requirements
- Experience with low-level profiling and debugging tools (e.g., Nsight, perf, gdb, VTune)
- Familiarity with machine learning frameworks (e.g., PyTorch, TensorRT, ONNX Runtime)
- Contributions to performance-critical open-source or ML infrastructure projects
- Experience with cloud infrastructure and GPU scheduling at scale