Machine Learning Engineer at Relace
Job Description
Relace is building the models and infrastructure that code agents reach for. We power the fastest model on OpenRouter (10,000 tok/s) and deliver optimized small language models designed for retrieval, application, and core code generation functions.
Our technology supports some of the world's fastest-moving companies - including Lovable, Figma, and Vercel - as they deploy and scale code generation to hundreds of millions of users. We recently raised our Series A from a16z, and we're growing quickly.
Our team is made up of mathematicians, physicists, and computer scientists who are deeply passionate about their craft. If you thrive on ambitious technical problems, care about elegant systems design, and want to build the foundation of how code gets written at scale, this is the place for you.
The Role
We're looking for a Machine Learning Engineer who loves getting close to the metal. This is a hands-on engineering role focused on making models faster, more efficient, and more reliable through low-level optimizations and smart systems design.
The ideal candidate is excited by CUDA kernels, memory layouts, GPU scheduling, and squeezing performance out of complex training and inference workloads. They should be just as comfortable optimizing compute and networking paths as they are working alongside research teams to productionize new architectures.
This is a role for someone who enjoys deep performance tuning, understands the realities of running large-scale ML systems, and thrives in fast-moving, high-leverage environments.
Requirements
Strong background in systems-level ML engineering.
Experience with CUDA, GPU kernel optimization, and performance tuning.
Fluency in Python and at least one systems language (C++ or Rust preferred).
Familiarity with distributed training frameworks (e.g., PyTorch, JAX, DeepSpeed, or similar).
Experience working with large-scale training or inference infrastructure.
Understanding of memory management, parallelization, and hardware-aware model optimization.
2+ years of experience working in ML infrastructure or performance-critical environments.
Willingness to work in-person from our SF office in FiDi.