Machine Learning Engineer at Relace

Relace (posting no longer available)

Job Description

About Us

Relace is building the models and infrastructure that code agents reach for. We power the fastest model on OpenRouter (10,000 tok/s) and deliver optimized small language models designed for retrieval, application, and core code generation functions.

Our technology supports some of the world's fastest-moving companies - including Lovable, Figma, and Vercel - as they deploy and scale code generation to hundreds of millions of users. We recently raised our Series A from a16z, and we're growing quickly.

Our team is made up of mathematicians, physicists, and computer scientists who are deeply passionate about their craft. If you thrive on ambitious technical problems, care about elegant systems design, and want to build the foundation of how code gets written at scale, this is the place for you.

The Role

We're looking for a Machine Learning Engineer who loves getting close to the metal. This is a hands-on engineering role focused on making models faster, more efficient, and more reliable through low-level optimizations and smart systems design.

The ideal candidate is excited by CUDA kernels, memory layouts, GPU scheduling, and squeezing performance out of complex training and inference workloads. They should be just as comfortable optimizing compute and networking paths as they are working alongside research teams to productionize new architectures.

This is a role for someone who enjoys deep performance tuning, understands the realities of running large-scale ML systems, and thrives in fast-moving, high-leverage environments.

Requirements
  • Strong background in systems-level ML engineering.

  • Experience with CUDA, GPU kernel optimization, and performance tuning.

  • Fluency in Python and at least one systems language (C++ or Rust preferred).

  • Familiarity with distributed training frameworks (e.g., PyTorch, JAX, or DeepSpeed).

  • Experience working with large-scale training or inference infrastructure.

  • Understanding of memory management, parallelization, and hardware-aware model optimization.

  • 2+ years of experience working in ML infrastructure or performance-critical environments.

  • Willingness to work in-person from our SF office in FiDi.

Salary & Outlook

National labor data (BLS 2024) for Computer and Information Research Scientists:

  • Expected mid-level pay: $141k
  • Market range (10th-90th percentile): $81k to $232k
  • Senior roles pay 76% more than entry-level roles; experience is well rewarded.
  • Projected 10-year job growth: +20%
  • Typical education: master's degree; advanced degrees are common in this field.

Market outlook: balanced, with high demand and responsive wages. Wage leverage is moderate, and it is stronger for credentialed candidates than for self-taught practitioners. Mobility is low: few adjacent roles exist, so switching employers is harder. Areas to negotiate include base salary, sign-on bonus, title/level, remote flexibility, scope and responsibility, and start date/PTO. Both the field and earnings can grow significantly; with demand expected to outpace supply, it is a good time to build expertise.