Machine Learning Engineer, Training Infrastructure at Hedra
Hedra
San Francisco, CA
Information Technology
Posted today
Job Description
About Hedra
Hedra is a pioneering generative media company backed by top investors at Index, A16Z, and Abstract Ventures. We're building Hedra Studio, a multimodal creation platform capable of control, emotion, and creative intelligence.

At the core of Hedra Studio is our Character-3 foundation model, the first omnimodal model in production. Character-3 jointly reasons across image, text, and audio for more intelligent video generation; it's the next evolution of AI-driven content creation.

At Hedra, we're a team of hard-working, passionate individuals seeking to fundamentally change content creation and build a generational company together. We value startup energy, initiative, and the ability to turn bold ideas into real products. Our team is fully in-person in SF/NY with a shared love for whiteboard problem-solving.

Overview
We are looking for an ML Engineer with 3+ years of experience in high-performance computing systems to manage and optimize our computational infrastructure for training and deploying our machine learning models. The ideal candidate has diverse experience managing ML workloads at scale, in support of our 3DVAE and video diffusion models.
We encourage you to apply even if you don't meet every requirement; we value curiosity, creativity, and the drive to solve hard problems.

Responsibilities
Design, implement, and maintain scalable computing solutions for training and deploying ML models, ensuring infrastructure can handle large video datasets.
Manage and optimize the performance of our computing clusters or cloud instances (e.g., AWS or Google Cloud) to support distributed training.
Ensure that our infrastructure can handle the resource-intensive tasks associated with training large generative models.
Monitor system performance and implement improvements to maximize efficiency and utilization, using tools like Airflow for orchestration.
Collaborate with research teams to understand their computational needs and provide appropriate solutions, facilitating seamless model deployment.

Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field, with a focus on system administration.
Experience with cloud computing platforms such as Amazon Web Services, Google Cloud, or Microsoft Azure, essential for managing large-scale ML workloads.
Commitment to sound engineering processes and version control (CI/CD).
Knowledge of containerization technologies such as Docker and Kubernetes, required for deployments at scale.
Understanding of distributed training techniques and how to scale models across multi-node clusters to meet video generation needs.
Strong problem-solving and communication skills, given the need to collaborate with diverse teams.

This role is vital for ensuring the computational backbone supports the company's ML efforts, with a focus on deployment and scalability.

Benefits
Competitive compensation + equity
401k (no match)
Healthcare (Silver PPO Medical, Vision, Dental)
Lunch and snacks at the office
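The distributed training work described above usually begins with deterministic data sharding, so that each worker in a multi-node job sees a disjoint slice of the dataset. Below is a minimal, framework-free sketch of strided sharding in the style of PyTorch's DistributedSampler; the function name shard_indices and the worker counts are illustrative, not taken from the posting.

```python
def shard_indices(num_samples: int, world_size: int, rank: int) -> list[int]:
    """Return the dataset indices assigned to one worker (rank) out of
    world_size workers, using a strided layout: rank, rank + world_size, ...
    Across all ranks, every index is covered exactly once per epoch."""
    if not 0 <= rank < world_size:
        raise ValueError("rank must be in [0, world_size)")
    return list(range(rank, num_samples, world_size))

# Example: 10 video clips spread across a hypothetical 4-worker job.
shards = [shard_indices(10, 4, r) for r in range(4)]
# Rank 0 gets indices [0, 4, 8], rank 1 gets [1, 5, 9], and so on.
```

In a real cluster job, rank and world_size would come from the launcher's environment (e.g., RANK and WORLD_SIZE), and each worker would load only its own shard.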
Resume Suggestions
Highlight relevant experience and skills that match the job requirements to demonstrate your qualifications.
Quantify your achievements with specific metrics and results whenever possible to show impact.
Emphasize your proficiency in relevant technologies and tools mentioned in the job description.
Showcase your communication and collaboration skills through examples of successful projects and teamwork.