DeepMind

Research Scientist, Tech Lead Manager, Model Threat Mitigation at DeepMind

DeepMind · San Francisco, CA

Job Description

Snapshot

Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and we collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

About Us

Model distillation is a key innovation enabling the acceleration of AI, turning large general models into small, specialized models used across the industry. However, distillation techniques can also be used to steal critical model capabilities, representing a significant threat to the intellectual property and integrity of our foundational models.
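For context, the snippet below is a minimal, generic sketch of knowledge distillation as commonly practiced: a small student model is trained to imitate a larger teacher model's output distribution via a temperature-scaled KL-divergence loss. The models, data, and hyperparameters are illustrative placeholders, not a description of Gemini or of Google DeepMind's training pipelines.

```python
# Minimal, illustrative sketch of knowledge distillation.
# All model sizes, data, and hyperparameters below are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Temperature-scaled KL divergence between teacher and student outputs."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

# Hypothetical models: a large teacher and a much smaller student.
teacher = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 1000)).eval()
student = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1000))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

for _ in range(100):                        # stand-in for iterating over a real dataset
    batch = torch.randn(32, 512)            # placeholder input features
    with torch.no_grad():
        teacher_logits = teacher(batch)     # teacher predictions, no gradients needed
    student_logits = student(batch)
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The same training recipe, pointed at another party's model outputs rather than one's own teacher, is what makes unauthorized distillation a capability-theft concern.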
The Role

As the Tech Lead Manager for Model Threat Mitigation, you will lead the workstream dedicated to protecting Google DeepMind's most valuable AI assets. You will grow and manage the current team of Research Scientists, Research Engineers, and Software Engineers responsible for both detecting unauthorized distillation and mitigating these threats.

This is a unique opportunity to define the comprehensive defense strategy for our models. You will oversee the full lifecycle of defense, from researching novel detection systems and identifying capability theft to deploying core mitigations and contributing to model training.

Key Responsibilities

Lead and Manage the Team: Build, manage, and mentor a diverse team of researchers and engineers. Foster a culture of technical excellence and creative problem-solving in a high-stakes adversarial environment.

Define Technical Strategy: Set the research and engineering roadmap for the Distillation Defense workstream, and communicate it to stakeholders and your team.

Drive Impact & Policy: Lead the effort to set protective policies for Core Model Capabilities. Work with leadership to define acceptable risk levels and trade-offs between model defensibility and performance.

Cross-Functional Collaboration: Partner closely with Product, Legal, and other key teams across GDM and Google to ensure defenses are integrated into the Gemini ecosystem and to influence the broader standard for responsible AI defense.

About You

We are looking for an experienced technical leader who is passionate about securing the future of AI. You thrive on ambiguity and can translate complex research questions into concrete engineering roadmaps. You are an expert in thinking like the adversary and are driven to build robust, resilient systems. You have a proven track record of leading teams to land research impact or ship production systems in a multi-team environment.

Minimum qualifications:

Ph.D. in Computer Science or a related quantitative field, or a B.S./M.S. in a similar field with 5 years of relevant industry experience.

2 years of experience in technical leadership or people management, managing research scientists or software engineers.

Demonstrated research or product expertise in a field related to adversarial ML, model security, model evaluation, pre-training, post-training, or distillation.

Experience designing and implementing large-scale ML systems and/or counter-abuse infrastructure.

Preferred qualifications:

Deep technical expertise in one or more of the following: model distillation, model stealing, Reinforcement Learning, Supervised Fine-Tuning, embeddings analysis, or security.

Experience managing research teams with a focus on publication or on applying novel research to production environments.

Experience interacting with leadership and cross-functional partners (e.g., Legal, Product).

Strong software engineering skills and experience with ML frameworks such as JAX, PyTorch, or TensorFlow.

Current or prior US security clearance.

The US base salary range for this full-time position is $248,000 - $349,000, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds, and perspectives, and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy or related condition (including breastfeeding), or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Resume Suggestions

Highlight relevant experience and skills that match the job requirements to demonstrate your qualifications.

Quantify your achievements with specific metrics and results whenever possible to show impact.

Emphasize your proficiency in relevant technologies and tools mentioned in the job description.

Showcase your communication and collaboration skills through examples of successful projects and teamwork.
