Senior GenAI Research Engineer, Optimization and Kernels, at Databricks
Databricks
San Francisco, CA
Information Technology
Posted today
Job Description
At Databricks, we are obsessed with enabling data teams to solve the world's toughest problems, from security threat detection to cancer drug development. We do this by building and running the world's best data and AI platform, so our customers can focus on the high-value challenges that are central to their own missions.
The Mosaic AI organization enables companies to develop AI models and systems using their own data, with technologies ranging from pre-training LLMs from scratch to augmented generation using the latest retrieval techniques. Mosaic AI does so by producing novel science and putting it into production. Mosaic AI is committed to the belief that a company's AI models are just as valuable as any other core IP, and that high-quality AI models should be available to all.
As a research engineer on the Scaling team, you will be responsible for keeping up with the latest developments in deep learning and advancing the scientific frontier by creating new techniques that go beyond the state of the art. You will work on a collaborative team of researchers and engineers with diverse backgrounds and technical training. Most importantly, you will love our customers: our goal is to make our customers successful in applying state-of-the-art LLMs and AI systems, and we encode our scientific expertise into our products to make that possible.
The Impact You Will Have
As a research engineer on the Scaling team at Databricks, you will:
Drive performance improvements through advanced optimization techniques, including kernel fusion, mixed precision, memory layout optimization, tiling strategies, and tensorization for training-specific patterns.
Design, implement, and optimize high-performance GPU kernels for training workloads (e.g., attention mechanisms, custom layers, gradient computation, activation functions) targeting NVIDIA architectures.
Design and implement distributed training frameworks for large language models, including parallelism strategies (data, tensor, pipeline, ZeRO-based) and optimized communication patterns for gradient synchronization and collective operations.
Profile, debug, and optimize end-to-end training workflows to identify and resolve performance bottlenecks, applying memory optimization techniques such as activation checkpointing, gradient sharding, and mixed precision training (see the sketch after this description).
What We Look For
BS/MS/PhD in Computer Science or a related field, with hands-on experience writing and tuning CUDA kernels for ML training applications, or hands-on experience with distributed training frameworks (PyTorch DDP, DeepSpeed, Megatron-LM, FSDP).
Strong understanding of NVIDIA GPU architecture (memory hierarchy, tensor cores, warp scheduling, SM occupancy) and proficiency with CUDA debugging/profiling tools (Nsight, nvprof).
Deep understanding of parallelism techniques and memory optimization strategies for large-scale model training, with a proven ability to debug and optimize distributed workloads.
Strong software engineering skills in Python and PyTorch, with experience supporting production training workflows and knowledge of LLM training dynamics, including hyperparameter tuning and optimization strategies.
Required Experience: Senior IC
Employment Type: Full Time
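To make two of the memory-optimization techniques named above concrete, here is a minimal PyTorch sketch combining activation checkpointing with mixed precision training. It is illustrative only, not Databricks code: TinyModel, Block, the synthetic batch, and all hyperparameters are hypothetical, and only stock torch APIs (torch.utils.checkpoint, torch.autocast, GradScaler) are used.

```python
# Minimal sketch (hypothetical model and data, not Databricks code) of
# activation checkpointing + mixed precision training in stock PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint


class Block(nn.Module):
    """A hypothetical transformer-style feed-forward block."""

    def __init__(self, dim):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x):
        return x + self.ff(x)


class TinyModel(nn.Module):
    def __init__(self, dim=256, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList([Block(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, dim)

    def forward(self, x):
        for block in self.blocks:
            # Activation checkpointing: discard this block's intermediate
            # activations in the forward pass and recompute them during
            # backward, trading extra FLOPs for lower peak memory.
            x = checkpoint(block, x, use_reentrant=False)
        return self.head(x)


device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyModel().to(device)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
# GradScaler rescales the loss so fp16 gradients do not underflow;
# it is a no-op when CUDA is unavailable.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(8, 256, device=device)       # synthetic batch
target = torch.randn(8, 256, device=device)  # synthetic labels

# Mixed precision: run forward/backward math in reduced precision where
# numerically safe, while master weights and the optimizer step stay in fp32.
with torch.autocast(device_type=device):
    loss = F.mse_loss(model(x), target)
scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
opt.zero_grad(set_to_none=True)
```

In a real large-model training stack, these two techniques are typically composed with the sharding and parallelism strategies listed above (e.g., FSDP or ZeRO-based gradient sharding), which partition optimizer state and gradients across ranks rather than recomputing activations locally.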
Resume Suggestions
Highlight relevant experience and skills that match the job requirements to demonstrate your qualifications.
Quantify your achievements with specific metrics and results whenever possible to show impact.
Emphasize your proficiency in relevant technologies and tools mentioned in the job description.
Showcase your communication and collaboration skills through examples of successful projects and teamwork.