
Webinar
Navigating Post-Training for Coding LLMs
Nov 27, 2024 at 17:00 CET · Online
Join us for an exclusive panel discussion featuring experts from two leading AI companies: Aleksei Petrov (Founding Engineer, poolside) and Boris Yangel (Head of AI R&D, Nebius). They will dive deep into fine-tuning coding large language models (LLMs), revealing the strategies, tools, and best practices that drive optimal model performance, from metric tracking and model alignment to handling real-world challenges.
Agenda and key topics
What you’ll learn:
Which use cases for code models and agent-based systems need trajectory evaluation, and why
Which model skills are essential for autonomous behavior and solving long-horizon tasks in code
How trajectory annotation assists in training and evaluating models
How to ensure the safety of models and agent-based systems
When to use synthetic data, when to get experts involved in data annotation, and how hybrid annotation works
You’ll also find out where researchers are focusing their efforts and what kinds of breakthrough products we can expect next.




