A frame from the talk

Taming LLMs with Structured Outputs

At DevIgnition 2024, I gave a talk on using structured outputs to get better responses from large language models (LLMs).

I was particularly enthusiastic about this topic. Having worked with LLMs for a while now, I've found that structured outputs — getting the model to respond in a predictable, machine-readable format rather than free-form prose — can be a game-changer when integrating LLMs into your software.
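To give a flavor of the idea (this is a minimal sketch, not code from the talk — the `Invoice` schema and field names are purely illustrative): instead of scraping answers out of free-form text, you prompt the model to emit JSON matching a fixed schema, then validate it into a typed object on receipt.

```python
import json
from dataclasses import dataclass


# Illustrative schema: we ask the LLM to return JSON with
# exactly these fields instead of a free-form answer.
@dataclass
class Invoice:
    vendor: str
    total: float


def parse_invoice(llm_output: str) -> Invoice:
    """Validate the model's JSON response against the expected shape."""
    data = json.loads(llm_output)  # raises ValueError on malformed JSON
    return Invoice(vendor=str(data["vendor"]), total=float(data["total"]))


# A well-formed response parses cleanly into a typed object:
reply = '{"vendor": "Acme Corp", "total": 129.99}'
invoice = parse_invoice(reply)
print(invoice.vendor)  # → Acme Corp
```

If the model's output doesn't match the schema, parsing fails loudly at the boundary instead of producing silent garbage downstream — that predictability is what makes structured outputs so useful for integration.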

You can watch a recording of the talk here:

If you'd prefer to just skim through the slides instead, you can do that as well:

This was a big personal accomplishment for me: it was my first time giving a technical presentation of this kind, and my first at this scale.

I hope you find the talk informative!