
The Difference Between a Base LLM and an Instruction-Tuned LLM

Introduction

Large language models (LLMs) can behave very differently depending on how they were trained.

Base LLMs are trained purely on next-token prediction over a large corpus of text. Instruction-tuned LLMs, by contrast, are further trained to follow prompts in a more helpful and structured way.
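In practice, this difference shows up in how you prompt each model. A base model is given raw text to continue, while an instruction-tuned model expects the conversation wrapped in a chat template. A minimal sketch of that contrast is below; the helper names are mine, and the ChatML-style tags are an illustrative assumption (the exact template is model-specific, and in real code the tokenizer's `apply_chat_template` method builds it for you):

```python
# Sketch of how the same request is typically presented to each model type.
# The ChatML-style tags below are an assumption for illustration; the exact
# chat template varies by model and is normally produced by
# tokenizer.apply_chat_template in the transformers library.

def build_base_prompt(text: str) -> str:
    """A base LLM simply continues raw text: the prompt is the text itself."""
    return text

def build_chat_prompt(user_message: str) -> str:
    """An instruction-tuned LLM expects the conversation wrapped in its
    chat template, marking who is speaking and where the reply begins."""
    return (
        "<|im_start|>user\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# The base model would likely continue the sentence ("... Paris."), while the
# instruction-tuned model answers the question as an assistant.
base_prompt = build_base_prompt("The capital of France is")
chat_prompt = build_chat_prompt("What is the capital of France?")

print(base_prompt)
print(chat_prompt)
```

The key point: the instruction-tuned model's extra training teaches it to treat the templated input as a request to answer, rather than as text to continue.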

To explore how these two types of language models behave, we will take a look at two models from Hugging Face's SmolLM family:

Interview Series: Working with an SRE

Preamble

In this insightful interview, Paul Bütow, a Principal Site Reliability Engineer at Mimecast, shares over a decade of experience in the field. Paul highlights the role of an Embedded SRE, emphasizing the importance of automation, observability, and effective incident management. We also focus on the key question of how you can work effectively with an SRE, whether you are an individual contributor or a manager, a software engineer or a data scientist, and how you can learn more about site reliability engineering.