Evaluation: Model Benchmarks and LLM Application Assessment
LLMOps Part 10: Understanding model benchmarks, LLM application evaluation, and tooling.
A collection of 14 posts
LLMOps Part 10: Understanding model benchmarks, LLM application evaluation, and tooling.
LLMOps Part 9: A foundational guide to the evaluation of LLM applications, covering challenges and a practical taxonomy of evaluation methods.
LLMOps Part 8: A concise overview of memory, dynamic and temporal context in LLM systems, covering short- and long-term memory, dynamic context injection, and common context failure modes in agentic applications.
LLMOps Part 7: A conceptual overview of context engineering, covering context types, context construction principles, and retrieval-centric techniques for building high-signal inputs.
LLMOps Part 6: An exploration of prompt versioning, defensive prompting, and techniques including verbalized sampling and role prompting.
LLMOps Part 5: An introduction to prompt engineering (a subset of context engineering), covering prompt types, the prompt development workflow, and key techniques in the field.
LLMOps Part 4: An exploration of key decoding strategies, sampling parameters, and the general lifecycle of LLM-based applications.
LLMOps Part 2: A detailed walkthrough of tokenization, embeddings, and positional representations, building the foundational translation layer that enables LLMs to process and reason over text.
LLMOps Part 1: An overview of AI engineering and LLMOps, and the core dimensions that define modern AI systems.
MLOps Part 5: A detailed walkthrough of data engineering for MLOps, covering data sources, format performance trade-offs, and ETL/ELT pipelines.
MLOps Part 4: A practical walkthrough of reproducibility workflows powered by Weights & Biases (W&B).
MLOps Part 3: A practical exploration of reproducibility and versioning, covering deterministic training, data and model versioning, and experiment tracking.
MLOps Part 2: A deeper look at the ML lifecycle, plus a minimal train-to-API and containerization demo using FastAPI and Docker.
MLOps Part 1: An introduction to machine learning in production, covering pitfalls, system-level concerns, and an overview of the full ML lifecycle.