ColBERT and Late Interaction Retrieval: How It Works and When to Use It

A practical guide to ColBERT late interaction retrieval for ML engineers: why MaxSim scoring over per-token embeddings can outperform single-vector bi-encoders, using RAGatouille for indexing and search, two-stage retrieval with a bi-encoder first stage plus ColBERT reranking, fine-tuning ColBERT on domain-specific query-document triples with RAGTrainer, and when to use a bi-encoder vs ColBERT vs a cross-encoder for different RAG pipeline architectures.
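The core MaxSim idea can be sketched in a few lines of plain Python: for each query token embedding, take its best match over all document token embeddings, then sum. The vectors below are toy stand-ins for real per-token embeddings, not output of an actual ColBERT model.

```python
def maxsim_score(query_vecs, doc_vecs):
    """ColBERT-style late interaction: for each query token embedding,
    take its maximum dot-product similarity over all document token
    embeddings, then sum those per-token maxima."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return sum(max(dot(q, d) for d in doc_vecs) for q in query_vecs)

# Toy 2-d vectors standing in for real per-token embeddings.
query = [[1.0, 0.0], [0.0, 1.0]]
doc_a = [[0.9, 0.1], [0.1, 0.9]]   # covers both query tokens well
doc_b = [[0.9, 0.1], [0.8, 0.2]]   # redundant: ignores the second token

print(maxsim_score(query, doc_a))  # ≈ 1.8
print(maxsim_score(query, doc_b))  # ≈ 1.1
```

Because each query token is matched independently, doc_a wins even though both documents contain a strong match for the first token — exactly the fine-grained behaviour a single pooled vector cannot express.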

How to Compare Two Documents with a Local LLM

A practical guide to comparing documents with a local LLM using Ollama: a general compare_documents function with focus parameter, structured diff output using Pydantic with additions, removals, modifications, conflicts, and summary fields, a chunked comparison approach for long documents that exceed the context window, question-answering across two documents simultaneously, and specific use cases where local inference is essential including legal contracts, research papers, and policy documents.
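The chunked comparison step can be illustrated without a running model. The helpers below (hypothetical names, not from the guide itself) split each document on paragraph boundaries under a character budget and pair the chunks up; each pair would then be sent to the local model as one comparison prompt.

```python
from itertools import zip_longest

def chunk_text(text, max_chars=2000):
    """Split on paragraph boundaries, greedily filling each chunk up to
    max_chars so every piece fits inside the model's context window.
    A single paragraph longer than max_chars becomes its own chunk."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def paired_chunks(doc_a, doc_b, max_chars=2000):
    """Yield aligned (chunk_a, chunk_b) pairs for per-chunk comparison,
    padding the shorter document with empty strings."""
    return list(zip_longest(chunk_text(doc_a, max_chars),
                            chunk_text(doc_b, max_chars),
                            fillvalue=""))
```

Per-chunk diffs would then be merged into a single structured result, which is where the Pydantic output model from the guide comes in.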

Hard Negative Mining for Embedding Model Training

A practical guide to hard negative mining for ML engineers training embedding models: why random negatives produce weak gradient signal, BM25-mined hard negatives with rank_bm25, embedding-mined negatives with FAISS and sentence-transformers, cross-encoder filtering to identify the hardest candidates, training with MultipleNegativesRankingLoss, and iterative mining pipelines used by state-of-the-art models like E5 and BGE.
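The mining logic itself is simple once embeddings exist. This toy sketch (plain Python, dummy vectors in place of real FAISS/sentence-transformers output) ranks a corpus by similarity to the query and keeps the top-scoring passages that are not the labelled positive:

```python
def mine_hard_negatives(query_vec, corpus, positive_id, k=2):
    """Rank corpus passages by similarity to the query and keep the
    top-scoring ones that are NOT the labelled positive: these near-miss
    passages give a much stronger gradient signal than random negatives."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    ranked = sorted(corpus.items(),
                    key=lambda item: dot(query_vec, item[1]),
                    reverse=True)
    return [pid for pid, _ in ranked if pid != positive_id][:k]

corpus = {
    "pos":    [0.95, 0.05],   # the labelled positive
    "hard1":  [0.90, 0.10],   # topically close: a good hard negative
    "hard2":  [0.80, 0.20],
    "random": [0.05, 0.95],   # easy negative, little training signal
}
print(mine_hard_negatives([1.0, 0.0], corpus, "pos"))  # ['hard1', 'hard2']
```

In a real pipeline the dot products come from an ANN index, and a cross-encoder then filters out false negatives before the triples reach MultipleNegativesRankingLoss.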

How to Use Ollama with Go

A complete guide to the official Ollama Go library: installing with go get, streaming chat with the callback handler, accumulating a non-streaming response, raw completion via the generate endpoint, generating embeddings with nomic-embed-text, listing and pulling models with progress callbacks, connecting to a remote Ollama server with a custom client URL, and building a full multi-turn CLI chatbot with conversation history.

How to Use HuggingFace Fast Tokenizers Efficiently

A practical guide to HuggingFace fast tokenizers for ML engineers: how Rust-backed fast tokenizers differ from slow Python tokenizers, using offset mappings for NER and QA span alignment, high-throughput batched tokenisation with datasets.map and multiprocessing, sliding window tokenisation for long documents with stride and overflow, training a custom BPE vocabulary with the tokenizers library, and debugging gotchas around special tokens and sequence pair handling.
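The sliding-window behaviour is easy to demonstrate without loading a tokenizer. This plain-Python sketch mimics what `return_overflowing_tokens` with a `stride` does over a list of token ids: each window overlaps the previous one by `stride` tokens so no answer span is lost at a boundary (illustration only; the real API also handles special tokens and offset mappings).

```python
def sliding_windows(token_ids, max_length=8, stride=2):
    """Emit max_length-sized windows over token_ids, each overlapping
    the previous window by `stride` tokens, mirroring the overflow
    behaviour of fast tokenizers on long documents."""
    if len(token_ids) <= max_length:
        return [token_ids]
    windows, start = [], 0
    step = max_length - stride
    while start < len(token_ids):
        windows.append(token_ids[start:start + max_length])
        if start + max_length >= len(token_ids):
            break
        start += step
    return windows

ids = list(range(20))
for w in sliding_windows(ids, max_length=8, stride=2):
    print(w)
# [0, 1, 2, 3, 4, 5, 6, 7]
# [6, 7, 8, 9, 10, 11, 12, 13]
# [12, 13, 14, 15, 16, 17, 18, 19]
```

Note the two-token overlap between consecutive windows — that overlap is exactly what lets a QA model see a span that straddles a window boundary.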

How to Use Local AI with Obsidian: Smart Notes Without the Cloud

A guide to connecting Ollama with Obsidian for fully local AI-assisted note-taking: the Ollama community plugin for per-note summarisation and action item extraction, Smart Connections plugin for semantic indexing of the entire vault with nomic-embed-text and vault-wide RAG chat, Text Generator plugin via the OpenAI-compatible endpoint, a practical meeting notes workflow, building a queryable personal knowledge base, hardware recommendations, and getting started with just two ollama pulls.

IA3 vs LoRA: Choosing a Parameter-Efficient Fine-Tuning Method

A practical comparison of IA3 and LoRA for ML engineers: how IA3 activation scaling works versus LoRA weight updates, when each method wins (data volume, task type, adapter size), implementing IA3 with HuggingFace PEFT for classification and causal LM tasks, combining IA3 with 4-bit quantisation on consumer GPUs, and a decision framework for choosing between PEFT methods in production fine-tuning projects.
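The adapter-size difference is worth seeing in numbers. This back-of-the-envelope sketch counts trainable parameters per transformer layer; the dimensions are illustrative (loosely modelled on a 7B-class model), not measurements from the article.

```python
def lora_params(d_in, d_out, r):
    """LoRA adds two low-rank matrices per adapted weight matrix:
    A (d_in x r) and B (r x d_out)."""
    return r * (d_in + d_out)

def ia3_params(d_kv, d_ff):
    """IA3 learns one scaling vector each for keys, values, and the
    FFN intermediate activations: no added matrices at all."""
    return d_kv + d_kv + d_ff

# Illustrative sizes loosely modelled on a 7B-class transformer layer.
d_model, d_ff, r = 4096, 11008, 8

per_layer_lora = 4 * lora_params(d_model, d_model, r)  # q, k, v, o projections
per_layer_ia3 = ia3_params(d_model, d_ff)

print(per_layer_lora)  # 262144
print(per_layer_ia3)   # 19200
```

Even at a modest rank, LoRA adapts roughly an order of magnitude more parameters per layer than IA3 — which is precisely the data-volume trade-off the decision framework turns on.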

How to Use Ollama with JavaScript and Node.js

A complete guide to the official Ollama npm package in Node.js: installing with npm/yarn/bun, generate and chat with stream:false and stream:true, multi-turn CLI chatbot with readline, generating embeddings and computing cosine similarity, model management including pull with progress, delete, and ps, connecting to a remote Ollama server with a custom client, structured output using Zod schema passed directly to the format parameter, and image input for vision models.

Sequence Packing for LLM Training: Eliminating Padding Waste

A practical guide to sequence packing for ML engineers training LLMs: measuring padding waste and estimating speedup, greedy packing implementation with EOS separation, the attention leakage problem in naive packing, document-aware attention masks with Flash Attention cu_seqlens, TRL SFTTrainer packing configuration, and how to verify packing efficiency and model quality after implementation.
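The greedy step can be sketched in a few lines. This first-fit-decreasing packer (a plain-Python illustration, not the guide's exact implementation) fills each training row with whole EOS-terminated sequences until the next one would overflow:

```python
def greedy_pack(sequences, max_length, eos_id=2):
    """First-fit-decreasing packing: place each EOS-terminated sequence
    into the first row that still has room, opening a new row only when
    none fits. Rows never exceed max_length, so padding waste shrinks."""
    bins = []
    for seq in sorted(sequences, key=len, reverse=True):
        item = seq + [eos_id]  # EOS separates documents within a row
        for b in bins:
            if sum(len(s) for s in b) + len(item) <= max_length:
                b.append(item)
                break
        else:
            bins.append([item])
    return [[tok for s in b for tok in s] for b in bins]

# Four sequences that would naively need four padded rows of 16 tokens.
seqs = [[1] * 10, [1] * 6, [1] * 3, [1] * 3]
packed = greedy_pack(seqs, max_length=16)
print([len(row) for row in packed])  # [15, 11] — two rows instead of four
```

Note this sketch only concatenates tokens; the attention-leakage fix the summary mentions (document-aware masks via cu_seqlens) is a separate step on top of the packed rows.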

Ollama REST API Reference: Every Endpoint with Examples

A complete Ollama REST API reference with curl examples for every endpoint: health check, /api/generate with streaming and options, /api/chat with multi-turn history and structured output format parameter, /api/embeddings, /api/tags to list models, /api/pull with progress streaming, /api/delete, /api/copy, /api/create from a Modelfile string, /api/ps for loaded models, /api/show for model details, and the OpenAI-compatible /v1/chat/completions, /v1/models, and /v1/embeddings endpoints.