Hard Negative Mining for Embedding Model Training

A practical guide to hard negative mining for ML engineers training embedding models: why random negatives produce weak gradient signal, BM25-mined hard negatives with rank_bm25, embedding-mined negatives with FAISS and sentence-transformers, cross-encoder filtering to identify the hardest candidates, training with MultipleNegativesRankingLoss, and iterative mining pipelines used by state-of-the-art models like E5 and BGE.
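The core idea above — score candidates lexically and keep the top-scoring non-positives as hard negatives — can be sketched in a few lines. This is a toy, self-contained Okapi BM25 implementation standing in for `rank_bm25`; the corpus, query, and `positive_idx` are illustrative, not from the article.

```python
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Okapi BM25 score of `query` against every tokenised doc in `corpus`."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    df = Counter(t for d in corpus for t in set(d))  # document frequencies
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores

corpus = [
    "how to train embedding models".split(),
    "hard negative mining improves retrieval training".split(),
    "recipe for chocolate cake".split(),
    "mining cryptocurrencies with gpus".split(),
]
query = "negative mining retrieval".split()
positive_idx = 1  # the known relevant document

scores = bm25_scores(query, corpus)
# Hard negatives: highest-scoring documents that are NOT the positive.
hard_negs = sorted(
    (i for i in range(len(corpus)) if i != positive_idx),
    key=lambda i: scores[i], reverse=True,
)
print(hard_negs[0])  # → 3: shares "mining" with the query, so lexically hard
```

Document 3 outranks the other negatives because it shares a query term without being relevant — exactly the kind of confusable example that gives a stronger gradient signal than a random negative.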

How to Use Ollama with Go

A complete guide to the official Ollama Go library: installing with go get, streaming chat with the callback handler, accumulating a non-streaming response, raw generate completion, generating embeddings with nomic-embed-text, listing and pulling models with progress callbacks, connecting to a remote Ollama server with a custom client URL, and building a full multi-turn CLI chatbot with conversation history.

How to Use HuggingFace Fast Tokenizers Efficiently

A practical guide to HuggingFace fast tokenizers for ML engineers: how Rust-backed fast tokenizers differ from slow Python tokenizers, using offset mappings for NER and QA span alignment, high-throughput batched tokenisation with datasets.map and multiprocessing, sliding window tokenisation for long documents with stride and overflow, training a custom BPE vocabulary with the tokenizers library, and debugging gotchas around special tokens and sequence pair handling.
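The stride/overflow behaviour mentioned above hides a common gotcha: in the fast tokenizers, `stride` is the number of tokens *shared* between consecutive windows, not the step size. A toy pure-Python reimplementation of that windowing logic (the function name is mine, not a library API):

```python
def sliding_windows(token_ids, max_length, stride):
    """Mimic return_overflowing_tokens with stride: each window holds up to
    max_length tokens and overlaps the previous one by `stride` tokens,
    so the step between window starts is max_length - stride."""
    assert 0 <= stride < max_length
    step = max_length - stride
    windows, start = [], 0
    while True:
        windows.append(token_ids[start:start + max_length])
        if start + max_length >= len(token_ids):
            break  # this window reached the end of the document
        start += step
    return windows

ids = list(range(10))
wins = sliding_windows(ids, max_length=4, stride=2)
print(wins)
# → [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

The two-token overlap means an answer span that straddles a window boundary is still fully contained in at least one window, which is why QA pipelines rely on a non-zero stride.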

How to Use Local AI with Obsidian: Smart Notes Without the Cloud

A guide to connecting Ollama with Obsidian for fully local AI-assisted note-taking: the Ollama community plugin for per-note summarisation and action item extraction, Smart Connections plugin for semantic indexing of the entire vault with nomic-embed-text and vault-wide RAG chat, Text Generator plugin via the OpenAI-compatible endpoint, a practical meeting notes workflow, building a queryable personal knowledge base, hardware recommendations, and getting started with just two ollama pulls.

IA3 vs LoRA: Choosing a Parameter-Efficient Fine-Tuning Method

A practical comparison of IA3 and LoRA for ML engineers: how IA3 activation scaling works versus LoRA weight updates, when each method wins (data volume, task type, adapter size), implementing IA3 with HuggingFace PEFT for classification and causal LM tasks, combining IA3 with 4-bit quantisation on consumer GPUs, and a decision framework for choosing between PEFT methods in production fine-tuning projects.
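The adapter-size contrast is easy to see with back-of-envelope arithmetic. A sketch with illustrative dimensions (d_model=768, d_ff=3072, LoRA rank r=8 — typical small-model values, not figures from the article):

```python
d_model, d_ff, r = 768, 3072, 8

# LoRA adds a low-rank update W + (alpha/r) * B @ A on chosen projections;
# adapting the query and value matrices costs A (r x d) + B (d x r) each.
lora_params = 2 * (d_model * r + r * d_model)

# IA3 instead learns one elementwise scaling vector per rescaled activation:
# h = l * (W @ x), applied to keys, values, and the FFN intermediate.
ia3_params = d_model + d_model + d_ff

print(lora_params)  # → 24576 per block
print(ia3_params)   # → 4608 per block
```

Per block, IA3 here trains roughly a fifth of the parameters LoRA does, which is the source of its tiny-adapter appeal; the trade-off is that pure activation scaling has less capacity than a rank-r weight update.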

How to Use Ollama with JavaScript and Node.js

A complete guide to the official Ollama npm package in Node.js: installing with npm/yarn/bun, generate and chat with stream:false and stream:true, multi-turn CLI chatbot with readline, generating embeddings and computing cosine similarity, model management including pull with progress, delete, and ps, connecting to a remote Ollama server with a custom client, structured output using Zod schema passed directly to the format parameter, and image input for vision models.

Sequence Packing for LLM Training: Eliminating Padding Waste

A practical guide to sequence packing for ML engineers training LLMs: measuring padding waste and estimating speedup, greedy packing implementation with EOS separation, the attention leakage problem in naive packing, document-aware attention masks with Flash Attention cu_seqlens, TRL SFTTrainer packing configuration, and how to verify packing efficiency and model quality after implementation.
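The greedy-packing and waste-measurement steps above can be sketched together. This is a minimal first-fit-decreasing packer under the assumption that documents are separated by a single EOS token; the lengths are made up for illustration:

```python
def greedy_pack(seq_lens, max_len):
    """First-fit-decreasing: place each sequence (plus one EOS separator)
    into the first bin with room, else open a new bin."""
    bins = []
    for n in sorted(seq_lens, reverse=True):
        cost = n + 1  # +1 for the EOS token terminating the document
        for b in bins:
            if sum(x + 1 for x in b) + cost <= max_len:
                b.append(n)
                break
        else:
            bins.append([n])
    return bins

lens = [900, 500, 400, 120, 80, 60]
max_len = 1024
bins = greedy_pack(lens, max_len)

# Padding waste: fraction of slots that are padding if each sequence is
# padded to max_len on its own, versus after packing.
naive_waste = 1 - sum(lens) / (len(lens) * max_len)
packed_tokens = sum(sum(x + 1 for x in b) for b in bins)
packed_waste = 1 - packed_tokens / (len(bins) * max_len)
print(len(bins), round(naive_waste, 2), round(packed_waste, 2))
# → 3 bins instead of 6 padded rows, waste drops from ~0.66 to ~0.33
```

Note this sketch only packs; it does nothing about attention leakage between packed documents, which is exactly why the article's cu_seqlens-based document-aware masks matter.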

Ollama REST API Reference: Every Endpoint with Examples

A complete Ollama REST API reference with curl examples for every endpoint: health check, /api/generate with streaming and options, /api/chat with multi-turn history and structured output format parameter, /api/embeddings, /api/tags to list models, /api/pull with progress streaming, /api/delete, /api/copy, /api/create from a Modelfile string, /api/ps for loaded models, /api/show for model details, and the OpenAI-compatible /v1/chat/completions, /v1/models, and /v1/embeddings endpoints.
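As a minimal non-curl sketch of the `/api/generate` endpoint above, using only the Python standard library against the default local address (the model name is illustrative; `generate` needs a running Ollama server, so only the payload is built here):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default address

def generate_payload(model, prompt, stream=False, options=None):
    """Build the JSON body for POST /api/generate."""
    body = {"model": model, "prompt": prompt, "stream": stream}
    if options:
        body["options"] = options  # e.g. {"temperature": 0.2}
    return body

def generate(model, prompt, **kw):
    """Blocking (stream=false) call to /api/generate; returns the text."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(generate_payload(model, prompt, **kw)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = generate_payload("llama3.2", "Why is the sky blue?",
                           options={"temperature": 0.2})
print(json.dumps(payload))
# With a server running: print(generate("llama3.2", "Why is the sky blue?"))
```

With `"stream": false` the server returns one JSON object whose `response` field holds the full completion; with streaming enabled it instead emits one JSON object per line, which is why the curl examples in the reference look so different between the two modes.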

Multi-Task Learning: Hard Parameter Sharing, Soft Sharing, and When It Beats Single-Task Models

A practical guide to multi-task learning for ML engineers: hard parameter sharing with task-specific heads, soft parameter sharing with cross-encoder regularisation, gradient cosine similarity for detecting negative transfer, homoscedastic uncertainty loss weighting, task sampling strategies, and an honest assessment of when multi-task training beats separate single-task baselines and when it does not.
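The homoscedastic uncertainty weighting mentioned above has a compact closed form. A sketch of one common practical variant (after Kendall et al., 2018), with made-up loss values:

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """total = sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2) is a
    learnable per-task parameter. A noisy task (large s_i) is automatically
    down-weighted, and the +s_i term keeps s_i from growing without bound."""
    return sum(math.exp(-s) * L + s for L, s in zip(task_losses, log_vars))

losses = [2.0, 0.5]    # per-task losses on one batch
print(uncertainty_weighted_loss(losses, [0.0, 0.0]))  # → 2.5 (equal weighting)
print(uncertainty_weighted_loss(losses, [1.0, 0.0]))  # → 2*e^-1 + 1 + 0.5 ≈ 2.24
```

In training, the `log_vars` would be parameters optimised jointly with the network rather than hand-set constants; the point of the toy numbers is just that raising a task's log-variance shrinks its contribution to the total loss.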

Tabby: The Self-Hosted Coding Assistant

A complete guide to Tabby, the open-source self-hosted coding assistant: what it does and how it compares to GitHub Copilot, installing via brew, binary, or Docker, running with built-in code models on CPU and GPU, connecting to VS Code and JetBrains IDEs with API token setup, model selection by hardware tier from 1.3B CPU to 13B GPU, enabling repository context indexing for project-aware completions, and running as a systemd service for persistent availability.