Gemma 3: Google’s Multimodal Local LLM Explained

A practical guide to running Google’s Gemma 3 locally with Ollama: the 1B, 4B, 12B, and 27B variants and their VRAM requirements, native multimodal image analysis at every size above 1B, CLI and Python usage including image inputs, how Gemma 3 4B compares to Llama 3.2 8B on reasoning tasks, the 12B as a multimodal sweet spot, 27B for frontier-class local quality on Apple Silicon, configuring a 32K context Modelfile, strong multilingual support, and how to choose between Gemma 3 and other local model families.

How to Build LLM Guardrails for Production Applications

A practical guide to LLM guardrails for production: input guardrails for PII detection and prompt injection blocking, output guardrails for policy compliance and schema validation, Guardrails AI and NeMo Guardrails frameworks, latency-aware architecture with layered synchronous and async checks, and tool call validation for agentic systems.

How to Summarise YouTube Videos Locally with Ollama

A practical guide to summarising YouTube videos locally with Ollama and the YouTube Transcript API: fetching transcripts without an API key, basic summarisation in bullet points, extracting video IDs from any URL format, six summary formats including TL;DR, study notes, and Q&A generation, handling long videos with chunk summarisation, a complete command-line tool with argparse and streaming output, working around videos without captions using Whisper, and choosing the right model for different content types.

Feature Engineering for Tabular Data: Techniques That Actually Matter in Production

A practical guide to feature engineering for tabular ML in production: numerical transforms and when they matter, target encoding without leakage, interaction and ratio features, cyclical datetime encoding, rolling aggregation features with correct temporal windowing, and building reproducible sklearn pipelines that produce identical outputs at training and serving time.

How to Use Ollama with JavaScript and Node.js

A complete guide to using Ollama from JavaScript and Node.js: installing the official ollama npm package, chat completions with system prompts and options, streaming responses with async iterators, text generation for classification, generating embeddings with cosine similarity, managing models programmatically, building a streaming Express SSE endpoint, consuming the stream from browser JavaScript, connecting to a remote Ollama host, multi-turn conversation with history, TypeScript types, and when to use the native JS library versus the OpenAI SDK.

Ollama Keep-Alive and Model Preloading: Eliminate Cold Start Latency

A practical guide to eliminating Ollama cold-start latency: how keep-alive works and why it matters, setting keep_alive per-request to -1 for permanent loading or 0 for immediate unloading, setting OLLAMA_KEEP_ALIVE globally, pre-loading models at application startup with a minimal dummy request, running multiple models simultaneously with OLLAMA_MAX_LOADED_MODELS, inspecting loaded models and VRAM usage via /api/ps, manually unloading models to free VRAM, and recommended settings for interactive chat, batch processing, multi-model RAG, and low-VRAM machines.

Tabby: The Self-Hosted Coding Assistant That Beats Copilot for Completions

A complete guide to Tabby, the self-hosted coding assistant built specifically for inline tab completions: how it differs from Continue and why dedicated completion models are faster and more accurate, Docker installation with NVIDIA GPU, choosing between StarCoder2 and DeepSeek-Coder models, VS Code, Neovim and JetBrains plugin setup, Docker Compose for persistent deployment, repository indexing for codebase-aware completions, monitoring acceptance rates in the built-in dashboard, and when to use Tabby versus Continue.