https://unsloth.ai/docs/models/tutorials/glm-5
GLM-5: How to Run Locally Guide | Unsloth Documentation
Run the new GLM-5 model by Z.ai on your own local device!
https://unsloth.ai/docs/get-started/install/intel
Fine-tuning LLMs on Intel GPUs with Unsloth | Unsloth Documentation
Learn how to train and fine-tune large language models on Intel GPUs.
https://unsloth.ai/docs/models/qwen3-coder-next
Qwen3-Coder-Next: How to Run Locally | Unsloth Documentation
Guide to run Qwen3-Coder-Next locally on your device!
https://unsloth.ai/docs/models/tutorials
Large Language Model (LLM) Tutorials | Unsloth Documentation
https://unsloth.ai/docs/get-started/install/updating
Updating Unsloth | Unsloth Documentation
To update or use an old version of Unsloth, follow the steps below:
https://unsloth.ai/docs/basics/continued-pretraining
Continued Pretraining | Unsloth Documentation
Also known as continued fine-tuning. Unsloth allows you to continue pretraining a model so it can learn a new language.
https://unsloth.ai/docs/models/tutorials/qwen3-coder-how-to-run-locally
Qwen3-Coder: How to Run Locally | Unsloth Documentation
Run Qwen3-Coder-30B-A3B-Instruct and 480B-A35B locally with Unsloth Dynamic quants.
https://unsloth.ai/docs/basics/finetuning-from-last-checkpoint
Finetuning from Last Checkpoint | Unsloth Documentation
Checkpointing allows you to save your finetuning progress so you can pause it and then continue.
https://unsloth.ai/docs/basics/inference-and-deployment/deploy-llms-phone
How to Run and Deploy LLMs on your iOS or Android Phone | Unsloth Documentation
Tutorial for fine-tuning your own LLM and deploying it on your Android or iPhone with ExecuTorch.
https://unsloth.ai/docs/basics/chat-templates
Chat Templates | Unsloth Documentation
Learn the fundamentals and customization options of chat templates, including Conversational, ChatML, ShareGPT, Alpaca formats, and more!
https://unsloth.ai/docs/get-started/fine-tuning-for-beginners
Fine-tuning for Beginners | Unsloth Documentation
https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide/vision-reinforcement-learning-vlm-rl
Vision Reinforcement Learning (VLM RL) | Unsloth Documentation
Train Vision/multimodal models via GRPO and RL with Unsloth!
https://unsloth.ai/docs/models/qwen3.5
Qwen3.5 - How to Run Locally | Unsloth Documentation
Run the new Qwen3.5 LLMs including Medium: Qwen3.5-35B-A3B, 27B, 122B-A10B, Small: Qwen3.5-0.8B, 2B, 4B, 9B and 397B-A17B on your local device!
https://unsloth.ai/docs/models/tutorials/phi-4-reasoning-how-to-run-and-fine-tune
Phi-4 Reasoning: How to Run & Fine-tune | Unsloth Documentation
https://unsloth.ai/docs/models/qwen3.5/fine-tune
Qwen3.5 Fine-tuning Guide | Unsloth Documentation
Learn how to fine-tune Qwen3.5 LLMs with Unsloth.
https://unsloth.ai/docs/models/gemma-4/train
Gemma 4 Fine-tuning Guide | Unsloth Documentation
Train Gemma 4 by Google with Unsloth.
https://unsloth.ai/docs/basics/text-to-speech-tts-fine-tuning
Text-to-Speech (TTS) Fine-tuning Guide | Unsloth Documentation
https://unsloth.ai/docs/models/tutorials/qwen3-how-to-run-and-fine-tune/qwen3-vl-how-to-run-and-fine-tune
Qwen3-VL: How to Run Guide | Unsloth Documentation
Learn to fine-tune and run Qwen3-VL locally with Unsloth.
https://unsloth.ai/docs/basics/faster-moe
Fine-tune MoE Models 12x Faster with Unsloth | Unsloth Documentation
Train MoE LLMs locally using Unsloth Guide.
https://unsloth.ai/docs/models/qwen3.6
Qwen3.6 - How to Run Locally | Unsloth Documentation
Run the new Qwen3.6-27B and 35B-A3B models locally!
https://unsloth.ai/docs/models/nemotron-3
NVIDIA Nemotron 3 Nano - How To Run Guide | Unsloth Documentation
https://unsloth.ai/docs/models/tutorials/lfm2.5
Liquid LFM2.5: How To Run & Fine-tune | Unsloth Documentation
Run and fine-tune LFM2.5 Instruct and Vision locally on your device!
https://unsloth.ai/docs/basics/vision-fine-tuning
Vision Fine-tuning | Unsloth Documentation
Learn how to fine-tune vision/multimodal LLMs with Unsloth.
https://unsloth.ai/docs/basics/inference-and-deployment/vllm-guide
vLLM Deployment & Inference Guide | Unsloth Documentation
Guide to saving and deploying LLMs with vLLM for serving in production.
https://unsloth.ai/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
DeepSeek-R1-0528: How to Run Locally | Unsloth Documentation
A guide on how to run DeepSeek-R1-0528, including the Qwen3 variant, on your own local device!
https://unsloth.ai/docs/models/tutorials/gemma-3-how-to-run-and-fine-tune
Gemma 3 - How to Run Guide | Unsloth Documentation
How to run Gemma 3 effectively with our GGUFs on llama.cpp, Ollama, Open WebUI and how to fine-tune with Unsloth!
https://unsloth.ai/docs/get-started/fine-tuning-llms-guide
Fine-tuning LLMs Guide | Unsloth Documentation
Learn all the basics and best practices of fine-tuning. Beginner-friendly.
https://unsloth.ai/docs/models/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning
gpt-oss Reinforcement Learning | Unsloth Documentation
https://unsloth.ai/docs/basics/inference-and-deployment/llama-server-and-openai-endpoint
llama-server & OpenAI endpoint Deployment Guide | Unsloth Documentation
Deploying via llama-server with an OpenAI compatible endpoint
https://unsloth.ai/docs/models/tutorials/functiongemma
FunctionGemma: How to Run & Fine-tune | Unsloth Documentation
Learn how to run and fine-tune FunctionGemma locally on your device and phone.
https://unsloth.ai/docs/blog/500k-context-length-fine-tuning
500K Context Length Fine-tuning | Unsloth Documentation
https://unsloth.ai/docs/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth
Fine-tuning LLMs with Blackwell, RTX 50 series & Unsloth | Unsloth Documentation
Learn how to fine-tune LLMs on NVIDIA's Blackwell RTX 50 series and B200 GPUs with our step-by-step guide.
https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide
Reinforcement Learning (RL) Guide | Unsloth Documentation
Learn all about Reinforcement Learning (RL) and how to train your own DeepSeek-R1 reasoning model with Unsloth using GRPO. A complete guide from beginner to...
https://unsloth.ai/docs/models/tutorials/llama-4-how-to-run-and-fine-tune
Llama 4: How to Run & Fine-tune | Unsloth Documentation
How to run Llama 4 locally using our dynamic GGUFs, which recover accuracy compared to standard quantization.
https://unsloth.ai/docs/basics/codex
How to Run Local LLMs with OpenAI Codex | Unsloth Documentation
Use open models with OpenAI Codex on your device locally.
https://unsloth.ai/docs/basics/inference-and-deployment
Inference & Deployment | Unsloth Documentation
Learn how to save your finetuned model so you can run it in your favorite inference engine.
https://sbert.net/examples/sentence_transformer/training/unsloth/README.html
Training with Unsloth — Sentence Transformers documentation
https://unsloth.ai/docs/get-started/fine-tuning-for-beginners/unsloth-requirements
Unsloth Requirements | Unsloth Documentation
Here are Unsloth's requirements including system and GPU VRAM requirements.
https://unsloth.ai/docs/basics/unsloth-benchmarks
Unsloth Benchmarks | Unsloth Documentation
Unsloth recorded benchmarks on NVIDIA GPUs.
https://unsloth.ai/docs/get-started/unsloth-model-catalog
Unsloth Model Catalog | Unsloth Documentation