https://deepignorance.ai/
Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs
Filtering pretraining data can prevent unsafe knowledge, doesn’t sacrifice general performance, and results in models that are resistant to tampering.
open weight llms, pretraining data, tamper resistant, deepignorance
https://simonwillison.net/2025/Aug/15/inconsistent-performance/
Open weight LLMs exhibit inconsistent performance across providers
Artificial Analysis published a new benchmark the other day, this time focusing on how an individual model—OpenAI’s gpt-oss-120b—performs across different...
open weight llms, inconsistent performance, providers
https://simonwillison.net/2025/Jul/30/chinese-models/
The best available open weight LLMs now come from China
Something that has become undeniable this month is that the best available open weight models now come from the Chinese AI labs. I continue to have a lot of...
open weight llms, best available, china