Robuta

https://towardsdatascience.com/deep-dive-into-anthropics-sparse-autoencoders-by-hand-%ef%b8%8f-eebe0ef59709/
Jan 13, 2025 - Explore the concepts behind the interpretability quest for LLMs
https://towardsdatascience.com/hands-on-time-series-anomaly-detection-using-autoencoders-with-python-7cd893bbc122/
Jan 8, 2025 - Here's how to use Autoencoders to detect signals with anomalies in a few lines of code
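The idea behind the anomaly-detection approach described above is to train an autoencoder to reconstruct normal data, then flag inputs whose reconstruction error is unusually high. A minimal sketch, using scikit-learn's `MLPRegressor` as a stand-in autoencoder; the synthetic sine-wave data and the injected spike are illustrative assumptions, not the article's code:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic "normal" signal windows: phase-shifted sine segments of length 32.
t = np.linspace(0, 2 * np.pi, 32)
normal = np.array([np.sin(t + phase) for phase in rng.uniform(0, 2 * np.pi, 200)])

# Autoencoder: train the network to reconstruct its own input
# (8 hidden units force a compressed representation).
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ae.fit(normal, normal)

# Anomaly score = reconstruction error; a spike the model never saw
# during training cannot be reconstructed from the compressed code.
anomalous = normal[0].copy()
anomalous[10] += 5.0  # inject an anomaly

err_normal = np.mean((ae.predict(normal[:1]) - normal[:1]) ** 2)
err_anomalous = np.mean((ae.predict(anomalous[None]) - anomalous[None]) ** 2)
print(err_anomalous > err_normal)
```

In practice a threshold on the reconstruction error (e.g. a high percentile of the errors on held-out normal data) separates anomalous windows from normal ones.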
https://www.jonvet.com/blog/llm-transcoder-and-saes
Jun 22, 2025 - Sparse Autoencoders (SAEs) and Cross-Layer Transcoders (CLTs) are two approaches to interpretability of transformer models. Read up on what they're good...
https://www.r-bloggers.com/2018/07/pca-vs-autoencoders-for-dimensionality-reduction/
Jul 28, 2018 - There are a few ways to reduce the dimensions of large data sets to ensure computational efficiency such as backwards […] The post PCA vs Autoencoders for...
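For context on the PCA side of that comparison: PCA finds the linear subspace capturing the most variance, whereas an autoencoder can learn a nonlinear one. A minimal illustration of PCA recovering low-dimensional linear structure (the synthetic data here is an assumption for demonstration, not from the post):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# 500 points that vary along only 2 latent directions, embedded in 10-D
# with a small amount of isotropic noise.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 10))

# Two principal components should capture nearly all of the variance.
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_.sum())
```

When the data's structure is nonlinear (e.g. a curved manifold), PCA with the same number of components reconstructs poorly, which is where autoencoders earn their extra complexity.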
https://arize.com/blog/llm-interpretability-and-sparse-autoencoders-openai-anthropic/
Sep 17, 2024 - Breaking down two papers that focus on the sparse autoencoder, an unsupervised approach for extracting interpretable features from an LLM.
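The sparse-autoencoder objective those papers build on combines a reconstruction term with an L1 sparsity penalty on an overcomplete hidden layer. A toy NumPy sketch of the loss computation (untrained random weights; the dimensions and penalty coefficient are illustrative assumptions, not values from either paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden = 16, 64  # overcomplete: more features than input dims

# A batch of 8 "model activations" to be decomposed into sparse features.
x = rng.normal(size=(8, d_model))

W_enc = rng.normal(scale=0.1, size=(d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = W_enc.T.copy()  # tied decoder weights, a common simplification

# Encode with ReLU so feature activations are non-negative and sparse.
h = np.maximum(0.0, x @ W_enc + b_enc)
x_hat = h @ W_dec  # reconstruct the original activations

l2 = np.mean((x - x_hat) ** 2)  # reconstruction term
l1 = np.mean(np.abs(h))         # sparsity penalty on feature activations
loss = l2 + 1e-3 * l1
print(loss)
```

Training minimizes this loss over real model activations; the L1 term pushes most hidden units to zero on any given input, so the few that fire tend to correspond to interpretable features.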
https://towardsdatascience.com/neuro-symbolic-systems-the-art-of-compromise-2/
Dec 10, 2025 - Neural and symbolic models compress the world in fundamentally different ways, and Sparse Autoencoders (SAEs) offer a bridge to connect them.
https://www.alanzucconi.com/2018/03/14/an-introduction-to-autoencoders/
Aug 24, 2024 - This tutorial introduces the concept of Neural Networks and Autoencoders, powerful computational models that are used in Machine Learning. If you are...