Robuta

https://openreview.net/forum?id=cV9OF45hBb
We aim to select data subsets for the fine-tuning of large language models to more effectively follow instructions. Prior work has emphasized the importance of...
diversity measurement, instruction tuning, subset selection, datasets
https://research.google/blog/the-flan-collection-advancing-open-source-methods-for-instruction-tuning/
Posted by Shayne Longpre, Student Researcher, and Adam Roberts, Senior Staff Software Engineer, Google Research, Brain Team Language models are now...
open source, instruction tuning, Flan Collection, advancing
https://aclanthology.org/2024.acl-long.649/
Huiyuan Lai, Malvina Nissim. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2024.
multilingual instruction, language models, mCoT, tuning, reasoning
https://openreview.net/forum?id=6uLhmE6tvCn
Sparse Mixture-of-Experts (MoE) is a neural architecture design that can be utilized to add learnable parameters to Large Language Models (LLMs) without...
instruction tuning, winning combination, mixture-of-experts, meets
https://www.lenovo.com/ie/en/knowledgebase/instruction-tuning-enhancing-large-language-models-for-specific-tasks/
large language models, instruction tuning, enhancing, specific tasks
https://openreview.net/forum?id=CjrPqvvUXL
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using...
large language models, instruction tuning, code, OpenReview
https://aclanthology.org/2024.emnlp-main.1036/
Changho Lee, Janghoon Han, Seonghyeon Ye, Stanley Jungkyu Choi, Honglak Lee, Kyunghoon Bae. Proceedings of the 2024 Conference on Empirical Methods in Natural...
instruction matters, simple yet effective
https://www.marktechpost.com/2023/09/05/this-ai-paper-explains-how-programming-languages-can-enhance-each-other-through-instruction-tuning/
Sep 5, 2023 - This AI Paper Explains How Programming Languages Can Enhance Each Other Through Instruction Tuning
programming languages, AI paper, enhance
https://openreview.net/forum?id=rzkniB2ivX
large language models, instruction tuning, code, OpenReview
https://huggingface.co/spaces/soharab/instruction-tuning-sd-cartoonizer
Discover amazing ML apps made by the community
instruction tuning, Hugging Face, SD, cartoonizer, space
https://openreview.net/forum?id=df3n4ddLg2E
We study the design decision of publicly available instruction tuning methods, by reproducing and breaking down the development of Flan 2022 (Chung et al.,...
Flan Collection, designing, data, methods
https://arxiv.org/abs/2305.12147
Abstract page for arXiv paper 2305.12147: LogiCoT: Logical Chain-of-Thought Instruction-Tuning
instruction tuning, logical chain-of-thought
https://aclanthology.org/2024.emnlp-main.139/
Leonardo Ranaldi, Andre Freitas. Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. 2024.
instruction tuning, language models, self-refine, aligning
https://neptune.ai/blog/instruction-fine-tuning-evaluation-and-advanced-techniques
Oct 28, 2025 - Learn about key concepts like instruction masking or two-stream architecture and methods to prevent catastrophic forgetting.
fine tuning, advanced techniques, instruction, evaluation