
arXiv AI Publications - 2025 Week 43


Publications of the Week #43 - 2025

Here are the five most relevant AI papers from arXiv for week 43 of 2025, each with key insights and an assessment of potential impact.

Measuring Reasoning in LLMs: a New Dialectical Angle

Published: 10/20/2025
arXiv ID:
Authors: Soheil Abbasloo

Key Insights

This research introduces a novel framework, SIEV, that evaluates reasoning in language models through a dialectical approach, emphasizing the dynamic interaction of ideas rather than just the correctness of answers. By highlighting reasoning gaps in state-of-the-art models, it challenges existing evaluation metrics and provides a more nuanced understanding of LLM capabilities.

Potential Impact

The adoption of the SIEV framework could revolutionize the way language models are assessed, leading to the development of more sophisticated models that not only provide correct answers but also exhibit deeper reasoning processes. This shift could enhance applications in fields requiring critical thinking and complex problem-solving, such as education, law, and scientific research.


Activation Manifold Projection: Liberating Task-Specific Behaviors from LLM Architectures

Published: 10/19/2025
arXiv ID:
Authors: Al Kari

Key Insights

This research introduces the Cartridge Activation Space Transfer (CAST) framework, which offers a novel method for transferring task-specific behaviors from one LLM architecture to another by learning a direct mapping between their activation manifolds. This approach effectively decouples the learned skills from the source architecture, enabling zero-shot translation of LoRA adapters across different model families.

Potential Impact

CAST could significantly enhance model interoperability in the field of natural language processing by allowing for more flexible and efficient reuse of task-specific capabilities across diverse architectures. This advancement may lead to reduced resource consumption in fine-tuning and a broader application of LLMs in various tasks without the need for extensive retraining.
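To make the core idea concrete, here is a minimal conceptual sketch of mapping one model's activation space into another's. This is not the paper's implementation: the dimensions, the toy data, and the choice of a least-squares linear projection are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired activations: n samples recorded from a source model
# (d_src dims) and from a target model (d_tgt dims) on the same inputs.
n, d_src, d_tgt = 256, 64, 48
src_acts = rng.normal(size=(n, d_src))
tgt_acts = (src_acts @ rng.normal(size=(d_src, d_tgt))) * 0.1  # toy correspondence

# Learn a direct linear map between the two activation spaces via least
# squares -- a simple stand-in for a learned projection between manifolds.
W, *_ = np.linalg.lstsq(src_acts, tgt_acts, rcond=None)

def project(activation: np.ndarray) -> np.ndarray:
    """Map a source-model activation into the target model's space."""
    return activation @ W

# Because the toy correspondence is itself linear, the fitted map should
# reconstruct the target activations almost exactly.
err = np.linalg.norm(project(src_acts) - tgt_acts) / np.linalg.norm(tgt_acts)
print(f"relative reconstruction error: {err:.2e}")
```

In practice the mapping between two real architectures would be nonlinear and learned on far richer data; the sketch only shows the shape of the problem, i.e. pairing activations and fitting a cross-space projection.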


CircuitSeer: Mining High-Quality Data by Probing Mathematical Reasoning Circuits in LLMs

Published: 10/21/2025
arXiv ID:
Authors: Shaobo Wang, Yongliang Miao, Yuancheng Liu, Qianli Ma, Ning Liao, Linfeng Zhang

Key Insights

This research introduces CircuitSeer, a novel method for data selection that leverages the internal mechanisms of large language models, specifically by identifying specialized attention heads that activate during complex reasoning tasks. By quantifying the reasoning complexity of data based on its influence on these circuits, CircuitSeer offers a more efficient and effective approach to curating high-quality training datasets.

Potential Impact

CircuitSeer could significantly reduce the computational costs associated with training large language models by enabling the selection of smaller, high-quality datasets without reliance on external models or heuristics. This innovation may enhance the performance of LLMs in reasoning tasks, leading to more accessible and efficient applications across various fields, from education to AI-driven decision-making.
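As a rough illustration of scoring training examples by how strongly they drive designated "reasoning" attention heads, consider the toy sketch below. The head indices, activation values, and selection ratio are all hypothetical and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: attention-head activation strengths for a batch of training
# examples, shape (n_examples, n_heads). Values are synthetic.
n_examples, n_heads = 100, 12
head_acts = rng.uniform(size=(n_examples, n_heads))

# Suppose probing identified heads 2, 5, and 9 as the ones that fire
# during complex reasoning (indices are illustrative).
reasoning_heads = [2, 5, 9]

# Score each example by its mean activation on those heads, then keep
# the top fraction as the curated, high-quality subset.
scores = head_acts[:, reasoning_heads].mean(axis=1)
keep_fraction = 0.2
top_k = int(n_examples * keep_fraction)
selected = np.argsort(scores)[::-1][:top_k]

print(f"selected {len(selected)} of {n_examples} examples")
```

The appeal of this style of selection is that the signal comes from the model's own internals, so no external judge model or hand-written heuristic is needed.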


ssToken: Self-modulated and Semantic-aware Token Selection for LLM Fine-tuning

Published: 10/21/2025
arXiv ID:
Authors: Xiaohan Qin, Xiaoxing Wang, Ning Liao, Cancheng Zhang, Xiangdong Zhang, Mingquan Feng, Jingzhi Wang, Junchi Yan

Key Insights

The research introduces ssToken, a novel token selection method for fine-tuning large language models that combines self-modulated signals with semantic-aware metrics, overcoming limitations of existing methods that rely on additional reference models and pure loss information. This approach not only enhances the selection of semantically important tokens but also improves performance while maintaining training efficiency.

Potential Impact

ssToken could significantly improve the fine-tuning process for large language models by enabling more effective and efficient token selection, potentially leading to better model performance across various applications. This innovation may also encourage the development of more adaptive training methodologies in the field of natural language processing, shifting focus towards integrating semantic understanding into data selection processes.
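The idea of combining a self-modulated loss signal with a semantic weight can be sketched as follows. This is a conceptual toy, not the paper's method: the signals, their multiplicative combination, and the 50% selection ratio are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-token signals for one training sequence: the current
# per-token loss, a running historical loss from the model itself, and a
# semantic-importance weight (e.g. derived from attention mass).
n_tokens = 16
loss_now = rng.uniform(0.5, 3.0, size=n_tokens)
loss_hist = rng.uniform(0.5, 3.0, size=n_tokens)
semantic_w = rng.uniform(size=n_tokens)

# Self-modulated signal: how much the current loss exceeds the model's
# own history -- no external reference model required. It is combined
# multiplicatively with the semantic weight here as a simple choice.
excess = np.clip(loss_now - loss_hist, 0.0, None)
score = excess * semantic_w

# Fine-tune only on the top half of tokens by combined score.
k = n_tokens // 2
selected = np.argsort(score)[::-1][:k]
print(f"training on {k}/{n_tokens} tokens")
```

The key property this sketch preserves is that both signals are cheap to compute during training, so token selection adds little overhead.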


Plan Then Retrieve: Reinforcement Learning-Guided Complex Reasoning over Knowledge Graphs

Published: 10/23/2025
arXiv ID:
Authors: Yanlin Song, Ben Liu, Víctor Gutiérrez-Basulto, Zhiwei Hu, Qianqian Xie, Min Peng, Sophia Ananiadou, Jeff Z. Pan

Key Insights

This research introduces Graph-RFT, a two-stage reinforcement fine-tuning framework that enhances knowledge graph question answering by integrating autonomous planning and adaptive retrieval from both knowledge graphs and web sources. The methodology addresses limitations in existing approaches, particularly in handling incomplete knowledge and performing coherent, multi-step reasoning.

Potential Impact

By enabling LLMs to effectively plan and retrieve information in a structured manner, this framework could significantly improve the accuracy and efficiency of knowledge graph question answering applications. It has the potential to redefine how AI systems interact with knowledge sources, enhancing their ability to tackle complex queries in real-world scenarios.



