Shao H, Wang F, Xie Z. S2AF: An action framework to self-check the Understanding Self-Consistency of Large Language Models. Neural Netw 2025;187:107365. [PMID: 40101554 DOI: 10.1016/j.neunet.2025.107365]
Abstract
Large Language Models (LLMs), which are trained on massive text data, have demonstrated remarkable advances in language understanding. Nevertheless, it remains unclear to what extent LLMs have effectively captured and utilized the implicit relationships inherent in text. This study introduces 'Understanding Self-Consistency', a new perspective that reflects LLMs' ability to grasp in-depth knowledge relationships through their consistency performance. Specifically, Understanding Self-Consistency refers to a model's capacity to maintain logical and contextual consistency between inputs and responses. Inspired by human cognitive behavior, we design a self-check action framework named S2AF, in which a self-questioning and answering mechanism forms a logically closed loop of four classes of actions, allowing S2AF to generate, question, answer, and evaluate autonomously. Experimental results on six LLMs across two datasets show that LLMs exhibit objective, measurable Understanding Self-Consistency abilities and a differentiated grasp of knowledge relationships across different reasoning paradigms. Moreover, our findings reveal that LLMs' performance can be improved with their own outputs (which we call 'self-enhanced feedforward'). Notably, S2AF relies solely on factual logical relationships, showcasing its potential to advance the development of embodied artificial intelligence (EAI).
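The abstract describes a closed loop of four action classes (generate, question, answer, evaluate). The Python sketch below illustrates one plausible reading of that loop; the function names, prompts, and the query_llm helper are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the four-action closed loop sketched in the abstract:
# generate -> question -> answer -> evaluate. All prompts and helpers are
# assumptions for illustration, not the S2AF reference implementation.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to the LLM under evaluation."""
    raise NotImplementedError("Wire this to the model being checked.")

def self_check(topic: str) -> bool:
    # 1. Generate: the model produces a statement about the topic.
    statement = query_llm(f"State a fact about: {topic}")

    # 2. Question: the model turns its own statement into a question.
    question = query_llm(f"Write a question whose answer is: {statement}")

    # 3. Answer: the model answers its own question, without seeing the statement.
    answer = query_llm(question)

    # 4. Evaluate: the model judges whether its answer is consistent with the statement.
    verdict = query_llm(
        f"Statement: {statement}\nAnswer: {answer}\n"
        "Are these logically and contextually consistent? Reply YES or NO."
    )
    return verdict.strip().upper().startswith("YES")
```

Aggregating the boolean outcomes over many topics would yield a consistency score per model, which is one way the "objective ability values" mentioned above could be quantified.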