ReflAct: World-Grounded Decision Making in LLM Agents via Goal-State Reflection
Abstract
ReflAct, a new reasoning backbone for LLM agents, improves goal alignment and reduces hallucinations by continuously reflecting on the agent's state, surpassing ReAct and its enhanced variants.
Recent advances in LLM agents have largely built on reasoning backbones like ReAct, which interleave thought and action in complex environments. However, ReAct often produces ungrounded or incoherent reasoning steps, leading to misalignment between the agent's actual state and goal. Our analysis finds that this stems from ReAct's inability to maintain consistent internal beliefs and goal alignment, causing compounding errors and hallucinations. To address this, we introduce ReflAct, a novel backbone that shifts reasoning from merely planning next actions to continuously reflecting on the agent's state relative to its goal. By explicitly grounding decisions in states and enforcing ongoing goal alignment, ReflAct dramatically improves strategic reliability. This design delivers substantial empirical gains: ReflAct surpasses ReAct by 27.7% on average, achieving a 93.3% success rate in ALFWorld. Notably, ReflAct even outperforms ReAct with added enhancement modules (e.g., Reflexion, WKM), showing that strengthening the core reasoning backbone is key to reliable agent performance.
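To make the described reasoning shift concrete, here is a minimal, illustrative sketch of a ReflAct-style agent loop. It is not the authors' released implementation; the `llm` and `env` interfaces and the prompt wording are hypothetical stand-ins, and the only point is the structure: each step first produces a reflection grounded in the current state and the goal, then conditions the action on that reflection.

```python
# Minimal sketch of a ReflAct-style loop (illustrative only).
# `llm` is assumed to be a callable that maps a prompt string to text;
# `env` is assumed to expose reset() -> observation and step(action) -> (observation, done).

def reflact_episode(llm, env, goal: str, max_steps: int = 30) -> bool:
    """Run one episode: reflect on state vs. goal, then act, at every step."""
    observation = env.reset()
    history = []  # (reflection, action, observation) tuples kept as context

    for _ in range(max_steps):
        # Reflection step: ground reasoning in the current state and the goal,
        # rather than only planning the next action (as ReAct does).
        reflection = llm(
            f"Goal: {goal}\n"
            f"Latest observation: {observation}\n"
            f"History: {history}\n"
            "Reflect: summarize the current state and how it relates to the goal."
        )

        # Action step: choose the next action conditioned on that reflection.
        action = llm(
            f"Goal: {goal}\n"
            f"Reflection: {reflection}\n"
            "Choose the single next action."
        )

        observation, done = env.step(action)
        history.append((reflection, action, observation))
        if done:
            return True  # goal reached
    return False
```

The design choice this sketch highlights is that the reflection is re-derived from the latest observation at every step, so the agent's stated beliefs cannot drift far from the environment before the next action is chosen.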
Community
💡 tl;dr: ReflAct is a decision-making framework for LLM agents that improves ReAct by prompting reflection on the agent’s state and task goal before acting, resulting in more reliable, goal-aligned behavior.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Reasoning Court: Combining Reasoning, Action, and Judgment for Multi-Hop Reasoning (2025)
- SymPlanner: Deliberate Planning in Language Models with Symbolic Representation (2025)
- Agentic Reasoning and Tool Integration for LLMs via Reinforcement Learning (2025)
- Are Retrials All You Need? Enhancing Large Language Model Reasoning Without Verbalized Feedback (2025)
- A Desideratum for Conversational Agents: Capabilities, Challenges, and Future Directions (2025)
- Pre-Act: Multi-Step Planning and Reasoning Improves Acting in LLM Agents (2025)
- HyperTree Planning: Enhancing LLM Reasoning via Hierarchical Thinking (2025)