Semantic Exploration with Adaptive Gating for Efficient Problem Solving with Language Models

Sungjae Lee*, Hyejin Park*, Jaechang Kim, Jungseul Ok
Pohang University of Science and Technology (POSTECH)
* Equal Contribution

Semantic Exploration with Adaptive Gating (SEAG)

SEAG enhances the efficiency and effectiveness of language model reasoning through three mechanisms (see the sketch after this list):
  • Adaptive Gating: Dynamically decides whether to initiate tree search, based on an uncertainty estimate over the answers of a preceding, cheaper reasoning method.
  • Semantic Exploration: Expands and explores reasoning trees more efficiently by leveraging linguistic semantics, improving how the tree search itself is performed.
  • Early Stopping: Uses semantic confidence to decide when to terminate the tree search early.
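
Below is a minimal sketch of the adaptive gating step, assuming uncertainty is measured as the Shannon entropy of answers sampled by a cheaper method such as CoT-SC; the helpers `sample_cot_answers` and `tree_search`, as well as the threshold value, are hypothetical stand-ins rather than the paper's exact implementation.

```python
import math
from collections import Counter

def answer_entropy(answers):
    """Shannon entropy of the empirical distribution over final answers."""
    counts = Counter(answers)
    total = len(answers)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def solve_with_gating(question, sample_cot_answers, tree_search,
                      n_samples=10, entropy_threshold=0.7):
    """Run cheap CoT self-consistency first; escalate to tree search only when uncertain."""
    answers = sample_cot_answers(question, n=n_samples)
    if answer_entropy(answers) <= entropy_threshold:
        # Confident: return the majority-vote answer without any tree search.
        return Counter(answers).most_common(1)[0][0]
    # Uncertain: fall back to the more expensive tree-based exploration.
    return tree_search(question)
```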

Abstract

Recent advancements in large language models (LLMs) have shown remarkable potential in various complex tasks requiring multi-step reasoning methods like tree search to explore diverse reasoning paths. However, existing methods often suffer from computational inefficiency and redundancy. First, they overlook the diversity of task difficulties, leading to unnecessarily extensive searches even for easy tasks. Second, they neglect the semantics of reasoning paths, resulting in redundant exploration of semantically identical paths. To address these limitations, we propose Semantic Exploration with Adaptive Gating (SEAG), a computationally efficient method. SEAG employs an adaptive gating mechanism that dynamically decides whether to conduct a tree search, based on the confidence level of answers from a preceding simple reasoning method. Furthermore, its tree-based exploration consolidates semantically identical reasoning steps, reducing redundant explorations while maintaining or even improving accuracy. Our extensive experiments demonstrate that SEAG significantly improves accuracy by 4.3% on average while requiring only 31% of the computational cost of existing tree search-based methods on complex reasoning benchmarks, including GSM8K and ARC, with diverse language models such as Llama2, Llama3, and Mistral.

Semantic Exploration

Illustration of tree search-based reasoning methods (ToT [1] and RAP [2]) and semantic exploration (ours).
To improve tree search-based reasoning, we introduce semantic exploration, which consists of two main components:
  • Semantic Clustering: Prevents redundant expansion and exploration of semantically duplicate nodes by leveraging semantic equivalence.
  • Semantics-aware Selection: Prioritizes the exploration of nodes with higher semantic consistency, enhancing search efficiency.
Compared to existing tree search-based methods, our approach reduces redundant visits to semantically equivalent nodes and limits unnecessary subtree expansions, while encouraging more frequent exploration of semantically consistent nodes.
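
To make semantic clustering concrete, here is a minimal sketch under the assumption that semantic equivalence between two candidate reasoning steps is checked by bidirectional entailment; the `entails` predicate is a hypothetical stand-in (e.g., an off-the-shelf NLI model), not necessarily the paper's exact criterion.

```python
def cluster_semantically(steps, entails):
    """Group candidate reasoning steps into semantic equivalence classes.

    Two steps fall in the same cluster when each entails the other
    (bidirectional entailment), so only one node per cluster needs to
    be expanded during tree search.
    """
    clusters = []  # each cluster: list of mutually equivalent steps
    for step in steps:
        for cluster in clusters:
            representative = cluster[0]
            if entails(step, representative) and entails(representative, step):
                cluster.append(step)
                break
        else:  # no existing cluster matched: start a new one
            clusters.append([step])
    return clusters
```

Cluster sizes can also feed semantics-aware selection: a cluster that absorbs many sampled steps signals higher semantic consistency, so its representative node can be prioritized during selection.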

Main Results

Scatter plots of accuracy versus the number of LLM inferences on the GSM8K and ARC datasets.

SE (semantic exploration alone) significantly outperforms tree search-based methods such as RAP and ToT, achieving both higher accuracy and lower inference cost. Adding AG (adaptive gating) yields SEAG, which further improves accuracy by 4.3% while requiring only 31% as many inferences as RAP, our closest baseline.
✔️ SEAG achieves superior reasoning performance with enhanced computational efficiency.

Analysis: Necessity of Adaptive Gating

Accuracy comparison across entropy ranges on the GSM8K and ARC datasets.

CoT-SC [3] achieves high accuracy on low-entropy problems. In contrast, RAP and SE, which incorporate tree search, exhibit superior accuracy on high-entropy problems.
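As a concrete illustration (assuming the entropy on the x-axis is the Shannon entropy of the CoT-SC answer distribution, as in the gating sketch above): with $H = -\sum_a p(a)\ln p(a)$, 10 samples splitting 8/1/1 across three distinct answers give $H = -(0.8\ln 0.8 + 2\cdot 0.1\ln 0.1) \approx 0.64$ nats (low entropy), whereas a 4/3/3 split gives $H \approx 1.09$ nats (high entropy).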
✔️ Adaptive gating leveraging CoT-SC can enhance both efficiency and accuracy.

Analysis: Efficiency Improvement by Semantic Exploration

Reduction in search space due to semantic clustering across multiple depths.

Semantic clustering eliminates approximately 20–60% of semantically redundant paths, significantly reducing the search space across various depths and datasets.
✔️ SE effectively prevents the repeated expansion and exploration of semantically redundant paths by leveraging semantic equivalence.

References

[1] Yao, Shunyu, et al. "Tree of thoughts: Deliberate problem solving with large language models." NeurIPS'23.

[2] Hao, Shibo, et al. "Reasoning with language model is planning with world model." EMNLP'23.

[3] Wang, Xuezhi, et al. "Self-consistency improves chain of thought reasoning in large language models." ICLR'23.

BibTeX

@article{lee2025seag,
  title={Semantic Exploration with Adaptive Gating for Efficient Problem Solving with Language Models},
  author={Lee, Sungjae and Park, Hyejin and Kim, Jaechang and Ok, Jungseul},
  journal={arXiv preprint arXiv:2501.05752},
  year={2025}
}