New AI architecture delivers 100x faster reasoning than LLMs with just 1,000 training examples
Hierarchical Reasoning Models (HRM) tackle complex reasoning tasks while being smaller, faster, and more data-efficient than large AI models.
LLMs have demonstrated exceptional performance across many tasks by utilizing few-shot inference, also known as in-context learning (ICL). The main problem lies in selecting the most representative demonstrations from large training datasets. Early methods selected demonstrations based on relevance, using similarity scores between each example and the input question. Current methods add further selection rules on top of similarity, but these improvements introduce significant computational overhead as the number of shots increases. The effectiveness of the selected demonstrations should also account for the specific LLM in use, since different LLMs exhibit varying capabilities and knowledge domains.

Researchers from Shanghai Jiao Tong University, Xiaohongshu Inc., Carnegie Mellon University, Peking University, University College London, and the University of Bristol have proposed FEEDER (FEw yet Essential Demonstration prE-selectoR), a method to identify a core subset of demonstrations containing the most representative examples in the training data, tailored to a specific LLM. To construct this subset, "sufficiency" and "necessity" metrics are introduced in the pre-selection stage, along with a tree-based algorithm (a simplified illustration appears after this summary). FEEDER reduces training data size by 20% while maintaining performance and integrates seamlessly with various downstream demonstration selection techniques in ICL, across LLMs ranging from 300M to 8B parameters.

FEEDER is evaluated on six text classification datasets: SST-2, SST-5, COLA, TREC, SUBJ, and FPB, covering tasks from sentiment classification and linguistic analysis to textual entailment. It is also evaluated on the reasoning dataset GSM8K, the semantic-parsing dataset SMCalFlow, and the scientific question-answering dataset GPQA. The official splits of each dataset are followed to obtain the training and test data. Multiple LLM variants are used to evaluate the method, including two GPT-2 variants, GPT-neo with 1.3B parameters, GPT-3 with 6B parameters, Gemma-2 with 2B parameters, Llama-2 with 7B parameters, Llama-3 with 8B parameters, and Qwen-2.5 with 32B parameters as the LLM base.

Results on in-context learning show that FEEDER retains almost half of the training samples while achieving superior or comparable performance. Few-shot evaluation on complex tasks with LLMs such as Gemma-2 shows that FEEDER improves performance even when LLMs struggle with challenging tasks. It remains effective with large numbers of shots, handling situations where LLM performance usually drops as the number of examples increases from 5 to 10 due to noisy or repeated demonstrations. By evaluating the sufficiency and necessity of each demonstration, FEEDER minimizes negative impact on LLM performance and improves its stability. In the bi-level optimization setting, FEEDER achieves improved performance by using a small yet high-quality dataset for fine-tuning while reducing computational expense, in line with the core-set selection principle. Results indicate that fine-tuning LLMs yields larger performance improvements than augmenting LLMs with contexts, and FEEDER achieves even better gains in the fine-tuning setting.
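The sufficiency/necessity idea can be conveyed with a short, heavily simplified sketch. The paper's actual metric definitions and its tree-based algorithm are more involved; here llm_is_correct is a toy stand-in for prompting the target LLM and checking its answer, and the two greedy passes are only meant to illustrate the idea of keeping demonstrations that help and dropping those that are redundant.

```python
# Illustrative sketch only: a toy sufficiency/necessity-style pre-selection pass.

def llm_is_correct(demos: list[dict], example: dict) -> bool:
    """Toy proxy: pretend the LLM answers `example` correctly iff some demonstration
    shares its label and at least one word. A real run would prompt the specific LLM."""
    words = set(example["text"].split())
    return any(d["label"] == example["label"] and words & set(d["text"].split())
               for d in demos)

def preselect(train_set: list[dict]) -> list[dict]:
    core: list[dict] = []
    # Sufficiency pass: keep a demonstration if adding it makes some other example solvable.
    for cand in train_set:
        helps = any(
            llm_is_correct(core + [cand], ex) and not llm_is_correct(core, ex)
            for ex in train_set if ex is not cand
        )
        if helps:
            core.append(cand)
    # Necessity pass: drop demonstrations whose removal changes no prediction.
    for demo in list(core):
        reduced = [d for d in core if d is not demo]
        if all(llm_is_correct(reduced, ex) == llm_is_correct(core, ex) for ex in train_set):
            core.remove(demo)
    return core

train_set = [
    {"text": "great movie and acting", "label": "positive"},
    {"text": "great plot", "label": "positive"},
    {"text": "boring and slow", "label": "negative"},
]
print(preselect(train_set))
```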
Performance analysis reveals that FEEDER's effectiveness first rises and then drops as the number of runs or rounds (R and K, respectively) increases, confirming that identifying representative subsets from training datasets enhances LLM performance, while overly narrow subsets may limit potential gains.

In conclusion, the researchers introduced FEEDER, a demonstration pre-selector designed to use LLM capabilities and domain knowledge to identify high-quality demonstrations through an efficient discovery approach. It reduces training data requirements while maintaining comparable performance, offering a practical solution for efficient LLM deployment. Future research directions include applications with larger LLMs and extending FEEDER's capabilities to areas such as data safety and data management. FEEDER makes a valuable contribution to demonstration selection, giving researchers and practitioners an effective tool for optimizing LLM performance while reducing computational overhead.

Check out the Paper. All credit for this research goes to the researchers of this project. The post FEEDER: A Pre-Selection Framework for Efficient Demonstration Selection in LLMs appeared first on MarkTechPost.
FEEDER: A Pre-Selection Framework for Efficient Demonstration Selection in LLMs
The move underscores Meta’s strategy of spending aggressively now to secure a dominant position in what it views as the next foundational technology platform.
Advancements in artificial intelligence are rapidly closing the gap between digital reasoning and real-world interaction. At the forefront of this progress is embodied AI, the field focused on enabling robots to perceive, reason, and act effectively in physical environments. As industries look to automate complex spatial and temporal tasks, from household assistance to logistics, AI systems that truly understand their surroundings and can plan actions become critical.

Introducing RoboBrain 2.0: A Breakthrough in Embodied Vision-Language AI

Developed by the Beijing Academy of Artificial Intelligence (BAAI), RoboBrain 2.0 marks a major milestone in the design of foundation models for robotics and embodied artificial intelligence. Unlike conventional AI models, RoboBrain 2.0 unifies spatial perception, high-level reasoning, and long-horizon planning within a single architecture. Its versatility supports a diverse set of embodied tasks, such as affordance prediction, spatial object localization, trajectory planning, and multi-agent collaboration.

Key Highlights of RoboBrain 2.0

- Two Scalable Versions: Offers both a fast, resource-efficient 7-billion-parameter (7B) variant and a powerful 32-billion-parameter (32B) model for more demanding tasks.
- Unified Multi-Modal Architecture: Couples a high-resolution vision encoder with a decoder-only language model, enabling seamless integration of images, video, text instructions, and scene graphs.
- Advanced Spatial and Temporal Reasoning: Excels at tasks requiring an understanding of object relationships, motion forecasting, and complex, multi-step planning.
- Open-Source Foundation: Built using the FlagScale framework, RoboBrain 2.0 is designed for easy research adoption, reproducibility, and practical deployment.

How RoboBrain 2.0 Works: Architecture and Training

Multi-Modal Input Pipeline. RoboBrain 2.0 ingests a diverse mix of sensory and symbolic data:

- Multi-View Images & Videos: Supports high-resolution, egocentric, and third-person visual streams for rich spatial context.
- Natural Language Instructions: Interprets a wide range of commands, from simple navigation to intricate manipulation instructions.
- Scene Graphs: Processes structured representations of objects, their relationships, and environmental layouts.

The system’s tokenizer encodes language and scene graphs, while a specialized vision encoder utilizes adaptive positional encoding and windowed attention to process visual data effectively. Visual features are projected into the language model’s space via a multi-layer perceptron, enabling unified, multimodal token sequences (a minimal sketch of this projection step follows below).

Three-Stage Training Process. RoboBrain 2.0 achieves its embodied intelligence through a progressive, three-phase training curriculum:

1. Foundational Spatiotemporal Learning: Builds core visual and language capabilities, grounding spatial perception and basic temporal understanding.
2. Embodied Task Enhancement: Refines the model with real-world, multi-view video and high-resolution datasets, optimizing for tasks like 3D affordance detection and robot-centric scene analysis.
3. Chain-of-Thought Reasoning: Integrates explainable step-by-step reasoning using diverse activity traces and task decompositions, underpinning robust decision-making for long-horizon, multi-agent scenarios.
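The projection step mentioned above, vision-encoder features mapped into the language model's embedding space by a multi-layer perceptron and concatenated with text tokens, can be sketched in a few lines. The dimensions, module names, and random tensors below are illustrative assumptions, not BAAI's implementation.

```python
import torch
import torch.nn as nn

class VisionToLanguageProjector(nn.Module):
    """Illustrative MLP projector: maps vision-encoder features into the
    language model's embedding space so they can join the text token sequence."""
    def __init__(self, vision_dim: int, lm_dim: int, hidden_dim: int = 4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vision_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, lm_dim),
        )

    def forward(self, vision_features: torch.Tensor) -> torch.Tensor:
        # vision_features: (batch, num_patches, vision_dim)
        return self.mlp(vision_features)  # (batch, num_patches, lm_dim)

# Hypothetical usage: fuse projected visual tokens with text token embeddings.
batch, patches, vision_dim, lm_dim = 2, 256, 1024, 4096
vision_features = torch.randn(batch, patches, vision_dim)  # stand-in for vision encoder output
text_embeddings = torch.randn(batch, 32, lm_dim)           # stand-in for embedded text tokens

projector = VisionToLanguageProjector(vision_dim, lm_dim)
visual_tokens = projector(vision_features)
multimodal_sequence = torch.cat([visual_tokens, text_embeddings], dim=1)  # unified token sequence
```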
Scalable Infrastructure for Research and Deployment

RoboBrain 2.0 leverages the FlagScale platform, offering:

- Hybrid parallelism for efficient use of compute resources
- Pre-allocated memory and high-throughput data pipelines to reduce training costs and latency
- Automatic fault tolerance to ensure stability across large-scale distributed systems

This infrastructure allows for rapid model training, easy experimentation, and scalable deployment in real-world robotic applications.

Real-World Applications and Performance

RoboBrain 2.0 is evaluated on a broad suite of embodied AI benchmarks, consistently surpassing both open-source and proprietary models in spatial and temporal reasoning. Key capabilities include:

- Affordance Prediction: Identifying functional object regions for grasping, pushing, or interacting
- Precise Object Localization & Pointing: Accurately following textual instructions to find and point to objects or vacant spaces in complex scenes
- Trajectory Forecasting: Planning efficient, obstacle-aware end-effector movements
- Multi-Agent Planning: Decomposing tasks and coordinating multiple robots toward collaborative goals

Its robust, open-access design makes RoboBrain 2.0 immediately useful for applications in household robotics, industrial automation, logistics, and beyond.

Potential in Embodied AI and Robotics

By unifying vision-language understanding, interactive reasoning, and robust planning, RoboBrain 2.0 sets a new standard for embodied AI. Its modular, scalable architecture and open-source training recipes facilitate innovation across the robotics and AI research community. Whether you are a developer building intelligent assistants, a researcher advancing AI planning, or an engineer automating real-world tasks, RoboBrain 2.0 offers a powerful foundation for tackling the most complex spatial and temporal challenges.

Check out the Paper and Codes. All credit for this research goes to the researchers of this project. The post RoboBrain 2.0: The Next-Generation Vision-Language Model Unifying Embodied AI for Advanced Robotics appeared first on MarkTechPost.
arXiv:2507.18043v1 Announce Type: new Abstract: Inference-time steering methods offer a lightweight alternative to fine-tuning large language models (LLMs) and vision-language models (VLMs) by modifying internal activations at test time without updating model weights. However, most existing approaches rely on fixed, global intervention vectors, overlook the causal influence of individual input tokens, and fail to leverage informative gradients from the model’s logits, particularly in multimodal settings where visual and textual inputs contribute unevenly. To address these limitations, we introduce GrAInS, an inference-time steering approach that operates across both language-only and vision-language models and tasks. GrAInS uses contrastive, gradient-based attribution via Integrated Gradients to identify the top-k most influential tokens, both positively and negatively attributed based on their contribution to preferred versus dispreferred outputs. These tokens are then used to construct directional steering vectors that capture semantic shifts from undesirable to desirable behavior. During inference, GrAInS adjusts hidden activations at transformer layers guided by token-level attribution signals, and normalizes activations to preserve representational scale. This enables fine-grained, interpretable, and modular control over model behavior, without retraining or auxiliary supervision. Empirically, GrAInS consistently outperforms both fine-tuning and existing steering baselines: it achieves a 13.22% accuracy gain on TruthfulQA using Llama-3.1-8B, reduces hallucination rates on MMHal-Bench from 0.624 to 0.514 with LLaVA-1.6-7B, and improves alignment win rates on SPA-VL by 8.11%, all while preserving the model’s fluency and general capabilities.
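The abstract outlines a concrete recipe: attribute a preferred-versus-dispreferred score to input tokens with Integrated Gradients, take the top-k positively and negatively attributed tokens, turn them into a steering direction, and shift hidden activations along it while preserving their scale. Below is a minimal, self-contained sketch of that recipe using random tensors and a linear probe in place of a real LLM/VLM; the helper names, the probe, and the steering strength are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

def integrated_gradients(embeddings, score_fn, steps: int = 32):
    """Approximate Integrated Gradients of a scalar score w.r.t. token embeddings.
    score_fn maps an embedding tensor (seq, dim) to a scalar, e.g. a logit margin
    between a preferred and a dispreferred continuation."""
    baseline = torch.zeros_like(embeddings)
    total_grads = torch.zeros_like(embeddings)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (embeddings - baseline)).requires_grad_(True)
        grad, = torch.autograd.grad(score_fn(point), point)
        total_grads += grad
    return (embeddings - baseline) * total_grads / steps  # (seq, dim)

def build_steering_vector(hidden_states, attributions, k: int = 4):
    """Take the top-k positively and negatively attributed tokens and form a
    direction from undesirable to desirable behaviour in hidden-state space."""
    token_scores = attributions.sum(dim=-1)          # one score per token
    pos_idx = token_scores.topk(k).indices
    neg_idx = (-token_scores).topk(k).indices
    direction = hidden_states[pos_idx].mean(0) - hidden_states[neg_idx].mean(0)
    return direction / (direction.norm() + 1e-8)

def steer(hidden_states, direction, strength: float = 2.0):
    """Shift activations along the steering direction, then rescale each token's
    hidden state to preserve its original norm."""
    norms = hidden_states.norm(dim=-1, keepdim=True)
    shifted = hidden_states + strength * direction
    return shifted / (shifted.norm(dim=-1, keepdim=True) + 1e-8) * norms

# Minimal runnable example with random tensors standing in for a real model.
seq_len, dim = 16, 64
embeddings = torch.randn(seq_len, dim)
hidden_states = torch.randn(seq_len, dim)
probe = nn.Linear(dim, 1)  # stand-in for a preferred-vs-dispreferred logit margin
attributions = integrated_gradients(embeddings, lambda e: probe(e).sum())
direction = build_steering_vector(hidden_states, attributions, k=4)
steered = steer(hidden_states, direction)
```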
GrAInS: Gradient-based Attribution for Inference-Time Steering of LLMs and VLMs
arXiv:2507.18263v1 Announce Type: new Abstract: Direct speech translation (ST) has garnered increasing attention, yet the accurate translation of terminology within utterances remains a great challenge. Current studies mainly concentrate on leveraging various forms of translation knowledge in ST models. However, these methods often struggle with interference from irrelevant noise and cannot fully utilize the translation knowledge. To address these issues, in this paper we propose a novel Locate-and-Focus method for terminology translation. It first locates the speech clips containing terminologies within the utterance to construct translation knowledge, minimizing irrelevant information for the ST model. Subsequently, it associates the translation knowledge with the utterance and hypothesis from both the audio and textual modalities, allowing the ST model to better focus on the translation knowledge during translation. Experimental results across various datasets demonstrate that our method effectively locates terminologies within utterances and enhances the success rate of terminology translation, while maintaining robust general translation performance.
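As a loose, text-level analogy to the locate-then-focus idea (the paper operates on speech clips and fuses audio and text modalities, which this does not capture), the sketch below finds glossary terms in a time-stamped ASR transcript and passes only the matched term pairs to the translation prompt, so irrelevant glossary entries stay out of the model's context. The Word class, glossary, and prompt format are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds
    end: float

GLOSSARY = {"transformer": "Transformator", "attention": "Aufmerksamkeit"}  # hypothetical EN->DE terms

def locate_terminology(words: list[Word], glossary: dict[str, str]):
    """Return (clip, source_term, target_term) for every glossary hit, so only the
    relevant clips and terms are passed on as translation knowledge."""
    hits = []
    for w in words:
        term = w.text.lower()
        if term in glossary:
            hits.append(((w.start, w.end), term, glossary[term]))
    return hits

def build_focused_prompt(transcript: str, hits) -> str:
    """Attach only the located terminology pairs to the translation request."""
    knowledge = "; ".join(f"{src} -> {tgt}" for _, src, tgt in hits)
    return f"Terminology: {knowledge}\nTranslate: {transcript}"

words = [Word("the", 0.0, 0.1), Word("attention", 0.1, 0.6), Word("mechanism", 0.6, 1.1)]
hits = locate_terminology(words, GLOSSARY)
print(build_focused_prompt("the attention mechanism", hits))
```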
Locate-and-Focus: Enhancing Terminology Translation in Speech Language Models
arXiv:2507.18580v1 Announce Type: new Abstract: This paper presents our system for CCL25-Eval Task 10, addressing Fine-Grained Chinese Hate Speech Recognition (FGCHSR). We propose a novel SRAG-MAV framework that synergistically integrates task reformulation (TR), Self-Retrieval-Augmented Generation (SRAG), and Multi-Round Accumulative Voting (MAV). Our method reformulates the quadruplet extraction task into triplet extraction, uses dynamic retrieval from the training set to create contextual prompts, and applies multi-round inference with voting to improve output stability and performance. Our system, based on the Qwen2.5-7B model, achieves a Hard Score of 26.66, a Soft Score of 48.35, and an Average Score of 37.505 on the STATE ToxiCN dataset, significantly outperforming baselines such as GPT-4o (Average Score 15.63) and fine-tuned Qwen2.5-7B (Average Score 35.365). The code is available at https://github.com/king-wang123/CCL25-SRAG-MAV.
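The Multi-Round Accumulative Voting step lends itself to a compact illustration: run several sampled inference rounds and keep only the outputs that recur often enough, which stabilises the final extraction. The sketch below is a minimal rendering of that idea; generate_triplets, the triplet format, and the thresholds are placeholder assumptions, not the released CCL25-SRAG-MAV code.

```python
from collections import Counter

def generate_triplets(prompt: str, round_idx: int) -> list[tuple[str, str, str]]:
    """Stand-in for one sampled inference round that extracts
    (target, argument, hatefulness)-style triplets; a real system would call the
    fine-tuned Qwen2.5-7B model with sampling enabled."""
    if round_idx % 2 == 0:
        return [("groupA", "insult", "hate")]
    return [("groupA", "insult", "hate"), ("groupB", "joke", "non-hate")]

def accumulative_vote(prompt: str, rounds: int = 5, min_votes: int = 3):
    """Accumulate votes for each candidate triplet across rounds and keep those
    that reach the vote threshold."""
    votes = Counter()
    for r in range(rounds):
        for triplet in generate_triplets(prompt, r):
            votes[triplet] += 1
    return [t for t, v in votes.items() if v >= min_votes]

print(accumulative_vote("example post", rounds=5, min_votes=3))
```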
arXiv:2406.12548v3 Announce Type: replace Abstract: Personalized large language models (LLMs) have attracted great attention in many applications, such as emotional support and role-playing. However, existing works primarily focus on modeling explicit character profiles, while ignoring the underlying personality traits that truly shape behaviors and decision-making, hampering the development of more anthropomorphic and psychologically-grounded AI systems. In this paper, we explore the modeling of Big Five personality traits, which is the most widely used trait theory in psychology, and propose P-React, a mixture of experts (MoE)-based personalized LLM. Particularly, we integrate a Personality Specialization Loss (PSL) to better capture individual trait expressions, providing a more nuanced and psychologically grounded personality simulacrum. To facilitate research in this field, we curate OCEAN-Chat, a high-quality, human-verified dataset designed to train LLMs in expressing personality traits across diverse topics. Extensive experiments demonstrate the effectiveness of P-React in maintaining consistent and real personality.
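The abstract names a mixture-of-experts backbone with a Personality Specialization Loss (PSL) but does not define the loss, so the following is only one plausible, heavily simplified reading: a router over five experts, with an auxiliary cross-entropy term that encourages examples labelled with a given Big Five trait to be routed to a dedicated expert. All module names and the loss form are assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_TRAITS = 5  # Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism

class TraitMoE(nn.Module):
    """Toy MoE layer with one expert per Big Five trait (illustrative only)."""
    def __init__(self, dim: int, num_experts: int = NUM_TRAITS):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])

    def forward(self, hidden: torch.Tensor):
        logits = self.router(hidden)                          # (batch, num_experts)
        gate = F.softmax(logits, dim=-1)
        expert_out = torch.stack([e(hidden) for e in self.experts], dim=1)  # (batch, E, dim)
        mixed = (gate.unsqueeze(-1) * expert_out).sum(dim=1)
        return mixed, logits

def personality_specialization_loss(router_logits: torch.Tensor, trait_id: torch.Tensor) -> torch.Tensor:
    """One plausible specialization loss: push the router to send each trait-labelled
    example to its dedicated expert."""
    return F.cross_entropy(router_logits, trait_id)

layer = TraitMoE(dim=32)
hidden = torch.randn(4, 32)
trait_id = torch.tensor([0, 2, 4, 1])  # which trait each training example expresses
mixed, logits = layer(hidden)
aux_loss = personality_specialization_loss(logits, trait_id)
```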
Existing long-CoT reasoning models have achieved state-of-the-art performance in mathematical reasoning by generating reasoning trajectories with iterative self-verification and refinement. However, open-source long-CoT models rely only on natural language reasoning traces, making them computationally expensive and prone to errors without verification mechanisms. Tool-aided reasoning offers greater efficiency and reliability for large-scale numerical computation through frameworks like OpenHands that integrate code interpreters, but these agentic approaches struggle with abstract or conceptually complex reasoning problems.

DualDistill Framework and Agentic-R1 Model

Researchers from Carnegie Mellon University have proposed DualDistill, a distillation framework that combines trajectories from two complementary teachers to create a unified student model. The framework uses one reasoning-oriented teacher and one tool-augmented teacher to develop Agentic-R1, a model that learns to dynamically select the most appropriate strategy for each problem type. Agentic-R1 executes code for arithmetic and algorithmic tasks while employing natural language reasoning for abstract problems. DualDistill uses trajectory composition to distill knowledge from both complementary teachers, followed by self-distillation. The researchers used OpenHands as the agentic reasoning teacher and DeepSeek-R1 as the text-based reasoning teacher (paper: https://arxiv.org/abs/2507.05707).

Evaluation and Benchmarks

The proposed method is evaluated across multiple benchmarks, such as DeepMath-L and Combinatorics300, to test various aspects of mathematical reasoning. It is compared against the baselines DeepSeek-R1-Distill and Qwen-2.5-Instruct. The student model, Agentic-R1, shows strong performance improvements that benefit from both agentic and reasoning strategies. It outperforms two similarly sized models, each specializing in tool-assisted (Qwen2.5-7B-Instruct) or pure reasoning (DeepSeek-R1-Distill-7B) strategies. Agentic-R1 outperforms tool-based models by invoking reasoning strategies when required, while remaining more efficient than pure reasoning models on standard mathematical tasks.

Qualitative Analysis and Tool Usage Patterns

Qualitative examples show that Agentic-R1 exhibits intelligent tool usage patterns, activating code execution tools in 79.2% of computationally demanding Combinatorics300 problems while reducing activation to 52.0% on the simpler AMC dataset. Agentic-R1 learns to invoke tools appropriately through supervised fine-tuning alone, without explicit instruction, effectively balancing computational efficiency and reasoning accuracy.

Robustness to Imperfect Teachers

The framework remains effective even when guided by imperfect teachers. For instance, the agentic teacher achieves only 48.4% accuracy on Combinatorics300, yet the student model improved from 44.7% to 50.9%, ultimately outperforming its teacher.

Conclusion

In summary, the DualDistill framework combines the strengths of natural language reasoning and tool-assisted problem solving by distilling complementary knowledge from two specialized teacher models into a single versatile student model, Agentic-R1. Through trajectory composition and self-distillation, Agentic-R1 learns to dynamically select the most appropriate strategy for each problem, balancing precision and computational efficiency (a simplified sketch of the composition step appears below).
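As a rough illustration of the trajectory-composition idea referenced above, the sketch below routes each training problem to whichever teacher suits it and collects the resulting traces into a supervised fine-tuning dataset for the student. The routing heuristic, trace format, and function names are assumptions for illustration, not the released DualDistill pipeline.

```python
from dataclasses import dataclass

@dataclass
class Problem:
    question: str
    needs_computation: bool  # e.g. heavy arithmetic or enumeration

def reasoning_teacher(problem: Problem) -> str:
    """Stand-in for a long-CoT text teacher such as DeepSeek-R1."""
    return f"<think>step-by-step reasoning for: {problem.question}</think>"

def tool_teacher(problem: Problem) -> str:
    """Stand-in for a tool-augmented teacher (e.g. an OpenHands-style code agent)."""
    return f"<code>python solution for: {problem.question}</code>"

def compose_trajectories(problems: list[Problem]) -> list[dict]:
    """Build the student's fine-tuning dataset by pairing each problem with the
    trace from the more appropriate teacher."""
    dataset = []
    for p in problems:
        trace = tool_teacher(p) if p.needs_computation else reasoning_teacher(p)
        dataset.append({"prompt": p.question, "completion": trace})
    return dataset

problems = [
    Problem("Count the lattice paths in a 20x20 grid avoiding the main diagonal.", True),
    Problem("Prove that the sum of two odd numbers is even.", False),
]
print(compose_trajectories(problems))
```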
Evaluations across diverse mathematical reasoning benchmarks demonstrate that Agentic-R1 outperforms both pure reasoning and tool-based models, even when learning from imperfect teachers. This work highlights a promising approach to building adaptable AI agents capable of integrating heterogeneous problem-solving strategies for more robust and efficient reasoning. Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project. The post DualDistill and Agentic-R1: How AI Combines Natural Language and Tool Use for Superior Math Problem Solving appeared first on MarkTechPost.
arXiv:2507.16802v2 Announce Type: replace Abstract: Large Language Models (LLMs) exhibit considerable promise in financial applications; however, prevailing models frequently demonstrate limitations when confronted with scenarios that necessitate sophisticated reasoning capabilities, stringent trustworthiness criteria, and efficient adaptation to domain-specific requirements. We introduce the Agentar-Fin-R1 series of financial large language models (8B and 32B parameters), specifically engineered based on the Qwen3 foundation model to enhance reasoning capabilities, reliability, and domain specialization for financial applications. Our optimization approach integrates a high-quality, systematic financial task label system with a comprehensive multi-layered trustworthiness assurance framework. This framework encompasses high-quality trustworthy knowledge engineering, multi-agent trustworthy data synthesis, and rigorous data validation governance. Through label-guided automated difficulty-aware optimization, a two-stage training pipeline, and dynamic attribution systems, we achieve substantial improvements in training efficiency. Our models undergo comprehensive evaluation on mainstream financial benchmarks including Fineva, FinEval, and FinanceIQ, as well as general reasoning datasets such as MATH-500 and GPQA-diamond. To thoroughly assess real-world deployment capabilities, we propose the Finova evaluation benchmark, which focuses on agent-level financial reasoning and compliance verification. Experimental results demonstrate that Agentar-Fin-R1 not only achieves state-of-the-art performance on financial tasks but also exhibits exceptional general reasoning capabilities, validating its effectiveness as a trustworthy solution for high-stakes financial applications. The Finova benchmark is available at https://github.com/antgroup/Finova.
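The abstract mentions label-guided, difficulty-aware optimization without further detail; the sketch below shows one generic form such a scheme could take, estimating per-label difficulty from held-out accuracy and oversampling examples from harder labels. The names and the weighting rule are assumptions, not the Agentar-Fin-R1 pipeline.

```python
import random

def estimate_difficulty(eval_results: dict[str, list[bool]]) -> dict[str, float]:
    """Difficulty of a task label = 1 - accuracy on held-out probes for that label."""
    return {label: 1.0 - sum(ok) / len(ok) for label, ok in eval_results.items()}

def difficulty_aware_sample(dataset: list[dict], difficulty: dict[str, float], k: int) -> list[dict]:
    """Draw k training examples with probability proportional to their label's difficulty,
    keeping a small floor weight so easy labels are never dropped entirely."""
    weights = [max(difficulty.get(ex["label"], 0.5), 0.05) for ex in dataset]
    return random.choices(dataset, weights=weights, k=k)

eval_results = {"compliance": [True, False, False], "risk_calc": [True, True, True]}
dataset = [{"label": "compliance", "text": "..."}, {"label": "risk_calc", "text": "..."}]
difficulty = estimate_difficulty(eval_results)
batch = difficulty_aware_sample(dataset, difficulty, k=4)
```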