YouZum

News

AI, Committee, News, Uncategorized

From Perception to Action: The Role of World Models in Embodied AI Systems

Introduction to Embodied AI Agents

Embodied AI agents are systems that exist in physical or virtual form, such as robots, wearables, or avatars, and can interact with their surroundings. Unlike static web-based bots, these agents perceive the world and act meaningfully within it. Their embodiment enhances physical interaction, human trust, and human-like learning. Recent advances in large language and vision-language models have powered more capable, autonomous agents that can plan, reason, and adapt to users' needs. These agents understand context, retain memory, and can collaborate or request clarification when needed. Despite this progress, challenges remain, especially with generative models that often prioritize detail over efficient reasoning and decision-making.

World Modeling and Applications

Researchers at Meta AI are exploring how embodied AI agents, such as avatars, wearables, and robots, can interact more naturally with users and their surroundings by sensing, learning, and acting within real or virtual environments. Central to this is "world modeling," which combines perception, reasoning, memory, and planning to help agents understand both physical spaces and human intentions. These agents are reshaping industries such as healthcare, entertainment, and labor. The study highlights future goals, such as enhancing collaboration, social intelligence, and ethical safeguards, particularly around privacy and anthropomorphism, as these agents become increasingly integrated into our lives.

Types of Embodied Agents

Embodied AI agents come in three forms, virtual, wearable, and robotic, and are designed to interact with the world in much the same way humans do. Virtual agents, such as therapy bots or avatars in the metaverse, simulate emotions to foster empathetic interactions. Wearable agents, such as those in smart glasses, share the user's view and assist with real-time tasks or provide cognitive support. Robotic agents operate in physical spaces, assisting with complex or high-risk tasks such as caregiving or disaster response. These agents not only enhance daily life but also bring us closer to general AI by learning through real-world experience, perception, and physical interaction.

Importance of World Models

World models are crucial for embodied AI agents, enabling them to perceive, understand, and interact with their environment much as humans do. These models integrate sensory inputs such as vision, sound, and touch with memory and reasoning capabilities to form a cohesive understanding of the world. This enables agents to anticipate outcomes, plan effective actions, and adapt to new situations. By incorporating both physical surroundings and user intentions, world models facilitate more natural and intuitive interactions between humans and AI agents, enhancing agents' ability to perform complex tasks autonomously.

To enable truly autonomous learning in embodied AI, future research must integrate passive observation (such as vision-language learning) with active interaction (such as reinforcement learning). Passive systems excel at extracting structure from data but lack grounding in real-world actions; active systems learn by doing but are often sample-inefficient. By combining both, an agent can acquire abstract knowledge and apply it through goal-driven behavior.
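The paper discusses this architecture at a conceptual level. As a rough illustration only, here is a minimal Python sketch of the perceive-predict-plan loop a world-model-driven agent might run; the class and method names (WorldModel, perceive, predict, plan) are assumptions for the example, not an API from the paper.

```python
# Minimal conceptual sketch of a world-model-driven agent loop.
# All names are illustrative; the paper describes perception, memory,
# reasoning, and planning only at a conceptual level.

from dataclasses import dataclass, field


@dataclass
class WorldModel:
    """Maintains the agent's internal state of the environment and the user."""
    memory: list = field(default_factory=list)  # episodic memory of observations
    belief: dict = field(default_factory=dict)  # current estimate of world + intent

    def perceive(self, observation: dict) -> None:
        # Passive path: fuse multimodal inputs (vision, sound, touch)
        # into the belief state and append to memory.
        self.belief.update(observation)
        self.memory.append(observation)

    def predict(self, action: str) -> dict:
        # Roll the model forward: what does the world look like after `action`?
        # A real system would use a learned dynamics model here.
        return {**self.belief, "last_action": action}

    def plan(self, goal: str, candidate_actions: list[str]) -> str:
        # Active path: score candidate actions by how well the predicted
        # outcome matches the goal, and pick the best one.
        def score(action: str) -> float:
            outcome = self.predict(action)
            return 1.0 if goal in outcome["last_action"] else 0.0
        return max(candidate_actions, key=score)


# One perception-action cycle
model = WorldModel()
model.perceive({"vision": "cup on table", "audio": "user: bring me the cup"})
action = model.plan(goal="cup", candidate_actions=["grasp cup", "wave", "idle"])
print(action)  # -> "grasp cup"
```

A real agent would replace the dictionary belief with a learned latent state and the toy scoring function with a learned value or reward model; the loop structure, perceive, then predict, then plan, is the part the survey emphasizes.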
Looking ahead, collaboration among multiple agents adds complexity, requiring effective communication, coordination, and conflict resolution. Strategies such as emergent communication, negotiation, and multi-agent reinforcement learning will be key. Ultimately, the aim is to build adaptable, interactive AI that learns the way humans do: through experience.

Conclusion

In conclusion, the study examines how embodied AI agents, such as virtual avatars, wearable devices, and robots, can interact with the world more like humans by perceiving, learning, and acting within their environments. Central to their success is building "world models" that help them understand context, predict outcomes, and plan effectively. These agents are already reshaping areas such as therapy, entertainment, and real-time assistance. As they become more integrated into daily life, ethical issues such as privacy and human-like behavior require careful attention. Future work will focus on improving learning, collaboration, and social intelligence, aiming for more natural, intuitive, and responsible human-AI interaction.

Check out the Paper here. All credit for this research goes to the researchers of this project. The post From Perception to Action: The Role of World Models in Embodied AI Systems appeared first on MarkTechPost.

From Perception to Action: The Role of World Models in Embodied AI Systems Read Post »

AI, Committee, News, Uncategorized

Moonshot AI Releases Kimi K2: A Trillion-Parameter MoE Model Focused on Long Context, Code, Reasoning, and Agentic Behavior

Kimi K2, launched by Moonshot AI in July 2025, is a purpose-built, open-source Mixture-of-Experts (MoE) model: 1 trillion total parameters, with 32 billion active per token. It was trained with the custom MuonClip optimizer on 15.5 trillion tokens, achieving stable training at this unprecedented scale without the instabilities typically seen in ultra-large models.

Unlike traditional chatbots, K2 is architected specifically for agentic workflows. It features native Model Context Protocol (MCP) support and was trained on simulated multi-step tool interactions, enabling it to autonomously decompose tasks, execute tool sequences, write and debug code, analyze data, and orchestrate workflows, all with minimal human oversight.

Why Agentic over Conversational?

While advanced models like GPT-4.1 and Claude Sonnet 4 excel at language reasoning, Kimi K2 moves from reasoning to action. It doesn't just respond; it executes. The core shift lies in enabling real-world workflows:

- Autonomous code execution
- Data analysis with charts and interfaces
- End-to-end web application development
- Orchestration of 17+ tools per session without human input

K2's training incorporated millions of synthetic dialogues, each rated by an LLM-based evaluator. These dialogues simulate realistic tool-use scenarios, giving K2 a practical edge in tool selection and multi-step execution.

Architecture and Training Innovations

K2's technical design includes several novel elements:

- MoE Transformer design: 384 experts with routing to 8 active experts per token, plus 1 shared expert for global context (see the sketch below). The model uses 64 attention heads and supports a 128K-token context window.
- MuonClip optimizer: a modified version of Muon that stabilizes training at scale. It uses qk-clipping to constrain attention scores by rescaling the Q/K matrices, preventing instability in deep layers.
- Training dataset: over 15.5 trillion tokens from multilingual and multimodal sources, giving K2 robust generalization and tool-use reasoning across diverse domains.

The model comes in two variants: Kimi-K2-Base, the foundational model suited to fine-tuning and building customized solutions, and Kimi-K2-Instruct, the post-trained version optimized for immediate use in general-purpose chat and tool-using agentic tasks. Instruct is reflex-grade: optimized for fast, low-latency interaction rather than long-form deliberation.
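To make the expert-routing design concrete, here is a minimal PyTorch sketch of top-k routing with an always-on shared expert, the general pattern described above. The sizes are toy values and the code is a generic MoE illustration under those assumptions, not Moonshot's implementation.

```python
# Toy sketch of MoE routing with top-k experts plus one shared expert.
# Illustrative only: K2 uses 384 experts, top-8 routing, and a far larger
# hidden size; the numbers below are shrunk so this runs anywhere.

import torch
import torch.nn as nn

NUM_EXPERTS, TOP_K, D_MODEL = 16, 2, 64  # K2: 384 experts, top-8


class MoELayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.router = nn.Linear(D_MODEL, NUM_EXPERTS)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(D_MODEL, 4 * D_MODEL), nn.GELU(),
                          nn.Linear(4 * D_MODEL, D_MODEL))
            for _ in range(NUM_EXPERTS)
        )
        self.shared_expert = nn.Sequential(  # always-on path for global context
            nn.Linear(D_MODEL, 4 * D_MODEL), nn.GELU(),
            nn.Linear(4 * D_MODEL, D_MODEL)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        logits = self.router(x)                            # (tokens, num_experts)
        weights, idx = logits.softmax(-1).topk(TOP_K, dim=-1)
        weights = weights / weights.sum(-1, keepdim=True)  # renormalize top-k
        out = self.shared_expert(x)                        # shared path, every token
        for k in range(TOP_K):                             # sparse expert paths
            for e in range(NUM_EXPERTS):
                mask = idx[:, k] == e                      # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * self.experts[e](x[mask])
        return out


tokens = torch.randn(5, D_MODEL)
print(MoELayer()(tokens).shape)  # torch.Size([5, 64])
```

For clarity the sketch loops over experts; production implementations batch the tokens assigned to each expert and dispatch them in parallel, which is where the efficiency of activating only 32B of 1T parameters comes from.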
Performance Benchmarks

Kimi K2 not only matches but often surpasses closed-source models on key benchmarks, outperforming GPT-4.1 across coding and agentic reasoning and running close to Claude Sonnet 4:

Benchmark                 | Kimi K2 | GPT-4.1 | Claude Sonnet 4
SWE-bench Verified        | 71.6%   | 54.6%   | ~72.7%
Agentic Coding (Tau2)     | 65.8%   | 45.2%   | ~61%
LiveCodeBench v6 (Pass@1) | 53.7%   | 44.7%   | 47.4%
MATH-500                  | 97.4%   | 92.4%   | –
MMLU                      | 89.5%   | ~90.4%  | ~92.9%

Its performance on agentic benchmarks such as Tau2 and LiveCodeBench demonstrates its capacity to handle multi-step, real-world coding tasks, outperforming many proprietary models.

Cost Efficiency

Perhaps the most disruptive element is pricing (per million tokens):

- Claude Sonnet 4: $3.00 input / $15.00 output
- Gemini 2.5 Pro: $2.50 input / $15.00 output
- Kimi K2: $0.60 input / $2.50 output

Kimi K2 is roughly 5x cheaper than Claude or Gemini while offering equal or better performance on several metrics. This cost advantage, combined with open access and support for local deployment, positions K2 as an economically viable alternative for developers, enterprises, and research teams.

Strategic Shift: From Thinking to Acting

Kimi K2 marks a pivotal moment in AI's evolution, from thinking agents to acting systems. With native tool-use capabilities and built-in support for multi-agent protocols, it goes far beyond static chat interfaces: it can trigger workflows, make decisions, execute API calls, and deliver tangible outputs autonomously. Moreover, its release comes at a time when most such capabilities are either locked behind expensive APIs or limited to research labs. K2 is:

- Open-source, requiring no subscription
- Globally accessible, not limited to US-based deployment
- Designed for developers, not just end users

Broader Implications

Will agentic architecture become the norm? K2's strong performance on tool-use tasks could push proprietary players to rethink their architectures. Can open-source efforts from Asia compete at global scale? With K2, Moonshot AI joins others like DeepSeek in showing that top-tier performance doesn't have to originate in Silicon Valley. What's next in the agentic evolution? Future models may combine video, robotics, and embodied reasoning to further expand what agentic AI can accomplish.

Conclusion

Kimi K2 isn't just a bigger model; it's a blueprint for what comes after the reasoning race: execution-first AI. By combining trillion-parameter scale, low inference cost, and deeply integrated agentic capabilities, Kimi K2 opens the door to AI systems that do more than generate: they build, act, and solve autonomously.

Check out the Models on Hugging Face and the GitHub Page. All credit for this research goes to the researchers of this project. The post Moonshot AI Releases Kimi K2: A Trillion-Parameter MoE Model Focused on Long Context, Code, Reasoning, and Agentic Behavior appeared first on MarkTechPost.

Moonshot AI Releases Kimi K2: A Trillion-Parameter MoE Model Focused on Long Context, Code, Reasoning, and Agentic Behavior Read Post »

AI, Committee, News, Uncategorized

A Survey on Latent Reasoning

arXiv:2507.06203v2 Announce Type: replace Abstract: Large Language Models (LLMs) have demonstrated impressive reasoning capabilities, especially when guided by explicit chain-of-thought (CoT) reasoning that verbalizes intermediate steps. While CoT improves both interpretability and accuracy, its dependence on natural language reasoning limits the model's expressive bandwidth. Latent reasoning tackles this bottleneck by performing multi-step inference entirely in the model's continuous hidden state, eliminating token-level supervision. To advance this line of research, this survey provides a comprehensive overview of the emerging field of latent reasoning. We begin by examining the foundational role of neural network layers as the computational substrate for reasoning, highlighting how hierarchical representations support complex transformations. Next, we explore diverse latent reasoning methodologies, including activation-based recurrence, hidden state propagation, and fine-tuning strategies that compress or internalize explicit reasoning traces. Finally, we discuss advanced paradigms such as infinite-depth latent reasoning via masked diffusion models, which enable globally consistent and reversible reasoning processes. By unifying these perspectives, we aim to clarify the conceptual landscape of latent reasoning and chart future directions for research at the frontier of LLM cognition. An associated GitHub repository collecting the latest papers and repos is available at: https://github.com/multimodal-art-projection/LatentCoT-Horizon/.

A Survey on Latent Reasoning Read Post »

AI, Committee, News, Uncategorized

Robust Multimodal Large Language Models Against Modality Conflict

arXiv:2507.07151v1 Announce Type: cross Abstract: Despite the impressive capabilities of multimodal large language models (MLLMs) in vision-language tasks, they are prone to hallucinations in real-world scenarios. This paper investigates the hallucination phenomenon in MLLMs from the perspective of modality conflict. Unlike existing works focusing on conflicts between model responses and inputs, we study the inherent conflicts between inputs from different modalities that place MLLMs in a dilemma and directly lead to hallucinations. We formally define modality conflict and construct a dataset named Multimodal Modality Conflict (MMMC) to simulate this phenomenon in vision-language tasks. Three methods based on prompt engineering, supervised fine-tuning, and reinforcement learning are proposed to alleviate the hallucination caused by modality conflict. Extensive experiments are conducted on the MMMC dataset to analyze the merits and demerits of these methods. Our results show that the reinforcement learning method achieves the best performance in mitigating hallucination under modality conflict, while the supervised fine-tuning method shows promising and stable performance. Our work sheds light on the unnoticed modality conflict that leads to hallucinations and provides more insights into the robustness of MLLMs.

Robust Multimodal Large Language Models Against Modality Conflict Read Post »

AI, Committee, News, Uncategorized

PLAN-TUNING: Post-Training Language Models to Learn Step-by-Step Planning for Complex Problem Solving

arXiv:2507.07495v1 Announce Type: new Abstract: Recently, decomposing complex problems into simple subtasks, a crucial part of human-like natural planning, has significantly boosted the performance of large language models (LLMs). However, leveraging such planning structures during post-training to boost the performance of smaller open-source LLMs remains underexplored. Motivated by this, we introduce PLAN-TUNING, a unified post-training framework that (i) distills synthetic task decompositions (termed "planning trajectories") from large-scale LLMs and (ii) fine-tunes smaller models via supervised and reinforcement-learning objectives designed to mimic these planning processes to improve complex reasoning. On the GSM8k and MATH benchmarks, plan-tuned models outperform strong baselines by an average of $\sim7\%$. Furthermore, plan-tuned models show better generalization on out-of-domain datasets, with average $\sim10\%$ and $\sim12\%$ performance improvements on OlympiadBench and AIME 2024, respectively. Our detailed analysis demonstrates how planning trajectories improve complex reasoning capabilities, showing that PLAN-TUNING is an effective strategy for improving the task-specific performance of smaller LLMs.
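As a rough sketch of what distilling planning trajectories into supervised fine-tuning data could look like, here is an illustrative formatting example. The field names, prompt template, and helper function are assumptions for the sketch; the paper's actual data format may differ.

```python
# Illustrative sketch: packing a teacher-distilled plan plus solution into
# one SFT training example. Field names and the prompt template are
# assumptions for illustration, not the paper's actual format.

def build_sft_example(problem: str, plan_steps: list[str], solution: str) -> dict:
    """Combine a distilled planning trajectory and final answer into one target."""
    plan = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(plan_steps))
    return {
        "prompt": f"Problem: {problem}\nFirst write a plan, then solve.",
        "target": f"Plan:\n{plan}\nSolution: {solution}",
    }

example = build_sft_example(
    problem="Natalia sold 48 clips in April and half as many in May. Total?",
    plan_steps=["Compute May sales as 48 / 2 = 24.", "Add April and May: 48 + 24."],
    solution="72",
)
print(example["target"])
```

The key idea this format captures is that the smaller model is trained to produce the decomposition before the answer, so the planning behavior itself, not just the final result, is imitated.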

PLAN-TUNING: Post-Training Language Models to Learn Step-by-Step Planning for Complex Problem Solving Read Post »

AI, Committee, News, Uncategorized

SAS: Simulated Attention Score

arXiv:2507.07694v1 Announce Type: new Abstract: The attention mechanism is a core component of the Transformer architecture. Various methods have been developed to compute attention scores, including multi-head attention (MHA), multi-query attention, and grouped-query attention. We further analyze MHA and observe that its performance improves as the number of attention heads increases, provided the hidden size per head remains sufficiently large. Therefore, increasing both the head count and the hidden size per head with minimal parameter overhead can yield significant performance gains at low cost. Motivated by this insight, we introduce Simulated Attention Score (SAS), which maintains a compact model size while simulating a larger number of attention heads and a larger hidden feature dimension per head. This is achieved by projecting a low-dimensional head representation into a higher-dimensional space, effectively increasing attention capacity without increasing parameter count. Beyond the head representations, we further extend the simulation approach to the feature dimensions of the key and query embeddings, enhancing expressiveness by mimicking the behavior of a larger model while preserving the original model size. To control the parameter cost, we also propose Parameter-Efficient Attention Aggregation (PEAA). Comprehensive experiments on a variety of datasets and tasks demonstrate the effectiveness of SAS, which achieves significant improvements over different attention variants.
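To give a feel for the core idea, here is a minimal PyTorch sketch of simulating extra attention heads by projecting a compact per-head representation into a higher-dimensional space before computing attention. This is one interpretation of the stated idea with assumed shapes and names; it is not the authors' code, and the paper's exact projections may differ.

```python
# Minimal sketch of the "simulated heads" idea: keep parameters for a few
# low-dimensional heads, then cheaply project up to more and wider heads
# before attention. Shapes and names are assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

D_MODEL, N_HEADS, D_HEAD = 64, 4, 16   # compact parameterization
SIM_HEADS, SIM_D_HEAD = 8, 32          # simulated, larger attention


class SimulatedAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.qkv = nn.Linear(D_MODEL, 3 * N_HEADS * D_HEAD)
        # Cheap projections that expand head count and per-head width
        self.head_up = nn.Linear(N_HEADS, SIM_HEADS, bias=False)
        self.dim_up = nn.Linear(D_HEAD, SIM_D_HEAD, bias=False)
        self.out = nn.Linear(SIM_HEADS * SIM_D_HEAD, D_MODEL)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d_model)
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def expand(h: torch.Tensor) -> torch.Tensor:
            h = h.view(b, t, N_HEADS, D_HEAD)
            h = self.dim_up(h)                              # widen each head
            h = self.head_up(h.transpose(-1, -2))           # grow head count
            return h.transpose(-1, -2).permute(0, 2, 1, 3)  # (b, heads, t, d)

        q, k, v = expand(q), expand(k), expand(v)
        attn = F.scaled_dot_product_attention(q, k, v)      # simulated-size attention
        return self.out(attn.permute(0, 2, 1, 3).reshape(b, t, -1))


x = torch.randn(2, 10, D_MODEL)
print(SimulatedAttention()(x).shape)  # torch.Size([2, 10, 64])
```

The design point the sketch illustrates: attention is computed as if there were 8 heads of width 32, while the learned parameters remain those of 4 heads of width 16 plus two small projection matrices.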

SAS: Simulated Attention Score Read Post »

AI, Committee, News, Uncategorized

Rethinking Verification for LLM Code Generation: From Generation to Testing

arXiv:2507.06920v2 Announce Type: replace Abstract: Large language models (LLMs) have recently achieved notable success on code-generation benchmarks such as HumanEval and LiveCodeBench. However, a detailed examination reveals that these evaluation suites often comprise only a limited number of homogeneous test cases, allowing subtle faults to go undetected. This not only artificially inflates measured performance but also compromises accurate reward estimation in reinforcement learning frameworks that use verifiable rewards (RLVR). To address these critical shortcomings, we systematically investigate the test-case generation (TCG) task by proposing multi-dimensional metrics designed to rigorously quantify test-suite thoroughness. Furthermore, we introduce SAGA, a human-LLM collaborative method that leverages human programming expertise and LLM reasoning capability to significantly enhance both the coverage and the quality of generated test cases. In addition, we develop TCGBench to facilitate the study of the TCG task. Experiments show that SAGA achieves a detection rate of 90.62% and a verifier accuracy of 32.58% on TCGBench. The verifier accuracy of the code-generation evaluation benchmark synthesized by SAGA is 10.78% higher than that of LiveCodeBench-v6. These results demonstrate the effectiveness of our proposed method. We hope this work contributes to building a scalable foundation for reliable LLM code evaluation, further advancing RLVR in code generation, and paving the way for automated adversarial test synthesis and adaptive benchmark integration.
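As a concrete illustration of why homogeneous test cases inflate scores, here is a minimal sketch of a detection-rate-style metric: the fraction of known-buggy implementations that at least one test case catches. The function names and data are illustrative assumptions, not the paper's metric definitions.

```python
# Minimal sketch of a "detection rate" style thoroughness metric: the
# fraction of known-buggy implementations caught by at least one test.
# Illustrative only; not the paper's exact metric definition.

from typing import Callable

def detection_rate(tests: list[tuple[int, int]],
                   buggy_impls: list[Callable[[int], int]]) -> float:
    def caught(impl: Callable[[int], int]) -> bool:
        return any(impl(x) != expected for x, expected in tests)
    return sum(caught(impl) for impl in buggy_impls) / len(buggy_impls)

# Target behavior: f(x) = 2 * x. Two buggy variants, one subtle.
bugs = [lambda x: x + x if x < 100 else x,  # fails only on large inputs
        lambda x: x * 3]                    # fails almost everywhere

weak_suite = [(1, 2), (2, 4)]               # homogeneous small inputs
strong_suite = weak_suite + [(1000, 2000)]  # adds a large-input case

print(detection_rate(weak_suite, bugs))     # 0.5  (subtle bug undetected)
print(detection_rate(strong_suite, bugs))   # 1.0
```

The weak suite passes the subtle bug and would report an inflated score for a model that produced it, which is exactly the failure mode the abstract describes for small, homogeneous benchmark suites.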

Rethinking Verification for LLM Code Generation: From Generation to Testing Read Post »

AI, Committee, News, Uncategorized

Breaking PEFT Limitations: Leveraging Weak-to-Strong Knowledge Transfer for Backdoor Attacks in LLMs

arXiv:2409.17946v4 Announce Type: replace-cross Abstract: Despite being widely applied due to their exceptional capabilities, Large Language Models (LLMs) have been proven vulnerable to backdoor attacks. These attacks introduce targeted vulnerabilities into LLMs by poisoning training samples and applying full-parameter fine-tuning (FPFT). However, such attacks are limited because they require significant computational resources, especially as LLMs grow in size. Parameter-efficient fine-tuning (PEFT) offers an alternative, but its restricted parameter updates may impede the alignment of triggers with target labels. In this study, we first verify that backdoor attacks with PEFT may struggle to achieve feasible performance. To address this issue and improve the effectiveness of backdoor attacks under PEFT, we propose a novel weak-to-strong backdoor attack algorithm based on Feature Alignment-enhanced Knowledge Distillation (FAKD). Specifically, we poison small-scale language models through FPFT to serve as the teacher model. The teacher model then covertly transfers the backdoor to the large-scale student model through FAKD, which employs PEFT. Theoretical analysis reveals that FAKD has the potential to augment the effectiveness of backdoor attacks. We demonstrate the superior performance of FAKD on classification tasks across four language models, four backdoor attack algorithms, and two different teacher-model architectures. Experimental results indicate success rates close to 100% for backdoor attacks targeting PEFT.

Breaking PEFT Limitations: Leveraging Weak-to-Strong Knowledge Transfer for Backdoor Attacks in LLMs Read Post »
