YouZum

Tencent Open Sources Hunyuan-A13B: A 13B Active Parameter MoE Model with Dual-Mode Reasoning and 256K Context

Tencent’s Hunyuan team has introduced Hunyuan-A13B, a new open-source large language model built on a sparse Mixture-of-Experts (MoE) architecture. While the model comprises 80 billion total parameters, only 13 billion are active during inference, offering an efficient balance between performance and computational cost. It supports Grouped Query Attention (GQA), a 256K context length, and a dual-mode reasoning framework that toggles between fast and slow thinking. Designed for efficient deployment and robust reasoning, Hunyuan-A13B achieves top-tier performance across agentic benchmarks including BFCL-v3, τ-Bench, C3-Bench, and ComplexFuncBench, often outperforming larger models in tool-calling and long-context scenarios.

Architecture: Sparse MoE with 13B Active Parameters

At its core, Hunyuan-A13B follows a fine-grained MoE design comprising 1 shared expert and 64 non-shared experts, with 8 experts activated per forward pass. This architecture, backed by scaling experiments, ensures performance consistency while keeping inference costs low. The model has 32 layers, uses SwiGLU activations and a 128K vocabulary, and integrates GQA for improved memory efficiency during long-context inference.

The MoE setup is paired with an optimized training curriculum: a 20T-token pretraining phase, followed by fast annealing and long-context adaptation. This last phase scales the context window first to 32K and then to 256K tokens using NTK-aware positional encoding, ensuring stable performance at long sequence lengths.

Dual-Mode Reasoning: Fast and Slow Thinking

A standout feature of Hunyuan-A13B is its dual-mode Chain-of-Thought (CoT) capability. It supports both a low-latency fast-thinking mode for routine queries and a more elaborate slow-thinking mode for multi-step reasoning. The modes are controlled through a simple tag system: /no think for fast inference and /think for reflective reasoning. This flexibility lets users match computational cost to task complexity, as in the sketch below.
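The article describes the toggle only at the level of these two tags, so the snippet below is a minimal illustrative sketch that assumes the tag is simply prepended to the user query; the official model card defines the exact chat template.

```python
# Minimal sketch of the dual-mode tag system described above.
# The prompt construction is an assumption for illustration; only the
# /think and /no think tags come from the article itself.

def build_prompt(user_query: str, slow_thinking: bool) -> str:
    """Prepend the reasoning-mode tag to a user query.

    /think    -> slow, reflective multi-step reasoning
    /no think -> fast, low-latency responses
    """
    tag = "/think" if slow_thinking else "/no think"
    return f"{tag} {user_query}"

# Routine lookup: fast mode keeps latency low.
fast_prompt = build_prompt("What is the capital of France?", slow_thinking=False)

# Multi-step problem: slow mode triggers chain-of-thought reasoning.
slow_prompt = build_prompt(
    "A train leaves at 9:00 at 80 km/h; another at 10:00 at 100 km/h. When do they meet?",
    slow_thinking=True,
)

print(fast_prompt)
print(slow_prompt)
```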
Post-Training: Reinforcement Learning with Task-Specific Reward Models

The post-training pipeline of Hunyuan-A13B includes multi-stage supervised fine-tuning (SFT) and reinforcement learning (RL) across both reasoning-specific and general tasks. The RL stages incorporate outcome-based rewards and tool-specific feedback, including sandbox execution environments for code and rule-based checks for agents.

In the agent training phase, the team synthesized diverse tool-use scenarios with planner, checker, and tool roles, generating over 20,000 format combinations. This reinforced Hunyuan-A13B’s ability to execute real-world workflows such as spreadsheet processing, information search, and structured reasoning.

Evaluation: State-of-the-Art Agentic Performance

Hunyuan-A13B shows strong benchmark results across diverse NLP tasks:

- On MATH, CMATH, and GPQA, it scores on par with or above larger dense and MoE models.
- It surpasses Qwen3-A22B and DeepSeek R1 in logical reasoning (BBH: 89.1; ZebraLogic: 84.7).
- In coding, it holds its own with 83.9 on MBPP and 69.3 on MultiPL-E.
- For agent tasks, it leads on BFCL-v3 (78.3) and ComplexFuncBench (61.2), validating its tool-usage capabilities.

Long-context comprehension is another highlight. On PenguinScrolls, it scores 87.7, just shy of Gemini 2.5 Pro. On RULER, it sustains high performance (73.9) even at 64K–128K context, outperforming larger models such as Qwen3-A22B and DeepSeek R1 in context resilience.

Inference Optimization and Deployment

Hunyuan-A13B is fully integrated with popular inference frameworks such as vLLM, SGLang, and TensorRT-LLM. It supports precision formats including W16A16, W8A8, and KV Cache FP8, along with features like Auto Prefix Caching and Chunk Prefill. It achieves up to 1981.99 tokens/sec throughput at batch size 32 (2048 input length, 14336 output length), making it practical for real-time applications; a minimal serving sketch appears at the end of this article.

Open Source and Industry Relevance

Available on Hugging Face and GitHub, Hunyuan-A13B is released with permissive open-source licensing. It is engineered for efficient research and production use, especially in latency-sensitive environments and long-context tasks. By combining MoE scalability, agentic reasoning, and open-source accessibility, Tencent’s Hunyuan-A13B offers a compelling alternative to heavyweight LLMs, enabling broader experimentation and deployment without sacrificing capability.
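A minimal offline-inference sketch with vLLM might look as follows. The Hugging Face repository id, context-length setting, and sampling values are assumptions for illustration; consult the official model card for the exact names and recommended options.

```python
# Illustrative deployment sketch using vLLM's offline inference API.
from vllm import LLM, SamplingParams

llm = LLM(
    model="tencent/Hunyuan-A13B-Instruct",  # assumed Hugging Face repo id
    trust_remote_code=True,                 # custom MoE architecture code
    max_model_len=32768,                    # raise toward 256K if memory allows
)

sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=512)

prompts = [
    "/no think Summarize the benefits of sparse MoE models in two sentences.",
    "/think Plan the steps to extract tables from a spreadsheet and rank them by size.",
]

# Each output carries the generated completions for one prompt.
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```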

CTGT wins Best Presentation Style award at VB Transform 2025

San Francisco-based CTGT, a startup focused on making AI more trustworthy through feature-level model customization, won the Best Presentation Style award at VB Transform 2025 in San Francisco. Founded by 23-year-old Cyril Gorlla, the company showcased how its technology helps enterprises overcome AI trust barriers by directly modifying model features.

Enhancing Automatic Term Extraction with Large Language Models via Syntactic Retrieval

arXiv:2506.21222v1 Announce Type: new Abstract: Automatic Term Extraction (ATE) identifies domain-specific expressions that are crucial for downstream tasks such as machine translation and information retrieval. Although large language models (LLMs) have significantly advanced various NLP tasks, their potential for ATE has scarcely been examined. We propose a retrieval-based prompting strategy that, in the few-shot setting, selects demonstrations according to syntactic rather than semantic similarity. This syntactic retrieval method is domain-agnostic and provides more reliable guidance for capturing term boundaries. We evaluate the approach in both in-domain and cross-domain settings, analyzing how lexical overlap between the query sentence and its retrieved examples affects performance. Experiments on three specialized ATE benchmarks show that syntactic retrieval improves F1-score. These findings highlight the importance of syntactic cues when adapting LLMs to terminology-extraction tasks.
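As a rough illustration of the idea, one might retrieve demonstrations by comparing part-of-speech tag sequences instead of embeddings. The abstract does not specify the similarity measure, so the POS-based matching below is an assumption, not the paper's exact method.

```python
# Hedged sketch of syntactic (rather than semantic) demonstration retrieval.
import difflib
import spacy

nlp = spacy.load("en_core_web_sm")  # any POS tagger would do here

def pos_signature(sentence: str) -> list[str]:
    """Represent a sentence by its sequence of coarse POS tags."""
    return [tok.pos_ for tok in nlp(sentence)]

def syntactic_similarity(a: str, b: str) -> float:
    """Similarity of two sentences based only on their POS-tag sequences."""
    return difflib.SequenceMatcher(None, pos_signature(a), pos_signature(b)).ratio()

def retrieve_demonstrations(query: str, pool: list[str], k: int = 3) -> list[str]:
    """Pick the k pool sentences whose syntax best matches the query."""
    return sorted(pool, key=lambda s: syntactic_similarity(query, s), reverse=True)[:k]

pool = [
    "The gearbox transmits torque to the drive shaft.",
    "Researchers announced the findings yesterday.",
    "The compressor feeds refrigerant into the condenser coil.",
]
print(retrieve_demonstrations("The turbine converts steam into rotational energy.", pool, k=2))
```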

FineWeb2: One Pipeline to Scale Them All — Adapting Pre-Training Data Processing to Every Language

arXiv:2506.20920v1 Announce Type: new Abstract: Pre-training state-of-the-art large language models (LLMs) requires vast amounts of clean and diverse text data. While the open development of large high-quality English pre-training datasets has seen substantial recent progress, training performant multilingual LLMs remains a challenge, in large part due to the inherent difficulty of tailoring filtering and deduplication pipelines to a large number of languages. In this work, we introduce a new pre-training dataset curation pipeline based on FineWeb that can be automatically adapted to support any language. We extensively ablate our pipeline design choices on a set of nine diverse languages, guided by a set of meaningful and informative evaluation tasks that were chosen through a novel selection process based on measurable criteria. Ultimately, we show that our pipeline can be used to create non-English corpora that produce more performant models than prior datasets. We additionally introduce a straightforward and principled approach to rebalance datasets that takes into consideration both duplication count and quality, providing an additional performance uplift. Finally, we scale our pipeline to over 1000 languages using almost 100 Common Crawl snapshots to produce FineWeb2, a new 20 terabyte (5 billion document) multilingual dataset which we release along with our pipeline, training, and evaluation codebases.
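The rebalancing step is only named in this abstract, not specified, so the fragment below is a loose illustrative sketch assuming each document's sampling weight combines its quality score with a damped duplicate count.

```python
# Illustrative sketch only: the weighting formula (quality times a damped
# duplicate count) is an assumption, not the paper's stated method.
import math
import random

def sampling_weight(quality: float, dup_count: int) -> float:
    """Combine a [0, 1] quality score with a damped duplicate count."""
    return quality * (1.0 + math.log1p(dup_count))

docs = [
    {"id": "a", "quality": 0.9, "dup_count": 50},   # popular, high quality
    {"id": "b", "quality": 0.9, "dup_count": 0},    # unique, high quality
    {"id": "c", "quality": 0.2, "dup_count": 200},  # popular boilerplate
]

weights = [sampling_weight(d["quality"], d["dup_count"]) for d in docs]
sampled = random.choices(docs, weights=weights, k=2)
print([d["id"] for d in sampled])
```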
