
Anthrogen Introduces Odyssey: A 102B Parameter Protein Language Model that Replaces Attention with Consensus and Trains with Discrete Diffusion

Anthrogen has introduced Odyssey, a family of protein language models for sequence and structure generation, protein editing, and conditional design. The production models range from 1.2B to 102B parameters. Anthrogen's research team positions Odyssey as a frontier, multimodal model for real protein design workloads and notes that an API is in early access. Paper: https://www.biorxiv.org/content/10.1101/2025.10.15.682677v1.full.pdf

What problem does Odyssey target?

Protein design couples amino acid sequence with 3D structure and with functional context. Many prior models adopt self-attention, which mixes information across the entire sequence at once. Proteins follow geometric constraints, so long-range effects travel through local neighborhoods in 3D. Anthrogen frames this as a locality problem and proposes a new propagation rule, called Consensus, that better matches the domain.

Input representation and tokenization

Odyssey is multimodal. It embeds sequence tokens, structure tokens, and lightweight functional cues, then fuses them into a shared representation. For structure, Odyssey uses a finite scalar quantizer (FSQ) to convert 3D geometry into compact tokens; think of FSQ as an alphabet for shapes that lets the model read structure as easily as sequence (a minimal sketch appears after this walkthrough). Functional cues can include domain tags, secondary structure hints, orthologous group labels, or short text descriptors. This joint view gives the model access to local sequence patterns and long-range geometric relations in a single latent space.

Backbone change: Consensus instead of self-attention

Consensus replaces global self-attention with iterative, locality-aware updates on a sparse contact or sequence graph. Each layer encourages nearby neighborhoods to agree first, then spreads that agreement outward across the chain and contact graph. This change alters compute: self-attention scales as O(L²) with sequence length L, while Anthrogen reports that Consensus scales as O(L), which keeps long sequences and multi-domain constructs affordable. The company also reports improved robustness to learning rate choices at larger scales, which reduces brittle runs and restarts.

Training objective and generation: discrete diffusion

Odyssey trains with discrete diffusion on sequence and structure tokens. The forward process applies masking noise that mimics mutation; the reverse-time denoiser learns to reconstruct a consistent sequence and coordinates that work together. At inference, the same reverse process supports conditional generation and editing: you can hold a scaffold, fix a motif, mask a loop, add a functional tag, and then let the model complete the rest while keeping sequence and structure in sync (see the editing sketch below). Anthrogen reports matched comparisons in which diffusion outperforms masked language modeling at evaluation. The paper notes lower training perplexities for diffusion versus complex masking, and lower or comparable training perplexities versus simple masking. In validation, diffusion models outperform their masked counterparts, while a 1.2B masked model tends to overfit to its own masking schedule. The company argues that diffusion models the joint distribution of the full protein, which aligns with sequence-plus-structure co-design.
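To ground the two mechanisms above, here are two minimal, hedged sketches in Python with NumPy. Neither is Anthrogen's implementation: the latent dimensions, level counts, vocabulary size, denoiser interface, and unmasking schedule are all illustrative assumptions.

First, finite scalar quantization. FSQ clips each latent dimension to a bounded range, snaps it to one of a fixed number of levels, and reads the per-dimension level indices as one mixed-radix integer, yielding a discrete structure token per residue.

```python
import numpy as np

# Minimal FSQ sketch. Assumed (not from the paper): a 4-dim geometric
# latent per residue with 8 quantization levels per dimension.
LEVELS = np.array([8, 8, 8, 8])

def fsq_quantize(z):
    """Map latents z in [-1, 1]^d to (snapped latents, integer tokens)."""
    z = np.clip(z, -1.0, 1.0)
    idx = np.round((z + 1.0) / 2.0 * (LEVELS - 1)).astype(int)  # nearest level
    codes = idx / (LEVELS - 1) * 2.0 - 1.0                      # dequantized view
    radix = np.cumprod(np.concatenate([[1], LEVELS[:-1]]))      # mixed radix
    token_ids = (idx * radix).sum(axis=-1)                      # one int per residue
    return codes, token_ids

z = np.random.uniform(-1, 1, size=(5, 4))   # 5 residues, 4-dim latents
codes, tokens = fsq_quantize(z)
print(tokens)  # values in [0, 8*8*8*8): a 4096-token structure alphabet
```

Second, the conditional editing loop. Discrete diffusion with masking noise admits a simple confidence-based unmasking sampler: mask the span to redesign, then repeatedly let the denoiser commit the positions it is most sure about. The `denoiser` callable below stands in for the reverse-time model.

```python
import numpy as np

MASK = -1  # sentinel mask token

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def edit_with_masked_diffusion(tokens, edit_positions, denoiser, num_steps=8):
    """Regenerate only edit_positions; everything else stays fixed.

    tokens: (L,) int array of sequence/structure tokens.
    denoiser: callable, partially masked (L,) array -> (L, vocab) logits.
    """
    x = tokens.copy()
    x[edit_positions] = MASK                  # forward process: mask the span
    remaining = list(edit_positions)
    for step in range(num_steps):
        if not remaining:
            break
        probs = softmax(denoiser(x))          # reverse-time prediction
        conf = probs.max(axis=-1)
        k = max(1, -(-len(remaining) // (num_steps - step)))  # ceil division
        for i in sorted(remaining, key=lambda i: -conf[i])[:k]:
            x[i] = int(probs[i].argmax())     # commit the most confident sites
            remaining.remove(i)
    return x

# Toy usage with random logits over a 16-token vocabulary.
toy = lambda x: np.random.default_rng(0).normal(size=(x.shape[0], 16))
print(edit_with_masked_diffusion(np.arange(10) % 16, [3, 4, 5], toy))
```

Masked positions are resolved a few at a time rather than all at once, which is what lets a model of this kind keep sequence and structure choices consistent as the edit fills in.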
Key takeaways

Odyssey is a multimodal protein model family that fuses sequence, structure, and functional context, with production models at 1.2B, 8B, and 102B parameters.
Consensus replaces self-attention with locality-aware propagation that scales as O(L) and shows robust learning rate behavior at larger scales.
FSQ converts 3D coordinates into discrete structure tokens for joint sequence and structure modeling.
Discrete diffusion trains a reverse-time denoiser and, in matched comparisons, outperforms masked language modeling at evaluation.
Anthrogen reports better performance with about 10x less data than competing models, which addresses data scarcity in protein modeling.

Editorial Comments

Odyssey is an impressive model because it operationalizes joint sequence and structure modeling with FSQ, Consensus, and discrete diffusion, enabling conditional design and editing under practical constraints. It scales to 102B parameters with O(L) complexity for Consensus, which lowers cost for long proteins and improves learning-rate robustness. Anthrogen reports diffusion outperforming masked language modeling in matched evaluations, which aligns with co-design objectives. The system targets multi-objective design, including potency, specificity, stability, and manufacturability. The research team emphasizes data efficiency near 10x versus competing models, which is material in domains with scarce labeled data.

Check out the paper and technical details: https://www.biorxiv.org/content/10.1101/2025.10.15.682677v1.full.pdf


Learning to Interpret Weight Differences in Language Models

arXiv:2510.05092v3 Announce Type: replace-cross Abstract: Finetuning (pretrained) language models is a standard approach for updating their internal parametric knowledge and specializing them to new tasks and domains. However, the corresponding model weight changes (“weight diffs”) are not generally interpretable. While inspecting the finetuning dataset can give a sense of how the model might have changed, these datasets are often not publicly available or are too large to work with directly. Towards the goal of comprehensively understanding weight diffs in natural language, we introduce Diff Interpretation Tuning (DIT), a method that trains models to describe their own finetuning-induced modifications. Our approach uses synthetic, labeled weight diffs to train a DIT-adapter, which can be applied to a compatible finetuned model to make it describe how it has changed. We demonstrate in two proof-of-concept settings (reporting hidden behaviors and summarizing finetuned knowledge) that our method enables models to describe their finetuning-induced modifications using accurate natural language descriptions.
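For concreteness, the object DIT interprets, the weight diff, is easy to materialize. Below is a minimal PyTorch sketch that extracts per-parameter deltas between a base checkpoint and its finetuned variant; the model names are placeholders, and the DIT-adapter training on synthetic labeled diffs is not shown.

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholders: substitute a real base checkpoint and a finetune of it.
base = AutoModelForCausalLM.from_pretrained("org/base-model")
tuned = AutoModelForCausalLM.from_pretrained("org/finetuned-model")

tuned_state = tuned.state_dict()
# The "weight diff": finetuning-induced deltas, one tensor per parameter.
diff = {
    name: (tuned_state[name] - p).detach()
    for name, p in base.state_dict().items()
    if p.is_floating_point()
}

# DIT trains an adapter on many synthetic, labeled diffs like this one;
# applying the adapter to a finetuned model lets it describe the change.
magnitude = torch.sqrt(sum(d.float().pow(2).sum() for d in diff.values()))
print(f"global diff norm: {magnitude.item():.3f}")
```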


Explaining Large Language Models with gSMILE

arXiv:2505.21657v5 Announce Type: replace Abstract: Large Language Models (LLMs) such as GPT, LLaMA, and Claude achieve remarkable performance in text generation but remain opaque in their decision-making processes, limiting trust and accountability in high-stakes applications. We present gSMILE (generative SMILE), a model-agnostic, perturbation-based framework for token-level interpretability in LLMs. Extending the SMILE methodology, gSMILE uses controlled prompt perturbations, Wasserstein distance metrics, and weighted linear surrogates to identify input tokens with the most significant impact on the output. This process enables the generation of intuitive heatmaps that visually highlight influential tokens and reasoning paths. We evaluate gSMILE across leading LLMs (OpenAI’s gpt-3.5-turbo-instruct, Meta’s LLaMA 3.1 Instruct Turbo, and Anthropic’s Claude 2.1) using attribution fidelity, attribution consistency, attribution stability, attribution faithfulness, and attribution accuracy as metrics. Results show that gSMILE delivers reliable human-aligned attributions, with Claude 2.1 excelling in attention fidelity and GPT-3.5 achieving the highest output consistency. These findings demonstrate gSMILE’s ability to balance model performance and interpretability, enabling more transparent and trustworthy AI systems.
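Since gSMILE extends the SMILE/LIME family, the mechanics fit in a short sketch: drop random subsets of prompt tokens, score how far the model's output moves (gSMILE uses Wasserstein distances over output distributions; any divergence slots into the placeholder below), and fit a weighted linear surrogate whose coefficients are the per-token attributions. The `output_distance` interface and kernel width are assumptions.

```python
import numpy as np

def lime_style_attributions(tokens, output_distance, n_samples=200,
                            kernel_width=0.75, seed=0):
    """Per-token attributions from a weighted linear surrogate.

    tokens: list of prompt tokens.
    output_distance: callable(kept_tokens) -> float, shift of the model's
        output vs. the full prompt (a Wasserstein distance in gSMILE).
    """
    rng = np.random.default_rng(seed)
    d = len(tokens)
    Z = rng.integers(0, 2, size=(n_samples, d))   # 1 = token kept
    Z[0] = 1                                      # anchor on the full prompt
    y = np.array([output_distance([t for t, keep in zip(tokens, z) if keep])
                  for z in Z])
    # Proximity kernel: perturbations near the full prompt weigh more.
    w = np.exp(-((1.0 - Z.mean(axis=1)) ** 2) / kernel_width ** 2)
    X = np.hstack([Z, np.ones((n_samples, 1))])   # features + intercept
    sw = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(X * sw, y * np.sqrt(w), rcond=None)
    return dict(zip(tokens, beta[:d]))            # influence per token
```

A token whose removal consistently moves the output far from the original receives a large-magnitude coefficient, which is what the heatmaps visualize.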


ChronoPlay: A Framework for Modeling Dual Dynamics and Authenticity in Game RAG Benchmarks

arXiv:2510.18455v1 Announce Type: new Abstract: Retrieval Augmented Generation (RAG) systems are increasingly vital in dynamic domains like online gaming, yet the lack of a dedicated benchmark has impeded standardized evaluation in this area. The core difficulty lies in Dual Dynamics: the constant interplay between game content updates and the shifting focus of the player community. Furthermore, the necessity of automating such a benchmark introduces a critical requirement for player-centric authenticity to ensure generated questions are realistic. To address this integrated challenge, we introduce ChronoPlay, a novel framework for the automated and continuous generation of game RAG benchmarks. ChronoPlay utilizes a dual-dynamic update mechanism to track both forms of change, and a dual-source synthesis engine that draws from official sources and player community to ensure both factual correctness and authentic query patterns. We instantiate our framework on three distinct games to create the first dynamic RAG benchmark for the gaming domain, offering new insights into model performance under these complex and realistic conditions. Code is available at: https://github.com/hly1998/ChronoPlay.


DelvePO: Direction-Guided Self-Evolving Framework for Flexible Prompt Optimization

arXiv:2510.18257v1 Announce Type: new Abstract: Prompt Optimization has emerged as a crucial approach due to its capability to steer Large Language Models toward solving various tasks. However, current works mainly rely on the random rewriting ability of LLMs, and the optimization process generally focuses on specific influencing factors, which makes it easy to fall into a local optimum. Besides, the performance of the optimized prompt is often unstable, which limits its transferability across tasks. To address the above challenges, we propose DelvePO (Direction-Guided Self-Evolving Framework for Flexible Prompt Optimization), a task-agnostic framework that optimizes prompts in a self-evolving manner. In our framework, we decouple prompts into different components that can be used to explore the impact that different factors may have on various tasks. On this basis, we introduce working memory, through which LLMs can alleviate the deficiencies caused by their own uncertainties and further obtain key insights to guide the generation of new prompts. We conduct extensive experiments on tasks covering various domains for both open- and closed-source LLMs, including DeepSeek-R1-Distill-Llama-8B, Qwen2.5-7B-Instruct and GPT-4o-mini. Experimental results show that DelvePO consistently outperforms previous SOTA methods under identical experimental settings, demonstrating its effectiveness and transferability across different tasks.
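Read operationally, the abstract suggests a loop like the following: decompose the prompt into components, use a working memory of past edit outcomes to pick which component to mutate next, and keep a candidate only when it scores better. The component names and scoring interface below are illustrative assumptions, not DelvePO's actual design.

```python
import random

COMPONENTS = ["role", "task_instruction", "output_format", "examples"]  # assumed

def evolve_prompt(prompt, propose_edit, score, rounds=10):
    """Direction-guided self-evolving prompt optimization (schematic).

    prompt: dict mapping component name -> text.
    propose_edit: callable(component, text, memory) -> new text (an LLM call).
    score: callable(prompt) -> float on a dev set.
    """
    memory = []                                     # working memory of (component, gain)
    best, best_score = dict(prompt), score(prompt)
    for _ in range(rounds):
        # Direction guidance: favor components whose past edits helped.
        gains = {c: sum(g for comp, g in memory if comp == c) for c in COMPONENTS}
        component = max(COMPONENTS, key=lambda c: gains[c] + random.random())
        candidate = dict(best)
        candidate[component] = propose_edit(component, best[component], memory)
        s = score(candidate)
        memory.append((component, s - best_score))  # record the insight
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score
```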


BrailleLLM: Braille Instruction Tuning with Large Language Models for Braille Domain Tasks

arXiv:2510.18288v1 Announce Type: new Abstract: Braille plays a vital role in education and information accessibility for visually impaired individuals. However, Braille information processing faces challenges such as data scarcity and ambiguities in mixed-text contexts. We construct English and Chinese Braille Mixed Datasets (EBMD/CBMD) with mathematical formulas to support diverse Braille domain research, and propose a syntax tree-based augmentation method tailored for Braille data. To address the underperformance of traditional fine-tuning methods in Braille-related tasks, we investigate Braille Knowledge-Based Fine-Tuning (BKFT), which reduces the learning difficulty of Braille contextual features. BrailleLLM employs BKFT via instruction tuning to achieve unified Braille translation, formula-to-Braille conversion, and mixed-text translation. Experiments demonstrate that BKFT achieves significant performance improvements over conventional fine-tuning in Braille translation scenarios. Our open-sourced datasets and methodologies establish a foundation for low-resource multilingual Braille research.


DVAGen: Dynamic Vocabulary Augmented Generation

arXiv:2510.17115v1 Announce Type: new Abstract: Language models trained with a fixed vocabulary struggle to generalize to novel or out-of-vocabulary words, limiting their flexibility in handling diverse token combinations. Existing dynamic vocabulary approaches attempt to address this limitation but face challenges such as fragmented codebases, lack of support for modern LLMs, and limited inference scalability. To overcome these issues, we introduce DVAGen, a fully open-source, unified framework designed for training, evaluation, and visualization of dynamic vocabulary-augmented language models. Our framework modularizes the pipeline for ease of customization, integrates seamlessly with open-source LLMs, and is the first to provide both CLI and WebUI tools for real-time result inspection. We validate the effectiveness of dynamic vocabulary methods on modern LLMs and demonstrate support for batch inference, significantly improving inference throughput.


Max It or Miss It: Benchmarking LLM On Solving Extremal Problems

arXiv:2510.12997v2 Announce Type: replace-cross Abstract: Test-time scaling has endowed Large Language Models (LLMs) with remarkable reasoning capabilities, particularly in mathematical domains, through intermediate chain-of-thought (CoT) reasoning before generating final answers. However, the specific sources and mechanisms underlying these reasoning capabilities remain insufficiently understood. Optimization reasoning, i.e., finding extrema under constraints, represents a fundamental abstraction that underpins critical applications in planning, control, resource allocation, and prompt search. To systematically evaluate this capability, we introduce ExtremBench, a benchmark dataset for solving mathematical extremal problems, curated from inequality exercises used for the Chinese Mathematical Olympiad and transformed into 93 standardized extrema-finding problems. We conduct extensive evaluations across various state-of-the-art open-source model families, including Qwen3, GPT-OSS, and DeepSeek. Our results reveal that LLMs' extremal-solving capabilities do not always align with their scores on current mathematical benchmarks such as AIME25 and MATH-500, with some models showing strong general mathematical reasoning but poor extremal-solving skills, and vice versa. This discrepancy highlights a critical gap in current evaluation practices and suggests that existing benchmarks may not comprehensively capture the full spectrum of mathematical reasoning abilities.


DICE: Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction

arXiv:2510.09211v2 Announce Type: replace Abstract: When performing reasoning tasks with user-specific requirements, such as strict output formats, large language models (LLMs) often prioritize reasoning over adherence to detailed instructions. Fine-tuning LLMs on supervised datasets to address this is impractical due to high computational costs and limited parameter access. To tackle this, we propose DICE, a lightweight framework that guides small language models (SLMs) to refine LLMs’ outputs through chain-of-thought (CoT) correction. DICE decouples the process by first prompting LLMs to generate natural language responses, then using trained SLMs to analyze and refine these outputs to meet structured output specifications. This framework preserves LLMs’ broad knowledge and reasoning capabilities while ensuring the outputs conform to user demands. Specifically, DICE first constructs structured CoT adaptation datasets via a two-stage method and subsequently applies a dual-tuning strategy to fine-tune SLMs for generating structured outputs in an analyze-then-answer pattern. Experiments demonstrate that DICE improves the average format accuracy and content correctness of LLM outputs by 35.4% and 29.4%, respectively, achieving state-of-the-art (SOTA) performance over other competitive baselines.
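The decoupling is simple to express in code: the large model reasons without format constraints, then the tuned small model analyzes the draft and re-emits it in the required structure. The two callables below are assumptions standing in for the LLM and the dual-tuned SLM; DICE's actual prompts and tuning are described in the paper.

```python
def dice_style_pipeline(question: str, format_spec: str, llm, slm) -> str:
    """Decoupled reasoning (LLM) + CoT correction into structure (SLM)."""
    # Stage 1: free-form reasoning, no format pressure on the large model.
    draft = llm(f"Answer with full reasoning:\n{question}")
    # Stage 2: analyze-then-answer -- the small model inspects the draft,
    # then rewrites the final answer to meet the output specification.
    return slm(
        "Analyze the draft, then output the final answer strictly in this "
        f"format:\n{format_spec}\n\nDraft:\n{draft}"
    )
```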


Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning

arXiv:2507.00432v2 Announce Type: replace-cross Abstract: Math reasoning has become the poster child of progress in large language models (LLMs), with new models rapidly surpassing human-level performance on benchmarks like MATH and AIME. But as math leaderboards improve week by week, it is worth asking: do these gains reflect broader problem-solving ability or just narrow overfitting? To answer this question, we evaluate over 20 open-weight reasoning-tuned models across a broad suite of tasks, including math, scientific QA, agent planning, coding, and standard instruction-following. We surprisingly find that most models that succeed in math fail to transfer their gains to other domains. To rigorously study this phenomenon, we conduct controlled experiments on Qwen3-14B models using math-only data but different tuning methods. We find that reinforcement learning (RL)-tuned models generalize well across domains, while supervised fine-tuning (SFT)-tuned models often forget general capabilities. Latent-space representation and token-space distribution shift analyses reveal that SFT induces substantial representation and output drift, while RL preserves general-domain structure. Our results suggest a need to rethink standard post-training recipes, particularly the reliance on SFT-distilled data for advancing reasoning models.
