News
Fundamental Limits of Prompt Tuning Transformers: Universality, Capacity and Efficiency
arXiv:2411.16525v2 Announce Type: replace-cross Abstract: We investigate the statistical and computational limits of prompt tuning...
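The abstract is truncated in this feed. As background on the technique being analyzed: prompt tuning optimizes a small set of soft prompt embeddings prepended to the input while the transformer backbone stays frozen. A minimal PyTorch sketch of that standard setup (names and sizes are illustrative, not from the paper):

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Prepend trainable soft-prompt embeddings to frozen input embeddings.

    Illustrative only: this is vanilla prompt tuning; the paper's
    theoretical setting may differ in details.
    """
    def __init__(self, embed_dim: int, prompt_len: int):
        super().__init__()
        # Only these parameters are trained; the backbone stays frozen.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        batch = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Usage: feed the concatenated embeddings to a frozen transformer.
wrapper = SoftPromptWrapper(embed_dim=768, prompt_len=20)
x = torch.randn(2, 128, 768)   # frozen token embeddings
x_prompted = wrapper(x)        # shape (2, 148, 768)
```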
Fun-tuning: Characterizing the Vulnerability of Proprietary LLMs to Optimization-based Prompt Injection Attacks via the Fine-Tuning Interface
arXiv:2501.09798v2 Announce Type: replace-cross Abstract: We surface a new threat to closed-weight Large Language Models...
Fun-ASR Technical Report
arXiv:2509.12508v3 Announce Type: replace Abstract: In recent years, automatic speech recognition (ASR) has witnessed transformative...
Fuel prices are soaring. Plastic could be next.
As the war in Iran continues to engulf the Middle East and the Strait of...
Frustratingly Easy Data Augmentation for Low-Resource ASR
arXiv:2509.15373v2 Announce Type: replace Abstract: This paper introduces three self-contained data augmentation methods for low-resource...
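The snippet does not name the paper's three methods. As one widely used, self-contained augmentation of the same flavor, here is a SpecAugment-style time/frequency masking sketch in NumPy (parameters are illustrative assumptions, not the paper's):

```python
import numpy as np

def spec_augment(spec: np.ndarray, n_freq_masks: int = 2, n_time_masks: int = 2,
                 max_f: int = 8, max_t: int = 20, rng=None) -> np.ndarray:
    """SpecAugment-style masking on a (freq, time) log-mel spectrogram.

    Illustrative only: shows one common self-contained augmentation,
    not the three methods the paper proposes.
    """
    rng = rng or np.random.default_rng()
    out = spec.copy()
    n_freq, n_time = out.shape
    for _ in range(n_freq_masks):              # mask random frequency bands
        f = rng.integers(0, max_f + 1)
        f0 = rng.integers(0, max(1, n_freq - f))
        out[f0:f0 + f, :] = out.mean()
    for _ in range(n_time_masks):              # mask random time spans
        t = rng.integers(0, max_t + 1)
        t0 = rng.integers(0, max(1, n_time - t))
        out[:, t0:t0 + t] = out.mean()
    return out
```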
FrugalRAG: Learning to retrieve and reason for multi-hop QA
arXiv:2507.07634v2 Announce Type: replace Abstract: We consider the problem of answering complex questions, given access...
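As a rough sketch of the general pattern the title suggests, here is a budgeted retrieve-then-reason loop for multi-hop QA. Everything here is hypothetical scaffolding (the `retrieve` and `reason` callables are placeholders); the paper's actual policy, stopping rule, and training procedure are not shown:

```python
from typing import Callable, List

def frugal_multihop(question: str,
                    retrieve: Callable[[str, int], List[str]],
                    reason: Callable[[str, List[str]], dict],
                    max_hops: int = 3, k: int = 4) -> str:
    """Budgeted retrieve-then-reason loop for multi-hop QA (sketch only)."""
    evidence: List[str] = []
    query = question
    for _ in range(max_hops):                  # hard retrieval budget
        evidence += retrieve(query, k)         # fetch k passages
        step = reason(question, evidence)      # LLM decides: answer or follow-up
        if step.get("answer"):
            return step["answer"]
        query = step.get("followup", question) # refine the next query
    return reason(question, evidence).get("answer", "unknown")
```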
From XAI to Stories: A Factorial Study of LLM-Generated Explanation Quality
arXiv:2601.02224v2 Announce Type: replace Abstract: Explainable AI (XAI) methods like SHAP and LIME produce numerical...
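Since the abstract centers on turning numerical attributions into explanations, here is a minimal sketch of that pipeline's core step: composing a narrative prompt from feature attributions. The attribution values and prompt wording are invented for illustration; the paper's prompting setup is not reproduced:

```python
# Turn numerical feature attributions (e.g. from SHAP or LIME) into a
# narrative prompt for an LLM. Values below are made up for illustration.
attributions = {"income": +0.42, "age": -0.13, "num_accounts": +0.05}

top = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
lines = [f"- {name}: {value:+.2f}" for name, value in top]
prompt = (
    "Explain this credit decision to a layperson, based only on these "
    "feature attributions (positive pushes toward approval):\n" + "\n".join(lines)
)
print(prompt)  # feed to an LLM of choice
```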
From Text to Tables: Feature Engineering with LLMs for Tabular Data
While large language models (LLMs) are typically used for conversational purposes in use cases that...
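A minimal sketch of the idea in the title: using an LLM to turn a free-text column into tabular features. The `llm` callable and the feature schema are hypothetical placeholders, not the article's method:

```python
import json
from typing import Callable

def text_to_features(description: str, llm: Callable[[str], str]) -> dict:
    """Ask an LLM to extract structured features from a free-text field.

    Hypothetical sketch: `llm` is any prompt->text function, and the
    schema below is invented for illustration.
    """
    prompt = (
        "Extract JSON with keys brand (string), is_used (bool), "
        f"price_mentioned (bool) from this listing:\n{description}"
    )
    return json.loads(llm(prompt))
```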
From terabytes to insights: Real-world AI observability architecture
GUEST: Consider maintaining and developing an e-commerce platform that processes millions of transactions every minute...
From Surveys to Narratives: Rethinking Cultural Value Adaptation in LLMs
arXiv:2505.16408v2 Announce Type: replace Abstract: Adapting cultural values in Large Language Models (LLMs) presents significant...
From Roots to Rewards: Dynamic Tree Reasoning with Reinforcement Learning
arXiv:2507.13142v3 Announce Type: replace-cross Abstract: Modern language models address complex questions through chain-of-thought (CoT) reasoning...
From RLHF to Direct Alignment: A Theoretical Unification of Preference Learning for Large Language Models
arXiv:2601.06108v1 Announce Type: cross Abstract: Aligning large language models (LLMs) with human preferences has become...
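As background for the preference-learning unification this abstract describes, here is the standard Direct Preference Optimization loss (Rafailov et al., 2023) in PyTorch; the paper's own framework is not reproduced here:

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss. Inputs are summed token log-probs of the chosen/rejected
    responses under the policy and a frozen reference model."""
    policy_margin = logp_chosen - logp_rejected
    ref_margin = ref_logp_chosen - ref_logp_rejected
    # -log sigmoid(beta * (policy margin - reference margin))
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Toy usage with scalar log-probs per example:
loss = dpo_loss(torch.tensor([-5.0]), torch.tensor([-7.0]),
                torch.tensor([-6.0]), torch.tensor([-6.5]))
```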