News
Using NotebookLM as Your Machine Learning Study Guide
Learning machine learning can be challenging...
UPCORE: Utility-Preserving Coreset Selection for Balanced Unlearning
arXiv:2502.15082v2 Announce Type: replace-cross Abstract: User specifications or legal frameworks often require information to be...
Unplug and Play Language Models: Decomposing Experts in Language Models at Inference Time
arXiv:2404.11916v3 Announce Type: replace Abstract: Enabled by large-scale text corpora and huge parameter counts, pre-trained language...
Unleashing Embodied Task Planning Ability in LLMs via Reinforcement Learning
arXiv:2506.23127v1 Announce Type: new Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities across various...
Unique Hard Attention: A Tale of Two Sides
arXiv:2503.14615v2 Announce Type: replace-cross Abstract: Understanding the expressive power of transformers has recently attracted attention...
Understanding OpenAI Codex CLI Commands
We have seen a new era of agentic IDEs like Windsurf and Cursor AI...
Understanding In-context Learning of Addition via Activation Subspaces
arXiv:2505.05145v2 Announce Type: replace-cross Abstract: To perform in-context learning, language models must extract signals from...
Tutorial: Exploring SHAP-IQ Visualizations
In this tutorial, we’ll explore a range of SHAP-IQ visualizations that provide insights into how...
TUMS: Enhancing Tool-use Abilities of LLMs with Multi-structure Handlers
arXiv:2505.08402v1 Announce Type: new Abstract: Recently, large language models (LLMs) have played an increasingly important role...
TuCo: Measuring the Contribution of Fine-Tuning to Individual Responses of LLMs
arXiv:2506.23423v1 Announce Type: new Abstract: Past work has studied the effects of fine-tuning on large...
Trusted Uncertainty in Large Language Models: A Unified Framework for Confidence Calibration and Risk-Controlled Refusal
arXiv:2509.01455v1 Announce Type: new Abstract: Deployed language models must decide not only what to answer...
Trinity-RFT: A General-Purpose and Unified Framework for Reinforcement Fine-Tuning of Large Language Models
arXiv:2505.17826v2 Announce Type: replace-cross Abstract: Trinity-RFT is a general-purpose, unified, and easy-to-use framework designed for...