YouZum

News

Tutorial: Exploring SHAP-IQ Visualizations

In this tutorial, we’ll explore a range of SHAP-IQ visualizations that provide insights into how...
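The tutorial itself is not reproduced here, but as rough orientation, below is a minimal sketch of the kind of workflow such a tutorial covers, assuming the open-source shapiq Python package (pip install shapiq) together with scikit-learn. The dataset choice, budget, and plot calls are illustrative assumptions rather than the tutorial's actual content, and method names may differ between shapiq versions.

```python
# Minimal SHAP-IQ sketch (assumes: pip install shapiq scikit-learn).
# Illustrative only -- dataset, budget, and plot calls are assumptions,
# not taken from the tutorial referenced above.
import shapiq
from sklearn.ensemble import RandomForestRegressor

# Load a benchmark dataset shipped with shapiq and fit any sklearn model.
X, y = shapiq.load_california_housing(to_numpy=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Explain one prediction with pairwise Shapley interactions (k-SII, order 2).
explainer = shapiq.TabularExplainer(model=model, data=X, index="k-SII", max_order=2)
interaction_values = explainer.explain(X[0], budget=256)

# The kinds of visualizations such tutorials walk through,
# e.g. force and network plots of the interaction values.
interaction_values.plot_force()
interaction_values.plot_network()
```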

Turning Logic Against Itself: Probing Model Defenses Through Contrastive Questions

arXiv:2501.01872v5 Announce Type: replace Abstract: Large language models, despite extensive alignment with human values and...

TurkBench: A Benchmark for Evaluating Turkish Large Language Models

arXiv:2601.07020v2 Announce Type: replace Abstract: With the recent surge in the development of large language...

TUMS: Enhancing Tool-use Abilities of LLMs with Multi-structure Handlers

arXiv:2505.08402v1 Announce Type: new Abstract: Recently, large language models (LLMs) have played an increasingly important role...

TuCo: Measuring the Contribution of Fine-Tuning to Individual Responses of LLMs

arXiv:2506.23423v1 Announce Type: new Abstract: Past work has studied the effects of fine-tuning on large...

TSEmbed: Unlocking Task Scaling in Universal Multimodal Embeddings

arXiv:2603.04772v1 Announce Type: new Abstract: Despite the exceptional reasoning capabilities of Multimodal Large Language Models...

Trustworthy Data-driven Chronological Age Estimation from Panoramic Dental Images

arXiv:2601.12960v1 Announce Type: new Abstract: Integrating deep learning into healthcare enables personalized care but raises...

Trusted Uncertainty in Large Language Models: A Unified Framework for Confidence Calibration and Risk-Controlled Refusal

arXiv:2509.01455v1 Announce Type: new Abstract: Deployed language models must decide not only what to answer...

Truncated Step-Level Sampling with Process Rewards for Retrieval-Augmented Reasoning

arXiv:2602.23440v3 Announce Type: replace Abstract: Reinforcement learning has emerged as an effective paradigm for training...

Trinity-RFT: A General-Purpose and Unified Framework for Reinforcement Fine-Tuning of Large Language Models

arXiv:2505.17826v2 Announce Type: replace-cross Abstract: Trinity-RFT is a general-purpose, unified and easy-to-use framework designed for...

TRIM: Token-wise Attention-Derived Saliency for Data-Efficient Instruction Tuning

arXiv:2510.07118v2 Announce Type: replace Abstract: Instruction tuning is essential for aligning large language models (LLMs)...

Treating enterprise AI as an operating layer

There’s a fault line running through enterprise AI, and it’s not the one getting the...
