YouZum

News

READoc: A Unified Benchmark for Realistic Document Structured Extraction

arXiv:2409.05137v3 Announce Type: replace Abstract: Document Structured Extraction (DSE) aims to extract structured content from...

Readers Prefer Outputs of AI Trained on Copyrighted Books over Expert Human Writers

arXiv:2510.13939v1 Announce Type: new Abstract: The use of copyrighted books for training AI models has...

Readability Reconsidered: A Cross-Dataset Analysis of Reference-Free Metrics

arXiv:2510.15345v1 Announce Type: new Abstract: Automatic readability assessment plays a key role in ensuring effective...

RAG-based Architectures for Drug Side Effect Retrieval in LLMs

arXiv:2507.13822v1 Announce Type: cross Abstract: Drug side effects are a major global health concern, necessitating...

RaDialog: A Large Vision-Language Model for Radiology Report Generation and Conversational Assistance

arXiv:2311.18681v3 Announce Type: replace-cross Abstract: Conversational AI tools that can generate and discuss clinically correct...

RA3: Mid-Training with Temporal Action Abstractions for Faster Reinforcement Learning (RL) Post-Training in Code LLMs

TL;DR: New research from Apple formalizes what “mid-training” should do before reinforcement learning (RL)...

R-Zero: A Fully Autonomous AI Framework that Generates Its Own Training Data from Scratch

Large Language Models (LLMs) have revolutionized fields from natural language understanding to reasoning and code...

QwenLong-L1 solves long-context reasoning challenge that stumps current LLMs

Alibaba’s QwenLong-L1 helps LLMs deeply understand long documents, unlocking advanced reasoning for practical enterprise applications.

Qwen3-ASR-Toolkit: An Advanced Open Source Python Command-Line Toolkit for Using the Qwen-ASR API Beyond the 3 Minutes/10 MB Limit

Qwen has released Qwen3-ASR-Toolkit, an MIT-licensed Python CLI that programmatically bypasses the Qwen3-ASR-Flash API’s 3-minute/10...

Quiet Feature Learning in Algorithmic Tasks

arXiv:2505.03997v1 Announce Type: cross Abstract: We train Transformer-based language models on ten foundational algorithmic tasks...

Quamba2: A Robust and Scalable Post-training Quantization Framework for Selective State Space Models

arXiv:2503.22879v3 Announce Type: replace-cross Abstract: State Space Models (SSMs) are emerging as a compelling alternative...

QFFT, Question-Free Fine-Tuning for Adaptive Reasoning

arXiv:2506.12860v1 Announce Type: new Abstract: Recent advancements in Long Chain-of-Thought (CoT) reasoning models have improved...
