News
Kyutai Releases a 2B-Parameter Streaming Text-to-Speech (TTS) Model with 220ms Latency and 2.5M Hours of Training
Kyutai, an open AI research lab, has released a groundbreaking streaming Text-to-Speech (TTS) model with...
KoACD: The First Korean Adolescent Dataset for Cognitive Distortion Analysis
arXiv:2505.00367v1 Announce Type: new Abstract: Cognitive distortion refers to negative thinking patterns that can lead...
KARE-RAG: Knowledge-Aware Refinement and Enhancement for RAG
arXiv:2506.02503v1 Announce Type: new Abstract: Retrieval-Augmented Generation (RAG) enables large language models (LLMs) to access...
Just add humans: Oxford medical study underscores the missing link in chatbot testing
Patients using chatbots to assess their own medical conditions may end up with worse outcomes...
JTCSE: Joint Tensor-Modulus Constraints and Cross-Attention for Unsupervised Contrastive Learning of Sentence Embeddings
arXiv:2505.02366v2 Announce Type: replace Abstract: Unsupervised contrastive learning has become a hot research topic in...
JobHop: A Large-Scale Dataset of Career Trajectories
arXiv:2505.07653v1 Announce Type: new Abstract: Understanding labor market dynamics is essential for policymakers, employers, and...
JarvisArt: A Human-in-the-Loop Multimodal Agent for Region-Specific and Global Photo Editing
Bridging the Gap Between Artistic Intent and Technical Execution: Photo retouching is a core aspect...
It’s pretty easy to get DeepSeek to talk dirty
AI companions like Replika are designed to engage in intimate exchanges, but people use general-purpose...
It’s the same but not the same: Do LLMs distinguish Spanish varieties?
arXiv:2504.20049v1 Announce Type: new Abstract: In recent years, large language models (LLMs) have demonstrated a...
Is That Your Final Answer? Test-Time Scaling Improves Selective Question Answering
arXiv:2502.13962v2 Announce Type: replace Abstract: Scaling the test-time compute of large language models has demonstrated...
Introspective Growth: Automatically Advancing LLM Expertise in Technology Judgment
arXiv:2505.12452v2 Announce Type: replace Abstract: Large language models (LLMs) increasingly demonstrate signs of conceptual understanding...
Internal Coherence Maximization (ICM): A Label-Free, Unsupervised Training Framework for LLMs
Post-training methods for pre-trained language models (LMs) depend on human supervision through demonstrations or preference...