News
IFDecorator: Wrapping Instruction Following Reinforcement Learning with Verifiable Rewards
arXiv:2508.04632v1 Announce Type: new Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) improves instruction-following capabilities...
If you are a free ChatGPT user, you can now use its new 4o image generator (again)
OpenAI has opened the floodgates after limiting the use of its new tool due to...
Identifying and Answering Questions with False Assumptions: An Interpretable Approach
arXiv:2508.15139v1 Announce Type: new Abstract: People often ask questions with false assumptions, a type of...
IBM AI Research Releases Two English Granite Embedding Models, Both Based on the ModernBERT Architecture
IBM has quietly built a strong presence in the open-source AI ecosystem, and its latest...
Human-like fleeting memory improves language learning but impairs reading time prediction in transformer language models
arXiv:2508.05803v1 Announce Type: new Abstract: Human memory is fleeting. As words are processed, the exact...
Human-Annotated NER Dataset for the Kyrgyz Language
arXiv:2509.19109v1 Announce Type: new Abstract: We introduce KyrgyzNER, the first manually annotated named entity recognition...
Human-Aligned Faithfulness in Toxicity Explanations of LLMs
arXiv:2506.19113v1 Announce Type: new Abstract: The discourse around toxicity and LLMs in NLP largely revolves...
Human-AI Narrative Synthesis to Foster Shared Understanding in Civic Decision-Making
arXiv:2509.19643v1 Announce Type: cross Abstract: Community engagement processes in representative political contexts, like school districts...
Hugging Face Releases SmolVLA: A Compact Vision-Language-Action Model for Affordable and Efficient Robotics
Despite recent progress in robotic control via large-scale vision-language-action (VLA) models, real-world deployment remains constrained...
Hugging Face Open-Sourced FineVision: A New Multimodal Dataset with 24 Million Samples for Training Vision-Language Models (VLMs)
Hugging Face has just released FineVision, an open multimodal dataset designed to set a new...
Huawei Introduces Pangu Ultra MoE: A 718B-Parameter Sparse Language Model Trained Efficiently on Ascend NPUs Using Simulation-Driven Architecture and System-Level Optimization
Sparse large language models (LLMs) based on the Mixture of Experts (MoE) framework have gained...
Huawei CloudMatrix: A Peer-to-Peer AI Datacenter Architecture for Scalable and Efficient LLM Serving
LLMs have rapidly advanced with soaring parameter counts, widespread use of mixture-of-experts (MoE) designs, and...