News
Innocence in the Crossfire: Roles of Skip Connections in Jailbreaking Visual Language Models
arXiv:2507.13761v1 Announce Type: new Abstract: Language models are highly sensitive to prompt formulations – small...
InfoFlood: Jailbreaking Large Language Models with Information Overload
arXiv:2506.12274v1 Announce Type: cross Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities across various...
Inducing lexicons of in-group language with socio-temporal context
arXiv:2409.19257v3 Announce Type: replace Abstract: In-group language is an important signifier of group dynamics. This...
Improving the Robustness of Distantly-Supervised Named Entity Recognition via Uncertainty-Aware Teacher Learning and Student-Student Collaborative Learning
arXiv:2311.08010v3 Announce Type: replace Abstract: Distantly-Supervised Named Entity Recognition (DS-NER) is widely used in real-world...
Improving Social Determinants of Health Documentation in French EHRs Using Large Language Models
arXiv:2507.03433v1 Announce Type: new Abstract: Social determinants of health (SDoH) significantly influence health outcomes, shaping...
Impact of Stickers on Multimodal Sentiment and Intent in Social Media: A New Task, Dataset and Baseline
arXiv:2405.08427v2 Announce Type: replace Abstract: Stickers are increasingly used in social media to express sentiment...
If you are a free user of ChatGPT, you can now use its new image generator 4o (again)
OpenAI has opened the floodgates after limiting the use of its new tool due to...
Human-Aligned Faithfulness in Toxicity Explanations of LLMs
arXiv:2506.19113v1 Announce Type: new Abstract: The discourse around toxicity and LLMs in NLP largely revolves...
Hugging Face Releases SmolVLA: A Compact Vision-Language-Action Model for Affordable and Efficient Robotics
Despite recent progress in robotic control via large-scale vision-language-action (VLA) models, real-world deployment remains constrained...
Huawei Introduces Pangu Ultra MoE: A 718B-Parameter Sparse Language Model Trained Efficiently on Ascend NPUs Using Simulation-Driven Architecture and System-Level Optimization
Sparse large language models (LLMs) based on the Mixture of Experts (MoE) framework have gained...
HtFLlib: A Unified Benchmarking Library for Evaluating Heterogeneous Federated Learning Methods Across Modalities
AI institutions develop heterogeneous models for specific tasks but face data scarcity challenges during training...
How to Optimize Language Model Size for Deployment
The rise of language models, and more specifically large language models (LLMs), has been of...