YouZum

CTISum: A New Benchmark Dataset For Cyber Threat Intelligence Summarization

arXiv:2408.06576v2 Announce Type: replace Abstract: Cyber Threat Intelligence (CTI) summarization involves generating concise and accurate highlights from web intelligence data, which is critical for providing decision-makers with actionable insights to swiftly detect and respond to cyber threats. Despite its importance, the development of efficient techniques for summarizing CTI reports, which comprise facts, analytical insights, attack processes, and more, has been hindered by the lack of suitable datasets. To address this gap, we introduce CTISum, a new benchmark dataset designed for the CTI summarization task. Recognizing the significance of understanding attack processes, we also propose a novel fine-grained subtask: attack process summarization, which aims to help defenders assess risks, identify security gaps, and uncover vulnerabilities. Specifically, a multi-stage annotation pipeline is designed to collect and annotate CTI data from diverse web sources, alongside a comprehensive benchmarking of CTISum using extractive, abstractive, and LLM-based summarization methods. Experimental results reveal that current state-of-the-art models face significant challenges when applied to CTISum, highlighting that automatic summarization of CTI reports remains an open research problem. The code and an example dataset are publicly available at https://github.com/pengwei-iie/CTISum.
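The abstract does not spell out the baselines, but a minimal abstractive baseline for CTI report summarization can be sketched with an off-the-shelf sequence-to-sequence summarizer. The model checkpoint and chunking parameters below are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of an abstractive summarization baseline for CTI reports.
# The model checkpoint and chunk size are illustrative assumptions; CTISum's
# actual baselines and hyperparameters may differ.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_cti_report(report_text: str, chunk_chars: int = 3000) -> str:
    """Summarize a long CTI report by chunking it and summarizing each chunk."""
    chunks = [report_text[i:i + chunk_chars] for i in range(0, len(report_text), chunk_chars)]
    partial = [
        summarizer(c, max_length=128, min_length=32, do_sample=False)[0]["summary_text"]
        for c in chunks
    ]
    # A second pass condenses the chunk-level summaries into one highlight.
    return summarizer(" ".join(partial), max_length=160, min_length=48,
                      do_sample=False)[0]["summary_text"]
```

The fine-grained attack process summarization subtask would additionally require either task-specific prompting or supervised fine-tuning on the CTISum annotations.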

Unleashing Embodied Task Planning Ability in LLMs via Reinforcement Learning

arXiv:2506.23127v1 Announce Type: new Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks, yet they face significant challenges in embodied task planning scenarios that require continuous environmental understanding and action generation. Existing approaches generate open-loop action scripts from static knowledge, making it difficult to learn the causal relationships between actions and environmental feedback, particularly in partially observable environments. We introduce Embodied Planner-R1, a novel outcome-driven reinforcement learning framework that enables LLMs to develop interactive capabilities through autonomous exploration with minimal supervision. Our framework incorporates three key innovations: (1) pure reinforcement learning with group rollout, which requires no human annotations and interacts with the environment through parallel exploration; (2) a completion-driven sparse reward; and (3) Interactive Policy Optimization (IPO) for efficient learning from grouped trajectories. Across two challenging text-based embodied planning benchmarks, Embodied Planner-R1 achieves completion rates of 97.78% on ALFWorld and 79.92% on ScienceWorld, surpassing prior methods by a large margin, and suffers only a 3.66% drop in previously unseen environments, evidencing strong generalization.
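The abstract describes group rollouts with a completion-driven sparse reward. The sketch below shows one common way such grouped, sparse rewards are turned into normalized advantages (a group-relative baseline); it is an assumption about how IPO might weight trajectories, not the paper's actual update rule.

```python
# Hedged sketch: group-relative advantages from a completion-driven sparse reward.
# This mirrors common group-rollout RL practice; Embodied Planner-R1's actual
# IPO objective may differ in its details.
import numpy as np

def completion_reward(task_completed: bool) -> float:
    # Sparse reward: 1.0 only when the episode ends with the task completed.
    return 1.0 if task_completed else 0.0

def group_advantages(rewards: list[float], eps: float = 1e-8) -> np.ndarray:
    """Normalize rewards within a group of parallel rollouts of the same task."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# Example: 8 parallel rollouts of one ALFWorld-style task, 3 of which succeed.
rollout_outcomes = [True, False, False, True, False, False, True, False]
advs = group_advantages([completion_reward(o) for o in rollout_outcomes])
# Successful trajectories get positive advantages and are reinforced;
# failed ones get negative advantages and are down-weighted.
print(advs)
```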

AutoMixer: Checkpoint Artifacts as Automatic Data Mixers

arXiv:2506.21910v1 Announce Type: new Abstract: In language model training, it is desirable to equip models with capabilities from various tasks. However, it is not clear how to directly obtain the right data mixtures for these capabilities, as the relationship between data and tasks is difficult to model. In this work, we observe that checkpoint models exhibit emerging capabilities at different points in the training trajectory. The training process routinely saves checkpoints as artifacts, yet these are under-utilized as a source of in-training data signals. We identify these artifact models based on their respective capabilities on the benchmarks and leverage them as data mixers by using their aggregated first-order influence approximation over source data. We demonstrate on eight reasoning benchmarks that the proposed framework yields significant improvements in the pretraining setting, with performance gains of up to 1.93%. Overall, this shows the potential of checkpoint models to enhance data quality and optimize data mixtures.
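The abstract mentions aggregating a first-order influence approximation from checkpoint models over source data. A minimal reading of that idea is a gradient alignment score between each data source and the target benchmark, summed over checkpoints and normalized into mixture weights; the gradients below are placeholders, and the paper's exact estimator and normalization are not given in the abstract.

```python
# Hedged sketch: turning first-order influence scores from several checkpoints
# into data-mixture weights. Gradients are placeholder vectors; AutoMixer's
# exact influence estimator and aggregation are assumptions here.
import numpy as np

def influence(source_grad: np.ndarray, benchmark_grad: np.ndarray) -> float:
    # First-order influence approximation: gradient alignment between a data
    # source and the target benchmark at a given checkpoint.
    return float(np.dot(source_grad, benchmark_grad))

def mixture_weights(checkpoint_grads: list[dict[str, np.ndarray]],
                    benchmark_grads: list[np.ndarray]) -> dict[str, float]:
    """Aggregate influence over checkpoints and normalize to a data mixture."""
    sources = checkpoint_grads[0].keys()
    scores = {s: sum(influence(g[s], b) for g, b in zip(checkpoint_grads, benchmark_grads))
              for s in sources}
    # Clip negative influence to zero, then normalize to a probability simplex.
    clipped = {s: max(v, 0.0) for s, v in scores.items()}
    total = sum(clipped.values()) or 1.0
    return {s: v / total for s, v in clipped.items()}

rng = np.random.default_rng(0)
ckpts = [{"web": rng.normal(size=16), "code": rng.normal(size=16)} for _ in range(3)]
bench = [rng.normal(size=16) for _ in range(3)]
print(mixture_weights(ckpts, bench))
```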

CitySim: Modeling Urban Behaviors and City Dynamics with Large-Scale LLM-Driven Agent Simulation

arXiv:2506.21805v1 Announce Type: cross Abstract: Modeling human behavior in urban environments is fundamental for social science, behavioral studies, and urban planning. Prior work often relies on rigid, hand-crafted rules, limiting its ability to simulate nuanced intentions, plans, and adaptive behaviors. To address these challenges, we envision an urban simulator (CitySim) that capitalizes on breakthroughs in human-level intelligence exhibited by large language models. In CitySim, agents generate realistic daily schedules using a recursive value-driven approach that balances mandatory activities, personal habits, and situational factors. To enable long-term, lifelike simulations, we endow agents with beliefs, long-term goals, and spatial memory for navigation. CitySim exhibits closer alignment with real humans than prior work, at both the micro and macro levels. Additionally, we conduct insightful experiments by modeling tens of thousands of agents and evaluating their collective behaviors under various real-world scenarios, including estimating crowd density, predicting place popularity, and assessing well-being. Our results highlight CitySim as a scalable, flexible testbed for understanding and forecasting urban phenomena.
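The "recursive value-driven approach" to daily scheduling is not specified in the abstract; the sketch below only illustrates the general idea of scoring candidate activities against mandatory commitments, personal habits, and situational factors. The field names, weights, and scoring rule are illustrative assumptions, not CitySim's implementation.

```python
# Hedged sketch: value-driven selection of the next activity for a CitySim-style
# agent. Field names, weights, and the scoring rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Agent:
    habits: dict[str, float]                           # preference score per activity type
    mandatory: set[str] = field(default_factory=set)   # e.g., {"work", "school"}
    beliefs: dict[str, float] = field(default_factory=dict)  # e.g., {"rain": 0.8}

def activity_value(agent: Agent, activity: str, hour: int) -> float:
    score = agent.habits.get(activity, 0.0)
    if activity in agent.mandatory and 9 <= hour <= 17:
        score += 10.0                   # mandatory activities dominate work hours
    if activity == "park" and agent.beliefs.get("rain", 0.0) > 0.5:
        score -= 5.0                    # situational factor: bad weather
    return score

def next_activity(agent: Agent, hour: int, options: list[str]) -> str:
    return max(options, key=lambda a: activity_value(agent, a, hour))

alice = Agent(habits={"cafe": 2.0, "park": 3.0, "work": 1.0}, mandatory={"work"})
print(next_activity(alice, hour=10, options=["cafe", "park", "work"]))  # -> "work"
```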

MemBench: Towards More Comprehensive Evaluation on the Memory of LLM-based Agents

arXiv:2506.21605v1 Announce Type: new Abstract: Recent works have highlighted the significance of memory mechanisms in LLM-based agents, which enable them to store observed information and adapt to dynamic environments. However, evaluating their memory capabilities remains challenging. Previous evaluations are commonly limited in the diversity of memory levels and interactive scenarios they cover, and they lack comprehensive metrics that reflect memory capabilities from multiple aspects. To address these problems, in this paper we construct a more comprehensive dataset and benchmark to evaluate the memory capability of LLM-based agents. Our dataset incorporates factual memory and reflective memory as distinct levels, and includes participation and observation as different interactive scenarios. Based on this dataset, we present a benchmark, named MemBench, that evaluates the memory capability of LLM-based agents from multiple aspects, including effectiveness, efficiency, and capacity. To benefit the research community, we release our dataset and project at https://github.com/import-myself/Membench.
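The abstract lists effectiveness, efficiency, and capacity as evaluation aspects. A minimal harness for such multi-aspect scoring could look like the sketch below; the concrete metric definitions are assumptions rather than MemBench's actual formulas.

```python
# Hedged sketch: multi-aspect scoring of an agent's memory (effectiveness,
# efficiency, capacity). The concrete metric definitions are assumptions and
# not necessarily those used by MemBench.
from dataclasses import dataclass

@dataclass
class EpisodeResult:
    correct: bool          # did the agent answer the memory probe correctly?
    latency_s: float       # wall-clock time spent retrieving and answering
    memory_tokens: int     # size of the agent's memory store after the episode

def score(results: list[EpisodeResult], token_budget: int = 8192) -> dict[str, float]:
    n = len(results)
    return {
        "effectiveness": sum(r.correct for r in results) / n,
        "efficiency": sum(r.latency_s for r in results) / n,      # lower is better
        "capacity": max(r.memory_tokens for r in results) / token_budget,
    }

demo = [EpisodeResult(True, 0.8, 2048), EpisodeResult(False, 1.2, 4096)]
print(score(demo))
```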

VIDEE: Visual and Interactive Decomposition, Execution, and Evaluation of Text Analytics with Intelligent Agents

arXiv:2506.21582v1 Announce Type: new Abstract: Text analytics has traditionally required specialized knowledge in Natural Language Processing (NLP) or text analysis, which presents a barrier for entry-level analysts. Recent advances in large language models (LLMs) have changed the landscape of NLP by enabling more accessible and automated text analysis (e.g., topic detection, summarization, information extraction). We introduce VIDEE, a system that enables entry-level data analysts to conduct advanced text analytics with intelligent agents. VIDEE instantiates a human-agent collaboration workflow consisting of three stages: (1) Decomposition, which incorporates a human-in-the-loop Monte-Carlo Tree Search algorithm to support generative reasoning with human feedback; (2) Execution, which generates an executable text analytics pipeline; and (3) Evaluation, which integrates LLM-based evaluation and visualizations to support user validation of execution results. We conduct two quantitative experiments to evaluate VIDEE's effectiveness and analyze common agent errors. A user study involving participants with varying levels of NLP and text analytics experience, from none to expert, demonstrates the system's usability and reveals distinct user behavior patterns. The findings identify design implications for human-agent collaboration, validate the practical utility of VIDEE for non-expert users, and inform future improvements to intelligent text analytics systems.
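VIDEE's decomposition stage uses a human-in-the-loop Monte-Carlo Tree Search. The sketch below shows a generic UCT selection step with a hook where human feedback could reweight candidate decompositions; the node fields and feedback term are assumptions, since the paper's search details are not in the abstract.

```python
# Hedged sketch: UCT-style node selection with a human-feedback adjustment,
# in the spirit of VIDEE's human-in-the-loop MCTS decomposition. The node
# structure and feedback term are illustrative assumptions.
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    step: str                       # a candidate text-analytics sub-task
    visits: int = 0
    value: float = 0.0              # accumulated reward from LLM self-evaluation
    human_bonus: float = 0.0        # set > 0 when an analyst endorses this step
    children: list["Node"] = field(default_factory=list)

def uct_select(parent: Node, c: float = 1.4) -> Node:
    def uct(n: Node) -> float:
        if n.visits == 0:
            return float("inf")     # explore unvisited decompositions first
        exploit = n.value / n.visits + n.human_bonus
        explore = c * math.sqrt(math.log(parent.visits + 1) / n.visits)
        return exploit + explore
    return max(parent.children, key=uct)

root = Node("analyze reviews", visits=10,
            children=[Node("topic detection", 4, 2.5),
                      Node("summarization", 6, 3.0, human_bonus=0.5)])
print(uct_select(root).step)
```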

Hope Speech Detection in code-mixed Roman Urdu tweets: A Positive Turn in Natural Language Processing

arXiv:2506.21583v1 Announce Type: new Abstract: Hope is a positive emotional state involving the expectation of favorable future outcomes, while hope speech refers to communication that promotes optimism, resilience, and support, particularly in adverse contexts. Although hope speech detection has gained attention in Natural Language Processing (NLP), existing research mainly focuses on high-resource languages and standardized scripts, often overlooking informal and underrepresented forms such as Roman Urdu. To the best of our knowledge, this is the first study to address hope speech detection in code-mixed Roman Urdu by introducing a carefully annotated dataset, thereby filling a critical gap in inclusive NLP research for low-resource, informal language varieties. This study makes four key contributions: (1) it introduces the first multi-class annotated dataset for Roman Urdu hope speech, comprising Generalized Hope, Realistic Hope, Unrealistic Hope, and Not Hope categories; (2) it explores the psychological foundations of hope and analyzes its linguistic patterns in code-mixed Roman Urdu to inform dataset development; (3) it proposes a custom attention-based transformer model optimized for the syntactic and semantic variability of Roman Urdu, evaluated using 5-fold cross-validation; and (4) it verifies the statistical significance of performance gains using a t-test. The best-performing model, based on XLM-R, achieves a cross-validation score of 0.78, outperforming the SVM (0.75) and BiLSTM (0.76) baselines, with gains of 4% and 2.63%, respectively.
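The abstract reports 5-fold cross-validation with an XLM-R-based model and a t-test for significance. The skeleton below shows that evaluation protocol with a placeholder train/evaluate function; the paper's training setup and custom attention head are not given in the abstract, and the fold scores shown are illustrative numbers only.

```python
# Hedged sketch: 5-fold stratified cross-validation plus a paired t-test, as in
# the evaluation protocol described in the abstract. `train_and_score` is a
# placeholder; the paper's actual fine-tuning setup is not specified here.
from scipy.stats import ttest_rel
from sklearn.model_selection import StratifiedKFold

LABELS = ["Generalized Hope", "Realistic Hope", "Unrealistic Hope", "Not Hope"]

def train_and_score(model_name: str, train_idx, test_idx, texts, labels) -> float:
    # Placeholder: fine-tune `model_name` (e.g., "xlm-roberta-base") on the
    # training fold and return a score (e.g., macro-F1) on the test fold.
    raise NotImplementedError

def cross_validate(model_name, texts, labels, seed=42):
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    return [train_and_score(model_name, tr, te, texts, labels)
            for tr, te in skf.split(texts, labels)]

# Paired t-test over per-fold scores, e.g., the XLM-R model vs. an SVM baseline.
xlmr_scores = [0.79, 0.77, 0.78, 0.76, 0.80]   # illustrative numbers only
svm_scores = [0.75, 0.74, 0.76, 0.73, 0.77]    # illustrative numbers only
t_stat, p_value = ttest_rel(xlmr_scores, svm_scores)
print(f"t={t_stat:.2f}, p={p_value:.4f}")
```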

Alibaba Qwen Team Releases Qwen-VLo: A Unified Multimodal Understanding and Generation Model

The Alibaba Qwen team has introduced Qwen-VLo, a new addition to its Qwen model family, designed to unify multimodal understanding and generation within a single framework. Positioned as a creative engine, Qwen-VLo enables users to generate, edit, and refine high-quality visual content from text, sketches, and commands, in multiple languages and through step-by-step scene construction. The model is aimed at designers, marketers, content creators, and educators.

Unified Vision-Language Modeling

Qwen-VLo builds on Qwen-VL, Alibaba's earlier vision-language model, by extending it with image generation capabilities. The model integrates visual and textual modalities in both directions: it can interpret images and generate relevant textual descriptions or respond to visual prompts, while also producing visuals from textual or sketch-based instructions. This bidirectional flow enables seamless interaction between modalities and streamlines creative workflows.

Key Features of Qwen-VLo

Concept-to-Polish Visual Generation: Qwen-VLo generates high-resolution images from rough inputs such as text prompts or simple sketches. The model understands abstract concepts and converts them into polished, aesthetically refined visuals, which is useful for early-stage ideation in design and branding.

On-the-Fly Visual Editing: With natural language commands, users can iteratively refine images, adjusting object placement, lighting, color themes, and composition. Qwen-VLo simplifies tasks like retouching product photography or customizing digital advertisements without manual editing tools.

Multilingual Multimodal Understanding: Qwen-VLo is trained with support for multiple languages, allowing users from diverse linguistic backgrounds to engage with the model. This makes it suitable for global deployment in industries such as e-commerce, publishing, and education.

Progressive Scene Construction: Rather than rendering complex scenes in one pass, Qwen-VLo supports progressive generation. Users can guide the model step by step, adding elements, refining interactions, and adjusting layouts incrementally. This mirrors natural human creativity and improves user control over the output.

Architecture and Training Enhancements

While the model architecture is not described in detail in the public blog, Qwen-VLo likely inherits and extends the Transformer-based architecture of the Qwen-VL line. The enhancements focus on fusion strategies for cross-modal attention, adaptive fine-tuning pipelines, and integration of structured representations for better spatial and semantic grounding. The training data includes multilingual image-text pairs, sketches paired with ground-truth images, and real-world product photography. This diverse corpus allows Qwen-VLo to generalize across tasks such as composition generation, layout refinement, and image captioning.

Target Use Cases

Design & Marketing: Qwen-VLo's ability to convert text concepts into polished visuals makes it suitable for ad creatives, storyboards, product mockups, and promotional content.

Education: Educators can visualize abstract concepts (e.g., in science, history, or art) interactively, and the language support improves accessibility in multilingual classrooms.

E-commerce & Retail: Online sellers can use the model to generate product visuals, retouch shots, or localize designs per region.

Social Media & Content Creation: For influencers and content producers, Qwen-VLo offers fast, high-quality image generation without reliance on traditional design software.

Key Benefits

Qwen-VLo stands out in the current large multimodal model (LMM) landscape by offering:

Seamless text-to-image and image-to-text transitions
Localized content generation in multiple languages
High-resolution outputs suitable for commercial use
An editable, interactive generation pipeline

Its design supports iterative feedback loops and precision edits, which are critical for professional-grade content generation workflows.

Conclusion

Alibaba's Qwen-VLo advances multimodal AI by merging understanding and generation capabilities into a cohesive, interactive model. Its flexibility, multilingual support, and progressive generation features make it a valuable tool for a wide range of content-driven industries. As demand grows for the convergence of visual and language content, Qwen-VLo positions itself as a scalable creative assistant ready for global adoption.
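The article does not document a public API, so the snippet below is purely hypothetical: it sketches what progressive scene construction could look like against an imagined chat-style image-generation client, where every class, method, and parameter name is an assumption rather than Qwen-VLo's actual interface.

```python
# Purely hypothetical sketch of progressive scene construction. `QwenVLoClient`
# and its methods are invented names for illustration; Qwen-VLo's real API may
# look entirely different (consult the official Qwen documentation).
class QwenVLoClient:
    def __init__(self):
        self.history = []                       # accumulated multimodal context

    def generate(self, instruction: str, image=None):
        # In a real client this would call the model; here we only record the
        # step so the incremental-editing pattern is visible.
        self.history.append({"instruction": instruction, "image": image})
        return f"<image after step {len(self.history)}>"

client = QwenVLoClient()
scene = client.generate("Draw a quiet street market at dusk")
scene = client.generate("Add a food stall with warm lantern light", image=scene)
scene = client.generate("Make the signage bilingual", image=scene)
print(scene)  # each call refines the previous result instead of regenerating it
```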
