Bias Beware: The Impact of Cognitive Biases on LLM-Driven Product Recommendations

arXiv:2502.01349v2 Announce Type: replace Abstract: The advent of Large Language Models (LLMs) has revolutionized product recommenders, yet their susceptibility to adversarial manipulation poses critical challenges, particularly in real-world commercial applications. Our approach is the first to tap into human psychological principles, seamlessly modifying product descriptions and making such manipulations hard to detect. In this work, we investigate cognitive biases as black-box adversarial strategies, drawing parallels between their effects on LLMs and human purchasing behavior. Through extensive evaluation across models of varying scale, we find that certain biases, such as social proof, consistently boost product recommendation rate and ranking, while others, like scarcity and exclusivity, surprisingly reduce visibility. Our results demonstrate that cognitive biases are deeply embedded in state-of-the-art LLMs, leading to highly unpredictable behavior in product recommendations and posing significant challenges for effective mitigation.
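
The manipulations studied here amount to appending bias-laden cues to otherwise unchanged product descriptions and measuring how an LLM recommender's ranking shifts. Below is a minimal sketch of that probing setup, assuming a hypothetical `query_llm_recommender` helper that returns a ranked list of product names; the bias templates are illustrative, not the paper's prompts.

```python
# Sketch: probing an LLM recommender's sensitivity to cognitive-bias cues.
# `query_llm_recommender` is a hypothetical helper, and the templates below
# are illustrative examples of bias-laden edits, not the paper's prompts.

BIAS_TEMPLATES = {
    "social_proof": "{desc} Over 10,000 customers bought this last month.",
    "scarcity": "{desc} Only 2 units left in stock.",
    "exclusivity": "{desc} Available exclusively to premium members.",
}

def biased_description(desc: str, bias: str) -> str:
    """Append a bias-laden cue to an otherwise unchanged product description."""
    return BIAS_TEMPLATES[bias].format(desc=desc)

def rank_shift(products: dict, target: str, bias: str, query_llm_recommender) -> int:
    """Rank change of `target` after the biased edit; positive means it moved up."""
    baseline = query_llm_recommender(products)        # ranked list of product names
    edited = dict(products)
    edited[target] = biased_description(products[target], bias)
    manipulated = query_llm_recommender(edited)
    return baseline.index(target) - manipulated.index(target)
```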

MedPAIR: Measuring Physicians and AI Relevance Alignment in Medical Question Answering

arXiv:2505.24040v1 Announce Type: new Abstract: Large Language Models (LLMs) have demonstrated remarkable performance on various medical question-answering (QA) benchmarks, including standardized medical exams. However, correct answers alone do not ensure correct logic, and models may reach accurate conclusions through flawed processes. In this study, we introduce the MedPAIR (Medical Dataset Comparing Physicians and AI Relevance Estimation and Question Answering) dataset to evaluate how physician trainees and LLMs prioritize relevant information when answering QA questions. We obtain annotations on 1,300 QA pairs from 36 physician trainees, labeling each sentence within the question components for relevance. We compare these relevance estimates to those of LLMs, and further evaluate the impact of these “relevant” subsets on downstream task performance for both physician trainees and LLMs. We find that LLMs are frequently not aligned with the content relevance estimates of physician trainees. After filtering out physician trainee-labeled irrelevant sentences, accuracy improves for both the trainees and the LLMs. All LLM and physician trainee-labeled data are available at: http://medpair.csail.mit.edu/.
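
The reported accuracy gain comes from a simple preprocessing step: drop the sentences physician trainees labeled irrelevant before the model answers. Below is a minimal sketch of that filtering, assuming each question is stored as sentence-level records with a relevance flag; the field names and the `answer_with_llm` helper are hypothetical, not part of the released dataset.

```python
# Sketch: answer questions using only the sentences physician trainees
# marked as relevant. Field names and the answer_with_llm helper are
# illustrative assumptions, not the MedPAIR release format.

from dataclasses import dataclass

@dataclass
class LabeledSentence:
    text: str
    relevant: bool   # physician-trainee relevance annotation

def filtered_question(sentences: list) -> str:
    """Keep only the sentences labeled relevant and rejoin them."""
    return " ".join(s.text for s in sentences if s.relevant)

def accuracy(qa_items, answer_with_llm) -> float:
    """Fraction answered correctly when the LLM sees only relevant sentences.
    Each item is assumed to be a (sentences, options, gold_answer) triple."""
    correct = 0
    for sentences, options, gold in qa_items:
        prediction = answer_with_llm(filtered_question(sentences), options)
        correct += int(prediction == gold)
    return correct / len(qa_items)
```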

LegalEval-Q: A New Benchmark for The Quality Evaluation of LLM-Generated Legal Text

arXiv:2505.24826v1 Announce Type: new Abstract: As large language models (LLMs) are increasingly used in legal applications, current evaluation benchmarks tend to focus mainly on factual accuracy while largely neglecting important linguistic quality aspects such as clarity, coherence, and terminology. To address this gap, we propose three steps: First, we develop a regression model to evaluate the quality of legal texts based on clarity, coherence, and terminology. Second, we create a specialized set of legal questions. Third, we analyze 49 LLMs using this evaluation framework. Our analysis identifies three key findings: First, model quality levels off at 14 billion parameters, with only a marginal improvement of 2.7% noted at 72 billion parameters. Second, engineering choices such as quantization and context length have a negligible impact, as indicated by statistical significance thresholds above 0.016. Third, reasoning models consistently outperform base architectures. A significant outcome of our research is the release of a ranking list and Pareto analysis, which highlight the Qwen3 series as the optimal choice for cost-performance tradeoffs. This work not only establishes standardized evaluation protocols for legal LLMs but also uncovers fundamental limitations in current training data refinement approaches. Code and models are available at: https://github.com/lyxx3rd/LegalEval-Q.
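
The abstract's first step is a regression model that scores legal text on clarity, coherence, and terminology. Below is a minimal sketch of one way such a scorer could be fit from sub-scores to an overall human rating; the linear form, feature values, and function names are assumptions for illustration, not the authors' model.

```python
# Sketch: fitting a linear regression from (clarity, coherence, terminology)
# sub-scores to an overall human quality rating. The sub-scores are assumed
# to come from some upstream annotator or model; values below are made up.

import numpy as np

def fit_quality_regression(features: np.ndarray, ratings: np.ndarray) -> np.ndarray:
    """Least-squares fit of ratings ~ [clarity, coherence, terminology, 1]."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    return coef

def predict_quality(coef: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Apply the fitted coefficients to new sub-score rows."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ coef

# Example with three annotated texts, sub-scores in [0, 1].
features = np.array([[0.9, 0.8, 0.7], [0.4, 0.5, 0.6], [0.7, 0.9, 0.8]])
ratings = np.array([0.85, 0.50, 0.80])
coef = fit_quality_regression(features, ratings)
scores = predict_quality(coef, features)
```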

Enigmata’s Multi-Stage and Mix-Training Reinforcement Learning Recipe Drives Breakthrough Performance in LLM Puzzle Reasoning

Large Reasoning Models (LRMs), trained from LLMs with reinforcement learning (RL), have demonstrated strong performance on complex reasoning tasks, including mathematics, STEM, and coding. However, existing LRMs struggle with many puzzle tasks that require purely logical reasoning skills yet are easy and intuitive for humans. Current work targeting puzzles focuses mainly on designing benchmarks for evaluation, lacking the training methods and resources modern LLMs need to tackle this challenge, and existing puzzle datasets lack diversity and scalability, covering limited puzzle types with little control over generation or difficulty. Moreover, given the success of the “LLM+RLVR” paradigm, it has become crucial to obtain large, diverse, and challenging sets of verifiable puzzle prompts for training.

Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a key method for improving models’ reasoning capabilities, removing the need for reward models by directly assigning rewards based on objectively verifiable answers. Puzzles are particularly well suited to RLVR, yet most prior RLVR research has overlooked their potential to deliver effective reward signals. Existing puzzle benchmarks for LLMs evaluate different types of reasoning, including abstract, deductive, and compositional reasoning, but the few that support scalable generation and difficulty control lack puzzle diversity. Efforts to improve LLMs’ puzzle-solving abilities mainly fall into two categories: tool integration and RLVR.

Researchers from ByteDance Seed, Fudan University, Tsinghua University, Nanjing University, and Shanghai Jiao Tong University have proposed Enigmata, the first comprehensive toolkit designed to equip LLMs with puzzle reasoning skills. It contains 36 tasks across seven categories, each featuring a generator that produces unlimited examples with controllable difficulty and a rule-based verifier for automatic evaluation. The researchers further developed Enigmata-Eval as a rigorous benchmark and created optimized multi-task RLVR strategies. When used to train larger models such as Seed1.5-Thinking, puzzle data from Enigmata also enhances state-of-the-art performance on advanced math and STEM reasoning tasks like AIME, BeyondAIME, and GPQA, demonstrating the generalization benefits of the dataset.

The Enigmata-Data comprises 36 puzzle tasks organized into seven primary categories: Crypto, Arithmetic, Logic, Grid, Graph, Search, and Sequential Puzzle, making it the only dataset to combine multiple task categories with scalability, automatic verification, and public availability. Data construction follows a three-phase pipeline: Tasks Collection and Design, Auto-Generator and Verifier Development, and Sliding Difficulty Control. Enigmata-Eval is built by systematically sampling from the broader dataset, targeting 50 instances per difficulty level for each task. The final evaluation set contains 4,758 puzzle instances rather than the theoretical maximum of 5,400 because some tasks inherently generate fewer instances per difficulty level.

With 32B parameters, the proposed model outperforms most public models on Enigmata-Eval, demonstrating the effectiveness of the dataset and training recipe. The model also stands out on the challenging ARC-AGI benchmark, surpassing strong reasoning models such as Gemini 2.5 Pro, o3-mini, and o1.
Qwen2.5-32B-Enigmata shows outstanding performance in structured reasoning categories, leading in Crypto, Arithmetic, and Logic tasks, which suggests effective development of rule-based reasoning capabilities. The model is also competitive on search tasks that require strategic exploration and planning. Across categories, Crypto and Arithmetic tasks tend to yield the highest accuracy, while spatial and sequential tasks remain more difficult.

In summary, the researchers introduced Enigmata, a comprehensive suite for equipping LLMs with advanced puzzle reasoning that integrates seamlessly with RL through verifiable, rule-based rewards. The trained Enigmata model shows strong performance and robust generalization through RLVR training. Experiments reveal that when applied to larger models such as Seed1.5-Thinking (20B/200B parameters), synthetic puzzle data brings additional benefits in other domains, including mathematics and STEM reasoning, over state-of-the-art models. Enigmata provides a solid foundation for the research community to advance reasoning-model development, offering a unified framework that effectively bridges logical puzzle solving with broader reasoning capabilities in LLMs.

Check out the Paper, GitHub Page, and Project Page.
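
Each Enigmata task pairs a difficulty-controllable generator with a rule-based verifier whose output serves directly as the RLVR reward. The sketch below illustrates that generator-plus-verifier pattern on a toy arithmetic puzzle; the task, function names, and 0/1 reward convention are illustrative assumptions, not code from the released toolkit.

```python
# Sketch of the generator + rule-based verifier pattern behind RLVR-style rewards.
# The toy arithmetic task and binary reward convention are illustrative assumptions.

import random
import re

def generate_puzzle(difficulty: int, seed=None) -> dict:
    """Produce an addition chain whose length scales with the difficulty level."""
    rng = random.Random(seed)
    terms = [rng.randint(1, 9) for _ in range(difficulty + 2)]
    return {"prompt": " + ".join(map(str, terms)) + " = ?", "answer": sum(terms)}

def verify(puzzle: dict, model_output: str) -> float:
    """Rule-based verifier: reward 1.0 iff the last integer in the output is correct."""
    numbers = re.findall(r"-?\d+", model_output)
    return 1.0 if numbers and int(numbers[-1]) == puzzle["answer"] else 0.0

# The verifier's score is used directly as the RL reward signal, so no learned
# reward model is needed.
puzzle = generate_puzzle(difficulty=3, seed=0)
reward = verify(puzzle, f"The sum is {puzzle['answer']}.")
```

Because the generator exposes a difficulty parameter, training data can be sampled along a curriculum, and the verifier gives an unambiguous reward at any difficulty level.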

This AI Paper from Microsoft Introduces WINA: A Training-Free Sparse Activation Framework for Efficient Large Language Model Inference

Large language models (LLMs), with billions of parameters, power many AI-driven services across industries. However, their massive size and complex architectures make computational cost during inference a significant challenge. As these models evolve, optimizing the balance between computational efficiency and output quality has become a crucial area of research.

The core challenge lies in how LLMs handle inference. Every time an input is processed, the entire model is activated, which consumes extensive computational resources. This full activation is unnecessary for most tasks, as only a small subset of neurons contributes meaningfully to the final output. Existing sparse activation methods attempt to address this by selectively deactivating less important neurons, but they often consider only the magnitude of hidden states while ignoring the critical role of weight matrices in propagating errors through the network. This oversight leads to high approximation errors and degrades model performance, particularly at higher sparsity levels.

Prior sparse activation techniques include Mixture-of-Experts (MoE), used in models such as GPT-4 and Mistral, which relies on additional training to learn which experts to activate for each input. Other approaches, such as TEAL and CATS, reduce computation by using the magnitude of hidden activations to prune neurons, but they still leave room for improvement: they can mistakenly deactivate important neurons or retain those with minimal influence, and they require model-specific threshold tuning, making them less flexible across architectures.

Researchers from Microsoft, Renmin University of China, New York University, and the South China University of Technology proposed WINA (Weight Informed Neuron Activation) to address these issues. WINA is a training-free sparse activation technique that uses both hidden-state magnitudes and the column-wise ℓ2 norms of weight matrices to determine which neurons to activate during inference. By considering the combined impact of input magnitudes and weight importance, WINA creates a more effective sparsification strategy that adapts to different layers of the model without retraining or fine-tuning.

WINA is built on a simple yet powerful idea: neurons with strong activations and large weight magnitudes are more likely to influence downstream computations. To operationalize this, WINA computes the element-wise product of hidden states and weight norms and selects the top-K components under this combined metric, constructing a sparse sub-network that preserves the most important signals while ignoring redundant activations. The method also includes a tensor transformation step that enforces column-wise orthogonality in weight matrices, ensuring that theoretical error bounds translate to real-world performance. Together, these steps let WINA maintain a tight approximation error while delivering significant computational savings.

The research team evaluated WINA on several large language models, including Qwen-2.5-7B, LLaMA-2-7B, LLaMA-3-8B, and Phi-4-14B, across various tasks and sparsity levels. WINA outperformed TEAL and CATS across all tested models and sparsity settings.
For example, on Qwen-2.5-7B at 65% sparsity, WINA achieved up to 2.94% higher average performance than TEAL and 1.41% higher than TEAL-Transform. On LLaMA-3-8B, WINA delivered gains of 1.06% at 50% sparsity and 2.41% at 65% sparsity. Even at high sparsity levels, WINA retained stronger performance on reasoning-intensive tasks such as GSM8K and ARC Challenge, and it delivered consistent computational savings, reducing floating-point operations by up to 63.7% on LLaMA-2-7B and 62.7% on Phi-4-14B.

In summary, WINA offers a robust, training-free solution for sparse activation in large language models by combining hidden-state magnitudes with weight-matrix norms. The approach addresses the limitations of prior methods such as TEAL, yielding lower approximation errors, improved accuracy, and significant computational savings, and it represents an important step toward more efficient LLM inference methods that adapt to diverse models without additional training.

Check out the Paper and GitHub Page.
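
A minimal sketch of the selection criterion described above: score each input neuron by the product of its hidden-state magnitude and the ℓ2 norm of the weights attached to it, then keep the top-K. This is an illustrative re-implementation of the stated idea under assumed tensor layouts, not the authors' released code.

```python
# Sketch: weight-informed top-K neuron selection in the spirit of WINA.
# Assumes hidden of shape (d_in,) and weight of shape (d_in, d_out) so that
# the next layer computes hidden @ weight.

import torch

def wina_mask(hidden: torch.Tensor, weight: torch.Tensor, k: int) -> torch.Tensor:
    """Return a binary mask over the d_in activations, keeping the top-k scores."""
    w_norms = weight.norm(dim=1)          # L2 norm of the weights tied to each input neuron
    scores = hidden.abs() * w_norms       # combined activation-and-weight importance
    mask = torch.zeros_like(hidden)
    mask[torch.topk(scores, k).indices] = 1.0
    return mask

# Usage: zero out low-importance activations before the matrix multiplication.
d_in, d_out = 16, 8
h = torch.randn(d_in)
W = torch.randn(d_in, d_out)
sparse_h = h * wina_mask(h, W, k=int(0.35 * d_in))   # roughly 65% sparsity
out = sparse_h @ W
```

The tensor transformation that enforces column-wise orthogonality in the weight matrices is omitted here for brevity; the sketch only shows the scoring rule that distinguishes WINA from purely activation-magnitude methods such as TEAL and CATS.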

Meet NovelSeek: A Unified Multi-Agent Framework for Autonomous Scientific Research from Hypothesis Generation to Experimental Validation

Scientific research across fields like chemistry, biology, and artificial intelligence has long relied on human experts to explore knowledge, generate ideas, design experiments, and refine results. Yet as problems grow more complex and data-intensive, discovery slows. While AI tools such as language models and robotics can handle specific tasks like literature search or code analysis, they rarely encompass the entire research cycle. Bridging the gap between idea generation and experimental validation remains a key challenge: for AI to autonomously advance science, it must propose hypotheses, design and execute experiments, analyze outcomes, and refine its approach in an iterative loop. Without this integration, AI risks producing disconnected ideas that still depend on human supervision for validation.

Before the introduction of a unified system, researchers relied on separate tools for each stage of the process. Large language models could help find relevant scientific papers, but they did not feed directly into experiment design or result analysis; robotics can automate physical experiments, and libraries like PyTorch can help build models, yet these tools operate independently of one another. No single system could handle the entire process, from forming ideas to verifying them through experiments. This created bottlenecks, forcing researchers to connect the dots manually, slowing progress and leaving room for errors or missed opportunities. The need for an integrated system covering the full research cycle became clear.

Researchers from the NovelSeek Team at the Shanghai Artificial Intelligence Laboratory developed NovelSeek, an AI system designed to run the entire scientific discovery process autonomously. NovelSeek comprises four main modules that work in tandem: a component that generates and refines research ideas, a feedback loop through which human experts can interact with and refine those ideas, a method for translating ideas into code and experiment plans, and a process for conducting multiple rounds of experiments. What makes NovelSeek stand out is its versatility: it works across 12 scientific research tasks, including predicting chemical reaction yields, understanding molecular dynamics, forecasting time-series data, and handling tasks such as 2D semantic segmentation and 3D object classification. The team designed NovelSeek to minimize human involvement, expedite discoveries, and deliver consistent, high-quality results.

The system is built from multiple specialized agents, each focused on a specific part of the research workflow. The “Survey Agent” helps the system understand the problem by searching scientific papers and identifying relevant information based on keywords and task definitions; it adapts its strategy by first conducting a broad survey of papers and then analyzing full-text documents for detailed insights, capturing both general trends and specific technical knowledge. The “Code Review Agent” examines existing codebases, whether user-uploaded or sourced from public repositories like GitHub, to understand how current methods work and identify areas for improvement; it checks how code is structured, looks for errors, and creates summaries that help the system build on past work. The “Idea Innovation Agent” generates creative research ideas, pushing the system to explore different approaches and refine them against related studies and previous results.
The system also includes a “Planning and Execution Agent” that turns ideas into detailed experiments, handles errors during testing, and ensures smooth execution of multi-step research plans.

NovelSeek delivered impressive results across tasks. In chemical reaction yield prediction, it improved performance from a baseline of 24.2% (with a variation of ±4.2) to 34.8% (with a much smaller variation of ±1.1) in just 12 hours, progress that human researchers typically need months to achieve. In enhancer activity prediction, a key task in biology, it raised the Pearson correlation coefficient from 0.65 to 0.79 within 4 hours. For 2D semantic segmentation, a computer vision task, precision improved from 78.8% to 81.0% in 30 hours. These gains, achieved in a fraction of the time typically required, highlight the system's efficiency. NovelSeek also successfully managed large, complex codebases with multiple files, demonstrating its ability to handle research at the project level rather than only in small, isolated tests. The team has made the code open source, allowing others to use, test, and contribute to it.

Key takeaways from the research on NovelSeek include:

- NovelSeek supports 12 research tasks, including chemical reaction prediction, molecular dynamics, and 3D object classification.
- Reaction yield prediction accuracy improved from 24.2% to 34.8% in 12 hours.
- Enhancer activity prediction performance increased from 0.65 to 0.79 in 4 hours.
- 2D semantic segmentation precision improved from 78.8% to 81.0% in 30 hours.
- NovelSeek includes agents for literature search, code analysis, idea generation, and experiment execution.
- The system is open source, enabling reproducibility and collaboration across scientific fields.

In conclusion, NovelSeek demonstrates how combining AI tools into a single system can accelerate scientific discovery and reduce its dependence on human effort. It ties the key steps, from generating ideas to turning them into methods and testing them through experiments, into one streamlined process, so what once took researchers months or years can now be done in days or even hours. By linking every stage of research into a continuous loop, NovelSeek helps teams move from rough ideas to real-world results more quickly, and it highlights the power of AI not just to assist but to drive scientific research in a way that could reshape how discoveries are made across many fields.

Check out the Paper and GitHub Page.
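
The survey-to-experiment loop described above can be summarized as a simple orchestration skeleton. The sketch below is a hedged illustration under assumed interfaces: each agent is modeled as a plain callable, and the names, signatures, and scoring loop are illustrative, not NovelSeek's actual API.

```python
# Sketch: orchestration skeleton for a survey -> idea -> plan -> experiment loop.
# Agent interfaces and names are illustrative assumptions, not NovelSeek's API.

from typing import Callable

def research_loop(task: str,
                  survey_agent: Callable[[str], str],
                  idea_agent: Callable[[str, str], str],
                  plan_agent: Callable[[str], str],
                  run_experiment: Callable[[str], float],
                  rounds: int = 3) -> tuple[str, float]:
    """Iteratively refine ideas, keeping the best-scoring experiment outcome."""
    context = survey_agent(task)          # literature / codebase summary
    best_idea, best_score = "", float("-inf")
    for _ in range(rounds):
        idea = idea_agent(task, context)  # propose or refine a hypothesis
        plan = plan_agent(idea)           # turn the idea into a runnable protocol
        score = run_experiment(plan)      # execute and measure (e.g., yield, correlation)
        if score > best_score:
            best_idea, best_score = idea, score
        context += f"\nPrevious idea: {idea} -> score {score:.3f}"
    return best_idea, best_score
```

Feeding each experiment's score back into the context is what turns the pipeline into the continuous refinement loop the article describes, rather than a one-shot idea generator.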
