YouZum


Google DeepMind Introduces Genie 3: A General Purpose World Model that can Generate an Unprecedented Diversity of Interactive Environments

Google DeepMind has announced Genie 3, an AI system capable of generating interactive, physically consistent virtual worlds from simple text prompts. This marks a substantial leap in the field of world models: a class of AI designed not merely to render environments but to understand and simulate them, producing dynamic spaces you can move through and interact with, like a game engine, in real time.

Technical Overview

World Model Fundamentals: A world model, in this context, is a deep neural network trained to generate and simulate visually rich, interactive virtual environments. Genie 3 leverages advances in generative modeling and large-scale multimodal AI to produce entire worlds at 720p resolution and 24 frames per second that are navigable and reactive to user input.

Natural Language Prompting: With Genie 3, users provide a plain-English description (such as "a beach at sunset, with interactive sandcastles") and the model synthesizes an environment fitting that description. Unlike traditional generative video or image models, Genie 3's outputs are not just visual; they are interactive. Users can walk, jump, or even paint within the environment, and those actions persist and remain consistent as you explore other regions.

World Consistency and Memory: A key innovation is "world memory." Genie 3's generated environments retain changes introduced by the user. For example, if you alter an object or leave a mark, returning to that area later shows your changes still in place. This temporal and spatial persistence is crucial for training AI agents and robots, and for creating immersive, interactive scenarios that feel stable and real.

Performance and Capabilities

- Smooth real-time interaction: Genie 3 runs at 24 fps and 720p, allowing seamless navigation through the generated world.
- Extensible interaction: While not full-featured like established game engines, it supports fundamental inputs (walking, looking, jumping, painting) and can incorporate dynamic events on the fly, such as altering the weather or adding characters.
- High diversity: Genie 3 can render environments ranging from realistic city streets and schools to entirely fantastical realms, all via simple prompts.
- Longer horizons: Environments remain physically consistent for several minutes, significantly longer than previous models, enabling more sustained play and interaction.

Impact and Applications

Game Design and Prototyping: Genie 3 offers tremendous utility as a tool for ideation and rapid prototyping. Designers can test new mechanics, environments, or artistic ideas in seconds, accelerating creative iteration. It also opens up on-the-fly generation of game scenarios that, while rough, could inspire new genres or gameplay experiences.

Robotics and Embodied AI: World models like Genie 3 are critical for training robots and embodied AI agents, allowing extensive simulation-based learning before deployment in the real world. The ability to continuously generate interactive, diverse, and physically plausible environments provides virtually unlimited data for agent training and curriculum development.

Beyond Gaming: XR, Education, and Simulation: The text-to-world paradigm democratizes the creation of immersive XR experiences, letting small teams or even individuals rapidly generate new simulations for education, training, or research. It also paves the way for participatory simulations, digital twins, and agent-based decision-making in areas like urban planning and crisis management.

Genie 3 and the Future

In my opinion, Genie 3 does not aim to replace traditional game engines yet, as it lacks their predictability, precision tools, and collaborative workflows. It is better understood as a bridge: future pipelines may alternate between neural world models and conventional engines, using each for what it does best, rapid creative synthesis and fine-grained polish, respectively. World models like Genie 3 are also a significant milestone toward Artificial General Intelligence (AGI): they enable richer agent simulation and broader transfer learning, and they bring AI systems a step closer to understanding and reasoning about the world at a foundational level.

Genie 3's emergence signals an exciting new chapter for AI, simulation, game design, and robotics. Its further development and integration could drastically change both how we build digital experiences and how intelligent agents learn, plan, and interact within complex environments.

Check out the Technical Blog. The post Google DeepMind Introduces Genie 3: A General Purpose World Model that can Generate an Unprecedented Diversity of Interactive Environments appeared first on MarkTechPost.
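
Genie 3 exposes no public API, so the following is purely a hypothetical sketch of the prompt-then-act loop a promptable world model implies; every name in it is an illustrative assumption, not DeepMind's interface.

```python
# Hypothetical sketch only: Genie 3 has no public API, and all names here
# are invented to illustrate the prompt -> action -> frame loop.
from dataclasses import dataclass, field

@dataclass
class PromptedWorld:
    prompt: str
    memory: list = field(default_factory=list)  # persisted changes ("world memory")

    def step(self, action: str) -> str:
        # A real world model would render the next 720p frame at 24 fps,
        # conditioned on the prompt, the action, and all prior changes.
        self.memory.append(action)
        return f"frame {len(self.memory)} of {self.prompt!r}"

world = PromptedWorld("a beach at sunset, with interactive sandcastles")
for action in ["walk_forward", "look_left", "paint_mark"]:
    frame = world.step(action)
print(frame)  # actions accumulated in world.memory persist across steps
```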


Evaluating LLMs on Real-World Forecasting Against Expert Forecasters

arXiv:2507.04562v3 Announce Type: replace-cross Abstract: Large language models (LLMs) have demonstrated remarkable capabilities across diverse tasks, but their ability to forecast future events remains understudied. A year ago, large language models struggled to come close to the accuracy of a human crowd. I evaluate state-of-the-art LLMs on 464 forecasting questions from Metaculus, comparing their performance against top forecasters. Frontier models achieve Brier scores that ostensibly surpass the human crowd but still significantly underperform a group of experts.
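
For readers unfamiliar with the metric, the Brier score is the mean squared error between forecast probabilities and binary outcomes, so lower is better; a minimal sketch:

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities (in [0, 1])
    and binary outcomes (0 or 1); lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# A forecaster who assigns 0.9 to an event that occurs and 0.2 to one
# that does not scores ((0.9 - 1)**2 + (0.2 - 0)**2) / 2 = 0.025.
print(brier_score([0.9, 0.2], [1, 0]))  # 0.025
```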


PyLate: Flexible Training and Retrieval for Late Interaction Models

arXiv:2508.03555v1 Announce Type: cross Abstract: Neural ranking has become a cornerstone of modern information retrieval. While single vector search remains the dominant paradigm, it suffers from the shortcoming of compressing all the information into a single vector. This compression leads to notable performance degradation in out-of-domain, long-context, and reasoning-intensive retrieval tasks. Multi-vector approaches pioneered by ColBERT aim to address these limitations by preserving individual token embeddings and computing similarity via the MaxSim operator. This architecture has demonstrated clear empirical advantages, including enhanced out-of-domain generalization, long-context handling, and performance in complex retrieval scenarios. Despite these compelling empirical results and clear theoretical advantages, the practical adoption and public availability of late interaction models remain low compared to their single-vector counterparts, primarily due to a lack of accessible and modular tools for training and experimenting with such models. To bridge this gap, we introduce PyLate, a streamlined library built on top of Sentence Transformers to support multi-vector architectures natively, inheriting its efficient training, advanced logging, and automated model card generation while requiring minimal changes to the code templates users are already familiar with. By offering multi-vector-specific features such as efficient indexes, PyLate aims to accelerate research and real-world application of late interaction models, thereby unlocking their full potential in modern IR systems. Finally, PyLate has already enabled the development of state-of-the-art models, including GTE-ModernColBERT and Reason-ModernColBERT, demonstrating its practical utility for both research and production environments.
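
To make the MaxSim operator concrete, here is a minimal PyTorch sketch of ColBERT-style late-interaction scoring; this is a generic illustration of the operator, not PyLate's internal code.

```python
import torch

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """ColBERT-style late interaction: for each query token embedding,
    take its maximum similarity over all document token embeddings,
    then sum those maxima over the query tokens.

    query_emb: (num_query_tokens, dim); doc_emb: (num_doc_tokens, dim);
    both assumed L2-normalized so dot products are cosine similarities.
    """
    sim = query_emb @ doc_emb.T           # (num_query_tokens, num_doc_tokens)
    return sim.max(dim=1).values.sum()    # MaxSim per query token, summed

q = torch.nn.functional.normalize(torch.randn(8, 128), dim=-1)
d = torch.nn.functional.normalize(torch.randn(200, 128), dim=-1)
print(maxsim_score(q, d))
```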


Pre-trained Transformer-Based Approach for Arabic Question Answering: A Comparative Study

arXiv:2111.05671v2 Announce Type: replace Abstract: Question answering (QA) is one of the most challenging yet widely investigated problems in Natural Language Processing (NLP). QA systems try to produce answers for given questions, which can be generated from unstructured or structured text. Hence, QA is considered an important research area that can be used in evaluating text-understanding systems. A large volume of QA studies has been devoted to the English language, investigating the most advanced techniques and achieving state-of-the-art results. However, research in Arabic question answering progresses at a considerably slower pace due to the scarcity of research efforts and the lack of large benchmark datasets. Recently, many pre-trained language models have achieved high performance on many Arabic NLP problems. In this work, we evaluate state-of-the-art pre-trained transformer models for Arabic QA using four reading comprehension datasets: Arabic-SQuAD, ARCD, AQAD, and TyDiQA-GoldP. We fine-tuned and compared the performance of the AraBERTv2-base, AraBERTv0.2-large, and AraELECTRA models. Finally, we provide an analysis to understand and interpret the low-performance results obtained by some models.
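
As a rough illustration of the extractive QA setup the paper evaluates, the sketch below runs span prediction with the Hugging Face Transformers API; the checkpoint path is a placeholder, since the paper fine-tunes the AraBERT and AraELECTRA models itself.

```python
# Illustrative sketch; "path/to/finetuned-arabert-qa" is a placeholder for
# a checkpoint fine-tuned on Arabic-SQuAD / ARCD / AQAD / TyDiQA-GoldP.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "path/to/finetuned-arabert-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question, context = "...", "..."  # an Arabic question and its passage
inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# Extractive QA: the answer is the span between the argmax start and end tokens.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```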


CompassVerifier: A Unified and Robust Verifier for LLMs Evaluation and Outcome Reward

arXiv:2508.03686v1 Announce Type: new Abstract: Answer verification is crucial not only for evaluating large language models (LLMs) by matching their unstructured outputs against standard answers, but also serves as the reward model to guide LLM optimization. Most evaluation frameworks rely on regex-based matching or employ general LLMs for answer verification, which demands extensive, repetitive customization of regex rules or evaluation prompts. Two fundamental limitations persist in current methodologies: 1) the absence of comprehensive benchmarks that systematically evaluate verification capabilities across different LLMs; and 2) the nascent stage of verifier development, where existing approaches lack both the robustness to handle complex edge cases and the generalizability across different domains. In this work, we develop CompassVerifier, an accurate and robust lightweight verifier model for evaluation and outcome reward. It demonstrates multi-domain competency spanning math, knowledge, and diverse reasoning tasks, with the capability to process various answer types, including multi-subproblem, formula, and sequence answers, while effectively identifying abnormal or invalid responses. We introduce VerifierBench, a benchmark comprising model outputs collected from multiple data sources and augmented through manual analysis of meta-error patterns to enhance CompassVerifier. We anticipate that CompassVerifier and VerifierBench will facilitate research on answer verification, evaluation protocols, and reinforcement learning. Code and dataset are available at https://github.com/open-compass/CompassVerifier.
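
To see why regex-based verification needs extensive, repetitive customization, consider a toy numeric matcher of the kind the abstract contrasts with learned verifiers; this is a generic illustration, unrelated to CompassVerifier's code.

```python
import re

def regex_verify(model_output: str, gold_answer: str) -> bool:
    """Toy baseline: take the last number in the model's output and
    compare it to the gold answer after whitespace normalization."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", model_output)
    return bool(numbers) and numbers[-1] == gold_answer.strip()

print(regex_verify("...so the total is 42.", "42"))       # True
print(regex_verify("The answer is x^2 + 1.", "x^2 + 1"))  # False: formulas,
# multi-subproblem and sequence answers all need new rules, which is
# exactly the brittleness that motivates a learned verifier.
```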


Cross-lingual Opinions and Emotions Mining in Comparable Documents

arXiv:2508.03112v1 Announce Type: new Abstract: Comparable texts are topic-aligned documents in multiple languages that are not direct translations. They are valuable for understanding how a topic is discussed across languages. This research studies differences in sentiments and emotions across English-Arabic comparable documents. First, texts are annotated with sentiment and emotion labels. We apply a cross-lingual method to label documents with opinion classes (subjective/objective), avoiding reliance on machine translation. To annotate with emotions (anger, disgust, fear, joy, sadness, surprise), we manually translate the English WordNet-Affect (WNA) lexicon into Arabic, creating bilingual emotion lexicons used to label the comparable corpora. We then apply a statistical measure to assess the agreement of sentiments and emotions in each source-target document pair. This comparison is especially relevant when the documents originate from different sources. To our knowledge, this aspect has not been explored in prior literature. Our study includes English-Arabic document pairs from Euronews, BBC, and Al-Jazeera (JSC). Results show that sentiment and emotion annotations align when articles come from the same news agency and diverge when they come from different ones. The proposed method is language-independent and generalizable to other language pairs.
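
The abstract does not name the statistical agreement measure; as one plausible instance (an assumption on our part, not the paper's stated choice), a chance-corrected coefficient such as Cohen's kappa could be computed over the paired document labels:

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two aligned label sequences,
    e.g. subjective/objective tags for source and target documents."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[l] * freq_b[l] for l in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

en = ["subjective", "objective", "subjective", "subjective"]
ar = ["subjective", "objective", "objective", "subjective"]
print(cohen_kappa(en, ar))  # 1.0 is perfect agreement, 0 is chance level
```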


Google AI Releases LangExtract: An Open Source Python Library that Extracts Structured Data from Unstructured Text Documents

In today's data-driven world, valuable insights are often buried in unstructured text, be it clinical notes, lengthy legal contracts, or customer feedback threads. Extracting meaningful, traceable information from these documents is both a technical and practical challenge. Google AI's new open-source Python library, LangExtract, is designed to address this gap directly, using LLMs like Gemini to deliver powerful, automated extraction with traceability and transparency at its core.

Key Innovations of LangExtract

1. Declarative and Traceable Extraction: LangExtract lets users define custom extraction tasks using natural-language instructions and high-quality few-shot examples. This empowers developers and analysts to specify exactly which entities, relationships, or facts to extract, and in what structure. Crucially, every extracted piece of information is tied directly back to its source text, enabling validation, auditing, and end-to-end traceability.

2. Domain Versatility: The library works not just in tech demos but in critical real-world domains, including health (clinical notes, medical reports), finance (summaries, risk documents), law (contracts), research literature, and even the arts (analyzing Shakespeare). Original use cases include automatic extraction of medications, dosages, and administration details from clinical documents, as well as relationships and emotions from plays or literature.

3. Schema Enforcement with LLMs: Powered by Gemini and compatible with other LLMs, LangExtract enables enforcement of custom output schemas (like JSON), so results are not just accurate but immediately usable in downstream databases, analytics, or AI pipelines. It addresses traditional LLM weaknesses around hallucination and schema drift by grounding outputs in both user instructions and the actual source text.

4. Scalability and Visualization:
- Handles large volumes: LangExtract efficiently processes long documents by chunking, parallelizing, and aggregating results.
- Interactive visualization: Developers can generate interactive HTML reports, viewing each extracted entity in context by highlighting its location in the original document, making auditing and error analysis seamless.
- Smooth integration: Works in Google Colab, Jupyter, or as standalone HTML files, supporting a rapid feedback loop for developers and researchers.

5. Installation and Usage

Install easily with pip:

```
pip install langextract
```

Example workflow (extracting character information from Shakespeare):

```python
import langextract as lx
import textwrap

# 1. Define your prompt
prompt = textwrap.dedent("""
    Extract characters, emotions, and relationships in order of appearance.
    Use exact text for extractions. Do not paraphrase or overlap entities.
    Provide meaningful attributes for each entity to add context.
""")

# 2. Give a high-quality example
examples = [
    lx.data.ExampleData(
        text="ROMEO. But soft! What light through yonder window breaks? It is the east, and Juliet is the sun.",
        extractions=[
            lx.data.Extraction(extraction_class="character", extraction_text="ROMEO",
                               attributes={"emotional_state": "wonder"}),
            lx.data.Extraction(extraction_class="emotion", extraction_text="But soft!",
                               attributes={"feeling": "gentle awe"}),
            lx.data.Extraction(extraction_class="relationship", extraction_text="Juliet is the sun",
                               attributes={"type": "metaphor"}),
        ],
    )
]

# 3. Extract from new text
input_text = "Lady Juliet gazed longingly at the stars, her heart aching for Romeo"
result = lx.extract(
    text_or_documents=input_text,
    prompt_description=prompt,
    examples=examples,
    model_id="gemini-2.5-pro",
)

# 4. Save and visualize results
lx.io.save_annotated_documents([result], output_name="extraction_results.jsonl")
html_content = lx.visualize("extraction_results.jsonl")
with open("visualization.html", "w") as f:
    f.write(html_content)
```

This results in structured, source-anchored JSON outputs, plus an interactive HTML visualization for easy review and demonstration.

Specialized & Real-World Applications

- Medicine: Extracts medications, dosages, and timing, and links them back to source sentences. Powered by insights from research on accelerating medical information extraction, LangExtract's approach is directly applicable to structuring clinical and radiology reports, improving clarity and supporting interoperability.
- Finance & Law: Automatically pulls relevant clauses, terms, or risks from dense legal or financial text, ensuring every output can be traced back to its context.
- Research & Data Mining: Streamlines high-throughput extraction from thousands of scientific papers. The team also provides a demonstration called RadExtract for structuring radiology reports, highlighting not just what was extracted but exactly where the information appeared in the original input.

How LangExtract Compares

- Schema consistency: often manual and error-prone in traditional approaches; LangExtract enforces it via instructions and few-shot examples.
- Result traceability: traditionally minimal; LangExtract links all output to the input text.
- Scaling to long texts: traditionally windowed and lossy; LangExtract chunks and parallelizes extraction, then aggregates.
- Visualization: traditionally custom or absent; LangExtract ships built-in, interactive HTML reports.
- Deployment: traditionally rigid and model-specific; LangExtract is Gemini-first but open to other LLMs and on-premises use.

In Summary

LangExtract presents a new era for extracting structured, actionable data from text, delivering:
- Declarative, explainable extraction
- Traceable results backed by source context
- Instant visualization for rapid iteration
- Easy integration into any Python workflow

Check out the GitHub Page and Technical Blog. The post Google AI Releases LangExtract: An Open Source Python Library that Extracts Structured Data from Unstructured Text Documents appeared first on MarkTechPost.


CUPID: Evaluating Personalized and Contextualized Alignment of LLMs from Interactions

arXiv:2508.01674v1 Announce Type: new Abstract: Personalization of Large Language Models (LLMs) often assumes users hold static preferences that reflect globally in all tasks. In reality, humans hold dynamic preferences that change depending on the context. As users interact with an LLM in various contexts, they naturally reveal their contextual preferences, which a model must infer and apply in future contexts to ensure alignment. To assess this, we introduce CUPID, a benchmark of 756 human-curated interaction session histories between users and LLM-based chat assistants. In each interaction session, the user provides a request in a specific context and expresses their preference through multi-turn feedback. Given a new user request and prior interaction sessions, our benchmark assesses whether LLMs can infer the preference relevant to this request and generate a response that satisfies this preference. With CUPID, we evaluated 10 open and proprietary LLMs, revealing that state-of-the-art LLMs struggle to infer preferences from multi-turn interactions and fail to discern what previous context is relevant to a new request, achieving under 50% precision and under 65% recall. Our work highlights the need to advance LLM capabilities for more contextually personalized interactions and proposes CUPID as a resource to drive these improvements.
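
As a sketch of how the quoted precision and recall can be read, treat the prior sessions a model draws on as "predicted relevant" and the human-annotated ones as gold; this is an illustrative framing, not the benchmark's evaluation code.

```python
def precision_recall(predicted: set, gold: set) -> tuple:
    """Precision/recall over which prior interaction sessions are judged
    relevant to a new user request."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Model uses sessions s1, s3, s4; annotators marked s1, s2 as relevant.
print(precision_recall({"s1", "s3", "s4"}, {"s1", "s2"}))  # (0.333..., 0.5)
```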


FinCoT: Grounding Chain-of-Thought in Expert Financial Reasoning

arXiv:2506.16123v2 Announce Type: replace Abstract: This paper presents FinCoT, a structured chain-of-thought (CoT) prompting framework that embeds domain-specific expert financial reasoning blueprints to guide large language models' behaviors. We identify three main prompting styles in financial NLP (FinNLP): (1) standard prompting (zero-shot), (2) unstructured CoT (free-form reasoning), and (3) structured CoT (with explicitly structured reasoning steps). Prior work has mainly focused on the first two, while structured CoT remains underexplored and lacks domain expertise incorporation. Therefore, we evaluate all three prompting approaches across ten CFA-style financial domains and introduce FinCoT as the first structured finance-specific prompting approach incorporating blueprints from domain experts. FinCoT improves the accuracy of a general-purpose model, Qwen3-8B-Base, from 63.2% to 80.5%, and boosts Fin-R1 (7B), a finance-specific model, from 65.7% to 75.7%, while reducing output length by up to 8.9x and 1.16x compared to structured CoT methods, respectively. We find that FinCoT proves most effective for models lacking financial post-training. Our findings show that FinCoT not only improves performance and reduces inference costs but also yields more interpretable and expert-aligned reasoning traces.
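
To illustrate what a structured-CoT prompt looks like relative to free-form CoT, here is a hypothetical blueprint skeleton in the spirit of FinCoT; the actual expert blueprints are specified in the paper, and this wording is invented.

```python
# Hypothetical blueprint skeleton, not the paper's actual prompt.
BLUEPRINT = """You are a financial analyst. Answer the question by following
these steps exactly:
1. Identify the financial concept or CFA topic being tested.
2. List the given quantities and the quantity to solve for.
3. State the relevant formula or decision rule.
4. Compute step by step, keeping units and signs explicit.
5. Give the final answer on its own line as: Answer: <choice>.
"""

def structured_cot_prompt(question: str) -> str:
    return f"{BLUEPRINT}\nQuestion: {question}"

print(structured_cot_prompt("What is the duration of a 5-year zero-coupon bond?"))
```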


One Trigger Token Is Enough: A Defense Strategy for Balancing Safety and Usability in Large Language Models

arXiv:2505.07167v2 Announce Type: replace-cross Abstract: Large Language Models (LLMs) have been extensively used across diverse domains, including virtual assistants, automated code generation, and scientific research. However, they remain vulnerable to jailbreak attacks, which manipulate the models into generating harmful responses despite safety alignment. Recent studies have shown that current safety-aligned LLMs often exhibit shallow safety alignment, where the first few tokens largely determine whether the response will be harmful. Through comprehensive observations, we find that safety-aligned LLMs and various defense strategies generate highly similar initial tokens in their refusal responses, which we define as safety trigger tokens. Building on this insight, we propose D-STT, a simple yet effective defense algorithm that identifies and explicitly decodes the safety trigger tokens of the given safety-aligned LLM to trigger the model's learned safety patterns. In this process, the safety trigger is constrained to a single token, which effectively preserves model usability by introducing minimal intervention in the decoding process. Extensive experiments across diverse jailbreak attacks and benign prompts demonstrate that D-STT significantly reduces output harmfulness while preserving model usability and incurring negligible response time overhead, outperforming ten baseline methods.
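
A minimal sketch of the core idea, assuming a Hugging Face causal LM: force the first token of the model's stock refusal as the first decoded token and leave the rest of decoding untouched. This is illustrative only; the paper's procedure for identifying the trigger token is more involved, and the model path is a placeholder.

```python
# Illustrative sketch of forcing a single safety trigger token at decode time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path/to/safety-aligned-llm"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "..."  # the (possibly adversarial) user request
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Suppose this model's refusals start with "I" ("I cannot help with that...").
trigger = tokenizer.encode("I", add_special_tokens=False)[:1]
seeded = torch.cat([prompt_ids, torch.tensor([trigger])], dim=-1)

# Decoding continues from the seeded trigger token; benign prompts are
# minimally affected because only this one token is constrained.
output_ids = model.generate(seeded, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][prompt_ids.shape[1]:], skip_special_tokens=True))
```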
