
Here’s how we picked this year’s Innovators Under 35

Next week, we’ll publish our 2025 list of Innovators Under 35, highlighting smart and talented people who are working in many areas of emerging technology. This new class features 35 accomplished founders, hardware engineers, roboticists, materials scientists, and others who are already tackling tough problems and making big moves in their careers. All are under the age of 35.

One is developing a technology to reduce emissions from shipping, while two others are improving fertility treatments and creating new forms of contraception. Another is making it harder for people to maliciously share intimate images online. And quite a few are applying artificial intelligence to their respective fields in novel ways.

We’ll also soon reveal our 2025 Innovator of the Year, whose technical prowess is helping physicians diagnose and treat critically ill patients more quickly. What’s more (here’s your final hint), our winner even set a world record as a result of this work.

MIT Technology Review first published a list of Innovators Under 35 in 1999. It’s a grand tradition for us, and we often follow the work of various featured innovators for years, even decades, after they appear on the list. So before the big announcement, I want to take a moment to explain how we select the people we recognize each year.

Step 1: Call for nominations

Our process begins with a call for nominations, which typically goes out in the final months of the previous year and is open to anyone, anywhere in the world. We encourage people to nominate themselves, which takes just a few minutes. This method helps us discover people doing important work that we might not otherwise encounter.

This year we had 420 nominations. Two-thirds of our candidates were put forward by someone else and one-third nominated themselves. We received nominations for people located in about 40 countries. Nearly 70% were based in the United States, with the UK, Switzerland, China, and the United Arab Emirates, respectively, having the next-highest concentrations.

After nominations close, a few editors then spend several weeks reviewing the nominees and selecting semifinalists. During this phase, we look for people who have developed practical solutions to societal issues or made important scientific advances that could translate into new technologies. Their work should have the potential for broad impact; it can’t be niche or incremental. And what’s unique about their approach must be clear.

Step 2: Semifinalist applications

This year, we winnowed our initial list of hundreds of nominees to 108 semifinalists. Then we asked those entrants for more information to help us get to know them better and evaluate their work.

We request three letters of reference and a résumé from each semifinalist, and we ask all of them to answer a few short questions about their work. We also give them the option to share a video or pass along relevant journal articles or other links to help us learn more about what they do.

Step 3: Expert judges weigh in

Next, we bring in dozens of experts to vet the semifinalists. This year, 38 judges evaluated and scored the applications. We match the contenders with judges who work in similar fields whenever possible. At least two judges review each entrant, though most are seen by three.

All these judges volunteer their time, and some return to help year after year. A few of our longtime judges include materials scientists Yet-Ming Chiang (MIT) and Julia Greer (Caltech), MIT neuroscientist Ed Boyden, and computer scientist Ben Zhao of the University of Chicago.

John Rogers, a materials scientist and biomedical engineer at Northwestern University, has been a judge for more than a decade (and was featured on our very first Innovators list, in 1999). Here’s what he had to say about why he stays involved: “This award is compelling because it recognizes young people with scientific achievements that are not only of fundamental interest but also of practical significance, at the highest levels.”

Step 4: Editors make the final calls

In a final layer of vetting, editors who specialize in covering biotechnology, climate and energy, and artificial intelligence review the semifinalists whom judges scored highly in their respective areas. Staff editors and reporters can also nominate people they’ve come across in their coverage, and we add them to the mix for consideration.

Last, a small team of senior editors reviews all the semifinalists and the judges’ scores, as well as our own staff’s recommendations, and selects 35 honorees. We aim for a good combination of people from a variety of disciplines working in different regions of the world. And we take a staff vote to pick an Innovator of the Year, someone whose work we particularly admire.

In the end, it’s impossible to include every deserving individual on our list. But by incorporating both external nominations and outside expertise from our judges, we aim to make the evaluation process as rigorous and open as possible.

So who made the cut this year? Come back on September 8 to find out.


NVIDIA AI Team Introduces Jetson Thor: The Ultimate Platform for Physical AI and Next-Gen Robotics

Last week, the NVIDIA robotics team released Jetson Thor, which includes the Jetson AGX Thor Developer Kit and the Jetson T5000 module, marking a significant milestone for real-world AI robotics development. Engineered as a supercomputer for physical AI, Jetson Thor brings generative reasoning and multimodal sensor processing to power inference and decision-making at the edge.

Architectural Highlights

Compute Performance

Jetson Thor delivers up to 2,070 FP4 teraflops (TFLOPS) of AI compute via its Blackwell-based GPU, a 7.5× leap over the previous Jetson Orin platform. This performance arrives in a 130-watt power envelope, with configurable operation down to 40 W, balancing high throughput with energy efficiency (approximately 3.5× better than Orin).

Compute Architecture

At its core, Jetson Thor integrates a 2560-core Blackwell GPU equipped with 96 fifth-generation Tensor Cores and supports Multi-Instance GPU (MIG), enabling flexible partitioning of GPU resources for parallel workloads. Complementing this is a 14-core Arm Neoverse-V3AE CPU with 1 MB of L2 cache per core and 16 MB of shared L3 cache.

Memory and I/O

The platform includes 128 GB of LPDDR5X memory on a 256-bit bus with 273 GB/s of bandwidth. Storage and connectivity include a 1 TB NVMe M.2 slot, along with HDMI, DisplayPort, multiple USB ports, Gigabit Ethernet, CAN headers, and QSFP28 for up to four 25 GbE lanes, crucial for real-time sensor fusion.
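As a sanity check on those memory figures: 273 GB/s is exactly what a 256-bit bus yields at LPDDR5X's 8533 MT/s transfer rate. The data rate is our assumption (NVIDIA quotes only the aggregate bandwidth), but the arithmetic is easy to verify:

    # Peak DRAM bandwidth = bus width in bytes * transfer rate
    bus_width_bits = 256
    transfers_per_second = 8533e6  # assumed LPDDR5X-8533; NVIDIA quotes only the total
    bandwidth_gb_s = (bus_width_bits / 8) * transfers_per_second / 1e9
    print(f"{bandwidth_gb_s:.0f} GB/s")  # -> 273 GB/s, matching the quoted spec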
Software Ecosystem for Physical AI

Jetson Thor supports a comprehensive NVIDIA software stack tailored for robotics and physical AI:

- Isaac (GR00T) for generative reasoning and humanoid control.
- Metropolis for vision AI.
- Holoscan for real-time, low-latency sensor processing and sensor-over-Ethernet (Holoscan Sensor Bridge).

These components allow one system-on-module to execute multimodal AI workflows (vision, language, actuation) without offloading to, or combining, multiple chips.

Defining ‘Physical AI’ and Its Significance

Generative Reasoning and Multimodal Processing

Physical AI combines perception, reasoning, and action planning. Jetson Thor enables robots to “simulate possible sequences, anticipate consequences, and generate both high-level plans and low-level motion policies,” delivering adaptability akin to human reasoning. By supporting real-time inference over language and visual inputs, it transforms robots from simple automata into generalist agents.

Applications

Robots can better navigate unpredictable environments, manipulate objects, and follow complex instructions without retraining. Use cases span manufacturing, logistics, healthcare, agriculture, and more.

Developer Access and Pricing

- Jetson AGX Thor Developer Kit: priced at $3,499, now generally available.
- Jetson T5000 production modules: available through NVIDIA’s partners, with unit pricing around $2,999 for orders of 1,000.

Pre-orders suggest wider availability soon, catering to both research and commercial robotics ecosystems.

Conclusion

NVIDIA Jetson Thor represents a pivotal shift in robotics compute, embedding server-grade multimodal inference and reasoning capabilities within a single, power-bounded module. Its combination of 2,070 FP4 TFLOPS, high-efficiency design, expansive I/O, and a robust software stack positions it as a foundational platform for the next generation of physical AI systems. With early adoption among prominent robotics developers and ready availability, Jetson Thor brings the vision of adaptable, real-world AI agents closer to reality.

Source: https://developer.nvidia.com/blog/introducing-nvidia-jetson-thor-the-ultimate-platform-for-physical-ai/

The post NVIDIA AI Team Introduces Jetson Thor: The Ultimate Platform for Physical AI and Next-Gen Robotics appeared first on MarkTechPost.


The Download: AI doppelgängers in the workplace, and using lidar to measure climate disasters

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Can an AI doppelgänger help me do my job?

—James O’Donnell

Digital clones, AI models that replicate a specific person, package together a few technologies that have been around for a while now: hyperrealistic video models to match your appearance, lifelike voices based on just a couple of minutes of speech recordings, and conversational chatbots increasingly capable of holding our attention.

But they’re also offering something the ChatGPTs of the world cannot: an AI that’s not smart in the general sense, but that ‘thinks’ like you do. Could well-crafted clones serve as our stand-ins? I certainly feel stretched thin at work sometimes, wishing I could be in two places at once, and I bet you do too. To find out, I tried making a clone of myself. Read the full story to find out how it got on.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

How lidar measures the cost of climate disasters

The wildfires that swept through Los Angeles County this January left an indelible mark on the Southern California landscape. The Eaton and Palisades fires raged for 24 days, killing 29 people and destroying 16,000 structures, with losses estimated at $60 billion. More than 55,000 acres were consumed, and the landscape itself was physically transformed.

Now, researchers are using lidar (light detection and ranging) technology to precisely measure these changes in the landscape’s geometry, helping them understand and track the cascading effects of climate disasters. Read the full story.

—Jon Keegan

This story is from our new print edition, which is all about the future of security. Subscribe here to catch future copies when they land.

Here’s how we picked this year’s Innovators Under 35

Next Monday we’ll publish our 2025 list of Innovators Under 35. The list highlights smart and talented people working across many areas of emerging technology. This new class features 35 accomplished founders, hardware engineers, roboticists, materials scientists, and others who are already tackling tough problems and making big moves in their careers.

MIT Technology Review first published a list of Innovators Under 35 in 1999. It’s a grand tradition for us, and we often follow the work of various featured innovators for years, even decades, after they appear on the list. So before the big announcement, we’d like to take a moment to explain how we select the people we recognize each year. Read the full story.

—Amy Nordrum

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Meta created flirty chatbots of celebrities without their permission
To make matters worse, the bots generated risqué pictures on demand. (Reuters)
+ Meta’s relationship with Scale AI appears to be under pressure. (TechCrunch)
+ An AI companion site is hosting sexually charged conversations with underage celebrity bots. (MIT Technology Review)

2 The FTC has warned Big Tech not to comply with EU laws
If they jeopardize the freedom of expression or safety of US citizens, at least. (Wired $)

3 Ukraine is using drones to drop supplies to its troops in trenches
They’re delivering everything from cigarettes to roasted chicken. (WP $)
+ Meet the radio-obsessed civilian shaping Ukraine’s drone defense. (MIT Technology Review)

4 What the collapse of this AI company says about the wider industry
Builder.ai was an early industry darling. Its downfall is a dire warning. (NYT $)

5 US shoppers are racing to land an EV bargain
Federal tax credits on the vehicles expire at the end of the month. (WSJ $)
+ The US could really use an affordable electric truck. (MIT Technology Review)

6 A major new project will use AI to research vaccines
The Oxford Vaccine Group hopes the jabs will protect against deadly pathogens. (FT $)
+ Why US federal health agencies are abandoning mRNA vaccines. (MIT Technology Review)

7 A lot of people stop taking weight-loss drugs within one year
How should doctors encourage the ones who need to stay on them? (Undark)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

8 Chatbots can be manipulated into breaking their own rules
It turns out they’re susceptible to both flattery and peer pressure. (The Verge)
+ Forcing LLMs to be evil during training can make them nicer in the long run. (MIT Technology Review)

9 Tennis is trying to reach a new generation of fans
Through…the metaverse? (The Information $)

10 The age of cheap online shopping is ending
And consumers are the ones paying the price. (The Atlantic $)
+ AI is starting to shake up the digital shopping experience, too. (FT $)
+ Your most important customer may be AI. (MIT Technology Review)

Quote of the day

“Stop being a clanker!”

—How Jay Pinkert, a marketing manager, scolds ChatGPT when it isn’t fulfilling his requests, he tells the New York Times.

One more thing

The algorithms around us

A metronome ticks. A record spins. And as a feel-good pop track plays, a giant compactor slowly crushes a Jenga tower of material creations. Paint cans burst. Chess pieces topple. Camera lenses shatter. An alarm clock shrills and then goes silent. A guitar neck snaps. But wait! The jaunty tune starts up again, and the jaws open to reveal … an iPad.

Watching Apple’s now-infamous “Crush!” ad, it’s hard not to feel uneasy about the ways in which digitization is remaking human life. Sure, we’re happy for computers to take over tasks we don’t want to do or aren’t particularly good at, like shopping or navigating. But what does it mean when the things we hold dear and thought were uniquely ours (our friendships, our art, even our language and creativity) can be reduced to software? Read the full story.

—Ariel Bleicher

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Minnesota’s Llama-Alpaca Costume Contest


Meet Elysia: A New Open-Source Python Framework Redefining Agentic RAG Systems with Decision Trees and Smarter Data Handling

If you’ve ever tried to build an agentic RAG system that actually works well, you know the pain. You feed it some documents, cross your fingers, and hope it doesn’t hallucinate when someone asks it a simple question. Most of the time, you get back irrelevant chunks of text that barely answer what was asked. Elysia is trying to fix this mess, and honestly, its approach is quite creative. Built by the folks at Weaviate, this open-source Python framework doesn’t just throw more AI at the problem: it completely rethinks how AI agents should work with your data.

Note: Python 3.12 is required.

What’s Actually Wrong with Most RAG Systems

Here’s the thing that drives everyone crazy: traditional RAG systems are basically blind. They take your question, convert it to vectors, find some “similar” text, and hope for the best. It’s like asking someone to find you a good restaurant while they’re wearing a blindfold: they might get lucky, but probably not. Most systems also dump every possible tool on the AI at once, which is like giving a toddler access to your entire toolbox and expecting them to build a bookshelf.

Elysia’s Three Pillars

1) Decision Trees

Instead of giving AI agents every tool at once, Elysia guides them through a structured tree of decision nodes. Think of it like a flowchart that actually makes sense. Each step has context about what happened before and what options come next. The really cool part? The system shows you exactly which path the agent took and why, so when something goes wrong, you can actually debug it instead of just shrugging and trying again. When the AI realizes it can’t do something (like searching for car prices in a makeup database), it doesn’t just keep trying forever. It sets an “impossible flag” and moves on, which sounds obvious but apparently needed to be invented.

2) Smart Data Source Display

Remember when every AI just spat out paragraphs of text? Elysia actually looks at your data and figures out how to show it properly. Got e-commerce products? You get product cards. GitHub issues? You get ticket layouts. Spreadsheet data? You get actual tables. The system examines your data structure first (the fields, the types, the relationships), then picks whichever of its seven display formats makes sense.

3) Data Expertise

This might be the biggest difference. Before Elysia searches anything, it analyzes your database to understand what’s actually in there. It can summarize collections, generate metadata, and choose display types. It looks at:

- What kinds of fields you have
- What the data ranges look like
- How different pieces relate to each other
- What would make sense to search for

How Does It Work?

Learning from Feedback

Elysia remembers when users say “yes, this was helpful” and uses those examples to improve future responses. But it does this smartly: your feedback doesn’t mess up other people’s results, and it helps the system get better at answering your specific types of questions. This means you can use smaller, cheaper models that still give good results, because they’re learning from actual success cases.

Chunking That Makes Sense

Most RAG systems chunk all your documents upfront, which uses tons of storage and often creates weird breaks. Elysia chunks documents only when needed. It searches full documents first, then if a document looks relevant but is too long, it breaks it down on the fly. This saves storage space and actually works better, because the chunking decisions are informed by what the user is actually looking for. The idea is simple enough to sketch, as shown below.
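Here is a minimal sketch of that query-time chunking idea. This is not Elysia’s actual API: `score` stands in for whatever relevance function you use, and the window sizes are arbitrary.

    # Retrieve whole documents first; split a long hit into overlapping
    # windows only once the query shows it is worth drilling into.
    def retrieve(query, docs, score, max_len=2000, window=500, overlap=100):
        hits = sorted(docs, key=lambda d: score(query, d), reverse=True)[:5]
        results = []
        for doc in hits:
            if len(doc) <= max_len:
                results.append(doc)  # short document: return it whole
                continue
            step = window - overlap
            chunks = [doc[i:i + window] for i in range(0, len(doc), step)]
            chunks.sort(key=lambda c: score(query, c), reverse=True)
            results.extend(chunks[:3])  # keep only the best windows
        return results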
Model Routing

Different tasks need different models. Simple questions don’t need GPT-4, and complex analysis doesn’t work well with tiny models. Elysia automatically routes tasks to the right model based on complexity, which saves money and improves speed.

Getting Started

The setup is quite simple:

    pip install elysia-ai
    elysia start

That’s it. You get both a web interface and the Python framework. For developers who want to customize things:

    from elysia import tool, Tree

    tree = Tree()

    @tool(tree=tree)
    async def add(x: int, y: int) -> int:
        return x + y

    tree("What is the sum of 9009 and 6006?")

If you have Weaviate data, it’s even simpler:

    import elysia

    tree = elysia.Tree()
    response, objects = tree(
        "What are the 10 most expensive items in the Ecommerce collection?",
        collection_names=["Ecommerce"],
    )

Real-World Example: Glowe’s Chatbot

The Glowe skincare chatbot platform uses Elysia to handle complex product recommendations. Users can ask things like “What products work well with retinol but won’t irritate sensitive skin?” and get intelligent responses that consider ingredient interactions, user preferences, and product availability. This isn’t just keyword matching; it’s understanding context and the relationships between ingredients, user history, and product characteristics in ways that would be really hard to code manually.

Summary

Elysia represents Weaviate’s attempt to move beyond traditional ask-retrieve-generate RAG patterns by combining decision-tree agents, adaptive data presentation, and learning from user feedback. Rather than just generating text responses, it analyzes data structure beforehand and selects appropriate display formats while maintaining transparency in its decision-making process. As Weaviate’s planned replacement for their Verba RAG system, it offers a foundation for building more sophisticated AI applications that understand both what users are asking and how to present answers effectively, though whether this translates to meaningfully better real-world performance remains to be seen, since it is still in beta.

Source: https://weaviate.io/blog/elysia-agentic-rag

The post Meet Elysia: A New Open-Source Python Framework Redefining Agentic RAG Systems with Decision Trees and Smarter Data Handling appeared first on MarkTechPost.


StepFun AI Releases Step-Audio 2 Mini: An Open-Source 8B Speech-to-Speech AI Model that Surpasses GPT-4o-Audio

The StepFun AI team has released Step-Audio 2 Mini, an 8B-parameter speech-to-speech large audio language model (LALM) that delivers expressive, grounded, real-time audio interaction. Released under the Apache 2.0 license, this open-source model achieves state-of-the-art performance across speech recognition, audio understanding, and speech conversation benchmarks, surpassing commercial systems such as GPT-4o-Audio.

Model: https://huggingface.co/stepfun-ai/Step-Audio-2-mini

Key Features

1. Unified Audio–Text Tokenization

Unlike cascaded ASR+LLM+TTS pipelines, Step-Audio 2 integrates Multimodal Discrete Token Modeling, where text and audio tokens share a single modeling stream. This enables:

- Seamless reasoning across text and audio.
- On-the-fly voice style switching during inference.
- Consistency in semantic, prosodic, and emotional outputs.

2. Expressive and Emotion-Aware Generation

The model doesn’t just transcribe speech; it interprets paralinguistic features like pitch, rhythm, emotion, timbre, and style. This allows conversations with realistic emotional tones such as whispering, sadness, or excitement. On the StepEval-Audio-Paralinguistic benchmark, Step-Audio 2 achieves 83.1% accuracy, far beyond GPT-4o Audio (43.5%) and Qwen-Omni (44.2%).

3. Retrieval-Augmented Speech Generation

Step-Audio 2 incorporates multimodal retrieval-augmented generation (RAG):

- Web search integration for factual grounding.
- Audio search, a novel capability that retrieves real voices from a large library and fuses them into responses, enabling voice timbre/style imitation at inference time.

4. Tool Calling and Multimodal Reasoning

The system extends beyond speech synthesis by supporting tool invocation. Benchmarks show that Step-Audio 2 matches textual LLMs in tool selection and parameter accuracy, while uniquely excelling at audio search tool calls, a capability unavailable in text-only LLMs.

Training and Data Scale

- Text + audio corpus: 1.356T tokens
- Audio hours: 8M+ real and synthetic hours
- Speaker diversity: ~50K voices across languages and dialects
- Pretraining pipeline: a multi-stage curriculum covering ASR, TTS, speech-to-speech translation, and emotion-labeled conversational synthesis

This large-scale training allows Step-Audio 2 Mini to retain strong text reasoning (via its Qwen2-Audio and CosyVoice foundation) while mastering fine-grained audio modeling.

Performance Benchmarks

Paper: https://arxiv.org/abs/2507.16632

Automatic Speech Recognition (ASR)

- English: average WER 3.14% (beats GPT-4o Transcribe at an average 4.5%).
- Chinese: average CER 3.08% (significantly lower than GPT-4o and Qwen-Omni).
- Robust across dialects and accents.

Audio Understanding (MMAU Benchmark)

- Step-Audio 2: 78.0 average, outperforming Omni-R1 (77.0) and Audio Flamingo 3 (73.1).
- Strongest in sound and speech reasoning tasks.

Speech Translation

- CoVoST 2 (S2TT): BLEU 39.26 (highest among open and closed models).
- CVSS (S2ST): BLEU 30.87, ahead of GPT-4o (23.68).

Conversational Benchmarks (URO-Bench)

- Chinese conversations: best overall at 83.3 (basic) and 68.2 (pro).
- English conversations: competitive with GPT-4o (83.9 vs. 84.5), far ahead of other open models.
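For context on the ASR numbers above: word error rate is the word-level edit distance between a hypothesis transcript and its reference, normalized by reference length. A minimal sketch of how it is computed (CER is identical but over characters):

    # WER = (substitutions + deletions + insertions) / reference word count,
    # computed here via word-level Levenshtein distance.
    def wer(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                               dp[i][j - 1] + 1,         # insertion
                               dp[i - 1][j - 1] + cost)  # substitution
        return dp[-1][-1] / len(ref)

    print(wer("the cat sat on the mat", "the cat sat on a mat"))  # ~0.167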
Conclusion

Step-Audio 2 Mini makes advanced, multimodal speech intelligence accessible to the developer and research community. By combining Qwen2-Audio’s reasoning capacity with CosyVoice’s tokenization pipeline, and augmenting it with retrieval-based grounding, StepFun has delivered one of the most capable open audio LLMs.

The post StepFun AI Releases Step-Audio 2 Mini: An Open-Source 8B Speech-to-Speech AI Model that Surpasses GPT-4o-Audio appeared first on MarkTechPost.


Middo: Model-Informed Dynamic Data Optimization for Enhanced LLM Fine-Tuning via Closed-Loop Learning

arXiv:2508.21589v1 Announce Type: new Abstract: Supervised Fine-Tuning (SFT) of Large Language Models (LLMs) fundamentally relies on high-quality training data. While data selection and data synthesis are two common strategies to improve data quality, existing approaches often face limitations in static dataset curation that fail to adapt to evolving model capabilities. In this paper, we introduce Middo, a self-evolving Model-informed dynamic data optimization framework that uses model-aware data selection and context-preserving data refinement. Unlike conventional one-off filtering/synthesis methods, our framework establishes a closed-loop optimization system: (1) A self-referential diagnostic module proactively identifies suboptimal samples through tri-axial model signals: loss patterns (complexity), embedding cluster dynamics (diversity), and self-alignment scores (quality); (2) An adaptive optimization engine then transforms suboptimal samples into pedagogically valuable training points while preserving semantic integrity; (3) This optimization process continuously evolves with model capability through dynamic learning principles. Experiments on multiple benchmarks demonstrate that our method consistently enhances the quality of seed data and boosts LLM performance, improving accuracy by 7.15% on average while maintaining the original dataset scale. This work establishes a new paradigm for sustainable LLM training through dynamic human-AI co-evolution of data and models. Our datasets, models, and code are coming soon.
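To make the tri-axial diagnostic concrete, here is a minimal sketch of what flagging suboptimal samples could look like. All signals, thresholds, and interfaces are illustrative assumptions, not the paper's actual criteria:

    import numpy as np

    def flag_suboptimal(losses, embeddings, align_scores,
                        loss_pct=90, sim_thresh=0.95, align_thresh=0.5):
        # Flag samples that are too hard (high loss), redundant (crowded
        # embedding neighborhood), or low quality (poor self-alignment).
        # Flagged samples would then go to an LLM-based rewriting step
        # rather than being dropped.
        losses = np.asarray(losses)
        align = np.asarray(align_scores)
        emb = np.asarray(embeddings, dtype=float)
        hard = losses > np.percentile(losses, loss_pct)      # complexity axis
        norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        sims = norm @ norm.T                                 # pairwise cosine similarity
        redundant = (sims > sim_thresh).sum(axis=1) - 1 >= 3 # diversity axis
        low_quality = align < align_thresh                   # quality axis
        return hard | redundant | low_quality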


Going over Fine Web with a Fine-Tooth Comb: Technical Report of Indexing Fine Web for Problematic Content Search and Retrieval

arXiv:2508.21788v1 Announce Type: new Abstract: Large language models (LLMs) rely heavily on web-scale datasets like Common Crawl, which provides over 80% of training data for some modern models. However, the indiscriminate nature of web crawling raises challenges in data quality, safety, and ethics. Despite the critical importance of training data quality, prior research on harmful content has been limited to small samples due to computational constraints. This project presents a framework for indexing and analyzing LLM training datasets using an ElasticSearch-based pipeline. We apply it to SwissAI’s FineWeb-2 corpus (1.5TB, four languages), achieving fast query performance: most searches complete in milliseconds, and all in under 2 seconds. Our work demonstrates real-time dataset analysis, offering practical tools for safer, more accountable AI systems.
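The kind of query such an index enables might look like the following sketch, using the official Elasticsearch Python client. The index name and field layout are assumptions for illustration, not the authors' actual schema:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Hypothetical schema: one document per FineWeb-2 record, with a
    # full-text "text" field and a "lang" keyword field.
    resp = es.search(
        index="fineweb2",
        query={
            "bool": {
                "must": [{"match_phrase": {"text": "example harmful phrase"}}],
                "filter": [{"term": {"lang": "de"}}],
            }
        },
        size=10,
    )
    for hit in resp["hits"]["hits"]:
        print(hit["_score"], hit["_source"]["text"][:80])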


Interpretable Mnemonic Generation for Kanji Learning via Expectation-Maximization

arXiv:2507.05137v2 Announce Type: replace Abstract: Learning Japanese vocabulary is a challenge for learners from Roman-alphabet backgrounds due to script differences. Japanese combines syllabaries like hiragana with kanji, which are logographic characters of Chinese origin. Kanji pose an additional challenge because of their visual complexity and sheer number. Keyword mnemonics are a common strategy to aid memorization, often using the compositional structure of kanji to form vivid associations. Despite recent efforts to use large language models (LLMs) to assist learners, existing methods for LLM-based keyword mnemonic generation function as a black box, offering limited interpretability. We propose a generative framework that explicitly models the mnemonic construction process as driven by a set of common rules, and we learn these rules using a novel Expectation-Maximization-type algorithm. Trained on learner-authored mnemonics from an online platform, our method learns latent structures and compositional rules, enabling interpretable and systematic mnemonic generation. Experiments show that our method performs well in the cold-start setting for new learners while providing insight into the mechanisms behind effective mnemonic creation.
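As a reminder of the E/M alternation the abstract leans on, here is a toy EM loop for a mixture of K latent "rules" over binary mnemonic features. This is a generic Bernoulli mixture model, not the paper's model:

    import numpy as np

    def em_mixture(X, K, iters=50, seed=0):
        # Toy EM: X is an (n_samples, n_features) binary matrix; each
        # latent rule k has prior pi[k] and feature probabilities theta[k].
        rng = np.random.default_rng(seed)
        n, d = X.shape
        pi = np.full(K, 1.0 / K)
        theta = rng.uniform(0.25, 0.75, size=(K, d))
        for _ in range(iters):
            # E-step: responsibility of each rule for each mnemonic
            log_p = (X @ np.log(theta).T
                     + (1 - X) @ np.log(1 - theta).T
                     + np.log(pi))
            log_p -= log_p.max(axis=1, keepdims=True)
            resp = np.exp(log_p)
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: re-estimate priors and feature probabilities
            nk = resp.sum(axis=0)
            pi = nk / n
            theta = np.clip((resp.T @ X) / nk[:, None], 1e-3, 1 - 1e-3)
        return pi, theta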


Enhancing Robustness of Autoregressive Language Models against Orthographic Attacks via Pixel-based Approach

arXiv:2508.21206v1 Announce Type: new Abstract: Autoregressive language models are vulnerable to orthographic attacks, where input text is perturbed with characters from multilingual alphabets, leading to substantial performance degradation. This vulnerability primarily stems from the out-of-vocabulary issue inherent in subword tokenizers and their embeddings. To address this limitation, we propose a pixel-based generative language model that replaces text-based embeddings with pixel-based representations by rendering words as individual images. This design provides stronger robustness to noisy inputs while extending compatibility to multilingual text across diverse writing systems. We evaluate the proposed method on the multilingual LAMBADA dataset, the WMT24 dataset, and the SST-2 benchmark, demonstrating both its resilience to orthographic noise and its effectiveness in multilingual settings.
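The core input transformation is easy to picture: each word becomes a small grayscale image, so a visually similar perturbation barely moves the input, whereas a subword tokenizer would map it to unrelated tokens. A minimal sketch with Pillow; the canvas size and rendering details are assumptions, not the paper's exact setup:

    import numpy as np
    from PIL import Image, ImageDraw

    def render_word(word: str, size=(64, 24)) -> np.ndarray:
        # Render a word onto a fixed-size grayscale canvas and return it
        # as a float array in [0, 1], the kind of input a pixel-based
        # embedding layer could consume.
        img = Image.new("L", size, color=255)
        ImageDraw.Draw(img).text((2, 4), word, fill=0)
        return np.asarray(img, dtype=np.float32) / 255.0

    # "1anguage" (digit one for "l") is pixel-wise close to "language",
    # but a subword tokenizer would split it into unrelated pieces.
    clean, attacked = render_word("language"), render_word("1anguage")
    print(np.abs(clean - attacked).mean())  # small mean pixel difference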


Transforming Wearable Data into Personal Health Insights using Large Language Model Agents

arXiv:2406.06464v3 Announce Type: replace-cross Abstract: Deriving personalized insights from popular wearable trackers requires complex numerical reasoning that challenges standard LLMs, necessitating tool-based approaches like code generation. Large language model (LLM) agents present a promising yet largely untapped solution for this analysis at scale. We introduce the Personal Health Insights Agent (PHIA), a system leveraging multistep reasoning with code generation and information retrieval to analyze and interpret behavioral health data. To test its capabilities, we create and share two benchmark datasets with over 4000 health insights questions. A 650-hour human expert evaluation shows that PHIA significantly outperforms a strong code generation baseline, achieving 84% accuracy on objective, numerical questions and, for open-ended ones, earning 83% favorable ratings while being twice as likely to achieve the highest quality rating. This work can advance behavioral health by empowering individuals to understand their data, enabling a new era of accessible, personalized, and data-driven wellness for the wider population.
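The code-generation step such an agent performs can be as mundane as an aggregation over a sensor log. A toy example of the kind of program a PHIA-style agent might write on the fly; the data and question are invented for illustration:

    import pandas as pd

    # Toy wearable log: four weeks of daily step counts.
    df = pd.DataFrame({
        "date": pd.date_range("2025-01-01", periods=28, freq="D"),
        "steps": [7000, 8200, 5100, 9300, 6400, 12000, 4100] * 4,
    })

    # "On which weekday do I walk the least?"
    df["weekday"] = df["date"].dt.day_name()
    print(df.groupby("weekday")["steps"].mean().idxmin())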
