YouZum


AI, Committee, News, Uncategorized

Reinforcement Learning Teachers of Test Time Scaling

arXiv:2506.08388v2 Announce Type: replace-cross Abstract: Training reasoning language models (LMs) with reinforcement learning (RL) for one-hot correctness inherently relies on the LM being able to explore and solve its task with some chance at initialization. Furthermore, a key use case of reasoning LMs is to act as teachers for distilling new students and cold-starting future RL iterations rather than being deployed themselves. From these considerations, we introduce a new framework that avoids RL’s exploration challenge by training a new class of Reinforcement-Learned Teachers (RLTs) focused on yielding the most effective downstream distillation. RLTs are prompted with both the question and solution to each problem, and tasked to simply “connect-the-dots” with detailed explanations tailored for their students. We train RLTs with dense rewards obtained by feeding each explanation to the student and testing its understanding of the problem’s solution. In practice, the raw outputs of a 7B RLT provide higher final performance on competition and graduate-level tasks than existing distillation and cold-starting pipelines that collect and postprocess the reasoning traces of orders of magnitude larger LMs. Furthermore, RLTs maintain their effectiveness when training larger students and when applied zero-shot to out-of-distribution tasks, unlocking new levels of efficiency and re-usability for the RL reasoning framework.
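The teacher-student reward loop the abstract describes can be illustrated with a toy sketch. Everything below is a stand-in: `teacher_explain` and `student_score` are hypothetical names, the models are stubbed with string heuristics, and the token-overlap reward is only a proxy for feeding the explanation to a real student LM and testing its understanding.

```python
# Toy sketch of the Reinforcement-Learned Teacher (RLT) reward loop.
# In the actual framework, both functions would be LLM forward passes.

def teacher_explain(question: str, solution: str) -> str:
    """Teacher sees BOTH question and solution and 'connects the dots'."""
    return f"To solve '{question}', note that the key step gives {solution}."

def student_score(explanation: str, solution: str) -> float:
    """Dense-reward proxy: can the student recover the solution from the
    explanation? Here: fraction of solution tokens found in the explanation."""
    sol_tokens = solution.split()
    hits = sum(tok in explanation for tok in sol_tokens)
    return hits / max(len(sol_tokens), 1)

def rlt_reward(question: str, solution: str) -> float:
    """One RL step's reward signal for the teacher's explanation."""
    explanation = teacher_explain(question, solution)
    return student_score(explanation, solution)

reward = rlt_reward("2 + 2 = ?", "4")
```

Because the teacher is conditioned on the solution, there is no exploration problem: the reward is dense from the first step, which is the paper's central design point.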


Handling Numeric Expressions in Automatic Speech Recognition

arXiv:2408.00004v2 Announce Type: replace-cross Abstract: This paper addresses the problem of correctly formatting numeric expressions in automatic speech recognition (ASR) transcripts. This is challenging since the expected transcript format depends on the context, e.g., 1945 (year) vs. 19:45 (timestamp). We compare cascaded and end-to-end approaches to recognize and format numeric expressions such as years, timestamps, currency amounts, and quantities. For the end-to-end approach, we employed a data generation strategy using a large language model (LLM) together with a text-to-speech (TTS) model to generate adaptation data. The results on our test dataset show that while approaches based on LLMs perform well in recognizing formatted numeric expressions, adapted end-to-end models offer competitive performance with the advantage of lower latency and inference cost.
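The cascaded approach mentioned above (recognize in spoken form first, then format from context) might look roughly like this toy post-processor. The word table and context cues below are illustrative assumptions, not the paper's actual system.

```python
# Minimal sketch of the *cascaded* approach: a spoken-form ASR transcript is
# post-processed into written form, and the target format is chosen from
# local context (e.g. "1945" as a year vs. "19:45" as a timestamp).

WORDS = {"nineteen": 19, "forty": 40, "five": 5}  # illustrative table

def spoken_to_number(phrase: str) -> tuple[int, int]:
    """'nineteen forty five' -> (19, 45) as two two-digit groups."""
    a, b, c = phrase.split()
    return WORDS[a], WORDS[b] + WORDS[c]

def format_numeric(phrase: str, context: str) -> str:
    hi, lo = spoken_to_number(phrase)
    # Hypothetical context cues signalling a timestamp reading.
    if any(cue in context for cue in ("at", "o'clock")):
        return f"{hi}:{lo:02d}"   # timestamp, e.g. 19:45
    return f"{hi}{lo:02d}"        # year, e.g. 1945

year = format_numeric("nineteen forty five", "the war ended in")
time = format_numeric("nineteen forty five", "the train leaves at")
```

A real cascaded system would replace the rule table with a learned inverse-text-normalization model, but the ambiguity it must resolve is exactly the one shown here.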


Large Language Models for Disease Diagnosis: A Scoping Review

arXiv:2409.00097v3 Announce Type: replace Abstract: Automatic disease diagnosis has become increasingly valuable in clinical practice. The advent of large language models (LLMs) has catalyzed a paradigm shift in artificial intelligence, with growing evidence supporting the efficacy of LLMs in diagnostic tasks. Despite the increasing attention in this field, a holistic view is still lacking. Many critical aspects remain unclear, such as the diseases and clinical data to which LLMs have been applied, the LLM techniques employed, and the evaluation methods used. In this article, we perform a comprehensive review of LLM-based methods for disease diagnosis. Our review examines the existing literature across various dimensions, including disease types and associated clinical specialties, clinical data, LLM techniques, and evaluation methods. Additionally, we offer recommendations for applying and evaluating LLMs for diagnostic tasks. Furthermore, we assess the limitations of current research and discuss future directions. To our knowledge, this is the first comprehensive review for LLM-based disease diagnosis.


Can structural correspondences ground real world representational content in Large Language Models?

arXiv:2506.16370v1 Announce Type: new Abstract: Large Language Models (LLMs) such as GPT-4 produce compelling responses to a wide range of prompts. But their representational capacities are uncertain. Many LLMs have no direct contact with extra-linguistic reality: their inputs, outputs and training data consist solely of text, raising the questions (1) can LLMs represent anything and (2) if so, what? In this paper, I explore what it would take to answer these questions according to a structural-correspondence based account of representation, and make an initial survey of this evidence. I argue that the mere existence of structural correspondences between LLMs and worldly entities is insufficient to ground representation of those entities. However, if these structural correspondences play an appropriate role – they are exploited in a way that explains successful task performance – then they could ground real world contents. This requires overcoming a challenge: the text-boundedness of LLMs appears, on the face of it, to prevent them engaging in the right sorts of tasks.


Techniques for supercharging academic writing with generative AI

arXiv:2310.17143v4 Announce Type: replace-cross Abstract: Academic writing is an indispensable yet laborious part of the research enterprise. This Perspective maps out principles and methods for using generative artificial intelligence (AI), specifically large language models (LLMs), to elevate the quality and efficiency of academic writing. We introduce a human-AI collaborative framework that delineates the rationale (why), process (how), and nature (what) of AI engagement in writing. The framework pinpoints both short-term and long-term reasons for engagement and their underlying mechanisms (e.g., cognitive offloading and imaginative stimulation). It reveals the role of AI throughout the writing process, conceptualized through a two-stage model for human-AI collaborative writing, and the nature of AI assistance in writing, represented through a model of writing-assistance types and levels. Building on this framework, we describe effective prompting techniques for incorporating AI into the writing routine (outlining, drafting, and editing) as well as strategies for maintaining rigorous scholarship, adhering to varied journal policies, and avoiding overreliance on AI. Ultimately, the prudent integration of AI into academic writing can ease the communication burden, empower authors, accelerate discovery, and promote diversity in science.


InstructTTSEval: Benchmarking Complex Natural-Language Instruction Following in Text-to-Speech Systems

arXiv:2506.16381v1 Announce Type: new Abstract: In modern speech synthesis, paralinguistic information, such as a speaker's vocal timbre, emotional state, and dynamic prosody, plays a critical role in conveying nuance beyond mere semantics. Traditional Text-to-Speech (TTS) systems rely on fixed style labels or on inserting a speech prompt to control these cues, which severely limits flexibility. Recent attempts seek to employ natural-language instructions to modulate paralinguistic features, substantially improving the generalization of instruction-driven TTS models. Although many TTS systems now support customized synthesis via textual description, their actual ability to interpret and execute complex instructions remains largely unexplored. In addition, there is still a shortage of high-quality benchmarks and automated evaluation metrics specifically designed for instruction-based TTS, which hinders accurate assessment and iterative optimization of these models. To address these limitations, we introduce InstructTTSEval, a benchmark for measuring the capability of complex natural-language style control. We introduce three tasks, namely Acoustic-Parameter Specification, Descriptive-Style Directive, and Role-Play, including English and Chinese subsets, each with 1k test cases (6k in total) paired with reference audio. We leverage Gemini as an automatic judge to assess instruction-following ability. Our evaluation of accessible instruction-following TTS systems highlights substantial room for further improvement. We anticipate that InstructTTSEval will drive progress toward more powerful, flexible, and accurate instruction-following TTS.
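The automatic-judge protocol can be sketched as follows. Note this is purely schematic: InstructTTSEval uses Gemini as the judge over real synthesized audio, whereas the `judge` stub below scores text descriptions with a made-up rubric; the function name and 0-1 scale are assumptions, not the paper's exact setup.

```python
# Schematic LLM-as-judge evaluation loop: each test case pairs a style
# instruction with a synthesis result, and a judge scores how well the
# result follows the instruction.

def judge(instruction: str, audio_description: str) -> float:
    """Stub judge: a real system would send instruction + audio to Gemini."""
    wants_whisper = "whisper" in instruction.lower()
    sounds_soft = "soft" in audio_description.lower()
    return 1.0 if wants_whisper and sounds_soft else 0.0

test_cases = [
    ("Whisper the line as if sharing a secret", "soft breathy voice"),
    ("Whisper the line as if sharing a secret", "loud harsh voice"),
]

scores = [judge(instr, audio) for instr, audio in test_cases]
pass_rate = sum(scores) / len(scores)
```

The benchmark's contribution is exactly this kind of automated scoring at scale: 6k instruction/audio pairs judged without human raters in the loop.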


GeoGuess: Multimodal Reasoning based on Hierarchy of Visual Information in Street View

arXiv:2506.16633v1 Announce Type: new Abstract: Multimodal reasoning is a process of understanding, integrating and inferring information across different data modalities. It has recently attracted surging academic attention as a benchmark for Artificial Intelligence (AI). Although there are various tasks for evaluating multimodal reasoning ability, they still have limitations. Lack of reasoning on hierarchical visual clues at different levels of granularity, e.g., local details and global context, is of little discussion, despite its frequent involvement in real scenarios. To bridge the gap, we introduce a novel and challenging task for multimodal reasoning, namely GeoGuess. Given a street view image, the task is to identify its location and provide a detailed explanation. A system that succeeds in GeoGuess should be able to detect tiny visual clues, perceive the broader landscape, and associate with vast geographic knowledge. Therefore, GeoGuess requires the ability to reason between hierarchical visual information and geographic knowledge. In this work, we establish a benchmark for GeoGuess by introducing a specially curated dataset, GeoExplain, which consists of panorama-geocoordinates-explanation tuples. Additionally, we present a multimodal and multilevel reasoning method, namely SightSense, which can make predictions and generate comprehensive explanations based on the hierarchy of visual information and external knowledge. Our analysis and experiments demonstrate the outstanding performance of SightSense on GeoGuess.
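One plausible way to score a predicted location against GeoExplain's ground-truth geocoordinates is great-circle (haversine) distance. This is a common choice for geolocation tasks and is offered here as an assumption, not necessarily the paper's exact evaluation metric.

```python
import math

# Haversine distance between two (latitude, longitude) points, in km.
def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Exact prediction: distance is zero.
d_same = haversine_km(48.8566, 2.3522, 48.8566, 2.3522)
# Paris predicted when the panorama is from Berlin: roughly 880 km off.
d_pb = haversine_km(48.8566, 2.3522, 52.52, 13.405)
```

A benchmark could then report, say, the fraction of predictions within some distance threshold of the true geocoordinate, alongside a separate judgment of explanation quality.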


See stunning first images from the Vera C. Rubin Observatory

The first spectacular images taken by the Vera C. Rubin Observatory have been released for the world to peruse: a panoply of iridescent galaxies and shimmering nebulas. "This is the dawn of the Rubin Observatory," says Meg Schwamb, a planetary scientist and astronomer at Queen's University Belfast in Northern Ireland.

Much has been written about the observatory's grand promise: to revolutionize our understanding of the cosmos by revealing a once-hidden population of far-flung galaxies, erupting stars, interstellar objects, and elusive planets. And thanks to its unparalleled technical prowess, few doubted its ability to make good on that. But over the past decade, during its lengthy construction period, "everything's been in the abstract," says Schwamb. Today, that promise has become a staggeringly beautiful reality.

Rubin's view of the universe is unlike any that preceded it: an expansive vision of the night sky replete with detail, including hazy envelopes of matter coursing around galaxies and star-paved bridges arching between them. "These images are truly stunning," says Pedro Bernardinelli, an astronomer at the University of Washington. During its brief perusal of the night sky, Rubin even managed to spy more than 2,000 never-before-seen asteroids, demonstrating that it should be able to spotlight even the sneakiest denizens, and darkest corners, of our own solar system.

Today's reveal is a mere amuse-bouche compared with what's to come: Rubin, funded by the US National Science Foundation and the Department of Energy, is set for at least 10 years of planned observations. But this moment, and these glorious inaugural images, are worth celebrating for what they represent: the culmination of over a decade of painstaking work. "This is a direct demonstration that Rubin is no longer in the future," says Bernardinelli. "It's the present."

The observatory is named after the late Vera Rubin, an astronomer who uncovered strong evidence for dark matter, a mysterious and as-yet-undetected something that's binding galaxies together more strongly than the gravity of ordinary, visible matter alone can explain. Trying to make sense of dark matter, and its equally mysterious, universe-stretching cousin, dubbed dark energy, is a monumental task, one that cannot be addressed by just one line of study or scrutiny of one type of cosmic object. That's why Rubin was designed to document anything and everything that shifts or sparkles in the night sky.

Sitting atop Chile's Cerro Pachón mountain range, it boasts a 7,000-pound, 3,200-megapixel digital camera that can take detailed snapshots of a large patch of the night sky; a house-size cradle of mirrors that can drink up extremely distant and faint starlight; and a maze of joints and pistons that allow it to swivel about with incredible speed and precision. A multinational computer network permits its sky surveys to be largely automated, its images speedily processed, any new objects easily detected, and the relevant groups of astronomers quickly alerted.

All that technical wizardry allows Rubin to take a picture of the entire visible night sky once every few days, filling in the shadowed gaps and unseen activity between galaxies. "The sky [isn't] static. There are asteroids zipping by, and supernovas exploding," says Yusra AlSayyad, Rubin's overseer of image processing. By conducting a continuous survey over the next decade, the facility will create a three-dimensional movie of the universe's ever-changing chaos that could help address all sorts of astronomical questions. What were the very first galaxies like? How did the Milky Way form? Are there planets hidden in our own solar system's backyard?

Rubin's first glimpse of the firmament is predictably bursting with galaxies and stars. But the resolution, breadth, and depth of the images have taken astronomers aback. "I'm very impressed with these images. They're really incredible," says Christopher Conselice, an extragalactic astronomer at the University of Manchester in England. One shot, created from 678 individual exposures, showcases the Trifid and Lagoon nebulas: two oceans of luminescent gas and dust where stars are born. Others depict a tiny portion of Rubin's view of the Virgo Cluster, a zoo of galaxies. Hues of blue are coming from relatively nearby whirlpools of stars, while red tints emanate from remarkably distant and primeval galaxies.

[Image: A small section of the Vera C. Rubin Observatory's view of the Virgo Cluster. Three merging galaxies can be seen on the upper right, alongside two striking spiral galaxies (lower right), distant galaxies, and many Milky Way stars. Credit: NSF-DOE Vera C. Rubin Observatory]

The rich detail in these images is already proving to be illuminating. "As galaxies merge and interact, the galaxies are pulling stars away from each other," says Conselice. This behavior can be seen in plumes of diffuse light erupting from several galaxies, creating halos around them or illuminated bridges between them: records of these ancient galaxies' pasts. Images like these are also likely to contain several supernovas, the explosive final moments of sizable stars. Not only do supernovas seed the cosmos with all the heavy elements that planets, and life, rely on, but they can also hint at how the universe has expanded over time.

Anais Möller, an astrophysicist at the Swinburne University of Technology in Melbourne, Australia, is a supernova hunter. "I search for exploding stars in very far away galaxies," she says. Older sky surveys have found plenty, but they can lack context: you can see the explosion, but not what galaxy it's from. Thanks to Rubin's resolution, amply demonstrated by the Virgo Cluster set of images, astronomers can now "find where those exploding stars live," says Möller.

[Image: Another small section of the observatory's view of the Virgo Cluster, including many distant galaxies along with stars from our own Milky Way galaxy. Credit: NSF-DOE Vera C. Rubin Observatory]

While taking these images of the distant universe, Rubin also discovered 2,104 asteroids flitting about in our own solar system, including seven whose orbits hew close to Earth's own. This number may sound impressive, but it's just par for the course for Rubin. In just a few months, it will find over a million new asteroids, doubling the current known tally. And over the course of its decadal survey, Rubin is


Meta AI Researchers Introduced a Scalable Byte-Level Autoregressive U-Net Model That Outperforms Token-Based Transformers Across Language Modeling Benchmarks

Language modeling plays a foundational role in natural language processing, enabling machines to predict and generate text that resembles human language. These models have evolved significantly, beginning with statistical methods and progressing through neural architectures to today's large-scale transformer-based systems. At the center of many applications, such as chatbots, translation tools, and text completion engines, language models interpret and generate sequences of words or bytes. Their effectiveness largely depends on the underlying architecture and the data representations used. As the demand for more efficient and scalable models grows, researchers continue to explore new structures and training methods to improve performance, handle longer contexts, and reduce computational load. Among these efforts, combining ideas from convolutional architectures with autoregressive prediction has emerged as an intriguing approach.

Challenges with Tokenization and Transformer-Based Language Models

A central issue in language modeling is the reliance on token-based transformer models, which are computationally expensive and generally inefficient for byte-level processing or for working across languages. Techniques such as Byte Pair Encoding control sequence lengths but create inconsistencies between languages and domains. Transformers, although precise, scale poorly because of their quadratic complexity in sequence length. Competing approaches, such as sparse attention, attempt to address this but typically do so at the expense of simplicity or performance. Byte-level modeling with flat transformers has shown only partial success, underscoring the need for new architectures that can process raw byte inputs without tokenization while achieving excellent performance.
Introducing AU-Net: A Token-Free Byte-Level Language Model

Researchers from FAIR at Meta, TAU, INRIA, LISN (CNRS & Université Paris-Saclay), and INSA Rouen Normandy (LITIS, Rouen, France) introduced the Autoregressive U-Net (AU-Net). This model integrates convolutional U-Net designs with an autoregressive decoding process. In contrast to transformer systems, AU-Net requires no tokenization and works directly on bytes. The architecture is designed for parallel and efficient generation while retaining autoregressive prediction. It achieves this through a hierarchy of strided down-sampling stages followed by up-sampling stages that restore the original sequence length. Notably, AU-Net includes a splitting mechanism that lets predictions be made over sub-segments of the sequence, improving scalability. This design also means the model's complexity grows linearly with sequence length rather than quadratically. The researchers evaluated the model on several language modeling benchmarks and multilingual tasks to test its effectiveness in both low-resource and large-scale settings.

AU-Net Architecture: Multi-Scale Encoding and Parallel Inference

The AU-Net architecture is implemented with multiple scale stages that reduce and then reconstruct input sequences using strided convolutions. During training, each segment of the input sequence is predicted in a masked fashion to maintain the autoregressive property. The model uses a learned splitting function to divide input sequences into non-overlapping groups, which are predicted concurrently and combined into a full output. It supports both shallow and deep configurations, spanning 3% to 75% of the training compute budget of standard baselines. For example, one configuration with 8 billion parameters trained on 200B tokens achieved highly competitive results.
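The contract-then-expand shape arithmetic at the heart of this design can be sketched in a few lines of plain Python. This toy replaces the learned strided convolutions with simple group means and omits skip connections, the splitting function, and autoregressive masking, so it only illustrates how each stage shortens the sequence linearly and how the expanding path restores byte resolution; the stride choices are illustrative.

```python
# Pure-Python shape sketch of a U-Net-style contract/expand pass over bytes.

def downsample(seq, stride):
    """Strided pooling: one coarse value per group of `stride` inputs."""
    return [sum(seq[i:i + stride]) / stride for i in range(0, len(seq), stride)]

def upsample(seq, stride):
    """Nearest-neighbour up-sampling back to the finer scale."""
    return [v for v in seq for _ in range(stride)]

def au_net_sketch(byte_seq, strides=(2, 2)):
    # Contracting path: each stage shortens the sequence by its stride,
    # so total cost stays linear in the input length.
    levels = [byte_seq]
    for s in strides:
        levels.append(downsample(levels[-1], s))
    # Expanding path: restore the original resolution for byte-level
    # prediction (U-Net skip additions omitted for brevity).
    out = levels[-1]
    for s in reversed(strides):
        out = upsample(out, s)
    return out

x = list(range(8))    # 8 raw byte values
y = au_net_sketch(x)  # same length after contracting and expanding
```

In the real model, each coarse level carries learned representations and the final fine-scale outputs are byte predictions, but the linear-cost down/up structure is the same.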
Another version, trained on 60 billion tokens with a one-billion-parameter model, achieved a 35.7 BLEU score on standard translation tasks, outperforming baseline models trained on the same data. Additionally, AU-Net demonstrated faster generation due to its parallel decoding, a significant benefit for latency-sensitive applications.

Benchmark Results Show Competitive Edge Over Transformers

The experimental results showed strong performance across a wide range of tasks. On Enwik8, a byte-level compression benchmark, AU-Net achieved 1.01 bits per byte, edging out a transformer baseline at 1.02 bits per byte. On PG-19, a long-context language modeling task, the model achieved 2.61 bits per byte compared to 2.75 for standard transformers. AU-Net also scaled effectively across compute budgets, achieving 43.3 BLEU on FLORES-200 translation with an 8B model trained on 200B tokens. In multilingual evaluation on FLORES-200, the model outperformed token-based transformers on low-resource language pairs and demonstrated better cross-lingual generalization within language families, reaching BLEU scores of up to 33.0 in several configurations. When evaluated under equal compute and data budgets, AU-Net either matched or outperformed transformers, with generation speeds improving by 20% to 30% in certain settings.

Key Contributions and Performance Insights from AU-Net

- AU-Net eliminates the need for tokenization by operating directly on raw byte inputs.
- On Enwik8, AU-Net scored 1.01 bpb, surpassing transformer baselines at 1.02 bpb.
- On PG-19, it achieved 2.61 bpb, improving over the 2.75 bpb of standard transformers.
- FLORES-200 multilingual evaluation showed up to 33.0 BLEU, outperforming token-based systems.
- Byte-level models trained with AU-Net maintained high performance across high-resource and low-resource settings.
- Generation speed improved by 20%-30%, supporting fast, parallel inference.
- Scaling laws held: performance improved with increased model size and data.
- The model showed better cross-lingual generalization and robustness to noise.
- Efficient use of compute: AU-Net matched or exceeded transformer performance at lower compute budgets.
- AU-Net is a viable alternative for large-scale language modeling tasks, including multilingual and byte-level applications.

Conclusion: AU-Net's Practical Benefits and Scalability Potential

In conclusion, the researchers provided detailed scaling analyses showing that AU-Net adheres to predictable scaling laws. It benefits from increased model size and training tokens in a manner consistent with trends observed in transformer models. For example, under compute-matched training settings, AU-Net's performance improved steadily with an increased data-to-model ratio, matching the gains seen in transformer counterparts. Importantly, AU-Net scaled up to models with 8 billion parameters, demonstrating that the architecture can support high-capacity systems. In extended evaluations, the model maintained its efficiency on downstream tasks, showing strong performance in language generation, translation, and byte-level prediction benchmarks. AU-Net also proved easier to train and more robust to noisy input than token-based models.

Why This Research Matters

This research matters because it challenges the long-standing reliance on token-based language models by introducing AU-Net, a byte-level autoregressive architecture that eliminates tokenization overhead while achieving competitive or superior performance. By processing raw


Cloud quantum computing: A trillion-dollar opportunity with dangerous hidden risks

GUEST: Quantum computing (QC) brings with it a mix of groundbreaking possibilities and significant risks. Major tech players like IBM, Google, Microsoft and Amazon have already rolled out commercial QC cloud services, while specialized firms like Quantinuum and PsiQuantum have quickly achieved unicorn status. Experts predict that the global QC mark…
