News
AIG Essay #7: Co-conspiratorial Optimization
The Anatomy of Language Models’ Obsession with Questions...
AI-Generated NFTs and Their Impact on the Digital Art Market
The intersection of Artificial Intelligence (AI) and blockchain technology, particularly through Non-Fungible Tokens (NFTs), is...
AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles
arXiv:2507.11764v1 Announce Type: new Abstract: This paper presents AI Wizards’ participation in the CLEF 2025...
AI That Teaches Itself: Tsinghua University’s ‘Absolute Zero’ Trains LLMs With Zero External Data
LLMs have shown advancements in reasoning capabilities through Reinforcement Learning with Verifiable Rewards (RLVR), which...
AI Agents Now Write Code in Parallel: OpenAI Introduces Codex, a Cloud-Based Coding Agent Inside ChatGPT
OpenAI has introduced Codex, a cloud-native software engineering agent integrated into ChatGPT, signaling a new...
AI agents are hitting a liability wall. Mixus has a plan to overcome it using human overseers on high-risk workflows
Mixus’s “colleague-in-the-loop” model blends automation with human judgment for safe deployment of AI agents.
Agent-based computing is outgrowing the web as we know it
AI agents are moving from passive assistants to active participants. Today, we ask them to...
AegisLLM: Scaling LLM Security Through Adaptive Multi-Agent Systems at Inference Time
The Growing Threat Landscape for LLMs: LLMs are key targets for fast-evolving attacks, including prompt...
Advancing Single and Multi-task Text Classification through Large Language Model Fine-tuning
arXiv:2412.08587v2 Announce Type: replace Abstract: Both encoder-only models (e.g., BERT, RoBERTa) and large language models...
Adopting agentic AI? Build AI fluency, redesign workflows, don’t neglect supervision
How can organizations decide how to use human-in-the-loop mechanisms and collaborative frameworks with AI agents?
Achieving Tokenizer Flexibility in Language Models through Heuristic Adaptation and Supertoken Learning
arXiv:2505.09738v1 Announce Type: new Abstract: Pretrained large language models (LLMs) are often constrained by their fixed...
AbstRaL: Teaching LLMs Abstract Reasoning via Reinforcement to Boost Robustness on GSM Benchmarks
Recent research indicates that LLMs, particularly smaller ones, frequently struggle with robust reasoning. They tend...