YouZum

News

AI, Committee, News, Uncategorized

There is no nature anymore

When people talk about “nature,” they’re generally talking about things that aren’t made by human beings. Rocks. Reefs. Red wolves. But while there is plenty of God’s creation to go around, it is hard to think of anything on Earth that human hands haven’t affected. In the Brazilian rainforest, scientists have found microplastics in the bellies of animals ranging from red howler monkeys to manatees. In remotest Yakutia, where much of the earth remains untrodden by human feet, the carbon in the sky above melts the permafrost below. In the Arctic Ocean, artificial light from ship traffic—on the rise as the polar ice cap melts away—now disrupts the nightly journey of zooplankton to the ocean surface, one of the largest animal migrations on the planet. The remote mountain lakes of the Alps are contaminated with all kinds of synthetic chemicals. Polar bears are full of flame retardants. Cesium-137, fallout from nuclear bomb explosions, lightly rimes the entire planet.

These examples are mostly pollution—nuclear, carbon, chemical, light—but I raise them not to highlight the ways human industry and technology degrade the environment but to note how the things humans build change it. Nobody really knows what the exact effects of all that will be, but my point is that no part of the globe is free of human fingerprints. We have literally changed the world.

We’ve changed ourselves as well. Humans are especially adept at bending human nature. Everything about us is up for grabs—appearance, health, our very thoughts. Pharmaceuticals, surgeries, vaccines, and hormones give us longer lives, take away our pain, ease our anxiety and depression, make us faster, stronger, more resilient. We’re getting glimpses of technologies that will let us change who our children will become before they’re even born. Electrodes implanted in people’s brains let them control computers and translate thoughts into speech. Prosthetics and exoskeletons straight out of comic books restore and enhance physical abilities, while gene-editing technologies like CRISPR are rewriting our very DNA. And meanwhile, people have taken the sum total of all the information we have ever written down and poured it into vast calculating machines in an effort—at least by some—to build an intelligence greater than our own.

So what even is nature, or natural, in this context? Is it “environmentalist,” in the conventional sense, to try to preserve what one could argue no longer exists? Should we employ technology to try to make the world more “natural”?

Those questions led us to approach this Nature issue with humility. We try to grapple with them all the time—MIT Technology Review is, after all, a review of how people have altered and built upon nature. And it’s a place to think about how we might repair it. Take solar geoengineering, for example—a subject we have covered with increasing frequency over the past few years. The basic idea of geoengineering is to find a technological fix for a problem technology caused: Burning petrochemicals to fuel the Industrial Revolution turned Earth’s atmosphere into a heat sink, fundamentally breaking the climate. Some geoengineers think that releasing particulate matter into the stratosphere would reflect sunlight back into space, thus reducing global temperatures. After years of theoretical discussions, some companies have begun to actively experiment with such technologies. This might seem like a great way to restore the world to a more natural state. It’s also fraught with controversy and peril. It could, for example, benefit some nations while harming others. It may give us license to continue burning fossil fuels and releasing greenhouse gases. The list goes on.

Nature isn’t easy.

In our May/June issue, we have attempted to take a hard look at nature in our unnatural world. We have stories about birds that can’t sing, wolves that aren’t wolves, and grass that isn’t grass. We look for the meaning of life under Arctic ice and within ourselves—and in the far future, on a distant world, courtesy of new fiction by the renowned author Jeff VanderMeer. I don’t know if any of that will answer the questions I’ve been asking here—but we can’t help but try. It’s in our nature.

There is no nature anymore Read post »

AI, Committee, News, Uncategorized

Los Angeles is finally going underground

Los Angeles deserves its reputation as the quintessential car city—the rhythms of its 2,200 square miles are dictated by wide boulevards and concrete arcs of freeways. But it once had a world-class rail transit system, and for the last three decades, the city has been rebuilding a network of trolleys and subways. In May, a new four-mile segment with three new subway stations will open along Wilshire Boulevard, a key east-west corridor that connects downtown LA to the Pacific Ocean. What today can be an hours-long drive through a busy, museum-packed stretch of the city will be, if all goes well, a 25-minute train ride.

The existence of subway stops in this part of town—known as Miracle Mile—is a technological triumph over geography and geology. The ground underneath it is literally a disaster waiting to happen—it’s tarry and full of methane. One of those methane deposits actually exploded in 1985, destroying a department store in the neighborhood. In response, the city pushed its new train routes to other parts of town.

These days, dirt full of flammable goo is no longer a problem. “The technology finally caught up with the concerns,” says LA Metro’s James Cohen, a longtime manager of the engineering for this stretch of subway. The key was an earth-pressure-balance tunnel-boring machine, an automated digger that is designed to chew through ground packed with explosive gas. It sends removed dirt topside via conveyor belts and slides precast concrete liner segments into the tunnel, which are joined together with gaskets to create a gas- and waterproof tube. All that let the machine dig about 50 feet every day.

A Metro train pulls into La Cienega station
Art by Susan Silton at the Fairfax station
Art by Eamon Ore-Giron at the La Brea station

Meanwhile, engineers excavated the stations from the street level down. They worked mostly on weekends, digging out a space and then decking it with concrete so that work could go on underneath while LA drivers continued to exercise their God-given right to get around by car above.

Did the project finish on time? No. Did it come in under budget? Also no; this segment alone cost nearly $4 billion. Is the city now racing to build housing and walkable areas to take full advantage of the extension? Oh, please. Yet the new stations still manage to feel, in the end, transformative—as if Los Angeles’s train has finally come in.

Los Angeles is finally going underground Read post »

AI, Committee, News, Uncategorized

AI needs a strong data fabric to deliver business value

Artificial intelligence is moving quickly in the enterprise, from experimentation to everyday use. Organizations are deploying copilots, agents, and predictive systems across finance, supply chains, human resources, and customer operations. By the end of 2025, half of companies used AI in at least three business functions, according to a recent survey. But as AI becomes embedded in core workflows, business leaders are discovering that the biggest obstacle is not model performance or computing power but the quality and the context of the data on which those systems rely. AI essentially introduces a new requirement: Systems must not only access data — they must understand the business context behind it.

Without that context, AI can generate answers quickly but still make the wrong decision, says Irfan Khan, president and chief product officer of SAP Data & Analytics.

“AI is incredibly good at producing results,” he says. “It moves fast, but without context it can’t exercise good judgment, and good judgment is what creates a return on investment for the business. Speed without judgment doesn’t help. It can actually hurt us.”

In the emerging era of autonomous systems and intelligent applications, that context layer is becoming essential. To provide context, companies need a well-designed data fabric that does more than just integrate data, Khan says. The right data fabric allows organizations to scale AI safely, coordinate decisions across systems and agents, and ensure that automation reflects real business priorities rather than making decisions in isolation.

Recognizing this, many organizations are rethinking their data architecture. Instead of simply moving data into a single repository, they are looking for ways to connect information across applications, clouds, and operational systems while preserving the semantics that describe how the business works. That shift is driving growing interest in data fabric as a foundation for AI infrastructure.
Losing context is a critical AI problem

Traditional data strategies have largely focused on aggregation. Over the past two decades, organizations have invested heavily in extracting information from operational systems and loading it into centralized warehouses, lakes, and dashboards. This approach makes it easier to run reports, monitor performance, and generate insights across the business, but in the process, much of the meaning attached to that data — how it relates to policies, processes, and real-world decisions — is lost.

Take two companies using AI to manage supply-chain disruptions. If one uses raw signals such as inventory levels, lead times, and supply scores, while the other adds context across business processes, policies, and metadata, both systems will rapidly analyze the data but likely come up with different conclusions.

Information such as which customers are strategic accounts, what tradeoffs are acceptable during shortages, and the status of extended supply chains will allow one AI system to make strategic decisions, while the other will not have the proper context, Khan says.

“Both systems move very quickly, but only one moves in the right direction,” he says. “This is the context premium and the advantage you gain when your data foundation preserves context across processes, policies and data by design.”

In the past, companies implicitly managed a lack of context because human experts supplied the missing information. AI removes that backstop, and the shortfall creates serious limitations. AI systems do not just display information; they act on it. If a system does not explain why data matters, an AI model may optimize for the wrong outcome. Inventory numbers, payment histories, or demand signals might be accurate, but they do not necessarily reveal which customers must be prioritized, which contractual obligations apply, or which products are strategically important.
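The supply-chain contrast above can be made concrete with a small sketch. This is a hypothetical illustration, not SAP's implementation: `Order`, `ContextFabric`, and the two allocators are invented names, and the "fabric" is reduced to a set of strategic accounts plus a shortage policy.

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    customer: str
    units: int

@dataclass
class ContextFabric:
    # Business metadata a data fabric would preserve alongside raw signals.
    strategic_accounts: set = field(default_factory=set)
    shortage_policy: str = "protect_strategic"

def allocate_raw(orders, stock):
    """Context-free system: first come, first served on raw inventory signals."""
    plan = {}
    for o in orders:
        take = min(o.units, stock)
        plan[o.customer] = take
        stock -= take
    return plan

def allocate_with_context(orders, stock, fabric: ContextFabric):
    """Context-aware system: strategic accounts are served first under the policy."""
    ranked = sorted(orders, key=lambda o: o.customer not in fabric.strategic_accounts)
    return allocate_raw(ranked, stock)

orders = [Order("RetailCo", 80), Order("KeyAccount", 60)]
fabric = ContextFabric(strategic_accounts={"KeyAccount"})
print(allocate_raw(orders, 100))                   # {'RetailCo': 80, 'KeyAccount': 20}
print(allocate_with_context(orders, 100, fabric))  # {'KeyAccount': 60, 'RetailCo': 40}
```

Both functions see identical raw data and the same stock level; only the second one knows which customer the business cannot afford to short, which is the "context premium" in miniature.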
As a result, the system can produce answers that are technically correct but operationally flawed. This realization is changing how companies think about AI readiness. Most acknowledge that they do not have the mature data processes and infrastructure in place to trust their data and their AI systems. Only one in five organizations consider their approach to data to be highly mature, and only 9% feel fully prepared to integrate and interoperate with their data systems.

Don’t consolidate, integrate

The emerging solution is a data fabric: an abstraction layer that spans infrastructure, architecture, and logical organization. For agentic AI, the fabric becomes the primary interface, allowing agents to interact with business knowledge rather than raw storage systems. Knowledge graphs play a central role, enabling agents to query enterprise data using natural language and business logic.

The value of the data fabric rests on three components: intelligent compute to provide speed, a knowledge pool to provide business understanding and context, and agents to provide autonomous action grounded in that understanding. What makes this powerful is how these capabilities work together, says Khan.

The technology provides the architecture — a foundation that makes agent-to-agent communication and coordination possible. The process defines how business and IT share ownership, and establishes governance and a culture in which people trust the system enough to adopt it. All three must work together for a business data fabric to truly succeed.

“It empowers confident, consistent decisions, and when these elements all come together, AI doesn’t just analyze and interpret the data — it drives smarter, faster decisions that really create business impact,” he says.
“This is the promise of a thoughtfully designed business data fabric, where every part reinforces the other, and every insight is grounded in trust and clarity.”

Technically, building a data-fabric layer requires several capabilities. Data must be accessible across multiple environments through federation rather than forced consolidation. A semantic or knowledge layer is needed to harmonize meaning across systems, often supported by knowledge graphs and catalog-driven metadata. Governance and policy enforcement must also operate across the fabric so that AI systems can access data securely and consistently. Together, these elements create a foundation where AI interacts with business knowledge instead of raw storage systems — an essential step for moving from experimentation to real enterprise automation.

Beyond data isolation and dashboards

In the emerging era of agentic AI, the responsibility for monitoring, analyzing,

AI needs a strong data fabric to deliver business value Read post »

AI, Committee, News, Uncategorized

The Download: introducing the 10 Things That Matter in AI Right Now

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: 10 Things That Matter in AI Right Now

What actually matters in AI right now? It’s getting harder to tell amid the constant launches, hype, and warnings. To cut through the noise, MIT Technology Review’s reporters and editors have distilled years of analysis into a new essential guide: the 10 Things That Matter in AI Right Now. The list builds on our annual 10 Breakthrough Technologies, but takes a wider view of the ideas, topics, and research shaping AI, spotlighting the trends and breakthroughs shaping the world. We’ll be unpacking one item from the list each day here in The Download, explaining what it means and why it matters. Read the full rundown now—and stay tuned for the days ahead.

MIT Technology Review Narrated: desalination plants in the Middle East are increasingly vulnerable

As the conflict in Iran has escalated, a crucial resource is under fire: the desalination technology that supplies water in the region. President Donald Trump recently threatened to destroy “possibly all desalinization plants” in Iran if the Strait of Hormuz is not reopened. The impact on farming, industry, and—crucially—drinking water in the Middle East could be severe. Find out why.

—Casey Crownhart

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 An unauthorized group has reportedly accessed Anthropic’s Mythos
Users in a private online forum may have gained access. (Bloomberg $)
+ Anthropic said the model was too dangerous for a full release. (Axios)
+ Mozilla used it to find 271 security vulnerabilities in Firefox. (Wired $)

2 Meta will track workers’ clicks and keystrokes for AI training
Tracking software is being installed on workers’ computers. (Reuters $)
+ Employees are up in arms about the program. (Business Insider)
+ LLMs could supercharge mass surveillance in the US. (MIT Technology Review)

3 ChatGPT allegedly advised the Florida State shooter
About when and where to strike, and which ammunition to use. (Washington Post $)
+ Florida’s attorney general is probing ChatGPT’s role in the shooting. (Ars Technica)
+ Does AI cause delusions or just amplify them? (MIT Technology Review)

4 SpaceX has secured the option to buy AI startup Cursor for $60 billion
Or pay $10 billion for the work they’re doing together. (The Verge)
+ SpaceX made the deal as it prepares to go public. (NYT $)
+ Musk’s endgame for the company may be a land grab in space. (The Atlantic $)

5 The Pentagon wants $54 billion for drones
That would rank among the top 10 military budgets for entire nations. (Ars Technica)
+ Shoplifters could soon be chased down by drones. (MIT Technology Review)

6 Apple’s new chief hardware officer signals a sprint to build in-house chips
Apple silicon lead Johny Srouji has been promoted to the role. (CNBC)

7 China’s government is tightening its grip on AI firms that try to leave
It’s doing all it can to stop firms like Manus sending talent and research overseas. (Washington Post $)

8 The FBI is probing the deaths of scientists tied to sensitive research
Including a nuclear physicist and MIT professor shot outside his home. (CNN)

9 The US is accelerating research into psychedelic medical treatment
Including the mysterious ibogaine. (Nature)
+ But psychedelics are (still) falling short in clinical trials. (MIT Technology Review)

10 The first retail boutique run by an AI agent has opened—and it’s chaos
The San Francisco shop is reassuringly mismanaged. (NYT $)

Quote of the day

“I was very impressed with myself to have the head of Apple calling to ‘kiss my ass’.”

—Donald Trump pays a classy tribute to Tim Cook on Truth Social.

One More Thing

JOHN F. MALTA

This researcher wants to replace your brain, little by little

A US agency pursuing moonshot health breakthroughs has hired a researcher advocating an extremely radical plan for defeating death. His idea? Replace your body parts. All of them. Even your brain.

Jean Hébert, a program manager at the US Advanced Research Projects Agency for Health (ARPA-H), believes we can beat aging by adding youthful tissue to people’s brains. Read the full story on his futuristic plan to extend human life.

—Antonio Regalado

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ A Lego set was sent to the edge of space—and survived.
+ Go behind the scenes with Werner Herzog as he guides a new generation of filmmakers.
+ This video about enshittification perfectly captures the frustration of the degrading internet.
+ NASA’s latest deep-space capture offers a rare view of planetary systems in their absolute infancy.

The Download: introducing the 10 Things That Matter in AI Right Now Read post »

AI, Committee, News, Uncategorized

HiRAS: A Hierarchical Multi-Agent Framework for Paper-to-Code Generation and Execution

arXiv:2604.17745v1 Announce Type: new Abstract: Recent advances in large language models have highlighted their potential to automate computational research, particularly reproducing experimental results. However, existing approaches still use fixed sequential agent pipelines with weak global coordination, which limits their robustness and overall performance. In this work, we propose the Hierarchical Research Agent System (HiRAS), a hierarchical multi-agent framework for end-to-end experiment reproduction that employs supervisory manager agents to coordinate specialised agents across fine-grained stages. We also identify limitations in the reference-free evaluation of the Paper2Code benchmark and introduce Paper2Code-Extra (P2C-Ex), a refined protocol that incorporates repository-level information and better aligns with the original reference-based metric. We conduct extensive evaluations validating the effectiveness and robustness of the proposed methods, observing improvements including a >10% relative performance gain over the previous state of the art with open-source backbone models and significantly reduced hallucination in evaluation. Our work is available on GitHub: https://github.com/KOU-199024/HiRAS.
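For readers unfamiliar with the architecture the abstract names, here is a minimal sketch of the hierarchical pattern it describes: a supervisory manager coordinating specialised agents across fine-grained stages, rather than a fixed sequential pipeline. This is not code from the HiRAS repository; `Manager`, `register`, and the stage names are invented for illustration.

```python
from typing import Callable

class Manager:
    """Toy supervisory agent: owns a registry of specialised workers per stage."""
    def __init__(self):
        self.workers: dict[str, Callable[[str], str]] = {}

    def register(self, stage: str, worker: Callable[[str], str]):
        self.workers[stage] = worker

    def run(self, stages: list[str], artifact: str) -> str:
        # Global coordination point: a real manager could reorder, retry, or
        # skip stages based on intermediate results; this sketch just iterates.
        for stage in stages:
            artifact = self.workers[stage](artifact)
        return artifact

mgr = Manager()
mgr.register("plan", lambda a: a + " -> plan")
mgr.register("code", lambda a: a + " -> code")
mgr.register("run",  lambda a: a + " -> run")
print(mgr.run(["plan", "code", "run"], "paper"))
# paper -> plan -> code -> run
```

The point of the hierarchy is that the manager, not the workers, owns the control flow, which is what distinguishes this from a hard-wired sequential pipeline.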

HiRAS: A Hierarchical Multi-Agent Framework for Paper-to-Code Generation and Execution Read post »

AI, Committee, News, Uncategorized

A Coding Implementation on Qwen 3.6-35B-A3B Covering Multimodal Inference, Thinking Control, Tool Calling, MoE Routing, RAG, and Session Persistence

In this tutorial, we build an end-to-end implementation around Qwen 3.6-35B-A3B and explore how a modern multimodal MoE model can be used in practical workflows. We begin by setting up the environment, loading the model adaptively based on available GPU memory, and creating a reusable chat framework that supports both standard responses and explicit thinking traces. From there, we work through important capabilities such as thinking-budget control, streamed generation with separated reasoning and answers, vision input handling, tool calling, structured JSON generation, MoE routing inspection, benchmarking, retrieval-augmented generation, and session persistence. Through this process, we run the model for inference and also examine how to design a robust application layer on top of Qwen 3.6 for real experimentation and advanced prototyping.

```python
import subprocess, sys

def _pip(*a):
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-q", *a])

_pip("--upgrade", "pip")
_pip("--upgrade", "transformers>=4.48.0", "accelerate>=1.2.0", "bitsandbytes>=0.44.0",
     "pillow", "requests", "sentencepiece", "qwen-vl-utils[decord]",
     "sentence-transformers", "jsonschema")

import torch, os, json, time, re, gc, io, threading, textwrap, warnings
from collections import Counter
from typing import Any, Optional

warnings.filterwarnings("ignore")
assert torch.cuda.is_available(), "GPU required. Switch runtime to A100 / L4."

p = torch.cuda.get_device_properties(0)
VRAM_GB = p.total_memory / 1e9
print(f"GPU: {p.name} | VRAM: {VRAM_GB:.1f} GB | CUDA {torch.version.cuda} | torch {torch.__version__}")

# Pick a load mode based on available VRAM.
if VRAM_GB >= 75:
    LOAD_MODE = "bf16"
elif VRAM_GB >= 40:
    LOAD_MODE = "int8"
else:
    LOAD_MODE = "int4"

try:
    import flash_attn
    ATTN_IMPL = "flash_attention_2"
except Exception:
    ATTN_IMPL = "sdpa"
print(f"-> mode={LOAD_MODE} attn={ATTN_IMPL}")

from transformers import (
    AutoModelForImageTextToText, AutoProcessor, BitsAndBytesConfig,
    TextIteratorStreamer, StoppingCriteria, StoppingCriteriaList,
)

MODEL_ID = "Qwen/Qwen3.6-35B-A3B"
kwargs = dict(device_map="auto", trust_remote_code=True, low_cpu_mem_usage=True,
              attn_implementation=ATTN_IMPL, torch_dtype=torch.bfloat16)
if LOAD_MODE == "int8":
    kwargs["quantization_config"] = BitsAndBytesConfig(load_in_8bit=True)
elif LOAD_MODE == "int4":
    kwargs["quantization_config"] = BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True)

print("Loading processor...")
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
print(f"Loading model in {LOAD_MODE} (first run downloads ~70GB) ...")
t0 = time.time()
model = AutoModelForImageTextToText.from_pretrained(MODEL_ID, **kwargs); model.eval()
print(f"Loaded in {time.time()-t0:.0f}s | VRAM used: {torch.cuda.memory_allocated()/1e9:.1f} GB")

# Sampling presets for thinking vs. instruct style generation.
SAMPLING = {
    "thinking_general": dict(temperature=1.0, top_p=0.95, top_k=20, presence_penalty=1.5),
    "thinking_coding":  dict(temperature=0.6, top_p=0.95, top_k=20, presence_penalty=0.0),
    "instruct_general": dict(temperature=0.7, top_p=0.80, top_k=20, presence_penalty=1.5),
    "instruct_reason":  dict(temperature=1.0, top_p=1.00, top_k=40, presence_penalty=2.0),
}

THINK_OPEN, THINK_CLOSE = "<think>", "</think>"

def split_thinking(text: str):
    """Split a generation into (thinking trace, final answer)."""
    if THINK_OPEN in text and THINK_CLOSE in text:
        a = text.index(THINK_OPEN) + len(THINK_OPEN); b = text.index(THINK_CLOSE)
        return text[a:b].strip(), text[b + len(THINK_CLOSE):].strip()
    if THINK_CLOSE in text:
        b = text.index(THINK_CLOSE)
        return text[:b].strip(), text[b + len(THINK_CLOSE):].strip()
    return "", text.strip()
```

We set up the full environment required to run Qwen 3.6-35B-A3B in Google Colab and install all supporting libraries for quantization, multimodal processing, retrieval, and schema validation. We then probe the available GPU, dynamically select the loading mode based on VRAM, and configure the attention backend so the model runs as efficiently as possible on the given hardware. After that, we load the processor and model from Hugging Face and define the core sampling presets and the thinking-splitting utility, which lay the foundation for all later interactions.

```python
class QwenChat:
    def __init__(self, model, processor, system=None, tools=None):
        self.model, self.processor = model, processor
        self.tokenizer = processor.tokenizer
        self.history: list[dict] = []
        if system:
            self.history.append({"role": "system", "content": system})
        self.tools = tools

    def user(self, content):
        self.history.append({"role": "user", "content": content}); return self

    def assistant(self, content, reasoning=""):
        m = {"role": "assistant", "content": content}
        if reasoning:
            m["reasoning_content"] = reasoning
        self.history.append(m); return self

    def tool_result(self, name, result):
        self.history.append({"role": "tool", "name": name,
                             "content": result if isinstance(result, str) else json.dumps(result)})
        return self

    def _inputs(self, enable_thinking, preserve_thinking):
        return self.processor.apply_chat_template(
            self.history, tools=self.tools, tokenize=True,
            add_generation_prompt=True, return_dict=True, return_tensors="pt",
            enable_thinking=enable_thinking, preserve_thinking=preserve_thinking,
        ).to(self.model.device)

    def generate(self, *, enable_thinking=True, preserve_thinking=False,
                 max_new_tokens=2048, preset="thinking_general",
                 stopping_criteria=None, append_to_history=True):
        inp = self._inputs(enable_thinking, preserve_thinking)
        cfg = SAMPLING[preset]
        gk = dict(**inp, max_new_tokens=max_new_tokens, do_sample=True,
                  temperature=cfg["temperature"], top_p=cfg["top_p"], top_k=cfg["top_k"],
                  repetition_penalty=1.0,
                  pad_token_id=self.tokenizer.pad_token_id or self.tokenizer.eos_token_id)
        if stopping_criteria is not None:
            gk["stopping_criteria"] = stopping_criteria
        with torch.inference_mode():
            out = self.model.generate(**gk)
        raw = self.tokenizer.decode(out[0, inp["input_ids"].shape[-1]:], skip_special_tokens=True)
        think, ans = split_thinking(raw)
        if append_to_history:
            self.assistant(ans, reasoning=think)
        return think, ans

    def stream(self, *, enable_thinking=True, preserve_thinking=False,
               max_new_tokens=2048, preset="thinking_general",
               on_thinking=None, on_answer=None):
        inp = self._inputs(enable_thinking, preserve_thinking)
        cfg = SAMPLING[preset]
        streamer = TextIteratorStreamer(self.tokenizer, skip_prompt=True, skip_special_tokens=True)
        gk = dict(**inp, streamer=streamer, max_new_tokens=max_new_tokens, do_sample=True,
                  temperature=cfg["temperature"], top_p=cfg["top_p"], top_k=cfg["top_k"],
                  pad_token_id=self.tokenizer.pad_token_id or self.tokenizer.eos_token_id)
        t = threading.Thread(target=self.model.generate, kwargs=gk); t.start()
        buf, in_think = "", enable_thinking
        think_text, answer_text = "", ""
        for piece in streamer:
            buf += piece
            if in_think:
                if THINK_CLOSE in buf:
                    close_at = buf.index(THINK_CLOSE)
                    resid = buf[:close_at]
                    if on_thinking: on_thinking(resid[len(think_text):])
                    think_text = resid
                    buf = buf[close_at + len(THINK_CLOSE):]
                    in_think = False
                    if buf and on_answer: on_answer(buf)
                    answer_text = buf; buf = ""
                else:
                    if on_thinking: on_thinking(piece)
                    think_text += piece
            else:
                if on_answer: on_answer(piece)
                answer_text += piece
        t.join()
        self.assistant(answer_text.strip(), reasoning=think_text.strip())
        return think_text.strip(), answer_text.strip()

    def save(self, path):
        with open(path, "w") as f:
            json.dump({"history": self.history, "tools": self.tools}, f, indent=2)

    @classmethod
    def load(cls, model, processor, path):
        with open(path) as f:
            data = json.load(f)
        c = cls(model, processor, tools=data.get("tools"))
        c.history = data["history"]; return c


class ThinkingBudget(StoppingCriteria):
    """Stop generation once the thinking block exceeds a token budget."""
    def __init__(self, tokenizer, budget: int):
        self.budget = budget
        self.open_ids = tokenizer.encode(THINK_OPEN, add_special_tokens=False)
        self.close_ids = tokenizer.encode(THINK_CLOSE, add_special_tokens=False)
        self.start = None

    def _find(self, seq, needle):
        n = len(needle)
        for i in range(len(seq) - n + 1):
            if seq[i:i+n] == needle:
                return i
        return None

    def __call__(self, input_ids, scores, **kwargs):
        seq = input_ids[0].tolist()
        if self.start is None:
            idx = self._find(seq, self.open_ids)
            if idx is not None:
                self.start = idx + len(self.open_ids)
            return False
        if self._find(seq[self.start:], self.close_ids) is not None:
            return False
        return (len(seq) - self.start) >= self.budget


TOOL_CALL_RE = re.compile(r"<tool_call>\s*({.*?})\s*</tool_call>", re.S)

def run_calculate(expr: str) -> str:
    if any(c not in "0123456789+-*/().% " for c in expr):
        return json.dumps({"error": "illegal chars"})
    try:
        return json.dumps({"result": eval(expr, {"__builtins__": {}}, {})})
    except Exception as e:
        return json.dumps({"error": str(e)})

_DOCS = {
    "qwen3.6": "Qwen3.6-35B-A3B is a 35B MoE with 3B active params and 262k native context.",
    "deltanet": "Gated DeltaNet is a linear-attention variant used in Qwen3.6's hybrid layers.",
    "moe": "Qwen3.6 uses 256 experts with 8 routed + 1 shared per token.",
}

def run_search_docs(q):
    hits = [v for k, v in _DOCS.items() if k in q.lower()]
    return json.dumps({"results": hits or ["no hits"]})

def run_get_time():
    import datetime as dt
    return json.dumps({"iso": dt.datetime.utcnow().isoformat() + "Z"})

TOOL_FNS = {
    "calculate": lambda a: run_calculate(a["expression"]),
    "search_docs": lambda a: run_search_docs(a["query"]),
    "get_time": lambda a: run_get_time(),
}

TOOLS_SCHEMA = [
    {"type": "function", "function": {"name": "calculate", "description": "Evaluate arithmetic.",
        "parameters": {"type": "object", "properties": {"expression": {"type": "string"}},
                       "required": ["expression"]}}},
    {"type": "function", "function": {"name": "search_docs", "description": "Search internal docs.",
        "parameters": {"type": "object", "properties": {"query": {"type": "string"}},
                       "required": ["query"]}}},
    {"type": "function", "function": {"name": "get_time", "description": "Get current UTC time.",
        "parameters": {"type": "object", "properties": {}}}},
]
```

We build the main QwenChat conversation manager, which handles message history, tool messages, chat template formatting, standard generation, streaming generation, and session persistence. We also define the ThinkingBudget stopping criterion to cap how many tokens the model may spend inside its thinking block.
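Since the tool schema above is the last piece shown, a standalone sketch of how a generated `<tool_call>` block could be parsed and dispatched may be useful. It reproduces the tutorial's `TOOL_CALL_RE` and `run_calculate` so it runs without the model loaded; the "fake" model output and the `dispatch_tool_calls` helper are illustrative, not part of the tutorial's code.

```python
import json, re

# Reproduced from the tutorial so this snippet is self-contained.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*({.*?})\s*</tool_call>", re.S)

def run_calculate(expr: str) -> str:
    if any(c not in "0123456789+-*/().% " for c in expr):
        return json.dumps({"error": "illegal chars"})
    try:
        return json.dumps({"result": eval(expr, {"__builtins__": {}}, {})})
    except Exception as e:
        return json.dumps({"error": str(e)})

TOOL_FNS = {"calculate": lambda a: run_calculate(a["expression"])}

def dispatch_tool_calls(model_output: str):
    """Parse every <tool_call> block in a generation and run the named tool."""
    results = []
    for blob in TOOL_CALL_RE.findall(model_output):
        call = json.loads(blob)
        fn = TOOL_FNS[call["name"]]
        results.append((call["name"], fn(call["arguments"])))
    return results

# Illustrative model output, not a real generation:
fake = 'Let me compute that.<tool_call>{"name": "calculate", "arguments": {"expression": "17 * 3"}}</tool_call>'
print(dispatch_tool_calls(fake))
# [('calculate', '{"result": 51}')]
```

In a full agent loop, each result would be fed back via `chat.tool_result(name, result)` before generating again, so the model can incorporate the tool's answer.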

A Coding Implementation on Qwen 3.6-35B-A3B Covering Multimodal Inference, Thinking Control, Tool Calling, MoE Routing, RAG, and Session Persistence Read post »

AI, Committee, News, Uncategorized

Digging for clues about the North Pole’s past

In the past, even with an icebreaker and during peak melt season, getting to the North Pole wasn’t a sure bet. It took favorable winds to crack the frozen ocean surface, and ships had to fight through ice that had grown many meters thick over several winters. In the summer of 2025, though, Jochen Knies from the Arctic University of Norway, Tromsø, and his team met little resistance on their way to 90 degrees North with the research vessel Kronprins Haakon. The geologist “didn’t hear the usual grinding of ice” against the hull that he remembered from 1996, when he first reached the pole by ship. Instead, thin floes and large stretches of open water made for an easy, quiet passage. To him, it was “a reminder of how quickly the Arctic is changing.” Since the late 1970s, when satellite observations of the polar seas began, summer ice cover of the Arctic Ocean has declined by more than 40%. In less than half a century, a frozen area the size of the Mediterranean Sea has turned into blue open water with the rapid warming of the high northern latitudes. If this trend continues, there could soon be summers at the North Pole with no sea ice whatsoever. The last time this happened may have been some 120,000 years ago. But no one knows for certain. That’s why Knies and his colleagues, a team of researchers from Norway and Germany, set out from Svalbard to the central Arctic last August. The aim of their five-week mission was to determine whether this region had been ice-free in recent Earth history—and if so, when. As part of a €12.5 million project financed by the European Union, they also came to answer some questions about the future of the Arctic and beyond: How does the loss of sea ice affect the marine ecosystem? What are the consequences for ocean circulation and global climate? In search of clues, the expedition collected sediment cores up to 22 meters in length at different locations across the Arctic seafloor. 
Marine sediments are valuable climate archives that give scientists a window into bygone eras. Like diligent record keepers, they can log past water temperatures, sea-ice coverage, and the strength of ocean currents. These data are encrypted in the chemical and physical properties of the plankton remains and weathered rock deposited on the seabed.

The ship’s crew and researchers recover the sediment corer, a 25-meter-long steel pipe that is driven into the seafloor using a top weight of more than three metric tons. TIM KALVELAGE

Together, the scientists pull out long plastic pipes filled with precious deep-sea mud. TIM KALVELAGE

The pipes are cut into shorter pieces and split in half before being processed in the ship’s laboratories. Each of these one-meter sections covers several tens of thousands of years of Earth’s history. TIM KALVELAGE

While sediment cores several meters long had been recovered on earlier expeditions in the central Arctic, there is no scientific consensus on how old the deposits actually are or whether sea ice ever completely disappeared in summer. To decode the Arctic’s climate archive, Knies brought a team of experts from various disciplines onboard the Kronprins Haakon to dig deeper and obtain fresh samples they could subject to the latest analytical techniques.

Samples await paleomagnetic dating. Like tiny compass needles, iron-rich particles align with Earth’s shifting magnetic field as they settle on the seabed. By measuring their orientation, researchers can estimate the age of the different sediment layers. TIM KALVELAGE

Under the microscope, PhD student Paulina Romel picks shells of unicellular foraminifera from a sample. The chemical composition of these microfossils can give clues about the age of the sediment and the surface water temperature when the organisms were still alive. “These are really cool creatures!” says Romel. TIM KALVELAGE

Agathe Ollive, a geochemist from the Alfred Wegener Institute in Germany, takes water samples from a CTD rosette, an instrument package that measures conductivity (salinity) and temperature at various depths. She uses certain elements to trace the inflow of fresh water and seawater from rivers and adjacent ocean basins into the Arctic. “I didn’t expect there to be so little ice up here,” Ollive says. She is worried about how the Arctic will look 20 years from now. TIM KALVELAGE

Some of this work was done while the researchers were still at sea. Now, at their home laboratories, they are finalizing their analysis of the seafloor samples. One important task is dating the sediments, which may be up to 2 million years old. The team uses a combination of methods to do this, including measuring magnetization, the decay of radioactive elements, and the exposure of mineral grains to sunlight before sinking to the depths. Once they can place them on a timeline, the materials in the cores will help researchers paint a picture of what the Arctic Ocean looked like in times that were warmer than today. For example, the presence or absence of the molecule IP25, which is produced exclusively by ice algae, could tell them how far the sea ice receded at a given time.

Toward the end of the expedition, the Kronprins Haakon passes this iceberg near the northeast coast of Greenland. TIM KALVELAGE

At the end of the study, the team hopes to have data that could improve climate projections for a future ice-free “blue Arctic,” helping us understand how it could affect marine life and carbon storage, Atlantic Ocean circulation, or extreme weather events in Europe and North America.

Tim Kalvelage is a freelance science reporter based in Bremen, Germany, who focuses on climate, ocean, and polar research. He has been to the North Pole twice.
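The paleomagnetic dating idea described above can be illustrated with a toy sketch. Everything below is invented for illustration: the depths and polarity readings are made up, real core dating combines many reversals, radiometric ties, and other methods, and `date_core` is a hypothetical helper, not anything the expedition uses. The one real anchor is the age of the Brunhes-Matuyama reversal, roughly 0.773 million years ago.

```python
# Hypothetical sketch: pin the depth where sediment magnetization flips
# from normal to reversed polarity to the Brunhes-Matuyama reversal
# (~0.773 Ma), then interpolate ages assuming a constant sedimentation rate.

BRUNHES_MATUYAMA_MA = 0.773  # age of the last major geomagnetic reversal

def date_core(samples):
    """samples: list of (depth_m, polarity) ordered top to bottom,
    where polarity is 'normal' (Brunhes) or 'reversed' (Matuyama).
    Returns a list of (depth_m, estimated_age_ma)."""
    # Find the first downcore switch from normal to reversed polarity.
    boundary_depth = None
    for (d1, p1), (d2, p2) in zip(samples, samples[1:]):
        if p1 == "normal" and p2 == "reversed":
            boundary_depth = (d1 + d2) / 2  # midpoint of the transition
            break
    if boundary_depth is None:
        raise ValueError("no polarity reversal found in core")
    # Constant sedimentation rate: boundary depth accumulated over 0.773 Ma.
    rate_m_per_ma = boundary_depth / BRUNHES_MATUYAMA_MA
    return [(d, d / rate_m_per_ma) for d, _ in samples]

# Invented measurements: polarity flips between 4 m and 6 m depth.
core = [(0.0, "normal"), (2.0, "normal"), (4.0, "normal"),
        (6.0, "reversed"), (8.0, "reversed")]
for depth, age in date_core(core):
    print(f"{depth:4.1f} m  ~{age:.2f} Ma")
```

In practice scientists cross-check several reversal boundaries against the full geomagnetic polarity timescale rather than assuming one constant sedimentation rate, which is why the team combines magnetization with radioactive decay and luminescence methods.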


The Download: turning down human noise, and LA’s stunning subway upgrade

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The noise we make is hurting animals. Can we learn to shut up?

As human society has expanded, animals have started struggling to hear one another. For many birds, the noise has grown so loud that they’ve begun to sing with faster trills. Now, their mating calls aren’t as effective.

The growing hubbub can also increase bird-on-bird conflict, and entire species that can’t handle urban clamor simply leave town for good. But there are technological solutions to the noises hurting animals—and they could help humans, too. Read the full story.

—Clive Thompson

Los Angeles is finally going underground

In May, a new subway segment will connect downtown Los Angeles to the Pacific Ocean. What today can be an hours-long drive through a busy, museum-packed stretch of the city will be, if all goes well, a 25-minute train ride.

The existence of subway stops in this part of town—known as Miracle Mile—is a technological triumph over geography and geology. Find out why.

—Adam Rogers

Both of these stories are from the next issue of our print magazine, which is all about nature. Subscribe now to read it when it lands tomorrow.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Apple’s Tim Cook is stepping down as CEO
Hardware chief John Ternus will take over from him in September. (CNN)
+ Ternus’ defining challenge may be fixing Apple’s AI strategy. (CNBC)
+ How does Cook compare with Apple’s other CEOs through the years? (NYT $)

2 Anthropic’s new Amazon deal escalates the compute war with OpenAI
Anthropic will spend more than $100 billion on Amazon compute. (Axios $)
+ OpenAI touted its compute advantage over Anthropic two weeks ago. (Bloomberg $)
+ Here’s why the AI compute explosion has only just begun. (MIT Technology Review)

3 Silicon Valley is trying to get into the news business
The latest addition is Andreessen Horowitz’s MTS. (The Information $)
+ OpenAI recently bought a business talk show. (NPR)
+ They join Elon Musk’s X and a new Peter Thiel-backed startup. (Axios)

4 The banking industry is scrambling to get access to Anthropic’s Mythos
As regulators review the risks to financial services. (Reuters $)
+ Germany’s central bank has called for wider access to Mythos. (Bloomberg $)

5 War memes are turning conflict into content
Fueled by recommendation systems designed to keep you hooked. (Wired $)
+ AI is turning the Iran conflict into theater. (MIT Technology Review)

6 AI is boosting worker productivity, but not their paychecks
Employees aren’t financially benefiting from their extra efficiency. (Quartz)
+ New data sheds light on the current state of AI. (MIT Technology Review)

7 Amazon’s ambition to rival Starlink has hit a setback
After a Blue Origin rocket was grounded. (FT $)

8 Jeff Bezos’s AI lab has neared a $38 billion valuation
In an imminent $10 billion fundraising deal from investors. (FT $)
+ The startup focuses on AI for engineering and manufacturing. (Reuters $)

9 Scientific AI agents have got their own social network
Where they share, debate, and discuss research papers. (Nature)

10 A Mars rover has discovered new “origin-of-life” molecules
They suggest Mars wasn’t always a lifeless red desert. (Gizmodo)

Quote of the day

“He’s been a transformational Apple CEO that’s always had a steady hand at the wheel. I think that will be his legacy. He had massive shoes to step into, and he was the right person for the job. That’s the way he’ll be remembered.”

One More Thing

MIKE MCQUADE

The race to save our online lives from a digital dark age

There is more stuff being created now than at any time in history, but our data is more fragile than ever. One day in the future, YouTube’s videos may permanently disappear. Facebook—and your uncle’s holiday posts—will vanish.

For many archivists, alarm bells are ringing. Across the world, they’re scraping up defunct websites, saving at-risk data collections, and developing data storage technologies that could last thousands of years.

Their work raises complex questions. What is important to us? How do we decide what to keep—and what do we let go? Read our story on the thorny problems of digital preservation.

—Niall Firth

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ Apple’s forgotten co-founder recently shared his story of the company’s early days.
+ Witness a rare underwater volcanic eruption in the Solomon Islands.
+ Learn what makes Shakespeare’s writing so effective in this masterful analysis.
+ An Artemis II astronaut shared a stunning iPhone video showing Earth disappear behind the Moon at 8x zoom.

