Python Decorators for Production Machine Learning Engineering
You’ve probably written a decorator or two in your Python career.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Cyberscammers are bypassing banks’ security with illicit tools sold on Telegram

Inside a money-laundering center in Cambodia, an employee opens a banking app on his phone. It asks for a photo linked to the account, so he uploads a picture of a 30-something Asian man. The app then requests a video “liveness” check. The scammer holds up a static image of a woman who doesn’t match the account. After 90 seconds, he’s in. The exploit relies on illicit hacking services sold on Telegram that break “Know Your Customer” (KYC) facial scans. MIT Technology Review found 22 channels and groups advertising these services. This is what we discovered. —Fiona Kelliher

Is carbon removal in trouble?
—Casey Crownhart

Last week, news emerged that Microsoft was pausing carbon removal purchases. It was a bombshell—Microsoft effectively is the carbon removal market, single-handedly purchasing around 80% of all contracted carbon removal. The report sparked fear across the industry, raising questions about the future of carbon removal and the role of Big Tech. Read the full story.

This story is from The Spark, our weekly newsletter exploring the technology that could combat the climate crisis. Sign up to receive it in your inbox every Wednesday.

The quest to measure our relationship with nature
—Emma Marris

Humans have done some destructive things to the ecosystems around us. But conservationists are learning that we can also be a force for good. To understand how we work best with nature, a group of scientists, authors, and philosophers have developed new measurements of human-nonhuman relationships. Now, a team in the United Nations is continuing the work. Find out why—and what they hope to achieve.

This story is from the next issue of our print magazine, which is all about nature. Subscribe now to read it when it lands on Wednesday, April 22.
The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Ukraine says Russian troops have surrendered to robots
They claim a fully automated attack captured army positions for the first time in history. (404 Media)
+ Europe’s vision for future wars is full of drones. (MIT Technology Review)

2 Monkeys with BCIs are navigating virtual worlds using only their thoughts
The research could help people with paralysis. (New Scientist)
+ But these implants still face a critical test. (MIT Technology Review)

3 NASA wants to put nuclear reactors on the Moon
They could power lunar bases and extend spaceflight. (Wired $)
+ NASA is also building a nuclear-powered spacecraft. (MIT Technology Review)

4 Plans for online age verification in the US are raising red flags
Experts warn of compliance issues and potential data breaches. (NBC News)
+ In the EU, an age verification app is about to launch. (Reuters $)

5 An AI chip boom just pushed Taiwan’s stock market past the UK’s
It’s risen past $4 trillion to become the world’s seventh largest. (FT $)
+ Future AI chips could be built on glass. (MIT Technology Review)

6 The public backlash against data centers is intensifying in the US
Protests and litigation are blocking projects. (CNBC)
+ One potential solution? Putting them in space. (MIT Technology Review)

7 Five-minute EV charging is becoming a reality
China’s BYD has started rolling it out. (Gizmodo)
+ “Extended-range electric vehicles” are about to hit US streets. (Atlantic $)

8 Stealth signals are bypassing Iran’s internet blackout
Files hidden in satellite TV broadcasts keep information flowing. (IEEE)

9 Shoe brand Allbirds made a shock pivot to AI, sending stock up 700%
No bubble to see here, folks. (CNBC)
+ What even is the AI bubble? (MIT Technology Review)

10 The largest ever map of the universe is complete
It captures 47 million galaxies and quasars. (Space.com)

Quote of the day

“I like the internet as much as anybody, but we’ve got to go on an internet diet. We don’t need to pay for corporations to do their internet stuff.”

—Sylvia Whitt, a 78-year-old retiree based in Virginia, tells the Washington Post why she’s protesting against data centers.

One More Thing

AI and the future of sex

Some Republican lawmakers want to criminalize porn and arrest its creators. But what if porn is wholly created by an algorithm? In that case, whether it’s obscene, ethical, or safe becomes a secondary issue. The primary concern will be what it means for porn to be “real”—and what the answer demands from all of us. Technological advances could even remove the “messy humanity” from sex itself. The rise of AI-generated porn may be a symptom of a new synthetic sexuality, not the cause. Read the full story. —Leo Herrera

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ An animator turned his son’s drawings into epic anime characters.
+ Hundreds of baby green sea turtles made a spectacular first journey to the ocean.
+ You can now track rocket launches from take-off to orbit in real time.
+ These musical mistakes prove that even the classics aren’t perfect.
The Download: cyberscammers’ banking bypasses, and carbon removal troubles
The AI boom has hit across industries, and public sector organizations are facing pressure to accelerate adoption. At the same time, government institutions face distinct constraints around security, governance, and operations that set them apart from their business counterparts. For this reason, purpose-built small language models (SLMs) offer a promising path to operationalize AI in these environments.

A Capgemini study found that 79 percent of public sector executives globally are wary about AI’s data security, an understandable figure given the heightened sensitivity of government data and the legal obligations surrounding its use. As Han Xiao, vice president of AI at Elastic, says, “Government agencies must be very restricted about what kind of data they send to the network. This sets a lot of boundaries on how they think about and manage their data.” The fundamental need for control over sensitive information is one of many factors complicating AI deployment, particularly when compared against the private sector’s standard operational assumptions.

Unique operational challenges

When private-sector entities expand AI, they typically assume certain conditions will be in place, including continuous connectivity to the cloud, reliance on centralized infrastructure, acceptance of incomplete model transparency, and limited restrictions on data movement. For many state institutions, however, accepting these conditions could be anything from dangerous to impossible. Government agencies must ensure that their data stays under their control, that information can be checked and verified, and that operational disruptions are kept to an absolute minimum. At the same time, they often have to run their systems in environments where internet connectivity is limited, unreliable, or unavailable.

These complexities prevent many promising public sector AI pilots from moving beyond experimentation. “Many people undervalue the operating challenge of AI,” Xiao says. “The public sector needs AI to perform reliably on all kinds of data, and then to be able to grow without breaking. Continuity of operations is often underestimated.” An Elastic survey of public sector leaders found that 65 percent struggle to use data continuously in real time and at scale.

Infrastructure constraints compound the problem. Government organizations may also struggle to obtain the graphics processing units (GPUs) used to train and access complex AI models. As Xiao points out, “Government doesn’t often purchase GPUs, unlike the private sector—they’re not used to managing GPU infrastructure. So accessing a GPU to run the model is a bottleneck for much of the public sector.”

A smaller, more practical model

The many nonnegotiable requirements in the public sector make large language models (LLMs) untenable. But SLMs can be housed locally, offering greater security and control. SLMs are specialized AI models that typically use billions rather than hundreds of billions of parameters, making them far less computationally demanding than the largest LLMs. The public sector does not need to build ever-larger models housed in offsite, centralized locations. An empirical study found that SLMs performed as well as or better than LLMs.

SLMs allow sensitive information to be used effectively and efficiently while avoiding the operational complexity of maintaining large models. Xiao puts it this way: “It is easy to use ChatGPT to do proofreading. It’s very difficult to run your own large language models just as smoothly in an environment with no network access.”

SLMs are purpose-built for the needs of the department or agency that will use them. The data is stored securely outside the model, and is only accessed when queried. Carefully engineered prompts ensure that only the most relevant information is retrieved, providing more accurate responses.
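The keep-the-data-outside-the-model pattern described here can be illustrated with a minimal sketch. The bag-of-words scorer below is a stand-in for a real locally hosted embedding model, and the document texts are invented for illustration; the point is only that sensitive records stay in the agency's own store and only the most relevant snippets are fetched per query to ground the SLM's answer.

```python
import re
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding"; a real deployment would use a
    # locally hosted embedding model instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    # Rank documents against the query; only the top-k snippets would be
    # handed to the local SLM as grounded context for its answer.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Procurement invoice for road maintenance contract 2024",
    "Minutes of the planning committee meeting on zoning",
    "Technical report on bridge inspection schedules",
]
print(retrieve("When is the next bridge inspection?", docs))
```

In a real system the ranking would run inside the agency's perimeter, so no record ever leaves it; only the retrieved snippet and the query reach the model.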
Using methods such as smart retrieval, vector search, and verifiable source grounding, AI systems can be built that cater to public sector needs. Thus, the next phase of AI adoption in the public sector may be to bring the AI tool to the data, rather than sending the data out into the cloud. Gartner predicts that by 2027, small, specialized AI models will be used three times more than LLMs.

Superior search capabilities

“When people in the public sector hear AI, they probably think about ChatGPT. But we can be much more ambitious,” says Xiao. “AI can revolutionize how the government searches and manages the large amounts of data they have.”

Looking beyond chatbots reveals one of AI’s most immediate opportunities: dramatically improved search. Like many organizations, the public sector has mountains of unstructured data—including technical reports, procurement documents, minutes, and invoices. Today’s AI, however, can deliver results sourced from mixed media, like readable PDFs, scans, images, spreadsheets, and recordings, and in multiple languages. All of this can be indexed by SLM-powered systems to provide tailored responses and to draft complex texts in any language, while ensuring outputs are legally compliant.

“The public sector has a lot of data, and they don’t always know how to use this data. They don’t know what the possibilities are,” says Xiao. Even more powerful, AI can help government employees interpret the data they access. “Today’s AI can provide you with a completely new view of how to harness that data,” says Xiao. A well-trained SLM can interpret legal norms, extract insights from public consultations, support data-driven executive decision-making, and improve public access to services and administrative information. This can contribute to dramatic improvements in how the public sector conducts its operations.

The small-language promise

Focusing on SLMs shifts the conversation from how comprehensive the model can be to how efficient it is.
LLMs incur significant performance and computational costs and require specialized hardware that many public entities cannot afford. Despite requiring some capital expenses, SLMs are less resource-intensive than LLMs, so they tend to be cheaper and reduce environmental impact. Public sector agencies often face stringent audit requirements, and SLM algorithms can be documented and certified as transparent. Some countries, particularly in Europe, also have privacy regulations such as GDPR that SLMs can be designed to meet. Tailored training data produces more targeted results, reducing errors, bias, and hallucinations that AI is prone to. As Xiao puts it, “Large language models generate text based on what they were trained on, so there is a cut-off
Making AI operational in constrained public sector environments
There’s a fault line running through enterprise AI, and it’s not the one getting the most attention. The public conversation still tracks foundation models and benchmarks — GPT versus Gemini, reasoning scores, and marginal capability gains. But in practice, the more durable advantage is structural: who owns the operating layer where intelligence is applied, governed, and improved.

One approach treats AI as an on-demand utility; the other embeds it as an operating layer — the combination of workflow software, data capture, feedback loops, and governance that sits between models and real work — that compounds with use.

Model providers like OpenAI and Anthropic sell intelligence as a service: you have a problem, you call an API, you get an answer. That intelligence is general-purpose, largely stateless, and only loosely connected to the day-to-day workflow where decisions are made. It’s highly capable and increasingly interchangeable. The distinction that matters is whether intelligence resets on every prompt or accumulates over time.

Incumbent organizations, by contrast, can treat AI as an operating layer: instrumentation across workflows, feedback loops from human decisions, and governance that turns individual tasks into reusable policy. In that setup, every exception, correction, and approval becomes a chance to learn — and intelligence can improve as the platform absorbs more of the organization’s work.

The organizations most likely to shape the enterprise AI era are those that can embed intelligence directly into operational platforms and instrument those platforms so work generates usable signals. The prevailing narrative says nimble startups will out-innovate incumbents by building AI-native from scratch. If AI is primarily a model problem, that story holds.
But in many enterprise domains, AI is a systems problem — integrations, permissions, evaluation, and change management — where advantage accrues to whoever already sits inside high-volume, high-stakes workflows and converts that position into learning and automation.

The inversion: AI executes, humans adjudicate

Traditional services organizations are built on a simple architecture: humans use software to do expert work. Operators log into systems, navigate workflows, make decisions, and process cases. Technology is the medium. Human judgment is the product.

An AI-native platform inverts this. It ingests a problem, applies accumulated domain knowledge, executes autonomously what it can with high confidence, and routes targeted sub-tasks to human experts when the situation demands judgment that the system can’t yet reliably provide. But inverting human-AI interaction isn’t just a UI redesign — it requires raw material. It’s only possible when the platform is built on a foundation of domain expertise, behavioral data, and operational knowledge accumulated over years.

The three compounding assets incumbents already own

AI-native startups begin with a clean architectural slate and can move quickly. What they can’t easily manufacture is the raw material that makes domain AI defensible at scale:

+ Proprietary operational data
+ A large workforce of domain experts whose day-to-day decisions generate training signals
+ Accumulated tacit knowledge about how complex work actually gets done

Services companies already have all three. But these ingredients aren’t moats on their own. They become an advantage only when a company can systematically convert messy operations into AI-ready signals and institutional knowledge — then feed the results back into the workflow so the system keeps improving.

Codifying expertise into reusable signals

In most services organizations, expertise is tacit and perishable.
The best operators know things they cannot easily articulate: heuristics developed over the years, edge-case intuitions, and pattern recognition that operates below the level of conscious reasoning.

At Ensemble, the strategy for addressing this challenge is knowledge distillation: the systematic conversion of expert judgment and operational decisions into machine-readable training signals. In health-care revenue cycle management, for example, systems can be seeded with explicit domain knowledge and then deepen their coverage through structured daily interaction with operators. In Ensemble’s implementation, the system identifies gaps, formulates targeted questions, and cross-checks answers across multiple experts to capture both consensus and edge-case nuance. It then synthesizes these inputs into a living knowledge base that reflects the situational reasoning behind expert-level performance.

Turning decisions into a learning flywheel

Once a system is constrained enough to be trusted, the next question is how it gets better without waiting for annual model upgrades. Every time a skilled operator makes a decision, they generate more than a completed task. They generate a potential labeled example — context paired with an expert action (and sometimes an outcome). At scale, across thousands of operators and millions of decisions, that stream can power supervised learning, evaluation, and targeted forms of reinforcement — teaching systems to behave more like experts in real conditions. For example, if an organization processes 50,000 cases a week and captures just three high-quality decision points per case, that’s 150,000 labeled examples every week without creating a separate data-collection program.

A more advanced human-in-the-loop design places experts inside the decision process, so systems learn not just what the right answer was, but how ambiguity gets resolved.
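The decision-to-labeled-example idea above can be sketched in a few lines. This is a generic illustration, not Ensemble's actual schema: the field names (claim_id, denial_code) and the action string are invented stand-ins for whatever case state and expert decision a given workflow records.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LabeledExample:
    context: dict                  # the case state the operator saw
    expert_action: str             # the decision the operator took
    outcome: Optional[str] = None  # filled in later, once known

@dataclass
class DecisionLog:
    examples: list = field(default_factory=list)

    def capture(self, context, action, outcome=None):
        # Recording the decision is a side effect of normal work, so no
        # separate data-collection program is needed.
        self.examples.append(LabeledExample(context, action, outcome))

log = DecisionLog()
log.capture({"claim_id": "C-1", "denial_code": "CO-97"},
            "appeal_with_medical_records")

# The article's arithmetic: 50,000 cases/week x 3 decision points/case
print(50_000 * 3)  # 150,000 labeled examples per week
```

The payoff is that every captured example is already paired with its context, so the same log can feed supervised fine-tuning, evaluation sets, and later reinforcement once outcomes arrive.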
Practically, humans intervene at branch points — selecting from AI-generated options, correcting assumptions, and redirecting the workflow. Each intervention becomes a high-value training signal. When the platform detects an edge case or a deviation from the expected process, it can prompt for a brief, structured rationale, capturing decision factors without requiring lengthy free-form reasoning logs.

Building toward expertise amplification

The goal is to permanently embed the accumulated expertise of thousands of domain experts — their knowledge, decisions, and reasoning — into an AI platform that amplifies what every operator can accomplish. Done well, this produces a quality of execution that neither humans nor AI achieve independently: higher consistency, improved throughput, and measurable operational gains. Operators can focus on more consequential work, supported by an AI that has already completed the analytical groundwork across thousands of analogous prior cases.

The broader implication for enterprise leaders is straightforward. Advantage in AI won’t be determined by access to general-purpose models alone. It will come from an organization’s ability to capture, refine, and compound what it knows — its data, decisions, and operational judgment — while building the controls required for high-stakes environments. As AI shifts from experimentation to infrastructure, the most durable edge may belong to the
Treating enterprise AI as an operating layer
The Google DeepMind research team introduced Gemini Robotics-ER 1.6, a significant upgrade to its embodied reasoning model designed to serve as the ‘cognitive brain’ of robots operating in real-world environments. The model specializes in reasoning capabilities critical for robotics, including visual and spatial understanding, task planning, and success detection — acting as the high-level reasoning model for a robot, capable of executing tasks by natively calling tools like Google Search, vision-language-action models (VLAs), or any other third-party user-defined functions.

Here is the key architectural idea to understand: Google DeepMind takes a dual-model approach to robotics AI. Gemini Robotics 1.5 is the vision-language-action (VLA) model — it processes visual inputs and user prompts and directly translates them into physical motor commands. Gemini Robotics-ER, on the other hand, is the embodied reasoning model: it specializes in understanding physical spaces, planning, and making logical decisions, but does not directly control robotic limbs. Instead, it provides high-level insights to help the VLA model decide what to do next. Think of it as the difference between a strategist and an executor — Gemini Robotics-ER 1.6 is the strategist.

https://deepmind.google/blog/gemini-robotics-er-1-6/

What’s New in Gemini Robotics-ER 1.6

Gemini Robotics-ER 1.6 shows significant improvement over both Gemini Robotics-ER 1.5 and Gemini 3.0 Flash, specifically enhancing spatial and physical reasoning capabilities such as pointing, counting, and success detection. But the key addition is a capability that did not exist in prior versions at all: instrument reading.

Pointing as a Foundation for Spatial Reasoning

Pointing — the model’s ability to identify precise pixel-level locations in an image — is far more powerful than it sounds.
Points can be used to express spatial reasoning (precision object detection and counting), relational logic (making comparisons such as identifying the smallest item in a set, or defining from-to relationships like ‘move X to location Y’), motion reasoning (mapping trajectories and identifying optimal grasp points), and constraint compliance (reasoning through complex prompts like “point to every object small enough to fit inside the blue cup”).

In internal benchmarks, Gemini Robotics-ER 1.6 demonstrates a clear advantage over its predecessor. It correctly identifies the number of hammers, scissors, paintbrushes, pliers, and garden tools in a scene, and does not point to requested items that are not present in the image — such as a wheelbarrow and a Ryobi drill. In comparison, Gemini Robotics-ER 1.5 fails to identify the correct number of hammers or paintbrushes, misses scissors altogether, and hallucinates a wheelbarrow. For AI robotics professionals this matters because hallucinated object detections in robotic pipelines can cause cascading downstream failures — a robot that ‘sees’ an object that isn’t there will attempt to interact with empty space.

Success Detection and Multi-View Reasoning

In robotics, knowing when a task is finished is just as important as knowing how to start it. Success detection serves as a critical decision-making engine that allows an agent to intelligently choose between retrying a failed attempt or progressing to the next stage of a plan. This is a harder problem than it looks. Most modern robotics setups include multiple camera views, such as an overhead and a wrist-mounted feed. This means a system needs to understand how different viewpoints combine to form a coherent picture at each moment and across time.
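The counting-with-refusal behavior described in the benchmark above amounts to a simple filter over the model's point outputs. A minimal sketch of a downstream consumer follows; the response format (a list of labeled pixel coordinates) is an assumption, not the documented API shape.

```python
from collections import Counter

# Hypothetical pointing response: one labeled pixel coordinate per
# detected instance. The exact model output format is an assumption.
points = [
    {"label": "hammer", "x": 120, "y": 340},
    {"label": "hammer", "x": 515, "y": 410},
    {"label": "scissors", "x": 230, "y": 90},
]

def count_objects(points, requested_labels):
    # A requested label with no points is reported as 0 rather than
    # invented, so downstream motion planning never targets empty space.
    found = Counter(p["label"] for p in points)
    return {label: found.get(label, 0) for label in requested_labels}

print(count_objects(points, ["hammer", "scissors", "wheelbarrow"]))
# -> {'hammer': 2, 'scissors': 1, 'wheelbarrow': 0}
```

The zero for "wheelbarrow" is the behavior the benchmark rewards: declining to point at an absent object instead of hallucinating one.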
Gemini Robotics-ER 1.6 advances multi-view reasoning, enabling it to better fuse information from multiple camera streams, even in occluded or dynamically changing environments.

Instrument Reading: A Real-World Breakthrough

The genuinely new capability in Gemini Robotics-ER 1.6 is instrument reading — the ability to interpret analog gauges, pressure meters, sight glasses, and digital readouts in industrial settings. This task stems from facility inspection needs, a critical focus area for Boston Dynamics. Spot, a Boston Dynamics robot, is able to visit instruments throughout a facility and capture images of them for Gemini Robotics-ER 1.6 to interpret.

Instrument reading requires complex visual reasoning: one must precisely perceive a variety of inputs — including the needles, liquid level, container boundaries, tick marks, and more — and understand how they all relate to each other. In the case of sight glasses, this involves estimating how much liquid fills the sight glass while accounting for distortion from the camera perspective. Gauges typically have text describing the unit, which must be read and interpreted, and some have multiple needles referring to different decimal places that need to be combined.

Gemini Robotics-ER 1.6 achieves its instrument readings by using agentic vision (a capability that combines visual reasoning with code execution, introduced with Gemini 3.0 Flash and extended in Gemini Robotics-ER 1.6). The model takes intermediate steps: first zooming into an image to get a better read of small details in a gauge, then using pointing and code execution to estimate proportions and intervals, and ultimately applying world knowledge to interpret meaning. Gemini Robotics-ER 1.5 achieves a 23% success rate on instrument reading, Gemini 3.0 Flash reaches 67%, Gemini Robotics-ER 1.6 reaches 86%, and Gemini Robotics-ER 1.6 with agentic vision hits 93%.
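The "estimate proportions and intervals" step can be made concrete with a toy calculation. The angles, sweep, and pressure range below are invented, and the real pipeline runs this kind of arithmetic inside the model's code-execution tool after pointing at the needle tip and the scale endpoints.

```python
def gauge_reading(needle_deg, min_deg, max_deg, min_val, max_val):
    # Linear interpolation between the first and last tick marks: the
    # kind of proportion estimate the agentic-vision loop derives from
    # pointed landmarks on the dial.
    fraction = (needle_deg - min_deg) / (max_deg - min_deg)
    return min_val + fraction * (max_val - min_val)

# A needle halfway through a 270-degree sweep on a 0-100 psi gauge:
print(gauge_reading(135, 0, 270, 0, 100))  # -> 50.0
```

Nonlinear scales, multiple needles, and perspective distortion all break this simple linear mapping, which is why the model combines the arithmetic with pointing and world knowledge rather than relying on interpolation alone.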
One important caveat: Gemini Robotics-ER 1.5 was evaluated without agentic vision because it does not support that capability. The other three models were evaluated with agentic vision enabled for the instrument reading task, making the 23% baseline less a performance gap and more a fundamental architectural difference. For AI developers evaluating model generations, this distinction matters — you are not comparing apples to apples across the full benchmark column.

Key Takeaways

Gemini Robotics-ER 1.6 is a reasoning model, not an action model: It acts as the high-level ‘brain’ of a robot — handling spatial understanding, task planning, and success detection — while the separate VLA model (Gemini Robotics 1.5) handles the actual physical motor commands.

Pointing is more powerful than it looks: Gemini Robotics-ER 1.6’s pointing capability goes far beyond simple object detection — it enables relational logic, motion trajectory mapping, grasp point identification, and constraint-based reasoning, all of which are foundational to reliable robotic manipulation.

Instrument reading is the biggest new capability: Built in collaboration with Boston Dynamics’ Spot robot for industrial facility inspection, Gemini Robotics-ER 1.6 can
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

NASA is building the first nuclear reactor-powered interplanetary spacecraft. How will it work?

Just before Artemis II began its historic slingshot around the moon, NASA revealed an even grander space travel plan. By the end of 2028, the agency aims to fly a nuclear reactor-powered interplanetary spacecraft to Mars. A successful mission would herald a new era in spaceflight—and might just give the US the edge in the race against China. But the project remains shrouded in mystery. MIT Technology Review picked the brains of nuclear power and propulsion experts to find out how the nuclear-powered spacecraft might work. Here’s what we discovered. —Robin George Andrews

This story is part of MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Coming soon: our 10 Things That Matter in AI Right Now

Each year, we compile our 10 Breakthrough Technologies list, featuring our educated predictions for which technologies will change the world. Our 2026 list, however, was harder to wrangle than normal. Why? We had so many worthy AI candidates we couldn’t fit them all in! That got us thinking: what if we made an entirely new list all about AI? Before we knew it, we had the beginnings of what we’re calling 10 Things That Matter in AI Right Now. On April 21, we’ll unveil the list on stage at our signature AI conference, EmTech AI, and then publish it online later that day. If you want to be among the first to see it, join us at EmTech AI or become a subscriber to livestream the announcement. Find out more about the list’s methodology and aims here.
—Niall Firth & Amy Nordrum

MIT Technology Review Narrated: this company is developing gene therapies for muscle growth, erectile dysfunction, and “radical longevity”

In January, a handful of volunteers were injected with two experimental gene therapies as part of an unusual clinical trial. Its long-term goal? To achieve radical human life extension. The therapies are designed to support muscle growth. The company behind them, Unlimited Bio, also plans to trial similar therapies in the scalp (for baldness) and penis (for erectile dysfunction). But some experts are concerned about the plans. Find out why the trial has divided opinion. —Jessica Hamzelou

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google, Microsoft, and Meta track users even when they opt out
According to an independent audit, they may be racking up billions in fines. (404 Media)
+ How our digital devices put our privacy at risk. (Ars Technica)
+ Privacy’s next frontier is AI “memories.” (MIT Technology Review)

2 OpenAI has a new cybersecurity model—and strategy
GPT-5.4-Cyber is designed specifically for defensive cybersecurity work. (Reuters $)
+ OpenAI has joined Anthropic in focusing on cybersecurity recently. (Wired $)
+ Like Anthropic, its latest model is only available to verified testers. (NYT $)
+ AI is already making online crimes easier. It could get much worse. (MIT Technology Review)

3 Amazon is buying satellite firm Globalstar in a bid to rival Starlink
The $11.6 billion deal targets the lucrative satellite internet market. (WSJ $)
+ Apple has chosen Amazon satellites for iPhone. (Ars Technica)

4 What it’s like to live with an experimental brain implant
Early BCI users explain what the technology gives—and takes. (IEEE)
+ A patient with Neuralink got a boost from generative AI. (MIT Technology Review)

5 Dozens of AI disease-prediction models were trained on dubious data
A few might already have been used on patients. (Nature)

6 Uber is breaking from its gig economy model to avoid robotaxi disruption
It’s spending $10 billion to buy thousands of autonomous vehicles. (FT $)

7 xAI is being sued over data center pollution
Musk’s AI venture stands accused by the NAACP of violating the Clean Air Act. (Engadget)
+ No one wants a data center in their backyard. (MIT Technology Review)

8 Apple could win the AI race without running
It may reap the rewards of everyone else’s spending. (Axios)

9 How 4chan set a precedent for AI’s reasoning abilities
The notorious forum tested a feature called “chain of thought.” (The Atlantic $)

10 The surprising emotional toll of wearing Meta’s AI sunglasses
Their shortcomings are making users sad. (NYT $)

Quote of the day

“Everything got a whole lot worse once they rolled out AI.”

—A copywriter tells the Guardian that they’re drowning in “workslop”—AI-generated work that seems polished but has major flaws.

One More Thing

How refrigeration ruined fresh food

Bananas may not be chilled in the grocery store, but they’re the ultimate refrigerated fruit. It’s only thanks to a network of thermal control that they’ve become a global commodity. And that salad bag on the shelf? It’s not just a bag but a highly engineered respiratory apparatus. According to Nicola Twilley—a contributor to the New Yorker and cohost of the podcast Gastropod—refrigeration has wrecked our food system. Thankfully, there are promising alternative preservation methods. Read the full story on her research. —Allison Arieff

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ Spotify only shows 10 popular songs per artist. This tool lists them all.
+ These GIF animations are mesmerizing loops of nostalgia.
+ This site beautifully visualizes Curiosity’s 13 years on Mars.
+ A retro-futurist designer has turned a NES console into a working synthesizer.
The Download: NASA’s nuclear spacecraft and unveiling our AI 10
From inside a money-laundering center in Cambodia, an employee opens a popular Vietnamese banking app on his phone. The app asks him to upload a photo associated with the account, so he clicks on a picture of a 30-something Asian man. Next, the app requests to open the camera for a video “liveness” check. The scammer holds up a static image of a woman bearing no resemblance to the man who owns the account. After a 90-second wait—as the app tells him to readjust the face inside the frame—he’s in.

The exploit he’s demonstrating, in a video shared with me by a cyberscam researcher named Hieu Minh Ngo, is possible thanks to one of a growing range of illicit hacking services, readily available for purchase on Telegram, that are designed to break “Know Your Customer” (KYC) facial scans. These banking and crypto safeguards are supposed to confirm that an account belongs to a real person, and that the user’s face matches the identity documents that were provided to open the account. But scammers are bypassing them in order to open mule accounts and launder money.

Rather than using a live phone camera feed for a liveness check, the hacks typically deploy a tool known as a virtual camera. Users can replace the video stream with other videos or photos—depicting a real or deepfake person or even an object. As financial institutions enact enhanced security measures aimed at stopping cyberscammers, these workarounds are the latest round in the cat-and-mouse game between criminal operators and the financial services industry.

Over the course of a two-month investigation earlier this year, MIT Technology Review identified 22 Chinese-, Vietnamese-, and English-language public Telegram channels and groups advertising bypass kits and stolen biometric data.
The software kits use a variety of methods to compromise phone operating systems and banking applications, claiming to enable users to get around the compliance checks imposed by financial institutions ranging from major crypto exchanges such as Binance to name-brand banks like Spain’s BBVA.

“Specializing in bank services—handling dirty money,” reads the since-deleted Telegram bio of the program used by the Cambodian launderer, complete with a thumbs-up emoji. “Secure. Professional. High quality.”

Some of the channels and groups had thousands of subscribers or members, and many posted bullet points listing their services (“All kinds of KYC verification services”; “It’s all smooth and seamless”) alongside videos purporting to show successful hacks. Telegram says that after reviewing the accounts, it removed them for violating its terms of service. But such online marketplaces proliferate easily, and multiple channels and groups advertising similar tools remain active.

Banks and butchers

The rise in KYC bypasses has occurred alongside an expansion of a global industry in “pig-butchering” cyberscams. Crypto platforms and banks around the world are facing increasing scrutiny over the flow of illegally obtained money, including profits from such scams, through their platforms. This has prompted tightened banking regulations in countries such as Vietnam and Thailand, where governments have increased customer verification and fraud monitoring requirements and are pushing for stronger anti-money-laundering safeguards in the crypto industry. Chainalysis, a US blockchain analysis firm, estimates that around $17 billion was stolen in 2025 in crypto scams and fraud, up from $13 billion in 2024.
The United Nations Office on Drugs and Crime, meanwhile, warned in a recent report that the expansion of Asian scam syndicates in Africa and the Pacific has helped the industry “dramatically scale up profits.”

That combination of factors—more scrutiny, but also more revenue—has vaulted KYC bypasses to the center of the online marketplace for cyberscam and casino money launderers. Although estimates vary, cybersecurity researchers say these kinds of attacks are rising: The biometrics verification company iProov estimated that virtual-camera attacks were more than 25 times as common worldwide in 2024 as in 2023, while Sumsub, a company providing KYC services, reported that “sophisticated” or multi-step fraud attempts, including virtual-camera bypasses, almost tripled last year among its clients.

Three financial institutions that were named as targets on such Telegram channels—the world’s largest crypto exchange, Binance, as well as BBVA and UK-based Revolut—told me they’re aware of such bypasses and emphasize that they’re an industry-wide challenge. A spokesperson from Binance said it has “observed attempts of this nature to circumvent our controls,” adding that “we have successfully prevented such attacks and remain confident in our systems.” BBVA and Revolut declined to comment on whether their safeguards had been breached.

It’s difficult to estimate success rates, because companies may not be aware of bypasses—or report them—until later. “What’s important is what we don’t see,” Artem Popov, Sumsub’s head of fraud prevention products, told me, referring to attacks that go undetected. “There’s always part of the story where it might be completely hidden from our eyes, and from the eyes of any company in the industry, using any type of KYC provider.”

How criminals navigate a compliance maze

Advertisements for the exploits appear simple enough, but on the back end, building a successful bypass is complex and often involves multiple methods.
Some channels offer to jailbreak a physical phone so that scammers can trigger the use of a virtual camera (VCam) instead of the built-in one whenever they’d like. Other hacks inject code known as a “hooking framework” into a financial institution’s app that triggers the VCam to open. Either way, VCams can be used to dupe KYC safeguards with images or videos that replace genuine, live video of the account’s owner.

Sergiy Yakymchuk, CEO of Talsec, a cybersecurity company that primarily serves financial institutions, reviewed details from the Telegram channels identified by MIT Technology Review and says they are consistent with successful tactics used against his banking and crypto clients. His team received help requests from banks and exchanges for roughly 30 VCam-based hacks over the past year, up from fewer than 10 in 2023.

Increasingly, hackers compromise both the phone itself and the code of the financial institutions’ apps before feeding the virtual camera a mix of stolen biometrics and deepfakes, Yakymchuk says. “Some time
Cyberscammers are bypassing banks’ security with illicit tools sold on Telegram
For four days in February 2019, some 30 synthetic biologists and ethicists hunkered down at a conference center in Northern Virginia to brainstorm high-risk, cutting-edge, irresistibly exciting ideas that the National Science Foundation should fund. By the end of the meeting, they’d landed on a compelling contender: making “mirror” bacteria.

Should they come to be, the lab-created microbes would be structured and organized like ordinary bacteria, with one important exception: Key biological molecules like proteins, sugars, and lipids would be the mirror images of those found in nature. DNA, RNA, and many other components of living cells are chiral, which means they have a built-in rotational structure. Their mirrors would twist in the opposite direction.

Researchers thrilled at the prospect. “Everybody—everybody—thought this was cool,” says John Glass, a synthetic biologist at the J. Craig Venter Institute in La Jolla, California, who attended the 2019 workshop and is a pioneer in developing synthetic cells. It was “an incredibly difficult project that would tell us potentially new things about how to design and build cells, or about the origin of life on Earth.”

The group saw enormous potential for medicine, too. Mirror microbes might be engineered as biological factories, producing mirror molecules that could form the basis for new kinds of drugs. In theory, such therapeutics could perform the same functions as their natural counterparts, but without triggering unwelcome immune responses.

After the meeting, the biologists recommended NSF funding for a handful of research groups to develop tools and carry out preliminary experiments, the beginnings of a path through the looking glass. The excitement was global. The National Natural Science Foundation of China funded major projects in mirror biology, as did the German Federal Ministry of Research, Technology, and Space.

Five years later, in 2024, many researchers involved in that NSF meeting had reversed course.
They’d become convinced that in the worst of all possible futures, mirror organisms could trigger a catastrophic event threatening every form of life on Earth; they’d proliferate without predators and evade the immune defenses of people, plants, and animals.

“I wish that one sunny afternoon we were having coffee and we realized the world’s about to end, but that’s not what happened.” —Kate Adamala, synthetic biologist, University of Minnesota

Over the past two years, they’ve been ringing alarm bells. They published an article in Science in December 2024, accompanied by a 299-page technical report addressing feasibility and risks. They’ve written essays and convened panels and cofounded the Mirror Biology Dialogues Fund (MBDF), a broadly funded nonprofit charged with supporting work on understanding and addressing the risk. The issue has received a blaze of media attention and ignited dialogues among not only chemists and synthetic biologists but also bioethicists and policymakers.

What’s received less attention, however, is how we got here and what uncertainties still remain about any potential threat. Creating a mirror-life organism would be tremendously complicated and expensive. And although the scientific community is taking the alarm seriously, some scientists doubt whether it’s even possible to create a mirror organism anytime soon.

“The hypothetical creation of mirror-image organisms lies far beyond the reach of present-day science,” says Ting Zhu, a molecular biologist at Westlake University, in China, whose lab focuses on synthesizing mirror-image peptides and other molecules. He and others have urged colleagues not to let speculation and anxiety guide decision-making and argued that it’s premature to call for a broad moratorium on early-stage research, which they say could have medical benefits.
But the researchers who are raising flags describe a pathway, even multiple pathways, to bringing mirror life into existence—and they say we urgently need guardrails to figure out what kinds of mirror-biology research might still be safe. That means they’re facing a question that others have encountered before, multiple times over the last several decades and with mixed results—one that doesn’t have a neat home in the scientific method. What should scientists do when they see the shadow of the end of the world in their own research?

Looking-glass life

The French chemist and microbiologist Louis Pasteur was the first to recognize that biological molecules had built-in handedness. In the late 19th century, he described all living species as “functions of cosmic asymmetry.” What would happen, he mused, if one could replace these chiral components with their mirror opposites?

Scientists now recognize that chirality is central to life itself, though no one knows why. In humans, 19 of the 20 so-called “standard” amino acids that make up proteins are chiral, and all in the same way. (The outlier, glycine, is symmetrical.) The functions of proteins are intricately tied to their shapes, and they mostly interact with other molecules through chiral structures. Almost all receptors on the surface of a cell are chiral. During an infection, the immune system’s sentinels use chirality to detect and bind to antigens—substances that trigger an immune response—and to start the process of building antibodies.

By the late 20th century, researchers had begun to explore the idea of reversing chirality. In 1992, one team reported having synthesized the first mirror-image protein. That, in turn, set off the first clarion call about the risk: In response to the discovery, chemists at Purdue University pointed out, briefly, that mirror-life organisms, if they escaped from a lab, would be immune to any attack by “normal” life.
A 2010 story in Wired highlighting early findings in the area noted that if such a microbe developed the ability to photosynthesize, it could obliterate life as we know it.

The synthetic biology community didn’t seriously weigh those threats then, says David Relman, a specialist who bridges infectious disease and microbiology at Stanford University and a trailblazer in studying the gut and oral microbiomes. The idea of a mirror microbe seemed too far beyond the actual progress on proteins. “This was almost a solely theoretical argument 20 years ago,” he says.

Now the research landscape has changed. Scientists are quickly making progress on mirror images of the machinery cells use to make proteins
No one’s sure if synthetic mirror life will kill us all