YouZum


The Download: how humans make decisions, and Moderna’s “vaccine” word games

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

You have no choice in reading this article—maybe

How do humans make decisions? The question has been on Uri Maoz’s mind since he read an article in his early twenties suggesting that… maybe they didn’t.

Had he even had a choice about whether to read that article in the first place? How would he ever know if he was truly responsible for making any decisions? “After that, there was no turning back,” says Maoz, now a professor of computational neuroscience at Chapman University.

Today, Maoz is a central figure in efforts to understand how desires and beliefs turn into actions. He’s also uncovered new wrinkles in the debate. Read the full story on his discoveries.

—Sarah Scoles

This article is from the next issue of our print magazine, packed with stories all about nature. Subscribe now to read the full thing when it lands on Wednesday, April 22.

What’s in a name? Moderna’s “vaccine” vs. “therapy” dilemma

Moderna, the covid-19 shot maker, is using its mRNA technology to destroy tumors through a very, very promising technique known as a cancer vacc—

“It’s not a vaccine,” a spokesperson for Merck said before the V-word could be uttered. “It’s an individualized neoantigen therapy.”

Oh, but it is a vaccine, and it looks like a possible breakthrough. But it’s been rebranded to avoid vaccine fearmongering—and not everyone is happy about the word game. Read the full story.

—Antonio Regalado

This article is from The Checkup, our weekly newsletter covering the latest in biotech. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Sam Altman’s home has been attacked twice in two days
A driver reportedly fired a gun at his property on Sunday. (SF Standard)
+ A Molotov cocktail was thrown at his home on Friday. (NBC News)
+ The suspect wrote essays warning AI would end humanity. (SF Chronicle)
+ The attacks expose growing divides in opinion on AI. (Axios)

2 AI weapons are ushering in a new kind of arms race
Countries are racing to deploy AI in military systems. (NYT $)
+ The Pentagon wants AI firms to train on classified data. (MIT Technology Review)
+ Where OpenAI’s technology could show up in Iran. (MIT Technology Review)

3 Artemis II was a success
Astronauts did an array of experiments that will be crucial to the future of both the program itself and deep-space missions. (Guardian)
+ But next steps for the Artemis missions are uncertain. (Ars Technica)

4 OpenAI and Elon Musk are heading toward a massive courtroom clash
The company has accused Musk of a “legal ambush.” (Engadget)
+ He’s lost a streak of cases ahead of the showdown. (FT $)

5 AI job fears in China are fueling a viral “ability harvester” project
It claims to turn human skills into AI tools. (SCMP)
+ Hustlers are cashing in on China’s OpenClaw AI craze. (MIT Technology Review)

6 Governments are hiding information about the Iran war online
Through restrictions on internet access and satellite imagery. (NPR)

7 Apple is testing four smart glasses that could rival Meta Ray-Bans
They’re part of a broader wearables strategy. (Bloomberg $)

8 Meta is building an AI version of Mark Zuckerberg to interact with staff
It’s being trained on his mannerisms, voice, and statements. (FT $)

9 Anthropic is asking Christian leaders for guidance
It’s seeking advice on building moral machines. (WP $)
+ AI agents have spread their own religions. (MIT Technology Review)

10 A dancer with MND is performing again through an avatar
Her brainwaves powered the digital dancer. (BBC)

Quote of the day

“Earth was this lifeboat hanging in the universe.”

—Artemis II astronaut Christina Koch describes her view of Earth from space, the Guardian reports.

One more thing

How AI and Wikipedia have sent vulnerable languages into a doom spiral

When Kenneth Wehr started managing the Greenlandic-language version of Wikipedia, he discovered that almost every article had been written by people who didn’t speak the language.

A growing number of them had been copy-pasted into Wikipedia from machine translators—and were riddled with elementary mistakes. This is beginning to cause a wicked problem.

AI systems, from Google Translate to ChatGPT, learn new languages by scraping text from Wikipedia. This could push the most vulnerable languages on Earth toward the precipice.

Read the full story on what happens when AI gets trained on junk pages.

—Jacob Judah

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ Hungary’s next health minister can throw some serious shapes.
+ Here’s a welcome route to an AI-free Google search.
+ Movievia eschews endless scrolling to find the right film for your needs.
+ A photography trick has turned a giant glacier into a tiny, living diorama.



Want to understand the current state of AI? Check out these charts.

If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock. The 2026 AI Index from Stanford University’s Institute for Human-Centered Artificial Intelligence, AI’s annual report card, comes out today and cuts through some of that noise.

Despite predictions that AI development may hit a wall, the report says that the top models just keep getting better. People are adopting AI faster than they picked up the personal computer or the internet. AI companies are generating revenue faster than companies in any previous technology boom, but they’re also spending hundreds of billions of dollars on data centers and chips. The benchmarks designed to measure AI, the policies meant to govern it, and the job market are struggling to keep up. AI is sprinting, and the rest of us are trying to find our shoes.

All that speed comes at a cost. AI data centers around the world can now draw 29.6 gigawatts of power, enough to run the entire state of New York at peak demand. Annual water use from running OpenAI’s GPT-4o alone may exceed the drinking water needs of 12 million people. At the same time, the supply chain for chips is alarmingly fragile. The US hosts most of the world’s AI data centers, and one company in Taiwan, TSMC, fabricates almost every leading AI chip.

The data reveals a technology evolving faster than we can manage. Here’s a look at some of the key points from this year’s report.

The US and China are nearly tied

In a long, heated race with immense geopolitical stakes, the US and China are almost neck and neck on AI model performance, according to Arena, a community-driven ranking platform that allows users to compare the outputs of large language models on identical prompts. In early 2023, OpenAI had a lead with ChatGPT, but this gap narrowed in 2024 as Google and Anthropic released their own models. In February 2025, R1, an AI model built by the Chinese lab DeepSeek, briefly matched the top US model, ChatGPT. As of March 2026, Anthropic leads, trailed closely by xAI, Google, and OpenAI. Chinese models from DeepSeek and Alibaba lag only modestly. With the best AI models separated in the rankings by razor-thin margins, they’re now competing on cost, reliability, and real-world usefulness.

The index notes that the US and China have different AI advantages. While the US has more powerful AI models, more capital, and an estimated 5,427 data centers (more than 10 times as many as any other country), China leads in AI research publications, patents, and robotics.

As competition intensifies, companies like OpenAI, Anthropic, and Google no longer disclose their training code, parameter counts, or data-set sizes. “We don’t know a lot of things about predicting model behaviors,” says Yolanda Gil, a computer scientist at the University of Southern California who coauthored the report. This lack of transparency makes it difficult for independent researchers to study how to make AI models safer, she says.

AI models are advancing super fast

Despite predictions that development will plateau, AI models keep getting better and better. By some measures, they now meet or exceed the performance of human experts on tests that aim to measure PhD-level science, math, and language understanding. SWE-bench Verified, a software engineering benchmark for AI models, saw top scores jump from around 60% in 2024 to almost 100% in 2025. In 2025, an AI system produced a weather forecast on its own.

“I am stunned that this technology continues to improve, and it’s just not plateauing in any way,” says Gil.

However, AI still struggles in plenty of other areas. Because the models learn by processing enormous amounts of text and images rather than by experiencing the physical world, AI exhibits “jagged intelligence.” Robots are still in their early days and succeed in only 12% of household tasks. Self-driving cars are farther along: Waymos are now roaming across five US cities, and Baidu’s Apollo Go vehicles are shuttling riders around in China. AI is also expanding into professional domains like law and finance, but no model dominates the field yet.

But the way we test AI is broken

These reports of progress should be taken with a grain of salt. The benchmarks designed to track AI progress are struggling to keep up as models quickly blow past their ceilings, the Stanford report says. Some are poorly constructed—a popular benchmark that tests a model’s math abilities has a 42% error rate. Others can be gamed: when models are trained on benchmark test data, for example, they can learn to score well without getting smarter.

Because AI is rarely used the same way it’s tested, strong benchmark performance doesn’t always translate to real-world performance. And for complex, interactive technologies such as AI agents and robots, benchmarks barely exist yet.

AI companies are also sharing less about how their models are trained, and independent testing sometimes tells a different story from what they report. “A lot of companies are not releasing how their models do in certain benchmarks, particularly the responsible-AI benchmarks,” says Gil. “The absence of how your model is doing on a benchmark maybe says something.”

AI is starting to affect jobs

Within three years of going mainstream, AI is now used by more than half of people around the world, a rate of adoption faster than that of the personal computer or the internet. An estimated 88% of organizations now use AI, and four in five university students use it.

It’s early days for deployment, and AI’s impact on jobs is hard to measure. Still, some studies suggest AI is beginning to affect young workers in certain professions. According to a 2025 study by economists at Stanford, employment for software developers aged 22 to 25 has fallen nearly 20% since 2022. The decline might not be pinned on AI alone, as broader macroeconomic conditions could be to blame, but AI appears to be playing



Large Language Model Post-Training: A Unified View of Off-Policy and On-Policy Learning

arXiv:2604.07941v1 Announce Type: new Abstract: Post-training has become central to turning pretrained large language models (LLMs) into aligned and deployable systems. Recent progress spans supervised fine-tuning (SFT), preference optimization, reinforcement learning (RL), process supervision, verifier-guided methods, distillation, and multi-stage pipelines. Yet these methods are often discussed in fragmented ways, organized by labels or objective families rather than by the behavioral bottlenecks they address. This survey argues that LLM post-training is best understood as structured intervention on model behavior. We organize the field first by trajectory provenance, which defines two primary learning regimes: off-policy learning on externally supplied trajectories, and on-policy learning on learner-generated rollouts. We then interpret methods through two recurring roles — effective support expansion, which makes useful behaviors more reachable, and policy reshaping, which improves behavior within already reachable regions — together with a complementary systems-level role, behavioral consolidation, which preserves, transfers, and amortizes behavior across stages and model transitions. This perspective yields a unified reading of major paradigms. SFT may serve either support expansion or policy reshaping, whereas preference-based methods are usually off-policy reshaping. On-policy RL often improves behavior on learner-generated states, though under stronger guidance it can also make hard-to-reach reasoning paths reachable. Distillation is often best understood as consolidation rather than only compression, and hybrid pipelines emerge as coordinated multi-stage compositions. Overall, the framework helps diagnose post-training bottlenecks and reason about stage composition, suggesting that progress in LLM post-training increasingly depends on coordinated system design rather than any single dominant objective.



Symbiotic-MoE: Unlocking the Synergy between Generation and Understanding

arXiv:2604.07753v1 Announce Type: cross Abstract: Empowering Large Multimodal Models (LMMs) with image generation often leads to catastrophic forgetting in understanding tasks due to severe gradient conflicts. While existing paradigms like Mixture-of-Transformers (MoT) mitigate this conflict through structural isolation, they fundamentally sever cross-modal synergy and suffer from capacity fragmentation. In this work, we present Symbiotic-MoE, a unified pre-training framework that resolves task interference within a native multimodal Mixture-of-Experts (MoE) Transformer architecture with zero parameter overhead. We first identify that standard MoE tuning leads to routing collapse, where generative gradients dominate expert utilization. To address this, we introduce Modality-Aware Expert Disentanglement, which partitions experts into task-specific groups while utilizing shared experts as a multimodal semantic bridge. Crucially, this design allows shared experts to absorb fine-grained visual semantics from generative tasks to enrich textual representations. To optimize this, we propose a Progressive Training Strategy featuring differential learning rates and early-stage gradient shielding. This mechanism not only shields pre-trained knowledge from early volatility but also eventually transforms generative signals into constructive feedback for understanding. Extensive experiments demonstrate that Symbiotic-MoE achieves rapid generative convergence while unlocking cross-modal synergy, boosting inherent understanding with remarkable gains on MMLU and OCRBench.
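The abstract gives no implementation details, so the following is only a minimal sketch of the expert-partitioning idea it describes; every class name, dimension, and the task flag below are assumptions for illustration, not the paper’s code. The sketch routes each token to a task-specific expert group (understanding or generation) while a small set of shared experts processes every token, loosely playing the role of the “semantic bridge” the abstract mentions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def make_expert(d_model, d_ff):
    return nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))

class TaskPartitionedMoE(nn.Module):
    """Illustrative sketch only: experts are split into an understanding
    group, a generation group, and a shared group used by every token,
    loosely mirroring the expert disentanglement the abstract describes."""

    def __init__(self, d_model=512, d_ff=1024, n_task_experts=4, n_shared=2, top_k=2):
        super().__init__()
        self.und_experts = nn.ModuleList(make_expert(d_model, d_ff) for _ in range(n_task_experts))
        self.gen_experts = nn.ModuleList(make_expert(d_model, d_ff) for _ in range(n_task_experts))
        self.shared_experts = nn.ModuleList(make_expert(d_model, d_ff) for _ in range(n_shared))
        self.router = nn.Linear(d_model, n_task_experts)  # scores the routable group only
        self.top_k = top_k

    def forward(self, x, task):
        # x: (batch, seq, d_model); task: "understanding" or "generation".
        # Restricting routing to one group is one way to keep generative
        # gradients from monopolizing all experts (the routing collapse
        # the abstract identifies).
        group = self.und_experts if task == "understanding" else self.gen_experts
        scores = self.router(x)                          # (B, S, n_task_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(group):
                mask = (idx[..., slot] == e).unsqueeze(-1).float()
                out = out + mask * weights[..., slot:slot + 1] * expert(x)
        # Shared experts see every token regardless of task.
        for expert in self.shared_experts:
            out = out + expert(x) / len(self.shared_experts)
        return out

A call like TaskPartitionedMoE()(tokens, task="generation") then touches only the generation group plus the shared experts; how the paper actually implements routing, load balancing, and its progressive training schedule is not specified in the abstract.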



HyperMem: Hypergraph Memory for Long-Term Conversations

arXiv:2604.08256v1 Announce Type: new Abstract: Long-term memory is essential for conversational agents to maintain coherence, track persistent tasks, and provide personalized interactions across extended dialogues. However, existing approaches such as Retrieval-Augmented Generation (RAG) and graph-based memory mostly rely on pairwise relations, which can hardly capture high-order associations, i.e., joint dependencies among multiple elements, causing fragmented retrieval. To this end, we propose HyperMem, a hypergraph-based hierarchical memory architecture that explicitly models such associations using hyperedges. Specifically, HyperMem structures memory into three levels: topics, episodes, and facts, and groups related episodes and their facts via hyperedges, unifying scattered content into coherent units. Leveraging this structure, we design a hybrid lexical-semantic index and a coarse-to-fine retrieval strategy, supporting accurate and efficient retrieval of high-order associations. Experiments on the LoCoMo benchmark show that HyperMem achieves state-of-the-art performance with 92.73% LLM-as-a-judge accuracy, demonstrating the effectiveness of HyperMem for long-term conversations.
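The abstract describes the structure but not a schema; as an illustration only, the three-level hierarchy and its hyperedges might be modeled as below. All class and field names are assumptions for this sketch, and the retrieve method approximates the paper’s coarse-to-fine, lexical-semantic retrieval with a plain lexical topic match.

from dataclasses import dataclass, field

@dataclass
class Fact:
    fact_id: str
    text: str

@dataclass
class Episode:
    episode_id: str
    summary: str
    fact_ids: list = field(default_factory=list)

@dataclass
class Hyperedge:
    """One hyperedge groups any number of episodes and facts into a
    coherent unit, capturing a high-order association that pairwise
    graph edges cannot express."""
    edge_id: str
    topic: str
    episode_ids: set = field(default_factory=set)
    fact_ids: set = field(default_factory=set)

class HypergraphMemory:
    def __init__(self):
        self.facts = {}
        self.episodes = {}
        self.hyperedges = {}

    def add_edge(self, edge):
        self.hyperedges[edge.edge_id] = edge

    def retrieve(self, topic_query):
        """Coarse-to-fine: match hyperedges by topic first (coarse),
        then expand to every fact the matching edges group (fine)."""
        hits = [e for e in self.hyperedges.values()
                if topic_query.lower() in e.topic.lower()]
        fact_ids = {fid for e in hits for fid in e.fact_ids}
        return [self.facts[fid] for fid in fact_ids if fid in self.facts]

mem = HypergraphMemory()
mem.facts["f1"] = Fact("f1", "Alice adopted a dog named Miso in May.")
mem.episodes["e1"] = Episode("e1", "Alice talks about her new pet", ["f1"])
mem.add_edge(Hyperedge("h1", "Alice's pet", {"e1"}, {"f1"}))
print(mem.retrieve("pet"))  # returns the grouped fact(s) in one hop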



How Knowledge Distillation Compresses Ensemble Intelligence into a Single Deployable AI Model

Complex prediction problems often lead to ensembles, because combining multiple models improves accuracy by reducing variance and capturing diverse patterns. However, these ensembles are impractical in production due to latency constraints and operational complexity. Instead of discarding them, knowledge distillation offers a smarter approach: keep the ensemble as a teacher and train a smaller student model using its soft probability outputs. This allows the student to inherit much of the ensemble’s performance while being lightweight and fast enough for deployment. In this article, we build this pipeline from scratch: training a 12-model teacher ensemble, generating soft targets with temperature scaling, and distilling it into a student that recovers 53.8% of the ensemble’s accuracy edge at 160× compression.

What is Knowledge Distillation?

Knowledge distillation is a model compression technique in which a large, pre-trained “teacher” model transfers its learned behavior to a smaller “student” model. Instead of training solely on ground-truth labels, the student is trained to mimic the teacher’s predictions, capturing not just final outputs but the richer patterns embedded in its probability distributions. This approach enables the student to approximate the performance of complex models while remaining significantly smaller and faster. Originating from early work on compressing large ensemble models into single networks, knowledge distillation is now widely used across domains like NLP, speech, and computer vision, and has become especially important in scaling down massive generative AI models into efficient, deployable systems.

Knowledge Distillation: From Ensemble Teacher to Lean Student

Setting up the dependencies

pip install torch scikit-learn numpy

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import numpy as np

torch.manual_seed(42)
np.random.seed(42)

Creating the dataset

This block creates and prepares a synthetic dataset for a binary classification task (like predicting whether a user clicks an ad). First, make_classification generates 5,000 samples with 20 features, of which some are informative and some redundant, to simulate real-world data complexity. The dataset is then split into training and testing sets to evaluate model performance on unseen data. Next, StandardScaler normalizes the features so they have a consistent scale, which helps neural networks train more efficiently. The data is then converted into PyTorch tensors so it can be used in model training. Finally, a DataLoader is created to feed the data in mini-batches (size 64) during training, improving efficiency and enabling stochastic gradient descent.

X, y = make_classification(
    n_samples=5000,
    n_features=20,
    n_informative=10,
    n_redundant=5,
    random_state=42,
)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Convert to tensors
X_train_t = torch.tensor(X_train, dtype=torch.float32)
y_train_t = torch.tensor(y_train, dtype=torch.long)
X_test_t = torch.tensor(X_test, dtype=torch.float32)
y_test_t = torch.tensor(y_test, dtype=torch.long)

train_loader = DataLoader(
    TensorDataset(X_train_t, y_train_t), batch_size=64, shuffle=True
)

Model Architecture

This section defines two neural network architectures: a TeacherModel and a StudentModel. The teacher represents one of the large models in the ensemble: it has multiple layers, wider dimensions, and dropout for regularization, making it highly expressive but computationally expensive during inference. The student model, on the other hand, is a smaller and more efficient network with fewer layers and parameters. Its goal is not to match the teacher’s complexity, but to learn its behavior through distillation. Importantly, the student still retains enough capacity to approximate the teacher’s decision boundaries; too small, and it won’t be able to capture the richer patterns learned by the ensemble.

class TeacherModel(nn.Module):
    """Represents one heavy model inside the ensemble."""

    def __init__(self, input_dim=20, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)


class StudentModel(nn.Module):
    """
    The lean production model that learns from the ensemble.
    Two hidden layers: enough capacity to absorb distilled knowledge,
    still ~30x smaller than the full ensemble.
    """

    def __init__(self, input_dim=20, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        return self.net(x)

Helpers

This section defines two utility functions for training and evaluation. train_one_epoch handles one full pass over the training data. It puts the model in training mode, iterates through mini-batches, computes the loss, performs backpropagation, and updates the model weights using the optimizer. It also tracks and returns the average loss across all batches to monitor training progress. evaluate measures model performance: it switches the model to evaluation mode (disabling dropout and gradients), makes predictions on the input data, and computes the accuracy by comparing predicted labels with true labels.

def train_one_epoch(model, loader, optimizer, criterion):
    model.train()
    total_loss = 0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    return total_loss / len(loader)


def evaluate(model, X, y):
    model.eval()
    with torch.no_grad():
        preds = model(X).argmax(dim=1)
    return (preds == y).float().mean().item()

Training the Ensemble

This section trains the teacher ensemble, which serves as the source of knowledge for distillation. Instead of a single model, 12 teacher models are trained independently with different random initializations, allowing each one to learn slightly different patterns from the data. This diversity is what makes ensembles powerful. Each teacher is trained for multiple epochs until convergence, and their individual test accuracies are printed. Once all models are trained, their predictions are combined using soft voting: averaging their output logits rather than taking a simple majority vote. This produces a stronger, more stable final prediction, giving you a high-performing ensemble that will act as the “teacher” in the next step.

print("=" * 55)
print("STEP 1: Training the 12-model Teacher Ensemble")
print("        (this happens offline, not in production)")
print("=" * 55)

NUM_TEACHERS = 12
teachers = []

for i in range(NUM_TEACHERS):
    torch.manual_seed(i)  # different init per teacher
    model = TeacherModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    for epoch in range(30):  # train until convergence
        train_one_epoch(model, train_loader, optimizer, criterion)
    acc = evaluate(model, X_test_t, y_test_t)  # held-out accuracy per teacher
    print(f"Teacher {i+1:2d} test accuracy: {acc:.4f}")
    teachers.append(model)
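The excerpt ends here, before the distillation step itself. Purely as a hedged sketch of how that step is typically implemented (the constants TEMPERATURE and ALPHA and the helper ensemble_logits below are illustrative assumptions, not the article’s code), the teachers’ logits can be averaged for soft voting, softened with a temperature, and used as KL-divergence targets blended with ordinary cross-entropy on the hard labels:

TEMPERATURE = 4.0  # assumed value; higher T spreads probability mass
ALPHA = 0.7        # assumed weight on the distillation term

def ensemble_logits(models, x):
    """Soft voting: average the raw logits of all teachers."""
    with torch.no_grad():
        return torch.stack([m(x) for m in models]).mean(dim=0)

student = StudentModel()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for epoch in range(30):
    student.train()
    for xb, yb in train_loader:
        optimizer.zero_grad()
        student_logits = student(xb)
        teacher_logits = ensemble_logits(teachers, xb)
        # Soft targets: KL divergence between the temperature-scaled
        # student log-probabilities and teacher probabilities.
        soft_loss = F.kl_div(
            F.log_softmax(student_logits / TEMPERATURE, dim=1),
            F.softmax(teacher_logits / TEMPERATURE, dim=1),
            reduction="batchmean",
        ) * (TEMPERATURE ** 2)
        # Hard targets: ordinary cross-entropy on ground-truth labels.
        hard_loss = F.cross_entropy(student_logits, yb)
        loss = ALPHA * soft_loss + (1 - ALPHA) * hard_loss
        loss.backward()
        optimizer.step()

print("Student test accuracy:", evaluate(student, X_test_t, y_test_t))

The TEMPERATURE ** 2 factor follows the standard distillation formulation: it keeps the gradient scale of the softened term comparable to that of the hard-label term as the temperature grows.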



The Art of (Mis)alignment: How Fine-Tuning Methods Effectively Misalign and Realign LLMs in Post-Training

arXiv:2604.07754v1 Announce Type: cross Abstract: The deployment of large language models (LLMs) raises significant ethical and safety concerns. While LLM alignment techniques are adopted to improve model safety and trustworthiness, adversaries can exploit these techniques to undermine safety for malicious purposes, resulting in misalignment. Misaligned LLMs may be published on open platforms to magnify harm. To address this, additional safety alignment, referred to as realignment, is necessary before deploying untrusted third-party LLMs. This study explores the efficacy of fine-tuning methods in terms of misalignment, realignment, and the effects of their interplay. By evaluating four Supervised Fine-Tuning (SFT) and two Preference Fine-Tuning (PFT) methods across four popular safety-aligned LLMs, we reveal a mechanism asymmetry between attack and defense. While Odds Ratio Preference Optimization (ORPO) is most effective for misalignment, Direct Preference Optimization (DPO) excels in realignment, albeit at the expense of model utility. Additionally, we identify model-specific resistance, residual effects of multi-round adversarial dynamics, and other noteworthy findings. These findings highlight the need for robust safeguards and customized safety alignment strategies to mitigate potential risks in the deployment of LLMs. Our code is available at https://github.com/zhangrui4041/The-Art-of-Mis-alignment.



SealQA: Raising the Bar for Reasoning in Search-Augmented Language Models

arXiv:2506.01062v4 Announce Type: replace Abstract: We introduce SealQA, a new challenge benchmark for evaluating SEarch-Augmented Language models on fact-seeking questions where web search yields conflicting, noisy, or unhelpful results. SealQA comes in three flavors: (1) Seal-0 (main) and (2) Seal-Hard, which assess factual accuracy and reasoning capabilities, with Seal-0 focusing on the most challenging questions where chat models (e.g., GPT-4.1) typically achieve near-zero accuracy; and (3) LongSeal, which extends SealQA to test long-context, multi-document reasoning in “needle-in-a-haystack” settings. Our evaluation reveals critical limitations in current models: Even frontier LLMs perform poorly across all SealQA flavors. On Seal-0, frontier agentic models equipped with tools like o3 and o4-mini achieve only 17.1% and 6.3% accuracy, respectively, at their best reasoning efforts. We find that advanced reasoning models such as DeepSeek-R1-671B and o3-mini are highly vulnerable to noisy search results. Notably, increasing test-time compute does not yield reliable gains across o3-mini, o4-mini, and o3, with performance often plateauing or even declining early. Additionally, while recent models are less affected by the “lost-in-the-middle” issue, they still fail to reliably identify relevant documents in LongSeal when faced with numerous distractors. To facilitate future work, we release SealQA at huggingface.co/datasets/vtllms/sealqa.



Constellations

I.

We had crash-landed on the planet. We were far from home. The spaceship could not be repaired, and the rescue beacon had failed. Besides me, only the astrogator, part of the captain, and the ship’s AI mind were left.

Outside, the atmosphere registered as hostile to most organisms. We huddled in the lifeboat, which was inoperable but still held air. Vast storms buffeted our cockleshell shelter, although we knew from prior readings that other areas remained calm. All that remained to us was to explore, if we wanted to live. The captain gave me the sole weapon. She tasked the astrogator with carrying some tools that would not unduly weigh him down.

Little existed on the planet except deserts of snow. But alien artifacts lay in an area near us. We were an exploration team, so this discovery had oddly comforted us, even though we had been on our way elsewhere. The massive systems failure had no discernible source, and the planet had been our only choice for landfall.

The artifacts took the form of 13 domes, spread out over that hostile terrain. The domes had been linked by cables just below shoulder level, threaded through the tops of metal posts at irregular intervals. Whether intended or not, these cables and rods formed a series of paths between the domes.

Before our instruments failed, the AI had reported that the domes appeared to have a heat signature. The cables pulsed under our grip in a way that teased promised warmth far ahead. It took some time to get used to the feeling. The shortest path between domes was a thousand miles long. The longest path was ten thousand miles long.

Our suit technology was good: A suit could recycle water, generate food, create oxygen. It could push us into various states of near hibernation while motors in the legs drove us forward. For the captain, the suit would compensate for having lost her legs and ease her pain. We estimated we could reach the nearest path and follow it to the nearest dome … and that was it. If the dome had life support capabilities, or even just a way to replenish our suits, we would live. Otherwise, we would probably die.

We revised the estimate of our survival downward when we reached the path and soon encountered the skeletons of dead astronauts littering the way. In all shapes and sizes, cocooned within their suits. Their huddled forms under the snow displayed a serenity at odds with their fate. But when I wiped the frost from face plates, we saw the extremity of their suffering. It is difficult to explain how we felt walking among so many fatalities. So many dead first contacts.

We no longer had to puzzle over the systems failure. Spaceships came here to crash, and intelligent entities came here to die, for whatever reason. We could not presume our fate would be any different, and adjusted our expectations accordingly. The AI’s platitudes about courage did not raise morale. There were too many lost there in the frozen wastes.

The number of the bodies and their haphazard positioning hampered our ability to make progress to the dome. The AI estimated our chances of survival at below 50% for the first time. We would starve in our suits as the motors propelled us forward. We would become desiccated and exist in an elongation of our thoughts that made us weak and stupid until the light winked out. But still, we had no choice.

So even in places where the dead in their suits were piled high, we would simply plunge forward, over and through them, headed for the dome.

What we would find there, as I have said, we did not know. But we were in an area of the galaxy where ancient civilizations had died out millions of years ago. We had been on our way to a major site, an ancient city on a moon with no atmosphere in a wilderness of stars.

Although our emotions fluctuated, a professional awe and curiosity about the dead eventually came over us. This created much debate over the comms. We had made a discovery for the ages, but our satisfaction was bittersweet. Even if we lived longer than expected, we would never return home, never see our friends or family again. The AI might continue on after we were dead, but I doubt it envied being the one to report on our discovery centuries hence. And to who?

Here were the ghastly emissaries of hundreds of spacefaring species we had never before encountered. Their suits displayed an extraordinary range, although our examination was cursory. Some even appeared to be made out of scales and other biological substances from their home worlds, giving us further clues as to their origins.

The burial of the suits by snow and the lack of access to anything other than a screaming face or faces, often distorted by time and ice, worked against recording much usable data. This issue was compounded in those cases where the suit was part of the organism and they had not needed any “artificial skin,” as the AI put it, to survive harsh conditions. That many had died despite appearing well-prepared for the planet’s environment sobered us up even before our own suits dispensed drugs to help our mental states.

After a time, each face seemed to express some aspect of our own stress and terror at the seriousness of our situation. After a time, the sheer welter of detail defeated us and caused us extreme distress. The captain made the observation that even one instance of alien contact might cause physiological and mental conditions, including anxiety, stress, fatigue. Here, we were constantly encountering the alien dead of what seemed at times an infinite number of civilizations.

We stopped recording. We recommitted ourselves to the slog toward the nearest dome.

The captain’s

