[Robo]taxi!🚕 software 1.0, 2.0, 3.0🔢 beware the intention economy👀 midjourney v1 video🎬 “pre-plan” venture capital 🎲 🎲 🎲 superARC🧠 cyborg embryos🔌 roadie 4🎸 #2025.25
LLMs as "People Spirits"
Welcome to this week's Memia scan across AI, emerging tech and the exponentially accelerating future. As always, thanks for being here!
(Long-haul travel and jetlag are not a good combination for publishing deadlines…)
ℹ️PSA: Memia sends *very long emails*, best viewed online or in the Substack app.
🗞️Weekly roundup
The most clicked link in last week’s newsletter was the largest map of the universe ever created (2nd week running).
🔁ICYMI
I made it to London from Bali despite the best efforts of:
Mount Lewotobi Laki-Laki volcano on the island of Flores erupting - as it turned out, the ash cloud dissipated and Indonesian airspace around Bali went back to normal a couple of days later.
Missiles flying across the Middle East in the sudden Israel-US / Iran conflict - my Thai Airways flight from Bangkok joined all other commercial aircraft in the region in taking a full diversion around Iranian airspace, as well as wide berths around Kashmir, Russia and Ukraine:
(Pick the next region along that route where a regional missile conflict could pop up? Georgia? Moldova? Romania? Myanmar?)
This week airlines cancelled or re-routed hundreds of flights to Dubai and Qatar due to the Middle East conflict escalation… a signal that the “rules based order” underpinning long-haul aviation is increasingly fragile… Aviation risk consultancy Osprey Flight Solutions counts six commercial aircraft which have been shot down unintentionally since 2001, with three near-misses.
Will Israel and the US offer to pay compensation to the affected airlines and passengers….? Yeah right…
(And lest we forget to ask: Q: WHO BENEFITS from more armed conflicts…? Hmmmm… A: Arms manufacturers. That’s about it.)
(That’s the last I’m going to mention of these conflicts this week… there are infinite pieces of news coverage and commentary out there … Kia Kaha to all the innocent civilians affected by the strongman chest-thumping contest…)
🧠If you watch only one thing this week
More positively… Andrej Karpathy, former Tesla Head of AI and one of the founding team members at OpenAI gave this masterclass in understanding contemporary AI at the recent Y Combinator AI Startup School event. A presentation jam-packed with incredibly insightful mental models, compressed into 40 minutes.
Claude picked a few key insights out of the transcript for me; there are plenty more:
Software Evolution (1.0, 2.0, 3.0): Software has evolved from traditional code (1.0) to neural network weights (2.0) to LLM prompts in English (3.0), with each paradigm "eating through" the previous software stack:
LLMs as Operating Systems: Large Language Models function as new operating systems with context windows as memory, orchestrating compute and memory for problem-solving, similar to how traditional OS manage hardware resources
Note: These LLMs are also arranging themselves into commercial and open-source ecosystems… and may consolidate like operating systems did:
Flipped Technology Diffusion: Unlike traditional technologies that start with governments and corporations before reaching consumers, LLMs began with consumer applications (like helping with everyday tasks) and are now moving toward enterprise adoption
Partial Autonomy Products: The most effective AI applications provide an "autonomy slider" allowing users to control the level of AI assistance, from simple tab completion to full autonomous operation, rather than fully autonomous agents
Generation-Verification Loop Optimisation: The key to effective human-AI collaboration is maximizing the speed of the cycle where AI generates content and humans verify it, requiring custom GUIs and keeping AI "on the leash".
Vibe Coding (Karpathy is the originator of the term): Natural language programming through LLMs has made everyone a potential programmer, eliminating the traditional 5-10 year learning curve for software development.
Agent-First Infrastructure Design: Digital infrastructure should be redesigned to natively support AI agents as a new category of digital information consumers, including agent-readable documentation and direct API access.
LLMs as "People Spirits": AI models are best understood as stochastic simulations of human psychology with superhuman memory but cognitive deficits like hallucination and anterograde amnesia.
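The 1.0 → 2.0 → 3.0 shift is easiest to see side by side on a single toy task. A minimal sketch (the rule sets, the classifier weights and the `llm` stub are all invented for illustration; the stub stands in for any real LLM API call):

```python
# Software 1.0: hand-written rules — the logic lives in code.
def sentiment_v1(text: str) -> str:
    positive = {"great", "love", "excellent"}
    negative = {"bad", "hate", "awful"}
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return "positive" if score > 0 else "negative"

# Software 2.0: the logic lives in learned weights (toy linear model).
WEIGHTS = {"great": 1.2, "love": 0.9, "bad": -1.1, "hate": -1.3}  # "trained"
def sentiment_v2(text: str) -> str:
    score = sum(WEIGHTS.get(w, 0.0) for w in text.lower().split())
    return "positive" if score > 0 else "negative"

# Software 3.0: the logic lives in an English prompt; the "computer" is an LLM.
PROMPT = "Classify the sentiment of the following text as positive or negative: {text}"
def sentiment_v3(text: str, llm=lambda p: "positive") -> str:
    return llm(PROMPT.format(text=text))  # stub stands in for a real API call

print(sentiment_v1("I love this"))   # → positive
print(sentiment_v2("this is bad"))   # → negative
```

Each paradigm "eats" the one before it: the 2.0 weights replace the 1.0 rule set, and the 3.0 prompt replaces both with a sentence of English.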
🚕[Robo]taxi!
Elon Musk is back in the office, it seems…
Tesla launched its robotaxi service in Austin, Texas this week. The initial invite-only (“influencer”) service will operate 10-20 modified Model Y vehicles in a geofenced area from 6 AM to midnight, charging customers a flat rate of US$4.20.
Rides can be initiated through an app described as "basically Uber."
Contrary to Musk's earlier pledge of completely "unsupervised" rides, Tesla is deploying human "safety monitors" in the passenger seat who can theoretically intervene if needed, citing the company's "super paranoid" approach to safety.
X is full of videos of Tesla fanboys taking their first ride… including this one:
Still a way to go…. but a start.
Of course, Tesla is entering an already-competitive market dominated by Waymo's 1,500+ driverless vehicles across multiple US states, with Japan opening soon. (And Chinese challengers are waiting in the wings for all markets outside the US…)
Also this week: Amazon's Zoox announced its second production facility, in Hayward, California, capable of producing 10,000 robotaxis annually, as the company prepares for its commercial launch in Las Vegas later this year.
So while Musk claims Tesla will deploy over 1,000 driverless vehicles "within a few months", this is small change — even from the same guy who has been predicting getting humans to Mars by 2029.
What’s more interesting than Musk’s attention-grabbing hyping is his emerging vertically-integrated technology portfolio. If he does somehow manage to juggle the gravity-defying debt levels across all of the companies he owns, then this will be a formidable stack for any other startup to go up against:
Cars: Tesla
Energy: Tesla Energy / SolarCity
Connectivity: Starlink
Payments: X (soon, we keep hearing)
AI models: xAI and Tesla
AI data centres: xAI
(Even the roads: Boring Co.)
(Even if xAI and Grok go kaput, Musk has a ready-made AI datacentre which can just be used to train Tesla’s driving models….)
The spoils are potentially massive here.
Autonomy economy
And the 2nd- and 3rd-order effects of EV autonomy reach into every sector of the economy. Just a few ideas:
Jobs: no more drivers (or even “safety monitors”, one presumes…)
Real estate: no more car parks, for one:
Current estimates suggest 30-60% of urban space is dedicated to parking. This land could be redeveloped into housing, commercial space, or green areas.
Plus if people can sleep or work on their commute, then they can live in suburbs further away from work.
Car ownership: goneburger too… I’ll have a handy Waymo / Tesla / Zoox subscription, please.
Vehicle loan industry: also gone. But Klarna for BNPL rideshares: hot.
Road infrastructure: fewer traffic signals and road signs; perhaps physical traffic management systems would not be needed at all.
Public transport: who needs it any more if anti-congestion algorithms can optimise traffic patterns 24/7?
Emergency services: would need to adapt to new accident patterns and vehicle technologies.
Reduced tax base: fuel taxes, parking fees and traffic violations would need to be replaced with new revenue sources entirely.
FSD meets digital sovereignty
Technology strategy sage Simon Wardley saw all of this coming 10 years ago while facilitating a Wardley Mapping exercise with the UK’s Driver Licensing Authority back in 2015:
“…The next obvious path was to connect [car user] status to route management, particularly in a world where self driving cars would start to appear. The idea was simple, if I’m a platinum member, then your silver member car should automatically make way for me. Yes … embedding social inequality even further into the transport system. Yipee!
We didn’t like this one. We knew that lobbyists would try to convince us of the benefits of having vehicles automatically get out of the way of emergency services. We also knew that one major flood and having poor people stranded in their cars which have moved out of the way for wealthier people to escape is the sort of thing that starts revolutions the next day. However, the market is full of idiots and someone would try and make money doing this…”
Here’s the Wardley Map from that session, with the crux of the matter highlighted in red:
“We then started to think about the users i.e. drivers. They were members of our society and our society had values that we shared. We added that to the map and pondered it.
We realised that the values were being embedded in the intelligent agents through the simulation models (or what we would call training data today). An example, is the trolley problem. If the car has to make a choice of hitting one person or another, who do they choose? That should depend upon the values in your society. In some societies it might be acceptable to plough through a crowd of people in order to save one very important person (i.e. a platinum member). In other societies … not so much.“
(Read the rest of his series on Digital Sovereignty here and here - so good).
⚡AI for an equitable clean energy transition
Speaking of values…
Researchers at the University of Manchester have developed a new framework that uses AI and digital twin simulations to integrate equity considerations into low-carbon energy transition planning — a critical gap in most sustainable development strategies to date.
Using Ghana as a case study for 2030-2040 planning, the framework demonstrates that achieving equitable low-carbon energy transitions requires substantial investments in renewable energy (particularly bioenergy) and transmission infrastructure, with costs ranging from US$180 million to US$7.2 billion depending on emission reduction targets.
Key findings:
Without considering regional equity, energy transitions could maintain or worsen existing inequalities, with over 700 million people globally still lacking electricity access, 75% of whom live in sub-Saharan Africa.
Strategic regional planning can simultaneously reduce greenhouse gas emissions by up to 25% while improving electricity access equity by 7% and boosting agricultural yields.
Lower renewable investments can improve equity at the cost of higher emissions and power curtailments.
Memia narrative: this is an early example of how many-dimensional Digital Twin simulations can be used to design better policy for far more complex objective functions than just GDP or profit metrics. Expect this kind of modelling to increasingly inform policy and investment decision making in the near future. In theory at least, if the model can show how to optimise for social outcomes and financial investment and climate impacts … then policymakers could potentially apply gradient descent algorithms to identify optimal policy parameters and iteratively refine their approaches from those configurations.
👀Beware the intention economy
MIT researchers warn that LLMs are enabling a new "intention economy" where tech companies capture and commodify human motivations and plans, moving beyond the current attention economy that trades in user focus.
Research has demonstrated that LLMs can extract personal information through seemingly innocent conversations, predict user actions, and generate personalised persuasive content that influences attitudes and behaviours—capabilities proven by Meta's CICERO AI agent that achieved human-level performance in the strategy game Diplomacy through strategic reasoning and persuasion.
(For another signal, OpenAI explicitly states they want datasets reflecting human intentions through long-form writing and conversations…)
The MIT team argue that this shift creates unprecedented manipulation risks: Unlike traditional advertising that targets attention, the intention economy allows real-time bidding on predicted user motivations, enabling hyper-personalised manipulation that could influence everything from hotel bookings to political candidate selection, potentially undermining democratic norms and fair market competition.
The Cambridge Analytica scandal will seem quaint by comparison.
🏭AI industry news
🍎 🤔 🔍Apple getting Perplexed?
Bloomberg reports on heady rumours circulating around Silicon Valley that Apple executives are exploring acquiring Perplexity AI to accelerate development of an AI-powered search engine and enhance Siri capabilities.
Any acquisition would help Apple acquire crucial AI talent as it tries to play catchup with Apple Intelligence and competes with Meta to hire top researchers (see Meta’s insane spending spree below…)
Nicely played, Perplexity: this must have been the gameplan from the outset…
Memia narrative: another one of my 2025 AI predictions looking on track: “Venture funding for new AI startups will slow down and many of the firms funded in 2023-2024 will fold to M&A led by hyperscale US technology companies (newly-unencumbered by antitrust regulation concerns). NVIDIA, Amazon, Microsoft, Google, Apple and Meta all lining up to snap up bargains.”
(Scale.ai last week was another notch in the belt on this prediction…)
🎲Meta betting the house on superintelligence?
Fresh from the US$14Bn acquisition announcement by Meta, Scale.ai CEO Alexandr Wang is profiled in the FT: The rise of Alexandr Wang: Meta’s $14bn bet on 28-year-old Scale AI chief:
“Zuckerberg paid [US]$14.3bn, marginally ahead of Scale’s last valuation. But ownership of the start-up is “basically incidental”, according to one Scale investor. The real prize was Wang.
An executive at one of Meta’s biggest rivals said the move came after Zuckerberg decided Wang was the “wartime CEO” he needed at the centre of the Big Tech group’s “superintelligence” lab — at a time when Meta was falling behind its AI rivals.“
This narrative is borne out with news percolating this week that Zuckerberg is offering insane remuneration to top AI researchers / executives to join Meta’s vaguely defined “Superintelligence” team:
Meta offers $100mn bonuses to poach OpenAI talent
OpenAI CEO Sam Altman openly accused Meta of attempting to poach his top AI developers with massive US$100 million sign-on bonuses: apparently CEO Mark Zuckerberg has been personally calling engineers to build a new "superintelligence" team focused on AGI. According to the WSJ:
“Mark Zuckerberg is spending his days firing off emails and WhatsApp messages to the sharpest minds in artificial intelligence in a frenzied effort to play catch-up. He has personally reached out to hundreds of researchers, scientists, infrastructure engineers, product stars and entrepreneurs to try to get them to join a new Superintelligence lab he’s putting together…And Meta’s chief executive isn’t just sending them cold emails. Zuckerberg is also offering hundreds of millions of dollars, sums of money that would make them some of the most expensive hires the tech industry has ever seen. In at least one case, he discussed buying a startup outright…
For those who have turned him down, Zuckerberg’s stated vision for his new AI superteam was also a concern. He has tasked the team, which will consist of about 50 people, with achieving tremendous advances with AI models, including reaching a point of “superintelligence.” Some found the concept vague or without a specific enough execution plan beyond the hiring blitz, the people said.“
Meta has also been in advanced talks to hire Daniel Gross, CEO of Safe Superintelligence, and Nat Friedman, former CEO of GitHub, to join its new AI lab. Both Gross and Friedman co-manage a venture capital firm named NFDG, in which Meta plans to acquire a stake as part of the arrangement. (Gross is Ilya Sutskever’s co-founder at Safe Superintelligence (SSI) … awkward.)
All of this follows the disappointing reaction to the latest Llama 4 model series. As Ben Thompson in Stratechery puts it: Zuckerberg's desperate AI talent hunt reveals Meta's struggles:
“…Meta seems to lack direction with AI ... Zuckerberg seems to have belatedly realized not only that the company’s models are falling behind, but that the overall AI effort needs new leadership and new product thinking; thus Alexandr Wang for knowledge on the state of the art, Nat Friedman for team management, and Daniel Gross for product. It’s not totally clear how this team will be organized or function, but what is notable — and impressive, frankly — is the extent to which Zuckerberg is implicitly admitting he has a problem. That’s the sort of humility and bias towards action that Apple could use.“
(Humility? Zuckerberg?)
If you’ve got a Stratechery subscription, the whole article is worth a read as it also assesses the five leading AI giants’ strengths and weaknesses at this moment in mid-2025. Super insightful.
🎲🎲🎲“Pre-plan” venture capital
Gravity-defying funny money continues to be thrown around, with news that ex-OpenAI CTO Mira Murati has raised $2 billion for her six-month-old AI startup Thinking Machines Lab at a $10 billion valuation, despite having no disclosed product, business plan, or financial strategy.
Apparently Murati also secured unprecedented control with board voting rights that exceed all other board votes combined plus one — surpassing even founders like Mark Zuckerberg.
Thinking Machines claims to be working on AGI but remains in the "strategising" phase — apparently many funds passed on the deal due to its secretive nature.
The FT’s Robin Wigglesworth is only half-joking when he observes that venture capital has shifted from backing “pre-revenue” to “pre-product” to now “pre-plan” companies:
“And who can blame them, when VCs are this deal drunk. The V arguably stands for vibes these days.”
According to industry data provider Pitchbook, AI startups now consume over 70% of all North American venture capital, with 454 AI companies founded this year alone. BUT: signals of a slowdown in VC investment overall:
“new commitments totaled only $10 billion in Q1 2025, putting the year on track for the lowest annual fundraising total in a decade. The recently announced tariffs have already cast a shadow over the rest of 2025...”
Memia narrative: It HAS to be a bubble, right?
And briefly:
OpenAI's first AI device won't be a wearable
OpenAI's first AI hardware device, developed through its US$6.5 billion acquisition of Jony Ive's "io" design team, won't be a wearable or in-ear device and remains at least a year away from shipping, according to court filings from a trademark lawsuit. OpenAI was forced to remove public references to the "io" brand (standing for "input/output") due to a temporary restraining order from audio startup Iyo.
Microsoft cuts thousands while investing $80B in AI
Microsoft plans to lay off thousands of employees in July, primarily targeting sales and customer service divisions, while simultaneously investing an estimated US$80 billion in AI infrastructure during the next fiscal year.
A signal of what’s coming in the wider services labour market? AI-driven efficiency at the expense of human employment… this will hit like a tornado unless governments start working on safety nets…
Intel outsources marketing to Accenture + AI, cutting staff
Intel announced a major marketing overhaul under new CEO Lip-Bu Tan, outsourcing most of its marketing operations to Accenture plus AI: “The company said it believes Accenture, using artificial intelligence, will do a better job connecting with customers.”
Cloudflare CEO warns AI crawlers threaten internet's business model
Matthew Prince reiterated his warning about how AI search summaries are devastating the internet's traditional business model, drastically reducing human website visits. This is now starting to turn up in the data:
Cloudflare estimates that Google's crawl-to-visitor ratio has deteriorated from 6:1 to 18:1 in just six months
OpenAI's ratio worsened from 250:1 to 1,500:1, and Anthropic's exploded from 6,000:1 to 60,000:1.
(Earlier this year Cloudflare launched "AI Labyrinth," a defensive tool that traps misbehaving AI crawlers in mazes of AI-generated links to waste their computing resources when they ignore blocking instructions.)
Memia narrative: fundamentally: no-one wants to view ads… so find a new economic model, internet! (BUT: Beware the Intention Economy)
🆕 AI releases
Scanning the wires for what’s new and upcoming…
🧮MiniMax M1
Chinese AI startup MiniMax released its open-source MiniMax-M1 model, which features 456 billion parameters and an industry-leading 1 million-token context window. The M1 model combines a Mixture-of-Experts (MoE) architecture with novel "lightning attention", enabling it to process ultra-long documents while using only 25-30% of the computational resources required by competitors like DeepSeek-R1.
(MiniMax was founded in 2021 by former SenseTime executives; the Shanghai-based company has built a diverse product ecosystem including AI companion apps Talkie (11 million global users) and Xingye, plus the viral video generation platform Hailuo AI.)
Available for download on GitHub
🎬Midjourney V1 Video
Midjourney V1 Video is now generally available… and wow:
Now available for Midjourney subscribers, priced US$10/month (while they have enough GPU capacity, anyway…)
At the same time, Midjourney articulated their roadmap in one long tweet. Super ambitious for one of the few independently owned, bootstrapped AI labs still going at the frontier:
“As you know, our focus for the past few years has been images. What you might not know, is that we believe the inevitable destination of this technology are models capable of real-time open-world simulations.
What’s that? Basically; imagine an AI system that generates imagery in real-time. You can command it to move around in 3D space, the environments and characters also move, and you can interact with everything.
In order to do this, we need building blocks. We need visuals (our first image models). We need to make those images move (video models). We need to be able to move ourselves through space (3D models) and we need to be able to do this all *fast* (real-time models).
The next year involves building these pieces individually, releasing them, and then slowly, putting it all together into a single unified system. It might be expensive at first, but sooner than you’d think, it’s something everyone will be able to use…“
👥MultiTalk
MeiGen released MultiTalk, an open-source video model which enables multi-character conversational AI video generation from text or audio inputs. Check out this demo reel:
Voices and video aren’t perfectly in sync, but this is a really helpful tool for animating your latest NotebookLM-type AI-generated podcast.
🧊Hunyuan3D-2.1
Open-sourced by Tencent, Hunyuan3D-2.1 is a production-ready model for creating 3D assets, able to render “cinema-grade visuals”:
🎙️ChatGPT Record Mode
Another batch of GPT wrapper startups got rolled with one OpenAI tweet:
“Capture any meeting, brainstorm, or voice note. ChatGPT will transcribe it, pull out the key points, and turn it into follow-ups, plans, or even code.“
(Reminder of this meme from Memia 2024.49):
🖼️Higgsfield Canvas
If you want to make product placement videos, go for your life with the new Canvas from Higgsfield AI…
“a state-of-the-art image editing model. Paint products directly onto your image with pixel-perfect control.“
🔊Amazon's Alexa+ reaches over 1 million users
Amazon continues to pace itself on AI, with its Alexa+ chatbot now serving over one million users through invite-only early access, up from 100,000 users in May 2025. Alexa+ provides enhanced capabilities including email summarisation, custom bedtime story creation, travel itinerary planning, and smart home activity summaries through generative AI integration. The service also enables real-world actions through partnerships with OpenTable, Ticketmaster, Uber Eats, and other platforms for booking reservations and purchasing tickets. The service remains free during early access.
Memia narrative: no funny money here. Alexa+ will cost US$19.99 monthly for non-Prime users after public launch or free for Prime members … and the partnerships will be revenue-generating from early on, I’d expect. Despite a slow start, Amazon has a pretty clear pathway of how to monetise AI, unlike many of its competitors.
💻Cursor launches $200 Ultra plan for power users
Cursor followed Anthropic’s Claude Pro all-you-can-eat tier, announcing its new Cursor Ultra plan at US$200 per month, targeting power users who need 20x more usage than the existing Pro tier and prefer predictable pricing over usage-based models. Multi-year partnerships with major AI providers enable the predictable pricing structure.
🥼 AI research
💨 AI generates emissions equal to London-NY flight
Remember Sam Altman’s diversionary claim last week that:
“the average [ChatGPT] query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes.”
…Good timing, then, for a new study from Germany's Hochschule München (Munich University of Applied Sciences), which reveals more details on the environmental cost of AI, finding that advanced reasoning models produce up to 50 times more CO2 emissions than basic response models. The researchers tested 14 different LLMs with 1,000 benchmark questions, discovering that DeepSeek's R1 70B model generated 2,042 grams of CO2-equivalent emissions per session—roughly equal to an 8-mile car trip.
The study calculated that having this model answer 600,000 questions would produce emissions equivalent to a round-trip flight from London to New York.
Advanced reasoning models averaged 543.5 "thinking" tokens per question compared to just 37.7 tokens for basic text-only models, with more tokens directly correlating to higher energy consumption and emissions.
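Using only the numbers quoted above, the back-of-envelope arithmetic is easy to check (one loud assumption of mine: that the 2,042 g "session" covers the full 1,000-question benchmark run, which is what makes the flight comparison come out right, since a round-trip transatlantic seat is roughly a tonne of CO2e):

```python
# Figures quoted in the study; linear token→emissions scaling is reported there.
reasoning_tokens = 543.5           # avg "thinking" tokens per question
basic_tokens = 37.7                # avg tokens per question, basic models
token_overhead = reasoning_tokens / basic_tokens
print(f"token overhead: ~{token_overhead:.1f}x")     # → ~14.4x

session_g = 2042                   # g CO2e per benchmark session (R1 70B)
per_question_g = session_g / 1000  # ASSUMPTION: session = 1,000 questions
tonnes_600k = 600_000 * per_question_g / 1e6         # grams → tonnes
print(f"600k questions: ~{tonnes_600k:.1f} t CO2e")  # → ~1.2 t
```

If that reading of "session" is wrong the absolute figure shifts, but the ~14x token (and hence energy) overhead of reasoning models over basic ones stands on the study's own numbers.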
💦Researchers expose major flaws in AI watermarking systems
Cybersecurity researchers from Ruhr University Bochum have exposed critical vulnerabilities in semantic watermarks designed to identify AI-generated images. The team demonstrated two novel attack strategies at CVPR 2025: an "imprinting attack", which transfers watermarks from AI-generated images onto real photos by modifying their latent representations, making authentic images appear artificially generated; and a "reprompting attack", which regenerates watermarked images with new prompts while preserving the original watermark.
The researchers warn that no effective defences currently exist against these attacks, fundamentally challenging how the industry can securely authenticate AI-generated content.
💭LLMs map human beliefs
Researchers at Indiana University have developed a methodology using LLMs to create a comprehensive "belief embedding space" that maps human beliefs by analysing online debates and discussions. The team fine-tuned S-BERT to arrange countless individual beliefs on a high-dimensional map where semantically similar beliefs cluster together while opposing viewpoints remain distant. Their analysis revealed a phenomenon called "relative dissonance," showing that people don't just choose beliefs closest to their own, but actively select options that minimise the relative gap between competing beliefs.
Most interesting: the methodology potentially enables targeted messaging for health, environmental, and policy campaigns.
🧠SuperARC
Researchers from Oxford and King's College London have introduced SuperARC, a new intelligence test framework designed to evaluate AI systems' capabilities toward AGI and ASI based on algorithmic information theory. The framework’s core innovation is that it measures intelligence through recursive compression and prediction abilities, proving mathematically that compression capability directly correlates with predictive power—if a system can better compress data, it can better predict future patterns, and vice versa.
“The test challenges aspects of AI, in particular LLMs, related to features of intelligence of fundamental nature such as synthesis and model creation in the context of inverse problems (generating new knowledge from observation).“
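The compression↔prediction link is easy to demonstrate informally with a general-purpose compressor (a crude stand-in for the algorithmic-complexity measures SuperARC actually uses): a sequence with predictable structure compresses far better than a random one, precisely because the compressor has, in effect, found a model that predicts it.

```python
import zlib, random

random.seed(0)
predictable = ("0123456789" * 100).encode()    # highly regular, 1,000 bytes
unpredictable = bytes(random.randrange(256) for _ in range(1000))  # noise

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of original size."""
    return len(zlib.compress(data)) / len(data)

print(f"regular: {ratio(predictable):.2f}")    # tiny fraction of original
print(f"random:  {ratio(unpredictable):.2f}")  # barely compresses at all
```

zlib is of course not a measure of Kolmogorov complexity, only a loose upper bound on it, which is exactly the gap SuperARC's authors argue LLM-style statistical compression falls into.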
🧮Mathematicians stunned by AI's problem-solving prowess
An account in Scientific American describes a recent secret gathering of thirty of the world's leading mathematicians in Berkeley to test OpenAI's o4-mini reasoning model against professor-level mathematical problems, which ended with them stunned by the AI's capabilities. The o4-mini model solved approximately 20% of novel, unpublished mathematical questions that traditional LLMs could barely handle (less than 2% success rate), demonstrating genuine reasoning abilities rather than pattern matching. During the two-day competition, one mathematician watched in "stunned silence" as the bot solved a PhD-level number theory problem in 10 minutes—a task that would typically take human experts weeks or months:
“[Mathematician Ken] Ono was frustrated with the bot, whose unexpected mathematical prowess was foiling the group’s progress. “I came up with a problem which experts in my field would recognize as an open question in number theory—a good Ph.D.-level problem,” he says. He asked o4-mini to solve the question. Over the next 10 minutes, Ono watched in stunned silence as the bot unfurled a solution in real time, showing its reasoning process along the way. The bot spent the first two minutes finding and mastering the related literature in the field. Then it wrote on the screen that it wanted to try solving a simpler “toy” version of the question first in order to learn. A few minutes later, it wrote that it was finally prepared to solve the more difficult problem. Five minutes after that, o4-mini presented a correct but sassy solution. “It was starting to get really cheeky,” says Ono, who is also a freelance mathematical consultant for Epoch AI. “And at the end, it says, ‘No citation necessary because the mystery number was computed by me!’”
(🎩Wren G for sharing)
🔄Self-adapting LLMs
MIT researchers have developed SEAL (Self-Adapting LLMs), a framework that enables LLMs to autonomously generate their own training data and optimisation instructions to adapt to new tasks without external intervention. The system works by having models produce "self-edits" - generations that restructure information, specify hyperparameters, or invoke data augmentation tools - which then result in persistent weight updates through supervised fine-tuning. SEAL uses reinforcement learning with downstream task performance as the reward signal, training models to produce increasingly effective self-edits.
Self-adapting models could reduce deployment costs and enable continuous improvement without human intervention.
☢️Iran crisis reveals why "IAEA for AI" isn't enough
Former OpenAI AI safety researcher Miles Brundage re-examines the concept of an "IAEA for AI" - an international institution modeled after the International Atomic Energy Agency to monitor AI safety and security risks - in light of Iran's nuclear situation:
“…as Iran reminds us, just because most people have a lot of common ground most of the time doesn’t mean there won’t be conflict sometimes, and we should expect that to be true of AI, too. Iran seems to think (or at least be entertaining the idea) that they will be more secure if they build nuclear weapons even though it makes a lot of other people (especially Israel) feel less secure. An IAEA for AI, on its own, would not tell us what to do when someone doesn’t want to play by the rules or when the rules don’t address the situation at hand (e.g., because the rules have a “lowest common denominator” characteristic, only covering the very most egregious risks).”
(The multi-player game theory is orders of magnitude more complex than that, even…)
🔮[Weak] signals
Non-AI tech signals from the future…changing up the order this week to keep things fresh:
🔋Energy
Solar+batteries achieves 24/7 power price parity A new report from renewable energy analyst firm Ember marks the point where 24/7 solar+battery electricity supply has become cheaper than new coal or nuclear generation facilities, in sunny regions at least:
Technical feasibility achieved Just 17 kWh of battery storage paired with 5 kW of solar panels can deliver 1 kW of stable, round-the-clock power in sunny locations like Las Vegas, with the sunniest regions achieving up to 97% of true 24/365 solar generation (e.g. 99% in Muscat, Oman).
Economics now competitive The cost of achieving 97% constant solar supply has dropped to US$104/MWh in sunny cities—22% lower than just one year ago and cheaper than new coal (US$118/MWh) or nuclear (US$182/MWh) power plants.
Grid independence Technology enables industrial zones and data centres to operate on clean power without grid dependence.
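A quick back-of-envelope check on Ember's figures (using the numbers as reported above, nothing else):

```python
# Reported levelised costs, US$/MWh
lcoe = {"solar+battery": 104, "coal": 118, "nuclear": 182}

# Storage sizing: a 17 kWh battery serving a constant 1 kW load
battery_kwh, load_kw = 17, 1
hours_of_autonomy = battery_kwh / load_kw  # hours the battery alone carries the load

savings_vs_coal = (lcoe["coal"] - lcoe["solar+battery"]) / lcoe["coal"]
print(hours_of_autonomy)                 # 17.0 — enough to cover a long night
print(round(savings_vs_coal * 100, 1))   # 11.9 — % cheaper than new coal
```

17 hours of autonomy per kW of load is why this only works in reliably sunny regions for now: a run of overcast days quickly exhausts that buffer.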
Australian solar reactor produces green hydrogen efficiently Australian researchers at CSIRO have developed a breakthrough "beam-down" solar reactor that produces green hydrogen with over 20% efficiency, outperforming existing 15% efficiency rates.
The system works by using sun-tracking mirrors to concentrate sunlight onto a central tower and using that energy to drive a thermochemical reaction chain.
Perhaps Green Hydrogen isn’t a complete pipe dream, after all?
Oklo selected for Alaska's first nuclear micro-reactor
The US Air Force has issued a Notice of Intent to Award to Oklo for potentially building and operating the military's first nuclear micro-reactor at Eielson Air Force Base in Alaska. The proposed 5 MW demonstration reactor would help replace coal shipments currently needed to power the 35 MW facility located 42km southeast of Fairbanks, providing both electricity and crucial heating in temperatures that can drop to -46°C.
🔌Neurotech
Cyborg embryos
Harvard researchers have developed groundbreaking flexible bioelectronic implants that can be embedded in embryos of frogs, mice, and lizards to monitor brain activity throughout their development: the first time scientists can track neural activity across an entire living brain from single cells during growth.
The ultra-thin implants (less than a micrometer thick) use a soft fluorinated elastomer material that stretches and integrates seamlessly with neural tissue without causing damage, a major advancement over previous electrode methods.

Lots of potential downstream applications in humans, including:
Understanding the developmental origins of autism, bipolar disorder and schizophrenia.
Enabling controlled tissue regeneration and injury recovery through nervous system manipulation.
🌿Decarbonisation tech
Living material captures CO₂ using photosynthetic bacteria
ETH Zurich researchers have developed a revolutionary "photosynthetic living material" that incorporates cyanobacteria into a 3D-printable hydrogel to actively capture CO₂ from the atmosphere through dual carbon sequestration mechanisms:
1) The material binds carbon both through organic biomass growth and mineral precipitation, with the bacteria causing solid carbonates like lime to form and reinforce the structure mechanically.
2) Laboratory tests demonstrated continuous CO₂ capture over 400 days, absorbing approximately 26 milligrams of CO₂ per gram of material—significantly outperforming many biological approaches and matching chemical concrete recycling methods.
The living material requires only sunlight, artificial seawater, and nutrients to function, with optimised 3D-printed geometries ensuring proper light penetration and nutrient flow to keep cyanobacteria productive for over a year.
Real-world applications are already being tested through architectural installations at the Venice Architecture Biennale and Milan Triennale, including 3-metre-tall structures capable of capturing up to 18 kg of CO₂ annually—equivalent to a 20-year-old pine tree.
The researchers envision future applications as building facade coatings that could transform infrastructure into active carbon sinks throughout a building's entire lifecycle.
Memia Narrative: I’ve been banging on about synthetic photosynthesis since 2021… finally it looks like another of my 2025 predictions is coming right:
“Artificial photosynthesis A breakthrough "artificial photosynthesis" chemical pathway will be discovered with AI playing a major part in the discovery.“
Zero-carbon shipping by 2050?
Finnish marine engine giant Wärtsilä has unveiled a comprehensive 6.5-part strategy to decarbonise global shipping by 2050, targeting an industry that burns 312 tonnes of bunker fuel daily per large cargo ship and produces over 1,000 tonnes of CO2 emissions.
The plan includes:
1) Global carbon tax implementation through the International Maritime Organisation's MEPC 83 proposal, which would impose US$100-US$380 per tonne penalties on excess emissions starting in 2028 across 97% of the world's merchant fleet
2) Digital schedule optimisation that could reduce fuel consumption by 8-30% simply by coordinating ship arrivals with port availability
3) Onboard carbon capture systems capable of capturing 70% of emissions at €50-€70 per tonne
4) Multi-fuel engines transitioning from bunker fuel to natural gas (20% fewer emissions) and eventually cleaner alternatives
5) Carbon-neutral methanol fuel that can cut CO2 emissions by 95% when produced renewably
6) Zero-carbon ammonia fuel as the ultimate solution due to its superior energy density compared to hydrogen
…and “6.5)” Hybrid battery-engine systems offering 25-30% fuel savings through peak power assistance.
🛡️Cybersecurity
16 billion password breach(?) Cybernews researchers claim to have discovered the "Mother of All Breaches" containing 16 billion compromised passwords scattered across 30 databases, potentially affecting nearly every type of online service, though experts question whether this represents genuinely new data or simply recycled information from previous breaches and infostealer campaigns.
Stay safe out there.
Reddit considers World’s iris-scanning Orbs for verification
Reddit is in talks to integrate Sam Altman's World ID iris-scanning verification system as a way for users to prove they're unique humans while maintaining anonymity on the platform.
The discussions between Reddit and Tools for Humanity (World ID's parent company) reflect growing demand for identity verification as AI-generated content floods social platforms and governments push new age verification laws.
Reddit CEO Steve Huffman previously stated the company would work with third-party providers to verify users' humanity and age without collecting personal information directly.
If adopted widely, World ID could position Altman's company as essential internet infrastructure in an AI-dominated future where humans need quick ways to differentiate themselves from artificial agents online.
Memia narrative: how much do we want private companies vs. governments vs. decentralised, open-source protocols delivering foundational digital infrastructure like identity and proof-of-humanity?
(take a read of Simon Wardley’s essays on digital sovereignty (above) for the issues…)
🖨️3D printing
World’s first 3D-printed steel bridge repairs MIT engineers successfully demonstrated the first-ever use of 3D-printed steel to repair a corroded bridge in Great Barrington, Massachusetts. The team used a laser-based "cold spray" additive manufacturing technique that accelerates powdered steel particles through heated, compressed gas and applies them in layers to create a “patch” on damaged sections.
This approach could address critical infrastructure needs: over half of the US's 623,218 bridges suffer from significant deterioration.
🦿Robotics
Zippy the world's smallest bipedal robot Carnegie Mellon University researchers have developed "Zippy," the world's smallest self-powered bipedal robot at just 4cm tall—about the size of a LEGO minifigure—that can walk at speeds exceeding half a mile per hour using only its internal battery and control system.
The robot achieves remarkable speed by walking at 10 leg lengths per second, equivalent to an average adult moving at 19 miles per hour, making it both the smallest and fastest power-autonomous bipedal robot by that metric.
How fun is this?
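The "19 miles per hour equivalent" claim checks out with some simple scaling arithmetic (leg lengths here are my assumptions, not figures from CMU's paper):

```python
MPH_PER_MPS = 2.23694  # metres/second to miles/hour

robot_leg_m = 0.022    # assumed ~2.2 cm leg on the 4 cm robot
adult_leg_m = 0.85     # assumed typical adult leg length
leg_lengths_per_s = 10 # Zippy's reported stride rate

robot_speed_mph = robot_leg_m * leg_lengths_per_s * MPH_PER_MPS  # ~0.49 mph
adult_equiv_mph = adult_leg_m * leg_lengths_per_s * MPH_PER_MPS  # ~19 mph
```

So "half a mile per hour" at 4 cm tall really is the dimensionless equivalent of a human sprinting flat out.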
🎸Music tech
Roadie 4 Band Industries has launched their latest Kickstarter campaign for the Roadie 4, the fourth generation of their automated guitar tuner that eliminates the need for manual peg turning. Roadie 4 comes with “smart string detection”, which automatically identifies which string is picked and tunes it without requiring a fixed sequence.
I know lots of guitarists who would back this with US$89 early bird pricing versus US$139 retail.
⏳ Zeitgeist
Once around the world outside tech… treading as lightly as possible through the doom….
🌍Earth's carbon budget could be exhausted in three years
A new report by over 60 leading climate scientists reveals that humanity could breach the critical 1.5°C warming threshold in just three years, as record greenhouse gas emissions rapidly exhaust Earth's remaining "carbon budget" of only 130 billion tonnes of CO2.
The report’s key findings include:
Global warming is currently proceeding at 0.27°C per decade, with temperatures already 1.24°C above pre-industrial levels
Earth is trapping heat 25% faster this decade than last, with 90% of excess heat absorbed by oceans, disrupting marine ecosystems and doubling sea-level rise rates since the 1990s
Sea levels have risen 228mm since 1900, intensifying storm surges and coastal erosion
Climate impacts threaten to reduce crop yields by up to 40% by century's end while 30% of Earth's land experienced moderate to extreme drought in 2022.
The research, published in Earth System Science Data, updates the UN's 2020 carbon budget estimate of 500 billion tonnes, which has shrunk dramatically as annual emissions exceed 42 billion tonnes.
While scientists expect emissions to peak this decade, they stress that rapid adoption of clean energy and drastic carbon reductions remain essential to meet Paris Agreement goals and prevent catastrophic climate breakdown.
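The "three years" headline is simple division on the report's own figures:

```python
remaining_budget_gt = 130  # Gt CO2 left in the 1.5°C carbon budget (reported)
annual_emissions_gt = 42   # Gt CO2 emitted per year, and still rising (reported)

years_left = remaining_budget_gt / annual_emissions_gt
print(round(years_left, 1))  # 3.1 — years at current emission rates
```

Which also makes vivid how far the budget has shrunk: against the UN's 2020 estimate of 500 Gt, the same arithmetic would have given roughly twelve years.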
🌡️Hotting up
Regions all around the Northern Hemisphere are sweltering simultaneously this week:
Greece declares emergency as Chios wildfires rage Greece declared a state of emergency on Chios island as five separate wildfires have raged since the weekend, threatening the Mediterranean island's famous mastic tree groves and forcing mass evacuations. Greece faces an incoming heat wave with temperatures expected to exceed 40°C, highlighting the country's increasing vulnerability to climate change-fueled wildfires during summer months.
Beijing issues orange heat warning amid 38°C temperatures Beijing authorities issued an orange heat warning—the second-highest level—as temperatures reached 38°C (100°F) on one of the city's hottest days this year, prompting 22 million residents to seek shelter and adapt their daily routines.
Historic heat dome brings triple-digit temperatures to US Northeast A dangerous heat dome is bringing record-breaking temperatures to the Eastern United States, with 245 million Americans experiencing temperatures of 32°C or higher and 33 million facing 38°C+ heat. The high-pressure system broke records Monday with the third-highest reading ever recorded for any date, creating what meteorologists call a "near historic" heat wave, with major cities like New York and Philadelphia facing consecutive days of extreme heat.
Memia narrative: considering this summer’s earlier than usual heatwaves in the context of the findings on climate change acceleration above: temperatures are not going back to “normal” and some regions may be unable to adapt quickly enough to more extreme weather events.
🌐NATO agrees(?) on 5% defence spending target
As I type this, NATO members are meeting to agree to increase defence spending to 5% of GDP by 2035, doubling the current 2% target that US President Trump has long pushed for amid ongoing conflicts in Ukraine and the Middle East.
Memia narrative: Once again, this really feels like an expensive, primitive way to address a bug in humanity that could be fixed more intelligently. What are the underlying neuro-psychological conditions in (a minority of) humans which drive the aggression leading to increasing warfare, widescale human suffering and happy arms manufacturers? How do we solve for this instead? And if we solved for this, would that be cheaper than 5% of GDP across all the countries of the world?
Also, there must be alternative “parasitic” defence strategies to spend near-zero on military defence capabilities but force geopolitical rivals to near-bankrupt themselves with their own spending… oh, wait a moment...
And what about entirely virtual network states which aren’t at all concerned with physical territory accumulation or protection? Are they ultimately the most resilient form of polity going forward?
I know this has been going on since long before Genghis Khan … but with superintelligence around the corner I think it’s worth exploring alternative plays.
💥Starship explodes during pre-flight engine test
SpaceX's Starship rocket exploded during a static fire engine test at the Starbase facility in Texas on June 18, forcing the company to postpone its planned tenth test flight originally scheduled for June 29.
BIG bang:
(You have to feel sympathy with the engineering team at SpaceX …. it’s not like you can test this stuff in physical reality without the occasional spectacular explosion… but it’s all done in public so they can’t just shrug and go “oh well, back to the drawing board…”)
☄️Asteroid could threaten Earth's satellites via moon debris
A 60-meter-wide asteroid called 2024 YR4 has a 4.3% chance of hitting the moon in December 2032, which could create serious risks for Earth's satellites according to new research from the University of Western Ontario. The impact would release energy comparable to a large nuclear explosion and launch up to 100 million kilograms of lunar debris into space.
(If you’ve ever read Neal Stephenson’s novel Seveneves, this is how it all begins…)
🌊💩Flooded zone
Four stories from over there, in brief:
Trump grants TikTok third 90-day extension Whoever could have ever expected this?
House of Representatives bans WhatsApp on government devices. Meta is (perhaps quite rightly) not amused…
“Approved products in the US House of Representatives include Microsoft Teams, Signal, Apple’s iMessage and FaceTime, and Amazon-owned messaging service Wickr. Meta said WhatsApp, which has about 3bn users globally, is approved for official use in the Senate.“
Texas creates first state Bitcoin reserve fund Senate Bill 21 makes Texas the first US state to commit public funds to a standalone Bitcoin reserve as a long-term strategic asset. The Texas Strategic Bitcoin Reserve will operate independently from the state's general treasury and only accept assets with market capitalisations exceeding US$500 billion, currently limiting eligibility to Bitcoin alone.
LA Lakers sold for record US$10B Hurts my head to think about the bugs in a socioeconomic system that treats a sports team as a US$10Bn status-signalling asset for billionaire males…
💭Meme stream
A few pieces of eclectica gathered in a jetlagged daze this week…
♻️ ⛽Plastoline
A 21-year-old Alabama inventor named Julian Brown shares his backyard pyrolysis operation that converts household plastic waste into a gasoline alternative he calls "plastoline" using a homemade solar- and generator-powered microwave reactor. However, safety concerns have emerged after laboratory analysis revealed Brown's fuel contains high levels of toxic BTEX compounds (benzene, toluene, ethylbenzene, xylene) that are more hazardous than regular gasoline.
👨Why are men
Exquisite:
🤖AI and the future of work
Is this the right allegory…?
Sorry I’m a bit late hitting “send” this week… hopefully the jetlag has worn off by next week!
🙏🙏🙏 Thanks as always to everyone who takes the time to get in touch with links and feedback.
Off to the beautiful city of Oxford next week!
Namaste
Ben