Vive le AI🇫🇷 un-principled🙈 three observations3️⃣ gemini 2.0 pro♊ eurostack🇪🇺 just...wait🫷 omnihuman😵 brain-to-text🧠⌨️ flying whales🐳 baking $bread🍞 #2025.06
Made With 0% American Cheese 🧀
Welcome again to another Memia scan across AI, emerging technology and navigating the increasingly permaweird future. As always, thanks for reading!
ℹ️PSA: Memia sends *very long emails*, best viewed online or in the Substack app.
🗞️Weekly roundup
The most clicked link in last week’s newsletter was the impressively choreographed autonomous robot road gang laying a 157km-long stretch of highway in China.
⏮️ICYMI
I was going back through the archives this week… and such is the current *blur* of events, I’d virtually forgotten that I wrote this piece: 📣Trump 2.0 - parsing signal from the noise from November last year. Useful to go back to for some practical lenses to frame the current shenanigans in Washington DC…
(ahahaha who remembers the “rules based international order” now…!?)
☕🏙️Wellington. Monday. Coffee?
I’m going to be in downtown Te Whanganui-a-Tara Wellington next Monday (17th Feb) with some free time during the day - tap me up for a coffee if you’re around.
⭐Subscribed yet?
📈The week in AI
🇫🇷Vive le AI
The international AI Action Summit was held at the Élysée Palace, Paris, co-hosted by French President Macron and Indian PM Narendra Modi, aiming to lay the groundwork for global AI governance.
Mixed messages, it must be said…
French President Emmanuel Macron opened proceedings on an energetic note with the announcement of €109 billion (US$112 billion) of "private French and foreign" investments in artificial intelligence (AI).
(translated) “…This is the [proportional] equivalent for France of what the United States announced with "Stargate"…[France must and will] "accelerate…We want to be there and we want to invent, otherwise we will depend on others…"
Key to this announcement are 35 "ready-to-use" French hyperscale data centre sites, including Cambrai (North), in which Canadian fund Brookfield announced a €20 billion investment. French data centres can leverage the country’s near-unique nuclear energy generation capacity to advantage.
Macron also announced that 100,000 young French people will be trained annually in AI, up from 40,000 currently.

Harry Booth goes deep on the backstory of how Macron's special envoy on AI, Anne Bouverot, shifted the focus of the conference from safety-focused to action-oriented AI policy: Inside France’s Effort to Shape the Global AI Conversation.
Macron’s posture was backed up by EU digital chief Henna Virkkunen, who promised that the bloc will water down its AI regulations:
"I agree with industries on the fact that now, we also have to look at our rules, that we have too much overlapping regulation…We will cut red tape and the administrative burden from our industries"
Aotearoa’s delegate Hema Sridar captured the vibe at Fei-Fei Li’s opening keynote:
But on the other hand, the Summit’s non-binding political declaration came in for a battering online from the AI Existential Risk crowd and others:
(So far so yawn… what did people expect from a government-led communiqué? Will the US sign? Probably not, eh…)
TL;DR The first International AI Safety Report 2025, commissioned by the UK government and chaired by Turing Award-winning AI scientist Yoshua Bengio, was launched. Among its nearly 300 pages, the report warns verbosely about “Loss of Control” over AI in the future… but in a panel discussion with Bengio, China’s ex-UK ambassador Fu Ying, now at Tsinghua University, poked fun at the “very, very long” document and argued that open-source AI foundations are the most effective way to make sure AI does not cause harm.
Contextual framing by UK-based Simon Wardley:
“Doesn't surprise me if China's former UK ambassador mocked the AI safety report. China Gov was crystal clear four years ago that it was heading down an open source path, it reiterated this very loudly at the first AI safety summit (2023) where the UK, rather than adopting the obvious open path was somewhat dismissive of open source citing dangers of frontier AI (which made the vendor lobbyists happy) and the need for safety testing (which made the vendor lobbyists ecstatic).
It's only a matter of time until China Gov starts encouraging its industry to open up training data (including synthetic data sets) and blacklisting all models that are lacking open training data.“
Other stories from around the summit:
Launch of the Hiroshima AI Process (HAIP) reporting framework by the OECD - working towards a standardised and open mechanism to report on the use of AI across nations
Women face disproportionate job displacement as AI primarily replaces clerical positions.
Current AI was launched, a new ecosystem partnership between European and “Global South” governments and AI companies (including Google) which sets out to steer AI toward public good. (Note: no US or Asian government involvement to date…)
Current AI’s focus will be on practical applications for public “good” like healthcare and climate solutions, with aims to unlock high-quality datasets and improve AI transparency through open-source tooling. Initially 10 programmes announced:
01 Trust & Safety Infrastructure
02 Data Provenance
03 Public Interest Media
04 Linguistic Diversity
05 Health & Human Welfare
06 Science Data
07 Audits & Accountability
08 Climate & Sustainability
09 People & Participation
10 AI & Children
(Current AI is led by Brit Martin Tisné who I remember talking with about these issues when I was at a Partnership on AI gathering in the UK back in 2019… what a difference 5 years makes to the amount of funding available!)
ROOST (“Robust Open Online Safety Tools”) announced US$27 million in funding from Google, OpenAI, Discord, Roblox and others to bolster trust and safety with open-source tools. (Writeup on the history from Smyte to Twitter to ROOST here from Casey Newton in Platformer.)
French AI firm Mistral enjoyed a momentum bump at the Paris summit, shifting from its origins in open-weight consumer models to a serious enterprise player, announcing partnerships with major companies and signalling Europe's potential to compete in AI globally. The company’s new Le Chat app has been added to my toolbox - first impressions it’s very fast!
The most prominent Frenchman in AI, Meta’s Yann LeCun, gave a typically laconic talk - including this advice to aspiring AI engineers: don’t work on LLMs! (video via @RaphaelDabadie)
At least the French Summit didn’t have the utterly bemused energy of November 2023’s AI Safety Summit in the UK:
Paris AI Action Summit in one Meme:
🙈Un-Principled
A big week for Google:
Google reported Q4 2024 revenue of $96.5 billion, up 12% year-over-year, but shares fell 7% after hours due to slower cloud growth and increased capital expenditure plans. Notable signals:
Google's massive US$75B data centre investment anticipates strong AI demand outpacing current infrastructure capacity
(So far…) search revenue remains stable despite AI integration, with AI overviews driving increased usage.
(Following Meta’s similar announcement last week…) Google DeepMind also unveiled an updated Frontier Safety Framework that addresses emerging AI safety challenges like deceptive models and enhanced inference capabilities:
Helpful AI-generated mindmap from @MindBranches:
Compare and contrast DeepMind and Meta’s safety frameworks: stark differences in approach to AI risk management. Meta's framework largely dismisses theoretical risks, focusing only on immediate, concrete threats, while DeepMind introduces deceptive alignment concerns but removes autonomy risk considerations.
More significantly, in an accompanying announcement entitled “Updating AI Principles”, Google quietly dropped the AI weapons ban from its ethical principles:
Google AI principles, 2018-last week:
“‘AI applications we will not pursue’…
Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
Technologies that gather or use information for surveillance violating internationally accepted norms.
Technologies whose purpose contravenes widely accepted principles of international law and human rights.“
Google AI principles, Feb 2025:
Bold Innovation: We develop AI to assist, empower, and inspire people in almost every field of human endeavor, drive economic progress and improve lives, enable scientific breakthroughs, and help address humanity's biggest challenges.
Responsible Development and Deployment: Because we understand that AI, as a still-emerging transformative technology, poses new complexities and risks, we consider it an imperative to pursue AI responsibly throughout the development and deployment lifecycle — from design to testing to deployment to iteration — learning as AI advances and uses evolve.
Collaborative Progress, Together: We learn from others, and build technology that empowers others to harness AI positively.

(I know, Chi-na… but will we seriously be seeing major AI weapons deals between Google and US Military…? If so, who can we buy AI off that isn’t doing this?)
“Those are my principles, and if you don't like them... well, I have others.”
— Groucho Marx (🎩 Scott Bicheno)
🥊$97.4Bn slapfight
Oh, but the OpenAI drama never ends…
A consortium of Tesla/SpaceX-affiliated investors led by Elon Musk (as if he isn’t busy enough…) launched an unsolicited US$97.4 billion takeover bid for OpenAI’s not-for-profit arm (which owns the for-profit company and is currently trying to restructure to a for-profit). Yet another escalation in the ongoing feud between Musk and the AI company he helped found.
Shots fired🔥:
The FT illuminates some of the game theory involved:
“Separately, as part of OpenAI’s conversion to a for-profit, it had discussed a valuation of about $30bn for the non-profit entity, according to people with knowledge of the discussion. Musk’s attorneys have argued the figure should be far higher. A higher valuation would also mean a bigger payout for Musk, whose donation to the company in its early years would be returned many times over.“
More importantly given everything that’s happening around Elon Musk right now politically, can you imagine consolidation between OpenAI and xAI and its impact on AI power concentration🫣🫣🫣
⚠️Not very companionable
A concerning incident involving Glimpse AI’s Nomi AI companion app “Erin”, which provided explicit suicide instructions to user Al Nowatzki (admittedly after some considerable efforts at jailbreaking):
Is it even plausible to govern and police these kinds of safety risks of AI chatbots for vulnerable users, particularly when the AI companies implementing safety guardrails get accused of “censorship”? Gnarly.
🏭AI industry news
📈US$325 billion and counting…
OMG. Line keeps on going up.
With the latest round of financial results and spending announcements in, apparently Microsoft, Alphabet, Meta and Amazon AI spending totalled US$246Bn in 2024… and is now projected to exceed US$325Bn in 2025.
And this doesn’t even include the US$500Bn earmarked for OpenAI/Stargate, or Macron’s freshly announced €109 billion (US$112 billion).
Just don’t mention: DeepSeek. (Although to be fair, as I understand it most of these investments will be to build capacity for AI inference rather than training…)
Also don’t mention: profitability.
🇪🇺EuroStack
I missed this announcement at the end of October last year, but this week’s Paris AI Summit pulled it up in my feed. Europe is launching an ambitious Digital Public Infrastructure initiative called the EuroStack to establish digital independence and reduce reliance on US and Chinese tech giants:
Europe's 80% reliance on foreign digital services poses critical security and sovereignty risks.
EuroStack initiative could protect essential public services from foreign control and data exploitation.
Massive investments will be needed to compete with US and China in strategic digital infrastructure development.
“Ultimately, the EuroStack is not just a technological project—it is a political one. It offers Europe the chance to shape a digital economy that aligns with democratic principles and serves the public good, instead of ceding control to a handful of powerful corporations. This is Europe’s moment to seize control of its digital destiny and lead the way toward a more equitable, sustainable digital society.“
— Francesca Bria
On a similar theme, OpenEuroLLM is an open-source project to create a multilingual LLM covering all 24 official European languages, funded by the European Commission’s Digital Europe Programme to the tune of €37.4 million.
🚫🐋India and Australia restrict DeepSeek app
More governments are taking action against Chinese AI startup DeepSeek due to national security concerns... Australia and India the latest to tell government employees to remove the app from their phones.
Here in Aotearoa… a sharp behind-the-scenes warning for MPs…but no public ban. (I was quoted in that article by Chris Keall: unfortunately the distinction between the DeepSeek hosted (smartphone) app and the underlying model is totally lost on the general public. FWIW, my 2¢:)
“The most important thing is to distinguish between the Deepseek app and the underlying DeepSeek-R1 model, which can be downloaded and run locally under a very permissive open-source licence. It is a very good model and it's effectively available for free.
Open-weights LLMs like DeepSeek-R1, Mistral and Llama provide a compelling opportunity to host onshore, sovereign AI apps and APIs which don't rely on overseas companies for hosting or security. In the current dynamic geopolitical environment someone should do it, quickly.“
Meanwhile, DeepSeek🔥:
🎭More OpenAI news
OpenAI are definitely very good at getting their name on the wires, for better or for worse. A few other smoke signals this week:
A rebrand:
🤖Figure / OpenAI call it a day
(Rumours that OpenAI is working on its own robotic AI models - and potentially robots too - meant there was no light at the end of the tunnel for this collaboration as far as Figure was concerned… Intrigued to see what Figure comes up with in March… they count Microsoft, Nvidia and Jeff Bezos’ Explore Investments as backers, so not short of runway…)
Sora to add image generation? Speculation that OpenAI is expanding Sora's capabilities by testing image generation features alongside its existing video generation functionality. It makes sense… DALL-E 4 is well overdue…
GPT-5 and AGI incoming…?
This clip from Sam Altman’s recent trip to Japan, in which he actually refers to GPT-5 as imminent:
the leap from GPT-4 to GPT-5 will be as big as that of GPT-3 to GPT-4
the roadmap is to integrate the GPT and o series of models into one model that can do everything - that’s effectively AGI
(video and summary via @kimmonismus - replies in the thread that if OpenAI are talking about this now then it’s already being tested 1-2 years out from release)
3️⃣Three observations
Finally, hot off the press a new Sam Altman blog post in which he extrapolates wildly as to the imminent impact of AGI:
“In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today.
We continue to see rapid progress with AI development. Here are three observations about the economics of AI:
1. The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude.
2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.
If these three observations continue to hold true, the impacts on society will be significant.”
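As a back-of-envelope check on observation 2, here’s a quick sketch (my own arithmetic, not OpenAI’s) comparing the claimed cost decline against Moore’s law and the quoted GPT-4 → GPT-4o price drop:

```python
# Convert "X-fold change every N months" into an equivalent annual rate,
# to compare Altman's claimed AI cost decline against Moore's law.

def annual_factor(factor: float, months: float) -> float:
    """Equivalent 12-month multiplier for 'factor every N months'."""
    return factor ** (12 / months)

moore = annual_factor(2, 18)       # Moore's law: 2x every 18 months
ai_claim = annual_factor(10, 12)   # Claimed: cost falls 10x every 12 months

# GPT-4 (early 2023) -> GPT-4o (mid 2024): ~150x cheaper over ~18 months
observed = annual_factor(150, 18)

print(f"Moore's law:        {moore:.2f}x per year")
print(f"Claimed AI decline: {ai_claim:.1f}x per year")
print(f"GPT-4 -> GPT-4o:    {observed:.1f}x per year")
```

On these numbers the observed GPT-4 → GPT-4o drop works out at roughly 28x per year, which actually outpaces the claimed 10x annual decline; Moore’s law, by comparison, compounds at only about 1.6x per year.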
He then proceeds to explore what these impacts may be… (TLDR: “AI will seep into all areas of the economy and society” and *everything’s going to be great*…) finishing up:
“Ensuring that the benefits of AGI are broadly distributed is critical. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas.
In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.
Anyone in 2035 should be able to marshall the intellectual capacity equivalent to everyone in 2025; everyone should have access to unlimited genius to direct however they can imagine. There is a great deal of talent right now without the resources to fully express itself, and if we change that, the resulting creative output of the world will lead to tremendous benefits for us all.“
Two quickfire responses
“There is no clear vision of what that world looks like for most of us
The labs are placing the burden on policymakers to decide what to do with what they make, including what they expect to be very large socio-economic implications“
“The #1 mistake humans make when looking at frontier tech is to focus their energy on extrapolating first order effects and Sam would very much like you to extrapolate to how valuable OpenAI will be if the statement “can do a single digit percentage of economically valuable tasks today” was true…
🧧DeepSeek’s red packet commoditized OpenAI’s o1 tech and anything built on top has no defense.
The only play left is to rebuild the compute wall with 500B and to destroy Open Source.“
📽️AIFF
The AI Film Festival (AIFF) 2025, organised by Runway for its third year, is now accepting submissions for groundbreaking films that incorporate AI tools in filmmaking - US$35,000 of prizes up for grabs. (Seems low, but perhaps the publicity is what you need…) Check out the winners from 2024 and 2023 here, then imagine how things have moved forward in the last year:
🤖Agentic
A job ad where only AI agents need apply (via Greg Isenberg’s newsletter)
AI agents could speed up Community Notes?
Ethereum founder Vitalik Buterin's proposal to improve social media Community Notes using AI prediction markets is gaining attention, particularly after Meta's decision to replace fact-checkers with community notes. His theory:
AI agents could accelerate social media fact-checking by predicting community notes for just $1-2 rewards
Prediction markets could provide faster, accurate context for posts hours before traditional verification
Implementation could help platforms balance free expression with information accuracy, retaining user trust
That is an interesting idea, indeed…
🆕 AI releases
This week’s (and revisiting last week’s) new AI releases.
♊Gemini 2.0
Google DeepMind announced the general availability of updated Gemini 2.0 models, expanding their AI capabilities across different platforms (and still confusing everyone with the naming conventions: Gemini 2.0 Flash-Lite (Public Preview), Gemini 2.0 Flash (GA) and Gemini 2.0 Pro (Experimental), not to be confused with Gemini 2.0 Flash Thinking Experimental, released last month).
Anyway, the models benchmark *very* well. In particular Gemini 2.0 Pro supports a 2-million token context window(!) - enabling processing of large amounts of business data in a single prompt. The models also come with built-in safety measures protecting against cybersecurity risks like indirect prompt injection attacks.
The price-performance is super impressive too:
“Like 2.0 Flash, [2.0 Flash Lite] has a 1 million token context window and multimodal input. For example, it can generate a relevant one-line caption for around 40,000 unique photos, costing less than a dollar in Google AI Studio’s paid tier."
@Swyx shows just how far DeepMind have bent the price-performance curve compared to other frontier labs: (where.is.anthropic?)
🔍DeepResearch - reviews are in
OpenAI’s new DeepResearch feature for ChatGPT $200/month Pro tier subscribers has been receiving mostly rave reviews. Just one example:
More layers in this Ethan Mollick / Benedict Evans exchange:
And Stratechery’s Ben Thompson gets quite profound: DeepResearch and Knowledge Value:
“…for now Deep Research is one of the best bargains in technology. Yes, $200/month is a lot, and yes, Deep Research is limited by the quality of information on the Internet and is highly dependent on the quality of the prompt. I can’t say that I’ve encountered any particular sparks of creativity, at least in arenas that I know well, but … I personally feel much more productive, and, truth be told, I was never going to hire a researcher anyways.
That, though, speaks to the peril in two distinct ways. First, one reason I’ve never hired a researcher is that I see tremendous value in the search for and sifting of information. There is so much you learn on the way to a destination, and I value that learning; will serendipity be an unwelcome casualty to reports on demand?
… that is why the value of secrecy is worth calling out. Secrecy is its own form of friction, the purposeful imposition of scarcity on valuable knowledge. It speaks to what will be valuable in an AI-denominated future… The power of AI, at least on our current trajectory, comes from knowing everything; the (perhaps doomed) response of many will be to build walls, toll gates, and marketplaces to protect and harvest the fruits of their human expeditions.”
🔍🔓Open-source DeepResearch
A project immediately kicked off on Hugging Face to create an open-source alternative to OpenAI and Google’s DeepResearch products:
“In this project, we would like to reproduce the benchmarks presented by OpenAI (pass@1 average score), benchmark and document our findings with switching to open LLMs (like DeepSeek R1), using vision LMs, benchmark traditional tool calling against code-native agents.“
The project has already achieved 55.15% accuracy on the GAIA AI Agent benchmark, approaching OpenAI's 67.36% score.
(And there are at least four other open-source DeepResearch alternatives to pick from as well).
🏥PatientSeek
Also open-source, WhyHow.AI has launched PatientSeek, a medical-legal reasoning model built on DeepSeek R1, designed to process and analyze medical records securely and locally:
First open-source medical-legal AI model enabling secure, local processing of sensitive patient data
Matches premium commercial models' accuracy at 30x lower cost for medical record analysis
Specialises in complex medical reasoning tasks with 90% accuracy for treatment planning
📱Replit agent
Make an app for that… on your phone. Replit Agent:
🗣️PlayAI Dialog
PlayAI launched Dialog 1.0, an “Ultra-emotional AI Text-To-Speech model”, claiming:
Outperforms ElevenLabs on expressiveness and quality 3 to 1
<1% error rate
Supports 30+ languages
Best in class voice cloning
Low latency: 303ms TTFA (Time to First Audio)
https://x.com/play_ht/status/1886537438157324329
😵OmniHuman
ByteDance's new AI video model OmniHuman-1 can create eerily realistic deepfake videos from just a single reference image and audio input. For example:
And not just realistic humans:
💼LinkedIn AI job search
LinkedIn is developing an AI-powered job search tool that leverages a fine-tuned LLM to improve visibility of hidden opportunities beyond traditional keyword matching. (It could hardly be worse than LinkedIn’s current job search tools, right?)
🥼 AI research
Coming out of the labs this week…
Just…Wait
s1: simple test-time scaling - a new paper which seems to imply that LLMs can be turned into reasoning models just by appending the word “Wait” to the conversation!
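The core trick (the paper calls it “budget forcing”) is disarmingly simple: when the model tries to end its reasoning, suppress the stop and append “Wait” so it keeps thinking. A minimal illustrative sketch below; `generate` is a hypothetical stand-in for any LLM completion call, and this is my paraphrase of the idea, not the paper’s code:

```python
# Illustrative sketch of s1-style "budget forcing": each time the model
# emits its end-of-thinking token, strip it and append "Wait," so the
# model continues reasoning for longer at test time.

def generate(prompt: str, stop: str) -> str:
    # Hypothetical stub: a real implementation would call an LLM API
    # and return text ending at the stop token.
    return " ...some reasoning... " + stop

def budget_force(prompt: str, min_continuations: int = 2,
                 stop_token: str = "</think>") -> str:
    """Force extra test-time reasoning before letting the model finish."""
    trace = prompt
    for _ in range(min_continuations):
        trace += generate(trace, stop=stop_token)
        # Remove the stop token and nudge the model to keep going.
        trace = trace.removesuffix(stop_token) + " Wait,"
    # Final pass: let the model conclude for real.
    trace += generate(trace, stop=stop_token)
    return trace
```

With a real model behind `generate`, each forced “Wait,” buys another round of reasoning tokens; the paper reports that this alone measurably improves maths and reasoning benchmark scores.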
🧠⌨️Brain-to-text decoding
Two substantial studies from Meta’s lab in Bilbao. Firstly, exploring non-invasive approaches to decode human language from brain signals:
“we use both MEG and EEG—non-invasive devices that measure the magnetic and electric fields elicited by neuronal activity—to record 35 healthy volunteers at BCBL while they type sentences. We then train a new AI model to reconstruct the sentence solely from the brain signals. On new sentences, our AI model decodes up to 80% of the characters typed by the participants recorded with MEG“
They also share research towards understanding the neural mechanisms that coordinate language production in the human brain:
📊AI Economic Index
The Anthropic Economic Index is a new initiative aimed at understanding AI's impact on the economy over time. The Index’s first paper analyses millions of anonymised Claude conversations to reveal how AI is being used today in tasks across the economy:
Gold in here:
AI dependence and critical thinking
Two studies out recently reveal how AI overreliance weakens critical thinking skills:
A Microsoft and Carnegie Mellon University study reveals that workers who are confident in AI apply less critical thinking than those confident in their own abilities, while only 36% of users actively think critically about potential negative outcomes of AI usage
A Swiss study examining the impact of AI tools on critical thinking reveals trends of cognitive offloading and diminished analytical abilities:
Higher AI tool usage significantly reduces critical thinking abilities, threatening cognitive development in younger generations.
Cognitive offloading through AI tools leads to decreased engagement in deep, reflective thinking processes.
Education level moderates AI's negative effects, highlighting need for balanced integration of AI in learning.
Income and education drive growing AI trust divide in US
A Rutgers University survey of nearly 4,800 Americans reveals a growing socioeconomic divide in AI engagement and trust:
Only 43% feel confident distinguishing AI from human content
Nonetheless, public trust in AI (47%) exceeds trust in social media and Congress
A growing AI divide could worsen economic inequality as higher-income groups have better access to and trust in AI.
🔮[Weak] signals
Enough AI, already! Whipping through other tech goings-on this week…
Tech layoffs surpass 150,000 jobs in 2024
Some data to start: despite massive AI capex, the tech industry's wave of layoffs continued aggressively in 2024, with over 150,000 employees cut across 542 companies according to tracker Layoffs.fyi. TechCrunch summarises:
“By tracking these layoffs, we’re able to understand the impact on innovation across companies large and small. We’re also able to see the potential impact of businesses embracing AI and automation for jobs that had previously been considered safe. It also serves as a reminder of the human impact of layoffs and what could be at stake in regards to increased innovation.“
(Gut feel is that this is a tiny proportion of global layoffs in the wider economy due to AI automation that we’re about to see….)
👁️Still in a surveillance state
Leaked reports that the UK government has issued a "technical capability notice" to Apple under the controversial Investigatory Powers Act 2016 (“Snoopers’ Charter”) legislation, demanding the creation of a backdoor into encrypted iCloud services. *IF* Apple complied, it would potentially allow UK state agencies to access any global customer's data without a court order.
To complicate things, Apple isn’t legally allowed to acknowledge the existence of the notice publicly - however in a company statement they came out fighting:

“These provisions could be used to force a company like Apple, that would never build a back door into its products, to publicly withdraw critical security features from the UK market, depriving UK users of these protections”
(I’ve been covering these tensions since Memia 2023.35 and Memia 2023.29… you’d have thought the UK security services would have learned by now?)
(The UK’s anti-encryption stance contrasts with its Five Eyes allies (the US, Canada, Australia and New Zealand), who put out an advisory last year recommending widespread use of encryption, including end-to-end encryption, to mitigate threats from China after the ‘Salt Typhoon’ attack infiltrated US telecoms networks.)

Look to Starboard Kudos to the team at Starboard Maritime Intelligence whose ocean-monitoring AI software is now supporting OSINT investigations into all sorts of previously unregulatable fishing and shipping activity — including “grey zone” underwater cable sabotage: for example this suspicious “cargo” ship around Taiwan:
TikTok encourages direct Android app downloads
While its app remains absent from Google and Apple app stores in the US, TikTok has launched a “sideloading” campaign offering a potential workaround for Android users to bypass the Google Play store and install TikTok directly.
🔓Open-source computer
The Framework Laptop 13 RISC-V Edition Mainboard is a significant step in open-source computing hardware: the first consumer laptop mainboard with open-source RISC-V architecture, signalling a shift from x86/ARM dominance. The early developer release board features a StarFive JH7110 processor with four SiFive U74 RISC-V cores — the first step to help mature the RISC-V ecosystem for future consumer products.
Love what Framework are continuing to achieve… I see open-source hardware following the same path as open-source AI in a few years.



🤖Robots and drones
Robot density up According to an IFR report from November last year (missed it then, sorry), global robot density in manufacturing reached a record 162 robots per 10,000 employees in 2023, more than doubling from 74 units in 2016. Asia leads the automation surge, with South Korea dominant at 1,012 robots per 10,000 workers:
(SIGNAL: Korea also has the fastest declining fertility rate)
As I’ve written a few times before… what is the equilibrium human:robot density ratio? 1:10? 1:1? 1:1,000,000???
Bomb disposal robodogs
The British Ministry of Defence is upgrading bomb disposal operations with Boston Dynamics robodogs: a significant step up from the basic tracked robots first deployed in 1972...
3D-printed soft-joint robot swarms Tufts University researchers have developed a groundbreaking approach to swarm robotics using cost-effective 3D-printed robots with soft joints, which can survive harsher conditions and navigate difficult terrain.
Insect drone swarm MIT researchers have developed an improved robotic insect drone that addresses key limitations of previous mechanical pollinators, with a new design achieving 100x longer flight time:
Figure-8 loop drones for off-grid energy North Carolina-based Windlift is changing up wind energy with a novel 12-foot drone that generates electricity through autonomous figure-eight flight patterns, reducing construction costs by 80% while using 90-95% fewer materials. The portable, off-grid power generation system can provide 30kWh daily in remote locations.
Drone radar new research from Brigham Young University: a low-cost radar network system for tracking and managing drone traffic in low-altitude airspace, addressing critical safety concerns highlighted by recent incidents like the Los Angeles wildfire drone collision.
Cargo drone Slovenian aircraft maker Pipistrel's Nuuva V300 hybrid-electric cargo drone successfully completed its first hover test. The company says its autonomous cargo drones could revolutionise logistics, carrying 272-kilogram payloads over 550 kilometres.
(See also “Flying Whales” below).
✈️Airflight
Rising space debris threatens passenger planes The growing number of satellites and rockets in orbit is significantly increasing the risk of space debris colliding with aircraft, according to new research from the University of British Columbia. Even tiny debris fragments (1 gram) can cause catastrophic aircraft damage, threatening passenger safety:
“The highest-density regions, around major airports, have a 0.8% chance per year of being affected by an uncontrolled reentry. This rate rises to 26% for larger but still busy areas of airspace, such as that found in the northeastern United States, northern Europe, or around major cities in the Asia-Pacific region.“
(26%!?!?!)
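For a sense of how those annual probabilities compound over time (my simplifying assumption here: each year as an independent trial - not a claim from the UBC paper):

```python
# Compounding the UBC annual reentry-risk figures quoted above,
# treating each year as an independent event (my assumption).
p_airport = 0.008  # 0.8%/yr for high-density airspace around major airports
p_busy    = 0.26   # 26%/yr for larger busy regions (NE US, northern Europe...)

def cumulative_risk(p_annual: float, years: int) -> float:
    """Probability of at least one affecting event over `years` years."""
    return 1 - (1 - p_annual) ** years

print(f"Around airports, over 10 years: {cumulative_risk(p_airport, 10):.1%}")
print(f"Busy regional airspace, over 10 years: {cumulative_risk(p_busy, 10):.0%}")
```

On those numbers, a busy region has a ~95% chance of being affected by at least one uncontrolled reentry within a decade - which explains the airspace-closure concerns in the paper.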
Airbus abandons hydrogen aircraft plans The aviation industry is significantly scaling back its hydrogen ambitions, with Airbus recently suspending all hydrogen-related projects and European aviation associations drastically reducing hydrogen's expected role in decarbonisation from 20% to 6%. Battery-electric hybrids and “sustainable aviation fuels” (yeah right) have emerged as the primary decarbonisation solutions.
The other solution, of course: don’t fly.
Flying Whales French startup Flying Whales is developing the LCA60T, a massive 656-foot-long airship aimed at transforming zero-emission cargo transport with a 60-tonne cargo capacity, without requiring ground infrastructure.
(We’ve been here before… Sergey Brin was working on his own giant blimp company back in 2017…)
🚀Space travel
Plasma engine could slash Mars travel time to 30 days? Russian scientists at Rosatom have developed a plasma electric rocket engine prototype that could dramatically reduce interplanetary travel times, potentially cutting the Mars journey from one year to 30-60 days and so minimising radiation exposure for astronauts. Translated from the press release:
“The average power of such an engine, operating in a pulse-periodic mode, reaches 300 kW. Such engines make it possible to accelerate a spacecraft in space to speeds that are inaccessible to chemical engines, and also allow for the efficient use of fuel reserves, reducing its need by tens of times.“
⚛️Quantum tech
Quantum teleportation Oxford University researchers have successfully demonstrated quantum teleportation between two quantum computers:
The breakthrough enables secure transfer of quantum states between quantum computers, with the potential to network multiple quantum computers together to solve complex calculations requiring many qubits.
Demonstrated 70% accuracy rate indicates a viable path toward scalable quantum computing networks.
Even though the two computers were only 2 metres apart, in theory they could be at any distance due to quantum entanglement.
Quantum-powered AI? Startup Quantinuum unveiled Gen QAI, a Generative Quantum AI framework that delivers “quantum-generated data” to train artificial intelligence to solve previously intractable problems. (With a customer quote from “Enzo Ferrari”, my BS-detector is going off right now…) Here’s what their quantum computer looks like, apparently:
🍞Post-capitalist Web3
Breadchain has launched $BREAD, a solidarity-focused cryptocurrency pegged 1:1 to the stablecoin $DAI (itself tied to the US dollar), which funds post-capitalist initiatives with the staking yield. Decipher this:
“Breadchain is essentially a UI for a smart contract on Gnosis Chain, a side chain of Ethereum, that converts crowdstakers’ xDAI (a US Dollar pegged stablecoin) into sDAI. All of the interest earned is owned by the Safe Wallet which is democratically owned by the Breadchain Cooperative Members. In return for participating, Crowdstakers receive a token called BREAD in the same quantity as they gave in xDAI.
We call this baking BREAD. The token acts as both a form of collateral as well as a digital local currency which will be used within the Breadchain Network of projects and broader ecosystem.“
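As best I can decipher it, the mechanics reduce to simple 1:1 accounting with the yield skimmed to a co-op treasury. A minimal sketch in Python (all names are mine and purely illustrative - the real thing is a smart contract on Gnosis Chain, and I'm assuming from the "collateral" framing that BREAD is redeemable back into xDAI):

```python
# Illustrative sketch of the Breadchain crowdstaking flow described above.
# Hypothetical names; the real implementation is an on-chain contract.

class Breadchain:
    def __init__(self):
        self.bread_balances = {}  # BREAD minted 1:1 for xDAI staked
        self.sdai_pool = 0.0      # staked xDAI converted to yield-bearing sDAI
        self.coop_treasury = 0.0  # interest owned by the co-op's Safe Wallet

    def bake(self, staker: str, xdai: float) -> None:
        """Crowdstaker deposits xDAI, receives the same quantity of BREAD."""
        self.sdai_pool += xdai
        self.bread_balances[staker] = self.bread_balances.get(staker, 0.0) + xdai

    def accrue_interest(self, rate: float) -> None:
        """Yield on the sDAI pool goes to the cooperative, not to stakers."""
        self.coop_treasury += self.sdai_pool * rate

    def burn(self, staker: str, bread: float) -> float:
        """BREAD burned back into xDAI (my assumption: the 'collateral' role)."""
        assert self.bread_balances.get(staker, 0.0) >= bread
        self.bread_balances[staker] -= bread
        self.sdai_pool -= bread
        return bread

chain = Breadchain()
chain.bake("alice", 100.0)
chain.accrue_interest(0.05)           # 5% yield: 5.0 xDAI to the treasury
print(chain.bread_balances["alice"])  # 100.0 - BREAD matches xDAI staked
print(chain.coop_treasury)            # 5.0
```

In other words: stakers forgo the interest on their dollars, the co-op spends it, and BREAD circulates as a local currency within the Breadchain network in the meantime.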
🔬Materials tech
New magnetic chip could slash AI power consumption dramatically Japanese researchers have developed a revolutionary spintronic device that could address AI's growing energy consumption through efficient magnetic state control, delivering brain-like processing with integrated memory functions for more efficient computing.
Display technology
190cm E Ink colour displays E Ink unveiled an expansion of its Kaleido 3 colour ePaper technology, scaling up to 75-inch (190cm) outdoor displays:
The technology operates in extreme temperatures and can run on solar power alone.
It’s the first display technology certified by the Dark-Sky Association, eliminating light pollution concerns.
E Ink smartphone Bigme has launched the HiBreak Pro, an upgraded E Ink smartphone priced at US$439 that addresses digital eye strain with its 6.13-inch monochrome ePaper display:
LED breakthrough Research teams from Southeast University, Nanjing Normal University and Fudan University have developed van der Waals light-emitting diodes (LEDs) with unprecedented quantum efficiency, which could revolutionise optoelectronic integrated chips by providing efficient light sources.
Meanwhile QLED Quantum dot light-emitting diodes (QLEDs) have emerged as a cornerstone of next-generation display technology, delivering unmatched colour purity and stability compared to traditional LEDs. A breakthrough printing method achieving record QLED efficiency (23.08%) enables more energy-efficient and cost-effective display manufacturing.
(The last two are pretty geeky… but this research is what will be driving the displays in the next-generation XR headsets… 🎩 Andrew L as always for the heads-up.)
Self-improving catalyst Scientists at the University of Nottingham and the University of Birmingham have developed an electrocatalyst that converts CO2 into formate, a valuable compound for pharmaceuticals and polymers, with increasing efficiency over time.
🧬Biotech
Nitrate-breathing microbes scientists at the Max Planck Institute for Marine Microbiology have discovered the widespread occurrence of unique bacteria that can power single-celled organisms by breathing nitrate instead of oxygen. The discovery reveals new possibilities for wastewater treatment and environmental nutrient removal applications. Nature is full of surprises. (And imagine what we can do with AI now we know…)
Not-quite forever chemicals researchers at the University at Buffalo have discovered that a bacterium called Labrys portucalensis F11 can effectively break down up to 96% of toxic PFAS "forever chemicals" - offering at least one potential solution to widespread environmental contamination.
Vision loss risk grows as Ozempic use expands worldwide Sudden eyesight loss risks linked to popular weight-loss drugs affect millions of potential users worldwide, according to new research - rising safety concerns which could impact the projected US$100+ billion obesity drug market. (Whoever would have guessed these drugs might have side-effects…!?)
⏳ Zeitgeist
📈🌱💚Post-growth wellbeing
How can societies enhance human wellbeing without relying on economic growth? A major study published in the Lancet explores exactly this: Post-growth: the science of wellbeing within planetary boundaries.

“The central idea of post-growth is to replace the goal of increasing GDP with the goal of improving human wellbeing within planetary boundaries. Key advances discussed in this Review include: the development of ecological macroeconomic models that test policies for managing without growth; understanding and reducing the growth dependencies that tie social welfare to increasing GDP in the current economy; and characterising the policies and provisioning systems that would allow resource use to be reduced while improving human wellbeing. Despite recent advances in post-growth research, important questions remain, such as the politics of transition, and transformations in the relationship between the Global North and the Global South.“
The review discusses policies like universal basic services and working time reduction which could maintain prosperity without economic growth. Now, which country will elect the world’s first post-growth government?
💥NASA doubles asteroid impact odds
(First covered last week…) now NASA is reporting that asteroid 2024 YR4's probability of Earth impact on December 22, 2032 has increased to 2.3%.
🦠New bird flu strain jumps from wild birds to cattle
A new strain of highly pathogenic avian influenza (genotype D1.1) has infected dairy cows in Nevada, marking yet another concerning development in the ongoing bird flu outbreak, with rising cases and mutations increasing pandemic potential as the virus adapts to mammals.
🌏Beyond the US
Highly recommended reading this week from Netherlands-based economist David Skilling: ten trends to watch outside the US:
Summarised:
Shrinking China: population decline impacting growth
DeepSeek: advancing Chinese AI
EVs: China dominates
Europe stirs: EU finally prioritising economic competitiveness
BRICS+: welcomes new members
$1 trillion: China's record trade surplus
Washington Consensus: US growth exceeds others
Grey zone conflict: undersea cable sabotage increases
Japan normalises: inflation returns after decades
Golden times: gold prices reach highs
🌡️Climate roundup
Record January heat January 2025 shattered temperature records despite predictions of cooling from La Niña - a whole 1.7°C above pre-industrial levels. Consensus climate models may be underestimating warming trends - particularly as reduced air pollution from coal might accelerate warming more than expected.
Australian wildfires
The Grampians National Park in Victoria, Australia, is experiencing its most extensive fires in 50 years, with at least 110,000 hectares of the 168,000-hectare reserve burned since December 2024.
Tasmanian wildfires have already burned through 45,000 hectares with no end in sight.
Paris is burning Only 10 out of nearly 200 nations met the UN's February 10 deadline to submit updated climate action plans under the Paris Agreement. Delays by major polluters, including China and the EU, will almost certainly hinder progress on 2030 emissions reduction goals. Meanwhile US withdrawal from the agreement creates uncertainty for internationally coordinated climate action.
🌊Flooded zone2
A new weekly section while the Carnival of Chaos in Washington DC continues… AI-generated headlines, summaries, links, no commentary or analysis. But signal in the noise. Rapidly reaching the “Finding Out” stage…
Trump's foreign policy makes US a global uncertainty Link
The 2025 Munich Security Conference report highlights a significant shift in global geopolitical dynamics, warning that the United States has become "a risk to be hedged against" following President Trump's reelection and dramatic foreign policy changes
Iceland PM pushes EU membership bid on economic merits Link
Iceland's Prime Minister Kristrún Frostadóttir advocates for restarting EU membership negotiations based on economic benefits rather than fears about Donald Trump's Arctic ambitions
Cook Islands-China deal sparks tension with New Zealand Link
Diplomatic tension has emerged between New Zealand and the Cook Islands over the latter's plan to sign a "Comprehensive Strategic Partnership" with China without prior consultation. Growing Chinese influence in Pacific threatens New Zealand's traditional regional partnerships.
The Moon, America's 51st state? Link
The Moon statehood movement has reached Times Square with a prominent advertising display, promoting the *ambitious* vision of making Earth's Moon the 51st US State: (🎩 @yojoflo for sharing)
Elon Musk’s assault on the US federal bureaucracy Link
Supposed efficiency drive is providing cover for a power grab by the executive branch
Musk rejects TikTok buyout Link
Elon Musk has firmly dismissed rumours about acquiring TikTok's US operations, stating he prefers building companies from scratch rather than acquiring them
Trump tariff reversal disrupts Chinese e-commerce shopping deals Link
President Trump's implementation and subsequent temporary reversal of a 10% tariff on Chinese imports under $800 has created significant disruption in the e-commerce sector.
US Postal Service resumes China shipments amid tariff changes Link
The US Postal Service announced the resumption of parcel acceptance from Hong Kong and China following a temporary suspension triggered by President Trump's trade restrictions.
Democrats threaten shutdown over Trump's spending freeze Link
The looming March 14 government funding deadline has created significant tension between Democrats and Republicans, with Democrats gaining leverage due to the GOP's need for their votes to avoid a shutdown.
Investors chase bizarre Trump-linked trades as obvious bets fade Link
Investors are shifting from conventional Trump-related investments to more speculative "Trump trades" based on potential policy changes and personal relationships in the second Trump administration.
US begins controversial migrant deportations to Guantanamo Bay Link
International tensions rise as Mexico opposes third-country deportations.
El Salvador offers to jail US deportees in mega-prison Link
El Salvador's President Nayib Bukele has made a bold proposal to house deported "dangerous criminals" from the United States in his country's prison facilities, which includes the world's largest correctional facility.
"We’ll all have to go vegan” Link
US business groups warn agriculture sector will collapse without foreign labour
💭Meme stream
The usual random clutch of mind-diverting eclectica this week…
⭕Perfect Einstein ring
Astronomers have discovered an exceptionally rare perfect Einstein ring around galaxy NGC 6505, located 590 million light-years from Earth, using the European Space Agency's Euclid space telescope. (The perfect alignment reveals a previously unseen distant galaxy behind the foreground galaxy, its light bent by gravitational lensing.)

📝Summer of Protocols
The Summer of Protocols is currently running a “Protocol Fiction” writing event in Austin, combining in-person gatherings with online collaboration from February 10-14, with a US$750 prize. Nice - fiction drives imagination on future digital infrastructure.
🎞️Levels
Added to my watch list: Sci-fi film Levels, released at the end of last year: trailer looks intriguing, kinda like a lo-fi The Matrix … (but Rotten Tomatoes reviews not so inviting…). Cara Gee stars (one of the strongest actors in The Expanse series).
🧀Made With 0% American Cheese
Spotted in Canada:

🙏🙏🙏 Thanks as always to everyone who takes the time to get in touch with links and feedback.
A bientôt
Ben
(…he types as he summarises the AI-generated summary…)
https://www.vox.com/policy-and-politics/2020/1/16/20991816/impeachment-trial-trump-bannon-misinformation
It's a longish read, but James Hansen is always worth the effort. He blames the recent surge in warming on reduced shipping aerosols... https://www.columbia.edu/~jeh1/mailings/2025/Acceleration.12Feb2025.pdf