DeepSeek-R1⚠️ self-improving AI?🌀 evolutionaryscale🌱 trump and dump🪙 a distinct lack of AI vibrancy📉 mousegoggles🐁👓 DNA for books🧬 ...and I feel fine😏 #2025.03
Make America Grift Again 💸
Welcome to this week's Memia scan across emerging tech and the exponentially accelerating future. As always, thanks for being here!
ℹ️ Memia sends *very long emails*, best viewed online or in the Substack app.
🗞️Weekly roundup
The most clicked link in last week’s newsletter (3% of openers) was the Realbotix barbie doll humanoid robot. Once again: yuk now, but imagine one year on...🤢
🙏Thanks to those who responded to my reader poll last week. Results shown below:
That’s heartening (especially everyone who said they read all of it… although, er, *survivorship bias*😂) - as I said last week, I’m aiming for the newsletter to be around a 30-minute read, so that’s approximately the right ballpark.
🔄 ICYMI
My 2025 AI predictions finally out yesterday… (already in play with this week’s news of DeepSeek R1 and Trump’s instant repeal of the Biden AI executive order!)
📈The week in AI
The week's AI news and releases. The momentum continues to build…
📜 My AI executive order trumps your AI executive order
LOL😂… one of my AI predictions was prescient by less than 3 hours! Published at 11:20am NZT yesterday:
Timestamp on this Reuters story: 1:58pm NZT:
🌍 AI geopolitics
Into the vacuum we go… in the absence of any concrete direction on AI from the new US administration (other than memecoins and TikTok unbans… see Zeitgeist below), this week’s big open-source frontier model announcement from Chinese lab DeepSeek — surely no coincidence that it landed on the same day as Trump’s inauguration — changes the geopolitical landscape of AI considerably. (More in “AI Releases” below.)
Zhipu blacklisted The last hurrah of the Biden Administration - Chinese AI firm Zhipu (maker of the GLM series of LLMs, backed by US$400 million of Chinese investor funds) is the latest to be blacklisted, singled out by the US Dept of Commerce citing concerns about the alleged advancement of Chinese military capabilities through its AI research. (And, like, isn’t Anduril doing exactly this with OpenAI??)
China responded by launching an anti-dumping investigation into US semiconductors… it’s anyone’s guess whether the Trump administration doubles down on export restrictions, tariffs and sanctions and sets off an all-out tech trade war.
US-China collaboration drives global AI research forward Despite the geopolitical tensions, analysis from Georgetown University's Emerging Technology Observatory, based on a database of over 260 million research papers, found that AI research collaboration between the US and China has flourished over the past decade, with the two nations emerging as the most frequent research partners. Go figure:
📉A distinct lack of AI vibrancy
The Global AI Vibrancy Tool, developed by Stanford's Institute for Human-Centered AI, analyses AI development across 36 countries using 42 distinct indicators grouped into 8 pillars: R&D, Responsible AI, Economy, Education, Diversity, Policy and Governance, Public Opinion, and Infrastructure. The US comes out on top, with China second and the UK punching above its weight in 3rd place:
My own abode 2nd from bottom… (cue much brow-beating on LinkedIn):
I’m always a bit sceptical of how useful these national "AI league tables” actually are (are nation-states even the right aggregate entity to be scoring here?). Nevertheless, it’s a data-based signal for policymakers to identify national AI strengths and weaknesses to work on… noting that the data only runs to the end of 2023, so it will be interesting to see whether anything changed significantly last year. The rankings have been mostly consistent over the last 6 years… although go UAE!
📈Miracle or myth? AI and productivity growth An OECD working paper from November last year provides a comprehensive analysis of AI's expected impact on macroeconomic productivity over the next decade. The report expects AI to boost annual productivity growth by 0.25-0.6%… although there is a very wide range of opinions among economists:
The report calls out multiple policy levers to enhance AI's economic benefits if properly implemented (summarised by Claude):
AI diffusion and adoption policies
Support firms' capabilities to adopt AI through education and skills development
Improve access to digital technologies, including through liberalised digital trade
Ensure markets remain competitive to incentivize technology adoption by lagging firms
Support open-source solutions as alternatives to closed ones to facilitate diffusion
Demand-side policies for AI-powered goods and services
Ensure safety and reliability of AI to build trust among users
Strike the right balance in regulation between safety and innovation
Improve transparency about AI capabilities
Resolve legal uncertainties around accountability
Foster social dialogue to enhance workplace acceptability of AI
Factor reallocation policies
Support workers transitioning between jobs/sectors through retraining programs
Implement effective active labour market policies
Ensure well-functioning capital markets to facilitate productive allocation of capital
Recognise growing importance of intangible assets in financial systems
FWIW I always think that economists and those of us in tech talk apples and oranges when it comes to the word “productivity” - Ray Kurzweil’s position in his recent book The Singularity Is Nearer is worth delving deeper into:
“…This has been one of the great economic mysteries of the past decade. With information technology transforming business in so many ways, we'd expect to see much stronger productivity growth. Theories abound as to why we haven't.
If automation is really having such a huge impact, there appears to be several trillion dollars of the economy "missing." In my view, which has been growing in acceptance among economists, much of the explanation is that we don't count the exponentially increasing value of information products in GDP, many of which are free and represent categories of value that did not exist until recently. When MIT bought the IBM 7094 computer I used as an undergraduate for around $3.1 million in 1963, that counted for, well, $3.1 million ($30 million in 2023 dollars) in economic activity. A smartphone today is hundreds of thousands of times more powerful in terms of computation and communication and has myriad capabilities that did not exist at any price in 1965, yet it counts for only a few hundred dollars of economic activity, because that is what you paid for it…
…So the problem is that GDP naturally counts today’s $900 chip as equivalent to one produced over two decades ago, even though the current one is 72,000 times more powerful for the same price.”
A better theory of exponential economics would surely help shape better economic policy for the 21st century…!?
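To make Kurzweil’s point concrete, here is the arithmetic from the quote in a few lines of Python (the figures are his; the “value per chip” framing below is just my own toy illustration):

```python
# Kurzweil's point in one calculation: GDP records the price paid, not the
# compute delivered. Figures are taken straight from the quote above.

chip_price = 900          # USD: what GDP records for the chip, then and now
compute_multiple = 72_000 # today's $900 chip vs. one from ~two decades ago

gdp_recorded_then = chip_price   # ~$900 of measured economic activity
gdp_recorded_now = chip_price    # still ~$900 of measured economic activity

# Compute delivered today, priced at the old chip's price/performance:
value_at_old_price_performance = chip_price * compute_multiple

print(f"GDP recorded then:  ${gdp_recorded_then:,}")
print(f"GDP recorded now:   ${gdp_recorded_now:,}")
print(f"Same compute at the old price/performance: ${value_at_old_price_performance:,}")
# ≈ $64.8 million of "invisible" value per chip that never shows up in GDP
```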
🌀Self-improving AI?
OK, so things got interesting this week.
The original articulation of the concept of an “Intelligence Explosion”, by British mathematician Irving “Jack” Good in 1965, underpins the whole techno-Singularity hypothesis:
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."
Last week, AI oracle Gwern Branwen (he who independently grokked the AI scaling hypothesis back in 2020) posted a comment to LessWrong describing an “AI improving the next AI” virtuous feedback loop apparently setting San Francisco house parties abuzz:
Illuminating. So have we reached AGI takeoff yet? In an edit to his comment above, Gwern says he doesn’t think so… however, AI researcher Dave Shapiro is definitely on board:
“Okay, we know what Ilya saw, and it wasn't just Q* and Strawberry. That was the first step. What he really saw was the oscillation between these three steps:
1. Scaling laws of training and compute. We can reliably predict how much data/compute are required to get to the next level of AI models.
2. Inference time compute scaling laws. Letting the models think longer (spending more tokens on thinking and such) yields orders of magnitude improvements.
3. Distillation, which is where you use one "teacher model" to train the next generation "student model" which is sort of like going from a pidgin language to a true creole language. (magically better)
Combine that with increasing evidence that AI is generalizing beyond its training distribution and it's GAME OVER.
This is it. But that's not even the best part. It seems like every AI shop figured this all out at the same time. There is no moat. ASI will be for everyone.“
Distillation:
Virtuous feedback loop:
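For anyone unfamiliar with the distillation step Shapiro describes, here’s a minimal sketch of the classic teacher-student recipe in PyTorch. (This is the generic Hinton-style soft-label loss; when frontier labs talk about distilling reasoning models they often just mean fine-tuning a small model on a big model’s generated outputs, but the principle of a stronger model supervising a weaker one is the same.)

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Knowledge distillation: the student matches the teacher's softened
    output distribution, plus ordinary cross-entropy on ground-truth labels.
    T is the softmax temperature."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # KL term scaled by T^2 keeps gradient magnitudes comparable (Hinton et al.)
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage: a large "teacher" supervises a smaller "student" on the same batch.
teacher = torch.nn.Linear(128, 10)   # stand-in for a big pretrained model
student = torch.nn.Linear(128, 10)   # stand-in for the smaller model being trained
x = torch.randn(32, 128)
y = torch.randint(0, 10, (32,))
with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
```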
An amazing time to be alive… (and then… Deepseek R1 below for evidence that, indeed, “every AI shop figured this all out at the same time“.)
PLUS: Self-adaptive LLMs Also this week, Japanese AI research company Sakana released a paradigm-altering paper describing Transformer²: Self-Adaptive LLMs, AI models that dynamically update their weights at inference time for different tasks:
Good explainer video here from Matt Berman:
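As I read it, the core trick is decomposing each weight matrix with SVD and letting a small task-conditioned vector rescale the singular values at inference time. A heavily simplified sketch of that idea (illustrative only, not Sakana’s actual implementation; the “expert vector” here is made up):

```python
import torch

class SVDAdaptedLinear(torch.nn.Module):
    """Toy singular-value adaptation: keep a frozen base weight's SVD factors
    and rescale the singular values with a per-task vector z chosen at inference."""
    def __init__(self, weight: torch.Tensor):
        super().__init__()
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("U", U)
        self.register_buffer("S", S)
        self.register_buffer("Vh", Vh)

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # z has one scale per singular value; z = ones recovers the base weight.
        W_adapted = self.U @ torch.diag(self.S * z) @ self.Vh
        return x @ W_adapted.T

base = torch.randn(64, 128)            # frozen pretrained weight (out_dim, in_dim)
layer = SVDAdaptedLinear(base)
z_math = torch.ones(64) * 1.1          # pretend "math expert" scaling vector
out = layer(torch.randn(4, 128), z_math)
```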
🏢AI industry beat
Outside the frontier lab bubble (marginally)...
Microsoft unveiled 3 big AI initiatives to kick off 2025:
“As we begin the new year, it’s clear that we’re entering the next innings of this AI platform shift. 2025 will be about model-forward applications that reshape all application categories. More so than any previous platform shift, every layer of the application stack will be impacted. It’s akin to GUI, internet servers, and cloud-native databases all being introduced into the app stack simultaneously. Thirty years of change is being compressed into three years!“ — Satya Nadella
Establishing a new CoreAI - Platform and Tools division, merging its Dev and AI platform teams
Introducing pay-as-you-go agents for Copilot Chat business users.
Integrating Office AI features into Microsoft 365 for consumers, accompanied by a price increase.
Coverage from: Ars Technica | The Verge
London-based AI avatar startup Synthesia raised a significant $180 million funding round led by NEA, achieving a US$2.1 billion valuation.
French AI lab Mistral is partnering with Agence France-Presse (AFP) news agency in a multimillion-euro deal to combat misinformation and maintain factual accuracy - signalling a new competitive line between European and US approaches to AI content moderation. (And also clutching at hope for new revenue streams for desperate legacy news organisations?)
Replit CEO Amjad Masad revealed a dramatic shift in the company's direction following the success of their "Agent" product, which can create working software applications from natural language prompts:
Replit’s AI coding tools now enable non-programmers to create custom software, making traditional coding skills less relevant
Replit's revenue grew 5x in six months demonstrating strong market demand for AI-powered software development.
Anysphere, the company behind Cursor AI IDE, raised US$105M at a US$2.5Bn valuation. I’ve been a user for 5 months now (when I have time to code) and am a true believer… also I hadn’t realised that Anysphere bought Supermaven in November - I was briefly an enthusiastic user of that tool as well before hopping across.
🆕AI releases
This week’s AI releases… a momentous week.
⚠️Chinese AI lab Deepseek announced the release (under fully open-access MIT licence) of their new R1 reasoning model, including a full research paper detailing how the models were constructed.
Amazingly, this open-source model demonstrates benchmark performance competitive with OpenAI’s o1:
PLUS: 6 small open-source “Distilled” models derived from R1:
PLUS: It is available via API for as little as:
US$0.14 / million input tokens (cache hit)
US$0.55 / million input tokens (cache miss)
US$2.19 / million output tokens
…WAY lower than OpenAI’s $20/month or $200/month “Pro” subscription prices.
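To put those per-token prices in context, here’s a quick back-of-envelope cost calculator using the rates listed above (the request volumes are invented for illustration):

```python
# DeepSeek R1 API pricing from the list above (USD per million tokens).
PRICE_IN_CACHE_HIT = 0.14
PRICE_IN_CACHE_MISS = 0.55
PRICE_OUT = 2.19

def r1_request_cost(input_tokens: int, output_tokens: int, cache_hit: bool = False) -> float:
    """Cost in USD for a single API call at the listed rates."""
    in_rate = PRICE_IN_CACHE_HIT if cache_hit else PRICE_IN_CACHE_MISS
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * PRICE_OUT

# Hypothetical heavy month: 2,000 requests, ~3k tokens in / ~2k tokens out each.
monthly = sum(r1_request_cost(3_000, 2_000) for _ in range(2_000))
print(f"~US${monthly:.2f} for the month")   # ≈ US$12, versus a US$200/month Pro subscription
```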
Remember, OpenAI’s o1-preview model was only launched in September last year - and o1 itself only on December 5th! Now, as Georg Zoller asks acerbically: whither OpenAI’s moat?
“OpenAI’s performative 200$ pricing experiments were more acts of desperation and value signalling than rooted in solid market economics……DeepSeek-R1, [DeepSeek’s] “reasoning model” not only is head to head with OpenAI’s o1, and available without 200$ club ticket on their chat app, their API pricing is also a tiny fraction of OpenAI’s and the weights can be downloaded for free on huggingface for self hosting… eliminating concerns about sending data to China…
What now remains for Sam Altman is already in progress - desperate diplomacy in front of the Orange Throne begging to rescue the encircled and harried OpenAIs investors isolated on the AGI hill.
The Red Scare is a good tale here, projecting visions of advanced technology in American hands that will save the future if only the security establishment merges with OpenAIs.
Unfortunately for them, even that will only work as long as OpenAI is paying the new king. Zuck has bought equal access and will make the case that Opensource is the only way to not get crushed by an equally Open Source opponent.
“We are losing $$ at 200/mo” 🫠 Reminder: OpenAI is currently losing US$5 billion per year (that's US$9,500 per minute), with some predicting that may increase to US$14 billion by 2026.
The CCP is surely across this release (not-so-coincidentally on the same day as Trump’s inauguration): US-based @kimmonismus’ take:
“…DeepSeek's paper makes it clear that distillation and RL1 can probably be used to create even better models. The question now is: why? Why are they publishing it, why are they putting it in people's hands?
The answer: to exert pressure. Chinese models are getting better and better, putting more pressure on US companies. On the one hand, DeepSeek is acting as a “good guy” by offering its model fully open source and free of charge, but on the other hand, OpenAI, Anthropic and co now have to show what they have; they are forced to reveal their cards.
Ultimately, it is a political game and a power struggle, and one must not be naive enough to believe that DeepSeek, as a Chinese company, is simply very nice. The Chinese government will certainly have given the go-ahead. Just as OpenAI is now also meeting with high-ranking officials and making decisions. What follows from this?
…AI is certainly seen as a Manhattan Project 2.0. The long-term possibilities are hardly manageable. Only one thing is clear: neither side wants to let the other win and be the first in the race for AGI and ASI.
The release must be read against this background. Today, China showed OpenAI the middle finger. They say: we are the good guys, they are the closed ones. We are transparent, you work with the government. Today's release of DeepSeek was a declaration of war.“
In a similar vein: Alberto Romero has been covering DeepSeek since late 2023. His take: DeepSeek Is Chinese But Its AI Models Are From Another Planet:
“.. [In 2025] The geopolitical risk discourse (democracy vs authoritarianism) will overshadow the existential risk discourse (humans vs AI). DeepSeek is the reason why…
Three questions…:
Does China aim to overtake the United States in the race toward AGI, or are they moving at the necessary pace to capitalize on American companies’ slipstream?
Is DeepSeek open-sourcing its models to collaborate with the international AI ecosystem or is it a means to attract attention to their prowess before closing down (either for business or geopolitical reasons)?
How did they build a model so good, so quickly and so cheaply; do they know something American AI labs are missing?”
It also appears that R1 has fewer guardrails than the US commercial models - and “grown up mode” is (refreshingly?) available by default:
So 2025 begins with a bang … open source frontier AI which is only 1 month behind the US SOTA. For those of us (over 7.6 billion people and counting) who *don’t* subscribe to the current tide of oligarchic US exceptionalism sweeping across Washington right now, this is great news. As I said in Memia 2025.01 commenting on DeepSeek v3:
“Every AI developer and every government in every country outside the US should download and add DeepSeek to their library of open source AI models to maintain some degree of independence from a handful of US commercial labs.“
Make them dance2.
(Could the next DeepSeek/OpenAI/Anthropic/Google challenger be an international, distributed, decentralised, open-source effort?!)
OpenAI’s next trick: o3-mini … coming in 2 weeks?
(“Worse than o1 pro at most things… but very fast!” according to @sama )
Hailuo MiniMax-Text-01 and MiniMax-VL-01 (multimodal) are now open source… and API available for only US$0.20 per million input tokens and US$1.10 per million output tokens.
Luma Ray2: “a large–scale video generative model capable of creating realistic visuals with natural, coherent motion. It has strong understanding of text instructions and can take image and video as input.”
Omnitool (led by the aforementioned Georg Zoller) - an open-source AI desktop app to quickly discover, learn, evaluate and build with thousands of generative AI models.
Kaiber In a similar vein, Kaiber Creative Superstudio (🎩 @SamRag for the tip):
Lovable Lovable Dev - another text-to-app offering competing with Replit:
Getting more biological…:
🌱EvolutionaryScale released ESM3:
“Simulating 500 million years of evolution with a language model“
Founder Alex Rives explains:
“ESM3 is a generative language model that reasons over the three fundamental properties of proteins: sequence, structure, and function.
Today we're making ESM3 available free to researchers worldwide via the public beta of an API for biological intelligence. Trained with over a trillion teraflops of compute, this is the first time a model of this scale has been trained for biology, pushing the frontier of AI for biological discovery and engineering.
ESM3 learns to represent the immense complexity of protein biology, learning from billions of natural proteins. From this training it developed the capability to design proteins, responding to complex prompts combining atomic level details and high level instructions to generate new proteins.
ESM3 can explore protein space far beyond natural evolution. We prompted ESM3 to generate a fluorescent protein at a far distance from any known fluorescent proteins, searching an unknown region of protein space, to discover a new fluorescent protein.
We estimate this is equivalent to simulating five hundred million years of evolution.“
Amazing.
OpenAI is also getting into the scientific AI model game - developing a new AI model called GPT-4b micro that enhances the efficiency of cellular reprogramming using Yamanaka factors - proteins that convert regular cells into stem cells - achieving a 50x improvement in stem cell conversion efficiency and potentially accelerating longevity research.
🔬AI research
Quite a broad range of AI and AI-related research to cover this week:
Titans is a family of neural architectures that combines the strengths of attention mechanisms and long-term memory modules, enabling 2M+ token context windows for “needle-in-haystack” tasks, far exceeding current limits, and outperforming existing models across language, reasoning, genomics and time-series tasks.
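I haven’t implemented Titans, but the general pattern of pairing local attention with a separately updated long-term memory can be sketched conceptually like this (a toy illustration of the memory-plus-attention idea, emphatically not the paper’s architecture):

```python
import torch

class ToyMemoryAugmentedBlock(torch.nn.Module):
    """Conceptual toy: short-range attention plus a recurrently updated
    long-term memory vector read back into each chunk. Not Titans itself."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mem_update = torch.nn.GRUCell(d_model, d_model)   # "write" to memory
        self.mem_read = torch.nn.Linear(d_model, d_model)      # "read" from memory

    def forward(self, chunks):
        # chunks: list of (batch, chunk_len, d_model) tensors streaming in order.
        batch, d_model = chunks[0].shape[0], chunks[0].shape[-1]
        memory = torch.zeros(batch, d_model)
        outputs = []
        for chunk in chunks:
            attn_out, _ = self.attn(chunk, chunk, chunk)         # local attention only
            outputs.append(attn_out + self.mem_read(memory).unsqueeze(1))
            memory = self.mem_update(chunk.mean(dim=1), memory)  # summarise chunk into memory
        return torch.cat(outputs, dim=1)

block = ToyMemoryAugmentedBlock()
stream = [torch.randn(2, 16, 64) for _ in range(8)]   # 8 chunks of 16 tokens
out = block(stream)                                    # (2, 128, 64)
```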
Autonomous molecular nanostructures Researchers at TU Graz are developing an autonomous AI system to revolutionise the construction of complex molecular nanostructures using scanning tunneling microscopes, which could dramatically accelerate the creation of complex molecular structures and nano-circuits.
AI creates bizarre but superior wireless chip designs Princeton Engineering and IIT researchers have developed a generative AI system that transforms wireless chip design, dramatically reducing design time from weeks to hours while discovering unconventional but highly effective layouts. For example:
We know what you’re thinking Researchers have developed a novel approach using the transformer architecture to predict human brain states up to 5 seconds ahead from fMRI data, demonstrating that transformer models can effectively learn complex brain activity patterns. (The usual philosophical questions about free will etc. arise…)
Commentary: Rohan Paul.
📚Research using AI
AI tutors - Ethan Mollick surfaced a recent pilot programme in Nigeria's Edo State which claims to revolutionise education by using AI chatbots as tutors, compressing two years' worth of learning into just six weeks:
“The results of the randomized evaluation, soon to be published, reveal overwhelmingly positive effects on learning outcomes. After the six-week intervention between June and July 2024, students took a pen-and-paper test to assess their performance in three key areas: English language—the primary focus of the pilot—AI knowledge, and digital skills
Students who were randomly assigned to participate in the program significantly outperformed their peers who were not in all areas, including English, which was the main goal of the program. These findings provide strong evidence that generative AI, when implemented thoughtfully with teacher support, can function effectively as a virtual tutor.
…The learning improvements were striking—about 0.3 standard deviations. To put this into perspective, this is equivalent to nearly two years of typical learning in just six weeks.“
More analysis: Ethan Mollick | Microsoft’s Mustafa Suleyman | Psychology Today
Fake detector Researchers at the UK’s Keele University claim to have developed an AI tool that achieves 99% accuracy in detecting fake news, using “ensemble voting” between multiple machine learning models working together to provide more reliable fact-checking.
For now. This will become a new adversarial benchmark to be broken.
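For the curious, “ensemble voting” just means several independent classifiers voting on each article. A bare-bones sketch of the idea with scikit-learn (toy data and models of my own choosing, not Keele’s actual pipeline):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.pipeline import make_pipeline

# Tiny toy corpus; a real system trains on a large labelled news dataset.
texts = ["scientists publish peer-reviewed study",
         "shocking secret they don't want you to know",
         "central bank announces rate decision",
         "miracle cure doctors hate"]
labels = [0, 1, 0, 1]   # 0 = genuine, 1 = fake

model = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[("lr", LogisticRegression()),
                    ("nb", MultinomialNB()),
                    ("rf", RandomForestClassifier(n_estimators=50))],
        voting="soft",   # average predicted probabilities across the ensemble
    ),
)
model.fit(texts, labels)
print(model.predict(["you won't believe this one weird trick"]))
```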
See also: Now that BIG US social media has abdicated any responsibility for what appears on their platforms, the Omni-Net Project is an open proposal to decentralise content moderation - shifting it out to the “edge”:
“The Omni-Net Project seeks to reimagine social networking by returning control to individual users. At its core, it is:
Decentralized: Instead of relying on a central server or corporate owner, users form a peer-to-peer mesh where each node can store, relay, and serve content.
User-Centric: Every participant retains full ownership and control of their data, with the freedom to import content from any social network or data feed.
AI-Enhanced: Automated content analysis, deduplication, and personalization become possible through locally run or network-provided AI models.
Community-Developed: The network itself evolves via a swarm of AI-driven coding agents guided by user priorities, making bug fixes and building new features without relying solely on human volunteers.“
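To make “decentralized and user-centric” slightly more concrete: one common building block for this kind of network is a content-addressed, author-signed post record that any peer can store, relay and verify without a central platform. A minimal sketch under my own assumptions (the Omni-Net proposal doesn’t specify a record format):

```python
import hashlib
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical record format: content-addressed, signed by the author's own key,
# verifiable by any peer that relays it. Not Omni-Net's actual spec.
author_key = Ed25519PrivateKey.generate()
author_pub = author_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw).hex()

post = {"author": author_pub, "text": "hello from the edge", "ts": 1737500000}
payload = json.dumps(post, sort_keys=True).encode()

record = {
    "cid": hashlib.sha256(payload).hexdigest(),   # content address (what peers request)
    "payload": post,                              # the data the user fully owns
    "sig": author_key.sign(payload).hex(),        # proof of authorship, no platform needed
}
print(record["cid"][:16], "signed by", post["author"][:16])
```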
This approach is definitely worth considering as the technology to implement it becomes exponentially more powerful every day…
AI elderly companions Singapore is aggressively deploying AI solutions to address its eldercare challenges as it faces a critical shortage of healthcare workers and a rapidly aging population. Solutions include early detection of depression through AI voice analysis and “AI companions” (humanoids) to reduce elderly loneliness. This is the future we wanted, right?
Cooking nuclear fusion in your kitchen with Claude?
Obviously I am sceptical enough to understand that this post below — in which Waterloo maths student Hudhayfa Nazoordeen claims to have built a nuclear fusor in his kitchen guided by AI — *could well be a spoof*… but nonetheless a signal of how physical science and hardware are rapidly moving into the domain of AI and software.
(Going deeper, I notice it’s a repost from August 2024 which was covered by some media and on the face of it seems authentic enough…?)
🔮[Weak] signals
Tech signals from near and far futures which aren’t AI, more or less...
🚀💥⚠️Rapid Unscheduled Disassembly (2025 edition)
SpaceX’s Starship Flight Test 7 exploded spectacularly over the Atlantic Ocean 8 minutes after launch, causing the FAA to delay or divert flights out of the debris area. Starship is grounded until an investigation into the “rapid unscheduled disassembly” is completed.
… however the mission did successfully catch the Super Heavy booster with “chopsticks” for a second time:
Also of note, halfway around the world: space debris from SpaceX rocket launches has forced Australian airline Qantas to delay multiple flights between Sydney and Johannesburg, as the southern Indian Ocean reentry zone intersects with their flight paths.
🫣Anon but for how long?
The US Supreme Court is weighing whether the requirement for age checks on porn sites is constitutional in a landmark case which could redefine online age verification (and hence identification) requirements across all digital platforms, challenging previous internet freedom precedents and potentially affecting broader online speech protections.
💬Social media
📸✨Flashes of brilliance Berlin-based developer Sebastian Vogelsang is behind Flashes, a new photo-sharing app built on Bluesky's AT Protocol, offering users an Instagram-like alternative within the 30M-user decentralized social media network.
🗺️ Mozi Ev Williams (co-founder of Twitter and Medium) has launched Mozi, a new privacy-focused social networking app designed to facilitate real-world connections rather than content sharing. The platform helps users discover when friends are in the same location, encouraging in-person meetups. Currently waitlisted and iOS-only.
🕶️XR
🐁👓VR goggles for mice MouseGoggles is a tiny immersive virtual reality headset for studying mouse neuroscience and behaviour:
🔒Crypto
💷 Bank of England unveils digital pound testing lab The BoE has unveiled a comprehensive blueprint for implementing a digital version of the British pound, alongside plans to launch a “Digital Pound Lab” in 2025.
The Framework outlines seamless integration with the UK’s existing currency, reducing implementation risks.
The Digital Pound Lab will test real-world applications, helping assess feasibility before major financial decisions.
No final commitment made to launching the Digital Pound, allowing flexibility to adapt to an evolving payments landscape.
Quantum computers won't threaten Bitcoin until 2030s? An article in CoinTelegraph assesses the impact of quantum computing on Bitcoin's security, with a panel of experts expecting quantum threats to emerge in the 2030s. Preparations for future quantum resistance are already underway: for example, a draft Bitcoin Improvement Proposal (BIP) known as QuBit introduces a new address type, Pay to Quantum Resistant Hash (P2QRH), which uses quantum-resistant signature schemes to protect against attacks.
Most importantly, like the enormous Ethereum roadmap, Bitcoin's cryptography upgrades can be implemented gradually without disrupting the network's functionality.
XRP now third-largest cryptocurrency Amidst ongoing crypto market highs, XRP’s value has grown 31.5% since the start of 2024, catapulting it to become the third-largest cryptocurrency by market cap:
Its market cap now exceeds US$183 billion, surpassing both Tether USD (US$138 billion) and BlackRock (US$149 billion), now ranking only behind Bitcoin and Ethereum.
Fuelled by Donald Trump's presidential election victory and expectations of pro-crypto policies, particularly the anticipated resignation of SEC Chair Gary Gensler and potential spot XRP ETF approval
New developments from Ripple Labs (the originators of XRP), including the launch of stablecoin RLUSD.
🛸Drones
🦅 BionicBird I found this online - the bleeding edge of the push towards drone miniaturisation. BionicBird makes biomimetic drones - basically tiny bird- and insect-like robots that can fly autonomously.
X-Fly is a sensor-assisted “Ornithopter Drone”:
The MetaBird is a smartphone-controlled flying bird:
Construction: Carbon fibre & liquid crystal polymer wings and “indestructible” foam body
Weight: Ultra-light 1.6g
Power: LiPo battery (58 mAh), quick 12-minute charge
Range: 100m Bluetooth 4.0 connection, phone app to control
Top speed: 19 kph
It’s real, here’s a reviewer video:
(Now imagine an autonomous swarm of these, insect-size, equipped with poisonous darts and an individual’s biomarkers for targeting…)
DJI removes drone flight restrictions Amid the current escalating US/China technology tensions, leading Chinese drone maker DJI announced the removal of its geofencing restrictions that previously prevented drones from flying over sensitive areas like airports, wildfires, and government buildings in the US.
(Nothing to see here…)
⚡🔋Energy
World's first floating nuclear plant hits 1 billion kWh milestone Russia's floating nuclear power plant, Akademik Lomonosov, has achieved a significant milestone by generating its first billion kilowatt-hours of energy since beginning operations in May 2020.
US$6B “uninterrupted” solar power with massive battery storage The United Arab Emirates is embarking on a groundbreaking US$6 billion renewable energy project that aims to solve one of clean power's biggest challenges: intermittency:
Combining 5 gigawatts of solar capacity with 19 gigawatt hours of battery storage to produce 1 gigawatt of “uninterrupted clean power” according to Abu Dhabi energy company Masdar.
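The claim roughly pencils out. A quick back-of-envelope sanity check, with my own assumed capacity factor and night length (Masdar hasn’t published these specifics, so treat the numbers as illustrative):

```python
# Sanity check on "5 GW solar + 19 GWh storage -> 1 GW uninterrupted".
# Capacity factor and night length are my own rough assumptions.
solar_gw, storage_gwh, firm_gw = 5, 19, 1
capacity_factor = 0.25     # assumed annual average for desert PV
night_hours = 14           # assumed worst-case hours with little or no sun

daily_solar_gwh = solar_gw * 24 * capacity_factor   # ≈ 30 GWh generated per day
daily_demand_gwh = firm_gw * 24                     # 24 GWh needed for 1 GW firm supply
overnight_need_gwh = firm_gw * night_hours          # 14 GWh must come from the battery

print("Enough energy over the day:", daily_solar_gwh >= daily_demand_gwh)   # True
print("Battery covers the night:  ", storage_gwh >= overnight_need_gwh)     # True
```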
🌊All at sea
A couple of links on maritime tech passed my way this week. Autonomy is going to have major effects on the accessibility / control of the planet’s ocean surface:
SubSeaSail, a San Diego-based company, is developing dual wind- and solar-powered Autonomous Undersea & Surface Vehicles (AUSVs) for ocean monitoring:
🔴📡Smart Buoys: HyperKelp makes solar-powered ocean monitoring buoys, providing “ocean data as a service”:
🧬DNA for books
Asimov Press is breaking new ground with their second anthology by offering the first commercially available book encoded in DNA, sold in both traditional and molecular formats. DNA storage technology enables preservation of content for tens of thousands of years.
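The core idea of DNA storage is just mapping bits onto bases. A toy two-bits-per-base encoder/decoder (purely illustrative; real systems, presumably including Asimov Press’s, add error correction and avoid troublesome sequences):

```python
# Toy illustration of DNA data storage: pack 2 bits into each base.
# Real systems add error correction and avoid homopolymer runs; this is just the core idea.
BASES = "ACGT"   # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASES[int(bits[i:i+2], 2)] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    bits = "".join(f"{BASES.index(b):02b}" for b in dna)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

seq = encode(b"It was the best of times")
assert decode(seq) == b"It was the best of times"
print(seq[:24], "...", f"{len(seq)} bases total")
```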
⏳Zeitgeist
Once around the non-AI, non-tech world lightly and avoiding the Trump mindwarp gravity well as much as possible... the usual topics:
🌡️Climate
Is global warming actually accelerating? We know that 2024 was the warmest year on record. But is global warming showing more severe impacts than climate models predicted? 2024 also revealed significant gaps between scientific forecasts and actual temperature data.
Record-breaking temperatures affected one-third of the world's population, creating unexpected hot spots where temperature records are consistently broken:
“Regions across the globe are heating up more intensely than anyone anticipated, creating mysterious hot spots where temperature records fall daily. Farmers watch crops wither under unprecedented heat. City planners and public officials scramble to protect millions from supercharged events. Insurance companies recalculate risk models that no longer work. We all need answers about our local climate future, but the models that guided our understanding of climate change are showing their limits, just when we need them most.“ — Ricky Lanusse
Anecdotally the disconnect between global climate predictions and local experiences continues to widen — we are now “officially” in uncharted territory:
*If* it is the case that global warming is accelerating outside of conventional climate models, then the cumulative effects will escalate in 2025, 2026, 2027. Build that scenario into your planning…
🌿Global ecosystems
💧Northern Aral Sea restoring In one piece of positive news… the Northern Aral Sea has experienced a remarkable 42% increase in water volume, reaching 27 billion cubic meters, nearly doubling the water surface area and reducing salinity by 4X, following the successful completion of the first phase of a World Bank-supported preservation project. A second phase is planned. (The Aral Sea is arguably the world’s largest man-made environmental disaster due to river flow diversion for agriculture and industry). Timeline from Wikipedia:
🦠Bird flu watch
Mutations in Texas patient raise pandemic concerns Texas Biomed researchers have identified nine concerning mutations in a human H5N1 bird flu strain which show increased disease severity and brain replication ability. Current antiviral medications remain effective, providing crucial defence before vaccine development.
🫣It’s the end of the world as we know it…
The accession this week of Donald Trump is a symptom of wider global instability as a new world order grapples to take hold. (See the flurry of ALL CAPITAL LETTERS EXECUTIVE ORDERS AND ANNOUNCEMENTS on the new Whitehouse.gov website for the tone of what we’re in for over the next 4 years… *if* we choose to pay attention.) Two takes which resonated with me this week:
Weimar 2.0? Historian Robert D. Kaplan argues that current global instability mirrors the Weimar Republic's chaotic final years:
“Today, China, Russia, and the United States, to say nothing of the mid-level and smaller powers, are all running a strange simulation of the Weimar Republic: that weak and wobbly political organism that governed Germany for 15 years from the ashes of World War I to the ascension of Adolf Hitler.
America’s Weimar syndrome may be obvious with the reelection of the institution-destroyer Donald Trump as president. But the entire world is one big Weimar now, connected enough for one part to mortally influence the other parts, yet not connected enough to be politically coherent. Like the various parts of the Weimar Republic, we find ourselves globally in an exceedingly fragile phase of technological and political transition.
I see no Hitler in our midst, or even a totalitarian world state. But don’t assume that the next phase of history will provide any relief to the present one. It is in the spirit of caution that I raise the subject of Weimar.“
(Article adapted from Waste Land: A World in Permanent Crisis by Robert D. Kaplan, published this month.)
Ezra Klein: It’s the end of the world as we know it…
(…and I feel fine.😏)
💸Make America Grift Again
(Big 🎩 to John McDermott for that headline!)
TikTok (and other ByteDance apps) were briefly shut down for around 8 hours… and then partially restored the day before Trump’s inauguration, anticipating the executive order for a reprieve which came the next day:
Bizarrely, Perplexity was amongst those who put in a last minute bid for TikTok’s US assets.
TikTok CEO Chew Shou Zi was amongst the “Hunting Trophy Cabinet” of tech CEOs on stage attending Trump’s inauguration (Apple’s Tim Cook was also there, but notably not Microsoft’s Satya Nadella). Great photo:
Now there’s a 90-day reprieve while Trump attempts to muscle “The Deal” through with TikTok owner ByteDance to arrange a sale to US owners. Most interesting will be whether Trump’s executive order changes matters for the companies which could still be liable for delivering TikTok’s service to US users: Google, Apple, AWS and others. (Illuminating once again to see the centralised choke-points on the commodity digital services we use every day.)
(And a reminder ICYMI: Jeffrey Yass, one of Bytedance’s largest shareholders, is a mega-donor to Trump’s political campaign).
🪙Trump and dump Also, just days before the inauguration… the Trump Organisation launched the $TRUMP (and since, $MELANIA) memecoins, reaching a combined market cap of over US$80 billion at one point… before rapidly declining to “just” US$8 billion.
Even without selling a single token, the Trump Organisation raked in an estimated US$58 million in a single day in trading fees alone.
CNN on point: Trump’s meme coin is a reminder of crypto’s dumbest use case.
Kyla Scanlon tells us everything else we need to know:
“Within hours, Trumpcoin became the template for narrative-driven wealth creation… It’s pump and dump season! It’s simple memecoinonomics:
Hype: A coin launches with buzz and excitement.
FOMO: People pile in, driven by fear of missing out.
Rug Pull: The creators sell their holdings, crashing the price and leaving investors with nothing but narrative.
It’s a bit sloppy. It’s a bit embarrassing! It’s extremely effective!
Follow the MoNeY: Why did Trump build his $hitcoin on Solana? Surely nothing to do with the crypto grifters in his Mar-a-Lago kitchen cabinet wanting to be made whole…
What is the easiest way to understand this if you’re outside the US?
🎭And then there were memes...
Once again my head too deep in the tech sand to filter much entertainment out of the ether this week… a meagre two offerings:
✝️TikTok resurrection
Christian mythology updated for 2025:
📹 OK Go again
Music video innovators OK Go have a new track out: made with 64 videos on 64 phones. As usual so impressive the imagination, effort and intricate planning that goes into their productions. (If somehow you have never come across OK Go before, check out their YouTube channel, starting with OK Go on Treadmills from 15 years ago).
🙏🙏🙏 Thanks for reading and as always to everyone who takes the time to get in touch with links and feedback.
Namaste
Ben
RL - Reinforcement Learning
https://www.theverge.com/23589994/microsoft-ceo-satya-nadella-bing-chatgpt-google-search-ai - search for “Dance”