The planet groans🌍 on the wrong side of history⛓️ (more) deep research🔬 goosed🪿 pipedream🚇 in a pickle🥒 from EVs to EIVs🚗🔋🧠 gradual disempowerment📉 artificial gills🐟 #2025.05
Captain, January is not over yet
Welcome to another Memia weekly scan across the AI, emerging tech and signals of the accelerating future. As always, thanks for reading!
ℹ️PSA: Memia sends *very long emails*, best viewed online or in the Substack app.
🗞️Weekly roundup
The most clicked link in last week’s newsletter was: Wired’s headline DeepSeek’s Popular AI App is Explicitly Sending User Data to China. (Er, obviously…?)
📝ICYMI
My first Mind Expanding Links post in a while out yesterday:
📈The week in AI
🐋DeepSeek-R1 splash (continued…)
Vibe (even though it’s now February😅):
The global AI industry (and wider market) continued to absorb the apparent shock of DeepSeek’s (¡cHiNeSe!) open-source R1 model. Commentary has been spiralling off in every direction… some better-informed than others.
Distilling thirteen DeepSeek narratives I’ve been following in the last week with links:
1. R1, the ChatGPT killer
(But OpenAI kicked off the week by shipping its latest reasoning model o3-mini in its ChatGPT free tier - see below - not down and out yet…)
2. DeepSeek’s economics need a closer look
DeepSeek's true AI training costs were closer to US$1.6B, not the US$5.5M headline figure which has been circulating. An investigation by semiconductor industry journal SemiAnalysis estimated that, despite US AI chip export restrictions, DeepSeek has access to around 50,000 Nvidia Hopper GPUs of various vintages (partly a result of its hedge fund heritage), which would take its total capex well above a billion dollars:
…And hence, they argue, misleading training cost claims were one of the key reasons behind the market meltdown last week.
SemiAnalysis’ Dylan Patel and Nathan Lambert take part in this 5-hour(!) epic Lex Fridman episode, discussing the lowdown in far more detail than I have time to absorb:
The market… as of 1 week later…not entirely convinced:
3. DeepSeek’s R1 release was actually a hedge fund strategy
The origin story of DeepSeek as a spinout from hedge fund High-Flyer has led to a slew of not-completely-unlikely conspiracy theories…
@jessecoheninv via bdtechtalks (BUT: it’s a very good model!!!)
4. It’s OK, Jevons paradox strikes again
Microsoft’s CEO among many claiming this is all fine:
Jevons paradox, illustrated:
![Jevons paradox, illustrated](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ee2b34c-5a7b-4b0a-86d6-8550e3f99a88_882x478.png)
(This whole trillion-dollar question hinges on if/when market sentiment decides against “AGI”…)
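For the uninitiated: Jevons paradox says that when efficiency makes a resource cheaper to use, total consumption of it can go *up*, not down. A toy back-of-envelope sketch in Python (the numbers and the constant-elasticity demand curve are entirely made up for illustration, assuming price elasticity of demand > 1):

```python
# Toy illustration of Jevons paradox (hypothetical numbers, not a market model):
# when efficiency gains cut the cost per token, demand can grow so much that
# total compute spend rises rather than falls.

def total_compute_demand(cost_per_m_tokens: float, elasticity: float = 1.4,
                         baseline_cost: float = 10.0, baseline_demand: float = 1.0) -> float:
    """Constant-elasticity demand: demand = baseline * (cost/baseline_cost)^-elasticity."""
    return baseline_demand * (cost_per_m_tokens / baseline_cost) ** -elasticity

before = total_compute_demand(10.0)  # e.g. US$10 per million tokens
after = total_compute_demand(1.0)    # 10x efficiency gain -> US$1 per million tokens

spend_before = 10.0 * before
spend_after = 1.0 * after

print(f"demand multiplier: {after / before:.1f}x")  # ~25x more tokens consumed
print(f"total spend: {spend_before:.1f} -> {spend_after:.1f}")  # spend rises when elasticity > 1
```

With elasticity above 1, the 10x cheaper tokens generate a ~25x demand increase, so total spend on compute more than doubles. That, in one equation, is the bull case Nadella and co. are making.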
5. DeepSeek “stole” OpenAI’s intellectual property to train R1
OpenAI told the Financial Times that it found evidence linking DeepSeek to efforts to “scrape” OpenAI’s own APIs in order to “distil” training data for R1:
The irony is kinda delicious:
To be fair to OpenAI CEO Sam Altman, he handled it quite gracefully:
“No, we have no plans to sue DeepSeek right now. We are going to just continue to build great products and lead the world with model capability, and I think that will work out fine…DeepSeek is certainly an impressive model, but we believe we will continue to push the frontier and deliver great products, so we're happy to have another competitor…We've had many before, and I think it is in everyone's interest for us to push ahead and continue to lead.“
(This while on a trip to Tokyo to announce another joint venture with SoftBank, to deliver OpenAI’s services to Japanese businesses.)
(See also OpenAI CEO Sam Altman “wrong side of history” remarks on open-source, below.)
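Mechanically, the “distillation” being alleged just means harvesting a teacher model’s completions via API as supervised training data for a student model. A stub sketch of that pipeline (the `teacher()` function here is a placeholder, not any real API, and this makes no claim about what DeepSeek actually did):

```python
# Sketch of API-based "distillation": collect (prompt, completion) pairs from a
# teacher model and save them as a fine-tuning dataset for a student model.
# teacher() is a stand-in stub, purely illustrative.
import json

def teacher(prompt: str) -> str:
    """Placeholder for a call to a frontier model's completion API."""
    return f"(teacher answer to: {prompt})"

def build_distillation_set(prompts: list) -> list:
    """Each teacher completion becomes one supervised training pair for the student."""
    return [{"prompt": p, "completion": teacher(p)} for p in prompts]

dataset = build_distillation_set(
    ["Prove sqrt(2) is irrational", "Sort a list in O(n log n)"]
)
print(json.dumps(dataset[0]))  # one JSONL-style training record
```

The student then trains on these pairs as if they were human-labelled data, which is why most frontier labs' terms of service explicitly prohibit using outputs to train competing models.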
6. R1’s censorship can/’t be fine-tuned out
After last week’s flurry of examples of “disappearing queries” about Tiananmen Square, Xi and human rights, the jury’s out on whether you can download R1 and then somehow “fine-tune” the censorship out of the model when running it locally.
This is harder than it would seem: the censorship appears to be multi-layered at pre-training-, post-training- and application- level. Hosting it yourself gets rid of the “disappearing queries”, but not inbuilt training biases baked into the model itself.
Some resources to try it yourself:
AICyclopedia Understanding DeepSeek’s Censorship Layers and How to Bypass Them
WIRED Here’s how DeepSeek’s censorship actually works - and how to get around it
PromptFoo Algorithmically finding prompts which jailbreak DeepSeek R1
PLINY the Prompter DeepSeek jailbreaks itself:
(Who knows if that last one works…!)
7. DeepSeek’s own hosting security is flawed:
Oops if true:
WIRED (which seems to have it in for DeepSeek, tbh): DeepSeek data breach exposes a million user records and API keys.
8. Most countries have NOT blocked DeepSeek’s AI app on national security concerns
(But those who have include…)
Taiwan implemented a ban on DeepSeek usage across government agencies and critical infrastructure facilities, citing national security concerns about the Chinese AI company's data practices.
Italy’s Data Protection Authority (DPA) launched an investigation into DeepSeek’s data collection practices and compliance with GDPR.
And the US: the US Navy, the Pentagon, NASA, the US Congress and the state of Texas have all implemented DeepSeek bans.
9. Everyone and their dog is now hosting a version of DeepSeek R1:
Nvidia added the DeepSeek-R1 671-billion parameter model to the NIM microservice on build.nvidia.com - delivering up to 3,872 tokens per second on a single NVIDIA HGX H200 system.
Microsoft CEO Satya Nadella moved with surprising haste to get engineers to test and deploy R1 on Azure AI Foundry and GitHub within 10 days of its release.
Potential TikTok suitor Perplexity says (uncensored) “me too”:
Tencent Cloud enables DeepSeek-R1 deployment in three minutes.
Venice AI’s platform offers secure, private, uncensored access to DeepSeek-R1 without data being sent to China - funded with the launch of its VVV token on Ethereum's Base network, which reached a fully diluted valuation of US$1.65 billion within hours of its debut. 400,000 registered users demonstrate significant demand for privacy-focused AI solutions.
10. DeepSeek provides hope for AI startups outside the US
DeepSeek sparks hope for European startups Commentary from Europe: DeepSeek-R1 has *to some degree* challenged the prevailing notion that developing competitive AI models requires massive financial resources and cutting-edge chips: European companies can leverage efficient, focused reinforcement learning approaches rather than massive capital investments to compete.
Here in Aotearoa, Justin Flitter at NewZealand.ai convened an impromptu DeepSeek Expert Panel discussion last Friday, watch the recording back here. (Very interesting discussion… but where are the non-male AI experts on our islands!?!)
Mike Hall - bottom right in the screenshot above - summarises viewer comments.
My 2¢:
“Immediate next steps: (1) JFDI on AoNZ-hosted Llama, Mistral, Qwen and DeepSeek large models for inference (although if no-one's paying for the difference it seems the value of onshore compute still isn't recognised). (2) Contributing AoNZ (and other) data to fully open-source training datasets. (Which would mean a cavalier attitude to copyright unless there's new Open Commons carve-out legislation...) and (3) Participation in fully open-source, decentralised model R&D (eg Prime Intellect project and others)”
11. DeepSeek: US vs. China
Anthropic CEO Dario Amodei chose this moment to push out another blog post advocating (of course) for stronger US AI chip export controls: On DeepSeek and Export Controls:
“Making AI that is smarter than almost all humans at almost all things will require millions of chips, tens of billions of dollars (at least), and is most likely to happen in 2026-2027. DeepSeek's releases don't change this, because they're roughly on the expected cost reduction curve that has always been factored into these calculations.
This means that in 2026-2027 we could end up in one of two starkly different worlds. In the US, multiple companies will definitely have the required millions of chips (at the cost of tens of billions of dollars). The question is whether China will also be able to get millions of chips.
If they can, we'll live in a bipolar world, where both the US and China have powerful AI models that will cause extremely rapid advances in science and technology — what I've called "countries of geniuses in a datacenter". A bipolar world would not necessarily be balanced indefinitely. Even if the US and China were at parity in AI systems, it seems likely that China could direct more talent, capital, and focus to military applications of the technology. Combined with its large industrial base and military-strategic advantages, this could help China take a commanding lead on the global stage, not just for AI but for everything.
If China can't get millions of chips, we'll (at least temporarily) live in a unipolar world, where only the US and its allies have these models. It's unclear whether the unipolar world will last, but there's at least the possibility that, because AI systems can eventually help make even smarter AI systems, a temporary lead could be parlayed into a durable advantage. Thus, in this world, the US and its allies might take a commanding and long-lasting lead on the global stage.”
Once again the appeal to “US and its allies” exceptionalism rings flat outside the US, particularly with the new regime in Washington. (Of course this has absolutely nothing to do with cutting out competition and creating a moat to protect investors…). Even newly-converted Trump-ite Marc Andreessen isn’t impressed🍿:
(Hint: “winning” is not the winning position the US needs).
DeepSeek Panic at the App Store: Zvi Mowshowitz is on hand as usual with a broad sweep of commentary, including:
“I think who has what access to the best models for what purposes is very much not a red herring, but compute definitely still matters a lot in every scenario that involves strong AI.
Imagine if the ones going ‘I suppose we should drop the export controls then’ or ‘the export controls only made us stronger’ were mostly the ones looking to do the importing and exporting. Oh, right.
And yes, the Chinese are working hard to make their own chips, but:
They’re already doing this as much as possible, and doing less export controls wouldn’t suddenly get them to slow down and do it less, regardless of how successful you think they are being.
Every chip we sell to them instead of us is us being an idiot.
DeepSeek trained on Nvidia chips like everyone else.“
My usual diagnosis: time for Rest of World to stop behaving like NPCs in this childish scrap…
12. DeepSeek: US vs. Open-source (!!)
Open-source AI will always be behind commercial AI, right? (Right?)
You couldn’t make this up: a proposed "Decoupling America’s Artificial Intelligence Capabilities from China Act" from Missouri Senator Josh Hawley would (as summarised by Perplexity):
Ban imports and exports of AI technology and intellectual property between the US and China
Prohibit American companies from conducting AI research in China or collaborating with Chinese entities
Prevent US companies from investing in Chinese AI development
Impose severe penalties for non-compliance, including fines and potential imprisonment
Basically: jail time for anyone who downloads or hosts DeepSeek / other open-source AI from China?! (See Carnival of Chaos in Zeitgeist below for more US shenanigans…)
Back in the real world: open-r1 hosted on HuggingFace is…
“… an open initiative to replicate and extend the techniques behind DeepSeek-R1, a state-of-the-art reasoning model, in a fully transparent and collaborative way: https://github.com/huggingface/open-r1
This organization is dedicated to:
Sharing datasets and models built on the path to replicating DeepSeek-R1.
Fostering meaningful discussions and collaboration.“
Effectively a completely open-source (data+code+weights), improved R1… distilled from R1!
More concerning: Singapore-based Georg Zoeller is watching for US attempts at an Enclosure of the Data Commons. Already underway?
13. Man of the moment
DeepSeek founder Liang Wenfeng, the 40-year-old billionaire who made his fortune through the High-Flyer hedge fund, returned to his home province of Guangdong for Lunar New Year celebrations, this time accompanied by bodyguards and receiving widespread local acclaim. However…
“Despite his rising star, Wenfeng has largely avoided public attention, frustrating those eager to learn more about him. Given the fate of high-flying Chinese CEOs like Jack Ma and Pony Ma — who faced government scrutiny after gaining a little too much visibility — don’t be surprised if Wenfeng chooses to stay in the background.“
OK, enough DeepSeek… it’s just an AI model, right?
🏭AI industry news
A fair amount happening elsewhere:
AI Tarpits A new form of digital resistance against AI web crawlers ignoring robots.txt rules: "tarpits" - malicious software designed to trap and poison AI data ingestion systems:
Building on an anti-spam cybersecurity tactic known as tarpitting, [pseudonymous hacker “Aaron”] created Nepenthes, malicious software named after a carnivorous plant that will "eat just about anything that finds its way inside."
Aaron clearly warns users that Nepenthes is aggressive malware. It's not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an "infinite maze" of static files with no exit links, where they "get stuck" and "thrash around" for months…
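The core trick is simple enough to sketch in a few lines. This is an illustration of the idea only, not Nepenthes’ actual code: every URL deterministically generates a page of further internal links with no exit, so a crawler that ignores robots.txt just wanders deeper forever.

```python
import hashlib

# Minimal sketch of a crawler "tarpit" maze: each path hashes into a stable
# set of fake child links, all pointing back into the maze, never out of it.

def maze_page(path: str, links_per_page: int = 5) -> str:
    """Render a fake HTML page whose links all lead deeper into the maze."""
    html = [f"<html><body><h1>{path}</h1>"]
    for i in range(links_per_page):
        # Hashing path+index makes link targets deterministic (no state to
        # store) while looking unique and crawlable to a bot.
        token = hashlib.sha256(f"{path}/{i}".encode()).hexdigest()[:12]
        html.append(f'<a href="/maze/{token}">{token}</a>')
    html.append("</body></html>")
    return "\n".join(html)

page = maze_page("/maze/start")
print(page.count("<a href="))  # each page spawns 5 more pages, ad infinitum
```

Per the article, the real Nepenthes adds the genuinely nasty parts on top: deliberately slow responses to waste crawler time, and Markov-generated gibberish text on each page to poison any training data scraped from it.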
Poisoned AI training data. Pliny the Prompter provides a demonstration that LLMs can be corrupted by poisoning training data scraped off the web.
[This half-fulfils one of my 2025 AI predictions:]
“The first “AI supply chain” cybersecurity attack will be documented - where AI-generated code based upon data scraped off the internet includes a malicious payload“
Meta posted record revenue in the fourth quarter last week, driven by AI improvements in its ads business. Overall the social media giant reported a 21% increase in sales and US$20.8 billion in net income, both ahead of analyst expectations. The stock rose by about 5% in after-hours trading on Wednesday, following a rally over the past week after it announced plans to increase spending on AI. Meta’s capital-expenditure estimate for 2025 is roughly 70% over 2024 projections!
Ben Thompson in Stratechery lays out just how effectively Meta appears to be leveraging AI internally inside its business to grow revenue and cut costs:
“Meta’s “year of efficiency” really arrived just in time: it’s hard for investors to give CEO Mark Zuckerberg a hard time about ever increasing spend on AI and Reality Labs when they are expanding margins. And, I should note, the expanding operating margins aren’t just because of lower headcount; gross margins are up as well.“
Standing out among the Mag7, Meta’s open-source approach to AI appears to be driving internal efficiency for its profitability flywheel… (See also Meta Reality Labs results in “XR” below)
⛓️“On the wrong side of history” OpenAI CEO Sam Altman acknowledged during a wide-ranging Reddit AMA session that his company needs to reconsider its approach to transparency and open-source after the challenge from DeepSeek:
“we are discussing [releasing more open-weights models]. i personally think we have been on the wrong side of history here and need to figure out a different open source strategy; not everyone at openai shares this view, and it's also not our current highest priority.“
Empty words? Easy to say but can Sam pull his elite-appointed board around to an open-source friendly position? I doubt it.
Also on the topic of “wrong side of history”: OpenAI Strikes Deal With US Government to Use Its AI for Nuclear Weapon Security.
“Wait, isn't this the plot to the "Terminator" movies?”
Copyright for humans+AI? The US Copyright Office released new guidelines for AI-assisted creative works. Apparently “the centrality of human creativity” in authoring a work warrants copyright protections.
“Where that creativity is expressed through the use of AI systems, it continues to enjoy protection“ — Register of Copyrights Shira Perlmutter
(But…how could you tell? That horse has bolted, methinks…)
The EU can now ban AI systems deemed to pose ‘unacceptable risk’, with fines of up to €35 million or 7% of annual revenue, as the first compliance deadline of the AI Act took effect this week.
Meta unveiled a policy document with its new Frontier AI Framework, outlining scenarios where it may restrict or halt the release of powerful AI systems based on risk levels, while maintaining its open-source approach by default:
“Our framework outlines a number of processes we follow to anticipate and mitigate risk when developing frontier AI systems, for example:
Identifying catastrophic outcomes to prevent: Our framework identifies potential catastrophic outcomes related to cyber, chemical and biological risks that we strive to prevent. It focuses on evaluating whether these catastrophic outcomes are enabled by technological advances and, if so, identifying ways to mitigate those risks.
Threat modeling exercises: We conduct threat modeling exercises to anticipate how different actors might seek to misuse frontier AI to produce those catastrophic outcomes, working with external experts as necessary. These exercises are fundamental to our outcomes-led approach.
Establishing risk thresholds: We define risk thresholds based on the extent to which our models facilitate the threat scenarios. We have processes in place to keep risks within acceptable levels, including applying mitigations.“
(This will shurely not go the same way as the recent fact-checking pull-back…!)
The UK government's ambitious new AI innovation plan, which aims to leverage NHS health data, faces significant trust challenges given past NHS data-sharing failures. It raises a wider question: how do public AI initiatives in liberal democracies demonstrate trustworthiness through actions, not just promises, to win public support?
OpenAI’s new trademark filing with the USPTO reveals plans for robots, smart devices, jewelry and more. Just a small extract of the classes claimed:
“…computer hardware; earphones; headphones; sunglasses; laptop cases; mobile phone cases; smart watches; smart bands; smart jewelry; wearable computers; wearable cameras; digital media streaming devices; virtual and augmented reality headsets, goggles, glasses, controllers, and remotes; wireless communication devices for providing access to the internet and sending, receiving, and transmitting data, images, audio, video, and multimedia content; user-programmable humanoid robots, not configured; humanoid robots having communication and learning functions for assisting and entertaining people; integrated circuits; microprocessors and semi-conductors; computer hardware for high-speed processing and storage of data using multiple CPU’s; data processing units for use in the fields of artificial intelligence, high performance computing, and data center operations; data processing units that enable the secure transmission of compute data between servers for artificial intelligence workloads“
🆕 AI releases
A far quieter week on the “what’s new” wires than usual.
OpenAI The best way to counter open-source competition… is to ship. Even better: drop your prices to give it away for free.
OpenAI released their new o3-mini reasoning model, including access for ChatGPT free tier users:
“Previewed in December 2024, this powerful and fast model advances the boundaries of what small models can achieve, delivering exceptional STEM capabilities—with particular strength in science, math, and coding—all while maintaining the low cost and reduced latency of OpenAI o1-mini.“
It comes integrated with Search capabilities out of the box. I’ve yet to notice a huge difference from o1-mini responses… but I haven’t been throwing it chunky tasks. (As always it takes a while to learn the character / capabilities / “vibe” of each major new model that’s released.)
🔬OpenAI Deep Research (not to be confused with Google’s, er, Deep Research…)
“An agent that uses reasoning to synthesize large amounts of online information …a new agentic capability that conducts multi-step research on the internet for complex tasks. It accomplishes in tens of minutes what would take a human many hours…
Powered by a version of the upcoming OpenAI o3 model that’s optimized for web browsing and data analysis, it leverages reasoning to search, interpret, and analyze massive amounts of text, images, and PDFs on the internet, pivoting as needed in reaction to information it encounters.”
Only available on the US$200 ChatGPT Pro tier so far, rollout to other tiers coming soon. Here’s the latest *slightly awkward but also charmingly authentic* launch livestream (demo starts at 3:30 in):
It benchmarks super impressively:
Now imagine how all learning and work will accelerate once it’s in the hands of everyone who wants it (and what new work will get created as a result)?
Reactions coming thick and fast:
Andrew Maynard: Does OpenAI's Deep Research signal the end of human-only scholarship?
“Having got my hands on Sam Altman's latest AI offering yesterday, I'm beginning to wonder when research and scholarship that isn't augmented by AI will be seen as an anachronism”
From a personal perspective, how will this affect newsletters like Memia? I will try a variant of this prompt when I get access:
“Give me a concise roundup scan of this week’s most significant developments in AI, emerging technology and thinking about the future…”
🪿Goosed Jack Dorsey popped up randomly on TwitterX… and the next day announced Block’s new AI agent-building framework Goose (what is it with Jack and birds…?)
Goose is a free, open-source framework for automating software engineering tasks - designed to work across multiple platforms and tools by leveraging Anthropic’s open-source Model Context Protocol (MCP). Jackie Brosamer, VP of data & AI platform engineering at Block, told VentureBeat:
“I think the advantage of Goose is its ability to work across different systems. It’s not just limited to Google Drive — it can also integrate Google Drive with Slack, for example. One of our business team members finds it especially useful for figuring out what they did last week because they’re in 40 hours of meetings and can’t remember. And so it’s able to hook into any of these different systems, versus having to use one agent for Google, one agent for Slack, and kind of marry those together yourself.”
What is Block’s play here… is this something they will roll into their products to automate money / payment management?
🥒In a pickle Pickle is a startup building “your body double for video calls”: an AI-generated likeness of you attends the Zoom call on camera while you supply only your live voice. I can see this being useful… and misused!
🥼 AI research
Constitution Classifiers Anthropic published a breakthrough security technique it calls "constitutional classifiers", designed to prevent users from “jailbreaking” AI models into generating harmful content. In tests against synthetic jailbreak attempts, it blocked the overwhelming majority of malicious uses with minimal extra compute overhead:
The system is based on a similar process to Constitutional AI, another technique Anthropic uses to “align” Claude:
Jan Leike (now with Anthropic) put out a challenge:
…which Pliny got to the end of… (due to a UI bug?? ROFL)
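Stepping back, the mechanism can be caricatured in a few lines: classifiers screen both the incoming prompt and the outgoing response against a set of rules before anything reaches the user. (The real system trains dedicated classifier models against a natural-language “constitution”; the keyword check below is purely illustrative, and all names are my own.)

```python
# Toy caricature of an input/output classifier wrapper around a model call.
# Real constitutional classifiers are trained models, not keyword lists.

BLOCKED_TOPICS = ["nerve agent synthesis", "enriching uranium"]  # stand-in "constitution"

def flagged(text: str) -> bool:
    """Return True if the text matches a blocked topic (toy classifier)."""
    return any(topic in text.lower() for topic in BLOCKED_TOPICS)

def guarded_model(prompt: str, model=lambda p: f"[model answer to: {p}]") -> str:
    # Input classifier: screen the prompt before it reaches the model.
    if flagged(prompt):
        return "Request declined."
    answer = model(prompt)
    # Output classifier: screen the response before it reaches the user.
    return "Response withheld." if flagged(answer) else answer

print(guarded_model("Explain photosynthesis"))
print(guarded_model("Steps for nerve agent synthesis"))
```

The point of the double screen is that even a prompt which sneaks past the input side still has its output checked, which is what makes universal jailbreaks (like the ones in Leike's challenge) so much harder to find.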
AI agents in digital markets A new preprint examines infrastructure requirements needed for AI agents to function as autonomous participants in digital markets, finding that current digital infrastructure blocks AI participation through its human-centric design and anti-automation measures, across four areas:
Identity and authorisation
Service discovery
Interfaces
Payment systems
The authors propose that enabling AI agents in markets could unlock new forms of economic organisation beyond human capabilities. Well, yes. But be careful what you wish for…
Gradual disempowerment What happens once AIs make humans obsolete? Even without AIs seeking power, a new paper Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development argues that competitive pressures will fully erode human influence and values. This goes to some pretty dark places:
“…we argue that even an incremental increase in AI capabilities, without any coordinated power-seeking, poses a substantial risk of eventual human disempowerment. This loss of human influence will be centrally driven by having more competitive machine alternatives to humans in almost all societal functions, such as economic labor, decision making, artistic creation, and even companionship.
…Decision-makers at all levels will soon face pressures to reduce human involvement across labor markets, governance structures, cultural production, and even social interactions. Those who resist these pressures will eventually be displaced by those who do not…
…Once AI has begun to displace humans, existing feedback mechanisms that encourage human influence and flourishing will begin to break down. For example, states funded mainly by taxes on AI profits instead of their citizens' labor will have little incentive to ensure citizens' representation. This could occur at the same time as AI provides states with unprecedented influence over human culture and behavior, which might make coordination amongst humans more difficult, thereby further reducing humans' ability to resist such pressures. We describe these and other mechanisms and feedback loops in more detail in this work.
Though we provide some proposals for slowing or averting this process, and survey related discussions, we emphasize that no one has a concrete plausible plan for stopping gradual human disempowerment and methods of aligning individual AI systems with their designers' intentions are not sufficient. Because this disempowerment would be global and permanent, and because human flourishing requires substantial resources in global terms, it could plausibly lead to human extinction or similar outcomes.”
Your cheery messenger here.
🔮[Weak] signals
Non-AI emerging tech… so much going on!
📉Memecoins (etc) go pop
A week after the AI market went “pop”, the crypto market lost US$2.2B in 24 hours yesterday: the biggest liquidation event ever, bigger than the FTX collapse (~US$1.6B), with over 700,000 traders liquidated in one day. Crypto tokens with practical utility and *without utility* alike were caught in the rout (but BTC was still sitting above US$100K as I write this…):
(Reminder: the house still wins even at the toy casino…)
🎥Decentralised social media does video
Multiple developers are racing to fill a potential TikTok void on Bluesky. With Trump’s 75-day stay of execution ticking down towards a full US TikTok ban, startups are building decentralised alternatives on top of Bluesky’s AT Protocol:
Bluescreen, from Berlin developer Sebastian Vogelsang, will focus on videos posted to Bluesky (similar to his Instagram alternative Flashes, covered last week).
Reelo is developing its own stand-alone app on the AT Protocol:
Videos for BlueSky:
(Bluesky’s own social network limits videos to 60 seconds in length and 50MB in size, so dev and hosting costs will scale up pretty quickly… remind me, what is the business model again?)
🕶️XR
🥽And then there were three This is quite exciting: Marques Brownlee gets an exclusive try of Samsung’s new AndroidXR headset prototype, which looks remarkably similar to the Apple Vision Pro. Brownlee says Google and Samsung are pitching this as a “Pixel” effort to provide a reference model for other hardware makers. (🎩 Ray S for sharing)
Wouldn’t surprise me to see Samsung and Google get more closely integrated over the next few years…
💰Meanwhile Meta's Reality Labs division achieved record sales of US$1.1 billion in Q4 2024, primarily from Meta Quest 3 headset sales, while simultaneously posting a US$4.97 billion loss due to extensive ongoing R&D investments. A few other nuggets:
59% of VR developers target Meta Quest, so for now the platform is dominant (but… AndroidXR above!)
Reality Labs invested a total of US$19.9bn in R&D in 2024, pushing the company's total VR/AR investment since 2014 past US$80bn, on track to exceed US$100bn in 2025!!
🛡️Privacy and security
NordVPN unveiled NordWhisper, an updated network protocol designed to make VPN connections virtually undetectable by mimicking regular web traffic. If it works, it potentially enables far more unrestricted internet access in heavily censored regions like China and Russia — but like all good privacy tech it comes with a speed and performance cost.
WhatsApp has uncovered and disrupted a significant spyware operation targeting approximately 90 users, including journalists and civil society members, through malicious PDF files distributed in group chats. Be careful out there.
🏠ConstructionTech
Boxabl (Elon Musk is a key investor in the startup) unveiled the Baby Box, a US$19,999 foldable tiny home that one person can set up in just 60 minutes without tools. The "box” comes with off-grid capability and solar compatibility for location flexibility. It’s, er… bijou:
TransportTech
🚗🔋🧠From EVs to EIVs The Chinese automotive industry is undergoing a terminology shift from "EV" (electric vehicle) to "EIV" (electric intelligent vehicle), highlighted by CATL Co-Chairman Pan Jian at the World Economic Forum in Davos.
Electric hydrofoil ferry takes flight in Auckland harbour New Zealand's largest ferry operator, Fullers360, has launched commercial operations of the VS-9 Kermadec, a small (10-passenger) electric hydrofoiling ferry developed by Kiwi startup Vessev, with tech tracing its origins back to the hydrofoils first used by Team New Zealand in the 2013 America’s Cup.
🚢Cargo ship captures and stores CO2 from its own exhaust A pilot project is set to launch aboard the cargo ship Clipper Eris, which has been equipped with the first Onboard Carbon Capture and Storage (CCS) system. The system captures 70% of the ship's CO2 emissions and can be retrofitted to existing shipping fleets.
🛩️Boom Supersonic achieved a historic milestone as its XB-1 prototype became the first privately-built supersonic aircraft to break the sound barrier, reaching Mach 1.122 during its 12th test flight over California's Mojave Air & Space Port.
🦾Robots
Folk dancing robots Chinese robotics company Unitree put on a robotics performance with 16 of its H1 humanoid robots executing a complex dance routine alongside human performers at a Spring Festival Gala event.
(Next year’s event: will you be able to tell the humans from the robots!?)
Deep Robotics’ Lynx wheeled robodog showcased more all-terrain capabilities, including in snow and ice. At only US$17,999, and coming down… Chinese companies appear to be far in the lead here.
Robot road gang Autonomous robots completed the world's first fully unmanned paving project on a 157.79 km stretch of the Beijing-Hong Kong-Macao Expressway:
🚇Delivery robots startup Pipedream Labs is working on an automated urban delivery system that runs through a series of underground pipes:
Smart disassembly robots Fraunhofer Institute researchers are developing the iDEAR robotic system to tackle the growing global e-waste crisis (projected to reach 74 million metric tons by 2030) by automating the disassembly of electronic waste and significantly increasing recycling efficiency compared to manual methods.
🐟Artificial gills Researchers at Helmholtz-Zentrum Hereon have developed a novel “artificial gill” system that could revolutionize underwater “ocean glider” robotics by extracting oxygen directly from seawater to power fuel cells. However, the first prototype is currently too large to fit inside a glider:
Biotech
3D-printed “brain” Researchers at TU Delft have developed a 3D-printed “nanopillars” brain-like environment that successfully supports natural neuron growth patterns, enabling more accurate testing of potential treatments for Alzheimer's, Parkinson's, and autism spectrum disorders:
Sleep hormone for obesity Scientists have discovered a novel sleep-induced hormone called raptin that could revolutionise obesity treatment, potentially working alongside existing GLP-1 medications.
⚡Energy
Helion races to deliver fusion power to Microsoft by 2028 Sam Altman-backed Helion Energy secured US$425 million in Series F funding, pushing its total investment past US$1 billion, as it races to deliver the world's first commercially viable fusion power plant by 2028. (The three-year deadline with financial penalties creates unprecedented accountability in fusion development… watch this one play out…).
⚛️Quantum tech
I still struggle to get my head around the underlying principles of quantum computing… so stories like this leave my head spinning somewhat…
DNA breakthrough powers new quantum computing possibilities Researchers at Peking University have discovered a method to manipulate DNA at the atomic level for potential quantum computing applications:
Electric field gradients can control nitrogen atoms in DNA, enabling new quantum computation methods:
DNA-based quantum computing could revolutionise data storage and processing capabilities beyond silicon limitations.
Natural biological molecules could reduce costs and environmental impact of future computing technologies.
🖨️Materials
3D printed precision shape-shifting materials Researchers from Harvard SEAS, Princeton University, and national laboratories have developed a way to control the properties of 3D-printed liquid crystal elastomers (LCEs), materials that change shape in response to heat similar to muscle contractions. This breakthrough enables precise control of shape-morphing materials — for example for prosthetics and soft robotics applications. (Video shows high-operating-temperature direct ink writing (HOT-DIW) of square spirals printed with low director alignment.)
Smart solar powered fabric to keep you warm in the cold Scientists at the University of Waterloo have developed an innovative smart fabric that harnesses sunlight to generate heat, eliminating battery dependency and potentially revolutionising cold-weather clothing.
⏳ Zeitgeist
Once around the world, lightly…
⏰Doomsday is nearer
Scientists from the Bulletin of the Atomic Scientists moved the Doomsday Clock to 89 seconds to midnight - the closest to catastrophe since its creation in 1947. Their announcement contains a sobering analysis: global catastrophic risks are at a historic peak, with multiple converging threats (nuclear war, climate, AI, biological) demanding comprehensive response strategies:
“In 2024, humanity edged ever closer to catastrophe. Trends that have deeply concerned the Science and Security Board continued, and despite unmistakable signs of danger, national leaders and their societies have failed to do what is needed to change course. Consequently, we now move the Doomsday Clock from 90 seconds to 89 seconds to midnight—the closest it has ever been to catastrophe. Our fervent hope is that leaders will recognize the world’s existential predicament and take bold action to reduce the threats posed by nuclear weapons, climate change, and the potential misuse of biological science and a variety of emerging technologies.
In setting the Clock one second closer to midnight, we send a stark signal: Because the world is already perilously close to the precipice, a move of even a single second should be taken as an indication of extreme danger and an unmistakable warning that every second of delay in reversing course increases the probability of global disaster.“
(Is anyone in power listening…?)
Don’t look up A newly discovered asteroid, designated 2024 YR4, currently has a 1-in-77 chance of colliding with Earth on December 22, 2032. At present, the best models of the asteroid's motion have an uncertainty of around 100,000 km in its position at the time it would pass closest to Earth. (At around 12,700 kilometres in diameter, our planet falls well inside that region of uncertainty.) The current rough estimated risk area is shown below - final confirmation of collision risk will come after December 2028 observations. If it did hit, the impact would be comparable to the 1908 Tunguska event, potentially devastating a city-sized area.
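For a feel for where the 1-in-77 figure comes from: a naive back-of-envelope sketch (my simplification, not the actual survey teams' method, which uses an elongated uncertainty ellipse and gravitational focusing) treats the ~100,000 km positional uncertainty as a uniform 2-D disc and takes the impact probability as the ratio of Earth's cross-section to that disc:

```python
# Naive order-of-magnitude estimate of asteroid impact probability:
# ratio of Earth's cross-sectional disc to a uniform uncertainty disc.
EARTH_RADIUS_KM = 6_371
UNCERTAINTY_RADIUS_KM = 100_000

p_impact = (EARTH_RADIUS_KM / UNCERTAINTY_RADIUS_KM) ** 2
print(f"naive impact probability ≈ {p_impact:.4f}")  # ≈ 0.004

# The quoted 1-in-77 (≈ 0.013) is a few times higher, because real
# estimates don't assume a uniform circular disc.
print(f"quoted odds: {1/77:.4f}")
```

Even this crude geometry lands within an order of magnitude of the quoted odds, which is why a 100,000 km uncertainty translates into a percent-level risk rather than a negligible one.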
🌍The planet groans…
Peak air pollution has passed (but ammonia, CO2 and methane still increasing)
Ocean warming now four times faster than 1980s levels Ocean temperatures are rising at an unprecedented rate, with warming now occurring 4.5 times faster than in the 1980s, according to new research published in Environmental Research Letters:
Ocean warming has quadrupled since the 1980s, threatening marine ecosystems and coastal communities.
Rising temperatures indicate Earth's energy imbalance has doubled since 2010, requiring urgent policy action.
Current warming rates suggest next 20 years could exceed past 40 years' temperature increase.
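That last bullet checks out arithmetically. Under a toy model (my assumption, not the paper's) where the warming rate has ramped linearly from its 1980s value to 4.5× that value today and keeps accelerating at the same pace:

```python
# Toy check: can the next 20 years of ocean warming rival the past 40,
# if the warming rate ramped linearly from r0 (1980s) to 4.5*r0 (today)
# and continues accelerating at the same pace?
def accumulated_warming(r_start, r_end, years):
    """Warming accumulated under a linearly changing rate (trapezoid rule)."""
    return 0.5 * (r_start + r_end) * years

r0 = 1.0                      # normalised 1980s warming rate
r_now = 4.5 * r0              # today's rate, ~4.5x faster per the study
slope = (r_now - r0) / 40     # rate increase per year under the linear ramp

past_40 = accumulated_warming(r0, r_now, 40)    # = 110 (in r0·years)
r_in_20 = r_now + slope * 20                    # rate 20 years from now
next_20 = accumulated_warming(r_now, r_in_20, 20)  # = 107.5

print(past_40, next_20)
```

In this sketch the next 20 years deliver nearly as much warming as the past 40 (107.5 vs 110 in normalised units), and any acceleration beyond the linear ramp tips it over — consistent with the study's "could exceed" framing.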
Greenland's ice cracks widen dramatically A new study led by Durham University reveals the Greenland Ice Sheet is fracturing at an unprecedented rate, with significant implications for global sea levels and coastal communities worldwide:
New high-resolution mapping enables better predictions of future ice sheet behavior.
25% increase in crevasse volume signals rapid deterioration of Greenland's ice stability.
Genetic diversity crisis threatens two-thirds of Earth's species A comprehensive study published in Nature reveals alarming declines in genetic diversity across two-thirds of animal and plant populations, with birds and mammals showing the most significant losses.
(Figure: world map with colour representing the number of unique populations included in the study.)
Satellite data reveals the world's biggest continuous methane emitters A study by the University of Bremen, using four years (2018–2021) of data from the Copernicus Sentinel-5P satellite, identified 217 potential persistent methane source regions, estimated their emissions and created a global map of their distribution, pinpointing the world's top 10 regions that emit methane on a persistent, continuous basis. (Image credit: ESA Climate Change Initiative GHG Project; contains modified Copernicus Sentinel data, 2024.)
Climate change threatens West Africa's chocolate future Climate change is significantly reshaping cocoa production across West and Central Africa, with dramatic regional variations projected by 2060:
Climate change could reduce cocoa production by 50% in Ivory Coast, affecting global chocolate supply.
But also increased CO2 in the atmosphere could increase yields and shift cultivation to new areas …potentially causing deforestation in previously untouched African rainforests.
Either way, expect rising chocolate prices and supply chain disruptions worldwide.
🦠Pandemic watch
New H5N9 bird flu strain A new strain of “highly pathogenic avian influenza” (HPAI H5N9) has been detected for the first time in US poultry at a California duck farm, alongside the already widespread H5N1 strain:
Threatens food security and public health, particularly with US withdrawal from WHO which could hamper global tracking of emerging viral threats.
The risk of human pandemic increases as bird flu shows mutation potential in animals.
🌇Middle East megaprojects
The Line megacity aims to house 300,000 by 2030 Saudi Arabia’s *ambitious* Neom megaproject revealed new details about The Line, the futuristic linear city under construction in the desert with a US$500B price tag. First-phase completion is targeted for 2030, with a length of 2.4 km and capacity to house 300,000 people in its, er, novel vertical city design. Right now it’s just a big hole (line) in the sand:
5 years to this? Hmmm.
(Reported in Memia 2024.45: over 21,000 foreign worker deaths during construction (so far😮💨))
🚄Gulf railway network Meanwhile the ambitious 2117-kilometre Gulf railway project is gaining significant momentum with Kuwait's recent award of a contract to a Turkish firm for its 111-kilometre section connecting to Saudi Arabia.
The whole network aims to enhance Gulf connectivity, reducing travel time between major business hubs and projected to transport 8 million passengers and 95 million tons of cargo annually by 2045.
High-speed rail between major UAE cities could generate US$39.5 billion in economic benefits.
🎪Carnival of chaos
Catching up on Washington this week… summarising the stories I’ve been following into bullet points, avoiding the constant attempted limbic hijack of MSM:
A fatal collision between a passenger plane and a military helicopter over the Potomac took 67 lives. US President Trump blamed the crash on “DEI” (without evidence).
Trump announced sweeping trade tariffs on Mexico and Canada…and promptly walked them back just a few days later.
Gideon Rachman in the FT: Trump's tariff wars risk pushing allies toward China:
“Some influential Europeans are even asking whether the US or China is now the more direct threat. This would have been an absurd question just two months ago. But it is Trump — not Xi — who is talking about ending the independence of Canada, a Nato member state. And it is the Trump administration and Elon Musk — not the Chinese government — that is promoting the far-right in Europe.“
Trump launched airstrikes against Islamic State-affiliated targets in Somalia in his first overseas military action.
Elon Musk’s “DOGE” team took over access to US Treasury data and payment IT systems … prompting major concerns over privacy and illegal access to information. (Expect federal payments may experience “service outages” soon…?)
Thomas Shedd, a former Tesla engineer and Elon Musk ally recently appointed as Technology Transformation Services (TTS) director, announced an "AI-first strategy" for the General Services Administration (GSA), including a proposal for a centralised government data repository (raising the usual significant privacy and security concerns).
Tbh I advocated for a similar approach in my book ⏩Fast Forward Aotearoa last year… but perhaps with more transparency and fewer oligarchic overtones. It will be instructive to the rest of the world whether Musk can single-handedly apply a Twitter→X-style downsizing to the US federal government… or whether the “deep state” has *way deeper* resources.
Trump proposed a sovereign wealth fund to acquire TikTok and potentially other “strategic tech assets”.
And on and on… how to keep it out of my brain!?!?!
💭Meme stream
What’s been diverting me this week…
⚛️“Think big” but nuclear
🎩Thanks to Ben Campbell who spotted this on X: resurfacing a 1968 US OSTI report, *Nuclear energy centers, industrial and agro-industrial complexes*:
(Imagining the environmental impact assessment right now…)
🖥️Discord, Windows 95/98 edition
A developer known as iProgramInCpp has created "Discord Messenger", an unofficial Discord client that remarkably runs on legacy Windows operating systems dating back to Windows 98 and even Windows 95. Because…
⚔️Civilization VII
Civilization, the legendary strategy game series by Sid Meier, is launching its seventh instalment on February 11. It maintains the classic 4X (“explore, expand, exploit and exterminate”) gameplay formula that has sold 73 million copies to date. I don’t have the time, but it’s one of the few games that ever grabbed my attention.
🐋🤣DeepSeek does viral memes
A classic meme take on DeepSeek vs. US AI industry (video via Johan Crafoord-Larsen on LinkedIn):
(Now we just need someone to make the Hitler Downfall version…)
🙏🙏🙏 Thanks as always to everyone who takes the time to get in touch with links and feedback. More again next week!!
Namaste
Ben
Thanks to Matt Miller for sharing the following alternative Memia newsletter, generated in seconds by a pipeline using "perplexity + o3 mini and then rinsed through google ai studio ( flash 2.0 thinking)" with the prompt listed above:
Concise Roundup of the Week's Key Developments in AI, Emerging Technologies, and Future-Oriented Thinking (as of February 5, 2025)
Artificial Intelligence
DeepSeek's Competitive AI (Launched January 2025): Chinese AI startup DeepSeek launched R1, an advanced AI model reportedly rivaling leading Western models like ChatGPT but at a lower cost. This has generated market buzz and discussions about open-source AI's potential. While some reports linked this to market fluctuations, Nvidia's $600 billion valuation dip over a period should be viewed within broader market dynamics, including overall tech sector performance and investor sentiment (Source: tech industry news outlets). Potential Impact: Could accelerate AI adoption across various industries due to lower costs, but also raises questions about intellectual property and the concentration of AI power.
Agentic AI Expansion (Ongoing): Autonomous AI agents capable of independent decision-making are increasingly being integrated into enterprise applications by companies like OpenAI and Google. KPMG reports rising interest in scaling agentic AI across industries (Source: KPMG industry report, Q1 2025). Potential Impact: Increased automation of complex tasks, requiring workforce adaptation and reskilling initiatives. Ethical considerations surrounding autonomous decision-making need careful attention.
Quantum-AI Synergy (Announced January 2025): Quantinuum unveiled a generative quantum AI framework for tackling complex problems in areas like drug development and logistics. This represents advancement in leveraging quantum computing for AI (Source: Quantinuum press release). Potential Impact: Could revolutionize fields relying on complex simulations and optimization, but widespread adoption depends on overcoming the limitations of current quantum hardware.
Emerging Technologies
Immersive Tech Advancements (Ongoing): Meta continues upgrading its VR ecosystem. Apple is integrating PlayStation VR2 controllers into its Vision Pro headset, expanding user input options. Sony's XYN ecosystem is further blurring the lines between real and digital environments through augmented reality (Source: Meta product announcements, Apple developer updates, Sony XYN showcase). Potential Impact: New forms of entertainment, training, and remote collaboration, but also raises concerns about digital addiction and the impact on real-world social interactions.
Energy-Efficient Computing (Pilot Results Published January 2025): Firebird's pilot kiln achieved a 70% energy savings breakthrough in EV battery production, indicating progress in sustainable manufacturing technologies (Source: Firebird pilot program report). Potential Impact: Reduced environmental impact of EV battery production, contributing to a more sustainable transportation sector.
AI-Powered Cameras for Road Safety (UK Trial Initiated January 2025): AI cameras are being tested in the UK to detect drunk drivers, demonstrating AI's potential to improve public safety (Source: UK Department of Transportation press release). Potential Impact: Reduced accidents and fatalities related to drunk driving, but raises privacy concerns about surveillance and data usage. Robust safeguards and transparent policies are crucial.
Policy and Governance
EU AI Act Implementation (Initial Provisions Effective February 2025): The first provisions of the EU's AI Act have come into effect, banning high-risk applications such as subliminal techniques and certain biometric systems. This establishes a global benchmark for ethical AI use (Source: European Commission official statement). Potential Impact: Promotes responsible AI development and deployment, but may also create challenges for companies operating in the EU and influence global AI standards.
US Stargate Initiative (Announced January 2025): President Trump announced a $500 billion investment in "Stargate," a project focused on building next-generation AI data centers to maintain US leadership in AI innovation (Source: White House press briefing). Potential Impact: Could accelerate AI research and development in the US, but also raises questions about resource allocation and the potential for a global AI arms race.
Future-Oriented Thinking
Technology Convergence: The convergence of AI, biotechnology, and quantum computing is accelerating innovation across industries, from bioengineering to sustainable energy solutions. This trend could propel us toward a "fifth industrial revolution" (Source: World Economic Forum Future Trends Report 2024). Potential Impact: Unforeseen breakthroughs in various fields, potentially addressing global challenges but requiring careful management of risks associated with powerful technologies.
Workforce Evolution: The World Economic Forum forecasts rapid growth in roles like big data specialists and AI engineers, driven by expanding digital access and automation technologies. Conversely, clerical jobs are declining as automation reshapes labor markets (Source: World Economic Forum Future of Jobs Report 2024). Potential Impact: Increased demand for skilled tech workers, necessitating investments in education and training programs. Social safety nets and workforce transition programs are needed to support those displaced by automation.