Humanoid futures🤖 DOGE style💰 grok 3⤴️ gpt-5 roadmap🛣️ judge ai⚖️ torque clustering🌌 darkmind⬛ manufacturing in space🚀🏗️ sexome🦠 gigslave⛓️🤑 #2025.07
Noise over Signal on X🙉
Welcome to this week's Memia scan across emerging tech and the exponentially accelerating future. As always, thanks for being here!
ℹ️PSA: Memia sends *very long emails*, best viewed online or in the Substack app.
🗞️Weekly roundup
The most clicked link in last week’s newsletter was my strategy note from the end of last year attempting to model Trump 2.0 using AI:
💰DOGE style
Man of the moment Elon Musk somehow found time last week to beam in to the World Governments Summit in UAE wearing a casual “Tech Support” T-Shirt. In a frank interview with Omar Sultan AlOlama, UAE Minister of State for AI in front of a packed plenary hall he talked up the rationale for his current antics at DOGE:
“We're moving people from low to sometimes negative productivity roles in the government sector to higher productivity roles in the private sector and the net effect of that will be an increase in the output of useful goods and services, which increases the standard of living and well-being of the average American"
(Which “higher productivity roles in the private sector”, exactly?)
Watch the whole discussion/monologue below. His arguments veer from cogent rationality to well, …clutching at scientific straws:
"…I think we might be headed to a bimodal human intelligence distribution where there's a small number of... it's kind of maybe like more like Brave New World - Aldous Huxley - where you've got sort of a sort of a small group of very smart humans but then maybe the average intelligence drifts lower over time potentially, because we have assortative mating in the last few decades that or several decades that did not exist before..."
(Informed not a little bit by the premise to Idiocracy…)
The FT’s opinion piece How to Understand Elon Musk provides one lens to interpret his political stance:
“His attack on government is not ideologically coherent but it can be traced to an engineering mindset that values radical ideas”
Whether he survives (metaphorically …and literally🫣) through spearheading a purge of US military spending remains to be seen. The Economist:
“Reforming the Pentagon is much harder than other parts of government. America cannot focus on preparing for war in 2035 if that involves lowering its defences today. It cannot simply replace multi-billion-dollar submarines and bomber squadrons with swarms of drones, because to project power to the other side of the world will continue to require big platforms. Instead America needs a Department of Defence that can revolutionise the economics of massive systems and accelerate the spread of novel systems at the same time.
Mr Musk and his boss are conflicted. If Mr Trump prefers sacking generals for supposedly being “woke” or disloyal, he will bring dysfunction upon the Pentagon. If Mr Musk and his mil-tech brethren use DOGE’s campaign to wreck, or to boost their own power and wealth, they will corrupt it.“
Either way, his huge image up on stage giving off very Citizen Kane / Big Brother vibes…



🤖Humanoid futures
An avalanche of new videos from humanoid robot makers this week, making it even more apparent how close we are to full-autonomy, human-parity robots existing amongst us. (2-3 years max I think until we start seeing them walking on the streets…)
In the same way as we may on occasion think optimistically about “humans augmented with AI” inheriting the cognitive labour economy, we also need to start thinking about what an economy of “humans augmented with [semi-]autonomous robots” might look like. Some imagined scenarios off the top of my (and Claude’s) head:
A solopreneur plumber able to remotely control 50 robots all out in the field on different jobs at once. (Never stay home waiting for a tradie ever again!)
A surgeon performing complex operations simultaneously in multiple hospitals worldwide by controlling precise robotic avatars, expanding access to specialist care in remote and underserved areas. The surgeon can switch between robots as needed while an AI handles routine aspects of each procedure.
Hotel service manager coordinating a remote fleet of robots across an entire hotel chain changing beds, cleaning rooms, toilets, hallways, floors…
Agricultural workers controlling fleets of robots for delicate tasks like fruit picking and plant care, with the human operator providing high-level guidance.
A master chef running multiple restaurant kitchens at once through robot avatars, allowing their expertise to scale beyond a single location. The robots handle cooking execution while the chef monitors quality and makes real-time adjustments to multiple kitchens simultaneously.
Construction supervisors coordinating teams of humanoid robots to work on dangerous aspects of building projects like high-rise installation or underwater construction, keeping human workers safe while maintaining human oversight of complex engineering tasks.
(This one’s debatable…) Teachers conducting personalised tutoring sessions with multiple students in different locations through humanoid robot avatars, allowing them to provide individual attention at scale while maintaining the important human element of education through an embodied presence.
Aged care robots which look after senior citizens living on their own, having conversations, cooking meals and changing beds
Sex workers…(the mind boggles at the possibilities here…)
Second order effects: A new gigwork economy of remotely piloting or supervising humanoid robots through a particularly complex / critical tasks
There are also plenty of dystopian military and violent uses which don’t take much effort to imagine (I’ll leave those for another week…)
As I asked in last week’s newsletter:
“what is the equilibrium human:robot density ratio? 1:10? 1:1? 1:1,000,000???“
While talking at the UAE Governments Summit (see above), Elon Musk coming across all Sam Altman, suddenly:
"Once you have humanoid robots and deep intelligence, you can basically have quasi infinite products and services available... those human robots can be directed by deep intelligence at the data center level. You can say you can produce any product, provide any service... there's really no limit to the economy at that point. You can make anything…
..If in the form of humanoid robots you have no meaningful limit on the number of robots and the robots can basically do anything, then you'll have a sort of a universal high income situation - anyone will be able to have as many products and services as they want.."
This is likely just a plug for Tesla’s Optimus humanoids… but they will have a lot of competition. Here are this week’s videos (🎩 Brett Adcock and Humanoid Hub for pointing to many of these…)
Unitree has updated its G1 platform algorithm… and asking for suggestions for dance moves:
G1 has also been in training using the HoST Humanoid Standing up control research:
AppTronik Apollo robot plays cards with an amputee, both using Psyonic’s Ability Hand (first time I watched this I didn’t notice the human had a robot arm…!):
Israeli startup Mentee Robotics previewed its MenteeBot v3, featuring:
Custom actuators with enhanced torso mobility | 25 kg carry capacity | Swappable battery: 3+ hours of runtime | Height: 175 cm | 1.5 m/s walking speed | Improved hand with 30N pinch force per finger
Likewise UK-based Humanoid’s HMND 01:
modular hardware for real-world automation | 175 cm | 70 kg | 1.5 m/s | 4-hour runtime | 15 kg payload | 41 degrees of freedom
Even Meta is rumoured to be venturing into AI-powered humanoid robotics with a focus on household automation, expected to provide sensors and AI software to OEMs rather than build their own robots.
Finally, a darker future looms with this video from Booster robotics … as these (ahem) *ruggedised* devices become commonplace, unaugmented meatsacks like us will have no chance if the [human+AIs] that control them can bypass their safety guardrails. (The humanoid form factor is a distraction… these are autonomous weapons).
(I often wonder about the individuals starring in these videos… likely the engineers who actually designed the ruggedised carapaces testing out the hammer swing for the umpteenth test run… the pride they must feel in the indestructible nature of the electronic machine they’ve created… and yet a complete naïveté as to how sadistic these videos look to the rest of humanity…)
📈The week in AI
⛰️Views from the Summit
A few more narratives falling out of last week’s Paris AI “Action” Summit which missed last week’s newsletter deadline:
US Vice President JD Vance's speech at the AI Action Summit outlined key elements of America's new emerging AI policy, summarised:
“AI opportunity” over “AI safety”
US must lead AI development
Opposed to excessive AI regulation
American-made chips and systems
Build infrastructure for AI
AI will enhance worker productivity rather than threatening jobs
These two paragraphs from his speech juxtapose particularly well:
“I would also remind our international friends here today that partnering with such regimes [er, who *could* he mean…?], it never pays off in the long term. From CCTV to 5G equipment, we're all familiar with cheap tech in the marketplace that's been heavily subsidized and exported by authoritarian regimes.
… partnering with them means chaining your nation to an authoritarian master that seeks to infiltrate, dig in, and seize your information infrastructure…“
And:
“…the Trump Administration is troubled by reports that some foreign governments are considering tightening the screws on U.S. tech companies with international footprints. Now, America cannot and will not accept that, and we think it's a terrible mistake not just for the United States of America but for your own countries…“
On the other hand, going somewhat against the flow, former Google CEO Eric Schmidt urged Western countries to embrace open-source AI development:
“If we don’t do something about that, China will ultimately become the open-source leader and the rest of the world will become closed-source”
(Go figure….)
Both the UK and US refused to sign the nebulous Paris Summit AI declaration. Also following the Summit, the UK Government made two announcements:
The AI Safety Institute is now called the AI Security Institute:
"This new name will reflect its focus on serious AI risks with security implications, such as how the technology can be used to develop chemical and biological weapons, how it can be used to carry out cyber-attacks, and enable crimes such as fraud and child sexual abuse“
(No more worrying about existential AI risks, then!)
An extensive AI partnership with Anthropic for government services.
Quoting Simon Wardley for the third week in a row:”We backed the wrong approach and we're fixing it by renaming ourselves. I am so full of confidence ... not.
Oh dear, anthropic to be used throughout government. Hmmm, this should not be allowed anywhere near policy makers until the models are investigated for biases towards market benefit rather than societal benefit.
Ok, ok, I can't leave it there. I do like the UK being a Brit, so ...
FTFY1 "We're closing the AI Safety Institute and redirecting all Government support and investment in AI towards open source AI including but not limited to open weights, open code, open training data and open architectural components because we have decided that we want the UK to become a major AI player worldwide rather than an also ran."
🇨🇳AI in China
Apple is accelerating efforts to launch its AI features in China by mid-2025, relying on a novel partnership with Alibaba to effectively provide a censorship layer over AI on Apple devices:
“[Apple] has worked with Alibaba to create an on-device system that can analyze and modify Apple’s AI models for iPhone, iPad and Mac users in China. It will censor and filter AI output to comply with requirements from the Chinese government.” — Bloomberg
(This is kind of equivalent to how DeepSeek released their R1 model to the world - the version hosted by DeepSeek self-censors, while if you host it locally it doesn’t. Fascinating… for all of the Chinese governments supposed commitment to open-source AI, I suspect the source code of the censorship model won’t be released any time soon!). Alibaba's shares are up by over 40% this year... and even Jack Ma and Chinese tech business are suddenly back in favour with the Xi regime…
WeChat and Baidu have integrated DeepSeek’s AI models into their search features.
Baidu also announced significant plans for its ERNIE Bot AI model: it will be completely free from April 1st and the 4.5 series will be completely open-source starting June 30th, 2025.
🇺🇸AI in the US
OpenAI
🛣️GPT-5 roadmap Sam Altman gave a clearer indication of OpenAI’s product roadmap. All roads lead to…
“We want AI to “just work” for you; we realize how complicated our model and product offerings have gotten. We hate the model picker as much as you do and want to return to magic unified intelligence. We will next ship GPT-4.5, the model we called Orion internally, as our last non-chain-of-thought model. After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks. In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model…“
Moonpig’s Peter Gostev drew it for us, a picture is still far easier to understand:OpenAI also announced significant changes to its AI “model spec” policy to “uncensor” its AI models: emphasising intellectual freedom and reduced content restrictions for ChatGPT:
“Upholding intellectual freedom
The updated Model Spec explicitly embraces intellectual freedom—the idea that AI should empower people to explore, debate, and create without arbitrary restrictions—no matter how challenging or controversial a topic may be. In a world where AI tools are increasingly shaping discourse, the free exchange of information and perspectives is a necessity for progress and innovation.
This philosophy is embedded in the “Stay in bounds” and “Seek the truth together” sections. For example, while the model should never provide detailed instructions for building a bomb or violating personal privacy, it’s encouraged to provide thoughtful answers to politically or culturally sensitive questions—without promoting any particular agenda. In essence, we’ve reinforced the principle that no idea is inherently off limits for discussion, so long as the model isn’t causing significant harm to the user or others (e.g., carrying out acts of terrorism).“Another signal of a broader Silicon Valley shift away from content moderation to “free speech” principles… seems to be heading in a direction closer to the Chinese “censorship layer” being added by Apple / Alibaba above.
Copyright A significant legal precedent for AI copyright cases emerged as a judge decided against legal tech firm Ross Intelligence in a case brought by Thomson Reuters. The summary judgement found that Ross’ training methods for its AI legal research platform infringed on Reuters’ intellectual property… however the decision distinguishes between simpler AI systems like Ross’ and generative AI. 39 more copyright-related AI lawsuits are currently working their way through US courts… which will effectively decide the legal outcomes for the rest of the world… but not before lots of lawyers have paid for their summer houses.
🔄AI chips
With the billions recently announced, there’s big money potential in leading edge AI chips, even if you aren’t Nvidia. The sector is reconfiguring rapidly:
TSMC and Broadcom are picking over the bones of Intel, reportedly exploring a deal to break up the ailing giant
Put another way: Intel and TSMC are reportedly in discussions about a potential historic US chip manufacturing alliance that could transform the semiconductor manufacturing landscape and shift more semiconductor supply chains inside the US
Arm Holdings is making a strategic pivot from its traditional licensing-only business model by planning to enter chip manufacturing, according to industry sources... which could threaten Arm's neutrality, potentially forcing clients to seek alternative chip architectures.
AI chip challenger Groq (no relation) signed a US$1.5Bn deal with Saudi Arabia to support AI infrastructure buildout in the country:
(… the same week, Saudi Arabia's Neom signed a US$5 billion deal to build a huge 1.5-gigawatt AI data centre to be built in Neom's Oxagon industrial hub, expected to be live in 2028).
🆕 AI releases
As usual the speed of the advances continues to impress:
💭Perplexity Deep Research Perplexity wasted no time launching their own Deep Research product… it’s very good and it’s available 5 queries per day in the free tier. Get in.
⤴️xAI Grok 3
xAI launched Grok 3, the latest evolution of their Grok series of LLMs. Grok 3 is still in training and being rolled out to X users. Like OpenAI o1 and DeepSeek R1, it’s a “reasoning” model which takes a long time to reach its answers. According to Musk, Grok 3 is
“an order of magnitude more capable than Grok 2”
Here are some screenshots from the livestream of the launch featuring the ubiquitous Elon Musk here. The senior xAI team (co-founders Jimmy Ba and Yuhuai Wu, and lead engineer Igor Babuschkin) seem personable enough and not *entirely* intimidated by him…
A super impressive feat from a company that was only established in March 2023 (LOL, “Others”):
The model 3 training involved a 200,000 GPU(!) cluster at xAI’s Colossus data centre in Memphis… stood up in two phases, in only 122 days and 92 days! (Spot all the gas-powered generators and mobile cooling trailers lined up outside…)
The benchmarks are up there with the other frontier models: and the first ELO score of over 1400 on LMArena blind tests:
Two of the most interesting nugget from the livestream:
Musk clarified that some of the "thoughts" from the reasoning models will be obscured to prevent distillation by competitors (eg DeepSeek)
The models are improving faster than the benchmarks can keep up:
“very soon there’ll be no benchmarks left”
So far no sign of a published research paper so no way to independently verify the benchmarking claims… but that’s not really so different to the other labs. With the move to push AI towards a national security position in the US, it’s not unlikely that research will stop being published openly.
Andrej Karpathy took Grok 3 for an early test-drive, putting it through his detailed set of personal benchmark tests - it comes out reasonably well:
“Summary. As far as a quick vibe check over ~2 hours this morning, Grok 3 + Thinking feels somewhere around the state of the art territory of OpenAI's strongest models (o1-pro, $200/month), and slightly better than DeepSeek-R1 and Gemini 2.0 Flash Thinking. Which is quite incredible considering that the team started from scratch ~1 year ago, this timescale to state of the art territory is unprecedented.“
The livestream finished with a couple of new announcements:
A new games studio at xAI
“DeepSearch” - xAI’s own variation of Deep Research… which looks very impressive, showing its detailed reasoning.
“SuperGrok” - super user tier
Being rolled out to X premium tier subscriptions immediately (and other X users later this week, apparently…)
🌐Mistral Saba French AI startup Mistral has launched Mistral Saba, a specialised LLM targeting Arabic-speaking markets which is available via API or on-premise deployment (but not open-weights…). It’s fast and light and benchmarks well against other open-source models:
Adobe Firefly video generation arrived, now supporting a bunch of generative AI video features including generating motion graphics, text/image to video and camera control. Impressive… and pricing starts at only US$9.99/month so competitive:
Animate Anyone 2 Latest video research from AliBaba which enables any person to be swapped into another video using just a full-body photo. A couple of examples:
🥼 AI research
A quick whip-through interesting AI research this week:
AI reasoning in a TED Talk OpenAI’s Noam Brown gives the clearest explanation yet of inference-time AI reasoning and why it works:
DeepSeek-R1 AI automatically generates optimised GPU kernels NVIDIA engineers demonstrated automation of generating GPU optimised kernels (code) using DeepSeek-R1 and inference-time scaling. The key insight here is that test-time scaling improves AI performance by evaluating multiple solutions during inference — essentially just a big “for loop” as shown below — with DeepSeek-R1 achieving 100% accuracy in Level-1 kernel generation problems within 15 minutes.
🌌 Torque Clustering is a groundbreaking AI training method developed by researchers at the University of Technology Sydney which mimics galaxy mergers for better learning.
⬛ DarkMind a new backdoor attack which exploits AI Chain-of-Thought (CoT) reasoning to manipulate LLMs. Today’s more advanced “reasoning” AI models are actually more vulnerable to this type of attack, which embeds "hidden triggers" within customised LLM applications such as OpenAI's GPT Store.
Here’s an example of a backdoored GPT, designed specifically for DarkMind evaluation. The embedded adversarial behaviour modifies the reasoning process, instructing the model to replace addition with subtraction in the intermediate steps:
o3 Gold More evidence of reinforcement learning results - OpenAI’s o3 model achieved a gold medal at the 2024 International Olympiad in Informatics (IOI), obtaining a Codeforces rating on par with elite human competitors.
And two notable pieces of AI-adjacent research:
⚖️Judge AI Assessing Large Language Models in Judicial Decision-Making A study from the University of Chicago Law School evaluated GPT-4o's potential as a judicial decision-maker, replicating a previous experiment conducted on 31 federal judges in the US. The study’s key finding:
“We find that GPT-4o is strongly affected by precedent but not by sympathy, similar to students who were subjects in the same experiment but the opposite of the professional judges, who were influenced by sympathy. We try prompt engineering techniques to spur the LLM to act more like human judges, but with no success. “Judge AI” is a formalist judge, not a human judge.“
(So which direction do we want to go as a society here…? Is this a permanent issue or a bug fixed in the next model…?)
Meeting Delegate Benchmarking LLMs on Attending Meetings on Our Behalf … research from Northeastern University, China, Peking University and Microsoft. A glimpse of the future…
“…can LLMs effectively delegate participants in meetings? To explore this, we develop a prototype LLM-powered meeting delegate system and create a comprehensive benchmark using real meeting transcripts. Our evaluation reveals that GPT-4/4o maintain balanced performance between active and cautious engagement strategies. In contrast, Gemini 1.5 Pro tends to be more cautious, while Gemini 1.5 Flash and Llama3-8B/70B display more active tendencies.“
Statistical literacy determines AI trust a comprehensive study published in Frontiers in Artificial Intelligence revealed that people's trust in AI-driven decisions varies significantly based on their statistical literacy and the stakes involved:
“Our results suggest that statistical literacy is negatively associated with trust in algorithms for high-stakes situations, while it is positively associated with trust in low-stakes scenarios with high algorithm familiarity... We conclude that having statistical literacy enables individuals to critically evaluate the decisions made by algorithms, data and AI, and consider them alongside other factors before making significant life decisions. This ensures that individuals are not solely relying on algorithms that may not fully capture the complexity and nuances of human behavior and decision-making.“
Key recommendation from the authors:
“policymakers should consider promoting statistical/AI literacy to address some of the complexities associated with trust in algorithms“
(Yes!)
🔮[Weak] signals
Another crop of non-AI tech advances this week… how to cover all of this?!
📱TikTok back (for now)
TikTok resumed availability in Google and Apple’s US app stores as Trump’s executive order delays the ban. No comment from Google or Apple. Tick…tock.
🌐Meta-cable
Meta unveiled Project Waterworth, a massive around-the-world undersea cable initiative that will span 50,000 kilometres across five continents. Some design notes:
New technology enabling the longest 24 fiber pair cable project in the world and enhance overall speed of deployment.
First-of-its-kind routing, maximising the cable laid in deep water — at depths up to 7,000 meters
Enhanced burial techniques in high-risk fault areas, such as shallow waters near the coast, to *avoid damage from ship anchors and other hazards*.
Another sign that the largest US tech companies are going full-stack … another layer of future rent-taking being laid…
🙉Noise over Signal on X
Elon Musk-owned X is now blocking Signal.me links across its platform, preventing users from sharing contact links for Signal - the encrypted messaging service. Joining the dots:
Signal has established itself as a trusted platform for journalists receiving sensitive information from sources due to its end-to-end encryption and on-device storage
The messaging app has gained particular significance lately as federal whistleblowers utilise it to communicate DOGE-related activities to media outlets.
This is, er, totalitarian, right?
🚗🤖Autonomy
Lyft announced plans to launch robotaxi services in Dallas by 2026, partnering with Japanese conglomerate Marubeni for fleet management and utilizing Mobileye's autonomous driving technology.
BYD unveiled a new "Intelligent Driving for All" strategy that will equip every vehicle in its lineup with advanced intelligent driving systems, regardless of price point.
Possessing the world's largest vehicle-cloud database gives BYD significant advantage in autonomy development race
Tesla faces potential delays in obtaining Chinese regulatory approval for its Full Self-Driving (FSD) technology amid escalating US-China trade tensions.
3D printing
Researchers have developed a groundbreaking embedded 3D-printing technique that can produce ultra-fine fibers mimicking nature's smallest structures, achieving unprecedented 1.5-micron resolution in 3D printing and eliminating gravity constraints:

🚀🏗️Manufacturing in space
At the other end of the scale, US military research agency DARPA's NOM4D (Novel Orbital Moon Manufacturing, Materials & Mass-efficient Design) programme has brought forward its final testing phase, advancing from lab experiments to orbital demonstrations next year.
One of the universities involved, Caltech, plans to demonstrate the assembly of a 1.4-meter-diameter circular truss aboard the Momentus Vigoride Orbital Services Vehicle, launching on SpaceX Falcon 9 Transporter-16 in February 2026.

🦾Biohybrid robotics
Japanese researchers have achieved a breakthrough in biohybrid robotics by creating an artificial hand that combines lab-grown muscle tissue with mechanical components, using a new bundling technique (multiple tissue activators - MuMuTAs) enabling larger-scale biohybrid devices than previously possible. Still POC but you can see where this is going…

🛩️Transport
Amphibious aviation Tidal Flight has secured a US$100 million Letter of Intent from Tropic Ocean Airways for 20 Polaris hybrid-electric seaplanes, promising 85% fuel savings, reduced operational costs and environmental impact plus a viable alternative to expensive eVTOL for urban air mobility.
E-rickshaw Back on the ground, Pakistan's EV revolution is gaining momentum, with Sazgar Engineering Works leading the charge in e-rickshaw production. (Pakistan’s recent EV policy aims to transition 90% of all new vehicles to electric by 2040, with EVs constituting 50% of all auto sales in the country by 2030)
🧬BioTech
No more needles? Stanford scientists have developed a groundbreaking needleless vaccine delivery system that you just rub on your skin, using a common skin bacterium, Staphylococcus epidermidis.
Anti-CRISPR protein controls gene editing Researchers have unveiled the precise mechanism behind AcrVIB1, an anti-CRISPR protein that regulates CRISPR-Cas gene editing systems, opening up new possibilities for safer gene-editing and gene therapy applications:
🦠Sexome Australian researchers have discovered that humans have a unique genital microbiome, dubbed the "sexome," which is transferred during sex and could serve as a forensic tool in sexual assault investigations. Who suspected?
Materials science
💎Crystal defects enable massive storage University of Chicago researchers have developed a new memory storage technique that uses atomic-scale rare-earth crystal defects to store data, achieving terabyte-level storage in a millimetre-sized cube.
A crystal used in the study charges under UV light. The process created by the University of Chicago Pritzker School of Molecular Engineering Zhong Lab could be used with a variety of materials, taking advantage of rare earths' powerful, flexible optical properties. Credit: UChicago Pritzker School of Molecular Engineering / Zhong Lab via Phys.org
⏳ Zeitgeist
Trying to make sense of the world outside tech…spotting signals at the edges.
[Testing out some AI-generated summaries this week… does it work? Can you tell?]
⚠️Climate change
Ocean current collapse could trigger catastrophic climate shifts by 2050
New research led by former NASA scientist James Hansen, entitled Global Warming Has Accelerated: Are the United Nations and the Public Well-Informed?, reveals an alarming acceleration in global warming that could have catastrophic consequences:
Recent warming rate increased 50% since 2010, making Paris Agreement targets virtually unattainable.
IPCC climate models may be underestimating the rate of warming…
A key driver of the warming is the reduction of sulphate aerosol pollution over Northern Hemisphere oceans due to stricter shipping fuel regulations
Global warming acceleration could trigger catastrophic ocean current shutdown by 2050, threatening coastal regions.
Potential AMOC collapse would cause severe sea level rise and crop failures in Europe.
(Only a few of us are listening…this got hardly any MSM coverage in amongst the Trump flooded zone. It seems apparent that there is no coordination mechanism in place to avoid the tipping points which are inevitable… plan for the worst (by all means hope for the best…)
🦠Pandemic watch
Bird flu silently spreads to humans through cattle contact Link
The US CDC's latest report reveals evidence of undetected bird flu (H5N1) spread to humans, with three asymptomatic veterinarians testing positive through antibody screening of 150 vets across 46 US states.
🗺️Geopolitics
Russia secures first African naval base on Sudan's coast Link
Following Sudan's agreement to establish a Russian military presence on its coast, gaining strategic control over vital Red Sea trade routes and expanding its military presence in Africa amid growing US-China competition in region.
The base provides Russia with an alternative after its Syrian presence in the Mediterranean Sea has been weakened following the fall of the Assad regime.
Russia plans nuclear winter fear campaign to weaken Ukraine support Link
Russia plans to launch a propaganda campaign aimed at reviving fears of "nuclear winter" among Americans to decrease support for Ukraine, according to an Estonian Foreign Intelligence service report.
Security experts warn of rising global war risk by 2035
Global security experts are sounding alarm bells about heightening risks of major power conflicts. According to the Atlantic Council's Global Foresight 2025 Survey, over 40% of security experts polled predict a world war within 10 years:
Singapore warns US seen as 'landlord seeking rent' in Asia Link
Singapore's Defence Chief Ng Eng Hen used uncharacteristically robust language during last week’s Munich Security Conference, saying that the US image in Asia has evolved from a force of "moral legitimacy" to "a landlord seeking rent." Oof.
🎲The House always wins
Three stories backing up the lack of statistical literacy in the general population (as per earlier story about AI trust!):
Milei backs rugpull Link
Argentina's President Javier Milei is facing his administration's biggest crisis since taking office after promoting a cryptocurrency called $LIBRA that dramatically crashed in value moments after he tweeted in support in a classic rugpull move. (Even if it was an honest mistake, not a great look for someone supposed to be an economist!)
Fake Saudi crown prince memecoin scam Link
Scammers exploited investor interest in celebrity-backed crypto tokens by launching a fraudulent Saudi Arabia memecoin (KSA) while impersonating Crown Prince Mohammed bin Salman through a hacked X account (@Saudilawconf).
Sports betting surge sparks nationwide gambling addiction crisis Link
A study by UC San Diego researchers has revealed an alarming 23% surge in sports betting addiction in the US following the 2018 Supreme Court decision to legalise sports events. 94% of the bets are made digitally online.
(Anyone detect a common theme here…?)
⛪Religion as counterculture?
Religious belief surges among Finnish youth, defying trends Link
A comprehensive study of Finnish youth reveals a significant shift in religious beliefs, with belief in God rising notably among teenagers:
In 2024, 62% of boys and 50% of girls reported believing in the existence of God. Up from only a third of boys in 2019.
If the research holds up, it challenges broad assumptions about declining faith in secular societies.
One theory: religion as counterculture:
“One can also ask whether Christianity is the new counterculture. Some parents have been critical of Christianity, viewing religion as conservative and outdated. Many parents have also left the church. Perhaps for today's youth, being non-religious is no longer the counterculture, but rather, religiosity represents it.“
🎭Meme stream
Eclectica and entertainment…
🎮📖Ink Console
E-ink technology is turning up everywhere as the technology becomes more affordable and functional. Last week we saw e-ink billboards, this week: Ink Console is a open-source, retro hybrid handheld device combining e-ink technology which aims to revive classic text “choose your own adventures” like the 1980s classic Zork I. set to launch on March 1 through crowdfunding platform Crowd Supply, neat:
📊Economic bell curve
Anthropic’s new Economic Index (covered last week) doesn’t pull its punches:
(Refer below…)
⛓️🤑Gigslave
Finally….biting satire from The Onion… perfectly done, their best yet.
🙏🙏🙏 Thanks as always to everyone who takes the time to get in touch with links and feedback.
Namaste
Ben
TIL: FTFY <=> “Fixed That For You”