All lines go up📈 Mars sucks🔴 AI diffusion🔀 memory-holing the internet🧀 the compendium☠️ scent teleportation👃 LLM as CRM📉 physical intelligence🧺 two undecillion roubles💰 #2024.44
Compression / decompression
Welcome to this week's Memia scan across AI, emerging tech and future as it accelerates towards us... As always, thanks for reading!
ℹ️PSA: Memia sends *very long emails*, best viewed online or in the Substack app.
🗞️Weekly roundup
The most clicked link (just) in last week’s newsletter was: Callaghan Innovation’s Aotearoa New Zealand government prototype chatbot: GovGPT.
👀ICYMI
OpenAI’s release last week of ChatGPT Search (see below) combined with Google announcing “grounding” of its Gemini model (also see below) levelled up the field with challenger Perplexity. Let the games begin…
📈The week in AI
Straight into it with the week's AI news and releases…
☠️The Compendium (of AI Doom)
The Compendium, written by a group of researchers led by prominent AI Doomer Connor Leahy, is a comprehensive new resource which aims to present:
“…a coherent worldview on the extinction risks of AGI in a way that is accessible to non-technical readers who have no prior knowledge of AI”.
🫣It starts subtly enough:
“Humanity faces extinction from AGI.
AI progress is converging on building Artificial General Intelligence, AI systems as or more intelligent than humanity. Today, ideologically motivated groups are driving an arms race to AGI, backed by Big Tech, and are vying for the support of nations. If these actors succeed in their goal of creating AI that is more powerful than humanity, without the necessary solutions to safety, it is game over for all of us. There is currently no solution to keep AI safe.
In order to do something about these risks, we must understand them fully.”
I haven’t gone in too deep, but it seems like an accessible compilation of familiar [linear] AI doomer arguments:
Most interesting is the section on the AI Race…
“The race to AGI is ideological.
Although the race to AGI often presents itself under an economic or geopolitical mantle, its original motivation is ideological: all relevant actors care about building AGI, be it to bring about utopia, gain power, or build god.”
…categorising actors in the AGI race into five groups:
“Utopists, who are the main drivers of the race, and want to build AGI in order to control the future and usher in the utopia they want.
Big Tech, who are the actors participating in the race mostly by supporting the utopists, who want to stay relevant and preserve their technological monopolies.
Accelerationists, who want to accelerate and deregulate technological progress because they think it is an unmitigated good.
Zealots, who want to build AGI and superintelligence because they believe it’s the superior species that should control the future.
Opportunists, who just follow the hype without having any strong belief about it.“
(Opportunists. Oof.)
They finish with some constructive suggestions, organised into three directions for avoiding existential AGI risk:
Civic duty is the foundation of a response to AGI risk
Creating a vision and a plan for a good future
Actions that help reduce AGI risk
If AGI Doom is your thing, you’ll find plenty to energise and motivate you here. I’m not being flippant, I know some people who strongly hold these views and who are deeply concerned about the current trajectory.
But for me it’s… polemical. I’m just not convinced by the jump to “God-like AI means [human] extinction”… my interpretation of evolution is that it doesn’t work that way. There are billions of ecological niches and over a long enough time period intelligent life will surely evolve beyond baseline hominids… but unlikely in direct competition with us.
I lean far closer to German technology philosopher Joscha Bach’s position on how we just cannot fathom the motivations of our posthuman descendants (see Mind Expanding below):
“If you ask a chimpanzee what is going to be the greatest achievement that your great grandchildren will have done, is it going to be the Mona Lisa or is it going to be the Tesla car? That is something that will not matter to the chimpanzee either way.“
🔀Microsoft pushes AI “diffusion”
More tangibly, two keynotes struck by Microsoft leadership this week:
The Next Great GPT? Brad Smith, Microsoft's Vice Chair and President, invokes the current US Election (??) to talk up the future impact of AI as the next great prosperity-advancing general-purpose technology (GPT). He argues from history that a country's economic success during industrial revolutions is primarily determined by its ability to "diffuse" GPTs across its economy, rather than just being at the forefront of innovation.
So…national AI strategies should be as much about accelerating AI adoption as R&D. (Leave that to us and just pay your monthly Azure bill on time, nothing to see here…)
AI for Startups Microsoft jointly authored a US policy agenda with prolific VC firm Andreessen Horowitz (a16z) to encourage AI innovation and collaboration between Big Tech and US startups. In particular:
Enabling competition and choice in AI models and tools
Supporting open-source AI innovation to increase accessibility and scrutiny
Creating an “Open Data Commons” for AI development:
“There is a role for [the US?] government to enable and craft policies that support a thriving and growing ecosystem of data around the globe through Open Data Commons—pools of accessible data that would be managed in the public’s interest. Governments should participate and lead this effort by releasing data sets in ways that are useful for AI cultural institutions and libraries. Governments should ensure that startups can easily access these data pools.“
Preserving the right to learn from copyrighted works for AI training (Yes you heard that right… the end-game for copyright law becomes clearer).
On the one hand… open-source, data commons… this all sounds pretty reasonable and supportive of the little-guy startups, right? On the other… the first step before enclosing the commons is identifying and defining them. Hopefully regulators see the bigger game at play here… yes, open-source tech and open access to a data commons is ideal… but it should come with conditions for redistribution of any extraordinary private profits resulting from using them. (And hello, there are countries outside the US as well!)
⏭️OpenAI: what’s next?
The week in OpenAI…
Reuters reported that OpenAI has shelved plans to establish a network of chip factories and is instead working with TSMC and Broadcom to design its own in-house AI chip, targeting a build in 2026.
Also, CEO Sam Altman and other leaders held a Reddit AMA discussing what’s next for OpenAI. Strong rumours are that “GPT-Next” is due by the end of the year; Sama deflected with vague caveats…:
A couple of other key moments from the AMA:
(@8teapi and @kimmonismus summarise better than me.)
📈AI in 2024: all lines go up
AI capex line go up:
AI Data centres1 line go up:
AI GPUs line go up: Meta is using more than 100,000 Nvidia H100 AI GPUs to train Llama-4:
“Meta isn’t the first company to have an AI training cluster with 100,000 Nvidia H100 GPUs. Elon Musk fired up a similarly sized cluster in late July, calling it a ‘Gigafactory of Compute’ with plans to double its size to 200,000 AI GPUs. However, Meta stated earlier this year that it expects to have over half a million H100-equivalent AI GPUs by the end of 2024, so it likely already has a significant number of AI GPUs running for training Llama 4.“
AI energy consumption line go up: each H100 has peak power consumption of ~700W, so a cluster of 500,000 would require (by my calculations and ChatGPT’s) 350MW of power supply to run - running this continuously for 1 year would use over 3TWh!!! Paul Churnock (ex-Microsoft) concurs:
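For anyone who wants to check those numbers, a minimal back-of-the-envelope sketch (my assumptions: ~700W peak per H100, 100% utilisation, and no cooling or networking overhead, which would push the real figure higher):

```python
# Back-of-the-envelope: power and energy for a 500,000-GPU H100 cluster.
# Assumptions: ~700W peak per GPU, 100% utilisation, no cooling/networking overhead.
GPU_COUNT = 500_000
WATTS_PER_GPU = 700
HOURS_PER_YEAR = 24 * 365

power_mw = GPU_COUNT * WATTS_PER_GPU / 1e6       # total draw in megawatts
energy_twh = power_mw * HOURS_PER_YEAR / 1e6     # MW x hours = MWh; /1e6 -> TWh

print(f"Power draw: {power_mw:,.0f} MW")         # ~350 MW
print(f"Annual energy: {energy_twh:.2f} TWh")    # ~3.07 TWh
```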
More concerning: embedded energy in AI use goes all the way back to the chip fab itself:
Analyst firm TechInsights calculates that each EUV lithography tool consumes 1,400 kilowatts… and rising:
“By 2030, the estimated annual electricity consumption for EUV tools alone could exceed 54,000 gigawatt-hours, more than 19 times the amount used by the Las Vegas Strip in a year.“
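Taking those figures at face value, a rough sketch of what they imply (my assumptions, not TechInsights’: the 54,000 figure is gigawatt-hours per year, and tools run close to continuously):

```python
# Rough implications of the TechInsights EUV figures. My assumptions, not theirs:
# the 54,000 figure is GWh/year, and tools run close to continuously.
KW_PER_TOOL = 1_400                 # quoted power draw per EUV tool
ANNUAL_EUV_GWH_2030 = 54_000        # quoted 2030 estimate, read as GWh/year
HOURS_PER_YEAR = 24 * 365

gwh_per_tool = KW_PER_TOOL * HOURS_PER_YEAR / 1e6      # ~12.3 GWh/year if run flat out
implied_tools = ANNUAL_EUV_GWH_2030 / gwh_per_tool     # taking both quoted numbers literally
vegas_strip_gwh = ANNUAL_EUV_GWH_2030 / 19             # the Las Vegas comparison, backed out

print(f"Energy per tool: {gwh_per_tool:.1f} GWh/year")
print(f"Implied EUV tool fleet by 2030: {implied_tools:,.0f}")     # ~4,400
print(f"Implied Las Vegas Strip usage: {vegas_strip_gwh:,.0f} GWh/year")   # ~2,800
```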
AI environmental damage line go up: The environmental campaigners fighting against data centres:
"What's going to happen if we continue with business as usual is that electrical prices are going to skyrocket for everybody, including the data centre industry - and that's their biggest bill, so that's going to impact them…The water scarcity issue is also going to impact them.“
🔐Open-source AI vs. “national security”
A provocative headline from Reuters: “Chinese researchers develop AI model for military use on back of Meta's Llama“.
Meta’s [unenforceable] Llama 3.2 licence prohibits use for:
“Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State…“
There are two ways to read this article:
EITHER: Meta’s open source AI strategy is a national security risk for the US and Zuckerberg should be hung out to dry by the US security state.
OR… simply replace the words “Meta’s Llama” with “Linux”. Nothing to see here, it’s only an open source piece of infrastructure. (And in many ways China using US AI tech inside its own military operations provides as many advantages as disadvantages for the US…!)
🥼New AI research
Drinking from the firehose as usual… in no particular order:
NVIDIA HOVER robots living in a simulation… a new compact 1.5M-parameter model trained in NVIDIA's accelerated simulation environment that can control humanoid robots through various input methods (XR devices, motion capture, or joysticks) and transfer seamlessly to real-world operation without additional training.
Llama Berry A group of open source researchers is actively working on replicating OpenAI’s o1 model (“Strawberry”) building on Meta’s Llama:
AI decision-making in organizational management Academic research from China, analyzing 176 studies from 2018 to 2024, identifies two AI-human interaction modes: substitutive and collaborative decision-making.
AI governance in practice report
Anthropic hired an “AI Welfare Officer” in the same week that a report from Eleos AI argued that it is time for Taking AI Welfare Seriously:
“Our new report argues that there is a realistic possibility of consciousness and/or robust agency—and thus moral significance—in near-future AI systems, and makes recommendations for AI companies.”
🆕New releases
Trying out a new “in brief” format:
ChatGPT Search from OpenAI (covered in Monday’s strategy note)
Gemini Search Grounding on Google AI Studio (also covered in Monday’s strategy note)
Google LearnAbout: enter any topic you want to learn about and dive in:
(Video via @itsPaulAI)
Claude Desktop app and Dictation (underwhelming after using ChatGPT Advanced Voice Mode for hours in my car🗣️):
Synthflow Voice 2.0 realtime voice integration built on top of OpenAI AVM. Demo of restaurant booking:
Gladia real-time transcription API: Live transcription and insights at < 300 milliseconds, compatible with any tech stack and telephony protocol like SIP.
MaskGCT *another* SOTA open-weights Text-to-Speech model option:
Suno personas - lets you save the essence of a song - vocals, style, vibe - and reimagine it across all your other song creations:
(Missed this last week) Midjourney editing: generate images in a specific area of an image… this will be big:
Runway Advanced Camera Control incredible, all AI generated:
Red Panda / Recraft *Yet another* SOTA AI image generation model, this one “thinks in design language” and is pretty nifty with text:
SmolLM2 - the 135M 8-bit model runs at almost 180 tokens/sec on an iPhone 15 Pro:
BitnetCPP, featured a few weeks ago… amazing performance on a standard x86 CPU:
Oasis: the first(?) playable AI-generated game.
“Oasis generates frames based on your keyboard inputs. You can move and jump around, break blocks, and build and explore a brand new map every game.“
Claude, watch this video for me: Ethan Mollick tests out Claude computer use:
📉LLM as CRM … short large CRM software companies now??
🎥AI movie making
Two examples of what’s possible now with current tools plus prompting skillz…
Pixar-like cartoons:
(Video via @kimmonismus)
The Matrix remakes: (OK you need Mazyar Sharifian’s compositing skillz too… but soon)
Now would someone please hurry up and make the fanfic version of cancelled Series 2 of The Peripheral with AI!?
🎓Learn about AI learning
I go on enough about AI and LLMs each week… but I understand how they work better after spending 90 mins on this Stanford CS229 lecture from Yann Dubois:
🔮[Weak] signals
Non-AI, mostly-tech signals from near and far futures...
🔄Kestra
I’ve been looking for something like this for a while… Kestra is an open-source workflow automation platform (an alternative to Zapier or Make):
🏢Hybrid work: data
HBR reports One Company A/B Tested Hybrid Work. Here’s What They Found:
Online travel company Trip.com (which I used extensively on my recent nomading around Europe and Asia, a great service) recently conducted an extensive A/B test on hybrid work with 1,600 employees in China. The company randomly assigned employees to a three-day or five-day in-office schedule for six months, with the following results:
No difference in productivity, performance reviews, or promotions between hybrid and office-based groups.
35% lower attrition rate for hybrid workers, especially among women and long-distance commuters.
As a result, hybrid work saved the company millions in staff turnover costs.
Factors contributing to success:
A rigorous performance management system
Clear, coordinated in-office schedules
Executive support for hybrid work
The experiment changed managers' perceptions, with productivity estimates shifting from a 2.6% decrease to a 1% increase after the six month period.
Nice to see some data for once!
🚫China sanctions Skydio
OK this is one to watch… the Chinese government imposed sanctions on Skydio, the largest drone manufacturer in the US (and a key supplier to Ukraine's military). Chinese suppliers, including Skydio's sole battery provider, are now banned from providing critical components — the sanctions are seen partly as retaliation for US approval of drone sales to Taiwan.
Skydio is urgently seeking alternative suppliers, with some potential options in Asia, including Taiwan… but China increasingly controls global battery supply chains which gives it significant leverage…
Sanctions work both ways…
🚗🤖📈Autonomy rising
Autonomous taxi company Waymo hit over 300,000 driverless rides per month in California in August, up from only 12,000 one year before. Watch this one diffuse…!
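A quick sketch of the implied growth rate (assuming the 12,000 to 300,000 jump happened over exactly 12 months):

```python
# Implied growth rate for Waymo's driverless rides in California.
# Assumption: the 12,000 -> 300,000 jump happened over exactly 12 months.
rides_start = 12_000      # ~August 2023
rides_end = 300_000       # August 2024
months = 12

growth_factor = rides_end / rides_start             # 25x year on year
monthly_rate = growth_factor ** (1 / months) - 1    # compound monthly growth

print(f"Year-on-year growth: {growth_factor:.0f}x")
print(f"Implied average monthly growth: {monthly_rate:.0%}")   # ~31% per month
```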
🔗Crypto
A16Z published their State of Crypto 2024 report:
“7 key takeaways
1. Crypto activity and usage hit all-time highs
2. Crypto has become a key political issue ahead of the U.S. election
3. Stablecoins have found product-market fit
4. Infrastructure improvements have increased capacity and drastically reduced transaction costs
5. DeFi remains popular — and it’s growing
6. Crypto could solve some of AI’s most pressing challenges
7. More scalable infrastructure has unlocked new onchain applications“
Some interesting data inside:
Meanwhile in the more conservative world of Central Bank Digital Currencies (CBDCs), the Switzerland-based Bank for International Settlements (BIS) unexpectedly announced it was withdrawing from mBridge, the leading cross-border CBDC payments initiative, after four years of involvement in the project. This followed concerns that the mBridge network could potentially be used as a model to create a BRICS Bridge system enabling Russia to bypass USD payments systems and evade US financial sanctions. Speculation is now that China may take the lead on the project…
🦾Robotics
Another week of impressive robot demo videos… this week with a bit of a household chore vibe:
Boston Dynamics’ new Atlas robot goes full-on autonomous in a simulated manufacturing environment:
🧺Startup Physical Intelligence trained a robot to do the laundry:
It’s actually a really hard set of tasks to automate. Co-founder Chelsea Finn lets slip the recipe (sketched generically in code below):
“Pre-train on lots of robot data, fine-tune on high-quality data.
- Pre-training teaches the robot how to react to many scenarios
- Post-training tells it what strategy to use”
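In generic ML terms, that two-stage recipe looks something like the sketch below: plain PyTorch behaviour cloning on placeholder data. To be clear, this is my outline under my own assumptions, not Physical Intelligence’s code, and their actual model and datasets are far more sophisticated:

```python
# Generic two-stage "pre-train broad, fine-tune narrow" recipe. A sketch only:
# data here is random placeholder tensors standing in for (observation, action) pairs.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

OBS_DIM, ACTION_DIM = 64, 8
policy = nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ReLU(), nn.Linear(256, ACTION_DIM))

def train_stage(model, dataset, epochs, lr):
    """One stage of supervised behaviour cloning on (observation, action) pairs."""
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for obs, action in loader:
            loss = nn.functional.mse_loss(model(obs), action)
            optim.zero_grad()
            loss.backward()
            optim.step()
    return model

# Placeholder datasets: a big messy corpus vs. a small curated demo set.
broad_robot_data = TensorDataset(torch.randn(10_000, OBS_DIM), torch.randn(10_000, ACTION_DIM))
curated_laundry_demos = TensorDataset(torch.randn(500, OBS_DIM), torch.randn(500, ACTION_DIM))

# Stage 1: pre-train on lots of robot data ("how to react to many scenarios").
policy = train_stage(policy, broad_robot_data, epochs=5, lr=1e-4)

# Stage 2: post-train on high-quality demos ("what strategy to use"), lower learning rate.
policy = train_stage(policy, curated_laundry_demos, epochs=3, lr=1e-5)
```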
In a similar vein, researchers have developed a completely open source cleaning robot (software+hardware) prototype for under US$250, operated using GPT-4o voice commands:
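For flavour, here’s roughly what a voice-to-action loop like that could look like using the OpenAI Python SDK. A hedged sketch under my own assumptions: the robot object, its execute() method and the JSON action schema are hypothetical stand-ins, not the researchers’ actual code:

```python
# Sketch of a voice-command -> GPT-4o -> robot action loop. My outline only:
# the robot object and the JSON action schema are hypothetical stand-ins.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You control a small cleaning robot. Reply ONLY with JSON like "
    '{"action": "move|vacuum|stop", "direction": "forward|back|left|right", "seconds": 2}'
)

def handle_voice_command(robot, audio_path: str) -> dict:
    # 1. Transcribe the spoken command.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

    # 2. Ask GPT-4o to turn the transcript into a structured action.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript.text},
        ],
        response_format={"type": "json_object"},
    )
    action = json.loads(response.choices[0].message.content)

    # 3. Dispatch to the (hypothetical) motor controller.
    robot.execute(action)
    return action
```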
🔋Energy storage by the container load
Chinese multinational Envision Energy unveiled the world’s most energy-dense grid-scale battery energy storage system at 541 kWh/m² — an 8MWh, 1,500-2,000-volt battery packed into a standard 20-foot container:
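That 541 kWh/m² figure roughly checks out against the container footprint (a sketch, assuming standard external 20-foot dimensions of about 6.06m x 2.44m and reading the spec as energy per square metre of floor area):

```python
# Sanity check on Envision's claimed energy density. My assumptions: standard
# external 20-foot container dimensions, density measured per m2 of floor area.
CONTAINER_LENGTH_M = 6.06
CONTAINER_WIDTH_M = 2.44
PACK_ENERGY_KWH = 8_000     # 8 MWh

floor_area_m2 = CONTAINER_LENGTH_M * CONTAINER_WIDTH_M      # ~14.8 m2
density_kwh_per_m2 = PACK_ENERGY_KWH / floor_area_m2        # ~541 kWh/m2

print(f"Floor area: {floor_area_m2:.1f} m²")
print(f"Energy density: {density_kwh_per_m2:.0f} kWh/m²")
```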
👃Scent teleportation
Osmo labs claim to have invented “scent teleportation” using AI to analyze the chemicals in a scent, and then “reprint” it somewhere else:
(Video: @Osmo_labs)
🕶️🚫AR Adblocker
Not exactly original, but one of the more compelling demos for AR glasses I’ve seen:
(Video: @kimmonismus)
🧀Memory-holing the internet
The history of the internet is developing gaps in its memory… according to Pew Research, already more than a third of webpages from 2013 cannot be accessed online any more:
Furthermore:
After a recent cyberattack, the shoestring-budget Internet Archive has not been archiving since 8 October.
(As reported in Memia 2024.05 and Memia 2024.37), earlier this year Google quietly retired its web page cache functionality.
Plus the Alexa page ranking service was retired by Amazon in 2022.
One side-effect: the ability of censors to memory-hole the internet is growing. Time to hurry up and publish the Internet Archive as a data commons for AI training… then I’m sure it will get funding!
🧠Mind expanding
⚠️Global catastrophic and existential risks
My friend and colleague Matt Boyd at Adapt Research, together with Nick Wilson from Otago University, have been leading significant work here in Aotearoa with two recent reports on national resilience to catastrophic risks:
Lost at Sea: Shipping in NZ through a Catastrophic Risk Lens
The Critical Minerals That Matter: Aotearoa/NZ’s Basic Needs in a Global Catastrophe
Matt also presented at the recent Cambridge Conference on Catastrophic Risk 2024 in September:
Meanwhile, in the US, RAND researchers published their Global Catastrophic Risk Assessment report for 2024 from the US perspective:
Overall, global catastrophic risk has been increasing in recent years and appears likely to increase in the coming decade
For supervolcanoes and asteroid and comet strikes, risk should remain constant or decrease in the next decade.
For the remaining threats and hazards, the risk appears to be increasing in the next decade because of current or expected human activities.
For artificial intelligence, the uncertainties are sufficiently large that it is difficult to determine the extent or magnitude of changes in risk with any confidence.
(That last one… such insight!)
♟️Building an AGI to play the longest games
As mentioned above… Joscha Bach is one of my favourite thinkers about AI (and intelligence generally)… in conversation with Dan Faggella on the Trajectory podcast. 2 hours well spent:
💭Manifold Podcast: Samo Burja
Also popping up in my longlist of regular podcast listens is Manifold Podcast hosted by physicist and entrepreneur Steve Hsu.
Here he is in conversation with Samo Burja, founder of Silicon Valley consulting firm Bismarck Analysis which “investigates the political and institutional landscape of society”.
(Quietly in awe of Samo for this:)
“The company is designed to facilitate my intellectual life. I designed it as the most cognitively stimulating job I could find and as one that would be most educational both for myself and for the other analysts.“
⏳Zeitgeist
Once around the non-tech world treading lightly...
🗳️Extreme election
The US is going to the polls now to decide its next President (and Congress…).
Likely to be just the beginning of a tumultuous few months — it’s unlikely either side will concede in a hurry… hang on to your seats🫣:
⛈️Extreme weather
Imagery of extreme weather events around the world is just getting crazier… with coverage particularly high when they hit relatively prosperous parts of the world.
Valencia in Spain was the latest to be hit with incredible flash flooding after one weather station in Chiva recorded 491 l/m² (491mm of rain) in just eight hours - the equivalent of a year's worth of rainfall. At least 200 people have died so far.
(Video via @WxNB_)
The aftermath:
(Scientists have been warning that more deadly floods like those in the semi-arid areas of North Africa could happen in Southern Europe… and that Mediterranean cities need to adapt to these extremes.)
World Weather Attribution looks at the evidence: Extreme downpours increasing in Southern Spain as fossil fuel emissions heat the climate.
Meanwhile in Oklahoma, tornadoes struck:
Yellow Dot Studios covers the week’s extreme weather report from around the world… it’s happening everywhere:
And to round things out, global sea ice extent is now at a record low:
🏭Extreme GHG reductions
On the positive side of the ledger, EU greenhouse gas emissions fell by over 8% in 2023, driven by growth in renewable energy.
💰Two undecillion roubles
Great headline from the BBC: Russia fines Google more money than there is in the entire world:
“a Russian court has fined Google two undecillion roubles - a two followed by 36 zeroes - for restricting Russian state media channels on YouTube.
In dollar terms that means the tech giant has been told to pay US$20,000,000,000,000,000,000,000,000,000,000,000.”
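For scale, a quick sketch of the arithmetic (the ~100 roubles/USD rate and the ~US$110 trillion annual world GDP figure are my rough assumptions):

```python
# How big is two undecillion roubles? Exchange rate and world GDP are rough assumptions.
fine_roubles = 2 * 10**36        # two undecillion: a 2 followed by 36 zeroes
rub_per_usd = 100                # rough late-2024 exchange rate
world_gdp_usd = 110e12           # roughly US$110 trillion a year

fine_usd = fine_roubles / rub_per_usd
print(f"Fine in dollars: {fine_usd:.0e}")                                 # ~2e+34
print(f"Multiples of annual world GDP: {fine_usd / world_gdp_usd:.0e}")   # ~2e+20
```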
Ludicrous…
🎭Deepfake parade
In Dublin, Ireland, thousands of revellers poured onto the streets expecting a Halloween parade after news spread on social media ... except that the whole event had been made up by a Pakistan-hosted website that creates AI-generated news.
A signal of things to come without more reliable epistemic technologies…?
🌌When galaxies collide
Two images, from Hubble (visible and ultraviolet light only) and the James Webb Space Telescope (infrared light only), combined for this view of two spiral galaxies colliding:
🧘Memetic savasana
What’s been catching in my net this week…
🚀Starship Diwali
India’s answer to SpaceX (… when the fireworks go off…yikes!)
(Video: @ssaratht)
🔴Mars Sucks
Poignant short video from Vivamos Visual Collective:
🛍️The shopping conspiracy
Netflix’s new documentary Buy Now! The Shopping Conspiracy exposes hidden tactics and covert strategies used to keep consumers locked in an endless cycle of buying… compelling trailer:
💾Compression / decompression
AI, late 2024:
🐙Octopus dreaming
This is stunning… (🎩 sharing Stephen Reid).
🙏🙏🙏 Thanks as always to everyone who takes the time to get in touch with links and feedback.
¡Nos vemos!
Ben
LOL I’ve been doing so much research into AI data centres in the last few weeks in an international client context that I’ve almost given up using the British / Kiwi English “centres” against US “centers”… 🤐