🌐Strategy note #9: What comes after the news? (Part 2)
A full toolbox of Epistemic Technologies
This is the second in my series of strategy notes exploring the future of “news” - read the first here:
(As with most of my writing these days, this post is co-authored with several AI models, mostly Claude 3.7 Sonnet latest. See the recipes at the end…)
In my last post I looked across ten trends in news media which are reshaping the modern information environment:
Dying business models
Declining trust in news media
Oligarchic media ownership
Algorithmic, biased news filtering
Tech platform control of distribution
Pull-backs from content moderation and fact-checking
AI-generated news content
Alternative news platforms
Low media literacy
Enclosure of the knowledge commons
Many of these trends go against what might be considered the “utility function” of news media for individuals, societies and societal systems:
“The utility function of news serves multiple purposes across individual, social, and systemic levels:
For individuals:
Information acquisition - Provides knowledge needed for daily decision-making
Risk management - Alerts people to potential threats or opportunities
Sense-making - Helps interpret complex events and their implications
Entertainment - Satisfies curiosity and interest in novel information
For societies:
Shared knowledge base - Creates common understanding of current events
Social cohesion - Facilitates conversation and connection around shared information
Democratic function - Enables informed civic participation and accountability
Cultural transmission - Perpetuates and evolves societal values and norms
For systems:
Market efficiency - Distributes information that influences economic behaviour
Power monitoring - Scrutinises authorities and institutions ("watchdog" function)
Agenda-setting - Influences which issues receive public and policy attention
Historical documentation - Creates the "first draft of history"
The relative importance of these functions varies depending on the context, the specific news outlet, and how individuals consume and process news information. Different stakeholders (audiences, journalists, publishers, advertisers, governments) may prioritise different aspects of this utility function based on their own interests and values.”
In this post I’ll explore the range of emerging Epistemic Technologies which will potentially augment — or wholesale replace — traditional news media to achieve these utility functions.
What is an “Epistemic Technology”?
Once again deferring to Claude as an on-demand, first-shot encyclopaedia (my emphasis in bold):
“An epistemic technology is a tool, method, or system that helps humans create, organise, store, retrieve, and distribute knowledge. These technologies fundamentally change how we access, evaluate, and build upon information, influencing our cognitive processes and knowledge acquisition capabilities.
Epistemic technologies include:
Physical tools like writing systems, books, libraries, and filing systems that have historically transformed how we record and access knowledge
Methodological approaches like the scientific method, logical reasoning, or statistical analysis that provide structured ways to develop and evaluate knowledge
Digital systems including search engines, databases, AI systems, and collaborative platforms that have dramatically expanded our ability to process, share, and build upon information
These technologies don't just store information but reshape our relationship with knowledge itself. They influence what questions we ask, how we process information, what we consider valid evidence, and how we collaborate to build shared understanding.
The concept of epistemic technologies emphasises that our knowledge systems aren't neutral containers but active participants in shaping how we think and what we can know. Each new epistemic technology—from writing to the printing press to the internet—has profoundly transformed human intellectual capabilities and social organisation.”
In particular, anticipating the forthcoming AI Slop Tsunami… what are the epistemic technologies we should be looking for in future?
Truth and the Underlay
I was already exploring these questions way back in Memia 2020.17 … at the time the debate was focused on how to counter the waves of mis- and dis-information related to the Covid-19 pandemic:
“At the heart of this debate lie at least two axes: between free and controlled expression, and between state and private control:
…But is it really up to a small number of Silicon Valley companies to set the rules, govern, design and operate massive content moderation/censorship apparatus, especially across international borders? (And do they even want to do it - for one the costs are enormous, even if AI will be able to do more and more over time… and don’t mention the US$52 million settlement for content moderators who developed PTSD on the job.)”
(Well, we all know how that turned out… X and Meta have abandoned their content moderation / “fact checking” apparatus, instead reverting to crowd-sourced “Community Notes” … which in their current guise are turtles all the way down…)
Continuing to recycle that original post from 2020:
In response, a number of tech bros have started calling for open source content moderation systems… effectively based upon a “truth layer” for the internet:¹

“Balaji is on to something here. If the Internet could create something as ambitious as Wikipedia, surely it should be possible to build a ‘truth layer’ for the web. Open Standards > Editorial Control. P.S. FB & Twitter would throw infinite money at this if it worked.”

The tweet being quoted, from Balaji S. Srinivasan (@balajis):

“It shouldn’t be tech companies per se getting into fact checking. It should be open source technology. Free, universally available code and data for epistemology. Take a piece of text, parse it, extract assertions, compare to explicitly specified knowledge graphs and oracles.” https://t.co/gDOEmZn7S4

(This might upset some of the more humanities-leaning persuasions, but clearly there are people in the business who see this as a solvable technology and data problem, not one of ethical principles.)
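To make that tweet-sized spec slightly more concrete, here is a minimal, hypothetical Python sketch of the pipeline Balaji describes: parse a piece of text, extract assertions, compare them against an explicitly specified knowledge graph. Every name in it (Assertion, KNOWLEDGE_GRAPH, extract_assertions, check) is invented for the illustration; a real system would hand the extraction step to an LLM or a dedicated claim-extraction model, and the knowledge graph here is a toy.

```python
# Illustrative sketch only: parse text -> extract assertions -> check against
# an explicitly specified knowledge graph. The extraction step is a crude
# regex stand-in for what a real system would delegate to an ML model.
from dataclasses import dataclass
import re

@dataclass(frozen=True)
class Assertion:
    subject: str
    predicate: str
    obj: str

# A toy "explicitly specified knowledge graph": a set of accepted triples.
KNOWLEDGE_GRAPH = {
    Assertion("wikipedia", "is", "crowd-sourced"),
    Assertion("community notes", "is", "crowd-sourced"),
}

def extract_assertions(text: str) -> list[Assertion]:
    """Stub extractor: pulls naive 'X is Y' patterns from each sentence."""
    assertions = []
    for sentence in re.split(r"[.!?]", text.lower()):
        match = re.match(r"\s*(.+?)\s+is\s+(.+)", sentence)
        if match:
            assertions.append(
                Assertion(match.group(1).strip(), "is", match.group(2).strip())
            )
    return assertions

def check(text: str) -> dict[Assertion, str]:
    """Label each extracted assertion as supported or unverified by the graph."""
    return {
        a: "supported" if a in KNOWLEDGE_GRAPH else "unverified"
        for a in extract_assertions(text)
    }

if __name__ == "__main__":
    # One assertion matches the graph; the other is flagged as unverified.
    print(check("Wikipedia is crowd-sourced. The moon is made of cheese."))
```

Even this toy version makes the hard part obvious: the value is entirely in who curates the knowledge graph and how conflicting "oracles" are reconciled, which is a governance problem as much as an engineering one.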
Several avenues to explore:
Firstly, a refresher on Truth and Trust: how do we know what is true? by Jeff Giesea
Snopes is the original internet fact-checking service, operating since before Google (so it says on their website😜).
The Society Library is extracting the ideas, arguments, claims, and evidence from internet media to construct comprehensive, browseable databases of society’s ideas, ideologies, and world-views.
For a practical explanation of how they do this, read this absolutely excellent article Deconstructing the Logic of “Plandemic” - And why it’s so hard to talk logically about COVID-19 in general:
“We extracted 448 claims from Plandemic. So essentially, we identified 448 teeny, tiny little debatable units of logic that are used as reason to support other claims and arguments in the film, which in turn rely on other claims and arguments to support them. Plandemic essentially implies that there are 448 questions that could be asked, and at least twice as many positions to be defended…”
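For a sense of what such a deconstruction looks like as data, here is a minimal, hypothetical sketch (not the Society Library's actual schema; the names claims, supports and dependencies are invented) in which each claim is a "debatable unit" and edges record which claims are offered as support for which:

```python
# Hypothetical claim-dependency graph in the spirit of the Society Library's
# deconstruction work -- not their actual data model. Each claim is a
# "debatable unit"; the supports map records which claims are offered as
# evidence for which other claims.
from collections import defaultdict

# claim id -> claim text
claims = {
    "c1": "The film's central conclusion",
    "c2": "A supporting claim made by an interviewee",
    "c3": "A statistic cited to back up c2",
}

# supported claim -> list of claims offered as evidence for it
supports = defaultdict(list)
supports["c1"].append("c2")
supports["c2"].append("c3")

def dependencies(claim_id: str) -> list[str]:
    """Return every claim that claim_id ultimately rests on (depth-first)."""
    stack, seen = list(supports[claim_id]), []
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.append(c)
            stack.extend(supports[c])
    return seen

# To debate c1 honestly you also have to examine everything it rests on:
print(dependencies("c1"))  # ['c2', 'c3']
```

Scale that structure up to 448 interlinked claims and the quote above stops sounding like hyperbole: refuting the headline conclusion means traversing the whole dependency graph beneath it.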
The Underlay² is a free open source system for structuring, storing, and aggregating open, distributed graph data. Its goal is to make machine-readable public knowledge accessible to all as a public good (sort of like Wikipedia, but as an API):
“It provides a common interface for searching, accessing, vetting and building upon public knowledge from diverse, sometimes conflicting sources.”
Read the Underlay whitepaper: The Future of Knowledge for the underpinning principles.
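Conceptually, the key move is to keep every assertion attached to its provenance, so that conflicting claims from different sources can sit side by side and be vetted rather than silently merged into a single "truth". The sketch below is my own illustration of that idea, not the Underlay's actual data model or API; Statement, store and lookup are invented names.

```python
# Illustrative sketch only -- not the Underlay's actual schema or API.
# Assertions are stored with their source, so disagreements between
# publishers are surfaced with provenance instead of being collapsed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Statement:
    subject: str
    predicate: str
    obj: str
    source: str  # provenance: who asserted this, and where

store = [
    Statement("event_x", "death_toll", "12", source="outlet_a.example"),
    Statement("event_x", "death_toll", "15", source="outlet_b.example"),
    Statement("event_x", "location", "wellington", source="outlet_a.example"),
]

def lookup(subject: str, predicate: str) -> dict[str, list[str]]:
    """Group what each source says about (subject, predicate)."""
    answers: dict[str, list[str]] = {}
    for s in store:
        if s.subject == subject and s.predicate == predicate:
            answers.setdefault(s.obj, []).append(s.source)
    return answers

# Two sources disagree on the death toll; both answers are returned,
# each with the provenance needed to vet them.
print(lookup("event_x", "death_toll"))
# {'12': ['outlet_a.example'], '15': ['outlet_b.example']}
```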
Wolfram Alpha is a similar effort with a longer pedigree, but based on proprietary technology and algorithms.
Alternatively, take a market-based approach? Idea Markets take a radically decentralised approach to Make Lying Expensive:
Then 2 years later in Memia on Sunday 13.02.2022 I wrote about Splintered realities and epistemic security in light of the wave of disinformation driving the anti-government protests outside the New Zealand Parliament at the time. Who remembers this?
… the BlueSky Protocol initiative, being nurtured by Twitter, is likely to provide decentralised infrastructure to offload content moderation from tech company control (and responsibility - they definitely don’t want that role!). So at that point, some kind of decentralised internet consensus (possibly with formally embedded roles for nation-state governments) will be used to determine what gets moderated/censored in each territory where the protocol’s client social networks operate.)
It will be interesting to see how deeply governments will get involved in this initiative. I’d prefer to avoid this:
The destabilising threat of information pollution?
On the same topic, in response to this week’s newsletter link on Estonia promoting media literacy for “disinformation inoculation”, Matt Boyd from Adapt Research linked back to his prescient piece in 2018: Keeping our eye on the laser phish: Information pollution, risk, and global priorities, which contains this graph👀:
"Children need to grow up with information literacy. I don’t mean just how to interpret a media text, or how to create digital content. I mean they need to learn how to distinguish real from fake, and how information spreads due to the system of psychological heuristics, network structure, frequency and source biases, and the content appeal of certain kinds of information. These are critical skills in a complex information environment and we have not yet evolved defenses against the current threats."
Obviously I echo these sentiments, but I feel that a “high school education” won’t get anywhere close to enough for what’s coming down the road. Technology will be needed to counter technology: we are likely only at the start of an age of torrential AI-generated synthetic media. Learning “how to distinguish real from fake” online won’t even be possible in future without a full toolbox of technological help.
A full toolbox of technological help
Fast-forward to 2025: now, after nearly 2.5 years of mainstream access to human-writing-equivalent LLMs, the online information environment is becoming ever more info-hazardous. It’s risky out there… from now on, arguably, we should never go online without carrying our epistemic infrastructure toolbox with us:
The diagram above is my latest evolution of what I think the “future of news” is going to look like, namely:
Increasingly automated News Production and News Sensemaking compete in a Darwinian battle to sift the signal from the noise / attempted manipulation
News Consumption becomes a highly filtered, personalised experience backed up by a World Model which maintains as broad a consensus on what is actually going on - and what is likely to happen - as is technically possible.
All of this is backed up by Epistemic Infrastructure: open-source, decentralised AI and communications technology which enables consensus-building at scale.
For the rest of this post I’ll break each of these components down and illustrate with examples…