<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Overcoming Bias]]></title><description><![CDATA[This is a blog on why we believe and do what we do, why we pretend otherwise, how we might do better, and what our descendants might do, if they don't all die.]]></description><link>https://www.overcomingbias.com</link><image><url>https://substackcdn.com/image/fetch/$s_!fWaM!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12e5f0e6-f8e1-43ea-853f-00740eb11240_1280x1280.png</url><title>Overcoming Bias</title><link>https://www.overcomingbias.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 06 May 2026 09:26:00 GMT</lastBuildDate><atom:link href="https://www.overcomingbias.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Robin Hanson]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[overcomingbias@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[overcomingbias@substack.com]]></itunes:email><itunes:name><![CDATA[Robin Hanson]]></itunes:name></itunes:owner><itunes:author><![CDATA[Robin Hanson]]></itunes:author><googleplay:owner><![CDATA[overcomingbias@substack.com]]></googleplay:owner><googleplay:email><![CDATA[overcomingbias@substack.com]]></googleplay:email><googleplay:author><![CDATA[Robin Hanson]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Coming Hackastrophe]]></title><description><![CDATA[For years, cybersecurity experts have been warning about the chaos that highly capable hacking bots could usher 
in.]]></description><link>https://www.overcomingbias.com/p/the-coming-hackastrophe</link><guid isPermaLink="false">https://www.overcomingbias.com/p/the-coming-hackastrophe</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Tue, 05 May 2026 15:52:44 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/341fc924-5611-4fa9-b63e-a4908b1f7a4b_615x420.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>For years, cybersecurity experts have been warning about the chaos that highly capable hacking bots could usher in. &#8230; Claude Mythos Preview appears to represent not an incremental change but the beginning of a paradigm shift. &#8230; Perhaps more concerning than the reported capabilities of Mythos Preview is that other companies are not far behind. (<a href="https://www.theatlantic.com/technology/2026/04/claude-mythos-hacking/686746/">More</a>)</p></blockquote><blockquote><p>Finding bugs was also hard, so the worst flaws stayed hidden, sometimes for decades. It wasn&#8217;t a great system. But the difficulty on both sides created a kind of d&#233;tente that held. Now, thanks to new A.I. tools, anyone can write code. Soon, bad actors could use those same tools to find out what&#8217;s wrong with code. The d&#233;tente is over. (<a href="https://www.nytimes.com/2026/04/15/opinion/mythos-open-souce-internet.html?searchResultPosition=2">more</a>)</p></blockquote><blockquote><p>Use strong passwords that are unique across every site, preferably through a trusted password manager. Better yet, when a site offers a passkey, take it. &#8230; For accounts without passkeys, use an authenticator app for two-factor authentication, not text messages. Always keep all your software up to date, and uninstall unnecessary apps. 
(<a href="https://www.nytimes.com/2026/04/28/opinion/cybersecurity-mythos.html?searchResultPosition=1">more</a>)</p></blockquote><p>OK, I&#8217;m a few weeks late to this party, but not too late to give many of you news: <em>We may soon face a period (a few years?) of greatly reduced software availability.</em></p><p>For many decades, we have known how to write pretty secure software. It takes a bit longer, and security considerations must be central to early design efforts, but it is possible. However, developers have usually been in too much of a rush to market to do this. So most software systems today are riddled with security holes. What has saved them so far is that it takes humans a lot of work to find and exploit such holes.</p><p>However, there now exist powerful AI systems that are far better at finding and using such holes. Soon (within a year or two?) many AI firms will have such tools, and they will spread to be widely available. Yes, such AI systems can also work to patch such holes, but computer security <a href="https://en.wikipedia.org/wiki/Mark_S._Miller">experts</a> tell me that the nature of insecure systems is to make it much easier to find and use than to patch such holes. Attack beats defense.</p><p>Software firms would then more eagerly rewrite their code to use more secure designs, and AI could help them to do this. But this takes time, and as there isn&#8217;t a lot of secure software out there now, AI hasn&#8217;t had big datasets ready to help them learn how to do this well. So it will take some time to replace weak with strong software.</p><p>So there may soon be a period, starting within a few years, maybe lasting a few years, when most actual software systems can cheaply be hacked. This will make such software firms vulnerable to ransomware, and make customers wary of using their products. 
Customers, firms, and app stores will respond by cutting back on what software systems they offer, and by simplifying them, dropping many features.</p><p>As our world has come to rely on software for a great many things, it seems quite concerning that we might soon have to make do with substantially less software. How vulnerable are crucial systems like electricity, cars, traffic lights, voting systems, and payment systems? I don&#8217;t think we know. Beware the coming Hackastrophe.</p><p>Note: such an event would likely make the public much more willing to regulate AI. And if credit card firms get overwhelmed with false sales, that could make crypto more attractive.</p>]]></content:encoded></item><item><title><![CDATA[On Politics And Governance]]></title><description><![CDATA[The key innovation that has powered the modern era is: organizations.]]></description><link>https://www.overcomingbias.com/p/on-politics-and-governance</link><guid isPermaLink="false">https://www.overcomingbias.com/p/on-politics-and-governance</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Sun, 03 May 2026 16:41:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a7f7806a-5533-4918-97a3-d9c4ddcc4668_642x428.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The key innovation that has powered the modern era is: organizations. We solve a great many problems by creating an org, setting it tasks, giving it powers and resources, and putting some key &#8220;masters&#8221; in charge.</p><p>Besides participating as suppliers, customers, employees, or targets of such orgs, there are two other key ways we engage such orgs: <em>politics</em> and <em>governance</em>. In politics, we take sides among the different alliances of masters and tasks, struggling for who will dominate. 
In governance, we try to hold masters accountable for achieving tasks, and seek new, better ways to choose, reward, and monitor them.</p><p>Low status folks have long been advised to keep their head down and stay out of both politics and governance. Higher status folks, in contrast, are somewhat encouraged to do politics, if they are willing to risk suffering repression when their allies lose. We like democracy as more of us can more safely be political, and thus see ourselves as high status, though politics becomes less safe as political polarization rises.</p><p>However, most folks are well advised to stay out of governance, at least when that involves any substantial chance of holding masters more accountable, and thus cutting into their spoils. Masters coordinate to block cuts to their spoils. (Yes, some spoils come via achieving promised tasks, but most don&#8217;t.) In contrast, masters don&#8217;t mind and even like governance changes that don&#8217;t risk stronger accountability. Such as making it more popular, inclusive, or decentralized, with more intensive participation, etc. </p><p>How much should you fear masters displeased by your meddling in governance? Greatly! Org masters, and their allies and wannabes, are the fiercest predators of our world. Smart, energetic, and well-connected, they are wolves in sheep&#8217;s clothing, smiling broadly, speaking gently and grandly, but holding their fangs and claws in shadows, ready to strike. </p><p>Alas, our world has long suffered from poor governance. So much so that for most problems we know how to solve, we don&#8217;t actually solve them. We got good enough at governance to allow the modern world to have big orgs, but just barely.</p><p>Today, our civilization faces problems so huge that we will most likely fall, as did the Roman Empire, to be replaced by insular fertile cultures like the Amish and Haredim. 
Better governance seems our best hope here, and promising alternatives do exist, ones that can be tested at small scales before deploying on larger scales. Alas such efforts are mainly blocked by spoil-protecting masters. Will enough of us risk their displeasure to force such innovation experiments in time?</p>]]></content:encoded></item><item><title><![CDATA[Figure Stuff Out Together]]></title><description><![CDATA[We vary in our motives and priorities in thinking.]]></description><link>https://www.overcomingbias.com/p/figure-stuff-out-together</link><guid isPermaLink="false">https://www.overcomingbias.com/p/figure-stuff-out-together</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Sat, 02 May 2026 17:07:02 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/20556aba-4814-4e65-9d41-c7d122bed4ff_866x578.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We vary in our motives and priorities in thinking. For example, some try to impress, some try to sell others on pre-existing positions, some try to show loyalty and support to teams, and some try to figure stuff out. As we have norms against the other motives, when asked, many of us claim to have this last widely admired motive.</p><p>Yet, strikingly, few in public discussions present themselves as trying to figure things out together with their convo partners. Such as by posing problems and questions, reframing these to avoid sloppiness, offering alternative options and answers, noting puzzling or contrary consequences, and admitting when one&#8217;s prior convo moves are undermined by new points made.</p><p>Yes, presenting a figuring-stuff-out-together convo persona often imposes some costs relative to other possible personas. 
But the more eager we are to suppress other possible interpretations of our motives, the more eager we should be to pay such costs, to assert our preferred persona.</p><p>I have to conclude that while we usually don&#8217;t want to directly admit that we seek to impress, sell, or support, we don&#8217;t actually much mind observers inferring such motives in us. Few actually have that much respect for those who try to figure stuff out together. </p>]]></content:encoded></item><item><title><![CDATA[On Prediction Market Regulation]]></title><description><![CDATA[(This is my comment re CFTC call for comments on prediction markets.)]]></description><link>https://www.overcomingbias.com/p/on-prediction-market-regulation</link><guid isPermaLink="false">https://www.overcomingbias.com/p/on-prediction-market-regulation</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Wed, 29 Apr 2026 20:04:58 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8e9b679b-e2d6-4543-bd67-dc53736d04b6_1200x349.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>(This is my comment re CFTC <a href="https://www.cftc.gov/PressRoom/PressReleases/9194-26">call</a> for comments on prediction markets.)</p><p>As an economist, not a lawyer, I write here on public interest, not what is legal.</p><p>Like all financial markets, prediction markets can serve many functions, such as moving and cutting risk, collecting and sharing information, and the fun of action and proving yourself in competition. For decades, that risk function was the only one U.S. regulators allowed as a justification, but I&#8217;ve long argued for a huge potential info value, far more than what we now realize. I want these markets to grow toward that potential. 
While I personally don&#8217;t mind people having fun, I see that others mind, and we might have to compromise there.</p><p>Focusing on that info function, prediction markets have many issues in common with other info institutions, like gossip, academia, and journalism. All info institutions can induce folks to (A) reveal info better kept secret, (B) reveal secrets people promised to keep, (C) waste time and money that could be used productively, (D) make misleading contributions to get favorable treatment, (E) change the world to get favorable treatment, and (F) reward participants unequally. </p><p>I admit these are real issues, but I say we should treat the various info institutions similarly, unless we find specific reasons to treat them differently. For example, if you wouldn&#8217;t forbid govt employees from talking to reporters, for fear they&#8217;ll reveal govt secrets, also don&#8217;t forbid them from trading just due to similar fears.</p><p>On (A), an example is election-day who-wins predictions, which many say discourage voting. But as the risk of over-regulation here is severe, the first amendment should protect prediction markets as an info institution, especially markets on politics and policy. Just as protests are protected, since there are things you can say via protests you can&#8217;t say via mere words, trades should also be protected, as there are things you can say with trades you can&#8217;t say via words or protests. Putting your money where your mouth is adds punch to your words. </p><p>On (B), orgs have legit interests in keeping secrets, but outsiders often have legit interests in exposing them. Many of history&#8217;s most lauded journalism stories were enabled by org leaks. There&#8217;s a tradeoff here, and requiring everyone to work to help all orgs keep their secrets goes too far. We have strong rules on the books now re prediction market &#8220;insider trading&#8221;, but note that such rules for stocks have had limited effects. 
At public firms&#8217; announcements, half of the price change happens beforehand, and half of that is from insider trading. We shouldn&#8217;t expect prediction market rules to succeed much better, or to result in much worse harms.</p><p>On (C), other financial markets already allow as much pure &#8220;gambling&#8221; as anyone could want, and compared to prior ages we today let people devote great time and money to non-productive fun of many sorts, including news, making risky choices of who to date, and making risky choices of careers like acting, music, or athletics.</p><p>On (D), speculative markets are actually far more resistant to manipulation than other info institutions. When traders expect more efforts to manipulate a price, they respond so that prices on average become MORE accurate. Also, in head-to-head comparisons with other info institutions, with the same question, time, participants, and resources, speculative markets have been consistently about as accurate or much more accurate.</p><p>On (E), life insurance has big enough stakes and easy enough personal influence that we reasonably regulate it to prevent murder for money. But we see almost no cases of traders successfully sabotaging firms to profit from stock trades; firms seem too hard for individuals to influence compared to the stakes. And when prediction markets have been made on events that individuals can influence, it seems traders have been well aware of this fact, and saw this fact as adding to their fun.</p><p>On (F), other info institutions also give unequal rewards for intelligence, education, effort, and good social connections. 
Yes, we could create amateur-only markets, but few would want to trade there; most want to try their hand competing with the best.</p><p>Due to their great info potential, let&#8217;s approve prediction markets by default, especially when they can inform topics that matter, and only restrict them when we see clear evidence of harm, applying similar standards and scrutiny as we do for other info institutions.  </p>]]></content:encoded></item><item><title><![CDATA[Where You Are Most Wrong]]></title><description><![CDATA[What are you the most wrong about?]]></description><link>https://www.overcomingbias.com/p/where-you-are-most-wrong</link><guid isPermaLink="false">https://www.overcomingbias.com/p/where-you-are-most-wrong</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Mon, 27 Apr 2026 12:18:21 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4a9ee2cd-734a-42d6-834e-74e6b2cc3150_1000x667.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>What are you the most wrong about? You know the least about stuff far away from you in distant galaxies, but as you have few opinions about that, and it hardly affects you, who cares?</p><p>But what are you the most wrong about where you do have opinions, and where they are consequential for you? 
Consider seven factors that say when you are BLINDED:</p><ul><li><p><strong>[B]ound:</strong> When you are judged by your group on your confident and unthinking belief in and loyalty to particular claims, you won&#8217;t study them well.</p></li><li><p><strong>[L]ow-Impact:</strong> When you are wrong about factors relevant for collective choices, your vote barely moves them, and so you have little incentive to think about them to make them better.</p></li><li><p><strong>[I]ndefinite:</strong> When concepts come from a high dimensional space where it seems hard to pin them down, separate them, or define or measure them.</p></li><li><p><strong>[N]on-Connected:</strong> When you see relevant concepts as coming from a whole separate realm that has no logical connections to all the usual realms where you know things.</p></li><li><p><strong>[D]evalued:</strong> When you declare yourself to be largely indifferent to the consequences for you, as something else matters much more to you.</p></li><li><p><strong>[E]vidence-Poor:</strong> When you actually have little relevant data to draw on, and the best data that you have supporting your opinion is the mere fact that some groups like yours have continued to exist while holding this opinion.</p></li><li><p><strong>[D]ynamic:</strong> When the topic is about what changes to make to your group&#8217;s collective choices, either recently or in the near future, the mere fact that your group exists no longer offers even weak evidence for those choices.</p></li></ul><p>The max mistake topic area, with all of these factors, is: <em>the adaptiveness of your morals.</em></p><p>Your group suspects that you are evil if you do not see their morals as obvious, and even suspects you if you had to think to come to agree with them. Morality is a collective choice, where you are punished for deviating, so to have an impact you&#8217;d have to change your group&#8217;s shared moral opinions. 
Moral concepts tend to be hard to pin down, and today most see moral claims as sitting in a disconnected realm where our usual non-moral claims are not relevant.</p><p>On the topic of the cultural and DNA adaptiveness of your group&#8217;s morality (and norms and status markers), most people say they care much less about the adaptiveness of their morals than about the &#8220;moral truth&#8221; of their morals. Figuring out theoretically which morals are more adaptive is actually quite hard, and so our best evidence is empirical: which successful societies have had which morals. But the fact that your society seems inclined to change its morals lately in a particular direction is far weaker evidence for the adaptiveness of that direction.</p><p>The topic where you most need careful thought is also where your community most punishes such thought. This is our big blind spot, on which our civ will likely fall. </p>]]></content:encoded></item><item><title><![CDATA[Intellectual Populism Trend]]></title><description><![CDATA[Consider the accepted social ranking of who is how much of an intellectual.]]></description><link>https://www.overcomingbias.com/p/intellectual-populism-trend</link><guid isPermaLink="false">https://www.overcomingbias.com/p/intellectual-populism-trend</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Sun, 26 Apr 2026 14:23:01 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/40664030-46d8-4546-b07c-059bf433634b_799x499.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Consider the social ranking of who is how much of an intellectual. Think of this ranking as made by a weighted average of the opinions of other intellectuals. If we look at how this weighting changes across intellectual levels, there will be a median level, where half of the weight comes from opinions above that level, and half below. 
</p><p>I asked <em><a href="https://chatgpt.com/share/69ee1eea-e120-83ea-9fd1-e6b10bc7aa72">ChatGPT</a></em> (5.5) and <em><a href="https://claude.ai/share/4a254291-ecb6-46bb-8a3b-b853036827f0">Claude</a></em> (4.7) to give percentile estimates for the median level who judges who are the very best intellectuals, for the West in various years. They gave median 99%, 99.5% for year 1000, median 96%, 97% for year 1750, median 93%, 90% for 1900, and median 88%, 80% for 2025. </p><p>We have thus seen an increasing populism in who among us judges who are our very best intellectuals. Which is plausibly a source of intellectual decay. Especially as it is often noted that we usually find it hard to distinguish between mental quality levels above our own.</p>]]></content:encoded></item><item><title><![CDATA[My Best Idea: Decision Markets]]></title><description><![CDATA[Many (Poincar&#233; 1908, Schumpeter 1911, Ogburn 1922) have said that, as there are so many good ideas out there, most innovation is just simple combos of prior good ideas.]]></description><link>https://www.overcomingbias.com/p/my-best-idea-decision-markets</link><guid isPermaLink="false">https://www.overcomingbias.com/p/my-best-idea-decision-markets</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Sat, 25 Apr 2026 16:58:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/44b429f0-1dac-4cd8-b95f-1174dac68559_1920x2463.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Many (Poincar&#233; 1908, Schumpeter 1911, Ogburn 1922) have said that, as there are so many good ideas out there, most innovation is just simple combos of prior good ideas. This seems true of my best idea.</p><p>April 25, 1996, thirty years ago today, I first <a href="https://mason.gmu.edu/~rhanson/policymarkets.html">posted</a> my best idea: <em>decision markets</em>, i.e., speculative markets that advise specific decisions by estimating decision-conditional outcomes. 
A.k.a., &#8220;futarchy&#8221; as applied to governance. It&#8217;s not my deepest, grandest, most beautiful, or hardest-won insight, just the one with the biggest expected impact.</p><p>My idea was a simple combo of two other long well-known ideas.</p><p>The first prior idea I built on is that speculative markets do quite well at aggregating info. This was explored in theory (Emory 1896, Gibson 1889, Bachelier 1900) and in data (Cowles 1933, Working 1934). Even so, in 1996, US regulators in practice only allowed risk-hedging, not info aggregation, as an &#8220;economics rationale&#8221; to allow markets to exist. (The allowed &#8220;price discovery&#8221; rationale was tied to helping other markets hedge risks.)</p><p>In 1984 I left grad school in physics and philosophy of science at U Chicago to go to Silicon Valley to do AI research, and on the side work with <em>Xanadu</em>, trying to invent the World Wide Web. Around 1988 I first started to have doubts about the <em>Xanadu</em> vision of reforming public convo by making criticism easy to find, and wondered what else we could do instead. So I started to think and write about the big potential of using speculative markets to aggregate info on far more topics. Like most everyone who first enters this space, I was first thinking mainly in terms of markets on the usual topics we see in mass media, punditry, and public policy debates.</p><p>The second prior idea I built on is that info is mainly valuable by informing specific decisions. For many centuries we&#8217;ve seen calculations of the value of certain specific info for specific decisions. And then we developed more general theory (Ramsey 1928, Hosiansson 1931, Blackwell 1951, Savage 1953, Schlaifer 1959). At Caltech social science grad school 1993-1997, I learned decision theory and the standard value of info calculation. 
Then wondering where speculative markets could add the most info value, ~1996 I realized that this would likely come from markets estimating specific outcomes given specific decision choices.</p><p>As I was one of the first to write on the big potential of prediction markets, many who entered this space over the years approached me. At which point I usually pitched this decision market concept. Which usually pushed them away, as they were focused, as I was initially, on those mass media and punditry topics. But I have doggedly persisted.</p><p>Most all innovations combine simple elegant ideas with messy details that make those ideas work. Mine is no different. To find the right messy details, one needs concrete trials and experiments trying different detail versions. It has been hard to find orgs willing to do this, as org decision making is usually quite political. But in the last few years we&#8217;ve thankfully started to <a href="https://www.metadao.fi/">see</a> <a href="https://futarchy.fi/">some</a> <a href="https://www.overcomingbias.com/p/hail-jeffrey-wernick">trials</a>.</p><p>As an econ professor who specialized in governance, I can assure you that the world is greatly structured by the fact that we typically have pretty incompetent governance. Imagine a governance that, when assigned a goal, would reliably achieve that if it is in fact feasible. This would radically reshape our whole world. (Yes, even if we soon get powerful AIs.) 
As decision markets plausibly enable such competent governance, this is why I estimate their expected impact to be so very great.</p>]]></content:encoded></item><item><title><![CDATA[Why Focus On Mid-Level Goals?]]></title><description><![CDATA[Human action plans are often organized around goal hierarchies, with lower-level subgoals helping to achieve higher-level goals.]]></description><link>https://www.overcomingbias.com/p/why-focus-on-mid-level-goals</link><guid isPermaLink="false">https://www.overcomingbias.com/p/why-focus-on-mid-level-goals</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Sat, 25 Apr 2026 01:55:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c24d4719-393b-40f0-a92a-bd2e5dabe57c_850x479.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Human action plans are often organized around goal hierarchies, with lower-level subgoals helping to achieve higher-level goals. For example, a plan to achieve a travel goal may use a subgoal of flying for part of a trip, which has a subgoal of getting on a plane, which has a subgoal of sitting down when you get to your row. Many subgoals below that are planned and achieved unconsciously. Different plans involve different goal trees, and we often search in a large space of possible trees when we pick plans.</p><p>Many parameters correlate simply with this high-to-low goal axis. For example, lower-level goals and actions tend to take less space, time, and other resources. They are less likely to conflict with other goals, and more likely to be time-consistent. They are more easily evaluated for success, better described by simple abstractions, more reliably controlled, and more easily optimized by hill-climbing. 
They seem more observable, reversible, and substitutable, give faster feedback, and are more easily automated.</p><p>However, other related parameters depend on this key high-to-low goal axis in less simple ways; they instead peak at some mid-level, and fall away from that in both directions. For example, we have more conscious awareness of, give more conscious attention to, and make more deliberate choices re mid-level goals. We can more clearly articulate them and their relations to other goals, and we can more easily teach others to manage them. People coordinate with each other more here, and our blame, credit, norms, and laws focus more here. There is more cultural variety of behaviors at these mid levels; other behaviors are more set by DNA.</p><p>A noteworthy exception is that such mid-peaking parameters often peak at much higher levels in large for-profit orgs, and in other large orgs, like militaries, with strong incentives tied to concrete goals. Such orgs often can and do articulate, measure, credit, and blame the behaviors of top people who manage high-level goals.</p><p>A simple interpretation of these patterns is that cultural evolution <em>of coordinated behaviors</em> faced a key tradeoff. Let me explain.</p><p>As thinking and talking take time, there is a lowest level of goals and actions where we can discuss them as we choose and do them, so that such talk greatly influences those actions. While humans can and do watch and learn details of others&#8217; behaviors that are at much lower levels, we mostly do this non-verbally and unconsciously.</p><p>However, to enforce norms, including the norms that say that we should keep our promises, we humans need to be able to say to others in sometimes-verifiable words what we and others have or have not been doing lately. So that we can complain about such actions, and recruit others to exert social pressures toward norm enforcement. 
To defend ourselves against such accusations, our conscious minds were created to manage key stories of what we&#8217;ve been doing lately and why.</p><p>So cultural evolution got into the habit of having us think and talk consciously about goals near this lowest-articulable level, and also to notice, copy, and teach chunks of behavior near these levels. And in addition, we mostly manage our norms, status markers, and key coordination mechanisms near such levels. As this cultural evolution process is pretty random and uncoordinated, efforts to abstract these norms and chunks most naturally expressed at these mid levels into higher-level goals don&#8217;t usually achieve much clarity or coherence. Also, we seem reluctant to explicitly name cultural adaptation itself as a big higher-level goal.</p><p>So why didn&#8217;t we instead define and manage our social coordination using much higher-level goals? The simple correlations above say that such higher goals would tend to be less modular, less observable, and less easily described using abstractions. Making it harder for us to see and describe them, and to enforce norms about them.</p><p>However, with the invention of money and for-profit orgs, the world has now found new ways to use modular observable goals at quite high levels. When we allow such orgs to manage key areas of life, they have shown remarkable abilities to effectively coordinate our behaviors. The problem is that, in many minds at least, their wider use would violate other key norms that we have inherited from cultural evolution.</p><p>Notice that cultural natural selection of individual behaviors seems insufficient to evolve better norms and status markers, as these are features of key game-theoretic equilibria, where individual deviations are punished. 
We need instead to have collective deviations of entire cultures, i.e., units with much stronger internal than external conformity pressures.</p><p>Alas, this process has been greatly hindered in the last few centuries by decreasing variety and selection pressures, and increasing rates of environmental change and internal cultural drift. Which is plausibly causing such norms to decay, perhaps leading to civ collapse and replacement in a century or two.</p><p><strong>Added 25Apr</strong>: There is a theory of &#8220;<a href="https://en.wikipedia.org/wiki/Prototype_theory#Basic_level_categories">basic level categories</a>&#8221; saying we have a natural level of abstraction across our concepts in general. <a href="https://x.com/StefanFSchubert/status/2047943926707827154">HT</a>. </p>]]></content:encoded></item><item><title><![CDATA[My Class And Goals]]></title><description><![CDATA[I just went to my mom&#8217;s funeral, and so was reminded about my family, and of the question of what exactly one wants to do with one&#8217;s life.]]></description><link>https://www.overcomingbias.com/p/my-class-and-goals</link><guid isPermaLink="false">https://www.overcomingbias.com/p/my-class-and-goals</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Wed, 22 Apr 2026 17:45:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/87fbad80-7b5a-4fce-847f-62606ef9d366_819x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I just went to my mom&#8217;s <a href="https://obituaries.saddlebackchapel.com/bonnie-hanson">funeral</a>, and so was reminded about my family, and of the question of what exactly one wants to do with one&#8217;s life.</p><p>For money, my dad was a programmer, and my mom made presentation graphics for a finance firm. On the side they were missionaries, a pastor, and a writer. 
When she could retire from making money, my mom became a writer full-time, contributing to 30 of the 275 Chicken Soup books; ~20M people have probably read one of her essays there.</p><p>My two brothers were most recently a court bailiff and a pool cleaner for money, and on the side a musician and pastor. Their wives were a sales clerk and a legal secretary. My wife was a clinical social worker and her brother was a govt lawyer. ChatGPT (5.4) and Claude (4.7) estimate ~45-55, 50-60 as percentile ranks for this family overall in terms of job prestige.</p><p>I&#8217;m now a university professor, and my two sons are a programmer and an investment banker, so the three of us together get ~75,90 percentile estimates. Making my family solidly middle class, and me and my sons upper middle.</p><p>Workers often face a conflict between how their job has been defined by the world and their training, and what their managers tell them to do on that job. Usually people succeed more when they accept boss framings, and higher class folks tend more to have this and other more successful habits.</p><p>Both LLMs say that this also happens more specifically in academia, where there&#8217;s a conflict between the job defined as intellectual process, i.e., helping the world better understand key abstract topics, and the job defined as what it takes to get prestige and resources. Lower class folks tend more to pursue that first definition. My class background is substantially lower than that of most academics, and I fit this pattern, as I see my job more in terms of intellectual progress, less in terms of resources and prestige.</p><p>At my mom&#8217;s funeral, I was reminded that such events involve much praising of the dead on various metrics. Which raises the question: what do you aspire to be praised on at your funeral, and in future historical mentions? 
It also raises a meta question: why don&#8217;t we write periodic essays on what we are trying to achieve in our lives, so that at our funerals folks can discuss how well we achieved our stated goals? Yes of course they could also discuss how well we achieved their other goals for us, but our own goals also seem quite relevant.</p><p>I would of course prefer that, at my death and after, and even well before, people praise me for all the usual virtues. But compared to others, I put a much bigger weight on intellectual progress. I want people to say, because it&#8217;s true, that I helped the world gain insight on important neglected potent topics. Important because that&#8217;s what matters, neglected because it is far easier to find big insights on those topics, and potent because the big win is when others build on my insights, and integrate them into larger shared systems, as part of a long process of civilization accumulating insight. And myself having insights isn&#8217;t that valuable compared to communicating them in ways that let others see and build on them.</p><p>At my funeral, please do ask yourselves how well I did at this.</p>]]></content:encoded></item><item><title><![CDATA[Power Futarchy]]></title><description><![CDATA[A simple way to apply futarchy to for-profit firms is profit-futarchy: make markets that estimate total firm market value given key firm choices, like who is CEO, what are key acquisitions, or what are key firm policies.]]></description><link>https://www.overcomingbias.com/p/power-futarchy</link><guid isPermaLink="false">https://www.overcomingbias.com/p/power-futarchy</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Tue, 21 Apr 2026 02:08:20 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4fb97c68-8af4-4b7d-bed7-91725a0e14cf_1450x814.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A simple way to apply futarchy to for-profit firms is <em>profit-futarchy</em>: make markets 
that estimate total firm market value given key firm choices, like who is CEO, what are key acquisitions, or what are key firm policies. Then do what such markets advise. But a big problem with this approach is that top people, like the CEO, often do not see their personal success as maxed by firm success. For example, they tend to be wary of losing control over key firm choices, even if that would make such choices more profitable.</p><p>So CEOs block the application of futarchy to firms. You might think that investors could just force CEOs to use futarchy, if that would max investor gains. But investors also can&#8217;t seem to prevent the adoption of poison pills, which also cut investor gains. It seems we must accept that top managers have power sufficient to induce firm outcomes that don&#8217;t max profits. Investors do not in fact fully control firms.</p><p>Okay, then what if we flip this script, and set decision markets to the task of directly achieving the selfish managerial ends that likely drive managerial power politics? Create a metric of the total success of an individual manager over their future career, and then make advisory <em>power-futarchy</em> markets that estimate this personal success given key choices under that manager&#8217;s power. And to discourage sabotage, give everyone who might be able to act to greatly hurt this success a positive stake in that success, a stake they aren&#8217;t allowed to trade to below zero.</p><p>Would this supercharge power politics, via better informing political strategies? Plausibly this would improve both offensive <em>and</em> defensive political choices, and also make political info more symmetric. Managers could less often win via strategies that rely on rivals not noticing their plans until too late. So might power-futarchy actually cut harms from firm politics? 
Maybe, relative to the alternative of no markets at all, helping managers have successful careers also on average helps firms to max profits.</p><p>Of course such markets may advise top managers to not create power-futarchy markets to aid their subordinates several levels below them. Such markets might even say to instead give such subordinates futarchy markets tied to key firm or division outcomes. If so, that might usefully limit the scope of power-futarchy. Yes, this might over time undermine support for power-futarchy, but maybe not before current managers achieved great success from it.</p><p>Some kinds of power politics strategies may be hindered by open markets estimating their power effectiveness. But we needn&#8217;t have such markets regarding all possible managerial choices. Though, yes, the choice to not create such a market on some key choice might be taken as a bad sign about the politics behind that choice. No doubt there would be many new tricks to be found when playing power-futarchy.</p>]]></content:encoded></item><item><title><![CDATA[Remake or Replace Tribes]]></title><description><![CDATA[Tribes contain factions.]]></description><link>https://www.overcomingbias.com/p/remake-or-replace-tribes</link><guid isPermaLink="false">https://www.overcomingbias.com/p/remake-or-replace-tribes</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Sun, 19 Apr 2026 18:41:36 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/32194d88-07bb-42b2-8e4c-9d324c2a2f8c_600x401.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Tribes contain factions. <em>Tribe</em> members mostly interact with and emulate other members of their same tribe, while <em>faction</em> members do these things more often with members of other factions. Tribes tend to have distinct moral norms and status markers, while factions tend to share the norms and status markers of their tribe. 
Factions often differ on status, income, professions, and on symbolic markers like food, clothes, languages, holidays, and artistic styles. Factions also often disagree on directions to change shared tribe policies and norms. The distinction between tribes and factions is a matter of degree.</p><p>Our dominant world culture hates tribes, but loves factions, especially factions who we <a href="https://www.overcomingbias.com/p/types-of-partiality">see</a> as &#8220;down&#8221;. We hate groups who disagree with world elite consensus on school, medicine, democracy, gender equality, sexual freedom, legal due process, rules of just war, and norms of good parenting. And we hate tribe supporters for their self-favoritism and habitual hostility toward outsiders. About these things we see our dominant world tribe as just right, and the others evil.</p><p>But we love factions within this main tribe who embrace distinct symbols, and who fight for tribe norm reforms. We call this love &#8220;tolerance&#8221;. At least we love factions who we can plausibly see as &#8220;down&#8221; relative to &#8220;up&#8221; rivals. (We presume &#8220;up&#8221; illicitly hurts &#8220;down&#8221;.) We hate &#8220;up&#8221; faction members who promote their factions, and accuse them of actually representing hated tribes. We moderns tend to channel our instinctive human tendencies to be tribal into our factional conflicts. Not noticing how we need tribes far more than factions.</p><p>The big problem is that in history our moral norms and status markers came <a href="https://www.overcomingbias.com/p/cultural-network-structure">mostly</a> from cultural group selection acting on tribes, not factions. By crushing all but one dominant tribe, we now mostly block such evolution from preventing the decay of shared norms, or their adaption to changing context. We now see this most clearly in the decay of norms supporting fertility, but such decay is plausibly also happening across all our key norms. 
Selection acting instead on factions can&#8217;t do this remotely as well. If such decay continues long, our civilization will fall, to be replaced by others.</p><p>Unfortunately, we find it hard to see this problem, as our moral norms and status markers seem to us as just obviously true, and thus good bases for any analysis. In contrast, we can see and appreciate fights among factions, as we can frame these in terms of our &#8220;obvious&#8221; shared norms. But that doesn&#8217;t help much to ensure that the winners of faction fights are more adaptive.</p><p>Instead of trying to repress competing tribes, as we usually do, we might try to instead promote them. But even that seems quite insufficient, as the main underlying reason that the world has over centuries been merging toward one big tribe is the increasing ease of distant trade, travel, and talk. Such merging has achieved great scale economies of production and innovation, and a great reduction in conflict harms, such as via war, due to increasingly shared norms. Most people really like having a world community with shared norms.</p><p>There are a few today, like the Amish and Haredim, who care enough to treat themselves as distinct tribes, and are willing to forgo many gains of world cultural integration to achieve this. Such folks insulate themselves culturally from the large world, and so are the folks today whose descendants are most likely to replace our dominant world culture. But few groups today are this devoted to becoming tribes. Most of the folks today interested in cultural variety, like &#8220;network state&#8221; folks, are not remotely this devoted, and so have little chance of creating new tribes.</p><p>I can see only three ways for our main world civ, which I treasure in many ways, to avoid being replaced like this. The <em>first</em> solution is to somehow greatly raise the status of tribes, relative to factions. Convince the world to fragment into far more tribes, not just factions. 
Tolerate and even encourage groups having quite deviant views on democracy, gender equality, etc. to favor themselves and isolate from outsiders.</p><p>The <em>second</em> solution is to leave the world mostly integrated into one big tribe, but to find new ways to control and govern how key moral norms and status markers are changed to become more adaptive. Such as via competent governments held strongly accountable to increase adaption futures <a href="https://www.overcomingbias.com/p/toward-adaption-futures?utm_source=publication-search">estimates</a>, or via using a competent futarchy to <a href="https://www.overcomingbias.com/p/futarchy-futurism?utm_source=publication-search">pursue</a> sacred adaption-achieving goals like when a million people live in space. </p><p>The <em>third</em> solution is to vastly increase the role of for-profit orgs in setting our moral norms and status markers. The evolution of firm cultures has long been quite healthy, as firms form quite distinct groups facing strong capitalist selection pressures. And for-profit orgs competing to give customers key numbers and observable outcomes have quite consistently improved on such outcomes. Each area they came to control, such as buying and running governments, or paying parents to make kids they could in effect sell, would likely become more adaptive.</p><p>As you can plainly see, these are all big long-shots. Our situation is quite desperate. 
And not likely to get better until a lot more people start to think about it.</p>]]></content:encoded></item><item><title><![CDATA[Cultural Network Structure]]></title><description><![CDATA[How did our society decide how much to count things like education and artistic taste when evaluating prestige and status?]]></description><link>https://www.overcomingbias.com/p/cultural-network-structure</link><guid isPermaLink="false">https://www.overcomingbias.com/p/cultural-network-structure</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Sun, 19 Apr 2026 02:18:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5054a314-8a9f-4739-ab91-61d8f66ba4e1_1350x588.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>How did our society decide how much to count things like education and artistic taste when evaluating prestige and status? How did we pick key moral norms and values, such as democracy, gender equality, legal due process, rules of just war, and norms of good parenting? Yes, such choices are weakly influenced by our DNA, and also by cultural evolution selection pressures on individuals. But mostly these things came from cultural evolution of groups.</p><p>You may have heard that such <em>group selection</em> never happens, but that&#8217;s wrong. Not only do most cultural evolution scholars see group selection as a key force, group selection also seems to be important in DNA evolution, where species are groups. 
The fact that more species today descended from fragmented habitats like rivers, coral reefs, and rainforests, where habitats were smaller, suggests that group selection of species has actually mattered more for DNA than individual selection within species.</p><p>I&#8217;ve said previously that healthy cultural evolution for stuff like norms and status markers depends on four key parameters: enough cultural variety, strong enough selection pressures on cultures, slow enough internal cultural drift, and slow enough rates of environmental change. But I have to admit that this first &#8220;variety&#8221; parameter is a sloppy way to talk about it. Counting the number of cultures would make sense if, as with species for DNA, there was only one clear scale at which people are joined into cultural groups. But in fact cultural behaviors cluster together at many different scales.</p><p>However, I&#8217;ve been doing some reading, and have found that for decades cultural evolution scholars have had a less-sloppy substitute concept: &#8220;network structure&#8221;. If you look at the details of who people interact with, and who they are likely to copy their behaviors from, the shape of the network of such ties matters a lot for cultural group selection.</p><p>For example, the network feature that most promotes group selection seems to be &#8220;modularity&#8221;, roughly how many more ties there are within clusters, compared to between clusters. It also matters how similar are people within clusters, how much overlap there is between interaction and emulation networks, how well prestige tracks adaptiveness, how much conformity pressure there is for a behavior, and how much that behavior affects visible outcomes that people care about.</p><p>Each different type of behavior can have its own different network, and its own different coordination scale, requiring group selection at that cluster scale or above in order to select adaptive versions of that behavior. 
But it seems clear that relevant scales for many kinds of behaviors have greatly increased over the last few centuries, greatly reducing the effective &#8220;variety&#8221; for the purposes of cultural evolution. And this is plausibly cutting the effect strength of group selection, likely enough to cause net maladaptive change to our norms and status markers.</p>]]></content:encoded></item><item><title><![CDATA[Seeking Culture Epics]]></title><description><![CDATA[Most stories are small, about short periods in the lives of a few people or small groups.]]></description><link>https://www.overcomingbias.com/p/seeking-culture-epics</link><guid isPermaLink="false">https://www.overcomingbias.com/p/seeking-culture-epics</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Tue, 14 Apr 2026 19:53:29 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/934acbdd-f344-4536-bbe9-c55ecdc59bbe_637x367.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most stories are <em>small</em>, about short periods in the lives of a few people or small groups. But some stories are <em>big</em>, about bigger people (e.g., Gods), groups, or timescales. The types of our typical big stories have changed greatly across history.</p><p><em><strong>Power Fights</strong></em> - Most stories are about conflict, and so most big stories are about fights. And long ago, most big stories (e.g., <em>Iliad</em>) focused on powers and alliances fighting within worlds that were relatively stable, especially re tech, and within a context of stable morals. As those didn&#8217;t change much, stories didn&#8217;t care much about them.</p><p>The simplest stories of this type focused on one particular fight, with a start, middle and end. More complex stories, on longer timescales, might depict a sequence of fights with relative peace in between. 
Even more complex versions might have old powers leave, new powers enter, and changing alliances between powers.</p><p><em><strong>Moral Fights</strong></em> - Starting with religious stories, but then spreading to most stories centuries ago, the sides in fights acquired stronger moral colors. These fights were not just about power (i.e., dominance) but also moral persuasion (i.e., prestige). The simplest versions had good heroes fight bad villains (e.g., <em>Lord of the Rings</em>). More complex versions had many fighting sides, or all sides seeing themselves as good.</p><p>Some moral fight stories have a small group of activists trying to spread their new moral view to a wider world. A common feature here is that the world at story end likely has more or less good morality, depending on who wins the fights.</p><p><em><strong>Unstable Tech</strong></em> - Our modern world often has tech and business changing fast on the timescales of big fights. Tech changes often favor particular sides of fights, and can call into question common assumptions in prior moral positions. Many science fiction stories highlight how tech changes can influence who wins, and how they can force one to reconsider basic moral commitments.</p><p>The simplest such stories present a world with quite different tech to ours, but where that tech doesn&#8217;t change much during the story (e.g., <em>Dune</em>). This helps readers see how tech differences might translate to fight and moral differences. More complex stories focus on one particular big tech change (e.g., <em>Frankenstein</em>), and show that one change affects who wins in fights, and key moral categories. The most complex stories show long fights in the context of a long history of many big tech changes.</p><p><em><strong>Unstable Morals</strong></em> - I&#8217;ve lately become unhappy with science fiction, as I came to understand the basics of cultural evolution. 
Science fiction&#8217;s big or fast changing tech, even with shifting powers and alliances over centuries, is usually set in the context of quite stable morals. Yet in fact over the last century or so key values, norms, and morals have changed about as fast as tech, and due to pretty random and plausibly out-of-control cultural evolution. A similar failure happens when historical fiction sets characters with modern values as heroes against villains with old-style values.  </p><p>So I&#8217;d like to see authors try to write big stories, of whole civilizations over long timescales, that more realistically depict cultural instability. Yes it can be comforting to see key characters long continuing to fight for the same shared moral causes, even as their powers, alliances, and tech change greatly. And it can be disturbing to see key morals changing as fast as tech, and nearly as arbitrarily. But the switch to <em>Unstable Tech</em> type stories similarly resulted from the disturbing realization that fast changing tech often upended our conflicts. And we seem to have managed that switch okay. </p>]]></content:encoded></item><item><title><![CDATA[Why Ban Sports Bets?]]></title><description><![CDATA[Sports betting is in the news today, with the rise of Kalshi and Polymarket. Critics point to many issues, but I think most are excuses; what really bothers most is just typical sports bets. On reflection, I&#8217;m a bit puzzled by this. 
Let me explain.]]></description><link>https://www.overcomingbias.com/p/why-ban-sports-bets</link><guid isPermaLink="false">https://www.overcomingbias.com/p/why-ban-sports-bets</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Mon, 13 Apr 2026 20:26:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0ef76732-3d7f-4021-87ed-62b8865b3747_986x555.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sports betting is in the news today, with the rise of <em>Kalshi</em> and <em>Polymarket</em>. Critics point to many issues, but I think most are excuses; what really bothers most is just typical sports bets. On reflection, I&#8217;m a bit puzzled by this. Let me explain.</p><p>Traditional societies have discouraged, regulated, and banned many kinds of pleasures. Such as sleep, idleness, fancy or plentiful food, fancy clothes, travel, humor, music and dancing, gossip and small talk, drugs and intoxication, fiction, gaming, gambling, bragging, fighting, spanking, and many forms of sex including prostitution. They feared such pleasures distracting from work and piety.</p><p>Our world still bans many things, but pleasure isn&#8217;t usually a central consideration; we are far more indulgent and approving of pleasure. Yet we still do ban a few pleasures, including recreational drugs, dogfighting, corporal punishment, loan sharks, dwarf-tossing, gambling, and sex that is paid or with minors. Drugs, dogfighting, dwarf-tossing, corporal punishment, and loan sharks seem to be about physical harms, and also shame and empathy. Sex has long evoked deep complex opaque feelings.</p><p>But sports bets don&#8217;t involve shame, physical harm, or deep opaque feelings. We mostly approve of sports, and of people putting lots of time and energy into playing and watching sports. And sports bets complement those activities, making them more interesting, engaging, and better informed. 
</p><p>Yes, we dislike money all else equal, but we let money touch many adjacent areas. Yes, sports bets can waste time and money, but so do a great many allowed pleasures. Yes, they involve risk, but we let people take big risks in deciding who to date, and in longshot careers like acting, music, or athletics. Yes, sports bets resolve faster, but you can bet just as fast and big in ordinary financial markets. Yes, bookies once charged high fees, but new markets have far lower fees.</p><p>I guess I lean toward explaining banned sports bets as just a random exception to our usual historical trend, which seems a weak but good sign re how long we&#8217;ll let these new sports betting markets continue to be legal. Not my thing, but I usually don&#8217;t mind others having fun via their things. </p><p><strong>Added 14Apr:</strong> Many point to the possibility of commitment problems, where people are tempted in the moment to do stuff they would want to commit ahead of time not to do. But it isn&#8217;t that hard to set up commitment mechanisms, and when we do, few actually avail themselves of such options.</p>]]></content:encoded></item><item><title><![CDATA[Project Hail Mary]]></title><description><![CDATA[&#8220;The science in Project Hail Mary is all pretty firmly grounded.]]></description><link>https://www.overcomingbias.com/p/project-hail-mary</link><guid isPermaLink="false">https://www.overcomingbias.com/p/project-hail-mary</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Thu, 09 Apr 2026 00:59:31 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/79bcd887-6cac-412b-9bad-5bc4fd54f405_275x183.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>&#8220;The science in <em>Project Hail Mary</em> is all pretty firmly grounded. 
There&#8217;s some BS all the way down at the quantum level, where Astrophage cell membranes can keep neutrinos in&#8230; But outside of that, everything else just follows established physics and science.&#8221; - Andy Weir, <a href="https://www.scientificamerican.com/podcast/episode/the-real-science-and-the-fun-fiction-behind-project-hail-mary/">author</a> of <em>Project Hail Mary</em></p></blockquote><p>Unrealistic science fiction can be great, but folks should sometimes point out the unrealism of particular stories, especially stories that are very popular, and widely said to be realistic, including by their authors.</p><p>Others have pointed to implausible <a href="https://sciencemeetsfiction.com/2021/06/15/the-science-of-project-hail-mary/">physics</a> and <a href="https://tragedyandfarce.blog/2025/04/27/the-illiberalism-of-project-hail-mary/">politics</a>, but after reading two dozen reviews, I don&#8217;t find anyone else mentioning my three comments on <em>Project Hail Mary:</em></p><p><strong>Rare Event: </strong>In the story, a big dimming of our Sun and a dozen nearby stars happens over decades. This must be a very rare sort of event, or we&#8217;d have noticed this scenario before out there among the stars. It also can&#8217;t last that long or spread that far each time, before reverting to the usual star appearance. </p><p><strong>Close Alien:</strong> Our hero meets an alien from a star roughly 20 lightyears from Earth, who is at a very similar level of tech development to us. For example, they haven&#8217;t yet discovered radiation or relativity. Say no more than a century different. In a 14Gyr old universe that level of time correlation seems crazy unlikely. Also, to have aliens that close spatially be typical, our universe must be chock full of civilizations. 
Which then must quite reliably die fast to produce our empty looking universe.</p><p><strong>Similar Alien: </strong>They have different bodies and sensors, but once they manage to talk, our hero and alien get along better than would two random humans from human history. The alien&#8217;s culture is much like our hero&#8217;s culture, which is quite different from most other human cultures in history. This is worse than most historical fiction, which puts modern hero characters in old worlds. </p>]]></content:encoded></item><item><title><![CDATA[When AI Day of Reckoning?]]></title><description><![CDATA[The world has invested lots in AI over the last few years, and many expect a crash soon.]]></description><link>https://www.overcomingbias.com/p/when-ai-day-of-reckoning</link><guid isPermaLink="false">https://www.overcomingbias.com/p/when-ai-day-of-reckoning</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Wed, 08 Apr 2026 13:23:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d926b41a-ba0f-4288-b0a0-f3de1dcb6ed6_500x557.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The world has invested lots in AI over the last few years, and many expect a crash soon. Most attempts to use AI in firms seem to be failing. But is that just what we should expect from early applications? When and where should we look for clearer evidence that recent AI is or is not going to justify its investment?</p><p>Since 2023, it has been widely reported that LLMs seem quite good at coding, and that this seems their most promising application area. Global spending on software is ~$1-2T/yr, so potential big saving there soon might plausibly justify last year&#8217;s ~$0.5T/yr AI investment.</p><p>For many decades the demand for software has been large and elastic. Most firms have many software projects they&#8217;d like to do, which they don&#8217;t do mainly due to cost. 
And market leaders tend to be firms whose software investments went well. So if the total cost of software fell by a factor of two, total spending on software should more than double. That&#8217;s supply and demand. And AI getting a big cut of that increased spending soon might justify its investment.</p><p>Of course fielding useful software involves many tasks, including identifying opportunities, securing funding, overseeing projects, defining requirements, marketing, user support, and writing, testing, and maintaining code. Just making code writing much cheaper doesn&#8217;t obviously make total software cost much cheaper. Much depends on just how many of these other tasks can be made cheaper as well.</p><p>If AI is going to have a big impact, when should we expect to see it? Software projects typically take ~6-9 months from conception to delivery, though orgs can take 1-3 years to reorganize workflows, incentives, etc. to accommodate new techs. Legacy software may not be replaced for up to 3-10 years.</p><p>So it seems that one should expect to see substantial AI-driven changes to the scale of the software industry within roughly 3 years of a widespread consensus that AI makes it much cheaper. Which is about now if that consensus happened 3 years ago. Or in about 3 years if that happened one year ago.</p><p>The number of U.S. software workers increased by ~50% in the last decade, probably mostly due to falling costs. So we should expect an even faster growth in software spending if AI is in fact causing a big increase in the rate at which its costs fall.</p><p>And if we don&#8217;t see such a big increase in the next few years, that will suggest that AI does not actually cut software costs nearly as much as advocates hope. Which is of course the usual scenario for hyped new techs. And should lead to a crash. 
</p><p>Of course that&#8217;s the short run; we might still plausibly see a &#8220;general purpose tech&#8221; impact that takes several decades to play out, as we&#8217;ve seen previously for techs like steam, electricity, personal computers, and the internet. </p><p><strong>Added 13Apr:</strong> After a long convo with Bram Cohen, here is my revised estimate. About as likely as not, AI will help the software industry grow in spending by a factor of 2-3 over the next decade, and gain 10-20% of software revenue, for $0.4-0.6T/yr revenue. AI will also gain as much revenue from all other industries combined, so total: ~$1T/yr.</p><p></p>]]></content:encoded></item><item><title><![CDATA[Our Uphill Battle]]></title><description><![CDATA[I recently said our civ will fall if we do not finish the industrial revolution, and apply the industry trio of math, big orgs, and capitalism to more areas of life.]]></description><link>https://www.overcomingbias.com/p/our-uphill-battle</link><guid isPermaLink="false">https://www.overcomingbias.com/p/our-uphill-battle</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Sun, 05 Apr 2026 20:22:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fd0963b5-177c-4dd6-abc5-39536938001c_500x382.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I recently <a href="https://www.overcomingbias.com/p/finish-the-industrial-revolution">said</a> our civ will fall if we do not finish the industrial revolution, and apply the industry trio of math, big orgs, and capitalism to more areas of life. Especially our fast activism-driven evolution of values, morals, and norms.</p><p>But watching a <a href="https://www.pbs.org/show/henry-david-thoreau/">documentary</a> on early activist H.D. Thoreau brought home to me just how huge an ask this seems. 
Our modern world has come to deeply adore and revere changing its morals fast via <a href="https://www.overcomingbias.com/p/is-modernism-due-to-youth-culture">youth movements</a>, and a great many features of our modern world support this new pattern.</p><p>For example, youths are generally more risk-taking, emotionally expressive, eager to impress potential mates, less invested in prior arrangements, and better able to bond together into groups. Which attracts youths to the chance to skip the usual dues to rise fast in status as leaders of new tightly-bonded emotional youth movements.</p><p>Helping further, we legitimized fashions, seeing those who first adopt new popular changes as more virtuous. And we put kids together in high school and college, where they have more time for activism, bond into their own youth cultures, and are taught to see the world more abstractly and thus morality more simply and universally. Also, better communication tech has let them coordinate faster across wider distances.</p><p>Finally, the modern world has widely adopted the views (a) that morality is a whole separate realm where the usual adult knowledge and experience are less relevant, (b) that moral opinions should come <a href="https://www.overcomingbias.com/p/authenticity-as-grace">authentically</a> from within, and (c) that youthful opinions on morals tend to be less corrupted by habit and self-interest.</p><p>All of this has created a perfect storm encouraging youth to repeatedly make and join new internal-feelings-driven moral crusades, movements maximally suspicious of opposing older adults with ties of interest and habits to the existing order.</p><p>Could we apply industry more strongly to manage this process? For example, by <a href="https://www.overcomingbias.com/p/culture-guiding-futarchy">paying</a> big orgs to create, suppress, and influence such movements to achieve key <a href="https://www.overcomingbias.com/p/toward-adaption-futures">metrics</a>. 
Yes, big orgs do substantially influence youth movements today, but mostly from behind the scenes. And these are mostly not for-profit orgs, and our world is pretty hostile to for-profit orgs operating outside their usual scopes, especially in sacred areas like moral activism. Social media feed algorithms seem to be the main form of this now, but I doubt they could do that much more than they do now.  </p><p>We should do our best to try, but damn does this look hard.</p>]]></content:encoded></item><item><title><![CDATA[More Fatal Conceits ]]></title><description><![CDATA[In The Fatal Conceit (1988), F.A.]]></description><link>https://www.overcomingbias.com/p/more-fatal-conceits</link><guid isPermaLink="false">https://www.overcomingbias.com/p/more-fatal-conceits</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Sat, 04 Apr 2026 13:14:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8808075f-d177-48cf-9d43-04e1e1d523d7_853x521.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In <em><a href="https://en.wikipedia.org/wiki/The_Fatal_Conceit">The Fatal Conceit</a></em> (1988), F.A. Hayek argued that cultural evolution has bequeathed to us a capitalist &#8220;extended order&#8221; of money, property rights, and competitive markets, all with matching morals, and that socialism is bad because it appeals instead to dysfunctional moral instincts that this order had suppressed, while flattering us into thinking that we can apply reason well to more things than we actually can. Socialism replaces many capitalist choices with choices from deliberate &#8220;rational&#8221; bureaucratic government agencies. 
Capitalism, in contrast, typically makes use of more info than can our reason, and was also designed using more info.</p><p>Hayek, however, seems fine with using reason to choose within big firms, and he admits that cultural evolution (a) has often induced simpler societies to prevent such capitalism, (b) has often induced governments to greatly hinder capitalism in their later civilization periods, and (c) seems a proximate cause of the recent rise of interest in socialism. So why not estimate that the levels of capitalism and reason use that we seem to be drifting toward are in fact the most adaptive? Why see all that as a mistake?</p><p>Hayek seems to actually rely here not on cultural evolution, but instead on his theoretical economic analysis, together with empirical correlations between capitalism and places and periods that have had especially large wealth and growth. Which allows him to conclude that allowing cultural evolution to push us far enough away from capitalism now would plausibly result in the fall of our civilization, causing many deaths and much suffering. Which would be bad more because suffering is bad, and less because cultural evolution would go awry.</p><p>Behind Hayek&#8217;s argument there, however, seems to be a judgment that our modern world looks especially vulnerable to appeals to deeply embedded ancient moral instincts, and to flattery about our abilities to reason. However, as he never says this explicitly, Hayek never offers arguments for why we should expect to be more vulnerable to such things now.</p><p>This is where I offer cultural drift <a href="https://www.overcomingbias.com/p/our-big-oops">analysis</a> as a complement to Hayek&#8217;s story. At the level of cultural features that we can only vary effectively in large groups, over the last few centuries our civilization has drifted toward less variety, weaker selection pressures, and faster rates of change of culture and environments. 
All of which does plausibly make us more vulnerable to flattery and simplistic moral appeals undermining our commitments to morals supporting capitalism.</p><p>However, such analysis also predicts that these same forces make us vulnerable to many more fatal conceits, i.e., to decay in many other key features of our shared culture. Does Hayek also fear and warn against excess trust in reason and moral instincts there? Is it feasible for us to reason well enough to usefully overturn other non-capitalist morals that we have inherited from cultural evolution? Hayek said:</p><blockquote><p>Rebellion against private property and the family was, in short, not restricted to socialists. &#8230; Limits of space as well as insufficient competence forbid me to deal in this book with the second of the traditional objects of atavistic reaction that I have just mentioned: the family. I ought however at least to mention that I believe that new factual knowledge has in some measure deprived traditional rules of sexual morality of some of their foundation, and that it seems likely that in this area substantial changes are bound to occur. (p.51) &#8230;</p></blockquote><blockquote><p>Nor do I dispute that reason may, although with caution and in humility, and in a piecemeal way, be directed to the examination, criticism and rejection of traditional institutions and moral principles. &#8230; I wish neither to deny reason the power to improve norms and institutions nor even to insist that it is incapable of recasting the whole of our moral system in the direction now commonly conceived as `social justice&#8217;. We can do so, however, only by probing every part of a system of morals. (p.8)</p></blockquote><p>So Hayek is relatively open to rationality overturning traditional morals in one big area of life, and is in principle open in many other areas. 
So let me say this clearly: our usual styles of rational analysis deployed over the last few centuries seem to have been quite inadequate to the task of changing morals while preserving or enhancing their cultural adaptability. Maybe we could up our game, but that does look quite hard.</p>]]></content:encoded></item><item><title><![CDATA[Nations Double-Down on Status]]></title><description><![CDATA[Years ago I noticed that when my kids tried out a new game, those who won more wanted to play it again.]]></description><link>https://www.overcomingbias.com/p/nations-double-down-on-status</link><guid isPermaLink="false">https://www.overcomingbias.com/p/nations-double-down-on-status</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Thu, 02 Apr 2026 17:24:08 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5d304154-787b-4aff-985d-82eaea36b523_1000x667.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Years ago I noticed that when my kids tried out a new game, those who won more wanted to play it again. And parents often try to make sure kids win at stuff they want kids to do more. We come to like things in part due to seeing ourselves win at them.</p><p>Nations seem similar. Yes, nations value some activities more, and engage in those more as a result. But nations often double down on stuff after seeing themselves win at it in ways that they personally respect, and expect others to respect. Nations continue to do that stuff lots in part to remind the world of how grateful it should be for their contribution.</p><p>For example, the US has seen itself as pioneering and greatly advancing democracy, free speech, medicine, higher education, basic research, legal due process, mass production, mass media, space exploration, entrepreneurship, the internet, and global military suppression of nazism, communism, and terrorism. 
This helps explain continued record US spending on medicine, education, military, and legal process.</p><p>Other nations act similarly. For example, Britain doubles down on law, parliaments, and anti-racism. France doubles down on liberties and fancy food. India doubles down on yoga and spirituality, Russia on war, sacrifice, and anti-decadence, and China on development.</p><p>If you want a nation to do more of X, maybe praise what they&#8217;ve already done on X.</p>]]></content:encoded></item><item><title><![CDATA[Authenticity as Grace]]></title><description><![CDATA[Last week I realized that today&#8217;s rapid cultural evolution, mediated greatly by youth movements, seems encouraged by the common modern norm favoring &#8220;authenticity&#8221;.]]></description><link>https://www.overcomingbias.com/p/authenticity-as-grace</link><guid isPermaLink="false">https://www.overcomingbias.com/p/authenticity-as-grace</guid><dc:creator><![CDATA[Robin Hanson]]></dc:creator><pubDate>Thu, 26 Mar 2026 01:52:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/48a8aa1a-e8c2-4391-ba8e-30ec16f4b1b1_1536x988.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week I realized that today&#8217;s rapid cultural evolution, mediated greatly by youth movements, seems encouraged by the common modern norm <a href="https://x.com/robinhanson/status/2035831339790782520">favoring</a> &#8220;authenticity&#8221;. Youths ask their hearts how society should change. So I just read two books on the subject, Lionel Trilling (1972) <em><a href="https://en.wikipedia.org/wiki/Sincerity_and_Authenticity">Sincerity and Authenticity</a></em>, and Charles Taylor (1991) <em><a href="https://www.hup.harvard.edu/books/9780674987692">Ethics of Authenticity</a></em>. 
I also read Rousseau (1755) <em><a href="https://en.wikipedia.org/wiki/Discourse_on_Inequality">Discourse on Inequality</a></em>, as many call that the first modern advocacy of authenticity.</p><p>Authenticity means having your behaviors driven from within you, instead of letting others influence them. Follow your heart, you do you, go with your gut, that sort of thing. It is such a widely accepted norm that the authors who write books on it don&#8217;t actually argue for it much; they instead use it to argue for other stuff. My reading was a waste.</p><p>But, why exactly is authenticity such a good thing? Yes, there&#8217;s this <a href="https://marginalrevolution.com/marginalrevolution/2016/05/scott-alexander-on-robin-hanson.html">quote</a> about me, &#8220;Robin Hanson is more like himself than anybody else I know.&#8221; And, yes, my <a href="http://hanson.gmu.edu/home.html">webpage</a> has long said: &#8220;I&#8217;m not a joiner; I rebel against groups with &#8216;our beliefs&#8217;.&#8221; So as a matter of practice I seem to be authentic. Yet I still don&#8217;t see why it&#8217;s good, per se.</p><p>The modern world changes faster, and gives us more options, which puts a premium on agency; we can&#8217;t just ride along with our slowly changing peasant village anymore. But that means you need to make choices, not that they need to come from within.</p><p>We&#8217;ve long taken controlling more as a sign of status, so others controlling you lowers your status. But why would this effect be stronger in the modern world?</p><p>Maybe in the modern world imitation and social pressures have become easier to see. In the old stable peasant village you acted like everyone else, but so did everyone, and you were not noticeably following any particular other visible models. 
However, in the modern world choices are more varied and contested, and so we can more easily see who in particular is pressuring or influencing who else in particular.</p><p>That wouldn&#8217;t necessarily be bad, except that looking too obviously &#8220;try hard&#8221;, like you are trying to choose actions to impress and please others, shows an unimpressive lack of confidence. Just as the most impressive dancers make their dancing look &#8220;effortless&#8221;, maybe the most impressive social displays are those that seem to come naturally, with little noticeable effort.</p><p>Cultural evolution says that most everything that comes from inside of you was stuff that went there before, from your prior cultural exposures. But seeing you trying to please and conform looks quite different to observers than your seeming to just do stuff from within, even though all of that stuff inside resulted from your prior efforts to please and conform, perhaps as a child. It is like the difference between a dancer who visibly struggles to do her dance routine, and one for whom the routine looks effortless, enjoyable, and even invented on the spot.</p>]]></content:encoded></item></channel></rss>