Eliezer yesterday:
If I had to pinpoint a single thing that strikes me as “disagree-able” about the way Robin frames his analyses, it’s that there are a lot of opaque agents running around, little black boxes assumed to be similar to humans, but there are more of them and they’re less expensive to build/teach/run. … The core of my argument has to do with what happens when you pry open the black boxes that are your economic agents, and start fiddling with their brain designs, and leave the tiny human dot in mind design space.
Lots of folks complain about economists; believers in peak oil, the gold standard, recycling, electric cars, rent control, minimum wages, tariffs, and bans on all sorts of things complain about contrary economic analyses. Since, compared to most social scientists, economists use relatively stark mathy models, the usual complaint is that our models neglect relevant factors, and make false assumptions.
But of course we must neglect most everything, and make false assumptions, to have tractable models; the question in each context is what neglected factors and false assumptions would most mislead us.
It is odd to hear complaints that economic models assume too much humanity; the usual complaint is the opposite. Unless physicists have reasons to assume otherwise, they usually assume masses are at points, structures are rigid, surfaces are frictionless, and densities are uniform. Similarly, unless economists have reasons to be more realistic in a context, they usually assume people are identical, risk-neutral, live forever, have selfish material stable desires, know everything, make no mental mistakes, and perfectly enforce every deal. Products usually last one period or forever, are identical or infinitely varied, etc.
Of course we often do have reasons to be more realistic, considering deals that may not be enforced, people who die, people with diverse desires, info, abilities, and endowments, people who are risk-averse, altruistic, or spiteful, people who make mental mistakes, and people who follow “behavioral” strategies. But the point isn’t just to add as much realism as possible; it is to be clever about knowing which sorts of detail are most relevant in what context.
So to a first approximation, economists usually can't tell if the agents in their models are AIs or humans! But we can still wonder: how could economic models better capture AIs? In common with ems, AIs could make copies of themselves, save backups, and run at varied speeds. Beyond ems, AIs might buy or sell mind parts, and reveal mind internals, to show commitment to actions or honesty of stated beliefs. Of course:
That might just push our self-deception back to the process that produced those current beliefs. To deal with self-deception in belief production, we might want to provide audit trails, giving more transparency about the origins of our beliefs.
Since economists feel they understand the broad outlines of cooperation and conflict pretty well using simple stark models, I am puzzled to hear Eliezer say:
If human beings were really genuinely selfish, the economy would fall apart or at least have to spend vastly greater resources policing itself … group coordination mechanisms, executing as adaptations, are critical to the survival of a global economy.
We think we understand just fine how genuinely selfish creatures can cooperate. Sure, they might have to spend somewhat more on policing, but not vastly more, and a global economy could survive just fine. This seems an important point, as it seems to be why Eliezer fears even non-local AI fooms.
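To make that standard logic concrete (my own illustrative sketch, not anything from the debate itself): in an indefinitely repeated prisoner's dilemma, purely selfish agents sustain cooperation whenever they are patient enough, where "policing" is just the credible threat of withdrawn future cooperation. The payoff numbers below are made up for illustration:

```python
# Minimal sketch: cooperation among purely selfish agents in an
# indefinitely repeated prisoner's dilemma with grim-trigger "policing".
# Payoff values are illustrative, not from the post.

T, R, P, S = 5.0, 3.0, 1.0, 0.0  # temptation > reward > punishment > sucker

def cooperation_sustainable(delta):
    """Grim trigger is an equilibrium iff the one-shot gain from
    defecting (T - R) is outweighed by the discounted loss of all
    future cooperation, i.e. delta >= (T - R) / (T - P)."""
    return delta >= (T - R) / (T - P)

for delta in (0.3, 0.5, 0.7, 0.9):  # patience (discount factor)
    payoff_coop = R / (1 - delta)                # cooperate forever
    payoff_defect = T + delta * P / (1 - delta)  # defect once, then punished
    print(f"delta={delta}: coop={payoff_coop:.1f}, "
          f"defect={payoff_defect:.1f}, "
          f"sustainable={cooperation_sustainable(delta)}")
```

With these payoffs, cooperation is sustainable for any discount factor of 0.5 or above; the only "policing" cost is tracking who defected, not vastly greater resources.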
PS: I agree the irony here is huge! It would be extremely frustrating to be constantly bombarded with claims that your 'homo economicus' assumption makes you irrelevant in the 'real world'. Then, to hear almost the reverse claim would be infuriating!
Nevertheless, I just don't feel comfortable with how casually you account for the change from biological humans to self-modifying AIs, with even the legal property rights of their human creators kept intact. For my part, I would need some extremely strong arguments to convince me that humans can comfortably rely on legacy property rights to ensure their long-term survival. Given the scope of possible actions superintelligent entities of unknown motives could take, assuming that property rights for humans persist in the long term seems like science fiction.
"Cameron, don't you think economists might know something about how behavior would change without status or luxury desires?"
Robin, I expect there is work at the fringes of economics that would give valuable insight into that situation. Could you point me to a significant paper on that explicit topic that you consider worthwhile, and that makes the kinds of assumptions and reasoning I might benefit from?
Unfortunately, I also know that the disadvantage of expertise is that it tends to make people overconfident in their understanding of things outside their field. When it comes to commenting outside the bounds of their professional knowledge, I expect experts in economics to overrate the importance of their field. It's what humans do.
Economic research and understanding is incredibly biased towards actual human behavior. Even work that deals with societies of specific counterfactual entities will be biased. People are less likely to publish conclusions that would be considered 'silly' and are more likely to publish theories that validate the core dogmas of the field. What incentive does an economics researcher have to publish a paper that concludes "almost all of our core political values as a profession wouldn't apply in this situation"? That's the sort of naivety that leaves someone either burnt out or ostracized soon enough.