Einstein once said that a theory should be as simple as possible, but no simpler. Similarly, I recently remarked that one’s actions should be as noble as possible, but no nobler. Implicit in these statements are constraints: that a theory should be supported by evidence, and that actions should be feasible. Sure, you can find simpler theories that conflict strongly with evidence, or actions that look nobler if you ignore important real-world constraints. But that way leads to ruin.
Similarly, I’d say one should reason as abstractly as possible, but no more abstractly, with the implicit constraint being that one should know what one is talking about. I often complain about people who have little tolerance for, or ability in, abstract reasoning. For example, doctors tend to be great at remembering details of similar cases but lousy at abstract reasoning. But honestly I get equally bothered by folks who trade too easily in "floating abstractions," i.e., concepts whose meaning is prohibitively hard to infer from usage, such as when most usage refers to other floating abstractions.
For example, most uses I’ve seen of "proletariat" or "exploitation" seem like floating abstractions to me, though within particular communities these concepts may have a clearer meaning. Now of course most any well-defined abstraction might seem to float to those who haven’t absorbed the right expert explanation. But if there is no clear meaning, even to experts, then the concept basically floats.
Now there are communities that say their concepts acquire clear enough meanings after one has absorbed decades of readings, even though experts can’t really summarize those meanings any better than to tell you to read for decades. But even if they are right, that way also seems to me to lead to ruin. The intellectual progress I see comes mostly from the modularity that becomes possible with clearer meanings. But that is a physicist/economist/compsci guy speaking; you may hear differently from others.
Eliezer has just raised the issue of how to define "intelligence", a concept he clearly wants to apply to a very wide range of possible systems. He wants a quantitative concept that is "not parochial to humans," that applies to systems with very "different utility functions," and that summarizes a system’s performance over a broad, "not … narrow problem domain." My main response is to note that this may just not be possible. I have no objection to looking, but it is not obvious that there is any such useful, broadly applicable "intelligence" concept.
We agree "intelligence" is clearly meaningful for humans today. When we give problems to isolated, well-fed, sane humans, a single dominant factor stands out in explaining variation in performance, and that same factor also helps explain variation in human success in the wider world. But it is far from the only factor that explains variation in human success. For that, we tend to think in terms of production functions in which IQ is just one relevant factor.
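To make that framing concrete, here is a minimal sketch of the kind of production function meant, assuming a hypothetical Cobb-Douglas form with placeholder inputs (nothing here specifies a particular functional form):

$$ \text{success} \;=\; A \cdot (\text{IQ})^{\alpha} \, (\text{effort})^{\beta} \, (\text{resources})^{\gamma} $$

where the inputs besides IQ are stand-ins for whatever else matters; the point is only that IQ enters as one input among several, with its exponent capturing how much it contributes relative to the rest.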
In the computer world we clearly have a useful distinction between hardware and software, and we have many useful concepts for distinguishing kinds of software, but "intelligence" is not really among our best concepts there. I’d say it is just an open question how much more widely "intelligence" can be meaningfully applied.
If your goal is to predict our future over the next century or so, then the question is which abstractions are most useful for reasoning about the long-term evolution of systems like our world today. The obvious candidates here would be the abstractions that biologists find useful for reasoning about the long-term evolution of ecosystems, or more plausibly the abstractions that economists find useful for reasoning about the long-term evolution of economies.
"Intelligence" has so far not been central to these concept sets, but of course these frameworks remain open to improvement. So the question is: can one formulate a clearer more broadly applicable concept of intelligence, and then use it to improve the frameworks we use to think about the long term evolution of economies or ecologies? This may well be possible, but it has surely not yet been demonstrated.