On the significance of Eliezer Yudkowsky
Mar. 9th, 2025 03:54 am

There's a question that I occasionally see asked on the internet -- what's the big deal about Eliezer Yudkowsky? Oh, he suggested that artificial intelligence might be dangerous? That's not original! People have been talking about robot uprisings for as long as they've been talking about robots. Why is this Yudkowsky guy getting credit for it?
And I think this question is interesting for what it reveals about how much things have changed since, oh, 20 years ago, let's say. Yes, people have been talking about robot uprisings for as long as they've been talking about robots. But the question you have to ask is, who was saying artificial intelligence might be dangerous? Because back then, the general assumption among... technical people, science fans, transhumanists, etc -- the general assumption among that crowd was that of course artificial intelligence would be a good thing, just like technology generally is a good thing; and that if you thought otherwise, if you thought actually it might be dangerous, you were a backwards luddite. (This is roughly the view expressed now by the "e/acc" crowd.) Moreover, the arguments for it being dangerous were largely based on anthropomorphization. Yes, there were people talking about the dangers of artificial intelligence -- but the sort of person who might go work on artificial intelligence, or become any sort of expert on it, would never listen to that sort of person, and with good reason; they generally weren't worth listening to. They really were often anti-technology luddites, and their arguments were generally pretty bad.
(Frankly, I'm not even the best person to be relating this -- someone older, who had an adult's view of all this, would be better. I was prompted to finally write this post due to a conversation with an acquaintance today who occasionally attends OBNYC, but who also attends New York skeptics' meetings, and mentioned how the latter is a very different, substantially older crowd. I like to point out sometimes that a lot of Yudkowsky's writing is fairly continuous with older writings on rationality by people like Feynman, Sagan, etc; but this is only somewhat true, as evidenced by this split!)
So the significance of Eliezer Yudkowsky -- I mean, certainly not the only significance, likely not the main significance, but the significance for the purposes of this question -- isn't that he proposed that artificial intelligence could be dangerous; it's that he A. convinced the sort of people who were inclined to dismiss such risks that artificial intelligence really could be dangerous, and B. did so by pointing out that the arguments for AI being safe are themselves based on anthropomorphization, and rebutting those arguments in detail.
But it seems that people just coming to this conversation now often don't realize all this! They don't think there's anything unusual about technical people, transhumanists, etc., considering AI to be dangerous -- they think of it as continuous with earlier arguments about robot uprisings, rather than recognizing those earlier arguments as something such people would have dismissed.
Somewhat similarly -- and this is something Yudkowsky himself has often remarked upon -- you get people who've never bothered to actually read Yudkowsky calling him an anti-progress luddite, which is of course not remotely correct. Indeed, the fact that he was not a luddite, and had credibility as a transhumanist, lent credibility to his eventual turn against artificial intelligence, and made it influential rather than something reflexively dismissed! The thing (well, one of the things, but the relevant thing) that was unusual about Yudkowsky (but, thanks to him, is not unusual anymore) was his stance of being pro-technology and pro-progress but against the development of artificial intelligence specifically. But some people who comment on things based only on impressions assume that it must still be the case that anyone opposed to artificial intelligence is a luddite. This is basically the opposite way of missing Yudkowsky's significance; instead of failing to realize that the past was not like the present, one could fail to realize that the present is no longer like the past. (Well, OK, that's not quite right, because these people aren't so much assuming that the present is like the past as they are assuming that the present is like the obvious thing you'd expect. It isn't!)
(Things are also different from 15 years ago in that transhumanism as it used to exist seems to be much reduced, because discussion of artificial intelligence has largely subsumed it! Not entirely, but to a pretty good extent.)
Anyway, yeah, some context for those confused about such things...
-Harry