Paul Krugman Flirts With Futurism


In a recent post, the famous liberal economist Paul Krugman asks us to "[c]onsider for a moment a sort of fantasy technology scenario." Even for a fan of Krugman's writing like me, the appearance of that word "scenario" should be chilling enough to anyone who knows how often it portends that think-tank non-thinking is on the way. That it is preceded by the word "technology" removes all doubt, and as for the word "fantasy," well, whenever the words "technology" and "scenario" are combined one should more or less consider that one implied. (Dearest futurologists, I keed! I keed!)

The futurological fantasy in Krugman's piece follows immediately, and it is very much a conventional one: imagine "we could produce intelligent robots able to do everything a person can do." I know it must seem highly uncharitable to refuse a thought experiment its avowedly speculative initial stipulation right off the bat, but I do want to remind readers up front that we cannot in fact produce robots "able to do everything a person can do," and I insist on this not because I think Krugman mistakenly thinks we can (when obviously he does not) but because I worry that when we imagine otherwise -- even for the sake of a thought experiment directed to the exploration of a separate matter -- we do great damage to our capacity to imaginatively identify here and now with people who can do what only people can. Part of what happens when we speculate futurologically about "intelligent robots" is that we risk denigrating what it is about intelligent people that makes them not robots, and hence risk becoming a bit more cavalier about our responsibilities to ensure people flourish as such. This is hardly something I think Krugman would consciously countenance, but in my experience an indulgence in futurological speculation can render otherwise sensible, humane people far more credulous and insensitive than they would be under normal circumstances. And, indeed, we have good reason to think this might be a particular weakness for the usually reliably sensible and humane Krugman -- about which I will say more nearer the end of this piece.

What Krugman proposes is that if we could (as we cannot) produce intelligent robots "able to do everything a person can do, [then c]learly, such a technology would remove all limits on per capita GDP, as long as you don’t count robots among the capitas." As to that latter stipulation, that such robots wouldn't count as capitas, as people, obviously that is the farthest imaginable thing from clear, and the first danger of this whole exercise, as I suggested at the outset, is that a futurological scenario ostensibly about GDP is actually placing us in a frame of mind in which we are contemplating the viability of treating beings "able to do everything a person can do" as non-persons, as instruments. Not to put too fine a point on it, it seems to me a robot that could do literally ALL that people can do would necessarily have to be included "among the capitas." If we are to indulge in a thought experiment in which prosperity means nothing but a slave economy (as we know it does not), then why not endorse the tried and true method that requires merely mistreating people as though they were robots, rather than demanding we make a go at the whole unwieldy, implausible actual production of intelligent robots that are then to be mistreated as unintelligent robots anyway? Of course, again, Krugman is not advocating for a slave economy (although I daresay you can find folks at Fox News who would say otherwise), nor frankly would he likely countenance the treatment of actually intelligent robots that could do literally anything people can as slaves either.

Fortunately for us all, this is a dilemma with which none of us is actually confronted in the least, for nobody is making anything even remotely like intelligent robots in the first place. Krugman admits this right off the bat: "Now, that [ie, intelligent robots to the rescue] [i]s not happening -- and in fact, as I understand it, not that much progress has been made in producing machines that think the way we do." Let us pause here and say what Krugman does not. Because it isn't just that "not that much progress has been made" in producing artificial intelligence, it is that since just before World War II when the idea of coding artificial intelligence first seriously captured the imagination of certain techno-utopians (I leave to the side a long pre-history of fascinating automatons and con-artists, even though these have in my view much more in common with contemporary adherents of AI and robo-utopianism than is commonly admitted, even among their skeptics) enthusiasts for this idea have been predicting with stunning confidence the imminent arrival of AI pretty much every year on the year, year after year, and have been doing so with never the slightest diminishment in their conviction, despite being always only completely wrong every single time.

And there is more: These adherents of AI have very regularly spoken of "intelligence" in ways that radically reduce the multiple dimensions and expressions of intelligence as it actually plays out in our everyday usage of the term, and often they seem to disparage and fear the vulnerability, error-proneness, and emotional richness of the actually incarnated intelligence materialized in biological brains and in historical struggles. It is one thing to be a materialist about mind (I am one) and hence concede that other materializations than organismic brains might give rise in principle to phenomena sufficiently like consciousness to merit the application of the term, but it is altogether another thing to imply that there is any necessity about this, that there actually are any artifacts in the world here and now that exhibit anything near enough to warrant the term without doing great violence to it and to those who merit its assignment, or to suggest that in declaring mind to be material we know enough to be able to engineer one any time soon, if ever, given how much that is fundamental to thought we simply do not yet understand.

One might like to think that this awareness is embedded in Krugman's admission that AI "isn't happening," but of course, were he to take this lesson to heart he would hardly have invited us down this garden path in the first place. And, true enough, he takes back his admission that AI "isn't happening" almost immediately after making it: "[I]t turns out that there are other ways of producing very smart machines." Let us be quite clear: If by "very smart" machines Krugman means very useful machines well designed by intelligent people, then his statement is obviously true (but we would still then have no reason to entertain his "fantasy technology scenario"), but if by "very smart" machines he means machines actually exhibiting something like intelligence, then this statement remains just as untrue as it was a minute ago. That is to say, it is not at all true. And for all the reasons I mentioned before, this is an untruth that it matters enormously to be clear about, because in attributing intelligence unintelligently we risk loosening the indispensable attribution of intelligence to those who actually incarnate it.

Krugman writes of the new "very smart machines" he envisions:

In particular, Big Data -- the use of huge databases of things like spoken conversations -- apparently makes it possible for machines to perform tasks that even a few years ago were really only possible for people. Speech recognition is still imperfect, but vastly better than it was and improving rapidly, not because we’ve managed to emulate human understanding but because we’ve found data-intensive ways of interpreting speech in a very non-human way. And this means that in a sense we are moving toward something like my intelligent-robots world; many, many tasks are becoming machine-friendly.

I do hope readers have taken note of the terrible argumentative burden being borne in this passage by the word "apparently" -- a burden that is especially noteworthy given how little evidence is offered up to render the claim, you know, actually "apparent." For whom but a "true believer" in the old-fashioned project of AI would pretend the enraging ineptitudes of Autocorrect and Siri, say, suggest in the least that "we are moving toward something like my intelligent-robots world"? And before you take umbrage at my suggestion that we might have a "true believer" here, do take note of that personally possessive "my" Krugman uses to describe a non-existing world of the future for which he is quite uncharacteristically disdaining the empirical evidence of our -- you should note that pronoun, too -- actually existing world, peer to peer.

Indeed, I must protest the glib suggestion that one can still describe with the very human word "interpretation" what Krugman is actually referring to when he speaks of "data-intensive… very non-human ways of… speech." This conflation of non-human data sifting with human interpretation looks to me not merely as bad as the straightforward falsehood of proposing, as so many AI dead-enders do and as Krugman seems to deny doing, that we have actually "emulated understanding" in code; frankly, the claim about machine "interpretation" seems to me just another form of exactly the same proposal.

Now, Krugman's whole discussion is a response to a piece by Robert J. Gordon proposing that "[g]lobal growth is slowing -- especially in advanced-technology economies. This column argues that regardless of cyclical trends, long-term economic growth may grind to a halt. Two and a half centuries of rising per-capita incomes could well turn out to be a unique episode in human history." In that piece, Gordon provides a handy little table summarizing the thrust of his argument and its assumptions, which Krugman reproduces in his response as well. Here is the key passage:

The analysis in my paper links periods of slow and rapid growth to the timing of the three industrial revolutions:

IR #1 (steam, railroads) from 1750 to 1830;
IR #2 (electricity, internal combustion engine, running water, indoor toilets, communications, entertainment, chemicals, petroleum) from 1870 to 1900; and
IR #3 (computers, the web, mobile phones) from 1960 to present.

It provides evidence that IR #2 was more important than the others and was largely responsible for 80 years of relatively rapid productivity growth between 1890 and 1972.

Krugman agrees both with Gordon's proposal of three key transformative technoscientific ensembles and with Gordon's insistence that the second ensemble was much more transformative than the third (in which we are presently caught up).

Krugman's facile "intelligent robot scenario" is proposed precisely to suggest an as yet unrealized but presumably imminent (it isn't) amplification of the third ensemble that would render it even more transformative than the prior ensembles. Again, for those who might complain that I am uncharitably refusing Krugman the suspension of disbelief owed to any thought experiment, I want to point out that Krugman wants to draw factual conclusions from his exercise in imagination: "And this means that in a sense we are moving toward something like my intelligent-robots world; many, many tasks are becoming machine-friendly. This in turn means that Gordon is probably wrong about diminishing returns to technology." Needless to say, one does not extrapolate from fanciful initial stipulations to factual prophetic utterances (a commonplace error of the futurological genre). But what may be worse is that Krugman's very framing of the thought experiment actually disables its thinking: in imagining the possibility of engineering nonpersons as capacious as persons, it deranges our imagination of the possibilities for engineering, via policy, a prosperity capacious enough to be enjoyed by all persons.

Now, I have long been a champion of Krugman's thesis that contemporary market fundamentalism represents a kind of Dark Age of Macroeconomics in which public discussion of economic policy exhibits a basic illiteracy of Keynes(-Hicks) insights akin to the comparable policy illiteracies driving "intelligent design" into biology classrooms, climate-change denialism, abstinence-only education, more guns as the solution to gun violence, and so on. But I have to wonder if Krugman's futurology in this instance is mobilized in part in an effort to defend an article of Keynesian faith actually much better left behind with the Dark Ages as well, the faith expressed in Economic Possibilities for Our Grandchildren that a prolongation of progress ensures prosperity for us all without the muss and fuss of social struggle and stakeholder politics, simply via compound interest.

Quite apart from the extent to which Keynes was endorsing too much imperialism for comfort in that early argument of his, the deeper problem is that he was also endorsing, as so very many twentieth century intellectuals did, as "inevitable progress" what amounted to the inflation of a petrochemical bubble that so vastly amplified the forces available to human agency that it created an impression that the monomaniacal application of brute force could overcome all problems. This wasn't true. In fact it often led to catastrophically greater problems (the Dust Bowl, antibiotic resistance, car culture, desert cities depleting aquifers, rising GDP conjoined to rising stress and suicide and reports of dissatisfaction, etc.), but even if it were true, it was never going to last forever in an actually finite world; indeed, it was never going to last long enough to smooth away the criminal unevenness distributing its benefits and its costs while it lasted. And it is beginning to look like the only thing worse than finitude pretending to infinitude as resources run out is the possibility that the waste and pollution accompanying this false infinitude might actually manage to destroy the world before destroying the world by running out.

I agree with Krugman that Gordon's illustrative table is useful to a point, but I want to point out that accepting it too wholeheartedly can easily obscure as much as it illuminates. Although petroleum makes an appearance in Gordon's second ensemble, for instance, it seems to me it should be foregrounded considerably more, and that coal should probably appear just as prominently in his first ensemble. This would immediately clarify that part of what is lacking in the third ensemble is a comparable shift to, say, renewable energy, the absence of which goes a long way toward explaining why the third ensemble really hasn't had anything like the transformative substance of the first and second. Recalling the famous introduction to Keynes' Economic Consequences of the Peace and its lament for another irrationally exuberant "Long Boom"-esque celebration of the networked globalism at the turn of a prior century, enabled then by what Tom Standage has termed the Victorian Internet of telegraphy, one really is forced to question whether Gordon's third ensemble isn't really just the continuation of the second after all. Indeed, to the extent that the internet is still powered by coal and implemented on petrochemical devices, and to the extent that one accepts my premise that especially the petrochemical epoch amounted to the inflation of a ruinous meta-bubble misconstrued as a naturally progressive modern civilization, then it is really hard not to wonder if Gordon's third ensemble represents anything but a more hysterically hyperbolic variation of the preceding fraud: a "digitality" literally enabling outrageous global financial fraud and a tragic race-to-the-bottom globalization backed by military force, all the while distracting attention from barbaric economic exploitation and environmental catastrophe with promises of virtual heavens and robot paradises.

When I suggest that part of what makes the third ensemble vacuous is the lack of renewable energy investment I might seem to be providing my own variation on Krugman's robotic supplement to renew hopes for progress, but I would remind both Gordon and Krugman of Yochai Benkler's provocative suggestion that the substantial impact of digitization is precisely anti-industrial in its effects. For Benkler what is taken to be unique to industrial-model organization is a reliance for productivity on capital-intensive infrastructure investment which in turn ensures concentrations of authority that countervail what might otherwise be the democratizing force of comparatively more disseminated prosperity. As it happens, I do indeed still believe in the possibility of progress, but I would not characterize it as industrial but absolutely anti-industrial in character, a matter of distributed but ecosystemically-embedded investment, open-accessible but democratically accountable authority, networked but situated knowledge production, peer to peer. Political struggle in the direction of equity-in-diversity, and historical struggle toward the solution of shared problems still looks to me like progress, but it is a matter of taking up democratic effort, not abdicating agency in a false hope for techno-transcendence.

Krugman genuflects a bit unconvincingly toward such political realities in an aside:

Ah, you ask, but what about the people? Very good question. Smart machines may make higher GDP possible, but also reduce the demand for people -- including smart people. So we could be looking at a society that grows ever richer, but in which all the gains in wealth accrue to whoever owns the robots. And then eventually Skynet decides to kill us all, but that’s another story.

Of course, nothing is more conventional among futurologists of the most embarrassingly Robot Cultic kind than to propose altogether flabbergasting wish-fulfillment fantasies, involving sooper-genius brain upgrades, living forever in shiny sexy robot bodies, wallowing atop nanobotic treasure piles or in Holodeck heavens, and so on and so forth, and then to attempt to boost their credibility as Very Serious intellectuals by piously warning us of the dangers of clone armies, robotic uprisings, Robot Gods eating humans as computronium feedstock, and so on. That is to say, they provide a little disasterbatory hyperbole as a "balance" to their techno-transcendent hyperbole.

While these hoary sfnal conceits made for some diverting fiction when they first appeared in pulp decades ago and still can be jolted into life with great writing, great acting, great special effects doing some serious heavy lifting, I cannot pretend to find much in the way of original insight in this sort of stuff let alone, for heaven's sake, thoughtful policy-making. Of course, these literary expressions are most powerful when they provide critical purchase on our current predicaments: the rhetorical force of the genre depends on the narrative machinery through which what is proffered under the guise of future prediction or projection provides in fact the alienation needed to re-imagine our inhabitation of the present differently, more capaciously, more critically. When futurological scenarists go on to republish simpleton sketches of the scenery of literary sf and then treat this most dispensable furniture as an analytic mode involving literal prediction and projection of "the future" (which doesn't exist, and can only become the focus of identification at the cost of dis-identification with the present) the result debauches the literary form it steals from while at once it deranges the policy form it seeks to promote itself as.

Notice that one of the things one is not talking about when one is talking about perpetual GDP growth via intelligent robots (or the Very Serious non-worry of plutocratic slavebot plantation societies) is how incomparable wealth concentration was abetted through the upward distribution of the profitable productivity gains of automation in the context of the destruction of organized labor in the United States, in the wake of the great but incomplete gains made by the middle class after the New Deal -- the sort of topic about which Paul Krugman has quite useful things to say when he isn't impersonating a futurological guru. In other words, when one is talking futurologically one tends to be talking about things that don't and won't exist rather than talking about things that do, or at any rate talking about things that do exist only in highly hyperbolized and symptomatic ways that render them unavailable to useful critical engagement. This is so even though, as here, the actual reality of automation provides the disavowed real world substance on which the futurological fancies of intelligent slavebots probably ultimately depend for much of their intuitive force anyway.

Needless to say, I find little comfort in Krugman's jokey futurological offer of a Terminator flip-side to his transparently consumer-capitalist robo-utopia as ideological guarantor of eternal progress, and I am not at all edified to see someone I otherwise admire quite a lot (I've read all of his books, including the textbooks and memoirs, and of course I will continue to do so with great pleasure and to my great benefit) stooping so low. I'll return the favor with the low blow of reminding readers that as a kid Krugman wanted to be Hari Seldon of Asimov's Foundation novels when he grew up, and regards economics as a sort of poor but perhaps serviceable substitute for Seldon's futurological pseudo-discipline of "psychohistory" -- which Krugman imagines as a discipline integrating economics, political science, and sociology (and no doubt "Big Data") -- "a social science that gives its acolytes a unique ability to understand and perhaps shape human destiny." Interesting word choice, there, acolytes! While it is of course enormously important for human beings to try to understand the times in which we live, the meaning of events that beset us, the history which we take up, the legacies with which we will come to grapple later in life as will generations who follow after us, I do not agree that there can be a political science of free beings, I do not agree that there is a human destiny that beckons the clear-sighted rather than an open futurity inhering in the ineradicable diversity of stakeholders to the present, and I do not agree that thinking what we are doing is the least bit about making profitable bets or better prophecies. I think the skewed perspective of futurology may sometimes seem to be a matter of talking about robots, but it is really more a matter of talking as if we are robots.

Here is Krugman's final thought: "Anyway, [this is] interesting stuff to speculate about -- and not irrelevant to policy, either, since so much of the debate over entitlements is about what is supposed to happen decades from now." May I suggest by way of my own conclusion that the primary relevance of this sort of speculation to future policy outcomes is precisely its deranging impact on policy-making in general. Consider the way in which futurological daydreams about longevity gains have provided the rationale for pernicious suggestions that the retirement age be delayed -- even though expected longevity at retirement age hasn't increased at all for most people who have to work for a living, although no doubt superannuated senators and wonks in their cushy posts may feel their prospects past sixty-five are long. Consider the way in which futurological daydreams of megascale geo-engineering projects provide corporate-military rationales for democratic denialism in the face of anthropogenic climate catastrophe -- rationales in which the very corporate-military actors who exacerbate and deny climate change now are re-cast as convenient imaginary saviors from climate change, envisioned as operating very profitably of course, and no doubt much less accountably due to the conditions of emergency, recklessly proposing hosts of unilateral interventions into ill-understood climate systems, willy-nilly, and at vast industrial scales, with who knows what consequences… as all the while they decry democratic environmental politics of education, regulation, incentivization, and public investment as hopelessly corrupt, dead on arrival, emotionally overwrought.

I am far from denying the necessity for policy-makers to have recourse to consensus science in crafting effective legislation, making sound investments, and planning for likely problems and opportunities. Every actually legibly constituted scientific and academic and professional discipline has a foresight dimension -- but there is no analytic discipline evacuated of or subsuming all specificity that produces "foresight in general" (as futurology tends to pose as), just as there is no literary discipline devoted to testable hypotheses (as the futurological "scenario"-form tends to pose as) rather than to meaning-making through salient narrative, figurative, and logical association. Further, there is no such thing as "The Future" qua destination or Destiny, nor such forces as "trends" one can ride to that destination or Destiny: there are only judgments and narratives that provide purchase on the present, and only to that extent provide some measure of guidance as the present opens onto the next present. There are few economists who provide us a better grasp, through the application of empirically grounded models, of the complex, dynamic policy terrain of international economics, uneven technodevelopment, and liquidity traps than Paul Krugman at his best. He is invaluable in the work of understanding where we are going from the present, and as such he has no reason to pine after the prophetic utterances of futurology.

We have no reason to think intelligent robots are on the way in any sense remotely relevant to responsible policy concern. And it isn't economists (or pop-tech journalists or, least of all, futurologists) we should be reading to gain a sense of when intelligent robots are proximate enough to assume real-world relevance; it will be biologists, neuroscientists, and engineers. But we have every reason to think that were intelligent robots to arrive on the scene they would do so only after who knows how many intermediary steps had been made, at every single one of which there would have been quandaries for policy to address shaped by the stakeholders to the changes of the moment, the shaping of which would articulate in turn the terms and stakes on which the next change then depended. The imagined distances and destinies of the futurologists exert little force and provide little insight on the complex vicissitudes of technoscientific change and technodevelopmental struggle, and their pristine lines of techno-teleologic rarely have much at all to do with the shape and substance and stakes that drive the way to eventual outcomes. There is plenty for policy-makers to grapple with as we are beset by dumb automation in the hands of plutocrats, and every moment devoted to wish-fulfillment fantasies of intelligent robot friends and foes is a moment stolen from matters actually at hand, many of them sufficiently urgent that our failure to be equal to them guarantees as nothing else could that futurological fancies never find their way even to some fragmentary fruition.

This is a revised version of a post that first appeared at Amor Mundi.