The Artificial Intelligence Oxymoron

I'm reading Steven Levy's wonderful book about Google, In the Plex.  In its account of the development of AdSense, an engineer explains the process of having software understand a Web page well enough to match its content with an advertisement.  He says that, fundamentally, the problem is that of enabling a machine to accept a stream of words as input and correctly predict the words that come next.  Google's indexing of the entire Web, and of some of the rest of the world's information, gives it a tremendous amount of material from which its software can learn to accomplish that feat. 
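
To make the prediction task concrete, here is a minimal sketch of a next-word predictor built from bigram counts.  It is only an illustration of the general idea the engineer describes, not Google's actual method; the toy corpus, the function names, and the choice of a simple bigram model are all assumptions made for the example. 

    from collections import defaultdict, Counter

    def train_bigram_model(corpus):
        # For each word, count which words were observed to follow it.
        model = defaultdict(Counter)
        for sentence in corpus:
            words = sentence.lower().split()
            for current_word, next_word in zip(words, words[1:]):
                model[current_word][next_word] += 1
        return model

    def predict_next(model, word):
        # Return the most frequently observed continuation, or None if the word is unseen.
        followers = model.get(word.lower())
        return followers.most_common(1)[0][0] if followers else None

    # A tiny corpus standing in for the material Google indexes.
    corpus = [
        "the advertisement matches the page",
        "the page describes hiking boots",
        "the advertisement promotes hiking boots",
    ]
    model = train_bigram_model(corpus)
    print(predict_next(model, "hiking"))  # prints "boots"

The scale is different by many orders of magnitude, but the shape of the task is the same: take the words seen so far and guess what comes next from what the corpus has shown before. 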

Anyone who has read Jacques Derrida will note with interest how Google's computer scientists have arrived at the same perspective on semantics that philosophers reached with Derrida's publication of Of Grammatology: that meaning is not complete at the current location in a stream of language, but is always only being completed by the subsequent location.  Derrida, of course, called this essential aspect of semantics "différance." 

In the next paragraph of In the Plex, the engineers speculate on the prospect of the machine ascending to artificial intelligence by using the repository of data at its disposal to acquire more knowledge and linguistic understanding.  That's the point at which their thinking departs from Derrida's. 

They're absolutely right to recognize that intelligence is linguistic, that it is demonstrated by fundamentally linguistic capabilities.  What computer scientists generally don't see is that the use of language, and therefore the capacity for intelligence, is essentially organic. 

That's because the use of language is driven by desire.  If the meaning of a stream of language is always within the context of what comes next in that stream, then it is desire that drives a user of that language onward in that stream. 

An understanding of the nature of that desire is to be found somewhere among the writings of Karl Marx, Charles Darwin, and the early Jean Baudrillard: most likely the latter.  Jacques Lacan explained how desire draws infants into language. 

Machines have no desire.  So while they can be made to appear as if they have linguistic capability, they simply don't.  They may be able to take a stream of language as input and predict how that stream will continue, but they will never care whether the stream actually does continue or not. 

So we can program machines to win at poker or chess.  We can't make them care about winning or losing, though, and therefore we can't ever make them play those games, or any others; we can only make them make the moves. 

That's why I find "artificial intelligence" to be an oxymoronic term.  Intelligence is linguistic and linguistic capability is irreducibly organic.  Nothing about intelligence is artificial, and nothing artificial can be intelligent.