Daniel Dennett on the desirability of general AI

He sees only risks, and no rewards, in generalized (i.e., conscious) artificial intelligence:

WE DON’T NEED artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools. Tools do not have rights and should not have feelings that could be hurt or be able to respond with resentment to “abuses” rained on them by inept users.

[…]

So what we are creating are not—should not be—conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates, no personality (but all sorts of foibles and quirks that would no doubt be identified as the “personality” of the system): boxes of truths (if we’re lucky) almost certainly contaminated with a scattering of falsehoods.

It will be hard enough learning to live with them without distracting ourselves with fantasies about the Singularity in which these AIs will enslave us, literally. The human use of human beings will soon be changed—once again—forever, but we can take the tiller and steer between some of the hazards if we take responsibility for our trajectory.

Will AI Achieve Consciousness? Wrong Question

Even if generalized AI is not the explicit goal, it may emerge as a natural consequence of building devices that must fend for themselves without human intervention (in interstellar space, for example). After all, human generalized intelligence itself likely evolved as a by-product of survival pressures, not as a specific goal.

Avoiding the creation of generalized AI, even if we wanted to, may prove harder than simply deciding against it. And that is the concern.