There was a small item in the news recently which would seem to deserve more attention and consideration than it has received. It appears that many newspapers, including prestigious ones like the Los Angeles Times, are now using computer programmes to write stories for their financial pages. I gather that these articles are, for the most part, dry, statistic-laden reports about movements on the stock exchanges and other commercial activities, of interest mainly to those who follow financial markets in order to invest. Yet these articles are said to be surprisingly well written, conveying complex information with a clarity and precision that many human writers would be unable to match. Although there is nothing in the way of personal expressiveness to be found in such reports, this can hardly be considered a defect for the requirements of this kind of journalism. But the obvious question lurking behind this development is this: could programmes that generate texts for this type of news reporting also be used for other kinds of writing? Software designers are only too eager to assure us that they can. But even for computer illiterates like me, it's quite easy to imagine all sorts of texts being produced by computer programmes. For something strongly resembling a computerised style, one founded on predictability, already affects many other forms of journalism and popular fiction, to say nothing of political speeches. And the so-called predictive texts that serve so well for sending messages from mobile phones could easily be extended to more complex kinds of writing. Indeed, such programmes could become so proficient that the labour of writing would come to seem an increasingly pointless activity. But this raises the important question that prompts this post. If computer programmes can do our writing for us, who or what is supposed to do our thinking and feeling for us?
Many thinkers, including my hero Raymond Tallis, believe that computers merely process information by human design and can't actually be said to think at all. Others, such as the inventor and futurist Ray Kurzweil, believe that computer technology is rapidly approaching what he calls a singularity, after which computers will completely surpass the conscious capacities of humankind. According to this line of thinking, computers will no longer follow our commands; we will obey theirs, as ever more comprehensive programmes take over and determine our affairs. If this sounds like a scenario out of a science fiction horror movie, it is nevertheless plausible enough to alarm Stephen Hawking, Bill Gates and Elon Musk, none of whom can be described as ignorant technophobes. But perhaps the important concern for these informed observers is not about whatever debatable consciousness computers might possess so much as about our increasing reliance on computers in virtually every sphere of human activity. Even now, we are beginning to see and experience the world only through the portals that computer technology provides. Computers, then, wouldn't have to be truly conscious in order for them to determine the consciousness that we have of the world. But the question that inevitably follows on from this is how the advance of computer technology will affect the consciousness that we have of ourselves. Craftsmen have always identified with the tools that express their skills. A carpenter manifests his will through his hammer. A soldier may see himself as his rifle. And even if he uses a more sophisticated instrument for writing (as I'm doing now with my laptop), a writer often identifies with his pen. But computer technology is so pervasive that it influences almost every human activity and penetrates deeply into our understanding of how we act in the world. Identifying with computer processes, then, is by no means restricted to the professionals who devise them.
For even if we reject the notion that computers are conscious, it is all too easy to regard them as identical to the consciousness that we possess. It almost requires a deliberate effort to remember that human consciousness is not a programme that comes packaged in flesh-and-bone hardware. Still, any conscious activity that we happen to engage in can appear to be the true purpose of consciousness itself, particularly when doing seems to attain to being. But of all activities, thinking takes pride of place when it comes to both being and doing. "I think, therefore I am," Descartes asserted. So does this mean that if computers think for us, and do it better by being more logical and precise with vastly more information at their disposal, then they could also take possession of our being? Because almost everything we do requires information, and because information can be processed much more efficiently by computers, many people believe that any information we possess can serve no higher purpose than to meet the biological requirements that our genetic inheritance imposes on us. Information, according to this view, only serves the interests of individual survival and the perpetuation of the human species. Yet few, if any, of us actually experience ourselves in this way. Of course, biological necessity drives our instinctual urges to eat and have sex, and it is these drives that sustain us both individually and as a species. Yet our drive to be is hardly arrested by the satisfaction of these base desires. For our urge to be is also an urge to create. And while our creations may not always satisfy us, and may even cause us great misery and regret, making something actual from the merely potential is perhaps the most distinctively human thing we do. Moreover, we must create not just cities, institutions, machines and works of art; we also need to create reasons for doing what we do. And those reasons may have nothing to do with our survival.
But we don't usually think of the matter in this way. We usually think that needs descend on us from our circumstances and drive us to action in a straightforward causal sequence. We overlook that we might have responded differently and could have created other possibilities for ourselves. We forget that our intentions lie buried within all our experiences of the world, and that before we acted on those intentions there was indecision and perhaps even doubt. Will computers ever become sufficiently conscious to experience such indecision? Or is indecision itself a sign of a feeble consciousness, uncertain of its foothold in the world? One of the attractions of computers appears to lie in the belief that some day they will become so hyper-conscious that they will never face indecision at all. Presented with a problem, computers will simply arrive at the correct solution with relentless logical force. Unfortunately, what constitutes both a problem and its solution depends on the values upon which any judgement is made. In the beginning (the biblical echo is intended), these judgements must be made by human programmers, presumably to serve human interests. Later generations of computers may, however, arise without direct human intervention, which could, conceivably, terminate their interest in human affairs. But could computers ever generate their own interests, independent of the instructions of human programmers? And if they have their own interests, could those interests ever conflict? Finally, if computers ever become truly conscious, what would they want to become conscious of? These are questions that have stimulated science fiction writers ever since Karel Čapek first conceived of robots almost one hundred years ago. And as in science fiction, my questions about the consciousness of computers are really about us.
Arriving at solutions with relentless logical force is widely considered an ideal, particularly by people who believe that computers will surpass us in their capacities of consciousness. But although computers will undoubtedly help us advance in knowledge, trying to eliminate the ambivalence inherent in being human may not represent an advance of consciousness at all. Indeed, recognising the inherent ambivalence of consciousness may be one of the essential properties of higher consciousness. Rejecting that ambivalence in pursuit of a dream of absolute logical certainty, then, would hardly be an advance. It would only be an abdication of being human.
November 2015