
Monday, 30 October 2017

Robots, The Turing Test, Defining Life, & All That Jazz




With the rapidly increasing technological capacity of artificial intelligence, there are all sorts of thoughts and opinions doing the rounds about robot intelligence, and whether robots will ever have intelligence on a level with humans, and whether they can ever have feelings that are qualitatively comparable to the feelings we experience.

There is a sense in which technology is combinatorial; that is, new technologies arise from existing ones - the constructive aspect of technology and human endeavour is an example of creating systems out of other parts of the self-same systems. The term for this is 'autopoietic' (from the Greek for 'self-creating'). Of course, these systems don't create themselves on their own - they require the agency of human minds, a little like how a coral reef creates itself from itself through the agency of small organisms.

Now when one thinks of nature and the huge potential with which she endows us, we see a similar situation occurring - only in the grand scheme of things the mind is a stupendously complex thing doing the creating: it builds on its neurological composition, on its innate cognition, on experience, on incoming information, on analysis and processing, and it even has facilities (notably, the dopamine neurons of the anterior cingulate cortex) through which we become aware of patterns and formations which, combined with memory and experience, alert us to deviations and help ground perceptions like 'trust' and 'reliability'.

Some biologists define life in a similar way - as an autopoietic system - that is, one whose constituent products sustain its structural integrity in order to propagate its genetic material. But that doesn't really tell us very much, because it fails to zoom in on the whole ladder of vast complexity surrounding sentience and consciousness.

We have often asked whether other animals have consciousness like us, and whether a machine could ever think in the same way humans think (in this blog post here I elaborate on why I think it's impossible to replicate the human brain). But here's the fundamental problem: consciousness is so hard to define that it is all but useless to claim such a thing exists in something that isn't human.
 
In Language, Truth and Logic, Ayer gave his own version of the Turing Test by considering a man and machine. To determine whether a machine is conscious it would have to pass tests that look for the presence of the same kind of consciousness that humans exhibit.

In the past I've volunteered to interact with one or two Turing-type programs, in which participants engage with a 'mind' that responds to their typing (and they to it), to see whether volunteers can distinguish between a computer pretending to be a person and an actual person on the other end of a keyboard. My conclusion was that it is quite easy to tell a computer by the rules it uses to transform my comments into a reciprocated sentence.

Just as in Searle's Chinese Room thought experiment, a program that simulates the ability to understand Chinese is not proof that the machine understands the meaning of what it returns - it can simply be programmed to respond with the correct symbols. The qualitative difference between this kind of rule-based response and a genuine human is immense.
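To make the point concrete, here is a minimal sketch of the kind of rule-based 'mind' described above - an ELIZA-style pattern-substitution program. The patterns and canned replies are entirely made up for illustration; they are not from any program I actually interacted with.

```python
import re

# Hypothetical rules, purely illustrative: each pairs a pattern with a
# template that recycles part of the user's own words back at them.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]
DEFAULT = "Tell me more."

def reply(comment: str) -> str:
    """Transform a comment into a reciprocated sentence via fixed rules."""
    for pattern, template in RULES:
        match = pattern.search(comment)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(reply("I am worried about robots"))
# → Why do you say you are worried about robots?
```

Nothing here 'understands' anything - it is symbol-shuffling all the way down, which is exactly why wordplay and nuance expose such programs so readily.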

Incidentally, from feedback I received, I was one of only three people to successfully *out* the computer as being non-human. I am not sure how the other two volunteers managed it, but my method was to try to use a linguistic trick that I conjectured a human would comprehend that a robot unfamiliar with subtle human nuances may not. So I asked the computer to give a view on a little syllogism I created:

Nothing is better than eternal paradise
A weekend in Great Yarmouth is better than nothing
Therefore a weekend in Great Yarmouth is better than eternal paradise

You can see what I did there with the wordplay on 'nothing' - the syllogism equivocates between 'nothing' as a quantifier and 'nothing' as a thing. I figured that a human would get it but a robot would not, and that turned out to be the case: the response exposed the typist as non-human.

My gut feeling is that one of the major hurdles to our ever being able to declare a robot alive in any meaningful sense is that the term 'life' is either a subjective human construct or far too complex to be properly defined (sometimes both).

Consider the question of whether a virus is classed as a living organism. Definitions of this kind are somewhat imprecise simply because we are relying on our own criteria for what is living and what is not. When dealing with such issues, we mustn't fall into the trap of forgetting that the things we define are defined within the language we employ, and that definitions become less fuzzy as we become more precise about where to draw the line. The chances are that the question of whether a virus is a living organism will not even apply to the science of thirty or forty years hence.

Notions like species, genetic similarity, and even in many cases 'life' itself are human constructs that help us classify and categorise what we have discovered. The reason people disagree so much on abortion is that they can't agree on what constitutes life - not that they disagree on whether murder is bad. Some argue that life begins at conception because they consider life to be 'information', and they maintain that the information to create a human is already present upon fertilisation. In that case, if we copied that information onto a CD (it would fit), I assume they must believe the CD is alive. Further, if a cell is life, does that mean a virus is alive? Such issues show how people get into epistemic difficulties.

A virus has the genetic information necessary for its own growth and propagation, but it requires the machinery of a host cell to act on it. Thus if we define 'life' as autonomous growth and reproduction, then by that definition a virus is not truly alive. A virus simply acts within nature's physical laws; it is not answerable to human descriptions - it is humans who have defined living organisms as being able to adapt to their surroundings, achieve homeostasis, build proteins, carry a characteristic genetic code, and reproduce.

Viruses do fit some of the criteria: they have genetic material, and they have both living and nonliving characteristics; but, as we've already said, they cannot survive or propagate without the metabolic machinery of host cells.

So much for the definitional ambiguities. Now, to complicate matters further, consider a thought experiment I wrote a while back, which tries to cover the second part of the equation - the vast complexity of life:

Imagine removing atoms one by one in the physical world, and suppose we have a method of doing so with ultimate computational precision and at high speed. If I reduce a plane or a car or a microwave bit by bit to a random aggregation of atoms and then reassemble them exactly as they were, I would have a fully working plane or car or microwave, because none of these systems is biologically alive. But if I did the same to an insect, a bird or a human (even at several trillion atoms at a time), there would come a point when its being 'alive' would cease.

If I reassembled those atoms exactly as they were I would never reconstitute life, because once a thing dies it cannot be brought back to life. At least that is our current understanding of biological systems. But do we believe this only because of our limitations in reassembling the atomic or sub-atomic structure back to full constitution?

In other words, if, when a young bird died by hitting a tree, I had the apparatus to reassemble its structure into the exact atomic form it had before it flew into the tree, would it be alive as it was before? I think the idea of life as being explicable in terms of matter, information and computation is interesting, because it leads to the question of whether life can be reconstituted given the ability to reassemble matter, or whether there is some law in nature that would preclude this.

To know whether the bird's life can be reconstituted after death by restoring the right atomic structure, we have to know what life is. There are practical problems with this. In the first place, we are alive, but we cannot step outside the state of being alive to measure its true complexity, and we therefore cannot look back in on it and know whether our judgements of the relations between life's constituent parts are accurate.

Yes, on the face of it, we know the difference between a live mouse and a dead one, but we don't know if the complex internal arrangements of substances that make up the mouse's state of being alive can be brought back once that state has ceased, or whether there are barriers unknown to us.

Our definition of 'life' seems to me far too simple to capture all the goings on. My thought experiment indicates two things: 1) definitions of life are probably arbitrary human constructs, shaped to remain consistent with the utility of the definition; and 2) the physical universe probably conceals enough complexity to render those definitions nigh on impossible to pin down once we begin the componential process.

Consequently, then, if we do ever get to the stage where some people think they can define life sufficiently to ascribe those properties to the cognition of artificial intelligence, you can bet your bottom dollar that there'll be all sorts of quarrels and protest groups much like there are now in the abortion debates, the genetically modified food debates and the cloning debates.

 
Further reading:

Why Robots Won't Make Most Of Us Unemployed

Apocalyptic Chickens Coming Home To Roost
 

Saturday, 21 November 2015

You Don't Expect This Kind Of Confusion In The Financial Times



I am quite baffled that a Financial Times writer can exclaim that, as the technology industry becomes a more prominent user of capital, it is (to use her term) "shrinking the economic pie". You'd expect to see this sort of claim in the Guardian or the New Statesman, but not the Financial Times.

This is quite a bizarre confusion about the nature of economic growth. We use capital to produce what we consume. If we need less capital per unit of consumption, then we can consume more with our existing supply of capital. This is the primary definition of economic growth. Given the foregoing, it's quite obvious that increased technological capacity is not going to shrink the pie. The more efficient our information technology becomes, the less capital is needed and the lower the marginal cost of production - not to mention all the other ways that improved technology frees us up to do so many other things.
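The arithmetic behind this is simple enough to sketch. The numbers below are entirely made up for illustration: if technology halves the capital needed per unit of consumption, the same capital stock supports twice the consumption - the pie grows rather than shrinks.

```python
# Toy numbers, purely illustrative - not real economic data.
capital_stock = 1000.0            # fixed supply of capital
capital_per_unit_before = 10.0    # capital needed per unit consumed, before
capital_per_unit_after = 5.0      # technology halves the capital requirement

consumption_before = capital_stock / capital_per_unit_before
consumption_after = capital_stock / capital_per_unit_after

print(consumption_before, consumption_after)
# → 100.0 200.0
```

Measuring only the capital spent would make the second scenario look 'smaller'; measuring what we can actually consume shows it is twice as large.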

It seems the writer, Izabella Kaminska, whose profile perhaps tellingly says "Everything she knows about economics stems from a childhood fascination with ancient economies like that of the Roman Empire", is thinking of the economic pie in terms of how improved technology affects GDP - where, for example, things like computers and telephone calls used to be costlier for us. Whereas once it was expensive to speak to your mum at the other side of the city, or to cousin Betty in Australia, modern technology (text messaging, Facebook, Skype, email, instant messenger) means we don't spend as much. Yes, if you're going to measure calls to mother and cousin Betty in terms of GDP, then of course lower sums show up on that part of the balance sheet.

But see how this plays out in terms of life enhancement more broadly. Just think of all the things we can do with our online capacity: Facebook, Skype, and perhaps best of all our Google access to all the world's knowledge. And just think of all the ways that those technological enhancements add to our GDP, both directly and indirectly. Given that our economic growth is based on consumption, the goods and services we can now consume are obviously not causing the economic pie to shrink - they are enhancing it, because they are increasing consumption (and if you don't know why consumption is the mother of all economic growth, I explain why here).

I emailed Izabella Kaminska to say that I just cannot understand why such a claim was made in a respected publication like the FT when it's so obviously untrue. As yet I've had no reply, and I doubt I will - but if I do, I'll let you know with an 'Edit To Add' addendum.

Tuesday, 11 November 2014

Why Robots Won't Make Most Of Us Unemployed



Two articles came to my attention today: one in The Daily Mail Fail forewarning of robots bringing an end to half of today's jobs by 2025, and one in The Guardian lamenting an apparent decrease in social mobility. I've put them both together in one blog post because the error of reasoning that underlies their contentions is more or less the same in both cases - a failure to understand the elasticity of future changes.

Let’s start with the Mail’s fear of enhanced technology bringing an end to half of today's jobs by 2025. It’s certainly true that augmentations in technology will mean an end to many roles currently undertaken by humans (one need only think of all the jobs we used to do that are now being done by machines). But that doesn’t necessarily mean what the doomsayers believe it will mean – you see, as history shows quite clearly, humans have the capacity, imagination and skill to do other things.

Imagine if you were having this conversation with a journalist at the beginning of the Industrial Revolution, and he told you how fearful he was that these new farming, printing and transportation machines would bring a gradual end to humans’ ability to work. You'd simply have to tell him that a lot changed after the Industrial Revolution, and that those changes saw more people on the planet than ever before, and more jobs than ever before. The key reason why there probably is nothing to worry about is that what constitutes ‘work’ (where work means earning a living) changes with growing societies and increasing technological advancements.

In the early 19th century you wouldn't have been able to imagine how people could earn a living, say, making films or television programmes, doing stand-up comedy, handling complex domestic litigation, designing cars, driving taxis, flying planes, building speedboats, producing Kindles, playing football, working at a bowling alley, advertising on websites, fixing telephone lines, or analysing DNA or quantum mechanics.

The same is true of this generation - the future 'work' that lies ahead is currently bound by technological limitations and unawareness of the activities that are not yet jobs but will be one day. As technology increases and those robots do things we used to do, we go on to do things we never used to do. In other words, we lose jobs thanks to technology (and make our lives a little easier in the process) and we create jobs thanks to ingenuity. That the Mail (and, it turns out, The Guardian and The Telegraph) have been so short-sighted in their panic reflects poorly on them.

I now turn to The Guardian's take on the apparent decrease in social mobility. Dr John Goldthorpe, a collaborator on the analysis, said:

“For the first time in a long time, we have got a generation coming through education and into the jobs market whose chances of social advancement are not better than their parents, they are worse.”

In illustrative terms, something similar to the above is happening, except the effects are being seen in the labour market. That is to say, as job-types change, we see different flows in social mobility too. Once upon a time, owning lots of land was the key to having the most social mobility; now it is not so important. There is a decrease in social mobility in some quarters of society, but it is offset by increases in social mobility in other areas of the labour market.

Moreover, let us not forget another important fact that was omitted in the article - that general prosperity and well-being are about more than just earnings and career prospects. A proper analysis must factor in all sorts of other pertinent things like improvements in technology, services, scientific capabilities, access to knowledge, health and medicine, worldwide communication, and so forth, that give this generation so many huge advantages that their parents and grandparents never had.

All this does, though, lead me on to what I think is the most important issue related to the above – that we have a generation of young people for whom upward social mobility may be like a passing cloud they can never catch up with. The spectres highlighted in the articles above are not so much a worry for the reasons the writers claimed – they are a worry because there are a large number of young people who lack the basic literacy, numeracy, social awareness, family support, mental health, hope and aspiration to be a force in the job market, or in many cases, get a foot on the ladder at all.

Social mobility, like natural selection in biological evolution, is a strong driving factor in human progression. People at a young age look to climb the social ladder, increasing their skills and earnings along the way - which means that people with better abilities are generally (not always, but often) in higher positions.

Thanks to what were at the time up-and-coming advancements, like steam-powered cotton mills, coal mining, increased agricultural machinery and major increases in the production of metals, textiles and many other manufactured goods, there were thousands of job opportunities emerging for people at the incipient stages of the Industrial Revolution - creating a new middle class and transferring wealth from the richer faction of society down to those now taking part in industry. As this continued, 19th-century social mobility rose fairly consistently across the UK population, as opportunity to work begat further opportunities to work. While it was far from all rosy, these were the embryonic foundations for what is now a very prosperous modern Britain. Similar developments are occurring throughout the world in countries that are currently as poor as Britain was a century or so ago.

There is currently a generation of young disenfranchised people that may not have the kind of leg-up that their 19th century counterparts had. Yes, it's true that in terms of quality of life and access to things, the current crop have it astronomically better than those before them - but a labour market that continually manages to replace more menial human tasks with machine capability may well struggle to find jobs for many of today and tomorrow's youth who lack the basic literacy, numeracy, social awareness, family support, mental health, hope and aspiration to climb the ladder. And if that is the case - it is there that the State will have to show its mettle and give them much more of a helping hand than is currently the case.
 

Friday, 6 December 2013

Apocalyptic Chickens Coming Home To Roost


Last weekend I was in a car that could be programmed to park itself - a feat that would have been unimaginable a century ago. Of course, the irony is that 'unimaginable' is precisely the wrong word, because by 'unimaginable' I actually mean something like 'only imaginable because of our ability to conflate science and fiction to create science fiction'. As technology continues to enhance our lives, it's evident that science fiction and empirical science are interesting bedfellows, engaged in an on-off love affair.

Sometimes science fiction becomes scientific reality, as in the case of self-driving cars, which would have been purely fictional in previous decades (other things that spring to mind are robot limbs, invisibility, space-travel, cloning and shape-shifting, to name a few). And sometimes scientific discovery fuels the imagination for science fiction, as in the case of special relativity, quantum physics, electromagnetism, genetic viruses and evolution by natural selection.

Some people, though, take their considerations too far. Influenced by a few blockbuster sci-fi films (Terminator 3 springs to mind, although I'm sure there are others) some people express concern that our robot creations will one day take on a life of their own and develop or evolve a level of sinister malevolence that will engender world domination and bring ultimate doom on mankind.

The people worried about this are confusing fiction with reality. It makes no sense to talk of human-created machines being more sinister than humans, in the same way that it makes no sense to talk of a beaver dam being more proficient than the capabilities of beavers, or a Thomas Hardy novel that's more romantically tragic than the author could produce with his own creative mind.

When it comes to the man-machine matrix, the worst we could create in machine-form is limited to the worst that is in us - there is no possibility of computers doing anything more sinister or destructive than humans could construct themselves. So when people worry about future robot creations precipitating our doom, they are really worried about other humans precipitating our doom - which, if history is anything to go by, isn't an entirely unrealistic fear.

