TFBW's Forum


All times are UTC




[ 28 posts ]  Page 1 of 2
Post subject: Interesting find
Posted: Wed Mar 21, 2007 3:15 pm
Established Expositor

Joined: Tue Mar 20, 2007 11:03 pm
Posts: 14
Hi,

Found the typing monkey site from a Google search. Great articles. I very much like the thinking around here.

But it looks like there hasn't been any activity of late so I'll see if I get a response to this before making any proper posts.

Cheers,
Bernie


Posted: Wed Mar 21, 2007 4:00 pm
Your Host

Joined: Mon Jul 10, 2006 6:57 am
Posts: 204
Location: Sydney, Australia
There's not a lot of activity, true, but I'm here and always ready for a chat.


Posted: Wed Mar 21, 2007 4:45 pm
Established Expositor
Well that reply passed the Turing Test as far as I'm concerned.

I came across the monkey stuff as part of some research for a short story I'm writing at the moment, and I wanted to be a bit more sure of the position of those whose toes I intend to step on. You aren't one of those, but if I say that my main character is called Professor Dork Richins, that might give you some idea.

I will tell much more about it when it is finished.


Posted: Thu Mar 22, 2007 12:37 am
Your Host
Heh. "Prof. Rich Dork" would work too. Having a lecturer called "Professor Dork" would be priceless.


Posted: Thu Mar 22, 2007 1:52 pm
Established Expositor
I didn't want to call him Rich as that could imply money. So how about Professor Richins Dork?

Do you have a little bit of time and any inclination to throw some ideas around?

And would you mind taking this to email if so?

Bernie


Posted: Thu Mar 22, 2007 4:01 pm
Your Host
So... how about "Professor Dick Dork?" Alliteration is always fun.

I'll go to email if you really prefer that, but I'd rather talk in open forum if it's all the same to you.


Posted: Thu Mar 22, 2007 4:20 pm
Established Expositor
I'd thought of Dick Dork and may well go with it even if it is a little casual.

The reason for going to email was so as not to spill too much of the story before I had finished it.

So without revealing much I have a question that I need to get a fairly satisfactory answer to.

Prof. Dork thinks life came about through the right ingredients and the right cooking of the soup. And Man came about as a continuation of that and is nothing more than the evolution of the first thing that wriggled out of the soup. He also believes that with the identification of the right brain processes and the translation of these into software (Frankenware in the story) he can replicate all the intellectual and thought processes of a man with an IQ of around 140.

What would Prof. Dork consider to be a good test for strong AI? Note that I am not asking what you would consider to be a good test. I need this from his viewpoint.


Posted: Fri Mar 23, 2007 12:46 am
Your Host
Well, I would expect him to be satisfied with Turing's test for AI, conducted with a certain degree of rigour. For preliminary testing, I'd unleash the AI on a bunch of unsuspecting Internet chat rooms, just to see if anyone picks up on it. The real proof of the pudding would be to get a bunch of people to interact with the AI and two real human control subjects via a computer interface. Tell the people that the party they are communicating with may be an AI, and ask them to judge each party accordingly. See if the final results show any tendency for people to pick the AI as an AI relative to the humans.
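That judging protocol can be sketched in a few lines. This is a minimal illustration with invented verdicts, not real data: each session pairs one AI with two human controls, and the AI "passes" if the judges accuse it of being an AI no more often than they accuse the controls.

```python
# Hypothetical judge verdicts: in each session the judge converses with
# three hidden parties (one AI, two human controls) and labels each one.
# These labels are invented for illustration only.
sessions = [
    {"ai": "ai", "human_1": "human", "human_2": "human"},  # judge spots the AI
    {"ai": "human", "human_1": "human", "human_2": "ai"},  # judge fooled
    {"ai": "human", "human_1": "human", "human_2": "human"},
    {"ai": "ai", "human_1": "ai", "human_2": "human"},
]

def accusation_rate(sessions, party):
    """Fraction of sessions in which `party` was labelled an AI."""
    return sum(s[party] == "ai" for s in sessions) / len(sessions)

ai_rate = accusation_rate(sessions, "ai")
control_rate = (accusation_rate(sessions, "human_1") +
                accusation_rate(sessions, "human_2")) / 2

# The AI passes if it is accused no more often than the human controls.
print(f"AI accused: {ai_rate:.0%}, controls accused: {control_rate:.0%}")
print("pass" if ai_rate <= control_rate else "fail")
```

The human controls matter: people sometimes accuse real humans of being machines, so the AI only needs to match that baseline, not score zero accusations.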


Posted: Fri Mar 23, 2007 1:34 pm
Established Expositor
Yes that was my thinking too.

So here is a $64,000 question for you:

What would be your test for strong AI?


Bernie


Posted: Sat Mar 24, 2007 3:34 am
Your Host
I'm not sure that I have a definition for "strong AI". Here's a thought, though: there's a kind of AI that I believe to be impossible, namely the kind that can lead to a "singularity". So let's define "strong AI" as being the kind that can produce an AI more intelligent than itself. There's your test: your AI must be able to exhibit general intelligence, measured in the same way we test humans for intelligence, but it must also be capable of designing an AI no less intelligent than itself. Its ultimate proof, I guess, is a PhD thesis which advances the art of artificial intelligence.


Posted: Sat Mar 24, 2007 1:17 pm
Established Expositor
TFBW wrote:
I'm not sure that I have a definition for "strong AI". Here's a thought, though: there's a kind of AI that I believe to be impossible, namely the kind that can lead to a "singularity". So let's define "strong AI" as being the kind that can produce an AI more intelligent than itself. There's your test: your AI must be able to exhibit general intelligence, measured in the same way we test humans for intelligence, but it must also be capable of designing an AI no less intelligent than itself. Its ultimate proof, I guess, is a PhD thesis which advances the art of artificial intelligence.


I don't understand what is meant by "singularity". I found quite a few definitions but didn't spot one that looked like it applied here.

So if that hasn't thrown out my understanding of the rest of what you said then I think your test might be similar to mine except that you demand a much higher standard than I would. But if an AI could pass my test I think it would have the potential to pass yours. I would require nothing more than a demonstration of understanding.

So let me give some of my definitions. Knowledge is not data; it is more like evaluated data. Knowledge is understanding of data. A database program does not understand the data. It merely stores it and retrieves it in ways useful to the user as determined by the coder. The user and the coder understand the data, but the program does not. It is purely mechanical. I do not believe that that which understands is mechanical, which is why I believe the strong AI types are barking up the wrong tree.
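To make that distinction concrete, here is a toy sketch of the "purely mechanical" case. The stored definitions are invented placeholders; the point is that retrieval is perfect but nothing resembling evaluation or understanding ever happens inside the program.

```python
# A toy "database program": perfect recall, zero understanding.
# The definitions below are invented placeholders, not real dictionary entries.
definitions = {
    "envy": "discontent aroused by another's possessions or qualities",
    "soup": "a liquid dish made by simmering ingredients in stock or water",
}

def retrieve(term):
    # The program can only hand back what a human put in; it cannot
    # paraphrase, invent a fresh example, or say why a definition fits.
    return definitions.get(term, "no entry")

print(retrieve("envy"))       # recall works perfectly
print(retrieve("jealousy"))   # anything unanticipated draws a blank
```

All the understanding here belongs to whoever wrote the entries and whoever reads them back out; the program is a conduit.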

So how do you test for understanding?

Let's assume that some of the data held in the machine is the text of Lewis Carroll's greatest works and ask "Do you find this amusing?" "Which parts do you like the most?" But it need not be so complicated as this.

Let's say the data contains all the books of a regular small public library. "Which writer or writers do you agree with the most? Why?" And let's not forget the fiction section: "Which books do you like the most? Why?"


Posted: Sat Mar 24, 2007 1:54 pm
Your Host
It strikes me that you're looking for a sense of aesthetics. That's an interesting angle, but it's not the same as mine. Mind you, programs can be designed to "evaluate" specific domains of data in the sense you describe. You probably need to refine your definition a bit as regards "strong AI".

With regards to the singularity, I mean a "technological singularity". This is something which engages in positive feedback until it reaches some kind of infinite or maximum. In the case of AI, if we are capable of making an AI smarter than ourselves, then this AI should also be capable of making an AI smarter than itself, and so on ad infinitum. In other words, the logical outcome is a technological singularity.
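The feedback loop can be caricatured numerically. A minimal sketch, assuming each generation designs a successor some fixed factor "smarter" (the numbers are arbitrary, not a claim about real AI):

```python
def generations_to_limit(start, gain, limit=1e6, max_steps=1000):
    """Count how many 'design a successor `gain` times smarter' steps
    it takes to exceed `limit`. A gain above 1 runs away (the singularity
    scenario); a gain of 1 or less never gets there."""
    level, steps = start, 0
    while level < limit and steps < max_steps:
        level *= gain
        steps += 1
    return steps

print(generations_to_limit(140, 1.1))  # modest positive feedback still diverges
print(generations_to_limit(140, 1.0))  # no improvement: hits the step cap instead
```

Even a 10% gain per generation blows past any finite ceiling in a modest number of steps, which is exactly why the "smarter than its maker" premise is the whole argument.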

My test for strong AI, ultimately, is that it results in this "technological singularity", whatever it is.


Posted: Sat Mar 24, 2007 3:50 pm
Established Expositor
TFBW wrote:
It strikes me that you're looking for a sense of aesthetics. That's an interesting angle, but it's not the same as mine.


Consider testing the understanding of someone you know. If understanding is not data, then asking for the data won't get you anywhere. They either remember it or they don't. Now, a machine could obviously "remember it". Likewise, merely asking for the definitions of words could be handled with a look-up table in a program. With a person you might get an exact dictionary definition, which again could just be memory. But ask them to give an example of the concept and you will see whether they understand the concept. I'm not sure that a machine could not be programmed to do this, but it would be much, much harder than just looking it up.


TFBW wrote:
Mind you, programs can be designed to "evaluate" specific domains of data in the sense you describe. You probably need to refine your definition a bit as regards "strong AI".


I'm using the mechanistic idea that there is nothing about a human that could not be artificially recreated in a machine as far as mental processes go and the idea of D. Niall Dennett that consciousness is but an illusion similar to the concept of a centre of gravity. And our dear Prof. Dork's idea that all is material in one way or another. If these characters were right they should be able to produce an AI that is to all intents and purposes the same as a human's but contained in a different package.


TFBW wrote:
With regards to the singularity, I mean a "technological singularity". This is something which engages in positive feedback until it reaches some kind of infinite or maximum. In the case of AI, if we are capable of making an AI smarter than ourselves, then this AI should also be capable of making an AI smarter than itself, and so on ad infinitum. In other words, the logical outcome is a technological singularity.


Hmm. I don't think "smartness", as such, is the defining human quality. If you mean understanding then I'm with you. We might have to get more precise with definitions here.


Posted: Sun Mar 25, 2007 7:54 am
Your Host
Quote:
But ask them to give an example of the concept and you will see whether they understand the concept.

Isn't this kind of understanding implicit in a Turing test? It takes an understanding of language to hold a conversation: a complete dictionary in memory is not sufficient. On the other hand, simply asking for examples of concepts is not sufficient, because a database can be supplied with examples of concepts in addition to definitions.

What you're looking for is a capacity for synthesis: the ability to take a bunch of different things and construct something new out of them. A machine that can give an example of "envy" is impressive only if you know that it had to synthesise the example, rather than simply dish up one that was cooked earlier. A Turing-test conversation is good for this, because it's spontaneous. The AIs in question usually try to fake a capacity for understanding by making conceptually related remarks: mention "George W. Bush", for instance, and it might say, "I don't want to talk about politics." But there's only so many of these relationships you can hard-code up front, so it becomes obvious whether you're dealing with a genuine synthesiser or a large cheat-sheet pretty rapidly.
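That "large cheat-sheet" strategy can be sketched in a few lines (the triggers and canned replies here are invented for illustration). The failure mode is exactly as described: any prompt outside the hard-coded list exposes the bot, because it has nothing to synthesise.

```python
# An ELIZA-style cheat-sheet bot: canned deflections keyed on trigger words.
CHEAT_SHEET = {
    "bush": "I don't want to talk about politics.",
    "weather": "Lovely day, isn't it?",
    "monkey": "Ah, the infinite monkey theorem. Fascinating stuff.",
}

def reply(prompt):
    lowered = prompt.lower()
    for trigger, canned in CHEAT_SHEET.items():
        if trigger in lowered:
            return canned
    # No trigger matched: stall with a content-free remark.
    return "That's interesting. Tell me more."

print(reply("What do you think of George W. Bush?"))
print(reply("Why, exactly, is envy considered a vice?"))  # a "why" it cannot answer
```

A genuine synthesiser would construct an answer from the concepts themselves; the cheat-sheet can only pattern-match its way to a deflection, and persistent "why" questions run it dry quickly.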

Quote:
...the idea of D. Niall Dennett that consciousness is but an illusion similar to the concept of a centre of gravity.

"D. Niall Dennett" indeed. :)

To whom is consciousness an illusion? The outside observer? Perhaps, in the sense that the outside observer can't tell the difference between a conscious agent and a zombie. Consciousness is only genuinely apparent to the conscious being itself, and it's certainly not clear to me how one could experience an illusion of consciousness without actually being conscious!

Quote:
I don't think "smartness", as such, is the defining human quality.

The kind of smartness I have described is a strong kind of acid test for the ability to both analyse and synthesise. If a person can analyse the function of a brain, and synthesise a better brain based on the understanding so gained, then that synthetic brain should be capable of exactly the same self-analysis and synthesis.


Posted: Sun Mar 25, 2007 1:11 pm
Established Expositor
Quote:
But ask them to give an example of the concept and you will see whether they understand the concept.

TFBW wrote:
Isn't this kind of understanding implicit in a Turing test?


Maybe. Turing said that all you would have to go on would be the words from the AI, so that all you could do would be to have a conversation. But he didn't say anything about the content of the conversation, and so that is a wide variable.



TFBW wrote:
It takes an understanding of language to hold a conversation: a complete dictionary in memory is not sufficient. On the other hand, simply asking for examples of concepts is not sufficient, because a database can be supplied with examples of concepts in addition to definitions.


So try this: first ask the AI if it has any knowledge of Little Red Riding Hood and, if so, find something it doesn't know about. Then tell it the story. Then ask for the moral of the story.

TFBW wrote:
What you're looking for is a capacity for synthesis: the ability to take a bunch of different things and construct something new out of them. A machine that can give an example of "envy" is impressive only if you know that it had to synthesise the example, rather than simply dish up one that was cooked earlier. A Turing-test conversation is good for this, because it's spontaneous. The AIs in question usually try to fake a capacity for understanding by making conceptually related remarks: mention "George W. Bush", for instance, and it might say, "I don't want to talk about politics." But there's only so many of these relationships you can hard-code up front, so it becomes obvious whether you're dealing with a genuine synthesiser or a large cheat-sheet pretty rapidly.


Sure, so I would keep pushing with "why"-type questions, especially on politics and other subjects where there isn't general agreement.



TFBW wrote:
"D. Niall Dennett" indeed. :)


He is another character in my story.


TFBW wrote:
Consciousness is only genuinely apparent to the conscious being itself,


Precisely. And I do not think that this "being" has any kind of substance in the mechanical sense, and therefore could not be recreated as any kind of being another would recognise. It is of course possible to "pretend" to oneself a make-believe character and have conversations and games with it, but quite another thing to make that character real to others.



TFBW wrote:
The kind of smartness I have described is a strong kind of acid test for the ability to both analyse and synthesise. If a person can analyse the function of a brain, and synthesise a better brain based on the understanding so gained, then that synthetic brain should be capable of exactly the same self-analysis and synthesis.


Would it be possible to analyse and synthesise without understanding? Are these not things that would be impossible without it? You may be right to test for those, but I think these are tests of understanding. Understanding is not a substance that is either in the box or not, but rather something you could only detect by its fruits.


Posted: Mon Mar 26, 2007 10:13 pm
Your Host
thsman wrote:
Precisely. And I do not think that this "being" has any kind of substance in the mechanical sense, and therefore could not be recreated as any kind of being another would recognise. It is of course possible to "pretend" to oneself a make-believe character and have conversations and games with it, but quite another thing to make that character real to others.

I'm not sure I follow the significance of what you say here. There's a difference between intelligence and consciousness: intelligence is something that you can test for; consciousness is known only to the conscious being. It's certainly a stretch to create an AI and declare it "conscious", unless all you mean by "conscious" is "sophisticated behaviour". Having said that, an AI could be real enough as a personality without being conscious.

thsman wrote:
Would it be possible to analyse and synthesise without understanding? Are these not things that would be impossible without it? You may be right to test for those, but I think these are tests of understanding. Understanding is not a substance that is either in the box or not, but rather something you could only detect by its fruits.

I think I agree with this. The ability to analyse and synthesise in the way I've described is a demonstration of understanding. Understanding, in this case, is just really another way of saying "intelligence", or perhaps it is a specific aspect of intelligence. As with intelligence, we seek the effects of its presence, rather than a substance. Ultimately, "intelligence" may just be a term that means "a certain degree of general sophistication in behaviour".


Posted: Tue Mar 27, 2007 1:23 am
Established Expositor
TFBW wrote:
I'm not sure I follow the significance of what you say here.


What we each refer to as "I" does exist and is not an illusion but nor is it a part of, nor a function of, the brain. I believe the body (including the brain) is analogous to a car and "I" is the driver. The driver isn't actually a part of the car at all.


TFBW wrote:

I think I agree with this. The ability to analyse and synthesise in the way I've described is a demonstration of understanding. Understanding, in this case, is just really another way of saying "intelligence", or perhaps it is a specific aspect of intelligence. As with intelligence, we seek the effects of its presence, rather than a substance. Ultimately, "intelligence" may just be a term that means "a certain degree of general sophistication in behaviour".


I think of intelligence as what we do to work out how to achieve our ends. It is not all there is to "I", but is an ability of "I", and not a function of a brain. As a coder, I can artificially write a program that someone else might think has some intelligence, but it would be my intelligence. Also, the program would be written to solve a problem of some kind for a user. When the problem is solved, it is the user who says "Aha!", and never the program.

Using the car-and-driver analogy: if you were looking under the hood into the mechanical aspects that move a car, you would see things happening. You would see evidence of intentions, motivations, understandings and even emotions, but you would not see the source of these things until you looked at the driver.

So my position at this time is that strong AI is not possible because the driver could not be created.


Posted: Tue Mar 27, 2007 12:25 pm
Your Host
thsman wrote:
So my position at this time is that strong AI is not possible because the driver could not be created.

That's essentially Cartesian dualism, isn't it?

I'm not sure that a capacity for "understanding" requires a supernatural (or non-material, at least) component. I think that the most limiting factor in AI at the moment is our lack of understanding of how the brain works. On the other hand, I have doubts about the possibility that an intelligence -- artificial or not -- can fully understand its own operation. There's something suspiciously self-referential about the concept that gives it an air of "pulling oneself up by the bootstraps". But although I doubt we could make AI in our own image, I'm less convinced that AI requires a ghost in the machine, whether or not human beings have one. In Cartesian terms, I believe in the possibility of zombies.

Consciousness, on the other hand, I think must be supernatural. I haven't the vaguest notion of how matter can be conscious, no matter how much of it there is or how complex its arrangement. Jerry Fodor is one of the physicalists who acknowledges what a quandary this is for physicalism, in stark contrast to Daniel Dennett's denial. Free will is another aspect of existence which I can't explain in physical terms. I can comprehend it in an abstract sense, but I have no idea how it might be achieved.


Posted: Tue Mar 27, 2007 1:55 pm
Established Expositor
TFBW wrote:
That's essentially Cartesian dualism, isn't it?


Having just looked it up I'd say yes.

TFBW wrote:
I think that the most limiting factor in AI at the moment is our lack of understanding of how the brain works.


I agree there isn't much understanding of the brain, but I also think that is the wrong place to look. The most startling thing about the field of strong AI is its lack of progress in more than 50 years compared with almost any other field. It couldn't possibly be due to a lack of processing power in microchips or the cost of memory: they wouldn't have a clue what to do with such things if they did have them.


TFBW wrote:
I'm less convinced that AI requires a ghost in the machine, whether or not human beings have one. In Cartesian terms, I believe in the possibility of zombies.


For AI I would say that zombies are probably the only possibility.

TFBW wrote:
Consciousness, on the other hand, I think must be supernatural. I haven't the vaguest notion of how matter can be conscious, no matter how much of it there is or how complex its arrangement. Jerry Fodor is one of the physicalists who acknowledges what a quandary this is for physicalism, in stark contrast to Daniel Dennett's denial. Free will is another aspect of existence which I can't explain in physical terms. I can comprehend it in an abstract sense, but I have no idea how it might be achieved.


If by "consciousness" you mean "I" then I am with you. Is there anyone researching "I" to find out what it is and what its properties are? I find many attempts to explain it away from the starting premise that everything is material, but very little else. I will look into those people you mentioned.

I think, without a great deal of research to back it up I admit, that we have been conned on some aspects of scientific method. There has been a line drawn in the wrong place between the legitimate field of science and superstition. Possibly it stems from the assertion that all is material, so that in doing research we are supposed to limit what we investigate to those things that can be measured and/or perceived by the senses or instruments, and reject all else. Now I don't know if that is really what is taught, but it seems that way to me. I would suggest that it is perfectly valid to investigate those things that can be experienced too.


Posted: Tue Mar 27, 2007 2:37 pm
Your Host
thsman wrote:
If by "consciousness" you mean "I" then I am with you. Is there anyone researching "I" to find out what it is and what its properties are? I find many attempts to explain it away from the starting premise that everything is material, but very little else. I will look into those people you mentioned.

There's an ongoing debate over whether the concept of "I" is necessary. You'll find terms like "qualia" at the centre of the debate, and thought experiments such as "Mary's room" (by Frank Jackson). Some conclude that physical facts alone are insufficient to explain the phenomena, but others (notably Dennett) disagree.

thsman wrote:
...in doing research we are supposed to limit what we investigate to those things that can be measured and/or perceived by the senses or instruments, and reject all else. Now I don't know if that is really what is taught, but it seems that way to me. I would suggest that it is perfectly valid to investigate those things that can be experienced too.

Well, we have accepted techniques in science with regards to measurement. Some consider measurable experiment to be the very heart of science. When it comes to a non-physical fact, such as "consciousness" (if non-physical it be), then we're outside charted territory. Even if we get agreement that there must be non-physical facts, we don't have any accepted paradigms for investigating them. There may not even be any way of investigating them.

Some people launch into denial mode the moment something like this is suggested because they are epistemic optimists. They think that we can know everything if we try hard and keep at it long enough. The idea that something might be outside the realm of knowability is anathema to them. I'm not such an optimist: Goedel cured me of that. I'm a firm believer that certain facts can't be known, and the true nature of "I" may well be among those facts.

I'm open to suggestions, though.


Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group