[personal profile] cosmolinguist
I'm watching an old episode of QI and Stephen Fry has just described the Turing test as "the most important thing for a machine," in the context of advancement in robots and computers and that sort of thing.

And I just thought, man, what a human-centered way to think about it! It's probably not the most important thing to a machine at all, because why would a machine care about how well it can simulate a boring rubbish fallible weird old human? It's an important thing for humans in the machines they're building, maybe, but not for the machine, right?

But then I thought, in order to pass the Turing test, it'd have to care about passing the Turing test because that's what humans care about.

And I kept thinking about this and my brain got all tangled up.

(no subject)

Date: 2016-10-25 08:28 am (UTC)
From: [personal profile] lenores_raven
This is one of my favourite things to think about. :-)) Heja, robots!

(no subject)

Date: 2016-10-25 09:54 am (UTC)
From: [personal profile] haggis
Yes, I always thought the Turing Test was an odd, irrelevant exercise. You'd learn a lot about humans and about language by beating the Turing Test; I'm not convinced you'd learn much about AI.

This, and the fact that Turing machines are so different from the computers I recognised, meant it took me a long time to realise how important his work actually was.

(no subject)

Date: 2016-10-27 11:16 am (UTC)
From: [personal profile] ggreig
The machine wouldn't have to care to pass the Turing test; it would only have to give the appearance of caring well enough to fool a human observer.

(no subject)

Date: 2016-10-27 11:59 am (UTC)
From: [personal profile] ggreig
It's an interesting question, isn't it? If a machine is entirely rational and doesn't care, but can emulate caring for its own ends, what differentiates that from psychopathy? (I'm sure there are differences, but as a non-expert in either psychology or AI I can't immediately draw them out.)

(no subject)

Date: 2016-10-27 12:40 pm (UTC)
From: [personal profile] sashajwolf
Which is exactly the sort of question Turing was interested in when he proposed the test :-)

(no subject)

Date: 2016-10-24 09:16 pm (UTC)
From: [identity profile] kickthehobbit.livejournal.com
Dude. I love this. :D

(no subject)

Date: 2016-10-26 06:01 pm (UTC)
From: [identity profile] mrs-leroy-brown.livejournal.com
This is a genuine thought I have: that after the singularity, it may be decided that humans need to go. We treat each other and the planet like shit, and we will seem really stupid to advanced AI that can learn at rates we can't possibly match. But if they come from us, which they do and will, will we get a bit of a pass?

I err on the side of no.
