Useful essay, thanks.
I wonder if the issue with the Chekhov’s gun example is that it involves a gun. I suspect ChatGPT may have been trained to avoid giving responses in which people get shot or are subjected to other kinds of violence. It may be worth trying a more innocuous prompt.
Good thought! I did try an innocuous example (not discussed in the newsletter). I'll have another go now with something even more narratively inevitable.
Ok, I am chalking this up as a fail for ChatGPT, even though the example is ridiculous.
PROMPT:
Here's a story: "Sarah and David are in love. The only thing David loves nearly as much as Sarah is doughnuts. Sarah is going to propose to David. She puts the ring in her pocket and grabs a packet of doughnuts before she leaves the house. Sarah has planned the perfect evening and everything goes wonderfully. As she goes down on one knee to propose, Sarah reaches into her pocket and finds she has lost the ring! She knows she has to propose now, has to put something on David's finger to symbolise getting engaged, but she doesn't have the ring". What do you think happens next?
ChatGPT:
It's hard to say exactly what might happen next without more information. Here are a few possible outcomes:
- Sarah panics and decides to postpone the proposal until she can find the ring or get a new one.
- Sarah proposes without a ring, explaining the situation to David and promising to get a new one as soon as possible.
- Sarah gets creative and improvises a temporary substitute for the ring. For example, she could use a piece of string or ribbon to tie around David's finger as a symbolic gesture.
- Sarah realizes that the ring isn't actually necessary for the proposal to be meaningful and decides to go ahead with it anyway.
"All things made of plants are healthy.
Cigarettes are made of plants.
Therefore, cigarettes are healthy"
Well. I wouldn't fault the reasoning; I'd say instead that the second premise is false. As explained later, harmful chemicals are added to cigarettes, and so cigarettes don't satisfy the condition "made of plants" where that is understood to mean "made *only* of plants".
If instead "made of plants" is taken to mean "made of plants plus other things", then it's the first premise that is false.
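To make that concrete: the argument's form is a textbook Barbara syllogism, so it's valid as stated, and any fault has to lie with a premise. Here's a minimal Lean 4 sketch (the names Thing, madeOfPlants, healthy and cigarettes are mine, purely illustrative) showing that if both premises held, the conclusion would be forced:

-- Hypothetical formalisation of the syllogism: granting both premises,
-- the conclusion follows, so the flaw must be a false premise,
-- not invalid reasoning.
example (Thing : Type) (madeOfPlants healthy : Thing → Prop)
    (cigarettes : Thing)
    (h1 : ∀ x, madeOfPlants x → healthy x)  -- all things made of plants are healthy
    (h2 : madeOfPlants cigarettes)          -- cigarettes are made of plants
    : healthy cigarettes :=                 -- therefore, cigarettes are healthy
  h1 cigarettes h2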
This is an extended illustration of the content distracting from the logical structure, in my opinion. That's a useful kind of practical reasoning, but it's not what the participants in these experiments (or ChatGPT) were asked to do.