Poisoned Wine Puzzle

Problem

An assassin has infiltrated your wine cellar where you keep 1000 pristine bottles of vintage. The assassin poisons one of the bottles before being startled and fleeing the cellar. You are left unsure which bottle of wine has been poisoned. However, you know the poison is powerful, and that even a diluted drop would be deadly. Moreover, you know the poison takes effect within 24 hours of imbibing. Once it takes effect, death is immediate.

Fortunately, you have a poison testing device. Less fortunately, you only have ten non-reusable testing cups for the device. So, for example, you might use a cup to test one bottle of wine for poison, but afterwards the cup cannot be used to test another. As with human ingestion, the poison can be detected when it takes effect, sometime within 24 hours of the test, and once it takes effect, it may be detected immediately.

Perhaps even less fortunately, you had intended to serve the 1000 bottles during a lavish party in exactly 24 hours. Clearly you cannot do so now. Always the optimist, you are sure there is a way to isolate the poisoned bottle using only the ten testing cups, leaving the remaining 999 bottles for the party.

Question

How do you find the poison?

Solution

Let's begin with some intuitive labels. Name the testing cups A, B, C…J. Name the bottles 1-1000.

Let's try on a few putative solutions, observing where they fail and why, and also where they succeed and why.

First Pass - Clearly, we might test ten bottles of wine, one for each cup. That is, we might take some small portion of wine from, say, bottle 1 and place it in testing cup A. Similarly, we might take some small portion of wine from bottle 2 and place it in testing cup B, etc. At some time within 24 hours we will then know if one of the tested bottles has been poisoned.
However, since the testing cups are non-reusable, this approach would leave us with 990 bottles untested. A more subtle approach is warranted.

Second Pass - We might instead divide the 1000 bottles into ten allotments of 100 bottles each, e.g. 1-100 would form an allotment, as would 101-200, and so on. We might then take, say, a small portion of wine from each of the 100 bottles in allotment 1-100, which we place in testing cup A. Similarly, a small portion of wine from each of 101-200 would be placed in testing cup B, and so on. At some time within 24 hours we will then know if one of the allotments contains the poisoned bottle of wine.    
This pass is a significant improvement over the first. To see why, assume the poison is in bottle 425. Then testing cups A-D and F-J will not indicate poison has been detected, while testing cup E will. Of course, that leaves us with poison somewhere among 100 bottles. We assumed the poison was in bottle 425, but we would've achieved the same result had we assumed the poison was in any of bottles 401-500. In the interest of having a stellar party rather than merely an amazing party, let's press on.

Third Pass - Keep the second pass attempt, but add that A also tests bottles with names ending in "1", B tests those ending in "2", C in "3" and so on until we reach J which tests, in addition to 901-1000, those bottles with names ending in "0".
The result seems to significantly narrow down the poison. For example, say the poison is in bottle 467. Then E will indicate poison since E tests 401-500. So will G, since G tests not only 601-700, but also every bottle ending in "7". Unfortunately, both E and G overlap in many other places, e.g. 407, 417, etc. This pass offers little advantage over the last. We must continue.

Final Pass - Each preceding pass was an attempt to uniquely identify each bottle by some distribution of samples over testing devices. You're an optimist, but the preceding failures might lead you to think the glass is half empty, i.e. there is no such arrangement. Note, however, there are 1000 bottles and 10 testing cups. Each testing cup either will or will not test a given bottle of wine, so there are 2^10=1024 distinct ways a bottle might be distributed among the cups. Since there are only 1000 bottles, we should be able to uniquely identify each bottle. We need only find the right arrangement.

And these observations reveal the lines along which we may find our solution. With respect to each bottle of wine, a testing cup might take one of two 'values'. For the purposes of book-keeping, if a testing cup tests a bottle, say that cup is evaluated as 'True' at that bottle, or 'T'. Otherwise, say that cup is evaluated as 'False' at that bottle, or 'F'. If, for example, testing cup A tests bottle 456, then A is evaluated as 'T' at that bottle; if not, then 'F'. I chose these values to better illustrate our unique identification using a feature of a commonly taught algorithm for evaluating sentences in propositional logic: truth tables. Appealing to truth table distributions of 'T' and 'F' will provide the needed distribution of 1000 bottles of wine (rows) over 10 testing cups (columns).

But working with a truth table with over 1000 rows is cumbersome. Let's illustrate instead with a smaller example and then generalize. Specifically, let's examine our solution with only 3 testing cups, resulting in 2^3=8 rows in the truth table, each of which will correspond to a bottle of wine:


     A    B    C
1    T    T    T
2    T    T    F
3    T    F    T
4    T    F    F
5    F    T    T
6    F    T    F
7    F    F    T
8    F    F    F
 

Each row is distinct. Then according to the table every testing cup will test bottle 1, A and B will test 2, A and C will test 3, and only A will test 4. A will not, however, test any of the remaining bottles. Rather, B and C will test 5, B will test 6, C will test 7, and no cup will test 8. Then if no cup indicates poison has been detected, it is in bottle 8. If only C, then 7; only B, then 6; only A then 4. If both B and C, then 5; A and C, then 3; A and B, then 2. If all testing cups indicate poison has been detected, then the poison is in bottle 1.

It is easy to see how our illustration of 3 testing cups and 8 bottles generalizes to the case of 10 cups and 1000 bottles. In that case, as stated, we are able to uniquely identify 2^10=1024 bottles, and hence, 1000 bottles with some to spare.
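The generalization is easy to sketch in code. Below is a minimal Python illustration (the function names and the 0-based bottle numbering are my own conventions): bit i of a bottle's number says whether cup i samples that bottle, i.e. whether that cup is 'T' at that bottle, and the set of cups that detect poison reads back the poisoned bottle's number.

```python
def cups_for_bottle(bottle, n_cups=10):
    """Return the set of cups (0-indexed) that sample this bottle.

    Cup i samples the bottle exactly when bit i of the bottle's
    number is 1 -- the 'T' entries in the truth table."""
    return {i for i in range(n_cups) if bottle >> i & 1}

def identify(poisoned_cups):
    """Recover the bottle number from the set of cups detecting poison."""
    return sum(1 << i for i in poisoned_cups)

# Suppose bottle 425 is poisoned (bottles numbered 0-999).
assert identify(cups_for_bottle(425)) == 425
# Every one of the 1000 bottles gets a unique set of cups.
assert len({frozenset(cups_for_bottle(b)) for b in range(1000)}) == 1000
```

With ten cups there are 2^10 = 1024 possible detection patterns, so 1000 bottles fit with room to spare, just as the table argument shows.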

So, retain your optimism. It looks like you'll have a party to remember (or not!) after all. Though, keep an eye on the guest list; there's an assassin out there looking for you.

Hat of a Different Color (Part II)

Review

In a previous post we began examining the Three Hats Puzzle. Here we complete our solution. So far, we've introduced axioms characterizing a domain of five hats (two blue and three red) and three individuals (Alex, Barbara, and Cherise). We've additionally asserted that Alex and Barbara may see the hats of other participants, but Cherise cannot, and that neither Alex nor Barbara know what color hat they are wearing, while Cherise knows her hat color. From these axioms we were able to infer if Alex (or Barbara) sees two hats, then they cannot both be blue. This should sound correct, since if, say, Alex saw two blue hats, and since there are only two blue hats, then Alex would know his hat color. Since he doesn't, he doesn't.

To solve the puzzle, however, we must infer that Cherise knows her hat color.

Informal Solution

We will ultimately introduce first-order axioms and infer the solution to the case, but first we should examine the case informally to see what axioms we might need. Let's think about Alex for a moment.

  • We know Alex may see two red hats. To see why, note that if Alex sees two red hats, then he does not know what color hat he is wearing, i.e. he could be wearing a blue hat or red hat. Moreover, Barbara gains no new information based on Alex's claim that he does not know what color hat he is wearing, as all three participants could be wearing red hats.
  • We know Alex may see a red hat and a blue hat, but here we must be careful. The distribution of the hats matters. It is permissible for Alex to see Barbara wearing a blue hat and Cherise wearing a red hat. Then Alex's hat may be blue or red. Moreover, Barbara gains little information from Alex's claim. If Alex's hat is, say, red and Cherise's hat is red, then Barbara's hat may be red or blue.
  • Note, however, Alex cannot see Barbara wearing a red hat and Cherise wearing a blue hat. This leads to contradiction. To see why, assume Cherise is wearing a blue hat. Then both Alex and Barbara see Cherise wearing a blue hat. Alex may speak truly when claiming he does not know what color hat he is wearing, as he may see Barbara wearing a red hat and Cherise wearing a blue hat. Nevertheless, this option leaves Barbara speaking falsely. For if Barbara sees Cherise wearing a blue hat and knows that Alex cannot see two blue hats and also knows that Alex does not know what color hat he is wearing, then Barbara can infer that her hat must be red. Clearly, if Barbara's hat were blue then Alex would know what color hat he's wearing, as there are only two blue hats.

Our informal solution results in only two options for Alex. Alex either sees two red hats or Barbara wearing a blue hat and Cherise wearing a red hat.

More important than all that, though, is that we've stumbled upon our solution to the problem! On either of these options it must be the case that Cherise is wearing a red hat. Indeed, this is information Cherise may infer from the constraints of the case. In other words, Cherise knows what color hat she is wearing without being able to see any hats at all.

Strengthening our Axioms

Our only task remaining is to formalize our informal solution, and verify the intended results follow from our formalization. To our axiom set we add the binary relation "W" with an intended reading that the first (individual) wears the second (hat).

1. ∀x∀y∀z((W(x,z) & W(y,z)) -> x=y)
Only one individual may wear a given hat
2. ∀x∀y∀z((W(x,y) & W(x,z)) -> y=z)
Every individual wears only one hat
3. ∀x∀y(W(x,y) -> (I(x) & H(y)))
Only individuals wear hats

We also require that seeing a hat entails the hat is being worn, but no one sees the hat they are wearing.

4. ∀x∀y(S(x,y) -> ∃z(W(z,y)))
If someone sees a hat then it's being worn by someone
5. ~(∃x∃y(S(x,y) & W(x,y)))
No one sees the hat they are wearing

Supplementing these general axioms are the following facts, reflecting that, say, anything Alex sees is either Barbara's or Cherise's hat (and mutatis mutandis for Barbara).

6. ∀x(S(a,x) -> (W(b,x) v W(c,x)))
Alex sees the hats Barbara and Cherise wear
7. ∀x(S(b,x) -> (W(a,x) v W(c,x)))
Barbara sees the hats Alex and Cherise wear

Finally, we add the fact that if Barbara sees Cherise wearing a blue hat, then Barbara knows what color hat she is wearing.

8. ∀x((S(b,x) & W(c,x) & B(x)) -> K(b,b))
If Barbara sees Cherise wearing a blue hat, Barbara knows what color hat she is wearing

These axioms and facts generate a class of models in which the following are permissible:

9. ∃x∃y(S(a,x) & S(a,y) & x≠y & R(x) & R(y))
Alex sees two red hats
10. ∃x∃y(S(b,x) & S(b,y) & x≠y & R(x) & R(y))
Barbara sees two red hats
11. ∃x∃y(S(a,x) & S(a,y) & x≠y & B(x) & R(y) & W(b,x) & W(c,y))
Alex sees Barbara wearing a blue and Cherise wearing a red hat

But, importantly, they rule out the following possibilities:

12. ∃x∃y(S(a,x) & S(a,y) & x≠y & R(x) & B(y) & W(b,x) & W(c,y))
Alex sees Barbara wearing a red hat and Cherise wearing a blue hat
13. ∃x(S(a,x) & B(x) & W(c,x))
Alex sees Cherise wearing a blue hat

Hence, we are able to infer that Cherise is wearing a red hat, i.e. the following is a theorem:

14. ∃x(W(c,x) & R(x))
Cherise is wearing a red hat

This is essentially what we intended to show. The resulting axiom set thus far can be found here. (Exercise: Show Cherise knows the color of her own hat).
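The theorem can also be sanity-checked by brute force, since the space of hat distributions is tiny. Here is a minimal Python sketch (the encoding of the announcements is mine, not the Prover9 axiom set):

```python
from itertools import permutations

HATS = ['R', 'R', 'R', 'B', 'B']  # three red hats, two blue hats

def consistent(a, b, c):
    """True when (Alex, Barbara, Cherise) fits both announcements."""
    # Alex sees b and c; if both were blue he'd know his hat is red.
    if b == 'B' and c == 'B':
        return False
    # Barbara sees a and c; if both were blue she'd know her hat is red.
    if a == 'B' and c == 'B':
        return False
    # Having heard Alex, Barbara knows b and c aren't both blue; so if
    # c were blue she could infer her own hat is red -- yet she claims
    # not to know.
    if c == 'B':
        return False
    return True

survivors = {(a, b, c) for a, b, c in permutations(HATS, 3)
             if consistent(a, b, c)}
# In every surviving distribution Cherise wears red.
assert all(c == 'R' for _, _, c in survivors)
```

The surviving distributions are exactly the two options of the informal solution: Alex sees two red hats, or Alex sees Barbara in blue and Cherise in red.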

Checking Our Work

Proofs were generated with Prover9 and models were checked with Mace4. If you'd like to check the models yourself, I advise generating a model with Mace4, then looking at the 'cooked version'. You can make the model even more perspicuous by copying it into Notepad++, hitting ctrl+F, navigating to the "Mark" tab, then entering "-*" (no quotes) with "Bookmark Line" selected. This will bookmark each line that begins with "-" which, in Mace4 means the predicate or relation is not satisfied. You can then navigate to Search->Bookmark->Remove Bookmarked Lines, to remove all the unsatisfied predicates and relations. The result will be a small model that's easy to read.

Curtis & Robson on the Metaphysics of Time

Benjamin Curtis and Jon Robson, in A Critical Introduction to the Metaphysics of Time, provide an impressive overview of contemporary debates over the nature of time. Check out a draft of my review of the book here.

Of the wealth of material covered in this introductory text, I found the authors' discussion of future contingents fascinating, and yet perplexing. In particular, the authors claim the possibility of an (alethically) open future conflicts with the classical logic principle of bivalence. They then use deviation from classical logic to undermine the possibility of future contingents. I take issue with several claims made by the authors (you can see a few more in the review above). For one, bivalence is intuitive, but it's not limited to classical logic. Other non-classical logics incorporate this principle as well. Of course, this pushes the question back from logical principles to logical theories. To be fair, the authors claim classical logic is widely accepted due to its theoretical virtues. But there is no discussion of what theoretical virtues are desirable and why, or comparison against alternative logics.

One might think my complaint is unfair, since this is an introduction to the metaphysics of time and not a philosophical logic text. I would, however, relate this same claim to the authors. If you're going to appeal to classical logic to undermine metaphysical theses, more discussion of why philosophers have accepted classical logic over others is desirable. Otherwise, leave philosophical logic questions alone and focus on the metaphysics. 

Jason Turner's Factualism

Appearances to the contrary the world consists ultimately of facts, not of things. According to Factualism, familiar objects and properties of experience are mere abstractions from this single ontological category. Jason Turner’s recent defense of a version of this thesis in The Facts in Logical Space: A Tractarian Ontology, is precise, exhaustive, and persuasive. Those working on facts will find much of interest, as will those working at the intersection of formal logic and metaphysics...

...and those wanting to hear what I think of Turner's book can check out my review (in draft form) here before release.

Content with Publicity

Check out my recent paper here (outline below)!

In Concepts: Where Cognitive Science Went Wrong (1998), Fodor provides a list of conditions he claims any adequate theory of concepts must meet. Among the entries is what is known as the publicity constraint - concepts must be shareable across distinct agents. In the attached paper, I examine the motivation for requiring that theories of concepts meet the publicity constraint. I also extract, explain, and motivate four premises Fodor employs in arguing for this constraint. In passing, I outline aspects of Fodor’s Language of Thought Hypothesis, paying particular attention to the representational and computational theories of mind. Next, I formalize and defend Fodor’s argument that generalizable laws of psychology entail concepts must be public. I then evaluate Fodor’s argument, ultimately declaring it unsound given his commitment to an informational semantic account of mental state content coupled with his response to Frege Puzzles which plague such accounts. On Fodor’s behalf, I propose motivating the publicity constraint via argument to the best explanation, while noting such a tactic is an uphill battle.

Frede among the Skeptics

Check out my recent presentation on ancient Skepticism where I try to get clear on whether, according to Sextus Empiricus, Skeptics had beliefs. I engage with Michael Frede's two seminal papers The Skeptic's Beliefs (1979) and The Skeptic's Two Kinds of Assent and the Question of the Possibility of Knowledge. If the presentation piques your interest, check out the paper here (which is much better than the presentation imo).

Now That's a Hat of a Different Color...

Three Hats Puzzle
Three individuals, call them Alex, Barbara, and Cherise, enter a pitch black room, where they are led to a table on which rest five hats, 3 red hats and 2 blue hats. The hats are arranged in no obvious order, and no individual can discern the colors in the dark, but Alex, Barbara, and Cherise know how many hats of each color there are. They each select a hat from the table, and wear that hat outside the room into a well-lit area. Alex looks at Barbara and Cherise, and says, “I don’t know what color my hat is.” Barbara looks at Alex and Cherise and says, “I don’t know what color my hat is.” Cherise does not look at anyone else, since Cherise is blind. Nevertheless, Cherise says “I know what color my hat is.” This is all that is said, and they each speak truly.

Challenge
Explain how Cherise knows her hat color.

Note: Like many puzzles, this has numerous lateral solutions, e.g. the red and blue hats are differently shaped, Cherise is colorblind but can see blue, etc. Lateral solutions are easily dismissed without affecting the details of the scenario, e.g. the hats share all properties save color, Cherise is not just colorblind, etc. The challenge is to find a logical solution. A logical solution will follow directly from the details of the scenario, and will not be easily dismissed since doing so will require changing the scenario.

Solution and Discussion
I pose this puzzle to students who are then encouraged to work in small groups (1-3 students) to find a solution. Once students have understood the puzzle and the distinction between lateral and logical solutions, groups are quick to reason in the following manner:

Since Alex speaks truly, she must not see two blue hats. If Alex did see two blue hats, then she would know her hat was red. Then Alex must see either two red hats or one red and one blue hat. Similarly for Barbara, who must see either two red hats or one red and one blue hat.

This seems unsurprising; the reasoning involved is direct, an immediate consequence of understanding the details of the case. We may show this formally. First, we fix on our notation.

Our language is first order with identity. Our domain consists of eight objects, which we sort into individuals with the predicate “I” and hats with the predicate “H”. Let “a” denote Alex, “b” Barbara, and “c” Cherise. Let “B” be the predicate applying to blue hats, and “R” the predicate applying to red hats. Sample axioms characterizing the domain include (see here for the full set):

1.      ∀x(Hx v Ix)
Everything in the domain is either a hat or an individual
2.      ~∃x(Hx & Ix)
Nothing in the domain is both a hat and an individual
3.      ∃x∃y∃z(x≠y & x≠z & y≠z & Ix & Iy & Iz & ∀w(Iw -> (x=w v y=w v z=w)))
There are exactly three individuals
4.      ∀x(Bx -> Hx)
All blue hats are hats
5.      ∀x(Rx -> Hx)
All red hats are hats
6.      ∀x(Hx -> (Bx v Rx))
Every hat is either red or blue
7.      ∃x∃y(x≠y & Bx & By & ∀z(Bz -> (x=z v y=z)))
There are exactly two blue hats

…And so on. We also introduce relations.  Let “S” stand for an irreflexive binary relation holding between an individual and a hat with the intended reading being that the individual sees the hat. Let binary “K” hold between individuals with the intended reading that the first individual knows the hat color of the second. Sample characterizing axioms include:

8.      ~∃x(Sxx)
The ‘sees’ relation is irreflexive
9.      ∀x∀y(Sxy -> (Ix & Hy))
Individuals see hats
10.     ∀x∀y(Kxy -> (Ix & Iy))
Only individuals know things
11.      ∀x∀y∀z((Sxy & Sxz & y≠z) -> ∀w(Sxw -> (w=y v w=z)))
Individuals see at most two hats

…And so on. We also characterize the following facts concerning the case:

12.  ∃x∃y(x≠y & Sax & Say)
Alex sees two things
13.  ∃x∃y(x≠y & Sbx & Sby)
Barbara sees two things
14.  ~∃x(Scx)
Cherise sees nothing
15.  ~Kaa
Alex does not know what color hat she is wearing
16.  ~Kbb
Barbara does not know what color hat she is wearing

With these axioms in hand, we may infer the following additional facts as theorems simply based on the domain and relation constraints:

17.  ∃x∃y(Sax & Say & x≠y & ((Bx & By) v (Rx & Ry) v (Bx & Ry)))
Alex sees either a blue/blue, red/red, or blue/red distribution
18.  ∃x∃y(Sbx & Sby & x≠y & ((Bx & By) v (Rx & Ry) v (Bx & Ry)))
Barbara sees either a blue/blue, red/red, or blue/red distribution

We add two more plausible facts. Observe, there is a relationship between knowing one’s hat color and the possible hat distribution. Consider Alex. If Alex sees two blue hats she knows what color hat she is wearing, which we may formalize as (watch the scope; avoid the Drinker Paradox!):

19.  ∃x∃y(Sax & Say & x≠y & Bx & By) -> Kaa
If there are two blue hats Alex sees, then Alex knows her hat color

A similar fact pertains to Barbara, but we will leave that aside here. Importantly, given (15) the consequent of (19) is false. Hence, the following theorems can be inferred:

20.  ∀x∀y((Sax & Say & x≠y) -> ~(Bx & By))
If Alex sees two hats they are not both blue
21.  ∃x∃y∃z(Sxy & Sxz & y≠z & ((Ry & Rz) v (By & Rz)))
Someone sees either a red/red or blue/red distribution of hats

In other words, for anyone who sees two hats (here, Alex or Barbara), the only available distributions of colors are red and red, or blue and red. We have then matched the direct reasoning above with our axioms.
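The same direct reasoning can be checked by brute force over the finite space of hat distributions; here is a small Python sketch (the encoding is mine, independent of the axiom set):

```python
from itertools import permutations

HATS = ['R', 'R', 'R', 'B', 'B']  # three red hats, two blue hats

# Keep distributions (a, b, c) in which neither Alex (who sees b and c)
# nor Barbara (who sees a and c) sees two blue hats.
viable = {(a, b, c) for a, b, c in permutations(HATS, 3)
          if not (b == 'B' and c == 'B') and not (a == 'B' and c == 'B')}

# In every viable distribution each speaker sees red/red or blue/red.
for a, b, c in viable:
    assert sorted([b, c]) in (['B', 'R'], ['R', 'R'])  # Alex's view
    assert sorted([a, c]) in (['B', 'R'], ['R', 'R'])  # Barbara's view
```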

Of course, this is not the answer to the puzzle. To solve the puzzle we must infer Cherise knows what color hat she is wearing, i.e. Kcc (Exercise: Why won’t simply adding this fact to the axioms suffice?).

This step seems the trickiest for students. I suspect it is because moving forward in the solution requires indirect reasoning, i.e. assuming something for the sake of a contradiction. The stumbling block, however, often leaves them ready to abandon the puzzle. Don’t let the difficulty of the puzzle stand in the way…we’ll infer the solution next post. In the meantime, play around with the axiom set here. The syntax is readable by Prover9. All theorems were checked with this application. Models were checked with the bundled Mace4 finite model checker.

Square of Individuals

Suppose there are at least two distinct individuals, Alex and Bob, and that Alex is part of Bob.

Ground mereology has the two-place parthood relation as reflexive, i.e. everything is part of itself: the maximal part. The relation is not, however, symmetric, as it is intuitively false that if x is part of y, then y is thereby part of x. There are at least two ways to reject symmetry: asymmetry or antisymmetry. On the former, x being part of y entails y is not part of x. On the latter, if x and y are parts of each other, they are identical. The first is too strong, since it is inconsistent in the presence of reflexivity:

1.      ∀x P(x,x)                          Premise
2.      ∀x∀y (P(x,y) -> ~P(y,x))           Premise
3.            SHOW   !                     DD
4.                  P(a,a) -> ~P(a,a)      2, ∀ Instantiation
5.                  P(a,a)                 1, ∀ Instantiation
6.                  ~P(a,a)                4,5 MP
7.                    !                    5,6 !

Ground mereology accepts instead the weaker antisymmetry. Additionally, parthood is taken to be transitive as it is plausible any part x of y which is part of z entails x is also part of z.
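The contrast between asymmetry and antisymmetry can be verified mechanically on a small finite domain. The following Python sketch (my own encoding) enumerates every binary relation on a two-element domain:

```python
from itertools import product

DOMAIN = (0, 1)
PAIRS = list(product(DOMAIN, repeat=2))

def relations():
    """Yield every binary relation on the domain, as a set of pairs."""
    for bits in product((False, True), repeat=len(PAIRS)):
        yield {pair for pair, keep in zip(PAIRS, bits) if keep}

def reflexive(r):
    return all((x, x) in r for x in DOMAIN)

def asymmetric(r):
    return all(not ((x, y) in r and (y, x) in r) for x, y in PAIRS)

def antisymmetric(r):
    return all(x == y or not ((x, y) in r and (y, x) in r)
               for x, y in PAIRS)

# No relation is both reflexive and asymmetric...
assert not any(reflexive(r) and asymmetric(r) for r in relations())
# ...but reflexive, antisymmetric relations exist (e.g. identity).
assert any(reflexive(r) and antisymmetric(r) for r in relations())
```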

Useful definitions can be constructed from this characterization of parthood. Two individuals are said to overlap if they share a part in common, are discrete if they do not overlap, and an individual is said to overlap the complement of another, if the first shares a part with the complement of the second.

Remarks in hand, return to Alex and Bob, denoting the first with “a” and the second with “b”, and the parthood between them as “P(a,b)”. Observe, parthood entails overlap for these individuals:

1.      P(a,b)                                    Premise
2.      SHOW ∃x(P(x,a) & P(x,b))     DD
3.             P(a,a)                             Reflexivity
4.             P(a,a) & P(a,b)               1,3, CI
5.             ∃x(P(x,a) & P(x,b))       4, ∃ Introduction

Line 5 reflects that “a” and “b” share a part in common. Hence, if Alex is part of Bob, then Alex overlaps Bob. The converse does not hold (Exercise: Find a countermodel).

Observe next, to say Alex and Bob are discrete, is to deny they overlap. Equivalently, it is to claim they have no common parts. Moreover, we would be saying of, say, Alex, that Alex overlaps some part of the complement of Bob, and vice versa. For Alex to overlap Bob’s complement is for there to be a part of Alex that is not a part of Bob.

1.      ~∃x(P(x,a) & P(x,b))                       Premise
2.           SHOW ∃x(P(x,a) & ~P(x,b))             DD
3.                 ∀x(P(x,a) -> ~P(x,b))           1, Substitution
4.                 P(a,a)                          Reflexivity
5.                 P(a,a) -> ~P(a,b)               3, ∀ Instantiation
6.                 ~P(a,b)                         4,5 MP
7.                 P(a,a) & ~P(a,b)                4,6 CI
8.                 ∃x(P(x,a) & ~P(x,b))            7, ∃ Introduction

Hence, if Alex is discrete from Bob, then Alex overlaps the complement of Bob. Since both overlap and complement overlap are symmetric, we can say the same for Bob (Exercise: Prove overlap and complement overlap are symmetric).

Our brief foray into the mereology wilderness permits, given the assumptions with which we began, a square of individuals (cp. Square of Opposition). As is well-known, blindly translating categorical sentences into first-order notation undermines logical relations of the traditional square, as classical logic permits conditionals which are vacuously true. We avoid the problem of existential import by sticking with individuals. Hence, our square parallels the tradition:

[Diagram: the Square of Individuals]

Implication holds when, whenever the first is true, the second must be true, and whenever the second is false, the first must be false. Contraries are sentences which may both be false but which may not both be true. Contradictory sentences require that if one is true the other is false, and if one is false the other is true. Subcontraries are sentences which may both be true, but which may not both be false (Exercise: Verify the remaining corners).

There’s ∃x about Mary

PUZZLE:
Larry is married but Nick isn’t. Larry is looking at Mary, and Mary is looking at Nick.

QUESTION:
Is someone married looking at someone not married?

CHOICES:

A.     Yes
B.     No
C.     Not enough information to answer

This puzzle was posed to me by a student (thanks Richard!) after class one morning. A cursory google suggests 80% choose incorrectly. I am skeptical; I’ve found no empirical evidence supporting this claim (I’d be interested if anyone else has). Be warned, searching for the puzzle will likely turn up solutions, so if you’d like to solve it, best settle here and reflect for a bit. I’ll wait. Once you are finished, check your answer by clicking it above. Afterwards, scroll down for a solution and some discussion.

SOLUTION:
The hallmark of a logical solution to a puzzle is that once presented with the solution, nearly everyone agrees it is correct. The Wason Selection Task is an example: many choose incorrectly, yet van Benthem reports a psychologist once confessed to him that nearly everyone accepts the standard solution as correct once it is explained [1]. The puzzle under discussion strikes me as logical in this sense. I’m curious if you agree.

I’ve translated the puzzle into a standard classical first-order language, where the predicate and relations symbols are obvious. It is straightforward to prove the solution (I’m using Hardegree’s natural deduction system in Symbolic Logic: A First Course). In symbols:

1.      Ml & ~Mn                                            Premise
2.      Llm & Lmn                                          Premise
3.      SHOW∃x∃y(Mx & ~My & Lxy)          ID
4.                   ~∃x∃y(Mx & ~My & Lxy)      AID
5.                    ∀x∀y(~Mx v My v ~Lxy)      4, Substitution
6.                    ~Ml v Mm v ~Llm                  5, Universal Instantiation
7.                     Mm                                         1,2,6 DE
8.                     ~Mm v Mn v ~Lmn               5, Universal Instantiation
9.                     ~Mm                                      1,2,8 DE
10.                     !                                            7,9 Contradiction

I’ve assumed as premises the information provided in the puzzle. On the SHOW line you’ll find a symbolization of the follow-up question. On line 4, I assume the negation of the SHOW line. On line 5, I substitute universal quantifiers for the negated existential quantifiers, driving the negation inward via De Morgan application. Since both variables in line 5 are under universal scope, I instantiate without restriction; in particular, to the constants denoting Larry and Mary. The result is line 6, which says either Larry is unmarried, or Mary is married, or Larry is not looking at Mary. However, on line 1 we assumed Larry was married, and on line 2 that Larry was looking at Mary. Hence, by disjunctive syllogism, we infer on line 7 that Mary is married. Instantiating line 5 once more, this time to the constants denoting Mary and Nick, yields line 8: either Mary is unmarried, or Nick is married, or Mary is not looking at Nick. Since Nick is unmarried and Mary is looking at Nick, similar reasoning yields line 9: Mary is not married. Since we already inferred that Mary was married, we now find ourselves in a contradiction. Hence, we infer someone married is looking at someone not married.

Another way to think about the solution is to observe (or suppose what is plausible) that Mary is either married or not. If married, then since Mary is looking at Nick who is unmarried, someone is looking at someone who is unmarried. If unmarried, then since married Larry is looking at Mary, someone married is looking at someone unmarried. Either way, the answer is "Yes."
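That case split is small enough to check mechanically. Here is a minimal Python sketch (the names and encoding are mine):

```python
# Fixed facts: Larry is married, Nick isn't; Larry looks at Mary,
# and Mary looks at Nick.  Mary's status is the only unknown.
looking = [('Larry', 'Mary'), ('Mary', 'Nick')]

def married_looking_at_unmarried(mary_married):
    """Is some married person looking at some unmarried person?"""
    married = {'Larry': True, 'Nick': False, 'Mary': mary_married}
    return any(married[x] and not married[y] for x, y in looking)

# The answer is "Yes" whichever way Mary's case falls.
assert married_looking_at_unmarried(True)   # Mary looks at unmarried Nick
assert married_looking_at_unmarried(False)  # married Larry looks at Mary
```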

DISCUSSION:
Several of my students, when initially posed with the puzzle, claim there is “Not enough information to answer”. I take this to suggest students have trouble thinking to make certain plausible suppositions. To be clear, I do not think students have trouble making plausible suppositions and reasoning from them in general. They do, after all, deliberate about the future. Rather, it just doesn’t naturally occur to them to make even plausible suppositions in certain contexts, such as the context of puzzles and the contexts of proofs.

Works Cited
[1] van Benthem, J. (2008). Logic and Reasoning: Do the Facts Matter? Studia Logica, 88:67-84.

Just Another Joke Page

Jokes and Jokes

"Your honor, I understand I'm on trial for a murder that happened 10 years ago. I'll admit, I have memories of committing the murder. Will you punish me though for something that happened so long ago? How can you be so sure I'm even the same guy? So much can change in 10 years. Look, when I was a teenager, 20 years ago, I put fireworks in a neighbor's mailbox. It exploded, and they never found out I was the culprit. Consequently, I never paid damages. But you wouldn't drag me into court today to charge me for the reckless behavior of a teenager, would you? I've changed so much since then! I mean, I killed a guy."

Funny Paper Titles

  • Schrodinger's Can't: What Quantum mechanics says about 'ought implies can'
  • Barcan up the Wrong Tree: Vindicating Quine's Objections to Ruth Barcan-Marcus's Quantified Modal Logic
  • Oh, the Humeanity! Justice as Bareness
  • Being Clever Only Gets You So Far: Notes on Zeno's Paradox(es)
  • The Anti-Disetablishmentanglement Problem: The Meta-Ethical Entanglements We Weave

Funny Instructor-Student Interactions

Instructor: "So, what are your thoughts?" *to student in last row*
Student: "Who? Me?"
Instructor: "No, no, the girl behind you."
Student: *turns only to be confronted with the wall*
Instructor: "Never mind, she looks busy. I'll let you answer for her."

Student: "But sarcasm is one of my many talents." *after being reprimanded for excessive sarcasm*
Instructor: "So you don't have any other talents then?"

Student: "Can you give an example of equivocation 'on the fly'?"                                                                                                                Instructor: "'On the fly' you say? Sure thing. (i) Mathematicians work with planes; (ii) Planes work with jet fuel; (iii) Therefore, Mathematicians work with jet fuel."