Lies, Deception, and Bullshit
I bet you believe I'm bald. You may not have thought about that before I mentioned it, but surely you believed it before then. Similarly, I bet you believe 256+98076=98332 despite having never thought of this equation before. Call the beliefs you've made explicit to yourself explicit beliefs, and the beliefs you have but haven't yet made explicit implicit beliefs. Sometimes you might learn you've implicit beliefs you'd rather not have. Maria Stewart's speech "Why Sit Ye Here and Die?" provides an example of this sort. Among other things, she calls out northern abolitionists who claim to explicitly believe racism is wrong, yet who do not hire black workers. These hiring practices suggest the white employers implicitly held racist beliefs. Stewart brings this to light in her speech by pointing to what looks like a performative contradiction. For our purposes, this is a plausible diagnostic for uncovering implicit beliefs.
Beliefs come in a wide variety. You might, for instance, believe the Earth orbits the Sun, or you might believe the Earth does not orbit the Sun. You might even lack any belief about celestial orbits at all. These observations give shape to types of ignorance, which may be confused with types of belief. For example, you might not have any belief - implicit or explicit - about the number of rings around Saturn, i.e. you might be entirely ignorant about that topic. On the other hand, you might believe there are only two rings around Saturn, i.e. you might have a false belief about that topic. Holding some beliefs crowds out others. If you believe P, then you can't also believe not P or anything that obviously entails not P. So if P is true, believing not P is one way to be wrong about P. Another is not having any belief about P at all, since then you believe neither P nor not P.
We have, then, two axes to help structure the ways of not having a true belief (suppose P is true):
                Aware                   Unaware
Believe P       Explicit True Belief    Implicit True Belief
Believe not P   Explicit False Belief   Implicit False Belief
No Belief of P  Explicit Ignorance      Implicit Ignorance
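If it helps to read the table as a simple lookup, here is a minimal sketch in Python; it's my own illustration rather than anything from the text, the names Attitude, Awareness, and classify are invented, and P is assumed true as above.

```python
# Minimal sketch of the two-axis classification above (P assumed true).
# All names here are illustrative, not from the text.
from enum import Enum

class Attitude(Enum):
    BELIEVE_P = "believe P"
    BELIEVE_NOT_P = "believe not P"
    NO_BELIEF = "no belief about P"

class Awareness(Enum):
    AWARE = "aware"      # the attitude has been made explicit
    UNAWARE = "unaware"  # the attitude remains merely implicit

def classify(attitude: Attitude, awareness: Awareness) -> str:
    """Map an (attitude, awareness) pair to one of the six cells of the table."""
    mode = "Explicit" if awareness is Awareness.AWARE else "Implicit"
    kind = {
        Attitude.BELIEVE_P: "True Belief",      # P is true by supposition
        Attitude.BELIEVE_NOT_P: "False Belief",
        Attitude.NO_BELIEF: "Ignorance",
    }[attitude]
    return f"{mode} {kind}"

# Example: believing not P without ever having made that belief explicit.
print(classify(Attitude.BELIEVE_NOT_P, Awareness.UNAWARE))  # Implicit False Belief
```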
You might think it odd to say one could have explicit ignorance, but this isn't so peculiar. There are many things I know I don't know, and know I don't have an opinion about, e.g. capital gains tax. In any event, with that structure in mind, let's think about liars. We know a liar when we see one, right? Maybe. We should be careful, since some things look like lies but aren't. For example, suppose you ask Kasey if I'm a good logician, and he responds "John? He's always on time!" You might think he's not answered your question. However, what Kasey has plausibly done is invite you to infer that I'm not in fact a good logician. If I were, then he would've said so. But simply because he's not answered your question, it doesn't follow he's lied. He wasn't, for instance, trying to deceive you; it's reasonable to think you'd have drawn the inference he invited you to draw. This is instead an example of pragmatic implicature.
The paradigmatic case of a liar, as Sartre understands it, is someone who:
(i) Either explicitly or implicitly believes the truth
(ii) Explicitly denies the truth to himself or others
(iii) Accepts that (i) and (ii) are in conflict
(iv) Denies (iii) to himself or others
This is to say Sartre claims there are two levels of deception in paradigmatic cases of lying. Let's consider an example. Suppose it's raining outside and I explicitly believe this. This satisfies (i). Suppose you ask me if it's raining and you are explicitly ignorant of whether it is or isn't. Suppose I say "It's not raining." This satisfies (ii). It seems to follow I'm aware of the conflict between (i) and (ii), and I accept it. "Accepting" is a peculiar attitude one can take towards propositions about the world. You can accept things you think are false, for instance, in pursuit of other goals. For example, a scientist might see a great deal of counterevidence for his favorite theory, but accept that it's true anyway for the sake of further inquiry. Similarly, a lover might accept a partner is faithful despite having overwhelming evidence to the contrary. This is the sense in which the liar can accept two conflicting claims, satisfying (iii). Moreover, you can often understand this acceptance in terms of behavior (note, there's no 'explicit' requirement on accepting here). Explicitly believing it's raining while claiming otherwise is a performative contradiction, suggesting I at least implicitly accept the conflict, as (iii) requires. To continue the example, suppose I explicitly deny the conflict in (iii). Then I've met all the conditions to be a paradigmatic liar.
Psychoanalysis of Lies
How might we understand this phenomenon at the level of psychology? If Freudian psychoanalytic theory is true, then you've a subconscious id - the part of the mind made up of instincts and drives; arational or irrational - and a conscious ego - the intellectual part of the mind; rational. According to the psychoanalytic explanation of lies, both (i) and (iii) fall into the subconscious while (ii) and (iv) are conscious. The subconscious protects itself, and the conscious part of the mind avoids engaging with it, perhaps out of shame, or out of powerlessness (rationality can't even engage irrationality…).
Sartre claims this explanation is inadequate because it seems to require the id to have something like rationality; otherwise it wouldn't 'know' which desires and instincts it should be protecting in (iii). I think this isn't such a good argument.
One response is simply to note that the id doesn't need to 'know' anything other than that certain bad outcomes are correlated with revealing certain instincts and desires to consciousness. And this need not be rational in any robust sense. Neural networks suggest learning may take place without much awareness, as long as background parameters are set up in certain ways.
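To make that suggestion slightly more concrete, here is a minimal sketch (my own illustration, not anything from Sartre or Freud) of a learner that comes to predict a bad outcome from a correlated feature without representing any rule about why; the data, feature labels, and function names below are invented.

```python
# Sketch: a bare perceptron learns "this leads to trouble" from correlation alone.
import random

random.seed(0)

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights predicting a penalty (0/1) from two binary features."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, penalty in samples:
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = penalty - pred          # standard perceptron update
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Feature 0: a particular "drive" gets expressed; feature 1: unrelated noise.
# By stipulation, expressing the drive is correlated with a penalty.
samples = [([1, random.randint(0, 1)], 1) for _ in range(50)] + \
          [([0, random.randint(0, 1)], 0) for _ in range(50)]
random.shuffle(samples)

weights, bias = train_perceptron(samples)
print(weights, bias)
# The weight on feature 0 ends up positive: the learner tracks the correlation
# and flags trouble, though nothing in it represents why.
```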
A better response, though, is to observe that the id need not 'know' anything, since it's plausible the ego is protecting itself by simply not engaging with what it (implicitly) takes to be irrational. You might think the ego must nevertheless know which desires and instincts to ignore, but note the ego might simply be wired - as we are - to avoid disastrous consequences after minimal exposure to them. Children who touch hot stoves often fear all stoves for some time, hot or cold. Similarly, the ego early on might have wrestled with instincts and desires that couldn't be pinned down, and were too emotionally costly to engage with. This is to say it's plausible the ego itself avoids engagement with certain desires and instincts not because they're being hidden by the id, but because they're part of the id and unruly.
Which is all just to say I don't find Sartre's motivation compelling. Still, he gets points for offering an alternative explanation of the same phenomenon entirely at the level of consciousness. To do this, he need only avail himself of something the Freudian already accepts, namely, the distinction between consciousness and awareness. You can be conscious of something without being aware of it. Suppose you're looking for cufflinks in a drawer but can't find them. Later, while driving to a gala, you realize the cufflinks were in the drawer and you overlooked them. The plausibility of this scenario suggests you can perceive the cufflinks - and so be conscious of them - but not be aware of that perception.
***As an aside, this distinction also provides a plausible background against which to explain the truth - I think - of the following claim:
(IMG-PER) It is possible for you to imagine and perceive the same content
Suppose (IMG-PER) is false. Then it's not possible to imagine and perceive the same content. Now suppose you're standing before a white wall. Keep your eyes open and imagine the wall is a slightly darker shade of white than it is. Now slowly imagine the color changing so it is the same color as the actual wall. If (IMG-PER) is false, then you can be imagining the darker white wall while perceiving the white wall (since you can perceive and imagine different content at the same time) up until the last moment; at that point you can't be imagining the white wall anymore, but can only be perceiving it. This strikes me as absurd. Hence, I think (IMG-PER) is true. The distinction between consciousness and awareness assists in explaining this phenomenon, since it's plausible you're consciously perceiving and imagining the white wall, but only attending to one of those conscious activities. In other words, it's plausible awareness is limited here.
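For those who like the structure spelled out, here is one way to regiment the reductio; the notation is mine, not anything from the text, writing Img(c) for imagining content c and Per(c) for perceiving it.

```latex
% One possible regimentation of the reductio above (notation is mine).
% Img(c): imagining content c;  Per(c): perceiving content c.
\begin{align*}
\textbf{(IMG-PER)}\quad &\Diamond\,\exists c\,\bigl(\mathrm{Img}(c)\wedge\mathrm{Per}(c)\bigr)\\
\text{Assume for reductio:}\quad &\neg\Diamond\,\exists c\,\bigl(\mathrm{Img}(c)\wedge\mathrm{Per}(c)\bigr)\\
\text{Wall case:}\quad &\mathrm{Per}(w)\wedge\mathrm{Img}(w'),\ w'\neq w
  \quad\text{(allowed by the assumption)}\\
\text{Let } w'\to w:\quad &\text{at } w'=w,\ \neg\bigl(\mathrm{Img}(w)\wedge\mathrm{Per}(w)\bigr),
  \ \text{so given }\mathrm{Per}(w),\ \neg\mathrm{Img}(w)\\
\text{Absurd (the imagining must abruptly cease), so:}\quad &\textbf{(IMG-PER)}
\end{align*}
```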
Bad Faith
Sartre thinks you don't need the id to explain paradigmatic liars. Indeed, the answer he provides should already be familiar to you, since the discussion above about ignorance and implicit beliefs might all occur at a conscious level. To make this clear, consider his example of the waiter who believes himself solely and entirely a waiter. The truth is that he's an autonomous agent capable of choosing, dreaming, etc. in ways that outstrip waiting tables. It's plausible that he at least implicitly believes this, while explicitly believing he's solely a waiter. This might be uncovered by, say, reflecting on how he'd act if he won the lottery. I suspect he'd no longer solely wait tables. Of course, he might find such novel possibilities burdensome, since then he'd need to choose. This is where the tension between his implicit and explicit beliefs enters. He might not recognize this tension, but instead focus solely on being the best waiter he can be, so he doesn't have to address his freedom. Indeed, suppose he's pressed to engage in an action not typical of waiters, e.g. robbing a bank at gunpoint. Surely he'd not simply act like a waiter in that context. Still, his actions in typical contexts - his gestures, his mannerisms - suggest he either explicitly or implicitly denies his freedom to choose for the sake of solely being a waiter.
See if you can examine Sartre's other examples with the table above; it'll be useful to incorporate the category of ignorance in your explanation since - as you may have noted - I didn't…
Sartre suggests bad faith emerges from focusing too much on either one's facticity or one's transcendence, and that there's no stable way to combine them. We're thus doomed, it seems, to bad faith in most cases. I'd love to explore this, but how about you give it a shot?