The Gettier Problem

The “traditional” account of knowledge as justified true belief goes back at least to Plato’s time, over 2400 years ago. But recently, it has faced a significant challenge. So what happened? Well, a philosopher by the name of Edmund Gettier happened. He brought a number of counterexamples to the justified true belief theory to the center of attention of modern epistemology (that is, the study of knowledge). So-called Gettier cases are examples where a person believes something that happens to be both true and justified, but where the belief nonetheless seems not to count as knowledge.

A number of Gettier cases can be found in various articles on the problem, but here is a classic one. You glance at an analog clock and see that it reads 4:17. From this, you form the belief that it is 4:17pm, and in fact you are correct. But the clock actually broke 12 hours ago, and is just sitting there stuck at 4:17. So while you believe that it is 4:17pm, and your belief is true, and justified by your reading of the clock, it seems wrong to say that you know it is 4:17pm.

I find that the general problem in all these cases is some kind of disconnect between the truth of the belief and the justification for it. Though in each case the subject holds a true belief, and is justified in doing so, the problem is that they would still have held the same belief, for the same justification, even if the belief were false. In many people’s minds, this disqualifies the belief from being knowledge.

The key insight revealed by Gettier cases, to me, is that the justification has to be valid for a true belief to count as knowledge, where validity has something to do with there being an appropriate connection between the justification and the truth of the belief. (I will flesh out this concept in a bit.) If I am right, the Gettier problem can be addressed by defining knowledge as validly justified true belief. This preserves the core intuition many people share that knowledge requires justification, while refining it to account for problematic cases.

And it preserves the argument I gave in this post that we can know things, and know that we know them. If we have justified belief, then not only do we think the belief is true, but we think our justification is valid. So again, justified belief is sufficient for a knowledge claim, and the claim will be correct if in fact the belief is true and the justification is valid.

My Response to the Gettier Problem

If you’re interested in learning more about the problem of defining knowledge, the YouTube channel Wireless Philosophy has an excellent introductory video series on epistemology, which covers many of the different theories of knowledge that have arisen in response to Gettier (starting with video 5 in that series). My own response is a variant of what is called the truth-tracking theory. I will get a little bit technical here, so feel free to skip this and come back to my next post, where I will start to explore more practically the different ways that we justify beliefs.
Here begins an aside on my own response to the Gettier problem:

The problematic situation common to almost all Gettier cases seems to be that the subject would still have had the belief, for the same reason, even if it were false. And I think it is also possible to generate cases where the problem is slightly different: the subject would not still have had the belief, even if it were still true (but the situation was modified in some way that should not have affected the belief). Given that, I believe it is possible to express conditions for the validity of some form of justification along these lines.

Let S be a subject (that is, a person). Let B be a potential belief (a proposition). And let J be a means of justification, by which S may or may not be able to justify belief in B. Then my first attempt at a suitable definition of knowledge can be expressed as follows:

S knows B on the basis of J if and only if:

  • S believes B on the basis of J,
  • J would justify B for S if B were true, and
  • J would not justify B for S if B were false.

The first point says that S considers J, it seems to S on the basis of J that B is true, and S believes B. (The justified belief condition.) Here I am using “consider” in a very loose sense, since J could be anything from a rational argument to a subjective experience that S has.

The second point says that if it were the case that B were true and S were to consider J, it would seem to S on the basis of J that B were true. (The no false negatives condition.)

The third point says that if it were the case that B were false and S were to consider J, it would not seem to S on the basis of J that B were true. (The no false positives condition.)

The statements in the second and third points here are subjunctive conditionals, which take the form “if it were true that P then it would be true that Q,” where P and Q are propositions. It is an area of active debate among philosophers, logicians, and linguists as to how such conditionals are to be interpreted and evaluated. Determining what would happen were things different is not always an easy matter.

As an example, one way of handling conditionals of the form “if P were true then Q would be true” is the following process. Starting with what is actually the case, we mentally construct a set of counterfactual cases covering circumstances relevantly similar to the actual case; specifically, relevant to the way that the truth of P is connected to the truth of Q. Then we look at all of the counterfactual cases in this set where P is true, and if Q is true in all of them, then “if P were true then Q would be true” is true.
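This process can be sketched in code as a toy model (my own illustration, not a standard formalism; the world sets, proposition names, and helper functions here are all assumptions for the example). Each “world” is a relevantly similar counterfactual case, represented as a dictionary mapping propositions to truth values, and the conditional holds when the consequent is true in every relevant world where the antecedent is true:

```python
# Toy model of evaluating "if P were true then Q would be true".
# Each world is a relevantly similar counterfactual case, represented
# as a dict mapping propositions to truth values.

def would(worlds, p, q):
    """True iff q holds in every relevantly similar world where p holds."""
    return all(q(w) for w in worlds if p(w))

# Worlds relevantly similar to glancing at a working clock: the
# reading tracks the actual time.
working = [
    {"it is 4:17": True,  "reads 4:17": True},
    {"it is 4:17": False, "reads 4:17": False},  # e.g. a minute later
]

# Worlds where the clock is stopped at 4:17: the reading never changes.
stopped = [
    {"it is 4:17": True,  "reads 4:17": True},   # the actual, lucky case
    {"it is 4:17": False, "reads 4:17": True},   # still reads 4:17
]

is_417 = lambda w: w["it is 4:17"]
reads_417 = lambda w: w["reads 4:17"]

# No false negatives: if it were 4:17, the clock would read 4:17.
# Both clocks satisfy this condition.
print(would(working, is_417, reads_417))  # True
print(would(stopped, is_417, reads_417))  # True

# No false positives: if it were not 4:17, the clock would not read
# 4:17. Only the working clock satisfies this; the stopped clock fails,
# which is why the subject lacks knowledge despite a justified true belief.
print(would(working, lambda w: not is_417(w), lambda w: not reads_417(w)))  # True
print(would(stopped, lambda w: not is_417(w), lambda w: not reads_417(w)))  # False
```

Note how the model captures the diagnosis of the Gettier clock case: the stopped clock passes the “no false negatives” check only by luck, and the “no false positives” check exposes the disconnect between justification and truth.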

So dealing with subjunctive conditionals is a general difficulty with this definition. A more specific difficulty is how to trace the means of justification, J, between the actual case and the counterfactual cases being considered. This is related to the first difficulty, since specifying J will impact which counterfactual cases are relevantly similar to the actual case.

A final difficulty with this definition is that it might be too strong. Its strength makes it particularly elegant: together, the three requirements actually entail that B is true (if I am reasoning about these subjunctive conditionals correctly, at least), so the requirement that B is true becomes redundant. But under this definition, no belief counts as knowledge unless it is nearly infallible: if S knows B on the basis of J, then S would not be misled by J about whether B were true under any relevantly similar circumstance. This may put too great a restriction on what we can call knowledge to accord with common epistemological intuitions. (It seriously calls into question what we can really know through inductive reasoning, for instance.) But that all depends on how the first two difficulties are addressed.

My second attempt at a suitable definition of knowledge weakens the first:

S knows B if and only if:

  • S believes B on the basis of J,
  • J reliably would justify B for S if B were true,
  • J reliably would not justify B for S if B were false, and
  • B is true.

Here the means of justification does not have to operate infallibly in order to be valid, but only reliably. (J’s false negative and false positive rates are only required to be low, rather than zero.) This deals with the problem of the first definition being too strong, but introduces the problem of how to characterize reliability, especially in the context of these subjunctive conditionals. And it could end up being too weak of a definition, since, depending on how reliability is characterized, it may fail to appropriately exclude the Gettier cases. Finally, it has to reintroduce the truth of B as a separate condition, so it is not as elegant. But maybe this definition accords better with our intuitions about what counts as knowledge, compared to my first definition.
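The weakened definition can be sketched in the same toy possible-worlds style (again, my own illustration; the threshold value, world sets, and names are all assumptions for the example). Instead of requiring J to track B in every relevantly similar world, we only require its failure rate across those worlds to be low:

```python
# Toy sketch of the weakened, reliabilist version: J's false negative
# and false positive rates over the relevantly similar worlds need only
# be low (below some threshold), not zero. The 10% threshold is an
# arbitrary choice for illustration.

def reliably_would(worlds, p, q, threshold=0.1):
    """True iff q holds in all but a small fraction of the relevantly
    similar worlds where p holds."""
    relevant = [w for w in worlds if p(w)]
    if not relevant:
        return True  # vacuously satisfied
    failures = sum(1 for w in relevant if not q(w))
    return failures / len(relevant) <= threshold

def strictly_would(worlds, p, q):
    """The first definition's requirement: no failures at all."""
    return all(q(w) for w in worlds if p(w))

# A mostly reliable clock: it reads correctly in 19 of 20 relevantly
# similar worlds, but glitches in one.
worlds = (
    [{"it is 4:17": True, "reads 4:17": True}] * 19
    + [{"it is 4:17": True, "reads 4:17": False}]  # rare glitch
)

is_417 = lambda w: w["it is 4:17"]
reads_417 = lambda w: w["reads 4:17"]

# The strict definition rejects this justification outright...
print(strictly_would(worlds, is_417, reads_417))   # False
# ...while the weakened one accepts it (5% failure rate, below 10%).
print(reliably_would(worlds, is_417, reads_417))   # True
```

The sketch also makes the stated difficulty concrete: everything turns on how the threshold and the set of relevant worlds are chosen, which the model simply takes as given.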

There are certainly difficulties in making either of these definitions precise. But, though I am no professional epistemologist, I think a good case can be made that one or the other of the definitions I have offered is close to being the correct one.

To sum up, I believe the correct definition of knowledge is that it is validly justified true belief, where justification for a belief is valid if it tracks the truth (or, reliably tracks the truth) according to these subjunctive conditionals.
Here ends the aside.

In my next post, I will start to take a look at the different forms of justification that we use in practice. These forms of justification are generally considered to be valid ways of arriving at knowledge. This will give me a list of sturdy building materials, so to speak, out of which I can construct my belief system.

Justifying Justification

We have seen so far that knowledge is justified true belief, at least to a first approximation. Now it is also apparent that we often form beliefs by using other beliefs as part of our justification. For example, we might say, “today is Sunday, and the library is closed on Sundays,” and then form the belief that the library is closed. In this situation we would say that the latter belief is based on the former pair of beliefs. But I think we could all agree that the latter belief is only justified by the former beliefs if those former beliefs are themselves justified. If you spontaneously form the belief that the moon is made of cheese, for no reason, then any belief you infer from it (say, that the museum’s collection of moon rocks is edible) would not be justified.

In other words, justifying beliefs must themselves be justified in some way, in order for the new belief to count as knowledge. This sets us up for the question of what kinds of justification exist. Are all beliefs justified by other beliefs? Or are there beliefs that are justified by other means, without reference to other beliefs?

If all beliefs are justified by other beliefs, we seem to run into a problem. Each belief must be justified by another belief. It seems wrong to say that a belief can justify itself, either directly or indirectly; that would be circular reasoning. So we end up with an infinitely descending chain of beliefs. And it seems that no belief could ever actually be justified in this way, since our minds are finite. This is known as the regress problem.

One response to the regress problem, which is not very commonly held, is to say that our beliefs really are justified by infinite chains of beliefs. Such infinite chains are effectively hypothetical instead of actual, since we can never actually hold an infinite number of beliefs. But if our beliefs could be justified by merely hypothetical beliefs, it doesn’t seem like we would need any actual justifying beliefs. This runs completely contrary to all the intuition we have about beliefs that are justified by other beliefs.

A second response is known as coherentism, and to avoid the infinite regress it proposes that our beliefs are justified by each other in virtue of their coherence with a whole network of beliefs. But what is it about coherence that lends justification to a belief or a set of beliefs? Coherence simply means that either the beliefs support each other, or at least they do not contradict each other. If it is because they support each other, you end up with circular reasoning. If it is merely because they do not contradict each other, that still leaves the beliefs unjustified. You can believe that the moon is made of cheese that transforms into moon rocks whenever anyone is looking at it, and that wouldn’t necessarily contradict any of your other beliefs. But that doesn’t mean you would be justified in believing this. Moreover, coherentism provides no way to judge between different sets of internally consistent beliefs.

Now, I think coherentism does provide some insight. Our beliefs often do justify and reinforce each other in complex networks; but beliefs can only justify each other to the extent that they also each have independent justification, to avoid circular reasoning. And not being contradicted by other beliefs does provide at least some kind of confirmation, in that the belief is not falsified and could still be true (though beware confirmation bias). But there needs to be positive justification for the belief as well, not just lack of justification for the negation of the belief. Otherwise the negation of the belief would be equally well-justified as the belief itself. For these reasons, I believe coherentism fails.

That leaves the third response to the regress problem, foundationalism. It stops the regress by saying that some beliefs are justified without reference to other beliefs, and that all chains of justification eventually terminate with such beliefs.

Beliefs that are justified without reference to other beliefs are called properly basic. They are basic because they do not depend on other beliefs; they are properly so because they are still justified by the way that they are formed. For example, my belief that my laptop is in front of me right now is not justified by other beliefs that I hold; instead it is justified directly by my experience of seeing my laptop in front of me. Properly basic beliefs are not infallible; they can be defeated by other reasons brought against them. They simply form a rational starting point to allow our knowledge to get off the ground.

Considering how we actually form beliefs, foundationalism seems to be the best answer to the regress problem. In our everyday experience we naturally form beliefs about the world around us, we take such beliefs to be knowledge, and we then justify other beliefs from those. Coherence may play a role, but it needs a foundation to build on before it can do any work.

So what are the different ways that we rationally justify our beliefs? Some of those will be ways of forming properly basic beliefs, and some of them will be ways of inferring new beliefs from existing ones. But before I go into more detail on the nature of justification, I will take one more look at the definition of knowledge.

Knowing What Knowledge Is

Last time I looked briefly at the concept of knowledge as justified true belief. This definition seems intuitively correct, fitting the concept of the kind of belief that we want to hold. And with this definition, it seems that we can know things, and know that we know them.

But how can I know what knowledge is? The problem of sorting out what knowledge is can be separated into two different questions:

Q1: What is the definition of knowledge?

Q2: For any given belief, does this belief count as knowledge?

One way to answer Q2 is to use the answer to Q1, and simply check whether each belief meets the definition of knowledge. But to have an answer to Q1, you already need to have at least one answer to Q2, so that you can know that your belief about what knowledge is itself counts as knowledge. In fact, the usual way to answer Q1 seems to be to start with many of the answers to Q2, and create general criteria for knowledge from those. What can we do about this? This conundrum, how to know what knowledge is, is called the problem of the criterion.

There are three ways to respond to this. The first response, that of skepticism, says that because we need to have answered either Q1 or Q2 in order to answer the other one, we cannot know what knowledge is, and therefore we cannot know if any of our beliefs count as knowledge. Hopefully, we can do better than that.

The second response, which may be called methodism (no relation to the church denomination), says that we start with an answer to Q1, and that we answer Q2 by evaluating each belief according to our answer to Q1. The problem with methodism is that, if it is correct, knowledge of any belief first requires knowledge of (a) the definition of knowledge, and (b) whether the belief in question meets that definition. But then knowledge of (a) and (b) requires more knowledge, leading to an infinite regress of knowledge requirements. Since we have finite minds, we can never actually evaluate an infinite number of beliefs in this way. So methodism doesn’t seem to be any better than skepticism.

The third response says that we start with many answers to Q2, intuitively, and then discover the answer to Q1 by generalization. In other words, we can first know things without understanding how we know them or being able to prove that we do, and then we can look at specific instances of knowledge to see what they all have in common, and how they differ from beliefs that are not knowledge, in order to formulate a definition. Then we use that definition of knowledge to judge more difficult cases. This response is called particularism, since it says that we know some of the particulars of knowledge before we know a general definition, and it seems to be the common-sense response.

For example, we may start with the general intuition that a belief has to be true to count as knowledge. It seems wrong to say “he knew his keys were on the table, but actually they were not;” rather we would say “he thought he knew they were on the table.” Then we might look at a number of our beliefs and see, using our intuition, if they would count as knowledge assuming they were true (since we have already decided to include truth as a criterion for knowledge, and we are interested in what else needs to be included). I might say, “I know my laptop is in front of me; I know that the sum 1 + 2 + … + n equals n(n + 1)/2; I know that the Earth is a sphere; I know it is wrong to steal.” Then, using these particulars, I would try to see what it is that makes them knowledge, and what it is that beliefs which fail to be knowledge lack.

The skeptic typically challenges this particularist approach in two ways. First, they accuse it of begging the question by assuming the very thing it tries to prove, which is that some of our beliefs count as knowledge. Second, they ask how we can know whether the knowledge we claimed to have started out with is correct. If it is possible that we are wrong, the skeptic says, we have to prove that we are not wrong.

To counter the first challenge: our original goal was not to prove that we know things, but to find out what knowledge is. And the skeptic cannot offer any reason that we do need to prove that some of our beliefs are knowledge in order to accomplish that. Either they merely assert that we bear this burden of proof, without offering reasons, or they try to offer reasons. If they do not offer reasons, then we have no reason to believe them. But if they do offer reasons, by their own argument, they do not know that those reasons are true. So again we can dismiss the charge that we must prove that we can know things.

To counter the second challenge: we know the knowledge that we claimed to start with is correct, simply because we know it. The fact that we might be incorrect is not a good reason, in and of itself, to think that we are incorrect. Indeed, we actually must know some things to know that we might be incorrect about others. The skeptic cannot claim to know that nothing can ever be known without self-contradiction. And if they do not know that, there very well might be things that we know!

So basically, I believe (I would even say I know) that the common-sense approach is the correct one to the problem of how we know what knowledge is. We recognize that some of our beliefs are knowledge, and we form a definition of knowledge based on that. To the skeptic who says, “You can’t just know that you know things,” I ask in return, “How do you know that?” If he answers that he doesn’t know that, I don’t see any reason to think that he is right and I am wrong. If he does try to show how he knows that, it seems to me that he will eventually have to say something like I have said here, contradicting his own assertion.

This will tell us something about the nature of knowledge and justification, when we go back to the definition of knowledge as justified true belief and try to see if that definition is correct. Whatever knowledge is, if we are able to know what it is then we must be able to know certain facts, namely, facts about what counts as knowledge and what does not, without having a prior definition of knowledge to judge them against.

And when I think about it, I can’t really see any reason that this is not the case. Whatever knowledge is, I have enough of an intuitive conception of it that I can say there are some things that I just know that I know. That will tie in with my next post, where I will start to look more at the concept of epistemic justification.


Last time I started laying the foundation for a comprehensive system of beliefs, first by defining what a belief is and then by noting what kind of beliefs we want to have. Briefly, a belief is an attitude of accepting a proposition as true, and because of that, we want to hold beliefs that are true, and that we have good reasons for thinking are true. In other words, we want more than mere beliefs; we want knowledge. And so I promised to spend several thousand words on the subject of what knowledge is, and how we get it.

I also gave only a brief statement of the meaning of truth, which is obviously central to our whole concept of belief and knowledge. So, to complete the foundation, I will be exploring the nature of knowledge, truth, and reality itself.

“All men by nature desire to know.” – Aristotle

What is knowledge? As I have said, it is the concept we use for the kind of belief that we want to hold: true belief that we have good reason for holding. So, knowledge these days is usually defined as something like justified true belief. Sometimes a degree of certainty in the belief is also required in the definition of knowledge, but I find it more appropriate to consider certainty as simply the result of having justification for a belief, and not an independent requirement for that belief to be knowledge.

Note that knowledge is not simply justified belief, but justified true belief. If you know something, then it is true. If it is not true, then you don’t actually know it; even if you mistakenly think you do. Truth is a requirement for a belief to be knowledge, because true beliefs are the kind we want to hold.

If our beliefs must be true to count as knowledge, can we ever know if we know something? Can we ever say we have more than justified belief? I think we can. If you believe some proposition, let’s call it P, then by definition you believe that P is true. If you believe P on the basis of some justification, then you believe that P satisfies all the requirements for knowledge: you believe that you believe P, you believe that P is true, and you believe that your belief in P is justified.

So if you have justified belief in P, that is sufficient for belief that you know P. Moreover, the belief that you know P is itself justified by the belief that the requirements for knowledge are satisfied. So justified belief in P is sufficient for belief that you know that you know P, and belief that you know that you know that you know P, and so on.

So justified belief is sufficient at least for a knowledge claim, and the claim will be correct if the belief is in fact true. Now, none of this requires that our knowledge be completely certain. In practice, there is a relationship between knowledge and certainty. We usually only make knowledge claims when we judge that we have a sufficient degree of justification, lending a corresponding degree of certainty. The more certain we are, the more likely we are to stretch out the chain of “knows” in the last paragraph. But just as belief does not require complete certainty, neither does knowledge.

So it seems, after a brief examination of the concept of knowledge as justified true belief, that we can know things, and we can know that we know them. But is this the correct definition of knowledge? How can we know that?

That is the question I will begin to explore in my next post.

Belief and Meaning

With this post I am commencing an ambitious construction project. I am attempting to build an entire belief system from the ground up, making sure that each of the beliefs I hold is supported by a firm foundation. And as with any structure, the first thing that needs to be built is the foundation itself.

I think an obvious place to start building a belief system is simply to ask: what is a belief?

In this context, I am not talking about belief in something or someone, which is roughly synonymous with trust. Rather, I am talking about belief that. I believe that the Earth is approximately a sphere, for instance. This sentence expresses the fact that I have a certain attitude towards the proposition that the Earth is approximately a sphere. Specifically, an attitude of accepting this proposition as true.


Here I think it is useful to introduce a concept for describing the content of beliefs, truth, and knowledge. We can express beliefs and the like with assertions: spoken or written sentences that assert some state of affairs. “The Earth is approximately a sphere” is an assertion. So is “Die Erde ist ungefähr eine Sphäre.” Of course, these assertions mean the same thing (at least according to Google Translate), and if I believe one, then I believe the other. The content of the belief is not the specific expression, but the intended meaning behind it.

Philosophers use the term proposition to refer to the meaning of an assertion, abstracted away from its expression. Often, propositions are represented as subordinate clauses headed by that: such as the proposition that the Earth is a sphere. This is just to make it clear that we care about the meaning behind the assertion, and not the assertion itself.

One of the reasons we make this distinction is that assertions, taken literally, are just sound waves in the air or ink marks on a page. Sound waves or ink marks by themselves cannot be true or false. For it to make sense to talk about the truth or falsehood of an assertion, or the reasonableness of believing an assertion, it has to have an understood meaning. Propositions represent this meaning.

The other reason we make this distinction is that human language is often redundant, imprecise, or ambiguous. More than one assertion can refer to the same proposition, most obviously between different languages, but this is the case even within the same language. Even more problematically, more than one proposition may be the intended meaning behind some assertion. “The missing painting was found by the art gallery” has more than one possible meaning, for example. (It could mean that someone found the painting near the art gallery, or that the art gallery found the painting.) Since the meaning is what we care about, we use the concept of propositions.


A belief, then, is an attitude towards a proposition that accepts that proposition as true. In other words, it is a mental state of accepting as true the meaning behind some assertion. So, believing something is roughly the same as thinking that it is true.

Of course, understanding that definition of belief requires that we understand the meaning of truth. I am going to go into more detail about that, but not just yet. For now, I will give the common-sense notion that when we say something is true, we mean it is the way things are. (Even when I go into more detail, the common-sense notion pretty much sums it up.)

Belief comes in degrees. We can have different measures of certainty or doubt about the truth of any given proposition, though these measures are subjective, and typically more qualitative than quantitative. You can be certain about a belief, or uncertain but think it is more likely true than false, while still believing it. The degree of certainty that you need for a belief isn’t something that can be precisely defined, and it may vary depending on the gravity of the belief in question. In some cases you can be entirely uncertain about a proposition, but still accept that proposition as true for pragmatic reasons.

These degrees can go in the direction of doubt instead of certainty. Disbelieving a proposition is just believing that it is false, and disbelief can also range between entirely certain and entirely uncertain. Finally, you can suspend belief in a proposition, meaning that you neither believe nor disbelieve it. This may be because you are evaluating the evidence for it, or just because it is not a matter of concern.

What kind of beliefs do we want to hold? And why do we care about the level of certainty we have in our beliefs? Well, since belief is accepting a proposition as true, we want to have beliefs that are, in fact, true. We want our beliefs to line up with the way things really are. That is where reasons come in. If I had started this whole discourse by saying that I wanted to explore merely my beliefs, I doubt it would have sounded as interesting to you. (Maybe it didn’t sound interesting anyway, in which case, why have you read this far?)

We intuitively understand that just having a belief is not enough. We want reasons to think that what we believe is true. Moreover, we want those reasons to be good ones, reasons that justify us in holding those beliefs. Part of critical thinking is aligning the level of certainty that we have in our beliefs with the level of justification that we have for them.

This concept of the kind of belief we want to hold is important enough that we usually give it its own word: knowledge. We want to do more than believe. We want to know.

So my exploration of what I believe, and why, is really an exploration of what I know, and how I know it. Which is why, over the next few posts, I am going to spend several thousand more words going a bit more in depth on what knowledge is and how we get it.



Pursuing Truth and Knowledge, Virtuously

“Critical thinking is a desire to seek, patience to doubt, fondness to meditate, slowness to assent, readiness to consider, carefulness to dispose and set in order, and hatred for every kind of imposture.” – Francis Bacon

In my first post, I said that what we believe affects the way that we live. This is obviously true for some issues – whether you believe you are in North America or England will affect which side of the road you decide to drive on – and I think it is true for many more significant issues as well.

Because of this, if we have some kind of moral responsibility for the way that we live, this responsibility impinges not only on what we do, but also on what we believe. Even from a pragmatic standpoint, it is more likely that true beliefs will be useful to hold than false ones, since false ones will conflict with reality in some way. For both of these reasons, it is important to hold beliefs that are true, and to not hold beliefs that are false. And that means that it is important to hold only beliefs that we have good reasons for believing.

By saying that we have some responsibility for what we believe, I am implying that we can, to some extent, choose what to believe. Of course, we cannot simply choose to believe just anything. I could offer you a hundred dollars to believe that there are such things as flying pink elephants, but you probably couldn’t do it. That is not how our psychology works. But we can choose to examine or pursue arguments, evidence, or justification – in short, reasons – for different beliefs. And when presented with appropriate reasons for a certain belief, I think we are then able to choose whether or not to believe it.

Epistemic Responsibility

The idea that we have moral responsibility with regard to our beliefs, and not just our actions, is known as epistemic responsibility, and I think it’s an area that deserves more attention in our intellectual culture. In fact, this whole discourse is really my attempt to exercise my epistemic responsibility. After some reflection, I’ve come to think that a person’s epistemic responsibilities are embodied in two key virtues, which may surprise some people in their juxtaposition. These virtues are critical thinking and epistemic faith.

Critical thinking is the virtue both of not accepting beliefs that do not have sufficient justification, and of accepting beliefs that do have sufficient justification. Exercising critical thinking means, when possible, weighing the justification both for and against a proposition before deciding to believe or disbelieve it. It means seeking both confirmation and disconfirmation of the belief in consideration, and critically examining the relevant information. It means only accepting rational justification, not believing on the basis of wishful thinking, and being careful to avoid cognitive biases and fallacious reasoning. But critical thinking also means making a decision, after the evidence has been evaluated, to either accept the belief or its negation. Otherwise, you would be intellectually paralyzed, never able to actually form beliefs, or act on them.

Borrowing terminology from Aristotelian ethics, the vice of deficiency corresponding to critical thinking is gullibility, accepting beliefs too easily. The corresponding vice of excess is extreme skepticism, having an excessively high standard for accepting beliefs and remaining in perpetual deliberation, even when the evidence is there.

Epistemic faith is the virtue of not discarding beliefs without sufficient justification. This is my own definition, but I don’t think that I’m stretching the meaning of the word too far by using it this way; one of the dictionary definitions of faith is strongly held belief, trust, or confidence. Exercising faith means holding on to beliefs that you have evaluated and found justified, unless new evidence against them comes to light. If new evidence is encountered, it must be critically examined and weighed against the evidence you already have, not given more weight just because it is new. Epistemic faith applies the same standards to discarding beliefs that critical thinking applies to accepting them. Thus, the two can be seen as two sides of the same coin.

The vice of deficiency corresponding to faith is unbelief, discarding one’s beliefs too easily, without sufficient evidence against them. (That is, assuming that they were accepted critically in the first place.) The corresponding vice of excess is dogmatism, the refusal to critically evaluate and revise one’s beliefs, even when there is strong evidence against them.

As a side note, in a broader moral sense, I would say that faith is the virtue of living according to the beliefs that one has accepted. Without this quality, the epistemic aspect of faith is useless. It is worth almost nothing to confidently hold a true belief if you live as if you did not believe it. But even here, faith does not mean believing in something without evidence, as is sometimes alleged. Rather, it is trust or confidence in something, especially trust that is lived out.

Other Epistemic Virtues

There are other epistemic virtues in addition to critical thinking and faith, especially when we consider how we should conduct ourselves as an intellectual community. We are not isolated thinkers; we are part of a communal pursuit of truth and knowledge. When we engage in discourse together, there are a few crucial responsibilities that I think need to be highlighted.

First, because we need data from which to form our beliefs, and because each person is extremely limited in the amount of data they can collect through their own experience, we are hugely dependent on the testimony of others in the formation of our beliefs. Thus, we have a responsibility of truthfulness when giving that testimony to others. This means telling the truth and not lying, obviously. It means not “fudging the data”, even when the intent is to lead others to the belief we think is correct. It also means fact-checking before conveying a report that might be unreliable, even, and maybe especially, when the report conforms nicely to your preconceptions.

Second, when we engage in discourse we need epistemic charity. Being charitable means treating others in intellectual discourse the way you would like to be treated. This means fairly and accurately representing the viewpoints of others, and taking the time to understand those viewpoints before passing judgement on them. It means not prejudicially disbelieving someone’s testimony simply because you disagree with their other beliefs. It also means not silencing or censoring viewpoints that you disagree with. Instead, it means engaging with those viewpoints, and giving reasons why you disagree, rather than shouting them down. Berating others for expressing ideas that differ from your own might give you the last word, but it does nothing to show that your ideas are right.

I think we also need intellectual humility. This means recognizing your own fallibility, being sensitive to your biases, and being open-minded about the possibility that you are wrong. It means not claiming more than you can actually justify. Humility is opposed to closed-mindedness and overconfidence in your own opinions, but it is also opposed to undue timidity in forming conclusions based on evidence. You can have confidence in your beliefs while still exercising humility – indeed, if you have taken the time to really think through your beliefs carefully, you should have confidence in them!

We don’t talk very much about virtue in our culture, either in the moral or intellectual sense. But I think that we still care about it, deep down. And I hope I am not the only one who sees the value in striving to find the truth, and trying our best to live according to our hard-won beliefs. I hope that our society can serve to nurture that, and not stifle it. I hope we can be a community that pursues truth and knowledge together, speaking truthfully, and treating each other’s beliefs with fairness and respect. These are ideals, obviously. We are human and we aren’t perfect. But they are ideals worth aspiring to.

Anyways, that is why I am writing all this. I want to seek after truth and knowledge, virtuously, and with right thinking. I hope you find the synthesis of my search enlightening.

Beliefs and Reasons

“A reasonable person believes, in short, that each of his beliefs is true and that some of them are false.” – W. V. O. Quine

I have been thinking about writing this blog for the past few years. I spend most of my free time these days thinking, and reading, and writing; I thought I should eventually share what I have been doing all this thinking about. So what have I been thinking about? In as few words as possible: everything I believe.

I’ve become burdened with a desire to clearly articulate the beliefs that I hold, and the reasons that I have for holding them. In the process, I’m trying to make sure that I have good reasons for what I believe. It’s my hope that you will find it interesting to learn what I believe, and why, whether or not you hold the same beliefs. And whether or not my reasons convince anyone that I am right, I hope that by sharing my thoughts, I can encourage others to engage in a similar exercise. What we believe is important, because it affects the way that we live. So thinking critically about our beliefs is an important aspect of life.

The quote above is a nice summary, to me, of what critical thinking entails. It means having thought carefully enough about each of our beliefs so as to be convinced of their truth, but also recognizing our own limitations so as to realize that we are probably wrong about some things. This requires knowing not just what to think, but how. And in the era of “post-truth politics,” it seems like the skill of critical thinking is becoming less common. Echo chambers abound, and in more arguments than not it seems like the opponents don’t even understand each other’s positions. I don’t think I am so much better than all of the other voices out there – but I want to try to be. I think we all must, if we are to bring more civility and substance to the rhetoric of our times.

So that is the main purpose of this blog: to explore my beliefs and my reasons for holding them, and in doing so, to hopefully encourage rational thinking. A secondary purpose that I want to accomplish is to show how I can construct my whole belief system from the ground up.

Of course, when constructing a belief system from scratch, the temptation to avoid is to simply take the beliefs I already hold and support them however I can. What I hope to show instead is that the beliefs that I have come to hold, after evaluating the options, are the ones that I have found to be the most rational. That is the goal: not simply to prop up my own beliefs, but to conform my beliefs to the truth.

Building the Foundation

Before I can construct a belief system, though, I need to lay its foundation. Trying to do that raises some deep questions:

  • What does it mean to believe something, and when is it reasonable to do so?
  • What kind of beliefs do we want to hold?
  • What is knowledge? How can we know things?
  • What is truth?
  • What is the nature of reality?

Yes, when I said foundation, I meant all the way down. Though philosophers have pondered these questions for millennia, I will begin by attempting to answer them. But before I do that, let me say a bit more about how I want to approach this, in my next post.