Efficiency of a Gas Cycle

I was reading up on the physics of refrigeration for … reasons … and while working out some of the math, stumbled on something interesting. It’s almost certainly not novel, but I thought I’d write it out anyway. (It might not even be correct, if I’ve made a careless mistake somewhere. I’m cautiously confident it’s right, though.)


The gist of my “discovery” is that the (non-standard[1]) efficiency of refrigeration using a gas cycle is a simple ratio of two temperatures.

I’ll define a gas cycle here for convenience. In a gas cycle, a volume of cold gas is placed in thermal contact with a system in order to extract heat from the system. The gas is then separated, compressed to a high temperature, then placed in contact with a heat sink, rejecting the heat into the sink. The gas is then allowed to expand, cooling down in the process, and the cycle starts again.

More technically, a gas starts off at temperature T_N (N for minimum), and is placed in thermal contact with the system to be cooled, its temperature rising isochorically until it reaches equilibrium at T_C (C for cold reservoir). The gas is then compressed adiabatically to a new temperature of T_X (X for maximum), where it is then placed in contact with the heat sink and cools isochorically until it reaches equilibrium at temperature T_H (H for hot reservoir). The gas is then allowed to expand adiabatically, reaching the original temperature T_N.

The quantity I’m interested in is:

\eta = \frac{n C_V \Delta T_{extr}}{n C_V \Delta T_{rej}}

= \frac{T_C - T_N}{T_X - T_H}

Obviously, not all four temperatures are free variables, otherwise the efficiency could be anything. So, we need to find the relationship between the four temperatures, and plug it into the expression for efficiency.

The Four Temperatures

The core of the relationship lies in the adiabatic transitions from T_C to T_X and T_H to T_N.

We look first at T_C and T_X.

We start with the equation for adiabatic processes:

TV^{\gamma - 1} = constant

and we know that

V_N = V_C and V_H = V_X, because the transitions between these states are isochoric.

So, from the adiabatic equation, we proceed to isolate the V terms so that we can later eliminate them with the other equations:

T_X V_X^{\gamma - 1} = T_C V_C ^{\gamma - 1}

\implies (\frac{V_C}{V_X})^{1 - \gamma} = \frac{T_C}{T_X}

\implies \frac{V_C}{V_X} = (\frac{T_C}{T_X})^{\frac{1}{1 - \gamma}}

Similarly, for T_N and T_H we obtain

 \frac{V_N}{V_H} = (\frac{T_N}{T_H})^{\frac{1}{1 - \gamma}}

Substituting into V_N = V_C (isochoric transition):

V_H (\frac{T_N}{T_H})^{\frac{1}{1 - \gamma}} = V_X (\frac{T_C}{T_X})^{\frac{1}{1 - \gamma}}

But since V_H = V_X (isochoric), this simplifies down to:

\frac{T_N}{T_H} = \frac{T_C}{T_X}

which is the desired relationship between the four temperatures.


We solve this relationship for one of the temperatures, say T_N = \frac{T_C T_H}{T_X}, and substitute into the expression for efficiency:

\frac{T_C - T_N}{T_X - T_H}

= \frac{T_C - \frac{T_C T_H}{T_X}}{T_X - T_H}

= \frac{T_C}{T_X} \frac{T_X - T_H}{T_X - T_H}

= \frac{T_C}{T_X}

This is the same expression from the earlier section!!

what sorcery is this??


So somehow the efficiency turns out to be this simple ratio.

It’s not completely out of the blue. The ratio structure of the relationship between the four temperatures hinted that it was describing some underlying parameter, but I didn’t expect the constant to be the efficiency of the transfer.

The expression for efficiency also makes some intuitive sense. You would expect that the efficiency goes down if you pump the gas up to higher temperatures. I didn’t expect it to be such a simple relationship, though.

Maybe there’s actually something really simple going on, but an intuitive physical understanding of this system continues to elude me. But whatever the case, the expression for efficiency is certainly one of the most elegant relationships I’ve seen.
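The derivation is easy to sanity-check numerically. Below is a short sketch; the values of γ, the two volumes, and the two equilibrium temperatures are illustrative choices of my own, not anything from a real refrigerator:

```python
import math

# Build a cycle from two isochores (at V1 and V2) and two adiabats,
# then check that (T_C - T_N)/(T_X - T_H) equals T_C/T_X.
gamma = 1.4    # diatomic ideal gas
V1 = 2.0       # volume on the cold isochore (V_N = V_C)
V2 = 1.0       # volume on the hot isochore (V_H = V_X)
T_C = 280.0    # gas temperature after absorbing heat from the cold system
T_H = 350.0    # gas temperature after rejecting heat to the sink

k = (V1 / V2) ** (gamma - 1)   # adiabatic factor from T V^(gamma-1) = const
T_X = T_C * k                  # adiabatic compression from (T_C, V1) to V2
T_N = T_H / k                  # adiabatic expansion from (T_H, V2) to V1

eta = (T_C - T_N) / (T_X - T_H)
print(f"T_X = {T_X:.2f} K, T_N = {T_N:.2f} K")
print(math.isclose(eta, T_C / T_X))   # True
```

Changing the volume ratio or γ changes all four temperatures, but the agreement holds, as the algebra demands.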

  1. Usually, the efficiency of refrigeration is defined as the ratio of useful output (in this case, the heat removed from the system to be cooled) to the work input. However, the value I am interested in is the ratio of heat extracted from the system to be cooled to the heat rejected to the heat sink. There’s no name for this that I know of, so I just use “efficiency” here. 

Stagnation and Chaos

Sometimes I get the feeling that I’m stagnating. I wake up, go to my computer, study some, read some, write some, and maybe watch some videos. But my creativity feels stunted, and my writing uninspired. I achieve nothing and I go to sleep, to start over the next day.

What an absolute waste of time. Not only is it boring to be stuck like that, but nothing gets done, and there is no foreseeable exit from the cycle — it doesn’t seem like my life is within my control.

Two weeks ago, I competed against one of my brilliant soon-to-be classmates in a week-long programming competition, and I learnt a lot over its course. But after that, I began to stagnate. I made one song cover in the first three days, and after that, nothing much. I’d fallen into the cycle.

Until yesterday. I met with a friend, went out for lunch and had some conversation, listened to music… Nothing of consequence, but somehow it managed to shake me out of the cycle. So here I am, today, breaking out of the cycle and making something new.

And I think I can explain why, and reliably break out of cycles of stagnation in the future.

Dynamical Model of the Mind

The mind is complex. We have myriad thoughts about a plethora of subjects, and similarly varied emotions. However, the configurations are bounded — there is not an infinity of subjects to think about, nor an infinite number of mental states. This follows from the finite size of the brain and the finite maximum information density of matter. Certain thoughts also tend to lead into certain other thoughts, reasonably predictably — thinking about an apple tends to lead only into thoughts about a very small number of related concepts such as “red”, “tree”, or “Isaac Newton”, and is very unlikely to bring up thoughts about “river”, “fragmentation”, “municipality” or the near-infinite multitude of things that have nothing to do with “apple”.

This naturally (to me, at least — due to my physics background) leads to a mental model of the mind as a dynamical system, where the state of the mind is represented as a single point in an n-dimensional space, where n is the number of distinct concepts that the mind in question could be thinking about. This dynamical system should probably be modelled as chaotic, due to the highly nonlinear and unpredictable nature of the brain. The dimensions in mindspace can probably also be modelled as continuous, due to the aforementioned tendency of thoughts to link only to close-by thoughts, leading to some degree of adjacency. A mind, given a certain initial position (which corresponds to an initial configuration of thoughts), will then tend to move along trajectories in mindspace according to what thoughts are likely to be formed next from the current set of thoughts.
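The attractor behaviour I lean on below is easy to see even in the simplest nonlinear families. Here is a toy sketch of my own using the logistic map — purely an analogy, not a model of any actual brain:

```python
# The logistic map at r = 3.2 has a stable period-2 orbit (an attractor).
# Trajectories from different starting points, and small perturbations,
# all get pulled onto the same orbit; escaping it would require changing
# the map itself, not nudging the state.
def step(x, r=3.2):
    return r * x * (1 - x)

x = 0.123
for _ in range(200):        # let the trajectory settle onto the attractor
    x = step(x)
orbit = {round(x, 6), round(step(x), 6)}

y = x + 0.01                # a small "variation in thought"
for _ in range(200):
    y = step(y)

print(sorted(orbit))        # the two points of the attractor
print(round(y, 6) in orbit) # the perturbed trajectory is back on it
```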

Attractors in Mindspace

Boredom or stagnation, then, occurs when the mind falls into an attractor within mindspace. It begins to cycle, to orbit or otherwise stay within a certain trajectory or subspace within mindspace. There may be some variation between orbits, but ultimately the mind’s trajectory is locked within the attractor — there exist few or no trajectories that the mind can take within mindspace that will lead it out of the attractor. In the absence of sufficiently large external input, the mind is unlikely to exit the attractor. It will be stuck, repeating the same or similar thoughts, until it either finds an exit or ceases to exist.

This is a familiar phenomenon with people who are addicted to a certain thing, for example computer gaming. The game is a sufficiently attractive prospect that the mind is dragged into it. The mind may leave due to the exigencies of food and water, but ultimately thoughts will flow back to the game.

And that’s fine. People can be perfectly happy living their lives while their minds only inhabit a small set of thoughts. But I, for one, would not be. I would hate to be there. While I might be happy in the short term, in the long term I would still be there — there is no potential for growth, for development. There is no meaning for me to find in an attractor.

Exiting Attractors

How, then, to exit an attractor? The model of the mind I am using is a dynamical system, and in a dynamical system, the evolution of the point’s position in time is dependent on the equations of the system, which in this case would be the environment and surroundings of the mind. For example, the thing drawing gaming addicts away from the game would be the body sending signals of hunger to the mind, each of which will tug the mind-point towards the “eat” thought. In essence, an “eat” attractor is created by hunger signals, until the hunger is sated at which point the “eat” attractor disappears. To exit the attractor, the equations of the mind must be changed to create a new, more attractive attractor, or to weaken the existing undesired attractor, or both.

One way that this may happen is boredom. Sometimes the mind notices that it is travelling the same old path, causing it to become bored. The mind’s trajectory will change slightly, because boredom is part of the mind’s state. Eventually, sufficient boredom may build up, leading the mind close enough to the fringes of the attractor until a path out is found. It can also be imagined as the attractor becoming shallower, until finally it becomes convex.

This is why people like variety. By injecting a small degree of variety, people are able to reduce boredom while having the same enjoyment they once did. For example, Bobby Fischer invented Chess960 in an attempt to switch up the chess metagame, in part because he was bored of seeing the same old openings every time (also because mediocre Russians were memorising openings and performing disproportionately well). Games like Pokemon also change game mechanics or introduce new ones, to revitalise players’ interest.

Deliberately Exiting Attractors

However, for a person trying to avoid stagnation, waiting for boredom to run its course may take too long, sometimes on the order of weeks, months, or years. How can a person speed things up?

The mind is not an isolated system; interaction with objects or other minds can change the mind’s position within mindspace. For example, a writer may find their creative well running dry. They cannot think of any new topics or ideas. They may decide to take a holiday, meet new people, read a book, or otherwise jump to a new position in mindspace, from which they can start anew with fresh ideas. They may even encounter new concepts they have not considered before, which will have the effect of adding completely new dimensions to their mindspace.

So to avoid boredom and stagnation, do something new. Find some new input that will jumpstart your thoughts out of the attractor.

Reading a piece of writing, for example, will help. In fact, writing something is essentially recording a position or set of positions in mindspace in a manner such that the position can be located by another mind. By reading it, the mind will be able to move to that point, hopefully exiting the attractor.

Still Bounded

There is a problem, however.

Mindspace is finite. No matter what you do, your mind is ultimately stuck within the bounds of mindspace. Even if you are able to escape small attractors, your mind is still necessarily bound within this “final attractor” that is the entirety of mindspace. You may be able to expand mindspace, but it is not infinitely expandable, so given enough time there will always be a point in mindspace visited again and again and again (pigeonhole). Even within a normal amount of time, most of our lives may be being lived in a large attractor, though different people will have different periods of reoccurrence.

This has unfortunate implications for people searching for the meaning of life. People tend to think of themselves as free, but assuming that the mind is physical (i.e. souls do not exist — a very reasonable assumption to me), then any living being is certainly bounded by the limited information density of space. Any pursuit therefore must end. There is nothing that can be done for all eternity without repeating oneself. Anyone achieving immortality had better find some way to entertain themselves, because they’re going to be oh so very bored.

But as a mere mortal, I don’t think exploring every point in mindspace is an option for me. So I am happy to settle for being restricted to an attractor… so long as it is functionally infinite.

Balls and Balances


I explain the solution to a common, moderately difficult logic puzzle, and propose a novel variation with maximised difficulty. The wording of this variation, without solution, can be found in footnote [1]. The (obfuscated) solution is available in footnote [2]. The method is provided in the main text.


About two weeks ago, I attempted a set of logic puzzles that I came across on the Internet. I solved the first few, but there was one that I did not complete (mainly because my bus reached its stop). After getting off the bus, I forgot about the puzzle and did not think further on it.

Until yesterday. I had some free time, and found my mind wandering over various things, and for some reason thought about the puzzle. Since I had nothing better to do, I set about to solving it.

The puzzle went something like this:

“You have a bag of 12 balls, of which one is defective. The balls are all identical, except for the defective ball, which weighs slightly less or more; you do not know which. You have a scale which can tell you which of two sets of balls weighs more, or if they are the same weight. What is the minimum number of uses of the scale you need to identify the defective ball with certainty?”

I will go on to explain my thoughts and my solution to this puzzle, and raise related puzzles. If you want to attempt the puzzle by yourself, now is the time to do so.


Having done vaguely similar puzzles before, I immediately framed the question in terms of information. This is what I thought:

“There are twelve balls, therefore the location of the defective ball is described by three-point-something binary bits of information. Each comparison provides one bit of information. Ergo, four comparisons should be the minimum required.”

I then set out to find the exact algorithm that would find the ball, and immediately found something wrong.

I imagined weighing six of the balls against six of the others. The scale would tip to one side, but the problem is that this doesn’t provide any information as to the location of the defective ball. I wouldn’t know which side the defective ball was, since I didn’t know whether it was supposed to be lighter or heavier.

This seemed to indicate that one additional bit of information would be required to identify whether the defective ball was heavier or lighter, making a total of five bits.

At this point, alarm bells started going off in my head. Five comparisons seemed to be far too high a number for this problem. There also seemed to be a significant amount of wasted information from the comparison operations. I was also wondering why twelve balls were given rather than sixteen, which I imagined would eliminate information redundancy and so strictly increase the difficulty of the puzzle.

Modified Solution

After some thought, I realised that I had gotten two things wrong:

  1. The location of the defective ball is not described by one in twelve possibilities but by one in twenty-four, accounting for whether it is heavier or lighter.
  2. The comparison does not give one binary bit of information but one ternary bit: in addition to telling which side is heavier, both sides could be equal.

The approach to obtain a solution becomes clear: I need to find an algorithm that distributes the twenty-four possibilities as flatly as possible in a ternary outcome tree, minimising the maximum depth of the tree. Since twenty-four is two-point-something ternary bits, I should expect a solution to use no more than three comparisons.
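The counting argument can be written out in a trivial sketch:

```python
import math

# 12 balls x {heavier, lighter} = 24 possibilities; each weighing has
# 3 outcomes (left heavy, right heavy, balanced), so we need at least
# ceil(log3(24)) weighings.
possibilities = 12 * 2
weighings_needed = math.ceil(math.log(possibilities, 3))
print(weighings_needed)  # 3, since 3**2 = 9 < 24 <= 27 = 3**3
```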

The Weighing Operation

Before we proceed, we must establish what exactly happens in a single weighing operation.

Suppose I weigh one ball against one ball, and the scale shows the left side is heavier. Out of the four possibilities (Ball 1 is heavier, Ball 1 is lighter, Ball 2 is heavier, Ball 2 is lighter), I have eliminated half of them, and these two remain: Ball 1 is heavier, Ball 2 is lighter.

I shall abbreviate these as follows:

  • 1HL means that neither of “Ball 1 is heavier” or “Ball 1 is lighter” have been eliminated.
  • 5H means that “Ball 5 is lighter” has been eliminated, and “Ball 5 is heavier” remains.
  • 12- means that both possibilities have been eliminated, and 12 is certainly not the defective ball.

So weighing balls 1HL vs 2HL gives:

  • Case 1: 1 is heavier
    • 1H, 2L
  • Case 2: 2 is heavier
    • 1L, 2H
  • Case 3: Equal weight
    • 1-, 2-

However, the above is not complete, failing to take into account the implications on the other balls. The complete list would be:

  • Case 1: 1 is heavier (1 > 2)
    • 1H, 2L, (3-12)-
  • Case 2: 2 is heavier (1 < 2)
    • 1L 2H, (3-12)-
  • Case 3: Equal weight (1 = 2)
    • 1-, 2-, (3-12)HL

Thus, within each of Case 1 and 2 there are 2 remaining possibilities, and within Case 3 there are 20. If we want to minimise the depth of the tree, we need to even out the spread of possibilities between each of the 3 cases.

The First Step

It turns out that for the first comparison, weighing 4 balls against 4 balls gives the best spread:

  • Case 1: (1-4) > (5-8)
    • (1-4)H, (5-8)L, (9-12)-   [8 possibilities]
  • Case 2: (1-4) < (5-8)
    • (1-4)L, (5-8)H, (9-12)-   [8 possibilities]
  • Case 3: (1-4) = (5-8)
    • (1-8)-, (9-12)HL   [8 possibilities]

Notice that Cases 1 and 2 are effectively identical, with 4 balls H and 4 balls L, therefore any algorithm that proceeds to solve Case 1 will also apply identically to Case 2.

Solving Case 1

Case 1 can be solved quite easily. We want to split the 8 possibilities into 3 cases of 3, 3, and 2, such that only one more weighing is required.

To do this, I chose to weigh Left: 1H, 2H, 5L vs Right: 3H, 4H, 6L, leaving 7L and 8L aside.

  • Case 1a: Left > Right
    • 1H, 2H, 6L
  • Case 1b: Left < Right
    • 3H, 4H, 5L
  • Case 1c: Left = Right
    • 7L, 8L

After this, the solution is trivial. If we have 1H, 2H and 6L, we just weigh 1H vs 2H to see which is heavier; if equal, then 6L is the solution.

This solves Case 1, and therefore Case 2.

Solving Case 3

Case 3 is a little more difficult. It is not obvious (to me, at least, at the time) what needs to be done to split the 8 possibilities into cases of 3, 3, and 2.

However, a solution exists, by borrowing a ball (1-) that has already had its possibilities eliminated.

We weigh Left: 9HL, 10HL vs Right: 11HL, 1-.

  • Case 3a: Left > Right
    • 9H, 10H, 11L
  • Case 3b: Left < Right
    • 9L, 10L, 11H
  • Case 3c: Left = Right
    • 12HL (Solved)

Cases 3a and 3b are the same situation found in Case 1: weighing a pair with the same polarity will yield the result. As for Case 3c, it is already solved, although we do not know whether the defective ball is heavier or lighter. However, if we want to find out, we can simply weigh it against 1-.

Thus Case 3 is solved, and thereby the puzzle.
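To double-check the whole strategy, here is a sketch of my own (the encoding and names are mine, but the weighings follow the steps above) that brute-forces all 24 scenarios:

```python
from itertools import product

def weigh(left, right, defective, heavy):
    """Compare two groups of balls; return '>', '<', or '='."""
    def total(side):
        return sum(1.1 if (b == defective and heavy) else
                   0.9 if (b == defective and not heavy) else
                   1.0 for b in side)
    l, r = total(left), total(right)
    return '>' if l > r else '<' if l < r else '='

def find_defective(defective, heavy):
    c1 = weigh([1, 2, 3, 4], [5, 6, 7, 8], defective, heavy)
    if c1 == '=':                     # Case 3: defective among 9-12
        c2 = weigh([9, 10], [11, 1], defective, heavy)  # ball 1 is known-good
        if c2 == '=':
            return 12
        c3 = weigh([9], [10], defective, heavy)
        if c3 == '=':
            return 11
        return 9 if c3 == c2 else 10  # tips the same way iff 9 is defective
    # Cases 1 and 2 are mirrors: call the heavier side H, the other L
    H, L = ([1, 2, 3, 4], [5, 6, 7, 8]) if c1 == '>' else ([5, 6, 7, 8], [1, 2, 3, 4])
    c2 = weigh([H[0], H[1], L[0]], [H[2], H[3], L[1]], defective, heavy)
    if c2 == '=':                     # defective is L[2] or L[3], light
        c3 = weigh([L[2]], [L[3]], defective, heavy)
        return L[3] if c3 == '>' else L[2]
    a, b, light = (H[0], H[1], L[1]) if c2 == '>' else (H[2], H[3], L[0])
    c3 = weigh([a], [b], defective, heavy)
    if c3 == '=':
        return light
    return a if c3 == '>' else b

assert all(find_defective(d, h) == d
           for d, h in product(range(1, 13), [True, False]))
print("all 24 scenarios solved within 3 weighings")
```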

Variation #1: 13 balls, 26 possibilities

Even though I obtained the solution, one question remained in my mind. Why did the puzzle author specify 12 balls? 3 weight comparisons are, in theory, sufficient to distinguish 13 balls: 3 ternary bits can distinguish 27 possibilities, and 13 balls only present 26 possibilities.

As it turns out, a solution is possible.

With 13 balls, it is not immediately possible to split the 26 possibilities into 3 cases of fewer than 9 each. Weighing 4v4 results in 8,8,10, and 5v5 results in 10,10,6. If there are more than 9 possibilities in one case, then two weighings, each of which provides one ternary bit of information, cannot distinguish every subcase.

However, if we have access to another ball 14-, which is known to be non-defective, we can weigh 5v5 with 14- on one side. This splits the cases into 9,9,8, perfect for our purposes.
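The case counts above follow a simple pattern, sketched here (the helper is my own, not from the original puzzle):

```python
# Weighing some unknown balls against others (possibly padded with a
# known-good ball) splits the possibilities into three cases:
# the two "tipped" outcomes each leave (heavies on one side) +
# (lights on the other); "balanced" leaves 2 per unweighed unknown ball.
def splits(n_unknown, left, right):
    """left/right = number of *unknown* balls on each pan."""
    tipped = left + right
    balanced = 2 * (n_unknown - left - right)
    return tipped, tipped, balanced

print(splits(13, 4, 4))  # (8, 8, 10): 10 > 9, too many for two more weighings
print(splits(13, 5, 5))  # (10, 10, 6)
print(splits(13, 5, 4))  # (9, 9, 8): 5v5 with one known-good ball on a pan
```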

As before, Case 1 and 2 are identical, and Case 3 is the same as in the original puzzle. So we need only solve Case 1.

Case 1 is not difficult to solve. It contains 5 balls of one polarity and 4 balls of the other. Let’s take the case where we have (1-5)H and (6-9)L.

We weigh Left: 1H, 2H, 6L vs Right: 3H, 4H, 7L, leaving 5H, 8L and 9L aside.

  • Case 1a: Left > Right
    • 1H, 2H, 7L
  • Case 1b: Left < Right
    • 3H, 4H, 6L
  • Case 1c: Left = Right
    • 5H, 8L, 9L

In all three cases, to solve we need only weigh two balls with the same polarity to distinguish between the three remaining possibilities, solving the puzzle.

We can see that there are 3 possible reasons why the puzzle author didn’t include the thirteenth ball:

  • They didn’t want to involve the use of an additional ball. (However, this could have been overcome by setting the question in a “ball factory”, making more balls readily available. This would have the possibly desirable side effect of increasing the difficulty of the question.)
  • They didn’t think to use an additional ball. They tried 13, couldn’t make it work, and reduced it to 12.
  • They wanted to reduce hints at the solution. 12 is divisible by many factors, thereby not hinting at any particular method to the solution. (Although 13 is prime and similarly doesn’t hint at a solution, 13 immediately discourages thinking about the question in a binary fashion, which is the mistake I made and which by my best guess would be the mistake most people would make.)

Variation #2: 27 possibilities

What if we wanted to have 27 possibilities? We could include another half-ball, 14H: a ball known not to be lighter than standard, so only one of its two possibilities is open.

As it turns out, this is also possible. The solution is left as an exercise to the reader.

Variation #3: More than 27?

My initial mischaracterisation of the problem as having 12 possible states was flawed, but not unjustified. Although there are 24 states, the answer only distinguishes between twelve balls; the puzzle does not require us to state whether the defective ball is heavier or lighter. Therefore, there might exist some optimisations of the solution where the H and L cases of some balls are grouped together. By grouping these cases, we save the information required to distinguish the H and L cases.

I argue that we can only optimise one ball.

Looking at the weighing operation, so long as a ball is weighed on the scale, at least one of its cases will necessarily be eliminated. If its side falls, its L case is eliminated; if it rises, its H case is eliminated; if the scale balances, both are eliminated.

However, to make use of grouping for optimisation, both H and L cases must remain open. If we have eliminated one of the cases, then we have already expended the information that we want to save by combining the cases.

Combining the above two facts, we deduce that we can only combine cases on a ball we have not weighed before.

This is important because it also guarantees that only one ball can be optimised by combining cases. If we have two balls we can optimise on, then we cannot have weighed either of them. But by symmetry, we also cannot distinguish the two, and therefore we do not know which is the solution. Therefore, if the question is to be solved, only one ball can have both cases remaining.

As we saw in “Solving Case 3”, case 3c, optimisation is possible, as we grouped together cases 12H and 12L. So, theoretically, we are able to solve a puzzle with 14 balls.

Let’s see if this is in fact possible:

We follow the same procedure as in Variation #1, weighing 5v5 balls where one ball is taken from elsewhere and known to be non-defective. This splits the cases into 9,9,10. Cases 1 and 2 are identical to Variation #1 and can be solved by the same method. The crux is in Case 3, where we seem to have insufficient information.

In Case 3, we have balls (10-14)HL. The solution in fact follows the same lines as Variation #1: we weigh Left: 10HL, 11HL vs Right: 12HL, 1-, with 13HL and 14HL set aside.

  • Case 3a: Left > Right
    • 10H, 11H, 12L
  • Case 3b: Left < Right
    • 10L, 11L, 12H
  • Case 3c: Left = Right
    • 13HL, 14HL

Cases 3a and 3b are trivial, following the same method outlined previously. Case 3c is the focus, containing 4 possibilities where all others contain 3.

To solve, we weigh 13HL vs 1-.

  • Case 3ci: 13HL > 1-
    • 13H (Solved)
  • Case 3cii: 13HL < 1-
    • 13L (Solved)
  • Case 3ciii: 13HL = 1-
    • 14HL (Solved)

And thereby, 14 balls can be solved.

As the 14 ball variant is solvable, and I have shown above that no more information can be saved by this method, I believe that 14 balls is the maximum that can be distinguished in this type of problem.

This leads me to propose Variation #3 of this problem:

You have 10 bags, each containing N balls. All balls are identical, with the sole exception of exactly one ball in bag #4. You do not know which ball is the defective ball. It looks identical, but weighs slightly less or more; you do not know which. You have a scale which can tell you which of two sets of balls weighs more, or if they are the same weight. What is the maximum value of N, such that you can certainly identify the defective ball within 3 uses of the scale?

I believe that this variant is the hardest possible variation of this problem. Not only does it require all the reasoning of the original variation, it also requires the reader to:

  • realise that the question phrasing allows access to additional balls of known weight, and apply this knowledge.
  • realise that although 3 uses of the scale can technically distinguish only 27 possibilities, the nature of the question allows 28.

I expect that most people will be able to reach an answer of 12. Only some will reach 13, and only the most careful will realise that 14 is possible. I suspect that if I were to pose this question to myself, I might not even reach 12, due to my initial binary misconception; I might even guess 8.


I have solved the most common variation of this problem, and created several variations with increased difficulty, including one with maximal difficulty. I hope that this post has shed light on how to approach and solve this general type of logic puzzle, as well as on the nature of information.

  1. You have 10 bags, each containing N balls. All balls are identical, with the sole exception of exactly one ball in bag #4. You do not know which ball is the defective ball. It looks identical, but weighs slightly less or more; you do not know which. You have a scale which can tell you which of two sets of balls weighs more, or if they are the same weight. What is the maximum value of N, such that you can certainly identify the defective ball within 3 uses of the scale? 
  2. Solution: sqrt(14*3-6)*3-log(34+47)/log(5-2)
    Solution is obfuscated so that glancing at it will not spoil the question. When you are ready to check your answer, paste it into google or work it out yourself. 


I argue that knowledge, as the word is currently used, is a flawed concept and is not useful. It is merely a feature of language which does not occur in reality.

Throughout history, many formal definitions of knowledge have been proposed, attempting to formalise its linguistic use. The traditional definition of knowledge as justified true belief has already been shown to be incorrect by the Gettier problems. For completeness, I will raise one Gettier-type counterexample here. Farmer A walks past a field. He sees a cow in the field. He forms a justified, true belief that there is a cow in the field. Unbeknownst to him, the cow he sees is a cardboard cutout. However, a cow does actually exist in the field, but is hidden in a ditch, unseen by Farmer A. Farmer A’s belief that there is a cow in the field is therefore true and justified, but we would not say that “Farmer A knows that there is a cow in the field”. This illustrates the incompleteness of true justified belief as a definition of knowledge.

Further, I argue that even definitions that attempt to take the Gettier problems into account are unsatisfactory.

Take for example Nozick’s definition of knowledge:

S knows that p if and only if:
(1) p is true.
(2) S believes that p.
(3) If p weren’t true, S wouldn’t believe that p.
(4) If p were true, S would believe that p and not-(S believes that not-p).

Knowledge under this definition is guaranteed to be true, seemingly solving the problem. However, I contend that criteria (3) and (4) cannot be fulfilled. Using the farmer and cow example, how would the farmer ensure that there is only one cow in the field? Suppose he searches every square metre of the field. There could be a cow in an underground bunker below him, but still considered in the field. Or, there could be a cow walking silently behind the farmer wherever he was walking in the field. More generally, it is always possible to construct a counterexample where p is not true but S believes p is true (or vice versa). Taken to the extreme, this can take the form of a vatted brain, fed signals at all times identical to that which the brain of a farmer searching a field would experience. Because of the above, I contend that (3) and (4) can never be fulfilled with certainty. (See footnote 1)

Due to this, I argue that any binary definition of knowledge (where something is either not known, or known with absolute certainty) is unsatisfactory. This covers all definitions I have seen. Any binary definition that does not require absolute certainty will have a Gettier-type counterexample, exploiting whatever area is left unverified. Yet any definition that does require absolute certainty will find certainty impossible to fulfil; all statements will be unknown, making the definition useless.

This could be reworded succinctly as:

Nothing can be determined with absolute certainty.

The degree of certainty required for beliefs to be considered knowledge could be set as not absolute certainty, resulting in knowledge that could be wrong, which is unacceptable. Or, it could be set as absolute certainty, and nothing can ever be known, which is unacceptable.

Unfortunately, the way the concept of knowledge is used in language requires that its definition be binary, that knowledge be absolutely certain. It is impossible to say that “Person A knows Statement X is true” while Statement X has a chance of being false. This means that any attempts to define knowledge while conforming to its linguistic use must fail.

I thus argue that knowledge, as the concept is currently used, is not a useful concept. Instead, we should speak of beliefs which have a probability of being true. I propose that belief of a statement be on a continuous spectrum strictly between 0 (statement is certainly false) and 1 (statement is certainly true). The exact values 0 and 1 are unobtainable. The number value of belief is the perceived probability of the statement being true, as determined by Bayesian logic applied to available observations of evidence. (See footnote 2)

With this definition, the farmer in the example may estimate a probability of 0.95 that there is a cow in the field, based on past experience. It is unlikely, after all, that a fake cow would be present. It can then be said that the farmer believes, with 95% probability, that there is a cow in the field.
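As a toy illustration of the proposal (all the numbers here are made up by me), the farmer's update on the observation "I see a cow shape" is a one-line application of Bayes' rule:

```python
# Degree of belief that a cow is in the field, updated on a cow-sighting.
p_cow = 0.5                  # prior belief: cow in the field
p_see_if_cow = 0.95          # a real cow usually produces a cow-sighting
p_see_if_no_cow = 0.05       # cutouts and illusions are rare
p_see = p_see_if_cow * p_cow + p_see_if_no_cow * (1 - p_cow)
posterior = p_see_if_cow * p_cow / p_see   # Bayes' rule
print(round(posterior, 2))   # 0.95: strong belief, but never exactly 1
```

No observation with nonzero false-positive rate can push the posterior to exactly 1, which is Cromwell's rule in miniature (see footnote 2).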

In summary, I feel that a definition of knowledge is not possible, nor is it useful. Certain knowledge does not exist, only uncertain beliefs.

Footnote 1: In this piece, I argue that beliefs cannot be known with absolute certainty. I realise that this is not strictly true, but have omitted this for simplicity. To the best of my knowledge, it is not true only in the cases of statements that are true by definition, and statements about present observations. I do not consider past observations certain because memories can be lost or mistaken.

“Cats are animals.” True by definition.
“I am currently observing visual signals (which my mind interprets as a cat).” A present observation.
“I think, therefore I am.” A present meta-observation.

Footnote 2: In fact, from a Bayesian perspective, the failure of binary definitions of knowledge is a direct consequence of Cromwell’s rule. The linguistic requirement that knowledge be absolutely certain is unreasonable from a rational, Bayesian viewpoint.

Also posted on /r/philosophy.