Friday, December 31, 2010

Induction of Objectivity (Aristotle)

[Previous post in the series: "Reduction of Objectivity (Aristotle)"]

Objectivity now being reduced, we can work through the steps Aristotle had to take in order to induce his principle of objectivity. There are essentially five:
  1. Grasp the distinction of percepts and concepts.
  2. Understand that concepts are capable of error, whereas percepts are not.
  3. Learn that the functioning of concepts is under our control, whereas the functioning of percepts is not.
  4. Discover that we can somehow use percepts as a means to measure concepts.
  5. Know, then, that a method is necessary, and that it is possible because we know what it would consist of: reducing the fallible part to the infallible part.

Sunday, December 26, 2010

Reduction of Objectivity (Aristotle)

[Previous post in the series: "Induction of Justice"]

The aim of this essay is to reduce the idea of objectivity so that we can inductively reach Aristotle’s understanding of the concept. It’s important because we need his understanding of the concept to really understand Ayn Rand’s discoveries. After inducing this, we can induce the full, Objectivist understanding of objectivity from Aristotle’s development.

The definition of objectivity Aristotle would have given: “volitional adherence to reality by the method of logic.”

Dictionary definition: “Not affected by personal feelings; based on facts.” Based on facts, and not based on feelings—this is the main thing people understand about objectivity.

It isn’t enough to set aside your feelings in a cognitive context without some other means of understanding facts, and “based on facts” can’t simply be about percepts, because then all conceptual knowledge would be barred from objectivity. So the dictionary definition tells us that we need a method, or rules of thinking, that ties thinking to facts instead of feelings.

The first step down from this idea of objectivity is: “The method of adhering to reality to gain knowledge,” and we learn what the method is later. How would we grasp the idea that we even need a method?

It isn’t as simple as this: from observation and induction we know that man is capable of error, that he’s fallible; from this we deduce that we can’t be certain of our conclusions, and therefore that we need a method of gaining knowledge to guide us. That is a rationalistic argument.

It is necessary to grasp that we’re capable of error if we hope to even reach the concept of objectivity, but “objectivity” and “error” are vastly far apart from each other, cognitively speaking. The understanding of the fact of error came very easily, going way back into prehistory: people would bring home the wrong animal to eat, bring the wrong things needed to start a fire, etc. The striking fact, which the rationalist would overlook, is that the idea that people are fallible didn’t suggest to anyone before the Greeks that we were in need of a method for checking our thinking and conclusions. In effect, the rationalist is taking as common sense what was actually a monumental discovery by the Greeks, specifically by Aristotle. The pre-Greeks had a means to deal with errors, but it wasn’t objectivity; it was intrinsicism: authority, their faith in authority. The Pharaoh knows, or God knows, or whatever. It’s an invalid leap to go from “people are capable of error” to “we need a method of checking our thinking.”

So, to grasp why we would need a method at all, we need to know something about the mind: specifically, what its operations are, what it is capable of, where it goes wrong, and how. If we don’t know how it goes wrong, or where, or what it could be doing differently from what it’s doing, then we have no means of improving the mind. The first thing we need to know is that there are some areas or operations of the mind in which it is safe, or infallible. We have to know that first, before we can start looking for a method, since that knowledge gives us a clue as to what we can do when we’re using a fallible process.

Once we know that some part of our mind is error-free, we can figure out later that we can guide our minds reliably by using the safe data to check the fallible data, which is the essential process of objectivity. Later, we determine that the way to do this checking is to reduce all conceptual products to sensory observation. This idea of infallible data is important: without it, we could never devise a method of guiding ourselves to the truth, and we could not count on it to underlie our conclusions, including our conclusion as to how we can improve our mental processes. There are, then, important distinctions within our individual consciousness which we have to discover before we can construct a method for correcting our errors, or even preventing them.

How could someone discover that there’s a process that can go wrong as opposed to a process that is safe?

Well, we know that we have free will, that we have control over something in our consciousness, because it would be impossible to wonder about how to guide our thinking, or to find ways to improve our conclusions, if the whole operation of the mind were out of our control.

The idea we’re getting to is that Aristotle had to make a crucial discovery: there’s a part of the mind that can go wrong, and that’s the part that we’re in control of, where our free will reigns, and that there’s a part of the mind that is safe, where we don’t need control. As a result, we can decide to check the part that can go wrong using the other, error-free part. That’s what we have to know before we can search for a method of guiding our thinking.

What obvious major discovery about consciousness had to be made before we could determine that one part is fallible while the other isn’t, and that one part is controlled by our mind while the other is not? What’s the basic distinction of consciousness that had to be discovered before we could discover other distinctions and thus grasp the need of a method? The distinction between percepts and concepts. Not in those exact words: Plato and Aristotle, for instance, spoke of “the realm of sense” and “the realm of ideas.” Ideas or Forms or Universals or Essences: how we word it is irrelevant. The point is that without this distinction, we would have no footing for prescribing guidance.

So, we couldn’t reach the method of logic until we knew that the method was necessary and possible, and to know these we would need to know three things:

1. We need to know what kinds of error are possible. That means that we would have to discover what kind of mental content is fallible vs. infallible. This is necessary, because it gives us a clue as to what we’re trying to correct (the fallible part), and that we’re trying to accomplish this by somehow measuring the fallible part against the infallible part.
2. We need to know that we have control over the fallible part, that free will reigns over the fallible area. There’s no point in prescribing a method if we have no control over the relevant part of the mind.
3. We need to know the relationship between these two areas: how could we relate, measure, or reduce the fallible to the infallible?

Once we know those three things, we’ll know that a method is both necessary and possible. The distinction involved, between percepts and concepts, is directly observable: one by extrospection, the other by introspection.

[Next post in the series: "Induction of Objectivity (Aristotle)"]

Wednesday, December 22, 2010

Advances in Baconian Induction: John Herschel (Part 1 of 3)

Introduction

John Frederick William Herschel (1792-1871) was an important 19th-century scientist, arguably the most important. (I currently put William Whewell and Herschel on nearly the same footing, with Whewell having a slight edge.) He studied and contributed to the fields of astronomy, mathematics, chemistry, botany, and electricity. He was also one of the first modern "philosophers of science," and an advocate of the use of inductive reasoning in scientific investigations, particularly a version of Francis Bacon's method of induction informed by the discoveries of science since the early 17th century (Bacon died in 1626). To promote and encourage the activities of the "men of science," Herschel published A Preliminary Discourse on the Study of Natural Philosophy (1830), a treatise on the scientific method detailing the elements of science, the scientific subjects that had been and were being studied, and the procedures that a good man of science should utilize. (This book would prove influential for many later scientists, notably Charles Darwin.) Most importantly, Herschel proposed in this work an enhancement of Francis Bacon's philosophy of induction, discussing both the nature of inductive reasoning and the value that should be placed upon it in science. Indeed, Herschel would remind us, the very progression of science from the state of pre-scientific speculations and collections of facts is a progression of inductions.

This three part essay will detail the elements and rules of Herschel's view of induction, starting with his empiricist view of experience being the source of all knowledge, working our way through his rules for inductive reasoning and ways for verifying inductions made, and the role of analogy, hypothesis, and the complimentary relation of induction and deduction in science. As a result, it isn't a complete discussion of all the important points about science made by Herschel in his Preliminary Discourse, such as the role of precise measurement in describing laws of nature, and I would suggest that the reader takes some time to read the book itself.