Monday, November 18, 2013

Luciano Floridi and a Short Introduction to Information: Part 2

Semantic information as presented by Luciano Floridi deserves a discussion all on its own. He not only describes how it differs from MTC, which we talked about in Part 1, but also introduces two logical paradoxes that each require some explanation. To prepare ourselves, let's take a look at the following formal definition of semantic information:

"p qualifies as factual semantic information if and only if p is (constituted by) well-formed, meaningful and veridical data." (p. 50)

Note the veridical aspect in this definition. Information that is not true is not considered factual semantic information. Floridi explains that there is a difference between semantic content and semantic information, which the definition tries to express. Instances of the former can be either true or false, but instances of the latter are always true.

Take, for example, the case of John, who tells a mechanic that his car's battery is flat because his wife forgot to switch off the car's lights. In actual fact it was John who forgot to switch them off, but he does not want to own up to it. That the battery is flat is a case of semantic information, because it is true; but what he told the mechanic about the cause, namely his wife's carelessness, is merely semantic content, because it is false.

Advantages of the definition


There are several advantages to the definition, of which three are highlighted by Floridi:

1. False information is not genuine information
2. Knowledge and information are directly related
3. It solves the Bar-Hillel-Carnap paradox

We've already discussed the first point on the list, but the second point is equally important. Knowledge and information are closely related; they belong to the same conceptual family. Any epistemic project, for instance a subject like biology, is made up of bits of information related to one another. These bits of information account for one another and so provide a coherent view of the subject. Semantic information therefore forms the correct starting point for any scientific investigation.

Before proceeding to the paradoxes, including the relevance of the third advantage, it is worth clarifying the relationship between MTC and semantic information.

Relationship to MTC and IRP


At its inception MTC generated much excitement for its theoretical potential in the emerging field of information. Unfortunately, as time passed it became clear that MTC could not address many of the semantic concerns of information. For instance, it could not answer semantic questions about truth and error, explain how one piece of information relates to another, or assist investigations into more complicated epistemic and psychological phenomena.

While there continues to be debate about the degree to which semantic information is constrained by MTC, there is a general acknowledgement that the constraints have loosened over time. Despite this, some connections between the two have remained stable, most notably the communication model (explained in Part 1) and the Inverse Relationship Principle (IRP).

As IRP is important for understanding the paradoxes, let's take a quick look at it. Remember that MTC describes information in terms of probability. For instance, a unary source (one that can only ever emit the same symbol) produces its output with a probability of 1, and hence provides no new information.
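
To make this concrete, here is a minimal Python sketch (my own illustration, not anything from Floridi's text) of Shannon entropy, MTC's measure of average information per symbol. A fair coin yields one bit per outcome; a unary source yields zero.

    import math

    def entropy(probabilities):
        """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero terms."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # A fair coin: two equally likely symbols -> 1 bit per outcome.
    print(entropy([0.5, 0.5]))   # 1.0

    # A unary source: a single symbol with probability 1 -> 0 bits.
    # The outcome is certain, so each message carries no new information.
    print(entropy([1.0]))        # -0.0 (i.e. zero)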


In light of this, IRP can be understood as:

"the inverse relation between the probability of p - where p may be a proposition, a sentence of a given language, an event, a situation, or a possible world - and the amount of semantic information carried by p"

The scandal of deduction


This brings us to the first of the two paradoxes, namely the scandal of deduction. As per IRP, the higher the probability of p, the less informative it is. At the extreme, when P(p) = 1, p is at its least informative because it is always true.

Since it is always true it is also a tautology, and tautologies are notoriously non-informative. If John is told that the mechanic "will or will not" fix the car's battery, the statement is undeniably true, yet it tells him nothing new.
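
We can verify this mechanically. The sketch below (again my own illustration) checks that "will or will not" is true under every assignment, so its probability is 1 and, by IRP, it carries zero information.

    import math

    # "The mechanic will or will not fix the battery": q or (not q).
    # True under both possible truth values of q, so it is a tautology.
    print(all(q or not q for q in (True, False)))   # True

    # A tautology has probability 1, so by IRP it carries
    # inf(p) = -log2(1) = 0 bits of information.
    print(-math.log2(1.0))                           # -0.0 (i.e. zero)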

The problem arises when we compare this with classical logic. As Floridi notes, "in any valid deduction, the information carried by the conclusion must already be contained in the information carried by the (conjunction of) the premises" (p. 55). In other words, a deduction is valid only if the conditional "premises → conclusion" is a tautology.
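
The same truth-table method shows why valid deductions amount to tautologies. Take modus ponens (from p and p → q, infer q), a standard example of my own choosing: the conditional "premises → conclusion" comes out true in all four rows.

    from itertools import product

    # Modus ponens: premises are p and (p -> q); conclusion is q.
    for p, q in product([True, False], repeat=2):
        premises = p and ((not p) or q)       # p, together with p -> q
        conditional = (not premises) or q     # premises -> conclusion
        print(p, q, conditional)              # True in every row: a tautology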

As has already been noted, a tautology is non-informative; by implication, so is any logical conclusion. This counter-intuitive outcome is called the scandal of deduction. It suggests that our logical and mathematical endeavours provide no new information, a conclusion we would certainly want to resist.

One way to try to resolve the issue is by appealing to the psychological value of informativeness. On this view, logical reasoning elicits the meanings already contained in the premises, highlighting them in a way that makes them clearer to our human minds. But this approach does not explain how and why deductive reasoning is such an essential component of science; instead it suggests that deduction is optional, a mere aid to clarification.

A more successful way of resolving the dilemma involves the introduction of "virtual information" that assists the reasoning process and is then released by the time the conclusion is reached, leaving no traces.

To understand this, consider deciding between two hypothetical courses of action. Suppose you have an exam tomorrow and you are deciding whether to study for it or go to a party instead. Although you do not actually have foreknowledge, you can reason that if you study you might pass, but if you do not study you will certainly fail. After considering these hypothetical courses of action you realise that the first could help you attain the degree you've always desired. Thanks to logic, you decide to forego the party and study hard.

This process of reasoning involves "virtual information". The real outcome (studying) is arrived at only after you step outside the real situation in which the information applies and reason hypothetically about the information at your disposal. Once you have drawn your conclusion and made your decision, you step back into the real situation.

This demonstrates that logical deductions are indeed valuable and informative.

The Bar-Hillel-Carnap Paradox


As an exact opposite to the previous case, consider p becoming less and less probable. As it becomes less probable it should also become more informative. When P(p) reaches zero, p should logically be at its most informative. Yet a proposition with zero probability is effectively a contradiction, for example "the car's battery is and is not flat". We should be receiving the most semantic information in this case, but instead we are faced with a contradiction.
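
In terms of the inf measure sketched earlier, the trouble is easy to see: -log2 P(p) grows without bound as P(p) shrinks, and is undefined at zero, precisely where the only candidate proposition is a contradiction. A small sketch (again my own illustration):

    import math

    # inf(p) = -log2 P(p) grows without bound as P(p) approaches zero.
    for prob in (0.1, 0.01, 0.001):
        print(f"P(p)={prob}: inf={-math.log2(prob):.2f} bits")

    # The only proposition with P(p) = 0 is a contradiction:
    # "the battery is and is not flat" is false under every assignment.
    print(any(b and not b for b in (True, False)))   # False

    # math.log2(0.0) raises ValueError: the measure diverges at zero.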

This unexpected outcome is called the Bar-Hillel-Carnap paradox, after the two philosophers who first described and popularised it. The paradox is now considered a valid property of weakly semantic information, albeit a somewhat unfortunate one.

Nevertheless there is a simple way to avoid the paradox, namely by strengthening the semantic information with veridicality. As Floridi says, "if something qualifies as factual semantic information only when it satisfies the truthfulness condition, contradictions and indeed falsehoods are excluded a priori" (p. 59).

A contradiction is thereby a form of misinformation. The statement "the car's battery is and is not flat" fails to qualify as information in the strongly semantic sense; only the true statement "the car's battery is flat" does.

This concludes our discussion of semantic information.
