
Bayesian Inferencing techniques are named after the Reverend Thomas Bayes, an 18th-century Presbyterian minister who took a keen interest in probability and statistics.

His theorem rests on the idea that everything, no matter how unlikely, has a prior probability of being true, i.e. of occurring.

This probability may be so low that it is, in fact, zero; or it may be so high that it is, in fact, 1.

But somewhere between, and including, these two values lies the probability of any event occurring, even if we know nothing more about the situation. This is called the prior probability of occurrence of event H and is denoted P(H).

But what we want to know is the posterior probability of H, that is to say, the probability of H being true after some evidence has been observed.

If we denote this evidence as E, then this posterior probability can be denoted as P(H|E), which is the conditional probability of H being true given that evidence E has been observed.

We can also denote the probability of evidence E being observed given that hypothesis H is true as P(E|H); and the unconditional probability of evidence E being observed as P(E).

Bayes’ theorem then states that:

P(H|E) = P(E|H) × P(H) / P(E)

This is the basis of the inferencing techniques that *XMaster* uses today.
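As a worked illustration, the theorem above can be sketched in a few lines of Python. The probabilities used here are invented purely for the example; they are not taken from XMaster or any real application.

```python
def posterior(p_h, p_e_given_h, p_e):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    return p_e_given_h * p_h / p_e

# Hypothetical numbers: a hypothesis with prior P(H) = 0.01,
# evidence seen with probability P(E|H) = 0.9 when H is true,
# and unconditional probability P(E) = 0.05 of seeing that evidence.
p = posterior(0.01, 0.9, 0.05)
print(round(p, 3))  # 0.18
```

Note how observing the evidence raises the probability of H from the prior of 0.01 to a posterior of 0.18, because the evidence is far more likely when H is true than it is overall.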