
Neural Network & Fuzzy Systems

Introduction:- Knowledge refers to stored information or models used by a person or machine to interpret, predict, and respond appropriately to the outside world. The primary characteristics of knowledge representation are twofold: (1) what information is actually made explicit; and (2) how the information is physically encoded for subsequent use. By its very nature, therefore, knowledge representation is goal directed.

In real-world applications of "intelligent" machines, a good solution depends on a good representation of knowledge, and so it is with neural networks, which represent a special class of intelligent machines. Typically, however, the possible forms of representation, from the inputs through to the internal network parameters, are highly diverse, which tends to make the development of a satisfactory solution by means of a neural network a real design challenge.

A major task for a neural network is to learn a model of the world (environment) in which it is embedded and to maintain the model sufficiently consistent with the real world so as to achieve the specified goals of the application of interest. Knowledge of the world consists of two kinds of information:

1. The known world state, represented by facts about what is and what has been known; this form of knowledge is referred to as prior information.

2. Observations of the world, obtained by means of sensors designed to probe the environment in which the neural network is supposed to operate. Ordinarily these observations are inherently noisy, being subject to errors due to sensor noise and system imperfections.
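As a small illustration of these two kinds of information, the sketch below pairs an assumed prior model of the world state with noisy sensor observations of that state. The signal model, noise level, and variable names are assumptions made purely for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior information: an assumed model of the world state,
# here a known sinusoidal signal (purely illustrative).
t = np.linspace(0.0, 1.0, 100)
true_state = np.sin(2.0 * np.pi * t)

# Observations: measurements of that state corrupted by sensor noise,
# modelled here as additive Gaussian noise with an assumed std. dev. of 0.1.
sensor_noise_std = 0.1
observations = true_state + rng.normal(0.0, sensor_noise_std, size=t.shape)

# A training set for the network would pair inputs with such noisy observations.
print("first 3 noisy observations:", np.round(observations[:3], 3))
```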

The subject of knowledge representation inside an artificial neural network is, however, very complicated. Nevertheless, there are four rules for knowledge representation that are of a general, commonsense nature.

Rule 1. Similar inputs from similar classes should usually produce similar representations inside the network, and should therefore be classified as belonging to the same category.

Rule 2. Items to be categorized as separate classes should be given widely different representations in the network. The second rule is the exact opposite of Rule 1 (a small sketch of such a similarity measure follows Rule 4 below).

Rule 3. If a particular feature is important, then there should be a large number of neurons involved in the representation of that item in the network.

Rule 4. Prior information and invariances should be built into the design of a neural network, thereby simplifying the network design by relieving it of the burden of learning them.
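Rules 1 and 2 presuppose some measure of similarity between internal representations. A common choice (an assumption here, not prescribed by the text) is the Euclidean distance or cosine similarity between hidden-layer activation vectors. The sketch below uses made-up activation values to show how representations of same-class inputs score as close, and those of different classes as far apart.

```python
import numpy as np

def euclidean_distance(a, b):
    """Small distance => similar internal representations (Rule 1)."""
    return float(np.linalg.norm(a - b))

def cosine_similarity(a, b):
    """Value near 1 => similar; value near 0 => well-separated (Rule 2)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative hidden-layer activation vectors for three inputs:
# h1 and h2 come from the same class, h3 from a different class.
h1 = np.array([0.9, 0.1, 0.8, 0.2])
h2 = np.array([0.8, 0.2, 0.9, 0.1])
h3 = np.array([0.1, 0.9, 0.1, 0.9])

print("same class  :", euclidean_distance(h1, h2), cosine_similarity(h1, h2))
print("diff classes:", euclidean_distance(h1, h3), cosine_similarity(h1, h3))
```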

How to Build Prior Information into Neural Network Design?

An important issue that has to be addressed, of course, is how to develop a specialized network structure by building prior information into its design. Unfortunately, there are currently no well-defined rules for doing this; rather, we have some ad hoc procedures that are known to yield useful results. In particular, we may use a combination of two techniques, illustrated in the sketch after this list:

1. Restricting the network architecture through the use of local connections known as receptive fields.

2. Constraining the choice of synaptic weights through the use of weight-sharing.

These two techniques, particularly the latter one, have a profitable side benefit: the number of free parameters in the network is reduced significantly.
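The following NumPy sketch, with sizes chosen purely for illustration, shows both techniques on a one-dimensional input: each hidden neuron is connected only to a local receptive field of the input, and all neurons share one small weight vector (in effect, a convolution). It also counts the free parameters against those of an unconstrained fully connected layer with the same number of hidden neurons.

```python
import numpy as np

input_size = 100          # length of the input signal (illustrative)
kernel_size = 5           # width of each local receptive field (illustrative)

rng = np.random.default_rng(0)
x = rng.normal(size=input_size)

# Weight-sharing: every hidden neuron uses this same 5-element weight vector.
shared_weights = rng.normal(size=kernel_size)
bias = 0.0

# Local connections: neuron j sees only the receptive field x[j : j + kernel_size].
hidden = np.array([
    np.dot(shared_weights, x[j:j + kernel_size]) + bias
    for j in range(input_size - kernel_size + 1)
])

# Free parameters of the constrained layer: 5 shared weights + 1 bias,
# regardless of the input size.
conv_params = kernel_size + 1

# A fully connected layer with the same number of hidden neurons would need:
dense_params = hidden.size * (input_size + 1)

print(f"constrained layer: {conv_params} parameters")
print(f"fully connected  : {dense_params} parameters")
```

Under these assumed sizes the constrained layer has only 6 free parameters, whereas a fully connected layer with the same 96 hidden neurons would need 96 × 101 = 9,696, which is precisely the reduction in free parameters noted above.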