Neural Networks

FIELDS OF STUDY

Network Design; Computer Engineering; Robotics; Computer Science

ABSTRACT

Neural networks, or artificial neural networks (ANNs), are modeled on the structure and interconnectedness of neurons in biological brains. They have a three-layered structure consisting of an input layer, a hidden layer, and an output layer. As in a biological system, the input layer acquires data and delivers it to the hidden layer for processing. When a suitable solution to the input data has been determined, the hidden layer delivers it to the output layer. ANNs are governed by the system architecture, the activity rule, and the learning rule, which enable them to learn, or “train,” by adjusting to changing input information. Inputs are assigned different weights and activities, and computation is carried out using “fuzzy logic,” a key element of artificial intelligence. ANNs most commonly learn and compute through the backpropagation algorithm, which, like negative feedback, minimizes error by successive approximation. When the ANN has determined the solution with the least error, that solution becomes the output of the system. Typically, an ANN of this type gives no indication of how the solution was obtained. Neural networks can be analog or digital and can be constructed as electronic, optical, hydraulic, or software-simulated systems. Parallel processing is the type of computer system best suited to software-simulated ANNs. Several types of ANN can be defined according to the logic principles by which they function.


ARTIFICIAL NEURAL NETWORKS

An artificial neural network can be analog or digital and may be electronic, optical, hydraulic, or software-simulated in nature. For the purposes of computer science, however, only electronic and optical constructs are considered. The principal electronic components are digital CPUs. Optical components that use light for data transmission and storage are still in early development, though they are expected to provide greatly enhanced computational and operational abilities compared to silicon-based processors. An artificial neural network (ANN) has a functional structure analogous to that of the brain, consisting of an input layer, a hidden or intermediate layer that carries out the computation and processing of data, and an output layer. In an ANN, the interactivity of the processor network is structured to mimic the interactivity of neurons in the brain, and each processor in the ANN is effectively connected to all of the other processors in the network. The interconnections may be electronic or optical, as in transistor-to-transistor communication, or the entire network may be simulated by software. Currently, ANNs are physically limited to a few thousand concurrent processors in the largest interconnected systems, in what is effectively a two-dimensional array. By comparison, the human brain contains about 100 billion (10^11) neurons, each with multiple interconnections in a three-dimensional array. Miniaturization of integrated circuitry, the development of three-dimensional integrated circuits, and the development of quantum computers are expected to produce a very large increase in the power of ANNs.
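
A minimal Python sketch of this three-layer structure may make the flow concrete. The layer sizes, weight values, and sigmoid activation below are illustrative assumptions chosen for demonstration, not taken from any particular system:

    import math

    def sigmoid(x):
        # Squash a value into the range (0, 1); used here as the activity function.
        return 1.0 / (1.0 + math.exp(-x))

    def layer(inputs, weight_rows):
        # Each row of weights feeds one neuron in the layer.
        return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
                for row in weight_rows]

    # Two inputs -> three hidden neurons -> one output neuron.
    hidden_weights = [[0.5, -0.2], [0.1, 0.9], [-0.4, 0.3]]
    output_weights = [[0.8, -0.6, 0.2]]

    inputs = [1.0, 0.5]                      # the input layer acquires the data
    hidden = layer(inputs, hidden_weights)   # the hidden layer processes it
    output = layer(hidden, output_weights)   # the output layer delivers the result
    print(output)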

SPECIFICATION OF ANNS

Three factors are typically used to specify an ANN. The first is the architecture of the system. The architecture specifies the variables involved in the composition and functioning of the ANN, typically assigning weights, or relative importance, to the elements in the input layer along with their respective activities. The second factor is the “activity rule,” which describes how the weight values of the inputs change as they respond to each other. The third factor is the “learning rule,” which specifies how the input weights change over time. The learning rule functions on a longer time scale than the activity rule and modifies the weights of the various interconnections according to the input patterns that are received. The most common learning rule is based on the backpropagation of error, or what might otherwise be termed negative feedback, for the adjustment of the weights of input signals. Through repeated backpropagated error correction, the deviation of the computed result from its actual value is minimized, at which point the result is delivered as output. Effectively, an ANN functions as a repeated process of approximation, converging on the outcome that has the highest likelihood of being true.
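
The following Python fragment sketches a learning rule of this kind for a single linear neuron, adjusting its weights by gradient descent on a squared error. The inputs, target value, and learning rate are illustrative assumptions:

    def train_step(weights, inputs, target, rate=0.1):
        # One cycle of backpropagated error correction for one linear neuron.
        output = sum(w * x for w, x in zip(weights, inputs))
        error = output - target
        # Nudge each weight against the error gradient (negative feedback).
        return [w - rate * error * x for w, x in zip(weights, inputs)]

    weights = [0.0, 0.0]
    for _ in range(50):                          # repeated approximation
        weights = train_step(weights, [1.0, 2.0], target=3.0)

    # After training, the output deviates from the target of 3.0 only slightly.
    print(sum(w * x for w, x in zip(weights, [1.0, 2.0])))

Each pass shrinks the error, so the weights settle on values whose output approximates the target, the successive-approximation behavior described above.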

INPUT WEIGHTING

The weighting of inputs to an ANN is based on the logical principle of the “threshold logic unit” (TLU). The TLU is a processing unit for threshold values that has n inputs but just one output. The TLU returns an output of 1 when the sum of the proportional, or weighted, values of all inputs is greater than or equal to the threshold value, and an output of 0 otherwise. In effect, it is a “truth test” of the input conditions. For example, given a threshold value of 3 and six inputs, the TLU will return a value of 1 if the total across all six inputs is 3 or more, and a value of 0 if the total is less than 3.
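
A TLU of this kind can be sketched in a few lines of Python; the input values below are invented for illustration:

    def tlu(weighted_inputs, threshold):
        # Return 1 if the summed weighted input meets the threshold, else 0.
        return 1 if sum(weighted_inputs) >= threshold else 0

    # Six weighted inputs tested against a threshold of 3, as in the example above.
    print(tlu([1.0, 1.0, 0.5, 0.25, 0.2, 0.1], threshold=3))  # 1: total is 3.05
    print(tlu([0.5, 0.5, 0.5, 0.25, 0.2, 0.1], threshold=3))  # 0: total is 2.05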

SAMPLE PROBLEM

A water tank releases water at a rate of 3 liters per minute. It is connected to 6 hoses that bring fresh water into the tank at different rates. To maintain a constant level of water in the tank, fresh water must be introduced at the same rate of 3 liters per minute across all 6 hoses. An indicator on the side of the tank points to 1 if the water level rises, or to 0 if the water level decreases. When fully open, each hose brings 1 liter of water per minute into the tank, and this rate is controlled by operating the hoses only fractionally. Determine whether the indicator will point to 1 or 0 when the 6 hoses are operated at 0.1, 0.75, 0.2, 0.5, 0.33, and 0.25 of capacity, respectively.

Answer:

The TLU for this system requires that the sum of all 6 hose inputs be 3 liters per minute or greater for the indicator to point to 1; otherwise it will point to 0. Therefore,

0.1 + 0.75 + 0.2 + 0.5 + 0.33 + 0.25 = 2.13 liters per minute

Since this value is less than the 3 liters per minute required to maintain the constant water level, the water level in the tank will decrease and the indicator will point to 0.

This is an example of a hydraulic neural network, and clearly demonstrates the concept of weighted inputs as it applies to neural networks generally.
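
The same TLU sketch from above, applied to the hose rates of the sample problem, reproduces this result directly:

    def tlu(weighted_inputs, threshold):
        # Same threshold test as in the sketch above.
        return 1 if sum(weighted_inputs) >= threshold else 0

    hose_rates = [0.1, 0.75, 0.2, 0.5, 0.33, 0.25]  # liters per minute per hose
    print(sum(hose_rates))      # about 2.13 liters per minute of total inflow
    print(tlu(hose_rates, 3))   # 0: inflow is below 3, so the indicator reads 0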

USES AND LIMITATIONS OF ANNS

The development of ANNs is still at an early stage, although algorithms for the concept were first produced in 1947. The limiting factor since that time has been the available computing capability: general applications of the concept could not be produced until computing hardware became sufficiently small and agile to make them feasible. While it is tempting to think of neural networks in terms of supercomputers, the first generally available device to make use of an ANN for the control of its function was a household vacuum cleaner that could self-adjust its operational settings according to the floor-level conditions it encountered. A similar system has since been incorporated into many other kinds of household appliances.

The feature of interest in these, and in all ANNs, is their ability to learn, or “train,” to carry out tasks. An exceptional example is the “Roomba” floor-sweeping device and its competitors. The task such a device must learn is the most efficient way to sweep a mapped space corresponding to the physical dimensions of the floor. Neural networks learn their tasks by capturing associations and regularities within patterns, just as a Roomba captures associations and regularities within its map of a floor area. They are also very effective in applications involving large volumes of diverse data with numerous input variables, and in applications where the relationships between variables are poorly understood or not amenable to conventional computational approaches.

The major problem with ANNs that rely on backpropagation is that their computation strategies are not transparent to the user. They ultimately function in a “black box” manner, producing an output from specified inputs without providing any indication of how that output was achieved. In addition, a software-simulated ANN consisting of several processor nodes can be slow to run on a single computer, since that computer has only a single processor and must compute the result of each simulated node in turn before the backpropagation of error can be investigated. This makes parallel-processing computers far better suited to manipulating software-simulated ANNs. Progress in parallel processing and quantum computing, by increasing the ability of neural networks to perform increasingly complex computations and “learn as they go,” will bring corresponding advances in the development of artificial intelligence.

—Richard M. Renneboog M.Sc.
