Neural Networks for Pattern Recognition (Advanced Texts in Econometrics)

Christopher M. Bishop

Language: English

Pages: 504

ISBN: 0198538642

Format: PDF / Kindle (mobi) / ePub


This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modeling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models. Also covered are various forms of error functions, principal algorithms for error function minimization, learning and generalization in neural networks, and Bayesian techniques and their applications. Designed as a text, with over 100 exercises, this fully up-to-date work will benefit anyone involved in the fields of neural computation and pattern recognition.

Twitter Data Analytics (SpringerBriefs in Computer Science)

WCF 4.0 Multi-tier Services Development with LINQ to Entities

Computer Networks: A Systems Approach (5th Edition) (The Morgan Kaufmann Series in Networking)

Concise Computer Vision: An Introduction into Theory and Algorithms (Undergraduate Topics in Computer Science)

Cloud Computing: Theory and Practice

Pro Vim


density. By contrast, the second technique of non-parametric estimation does not assume a particular functional form, but allows the form of the density to be determined entirely by the data. Such methods typically suffer from the problem that the number of parameters in the model grows with the size of the data set, so that the models can quickly become unwieldy. The third approach, sometimes called semiparametric estimation, tries to achieve the best of both worlds by allowing a very general
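As a rough illustration of the non-parametric approach described in this excerpt (not taken from the book), the sketch below builds a one-dimensional Gaussian kernel density estimate: every training point contributes its own kernel, so the effective number of parameters grows with the size of the data set, which is exactly the drawback noted above. The function name gaussian_kde and the bandwidth h = 0.3 are illustrative choices.

import numpy as np

def gaussian_kde(data, h):
    """Non-parametric density estimate: one Gaussian kernel of width h per
    training point, so the model grows with the size of the data set."""
    data = np.asarray(data, dtype=float)
    def p(x):
        # Average of the Gaussian kernels centred on the data points.
        return np.mean(np.exp(-0.5 * ((x - data) / h) ** 2)) / (h * np.sqrt(2.0 * np.pi))
    return p

# Usage: estimate a bimodal 1-D density from 500 samples.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-2.0, 0.5, 250), rng.normal(1.0, 0.8, 250)])
p_hat = gaussian_kde(samples, h=0.3)
print(p_hat(0.0))   # estimated density at x = 0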

principal directions aligned with the coordinate axes. The components of x are then said to be statistically independent, since the distribution of x can be written as the product of the distributions for each of the components separately in the form

p(x) = ∏_{j=1}^{d} p(x_j)    (2.10)

Further simplification can be obtained by choosing σ_j = σ for all j, which reduces the number of parameters still further to d + 1. The contours of constant density are then hyperspheres. A surface plot of the normal distribution
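To make the parameter count concrete, here is a minimal sketch (illustrative, not from the book) of the factorized density in (2.10) with a single shared σ: the only free parameters are the d components of the mean plus σ, i.e. d + 1 in total. The function name isotropic_gaussian_density is an arbitrary choice.

import numpy as np

def isotropic_gaussian_density(x, mu, sigma):
    """d-dimensional Gaussian with covariance sigma**2 * I, evaluated as a
    product of d independent univariate Gaussians, as in (2.10)."""
    x, mu = np.asarray(x, dtype=float), np.asarray(mu, dtype=float)
    univariate = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return np.prod(univariate)

# Usage: d = 3; contours of constant density are spheres centred on mu.
print(isotropic_gaussian_density([0.1, -0.2, 0.3], mu=[0.0, 0.0, 0.0], sigma=1.0))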

it follows that y_k(x) = α y_k(x^A) + (1 - α) y_k(x^B) and hence y_k(x) > y_j(x) for all j ≠ k. Thus, all points on the line connecting x^A and x^B also lie in R_k, and so the region R_k must be simply connected and convex.

3.1.3 Logistic discrimination

So far we have considered discriminant functions which are simple linear functions of the input variables. There are several ways in which such functions can be generalized, and here we consider the use of a non-linear
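The convexity argument in this excerpt can be checked numerically. The sketch below (weight vectors chosen at random purely for illustration, not taken from the book) assigns points to the class with the largest linear discriminant and verifies that whenever two points fall in the same region R_k, every point on the segment joining them is assigned to class k as well.

import numpy as np

rng = np.random.default_rng(1)

# Three linear discriminants y_k(x) = w_k . x + w_k0, with illustrative random weights.
W = rng.normal(size=(3, 2))
b = rng.normal(size=3)

def decide(x):
    """Assign x to the class whose linear discriminant y_k(x) is largest."""
    return int(np.argmax(W @ x + b))

# Draw random pairs of points; whenever both fall in the same region R_k,
# check that every point on the segment joining them is also assigned to k.
for _ in range(1000):
    xA, xB = rng.normal(size=2), rng.normal(size=2)
    if decide(xA) == decide(xB):
        k = decide(xA)
        assert all(decide(a * xA + (1 - a) * xB) == k
                   for a in np.linspace(0.0, 1.0, 11))
print("convexity check passed")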

minimum of the error function. Here we consider one of the simplest of such algorithms, known as gradient descent. It is convenient to group all of the parameters (weights and biases) in the network together to form a single weight vector w, so that the error function can be expressed as E = E(w). Provided E is a differentiable function of w we may adopt the following procedure. We begin with an initial guess for w (which might for instance be chosen at random) and we then update the weight
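A minimal sketch of the gradient descent procedure described in this excerpt, assuming a fixed learning rate eta and a function that returns the gradient of the error at w (the names, defaults, and toy error function are illustrative, not from the book):

import numpy as np

def gradient_descent(grad_E, w0, eta=0.1, n_steps=100):
    """Batch gradient descent: repeatedly apply w <- w - eta * grad E(w)."""
    w = np.asarray(w0, dtype=float)
    for _ in range(n_steps):
        w = w - eta * grad_E(w)
    return w

# Usage on a toy quadratic error E(w) = ||w||^2 / 2, whose gradient is simply w.
w_min = gradient_descent(lambda w: w, w0=[3.0, -2.0], eta=0.1, n_steps=200)
print(w_min)   # close to the minimum at the origin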

a correct classification of all of the patterns in the training set. Many recent textbooks on neural networks have summarized Minsky and Papert's contribution by pointing out that a single-layer network can only classify data sets which are linearly separable, and hence cannot solve problems such as the XOR example considered earlier. In fact, the arguments of Minsky and Papert are rather more subtle, and shed light on the nature of multi-layer networks in which only one of the layers of weights
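As a quick numerical illustration of the linear-separability point (not part of Minsky and Papert's argument), the perceptron learning rule below reaches zero errors on the linearly separable AND problem but keeps misclassifying patterns on XOR however many epochs are used; the epoch count and encoding are illustrative choices.

import numpy as np

def train_perceptron(X, t, epochs=100):
    """Perceptron learning rule on inputs X (bias input appended) and targets
    t in {-1, +1}; returns the number of misclassified patterns after training."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a constant bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x, target in zip(Xb, t):
            if np.sign(w @ x) != target:
                w += target * x                  # update weights only on mistakes
    return int(np.sum(np.sign(Xb @ w) != t))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t_and = np.array([-1, -1, -1, 1])   # linearly separable
t_xor = np.array([-1, 1, 1, -1])    # not linearly separable

print(train_perceptron(X, t_and))   # 0 errors: the rule converges
print(train_perceptron(X, t_xor))   # > 0 errors, no matter how long it runs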
