
Overhead for "Connectionism: An Introduction"

Additional Credits:
Funding
This module was supported by National Science Foundation Grants #9981217 and #0127561.

Introduction to Connectionism

What is connectionism?

Connectionism is the name for the computer-modeling approach based on how information processing occurs in biological neural networks (connectionist networks are also called artificial neural networks).

Anatomy of a connectionist model

Units are to a connectionist model what neurons are to a biological neural network -- the basic information processing structures.


Biological neural networks are organized in layers of neurons. For this reason, connectionist models are organized in layers of units, not random clusters.

But what you see here still isn't a network. Something is missing.

The connections! Network connections (or simply connections) are conduits through which information flows between members of a network. In the absence of such connections, no group of objects qualifies as a network.

There are two kinds of network connections:

input connection

a conduit through which a member of a network receives information (INPUT)

output connection

a conduit through which a member of a network sends information (OUTPUT)

In biological neural networks, connections are synapses.

Because connectionist models are based on how computation occurs in biological neural networks, connections play an essential role in connectionist models -- hence the name "connectionism."

Connections in a connectionist model are represented with lines. Arrows in a connectionist model indicate the flow of information from one unit to the next.

From which units does the blue unit receive its INPUT?

To which units does it send its OUTPUT?

When each unit in a connectionist model is connected to every unit in the layer above it, the result is a network of units with many connections between them. This is illustrated in the following figure, which captures the architecture of a standard, 3-layered feedforward network.
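That wiring can be sketched in a few lines of Python. The layer sizes here (4 input, 3 hidden, 2 output units) are illustrative choices, not figures from the module:

    # Sketch of full connectivity between adjacent layers in a
    # 3-layered feedforward network (layer sizes are assumed).
    layer_sizes = {"input": 4, "hidden": 3, "output": 2}

    connections = []
    # Every input unit connects to every hidden unit...
    for i in range(layer_sizes["input"]):
        for h in range(layer_sizes["hidden"]):
            connections.append((f"in{i}", f"hid{h}"))
    # ...and every hidden unit connects to every output unit.
    for h in range(layer_sizes["hidden"]):
        for o in range(layer_sizes["output"]):
            connections.append((f"hid{h}", f"out{o}"))

    print(len(connections))  # 4*3 + 3*2 = 18 connections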

But what is going on within a unit?


Unit behavior

Units as computers

It has been noted elsewhere (see Functionalism) that there are a couple of features all computers have in common.

Neurons are computers (NOT digital ones)

MCP neurons are computers.

Units are computers (NOT digital ones)
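The MCP (McCulloch-Pitts) neuron illustrates a unit as a digital computer: as noted below, its OUTPUT is either 0 ("NO") or 1 ("YES"). Here is a minimal Python sketch, assuming the standard threshold formulation; the threshold of 2 and the AND example are illustrative choices, not from the module:

    def mcp_neuron(inputs, threshold):
        # McCulloch-Pitts neuron: OUTPUT is 1 ("YES") if the sum of
        # the binary inputs reaches the threshold, otherwise 0 ("NO").
        return 1 if sum(inputs) >= threshold else 0

    # With two binary inputs and a threshold of 2, the unit computes AND.
    print(mcp_neuron([1, 1], threshold=2))  # 1
    print(mcp_neuron([1, 0], threshold=2))  # 0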

Activations


The "information" J receives as INPUT and sends as OUTPUT are called activation values (or simply activations).

Are activation values analogous to action potentials?

No, not to a single (discrete) action potential (as with the MCP neuron, whose OUTPUT is 0 or 1, "NO" or "YES").

Yes, to how actively a neuron sends action potentials (its firing rate).

Connection weights

The strength or weakness of a connection is measured by a connection weight.

As is the case with activation values, connection weights are usually nondiscrete values within a certain range, usually -1 to 1.

A negative connection weight (say, -0.8) represents an inhibitory connection.

A positive connection weight (e.g., 0.7) represents an excitatory connection.

The farther a weight is from 0, the stronger the connection.

How does a unit compute its OUTPUT?

(First it computes its combined input, then it "squashes" it).

The COMBINED INPUT to J (c_j) is the sum of each INPUT activation multiplied by its connection weight.
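In symbols (the subscript notation here is an assumption, not from the module), where a_i is the activation arriving on J's i-th input connection and w_i is that connection's weight:

    c_j = \sum_i a_i \cdot w_i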

To ensure that the OUTPUT activation of a unit NEVER exceeds its maximum or minimum activation values, a unit's COMBINED INPUT must be put through an activation function (a mathematical formula that "squashes" the COMBINED INPUT into the activation value range, which for us is 0 to 1).
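One common choice of activation function is the logistic (sigmoid) function, which squashes any combined input into the 0-to-1 range; the module does not name its activation function, so this particular formula is an assumption:

    a_j = \frac{1}{1 + e^{-c_j}}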

Here is all of the processing put together.
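Here is a minimal Python sketch of both steps, assuming the logistic activation function shown above; the example activations and weights are made up:

    import math

    def unit_output(activations, weights):
        # Step 1: COMBINED INPUT -- the sum of each INPUT activation
        # multiplied by its connection weight.
        c_j = sum(a * w for a, w in zip(activations, weights))
        # Step 2: "squash" the combined input into the 0-to-1 range
        # with the logistic activation function.
        return 1 / (1 + math.exp(-c_j))

    # A unit J with three input connections:
    print(unit_output([0.9, 0.2, 0.5], [0.7, -0.8, 0.3]))  # ~0.65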

A trained network composed of such units can do many things (e.g., pattern completion).

A trained network composed of such units has many interesting emergent properties (e.g., graceful degradation -- the property whereby a network's performance worsens gradually, rather than failing all at once, as more of its units or connections are randomly destroyed).

GNNV serves as an example of how a connectionist network can be trained to recognize faces.

Copyright: 2006
