# McCulloch-Pitts Neurons

 Author: Michael Marsalli
 Funding: This module was supported by National Science Foundation Grants #9981217 and #0127561.

From your computer explorations in the previous section, you perhaps came to the conclusion that no MCP neuron has the same output as the XOR function.  Indeed, no matter what the choice of initial weights, the Hebbian learning process never seemed to settle on a fixed list of weights; every pass through the list of inputs seemed to rescramble the weights.  In fact, it is true that no single MCP neuron matches the XOR function.  While your explorations are evidence for this fact, the explorations by themselves can't prove it.  After all, there are infinitely many possible choices of initial weights, and perhaps some choice we haven't tried would lead to the XOR function.  In this section, we'll give a proof that no two-input MCP neuron can have the same output as the XOR function.

We'll suppose that there is a two-input MCP neuron with the same output as the XOR function.  Then we'll show that this supposition leads to a false statement.  So the supposition itself must be false, i.e. there can't be such an MCP neuron.
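Before starting the proof, it may help to make the setup concrete.  Here is a minimal sketch in Python, assuming the convention used in this module: a two-input MCP neuron with weights w0, w1, and w2 fires (outputs 1) exactly when w0*1 + w1*x1 + w2*x2 >= 0, where w0 is the weight on a constant input of 1.  The particular weights tried below are arbitrary, just to show how a candidate choice can be tested against XOR:

```python
def mcp(w0, w1, w2, x1, x2):
    # Fires (outputs 1) exactly when w0*1 + w1*x1 + w2*x2 >= 0.
    return 1 if w0 + w1 * x1 + w2 * x2 >= 0 else 0

def xor(x1, x2):
    return x1 ^ x2

inputs = [(1, 1), (1, 0), (0, 1), (0, 0)]

# Test one (arbitrary) choice of weights against XOR on all four inputs.
# These weights happen to compute OR, so they fail on the input (1, 1).
for x1, x2 in inputs:
    print((x1, x2), mcp(-1, 1, 1, x1, x2), xor(x1, x2))
```

Any single choice of weights can be checked this way, but since there are infinitely many choices, no amount of such testing amounts to a proof; that is what the argument below provides.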

So suppose the weights of our neuron are w0, w1, and w2.  Now the XOR function gives 0 for the input (0,0).

So our MCP neuron must have the same output.  But this means that w0*1 + w1*0 + w2*0 < 0.  So w0 < 0.

Thus -w0 > 0.

The XOR function also has output 0 for the input (1,1).  So our MCP neuron must have w0*1 + w1*1 + w2*1 < 0.

So w0 + w1 + w2 < 0.

The XOR function has output 1 for the inputs (1,0) and (0,1).  This means our MCP neuron must have

w0*1 + w1*1 + w2*0 >= 0 and w0*1 + w1*0 + w2*1 >= 0.  Thus w0 + w1 >= 0 and w0 + w2 >= 0.

We now have four inequalities for the three weights w0, w1, and w2.

1. -w0 > 0
2. w0 + w1 + w2 < 0
3. w0 + w1 >= 0
4. w0 + w2 >= 0

If we add inequalities 3 and 4, we obtain 2*w0 + w1 + w2 >= 0.  Now if we add inequality 1 to this inequality, we obtain w0 + w1 + w2 > 0.  But this clearly contradicts inequality 2.  Thus no choice of weights can make all four of these inequalities true.  Thus no MCP neuron can have the same output as the XOR function.
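The conclusion can also be checked by brute force.  The sketch below, again assuming the firing convention w0 + w1*x1 + w2*x2 >= 0, sweeps a coarse grid of weights and confirms that none of them reproduces XOR.  Of course, this check only covers the grid; it illustrates the proof rather than replacing it:

```python
import itertools

def mcp(w0, w1, w2, x1, x2):
    # Fires (outputs 1) exactly when w0 + w1*x1 + w2*x2 >= 0.
    return 1 if w0 + w1 * x1 + w2 * x2 >= 0 else 0

def matches_xor(w0, w1, w2):
    # True if this neuron agrees with XOR on all four inputs.
    return all(mcp(w0, w1, w2, x1, x2) == (x1 ^ x2)
               for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)])

# Sweep weights over -5.0, -4.5, ..., 5.0 in each coordinate.
grid = [w / 2 for w in range(-10, 11)]
hits = [w for w in itertools.product(grid, repeat=3) if matches_xor(*w)]
print(hits)  # the proof guarantees this list is empty
```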

We've proved that there is a fundamental limitation on the type of function that can be obtained by a two-input MCP neuron.  In particular, not every logic function can be realized by a two-input MCP neuron.  We've shown this for the XOR function.  Are there any other logic functions that can't be realized by MCP neurons?

Exercise.  For each of the sixteen two-input logic functions, either find an MCP neuron that has the same output, or prove that no MCP neuron does.

Although a single two-input MCP neuron can't produce the XOR function, perhaps if we connect some two-input MCP neurons together, we can produce a network of MCP neurons that has the desired output.  In fact, as we'll see, this idea works.  McCulloch and Pitts essentially showed that any function that takes n inputs, each of which is 0 or 1, and produces an output of 0 or 1 can be reproduced by a network of MCP neurons.

Now we'll see how to produce the XOR function using a network of two-input MCP neurons.  Below we give tables for the three MCP neurons that we'll link together in a network.  We use A1, A2, and A3 to denote the outputs of the three MCP neurons.

| x1 | x2 | A1 |
|----|----|----|
| 1  | 1  | 0  |
| 1  | 0  | 1  |
| 0  | 1  | 0  |
| 0  | 0  | 0  |

| x1 | x2 | A2 |
|----|----|----|
| 1  | 1  | 0  |
| 1  | 0  | 0  |
| 0  | 1  | 1  |
| 0  | 0  | 0  |

| x1 | x2 | A3 |
|----|----|----|
| 1  | 1  | 1  |
| 1  | 0  | 1  |
| 0  | 1  | 1  |
| 0  | 0  | 0  |

The network will be constructed by sending the inputs x1 and x2 to each of the MCP neurons A1 and A2.  The outputs from A1 and A2 will then be sent as inputs to A3.  We will then have the following table.

Table 12
MCP Network of A1, A2, and A3

| x1 | x2 | A1 | A2 | A3 |
|----|----|----|----|----|
| 1  | 1  | 0  | 0  | 0  |
| 1  | 0  | 1  | 0  | 1  |
| 0  | 1  | 0  | 1  | 1  |
| 0  | 0  | 0  | 0  | 0  |

Note that the output A3 is indeed the XOR function of the inputs x1 and x2, but this was achieved by inserting two other MCP neurons between the inputs and the final MCP neuron.
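This network can be sketched in a few lines of Python.  The tables above specify only each neuron's outputs, not its weights, so the weight values below are one possible choice (many others would work), again assuming the firing convention w0 + w1*x1 + w2*x2 >= 0:

```python
def mcp(w0, w1, w2, x1, x2):
    # Fires (outputs 1) exactly when w0 + w1*x1 + w2*x2 >= 0.
    return 1 if w0 + w1 * x1 + w2 * x2 >= 0 else 0

def xor_network(x1, x2):
    # Layer 1: two neurons receiving the raw inputs x1 and x2.
    a1 = mcp(-1, 1, -1, x1, x2)   # A1: fires only on (1, 0)
    a2 = mcp(-1, -1, 1, x1, x2)   # A2: fires only on (0, 1)
    # Layer 2: one neuron receiving A1 and A2 (it computes their OR).
    return mcp(-1, 1, 1, a1, a2)  # A3

for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(x1, x2, xor_network(x1, x2))
```

Running this reproduces the A3 column of Table 12, which is exactly the XOR of x1 and x2.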

In their paper, McCulloch and Pitts essentially showed that any function which assigns a 0 or 1 to a fixed number of inputs, each of which is either 0 or 1, can be reproduced by a network of MCP neurons.  It thus seems that networks of MCP neurons are remarkably flexible and general.  Indeed, this is the hope of much research on neural networks, but as we shall see, there were some obstacles that had to be overcome.