AI::NeuralNet::BackProp - A simple back-prop neural net that uses the Delta rule and Hebb's rule.
    use AI::NeuralNet::BackProp;

    # Create a new network with 1 layer, 5 inputs, and 5 outputs.
    my $net = new AI::NeuralNet::BackProp(1,5,5);

    # Add a small amount of randomness to the network
    $net->random(0.001);

    # Demonstrate a simple learn() call
    my @inputs  = ( 0,0,1,1,1 );
    my @outputs = ( 1,0,1,0,1 );

    print $net->learn(\@inputs, \@outputs),"\n";

    # Create a data set to learn
    my @set = (
        [ 2,2,3,4,1 ], [ 1,1,1,1,1 ],
        [ 1,1,1,1,1 ], [ 0,0,0,0,0 ],
        [ 1,1,1,0,0 ], [ 0,0,0,1,1 ]
    );

    # Demo learn_set()
    my $f = $net->learn_set(\@set);
    print "Forgetfulness: $f unit\n";

    # Crunch a bunch of strings and return array refs
    my $phrase1 = $net->crunch("I love neural networks!");
    my $phrase2 = $net->crunch("Jay Lenno is wierd.");
    my $phrase3 = $net->crunch("The rain in spain...");
    my $phrase4 = $net->crunch("Tired of word crunching yet?");

    # Make a data set from the array refs
    my @phrases = (
        $phrase1, $phrase2,
        $phrase3, $phrase4
    );

    # Learn the data set
    $net->learn_set(\@phrases);

    # Run a test phrase through the network
    my $test_phrase = $net->crunch("I love neural networking!");
    my $result      = $net->run($test_phrase);

    # Get this, it prints "Jay Leno is networking!" ... LOL!
    print $net->uncrunch($result),"\n";
This is version 0.89. In this version I have included a new feature, output range limits, as well as automatic crunching of run() and learn*() inputs. Included in the examples directory are seven new practical-use example scripts. Also implemented in this version is a much cleaner learning function for individual neurons, which is more accurate than previous versions and is based on the LMS rule. See range() for information on output range limits. I have also updated the load() and save() methods so that they no longer depend on Storable. In this version you also have a choice between three network topologies; two are not as stable, and the third is the default, which has been in use for the previous four versions.
AI::NeuralNet::BackProp implements a neural network similar to a feed-forward, back-propagation network, learning via a mix of a generalization of the Delta rule and a dissection of Hebb's rule. The actual neurons of the network are implemented via the AI::NeuralNet::BackProp::neuron package.
You construct a new network via the new() constructor:

    my $net = new AI::NeuralNet::BackProp(2,3,1);
The new() constructor accepts two required arguments and one optional argument: $layers, $size, and, optionally, $outputs (in this example, $layers is 2, $size is 3, and $outputs is 1).
$layers specifies the number of layers, including the input and the output layer, to use in each neural grouping. A new neural grouping is created for each pattern learned. Layers is typically set to 2. Each layer has $size neurons in it. Each neuron's output is connected to one input of every neuron in the layer below it.
This diagram illustrates a simple network, created with a call to "new AI::NeuralNet::BackProp(2,2,2)" (2 layers, 2 neurons/layer, 2 outputs):

        input
        /  \
       O    O
       |\  /|
       | \/ |
       | /\ |
       |/  \|
       O    O
        \  /
       mapper
In this diagram, each neuron is connected to one input of every neuron in the layer below it, but there are no connections between neurons in the same layer. Weights of the connection are controlled by the neuron it is connected to, not the connecting neuron. (E.g. the connecting neuron has no idea how much weight its output has when it sends it; it just sends its output and the weighting is taken care of by the receiving neuron.) This is the method used to connect cells in every network built by this package.
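To make that concrete, here is a minimal, hypothetical sketch of a receiving neuron that owns one weight per incoming connection and applies it to whatever raw value the sender hands over. The names (Demo::Neuron, input(), output()) are made up for illustration; they are not the module's actual internals, which live in AI::NeuralNet::BackProp::neuron.

    package Demo::Neuron;

    sub new {
        my ($class, $n_inputs) = @_;
        # The receiving neuron owns one weight per incoming connection.
        my @weights = map { 1 } 1 .. $n_inputs;
        return bless { weights => \@weights, inputs => [] }, $class;
    }

    # The sender just hands over its raw output; the receiver applies
    # its own weight for that particular connection.
    sub input {
        my ($self, $slot, $value) = @_;
        $self->{inputs}[$slot] = $value * $self->{weights}[$slot];
    }

    sub output {
        my $self = shift;
        my $sum  = 0;
        $sum += $_ for @{ $self->{inputs} };
        return $sum;
    }

    1;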
Input is fed into the network via a call like this:
    use AI::NeuralNet::BackProp;
    my $net    = new AI::NeuralNet::BackProp(2,2);
    my @map    = (0,1);
    my $result = $net->run(\@map);
Now, this call would probably not give what you want, because
the network hasn't ``learned'' any patterns yet. But this
illustrates the call. Run now allows strings to be used as
input. See run()
for more information.
run() returns a reference with $size elements. (Remember $size? $size is what you passed as the second argument to the network constructor.) This array contains the results of the mapping. If you ran the example exactly as shown above, $result would probably contain (1,1) as its elements.
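Because run() hands back an array reference, you dereference it to get at the individual output values:

    my $result = $net->run(\@map);
    print join(',', @{$result}), "\n";    # prints something like "1,1"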
To make the network learn a new pattern, you simply call the learn() method with a sample input and the desired result, both array references of $size length. Example:
    use AI::NeuralNet::BackProp;
    my $net = new AI::NeuralNet::BackProp(2,2);
    my @map = (0,1);
    my @res = (1,0);
    $net->learn(\@map,\@res);
    my $result = $net->run(\@map);
Now $result will contain (1,0), effectively flipping the input pattern around. Obviously, the larger $size is, the longer it will take to learn a pattern. learn() returns a string in the form of

    Learning took X loops and X wallclock seconds (X.XXX usr + X.XXX sys = X.XXX CPU).

with the X's replaced by the time and loop values for that learn call. So, to view the learning stats for every learn call, you can just:
print $net->learn(\@map,\@res);
If you call ``$net->debug(4)'' with $net being the reference returned by the new() constructor, you will get benchmarking information for the learn function, as well as plenty of other information output.
See notes on debug()
in the METHODS section, below.
If you do call $net->debug(1), it is a good idea to redirect your script's STDOUT to a file, as a lot of information is output. I often use this command line:
$ perl some_script.pl > .out
Then I can simply use emacs or any other text editor and read the output at my leisure, rather than having to wait or pipe it through 'more' as it scrolls by on the screen.
Returns a newly created AI::NeuralNet::BackProp object. The network will have $layers layers in it, and each layer will have $size neurons in that layer.
There is an optional parameter, $outputs, which specifies the number of output neurons to provide. If $outputs is not specified, it defaults to $size. $outputs may not exceed $size; if it does, the new() constructor will return undef.
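Since new() returns undef in that case, it is worth checking the return value when you pass $outputs. For example:

    # 5 outputs requested but only 3 neurons per layer, so new() returns undef.
    my $net = new AI::NeuralNet::BackProp(2,3,5)
        or die "\$outputs may not exceed \$size";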
The optional parameter, $topology_flag, defaults to 0 when not used. There are three valid topology flag values:
0 default  My feed-forward style: Each neuron in layer X is connected to one input of every neuron in layer Y. The best and most proven flag style.
    ^    ^    ^
    O\   O\  /O      Layer Y
    ^\\ /^ /\/^
    | //|  /\|
    |/  \|/  \|
    O    O    O      Layer X
    ^    ^    ^
(Sorry about the bad art...I am no ASCII artist! :-)
1 In addition to flag 0, each neuron in layer X is connected to every input of the neurons ahead of itself in layer X.
2 (``L-U Style'') No, it's not ``Learning-Unit'' style. It gets its name from this: in a 2-layer, 3-neuron network, the connections form an L-U pair, or a W, however you want to look at it.
    ^    ^    ^
    |    |    |
    O--->O--->O
    ^    ^    ^
    |    |    |
    |    |    |
    O--->O--->O
    ^    ^    ^
    |    |    |
As you can see, each neuron is connected to the next one in its layer, as well as the neuron directly above itself.
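The topology flag appears to be passed as the fourth argument to new(), after $outputs; that assumption is what the sketch below relies on:

    # Assumes $topology_flag is the fourth argument to new().
    my $net = new AI::NeuralNet::BackProp(2,3,3,2);   # flag 2 = "L-U Style"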
Before you can really do anything useful with your new neural network
object, you need to teach it some patterns. See the learn()
method, below.
You can retrieve the index of the pattern that the network stored the new input map at, once learn() is complete, with the pattern() method, below.
UPDATED: You can now specify strings as inputs and outputs to learn, and they will be crunched automatically. Example:
    $net->learn('corn', 'cob');
    # Before this update, you would have had to do this:
    # $net->learn($net->crunch('corn'), $net->crunch('cob'));
Note, the old method of calling crunch on the values still works just as well.
UPDATED: You can now learn inputs with a 0 value. Beware, though: it may not learn() a 0 value in the input map if you have randomness disabled. See the NOTES on using a 0 value with randomness disabled.
The first two arguments may be array refs (or now, strings), and they may be of different lengths.
Options should be written in hash form. There are three options:
    inc   => $learning_gradient
    max   => $maximum_iterations
    error => $maximum_allowable_percentage_of_error
$learning_gradient is an optional value used to adjust the weights of the internal connections. If $learning_gradient is omitted, it defaults to 0.20.
$maximum_iterations is the maximum number of iterations the loop should do. It defaults to 1024. Set it to 0 if you never want the loop to quit before the pattern is perfectly learned.
$maximum_allowable_percentage_of_error is the maximum allowable error to have. If this is set, then learn() will return when the percentage difference between the actual results and desired results falls below $maximum_allowable_percentage_of_error. If you do not include 'error', or $maximum_allowable_percentage_of_error is set to -1, then learn() will not return until it gets an exact match for the desired result OR it reaches $maximum_iterations.
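Putting those options together, a typical call might look like this:

    my @map = (0,1);
    my @res = (1,0);
    print $net->learn(\@map, \@res,
                      inc   => 0.20,    # learning gradient
                      max   => 4096,    # give up after 4096 iterations
                      error => 5);      # or stop once within 5% of the desired result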
learn_set() takes the same options as learn() and allows you to specify a set to learn, rather than individual patterns. A dataset is an array reference with at least two elements in the array, each element being another array reference (or now, a scalar string). For each pattern to learn, you must specify an input array ref, and an output array ref as the next element. Example:
    my @set = (
        # inputs        outputs
        [ 1,2,3,4 ],  [ 1,3,5,6 ],
        [ 0,2,5,6 ],  [ 0,2,1,2 ]
    );
See the paragraph on measuring forgetfulness, below. There are two learn_set()-specific option tags available:
    flag    => $flag
    pattern => $row
If ``flag'' is set to some TRUE value, as in ``flag => 1'' in the hash of options, or if the option ``flag'' is not set, then learn_set() will return a percentage representing the amount of forgetfulness. Otherwise, learn_set() will return an integer specifying the amount of forgetfulness when all the patterns are learned.
If ``pattern'' is set, then learn_set()
will use that pattern in the data set to measure forgetfulness by.
If ``pattern'' is omitted, it defaults to the first pattern in the set. Example:
    my @set = (
        [ 0,1,0,1 ],  [ 0 ],
        [ 0,0,1,0 ],  [ 1 ],
        [ 1,1,0,1 ],  [ 2 ],   # <---
        [ 0,1,1,0 ],  [ 3 ]
    );
If you wish to measure forgetfulness as indicated by the line with the arrow, then you would pass 2 as the "pattern" option, as in "pattern => 2".
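For the set above, that call might look like this:

    # Measure forgetfulness against the third pattern in the set.
    my $f = $net->learn_set(\@set, pattern => 2);
    print "Forgetfulness: $f\n";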
Now why the heck would anyone want to measure forgetfulness, you ask? Maybe you wonder how I even measure that. Well, it is not a vital value that you have to know. I just put in a ``forgetfulness measure'' one day because I thought it would be neat to know.
How the module measures forgetfulness is this: First, it learns all the patterns in the set provided, then it will run the very first pattern (or whatever pattern is specified by the ``pattern'' option) in the set after it has finished learning. It will compare the run() output with the desired output as specified in the dataset. In a perfect world, the two should match exactly. What we measure is how much they don't match, thus the amount of forgetfulness the network has.
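In other words, the measurement is roughly equivalent to doing something like this by hand (a sketch of the idea only, not the module's exact arithmetic):

    # Learn the whole set, then re-run the measured pattern and see how
    # far the output drifted from the desired output.
    $net->learn_set(\@set);
    my $got    = $net->run($set[0]);    # first input in the set
    my $wanted = $set[1];               # its desired output
    my ($diff, $n) = (0, scalar @{$wanted});
    $diff += abs($got->[$_] - $wanted->[$_]) for 0 .. $n - 1;
    printf "Average drift per output: %.3f\n", $diff / $n;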
NOTE: In version 0.77 percentages were disabled because of a bug. Percentages are now enabled.
Example (from examples/ex_dow.pl):
    # Data from 1989 (as far as I know..this is taken from example data on BrainMaker)
    my @data = (
        # Mo  CPI  CPI-1 CPI-3 Oil   Oil-1 Oil-3 Dow   Dow-1 Dow-3   Dow Ave (output)
        [ 1,  229, 220,  146,  20.0, 21.9, 19.5, 2645, 2652, 2597], [ 2647 ],
        [ 2,  235, 226,  155,  19.8, 20.0, 18.3, 2633, 2645, 2585], [ 2637 ],
        [ 3,  244, 235,  164,  19.6, 19.8, 18.1, 2627, 2633, 2579], [ 2630 ],
        [ 4,  261, 244,  181,  19.6, 19.6, 18.1, 2611, 2627, 2563], [ 2620 ],
        [ 5,  276, 261,  196,  19.5, 19.6, 18.0, 2630, 2611, 2582], [ 2638 ],
        [ 6,  287, 276,  207,  19.5, 19.5, 18.0, 2637, 2630, 2589], [ 2635 ],
        [ 7,  296, 287,  212,  19.3, 19.5, 17.8, 2640, 2637, 2592], [ 2641 ]
    );

    # Learn the set
    my $f = $net->learn_set(\@data,
                            inc => 0.1,
                            max => 500,
                            p   => 1
    );

    # Print it
    print "Forgetfulness: $f%";
This is a snippet from the example script examples/ex_dow.pl, which demonstrates Dow average prediction for the next month. A simpler set definition would be as such:
    my @data = (
        [ 0,1 ], [ 1 ],
        [ 1,0 ], [ 0 ]
    );
    $net->learn_set(\@data);

Same effect as above, but not the same data (obviously).
learn_set_rand() takes the same options as learn() and allows you to specify a set to learn, rather than individual patterns.
learn_set_rand() differs from learn_set() in that it learns the patterns in a random order, each pattern once, rather than in the order that they are in the array. This returns a true value (1) instead of a forgetfulness factor.
Example:
    my @data = (
        [ 0,1 ], [ 1 ],
        [ 1,0 ], [ 0 ]
    );
    $net->learn_set_rand(\@data);
run()
will now automatically crunch()
a string given as the input.
This method will apply the given array ref at the input layer of the neural network, and it will return an array ref to the output of the network.
Example:
    my $inputs  = [ 1,1,0,1 ];
    my $outputs = $net->run($inputs);
With the new update you can do this:
    my $outputs = $net->run('cloudy, wind is 5 MPH NW');
    # Old method:
    # my $outputs = $net->run($net->crunch('cloudy, wind is 5 MPH NW'));
See also run_uc()
below.
run_uc() is equivalent to calling:

    $net->uncrunch($net->run($input_map_ref));
All run_uc() does is automatically call uncrunch() on the output, regardless of whether the input was crunch()-ed or not.
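Combined with the automatic crunching of run() inputs, that means a round trip through the network can be written in one line (assuming run_uc() accepts a string the same way run() now does):

    print $net->run_uc("I love neural networking!"), "\n";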
range() automatically scales the network's outputs to fit inside the size of range you allow, and it therefore keeps track of the maximum output it can expect to scale. You will need to learn() the whole data set again after calling range() on a network.

Subsequent calls to range() invalidate any previous calls to range().

NOTE: It is recommended that you call range() before you call learn(), or else you will get unexpected results from any run() call after range().
This form works like the common ``for my $x (0..20)'' style of for() constructor. It works the exact same way: it will allow all numbers from $bottom to $top, inclusive, to be given as outputs of the network. No other values will be possible, other than those between $bottom and $top, inclusive.
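For example, to restrict the network's outputs to the integers 0 through 5:

    $net->range(0..5);    # only 0,1,2,3,4,5 can appear as outputs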
A string passed to range() is crunch()-ed internally and saved as an array ref. This has the same effect as calling:
$net->range($net->crunch("string of values"));
Calling range() with two strings has the same effect as calling:

    $net->range($net->crunch("first string"),$net->crunch("second string"));
Or:
    @range = ($net->crunch("first string"), $net->crunch("second string"));
    $net->range(\@range);
$net->range([$value1,$value2]);
Or:
    @range = ($value1,$value2);
    $net->range(\@range);
The second example is the same as the first example.
benchmarked() now returns just the string from timestr() for the last run() or learn() call. Exception: if the last call was a loop, the string will be prefixed with ``%d loops and ''.
This returns a benchmark info string for the last learn() or the last run() call, whichever occurred later. It is easily printed as a string, as follows:
print $net->benchmarked() . "\n";
Level 0 ($level = 0) : Default; no debugging information printed. All printing is left to the calling script.
Level 1 ($level = 1) : This causes ALL debugging information for the network to be dumped as the network runs. In this mode, it is a good idea to redirect STDOUT to a file, especially for large programs.
Level 2 ($level = 2) : A slightly-less verbose form of debugging, not as many internal data dumps.
Level 3 ($level = 3) : JUST prints weight mapping as weights change.
Level 4 ($level = 4) : JUST prints the benchmark info for EACH learn loop iteration, not just learning as a whole. Also prints the percentage difference for each loop between current network results and desired results, as well as the learning gradient ('increment').

Level 4 is useful for seeing if you need to give a smaller learning increment to learn().
I used level 4 debugging quite often in creating the letters.pl example script and the small_1.pl
example script.
Toggles debugging off when called with no arguments.
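For example, to watch the per-loop benchmarks while a single pattern is learned, and then switch debugging back off:

    $net->debug(4);
    print $net->learn(\@map, \@res), "\n";
    $net->debug();    # called with no arguments, toggles debugging off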
This will save the complete state of the network to disk, including all weights and any words crunched with crunch(). Also saves any output ranges set with range().
This has been modified to use a simple flat-file text storage format, and it no longer depends on any external modules.
This will load from disk any network saved by save() and completely restore the internal state to the point at which save() was called.
Instead of joining all the elements of the array together into one long string, as join() would, this prints the elements of $array_ref to STDOUT, adding a newline (\n) after every $row_length_in_elements number of elements has passed. Additionally, if you include a $high_state_character and a $low_state_character, it will print the $high_state_character (can be more than one character) for every element that has a true value, and the $low_state_character for every element that has a false value.
If you do not supply a $high_state_character, or the $high_state_character is a null or empty or undefined string, then join_cols() will just print the numerical value of each element separated by a null character (\0). join_cols() defaults to the latter behaviour.
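A hypothetical call, printing a map five elements per row with '#' for true elements and '.' for false ones, might look like this:

    $net->join_cols($outputs, 5, '#', '.');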
Rounds a floating-point number to an integer using sprintf() and int(). This provides better rounding than just calling int() on the float. Also used very heavily internally.
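The technique described is essentially the following standalone sketch (not the module's actual code):

    # Round a float to the nearest integer via sprintf() + int().
    sub round_float {
        my $float = shift;
        return int(sprintf("%.0f", $float));
    }

    print round_float(0.51), "\n";    # prints 1, where int(0.51) would give 0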
Do not use qw() to pass strings to crunch(); pass a plain string and it will be split internally.
This splits a string passed with /[\s\t]/ into an array ref containing unique indexes to the words. The words are stored in an internal array and preserved across load() and save() calls. This is designed to be used to generate unique maps suitable for passing to learn() and run() directly. It returns an array ref.
The words are not duplicated internally. For example:
$net->crunch("How are you?");
Will probably return an array ref containing 1,2,3. A subsequent call of:
$net->crunch("How is Jane?");
Will probably return an array ref containing 1,4,5. Notice, the first element stayed the same. That is because it already stored the word ``How''. So, each word is stored only once internally and the returned array ref reflects that.
This is meant to be used as a counterpart to the crunch() method, above, possibly to uncrunch() the output of a run() call, turning the output map back into a string of words. Consider the below code (also in ./examples/ex_crunch.pl):
    use AI::NeuralNet::BackProp;
    my $net = AI::NeuralNet::BackProp->new(2,3);

    for (0..3) {
        # Note: The four learn() statements below could
        # be replaced with learn_set() to do the same thing,
        # but use this form here for clarity.
        $net->learn($net->crunch("I love chips."),  $net->crunch("That's Junk Food!"));
        $net->learn($net->crunch("I love apples."), $net->crunch("Good, Healthy Food."));
        $net->learn($net->crunch("I love pop."),    $net->crunch("That's Junk Food!"));
        $net->learn($net->crunch("I love oranges."),$net->crunch("Good, Healthy Food."));
    }

    my $response = $net->run($net->crunch("I love corn."));

    print $net->uncrunch($response),"\n";
On my system, this responds with ``Good, Healthy Food.'' If you run the network with ``I love pop.'', though, you will probably get ``Food! apples. apples.'' (At least it returns that on my system.) As you can see, the associations are not yet perfect, but it can make for some interesting demos!
It will return the current width when called with a 0 or undef value.
    $pcx->{palette}->[0]->{red};
    $pcx->{palette}->[0]->{green};
    $pcx->{palette}->[0]->{blue};
Each is in the range of 0..63, corresponding to their named color component.
[$left,$top,$right,$bottom]
These must be in the range of 0..319 for $left and $right, and the range of 0..199 for $top and $bottom. The block is returned as an array ref with horizontal lines in sequential order. I.e. to get a pixel from [2,5] in the block, and $left-$right was 20, then the element in the array ref containing the contents of coordinates [2,5] would be found by [5*20+2] ($y*$width+$x).
    print( (@{$pcx->get_block(0,0,20,50)})[5*20+2] );
This would print the contents of the element at block coords [2,5].
Yet with the allowance of 0s, it requires one of two factors to learn correctly. Either you must enable randomness with $net->random(0.0001) (any value other than 0 works; see random()), or you must set an error minimum with the 'error => 5' option (you can use some other error value as well).
When randomness is enabled (that is, when you call random() with a value other than 0), it injects a bit of randomness into the output of every neuron in the network, except for the input and output neurons. The randomness is injected with rand()*$rand, where $rand is the value that was passed to the random() call. This assures the network that it will never have a pure 0 internally. It is bad to have a pure 0 internally because the weights cannot change a 0: when a 0 is multiplied by any weight, the product stays 0. Yet when a weight is multiplied by 0.00001, eventually, with enough weight, it will be able to learn. With a 0 value instead of 0.00001 or whatever, it would never be able to add enough weight to get anything other than a 0.
The second option to allow for 0s is to enable a maximum error with the 'error' option in
learn()
, learn_set()
, and learn_set_rand()
. This allows the network to not worry about
learning an output perfectly.
For accuracy reasons, it is recommended that you work with 0s using the random() method.
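For example, to learn a map that contains 0s with randomness enabled, or alternatively with an error minimum instead:

    my $net = new AI::NeuralNet::BackProp(2,4);
    $net->random(0.0001);              # keep internal values off pure 0
    my @map = (0,0,1,1);
    my @res = (1,0,1,0);
    $net->learn(\@map, \@res);

    # Or, leave randomness off and allow some error instead:
    # $net->random(0);
    # $net->learn(\@map, \@res, error => 5);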
If anyone has any thoughts/arguments/suggestions for using 0s in the network, let me know at jdb@wcoil.com.
This is an alpha release of AI::NeuralNet::BackProp, and, that holding true, I am sure there are probably bugs in here that I just have not found yet. If you find bugs in this module, I would appreciate it greatly if you could report them to me at <jdb@wcoil.com>, or, even better, try to patch them yourself, figure out why the bug is being buggy, and send me the patched code, again at <jdb@wcoil.com>.
Josiah Bryan <jdb@wcoil.com>
Copyright (c) 2000 Josiah Bryan. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
The AI::NeuralNet::BackProp
and related modules are free software. THEY COME WITHOUT WARRANTY OF ANY KIND.
Below is a list of people who have helped, made suggestions, contributed patches, etc., in no particular order.
    Tobias Bronx,  tobiasb@odin.funcom.com
    Pat Trainor,   ptrainor@title14.com
    Steve Purkis,  spurkis@epn.nu
    Rodin Porrata, rodin@ursa.llnl.gov
    Daniel Macks,  dmacks@sas.upenn.edu
Tobias was a great help with the initial releases, and helped with learning options and a great many helpful suggestions. Rodin gave me some great ideas for the new internals, as well as the idea of dropping Storable. Steve is the author of AI::Perceptron, and gave some good suggestions for weighting the neurons. Daniel was a great help with early beta testing of the module and related ideas. Pat has been a great help for running the module through the works. Pat is the author of the new Inter game, an in-depth strategy game. He is using a group of neural networks internally, which provides a good test bed for coming up with new ideas for the network. Thank you for all of your help, everybody.
You can always download the latest copy of AI::NeuralNet::BackProp from http://www.josiah.countystart.com/modules/AI/cgi-bin/rec.pl
A mailing list has been setup for AI::NeuralNet::BackProp for discussion of AI and neural net related topics as they pertain to AI::NeuralNet::BackProp. I will also announce in the group each time a new release of AI::NeuralNet::BackProp is available.
The list address is at: ai-neuralnet-backprop@egroups.com
To subscribe, send a blank email to: ai-neuralnet-backprop-subscribe@egroups.com
Rodin Porrata asked on the ai-neuralnet-backprop mailing list, ``What can they [Neural Networks] do?''. In regards to that question, consider the following:
Neural Nets are formed by simulated neurons connected together much the same way the brain's neurons are. Neural networks are able to associate and generalize without rules. They have solved problems in pattern recognition, robotics, speech processing, financial predicting and signal processing, to name a few.
One of the first impressive neural networks was NetTalk, which read in ASCII text and correctly pronounced the words (producing phonemes which drove a speech chip), even those it had never seen before. Designed by Johns Hopkins biophysicist Terry Sejnowski and Charles Rosenberg of Princeton in 1986, this application made the backpropagation training algorithm famous. Using the same paradigm, a neural network has been trained to classify sonar returns from an undersea mine and rock. This classifier, designed by Sejnowski and R. Paul Gorman, performed better than a nearest-neighbor classifier.
The kinds of problems best solved by neural networks are those that people are good at, such as association, evaluation and pattern recognition. Problems that are difficult to compute and do not require perfect answers, just very good answers, are also best done with neural networks. A quick, very good response is often more desirable than a more accurate answer which takes longer to compute. This is especially true in robotics or industrial controller applications. Predictions of behavior and general analysis of data are also affairs for neural networks. In the financial arena, consumer loan analysis and financial forecasting make good applications. Network designers are also working on weather forecasting with neural networks (myself included). Currently, doctors are developing medical neural networks as an aid in diagnosis. Attorneys and insurance companies are also working on neural networks to help estimate the value of claims.
Neural networks are poor at precise calculations and serial processing. They are also unable to predict or recognize anything that does not inherently contain some sort of pattern. For example, they cannot predict the lottery, since this is a random process. It is unlikely that a neural network could be built which has the capacity to think as well as a person does, for two reasons: neural networks are terrible at deduction, or logical thinking, and the human brain is just too complex to completely simulate. Also, some problems are too difficult for present technology. Real vision, for example, is a long way off.
In short, Neural Networks are poor at precise calculations, but good at association, evaluation, and pattern recognition.