Class Ai4r::NeuralNetwork::Backpropagation
In: lib/ai4r/neural_network/backpropagation.rb
Parent: Object

Introduction

This is an implementation of a multilayer perceptron network, using the backpropagation algorithm for learning.

Backpropagation is a supervised learning technique (described by Paul Werbos in 1974, and further developed by David E. Rumelhart, Geoffrey E. Hinton and Ronald J. Williams in 1986).

Features

  • Support for any network architecture (number of layers and neurons)
  • Configurable propagation function
  • Optional usage of bias
  • Configurable momentum
  • Configurable learning rate
  • Configurable initial weight function
  • 100% Ruby code, no external dependencies

Parameters

Use the class method get_parameters_info to obtain details on the algorithm parameters. Use set_parameters to set values for these parameters.

  • :disable_bias => If true, the algorithm will not use bias nodes. False by default.
  • :initial_weight_function => f(n, i, j) must return the initial weight for the connection between node i in layer n and node j in layer n+1. By default, a random number in the [-1, 1) range.
  • :propagation_function => By default: lambda { |x| 1/(1+Math.exp(-1*(x))) }
  • :derivative_propagation_function => Derivative of the propagation function, based on propagation function output. By default: lambda { |y| y*(1-y) }, where y=propagation_function(x)
  • :learning_rate => By default 0.25
  • :momentum => By default 0.1. Set this parameter to 0 to disable momentum
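
For example, a minimal sketch of overriding some of these defaults (the values shown are illustrative, not recommendations):

  net = Ai4r::NeuralNetwork::Backpropagation.new([4, 3, 2])
  net.set_parameters(
    :learning_rate => 0.5,
    :momentum => 0,                                       # disable momentum
    :initial_weight_function => lambda { |n, i, j| 0.5 }  # constant init
  )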

How to use it

  # Create the network with 4 inputs, 1 hidden layer with 3 neurons,
  # and 2 outputs
  net = Ai4r::NeuralNetwork::Backpropagation.new([4, 3, 2])

  # Train the network (here `example` and `result` stand for arrays
  # holding your training inputs and their expected outputs)
  1000.times do |i|
    net.train(example[i], result[i])
  end

  # Use it: evaluate data with the trained network
  net.eval([12, 48, 12, 25])
      # =>  [0.86, 0.01]
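
As a more self-contained sketch, here is the classic XOR problem end to end (the seed, architecture, and iteration count are illustrative; convergence depends on the random initial weights, so more iterations may be needed):

  require 'ai4r'

  srand 1  # fix the random seed so runs are repeatable

  examples = [[0, 0], [0, 1], [1, 0], [1, 1]]
  results  = [[0],    [1],    [1],    [0]]

  net = Ai4r::NeuralNetwork::Backpropagation.new([2, 3, 1])
  2000.times do
    examples.each_with_index { |input, i| net.train(input, results[i]) }
  end

  examples.each do |input|
    puts "#{input.inspect} => #{net.eval(input).inspect}"
  end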

More about multilayer perceptron neural networks and backpropagation:

  • http://en.wikipedia.org/wiki/Multilayer_perceptron
  • http://en.wikipedia.org/wiki/Backpropagation

About the project

Author: Sergio Fierens
License: MPL 1.1
Url: ai4r.rubyforge.org

Methods

  eval   init_network   new   train

Included Modules

Ai4r::Data::Parameterizable

Attributes

activation_nodes  [RW] 
structure  [RW] 
weights  [RW] 
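
A sketch of what these attributes hold after an evaluation (exact shapes assume the default bias setting):

  net = Ai4r::NeuralNetwork::Backpropagation.new([2, 2, 1])
  net.eval([0, 1])
  net.structure         # => [2, 2, 1]
  net.activation_nodes  # activations of the last eval, layer by layer
                        # (non-output layers carry an extra bias node)
  net.weights           # one weight matrix per pair of consecutive layers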

Public Class methods

new( network_structure )

Creates a new network, specifying its architecture. E.g.

  net = Backpropagation.new([4, 3, 2])  # 4 inputs
                                        # 1 hidden layer with 3 neurons,
                                        # 2 outputs
  net = Backpropagation.new([2, 3, 3, 4])   # 2 inputs
                                            # 2 hidden layers with 3 neurons each,
                                            # 4 outputs
  net = Backpropagation.new([2, 1])   # 2 inputs
                                      # No hidden layer
                                      # 1 output

Public Instance methods

eval( input_values )

Evaluates the input. E.g.

    net = Backpropagation.new([4, 3, 2])
    net.eval([25, 32.3, 12.8, 1.5])
        # =>  [0.83, 0.03]

init_network()

Initialize (or reset) activation nodes and weights with the provided net structure and parameters.

train( inputs, outputs )

This method trains the network using the backpropagation algorithm.

input: Network's input.

output: Expected output for the given input.

This method returns the network error:

  0.5 * sum( (expected_value[i] - output_value[i])**2 )
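
Since train returns this error, one common pattern (a sketch; the tolerance is arbitrary) is to loop until the error is small enough:

  net = Ai4r::NeuralNetwork::Backpropagation.new([2, 2, 1])
  error = 1.0
  error = net.train([1, 0], [1]) while error > 0.001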

Protected Instance methods

Propagate error backwards

Calculate the quadratic error for an expected output value:

  Error = 0.5 * sum( (expected_value[i] - output_value[i])**2 )
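
For instance, with expected output [1, 0] and actual output [0.8, 0.1]:

  Error = 0.5 * ((1 - 0.8)**2 + (0 - 0.1)**2)
        = 0.5 * (0.04 + 0.01)
        = 0.025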

Calculate deltas for hidden layers

Calculate deltas for output layer
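
In sketch form, covering both the output and hidden layer cases, with the default sigmoid derivative y*(1-y) (the names below are illustrative, not the internal variable names):

  # Output layer: delta is the derivative times the raw error
  delta[j] = derivative_propagation_function(output[j]) *
             (expected[j] - output[j])

  # Hidden layer n: the error is the weighted sum of the deltas
  # already computed for layer n+1
  delta[n][i] = derivative_propagation_function(activation[n][i]) *
                sum_j( delta[n+1][j] * weight[n][i][j] )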

Propagate values forward
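
Conceptually (a sketch, not the literal implementation):

  # Activation of node j in layer n+1: weighted sum of the previous
  # layer's activations, squashed by the propagation function
  activation[n+1][j] = propagation_function(
    sum_i( activation[n][i] * weight[n][i][j] ) )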

Initialize the neurons structure.

Momentum usage requires knowing how much each weight changed in the previous training step. This method initializes the @last_changes structure with 0 values.

Initialize the weight arrays using the function specified by the initial_weight_function parameter.

Update weights after @deltas have been calculated.
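
The rule applied to each weight, in sketch form (using the learning_rate and momentum parameters described above; names are illustrative):

  change = delta[n+1][j] * activation[n][i]
  weight[n][i][j] += learning_rate * change +
                     momentum * last_change[n][i][j]
  last_change[n][i][j] = change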
