DMelt:AI/1 Backpropagation Neural Net


Backpropagation

This section is devoted to one of the most popular neural network algorithms: backpropagation. See also the Backpropagation article.

Neural Network using Python

The best way to get started with neural networks is to take a look at simple Python code which implements a backpropagation neural net. In your DataMelt installation, look at the Python file:

Go to "Tools" and then "Online examples": Artificial Intelligence/neural net/neural_net_bpnn.py

Open it with the DataMelt IDE and study it - it is simple and well documented. It is self-contained and does not require Java libraries (so you can modify and rerun it at your convenience). This code creates a neural network with two input nodes, two hidden nodes, and one output node. The training sample is given by the following matrix, where the first two columns are the input and the last column is the output:

[[0,0], [0]],
[[0,1], [1]],
[[1,0], [1]],
[[1,1], [0]]

Open it with the DataMelt IDE and run it (press [F8]). You will see output like this:

error 0.849818      
error 0.057854      
error 0.003672      
error 0.001686      
error 0.001081      
error 0.000792      
error 0.000630      
error 0.000527      
error 0.000445      
error 0.000384      
[0, 0] -> [-0.021372111181607808]
[0, 1] -> [0.9824168796884087]
[1, 0] -> [0.9822863753023391]
[1, 1] -> [0.022461131355512465]

The output of this script shows the training error, and the last lines are the predictions after the training.
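Before opening the file, it may help to see the algorithm in a nutshell. Below is a minimal, self-contained sketch written for this page (it is not the bundled neural_net_bpnn.py): a small tanh network with one bias input trained on the same XOR table. Three hidden nodes are used here to make convergence more reliable; the result depends on the random seed, so rerun or increase the number of epochs if the error stalls.

import math, random

random.seed(1)

def rand(a, b):
    return (b - a) * random.random() + a

NI, NH, NO = 3, 3, 1   # inputs (including one bias node), hidden, output

# weight matrices, randomly initialized
wi = [[rand(-0.5, 0.5) for j in range(NH)] for i in range(NI)]
wo = [[rand(-2.0, 2.0) for k in range(NO)] for j in range(NH)]

def forward(inputs):
    # feed the pattern forward through the network
    ai = list(inputs) + [1.0]  # append the bias input
    ah = [math.tanh(sum(ai[i]*wi[i][j] for i in range(NI))) for j in range(NH)]
    ao = [math.tanh(sum(ah[j]*wo[j][k] for j in range(NH))) for k in range(NO)]
    return ai, ah, ao

def train(inputs, targets, rate=0.5):
    ai, ah, ao = forward(inputs)
    # output error times the derivative of tanh
    do = [(targets[k]-ao[k])*(1.0-ao[k]**2) for k in range(NO)]
    # propagate a weighted error back to each hidden node
    dh = [(1.0-ah[j]**2)*sum(do[k]*wo[j][k] for k in range(NO)) for j in range(NH)]
    # adjust the weights toward smaller error
    for j in range(NH):
        for k in range(NO): wo[j][k] += rate*do[k]*ah[j]
    for i in range(NI):
        for j in range(NH): wi[i][j] += rate*dh[j]*ai[i]
    return 0.5*sum((targets[k]-ao[k])**2 for k in range(NO))

# the same XOR training sample as above
data = [ [[0,0],[0]], [[0,1],[1]], [[1,0],[1]], [[1,1],[0]] ]
for epoch in range(1000):
    error = sum(train(x, t) for (x, t) in data)
    if epoch % 100 == 0: print "error %f" % error
for (x, t) in data:
    print x, "->", forward(x)[2]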

Neural Network using Java

DataMelt contains several powerful Java libraries for machine learning. Unlike Python implementations, libraries implemented in Java are typically faster by a factor of 10-20. Let us show how to use Jython (or any Java scripting language) to create and run a backpropagation network. The examples below use the Encog engine from Heaton Research.

Below we show a simple neural net which reads a CSV file, trains a network with 4 hidden neurons, and validates it. The input file looks like this:

 0,0,0
 1,0,1
 0,1,1
 1,1,0

Here is a short script in which the actual neural-net training takes only five lines:

# Licensed under the Apache License, Version 2.0 (the "License");
# based on http://www.heatonresearch.com/encog/

"""
/**
 * XOR: This example is essentially the "Hello World" of neural network
 * programming. This example shows how to construct an Encog neural network to
 * predict the output from the XOR operator. This example uses resilient
 * propagation (RPROP) to train the neural network. RPROP is the best general
 * purpose supervised training method provided by Encog.
 * 
 * For the XOR example with RPROP I use 4 hidden neurons. XOR can get by on just
 * 2, but often the random numbers generated for the weights are not enough for
 * RPROP to actually find a solution. RPROP can have issues on really small
 * neural networks, but 4 neurons seems to work just fine.
 * 
 * This example reads the XOR data from a CSV file, which this script
 * creates itself (the "data" string below).
 * 
**/
"""

from org.encog import Encog
from org.encog.util.csv import CSVFormat
from org.encog.util.simple import EncogUtility,TrainingSetUtil 

# create data file
data="""
0,0,0
1,0,1
0,1,1
1,1,0
"""
name="test.csv"
fi=open(name, "w")
fi.write(data)
fi.close()

trainingSet=TrainingSetUtil.loadCSVTOMemory(CSVFormat.ENGLISH,name,False,2,1) # no header, 2 inputs, 1 output
network=EncogUtility.simpleFeedForward(2,4,0,1,True)  # 2 inputs, 4 hidden neurons, 1 output, tanh activation
EncogUtility.trainToError(network, trainingSet, 0.01) # train until the error drops below 1%
EncogUtility.evaluate(network, trainingSet)           # print predictions for the training set
Encog.getInstance().shutdown()

This example uses several important Java classes: TrainingSetUtil (loads the CSV data into memory), EncogUtility (builds, trains and evaluates the network) and Encog (shuts down the engine).

Now we will extend this example by adding a few more lines to:

  1. Show the network structure in a separate window
  2. Build a network analyzer to extract weights, data, etc.
  3. Save the network in a file and restore it

The script which does all of this is shown below:

# Licensed under the Apache License, Version 2.0 (the "License");
# based on http://www.heatonresearch.com/encog/

"""
/**
 * XOR: This example is essentially the "Hello World" of neural network
 * programming. This example shows how to construct an Encog neural network to
 * predict the output from the XOR operator. This example uses resilient
 * propagation (RPROP) to train the neural network. RPROP is the best general
 * purpose supervised training method provided by Encog.
 * 
 * For the XOR example with RPROP I use 4 hidden neurons. XOR can get by on just
 * 2, but often the random numbers generated for the weights are not enough for
 * RPROP to actually find a solution. RPROP can have issues on really small
 * neural networks, but 4 neurons seems to work just fine.
 * 
 * This example reads the XOR data from a CSV file, which this script
 * creates itself (the "data" string below).
 * 
**/
"""

from org.encog import Encog
from org.encog.util.csv import CSVFormat
from org.encog.util.simple import EncogUtility,TrainingSetUtil 
from org.encog.visualize import NetworkVisualizeFrame
from org.encog.neural.networks.structure import AnalyzeNetwork
from org.encog.util.obj import SerializeObject
from java.io import *

# create data file
data="""
0,0,0
1,0,1
0,1,1
1,1,0
"""
name="test.csv"
fi=open(name, "w")
fi.write(data)
fi.close()

trainingSet=TrainingSetUtil.loadCSVTOMemory(CSVFormat.ENGLISH,name,False,2,1)
net=EncogUtility.simpleFeedForward(2,4,0,1,True)
EncogUtility.trainToError(net, trainingSet, 0.01)
EncogUtility.evaluate(net, trainingSet)
SerializeObject.save(File("network.eg"), net) # save 


# restore the network
network=SerializeObject.load(File("network.eg"))
f=NetworkVisualizeFrame(network)
f.setDefaultCloseOperation(1)
f.setVisible(1)

print "Analyze network:"
a=AnalyzeNetwork(network)  # analyze the restored network
print a.toString()
print "Values=",a.getAllValues()
print "Weights=",a.getWeightValues()
print "Nr of connections=",a.getTotalConnections()

This script generates a pop-up JFrame with the network structure:

DMelt example: Read a CSV file and do backpropagation with Encog, and then analyse it

Neural Network scripting

This time we will customize the network. Instead of using predefined factories, we build the actual objects and redefine the activation functions. We also supply the input data directly in the script.

As before, we will use a simple dataset with 2 inputs, which looks like this:

XOR_INPUT =  [ [ 0.0, 0.0 ], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]] # 2 inputs 
XOR_IDEAL =  [ [ 0.0 ], [1.0], [1.0], [0.0]]                     # output

Here is the Jython code which builds and trains the backpropagation network:

# Licensed under the Apache License, Version 2.0 (the "License");
# based on http://www.heatonresearch.com/encog/

# This example shows how to construct an Encog neural
# network to predict the output from the XOR operator. This example
# uses resilient propagation (RPROP) to train the neural network.


from org.encog.ml.data.basic import BasicMLDataSet 
from org.encog.neural.networks import BasicNetwork 
from org.encog.neural.networks.layers import BasicLayer 
from org.encog.neural.networks.training.propagation.resilient import ResilientPropagation 
from org.encog.engine.network.activation  import ActivationSigmoid

XOR_INPUT =  [ [ 0.0, 0.0 ], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
XOR_IDEAL =  [ [ 0.0 ], [1.0], [1.0], [0.0]]
 
network = BasicNetwork()
network.addLayer(BasicLayer(None,True,2))                 # input layer: 2 neurons plus bias
network.addLayer(BasicLayer(ActivationSigmoid(),True,3))  # hidden layer: 3 sigmoid neurons plus bias
network.addLayer(BasicLayer(ActivationSigmoid(),False,1)) # output layer: 1 sigmoid neuron, no bias
network.getStructure().finalizeStructure()
network.reset()                                           # randomize the weights

# train
trainingSet = BasicMLDataSet(XOR_INPUT, XOR_IDEAL)
train = ResilientPropagation(network, trainingSet)        # create the RPROP trainer

# iterations
epoch=1
error=1
while (error > 0.01):
        train.iteration()
        error=train.getError()
        if (epoch%10==0): print "Epoch #", epoch ," Error:",error
        epoch+=1

print "-> Test Trained Neural Network"

for pair in trainingSet:
            output = network.compute(pair.getInput())
            print pair.getInput().getData(0), "," , pair.getInput().getData(1), ", actual=", output.getData(0), ",ideal=",pair.getIdeal().getData(0)

Running this script prints the training progress and the results:

Epoch # 10  Error: 0.238874112686
Epoch # 20  Error: 0.195396962817
Epoch # 30  Error: 0.126547597192
Epoch # 40  Error: 0.0592342636102
Epoch # 50  Error: 0.0286886069541
Epoch # 60  Error: 0.0100759401001

-> Test Trained Neural Network
0.0 , 0.0 , actual= 0.112184658782 ,ideal= 0.0
1.0 , 0.0 , actual= 0.958848378144 ,ideal= 1.0
0.0 , 1.0 , actual= 0.902418114956 ,ideal= 1.0
1.0 , 1.0 , actual= 0.0811959953326 ,ideal= 0.0

As you can see, the predicted values are rather close to the expected ones.

Neural Networks using the Neuroph package

You can build and train neural nets using the Neuroph package and its class org.neuroph.nnet.MultiLayerPerceptron. This is a Java implementation and should be rather fast. Look at a simple example of a multi-layer perceptron for the XOR problem:

from jhplot import *
from jhplot.io import *
from org.neuroph.core.data import DataSet
from org.neuroph.core.data import DataSetRow
from org.neuroph.nnet import MultiLayerPerceptron
from org.neuroph.util import TransferFunctionType
from org.neuroph.core import NeuralNetwork

data=DataSet(2, 1)
data.addRow(DataSetRow([0,0],[0]))
data.addRow(DataSetRow([0,1],[1]))
data.addRow(DataSetRow([1,0],[1]))
data.addRow(DataSetRow([1,1],[0]))

#  2 inputs, 3 hidden neurons, 1 output
myMlP = MultiLayerPerceptron(TransferFunctionType.TANH, [2, 3, 1])
learningRule = myMlP.getLearningRule()
learningRule.setBatchMode(True)
print "Training neural network..."
myMlP.learn(data)
print "Training done! Save NN"
myMlP.save("myMlPerceptron.nnet");

print "Read saved .."
nnet =NeuralNetwork.createFromFile("myMlPerceptron.nnet");

print "Testing.."
for dataRow in data.getRows(): 
    nnet.setInput(dataRow.getInput().tolist())
    nnet.calculate()
    networkOutput = nnet.getOutput();
    print "Input: ", dataRow.getInput().tolist() 
    print " Output: ", networkOutput.tolist()

Run it inside DataMelt: it trains the network and then checks the result of the run. DataMelt also includes a similar example written in Java.

Backpropagation based on HNeuralNet

Below we show a simple example of how to use neural networks with the class HNeuralNet, which is designed for backpropagation neural networks. This library uses the Encog engine from Heaton Research, restructured for simple scripting with Jython, BeanShell, Groovy or JRuby.

First, let us create a file with our input data which will be analyzed later using a neural network.

from java.util import Random
from jhplot  import *
from jhplot.io import *
fin=PND('Input Data'); out=PND('Output value')
r= Random()
for i in range(300):      # 300 events
    x1=r.nextDouble()     # each event has 5 inputs and 1 output
    x2=0.5*r.nextDouble()
    x3=r.nextDouble()+0.01
    x4=1.2*r.nextDouble()+0.01
    x5=0.2*r.nextDouble()
    fin.add([x1,x2,x3,x4,x5])    # 5 values with input
    out.add([x1+x4+x2+x3*x5])    # 1 output from 5 inputs
d={'input':fin,'out':out}
Serialized.write(d,'data.ser')   # serialize data into a file
print "File data.ser was created!"

Our data structure is a dictionary with two string keys: 'input' (a PND array with 5 values per row) and 'out' (also a PND array, but with a single number per row). Our output is a function of the 5 random inputs. The goal of our neural net is to rediscover the relation between the 5 inputs and the output, pretending that we know nothing about this dependence.
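As a quick check, the dictionary can be read back and inspected. This small sketch uses only the Serialized and PND methods that appear elsewhere on this page:

from jhplot.io import *

d = Serialized.read('data.ser')   # read the dictionary back from the file
print "inputs :", d['input'].size(), "rows"
print "outputs:", d['out'].size(), "rows"
print "first input row=", d['input'].get(0)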

Below we analyze the data in the file data.ser using a backpropagation artificial neural network. The goal is to make predictions of the output value knowing the input variables.

The principle of backpropagation is actually quite easy to understand. The basic steps are:

  1. Initialize the network with small random weights
  2. Present an input pattern to the input layer of the network
  3. Feed the input pattern forward through the network to calculate its activation value
  4. Take the difference between desired output and the activation value to calculate the network’s activation error
  5. Adjust the weights feeding the output neuron to reduce its activation error for this input pattern
  6. Propagate an error value back to each hidden neuron that is proportional to its contribution to the network’s activation error.
  7. Adjust the weights feeding each hidden neuron to reduce its contribution to the error for this input pattern.
  8. Repeat steps 2 to 7 for each input pattern in the input collection.
  9. Repeat step 8 until the network is suitably trained.

During these iterations, each pattern is presented in turn, and the network is adjusted slightly before moving on to the next pattern. The magic really happens in step 6, which determines how much error to feed back to each hidden neuron. Once the error value has been established, training can continue.
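Written as equations (standard textbook notation, independent of the libraries used on this page), steps 4 to 7 amount to the following, where f is the activation function, net_j is the weighted input of neuron j, and eta is the learning rate:

% step 4: error of output neuron k with output o_k and target t_k
\delta_k = (t_k - o_k)\, f'(\mathrm{net}_k)

% step 6: each hidden neuron j receives an error proportional to its
% weighted contribution to the output error
\delta_j = f'(\mathrm{net}_j) \sum_k w_{jk}\, \delta_k

% steps 5 and 7: each weight is moved along the error gradient,
% where x_i is the activation feeding that weight
\Delta w_{ij} = \eta\, \delta_j\, x_i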


The next step is to build a backpropagation artificial neural network with 5 inputs, one hidden layer of 3 neurons, and one output.

from jhplot import *
from jhplot.io import *

net = HNeuralNet()
net.addFeedForwardLayer(5)  # input layer: 5 nodes
net.addFeedForwardLayer(3)  # hidden layer: 3 nodes
net.addFeedForwardLayer(1)  # output layer: 1 node
net.reset()                 # randomize the weights
net.save("test.eg")
net.showNetwork()

This script builds the following topology for the neural net:

DMelt example: Backpropagation neural net (2): Create a neural network

For convenience, the weights are randomized (net.reset()) and the network is saved into the file "test.eg". This file fully defines the neural net; open it in any editor and you will find a simple XML definition of the network.


You can find details of the class HNeuralNet here. Once the neural net is saved, one can restore it at any time:

from jhplot import *
net = HNeuralNet()
net.read("test.eg","myNN")  # the network name must match the one used when saving
net.showNetwork()

Now we have prepared the data and the NN. The next step is to train the NN using the input data. Note that it is usually desirable to rescale the input values so that they lie between [-1,1]; this is explained in more detail in the book.
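If you do want to rescale, a minimal sketch could look as follows; standardize() and rescale() are the same PND methods used in the "Neural Network for predictions" example later on this page:

from jhplot import *
from jhplot.io import *

d = Serialized.read('data.ser')
input = d['input']
out = d['out']
input.standardize()     # shift and scale each input column
scale = out.rescale(0)  # rescale the output, keeping the factor to undo it later

For simplicity, however, the training below uses the original, unscaled data: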

from jhplot import *
from jhplot.io import *

d=Serialized.read('data.ser')
input=d['input'].getRows('input',0,100) # train on the first 100 events only
out=d['out'].getRows('result',0,100)

net = HNeuralNet()
net.read("test.eg") # read NN from a file 

net.setData(input, out)
print net.trainBackpropagation(1,1000,0.001,0.02,0.005)

print "Epoch error=",net.getEpochError()
net.save("test_trained.eg")
net.showNetwork() # show network
net.showWeights() # show the weights
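Once the trained network is saved in "test_trained.eg", it can be read back and used for predictions. Here is a sketch (using the same read(), getRows() and predict() calls as the other examples on this page) that predicts the output for events not used in the training:

from jhplot import *
from jhplot.io import *

d = Serialized.read('data.ser')
input = d['input'].getRows('input', 100, 200)  # events 100-200 were not used for training
out = d['out'].getRows('result', 100, 200)

net = HNeuralNet()
net.read("test_trained.eg")
pred = net.predict(input)
for i in range(5):   # compare the first few predictions with the true values
    print "predicted=", pred.get(i)[0], " expected=", out.get(i)[0]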

Neural Net with input CSV data

In this example we build a neural net using the CSV input data shown below. The first two columns are the inputs, while the last column is the output.

0.10, 0.03, 0
0.11, 0.11, 0
0.11, 0.82, 0
0.13, 0.17, 0
0.20, 0.81, 0
0.21, 0.57, 1
0.25, 0.52, 1
0.26, 0.48, 1
0.28, 0.17, 1
0.28, 0.45, 1
0.37, 0.28, 1
0.41, 0.92, 0
0.43, 0.04, 1
0.44, 0.55, 1
0.47, 0.84, 0
0.50, 0.36, 1
0.51, 0.96, 0
0.56, 0.62, 1
0.65, 0.01, 1
0.67, 0.50, 1
0.73, 0.05, 1
0.73, 0.90, 0
0.73, 0.99, 0
0.78, 0.01, 1
0.83, 0.62, 0
0.86, 0.42, 1
0.86, 0.91, 0
0.89, 0.12, 1
0.95, 0.15, 1
0.98, 0.73, 0

In the code below we read this input file and perform backpropagation. The script shows the structure of the neural net, the error as a function of the epoch, and the predicted values.

from jhplot import *
from jhplot.io.csv  import *
from java.io import *

reader =CSVReader( FileReader("/home/sergei/Download/data.csv")) # adjust this path to your copy of the data
input=PND("input")
output=PND("output")
while True:
      nextLine = reader.readNext() 
      if nextLine== None: break
      input.add([float(nextLine[0]),float(nextLine[1])])
      output.add([float(nextLine[2])])

net = HNeuralNet() # build NN: 2 inputs, 2 hidden neurons, 1 output
net.addFeedForwardLayer(2)
net.addFeedForwardLayer(2)
net.addFeedForwardLayer(1)
net.reset()
net.showNetwork()
net.setData(input, output)
print net.trainBackpropagation(1,5000,0.01,0.025,0.002)
print "Epoch error=",net.getEpochError()

pred=net.predict(input)
for i in range(  input.size() ):
   print "predicted=",pred.get(i)[0], " expected=",output.get(i)[0]

The code generates a neural net as shown here:

Neural Network for predictions

In this example we will show how to:

  • Create an input data set
  • Normalize the input
  • Construct the NN
  • Analyze the output

You can find below a script which does all of the above. In addition, it shows the learning curve and the neural net diagram, and prints the expected and predicted values. The example also shows how to save a neural net in a file and then read it back:

from java.util import Random
from jhplot  import *
from jhplot.io import *

Events=1000
input=PND('Data')
out=PND('Output')
r= Random()
for i in range(Events):
    mass=r.nextDouble()*10
    acceler=r.nextDouble()*10
    force=mass*acceler
    input.add([mass,acceler])
    out.add([force])
d={'input':input,'out':out}

# rescale input and output
input=d['input']
out=d["out"]

# scale_input=input.rescale(1)
# scale_output=out.rescale(1)
input.standardize()   # standardize the inputs
scale=out.rescale(0)  # rescale the output, keeping the scale factor to undo it later

input=d['input'].getRows('input',0,(int)(0.5*Events))
out=d['out'].getRows('result',0,(int)(0.5*Events))

net = HNeuralNet()
net.addFeedForwardLayer(2)
net.addFeedForwardLayer(2)
net.addFeedForwardLayer(1)
net.reset()

net.setData(input, out)
print net.trainBackpropagation(1,5000,0.01,0.025,0.002)
print "Epoch error=",net.getEpochError()
net.save("test.eg")
net.showNetwork()

imin,imax =(int)(0.5*Events),Events  # the second half of the sample, not used for training
input=d['input'].getRows('input',imin,imax)
out=d["out"].getRows('result',imin,imax)

net=HNeuralNet()
net.read('test.eg');

pred=net.predict(input)
pred.rescale(scale)
out.rescale(scale)

d={'predicted':pred,'expected':out}
predicted=d['predicted']
expected=d['expected']
ratio=predicted.copy('ratio')
ratio.oper(expected,'/')

c1=HPlot()
c1.visible()
c1.setRange(0,2,0,100)

h=H1D('ratio',50,0,2.)
h.fill(ratio)
c1.draw(h)
c1.drawStatBox(h)

for i in range( (int)(0.5*Events)): # check the first half of the events
   p=predicted.getRow(i)
   x=expected.getRow(i)
   d1=p.get(0)
   d2=x.get(0)
   print i,'predicted=',d1,' expected=',d2

The script generates 3 figures:

  • The structure of the neural net
  • The learning error as a function of the epoch
  • The error on the prediction, i.e. the difference between the predicted and expected values

DMelt example: Backpropagation neural network training/analysis

Neural Network using Joone

Joone is another graphical editor and engine for neural networks. The network is available as the class org.joone.net.NeuralNet. Let us show an example that trains a neural network, shows the global error, and validates the network. We use "in-memory" data, rather than data read from files.

from org.joone.engine import NeuralNetListener
from org.joone.engine import SigmoidLayer,FullSynapse,Monitor 
from org.joone.io import MemoryOutputSynapse,MemoryInputSynapse
from org.joone.engine.learning import TeachingSynapse
from org.joone.net import NeuralNet
from java.lang import System
from jhplot  import *

class joone(NeuralNetListener):

  mills=0
  nnet = NeuralNet()
  epochs=1000
  c1 = SPlot()
  c1.visible()
  c1.setAutoRange()
  c1.setMarksStyle('various')
  c1.setConnected(1, 0)
  c1.setNameY('Global Error')
  c1.setNameX('Epoch') 

  def Go(self,epochs,inputArray,desiredOutput):
     self.epochs=epochs
     print "Running NN for ", epochs, " epochs"
     # First, creates the three Layers
     input=SigmoidLayer()
     hidden=SigmoidLayer()
     output=SigmoidLayer()

     input.setLayerName("input")
     hidden.setLayerName("hidden")
     output.setLayerName("output")

     input.setRows(2)
     hidden.setRows(3)
     output.setRows(1)
     synapse_IH = FullSynapse() # input -> hidden conn. 
     synapse_HO = FullSynapse() # hidden -> output conn. 
     synapse_IH.setName("IH")
     synapse_HO.setName("HO")

     # Connect the input layer with the hidden layer
     input.addOutputSynapse(synapse_IH)
     hidden.addInputSynapse(synapse_IH)

     # Connect the hidden layer with the output layer
     hidden.addOutputSynapse(synapse_HO)
     output.addInputSynapse(synapse_HO)

     inputStream = MemoryInputSynapse() # input array 
     inputStream.setInputArray(inputArray)
     inputStream.setAdvancedColumnSelector("1-2") # The first two columns contain the input 

     desiredOutputSynapse = MemoryInputSynapse() # desired  
     desiredOutputSynapse.setInputArray(desiredOutput)
     desiredOutputSynapse.setAdvancedColumnSelector("1")

     # set the input data
     input.addInputSynapse(inputStream)

     trainer = TeachingSynapse()
     trainer.setDesired(desiredOutputSynapse)
     output.addOutputSynapse(trainer) # Connects the Teacher to the last layer 

     # Creates a new NeuralNet. All the layers must be inserted 
     self.nnet.addLayer(input, NeuralNet.INPUT_LAYER)
     self.nnet.addLayer(hidden, NeuralNet.HIDDEN_LAYER)
     self.nnet.addLayer(output, NeuralNet.OUTPUT_LAYER)
     mon = self.nnet.getMonitor()
     mon.setTrainingPatterns(len(inputArray))  # of rows (patterns) contained in the input file
     mon.setTotCicles(self.epochs)             # How many times the net must be trained on the input patterns        
     mon.setLearningRate(0.7)
     mon.setMomentum(0.6)
     mon.setLearning(True)          #  The net must be trained
     mon.setSingleThreadMode(True)  #  Set to false for multi-thread mode
     # The application registers itself as the monitor's listener so it can
     # receive the notifications of termination from the net.
     mon.addNeuralNetListener(self)
     self.mills = System.currentTimeMillis()
     self.nnet.randomize(0.5)
     self.nnet.go(True)  #  The net starts in non async mode

  def netStopped(self,e):
      delay = System.currentTimeMillis() - self.mills
      print "Training finished after ",delay," ms"

  def cicleTerminated(self,e):
      pass 

  def netStarted(self,e):
      pass 
    
  def errorChanged(self,e):
       mon = e.getSource() 
       c = self.epochs-mon.getCurrentCicle()
       cl = c / 1000
       if ((cl * 1000) == c): 
           err=mon.getGlobalError()
           print c," epoch RMSE = ", err 
           self.c1.addPoint(0,c,err,1)
           self.c1.update()
 
  def netStoppedError(self,e):
       pass 

  def getPredict(self,inputArray):
      input = self.nnet.getInputLayer()
      input.removeAllInputs()
      nsize=len(inputArray)
      memInp =  MemoryInputSynapse()
      memInp.setFirstRow(1)
      memInp.setAdvancedColumnSelector("1,2")
      input.addInputSynapse(memInp)
      memInp.setInputArray(inputArray)
      output = self.nnet.getOutputLayer()
      output.removeAllOutputs()
      memOut = MemoryOutputSynapse()
      output.addOutputSynapse(memOut)
      self.nnet.getMonitor().setTotCicles(1)
      self.nnet.getMonitor().setTrainingPatterns(nsize)
      self.nnet.getMonitor().setLearning(False)
      self.nnet.go(True)  #  The net starts in non async mode
      pred=[]
      for i in range(nsize):
           pattern = memOut.getNextPattern()
           # print "Predicted Pattern #",(i+1)," = ",pattern[0]
           pred.append(pattern[0])
      return pred
 
inputArray=[[0.0, 0.0],[0.0, 1.0],[1.0, 0.0],[1.0, 1.0]]
desiredOutput=[[0],[1],[1],[0]]

xor=joone()
xor.Go(20000,inputArray,desiredOutput)
print "Stop. Get predictions"
pred=xor.getPredict(inputArray)
for i in range(len(inputArray)):
    print inputArray[i],"predicted=",pred[i], " expected=",desiredOutput[i]

The global error as a function of epoch is shown below:

DMelt example: Training and verifying a neural net using Joone

Time series forecast using Joone

Now we will use the Joone neural network to forecast a market-like time series, generated in the form cos(x)*sin(x) + Gaussian noise.

from org.joone.engine import NeuralNetListener,TanhLayer,DelayLayer 
from org.joone.engine import SigmoidLayer,FullSynapse,Monitor 
from org.joone.io import FileInputSynapse,FileOutputSynapse 
from org.joone.engine.learning import TeachingSynapse
from org.joone.net import NeuralNet
from java.lang import System
from java.io import *
from jhplot  import *
from java.awt import *
import math
from java.util import  Random


class joone(NeuralNetListener):

  mills=0
  epochs=1000
  fileName=""
  nnet = NeuralNet()
  input = DelayLayer()
  hidden = SigmoidLayer()
  output = TanhLayer()


  # a function to help connect layers
  def connect(self,layer1, syn, layer2):
        layer1.addOutputSynapse(syn)
        layer2.addInputSynapse(syn)

  def createDataSet(self,fileName,firstRow, lastRow, advColSel):
        fInput = FileInputSynapse()
        fInput.setInputFile(File(fileName))
        fInput.setFirstRow(firstRow)
        fInput.setLastRow(lastRow)
        fInput.setAdvancedColumnSelector(advColSel)
        return fInput


  def createNet(self,fileName,epochs,trainingPatterns,temporalWindow):
     self.epochs=epochs
     print "Running NN for ", epochs, " epochs"
     self.fileName=fileName
     self.input.setTaps(temporalWindow-1)
     self.input.setRows(1)
     self.hidden.setRows(15)
     self.output.setRows(1)

     self.connect(self.input,FullSynapse(), self.hidden)
     self.connect(self.hidden,FullSynapse(), self.output)

     self.input.addInputSynapse(self.createDataSet(self.fileName, 1, trainingPatterns, "1"))

     trainer = TeachingSynapse()
     trainer.setDesired(self.createDataSet(self.fileName, 2, trainingPatterns+1, "1"))
     self.output.addOutputSynapse(trainer)

     self.nnet.addLayer(self.input, NeuralNet.INPUT_LAYER)
     self.nnet.addLayer(self.hidden, NeuralNet.HIDDEN_LAYER)
     self.nnet.addLayer(self.output, NeuralNet.OUTPUT_LAYER)
     self.mills = System.currentTimeMillis()
     self.nnet.randomize(0.5) 


  def train(self): 
        mon = self.nnet.getMonitor()
        mon.setLearningRate(0.2)
        mon.setMomentum(0.7)
        mon.setTrainingPatterns(trainingPatterns)
        mon.setTotCicles(epochs)
        mon.setPreLearning(temporalWindow)
        mon.setLearning(True)
        mon.addNeuralNetListener(self)
        self.nnet.start()
        mon.Go()
        self.nnet.join()

  def interrogate(self,outputFile):
        mon = self.nnet.getMonitor()
        self.input.removeAllInputs()
        startRow = trainingPatterns - temporalWindow
        self.input.addInputSynapse(self.createDataSet(self.fileName, startRow+1, startRow+40, "1"))
        self.output.removeAllOutputs()
        fOutput = FileOutputSynapse()
        fOutput.setFileName(outputFile)
        self.output.addOutputSynapse(fOutput)
        mon.setTrainingPatterns(40)
        mon.setTotCicles(1)
        mon.setLearning(False)
        self.nnet.start()
        mon.Go()
        self.nnet.join()

  def netStopped(self,e):
      mon =e.getSource()
      if (mon.isLearning()): 
            epoch = mon.getTotCicles() - mon.getCurrentCicle()
            print "Epoch:",epoch," last RMSE=",mon.getGlobalError()
      else:
         delay = System.currentTimeMillis() - self.mills
         print "Training finished after ",delay," ms"

  def cicleTerminated(self,e):
      mon = e.getSource()
      epoch = mon.getTotCicles() - mon.getCurrentCicle()
      if ((epoch > 0) and ((epoch % 100) == 0)): 
            print "Epoch:",epoch," RMSE=",mon.getGlobalError()

  def netStarted(self,e):
      pass 
    
  def errorChanged(self,e):
      pass
 
  def netStoppedError(self,e):
      pass 

r=Random() 
print "Create a file joone_timeseries.txt with time series cos(x)*sin(x)+noise"
f = open("joone_timeseries.txt", "w")
for x in range(0,1000):
       yEst = 0.9*math.cos(x*0.4)*math.sin(0.7*x)+0.02*r.nextGaussian() # cos(0.4x)*sin(0.7x)+noise
       f.write( str(yEst) + "\n"  )
f.close()


# input file with the time series
fileName = "joone_timeseries.txt"
print "Using input file "+fileName
# Web.get("http://datamelt.org/examples/data/"+fileName)  # alternative: download the file
trainingPatterns = 200
epochs = 10000
temporalWindow = 20

c1 = HPlot("show data")
c1.visible()
c1.setRangeX(0,1100)
c1.setNameX("time")
c1.setNameY("price")
c1.setMarginLeft(90)

# show data
dataIN = [line.strip() for line in open(fileName, 'r')]
print "Total data size=",len(dataIN), " For learning=",trainingPatterns
p1=P1D("Time series")
p1.setDrawLine(True)
p1.setPenWidth(1)
for i in range(len(dataIN)):
    p1.add(i,float(dataIN[i]))
c1.draw(p1)

# run NN for predictions
ts=joone()
ts.createNet(fileName,epochs,trainingPatterns,temporalWindow)
print "Training..."
ts.train()
ts.interrogate("results1.txt")
ts.interrogate("results2.txt")

# show predictions
data = [line.strip() for line in open("results2.txt", 'r')]
p2=P1D("Prediction")
p2.setColor(Color.red)
p2.setDrawLine(True)
for i in range(len(data)):
     p2.add(len(dataIN)+i,float(data[i]))

c1.draw(p2)
print "Done."

Here is the output image that shows the original time series and the predicted trend (in red).

DMelt example: Forecasting time series using Joone neural network

As you can see, the neural net correctly predicts the expected behavior.