The following code implements a machine-learning model (a support vector machine from scikit-learn, not strictly a neural network) in Python. The algorithm is from
https://www.edureka.co/blog/scikit-learn-machine-learning/
My code is open source and comes without any implied warranty of suitability or fitness for a particular purpose. It can be freely distributed, copied, and changed.
This is one of my first attempts at implementing a model like this, and I hope I have got the shape of the x and y input vectors right. The input vector x holds the opening prices of Post Office shares and the vector y holds the closing prices. The model gives a prediction of 0.505, which seems reasonable and within range. The prediction needs to be scaled back up, because all prices were normalised to fit between 0 and 1. For example, if the prices were scaled down by dividing by 100, then the true value would be 0.505 * 100 = 50.5.
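As a minimal sketch of that rescaling step (assuming the prices really were normalised by dividing by a constant factor; the scale_factor name is my own and not part of the original code):
import numpy as np
scale_factor = 100.0                       # assumed divisor used to normalise the raw prices
raw_prices = np.array([51.2, 51.4, 50.9])  # example raw opening prices
normalised = raw_prices / scale_factor     # values now lie between 0 and 1
prediction = 0.505                         # model output on the normalised scale
true_value = prediction * scale_factor     # scale back up: 0.505 * 100 = 50.5
print(true_value)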
Python has to be installed, as well as the packages NumPy and scikit-learn (for example with pip install numpy scikit-learn; on some Linux distributions they are packaged as python-numpy and python-sklearn).
Save the code using an editor as a file with a .py extension.
The program can be run from a terminal with
python filename.py
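A quick way to check that the required packages are importable before running the program (a minimal sketch):
import numpy
import sklearn
print(numpy.__version__, sklearn.__version__)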
Here is the code:
import numpy as np
from sklearn import svm
clf = svm.SVC(gamma=0.001, C=100) # a support vector classifier; each distinct price is treated as its own class label
#opening prices of shares = x
x = np.array([[0.512], [0.514], [0.509], [0.508], [0.510], [0.50750], [0.513], [0.50650], [0.50050], [0.51050], [0.503], [0.505], [0.510], [0.5065], [0.5165], [0.5175], [0.522], [0.5145], [0.5135], [0.5225], [0.525], [0.525], [0.525], [0.5245], [0.508], [0.524]])
print('Shape of x vector:')
print(x, x.shape)
#closing prices of shares = y
y = np.array([0.51050, 0.50950, 0.50750, 0.510, 0.5110, 0.50250, 0.50450, 0.5090, 0.50750, 0.5050, 0.503, 0.5065, 0.50750, 0.505, 0.503, 0.50650, 0.507, 0.5050, 0.49420, 0.5070, 0.499, 0.49330, 0.5095, 0.51650, 0.501, 0.51150]) # 1-D array of target values
print('Shape of y vector:')
print(y, y.shape)
xtrain, ytrain = x[:-1], y[:-1] # hold out the last sample for prediction
clf.fit(xtrain, ytrain) # train on the data
print('Prediction for x:', clf.predict(x[-1:])) # predict expects a 2-D array, so use a slice
The output on the screen (captured under Python 2 with an older scikit-learn, so the exact formatting may differ on newer versions) is:
Shape of x vector:
(array([[ 0.512 ],
[ 0.514 ],
[ 0.509 ],
[ 0.508 ],
[ 0.51 ],
[ 0.5075],
[ 0.513 ],
[ 0.5065],
[ 0.5005],
[ 0.5105],
[ 0.503 ],
[ 0.505 ],
[ 0.51 ],
[ 0.5065],
[ 0.5165],
[ 0.5175],
[ 0.522 ],
[ 0.5145],
[ 0.5135],
[ 0.5225],
[ 0.525 ],
[ 0.525 ],
[ 0.525 ],
[ 0.5245],
[ 0.508 ],
[ 0.524 ]]), (26, 1))
Shape of y vector:
(array([ 0.5105, 0.5095, 0.5075, 0.51 , 0.511 , 0.5025, 0.5045,
0.509 , 0.5075, 0.505 , 0.503 , 0.5065, 0.5075, 0.505 ,
0.503 , 0.5065, 0.507 , 0.505 , 0.4942, 0.507 , 0.499 ,
0.4933, 0.5095, 0.5165, 0.501 , 0.5115]), (26,))
('Prediction for x:', array([ 0.505]))
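A note on the model choice: svm.SVC is a classifier, so with continuous prices as targets each distinct closing price effectively becomes its own class, and recent versions of scikit-learn will refuse to fit a classifier on continuous labels at all. For a regression problem like this one, scikit-learn's support vector regressor svm.SVR is the more natural fit. Here is a minimal sketch (the hyperparameters are simply carried over from the classifier above, not tuned, and the data is shortened for illustration):
import numpy as np
from sklearn import svm
# a few of the opening/closing prices from above, for illustration only
x = np.array([[0.512], [0.514], [0.509], [0.508], [0.510], [0.524]])
y = np.array([0.5105, 0.5095, 0.5075, 0.510, 0.511, 0.5115])
reg = svm.SVR(gamma=0.001, C=100) # support vector regression instead of classification
reg.fit(x[:-1], y[:-1]) # train on all but the last sample
print('Prediction for x:', reg.predict(x[-1:]))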
This is another example, this time a genuine neural network in Python. The algorithm is from
https://dev.to/shamdasani/build-a-flexible-neural-network-with-backpropagation-in-python
This neural net uses the same example of training a net with the opening and closing prices of Post Office shares. The code is as follows:
import numpy as np
X = np.array([[0.512], [0.514], [0.509], [0.508], [0.510], [0.50750], [0.513],[0.50650], [0.50050], [0.51050], [0.503], [0.505], [0.510], [0.5065], [0.5165], [0.5175], [0.522], [0.5145], [0.5135], [0.5225], [0.525], [0.525], [0.525], [0.5245], [0.508], [0.524]])
print(X, X.shape)
y = np.array([[0.51050], [0.50950], [0.50750], [0.510], [0.5110], [0.50250], [0.50450], [0.5090], [0.50750], [0.5050], [0.503], [0.5065], [0.50750], [0.505], [0.503], [0.50650], [0.507], [0.5050], [0.49420], [0.5070], [0.499],[0.49330], [0.5095], [0.51650], [0.501], [0.5115]])
print(y, y.shape)
class Neural_Network(object):
    def __init__(self):
        # parameters
        self.inputSize = 1
        self.outputSize = 1
        self.hiddenSize = 3
        # weights
        self.W1 = np.random.randn(self.inputSize, self.hiddenSize)  # (1x3) weight matrix from input to hidden layer
        self.W2 = np.random.randn(self.hiddenSize, self.outputSize) # (3x1) weight matrix from hidden to output layer

    def forward(self, X):
        # forward propagation through our network
        self.z = np.dot(X, self.W1)        # dot product of X (input) and the first set of (1x3) weights
        self.z2 = self.sigmoid(self.z)     # activation function
        self.z3 = np.dot(self.z2, self.W2) # dot product of hidden layer (z2) and the second set of (3x1) weights
        o = self.sigmoid(self.z3)          # final activation function
        return o

    def sigmoid(self, s):
        # activation function
        return 1 / (1 + np.exp(-s))

    def sigmoidPrime(self, s):
        # derivative of sigmoid (s is assumed to already be a sigmoid output)
        return s * (1 - s)

    def backward(self, X, y, o):
        # backward propagation through the network
        self.o_error = y - o  # error in output
        self.o_delta = self.o_error * self.sigmoidPrime(o)          # applying derivative of sigmoid to error
        self.z2_error = self.o_delta.dot(self.W2.T)                 # z2 error: how much the hidden layer contributed to the output error
        self.z2_delta = self.z2_error * self.sigmoidPrime(self.z2)  # applying derivative of sigmoid to z2 error
        self.W1 += X.T.dot(self.z2_delta)      # adjusting the first set (input --> hidden) of weights
        self.W2 += self.z2.T.dot(self.o_delta) # adjusting the second set (hidden --> output) of weights

    def train(self, X, y):
        o = self.forward(X)
        self.backward(X, y, o)

NN = Neural_Network()
for i in range(1000):  # trains the NN 1,000 times
    # print("Input: \n" + str(X))
    # print("Actual Output: \n" + str(y))
    # print("Predicted Output: \n" + str(NN.forward(X)))
    # print("Prediction for x: \n" + str(NN.forward(X[-1])))
    # print("Loss: \n" + str(np.mean(np.square(y - NN.forward(X)))))  # mean squared loss
    # print("\n")
    NN.train(X, y)
print("Prediction for x: \n" + str(NN.forward(X[-1])))
print("Loss: \n" + str(np.mean(np.square(y - NN.forward(X)))))
The screen output (again captured under Python 2, so the formatting may differ slightly on Python 3) is as follows:
(array([[ 0.512 ],
[ 0.514 ],
[ 0.509 ],
[ 0.508 ],
[ 0.51 ],
[ 0.5075],
[ 0.513 ],
[ 0.5065],
[ 0.5005],
[ 0.5105],
[ 0.503 ],
[ 0.505 ],
[ 0.51 ],
[ 0.5065],
[ 0.5165],
[ 0.5175],
[ 0.522 ],
[ 0.5145],
[ 0.5135],
[ 0.5225],
[ 0.525 ],
[ 0.525 ],
[ 0.525 ],
[ 0.5245],
[ 0.508 ],
[ 0.524 ]]), (26, 1))
(array([[ 0.5105],
[ 0.5095],
[ 0.5075],
[ 0.51 ],
[ 0.511 ],
[ 0.5025],
[ 0.5045],
[ 0.509 ],
[ 0.5075],
[ 0.505 ],
[ 0.503 ],
[ 0.5065],
[ 0.5075],
[ 0.505 ],
[ 0.503 ],
[ 0.5065],
[ 0.507 ],
[ 0.505 ],
[ 0.4942],
[ 0.507 ],
[ 0.499 ],
[ 0.4933],
[ 0.5095],
[ 0.5165],
[ 0.501 ],
[ 0.5115]]), (26, 1))
Prediction for x:
[ 0.50697534]
Loss:
2.58872126395e-05
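One thing worth noting about the backward pass above is that the weight updates add the full gradient terms with no learning rate. That works here because the dataset is tiny, but it can make training unstable on larger data. Below is a sketch of how a learning rate could be introduced (the lr name and its value are my own additions, not part of the original tutorial; it assumes the Neural_Network class and the X and y arrays from the listing above):
lr = 0.1 # assumed learning rate; smaller values train more slowly but more stably
# inside backward(), the two weight updates would become:
#   self.W1 += lr * X.T.dot(self.z2_delta)
#   self.W2 += lr * self.z2.T.dot(self.o_delta)
# printing the loss every 100 iterations shows whether training is converging:
NN = Neural_Network()
for i in range(1000):
    NN.train(X, y)
    if i % 100 == 0:
        print("Loss at step " + str(i) + ": " + str(np.mean(np.square(y - NN.forward(X)))))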