In general, a sigmoid function is a bounded, real-valued, differentiable function whose first derivative is non-negative (or non-positive) everywhere and bell shaped, so the function is monotonic and has exactly one inflection point.
Sigmoid functions are often used in artificial neural networks to introduce nonlinearity in the model.
A neural network element computes a linear combination of its input signals and applies a sigmoid function to the result. One reason for the sigmoid's popularity in neural networks is that its derivative can be written in terms of the function itself, σ'(z) = σ(z) * (1 - σ(z)), which makes it cheap to compute.
Derivatives of the sigmoid function are therefore widely employed in learning algorithms such as backpropagation.
REF: https://excel.ucf.edu/classes/2007/Spring/appsII/Chapter1.pdf
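As a small illustration of this property, the following sketch (assuming the SymPy library is available; it is not used elsewhere in this post) checks symbolically that the derivative of σ(z) = 1 / (1 + e^(-z)) equals σ(z) * (1 - σ(z)):
import sympy as sp

z = sp.symbols('z')
sigma = 1 / (1 + sp.exp(-z))

# Compare the derivative with sigma * (1 - sigma); the difference simplifies to 0.
difference = sp.simplify(sp.diff(sigma, z) - sigma * (1 - sigma))
print(difference)  # prints 0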
The sigmoid function produces results similar to a step function in that the output lies between 0 and 1. The curve crosses 0.5 at z = 0, so we can set up rules for the activation function, such as: if the sigmoid neuron's output is greater than or equal to 0.5, it outputs 1; if the output is smaller than 0.5, it outputs 0.
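For example, a minimal sketch of such a thresholding rule (the function names and the 0.5 cut-off are illustrative choices, not part of any particular library) could look like this:
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def binary_activation(z, threshold=0.5):
    # Output 1 when the sigmoid of z reaches the threshold, otherwise 0.
    return 1 if sigmoid(z) >= threshold else 0

print(binary_activation(-2.0))  # 0, since sigmoid(-2.0) is roughly 0.12
print(binary_activation(0.0))   # 1, since sigmoid(0) is exactly 0.5
print(binary_activation(3.0))   # 1, since sigmoid(3.0) is roughly 0.95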
Unlike the step function, the sigmoid has no jump or kink in its curve. It is smooth, differentiable everywhere, and has the very simple derivative σ(z) * (1 - σ(z)).
REF: https://towardsdatascience.com/multi-layer-neural-networks-with-sigmoid-function-deep-learning-for-rookies-2-bf464f09eb7f
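The smoothness and the derivative formula can also be checked numerically: the sketch below (plain NumPy, illustrative only) compares σ(z) * (1 - σ(z)) with a central finite-difference approximation of the slope.
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = np.linspace(-5, 5, 11)
h = 1e-6
analytic = sigmoid(z) * (1 - sigmoid(z))               # derivative from the identity
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)  # central finite difference
print(np.max(np.abs(analytic - numeric)))              # close to zero (around 1e-10 or smaller)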
Many natural processes, such as the learning curves of complex systems, exhibit a progression from small beginnings that accelerates and approaches a climax over time. When a specific mathematical model is lacking, a sigmoid function is often used to describe them.
REF: https://en.wikipedia.org/wiki/Sigmoid_function
The following is Python code to implement and plot a sigmoid activation function:
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.linspace(-10, 10, 1000)
y = sigmoid(x)
plt.xlabel('$x$', fontsize=22)
plt.ylabel('$y = 1 / (1 + e^{-x})$', fontsize=22)
plt.plot(x, y, label='Sigmoid curve')
plt.legend(loc='upper left')
plt.plot(0.0, sigmoid(0), 'r.')  # mark the point (0, 0.5)
plt.yticks([0, 0.25, 0.5, 0.75, 1])
plt.show()
REF: https://datascience.stackexchange.com/questions/30676/role-derivative-of-sigmoid-function-in-neural-networks
Sunday, 3 February 2019
Python prediction two
This is my second attempt at implementing a neural net in Python to
predict a closing price of Post Office shares after training the net on
opening and closing prices.
The algorithm came from
https://houseofbots.com/news-detail/4242-1-learn-how-to-build-a-simple-neural-network-in-9-lines-of-python-code
and
https://www.kdnuggets.com/2018/10/simple-neural-network-python.html
Here is my code:
import numpy as np

class NeuralNetwork():
    def __init__(self):
        # Seed the random number generator, so it generates the same numbers
        # every time the program runs.
        np.random.seed(1)
        # We model a single neuron, with 1 input connection and 1 output connection.
        # We assign a random weight to a 1 x 1 matrix, with values in the range -1 to 1
        # and mean 0.
        self.synaptic_weights = 2 * np.random.random((1, 1)) - 1

    # The Sigmoid function, which describes an S shaped curve.
    # We pass the weighted sum of the inputs through this function to
    # normalise them between 0 and 1.
    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    # The derivative of the Sigmoid function, written in terms of the sigmoid
    # output x. This is the gradient of the Sigmoid curve.
    # It indicates how confident we are about the existing weight.
    def sigmoid_derivative(self, x):
        return x * (1 - x)

    # We train the neural network through a process of trial and error,
    # adjusting the synaptic weights each time.
    def train(self, training_set_inputs, training_set_outputs, number_of_training_iterations):
        for iteration in xrange(number_of_training_iterations):
            # Pass the training set through our neural network (a single neuron).
            output = self.think(training_set_inputs)
            # Calculate the error (the difference between the desired output
            # and the predicted output).
            error = training_set_outputs - output
            # Multiply the error by the input and again by the gradient of the Sigmoid curve.
            # This means less confident weights are adjusted more.
            # Inputs which are zero do not cause changes to the weights.
            adjustment = np.dot(training_set_inputs.T, error * self.sigmoid_derivative(output))
            # Adjust the weights.
            self.synaptic_weights += adjustment

    # The neural network thinks.
    def think(self, inputs):
        inputs = inputs.astype(float)
        # Pass inputs through our neural network (our single neuron).
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output

if __name__ == "__main__":
    # Initialise a single neuron neural network.
    neural_network = NeuralNetwork()
    training_set_inputs = np.array([[0.512],
                                    [0.514],
                                    [0.509],
                                    [0.508],
                                    [0.510],
                                    [0.50750],
                                    [0.513],
                                    [0.50650],
                                    [0.50050],
                                    [0.51050],
                                    [0.503],
                                    [0.505],
                                    [0.510],
                                    [0.5065],
                                    [0.5165],
                                    [0.5175],
                                    [0.522],
                                    [0.5145],
                                    [0.5135],
                                    [0.5225],
                                    [0.525],
                                    [0.525],
                                    [0.525],
                                    [0.5245],
                                    [0.508],
                                    [0.524]])
    print(training_set_inputs, training_set_inputs.shape)
    training_set_outputs = np.array([[0.51050, 0.50950, 0.50750, 0.510, 0.5110, 0.50250, 0.50450, 0.5090, 0.50750, 0.5050, 0.503, 0.5065, 0.50750, 0.505, 0.503, 0.50650, 0.507, 0.5050, 0.49420, 0.5070, 0.499, 0.49330, 0.5095, 0.51650, 0.501, 0.51150]]).T
    print(training_set_outputs, training_set_outputs.shape)
    print "Random starting synaptic weights: "
    print neural_network.synaptic_weights
    # Train the neural network using a training set.
    # Do it 10,000 times and make small adjustments each time.
    neural_network.train(training_set_inputs, training_set_outputs, 10000)
    print "New synaptic weights after training: "
    print neural_network.synaptic_weights
    user_input_one = str(input("User Input One: "))
    # Test the neural network with a new situation.
    print("Considering new situation [some input value] -> ?: ", user_input_one)
    print(neural_network.think(np.array([user_input_one])))  # enter some input value inside brackets
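To see what the adjustment line in train() is doing, here is a hand-worked sketch of a single update step for one training example (the numbers are hypothetical, chosen only to match the scale of the training data above):
import numpy as np

x = np.array([[0.512]])   # one input (e.g. an opening price)
y = np.array([[0.5105]])  # desired output (e.g. the closing price)
w = np.array([[-0.166]])  # current synaptic weight (hypothetical value)

output = 1 / (1 + np.exp(-np.dot(x, w)))    # forward pass through the sigmoid
error = y - output                          # desired output minus predicted output
gradient = output * (1 - output)            # sigmoid derivative at the output
adjustment = np.dot(x.T, error * gradient)  # scale by the input, as in train()
w_new = w + adjustment
print(w_new)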
The neural network can be run from a terminal (under Python 2, since the code uses xrange and print statements) with the command
python myneuralnetcode.py
The screen output after running the neural network is
(array([[ 0.512 ],
[ 0.514 ],
[ 0.509 ],
[ 0.508 ],
[ 0.51 ],
[ 0.5075],
[ 0.513 ],
[ 0.5065],
[ 0.5005],
[ 0.5105],
[ 0.503 ],
[ 0.505 ],
[ 0.51 ],
[ 0.5065],
[ 0.5165],
[ 0.5175],
[ 0.522 ],
[ 0.5145],
[ 0.5135],
[ 0.5225],
[ 0.525 ],
[ 0.525 ],
[ 0.525 ],
[ 0.5245],
[ 0.508 ],
[ 0.524 ]]), (26, 1))
(array([[ 0.5105],
[ 0.5095],
[ 0.5075],
[ 0.51 ],
[ 0.511 ],
[ 0.5025],
[ 0.5045],
[ 0.509 ],
[ 0.5075],
[ 0.505 ],
[ 0.503 ],
[ 0.5065],
[ 0.5075],
[ 0.505 ],
[ 0.503 ],
[ 0.5065],
[ 0.507 ],
[ 0.505 ],
[ 0.4942],
[ 0.507 ],
[ 0.499 ],
[ 0.4933],
[ 0.5095],
[ 0.5165],
[ 0.501 ],
[ 0.5115]]), (26, 1))
Random starting synaptic weights:
[[-0.16595599]]
New synaptic weights after training:
[[ 0.04566562]]
User Input One: 0.508
('Considering new situation [some input value] -> ?: ', '0.508')
[ 0.50579927]
david@debian:~/neuralnetworks/Scikitnumpy$