Wednesday, 18 December 2019

Tensorflow and predicting pulse rate.

The following network was trained with available bank balance as input, and pulse rate obtained using a DFit Fitbit as output.
The bank balance x was normalised by dividing by 1000 so that the values fell between 0 and 1. Similarly, the pulse rate y was scaled down to fall between 0 and 1.
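
For example, this is how the first three balance readings from the listing below can be scaled (a small sketch of my own; the variable names are not part of the original listing):

raw_balances = [80.50, 80.50, 41.03]           # available balance in pounds
x_train = [b / 1000.0 for b in raw_balances]   # gives 0.0805, 0.0805, 0.04103
print(x_train)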

The function y = mx + c was used to measure the gradient of the data; setting y = 0 later gives the x-intercept.

Algorithm ref: TensorFlow For Dummies by Matthew Scarpino, page 109, listing 6.3:

 The code from this program can be saved as filename.py

The network can be run from a terminal by typing:
python3 filename.py



Here is the code:


import tensorflow as tf
# values of x = [0.0805, 0.0805, 0.04103, 0.06484, 0.30885, 0.24347, 0.2113, 0.32899,
# 0.18724, 0.15731, 0.12432, 0.12432, 0.3872, 0.32124, 0.26415, 0.2073, 0.17515, 0.16939]
# values of y = [0.094, 0.076, 0.1, 0.088, 0.083, 0.074, 0.083, 0.087, 0.079,
# 0.081, 0.080, 0.081, 0.072, 0.078, 0.078, 0.077, 0.087, 0.084]

x_train = [0.0805, 0.0805, 0.04103, 0.06484, 0.30885, 0.24347, 0.2113, 0.32899, 0.18724, 0.15731] #0.12432, 0.12432, 0.3872, 0.32124, 0.26415, 0.2073, 0.17515, 0.16939]
y_train = [0.094, 0.076, 0.1, 0.088, 0.083, 0.074, 0.083, 0.087, 0.079, 0.081]
#0.080, 0.081, 0.072, 0.078, 0.078, 0.077, 0.087, 0.084]
m = tf.Variable(0.)
c = tf.Variable(0.)

# placeholders need graph mode: TensorFlow 1.x, or tf.compat.v1.disable_eager_execution() on TensorFlow 2.x
x = tf.compat.v1.placeholder(dtype=tf.float32)
y = tf.compat.v1.placeholder(dtype=tf.float32)

# model: the sigmoid function applied to the line y = m*x + c
model = tf.nn.sigmoid(tf.add(tf.multiply(x, m),c))

# log-loss style cost; note a textbook cross entropy would use tf.math.log(1. - model) in the second term (kept as originally run)
cost = -1. * tf.reduce_sum(tf.convert_to_tensor(y) * tf.math.log(model) + (1. - tf.convert_to_tensor(y)) * (1. - tf.math.log(model)))
#cost = tf.sigmoid(model)

learn_rate = 0.005
num_epochs = 350
#using Gradient Descent with learning rate 0.005
train = tf.compat.v1.train.GradientDescentOptimizer(learn_rate).minimize(cost)

session = tf.compat.v1.Session()
init = tf.compat.v1.global_variables_initializer()
session.run(init)

#training model for 350 iterations
for epoch in range(num_epochs):
    session.run(train, {x:x_train, y:y_train})

#final values of m and c
print('')
print('m =', session.run(m))
print('c =', session.run(c))




Here is the output:


david@debian:~/tensorflow$ python3 balancepulseii.py
2019-12-19 01:30:16.678368: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1497190000 Hz
2019-12-19 01:30:16.678837: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x557cb1618e10 executing computations on platform Host. Devices:
2019-12-19 01:30:16.678887: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2019-12-19 01:30:16.717659: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.

m = -2.3215976
c = -13.5563755
david@debian:~/tensorflow$




For y = m*x + c,
since we now have values for m and c, we can calculate the value of x at which 0 = mx + c, the point at which the neural network starts to learn.
These are the steps:
y = m*x + c
0 = m*x + c
0 = -2.3215976x - 13.5563755
13.5563755 = -2.3215976x
x = 13.5563755 / (-2.3215976)
x = -5.839244277
To get the available balance at which my pulse rate starts to change, I scale up by 1000 to arrive at the figure -£5839.24428.
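
The same arithmetic can be scripted as a quick check (a minimal sketch of my own, using the m and c printed above):

m = -2.3215976
c = -13.5563755

# solve 0 = m*x + c for x, the point at which the network starts to learn
x_intercept = -c / m
print('x intercept:', x_intercept)                 # about -5.839

# undo the normalisation (balances were divided by 1000)
print('balance in pounds:', x_intercept * 1000)    # about -5839.24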

My available balance ranges from 0 to about £400 maximum. I think the minus value for x indicates that my pulse rate is dependent on financial support in the form of financial gifts: Turkish duty free tobacco, catering-sized containers of Nescafe Original, DVDs and, yes, cash from my family.
Enjoy!

Cross-referencing the scaled-up value of -£5839.24428 with the neural network and TensorFlow model described in the post http://pythonprediction.blogspot.com/2019/07/tensorflow-and-sigmoid-function.html,
which gives a value of increasing spending at x = -£5167.8, suggests that comfort spending in an attempt to deal with the fear reactions of schizophrenia and post-traumatic stress disorder falls within the range -£5167.8 to -£5839.24428.

Friday, 9 August 2019

Tensorflow and predicting efficacy of antibiotics.

The following network was trained with micrograms/ml of total antibiotic as input, and % success rate as output.
The antibiotic dose x was normalised by dividing by 1000 so that the values fell between 0 and 1. Similarly, the success rate y was divided by 100.


Figures for total dose and % success rate were obtained from Table 2 on the web page

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5124968/

The link to table 2 is

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5124968/table/t2/?report=objectonly

Table 2 is contained in the section entitled "Genetic algorithm with the deterministic model".
The function y = mx + c was used to measure the gradient of the data; setting y = 0 later gives the x-intercept.

Algorithm ref: TensorFlow For Dummies by Matthew Scarpino, page 109, listing 6.3:

 The code from this program can be saved as filename.py

The network can be run from a terminal by typing:
python3 filename.py



Here is the code:

import tensorflow as tf

# values of x = [0.118, 0.128, 0.122, 0.128, 0.141, 0.132, 0.143, 0.156]
# values of y = [0.912, 0.943, 0.923, 0.932, 0.944, 0.925, 0.940, 0.950]
x_train = [0.118, 0.128, 0.122, 0.128, 0.141, 0.132, 0.143, 0.156]
y_train = [0.912, 0.943, 0.923, 0.932, 0.944, 0.925, 0.940, 0.950]
m = tf.Variable(0.)
c = tf.Variable(0.)

x = tf.compat.v1.placeholder(dtype=tf.float32)
y = tf.compat.v1.placeholder(dtype=tf.float32)

# model: the sigmoid function applied to the line y = m*x + c
model = tf.nn.sigmoid(tf.add(tf.multiply(x, m),c))

# note: a textbook cross entropy would use tf.math.log(1. - model) in the second term (kept as originally run)
cost = -1. * tf.reduce_sum(tf.convert_to_tensor(y) * tf.math.log(model) + (1. - tf.convert_to_tensor(y)) * (1. - tf.math.log(model)))
#cost = tf.sigmoid(model)

learn_rate = 0.005
num_epochs = 350
#using Gradient Descent with learning rate 0.005
train = tf.compat.v1.train.GradientDescentOptimizer(learn_rate).minimize(cost)

session = tf.compat.v1.Session()
init = tf.compat.v1.global_variables_initializer()
session.run(init)

#training model for 350 iterations
for epoch in range(num_epochs):
    session.run(train, {x:x_train, y:y_train})

#final values of m and c
print('')
print('m =', session.run(m))
print('c =', session.run(c))


david@debian:~/dadchophedge$ python3 antibioticii.py
2019-12-19 18:35:55.702573: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1497180000 Hz
2019-12-19 18:35:55.703284: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55d0ed3dccc0 executing computations on platform Host. Devices:
2019-12-19 18:35:55.703357: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2019-12-19 18:35:56.019445: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.

m = 0.31500342
c = 2.3551354
david@debian:~/dadchophedge$



For y = m*x + c,
since we now have values for m and c, we can calculate the value of x at which 0 = mx + c, the point at which the neural network starts to learn.
These are the steps:
y = m*x + c
0 = m*x + c
0 = 0.31500342x + 2.3551354
x = -2.3551354 / 0.31500342
x = -7.476534699








To undo the normalisation applied at the start, we multiply by 1000:
x = -7477 micrograms/ml, or about -7.5 milligrams/ml.


This is the dose at which the gradient of the neural net starts to learn; therefore the therapeutic value of the antibiotic starts at a dose of -7.5 milligrams/ml.
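
As a rough sanity check (a sketch of my own, not part of the original listing), the fitted sigmoid can be evaluated at the normalised doses to see the success rates the model predicts:

import math

m = 0.31500342
c = 2.3551354

def predicted_success(dose_ug_per_ml):
    x = dose_ug_per_ml / 1000.0                 # same normalisation as in training
    y = 1.0 / (1.0 + math.exp(-(m * x + c)))    # sigmoid(m*x + c)
    return y * 100.0                            # back to a percentage

for dose in [118, 132, 156]:
    print(dose, 'micrograms/ml ->', round(predicted_success(dose), 1), '% predicted success')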


Conclusion
--------------------
The therapeutic starting value of the dose is negative which leads me to assume that the therapeutic value depends on the cumulative effect of antibiotics already in the environment.

A second example of predicting a minimal inhibitory concentration of antibiotic (the dose at which it starts to work), this time using cross entropy for the loss of the model, can be seen at

Wednesday, 17 July 2019

Tensorflow and the sigmoid function.

The following network was trained with the available bank balance at the time of going to a cafe as input, and the time spent there on that available balance as output.
The cash balance x was normalised by dividing by 1000 so that the values fell between 0 and 1. Similarly, the time y (minutes) was divided by 100.
The function y = mx + c was used to measure the gradient of the data; setting y = 0 later gives the x-intercept.

Algorithm ref: TensorFlow For Dummies by Matthew Scarpino, page 109, listing 6.3:

 The code from this program can be saved as filename.py

The network can be run from a terminal by typing:
python3 filename.py


Here is the code:


import tensorflow as tf

# values of x = [0.14569, 0.17944, 0.15496, 0.07607, 0.01223, 0.00873, 0.26456, 0.19928, 0.01417, 0.00220, 0.16451, 0.09408, 0.23073, 0.13403, 0.11177, 0.29657, 0.21266, 0.11302, 0.29185, 0.24873, 0.15997, 0.09582, 0.30616, 0.11861, 0.18292, 0.12121, 0.08206]

# values of y = [0.38100, 0.96000, 0.38500, 0.36900, 0.32250, 0.28000, 0.27760, 0.26410, 0.34150, 0.68810, 0.49250, 0.52630, 0.29650, 0.86220, 0.46780, 0.34930, 0.30080, 0.25530, 0.17800, 0.38260, 0.33780, 0.42170, 0.42000, 0.41970, 0.39380, 0.59880, 0.53580]

x_train = [0.14569, 0.17944, 0.15496, 0.07607, 0.01223, 0.00873, 0.26456, 0.19928, 0.01417, 0.00220, 0.16451, 0.09408, 0.23073, 0.13403, 0.11177, 0.29657, 0.21266, 0.11302, 0.29185, 0.24873, 0.15997, 0.09582, 0.30616, 0.11861, 0.18292, 0.12121, 0.08206]

y_train = [0.38100, 0.96000, 0.38500, 0.36900, 0.32250, 0.28000, 0.27760, 0.26410, 0.34150, 0.68810, 0.49250, 0.52630, 0.29650, 0.86220, 0.46780, 0.34930, 0.30080, 0.25530, 0.17800, 0.38260, 0.33780, 0.42170, 0.42000, 0.41970, 0.39380, 0.59880, 0.53580]

m = tf.Variable(0.)
c = tf.Variable(0.)

x = tf.compat.v1.placeholder(dtype=tf.float32)
y = tf.compat.v1.placeholder(dtype=tf.float32)

# model: the sigmoid function applied to the line y = m*x + c
model = tf.nn.sigmoid(tf.add(tf.multiply(x, m),c))

# note: a textbook cross entropy would use tf.math.log(1. - model) in the second term (kept as originally run)
cost = -1. * tf.reduce_sum(tf.convert_to_tensor(y) * tf.math.log(model) + (1. - tf.convert_to_tensor(y)) * (1. - tf.math.log(model)))
#cost = tf.sigmoid(model)

learn_rate = 0.005
num_epochs = 350
#using Gradient Descent with learning rate 0.005
train = tf.compat.v1.train.GradientDescentOptimizer(learn_rate).minimize(cost)

session = tf.compat.v1.Session()
init = tf.compat.v1.global_variables_initializer()
session.run(init)

#training model for 350 iterations
for epoch in range(num_epochs):
    session.run(train, {x:x_train, y:y_train})

#final values of m and c
print('')
print('m =', session.run(m))
print('c =', session.run(c))




Here is the screen output:


david@debian:~/tffin$ python3 sigmoidtensorvii.py
2019-07-17 19:06:31.403105: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1497180000 Hz
2019-07-17 19:06:31.403556: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55e52606f7a0 executing computations on platform Host. Devices:
2019-07-17 19:06:31.403606: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2019-07-17 19:06:31.444097: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.

m = -1.1617903
c = -6.003845
david@debian:~/tffin$




For y = m*x + c,
since we now have values for m and c, we can calculate the value of x at which 0 = mx + c, the point at which the neural network starts to learn.
These are the steps:
y = m*x + c
0 = m*x + c
0 = -1.1618x - 6.0039
6.0039 = -1.1618x
x = 6.0039 / (-1.1618)
x = -5.1678




To undo the normalisation applied at the start, we multiply by 1000:
x = -£5167.8


This is the available balance at which the gradient of the neural net starts to learn; therefore the balance at which I start to spend is -£5167.8.
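
To visualise this (a sketch of my own, assuming matplotlib is installed), the fitted sigmoid can be plotted over a range of normalised balances, with the x-intercept of m*x + c marked:

import numpy as np
import matplotlib.pyplot as plt

m, c = -1.1617903, -6.003845

x = np.linspace(-8, 1, 500)                  # normalised balance (1 unit = £1000)
y = 1 / (1 + np.exp(-(m * x + c)))           # the fitted sigmoid
plt.plot(x, y, label='fitted sigmoid')
plt.axvline(-c / m, color='r', linestyle='--', label='x where m*x + c = 0')
plt.xlabel('normalised balance x')
plt.ylabel('model output')
plt.legend()
plt.show()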


My available balance ranges from 0 to about £400 maximum. I think the minus value for x indicates that my money is consistent with having received financial gifts in the form of Turkish duty free tobacco, catering-sized containers of Nescafe Original, DVDs and, yes, cash from my family.
Enjoy!




Monday, 15 July 2019

Tensorflow and linear regression

The following network was trained with the available bank balance at the time of going to a cafe, and the time spent there on that available balance.
The cash balance x was normalised by dividing by 1000 so that the values fell between 0 and 1. Similarly the time y (minutes) was divided by 100.
The function y = mx + c was used to measure the gradient of the data.
In the example that follows, lm = W*x + b is used, which is the same function with different variable names.

Algorithm ref:   https://medium.com/datadriveninvestor/an-introduction-to-tensorflow-and-implementing-a-simple-linear-regression-model-d900dd2e9963

The code from this program can be saved as filename.py

The network can be run from a terminal by typing:
python3 filename.py




import tensorflow as tf

# values of x = [0.14569, 0.17944, 0.15496, 0.07607, 0.01223, 0.00873, 0.26456, 0.19928, 0.01417, 0.00220, 0.16451, 0.09408, 0.23073, 0.13403, 0.11177, 0.29657, 0.21266, 0.11302, 0.29185, 0.24873, 0.15997, 0.09582, 0.30616, 0.11861, 0.18292, 0.12121, 0.08206]

# values of y = [0.38100, 0.96000, 0.38500, 0.36900, 0.32250, 0.28000, 0.27760, 0.26410, 0.34150, 0.68810, 0.49250, 0.52630, 0.29650, 0.86220, 0.46780, 0.34930, 0.30080, 0.25530, 0.17800, 0.38260, 0.33780, 0.42170, 0.42000, 0.41970, 0.39380, 0.59880, 0.53580]

x_train = [0.14569, 0.17944, 0.15496, 0.07607, 0.01223, 0.00873, 0.26456, 0.19928, 0.01417, 0.00220, 0.16451, 0.09408, 0.23073, 0.13403, 0.11177, 0.29657, 0.21266, 0.11302, 0.29185, 0.24873, 0.15997, 0.09582, 0.30616, 0.11861, 0.18292, 0.12121, 0.08206]

y_train = [0.38100, 0.96000, 0.38500, 0.36900, 0.32250, 0.28000, 0.27760, 0.26410, 0.34150, 0.68810, 0.49250, 0.52630, 0.29650, 0.86220, 0.46780, 0.34930, 0.30080, 0.25530, 0.17800, 0.38260, 0.33780, 0.42170, 0.42000, 0.41970, 0.39380, 0.59880, 0.53580]

#defining the weight and bias
W = tf.Variable([-.5], dtype=tf.float32)
b = tf.Variable([.5], dtype=tf.float32)

x = tf.compat.v1.placeholder(dtype=tf.float32)
y = tf.compat.v1.placeholder(dtype=tf.float32)

# using linear function y = Wx + b
lm = W*x + b

#calculating squared error
loss = tf.reduce_sum(tf.square(lm - y))

#using Gradient Descent with learning rate 0.01
optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.01)

#minimizing loss
train = optimizer.minimize(loss)

session = tf.compat.v1.Session()
init = tf.compat.v1.global_variables_initializer()
session.run(init)

#training model for 1000 iterations
for i in range(1000):
    session.run(train, {x:x_train, y:y_train})

#final values of W and b
print('')
print('W =', session.run(W))
print('b =', session.run(b))
#print(session.run([W,b]))
#output of the model
#print(session.run(lm,{x:[5,6,7,8]}))




The screen output is as follows


david@debian:~/tensorflow$ python3 y=mx+ctensoriii.py
2019-07-15 17:32:59.107041: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1497170000 Hz
2019-07-15 17:32:59.107545: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55b018c09fe0 executing computations on platform Host. Devices:
2019-07-15 17:32:59.107592: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2019-07-15 17:32:59.147715: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.

W = [-0.4330191]
b = [0.4907168]
david@debian:~/tensorflow$



For y = W*x + b,
since we now have values for W and b, we can calculate (predict) y, the time, for any value of x.
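
For example (a small sketch of my own, using the W and b printed above; the balance of £232.23 is just an illustrative input):

W = -0.4330191
b = 0.4907168

balance = 232.23                 # pounds
x = balance / 1000.0             # normalise as in training
y = W * x + b                    # the linear model
print('predicted time:', y * 100, 'minutes')   # roughly 39 minutes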

Sunday, 30 June 2019

Predicting time with a neural network

The following network was trained with the available bank balance at the time of going to a cafe, and the time spent there on that available balance.
The balance was normalised by dividing by 1000 so that the values fell between 0 and 1. Similarly the time (minutes) was divided by 100.

The code from this program can be saved as filename.py

The network can be run from a terminal by typing:
python filename.py

Here is the code:

from numpy import exp, array, random, dot
import numpy as np


class NeuralNetwork():
    def __init__(self):
        # Seed the random number generator, so it generates the same numbers
        # every time the program runs.
        np.random.seed(1)

        # We model a single neuron, with 1 input connection and 1 output connection.
        # We assign a random weight to a 1 x 1 matrix, with a value in the range -1 to 0.
        self.synaptic_weights = np.random.random((1, 1)) - 1

    # The Sigmoid function, which describes an S shaped curve.
    # We pass the weighted sum of the inputs through this function to
    # normalise them between 0 and 1.
    def sigmoid(self, x):
        return 1 / (1 + exp(-x))

    # The derivative of the Sigmoid function.
    # This is the gradient of the Sigmoid curve.
    # It indicates how confident we are about the existing weight.
    def sigmoid_derivative(self, x):
        return x * (1 - x)

    # We train the neural network through a process of trial and error.
    # Adjusting the synaptic weights each time.
    def train(self, training_set_inputs, training_set_outputs, number_of_training_iterations):
        for iteration in xrange(number_of_training_iterations):
            # Pass the training set through our neural network (a single neuron).
            output = self.think(training_set_inputs)

            # Calculate the error (The difference between the desired output
            # and the predicted output).
            error = training_set_outputs - output

            # Multiply the error by the input and again by the gradient of the Sigmoid curve.
            # This means less confident weights are adjusted more.
            # This means inputs, which are zero, do not cause changes to the weights.
            adjustment = np.dot(training_set_inputs.T, error * self.sigmoid_derivative(output))

            # Adjust the weights.
            self.synaptic_weights += adjustment

    # The neural network thinks.
    def think(self, inputs):
        inputs = inputs.astype(float)       
    # Pass inputs through our neural network (our single neuron).
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output

#        return self.__sigmoid(np.dot(training_set_inputs, self.synaptic_weights))


if __name__ == "__main__":

    #Initialise a single neuron neural network.
    neural_network = NeuralNetwork()

training_set_inputs = np.array([[0.14569],
[0.17944],
[0.15496],
[0.07607],
[0.01223],
[0.00873],
[0.26456],
[0.19928],
[0.01417],
[0.00220],
[0.16451],
[0.09408],
[0.23073],
[0.13403],
[0.11177],
[0.29657],
[0.21266],
[0.11302],
[0.29185],
[0.24873],
[0.15997],
[0.09582],
[0.30616],
[0.11861],
[0.18292],
[0.12121],
[0.08206 ]])

#training_set_inputs = np.array(training_set_inputs, ndmin = 2).T

print(training_set_inputs, training_set_inputs.shape)

#training_set_outputs = np.array([[0.51050, 0.50950, 0.50750, 0.510, 0.5110, 0.50250, 0.50450, 0.5090, 0.50750, 0.5050, 0.503, 0.5065, 0.50750, 0.505, 0.503, 0.50650, 0.507, 0.5050, 0.49420, 0.5070, 0.499, 0.49330, 0.5095, 0.51650, 0.501, 0.51150]]).T

training_set_outputs = np.array([[0.381, 0.96, 0.385, 0.369, 0.3225, 0.28, 0.2776, 0.2641, 0.3415, 0.6881, 0.4925, 0.5263, 0.2965, 0.8622, 0.4678, 0.3493, 0.3008, 0.2553, 0.178, 0.3826, 0.3378, 0.4217, 0.42, 0.4197, 0.3938, 0.5988, 0.5358]]).T

#training_set_outputs = np.array(training_set_outputs, ndmin = 1).T

print(training_set_outputs, training_set_outputs.shape)


print "Random starting synaptic weights: "
print neural_network.synaptic_weights


    # Train the neural network using a training set.
    # Do it 10,000 times and make small adjustments each time.
neural_network.train(training_set_inputs, training_set_outputs, 10000)

print "New synaptic weights after training: "
print neural_network.synaptic_weights
user_input_one = str(input("User Input: "))
    # Test the neural network with a new situation.
print("Considering new situation of available balance [input value] -> ?: ", user_input_one)
print(neural_network.think(np.array([user_input_one])))# enter some input value inside brackets



The screen output is as follows:

david@debian:~/neuralnetworks/Scikitnumpy$ python hobtimepredict.py
(array([[ 0.14569],
       [ 0.17944],
       [ 0.15496],
       [ 0.07607],
       [ 0.01223],
       [ 0.00873],
       [ 0.26456],
       [ 0.19928],
       [ 0.01417],
       [ 0.0022 ],
       [ 0.16451],
       [ 0.09408],
       [ 0.23073],
       [ 0.13403],
       [ 0.11177],
       [ 0.29657],
       [ 0.21266],
       [ 0.11302],
       [ 0.29185],
       [ 0.24873],
       [ 0.15997],
       [ 0.09582],
       [ 0.30616],
       [ 0.11861],
       [ 0.18292],
       [ 0.12121],
       [ 0.08206]]), (27, 1))
(array([[ 0.381 ],
       [ 0.96  ],
       [ 0.385 ],
       [ 0.369 ],
       [ 0.3225],
       [ 0.28  ],
       [ 0.2776],
       [ 0.2641],
       [ 0.3415],
       [ 0.6881],
       [ 0.4925],
       [ 0.5263],
       [ 0.2965],
       [ 0.8622],
       [ 0.4678],
       [ 0.3493],
       [ 0.3008],
       [ 0.2553],
       [ 0.178 ],
       [ 0.3826],
       [ 0.3378],
       [ 0.4217],
       [ 0.42  ],
       [ 0.4197],
       [ 0.3938],
       [ 0.5988],
       [ 0.5358]]), (27, 1))
Random starting synaptic weights:
[[-0.582978]]
New synaptic weights after training:
[[-1.93043177]]
User Input: 0.23223
('Considering new situation of available balance [input value] -> ?: ', '0.23223')
[ 0.38976404]
david@debian:~/neuralnetworks/Scikitnumpy$


The new case of available balance to input is first divided by 1000, so £232.23 becomes 0.23223 to input.
The resulting output time is multiplied by 100 to give 38.976404 minutes.
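
As a quick check (a sketch of my own, using the trained synaptic weight printed above), the same prediction can be reproduced by hand:

import numpy as np

w = -1.93043177                       # trained weight from the run above
balance = 232.23                      # pounds
x = balance / 1000.0                  # normalise as in training

output = 1 / (1 + np.exp(-(x * w)))   # the single neuron: sigmoid(x * w), no bias term
print('predicted time:', output * 100, 'minutes')   # roughly 38.98 minutes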

Enjoy!!

Wednesday, 26 June 2019

Neural network to work out cashflow and timeflow

The following neural network in Python was trained on the available cashflow in my bank account prior to the time of purchase of a coffee and snack at a set location. The time spent at this activity at the cafe was measured using a stopwatch. To normalise the data used in training, the available balances were divided by 1000 and the minutes were divided by 100, so all the values fell between 0 and 1. After running the network, the predicted output for cashflow is therefore multiplied by 1000 and the minutes by 100. Predicted x for this example is 0.369 * 1000 = £369 predicted available cashflow on approach to the cafe, and predicted y is 0.42 * 100 = 42 minutes predicted time spent there.

After saving the code to a file filename.py it can be run from a terminal by typing python filename.py   

Here is the code:


import numpy as np
from numpy import  array
from sklearn import svm

clf = svm.SVC(gamma=0.001, C=100)

y = np.array([0.381, 0.96, 0.385, 0.369, 0.3225, 0.28, 0.2776, 0.2641, 0.3415,
0.6881, 0.4925, 0.5263, 0.2965, 0.8622, 0.4678, 0.3493, 0.3008, 0.2553, 0.178,
0.3826, 0.3378, 0.4217, 0.42, 0.4197, 0.3938, 0.5988, 0.5358]).T

y = np.array(y, ndmin = 1).T
print(y, y.shape)

x = np.array([0.14569, 0.17944, 0.15496, 0.07607, 0.01223, 0.00873, 0.26456, 0.19928, 0.01417, 0.00220, 0.16451, 0.09408, 0.23073, 0.13403, 0.11177, 0.29657, 0.21266,
0.11302, 0.29185, 0.24873, 0.15997, 0.09582, 0.30616, 0.11861, 0.18292, 0.12121, 0.08206 ]).T

x = np.array(x, ndmin = 2).T
print(x, x.shape)  

xtrain, ytrain =  x[:-1], y[:-1]  

clf.fit(xtrain, ytrain) #train the data

# note: newer scikit-learn versions expect a 2-D array here, e.g. clf.predict(x[-1].reshape(1, -1))
print('Prediction for x:', clf.predict(x[-1])) #predict the data
print('Prediction for y:', clf.predict(y[-1])) #predict the data



Here is the screen / terminal output:

david@debian:~/Timecfnn$ python cashflowtime+time.py
(array([ 0.381 ,  0.96  ,  0.385 ,  0.369 ,  0.3225,  0.28  ,  0.2776,
        0.2641,  0.3415,  0.6881,  0.4925,  0.5263,  0.2965,  0.8622,
        0.4678,  0.3493,  0.3008,  0.2553,  0.178 ,  0.3826,  0.3378,
        0.4217,  0.42  ,  0.4197,  0.3938,  0.5988,  0.5358]), (27,))
(array([[ 0.14569],
       [ 0.17944],
       [ 0.15496],
       [ 0.07607],
       [ 0.01223],
       [ 0.00873],
       [ 0.26456],
       [ 0.19928],
       [ 0.01417],
       [ 0.0022 ],
       [ 0.16451],
       [ 0.09408],
       [ 0.23073],
       [ 0.13403],
       [ 0.11177],
       [ 0.29657],
       [ 0.21266],
       [ 0.11302],
       [ 0.29185],
       [ 0.24873],
       [ 0.15997],
       [ 0.09582],
       [ 0.30616],
       [ 0.11861],
       [ 0.18292],
       [ 0.12121],
       [ 0.08206]]), (27, 1))
('Prediction for x:', array([ 0.369]))
('Prediction for y:', array([ 0.42]))
david@debian:~/Timecfnn$


Acknowledgements: I  would like to thank Jack's Snax of Barum Arcade in Barnstaple for the endless supply of coffee, and houmous and haloumi wraps with lettuce and tomato and pleasant conversation.

Picture: me outside Jack's Snax


Tuesday, 12 February 2019

Sigmoid activation function

In general, a sigmoid function is real-valued and differentiable, having a first derivative that is everywhere non-negative or everywhere non-positive, and exactly one inflection point.

Sigmoid functions are often used in artificial neural networks to introduce nonlinearity in the model.

A neural network element computes a linear combination of its input signals, and applies a sigmoid function to the result. One reason for its popularity in neural networks is that the sigmoid function satisfies a simple relationship between itself and its derivative, so the derivative is computationally cheap to evaluate.

Derivatives of the sigmoid function are usually employed in learning algorithms.

REF: https://excel.ucf.edu/classes/2007/Spring/appsII/Chapter1.pdf
The sigmoid function produces results similar to a step function in that the output is between 0 and 1. The curve crosses 0.5 at z = 0, so we can set up rules for the activation function, such as: if the sigmoid neuron's output is larger than or equal to 0.5, output 1; if it is smaller than 0.5, output 0.

The sigmoid function does not have a jump or kink in its curve. It is smooth, with a very simple derivative, σ(z) * (1 - σ(z)), and it is differentiable everywhere on the curve.

REF: https://towardsdatascience.com/multi-layer-neural-networks-with-sigmoid-function-deep-learning-for-rookies-2-bf464f09eb7f
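
As an illustration of the two points above (a small sketch of my own, not from the referenced articles), here are the 0.5 threshold rule and the derivative in Python:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_derivative(z):
    s = sigmoid(z)
    return s * (1 - s)               # the simple form sigma(z) * (1 - sigma(z))

def threshold(z):
    # output 1 if the sigmoid output is at least 0.5 (i.e. z >= 0), else 0
    return 1 if sigmoid(z) >= 0.5 else 0

for z in [-2.0, 0.0, 2.0]:
    print(z, sigmoid(z), sigmoid_derivative(z), threshold(z))
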
Many natural processes, such as those of complex system learning curves, exhibit a progression from small beginnings that accelerates and approaches a climax over time. When a specific mathematical model is lacking, a sigmoid function is often used.

REF: https://en.wikipedia.org/wiki/Sigmoid_function
The following is Python code to implement a sigmoid activation function

import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    return 1 / (1 + np.e**(-x))

x = np.linspace(-10, 10, 1000)

y = sigmoid(x)

plt.xlabel('$x$', fontsize=22); plt.ylabel('$y  = 1 / (1 + np.e**(-x))$', fontsize=22)
plt.plot(x, y,label = 'Sigmoid curve')
plt.legend(loc='upper left')
plt.plot(0.0, sigmoid(0), 'r.')
plt.yticks([0, 0.25, 0.5, 0.75, 1])
plt.show()  



REF: https://datascience.stackexchange.com/questions/30676/role-derivative-of-sigmoid-function-in-neural-networks

Sunday, 3 February 2019

Python prediction two

This is my second attempt at implementing a neural net in Python to predict a closing price of Post Office shares after training the net on opening and closing prices.
The algorithm came from
https://houseofbots.com/news-detail/4242-1-learn-how-to-build-a-simple-neural-network-in-9-lines-of-python-code
and
https://www.kdnuggets.com/2018/10/simple-neural-network-python.html

Here is my code:

from numpy import exp, array, random, dot
import numpy as np


class NeuralNetwork():
    def __init__(self):
        # Seed the random number generator, so it generates the same numbers
        # every time the program runs.
        np.random.seed(1)

        # We model a single neuron, with 1 input connection and 1 output connection.
        # We assign random weights to a 1 x 1 matrix, with values in the range -1 to 1
        # and mean 0.
        self.synaptic_weights = 2 * np.random.random((1, 1)) - 1

    # The Sigmoid function, which describes an S shaped curve.
    # We pass the weighted sum of the inputs through this function to
    # normalise them between 0 and 1.
    def sigmoid(self, x):
        return 1 / (1 + exp(-x))

    # The derivative of the Sigmoid function.
    # This is the gradient of the Sigmoid curve.
    # It indicates how confident we are about the existing weight.
    def sigmoid_derivative(self, x):
        return x * (1 - x)

    # We train the neural network through a process of trial and error.
    # Adjusting the synaptic weights each time.
    def train(self, training_set_inputs, training_set_outputs, number_of_training_iterations):
        for iteration in xrange(number_of_training_iterations):
            # Pass the training set through our neural network (a single neuron).
            output = self.think(training_set_inputs)

            # Calculate the error (The difference between the desired output
            # and the predicted output).
            error = training_set_outputs - output

            # Multiply the error by the input and again by the gradient of the Sigmoid curve.
            # This means less confident weights are adjusted more.
            # This means inputs, which are zero, do not cause changes to the weights.
            adjustment = np.dot(training_set_inputs.T, error * self.sigmoid_derivative(output))

            # Adjust the weights.
            self.synaptic_weights += adjustment

    # The neural network thinks.
    def think(self, inputs):
        inputs = inputs.astype(float)       
    # Pass inputs through our neural network (our single neuron).
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output

#        return self.__sigmoid(np.dot(training_set_inputs, self.synaptic_weights))


if __name__ == "__main__":

    #Initialise a single neuron neural network.
    neural_network = NeuralNetwork()

training_set_inputs = np.array([[0.512],
                                [0.514],
                                [0.509],
                                [0.508],
                                [0.510],
                                [0.50750],
                                [0.513],
                                [0.50650],
                                [0.50050],
                                [0.51050],
                                [0.503],
                                [0.505],
                                [0.510],
                                [0.5065],
                                [0.5165], 
                                [0.5175], 
                                [0.522],
                                [0.5145], 
                                [0.5135],
                                [0.5225], 
                                [0.525],
                                [0.525], 
                                [0.525],
                                [0.5245], 
                                [0.508],
                                [0.524]])


print(training_set_inputs, training_set_inputs.shape)

training_set_outputs = np.array([[0.51050, 0.50950, 0.50750, 0.510, 0.5110, 0.50250, 0.50450, 0.5090, 0.50750, 0.5050, 0.503, 0.5065, 0.50750, 0.505, 0.503, 0.50650, 0.507, 0.5050, 0.49420, 0.5070, 0.499, 0.49330, 0.5095, 0.51650, 0.501, 0.51150]]).T

print(training_set_outputs, training_set_outputs.shape)


print "Random starting synaptic weights: "
print neural_network.synaptic_weights


    # Train the neural network using a training set.
    # Do it 10,000 times and make small adjustments each time.
neural_network.train(training_set_inputs, training_set_outputs, 10000)

print "New synaptic weights after training: "
print neural_network.synaptic_weights
user_input_one = str(input("User Input One: "))
    # Test the neural network with a new situation.
print("Considering new situation [some input value] -> ?: ", user_input_one)
print(neural_network.think(np.array([user_input_one])))# enter some input value inside brackets


The neural network can be run from a terminal by the command
python myneuralnetcode.py

The screen output after running the neural network is
 (array([[ 0.512 ],
       [ 0.514 ],
       [ 0.509 ],
       [ 0.508 ],
       [ 0.51  ],
       [ 0.5075],
       [ 0.513 ],
       [ 0.5065],
       [ 0.5005],
       [ 0.5105],
       [ 0.503 ],
       [ 0.505 ],
       [ 0.51  ],
       [ 0.5065],
       [ 0.5165],
       [ 0.5175],
       [ 0.522 ],
       [ 0.5145],
       [ 0.5135],
       [ 0.5225],
       [ 0.525 ],
       [ 0.525 ],
       [ 0.525 ],
       [ 0.5245],
       [ 0.508 ],
       [ 0.524 ]]), (26, 1))
(array([[ 0.5105],
       [ 0.5095],
       [ 0.5075],
       [ 0.51  ],
       [ 0.511 ],
       [ 0.5025],
       [ 0.5045],
       [ 0.509 ],
       [ 0.5075],
       [ 0.505 ],
       [ 0.503 ],
       [ 0.5065],
       [ 0.5075],
       [ 0.505 ],
       [ 0.503 ],
       [ 0.5065],
       [ 0.507 ],
       [ 0.505 ],
       [ 0.4942],
       [ 0.507 ],
       [ 0.499 ],
       [ 0.4933],
       [ 0.5095],
       [ 0.5165],
       [ 0.501 ],
       [ 0.5115]]), (26, 1))
Random starting synaptic weights:
[[-0.16595599]]
New synaptic weights after training:
[[ 0.04566562]]
User Input One: 0.508
('Considering new situation [some input value] -> ?: ', '0.508')
[ 0.50579927]
david@debian:~/neuralnetworks/Scikitnumpy$


Monday, 7 January 2019

A single layer perceptron with bias

A neural network will calculate lines through the origin (0, 0). To calculate other lines, i.e. lines that do not pass through the origin, a bias is needed.
From the following link, I have put a neural net together:
https://www.python-course.eu/neural_networks.php
Here a network classifies two clusters in a 2-dimensional space. The network will find a line that separates the two classes. This line is called a decision boundary.
It can be run by saving the code to myfile.py and typing python myfile.py from a terminal in the same directory.
Here is the code:

from matplotlib import pyplot as plt
import numpy as np
from collections import Counter
class Perceptron:
   
    def __init__(self, input_length, weights=None):
        if weights is None:
            self.weights = np.random.random((input_length)) * 2 - 1
        else:
            self.weights = weights
        self.learning_rate = 0.1
       
    @staticmethod
    def unit_step_function(x):
        if x < 0:
            return 0
        return 1
       
    def __call__(self, in_data):
        weighted_input = self.weights * in_data
        weighted_sum = weighted_input.sum()
        return Perceptron.unit_step_function(weighted_sum)
   
    def adjust(self,
               target_result,
               calculated_result,
               in_data):
        error = target_result - calculated_result
        for i in range(len(in_data)):
            correction = error * in_data[i] *self.learning_rate
            self.weights[i] += correction
    
def above_line(point, line_func):
    x, y = point
    if y > line_func(x):
        return 1
    else:
        return 0
 
points = np.random.randint(1, 100, (100, 2))

class1 = [(3, 4), (4.2, 5.3), (4, 3), (6, 5), (4, 6), (3.7, 5.8),
          (3.2, 4.6), (5.2, 5.9), (5, 4), (7, 4), (3, 7), (4.3, 4.3) ]
class2 = [(-3, -4), (-2, -3.5), (-1, -6), (-3, -4.3), (-4, -5.6),
          (-3.2, -4.8), (-2.3, -4.3), (-2.7, -2.6), (-1.5, -3.6),
          (-3.6, -5.6), (-4.5, -4.6), (-3.7, -5.8) ]
X, Y = zip(*class1)
plt.scatter(X, Y, c="r")
X, Y = zip(*class2)
plt.scatter(X, Y, c="b")
plt.show()
from itertools import chain
p = Perceptron(2)
def lin1(x):
    return  x + 4
for point in class1:
    p.adjust(1,
             p(point),
             point)
for point in class2:
    p.adjust(0,
             p(point),
             point)
   
evaluation = Counter()
for point in chain(class1, class2):
    if p(point) == 1:
        evaluation["correct"] += 1
    else:
        evaluation["wrong"] += 1
       
testpoints = [(3.9, 6.9), (-2.9, -5.9)]
for point in testpoints:
    print(p(point))
       
print(evaluation.most_common())

from matplotlib import pyplot as plt
X, Y = zip(*class1)
plt.scatter(X, Y, c="r")
X, Y = zip(*class2)
plt.scatter(X, Y, c="b")
x = np.arange(-7, 10)
y = 5*x + 10
# with no bias term the decision boundary passes through the origin: y = -(w0/w1) * x
m = -p.weights[0] / p.weights[1]
plt.plot(x, m*x)
plt.show()
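
As noted at the top of this post, the perceptron above has no bias term, so its decision boundary always passes through the origin. A minimal sketch of one common way to add a bias (my own illustration, not part of the python-course.eu example) is to append a constant input of 1 to every point, so that the extra weight acts as the bias:

import numpy as np

class PerceptronWithBias:
    def __init__(self, input_length):
        # one extra weight for the constant bias input
        self.weights = np.random.random(input_length + 1) * 2 - 1
        self.learning_rate = 0.1

    def __call__(self, in_data):
        in_data = np.append(in_data, 1.0)      # augment the point with the bias input
        return 1 if np.dot(self.weights, in_data) >= 0 else 0

    def adjust(self, target, in_data):
        error = target - self(in_data)
        self.weights += self.learning_rate * error * np.append(in_data, 1.0)

p = PerceptronWithBias(2)
class1 = [(3, 4), (4.2, 5.3), (4, 3)]
class2 = [(-3, -4), (-2, -3.5), (-1, -6)]
for _ in range(10):
    for point in class1:
        p.adjust(1, np.array(point))
    for point in class2:
        p.adjust(0, np.array(point))

# decision boundary: w0*x + w1*y + w2 = 0, i.e. y = -(w0*x + w2) / w1
print(p.weights)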