Saturday, 23 January 2021

Tensorflow and predicting votes in Northern Ireland

Using the data from

SF Westminster election results

a neural network was trained with the number of candidates in an election as input, and the number of votes as output. Here is the Python code:


import tensorflow as tf



x_train = [0.102, 0.005, 0.002, 0.012, 0.012, 0.014, 0.014, 0.014, 0.017, 0.018, 0.018, 0.017, 0.018, 0.018, 0.015]
y_train = [0.417211, 0.34181, 0.23362, 0.15231, 0.63415, 0.10270, 0.83389, 0.78291, 0.126921, 0.175933, 0.17453, 0.171942, 0.176232, 0.238915, 0.181853]


m = tf.Variable(0.)
c = tf.Variable(0.)

x = tf.placeholder(dtype = tf.float32)
y = tf.placeholder(dtype=tf.float32)



# sigmoid of the line y = mx + c (defined here but not used by the MSE loss below)
model = tf.nn.sigmoid(tf.add(tf.multiply(x, m), c))

pred = tf.add(tf.multiply(x, m),c)
error = pred - y
loss = tf.reduce_mean(tf.square(error))

learn_rate = 0.005
num_epochs = 350

#using Gradient Descent with learning rate 0.005
train = tf.train.GradientDescentOptimizer(learn_rate).minimize(loss)
session = tf.Session()
init = tf.global_variables_initializer()

loss_trace = []


session.run(init)

# train the model for 350 iterations
for epoch in range(num_epochs):
    session.run([train], {x: x_train, y: y_train})
    loss_trace.append(session.run([loss], {x: x_train, y: y_train}))
    print('Iter: ', epoch, 'MSE in training: ', loss_trace[-1])

# evaluate the fitted predictions and errors once, after training
prediction = session.run([pred], {x: x_train, y: y_train})
error = session.run([error], {x: x_train, y: y_train})
 
#final values of m and c
print('')
print('m =', session.run(m))
print('c =', session.run(c))
print('Final loss: ', loss_trace[-1])
print('Prediction :' , prediction)
print('Error: ', error)

import matplotlib.pyplot as plt
plt.xlabel('Number of epochs --------->')
plt.ylabel('Error (MSE) --------->')
plt.plot(loss_trace)
plt.show()


(virtualenvironment.) david@debian:~/pythonvirenv$ python3 sfseats.py
Iter:  0 MSE in training:  [0.15254016]
Iter:  1 MSE in training:  [0.15058766]
Iter:  2 MSE in training:  [0.14867406]
Iter:  3 MSE in training:  [0.14679854]
Iter:  4 MSE in training:  [0.14496036]
Iter:  5 MSE in training:  [0.14315876]
.
.
.

Iter:  345 MSE in training:  [0.054557513]
Iter:  346 MSE in training:  [0.054555614]
Iter:  347 MSE in training:  [0.05455375]
Iter:  348 MSE in training:  [0.054551926]
Iter:  349 MSE in training:  [0.054550137]

m = 0.0074915974
c = 0.30681604
Final loss:  [0.054550137]
Prediction : [array([0.30758017, 0.3068535 , 0.30683103, 0.30690596, 0.30690596,
       0.30692092, 0.30692092, 0.30692092, 0.3069434 , 0.3069509 ,
       0.3069509 , 0.3069434 , 0.3069509 , 0.3069509 , 0.30692843],
      dtype=float32)]
Error:  [array([-0.10963082, -0.03495649,  0.07321103,  0.15459596, -0.32724407,
        0.20422092, -0.5269691 , -0.47598907,  0.18002239,  0.1310179 ,
        0.1324209 ,  0.13500139,  0.1307189 ,  0.0680359 ,  0.12507543],
      dtype=float32)]


From y = mx + c:

The votes for Westminster start to change where y = mx + c crosses zero, at x = 0.

When x is 0 on the sigmoid curve, the input is passing from a negative value to a positive one, indicating that the neural network has started to learn and has identified a pattern.

Where y is the number of votes:

y = 0.0074915974x + 0.30681604

When x = 0:

y = 0.0074915974*0 + 0.30681604
y = 0 + 0.30681604
y = 0.30681604

To scale up by a thousand, as the input vectors were normalised to fall between 0 and 1 by dividing them by a thousand:

y = 306.81604

So there must be 306 to 307 supporters or potential voters for the number of seats, or the voting situation for Westminster, to change in a general election.
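As a quick check of this arithmetic in plain Python, using the m and c values printed by the run above:

m = 0.0074915974
c = 0.30681604

# prediction of the linear model at x = 0
y = m * 0 + c

# undo the normalisation: the training values were divided by a thousand
print(y * 1000)  # 306.81604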


Monday, 16 November 2020

Tensorflow and percolation pennies.

The following network was trained with available bank balance as input, and time spent at a cafe as output.
The bank balance x was normalised by dividing by 1000 so that the values fell between 0 and 1. Similarly y, the time spent at a location, was divided by 1000.
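As a minimal sketch of that normalisation step (the raw figures below are back-computed from the first three normalised values in the code, for illustration only):

# raw values: pounds in the bank and minutes spent at the cafe
raw_balance = [145.69, 179.44, 154.96]
raw_minutes = [381.0, 960.0, 385.0]

# divide by 1000 so every value falls between 0 and 1
x_train = [b / 1000.0 for b in raw_balance]
y_train = [t / 1000.0 for t in raw_minutes]

print(x_train)  # [0.14569, 0.17944, 0.15496]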


The function y = mx + c, solved at mx + c = 0, was used to measure the gradient of the data.

REF: Algorithm for calculating mean squared error (MSE) from Pro Deep Learning with TensorFlow by Santanu Pattanayak, page 144.


This model computes loss using MSE.
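For reference, the MSE that tf.reduce_mean(tf.square(error)) computes in the code below is just the average of the squared differences between predictions and targets; a plain-Python sketch:

def mse(predictions, targets):
    # mean of the squared differences, as in tf.reduce_mean(tf.square(pred - y))
    squared = [(p - t) ** 2 for p, t in zip(predictions, targets)]
    return sum(squared) / len(squared)

print(mse([0.041, 0.042], [0.038, 0.096]))  # about 0.0014625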
 
 The code from this program can be saved as filename.py

The network can be run from a terminal by typing:
python filename.py

Here is the code:
 
 
import tensorflow as tf


# values of x = [0.14569, 0.17944, 0.15496, 0.07607, 0.01223, 0.00873, 0.26456, 0.19928, 0.01417, 0.00220, 0.16451, 0.09408, 0.23073, 0.13403, 0.11177, 0.29657, 0.21266, 0.11302, 0.29185, 0.24873, 0.15997, 0.09582, 0.30616, 0.11861, 0.18292, 0.12121, 0.08206]
# values of y = [0.381, 0.96, 0.385, 0.369, 0.3225, 0.28, 0.2776, 0.2641, 0.3415,0.6881, 0.4925, 0.5263, 0.2965, 0.8622, 0.4678, 0.3493, 0.3008, 0.2553, 0.178, 0.3826, 0.3378, 0.4217, 0.42, 0.4197, 0.3938, 0.5988, 0.5358]
x_train = [0.14569, 0.17944, 0.15496, 0.07607, 0.01223, 0.00873, 0.26456, 0.19928, 0.01417, 0.00220, 0.16451, 0.09408, 0.23073, 0.13403, 0.11177, 0.29657, 0.21266, 0.11302, 0.29185, 0.24873, 0.15997, 0.09582, 0.30616, 0.11861, 0.18292, 0.12121, 0.08206]
y_train = [0.381, 0.96, 0.385, 0.369, 0.3225, 0.28, 0.2776, 0.2641, 0.3415, 0.6881, 0.4925, 0.5263, 0.2965, 0.8622, 0.4678, 0.3493, 0.3008, 0.2553, 0.178,
0.3826, 0.3378, 0.4217, 0.42, 0.4197, 0.3938, 0.5988, 0.5358]


m = tf.Variable(0.)
c = tf.Variable(0.)

x = tf.placeholder(dtype = tf.float32)
y = tf.placeholder(dtype=tf.float32)



# sigmoid of the line y = mx + c (defined here but not used by the MSE loss below)
model = tf.nn.sigmoid(tf.add(tf.multiply(x, m), c))

pred = tf.add(tf.multiply(x, m),c)
error = pred - y
loss = tf.reduce_mean(tf.square(error))

learn_rate = 0.005
num_epochs = 350

#using Gradient Descent with learning rate 0.005
train = tf.train.GradientDescentOptimizer(learn_rate).minimize(loss)
session = tf.Session()
init = tf.global_variables_initializer()

loss_trace = []


session.run(init)

# train the model for 350 iterations
for epoch in range(num_epochs):
    session.run([train], {x: x_train, y: y_train})
    loss_trace.append(session.run([loss], {x: x_train, y: y_train}))
    print('Iter: ', epoch, 'MSE in training: ', loss_trace[-1])

# evaluate the fitted predictions and errors once, after training
prediction = session.run([pred], {x: x_train, y: y_train})
error = session.run([error], {x: x_train, y: y_train})
 
#final values of m and c
print('')
print('m =', session.run(m))
print('c =', session.run(c))
print('Final loss: ', loss_trace[-1])
print('Prediction :' , prediction)
print('Error: ', error)

import matplotlib.pyplot as plt
plt.xlabel('Number of epochs --------->')
plt.ylabel('Error (MSE) --------->')
plt.plot(loss_trace)
plt.show()
 
 
 
Here is the output:


(virtualenvironment.) david@debian:~/pythonvirenv$ python tftimeii.py
Iter:  0 MSE in training:  [0.002089466]
Iter:  1 MSE in training:  [0.0020533486]
Iter:  2 MSE in training:  [0.0020179658]
Iter:  3 MSE in training:  [0.001983303]
Iter:  4 MSE in training:  [0.0019493455]
Iter:  5 MSE in training:  [0.0019160783]
Iter:  6 MSE in training:  [0.001883488]
Iter:  7 MSE in training:  [0.0018515604]
Iter:  8 MSE in training:  [0.0018202825]
Iter:  9 MSE in training:  [0.0017896408]
Iter:  10 MSE in training:  [0.0017596222]
.
.
.
.
Iter:  340 MSE in training:  [0.00031488354]
Iter:  341 MSE in training:  [0.0003148476]
Iter:  342 MSE in training:  [0.00031481232]
Iter:  343 MSE in training:  [0.00031477772]
Iter:  344 MSE in training:  [0.0003147438]
Iter:  345 MSE in training:  [0.0003147105]
Iter:  346 MSE in training:  [0.00031467783]
Iter:  347 MSE in training:  [0.0003146458]
Iter:  348 MSE in training:  [0.0003146143]
Iter:  349 MSE in training:  [0.00031458342]

m = 0.0048132725
c = 0.040683668
Final loss:  [0.00031458342]
Prediction : [array([0.04138491, 0.04154736, 0.04142953, 0.04104981, 0.04074254,
       0.04072569, 0.04195707, 0.04164286, 0.04075187, 0.04069426,
       0.0414755 , 0.0411365 , 0.04179423, 0.04132879, 0.04122165,
       0.04211114, 0.04170726, 0.04122766, 0.04208842, 0.04188087,
       0.04145365, 0.04114488, 0.0421573 , 0.04125457, 0.04156411,
       0.04126709, 0.04107865], dtype=float32)]
Error:  [array([ 0.00328491, -0.05445264,  0.00292953,  0.00414981,  0.00849254,
        0.01272569,  0.01419707,  0.01523286,  0.00660187, -0.02811574,
       -0.0077745 , -0.0114935 ,  0.01214423, -0.04489121, -0.00555835,
        0.00718114,  0.01162726,  0.01569767,  0.02428842,  0.00362087,
        0.00767365, -0.00102512,  0.0001573 , -0.00071543,  0.00218411,
       -0.01861291, -0.01250136], dtype=float32)]
(virtualenvironment.) david@debian:~/pythonvirenv$





Results
-------------

For y = m*x + c,
as we have a value for m and a value for c, we can calculate the value of x when 0 = mx + c, the gradient at which the neural network starts to learn.
These are the steps:
y = m*x + c
0 = m*x + c
0 = 0.0048132725x + 0.040683668
-0.040683668 = 0.0048132725x
x = -0.040683668 / 0.0048132725
x = -8.452392421165


To scale up after normalising at the start we multiply by 1000.

The available bank balance at which the time spent at a location starts to change is
£-8452.392421165
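The same root can be computed directly from the fitted coefficients; a small Python sketch (the helper name x_intercept is mine, not part of the code above):

def x_intercept(m, c, scale=1000.0):
    # solve m*x + c = 0, then undo the divide-by-1000 normalisation
    return (-c / m) * scale

print(x_intercept(0.0048132725, 0.040683668))  # about -8452.39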
Conclusion
-------------------
The pattern that I infer from the results is that I am fixated in time on the following:
a) In 1983 I received a sum of £8,700 for a comminuted fracture of the left patella together with severed anterior and posterior cruciate ligaments and a tendon.
b) At the time of receiving this compensation I had a £700 overdraft from living expenses during my final year at Bristol University.
c) After receiving this compensation I bought a Honda CB400/4 F2 for £550.
d) I earned two weekly salaries of £250 each while dispatch riding.
e) After being knocked off this motorbike I received another £1,100 in compensation.
f) I bought a replacement motorbike, a Honda CB750 F2, for £350.
g) I had a blowout of the front tyre, and later the gearbox needed a new layshaft and I also put new tyres on it.
h) When coming down from an LSD trip I let someone talk me into swapping my bike for a 550cc motorbike that was not as tidy, and later got arrested for drinking and driving.
i) I was later stabbed and unable to hold down a job for any great length of time due to post-traumatic stress disorder (PTSD).
j) My monetary situation has not changed since the balance of £8,700. The deficit of £248 for that year was for beer, amphetamines, mushrooms, one line of cocaine, weed, cannabis, takeaways, petrol and a bottle of Smirnoff Blue Label vodka.

Friday, 14 August 2020

Tensorflow, antibiotics and MSE

The following network was trained with micrograms/ml of total antibiotic as input, and % success rate as output.
The antibiotic dose x was normalised by dividing by 1000 so that the values fell between 0 and 1. Similarly y, the success rate, was divided by 100.


Figures for total dose and % success rate were obtained from table 2 from the web page

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5124968/

The link to table 2 is

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5124968/table/t2/?report=objectonly

Table 2 is contained in the chapter entitled Genetic algorithm with the deterministic model.
The function y = mx + c, solved at mx + c = 0, was used to measure the gradient of the data.

REF: Algorithm for calculating mean squared error (MSE) from Pro Deep Learning with TensorFlow by Santanu Pattanayak, page 144.


This model computes loss using MSE.

A previous post uses cross entropy to calculate loss. The link is

http://pythonprediction.blogspot.com/2020/08/tensorflow-and-antibiotics.html


The code from this program can be saved as filename.py

The network can be run from a terminal by typing:
python3 filename.py

Here is the code:

import tensorflow as tf


# values of x = [0.118, 0.128, 0.122, 0.128, 0.141, 0.132, 0.143, 0.156]
# values of y = [0.912, 0.943, 0.923, 0.932, 0.944, 0.925, 0.940, 0.950]
x_train = [0.118, 0.128, 0.122, 0.128, 0.141, 0.132, 0.143, 0.156]
y_train = [0.912, 0.943, 0.923, 0.932, 0.944, 0.925, 0.940, 0.950]


m = tf.Variable(0.)
c = tf.Variable(0.)

x = tf.placeholder(dtype = tf.float32)
y = tf.placeholder(dtype=tf.float32)



# sigmoid of the line y = mx + c (defined here but not used by the MSE loss below)
model = tf.nn.sigmoid(tf.add(tf.multiply(x, m), c))

pred = tf.add(tf.multiply(x, m),c)
error = pred - y
loss = tf.reduce_mean(tf.square(error))

learn_rate = 0.005
num_epochs = 350

#using Gradient Descent with learning rate 0.005
train = tf.train.GradientDescentOptimizer(learn_rate).minimize(loss)
session = tf.Session()
init = tf.global_variables_initializer()

loss_trace = []


session.run(init)

# train the model for 350 iterations
for epoch in range(num_epochs):
    session.run([train], {x: x_train, y: y_train})
    loss_trace.append(session.run([loss], {x: x_train, y: y_train}))
    print('Iter: ', epoch, 'MSE in training: ', loss_trace[-1])

# evaluate the fitted predictions and errors once, after training
prediction = session.run([pred], {x: x_train, y: y_train})
error = session.run([error], {x: x_train, y: y_train})
 
#final values of m and c
print('')
print('m =', session.run(m))
print('c =', session.run(c))
print('Final loss: ', loss_trace[-1])
print('Prediction :' , prediction)
print('Error: ', error)

import matplotlib.pyplot as plt
plt.xlabel('Number of epochs --------->')
plt.ylabel('Error (MSE) --------->')
plt.plot(loss_trace)
plt.show()


Here is the output:


(virtualenvironment.) david@debian:~/pythonvirenv$ python3 antibioticmsev.py
Iter:  0 MSE in training:  [0.85414684]
Iter:  1 MSE in training:  [0.8368502]
Iter:  2 MSE in training:  [0.819904]
Iter:  3 MSE in training:  [0.803301]
Iter:  4 MSE in training:  [0.7870342]
.
.
.
Iter:  347 MSE in training:  [0.0008233281]
Iter:  348 MSE in training:  [0.0008090532]
Iter:  349 MSE in training:  [0.00079507055]

m = 0.11940679
c = 0.89168817
Final loss:  [0.00079507055]
Prediction : [array([0.90577817, 0.9069722 , 0.9062558 , 0.9069722 , 0.9085245 ,
       0.90744984, 0.90876335, 0.91031563], dtype=float32)]
Error:  [array([-0.00622183, -0.03602779, -0.0167442 , -0.02502775, -0.03547549,
       -0.01755017, -0.03123665, -0.03968436], dtype=float32)]
(virtualenvironment.) david@debian:~/pythonvirenv$






Results
-------------

For y = m*x + c,
as we have a value for m and a value for c, we can calculate the value of x when 0 = mx + c, the gradient at which the neural network starts to learn.
These are the steps:
y = m*x + c
0 = m*x + c
0 = 0.11940679x + 0.89168817
-0.89168817 = 0.11940679x
x = -0.89168817 / 0.11940679
x = -7.467650458


To scale up after normalising at the start we multiply by 1000:
x = -7467 micrograms/ml, or about -7.5 milligrams/ml.


This is the dose at the gradient of the neural net starting to learn; therefore the therapeutic value of the antibiotic starts at a dose of -7.5 milligrams/ml.
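As a sketch, the fitted coefficients can also be run forwards to estimate a success rate for a given dose (the helper name predict_success_rate is mine; m and c are the values printed by the run above):

m = 0.11940679
c = 0.89168817

def predict_success_rate(dose_micrograms_per_ml):
    x = dose_micrograms_per_ml / 1000.0  # same normalisation as the training data
    y = m * x + c                        # the fitted line
    return y * 100.0                     # undo the success-rate normalisation (%)

print(predict_success_rate(128))  # about 90.7, matching the prediction above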

Conclusion
--------------------
The therapeutic starting value of the dose is negative, which leads me to assume that the therapeutic value depends on the cumulative effect of antibiotics already in the environment or previous dosage.



Monday, 3 August 2020

Tensorflow and antibiotics

The following network was trained with micrograms/ml of total antibiotic as input, and % success rate as output.
The antibiotic dose x was normalised by dividing by 1000 so that the values fell between 0 and 1. Similarly y, the success rate, was divided by 100.


Figures for total dose and % success rate were obtained from table 2 from the web page

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5124968/

The link to table 2 is

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5124968/table/t2/?report=objectonly

Table 2 is contained in the chapter entitled Genetic algorithm with the deterministic model.
The function y = mx + c, solved at mx + c = 0, was used to measure the gradient of the data.

Algorithm ref: TensorFlow For Dummies by Matthew Scarpino, page 114.

This model computes loss with cross entropy.
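For reference, the quantity that tf.nn.sigmoid_cross_entropy_with_logits computes for each element is the binary cross entropy of the label against sigmoid(logit); a plain-Python sketch of the same idea:

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_cross_entropy(label, logit):
    # -(y*log(p) + (1 - y)*log(1 - p)) with p = sigmoid(logit)
    p = sigmoid(logit)
    return -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))

print(sigmoid_cross_entropy(0.912, 0.0))  # log(2) = 0.693..., since sigmoid(0) = 0.5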


The code from this program can be saved as filename.py

The network can be run from a terminal by typing:
python3 filename.py

Here is the code:


import tensorflow as tf
from math import log
from numpy import mean
# values of x = [0.118, 0.128, 0.122, 0.128, 0.141, 0.132, 0.143, 0.156]
# values of y = [0.912, 0.943, 0.923, 0.932, 0.944, 0.925, 0.940, 0.950]
x_train = [0.118, 0.128, 0.122, 0.128, 0.141, 0.132, 0.143, 0.156]
y_train = [0.912, 0.943, 0.923, 0.932, 0.944, 0.925, 0.940, 0.950]
m = tf.Variable(0.)
c = tf.Variable(0.)

x = tf.placeholder(dtype = tf.float32)
y = tf.placeholder(dtype=tf.float32)

# model: sigmoid applied to the line y = mx + c
model = tf.nn.sigmoid(tf.add(tf.multiply(x, m), c))

# sigmoid cross-entropy loss; note that the sigmoid of the model is passed
# as the logits argument, so the op applies a second sigmoid internally
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=tf.nn.sigmoid(tf.add(tf.multiply(x, m), c))))


# cross entropy between an expected distribution p and a predicted distribution q
def cross_entropy(p, q):
    return -sum([p[i] * log(q[i]) for i in range(len(p))])


results = list()
for i in range(len(x_train)):
    # create the distribution for each event {0, 1}
    expected = [1.0 - x_train[i], x_train[i]]
    predicted = [1.0 - y_train[i], y_train[i]]
    # calculate cross entropy for the two events
    ce = cross_entropy(expected, predicted)
    print('>[y=%.1f, yhat=%.1f] ce: %.3f nats' % (x_train[i], y_train[i], ce))
    results.append(ce)
 
# calculate the average cross entropy
mean_ce = mean(results)
print('Average Cross Entropy: %.3f nats' % mean_ce)


learn_rate = 0.005
num_epochs = 350

#using Gradient Descent with learning rate 0.005
train = tf.train.GradientDescentOptimizer(learn_rate).minimize(loss)
session = tf.Session()
init = tf.global_variables_initializer()


session.run(init)

#training model for 350 iterations
for epoch in range(num_epochs):
    session.run([train], {x:x_train, y:y_train})
    lossval = session.run([loss], {x:x_train, y:y_train})
                

#final values of m and c
print('')
print('m =', session.run(m))
print('c =', session.run(c))
print('Final loss: ', lossval)

Here is the output:


(virtualenvironment.) david@debian:~/pythonvirenv$ python3 sigmodelsigloss.py
>[y=0.1, yhat=0.9] ce: 0.088 nats
>[y=0.1, yhat=0.9] ce: 0.088 nats
>[y=0.1, yhat=0.9] ce: 0.088 nats
>[y=0.1, yhat=0.9] ce: 0.088 nats
>[y=0.1, yhat=0.9] ce: 0.087 nats
>[y=0.1, yhat=0.9] ce: 0.088 nats
>[y=0.1, yhat=0.9] ce: 0.087 nats
>[y=0.2, yhat=0.9] ce: 0.087 nats
Average Cross Entropy: 0.088 nats

m = 0.017963253
c = 0.13417429
Final loss:  [0.49679244]
(virtualenvironment.) david@debian:~/pythonvirenv$



Results
-------------

For y = m*x + c,
as we have a value for m and a value for c, we can calculate the value of x when 0 = mx + c, the gradient at which the neural network starts to learn.
These are the steps:
y = m*x + c
0 = m*x + c
0 = 0.017963253*x + 0.13417429
-0.13417429 = 0.017963253*x
x = -0.13417429 / 0.017963253
x = -7.469375953




To scale up after normalising at the start we multiply by 1000:
x = -7469 micrograms/ml, or about -7.5 milligrams/ml.
 

This is the dose at the gradient of the neural net starting to learn; therefore the therapeutic value of the antibiotic starts at a dose of -7.5 milligrams/ml.

Conclusion
--------------------
The therapeutic starting value of the dose is negative, which leads me to assume that the therapeutic value depends on the cumulative effect of antibiotics already in the environment or previous dosage.


Wednesday, 18 December 2019

Tensorflow and predicting pulse rate.

The following network was trained with available bank balance as input, and pulse rate obtained using a DFit Fitbit as output.
The bank balance x was normalised by dividing by 1000 so that the values fell between 0 and 1. Similarly, pulse rate was divided by 100.

The function y = mx + c, solved at mx + c = 0, was used to measure the gradient of the data.

Algorithm ref: TensorFlow For Dummies by Matthew Scarpino, page 109, listing 6.3.

 The code from this program can be saved as filename.py

The network can be run from a terminal by typing:
python3 filename.py



Here is the code:


import tensorflow as tf
# values of x = [0.0805, 0.0805, 0.04103, 0.06484, 0.30885, 0.24347, 0.2113, 0.32899,
# 0.18724, 0.15731, 0.12432, 0.12432, 0.3872, 0.32124, 0.26415, 0.2073, 0.17515, 0.16939]
# values of y = [0.094, 0.076, 0.1, 0.088, 0.083, 0.074, 0.083, 0.087, 0.079,
# 0.081, 0.080, 0.081, 0.072, 0.078, 0.078, 0.077, 0.087, 0.084]

# only the first 10 of the 18 values listed above are used for training
x_train = [0.0805, 0.0805, 0.04103, 0.06484, 0.30885, 0.24347, 0.2113, 0.32899, 0.18724, 0.15731]
y_train = [0.094, 0.076, 0.1, 0.088, 0.083, 0.074, 0.083, 0.087, 0.079, 0.081]
m = tf.Variable(0.)
c = tf.Variable(0.)

x = tf.compat.v1.placeholder(dtype=tf.float32)
y = tf.compat.v1.placeholder(dtype=tf.float32)

# model: sigmoid applied to the line y = mx + c
model = tf.nn.sigmoid(tf.add(tf.multiply(x, m), c))

# log-loss style cost from the book listing; note the second term uses
# (1. - tf.math.log(model)) rather than the standard tf.math.log(1. - model)
cost = -1. * tf.reduce_sum(tf.convert_to_tensor(y) * tf.math.log(model) + (1. - tf.convert_to_tensor(y)) * (1. - tf.math.log(model)))

learn_rate = 0.005
num_epochs = 350
#using Gradient Descent with learning rate 0.005
train = tf.compat.v1.train.GradientDescentOptimizer(learn_rate).minimize(cost)

session = tf.compat.v1.Session()
init = tf.compat.v1.global_variables_initializer()
session.run(init)

#training model for 350 iterations
for epoch in range(num_epochs):
    session.run(train, {x:x_train, y:y_train})

#final values of m and c
print('')
print('m =', session.run(m))
print('c =', session.run(c))




Here is the output:


david@debian:~/tensorflow$ python3 balancepulseii.py

m = -2.3215976
c = -13.5563755
david@debian:~/tensorflow$




For y = m*x + c,
as we have a value for m and a value for c, we can calculate the value of x when 0 = mx + c, the gradient at which the neural network starts to learn.
These are the steps:
y = m*x + c
0 = m*x + c
0 = -2.3215976x - 13.5563755
13.5563755 = -2.3215976x
x = 13.5563755 / (-2.3215976)
x = -5.839244277
To get the available balance at which my pulse rate starts to change, I scale up by 1000 to arrive at the figure £-5839.24428.

My available balance ranges from 0 to about £400 maximum. I think the minus value for x indicates that my pulse rate is dependent on financial support in the form of financial gifts in the way of Turkish duty free tobacco, catering sized containers of Nescafe Original, DVDs and, yes, cash from my family.
Enjoy!

Cross referencing the above scaled-up value of £-5839.24428 with the neural network and Tensorflow described in the post http://pythonprediction.blogspot.com/2019/07/tensorflow-and-sigmoid-function.html
which gives a value of increasing spending at x = -£5167.8, suggests that comfort spending in an attempt to deal with the fear reactions of schizophrenia and post-traumatic stress disorder falls within the range of £-5167.8 to £-5839.24428.

Friday, 9 August 2019

Tensorflow and predicting efficacy of antibiotics.

The following network was trained with micrograms/ml of total antibiotic as input, and % success rate as output.
The antibiotic dose x was normalised by dividing by 1000 so that the values fell between 0 and 1. Similarly y, the success rate, was divided by 100.


Figures for total dose and % success rate were obtained from table 2 from the web page

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5124968/

The link to table 2 is

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5124968/table/t2/?report=objectonly

Table 2 is contained in the chapter entitled Genetic algorithm with the deterministic model.
The function y = mx + c, solved at mx + c = 0, was used to measure the gradient of the data.

Algorithm ref: TensorFlow For Dummies by Matthew Scarpino, page 109, listing 6.3.

 The code from this program can be saved as filename.py

The network can be run from a terminal by typing:
python3 filename.py



Here is the code:

import tensorflow as tf

# values of x = [0.118, 0.128, 0.122, 0.128, 0.141, 0.132, 0.143, 0.156]
# values of y = [0.912, 0.943, 0.923, 0.932, 0.944, 0.925, 0.940, 0.950]
x_train = [0.118, 0.128, 0.122, 0.128, 0.141, 0.132, 0.143, 0.156]
y_train = [0.912, 0.943, 0.923, 0.932, 0.944, 0.925, 0.940, 0.950]
m = tf.Variable(0.)
c = tf.Variable(0.)

x = tf.compat.v1.placeholder(dtype=tf.float32)
y = tf.compat.v1.placeholder(dtype=tf.float32)

# model: sigmoid applied to the line y = mx + c
model = tf.nn.sigmoid(tf.add(tf.multiply(x, m), c))

# log-loss style cost from the book listing; note the second term uses
# (1. - tf.math.log(model)) rather than the standard tf.math.log(1. - model)
cost = -1. * tf.reduce_sum(tf.convert_to_tensor(y) * tf.math.log(model) + (1. - tf.convert_to_tensor(y)) * (1. - tf.math.log(model)))

learn_rate = 0.005
num_epochs = 350
#using Gradient Descent with learning rate 0.005
train = tf.compat.v1.train.GradientDescentOptimizer(learn_rate).minimize(cost)

session = tf.compat.v1.Session()
init = tf.compat.v1.global_variables_initializer()
session.run(init)

#training model for 350 iterations
for epoch in range(num_epochs):
    session.run(train, {x:x_train, y:y_train})

#final values of m and c
print('')
print('m =', session.run(m))
print('c =', session.run(c))


david@debian:~/dadchophedge$ python3 antibioticii.py

m = 0.31500342
c = 2.3551354
david@debian:~/dadchophedge$



For y = m*x + c,
as we have a value for m and a value for c, we can calculate the value of x when 0 = mx + c, the gradient at which the neural network starts to learn.
These are the steps:
y = m*x + c
0 = m*x + c
0 = 0.31500342x + 2.3551354
-2.3551354 = 0.31500342x
x = -2.3551354 / 0.31500342
x = -7.476534699








To scale up after normalising at the start we multiply by 1000:
x = -7477 micrograms/ml, or about -7.5 milligrams/ml.


This is the dose at the gradient of the neural net starting to learn; therefore the therapeutic value of the antibiotic starts at a dose of -7.5 milligrams/ml.


Conclusion
--------------------
The therapeutic starting value of the dose is negative, which leads me to assume that the therapeutic value depends on the cumulative effect of antibiotics already in the environment.

A second example of predicting a minimal inhibitory concentration of antibiotic, the dose at which it starts to work, this time using cross entropy for the loss of the model, can be seen in the post of Monday, 3 August 2020 above.

Wednesday, 17 July 2019

Tensorflow and the sigmoid function.

The following network was trained with the available bank balance at the time of going to a cafe as input, and the time spent there on that available balance as output.
The cash balance x was normalised by dividing by 1000 so that the values fell between 0 and 1. Similarly the time y (minutes) was divided by 100.
The function y = mx + c, solved at mx + c = 0, was used to measure the gradient of the data.
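For reference, the sigmoid in the model below is the standard logistic function, which maps any real input into (0, 1) and crosses 0.5 at an input of 0; a plain-Python sketch:

import math

def sigmoid(z):
    # logistic function: 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(-2.0), sigmoid(0.0), sigmoid(2.0))  # about 0.119, 0.5, 0.881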

Algorithm ref: TensorFlow For Dummies by Matthew Scarpino, page 109, listing 6.3.

 The code from this program can be saved as filename.py

The network can be run from a terminal by typing:
python3 filename.py


Here is the code:


import tensorflow as tf

# values of x = [0.14569, 0.17944, 0.15496, 0.07607, 0.01223, 0.00873, 0.26456, 0.19928, 0.01417, 0.00220, 0.16451, 0.09408, 0.23073, 0.13403, 0.11177, 0.29657, 0.21266, 0.11302, 0.29185, 0.24873, 0.15997, 0.09582, 0.30616, 0.11861, 0.18292, 0.12121, 0.08206]

# values of y = [0.38100, 0.96000, 0.38500, 0.36900, 0.32250, 0.28000, 0.27760, 0.26410, 0.34150, 0.68810, 0.49250, 0.52630, 0.29650, 0.86220, 0.46780, 0.34930, 0.30080, 0.25530, 0.17800, 0.38260, 0.33780, 0.42170, 0.42000, 0.41970, 0.39380, 0.59880, 0.53580]

x_train = [0.14569, 0.17944, 0.15496, 0.07607, 0.01223, 0.00873, 0.26456, 0.19928, 0.01417, 0.00220, 0.16451, 0.09408, 0.23073, 0.13403, 0.11177, 0.29657, 0.21266, 0.11302, 0.29185, 0.24873, 0.15997, 0.09582, 0.30616, 0.11861, 0.18292, 0.12121, 0.08206]

y_train = [0.38100, 0.96000, 0.38500, 0.36900, 0.32250, 0.28000, 0.27760, 0.26410, 0.34150, 0.68810, 0.49250, 0.52630, 0.29650, 0.86220, 0.46780, 0.34930, 0.30080, 0.25530, 0.17800, 0.38260, 0.33780, 0.42170, 0.42000, 0.41970, 0.39380, 0.59880, 0.53580]

m = tf.Variable(0.)
c = tf.Variable(0.)

x = tf.compat.v1.placeholder(dtype=tf.float32)
y = tf.compat.v1.placeholder(dtype=tf.float32)

# model: sigmoid applied to the line y = mx + c
model = tf.nn.sigmoid(tf.add(tf.multiply(x, m), c))

# log-loss style cost from the book listing; note the second term uses
# (1. - tf.math.log(model)) rather than the standard tf.math.log(1. - model)
cost = -1. * tf.reduce_sum(tf.convert_to_tensor(y) * tf.math.log(model) + (1. - tf.convert_to_tensor(y)) * (1. - tf.math.log(model)))

learn_rate = 0.005
num_epochs = 350
#using Gradient Descent with learning rate 0.005
train = tf.compat.v1.train.GradientDescentOptimizer(learn_rate).minimize(cost)

session = tf.compat.v1.Session()
init = tf.compat.v1.global_variables_initializer()
session.run(init)

#training model for 350 iterations
for epoch in range(num_epochs):
    session.run(train, {x:x_train, y:y_train})

#final values of m and c
print('')
print('m =', session.run(m))
print('c =', session.run(c))




Here is the screen output:


david@debian:~/tffin$ python3 sigmoidtensorvii.py

m = -1.1617903
c = -6.003845
david@debian:~/tffin$




For y = m*x + c,
as we have a value for m and a value for c, we can calculate the value of x when 0 = mx + c, the gradient at which the neural network starts to learn.
These are the steps:
y = m*x + c
0 = m*x + c
0 = -1.1618x - 6.0039
6.0039 = -1.1618x
x = 6.0039 / (-1.1618)
x = -5.1678




To scale up after normalising at the start we multiply by 1000:
x = -£5167.8


This is the available balance at the gradient of the neural net starting to learn; therefore the balance at which I start to spend is -£5167.8.


My available balance ranges from 0 to about £400 maximum. I think the minus value for x indicates that my money is consistent with having received financial gifts in the way of Turkish duty free tobacco, catering sized containers of Nescafe Original, DVDs and, yes, cash from my family.
Enjoy!