Friday, 9 August 2019

TensorFlow and predicting the efficacy of antibiotics.

The following network was trained with micrograms/ml of total antibiotic as input, and the percentage success rate as output.
The antibiotic dose x was normalised by dividing by 1000 so that the values fell between 0 and 1. Similarly, the success rate y was divided by 100.
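The normalisation step can be sketched in a few lines of plain Python (the raw figures here are simply the training lists used later, scaled back to their original units):

```python
# Raw figures: total dose in micrograms/ml and success rate in percent.
raw_dose = [118, 128, 122, 128, 141, 132, 143, 156]
raw_success = [91.2, 94.3, 92.3, 93.2, 94.4, 92.5, 94.0, 95.0]

# Normalise so that all values fall between 0 and 1.
x_train = [d / 1000 for d in raw_dose]   # divide doses by 1000
y_train = [s / 100 for s in raw_success]  # divide success rates by 100

print(x_train)
print(y_train)
```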


Figures for total dose and % success rate were obtained from table 2 from the web page

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5124968/

The link to table 2 is

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5124968/table/t2/?report=objectonly

Table 2 is contained in the chapter entitled Genetic algorithm with the deterministic model.
The linear function y = mx + c was used to measure the gradient of the data.

Algorithm ref: TensorFlow For Dummies by Matthew Scarpino, page 109, listing 6.3:

The code below can be saved as filename.py

The network can be run from a terminal by typing:
python3 filename.py



Here is the code:

import tensorflow as tf
# Note: under TensorFlow 2.x, call tf.compat.v1.disable_eager_execution()
# here so that the v1 placeholders and Session below run in graph mode.

# values of x = [0.118, 0.128, 0.122, 0.128, 0.141, 0.132, 0.143, 0.156]
# values of y = [0.912, 0.943, 0.923, 0.932, 0.944, 0.925, 0.940, 0.950]
x_train = [0.118, 0.128, 0.122, 0.128, 0.141, 0.132, 0.143, 0.156]
y_train = [0.912, 0.943, 0.923, 0.932, 0.944, 0.925, 0.940, 0.950]
m = tf.Variable(0.)
c = tf.Variable(0.)

x = tf.compat.v1.placeholder(dtype=tf.float32)
y = tf.compat.v1.placeholder(dtype=tf.float32)

# using sigmoid function y = mx + c
model = tf.nn.sigmoid(tf.add(tf.multiply(x, m),c))

# cross-entropy loss: -(y*log(p) + (1 - y)*log(1 - p))
cost = -1. * tf.reduce_sum(y * tf.math.log(model) + (1. - y) * tf.math.log(1. - model))
#cost = tf.sigmoid(model)

learn_rate = 0.005
num_epochs = 350
#using Gradient Descent with learning rate 0.005
train = tf.compat.v1.train.GradientDescentOptimizer(learn_rate).minimize(cost)

session = tf.compat.v1.Session()
init = tf.compat.v1.global_variables_initializer()
session.run(init)

#training model for 350 iterations
for epoch in range(num_epochs):
    session.run(train, {x:x_train, y:y_train})

#final values of m and c
print('')
print('m =', session.run(m))
print('c =', session.run(c))


david@debian:~/dadchophedge$ python3 antibioticii.py
2019-12-19 18:35:55.702573: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1497180000 Hz
2019-12-19 18:35:55.703284: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55d0ed3dccc0 executing computations on platform Host. Devices:
2019-12-19 18:35:55.703357: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2019-12-19 18:35:56.019445: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.

m = 0.31500342
c = 2.3551354
david@debian:~/dadchophedge$
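As a quick sanity check, the values of m and c printed by the run above can be plugged back into the sigmoid model by hand. This is only a sketch using the standard library; the `predict` helper is my own name, not part of the original program:

```python
import math

# Values of m and c printed by the training run above.
m, c = 0.31500342, 2.3551354

def predict(x_norm):
    """Sigmoid model: predicted (normalised) success rate for a normalised dose."""
    return 1.0 / (1.0 + math.exp(-(m * x_norm + c)))

# First training point: 118 micrograms/ml -> normalised dose 0.118.
p = predict(0.118)
print(round(p, 3))  # close to the observed 0.912
```

Larger doses give larger predicted success rates, as expected from the positive gradient m.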



For y = m*x + c,
as we now have values for m and c, we can calculate the value of x for which 0 = mx + c, the gradient at which the neural network starts to learn.
These are the steps:
y = m*x + c
0 = m*x + c
0 = 0.31500342x + 2.3551354
x = -2.3551354/0.31500342
x = -7.476534699
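The same rearrangement takes only a couple of lines of Python:

```python
# Values of m and c from the training run.
m = 0.31500342
c = 2.3551354

# Solve 0 = m*x + c for x: the x-intercept of the linear part of the model.
x_intercept = -c / m
print(x_intercept)  # approximately -7.4765
```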








To scale up after the normalisation at the start, we multiply by 1000:
x = -7477 micrograms/ml, or about -7.5 milligrams/ml.
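Undoing the normalisation is just the inverse of the scaling done at the start, as this small sketch shows:

```python
# Normalised x-intercept from the step above.
x_norm = -7.476534699

# Doses were originally divided by 1000, so multiply by 1000 to
# recover micrograms/ml, then divide by 1000 for milligrams/ml.
dose_micrograms = x_norm * 1000
print(round(dose_micrograms))            # micrograms/ml
print(round(dose_micrograms / 1000, 1))  # milligrams/ml
```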


This is the dose at the gradient at which the neural net starts to learn; therefore the therapeutic value of the antibiotic starts at a dose of -7.5 milligrams/ml.


Conclusion
--------------------
The therapeutic starting value of the dose is negative, which leads me to assume that the therapeutic value depends on the cumulative effect of antibiotics already in the environment.

A second example of predicting the minimal inhibitory concentration of an antibiotic, the dose at which it starts to work, this time using cross entropy for the loss of the model, can be seen at
