Monday, 15 July 2019

Tensorflow and linear regression

The following network was trained on pairs of values: the bank balance available at the time of going to a cafe, and the time spent there on that balance.
The cash balance x was normalised by dividing by 1000 so that the values fell between 0 and 1. Similarly, the time y (in minutes) was divided by 100.
The linear function y = mx + c was fitted to the data to find its gradient m and intercept c.
In the example that follows the same function is written as lm = W*x + b; only the variable names differ (W for the gradient, b for the intercept).
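As a minimal sketch of that normalisation step (the raw figures below are simply the first three training values with the scaling undone; the currency of the balance is not stated in the post):

# raw readings: bank balance (currency unstated) and time in minutes
raw_balance = [145.69, 179.44, 154.96]
raw_minutes = [38.1, 96.0, 38.5]

# scale both so the values fall roughly between 0 and 1
x_train = [bal / 1000 for bal in raw_balance]   # -> [0.14569, 0.17944, 0.15496]
y_train = [mins / 100 for mins in raw_minutes]  # -> [0.381, 0.96, 0.385]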

Algorithm ref:   https://medium.com/datadriveninvestor/an-introduction-to-tensorflow-and-implementing-a-simple-linear-regression-model-d900dd2e9963

The code below can be saved as filename.py.

The network can then be run from a terminal by typing:
python3 filename.py




import tensorflow as tf

# x_train: bank balance normalised by dividing by 1000
x_train = [0.14569, 0.17944, 0.15496, 0.07607, 0.01223, 0.00873, 0.26456, 0.19928, 0.01417, 0.00220, 0.16451, 0.09408, 0.23073, 0.13403, 0.11177, 0.29657, 0.21266, 0.11302, 0.29185, 0.24873, 0.15997, 0.09582, 0.30616, 0.11861, 0.18292, 0.12121, 0.08206]

# y_train: time spent in the cafe (minutes) normalised by dividing by 100
y_train = [0.38100, 0.96000, 0.38500, 0.36900, 0.32250, 0.28000, 0.27760, 0.26410, 0.34150, 0.68810, 0.49250, 0.52630, 0.29650, 0.86220, 0.46780, 0.34930, 0.30080, 0.25530, 0.17800, 0.38260, 0.33780, 0.42170, 0.42000, 0.41970, 0.39380, 0.59880, 0.53580]

# defining the weight W and bias b, with arbitrary starting values
W = tf.Variable([-.5], dtype=tf.float32)
b = tf.Variable([.5], dtype=tf.float32)

# placeholders that will be fed the training data
x = tf.compat.v1.placeholder(dtype=tf.float32)
y = tf.compat.v1.placeholder(dtype=tf.float32)

# using linear function y = Wx + b
lm = W*x + b

# loss: the sum of squared errors between the model output and y
loss = tf.reduce_sum(tf.square(lm - y))

#using Gradient Descent with learning rate 0.01
optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.01)

#minimizing loss
train = optimizer.minimize(loss)

# create a session and initialise the variables
session = tf.compat.v1.Session()
init = tf.compat.v1.global_variables_initializer()
session.run(init)

#training model for 1000 iterations
for i in range(1000):
    session.run(train, {x:x_train, y:y_train})

#final values of W and b
print('')
print('W =', session.run(W))
print('b =', session.run(b))
#print(session.run([W,b]))
#output of the model
#print(session.run(lm,{x:[5,6,7,8]}))
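A note on versions: the listing above uses the tf.compat.v1 API. Under TensorFlow 1.x it runs as-is, but under TensorFlow 2.x placeholders and sessions only work once eager execution has been switched off, so one extra line is needed near the top of the script:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()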




The screen output is as follows:


david@debian:~/tensorflow$ python3 y=mx+ctensoriii.py
2019-07-15 17:32:59.107041: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1497170000 Hz
2019-07-15 17:32:59.107545: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55b018c09fe0 executing computations on platform Host. Devices:
2019-07-15 17:32:59.107592: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2019-07-15 17:32:59.147715: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.

W = [-0.4330191]
b = [0.4907168]
david@debian:~/tensorflow$



For y = W*x + b, as we now have values for both W and b, we can calculate / predict the time y for any value of x.
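Plugging in the values printed above (they will vary slightly between runs), a prediction looks like the following sketch; the raw balance is taken from the first training example and is only there to illustrate undoing the normalisation:

W = -0.4330191
b = 0.4907168

balance = 145.69       # raw balance, as recorded before normalisation
x = balance / 1000     # normalise, exactly as in training
y = W * x + b          # predicted normalised time, about 0.428
print(y * 100)         # undo the scaling: roughly 42.8 minutes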
