## TensorFlow basics

The most striking difference between TensorFlow and other numerical computation libraries such as NumPy is that operations in TensorFlow are symbolic. This is a powerful concept that allows TensorFlow to do all sorts of things (e.g. automatic differentiation) that are not possible with imperative libraries such as NumPy, but it also comes at the cost of making the library harder to grasp. Our attempt here is to demystify TensorFlow and provide some guidelines and best practices for using it more effectively.

Let's start with a simple example: multiplying two random matrices. First we look at an implementation in NumPy:

```python
import numpy as np

x = np.random.normal(size=[10, 10])
y = np.random.normal(size=[10, 10])
z = np.dot(x, y)

print(z)
```

Now we perform the exact same computation, this time in TensorFlow:

```python
import tensorflow as tf

x = tf.random_normal([10, 10])
y = tf.random_normal([10, 10])
z = tf.matmul(x, y)

sess = tf.Session()
z_val = sess.run(z)

print(z_val)
```

Unlike NumPy, which immediately performs the computation and produces the result, TensorFlow only gives us a handle (of type `Tensor`) to a node in the graph that represents the result. If we try printing the value of `z` directly, we get something like this:

```
Tensor("MatMul:0", shape=(10, 10), dtype=float32)
```

Since both inputs have fully defined shapes, TensorFlow is able to infer the shape of the resulting tensor as well as its type. In order to compute the value of the tensor we need to create a session and evaluate it using the `Session.run()` method.

***
__Tip__: When using a Jupyter notebook, make sure to call `tf.reset_default_graph()` at the beginning to clear the symbolic graph before defining new nodes.
***

To understand how powerful symbolic computation can be, let's have a look at another example. Assume that we have samples from a curve (say f(x) = 5x^2 + 3) and we want to estimate f(x) based on these samples. We define a parametric function g(x, w) = w0 x^2 + w1 x + w2, which is a function of the input x and latent parameters w. Our goal is then to find the latent parameters such that g(x, w) ≈ f(x). This can be done by minimizing the following loss function: L(w) = ∑ (f(x) - g(x, w))^2. Although there's a closed-form solution for this simple problem, we opt for a more general approach that can be applied to any differentiable function: stochastic gradient descent. We simply compute the average gradient of L(w) with respect to w over a set of sample points and move in the opposite direction.
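Because the graph is symbolic, TensorFlow can derive these gradients for us. As a minimal sketch (using a toy loss that is not part of the curve-fitting example below), `tf.gradients` builds the gradient computation as just another node in the graph:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32)
# A toy loss: (x - 3)^2, whose analytic derivative is 2 * (x - 3).
loss = tf.square(x - 3.0)
# tf.gradients symbolically differentiates loss with respect to x and
# returns a list with one gradient tensor per input.
grad = tf.gradients(loss, [x])[0]

sess = tf.Session()
print(sess.run(grad, {x: 5.0}))  # prints 4.0, matching 2 * (5 - 3)
```

Under the hood, optimizers such as `tf.train.AdamOptimizer` rely on this same machinery: `minimize()` differentiates the loss with respect to the variables and applies the resulting updates.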
Putting it all together, here's how the curve fitting can be done in TensorFlow:

```python
import numpy as np
import tensorflow as tf

# Placeholders are used to feed values from python to TensorFlow ops. We define
# two placeholders, one for the input feature x, and one for the output y.
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)

# Assuming we know that the desired function is a polynomial of 2nd degree, we
# allocate a vector of size 3 to hold the coefficients. The variable will be
# automatically initialized with random noise.
w = tf.get_variable("w", shape=[3, 1])

# We define yhat to be our estimate of y.
f = tf.stack([tf.square(x), x, tf.ones_like(x)], 1)
yhat = tf.squeeze(tf.matmul(f, w), 1)

# The loss is defined to be the l2 distance between our estimate of y and its
# true value. We also add a shrinkage term to keep the resulting weights small.
loss = tf.nn.l2_loss(yhat - y) + 0.1 * tf.nn.l2_loss(w)

# We use the Adam optimizer with learning rate set to 0.1 to minimize the loss.
train_op = tf.train.AdamOptimizer(0.1).minimize(loss)

def generate_data():
    x_val = np.random.uniform(-10.0, 10.0, size=100)
    y_val = 5 * np.square(x_val) + 3
    return x_val, y_val

sess = tf.Session()
# Since we are using variables we first need to initialize them.
sess.run(tf.global_variables_initializer())
for _ in range(1000):
    x_val, y_val = generate_data()
    _, loss_val = sess.run([train_op, loss], {x: x_val, y: y_val})
    print(loss_val)

print(sess.run([w]))
```

By running this piece of code you should see a result close to this:

```
[4.9924135, 0.00040895029, 3.4504161]
```

This is a relatively close approximation to our true parameters (5, 0, and 3).

This is just the tip of the iceberg for what TensorFlow can do. Many problems, such as optimizing large neural networks with millions of parameters, can be implemented efficiently in TensorFlow in just a few lines of code. TensorFlow takes care of scaling across multiple devices and threads, and supports a variety of platforms.
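For instance, here is a minimal sketch of explicit device placement (assuming at least one GPU is available; soft placement lets TensorFlow fall back to another device otherwise):

```python
import tensorflow as tf

# Pin the following ops to the first GPU.
with tf.device("/gpu:0"):
    a = tf.random_normal([1000, 1000])
    b = tf.random_normal([1000, 1000])
    c = tf.matmul(a, b)

# allow_soft_placement falls back to an available device when the
# requested one is not present.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
print(sess.run(c))
```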