## Debugging TensorFlow models

The symbolic nature of TensorFlow makes debugging TensorFlow code harder than debugging regular Python code. Here we introduce a number of tools included with TensorFlow that make debugging much easier.

Probably the most common error one can make when using TensorFlow is passing tensors of the wrong shape to ops. Many TensorFlow ops can operate on tensors of different ranks and shapes. This can be convenient when using the API, but can lead to extra headaches when things go wrong.

For example, consider the tf.matmul op. It can multiply two matrices:
```python
a = tf.random_uniform([2, 3])
b = tf.random_uniform([3, 4])
c = tf.matmul(a, b)  # c is a tensor of shape [2, 4]
```
But the same function also does batch matrix multiplication:
```python
a = tf.random_uniform([10, 2, 3])
b = tf.random_uniform([10, 3, 4])
c = tf.matmul(a, b)  # c is a tensor of shape [10, 2, 4]
```
Another example that we talked about before in the [broadcasting](#broadcast) section is the add operation, which supports broadcasting:
```python
a = tf.constant([[1.], [2.]])
b = tf.constant([1., 2.])
c = a + b  # c is a tensor of shape [2, 2]
```

### Validating your tensors with tf.assert* ops

One way to reduce the chance of unwanted behavior is to explicitly verify the rank or shape of intermediate tensors with tf.assert* ops:
```python
a = tf.constant([[1.], [2.]])
b = tf.constant([1., 2.])
check_a = tf.assert_rank(a, 1)  # This will raise an InvalidArgumentError exception
check_b = tf.assert_rank(b, 1)
with tf.control_dependencies([check_a, check_b]):
    c = a + b  # c is a tensor of shape [2, 2]
```
Remember that assertion nodes, like other operations, are part of the graph, and get pruned during Session.run() if they are not evaluated. So make sure to create explicit dependencies on assertion ops to force TensorFlow to execute them.

You can also use assertions to validate the value of tensors at runtime:
```python
check_pos = tf.assert_positive(a)
```
See the official docs for a [full list of assertion ops](https://www.tensorflow.org/api_guides/python/check_ops).

### Logging tensor values with tf.Print

Another useful built-in function for debugging is tf.Print, which logs the given tensors to standard error:
```python
input_copy = tf.Print(input, tensors_to_print_list)
```
Note that tf.Print returns a copy of its first argument as output. One way to force tf.Print to run is to pass its output to another op that gets executed. For example, if we want to print the values of tensors a and b before adding them, we could do something like this:
```python
a = ...
b = ...
a = tf.Print(a, [a, b])
c = a + b
```
Alternatively, we could manually define a control dependency, as sketched below.
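Here is a minimal sketch of that alternative. The tensors a and b and the name print_op are illustrative placeholders, not part of the example above:
```python
import tensorflow as tf

a = tf.constant([1., 2.])
b = tf.constant([3., 4.])

# tf.Print acts as an identity op on its first argument, with the
# side effect of logging the listed tensors to standard error.
print_op = tf.Print(a, [a, b], message="a, b = ")

# Ops created inside this block gain a control dependency on print_op,
# so evaluating c also forces the print to execute.
with tf.control_dependencies([print_op]):
    c = a + b

with tf.Session() as sess:
    sess.run(c)  # logs the values of a and b to standard error
```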
### Check your gradients with tf.test.compute_gradient_error

__Not__ all the operations in TensorFlow come with gradients, and it's easy to unintentionally build graphs for which TensorFlow cannot compute the gradients. Let's look at an example:
```python
import tensorflow as tf

def non_differentiable_softmax_entropy(logits):
    probs = tf.nn.softmax(logits)
    return tf.nn.softmax_cross_entropy_with_logits(labels=probs, logits=logits)

w = tf.get_variable("w", shape=[5])
y = -non_differentiable_softmax_entropy(w)

opt = tf.train.AdamOptimizer()
train_op = opt.minimize(y)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for i in range(10000):
    sess.run(train_op)

print(sess.run(tf.nn.softmax(w)))
```
We are using tf.nn.softmax_cross_entropy_with_logits to define entropy over a categorical distribution. We then use the Adam optimizer to find the weights with maximum entropy. If you have taken a course on information theory, you would know that the uniform distribution has maximum entropy. So you would expect the result to be [0.2, 0.2, 0.2, 0.2, 0.2]. But if you run this you may get unexpected results like this:
```
[ 0.34081486 0.24287023 0.23465775 0.08935683 0.09230034]
```
It turns out tf.nn.softmax_cross_entropy_with_logits has undefined gradients with respect to labels! But how could we have spotted this if we didn't already know?

Fortunately, TensorFlow comes with a numerical differentiator that can be used to find errors in symbolic gradients. Let's see how we can use it:
```python
with tf.Session():
    diff = tf.test.compute_gradient_error(w, [5], y, [])
    print(diff)
```
If you run this, you will see that the difference between the numerical and symbolic gradients is quite high (0.06 - 0.1 in my tries).

Now let's fix our function with a differentiable version of the entropy and check again:
```python
import tensorflow as tf

def softmax_entropy(logits, dim=-1):
    plogp = tf.nn.softmax(logits, dim) * tf.nn.log_softmax(logits, dim)
    return -tf.reduce_sum(plogp, dim)

w = tf.get_variable("w", shape=[5])
y = -softmax_entropy(w)

print(w.get_shape())
print(y.get_shape())

with tf.Session() as sess:
    diff = tf.test.compute_gradient_error(w, [5], y, [])
    print(diff)
```
The difference should be ~0.0001, which looks much better. Now if you run the optimizer again with the correct version, you can see the final weights would be:
```
[ 0.2 0.2 0.2 0.2 0.2]
```
which is exactly what we wanted.

[TensorFlow summaries](https://www.tensorflow.org/api_guides/python/summary) and [tfdbg (TensorFlow Debugger)](https://www.tensorflow.org/api_guides/python/tfdbg) are other tools that can be used for debugging. Please refer to the official docs to learn more.
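As a quick taste of tfdbg, here is a minimal sketch of wrapping a session so that each run() call drops into the interactive debugger CLI. This assumes TensorFlow 1.x, where the wrapper lives in the tensorflow.python.debug module:
```python
import tensorflow as tf
from tensorflow.python import debug as tf_debug

w = tf.get_variable("w", shape=[5])

sess = tf.Session()
# The wrapped session behaves like a normal Session, but pauses at each
# run() call so you can inspect intermediate tensor values and the graph.
sess = tf_debug.LocalCLIDebugWrapperSession(sess)

sess.run(tf.global_variables_initializer())
print(sess.run(tf.nn.softmax(w)))
```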