## Broadcasting the good and the ugly

TensorFlow supports broadcasting elementwise operations. Normally when you want to perform operations like addition and multiplication, you need to make sure that the shapes of the operands match; e.g. you can't add a tensor of shape [3, 2] to a tensor of shape [3, 4]. But there's a special case, and that's when you have a singleton dimension (a dimension of size 1). TensorFlow implicitly tiles the tensor across its singleton dimensions to match the shape of the other operand. So it's valid to add a tensor of shape [3, 2] to a tensor of shape [3, 1]:

```python
import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[1.], [2.]])

# c = a + tf.tile(b, [1, 2])
c = a + b
```

Broadcasting allows us to perform implicit tiling, which makes the code shorter and more memory efficient, since we don't need to store the result of the tiling operation. One neat place this can be used is when combining features of varying length. In order to concatenate features of varying length, we commonly tile the input tensors, concatenate the result, and apply some nonlinearity. This is a common pattern across a variety of neural network architectures:

```python
a = tf.random_uniform([5, 3, 5])
b = tf.random_uniform([5, 1, 6])

# concat a and b and apply nonlinearity
tiled_b = tf.tile(b, [1, 3, 1])
c = tf.concat([a, tiled_b], 2)
d = tf.layers.dense(c, 10, activation=tf.nn.relu)
```

But this can be done more efficiently with broadcasting. We use the fact that a linear layer applied to the concatenation of two tensors is the same as the sum of two linear layers applied to each tensor separately, i.e. f(W [x; y]) = f(W_x x + W_y y). So we can do the linear operations separately and use broadcasting to do implicit concatenation:

```python
pa = tf.layers.dense(a, 10)
pb = tf.layers.dense(b, 10)
d = tf.nn.relu(pa + pb)
```

In fact this piece of code is pretty general and can be applied to tensors of arbitrary shape as long as broadcasting between the tensors is possible:

```python
def merge(a, b, units, activation=tf.nn.relu):
    pa = tf.layers.dense(a, units)
    pb = tf.layers.dense(b, units)
    c = pa + pb
    if activation is not None:
        c = activation(c)
    return c
```

A slightly more general form of this function is [included](#merge) in the cookbook.

So far we discussed the good part of broadcasting. But what's the ugly part, you may ask? Implicit assumptions almost always make debugging harder. Consider the following example:

```python
a = tf.constant([[1.], [2.]])
b = tf.constant([1., 2.])
c = tf.reduce_sum(a + b)
```

What do you think the value of c would be after evaluation? If you guessed 6, that's wrong. It's going to be 12. This is because when the ranks of two tensors don't match, TensorFlow implicitly pads the shape of the lower-rank tensor with leading dimensions of size 1 before the elementwise operation, so b (shape [2]) is treated as shape [1, 2]. The result of the addition is therefore [[2, 3], [3, 4]], and reducing over all elements gives us 12.

The way to avoid this problem is to be as explicit as possible. Had we specified which dimension we wanted to reduce across, catching this bug would have been much easier:

```python
a = tf.constant([[1.], [2.]])
b = tf.constant([1., 2.])
c = tf.reduce_sum(a + b, 0)
```

Here the value of c would be [5, 7], and we would immediately guess from the shape of the result that something is wrong. A general rule of thumb is to always specify the dimensions in reduction operations and when using tf.squeeze.
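
As a small illustration of that rule of thumb, here is a sketch (not from the original text, shapes chosen for illustration) of why an explicit axis in tf.squeeze surfaces shape bugs early, while an implicit squeeze hides them:

```python
import tensorflow as tf

a = tf.random_uniform([2, 1, 3, 1])

# Implicit squeeze removes *every* dimension of size 1, so the result's rank
# silently depends on which dimensions happen to be 1 at runtime.
b = tf.squeeze(a)           # shape [2, 3]

# Explicit squeeze removes only the intended dimension and raises an error
# if that dimension is not actually of size 1.
c = tf.squeeze(a, axis=[1])  # shape [2, 3, 1]
```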