## Tiling and broadcasting

One of the common operations in TensorFlow.js is `tf.tile`, which takes a tensor as input and generates a new tensor in which each dimension is repeated a given number of times:

```jsx
tf.tensor([1, 2]).tile([3]).print(); // [1, 2, 1, 2, 1, 2]
```

We can tile 2d or higher dimensional tensors as well. For example, here we use `tf.tile` to generate a checkerboard:

```jsx
let board = tf.tensor([[1, 0], [0, 1]]).tile([4, 4]);
let scaledBoard = tf.image.resizeNearestNeighbor(board.expandDims(-1), [256, 256]);
imshow(scaledBoard);
```

Note that we used `tf.image.resizeNearestNeighbor` to scale up the checkerboard for better visualization. `imshow` is a built-in function on our platform that simply visualizes an image.

One common operation in image processing is the `meshgrid` function. Both MATLAB and NumPy have `meshgrid` as a built-in function. For a given 2d image size, `meshgrid` generates two matrices corresponding to the `x` and `y` coordinates of the image. We can use `tf.range` and `tf.tile` to implement `meshgrid`:

```jsx
function meshgrid(width, height) {
  let xs = tf.range(0, width).expandDims(0).tile([height, 1]);
  let ys = tf.range(0, height).expandDims(1).tile([1, width]);
  return [xs, ys];
}

let [xs, ys] = meshgrid(256, 256);
imshow(xs.concat(ys, -1));
```

TensorFlow.js also supports broadcasting of elementwise operations. Broadcasting makes tiling unnecessary in certain situations. Normally, when you want to perform operations like addition and multiplication, you need to make sure that the shapes of the operands match; e.g. you can’t add a tensor of shape [3, 2] to a tensor of shape [3, 4]. But there’s a special case, and that’s when you have a singular dimension. TensorFlow.js implicitly tiles the tensor across its singular dimensions to match the shape of the other operand.
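The shape-matching rule can be sketched in plain JavaScript, with no tfjs required. Note that `broadcastShape` is a hypothetical helper written for illustration, not part of the tfjs API:

```jsx
// Sketch of the broadcast shape-resolution rule (plain JS, no tfjs).
// `broadcastShape` is a hypothetical helper name, not a tfjs function.
function broadcastShape(shapeA, shapeB) {
  const rank = Math.max(shapeA.length, shapeB.length);
  const result = [];
  for (let i = 1; i <= rank; i++) {
    // Align shapes from the trailing dimension; missing dims count as 1.
    const a = shapeA[shapeA.length - i] ?? 1;
    const b = shapeB[shapeB.length - i] ?? 1;
    if (a !== b && a !== 1 && b !== 1) {
      throw new Error(`Shapes [${shapeA}] and [${shapeB}] are incompatible`);
    }
    result.unshift(Math.max(a, b));
  }
  return result;
}

console.log(broadcastShape([3, 2], [3, 1])); // [3, 2]
console.log(broadcastShape([2, 1], [2]));    // [2, 2]
```

Singular dimensions are stretched to match the other operand, and a lower-rank shape is padded with 1s on the left before comparison.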
So it’s valid to add a tensor of shape [3, 2] to a tensor of shape [3, 1]:

```jsx
let a = tf.tensor([[1., 2.], [3., 4.]]);
let b = tf.tensor([[1.], [2.]]);

// let c = a.add(tf.tile(b, [1, 2]));
let c = a.add(b);
c.print();
```

Broadcasting allows us to perform implicit tiling, which makes the code shorter and more memory efficient, since we don’t need to store the result of the tiling operation.

So far we discussed the good part of broadcasting. But what’s the ugly part, you may ask? Implicit assumptions almost always make debugging harder. Consider the following example:

```jsx
let a = tf.tensor([[1.], [2.]]);
let b = tf.tensor([1., 2.]);

let c = a.add(b).sum();
c.print();
```

What do you think the value of `c` would be after evaluation? If you guessed 6, that’s wrong. It’s going to be 12. This is because when the ranks of two tensors don’t match, TensorFlow.js automatically expands the first dimension of the tensor with the lower rank before the elementwise operation, so the result of the addition would be [[2, 3], [3, 4]], and reducing over all elements would give us 12.

The way to avoid this problem is to be as explicit as possible. Had we specified which dimension we wanted to reduce across, catching this bug would have been much easier:

```jsx
let a = tf.tensor([[1.], [2.]]);
let b = tf.tensor([1., 2.]);

let c = a.add(b).sum(0);
c.print();
```

Here the value of `c` would be [5, 7], and we would immediately guess, based on the shape of the result, that something is wrong. A general rule of thumb is to always specify the dimensions in reduction operations and when using `tf.squeeze`.
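The rank-expansion behavior described above can be verified without tfjs. This plain-JS sketch broadcasts a [2, 1] column against a rank-1 row of length 2 (`broadcastAdd2d` is a hypothetical helper written for illustration):

```jsx
// Plain-JS sketch of the broadcast above: a [2, 1] column plus a rank-1
// row of length 2, which is treated as shape [1, 2] before the addition.
function broadcastAdd2d(col, row) {
  // col: number[][] of shape [n, 1]; row: number[] of shape [m]
  return col.map(([v]) => row.map((w) => v + w));
}

const result = broadcastAdd2d([[1], [2]], [1, 2]);
console.log(result); // [[2, 3], [3, 4]]

// Reducing over all elements gives 12, not 6:
const total = result.flat().reduce((s, x) => s + x, 0);
console.log(total); // 12

// Reducing over axis 0 gives [5, 7], whose shape makes the bug visible:
const axis0 = result[0].map((_, j) => result.reduce((s, r) => s + r[j], 0));
console.log(axis0); // [5, 7]
```

Seeing a 2×2 intermediate where you expected a length-2 vector is exactly the kind of surprise that explicit reduction axes expose early.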