**Tensors** are collections of scalars organized along several axes, or dimensions. In other words, tensors are generalizations of vectors and matrices to any number of axes.
Deep learning frameworks typically handle data in the form of tensors. For example, the following data are stored in tensors: *signals* (input data), *trainable parameters*, and *activations* (intermediate values).
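A minimal sketch in PyTorch illustrating all three (the layer and batch sizes are arbitrary choices for the example):

```python
import torch

# Signals (input data): a batch of 4 input vectors, each with 8 features.
signals = torch.randn(4, 8)

# Trainable parameters: a linear layer's weight matrix and bias vector.
layer = torch.nn.Linear(8, 3)
print(type(layer.weight))  # torch.nn.Parameter, a subclass of torch.Tensor

# Activations (intermediate values): the layer's output for this batch.
activations = layer(signals)
print(activations.shape)  # torch.Size([4, 3])
```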
Tensor libraries make many operations fast by storing the shape representation separately from the storage layout.
The shape representation is metadata describing the tensor's dimensions (and, typically, the strides used to index into it); the storage layout is the flat buffer holding the actual data.
Because the two are kept separate, a library can reshape and reorganize a tensor by rewriting only the metadata, without copying the underlying data every time.
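A short PyTorch sketch of this idea: reshaping and transposing below change only the shape/stride metadata, while all views share one storage buffer.

```python
import torch

t = torch.arange(12)        # shape: (12,)
m = t.view(3, 4)            # reshape: new shape metadata, same storage
mt = m.t()                  # transpose: swaps the strides, same storage

# All three views share one underlying buffer -- no data was copied.
print(t.data_ptr() == m.data_ptr() == mt.data_ptr())  # True

print(m.shape, m.stride())   # torch.Size([3, 4]) (4, 1)
print(mt.shape, mt.stride()) # torch.Size([4, 3]) (1, 4)

# Writing through one view is visible through the others.
m[0, 0] = 99
print(t[0])                  # tensor(99)
```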
To understand why copying data is inefficient, see [[Graphical Processing Units]].