Hands-on Exploration of TensorFlow’s Low-Level API
This project demonstrates direct usage of TensorFlow’s low-level operations to provide maximum control over tensor creation, manipulation, and mathematical computation — bypassing high-level abstractions when needed. The goal is to understand TensorFlow’s computation model from the ground up and to build a foundation for advanced, custom deep learning workflows.
The notebook follows a structured, example-driven workflow (a condensed code sketch follows the list):
- Environment setup & imports – Initialize TensorFlow and supporting libraries.
- Tensor creation – Use `tf.constant`, `tf.Variable`, and various initializers (`zeros`, `ones`, `fill`, `range`, `random`).
- Inspecting tensors – Retrieve shape, rank, and dtype information.
- Type casting – Change tensor data types with `tf.cast`.
- Mathematical operations – Element-wise math, reductions (`reduce_sum`, `reduce_mean`), and matrix multiplication (`matmul`).
- Shape manipulation – Reshaping, expanding/squeezing dimensions, transposition.
- Tensor composition – Concatenation, stacking, splitting, tiling, broadcasting.
- Indexing & slicing – Python slicing and TensorFlow ops (`tf.gather`, `tf.slice`).
- Random tensors – Generate tensors with controlled distributions.
- Practical demonstrations – Show integration of these low-level ops into workflows.
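Taken together, those steps condense into only a few lines. A minimal sketch of the same operations (illustrative values, not the notebook's exact cells):

```python
import tensorflow as tf

# Tensor creation: immutable constant vs. mutable variable
a = tf.constant([[1, 2, 3], [4, 5, 6]])              # shape (2, 3), dtype int32
v = tf.Variable(tf.zeros((2, 3)))                    # mutable, float32

# Inspecting tensors: shape, rank, dtype
print(a.shape, tf.rank(a).numpy(), a.dtype)          # (2, 3) 2 <dtype: 'int32'>

# Type casting with tf.cast
a_f = tf.cast(a, tf.float32)

# Mathematical operations: reductions and matrix multiplication
total = tf.reduce_sum(a_f)                           # scalar 21.0
col_means = tf.reduce_mean(a_f, axis=0)              # per-column means, shape (3,)
gram = tf.matmul(a_f, tf.transpose(a_f))             # shape (2, 2)

# Shape manipulation: reshape, expand, squeeze
flat = tf.reshape(a, (6,))                           # shape (6,)
col = tf.expand_dims(flat, axis=1)                   # shape (6, 1)
back = tf.squeeze(col)                               # shape (6,)

# Tensor composition: stacking and concatenation
stacked = tf.stack([flat, flat])                     # shape (2, 6)
joined = tf.concat([a, a], axis=0)                   # shape (4, 3)
```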
Libraries used in the code:
- TensorFlow – Low-level ops, tensor manipulation, math functions.
- NumPy – Array creation and interoperability with tensors.
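Interoperability runs in both directions; a minimal sketch (array contents are illustrative):

```python
import numpy as np
import tensorflow as tf

np_array = np.arange(6, dtype=np.float32).reshape(2, 3)

# NumPy -> TensorFlow: explicit conversion (most TF ops also accept arrays directly)
t = tf.convert_to_tensor(np_array)

# TensorFlow -> NumPy: .numpy() on any eager tensor returns a plain ndarray
doubled = (t * 2).numpy()                            # np.ndarray, shape (2, 3)
```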
Dataset: Not provided – the notebook uses only synthetic/random tensors generated in memory.
Requirements:

```bash
pip install tensorflow numpy
```

Run the notebook:

```bash
jupyter notebook low_level_api.ipynb
```

or in JupyterLab:

```bash
jupyter lab low_level_api.ipynb
```
Execute cells sequentially to follow the learning progression.
- Illustrated complete control over tensor creation and manipulation using TensorFlow’s low-level API.
- Demonstrated immutable vs. mutable tensors (`tf.constant` vs. `tf.Variable`).
- Showed graph-friendly slicing/indexing with TensorFlow ops instead of Python-only indexing.
- Provided examples of broadcasting and shape alignment.
- Generated random tensors for reproducibility and experimentation (a short sketch of broadcasting and seeding follows this list).
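The broadcasting and seeded-random points can be reproduced with a short sketch (variable names are illustrative):

```python
import tensorflow as tf

# Broadcasting: a (3, 1) column aligns with a (1, 4) row -> (3, 4) result,
# with no explicit tiling or reshaping of either operand
col = tf.reshape(tf.range(3, dtype=tf.float32), (3, 1))
row = tf.reshape(tf.range(4, dtype=tf.float32), (1, 4))
grid = col + row                                     # shape (3, 4)

# Reproducibility: fix the global seed before sampling random tensors
tf.random.set_seed(42)
normal = tf.random.normal((2, 2), mean=0.0, stddev=1.0)
uniform = tf.random.uniform((2, 2), minval=0, maxval=10, dtype=tf.int32)
```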
Sample output snippets:
```text
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [4, 5, 6]], dtype=int32)>

Tensor shape: (3, 2)
Tensor dtype: float32
Tensor rank: 2

Creating a constant tensor:
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 2, 3], dtype=int32)>

Reshaping:
Original shape: (6,)
Reshaped to: (2, 3)

Concatenating tensors:
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]], dtype=int32)>
```
- Mastering the low-level API gives maximum flexibility for building custom ops and experimenting with graph execution.
- Broadcasting rules are crucial for simplifying math operations without explicit reshaping.
- Understanding TensorFlow’s indexing ops avoids runtime issues in compiled graphs (see the sketch after this list).
- These skills directly support more advanced custom layers, ops, and training loops.
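As a concrete case of the indexing point above, `tf.gather` stays traceable inside a compiled graph where Python-side fancy indexing would fail; a minimal sketch (the function name is illustrative, not from the notebook):

```python
import tensorflow as tf

@tf.function  # traced and compiled into a graph
def select_rows(matrix, indices):
    # tf.gather works on symbolic tensors during tracing,
    # unlike NumPy-style fancy indexing with Python lists
    return tf.gather(matrix, indices, axis=0)

m = tf.constant([[1, 2], [3, 4], [5, 6]])
print(select_rows(m, tf.constant([0, 2])))           # rows 0 and 2 -> shape (2, 2)
```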
💡 Some interactive outputs (e.g., plots, widgets) may not display correctly on GitHub. If so, please view this notebook via nbviewer.org for full rendering.
Mehran Asgari
Email: imehranasgari@gmail.com
GitHub: https://github.com/imehranasgari
This project is licensed under the Apache 2.0 License – see the `LICENSE` file for details.