Graph Execution | Basics of TensorFlow
Introduction to TensorFlow

Graph Execution

Function Decorator

A Function Decorator is a tool that 'wraps' around a function to modify its behavior. In TensorFlow, the most commonly used decorator is @tf.function, which converts a Python function into a TensorFlow graph.


Purpose of @tf.function

The primary purpose of using decorators like @tf.function is to optimize computations. When a function is decorated with @tf.function, TensorFlow converts the function into a highly efficient graph that can be executed much faster, particularly for complex operations. This conversion enables TensorFlow to apply optimizations and exploit parallelism, which is crucial for performance in machine learning tasks.


Example

Let’s go through an example to understand better.
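The code for this example is not shown on the page; a minimal sketch consistent with the description might look like the following (the circle-area formula is an assumption, since the original function body is not given):

```python
import tensorflow as tf

# Hypothetical reconstruction: a simple function wrapped by @tf.function.
@tf.function
def compute_area(radius):
    # Area of a circle: pi * r^2
    return 3.14159 * radius ** 2

area = compute_area(tf.constant(2.0))
print(area)  # a scalar tensor close to 12.566
```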

In this code, compute_area() is converted into a TensorFlow graph, making it run faster and more efficiently.


How Graph Execution Works

TensorFlow operates in two modes: Eager Execution and Graph Execution. By default, TensorFlow runs in Eager Execution mode, which means operations are executed as they are defined, providing a flexible and intuitive interface. However, Eager Execution can be less efficient for complex computations and large-scale models.

This is where @tf.function and Graph Execution come into play. When you use the @tf.function decorator on a function, TensorFlow converts that function into a static computation graph of operations.
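The two modes can be contrasted directly: the same Python function can run eagerly as-is, or be compiled into a graph with tf.function (the decorator and the plain function call are equivalent). A small sketch:

```python
import tensorflow as tf

def multiply(a, b):
    return a * b  # eager mode: each operation executes immediately

# Same logic, compiled into a static graph on its first call
graph_multiply = tf.function(multiply)

print(multiply(tf.constant(2.0), tf.constant(3.0)))        # executed eagerly
print(graph_multiply(tf.constant(2.0), tf.constant(3.0)))  # traced once, then run as a graph
```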


Optimization Techniques

  1. Graph Optimization: TensorFlow optimizes the graph by pruning unused nodes, merging duplicate subgraphs, and performing other graph-level optimizations. This results in faster execution and reduced memory usage.
  2. Faster Execution: Graphs execute faster than eager operations because the Python interpreter is not involved in running the graph, which eliminates per-operation interpreter overhead.
  3. Parallelism and Distribution: Graphs enable TensorFlow to easily identify opportunities for parallelism and distribute computations across multiple devices, such as CPUs and GPUs.
  4. Caching and Reuse: When a function decorated with @tf.function is called with the same input signature, TensorFlow reuses the previously created graph, avoiding the cost of recreating it.
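The caching behavior in point 4 is easy to observe: Python side effects such as print only run while the graph is being traced, so a print statement reveals exactly when tracing (or retracing) happens. A small sketch:

```python
import tensorflow as tf

@tf.function
def square(x):
    # This print is a Python side effect: it runs only during tracing,
    # not on subsequent calls that reuse the cached graph.
    print("Tracing for dtype:", x.dtype)
    return x * x

square(tf.constant(2.0))  # traces a graph for float32 (prints once)
square(tf.constant(3.0))  # same input signature: cached graph reused, no print
square(tf.constant(2))    # int32 input: new signature, triggers a retrace
```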

Example with Gradient Tape
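The snippet for this example is missing from the page; based on the description (compute_gradient, y = x^3), it plausibly looks like this:

```python
import tensorflow as tf

@tf.function
def compute_gradient(x):
    with tf.GradientTape() as tape:
        tape.watch(x)   # x is a constant, so we must watch it explicitly
        y = x ** 3
    return tape.gradient(y, x)  # dy/dx = 3 * x^2

print(compute_gradient(tf.constant(2.0)))  # 3 * 2^2 = 12.0
```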

In this example, compute_gradient is a function that calculates the gradient of y = x^3 at a given point x. The @tf.function decorator ensures that the function is executed as a TensorFlow graph.


Example with Conditional Logic
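This example's code is also not shown; a sketch consistent with the description (a gradient that depends on a condition) might be, assuming the function and branch formulas below as illustrative choices:

```python
import tensorflow as tf

@tf.function
def compute_gradient_conditional(x):
    with tf.GradientTape() as tape:
        tape.watch(x)
        # AutoGraph converts this Python `if` on a tensor into a graph conditional
        if x > 0:
            y = x ** 2
        else:
            y = x ** 3
    return tape.gradient(y, x)

print(compute_gradient_conditional(tf.constant(3.0)))   # 2 * x  = 6.0
print(compute_gradient_conditional(tf.constant(-2.0)))  # 3 * x^2 = 12.0
```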

In this example, the function computes different gradients based on a condition. @tf.function not only converts the function into a static computation graph but also handles dynamic control flow such as conditionals and loops, by rewriting them into graph operations via AutoGraph.

Task

In this task, you will compare the execution times of two TensorFlow functions that perform matrix multiplication: one with the @tf.function decorator and one without it.

Steps

  1. Define the matrix_multiply_optimized function, ensuring that it includes the @tf.function decorator.
  2. Complete both functions by calculating the mean of the resulting matrices.
  3. Generate two uniformly distributed random matrices using TensorFlow's random matrix generation functions.
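The steps above can be sketched as follows; the function names and matrix sizes are illustrative assumptions, not the exercise's exact scaffold:

```python
import tensorflow as tf
import timeit

def matrix_multiply(a, b):
    # Plain eager version: mean of the product matrix
    return tf.reduce_mean(tf.matmul(a, b))

@tf.function
def matrix_multiply_optimized(a, b):
    # Graph-compiled version of the same computation
    return tf.reduce_mean(tf.matmul(a, b))

# Two uniformly distributed random matrices
a = tf.random.uniform((500, 500))
b = tf.random.uniform((500, 500))

matrix_multiply_optimized(a, b)  # warm-up call so tracing time isn't measured

print("eager:", timeit.timeit(lambda: matrix_multiply(a, b), number=100))
print("graph:", timeit.timeit(lambda: matrix_multiply_optimized(a, b), number=100))
```

Note the warm-up call: without it, the first timed call would include one-time graph tracing and the comparison would be unfair to the decorated version.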


Section 2. Chapter 2