Basic Operations: Linear Algebra

Linear Algebra Operations

TensorFlow offers a suite of functions dedicated to linear algebra operations, making matrix operations straightforward.

We won't delve into the fundamentals of linear algebra in this lesson. If you're not well-acquainted with the subject, consider taking the Mathematics for Data Analysis and Modeling course to enhance your understanding.

Matrix Multiplication

Here's a quick reminder of how matrix multiplication works.

There are two equivalent approaches for matrix multiplication:

  • The tf.matmul() function;
  • Using the @ operator.

Note

Multiplying a 3x2 matrix by a 2x4 matrix yields a 3x4 matrix.
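
Here is a minimal sketch of both approaches (the matrix values are arbitrary, chosen only for illustration); it also confirms the shape rule from the note above:

```python
import tensorflow as tf

# A 3x2 matrix and a 2x4 matrix (arbitrary example values).
A = tf.constant([[1., 2.],
                 [3., 4.],
                 [5., 6.]])
B = tf.constant([[1., 0., 2., 1.],
                 [0., 1., 1., 2.]])

# Two equivalent ways to multiply matrices.
C1 = tf.matmul(A, B)   # using the function
C2 = A @ B             # using the operator

print(C1.shape)                          # (3, 4), as the note above states
print(tf.reduce_all(C1 == C2).numpy())   # True: both approaches give the same result
```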


Matrix Inversion

You can obtain the inverse of a matrix using the tf.linalg.inv() function. Additionally, let's verify a fundamental property of the inverse matrix.

Note

Multiplying a matrix by its inverse yields the identity matrix, which has ones on its main diagonal and zeros everywhere else. The tf.linalg module offers a wide range of other linear algebra functions; for further details or more advanced operations, refer to its official documentation.
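
A small sketch of this check, using an arbitrary invertible 2x2 matrix:

```python
import tensorflow as tf

# A small invertible matrix (arbitrary example values).
A = tf.constant([[4., 7.],
                 [2., 6.]])

# Inverse of A.
A_inv = tf.linalg.inv(A)

# A @ A_inv should be (approximately) the identity matrix.
identity = A @ A_inv
print(identity.numpy())
# Expect values very close to [[1, 0], [0, 1]] (up to floating-point error).
```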


Transpose

You can obtain a transposed matrix using the tf.transpose() function.
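
A quick illustration with an arbitrary 2x3 matrix:

```python
import tensorflow as tf

# A 2x3 matrix (arbitrary example values).
A = tf.constant([[1, 2, 3],
                 [4, 5, 6]])

# Transposing swaps rows and columns, so the result is 3x2.
A_T = tf.transpose(A)
print(A_T.numpy())
# [[1 4]
#  [2 5]
#  [3 6]]
```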


Dot Product

You can obtain a dot product using the tf.tensordot() function. The axes argument lets you choose along which axes the dot product is computed. For example, for two vectors, axes=1 gives the classic dot product, while axes=0 computes their outer product (a matrix built from all pairwise products of their elements).

Note

If you take two matrices with compatible dimensions (NxM and MxK, where NxM are the dimensions of the first matrix and MxK those of the second) and compute their dot product with axes=1, the result is the same as matrix multiplication.
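
A minimal sketch of both settings of axes, with arbitrary example values:

```python
import tensorflow as tf

a = tf.constant([1., 2., 3.])
b = tf.constant([4., 5., 6.])

# axes=1: classic dot product of two vectors -> a scalar.
print(tf.tensordot(a, b, axes=1).numpy())   # 32.0  (1*4 + 2*5 + 3*6)

# axes=0: outer product -> a 3x3 matrix of all pairwise products.
print(tf.tensordot(a, b, axes=0).numpy())
# [[ 4.  5.  6.]
#  [ 8. 10. 12.]
#  [12. 15. 18.]]

# For matrices with compatible shapes, axes=1 behaves like matrix multiplication.
M = tf.constant([[1., 2.], [3., 4.]])
N = tf.constant([[5., 6.], [7., 8.]])
print(tf.reduce_all(tf.tensordot(M, N, axes=1) == M @ N).numpy())  # True
```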

Task

Background

A system of linear equations can be represented in matrix form using the equation:

AX = B

Where:

  • A is a matrix of coefficients.
  • X is a column matrix of variables.
  • B is a column matrix representing the values on the right side of the equations.

The solution to this system can be found using the formula:

X = A^-1 B

Where A^-1 is the inverse of matrix A.
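
For completeness, the formula follows from one standard step (assuming A is invertible): multiply both sides of the equation on the left by the inverse of A.

```latex
A^{-1}AX = A^{-1}B
\;\Rightarrow\; IX = A^{-1}B
\;\Rightarrow\; X = A^{-1}B
```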

Objective

Given a system of linear equations, use TensorFlow to solve it. You are given the following system of linear equations:

  1. 2x + 3y - z = 1.
  2. 4x + y + 2z = 2.
  3. -x + 2y + 3z = 3.

Follow these steps:

  1. Represent the system of equations in matrix form (separate it into matrices A and B).
  2. Using TensorFlow, find the inverse of matrix A.
  3. Multiply the inverse of matrix A by matrix B to find the solution matrix X, which contains the values of x, y, and z.

Note

Slicing in TensorFlow works similarly to NumPy, so X[:, 0] retrieves all elements from the column at index 0. We will get to slicing later in the course.
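
Here is a sketch of one possible approach, following the steps above (A and B are taken directly from the system of equations):

```python
import tensorflow as tf

# Coefficient matrix A and right-hand-side column matrix B from the system above.
A = tf.constant([[ 2., 3., -1.],
                 [ 4., 1.,  2.],
                 [-1., 2.,  3.]])
B = tf.constant([[1.],
                 [2.],
                 [3.]])

# X = A^-1 B
A_inv = tf.linalg.inv(A)
X = A_inv @ B

# X[:, 0] retrieves the solution values x, y, z as a 1-D tensor.
print(X[:, 0].numpy())
```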
