Summary | TensorFlow Fundamentals
Introduction to TensorFlow

Summary

The key topics covered in this course are summarized below. The overview material is available for download at the bottom of this page.

TensorFlow Installation

Installation

pip install tensorflow

Import

# Import the TensorFlow library with the alias tf
import tensorflow as tf

Tensor Types

Basic Tensor Creation

# Create a 1D tensor
tensor_1D = tf.constant([1, 2, 3])

# Create a 2D tensor
tensor_2D = tf.constant([[1, 2, 3], [4, 5, 6]])

# Create a 3D tensor
tensor_3D = tf.constant([[[1, 2], [3, 4]], [[5, 6],[7, 8]]])

Tensor Properties

  • Rank: indicates the number of dimensions in the tensor. For example, a matrix has a rank of 2. You can get the rank of a tensor with the .ndim attribute:
print(f'Rank of a tensor: {tensor.ndim}')
  • Shape: describes how many values there are along each dimension. A 2x3 matrix has the shape (2, 3). The length of the shape matches the tensor's rank (its number of dimensions). You can get the shape of a tensor with the .shape attribute:
print(f'Shape of a tensor: {tensor.shape}')
  • Types: tensors come in different data types; some common ones are float32, int32, and string. You can get the data type of a tensor with the .dtype attribute:
print(f'Data type of a tensor: {tensor.dtype}')
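
Putting the three properties together, a minimal sketch reusing tensor_2D from the creation example above:

# Inspect the 2D tensor created earlier
print(f'Rank of a tensor: {tensor_2D.ndim}')        # 2
print(f'Shape of a tensor: {tensor_2D.shape}')      # (2, 3)
print(f'Data type of a tensor: {tensor_2D.dtype}')  # <dtype: 'int32'>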

Tensor Axes
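
Each dimension of a tensor is an axis, numbered from 0 starting with the outermost dimension; many operations accept an axis argument to work along a specific dimension. A minimal sketch, reusing tensor_2D from above:

# Axis 0 indexes rows, axis 1 indexes columns of a 2D tensor
first_row = tensor_2D[0]         # [1, 2, 3]
second_column = tensor_2D[:, 1]  # [2, 5]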

Applications of Tensors

  • Tabular data
  • Text sequences
  • Numerical sequences
  • Image processing
  • Video processing

Batches
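
Models typically process several samples at once, stacked along a leading batch axis. A minimal illustrative sketch (not from the original chapter):

# Stack three 2x2 samples into a single batch of shape (3, 2, 2)
sample = tf.ones((2, 2))
batch = tf.stack([sample, sample, sample], axis=0)
print(batch.shape)  # (3, 2, 2)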

Tensor Creation Methods

# Create a 2x2 constant tensor
tensor_const = tf.constant([[1, 2], [3, 4]])

# Create a variable tensor
tensor_var = tf.Variable([[1, 2], [3, 4]])

# Zero tensor of shape (3, 3)
tensor_zeros = tf.zeros((3, 3))

# Ones tensor of shape (2, 2)
tensor_ones = tf.ones((2, 2))

# Tensor of shape (2, 2) filled with 6
tensor_fill = tf.fill((2, 2), 6)

# Generate a sequence of numbers starting from 0, ending at 9
tensor_range = tf.range(10)

# Create 5 equally spaced values between 0 and 10
tensor_linspace = tf.linspace(0, 10, 5)

# Tensor of shape (2, 2) with random values normally distributed 
tensor_random = tf.random.normal((2, 2), mean=4, stddev=0.5)

# Tensor of shape (2, 2) with random values uniformly distributed 
tensor_random = tf.random.uniform((2, 2), minval=-2, maxval=2)

Conversions

  • NumPy to Tensor
import numpy as np

# Create a NumPy array based on a Python list
numpy_array = np.array([[1, 2], [3, 4]])

# Convert a NumPy array to a tensor
tensor_from_np = tf.convert_to_tensor(numpy_array)
  • Pandas to Tensor
import pandas as pd

# Create a DataFrame based on a dictionary
df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})

# Convert a DataFrame to a tensor
tensor_from_df = tf.convert_to_tensor(df.values)
  • Constant tensor to variable tensor
# Create a variable from a tensor
tensor = tf.random.normal((2, 3))
variable_1 = tf.Variable(tensor)

# Create a variable from another tensor-generating function
variable_2 = tf.Variable(tf.zeros((2, 2)))

Data Types

# Creating a tensor of type float16
tensor_float = tf.constant([1.2, 2.3, 3.4], dtype=tf.float16)

# Convert tensor_float from float16 to int32
tensor_int = tf.cast(tensor_float, dtype=tf.int32)
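
Note that casting a float tensor to an integer type truncates the fractional part, so tensor_int above holds [1, 2, 3].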

Arithmetic

  • Addition
c1 = tf.add(a, b)  
c2 = a + b

# Changes the object in place without creating a new one
a.assign_add(b)
  • Subtraction
c1 = tf.subtract(a, b)  
c2 = a - b 

# In-place subtraction
a.assign_sub(b)
  • Element-wise multiplication
c1 = tf.multiply(a, b)  
c2 = a * b
  • Division
c1 = tf.divide(a, b)  
c2 = a / b 
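
The snippets above assume a and b are already defined; the in-place variants (assign_add, assign_sub) additionally require a to be a tf.Variable. A minimal setup sketch:

# Example operands for the arithmetic snippets
a = tf.Variable([1.0, 2.0])
b = tf.constant([3.0, 4.0])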

Broadcasting
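
Broadcasting lets element-wise operations combine tensors of different but compatible shapes by virtually stretching the smaller one. A minimal illustrative sketch:

# A (2, 3) matrix plus a (3,) vector: the vector is broadcast across both rows
matrix = tf.constant([[1, 2, 3], [4, 5, 6]])
row = tf.constant([10, 20, 30])
print(matrix + row)  # values: [[11, 22, 33], [14, 25, 36]]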

Linear Algebra

  • Matrix multiplication
product1 = tf.matmul(matrix1, matrix2)
product2 = matrix1 @ matrix2
  • Matrix inversion
inverse_mat = tf.linalg.inv(matrix)
  • Transposition
transposed = tf.transpose(matrix)
  • Dot product
# Dot product along axes
dot_product_axes1 = tf.tensordot(matrix1, matrix2, axes=1)
dot_product_axes0 = tf.tensordot(matrix1, matrix2, axes=0)
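
For two matrices, axes=1 contracts the last axis of the first operand with the first axis of the second (ordinary matrix multiplication), while axes=0 forms the outer product. A minimal sketch illustrating the resulting shapes (the operands here are illustrative):

matrix1 = tf.ones((2, 3))
matrix2 = tf.ones((3, 4))
print(tf.tensordot(matrix1, matrix2, axes=1).shape)  # (2, 4)
print(tf.tensordot(matrix1, matrix2, axes=0).shape)  # (2, 3, 3, 4)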

Reshaping

# Create a tensor with shape (3, 2)
tensor = tf.constant([[1, 2], [3, 4], [5, 6]])

# Reshape the tensor to shape (2, 3)
reshaped_tensor = tf.reshape(tensor, (2, 3))

Slicing

# Create a tensor
tensor = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Slice tensor to extract sub-tensor from index (0, 1) of size (1, 2)
sliced_tensor = tf.slice(tensor, begin=(0, 1), size=(1, 2))

# Slice tensor to extract sub-tensor from index (1, 0) of size (2, 2)
sliced_tensor = tf.slice(tensor, (1, 0), (2, 2))
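
Standard Python/NumPy-style indexing gives the same result; a minimal sketch equivalent to the second tf.slice call above:

# Same sub-tensor as tf.slice(tensor, (1, 0), (2, 2))
sliced_tensor = tensor[1:3, 0:2]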

Modification with Slicing

# Create a tensor
tensor = tf.Variable([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Change the entire first row 
tensor[0, :].assign([0, 0, 0])

# Modify the second and the third columns 
tensor[:, 1:3].assign(tf.fill((3,2), 1))

Concatenation

# Create two tensors
tensor1 = tf.constant([[1, 2, 3], [4, 5, 6]])
tensor2 = tf.constant([[7, 8, 9]])

# Concatenate tensors vertically (along rows)
concatenated_tensor = tf.concat([tensor1, tensor2], axis=0)

# Concatenate tensors horizontally (along columns); the tensors must have the same number of rows
tensor3 = tf.constant([[1, 2], [3, 4]])
tensor4 = tf.constant([[5], [6]])
concatenated_tensor = tf.concat([tensor3, tensor4], axis=1)

Reduction Operations

# Calculate sum of all elements
total_sum = tf.reduce_sum(tensor)

# Calculate mean of all elements
mean_val = tf.reduce_mean(tensor)

# Determine the maximum value
max_val = tf.reduce_max(tensor)

# Find the minimum value
min_val = tf.reduce_min(tensor)

Gradient Tape

# Define input variables
x = tf.Variable(tf.fill((2, 3), 3.0))
z = tf.Variable(5.0)

# Start recording the operations
with tf.GradientTape() as tape:
    # Define the calculations
    y = tf.reduce_sum(x * x + 2 * z)
    
# Extract the gradient for the specific inputs (x and z)
grad = tape.gradient(y, [x, z])

print(f"The gradient of y with respect to x is:\n{grad[0].numpy()}")
print(f"The gradient of y with respect to z is: {grad[1].numpy()}")

@tf.function

@tf.function
def compute_gradient_conditional(x):
    with tf.GradientTape() as tape:
        # x is a constant, so it must be watched explicitly to record gradients
        tape.watch(x)
        if tf.reduce_sum(x) > 0:
            y = x * x
        else:
            y = x * x * x
    return tape.gradient(y, x)

x = tf.constant([-2.0, 2.0])
grad = compute_gradient_conditional(x)
print(f"The gradient at x = {x.numpy()} is {grad.numpy()}")