Mathematical Operations with Tensors
Element-wise Operations
Element-wise operations are applied to each element in the tensor individually. These operations, such as addition, subtraction, and division, work similarly to how they do in NumPy:
```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])

# Element-wise addition
addition_result = a + b
print(f"Addition: {addition_result}")

# Element-wise subtraction
subtraction_result = a - b
print(f"Subtraction: {subtraction_result}")

# Element-wise multiplication
multiplication_result = a * b
print(f"Multiplication: {multiplication_result}")

# Element-wise division
division_result = a / b
print(f"Division: {division_result}")
```
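Each of these operators also has a functional form and an in-place variant. The sketch below uses the standard `torch.add`/`torch.mul` functions and the underscore-suffixed in-place methods:

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])

# Functional equivalents of the operators above
print(torch.add(a, b))  # same result as a + b
print(torch.mul(a, b))  # same result as a * b

# In-place variants (trailing underscore) modify the tensor directly
a.add_(b)
print(a)  # a is now tensor([5, 7, 9])
```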
Matrix Operations
PyTorch also supports matrix multiplication and dot products, both of which are performed using the `torch.matmul()` function:
```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
y = torch.tensor([[5, 6], [7, 8]])

# Matrix multiplication
z = torch.matmul(x, y)
print(f"Matrix multiplication: {z}")
```
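Since the text above also mentions dot products: when both arguments are 1D tensors, `torch.matmul()` computes their dot product. A minimal sketch:

```python
import torch

v1 = torch.tensor([1.0, 2.0, 3.0])
v2 = torch.tensor([4.0, 5.0, 6.0])

# For 1D tensors, matmul computes the dot product
print(torch.matmul(v1, v2))  # tensor(32.)
print(torch.dot(v1, v2))     # torch.dot is the dedicated 1D dot product function
```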
You can also use the `@` operator for matrix multiplication:
```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
y = torch.tensor([[5, 6], [7, 8]])

# Matrix multiplication using the @ operator
z = x @ y
print(f"Matrix Multiplication with @: {z}")
```
Aggregation Operations
Aggregation operations compute summary statistics from a tensor, such as the sum, mean, maximum, and minimum values, each of which can be calculated with its respective method.
```python
import torch

tensor = torch.tensor([[1, 2, 3], [4, 5, 6]]).float()

# Sum of all elements
print(f"Sum: {tensor.sum()}")

# Mean of all elements
print(f"Mean: {tensor.mean()}")

# Maximum value
print(f"Max: {tensor.max()}")

# Minimum value
print(f"Min: {tensor.min()}")
```
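The `.float()` cast in the example above is not cosmetic: `mean()` requires a floating-point (or complex) dtype, while `sum()`, `max()`, and `min()` also work on integer tensors. A short illustration:

```python
import torch

int_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])

print(int_tensor.sum())           # works on integer tensors
# int_tensor.mean()               # would raise a RuntimeError for an integer dtype
print(int_tensor.float().mean())  # cast to float first, then average
```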
Aggregation methods also have two optional parameters:

- `dim`: specifies the dimension (similar to `axis` in NumPy) along which the operation is applied. By default, if `dim` is not provided, the operation is applied to all elements of the tensor;
- `keepdim`: a boolean flag (`False` by default). If set to `True`, the reduced dimension is retained as a size `1` dimension in the output, preserving the original number of dimensions.
```python
import torch

tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])

# Aggregation operations along specific dimensions
print(f"Sum along rows (dim=1): {tensor.sum(dim=1)}")
print(f"Sum along columns (dim=0): {tensor.sum(dim=0)}")

# Aggregation with keepdim=True
print(f"Sum along rows with keepdim (dim=1): {tensor.sum(dim=1, keepdim=True)}")
print(f"Sum along columns with keepdim (dim=0): {tensor.sum(dim=0, keepdim=True)}")
```
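Note that `max()` and `min()` behave slightly differently when given a `dim` argument: they return both the values and the indices of those values along that dimension:

```python
import torch

tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])

# With a dim argument, max() returns both the values and their indices
values, indices = tensor.max(dim=1)
print(f"Row-wise maxima: {values}")   # tensor([3, 6])
print(f"Their positions: {indices}")  # tensor([2, 2])
```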
Broadcasting
Broadcasting allows operations between tensors of different shapes by automatically expanding dimensions. If you need a refresher on broadcasting, you can find more details here.
```python
import torch

a = torch.tensor([[1, 2, 3]])  # Shape (1, 3)
b = torch.tensor([[4], [5]])   # Shape (2, 1)

# Broadcasting addition
c = a + b
print(f"Broadcasted addition: {c}")
```
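In this example, the shapes (1, 3) and (2, 1) are broadcast to a common shape of (2, 3). You can verify this from the result, or compute the broadcast shape directly with `torch.broadcast_shapes` (available in recent PyTorch versions):

```python
import torch

a = torch.tensor([[1, 2, 3]])  # Shape (1, 3)
b = torch.tensor([[4], [5]])   # Shape (2, 1)

print((a + b).shape)  # torch.Size([2, 3]): both inputs are expanded to (2, 3)

# Compute the broadcast shape without allocating any tensors
print(torch.broadcast_shapes(a.shape, b.shape))  # torch.Size([2, 3])
```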
Useful Mathematical Functions
PyTorch also provides various mathematical functions such as exponentials, logarithms, and trigonometric functions.
```python
import torch

tensor = torch.tensor([1.0, 2.0, 3.0])

# Exponentiation
print(f"Exponent: {tensor.exp()}")

# Logarithm
print(f"Logarithm: {tensor.log()}")

# Sine
print(f"Sine: {tensor.sin()}")
```
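The same functions are also available at the module level, so `torch.exp(tensor)` is equivalent to `tensor.exp()`; a brief example:

```python
import torch

tensor = torch.tensor([1.0, 2.0, 3.0])

# Module-level equivalents of the tensor methods
print(torch.exp(tensor))
print(torch.log(tensor))
print(torch.sin(tensor))
```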