Random Vectors | Additional Statements From The Probability Theory
Advanced Probability Theory
Course content


1. Additional Statements From The Probability Theory
2. The Limit Theorems of Probability Theory
3. Estimation of Population Parameters
4. Testing of Statistical Hypotheses

Random Vectors

Random vectors represent multiple related random variables grouped together. They're commonly used in Data Science to model systems with interconnected random quantities, such as datasets where each vector corresponds to a data point.

The probability distribution of a random vector is described by its joint distribution function, which shows how all variables in the vector are distributed simultaneously.

To create a simple random vector with n dimensions:

  1. Generate n independent random variables, each following its own distribution;
  2. Use these variables as the coordinates of the vector;
  3. Apply the multiplication rule to obtain the joint distribution: f = f1 * f2 * ... * fn (see the sketch after this list).
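
Here is a minimal sketch of these three steps for a three-dimensional vector. The standard normal distribution for each coordinate is an arbitrary choice made for this illustration; any other distributions would work the same way:

```python
import numpy as np
from scipy.stats import norm

# Steps 1-2: generate n independent coordinates (standard normal here,
# an arbitrary choice) and treat them as one realization of the vector
n = 3
vector = norm.rvs(size=n)
print(vector)

# Step 3: by independence, the joint PDF at a point is the product
# of the one-dimensional PDFs: f = f1 * f2 * ... * fn
point = np.array([0.5, -1.0, 2.0])
joint_pdf = np.prod(norm.pdf(point))
print(joint_pdf)
```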

Discrete random vectors

For discrete random vectors, the joint distribution is described by the PMF: the function takes a combination of coordinate values and returns the probability of that combination.

For example, consider tossing two coins and recording the results in a vector. The PMF for this vector will look like:

```python
import numpy as np
from scipy.stats import binom

# Define parameters for the binomial distribution
n = 1    # Number of trials (a single coin toss)
p = 0.5  # Probability of success (heads)

# Generate values for the two dimensions
x = np.arange(0, n + 1)
y = np.arange(0, n + 1)
X, Y = np.meshgrid(x, y)

# Compute the PMF for each pair of values in the grid;
# the coins are independent, so the joint PMF is a product
pmf = binom.pmf(X, n, p) * binom.pmf(Y, n, p)
print(pmf)
```
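
Each coin shows 0 or 1 with probability 0.5, so the printed grid contains 0.25 for every one of the four outcome pairs, and the entries sum to 1, as any PMF must.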

Continuous random vectors

For continuous variables, a multivariate PDF is used:

```python
import numpy as np
import matplotlib.pyplot as plt

# Define mean vector and covariance matrix for the Gaussian distribution
mu = np.array([0, 0])             # Mean vector
cov = np.array([[1, 0], [0, 1]])  # Covariance matrix

# Generate values for the two dimensions
x = np.linspace(-3, 3, 100)
y = np.linspace(-3, 3, 100)
X, Y = np.meshgrid(x, y)
pos = np.dstack((X, Y))

# Compute the PDF for each pair of values in the grid
pdf = (1 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))) * np.exp(
    -0.5 * np.sum(np.dot(pos - mu, np.linalg.inv(cov)) * (pos - mu), axis=2)
)

# Plot the 2D PDF as a surface
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, pdf, cmap='viridis')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('PDF')
ax.set_title('Two-dimensional Gaussian PDF')
plt.show()
```
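
For reference, this density does not have to be computed by hand: scipy.stats.multivariate_normal provides a pdf method that evaluates the same formula. A minimal sketch with the same parameters as above:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0, 0])
cov = np.array([[1, 0], [0, 1]])

x = np.linspace(-3, 3, 100)
y = np.linspace(-3, 3, 100)
X, Y = np.meshgrid(x, y)
pos = np.dstack((X, Y))

# scipy evaluates the two-dimensional Gaussian PDF directly
pdf = multivariate_normal(mean=mu, cov=cov).pdf(pos)
print(pdf.shape)  # (100, 100)
```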

Note

The characteristics of a random vector are also given in vector or matrix form, with entries corresponding to the statistics of the individual coordinates of the original vector. In the example above, the mu vector holds the mean values of the coordinates, and cov is the covariance matrix of the two-dimensional random vector.
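
These characteristics can also be estimated from data. The sketch below (reusing the identity-covariance parameters from the example above) computes the sample mean vector and sample covariance matrix, which approximate mu and cov:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0, 0])
cov = np.array([[1, 0], [0, 1]])

# Draw many samples and estimate the characteristics from them
samples = multivariate_normal.rvs(mean=mu, cov=cov, size=10000)
print(samples.mean(axis=0))           # sample mean vector, close to mu
print(np.cov(samples, rowvar=False))  # sample covariance matrix, close to cov
```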

Vectors with dependent coordinates

There are also vectors whose coordinates depend on each other. In that case, we can no longer define the joint distribution as the product of the coordinate distributions; instead, the joint distribution must be specified using knowledge of the domain area or some additional information about the dependencies between the coordinates.

Let's look at the example of two-dimensional Gaussian samples with dependent coordinates. The nature of the dependencies is determined by the covariance matrix (the off-diagonal elements capture the dependencies between the corresponding coordinates):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

# Define mean vector and covariance matrix for the Gaussian distribution
mu = np.array([1, 3])               # Mean vector
cov = np.array([[4, -4], [-4, 5]])  # Covariance matrix

# Generate samples from the Gaussian distribution
samples = multivariate_normal.rvs(mean=mu, cov=cov, size=1000)

# Plot the sampled data
plt.scatter(samples[:, 0], samples[:, 1])
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Samples from a Two-dimensional Gaussian Distribution with Dependent Coordinates')
plt.show()
```

We can see that these coordinates exhibit a strong negative linear dependency: as the X coordinate increases, the Y coordinate tends to decrease.
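
This dependency can be quantified with the correlation coefficient. A short sketch reusing the same mu and cov; the theoretical value is -4 / sqrt(4 * 5) ≈ -0.89:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([1, 3])
cov = np.array([[4, -4], [-4, 5]])
samples = multivariate_normal.rvs(mean=mu, cov=cov, size=1000)

# Correlation between the coordinates; expected to be close to
# -4 / sqrt(4 * 5) ≈ -0.89, a strong negative linear dependency
print(np.corrcoef(samples[:, 0], samples[:, 1])[0, 1])
```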
