Algorithms and Data Structures Overview
Big-O Notation
O-notation, also known as Big O notation, is a mathematical notation used in computer science to describe the asymptotic behavior of algorithms. It gives an upper bound on an algorithm's time complexity, and is most commonly used to describe the worst-case scenario as a function of input size. In simpler terms, O-notation expresses how an algorithm's runtime or space requirements grow as its input size grows.
Note
The asymptotic behavior of algorithms refers to how their performance changes as the size of their input approaches infinity. In other words, it describes the algorithm's behavior as the input size becomes very large.
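To make this concrete, here is a minimal Python sketch that prints a few common growth functions for increasing n; the input sizes chosen are arbitrary:

```python
import math

# Print how a few common growth functions compare as n increases.
for n in (10, 100, 1000, 10_000):
    print(f"n={n:>6}  log2(n)={math.log2(n):7.1f}  "
          f"n*log2(n)={n * math.log2(n):12.1f}  n^2={n**2:>12}")
```

Even at these modest sizes, n^2 already dwarfs n log n, which is why asymptotic analysis focuses on the fastest-growing term and ignores constant factors.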
Commonly used functions in O-notation
- O(1): Constant time. The algorithm's runtime remains constant regardless of the input size. Example: Accessing an element in an array by index;
- O(log n): Logarithmic time. The runtime grows logarithmically with the size of the input. Example: Binary search in a sorted array;
- O(n): Linear time. The runtime grows linearly with the size of the input. Example: Iterating through all elements in an array;
- O(n log n): Log-linear time. The runtime grows in proportion to n multiplied by the logarithm of n. Example: Sorting algorithms like Merge Sort and Heap Sort (Quick Sort achieves this on average, though its worst case is O(n^2));
- O(n^2): Quadratic time. The runtime grows quadratically with the size of the input. Example: Nested loops iterating over all pairs of elements in an array;
- O(2^n): Exponential time. The runtime grows exponentially with the size of the input. Example: Generating all subsets of a set;
- O(n!): Factorial time. The runtime grows factorially with the size of the input. Example: Generating all permutations of a sequence. (Each of these classes is illustrated in the code sketch after this list.)
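To tie the list together, here is a minimal Python sketch with one toy function per complexity class. The function names (get_first, all_pairs, and so on) are illustrative choices for this sketch, not standard library APIs:

```python
from itertools import permutations

def get_first(items):                # O(1): constant time
    """Index access does not depend on the list's length."""
    return items[0]

def binary_search(sorted_items, target):   # O(log n): logarithmic time
    """Halve the search range on each step of a sorted list."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def total(items):                    # O(n): one pass over the input
    s = 0
    for x in items:
        s += x
    return s

def merge_sort(items):               # O(n log n): split, sort halves, merge
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def all_pairs(items):                # O(n^2): nested loops over all pairs
    return [(a, b) for a in items for b in items]

def subsets(items):                  # O(2^n): each element is in or out
    if not items:
        return [[]]
    rest = subsets(items[1:])
    return rest + [[items[0]] + s for s in rest]

def all_orderings(items):            # O(n!): every permutation of the input
    return list(permutations(items))

if __name__ == "__main__":
    data = [5, 3, 8, 1]
    print(merge_sort(data))                 # [1, 3, 5, 8]
    print(binary_search(sorted(data), 8))   # 3
```

Note how quickly the last two become impractical: subsets of a 30-element set already number over a billion, while permutations of just 13 elements exceed six billion.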