
Big-O Notation

O-notation, also known as Big O notation, is a mathematical notation used in computer science to describe the asymptotic behavior of algorithms. It gives an upper bound on an algorithm's time (or space) complexity as a function of the input size, and it is most often used to characterize the worst case. In simpler terms, O-notation expresses how an algorithm's runtime or space requirements grow as its input grows.

Note

The asymptotic behavior of algorithms refers to how their performance changes as the size of their input approaches infinity. In other words, it describes the algorithm's behavior as the input size becomes very large.
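
For readers who want the standard formal definition: a function f(n) is O(g(n)) if there exist constants c > 0 and n0 such that f(n) <= c * g(n) for all n >= n0. For example, 3n^2 + 5n + 7 is O(n^2), because 3n^2 + 5n + 7 <= 4n^2 for every n >= 7 (take c = 4, n0 = 7); constant factors and lower-order terms stop mattering as n grows.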

Commonly used functions in O-notation

  1. O(1): Constant time. The algorithm's runtime remains constant regardless of the input size. Example: Accessing an element in an array by index;
  2. O(log n): Logarithmic time. The runtime grows logarithmically with the size of the input. Example: Binary search in a sorted array;
  3. O(n): Linear time. The runtime grows linearly with the size of the input. Example: Iterating through all elements in an array;
  4. O(n log n): Log-linear time. The runtime grows in proportion to n multiplied by the logarithm of n. Example: Sorting algorithms like Merge Sort and Quick Sort;
  5. O(n^2): Quadratic time. The runtime grows quadratically with the size of the input. Example: Nested loops iterating through all pairs of elements in an array;
  6. O(2^n): Exponential time. The runtime grows exponentially with the size of the input. Example: Generating all subsets of a set;
  7. O(n!): Factorial time. The runtime grows factorially with the size of the input. Example: Generating all permutations of a sequence. (Each of these growth rates is illustrated in the sketch below.)
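
To make the list concrete, here is a minimal Python sketch with one representative operation per growth rate. The function names are illustrative, not from any library, and the code favors clarity over robustness (no input validation).

```python
def constant_access(arr, i):
    # O(1): a single index lookup, independent of len(arr)
    return arr[i]

def binary_search(arr, target):
    # O(log n): each iteration halves the search range of a sorted list
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

def linear_sum(arr):
    # O(n): touches every element exactly once
    total = 0
    for x in arr:
        total += x
    return total

def all_pairs(arr):
    # O(n^2): nested loops over every ordered pair of elements
    return [(a, b) for a in arr for b in arr]

def all_subsets(arr):
    # O(2^n): a set of n elements has 2^n subsets;
    # each element doubles the number of subsets built so far
    subsets = [[]]
    for x in arr:
        subsets += [s + [x] for s in subsets]
    return subsets

def all_permutations(arr):
    # O(n!): n choices for the first slot, n-1 for the next, and so on
    if len(arr) <= 1:
        return [list(arr)]
    result = []
    for i, x in enumerate(arr):
        for rest in all_permutations(arr[:i] + arr[i + 1:]):
            result.append([x] + rest)
    return result

data = [3, 1, 4, 1, 5]
print(binary_search(sorted(data), 4))    # 3 (index of 4 in [1, 1, 3, 4, 5])
print(len(all_subsets(data)))            # 32 == 2**5
print(len(all_permutations([1, 2, 3])))  # 6 == 3!
```

Note how quickly the last two functions become impractical: all_subsets on 30 elements would produce over a billion lists, which is why exponential- and factorial-time algorithms are usually only feasible for very small inputs.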
