Algorithm’s Efficiency | Big O “In Simple English”

A quick guide to Big O notation

Yann Mulonda
Bits and Pieces

--

Algorithm’s Time Complexity

Let’s start with a short popular fun story:

I’m originally from the D.R. Congo, in Central Africa, where we have very low internet speed. For illustration, opening Gmail might take about 2 to 3 minutes of loading time (sometimes the whole process just fails and times out).

In 2009, a company in South Africa had a similar issue: really slow internet speed. The company had two offices located about 50 miles away from each other, and they decided to set up a fun test to see whether it would be faster to transfer data over their very slow internet or via carrier pigeon.

Pic credit: HackerRank’s Cracking The Coding Interview Tutorial

So they stored 4GB of data on a USB drive, strapped it to a pigeon, and flew it from one office to the other, 50 miles away. Guess what…

The carrier pigeon beat the internet.

The pigeon won by a huge margin; otherwise it wouldn’t be a fun story. In fact, only about 4% of the data had been sent over the internet by the time the pigeon reached the second office, 50 miles away, in 2 hours.


Complexity | Runtime Analysis | Big O Notation

From a programming perspective, Big O notation is used as a sort of measurement unit that helps programmers evaluate or estimate the efficiency of a block of code, a script, or an algorithm: “How much time is it going to take to run? How does the complexity change as the amount of data processed by that piece of code varies?”

It’s hard to determine the exact runtime of a script or an algorithm, since it also depends on other factors such as the speed of the processor and other specifications of the computer on which that script or algorithm is running. So instead of evaluating the runtime directly, Big O notation is used to evaluate how quickly the runtime grows relative to the size of the input data processed by that algorithm.

Time and Space Complexity

When you write code, any piece of code, in any programming language; you deal with two types of complexities:

Time complexity. The time complexity of an algorithm describes the number of steps the algorithm takes, measured with respect to n, the size of the input data to be processed.

Space complexity. The space complexity of an algorithm describes the amount of space the algorithm requires to execute, measured with respect to n, the size of the input data to be processed.

Constant | O(1)

  • Notice that in the scenario above, the pigeon would take the same amount of time to carry 5KB, 10MB, or 2TB of data stored on the USB drive. The pigeon will always take the same amount of time to move any amount of data from office A to office B; it just has to fly 50 miles, considering certain assumptions and simplifications of course.

So in Big O Notation, the time the pigeon takes to move data from office A to office B is referred to as constant time: O(1). Meaning the time is constant with respect to the size of the input.

Here is an example of a piece of JavaScript code that has a runtime of O(1):

O(1) — JavaScript Code Example
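The original code screenshot isn’t reproduced here, so here is a minimal sketch of a constant-time operation (the function name is my own):

```javascript
// Accessing an array element by index takes the same amount
// of time whether the array holds 5 items or 5 million: O(1).
function getFirstElement(arr) {
  return arr[0];
}
```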

Linear | O(n)

  • Transferring data over the internet, by contrast, would take longer and longer as the amount of data to be transferred increases.

So in Big O notation, the time the internet takes to transfer data from office A to office B grows linearly and in direct proportion to the size of the input data set, and is represented as O(n), with n being the amount of data to be transmitted.

In computer programming, Big O favors the worst-case performance scenario. For example, when we look for a matching number in an array of numbers, the match could be found during any iteration of the for loop and the function would return early. Big O notation still assumes the upper limit, where the algorithm performs the maximum number of iterations to find the matching number (as if the number were the last element stored in the array).

Here is an example of a piece of JavaScript code that has a runtime of O(n):

O(n) — JavaScript Code Example
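The original snippet survives only as a caption, so here is a minimal sketch of a linear-time search (names are my own):

```javascript
// Linear search: in the worst case (the target is last, or absent),
// every one of the n elements is inspected, so the runtime is O(n).
function findIndex(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i; // may return early...
  }
  return -1; // ...but Big O assumes the worst case
}
```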

Quadratic | O(n²)

Quadratic, or O(n²), represents an algorithm whose performance is directly proportional to the square of the size of the input data set.

A nested iteration over the data set (such as a nested for loop) is a common example of a script or algorithm that involves quadratic runtime. Deeper nested iterations result in O(n³), O(n⁴), and so on.

O(n²) — JavaScript Code Example
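As a minimal sketch of the nested-loop pattern described above (the function name is my own): for each of the n elements we scan all n elements again, giving n × n comparisons.

```javascript
// Naive duplicate check: two nested loops over the same array
// perform up to n * n comparisons, so the runtime is O(n²).
function hasDuplicate(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      if (i !== j && arr[i] === arr[j]) return true;
    }
  }
  return false;
}
```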

Exponential | O(2^n)

An algorithm is said to have exponential time, or O(2^n), if its runtime doubles with each addition to the input data set. The growth curve of an O(2^n) function is exponential: starting off very shallow, then rising meteorically. One example of an O(2^n) function is a recursive calculation of Fibonacci numbers:

Exponential | O(2^n) — JavaScript Code Example
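A minimal sketch of the naive recursive Fibonacci mentioned above:

```javascript
// Naive recursive Fibonacci: each call spawns two further calls,
// so the call tree roughly doubles with every increment of n,
// giving an O(2^n) runtime.
function fibonacci(n) {
  if (n < 2) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}
```

Memoizing the intermediate results would bring this down to linear time, which is why the naive version is the classic example of accidental exponential growth.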

Logarithmic | O(log n)

Logarithmic time complexity is a bit trickier to grasp at first, so I’m going to use a common example to explain it: the binary search concept.

Binary search is a technique used to search sorted data sets. It works by selecting the middle element of the data set and comparing it against a target value. If the values match, it returns success. If the searched value is higher than the value of the probe element, it takes the upper half of the data set and performs the same operation against it.

Otherwise, if the searched value is lower than the value of the probe element, it performs the operation against the lower half. This process continues to halve the data set with each iteration until the searched value has been found or the data set can no longer be split:

O(log n) — JavaScript Code Example
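A minimal sketch of the binary search just described (names are my own):

```javascript
// Binary search on a sorted array: each comparison halves the
// remaining search space, so at most about log2(n) + 1 steps
// are needed, giving an O(log n) runtime.
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) low = mid + 1; // search the upper half
    else high = mid - 1;                     // search the lower half
  }
  return -1; // the data set can no longer be split: not found
}
```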

Here are some typical orders of growth:

  • O(1) — constant time
  • O(log n) — logarithmic
  • O(n) — linear time
  • O(n²) — quadratic
  • O(2^n) — exponential
  • O(n!) — factorial

If you are an entry-level programmer, try to make a habit of thinking about time and space complexity as you design algorithms and write code. It’ll allow you to optimize your code and head off potential performance issues right away.

“Writing code is good, writing code that works is better, writing optimized working code is best.”

It’s quite evident that a script that takes 2 min to run is better than one that takes 5 min to run. Writing code that works, is easy to understand, and meets its functional requirements is good; any programmer can do that. Writing optimized code (time- and space-efficient) with the right balance of readability, runtime, and maintainability is what the word “engineer” represents in the title “Software Engineer.”

Cheers!!!



Co-Founder & CIO @ITOT | DevOps | Senior Site Reliability Engineer @ICF | “Learning is experience; everything else is just information!”