1. Computational Complexity
It is important to analyze and compare the runtime complexity, or efficiency, of the algorithms we use. As an example, we can intuitively argue that binary search is faster than linear search for finding a target value in an array. Binary search halves the search space in each iteration. On a sorted array of 32 elements, binary search performs, in the worst case, 5 comparisons to determine whether the array contains a target value (the worst case occurs when the target is not in the array; if the target exists, fewer comparisons may be needed). In contrast, linear search on the same array performs, in the worst case, 32 comparisons. For very large arrays (say, 1,000,000 elements), the difference is much more pronounced: roughly 20 comparisons for binary search versus 1,000,000 for linear search.
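To make this concrete, here is a minimal Java sketch of both searches (the method names are ours; binary search assumes the array is sorted in ascending order):

    // Linear search: scans left to right, so up to n comparisons for an
    // array of length n. Returns the index of target, or -1 if not found.
    static int linearSearch(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) {
                return i;
            }
        }
        return -1;
    }

    // Binary search: halves the search space each iteration, so about
    // log2(n) comparisons. The array must be sorted in ascending order.
    static int binarySearch(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  // midpoint, written to avoid overflow
            if (a[mid] == target) {
                return mid;
            } else if (a[mid] < target) {
                lo = mid + 1;              // discard the lower half
            } else {
                hi = mid - 1;              // discard the upper half
            }
        }
        return -1;
    }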
To analyze how fast an algorithm is, we can count (or estimate) how many operations it performs. Let's first analyze linear search. For an array of length 10, linear search takes at most 10 comparisons; for an array of length 10000, it takes at most 10000 comparisons. We can say that linear search takes about n operations for an array of size n. Similarly, finding the minimum (smallest) number in an int array takes roughly n operations for an array of size n. Binary search, on the other hand, has a different runtime complexity. For an array of size 32, binary search takes at most 5 comparisons. Since binary search halves the search space after each comparison, after 5 comparisons the search space shrinks to 32 / 2 / 2 / 2 / 2 / 2 = 1, and the algorithm terminates. We can use the base-2 logarithm to describe the runtime complexity of binary search: for an array of size n, binary search takes roughly log(n) operations.
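We can check the log(n) estimate with a small standalone program that mimics binary search's halving (the class and method names are ours):

    // Repeatedly halve the search space, as binary search does, and count
    // how many halvings it takes to get down to a single element.
    public class HalvingDemo {
        static int worstCaseComparisons(int n) {
            int count = 0;
            while (n > 1) {
                n /= 2;
                count++;
            }
            return count;
        }

        public static void main(String[] args) {
            System.out.println(worstCaseComparisons(32));        // prints 5
            System.out.println(worstCaseComparisons(1_000_000)); // prints 19, matching the ~20 estimate
        }
    }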
In computer science, we commonly describe the runtime complexity of an algorithm using Big-O notation. The exact definition of Big-O is beyond the scope of this textbook; we will simply say that Big-O estimates the number of operations an algorithm takes as proportional to a mathematical function. Linear search and finding the minimum in an array both take, in the worst case, about n operations, so their worst-case runtime complexity is proportional to n. With Big-O notation, we write that as O(n) (which reads "big O of n"). The worst-case runtime complexity of binary search is proportional to log(n), which we write as O(log(n)) (which reads "big O of log n").
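As a concrete O(n) example, here is a sketch of finding the minimum in an int array; it performs exactly n - 1 comparisons for an array of size n, which is proportional to n:

    // Finding the minimum: one comparison per remaining element, so
    // n - 1 comparisons in total, which is O(n).
    static int findMin(int[] a) {
        int min = a[0];
        for (int i = 1; i < a.length; i++) {
            if (a[i] < min) {
                min = a[i];
            }
        }
        return min;
    }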
Let's now analyze the runtime complexity of the sorting algorithms we have seen so far. We assume that each algorithm sorts an int array of size n. Selection sort first finds the smallest element in the array and swaps it into position 0; it then finds the second smallest element in the sub-array from position 1 to n-1 and swaps it into position 1, and so on. In other words, it needs to find the smallest element in the array (and in subsequent sub-arrays) of sizes n, n-1, n-2, ..., 2, 1. As we discussed earlier, finding the smallest element in an array of size x takes about x operations. So the total number of operations required by selection sort is n + (n-1) + (n-2) + ... + 2 + 1. This summation is proportional to n^2 (it equals n*(n+1)/2). Thus, we say that selection sort has a runtime complexity of O(n^2). With insertion sort, we pick an element from the unsorted part of the array, then do a linear search to insert it into the right position in the sorted part of the array. This happens n times, once for each element, and each linear search (plus the shifting it triggers) takes up to n operations, so the runtime complexity of insertion sort is equivalent to that of selection sort: O(n^2).
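The sketches below implement both sorts as described (method names are ours; the insertion sort searches the sorted portion from the front, matching the description above):

    // Selection sort: the inner loop scans the unsorted portion, which
    // shrinks from n elements to 1, so the total work is
    // n + (n-1) + ... + 1, i.e., O(n^2).
    static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[minIndex]) {
                    minIndex = j;      // remember the smallest element so far
                }
            }
            int tmp = a[i];            // swap the minimum into position i
            a[i] = a[minIndex];
            a[minIndex] = tmp;
        }
    }

    // Insertion sort: linear-search the sorted portion a[0..i-1] for the
    // insert position, shift the larger elements right, and insert. The
    // sorted portion grows from 1 element to n - 1, so this is also O(n^2).
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int pos = 0;
            while (pos < i && a[pos] <= key) {
                pos++;                 // linear search for the insert position
            }
            for (int j = i; j > pos; j--) {
                a[j] = a[j - 1];       // shift right to open a slot
            }
            a[pos] = key;
        }
    }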
Big-O notation is usually used to describe the worst-case runtime complexity of an algorithm, but it can describe the average-case or best-case runtime complexity as well. For example, linear search has a worst-case runtime complexity of n operations. The best case, however, is when the target value is at the beginning of the array; then only 1 comparison is needed, so the best-case runtime complexity of linear search is O(1). On average, finding a target that is guaranteed to be in the array takes about n/2 comparisons (since the target is equally likely to be anywhere in the array); since n/2 is also proportional to n, the average-case runtime complexity is also O(n). Bubble sort, as mentioned in an earlier chapter, has a best-case runtime complexity of O(n) on a sorted (or almost sorted) array, while selection sort and the insertion sort described above have no best-case advantage: selection sort always scans the entire unsorted portion, and this insertion sort always scans the sorted portion from the front.
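Here is a sketch of bubble sort with the early-exit check that gives it its O(n) best case (the swapped flag is the usual idiom; details may differ from the version in the earlier chapter):

    // Bubble sort with early exit: if a full pass swaps nothing, the array
    // is already sorted. On a sorted array the first pass makes n - 1
    // comparisons and no swaps, so the sort stops after one pass: O(n).
    static void bubbleSort(int[] a) {
        for (int pass = 0; pass < a.length - 1; pass++) {
            boolean swapped = false;
            for (int i = 0; i < a.length - 1 - pass; i++) {
                if (a[i] > a[i + 1]) {
                    int tmp = a[i];    // swap adjacent out-of-order elements
                    a[i] = a[i + 1];
                    a[i + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) {
                return;                // no swaps means the array is sorted
            }
        }
    }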
Besides analyzing runtime complexity, it is also important to analyze the space complexity of algorithms. Space complexity refers to the amount of memory an algorithm uses, and we can use Big-O to approximate that as well. All of our previous examples using arrays have a space complexity of O(n), since it takes about n units of memory to store an array of size n; beyond the input array itself, they use only a constant amount of extra memory. In later chapters, we will dive into advanced algorithms with more interesting space complexity.
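To illustrate the difference, both sketches below reverse an array (the method names are ours): the first works in place and uses only O(1) memory beyond the input, while the second allocates a second array and therefore uses O(n) additional memory.

    // In-place reversal: a few local variables only, so O(1) extra space.
    static void reverseInPlace(int[] a) {
        for (int i = 0, j = a.length - 1; i < j; i++, j--) {
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    // Reversal into a copy: allocates a new array of size n, so O(n) extra space.
    static int[] reversedCopy(int[] a) {
        int[] result = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            result[i] = a[a.length - 1 - i];
        }
        return result;
    }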