Non-Comparison Sort
It is shown in the sorting overview section that comparison-based sorting has a worst-case lower bound of Ω(n ⋅ log n). However, under certain assumptions about the input, sorting algorithms can be designed to run in linear time. Three such algorithms are discussed in this module:
Counting Sort
Unlike most comparison-based sorts, which make no assumption about the input distribution, counting sort assumes the inputs are non-negative integers in a known range; if the inputs are alphanumeric or drawn from mixed character sets, they can still be mapped to integer keys with the same ordering before sorting is performed.
Fundamental Idea:
Given n inputs ranging from 0 to k, for each input 𝓍 determine the number of entries less than 𝓍 in order to place 𝓍 into the right location (e.g. if 8 entries are less than 𝓍, put 𝓍 into the 9th slot).
Specifically, first use an array of length k + 1 to count the occurrences of each value; then compute prefix sums over that array, so each index i holds the number of entries less than or equal to i; finally, place each entry into the output position indicated by the prefix-sum array.
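The three passes above can be sketched in Python as follows (a minimal sketch; the function name and arguments are illustrative, not from the original text):

```python
def counting_sort(a, k):
    """Stable sort of a list of integers in the range 0..k."""
    count = [0] * (k + 1)
    for x in a:                      # pass 1: tally occurrences of each value
        count[x] += 1
    for i in range(1, k + 1):        # pass 2: prefix sums; count[i] = # of entries <= i
        count[i] += count[i - 1]
    out = [0] * len(a)
    for x in reversed(a):            # pass 3: walk backwards to keep the sort stable
        count[x] -= 1
        out[count[x]] = x
    return out

# Example: counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5) -> [0, 0, 2, 2, 3, 3, 3, 5]
```

Walking the input backwards in the last pass is what makes the sort stable: equal values are emitted in the same relative order in which they appear in the input.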
Algorithm Analysis:
The total work is roughly 2k + 2n elementary operations (two passes over the count array and two over the input); in asymptotic notation, the time complexity is Θ(k + n), and in the common case where k = Θ(n), the total running time is Θ(n).
It is worth noting that counting sort is a stable sort, which makes it a solid foundation for radix sort as a subroutine.
Bucket Sort
Bucket sort assumes the inputs are uniformly distributed over their range in order to achieve an expected running time of about Ο(n).
The fundamental idea is to partition the input range into equal-length segments called buckets; traverse the input, mapping each entry to its bucket; apply insertion sort to the entries in each bucket; then traverse the buckets in order to list all entries sorted.
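The steps above can be sketched in Python as follows (a minimal sketch, assuming the inputs are floats uniformly distributed in [0, 1); the function name is illustrative):

```python
def bucket_sort(a):
    """Sort floats assumed uniformly distributed in [0, 1)."""
    n = len(a)
    buckets = [[] for _ in range(n)]
    for x in a:                      # map each entry to its bucket
        buckets[int(n * x)].append(x)
    out = []
    for b in buckets:
        # insertion sort within the bucket; each bucket is expected to be small
        for j in range(1, len(b)):
            key, i = b[j], j - 1
            while i >= 0 and b[i] > key:
                b[i + 1] = b[i]
                i -= 1
            b[i + 1] = key
        out.extend(b)                # buckets are concatenated in order
    return out
```

Under the uniformity assumption each bucket holds O(1) entries in expectation, so the insertion sorts take expected linear time overall.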
Radix Sort
Like counting sort, radix sort is a non-comparative integer sorting algorithm; it requires a stable sorting subroutine, such as counting sort, to sort the numbers at each digit place.
The fundamental idea is that, starting from the least significant digit (LSD) and moving toward the most significant digit (MSD) (if higher digit places are empty, treat them as zeros), all input numbers are sorted on the digit at each place in turn.
An example: given the input sequence {329, 457, 657, 839, 436, 720, 355}, sort on the ones place, then the tens place, and finally the hundreds place; sorting is then complete.
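The LSD procedure can be sketched in Python with counting sort as the stable per-digit subroutine (a minimal sketch for non-negative integers; the function name and `base` parameter are illustrative):

```python
def radix_sort(a, base=10):
    """LSD radix sort of non-negative integers, one stable counting-sort pass per digit."""
    if not a:
        return a
    exp = 1                          # current digit place: 1, 10, 100, ...
    while max(a) // exp > 0:
        count = [0] * base
        for x in a:                  # tally digits at the current place
            count[(x // exp) % base] += 1
        for i in range(1, base):     # prefix sums
            count[i] += count[i - 1]
        out = [0] * len(a)
        for x in reversed(a):        # stable placement preserves earlier passes
            d = (x // exp) % base
            count[d] -= 1
            out[count[d]] = x
        a = out
        exp *= base
    return a

# Example from the text:
# radix_sort([329, 457, 657, 839, 436, 720, 355]) -> [329, 355, 436, 457, 657, 720, 839]
```

Stability of each pass is essential: when two numbers have equal digits at the current place, their order from the previous (lower-digit) passes is preserved.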
A variant of this algorithm (MSD radix sort) starts sorting from the MSD down to the LSD; however, the basic version (LSD radix sort) is usually preferred, since MSD radix sort is a recursive approach while LSD radix sort is non-recursive.
Note: recursion is needed in MSD radix sort to preserve the ordering established by higher digit places: after sorting on the MSD, each group of entries sharing the same leading digit must be sorted independently on the remaining digits, since a single global pass on a lower digit place would reorder entries across groups (e.g. a global sort on the ones place would put 31 before 23, breaking the order set by the tens place).
Algorithm Analysis:
Radix sort can sort numbers with a small number of digit places at good speed. Suppose the sorting subroutine takes Θ(n + k), where n is the number of inputs and k is the number of possible values each digit can take.
Then, for d passes of the subroutine (d being the number of digits in the largest number), the total time complexity is Θ(d(n + k)); since each pass only deals with a single decimal digit, k is at most 10 (digits 0 through 9).
When d is a small constant, the running time simplifies to Ο(n + k); when d is far larger than k, which is common in large-scale computing problems, the time complexity is Ο(d ⋅ n).