Input Enhancement (Computer Science)


In computer science, input enhancement is the principle that processing a given input to a problem and altering it in a specific way will increase runtime efficiency or space efficiency, or both. The altered input is usually stored and accessed to simplify the problem. By exploiting the structure and properties of the inputs, input enhancement creates various speed-ups in the efficiency of the algorithm.


In Searching

Input enhancement when searching has long been an essential technique in computer science. The main idea behind this principle is that a search runs much faster when time is first taken to create or sort a data structure from the given input before attempting to search for an element in it.


Presorting

Presorting is the technique of sorting an input before attempting to search it. Because the sorting component is added to the runtime of the searching algorithm rather than multiplied with it, the two only compete for the slowest portion of the algorithm. Since the efficiency of an algorithm is measured by its slowest component, the addition of the sorting step is negligible whenever the search itself is less efficient. Unfortunately, presorting is usually the slowest component of the algorithm. By contrast, a searching algorithm with a presort is almost always faster than one without.

The sorting portion of the algorithm processes the input of the problem before the searching portion is even reached. Having the elements of the input sorted makes the search trivial in practice. The simplest sorting algorithms – insertion sort, selection sort, and bubble sort – all have a worst-case runtime of O(n²), while the more advanced sorting algorithms – heapsort and merge sort – have a worst-case runtime of O(n log n); quicksort has a worst case of O(n²) but almost always runs in O(n log n). A search algorithm that incorporates presorting inherits the big-O efficiency of whichever of these sorts it uses.

A simple example of the benefits of presorting can be seen with an algorithm that checks an array for unique elements: given an array of n elements, return true if every element in the array is unique, otherwise return false. The pseudocode is presented below:

algorithm uniqueElementSearch(A[0...n-1])
    for i = 0 to n – 2 do
       for j = i + 1 to n – 1 do
          if A[i] = A[j]
             return false
    return true

Without a presort, in the worst case this algorithm must check every element against every other element, with two possible outcomes: either there is no duplicate in the array, or the only duplicate pair is the last one compared. This results in an O(n²) efficiency.
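As a concrete illustration, the brute-force pseudocode above can be sketched in Python (the function name is illustrative, not from the source):

```python
def unique_element_search(a):
    """Brute-force uniqueness check: compare every pair of elements.

    Worst case O(n^2): either no duplicate exists, or the duplicate
    pair is the last one compared.
    """
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:
                return False
    return True
```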

Now compare this to a similar algorithm that utilizes presorting. This algorithm sorts the input array and then checks each adjacent pair of elements for a duplicate. The pseudocode is presented below:

algorithm presortUniqueElementSearch(A[0...n-1])
    sort(A)
    for i = 0 to n – 2 do
       if A[i] = A[i + 1]
          return false
    return true

As previously stated, the least efficient part of this algorithm is the sorting of the array, which, if an efficient sort is selected, runs in O(n log n). After the array is sorted, it only needs to be traversed once, which runs in O(n). The sort dominates, so the whole algorithm runs in O(n log n).
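A corresponding Python sketch of the presorted version, using the built-in sort as the input enhancement step (again, names are illustrative):

```python
def presort_unique_element_search(a):
    """Presort the input (O(n log n)), then scan adjacent pairs (O(n))."""
    a = sorted(a)  # input enhancement: duplicates become neighbors
    for i in range(len(a) - 1):
        if a[i] == a[i + 1]:
            return False
    return True
```

Using `sorted` rather than `list.sort` leaves the caller's array untouched, at the cost of O(n) extra space for the sorted copy.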

This simple example demonstrates what input enhancement techniques such as presorting can achieve. The algorithm went from quadratic to linearithmic runtime, which results in significant speed-ups for large inputs.

In Trees

Creating data structures to search through data more efficiently is also a form of input enhancement. Placing data into a tree to store and search through inputs is another popular technique. Trees are used throughout computer science, and many different types of trees – binary search trees, AVL trees, red-black trees, and 2-3 trees, to name just a few – have been developed to properly store, access, and manipulate data while maintaining their structure. Trees are a principal data structure for dictionary implementation.

The benefits of putting data in a tree are great, especially if the data is manipulated or searched through repeatedly. Binary search trees are the simplest, yet most common, type of tree for this purpose. Insertion, deletion, and search in such a tree are all worst-case O(n), but most often execute in O(log n). This makes repeated searches even quicker for large inputs. There are many variants of binary search trees that work more efficiently and even self-balance upon insertion and removal of items, such as the AVL tree, which guarantees worst-case O(log n) for all searching, inserting, and deleting.
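As a rough sketch of the idea, here is a minimal, unbalanced binary search tree in Python (names and structure are illustrative; a production implementation would use a self-balancing variant such as an AVL or red-black tree to guarantee O(log n) operations):

```python
class Node:
    """A binary search tree node."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key, preserving the BST ordering invariant. Average O(log n)."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Walk down the tree; average O(log n), worst case O(n)."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None
```

Building the tree once is the input enhancement step; every subsequent lookup then avoids a linear scan of the data.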

Taking the time to put the inputted data into such a structure will yield great speed-ups for repeated searching of elements, as opposed to searching through data that hasn't been enhanced.

String Matching

String matching is a complex issue now that search engines are at the forefront of the internet and the online world. When given a keyword or a string that needs to be searched among millions upon millions of words, it would take an unbelievable amount of time to match this string character by character. Input enhancement allows an input to be altered to make this process much faster.

The brute-force algorithm for this problem performs as follows: when presented with a string of n characters, often called the key or pattern, the string is compared against every position of a longer string of m characters, often called the text. When a character matches, the second character of the key is checked, then the next, and so on, until either the whole string matches or a subsequent character doesn't match, in which case the entire key shifts by a single character. This continues until the key is found or the text is exhausted.

This algorithm is extremely inefficient. There are at most m − n + 1 alignments to check, each requiring up to n character comparisons, making the worst-case efficiency O(mn). In the average case the maximum number of comparisons is never reached and only a few are executed per alignment, resulting in an average time efficiency of O(m + n).
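The brute-force matcher described above might look like this in Python (a sketch; it returns the index of the first match, or -1 if the key is absent):

```python
def brute_force_match(key, text):
    """Try every alignment of key against text, comparing character by
    character. Worst case O(mn) for a key of length n, text of length m."""
    n, m = len(key), len(text)
    for i in range(m - n + 1):  # at most m - n + 1 alignments
        k = 0
        while k < n and text[i + k] == key[k]:
            k += 1
        if k == n:              # every character matched
            return i
    return -1                   # text exhausted
```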

Because of the need for more efficient string matching algorithms, several faster algorithms have been developed, most of them utilizing the idea of input enhancement. The key is preprocessed to gather information about what to look for in the text, and that information is stored so it can be referred back to when necessary. Accessing this information takes constant time and greatly improves the runtime efficiency of the algorithms that use it, most famously the Knuth–Morris–Pratt algorithm and the Boyer–Moore algorithm. These algorithms, for the most part, use the same methods to obtain their efficiency, differing mainly in how the key is preprocessed.

Horspool's Algorithm

As a demonstration of input enhancement in string matching, one should examine a simplified version of the Boyer–Moore algorithm: Horspool's algorithm. The algorithm aligns the key with the start of the text and examines the nth character of the text, the one aligned with the last character of the key. Let's call this character x. There are 4 possible cases of what can happen next.

Case 1: The first possible case is that the character x is not in the key. If this occurs, the entire key can be shifted the length of the key.

Case 2: The second possible case is that the character x does not match the last character of the key, but x does occur in the key. If this occurs, the key is shifted to align its rightmost occurrence of the character x with x in the text.

Case 3: The third possible case is that the character x matches the last character of the key, but the preceding characters don't fully match the key and x doesn't occur again in the first n − 1 characters of the key. If this occurs, the entire key can be shifted the length of the key.

Case 4: The fourth and last possible case is that the character x matches the last character of the key, the preceding characters don't fully match the key, and x does occur again in the first n − 1 characters of the key. If this occurs, the key is shifted to align the rightmost such occurrence of the character x with x in the text.

This may seem like it is no more efficient than the brute-force algorithm, since it potentially checks many characters on every trial. However, this is not the case. Horspool's algorithm utilizes a shift table to store the number of characters the algorithm should shift when it runs into a specific character. The input is precomputed into a table covering every possible character that can be encountered in the text. The shift size is computed in one of two ways: one, if the character is not in the key, then the shift size is n, the length of the key; or two, if the character appears in the first n − 1 characters of the key, then its shift value is the distance of its rightmost such occurrence from the last character of the key. The shift table generator is given the key (K[0...n-1]) and an alphabet of s possible characters that could appear in the string as input, and returns the shift table (T[0...s-1]). Pseudocode for the shift table generator and an example of the shift table for the string 'POTATO' are displayed below:

algorithm shiftTableGenerator(K[0...n-1])
    for i = 0 to s – 1 do
       T[i] = n
    for j = 0 to n – 2 do
       T[K[j]] = n – 1 – j
    return T
Shift Table for 'POTATO'
character x   A  B  C  ...  O  P  ...  T  ...  Z  _
shift value   2  6  6   6   4  5   6   1   6   6  6
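A Python sketch of the shift-table construction, using a dictionary with a default shift of n in place of a full alphabet-sized array (an implementation choice, not from the source):

```python
def shift_table(key):
    """Horspool shift table: a character among the first n - 1 positions of
    the key shifts by its distance from the last character (the rightmost
    occurrence wins, since later assignments overwrite earlier ones). Any
    character absent from the table shifts by n, supplied as the lookup
    default."""
    n = len(key)
    table = {}
    for j in range(n - 1):
        table[key[j]] = n - 1 - j
    return table

# For 'POTATO' this yields P -> 5, O -> 4, T -> 1, A -> 2;
# every other character shifts by table.get(c, len(key)) = 6.
```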

After the shift table is constructed in the input enhancement stage, the algorithm lines up the key with the start of the text and starts executing. The algorithm runs until a substring of the text matching the key is found or the key overlaps the last characters of the text. If the algorithm encounters a pair of characters that do not match, it looks up the current text character's shift value in the table and shifts accordingly. Horspool's algorithm takes the key (K[0...n-1]) and the text (M[0...m-1]) and outputs either the index of the matching substring or the string "Key not found", depending on the result. Pseudocode for Horspool's algorithm is presented below:

algorithm HorspoolsAlgorithm(K[0...n-1], M[0...m-1])
    i = n – 1
    while i ≤ m – 1 do
       k = 0
       while k ≤ n – 1 and K[n – 1 – k] = M[i – k] do
          k = k + 1
       if k = n
          return i – n + 1
       i = i + T[M[i]]
    return “Key not found”
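Putting the two stages together, here is a Python sketch of Horspool's algorithm that mirrors the pseudocode (dictionary shift table; returns the match index or the string "Key not found"):

```python
def horspools_algorithm(key, text):
    """Horspool's algorithm: precompute the shift table (input enhancement),
    then scan the text, comparing the key right to left at each alignment."""
    n, m = len(key), len(text)
    table = {key[j]: n - 1 - j for j in range(n - 1)}
    i = n - 1                       # text index aligned with the key's last char
    while i <= m - 1:
        k = 0
        while k <= n - 1 and key[n - 1 - k] == text[i - k]:
            k += 1
        if k == n:                  # full match, right to left
            return i - n + 1
        i += table.get(text[i], n)  # shift by the table value, or n by default
    return "Key not found"
```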

Although it may not be evident, the worst-case runtime efficiency of this algorithm is still O(mn). Fortunately, on texts that are random, the running time is linear in the length of the text, and in the best case only about m/n character comparisons are made. This places Horspool's algorithm, which utilizes input enhancement, in a much faster class than the brute-force algorithm for this problem.

Related Concepts

Input enhancement is often used interchangeably with precomputation and preprocessing. Although they are related, there are several important differences that must be noted.

  • Precomputation and input enhancement can sometimes be used synonymously. More specifically, precomputation is the calculation of values from a given input before anything else is done with it. Oftentimes a table is generated to be looked up during the actual execution of the algorithm. Input enhancement that calculates values and assigns them to elements of the input can be classified as precomputation, but the similarities stop there. There are forms of input enhancement that do not utilize precomputation, and the terms should not be used interchangeably.
  • When speaking about altering inputs, preprocessing is often misused. In computer science, a preprocessor and preprocessing are entirely different. When preprocessing is used in this context, the usual intention is to convey the concept of input enhancement, not that of utilizing a preprocessor. A preprocessor is a program that takes an input and processes it into an output to be used by another program entirely. This sounds like input enhancement, but the term preprocessor applies to a generic program that transforms source input into a format that a compiler can read and then compile.

