Introduction to Algorithms
- An algorithm is a procedure for solving a problem in a finite number of steps.
- It is a sequence of computational steps that transform the input into the output.
- It is a sequence of steps performed on the data, using efficient data structures, to solve a given problem.
- An algorithm is the blueprint of the solution to a particular problem.
Characteristics of an Algorithm
Not all procedures can be called an algorithm. An algorithm should have the following characteristics:
- Definiteness: Each step of an algorithm must be clear and unambiguous. Every step, together with its inputs and outputs, must admit only one interpretation.
- Well-defined Input: An algorithm should have 0 or more well-defined inputs.
- Well-defined Output: An algorithm should have 1 or more well-defined outputs, which should match the desired output.
- Finiteness: An algorithm must have a finite number of steps.
- Feasibility: An algorithm must be simple, generic, and practical, so that it can be executed with the available resources.
- Language independent: An algorithm should have step-by-step directions, which should be independent of any programming code.
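These characteristics can be illustrated with a small example. Below is a minimal Python sketch (the function name and comments are illustrative, not from the source) of Euclid's algorithm for the greatest common divisor, annotated with the characteristics it satisfies:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: two well-defined inputs, one well-defined
    output, and every step is definite (only one possible action)."""
    while b != 0:          # finiteness: b strictly decreases each pass
        a, b = b, a % b    # definiteness: a single, unambiguous step
    return a

print(gcd(48, 18))  # 6
```

The same steps could be written in any language, which is what "language independent" means: the procedure, not the code, is the algorithm.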
Advantages of Algorithm
- It is easy to understand.
- Algorithm is a step-wise representation of a solution to a given problem.
- In an algorithm, the problem is broken into smaller pieces or steps, making it easier for the programmer to convert it into an actual program.
Disadvantages of Algorithm
- Writing an algorithm for a large problem is time-consuming.
- Branching and looping statements are difficult to express in an algorithm.
Algorithm Complexity
Suppose X is an algorithm and n is the size of the input data. The time and space used by X are the two main factors that decide its efficiency.
- Time Factor: Time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.
- Space Factor: Space is measured by counting the maximum memory space required by the algorithm.
The complexity of an algorithm f(n) gives the running time and/or the storage space required by the algorithm in terms of n as the size of input data.
Space Complexity
- Space complexity of an algorithm represents the amount of memory space required by an algorithm in its life cycle.
- For an algorithm, the space is required for the following purposes:
- To store program instructions.
- To store constant values.
- To store variable values.
- To track the function calls, jumping statements etc.
- Auxiliary space: The extra space used by the algorithm, excluding the space occupied by the input, is known as auxiliary space.
- The space complexity considers both of these: the auxiliary space and the space used by the input.
Space Complexity = Auxiliary space + Input space
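As an illustration of the distinction (a hypothetical Python sketch, not from the source), two ways of reversing a list take the same input space but differ in auxiliary space:

```python
def reverse_in_place(arr):
    # Auxiliary space: O(1) -- only two index variables,
    # regardless of the input size.
    i, j = 0, len(arr) - 1
    while i < j:
        arr[i], arr[j] = arr[j], arr[i]
        i, j = i + 1, j - 1
    return arr

def reverse_copy(arr):
    # Auxiliary space: O(n) -- builds a second list
    # the same size as the input.
    return arr[::-1]
```

Both have O(n) input space, so their total space complexities are O(n) + O(1) and O(n) + O(n) respectively.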
Time Complexity
- Time complexity measures the efficiency of an algorithm in terms of its running time.
- It tells us how much time is taken by an algorithm to process a given input.
- Time requirements can be defined as a numerical function T(n), where T(n) can be measured as the number of steps, provided each step consumes constant time.
- To calculate the order (time complexity), only the highest-order term in n is kept; the lower-order terms are ignored.
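A hypothetical Python sketch of this idea (the function is illustrative, not from the source): the function below performs roughly n² inner steps plus n outer steps, and only the dominant n² term determines its order:

```python
def pair_sums(arr):
    """Sums every ordered pair of elements.
    T(n) = n^2 inner steps + n outer steps; keeping only the
    dominant term, the order is O(n^2)."""
    count = 0
    total = 0
    for x in arr:        # n iterations
        for y in arr:    # n iterations each -> n^2 inner steps
            total += x + y
            count += 1
    return total, count

_, steps = pair_sums([1, 2, 3, 4])
print(steps)  # 16 inner steps for n = 4, i.e. n^2
```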
Asymptotic Notations
- Asymptotic notations give us an idea of how good a given algorithm is compared to some other algorithm.
- Asymptotic notations are used to represent time complexity of algorithms mathematically for asymptotic analysis.
- We can express the number of statements executed in the programs as a function ƒ(n) for n input data elements.
- Primarily there are three types of widely used asymptotic notations:
- Big Oh notation (O)
- Big Omega notation (Ω)
- Big Theta notation (θ)
Big Oh notation (O)
- A function ƒ(n) is said to be O(g(n)) if and only if there exist a positive constant c and a constant n0 such that 0 ≤ ƒ(n) ≤ c·g(n), ∀ n ≥ n0.
- This gives an upper bound for the function ƒ(n) to within a constant factor.
- It indicates the maximum time required by an algorithm for all input values.
- It represents the worst case of an algorithm's time complexity.
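The definition can be checked numerically. In this small Python sketch (the functions and constants are illustrative assumptions), for ƒ(n) = 3n + 5 and g(n) = n, the witnesses c = 4 and n0 = 5 show that ƒ(n) is O(n):

```python
# Big-O definition check: 0 <= f(n) <= c*g(n) for all n >= n0.
def f(n): return 3 * n + 5
def g(n): return n

c, n0 = 4, 5
assert all(0 <= f(n) <= c * g(n) for n in range(n0, 1000))
```

At n = n0 = 5 the bound holds with equality (20 = 20), and the gap only grows from there.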
Big Omega Notation (Ω)
- A function ƒ(n) is said to be Ω(g(n)) if and only if there exist a positive constant c and a constant n0 such that 0 ≤ c·g(n) ≤ ƒ(n), ∀ n ≥ n0.
- This gives a lower bound for the function ƒ(n) to within a constant factor.
- It indicates the minimum time required by an algorithm for all input values.
- It represents the best case of an algorithm's time complexity.
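The same kind of numeric check works for Ω (again, an illustrative Python sketch): for ƒ(n) = 3n + 5 and g(n) = n, the witnesses c = 3 and n0 = 1 show that ƒ(n) is Ω(n):

```python
# Big-Omega definition check: 0 <= c*g(n) <= f(n) for all n >= n0.
def f(n): return 3 * n + 5
def g(n): return n

c, n0 = 3, 1
assert all(0 <= c * g(n) <= f(n) for n in range(n0, 1000))
```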
Big Theta Notation (θ)
- A function ƒ(n) is said to be θ(g(n)) if and only if there exist positive constants c1, c2 and a constant n0 such that 0 ≤ c1·g(n) ≤ ƒ(n) ≤ c2·g(n), ∀ n ≥ n0.
- This bounds the function ƒ(n) from above and below to within constant factors.
- It gives a tight bound on an algorithm's running time for all sufficiently large input values.
- It is commonly used to describe the average case of an algorithm's time complexity.
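Since θ requires both bounds at once, two constants are needed. In this illustrative Python sketch, for ƒ(n) = 2n² + 3n and g(n) = n², the witnesses c1 = 2, c2 = 3, n0 = 3 show that ƒ(n) is θ(n²):

```python
# Big-Theta definition check:
# c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.
def f(n): return 2 * n * n + 3 * n
def g(n): return n * n

c1, c2, n0 = 2, 3, 3
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 1000))
```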
Analysis of Linear Search Algorithm
- Best case: If the element to find is the first element of the array, we make only one comparison, which is constant for any array size.
Best case complexity = O(1)
- Worst case: If the element to find is the last element of the array, we have to compare all the elements of the array, which grows with the size of the array.
Worst-case complexity = O(n)
- Average case: For the average case, we sum the running times of all possible cases and divide by the total number of cases: (1 + 2 + ... + n) / n = (n + 1) / 2, which is O(n).
Average case complexity = O(n)
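The analysis above corresponds to the following minimal Python implementation of linear search (an illustrative sketch, not taken from the source):

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if absent.
    Comparisons made: 1 in the best case (first element),
    n in the worst case (last element or not present)."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

print(linear_search([4, 2, 7, 1], 7))  # 2  (found after 3 comparisons)
print(linear_search([4, 2, 7, 1], 9))  # -1 (worst case: n comparisons)
```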
Analysis of Binary Search Algorithm
- Best case: If the element to find is the middlemost element of the array, we make only one comparison, which is constant for any array size.
Best case complexity = O(1)
- Worst case: We keep halving the search range until a single element remains, or until the element is found to be absent. The range shrinks as n, n/2, n/4, ..., 1, so at most log2(n) halving steps (comparisons) are made.
Worst-case complexity = O(log2n)
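A minimal Python sketch of binary search matching this analysis (illustrative, not from the source; the input array must be sorted):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent.
    Each iteration halves the search range, so at most about
    log2(n) comparisons are made."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # discard the left half
        else:
            hi = mid - 1   # discard the right half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```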