Lainbo's Blog
Six Algorithmic Paradigms

Recursive Algorithm#

Algorithm Strategy#

A recursive algorithm is one that directly or indirectly calls itself through functions or methods.

The essence of a recursive algorithm is to decompose a problem into smaller subproblems of the same type, and then recursively call methods to represent the solution to the problem. Recursive algorithms are very effective for solving a wide range of problems, as they can make the algorithm concise and easy to understand.

Advantages and Disadvantages:

  • Advantages: Simple and easy to implement
  • Disadvantages: Recursive algorithms have lower efficiency compared to common algorithms like ordinary loops; additionally, during the recursive call process, the system allocates a stack to store return points, local variables, etc., for each level, which can lead to stack overflow if the recursion is too deep.

Applicable Scenarios#

Recursive algorithms are generally used to solve three types of problems:

  • The definition of the data is recursively defined. (Fibonacci sequence)
  • The solution to the problem is implemented using a recursive algorithm. (Backtracking)
  • The structural form of the data is recursively defined. (Tree traversal, graph search)

Recursive Problem-Solving Strategy:

  • Step 1: Clarify the input and output of your function, without worrying about the code inside the function at first; instead, understand what the input is, what the output is, what the function does, and what task it needs to accomplish.
  • Step 2: Identify the base case for recursion; we need to determine when the recursion should end, and then directly return the result.
  • Step 3: Clarify the recursive relationship, how to combine various recursive calls to solve the current problem.
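The three steps above can be sketched with the Fibonacci sequence (one of the classic problems below); `fib` is a hypothetical helper name used for illustration:

```javascript
// Step 1: input is an index n, output is the nth Fibonacci number
function fib(n) {
    // Step 2: base case — the first two numbers are known directly
    if (n <= 1) return n
    // Step 3: recursive relation — each number is the sum of the previous two
    return fib(n - 1) + fib(n - 2)
}

fib(10) // 55
```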

Classic Problems Solved Using Recursive Algorithms#

  • Fibonacci sequence
  • Tower of Hanoi
  • Tree traversal and related operations

Example of a DOM Tree#

Below is an example using the DOM tree to implement a document.getElementById function.

Since the DOM is a tree, and the definition of a tree itself is recursive, handling trees using recursion is very simple and natural.

Step 1: Clarify the input and output of your function

Recursively traverse down from the root node of the DOM tree, checking if the current node's id is the one we are looking for, id='d-cal'.

Input: DOM tree root node document, the id we are looking for id='d-cal'

Output: Return the child node that satisfies id='d-cal'

function getElementById(node, id){}

Step 2: Identify the base case for recursion

Starting from the document, recursively search for all child nodes, checking layer by layer:

  • If the current node's id matches the search criteria, return the current node.
  • If we reach a leaf node and still haven't found it, return null.
function getElementById(node, id){
    // The current node does not exist, we have reached a leaf node and still haven't found it, return null
    if(!node) return null
    // The current node's id matches the search criteria, return the current node
    if(node.id === id) return node
}

Step 3: Clarify the recursive relationship

If the current node's id does not match the search criteria, recursively search each of its child nodes.

function getElementById(node, id){
    // The current node does not exist, we have reached a leaf node and still haven't found it, return null
    if(!node) return null
    // The current node's id matches the search criteria, return the current node
    if(node.id === id) return node
    // The current node's id does not match the search criteria, continue searching its child nodes
    for(var i = 0; i < node.childNodes.length; i++){
        // Recursively search each of its child nodes
        var found = getElementById(node.childNodes[i], id);
        if(found) return found;
    }
    return null;
}

Thus, we have implemented a document.getElementById function:

function getElementById(node, id){
    if(!node) return null;
    if(node.id === id) return node;
    for(var i = 0; i < node.childNodes.length; i++){
        var found = getElementById(node.childNodes[i], id);
        if(found) return found;
    }
    return null;
}
getElementById(document, "d-cal");

The advantage of using recursion is that the code is simple and easy to understand, while the disadvantage is that its efficiency is not as good as non-recursive implementations. The Chrome browser uses a non-recursive implementation for DOM searching. How can we implement it non-recursively? Here’s the code:

function getElementById(node, id){
    // Traverse all Nodes
    while(node){
        if(node.id === id) return node;
        node = nextElement(node);
    }
    return null;
}

We still traverse all DOM nodes sequentially, but this time we use a while loop, and the function nextElement is responsible for finding the next node. So the key is how to implement the non-recursive node search functionality in nextElement:

// Depth-first traversal
function nextElement(node){
    // First check if there are child nodes
    if(node.children.length) {
        // If so, return the first child node
        return node.children[0];
    }
    // Then check if there are sibling nodes
    if(node.nextElementSibling){
        // If so, return its next sibling node
        return node.nextElementSibling;
    }
    // Otherwise, go up to return the next sibling element of its parent node, equivalent to incrementing i in the for loop of the above recursive implementation
    while(node.parentNode){
        if(node.parentNode.nextElementSibling) {
            return node.parentNode.nextElementSibling;
        }
        node = node.parentNode;
    }
    return null;
}

Running this code in the console will also yield correct results. Whether recursive or non-recursive, they both perform depth-first traversal. In fact, the getElementById function in browsers uses a hash map to store and directly map ids to DOM nodes, while getElementsByClassName uses such a non-recursive search.

Divide and Conquer Algorithm#

Algorithm Strategy#

In computer science, the divide and conquer algorithm is a very important algorithm; quicksort, mergesort, etc., are all based on the divide and conquer strategy, so it is recommended to understand and master it.

Divide and conquer, as the name suggests, means to divide and then conquer: break a complex problem into two or more similar subproblems, keep splitting those subproblems until the smallest ones can be solved easily, and then merge the subproblem solutions to obtain the solution to the original problem.

Applicable Scenarios#

When encountering problems that meet the following conditions, you can try to solve them using the divide and conquer strategy:

  • The original problem can be divided into multiple similar subproblems.
  • The subproblems can be solved very simply.
  • The solution to the original problem is the combination of the solutions to the subproblems.
  • The subproblems are independent of one another, i.e. they do not share common sub-subproblems.

Divide and Conquer Problem-Solving Strategy:

  • Step 1: Decompose the original problem into several smaller, independent subproblems that are of the same form as the original problem.
  • Step 2: Solve each of the subproblems.
  • Step 3: Merge the solutions of the subproblems into the solution of the original problem.
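Mergesort, one of the classic problems listed below, maps directly onto these three steps; here is a minimal sketch:

```javascript
function mergeSort(arr) {
    // Base case: an array of 0 or 1 elements is already sorted
    if (arr.length <= 1) return arr
    // Step 1: decompose the array into two halves
    var mid = Math.floor(arr.length / 2)
    // Step 2: solve each subproblem recursively
    var left = mergeSort(arr.slice(0, mid))
    var right = mergeSort(arr.slice(mid))
    // Step 3: merge the two sorted halves into one sorted array
    var result = []
    while (left.length && right.length) {
        result.push(left[0] <= right[0] ? left.shift() : right.shift())
    }
    return result.concat(left, right)
}

mergeSort([5, 2, 4, 1, 3]) // [1, 2, 3, 4, 5]
```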

Classic Problems Solved Using Divide and Conquer#

  • Binary search
  • Mergesort
  • Quicksort
  • Tower of Hanoi
  • React time slicing

Binary Search#

Also known as the half-interval search algorithm, it is a simple and easy-to-understand fast search algorithm. For example, if I randomly write a number between 0 and 100 and ask you to guess what it is, each time you guess, I will tell you whether your guess is too high or too low, until you guess correctly.

Step 1: Decompose

Each time you guess, you split the previous result into a larger group and a smaller group, with both groups being independent.

  • Choose the middle number in the array.
function binarySearch(items, item) {
    // low, mid, high divide the array into two groups
    var low = 0,
        high = items.length - 1,
        mid = Math.floor((low + high) / 2),
        elem = items[mid]
    // ...
}

Step 2: Solve the Subproblem

Compare the search number with the middle number.

  • If it is lower than the middle number, search in the left subarray;
  • If it is higher than the middle number, search in the right subarray;
  • If equal, return success.
while(low <= high) {
    // Recompute the middle element on each iteration
    mid = Math.floor((low + high) / 2)
    elem = items[mid]
    if(elem < item) { // The search number is higher than the middle number
        low = mid + 1
    } else if(elem > item) { // The search number is lower than the middle number
        high = mid - 1
    } else { // Equal
        return mid
    }
}

Step 3: Merge

function binarySearch(items, item) {
    var low = 0,
        high = items.length - 1,
        mid, elem
    while(low <= high) {
        mid = Math.floor((low + high) / 2)
        elem = items[mid]
        if(elem < item) {
            low = mid + 1
        } else if(elem > item) {
            high = mid - 1
        } else {
            return mid
        }
    }
    return -1
}

Finally, note that binary search can only be applied to a sorted array; if the array is unsorted, we must sort it first.

function binarySearch(items, item) {
    // Sort the array first (any sorting algorithm works; here we use the built-in numeric sort)
    items.sort(function(a, b) { return a - b })
    var low = 0,
        high = items.length - 1,
        mid, elem
    while(low <= high) {
        mid = Math.floor((low + high) / 2)
        elem = items[mid]
        if(elem < item) {
            low = mid + 1
        } else if(elem > item) {
            high = mid - 1
        } else {
            return mid
        }
    }
    return -1
}

// Test
var arr = [2, 3, 1, 4]
binarySearch(arr, 3)
// 2

binarySearch(arr, 5)
// -1

Greedy Algorithm#

Algorithm Strategy#

The greedy algorithm, as the name suggests, always makes the current optimal choice, hoping to achieve the overall optimal solution through local optimal choices.

In a sense, the greedy algorithm is quite greedy and shortsighted; it does not consider the overall situation and only focuses on the current maximum benefit. Therefore, the choices it makes are only locally optimal in some sense. However, the greedy algorithm can still obtain optimal or relatively optimal solutions for many problems, so its existence is still meaningful.

Applicable Scenarios#

In daily life, we often use greedy algorithms, for example:

From 100 bills of varying denominations, if we draw 10 bills, how can we obtain the maximum value?

We only need to choose the largest denomination from the remaining bills each time, and in the end, we will definitely get the optimal solution. This is the use of the greedy algorithm, and we ultimately achieve the overall optimal solution.

However, we still need to clarify that hoping to achieve the overall optimal solution through local optimal choices is merely an expectation, and the final result may not necessarily be the overall optimal solution.
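A concrete illustration of this caveat (the denominations here are hypothetical, chosen to make greedy fail): making change for 6 with coins of 1, 3, and 4, greedy picks 4 + 1 + 1 (three coins), while the true optimum is 3 + 3 (two coins):

```javascript
function greedyChange(coins, amount) {
    // Always take the largest coin that still fits — the locally optimal choice
    var sorted = coins.slice().sort(function(a, b) { return b - a })
    var used = []
    for (var i = 0; i < sorted.length; i++) {
        while (amount >= sorted[i]) {
            used.push(sorted[i])
            amount -= sorted[i]
        }
    }
    return used
}

greedyChange([1, 3, 4], 6) // [4, 1, 1] — three coins, but 3 + 3 needs only two
```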

So when can we try to use the greedy algorithm?

When the following conditions are met, it can be used:

  • The original problem is too complex.
  • The mathematical model for finding the global optimal solution is difficult to establish or the computational load is too large.
  • There is no great necessity to require a global optimal solution; a "comparatively optimal" solution is sufficient.

If using a greedy algorithm to find the optimal solution, you can follow these steps:

  • First, we need to clarify what the optimal solution (expectation) is.
  • Then, break the problem into multiple steps; each step must satisfy:
      • Feasibility: each step meets the constraints of the problem.
      • Local optimality: each step makes a locally optimal choice.
      • Irrevocability: once a choice is made, it cannot be undone at any later point.
  • Finally, the combination of the locally optimal choices of all steps forms the overall solution.
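The bill-drawing example above follows these steps exactly; a minimal sketch (the function name `maxValue` and the sample bills are illustrative assumptions):

```javascript
function maxValue(bills, count) {
    // Local optimal choice: always take the largest remaining bill
    var sorted = bills.slice().sort(function(a, b) { return b - a })
    var total = 0
    for (var i = 0; i < count && i < sorted.length; i++) {
        total += sorted[i] // irrevocable: a taken bill is never put back
    }
    return total
}

maxValue([100, 5, 50, 20, 10, 1], 3) // 170
```

Because every bill taken is strictly the best available and bills do not interact, the local choices here do add up to the global optimum.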

Classic Problems Solved Using Greedy Algorithms#

Classic problems solved using the greedy algorithm include:

  • Minimum spanning tree algorithm
  • Dijkstra's algorithm for single-source shortest paths
  • Huffman coding
  • Knapsack problem
  • Activity selection problem, etc.

Backtracking Algorithm#

Algorithm Strategy#

The backtracking algorithm is a search method that makes a choice at each step, and once it finds that this choice cannot yield the desired result, it backtracks and makes a new choice. Depth-first search utilizes the idea of the backtracking algorithm.

Applicable Scenarios#

The idea behind backtracking is simple: keep trying until you find a solution. This makes it well suited to broad search problems, that is, selecting from a set of candidate solutions one that meets the requirements.
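Permutations, one of the classic cases listed below, shows the try-then-backtrack pattern in its purest form; a minimal sketch:

```javascript
function permute(nums) {
    var result = []
    function backtrack(path, used) {
        // A complete permutation has been built — record it
        if (path.length === nums.length) {
            result.push(path.slice())
            return
        }
        for (var i = 0; i < nums.length; i++) {
            if (used[i]) continue
            // Make a choice
            path.push(nums[i])
            used[i] = true
            backtrack(path, used)
            // Backtrack: undo the choice and try the next one
            path.pop()
            used[i] = false
        }
    }
    backtrack([], [])
    return result
}

permute([1, 2, 3]).length // 6
```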

Classic Cases Using Backtracking Algorithm#

  • Depth-first search
  • 0-1 knapsack problem
  • Regular expression matching
  • Eight queens
  • Sudoku
  • Permutations

Here, we take regular expression matching as an example.

Regular Expression Matching#

var string = "abbc"
var regex = /ab{1,3}c/
console.log(string.match(regex))
// ["abbc", index: 0, input: "abbc", groups: undefined]

Its matching process:

(figure: the step-by-step matching process of /ab{1,3}c/ against "abbc")

In step 5 the match fails: b{1,3} has already matched two b's and tries to match a third, but finds a c instead. The engine then backtracks one step, letting b{1,3} settle for the bb it has already matched, and tries c again; this time c matches and the match ends successfully.
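You can watch backtracking go the other way with a lazy quantifier: b{1,3}? starts by matching a single b, fails to find c, and backtracks by taking one more b before retrying c:

```javascript
var str = "abbc"
// Lazy quantifier: b{1,3}? prefers the fewest repetitions first,
// then backtracks to longer matches only when the rest of the pattern fails
var lazyRegex = /ab{1,3}?c/
console.log(str.match(lazyRegex)[0]) // "abbc"
```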

Dynamic Programming#

Algorithm Strategy#

Dynamic programming is also a strategy for breaking down complex problems into smaller problems for solving. Unlike divide and conquer algorithms, which require subproblems to be independent, dynamic programming involves interrelated subproblems.

Thus, dynamic programming is suitable for situations where subproblems overlap, meaning different subproblems have common sub-subproblems. In such cases, the divide and conquer strategy would perform a lot of unnecessary work, repeatedly solving those common sub-subproblems, while dynamic programming solves each sub-subproblem once and stores the results in a table. If the same problem arises again, it retrieves the result from the table, avoiding the need to solve each sub-subproblem repeatedly and thus preventing a lot of unnecessary operations.
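The "solve each sub-subproblem once and store the result in a table" idea is memoization; a minimal sketch using the Fibonacci sequence, where fib(n-1) and fib(n-2) share the common sub-subproblem fib(n-2) (the name `fibMemo` is illustrative):

```javascript
function fibMemo(n, memo) {
    memo = memo || {}
    // If this subproblem was already solved, read the result from the table
    if (memo[n] !== undefined) return memo[n]
    if (n <= 1) return n
    // Solve the subproblem once and store it for later lookups
    memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo)
    return memo[n]
}

fibMemo(50) // 12586269025 — plain recursion would need billions of calls
```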

Applicable Scenarios#

Dynamic programming is suitable for solving optimal solution problems, such as selecting multiple coins of varying denominations from 100 coins to make exactly 10 yuan, seeking to minimize the number of coins selected while ensuring the total equals 10 yuan. This is a typical dynamic programming problem. It can be broken down into subproblems (selecting coins), each of which has common sub-subproblems (selecting coins), and the subproblems are interrelated (the total amount of selected coins cannot exceed 10 yuan), with the boundary condition being that the total amount of selected coins equals 10 yuan.
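The coin problem above can be sketched as a DP table where dp[i] is the minimum number of coins summing exactly to amount i (the function name `minCoins` and the sample denominations are illustrative assumptions):

```javascript
function minCoins(coins, amount) {
    // dp[i] = minimum number of coins whose total is exactly i
    var dp = new Array(amount + 1).fill(Infinity)
    dp[0] = 0 // boundary condition: zero coins make amount 0
    for (var i = 1; i <= amount; i++) {
        for (var j = 0; j < coins.length; j++) {
            if (coins[j] <= i) {
                // Either keep the current best, or use one coins[j] plus the best for the remainder
                dp[i] = Math.min(dp[i], dp[i - coins[j]] + 1)
            }
        }
    }
    return dp[amount] === Infinity ? -1 : dp[amount]
}

minCoins([1, 2, 5], 11) // 3 (5 + 5 + 1)
```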

For the above example, you might also say that we could use the backtracking algorithm to continuously try, but the backtracking algorithm is used to find feasible solutions (solutions that meet the requirements). If we used backtracking here, we would need to enumerate all solutions that meet the conditions and then pick the best one, resulting in a time complexity of O(2^n), which is quite poor. Most problems suitable for dynamic programming can also be solved with backtracking, but the time complexity of backtracking is generally higher.

Finally, to summarize, when we use dynamic programming to solve problems, we need to follow these important steps:

  • Define the subproblems
  • Implement the part that needs to be solved repeatedly for sub-subproblems
  • Identify and solve the boundary conditions

Classic Problems Solved Using Dynamic Programming#

  • Climbing stairs problem: Suppose you are climbing stairs. You need n steps to reach the top. Each time you can climb 1 or 2 steps. How many different ways can you climb to the top?
  • Knapsack problem: Given some resources (with total amount and value), given a knapsack (with total capacity), fill the knapsack with resources, aiming to maximize the value without exceeding the total capacity.
  • Coin change: Given a certain number of coins of varying denominations and the amount of change needed, find how many ways there are to make change.
  • All-pairs shortest path in a graph: Given a graph containing vertices u and v, find the shortest path from vertex u to vertex v.
  • Longest common subsequence: find the longest subsequence common to a set of sequences (a subsequence is obtained by deleting elements from a sequence without changing the order of the remaining elements).

Here, we take the climbing stairs problem as an example.

Climbing Stairs Problem#

Here, we take the classic dynamic programming problem of climbing stairs as an example to introduce the steps for solving dynamic programming problems.

Step 1: Define the subproblem

If we let dp[n] represent the number of ways to reach the nth step, and from the problem we know: the last step can either be 2 steps or 1 step, meaning the number of ways to reach the nth step equals the number of ways to reach the n-1 step plus the number of ways to reach the n-2 step.

Step 2: Implement the part that needs to be solved repeatedly for sub-subproblems

dp[n] = dp[n-1] + dp[n-2]

Step 3: Identify and solve the boundary conditions

// 1 way to reach step 0
dp[0] = 1
// 1 way to reach step 1
dp[1] = 1

Final Step: Translate the pseudocode into code, handling some boundary cases

let climbStairs = function(n) {
    let dp = [1, 1]
    for(let i = 2; i <= n; i++) {
        dp[i] = dp[i - 1] + dp[i - 2]
    }
    return dp[n]
}

Complexity Analysis:

  • Time Complexity: O(n)
  • Space Complexity: O(n)

Optimizing Space Complexity:

let climbStairs = function(n) {
    let res = 1, n1 = 1, n2 = 1
    for(let i = 2; i <= n; i++) {
        res = n1 + n2
        n1 = n2
        n2 = res
    }
    return res
}

Space Complexity: O(1)

Enumeration Algorithm#

Algorithm Strategy#

The idea of the enumeration algorithm is to list all possible answers to the problem one by one, and then judge whether this answer is suitable based on the conditions, keeping the suitable ones and discarding the unsuitable ones.

Problem-Solving Thought#

  • Determine the enumeration object, enumeration range, and judgment conditions.
  • Enumerate possible solutions one by one, verifying whether each solution is a solution to the problem.
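As a minimal sketch, here is the classic "narcissistic number" search: the enumeration object is every three-digit number, the enumeration range is 100 to 999, and the judgment condition is that the sum of the cubes of the digits equals the number itself (the function name `narcissistic` is illustrative):

```javascript
function narcissistic() {
    var result = []
    // Enumerate every candidate in the range one by one
    for (var n = 100; n <= 999; n++) {
        var a = Math.floor(n / 100)     // hundreds digit
        var b = Math.floor(n / 10) % 10 // tens digit
        var c = n % 10                  // units digit
        // Keep the candidate only if it satisfies the condition
        if (a * a * a + b * b * b + c * c * c === n) result.push(n)
    }
    return result
}

narcissistic() // [153, 370, 371, 407]
```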

Summary#

Algorithms are the "essence" of programming; whether you are a front-end or back-end developer, having a certain level of algorithmic ability is a basic requirement for a computer engineer.
