Lazy evaluation is an execution strategy that defers the evaluation of an expression until its value is actually needed. This comes in handy when evaluating an expression would be expensive or even impossible. It is also arguably the most powerful tool for modularization in a functional programmer’s toolbox. For lazy evaluation to work, the data required for a computation must be accessible and meaningful at the time of evaluation. This requires that computations be free of side effects, a condition guaranteed by pure functional programming languages like Haskell. Lazy evaluation is the most widely used method of executing Haskell programs. This article introduces laziness in Haskell.

*“I’ve heard that hard work never killed anyone, but I say why take the chance?”*

–**Ronald Reagan**

**Haskell is non-strict, not lazy**

Technically, the Haskell specification states that the language is non-strict, not lazy. Under non-strict semantics, an expression can have a value even though some of its subexpressions do not. This is achieved by performing reductions from the outside in. Therefore, an expression like (s - (t * e)) will reduce the “-” before the (t * e).

Outermost reduction may sometimes give a result where innermost reduction fails to terminate.
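As a minimal sketch of this point (using a deliberately non-terminating definition of our own):

```haskell
-- A definition that never terminates when evaluated.
loop :: Int
loop = loop

-- `const` ignores its second argument. Outermost reduction rewrites
-- `const 1 loop` to 1 without ever touching `loop`; an innermost
-- (strict) strategy would try to evaluate `loop` first and diverge.
main :: IO ()
main = print (const 1 loop)  -- prints 1
```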

Non-strict evaluation also separates expression binding from expression evaluation. Consequently, expressions are not evaluated when they are bound to variables; their evaluation is delayed for as long as possible.

Most programming languages are strict. Strict languages perform reductions from the inside out. In the preceding example, (t * e) will be reduced before the “-”. Under this approach, if a subexpression evaluates to an error or an endless loop (known as bottom), then the overall evaluation will always hit that bottom value, since it is performed inside out. And every binding of a variable to an expression coincides with the evaluation of that expression.

Laziness is a common way of implementing non-strict languages, but it is not the only possible way. Also, strict languages are usually evaluated using eager evaluation.

**The good & the bad**

Aside from the obvious computational benefit of doing only the bare minimum of work by avoiding needless calculations, laziness can also facilitate a more modular approach to program construction by fostering a compositional programming style. Lazy evaluation also makes it possible to define and work with infinite data structures: the entire structure is never evaluated at once, but is produced gradually, on an as-needed basis. However, lazy evaluation demands a lot of bookkeeping, which incurs considerable overhead. You can easily end up with a large collection of unevaluated expressions that could have been stored as plain values, resulting in unpredictable space behavior. Lazy evaluation also makes it harder to reason about code performance.
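The space problem can be sketched with the standard left folds (`foldl` from the Prelude and the strict `foldl'` from `Data.List`): the lazy fold builds one big chain of unevaluated `(+)` thunks before anything is added, while the strict fold forces each intermediate sum as it goes.

```haskell
import Data.List (foldl')

-- Lazy left fold: builds ((((0+1)+2)+3)+...) as one large thunk,
-- collapsed only when the final result is demanded.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- Strict left fold: forces the accumulator at every step,
-- so it runs in constant space.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum [1 .. 1000000])  -- 500000500000
```

Both functions compute the same result; the difference only shows in memory behavior on large inputs.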

**Laziness in vogue**

For a more practical view of lazy evaluation, consider the following code snippets in both Haskell and Python.

```haskell
-- Haskell
add x y = x + x
```

```python
# Python
def add(x, y):
    return x + x
```

Notice that in both functions, the argument y is not used in the function’s body. Given that Haskell uses lazy evaluation and Python implements eager evaluation, let’s put them to use and see what happens:

```haskell
-- Haskell (GHCi)
ghci> add 4 5
8
ghci> add 10 (89/0)
20
```

```python
# Python
>>> print(add(4, 5))
8
>>> print(add(10, 89/0))
Traceback (most recent call last):
  File "test.py", line 7, in <module>
    print(add(10, 89/0))
ZeroDivisionError: division by zero
```

Clearly, the call to add with 4 and 5 as arguments gives the same result in both languages, as expected. The interesting case is the second call. Haskell simply never touches the second argument (89/0) because, as explained before, it delays computation until it is absolutely necessary. Since 89/0 is never used, it is never computed, giving us the expected result of 20.

On the other hand, Python is strict, so it eagerly executes 89/0 before calling the add function, resulting in a bang!

In lazy languages like Haskell, function arguments are not evaluated before the function is invoked.

Instead of evaluating arguments completely, a lazy evaluation engine builds and saves thunks: data structures containing the information required to perform the computation, which are forced only when the result becomes necessary.
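A small sketch of this behavior: binding an expression merely creates a thunk, and the thunk only blows up if something actually forces it.

```haskell
main :: IO ()
main = do
  let boom = error "forced!" :: Int  -- bound to a thunk, not evaluated
  putStrLn "binding succeeded"       -- runs fine: boom is still a thunk
  print (length [boom, boom])        -- prints 2: length counts elements
                                     -- without ever forcing them
```

If any line instead demanded the value itself, say `print boom`, the thunk would be forced and the program would crash with the error.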

**Infinite lists**

One important result of lazy evaluation is the ability to define infinite data structures. Evaluating one in full would require an infinite amount of time; lazy evaluation, however, allows us to compute only the portions of the structure we need rather than the whole. Infinite lists are the most widely used infinite structures in Haskell.

In Haskell, the construct [n ..] denotes the infinite list n, n + 1, n + 2, and so on. Obviously, one cannot examine the whole list, as that would never finish. But we can examine any finite part of it.

Example:

```haskell
take 5 [4..]   -- the first 5 elements of [4..], namely [4,5,6,7,8]
```

Infinite data structures like lists allow us to craft programs that are more abstract and easy to write. To demonstrate this, let’s write a Haskell program that computes the first four Armstrong numbers, starting from 100.

In number theory, a number *n* is an Armstrong number or narcissistic number if it is equal to the sum of its own digits, each raised to the power of the number of digits. Every single-digit number is trivially an Armstrong number.

Example:

1, 153, and 370 are Armstrong numbers because:

1 = 1^1

153 = 1^3 + 5^3 + 3^3 = 1 + 125 + 27 = 153

370 = 3^3 + 7^3 + 0^3 = 27 + 343 + 0 = 370

The example below gives the code for computing the first *n* Armstrong numbers, starting from 100.

```haskell
import Data.Char (digitToInt)

-- Sum of the digits of n, each raised to the number of digits.
addDigits :: Int -> Int
addDigits n = foldl (\acc char -> acc + digitToInt char ^ len) 0 $ show n
  where len = length (show n)

isArmstrong :: Int -> Bool
isArmstrong n = addDigits n == n

armstrongNumbers :: Int -> [Int]
armstrongNumbers n = take n $ filter isArmstrong [100..]
```

First, addDigits takes a number *n*, and finds the sum of its digits raised to the power of the number of its digits. isArmstrong is a predicate function that determines whether a given number is an Armstrong number or not.

The heart of the program is the armstrongNumbers function. It takes a number *n* and extracts the first *n* elements of the filtered list of all Armstrong numbers, starting from 100. We don’t need to worry about how large a list is required to accommodate the first *n* Armstrong numbers (which would be the case when working with finite lists). Do you want the first 10 Armstrong numbers starting from 100? Go ahead and compute armstrongNumbers 10. (As a fun fact, there are exactly four Armstrong numbers between 100 and 999: 153, 370, 371, and 407.)
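As a quick sanity check, here is a self-contained version of the program (the `main` wrapper is ours) that prints the first four Armstrong numbers from 100:

```haskell
import Data.Char (digitToInt)

addDigits :: Int -> Int
addDigits n = foldl (\acc char -> acc + digitToInt char ^ len) 0 $ show n
  where len = length (show n)

isArmstrong :: Int -> Bool
isArmstrong n = addDigits n == n

armstrongNumbers :: Int -> [Int]
armstrongNumbers n = take n $ filter isArmstrong [100..]

main :: IO ()
main = print (armstrongNumbers 4)  -- [153,370,371,407]
```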

**The Sieve of Eratosthenes**

This is an efficient way to find all prime numbers up to a given limit — designed by the Greek mathematician Eratosthenes.

Here is how it works.

1. List the numbers 2, 3, 4, 5, …

2. Mark the first element *p* as prime

3. Remove all multiples of *p* from the list

4. Repeat from step 2

It works like this:

2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 …

2 is prime; remove the multiples of 2:

3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 …

3 is prime; remove the multiples of 3:

5 7 11 13 17 19 23 25 29 31 35 37 41 43 …

… and so on.

Here is the code to implement it in Haskell:

```haskell
sieve :: [Int] -> [Int]
sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

nPrimes :: Int -> [Int]
nPrimes n = take n $ sieve [2..]
```

Observe how the sieve function takes advantage of the infinite list [2..] to produce an infinite list of all prime numbers. Now, computing the first 100 prime numbers is simply nPrimes 100. In the nPrimes function, the control (take n) is decoupled from the data (sieve). And by using lazy evaluation, we evaluate only as much of the data as the control requires.

This is a pattern that can allow us to write more modular code.
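To make the decoupling concrete, here is a sketch reusing the sieve from above, where the same infinite data source is consumed under two different controls (the names `firstTen` and `below50` are ours):

```haskell
sieve :: [Int] -> [Int]
sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

-- One shared, infinite data source.
primes :: [Int]
primes = sieve [2..]

-- Two different controls over the same data.
firstTen :: [Int]
firstTen = take 10 primes            -- [2,3,5,7,11,13,17,19,23,29]

below50 :: [Int]
below50 = takeWhile (< 50) primes    -- all primes less than 50

main :: IO ()
main = mapM_ print [firstTen, below50]
```

Neither consumer needs to know how much of `primes` to produce; laziness arranges that exactly enough is computed for each.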

**Conclusion**

Lazy evaluation is a powerful programming tool that introduces creative and elegant ways to make software design a pleasant undertaking. Coupled with features like higher order functions, it can become a powerful tool for structuring programs.

*Progress doesn’t come from early risers — progress is made by lazy men looking for easier ways to do things.* – **Robert A. Heinlein**