# Why Functional Programming?

As software becomes more and more complex, it becomes more and more important to structure it well. Well-structured software is easy to write and debug, and it provides a collection of modules that can be reused to reduce future programming costs. Here I will share how functional programming naturally leads to more scalable and modular programs. Compared to imperative programming, functional programming gives you less chance of creating *spaghetti code*.

The fundamental operation in functional programming is the application of functions to arguments. Typically, the main function is defined in terms of other functions, which in turn are defined in terms of still more functions, until at the bottom level the functions are language primitives. Because functional programs are often an order of magnitude shorter than their conventional counterparts, a functional programmer can be correspondingly more productive. All of these functions behave much like ordinary mathematical functions. Functional programming enables and encourages a more abstract, more *mathematical* way of solving a problem: you build a program as a mathematical abstraction. The result is less prone to error, cleaner, more elegant, and more functional.

One of the biggest advantages of functional programming is that it avoids *state* at runtime: the value of a term is always determined by its input. Functional programs contain no assignment statements, so variables, once given a value, never change; this is the concept of immutability. It allows side-effect-free (pure) functions to be the basic building blocks of the language. Functions behave like expressions, and since expressions can be evaluated at any time, one can freely replace variables by their values and vice versa. A function call can have no effect other than to compute its result. This eliminates a major source of bugs and also makes the order of execution irrelevant, which is why functional languages do a better job with parallel computation. Since CPU clock speeds have plateaued and cores are cheap, multi-core programming is the way to go, and immutability and higher-order functions make it easier in functional languages. This explains why functional programming has gained so much ground recently.
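To make the contrast concrete, here is a small sketch of my own in Java (the class and method names are invented for illustration): a pure function whose result depends only on its arguments, next to an impure one that depends on hidden state.

```java
public class Pure {
    // Pure: the result depends only on the argument, so any call
    // can be replaced by its value without changing the program.
    static int square(int x) {
        return x * x;
    }

    static int counter = 0;

    // Impure: each call changes hidden state, so the order and
    // number of calls matter.
    static int nextId() {
        return ++counter;
    }

    public static void main(String[] args) {
        System.out.println(square(3) + square(3)); // always 18
        System.out.println(nextId() + nextId());   // depends on call history
    }
}
```

The pure `square` can be evaluated at any time, in any order, on any core; the impure `nextId` cannot.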

But this characterization of functional programming is inadequate. Everything discussed so far focuses on what functional programming *isn't* (no assignment, no side effects, no arbitrary flow of control). Now let's talk about something that explains the power of functional programming.

One of the most crucial things a programmer should strive for when building any real-world application is modularity. It is now generally accepted that modular design is the key to successful programming, and one can pursue it in any programming language. When writing a modular program to solve a problem, one first divides the problem into subproblems, then solves the subproblems, and finally combines the solutions. The ways in which one can divide the original problem depend directly on the ways in which one can glue solutions together. Therefore, to increase one's ability to modularize a problem conceptually, one must provide new kinds of glue in the programming language.

Functional programming comes with two new, very important kinds of glue: higher-order functions and lazy evaluation. This is the key to functional programming's power; it provides powerful *modularization*.

**Higher-order functions:**

Higher-order functions are another key part of the functional programming paradigm. Functional languages treat functions as first-class values, which provides a flexible way to compose programs. Functions that take other functions as arguments, or return functions as results, are called higher-order functions. Creating higher-order functions improves modularity. To demonstrate this, let me take the example of performing some operation (maybe an addition, multiplication, or division) on every integer from some integer a to some integer b.

I am going to create two functions here. Function 1 returns the sum of the integers from a to b, and function 2 returns the sum of the cubes of the integers from a to b.

Definition of function 1: Sum of Integers

```java
int sumOfIntegers(int a, int b) {
    int sum = 0;
    for (int i = a; i <= b; ++i) {
        sum = sum + i;
    }
    return sum;
}
```

Similarly, here is the definition of function 2: sum of cubes of integers.

```java
int sumOfCubesOfIntegers(int a, int b) {
    int sum = 0;
    for (int i = a; i <= b; ++i) {
        sum = sum + (i * i * i);
    }
    return sum;
}
```

Thus, in each of these first-order functions (functions that take only primitives like int or long as arguments), I iterate from a to b and perform my operation inside that loop. Say I have n different kinds of operation; then I will write n first-order functions and repeat the iteration logic n times. How much better it would be if I could pass a function as an additional argument along with a and b, and use one common function for the iteration every time.
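Since the two first-order functions above share the same loop, this idea can already be sketched in Java itself by passing the operation as a function value (using `java.util.function.IntUnaryOperator`; the class and names here are my own illustration):

```java
import java.util.function.IntUnaryOperator;

public class SumDemo {
    // One common loop; the varying operation is passed in as a function.
    static int sum(IntUnaryOperator f, int a, int b) {
        int total = 0;
        for (int i = a; i <= b; ++i) {
            total += f.applyAsInt(i);
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(i -> i, 1, 10));        // sum of integers 1..10: 55
        System.out.println(sum(i -> i * i * i, 1, 3)); // sum of cubes 1..3: 36
    }
}
```

The iteration logic is now written once; only the one-line operation varies.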

For this, I will define my n operations as n different pure functions and pass them into this common function to compute the result.

To illustrate this, I am using Scala syntax in the code snippets below.

Let me define a common function sum that takes another function as its first argument and applies it to each integer from its second argument to its third argument.

Definition of my Sum function will be like this:

```scala
def sum(f: Int => Int, a: Int, b: Int): Int =
  if (a > b) 0 else f(a) + sum(f, a + 1, b)
```

Let sumInts and sumCubes be the functions which return the sum of integers and the sum of cubes of integers respectively. I can now define these functions in terms of the sum function as given below:

```scala
def sumInts(a: Int, b: Int) = sum(id, a, b)
```

where id is an identity function defined as

```scala
def id(x: Int): Int = x
```

And

```scala
def sumCubes(a: Int, b: Int) = sum(cube, a, b)
```

where cube is defined as

```scala
def cube(x: Int): Int = x * x * x
```

Here, the first argument of the sum function is itself a function: the identity function in case 1 and the cube function in case 2. Both of these are pure functions.

In this way, I am creating a more modular program and reusing my common function for every computation. Thus, functional programming provides *useful abstractions* to specify modules with generic functionality.

The functions sum, id, and cube are examples of useful abstractions in the example above.

This is an important goal for which functional programmers must strive: smaller, simpler, and more general modules, glued together with the new kinds of glue described here.

**Lazy evaluation:**

The other new kind of glue that functional languages provide enables whole programs to be glued together.

Lazy evaluation means waiting until the last possible moment to evaluate an expression, typically for the purpose of optimization. It ensures that an expression is never evaluated when it is not needed at all, which saves us when an expression is expensive, or even impossible, to evaluate.
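Java is eager by default, but the idea can be simulated by wrapping a computation in a `Supplier`. The following is a sketch of my own (the names `LazyDemo` and `expensive` are invented for illustration):

```java
import java.util.function.Supplier;

public class LazyDemo {
    static int evaluations = 0;

    // Stands in for an expensive computation; counts how often it actually runs.
    static int expensive() {
        evaluations++;
        return 42;
    }

    public static void main(String[] args) {
        // Wrapping the call in a Supplier defers it: nothing is computed yet.
        Supplier<Integer> lazy = () -> expensive();
        System.out.println(evaluations); // 0: not evaluated yet

        // The computation runs only when (and if) the value is demanded.
        System.out.println(lazy.get());  // 42
        System.out.println(evaluations); // 1
    }
}
```

If the value is never demanded, `expensive()` never runs at all.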

A complete functional program is just a function from its input to its output. If *f* and *g* are such programs, then (g . f) is a program that, when applied to its input, computes g(f(input)). The program *f* computes its output, which is used as the input to program *g*. Conventionally this might be implemented by storing the output from *f* in a temporary file. The problem is that the temporary file might occupy so much memory that it is impractical to glue the programs together in this way. Functional languages provide a solution: the two programs *f* and *g* are run together in strict synchronization. Program *f* is started only when *g* tries to read some input, and runs only for long enough to deliver the output *g* is trying to read. Then *f* is suspended and *g* is run until it tries to read another input. As an added bonus, if *g* terminates without reading all of *f*'s output, then *f* is aborted. Program *f* can even be a nonterminating program producing an infinite amount of output, since it will be terminated forcibly as soon as *g* is finished. This allows termination conditions to be separated from loop bodies: *again, a powerful modularization.*
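Java's Stream pipelines, which are lazily evaluated, give a small taste of this producer/consumer synchronization (a sketch of my own, not part of the original argument): the producer is only driven as far as the consumer demands.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StreamDemo {
    public static void main(String[] args) {
        List<Integer> firstSquares = Stream.iterate(1, n -> n + 1) // f: a conceptually infinite producer 1, 2, 3, ...
                .map(n -> n * n)          // squares are computed one at a time, on demand
                .limit(5)                 // g: the termination condition lives with the consumer
                .collect(Collectors.toList());

        System.out.println(firstSquares); // [1, 4, 9, 16, 25]
    }
}
```

The producer never "knows" it will be stopped after five elements; the termination condition is stated separately from the loop that generates values.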

You should learn functional programming even if you are not working in a functional programming language. Understanding it will improve your coding style and make you a better developer. It will definitely improve your way of thinking and add a new perspective on your code and on programming in general.