Tuesday, 1 December 2015

Building a Corporate Culture

Culture is the complex whole of our behavior, beliefs, knowledge, customs and outlook: everything that makes us identify ourselves as distinct individuals and as members of a group. In the corporate world there are millions of small and large enterprises. Why, then, do only a handful of them stand apart from the rest on a higher platform of respect and admiration? It cannot be just the pay, the innovation or the product they make. It has more to do with the culture prevalent at the workplace and with its people. These two intertwined characteristics earn them the reputation of being coveted work environments; in fact, once these two things are set right at the foundation, most enterprises succeed. The work culture that people wear manifests itself in many ways, so it is important that it be worn with care and pride. Of course, part of culture is for each individual to imbibe and uphold, but the responsibility of sowing the seeds for its growth rests with the workplace caretakers. This article presents a few tips on cultivating some identifiable aspects of corporate culture.

  • Dressing Sense
The first thing a person notices about you is how you are dressed. What one wears is a personal choice, but the way you dress reflects your personal grooming, how you would like to present yourself to others and, more importantly, how you would like to see yourself. A torn pair of jeans or a decked-up frock might be a fashion statement at a party, but in the office the statement should be professionalism and neatness. Hence, irrespective of whether a dress code is imposed or not, it is important to dress smart, dress tidy and carry it well.
  • Language
People know who you are and what your thoughts are when they hear you speak. Framing proper sentences is as important as forming your thoughts. English is a beautiful language with room enough to express appreciation as well as admission of mistakes. Make it a habit to use the golden words ‘Please’, ‘Sorry’ and ‘Thank You’. During conversations, if you miss something, a ‘Pardon?’ or ‘Excuse me?’ sounds far better than a dull “Hmm?” or “Huh?”. Avoid politically or socially incorrect words within the office premises. Yes, verbal abuse is perhaps as old as human civilization, and swearing might provide an immediate outlet for piled-up emotions. But words once spoken can never be taken back, and that alone should be reason enough to avoid reacting instantly and blurting out an invective. If something really upsets you, drinking a glass of lukewarm water and walking it off is far safer for you and everyone else concerned. A delayed reaction can be molded into powerful action. Hold on to your thoughts, hold your tongue, weigh the situation at hand and then respond, with responsibility and clarity. Assert yourself, and in doing so always remember to use the words that you would like others to use for you.

  • Phone Etiquette
Today, almost everyone carries a mobile phone, and plenty has been written about phone etiquette. Yet people still forget when to lower their voice on the phone and when to lower the volume of the phone itself. A buzzing, vibrating phone in the middle of an important meeting is like a big fly flapping about in the room: a distraction. Either put it on silent (sans the vibration) or, if it is an urgent call, excuse yourself and attend to it. Staying on the phone in some long, mundane conversation while people wait at your desk for your attention is rude. Poking about on the phone or sending texts in the midst of a meeting is also bad manners. Clicking ‘selfies’, or taking pictures of fellow colleagues without their consent, is crude. And a sexy, hot item song might be your favorite, but setting it as the ring-tone of your phone at full volume is distasteful. Refrain from it.

  • Communication
Communication is one of the pillars of corporate success, and email conversations form an important part of it. Before sending out an email, always read and re-read the text for grammatical mistakes, check that all the members of the targeted audience have been included, provide an appropriate subject line and, most importantly, make sure it conveys what you want to state. Quite often, especially in the Indian context, we tend to think and frame sentences in our local, more comfortable tongue and then translate them into English; unfortunately, the meaning sometimes gets corrupted in translation. Keeping a spell-check activated helps. If it is an important mail, having the draft proof-read by a colleague does no harm.

It is also an admirable trait to drop in regular status emails to keep all the concerned parties involved and updated on ongoing projects. This not only helps keep track of the line of conversation or discussion but also ensures accountability. In fact, not just regular emails but a task tracker, a project planner or a simple SharePoint site could serve the purpose.

Another aspect of communication, especially relevant in a multi-lingual country like India, is to speak in a language understood by all, not just the majority. It is understandable that people are more comfortable speaking in their mother tongue, but we should not forget that an office is an environment where every individual working in it has a right to a sense of belonging. As long as it is not a private matter, keep everyone involved and no one left out. Speak in a common language. Build a culture where everyone is included.

  • Feedback
This is another very important aspect of a mature culture. Feedback is not just about voicing negativity or pointing out what went wrong. It is a great mechanism for identifying what could be made better, an opportunity for owning up to responsibility as well as for appreciating good work. A timely feedback session, ideally after every project delivery, can provide important lessons. Associate feedback with positivity.

  • Courtesy
Exchanging a smile is perhaps the simplest and sweetest courtesy one can offer one's fellow beings. A grumpy face not only displeases the mirror; the people watching you from the other side of the glass are affected too. Holding the door, helping a co-worker into the lift, turning off lights when they are not needed, or something as simple as wiping the toilet seat after use for the next person: these are basic things everyone should practice. Maintaining a queue and patiently waiting for your turn, be it while boarding the cab or getting served at the cafeteria, speaks of your maturity and courtesy towards your co-workers. As they say, courtesy begets courtesy.
 

These are all very small things that everyone can adhere to, and yet they could make a big difference between what we are today and what we could be tomorrow. Hope we learn to become that difference.

Monday, 23 November 2015

About Big Data & Hadoop


This article is an attempt to provide a quick glimpse of Big Data and Hadoop, two of the top trending words in the field of information science. Consider this article as more of an introduction to Big Data and Hadoop for newbies.


To get the hang of these two concepts, we should first know a few things about data. The first thing to note is that “data” is plural; its singular form, not used very often, is “datum”. So data are a collection of raw facts and figures which, when processed within a context, can be used to derive meaningful information. For instance, the following figures don't really convey much until we give them a context and perhaps a graphical representation.

1     2     3     4     1     2     3     4     1     2     3     4
1500  1200  1000  1395  1690  800   1100  1000  1555  1200  1000  850


Let’s say, in the table above, the values in the first row signify a week number: ‘1’ corresponds to the first week of a month, ‘2’ to the second week, and so on. And say the values in the second row signify expenses (in any currency of choice). We can now say that we have data for three months of weekly expenses, which could be represented as shown in the graph below.
 
Fig.1 Data Visualization

Thus, the raw set of data in the table above turns into a piece of information once we know what each figure means. We can infer that every month the expense in the first week is the highest. We could analyze these data in many different ways to gather many more bits of information: the total expense for a month, the average expense per month, which month had the lowest expenses and which the highest; and with a larger data set one might even gain a deeper insight that could help predict the expenses in the following weeks or months.
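To make that idea concrete, here is a tiny, purely illustrative Java snippet; the class name is made up and the hard-coded figures are simply the numbers from the table above:

import java.util.Arrays;

public class ExpenseSummary {
    public static void main(String[] args) {
        // Weekly expenses for three months, four weeks each (figures from the table above)
        int[][] monthly = {
            {1500, 1200, 1000, 1395},
            {1690,  800, 1100, 1000},
            {1555, 1200, 1000,  850}
        };
        for (int m = 0; m < monthly.length; m++) {
            int total = Arrays.stream(monthly[m]).sum();
            System.out.printf("Month %d: total = %d, weekly average = %.2f%n",
                    m + 1, total, total / 4.0);
        }
    }
}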

So, to reiterate: data are a collection of raw facts and figures. When these are processed against the backdrop of a context, we derive meaning, which is information. Good so far?

Now, with the current phase of technological advancement and the millions upon millions of computers in use all over the world, we are generating an immense amount of data. To get an idea, think about the digital pictures being clicked, shared, uploaded, posted and stored by billions of people across the globe; the innumerable web pages and posts put on the web, including this one; the mobile call logs produced by people calling and talking to one another; the log messages generated by running applications; the emails being composed, sent, read or archived; or the data on temperature, rainfall and so on collected from different patches of the Earth and even from space. The bottom line is that we are generating a vast amount of data every second that we want to, or have to, keep. These data are valuable to us not just from a sentimental, security, legal or logging perspective, but also because they can be analyzed to retrieve amazing bits of intelligence. They can reveal patterns of behavior and feed predictive analysis; these digital footprints can help identify individual or group behavior in buying and selling, decide what is trending, or just predict human behavior in general; they can inform the future designs of applications and machines, and what not. Data produced at such high velocity, in such huge volumes and with such mind-boggling variety are called Big Data. Just remember the three V's that make an impactful definition of Big Data: Velocity, Volume and Variety. Since Big Data is pretty complex and unstructured, and we seek a near real-time analysis and response from it, processing and management of Big Data is recommended to be done using the BASE (Basically Available, Soft state, Eventual consistency) principles rather than the conventional ACID (Atomicity, Consistency, Isolation, Durability) properties.

This should bring us to an understanding of the basic concepts of data and Big Data in general. Now that we have these data, where should they be stored? And how should they be stored so that they can later be retrieved and analyzed conveniently? It is at this point that Big Data processing tools make their pitch. MongoDB, Cassandra, Aerospike and Hadoop are just some of the well-known players in the market. Their forte is dealing with unstructured data: data not organized according to the traditional notions of rows, columns and normalization. Several of these are NoSQL databases; some are better known for managing Big Data and some for their capacity to analyze it. That said, Hadoop seems to be gaining a little more momentum in terms of its fan-following and the accompanying popularity.

Hadoop was created by Doug Cutting and is an open-source Apache project. But as happens with much open-source software, two competing organizations have currently taken on the onus of parenting and bringing up Hadoop, namely Cloudera and Hortonworks. They also provide certifications which are in pretty good demand in the market. The certification examination from Cloudera, as stated on their official website, is mainly MCQ (multiple-choice questions), while the HDP (Hortonworks Data Platform) certification is more hands-on, based on executing certain tasks. More details can be found on their respective official websites.

Ok, now, as promised, a quick look into Hadoop. Hadoop is a Big Data analytics tool that is linearly scalable and works on a cluster of nodes. At a minimum, a cluster might have just one node; at a maximum, it could span thousands of nodes. The architecture of Hadoop is based on two concepts: HDFS and MapReduce. But before we plunge into understanding them, we need to be familiar with a bit of jargon.
1. Node: a single computer or machine
2. Cluster: a group of nodes
3. Commodity hardware: machines that are not too expensive to buy or maintain and do not boast powerful processing capabilities
4. Scalability: the ability to cope with increasing load while maintaining performance and delivery
5. Horizontal or linear scalability: scalability achieved by adding more commodity hardware that is not very resource intensive
6. Vertical scalability: scalability achieved by adding more powerful machines with greater processing power and resources


The concept of file systems in Hadoop revolves around the idea that files should be split into chunks and their storage distributed across a cluster of nodes with a certain replication factor. The benefits? Chunking allows parallel processing, and replication protects against data loss. HDFS, which stands for Hadoop Distributed File System, is just one such file-system implementation for Hadoop. In simple terms, it is a file system in which data or files are partitioned into chunks of 128 MB (the default chunk size) and stored across a number of nodes in the cluster. Each chunk has a certain number of replicated copies, called its replication factor, which is ideally three. HDFS blocks are kept at 128 MB by default for two reasons: first, it ensures that an integral number of blocks fits on a disk at any time (a typical disk block is 512 bytes); second, although storage technology has made good progress, the latency due to disk seek time is still high compared to the time spent actually reading data, so large chunks mean relatively fewer seeks.

All the nodes participating in a Hadoop cluster are arranged in a master-worker pattern comprising one name-node (master) and multiple data-nodes (workers). The name-node holds the file-system tree and the metadata for all the files and directories in the tree. This information is persisted locally in the form of two files: the edit log and the namespace image. The data-nodes store and retrieve the chunks of data files assigned to them by the name-node. To keep track of all active nodes in a cluster, the data-nodes are expected to send regular heartbeat messages to the name-node together with the list of file blocks they store. If a data-node is down, the name-node knows which alternate data-node holds a replicated copy of each of its file blocks. Thus, the data-nodes are the workhorses of the file system, but the name-node is its brain, and it is of utmost importance to always have the name-node functioning. Since this single point of failure could create a precarious situation, Hadoop 2 provides for two name-nodes, one active and the other in standby mode. There are different approaches through which data are continually shared between the two name-nodes to ensure high availability, two of them being an NFS filer and the QJM (Quorum Journal Manager). In this article, though, we stick to the basics; HA (High Availability) name-nodes will be taken up in another article. Good so far with the concepts of HDFS, file blocks, name-nodes and data-nodes? These form the skeleton of HDFS. Ok, let's move on to MapReduce.
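To make the name-node and data-node discussion a little more tangible, here is a small sketch of how a client program could write a file to HDFS using the Hadoop FileSystem Java API. The name-node address and the file path are made-up placeholders, and the block size and replication factor shown merely restate the defaults mentioned above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // hypothetical name-node address

        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/user/demo/sample.txt"); // hypothetical path
            // Ask for 128 MB blocks and a replication factor of 3 for this file
            try (FSDataOutputStream out =
                         fs.create(file, true, 4096, (short) 3, 128L * 1024 * 1024)) {
                out.writeUTF("hello hdfs");
            }
        }
    }
}

The client only sees a file path; HDFS takes care of chunking the file into blocks, placing them on data-nodes and replicating them.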

MapReduce is the method of executing a given task across multiple nodes in parallel. Each MapReduce job goes through four phases: Map->Sort->Shuffle->Reduce. The following terms are important when talking about MapReduce jobs:
1. JobTracker: the software daemon that resides on the name-node and assigns map and reduce tasks to other nodes in the cluster.
2. TaskTracker: the software daemon that resides on the data-nodes and is responsible for actually running the map and reduce tasks and reporting progress back to the JobTracker.
3. Job: a program that can execute Mappers and Reducers over a dataset.
4. Task: the execution of a single instance of a Mapper or a Reducer over a slice of data.
5. Data locality: the practice of ensuring that a task running on a node works on the data closest to it, preferably on the same node. From the description of HDFS above, we know that data files are split into chunks and stored on data-nodes; the idea of data locality is that a task executing on a node works on the chunk stored on that same node. Sometimes, though, the data on a data-node might be corrupted or unavailable for some reason, in which case the task accesses the data from the nearest data-node holding a replica. This is done to avoid cluster bottlenecks that might arise from data transfers between data-nodes.


In simple terms, what happens in a MapReduce job is this: the data to be processed are fed into Mappers, which consume the input and generate a set of key-value pairs. The output from the Mappers is then sorted and shuffled and provided as input to the Reducers, which finally crunch the key-value pairs given to them to generate the final response.
For example, consider a scenario where we have a huge pile of books. We want a count of all the words that occur in all of these books, and this task is assigned to Hadoop. Hadoop would proceed somewhat like this.
The name-node in HDFS, let's call it the ‘master’, splits the books (the application data) into chunks and distributes them across a number of data-nodes, let's call them ‘workers’. Thereafter, the master sends over a bunch of mappers and a reducer to each of these workers. This part is taken care of by the sentinels called the JobTracker and the TaskTrackers, which oversee the work of the mappers and reducers. All the mappers must finish their job before any reducer can begin. The task of each mapper is simply to read the pages from the chunk stored with its worker and write down each word it encounters with a count of 1 against it, irrespective of the number of times it encounters the same word. So the output from each mapper is a bunch of key-value pairs in which each word is a key and every key has the value 1, something like the following:
Apple -> 1
Mango -> 1
Person -> 1
Person -> 1
Apple -> 1
Person -> 1 ... and so on.

Next, the JobTracker directs the output from all the mappers to be read by the reducer, which sums up the values against each key. Continuing with the sample above, the reducer would generate something like this:
Apple -> 2
Mango -> 1
Person -> 3

So the output from the reducer gives us the result of the task. Simple enough to grasp? Hope it is.
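For the curious, below is a minimal sketch of the word-count job described above, written against the org.apache.hadoop.mapreduce Java API. The input and output paths are supplied as command-line arguments, and the class names are purely illustrative:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emits (word, 1) for every word it reads from its chunk
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: sums up the 1s emitted for each word
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. the directory of books on HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory for the counts
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}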
  
The following picture offers a quick summary of the concepts explored here. Most often, end users just interact with a POSIX-like (Portable Operating System Interface) file-system view from client machines, without needing to know all the nitty-gritty stuff that Hadoop does underneath.

Fig.2 Hadoop



Wednesday, 18 November 2015

When Machines Talk

Communication is an amazing phenomenon. Many millions of years ago, communication started as a bare medium for expressing the innate feelings of hunger, fear, anger, threat or camaraderie through sounds and gestures. Watching a documentary on Animal Planet or Wild Discovery shows how animals in the wild communicate, not just within the same group or species but with beings of other species and with the environment. For instance, monkeys can be seen sending out alarm calls to herds of deer caught unawares by crouching tigers, and both the deer and the tigers understand the significance of those calls. Elephants can be seen trumpeting wildly, asserting their rights over a patch of green land during the dry seasons, and all the encroachers understand the impending threat. Plants too respond to external stimuli, sometimes imperceptibly and sometimes quite visibly. For instance, on long stretches of highway one can often see leafy trees bent towards the traffic on either side of the road, as if forming the roof of a palanquin, quite metaphorically of course.

To begin with, the communication needs of human beings were limited to what was wanted and what was needed. Just the basics. And then we began to grow. Our needs and wants leaped the invisible boundaries of animal life and raced towards limitlessness. Different dialects evolved, languages came into being, sounds were captured by pens and pencils, writing and calligraphy developed; signs and symbols were invented, some vocal and some pictorial. With all this our innate feelings also underwent a change, and in the process an interesting genre of signs and symbols came into being. Words or symbols which in their plain flavor had simple meanings suddenly began to convey something more, sometimes teasing and at other times more sinister, bordering on deeply painful or dangerous emotions. Take, for example, the words Black and White. In and of themselves, they just convey the idea of two different colors. But a person using them could imply a myriad of different sentiments, meanings and concepts through the sheer pronunciation of the words, a look on the face, a display of body language, the context in question and what not. Anyway, the idea put forth so far has been the bewildering progress that human beings have made in the field of communication.

These days we have mind-bogglingly diverse channels of communication. How? Because we have learnt not just to talk or sulk with one another but to talk to machines. Every software program that is written is a message sent to the computer, one of the most wonderful machines we have invented. With it, we are capable of not just talking or texting to each other; we can exchange information through an enormous variety of media: words, text, signs, emoticons, pictures, you name it. And this crosses all boundaries of land, water, seas and all the spaces in between. We can communicate with each other not just from across the globe but even from the skies above, thanks to flying aircraft, space stations, spacecraft et al.
 
At some time or other, most of us must have felt a sense of bewilderment at how much we have progressed. But we have seen it happening around us so constantly, ad nauseam, that the things which should fill us with wonderment, and probably some dismay, often leave behind only the imprint of a mild smirk. Technology, machines, devices and human learning have brought us far, far away from where we started. And now we have begun teaching machines to talk to each other. The television remote has learnt to communicate with the television set. The cell phone knows how to talk to your computer and share pictures, files and other data. Google is teaching cars to drive around on the roads. Soon we would be in a world where a driverless car fetches you from wherever you are and brings you home, and the house knows how to welcome you. It might scan some biometric data and unlock the door, sense your mood from the same scan and turn on the air-conditioning to the desired temperature, switch on the lights adjusted to the brightness you wish and turn on the television or the music system to play the stuff that soothes you. Sounds fantastic, doesn't it? Yes, it is all nice and hi-fi and sci-fi, but such an eerie reality might soon alienate human beings and turn them into something not very desirable to the human race. For then the very need for people to communicate with each other would be rendered redundant, probably even unwanted, as we would be cloistered by machines talking to one another and communicating with humans in the mechanical ways we have trained them to.

Communication, elevated to the stature of a skill, a platform of culture, an intricate art that ought to be learnt and studied, would perhaps become just another option to be set via one's mobile phone or a tiny chip embedded somewhere. And in that transition, one wonders whether human beings could continue to remain human, or whether we would be transformed into the image, a prototype, of the robots we are so obsessed with creating.

Wednesday, 24 June 2015

Unmappable Character for Encoding

Hmmm, so I was working on a big project, one of whose modules had been developed by some other folks. Now, I checked out the code and, as the recommended first step, tried to do a Maven build of the whole project. But lo! It threw an error:

[FileName] error: unmappable character for encoding Cp1252

Now, this file belonged to the other module written by the other team. After some Googling, I learnt that this kind of error is encountered due to Windows encoding issues (read: not using the standard UTF-8). Anyway, here are the two fixes available.

1. Approach 1: In the pom.xml of the module in error, specify the encoding to be used on the compiler plugin, under <build>, as shown below:
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>2.3.2</version>
            <configuration>
                <encoding>UTF-8</encoding>
            </configuration>
        </plugin>
    </plugins>
</build>


2. Approach 2: I prefer this approach as I frequently run Maven builds from the command prompt and it does not involve touching any file. Simply include "-Dproject.build.sourceEncoding=UTF-8" in the build command as shown below:

mvn clean install -Dproject.build.sourceEncoding=UTF-8



And that's it! Works like a charm! :)


Sunday, 29 March 2015

What is Hystrix?

Hystrix is a Netflix library. The definition provided on GitHub reads:

"Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable."

Now, to grasp what that implies, one has to think of a “distributed environment”. Today, most applications are moving towards a modular architecture, meaning a big monolithic application encapsulating everything is no longer preferred. Instead, it is broken down into smaller, more manageable modules, or microservices, each dealing with a specific chunk of the application. To present a crude example, let's say we have an online shopping application. Different chunks, such as maintaining data on products and registered users, authenticating users and processing payments, could be exposed via different services, modules or third-party libraries. Now, a call to any service or client library that may invoke a request over the network is a potential source of latency or, worse, failure. This is where Hystrix comes in.

Consider an application that entertains heavy user traffic in such a distributed environment with a lot of dependencies. Now, if a certain service is down or is too slow to respond, it could slow down or throttle the entire application. The following diagram from the Hystrix site paints the picture.



                                                                 Fig.1: (courtesy: GitHub)


What Hystrix does is create a separate pool of threads for each dependency in the application, so that even if one service is not behaving as expected, the rest of the application continues to function. Take a look at the following picture offered by Netflix to explain this scenario.




                                                                    Fig.2 (courtesy GitHub)

Thus, it helps to isolate such points of access between services, thereby avoiding cascading failures across the different application layers. It also provides fallback options, facilitates monitoring of the system state and offers many other desirable features, thus improving the application's fault tolerance and resiliency.

In fact, Hystrix was born out of the resilience engineering work undertaken by Netflix around 2011. Yes, modular programming has its own price tag but according to the data collected and analysed the value it offers far exceeds its cost. 

Hope that summarizes the basics of what Hystrix is all about. Let's wrap up with a bit of the associated jargon.

a) Commands -- any request to a dependency has to be wrapped in a command. Think of it as a Java class to which the arguments required when invoking the request are passed as parameters. There are two types of commands:
    i) HystrixCommand -- used when a single response is expected from the dependency
                      HystrixCommand cmd = new HystrixCommand(arg1, arg2);

   ii) HystrixObservableCommand -- used when the dependency is expected to return an Observable that could emit one or more responses
                      HystrixObservableCommand cmd = new HystrixObservableCommand(arg1);


b) Command Execution -- a command can be executed in one of the following four ways (a small sketch follows this list).
   i)   execute() -- makes a blocking, synchronous call that either returns a single response or throws an exception
  ii)   queue() -- returns a Future from which the single response can later be retrieved
 iii)   observe() -- subscribes to the Observable that represents the response(s) from the dependency
 iv)   toObservable() -- returns an Observable that, when subscribed to, executes the command and emits the response(s)
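To tie these terms together, here is a minimal, illustrative sketch of a HystrixCommand. The "greeting" logic inside run() is a stand-in for a real remote call, and the class and group names are made up:

import java.util.concurrent.Future;

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class GreetingCommand extends HystrixCommand<String> {

    private final String name;

    public GreetingCommand(String name) {
        super(HystrixCommandGroupKey.Factory.asKey("GreetingGroup"));
        this.name = name;
    }

    @Override
    protected String run() {
        // Imagine a call to a remote service over the network here
        return "Hello, " + name + "!";
    }

    @Override
    protected String getFallback() {
        // Returned when run() fails, times out or the circuit is open
        return "Hello, guest!";
    }

    public static void main(String[] args) throws Exception {
        String sync = new GreetingCommand("World").execute();        // blocking
        Future<String> async = new GreetingCommand("World").queue(); // non-blocking
        System.out.println(sync);
        System.out.println(async.get());
        // observe() and toObservable() would instead return an rx.Observable<String>
    }
}

Note that a command instance can be executed only once, which is why a fresh GreetingCommand is created for each call.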

c) Circuit-Breaker Pattern -- this is a much talked-about feature of Hystrix that helps check cascading failure across the different application layers. If the load on a certain dependency exceeds a certain threshold, or if a service has not responded for a certain number of consecutive requests, the circuit is considered "open", meaning no further requests are routed to it for a certain window period. After this period elapses, a request is made to see if the service is ready to entertain further requests. If yes, normal traffic resumes; if not, the circuit is considered "open" again for another window period. The good thing is that it is all configurable: the threshold at which the circuit should be opened, the window period and so on. In fact, one could even force the circuit "open" and check how the system behaves.
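As an illustration of that configurability, the sketch below passes circuit-breaker properties to a command through the Setter; the threshold values are made up for the example, and the property names come from HystrixCommandProperties:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandProperties;

public class ConfiguredCommand extends HystrixCommand<String> {

    protected ConfiguredCommand() {
        super(Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey("ConfiguredGroup"))
                .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                        // consider opening the circuit only after at least 20 requests in the window...
                        .withCircuitBreakerRequestVolumeThreshold(20)
                        // ...of which 50% or more have failed
                        .withCircuitBreakerErrorThresholdPercentage(50)
                        // keep the circuit open for 5 seconds before letting a test request through
                        .withCircuitBreakerSleepWindowInMilliseconds(5000)));
    }

    @Override
    protected String run() {
        return "OK"; // stand-in for the real remote call
    }

    @Override
    protected String getFallback() {
        return "fallback";
    }
}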


I think, this much should suffice for now. More details and examples on using it would be taken up another time.


Sunday, 22 March 2015

Git Checkout and Long Filenames

This is just a short note that could be helpful when doing a 'git' checkout of projects/files with longer-than-usual names. A couple of days back, I had to check out a 'git' project owned and managed by another team. The project was pretty big, so the 'git' clone operation ran for hours but eventually ended with the following message.

"cannot create directory.....
warning: Filename too long
warning: clone succeeded but checkout failed..."


Yes, it's baffling. Yes, it's cumbersome to have filenames that long, but there was nothing I could do about it, till I came across the following solution. Run the following command to allow long filenames with 'git' on a Windows system:

"git config --system core.longpaths true"

And then, if the clone had succeeded, use the following command to complete the checkout process, and one is good to go.

"git checkout -f HEAD"



Tuesday, 27 January 2015

Java 8: Lambdas



The latest version of Java, namely Java 8, has created quite a stir; not just because it brings enhancements over the previous versions with its new Date API, repeating annotations, JavaFX and so on, but more so because it makes room for some interesting features of the functional programming paradigm in its otherwise truly object-oriented kingdom. Two of these features are Lambdas and the Streams API.

In this article we are going to talk about Lambda.
Historically, the concept of Lambda traces back to the Lambda calculus formulated by Alonzo Church in the 1930s. In a crude interpretation, a Lambda can be thought of as representing a function, most often an anonymous function. The concept slowly streamed into programming. Come to think of it, the advent of computers and of programming languages to communicate with them was paved by the need to perform complex mathematical calculations correctly and quickly; hence the conceptual bonding between mathematics and computer programming. Anyway, it is useful to think of Lambdas as anonymous functions that can be assigned to variables, passed as arguments to functions and returned from functions. Many programming languages like Ruby, Python, Scala and JavaScript already have built-in Lambda functionality, and now Java 8 has accommodated it too. But Java, an inherently object-oriented language, did some very clever tricks to make this accommodation while retaining backward compatibility.

In the kingdom of Java everything is an object, but Lambdas are functions. How do they get ‘citizenship’? Hence the introduction of the concept of a Functional Interface, under which Lambdas can be classed. But before delving into its nitty-gritty, there is one important feature to take note of: Java 8 allows interfaces to have implementations of methods, typed as “default” and “static” methods. Default methods can be inherited and overridden much like the methods of “Abstract” classes. Yes, that brings interfaces closer to “Abstract” classes, but they are still interfaces: they still support “multiple inheritance”, which “Abstract” classes don't.

Example of default and static methods in Java 8 interfaces.

public interface myFunctionalInterface{

    default void method1(){}
    static void doSomething(){ System.out.println("Do something"); }

}

Now, a Functional Interface is a normal Java 8 interface that has exactly one abstract method. Such interfaces are sometimes tagged as SAM (Single Abstract Method) types. An annotation, namely “@FunctionalInterface”, has also been introduced to mark such interfaces; it prompts a compile-time error if the agreement is not adhered to, that is, if more than one abstract method is introduced in the interface.

@FunctionalInterface
public interface myFunctionalInterface2{
    public void myAbstractMethod();
    default void myDefaultMethod(){}
   //public void mySecAbstractMethod();  throws compile time error
}

So the type of a Lambda maps to the signature of the abstract method defined in the functional interface. This has been made possible by a clever exploitation of invokedynamic, a feature introduced in Java 7.
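For instance, here is a minimal sketch (with a made-up interface name) showing that the shape of the Lambda, one String in and one String out, is what lets it be assigned to the interface type:

@FunctionalInterface
interface StringTransformer {
    String transform(String input); // the single abstract method (SAM)
}

public class LambdaTypeDemo {
    public static void main(String[] args) {
        // The Lambda's signature matches transform(String), so it becomes an instance of StringTransformer
        StringTransformer upper = s -> s.toUpperCase();
        System.out.println(upper.transform("hello")); // prints HELLO
    }
}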

Now that we are clear on the basics, let's get to know Lambdas through some short and simple sample programs  to enable a stronger grip on the concept.

Case: Existing SAM Interfaces
There are interfaces in Java written prior to Java 8 that have only one abstract method. And guess what, a Lambda can be supplied to such functional interfaces. 
 Example: Runnable r = () -> System.out.println("Lambda for runnable interface");
                 r.run(); //prints "Lambda for runnable interface"

Case: Overriding default method
Example: @FunctionalInterface
                 public interface myInterface1 {
                          default void methodTest(){ System.out.println("Test Method"); }
                          public void abstractMethod();
                 }

                 public interface myInterface2 extends myInterface1 {
                           default void methodTest(){
                                   System.out.println("Overridden Test Method");
                           }
                  }


Case: Conflict scenario with default method
Example: @FunctionalInterface
                  interface A {
                        default void someMethod(){};
                        public void abstractMethodA();
                  }

                  @FunctionalInterface
                  interface B {
                        default void someMethod(){};
                        public void abstractMethodB();
                  }

                  class AB implements A, B {
                         .............
                  }
               // A compile error would be thrown if class AB did not explicitly specify which 'someMethod()' it refers to or provide its own implementation of 'someMethod()'.
                                                                             
Case: Multiple-inheritance with default methods
 Example: @FunctionalInterface
                  interface Parent {
                         default void testOne() {
                              System.out.println("Test Parent");
                         }
                         public void abstractMethod();
                  }

                   interface Child extends Parent {
                           @Override
                            default void testOne(){
                                        System.out.println("Test Child");
                            }
                            public void abstractChildMethod();
                    }
                   // Note: Child is not marked @FunctionalInterface because it has two abstract methods: its own and the one inherited from Parent.

                  class Test implements Child {
                          ...........
                  }

                  Invoking testOne() on an instance of the Test class would print "Test Child"

 
Case: Local Variable access within Lambda
Local variables which are declared final or are effectively final, meaning their values are not modified, are accessible within a Lambda.
Example: public void someMethod(){
                          int x = 10;
                          Runnable r = () -> System.out.println("The variable is: "+x);
                          r.run();
                 }

Case: Using one of the functional interfaces provided in the java.util.function package
There are some 43 functional interfaces provided in Java 8 (Supplier, Consumer, Predicate etc.) in the java.util.function package, for ease of use by developers. One can simply provide a Lambda that matches the signature of the abstract method defined in these interfaces.
Example: Consumer<Integer> consumer = x -> System.out.println("Printing x: "+ x);
                List<Integer> list = Arrays.asList(34,22,12,456);
       
                list.forEach(consumer);
Case: Using existing custom methods as Lambda
Example: @FunctionalInterface
                 public interface FuncInterface1 {
                      public String encodePassword(String passwd, int id);
                  }
              public class SampleFunctionalIf{ 
                       public static FuncInterface1 sampleEncode(){
                              return (password, i) -> password.toLowerCase();   // **
                       }
                      public static void doSomething(FuncInterface1 func){
                           String str = func.encodePassword("PassWord", 123);
                           System.out.println(str);
                      }   
                   public static void main(String[] args){
                           doSomething((password, i) -> password.toUpperCase());   // **
               
                          FuncInterface1 f = sampleEncode();
                         doSomething(f);               
                    }
               }

Note: The Lambdas marked with ** map to encodePassword() in the functional interface. Note that the signature of the Lambda has to match the signature of the abstract method.

Case: Method references
 Lambdas also allow referencing methods of classes or class instances using the "::" (double colon) operator.
Example: public class Person {
                    String firstName, lastName;
                    public Person(){
                          System.out.println("New Person created from default constructor");
                       }
                    // constructor whose signature matches PersonFactory.create(String, String)
                    public Person(String firstName, String lastName){
                          this.firstName = firstName;
                          this.lastName = lastName;
                       }
                   }
                  
             public interface PersonFactory<P extends Person> {
                    P create(String fname, String lname);
             }

           public class MethodRefSample1 {
              public static void main(String[] args){
                     PersonFactory<Person> p = Person::new;
                     p.create("Carlie", "Hebdo");
               }
           } 

So, that’s how Lambdas make programming much more succinct, quick and easy. And of course, they can be used to write pretty complex code as well.

In no way do the above examples cover the length and breadth of Lambdas, but they do provide a quick peek at what Lambdas in Java 8 are all about and what they are capable of.


The Streams API exploits Lambdas to make it much easier to perform operations on Java Collections both sequentially and in parallel. Will talk more on Streams in another article.