Saturday 27 February 2016

ORM (Object Relational Mapping) is Not Magic at All

How does ORM work in the background?
As the name itself suggests, an ORM does two things:

  • Maps objects to relational database tables and keeps the metadata (mapping information), e.g. the class against the table name and the fields of the class against the columns of the table.
  • Generates SQL on demand in the background using the metadata and the Java Reflection API, e.g. to save an object it generates an INSERT query for the table and columns the object is mapped to. Similarly, for find, update and delete it generates SELECT, UPDATE and DELETE queries respectively.

Here I have demonstrated how Hibernate (one of the ORM implementations) works internally, using only the Java core API (including the Reflection API) and the JDBC API — no Hibernate API. Download/fork the project from https://github.com/PrabhatKJena/HibernatePOC

First of all, let's see what the code looks like when we are working with an ORM.
Test.java
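The full Test.java is in the repository above; below is a minimal sketch of what such client code might look like. The Student entity, its data values and the exact method names are assumptions; Configuration, SessionFactory and Session are the custom classes described further down.

public class Test {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration();        // reads the JDBC properties
        SessionFactory factory = cfg.buildSessionFactory();
        Session session = factory.openSession();        // wraps a JDBC Connection

        Student student = new Student(101, "John");     // a mapped entity
        session.persist(student);                       // runs a generated INSERT
        System.out.println("Saved : " + student);

        // session.get(Student.class, 101), session.update(...) and
        // session.delete(...) work the same way through generated SQL.
        session.close();
    }
}

// A hypothetical entity mapped with the custom @Entity/@Column annotations
// shown further below.
@Entity(table = "STUDENT")
class Student {
    @Column(name = "ID")
    private int id;
    @Column(name = "NAME")
    private String name;

    Student(int id, String name) { this.id = id; this.name = name; }

    @Override
    public String toString() { return "Student[" + id + ", " + name + "]"; }
}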
Output:
Here you can observe that the code looks like Hibernate code, but no Hibernate API is used.

Now let's have a look at the code working in the background.
This is the config file (like the hibernate-configuration file) where the JDBC connection details are configured.

These are our custom annotations, similar to @Entity and @Column in Hibernate.
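A minimal sketch of how such annotations could be declared; the attribute names table and name are assumptions, the repository defines its own.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks a class as persistent and optionally maps it to a table name.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Entity {
    String table() default "";   // table name; defaults to the class name
}

// Maps a field to a database column.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Column {
    String name() default "";    // column name; defaults to the field name
}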

This is the Configuration class, which is responsible for loading the JDBC driver and setting the JDBC properties on the SessionFactory (nothing but a JDBC Connection factory).

This class is like a JDBC Connection factory: it creates a JDBC Connection and returns it wrapped in a Session object.
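A rough sketch of how such a Configuration/SessionFactory pair could sit on top of plain JDBC; the property file name, the property keys and the method names are assumptions, and the Session class is sketched just below.

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Loads the JDBC driver and the connection properties from a config file.
class Configuration {
    private final Properties props = new Properties();

    public SessionFactory buildSessionFactory() throws Exception {
        props.load(new FileInputStream("db.properties"));  // assumed config file name
        Class.forName(props.getProperty("jdbc.driver"));   // load the JDBC driver
        return new SessionFactory(props);
    }
}

// Nothing but a JDBC Connection factory: each openSession() hands out a
// Session wrapping a fresh Connection.
class SessionFactory {
    private final Properties props;

    SessionFactory(Properties props) { this.props = props; }

    public Session openSession() throws Exception {
        Connection con = DriverManager.getConnection(
                props.getProperty("jdbc.url"),
                props.getProperty("jdbc.username"),
                props.getProperty("jdbc.password"));
        return new Session(con);
    }
}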

This is the important class (like Session in Hibernate), which provides methods to persist, update, get and delete an entity.
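A sketch of such a Session; persist, update, get and delete are the operations listed above, while the QueryBuilder helper it delegates to is an assumed name, sketched further below.

import java.sql.Connection;

// A thin Session facade over a JDBC Connection (like Hibernate's Session).
// The real work -- building and running the SQL -- is delegated to a
// reflection-based query builder.
class Session {
    private final Connection con;

    Session(Connection con) { this.con = con; }

    public void persist(Object entity) throws Exception {
        QueryBuilder.insert(con, entity);                 // INSERT ... VALUES ...
    }

    public <T> T get(Class<T> type, Object id) throws Exception {
        return QueryBuilder.selectById(con, type, id);    // SELECT ... WHERE id = ?
    }

    public void update(Object entity) throws Exception {
        QueryBuilder.update(con, entity);                 // UPDATE ... SET ...
    }

    public void delete(Object entity) throws Exception {
        QueryBuilder.delete(con, entity);                 // DELETE FROM ... WHERE ...
    }

    public void close() throws Exception { con.close(); }
}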

This is a helper class which holds the metadata (mapping information) about the entity class.


This is the most important helper class, responsible for the entire secret behind ORM.
  • Introspects the entity class and stores the metadata (mapping information)
  • Generates the SQL on demand for every operation (insert, update, find, delete) using the Java Reflection API
The entire framework works with the Reflection and Annotation APIs, as the sketch below illustrates.
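As an illustration, a stripped-down version of the INSERT generation could look like the following. It reuses the @Entity/@Column sketch above; the class name, method names and everything else here are assumptions, and the real class in the repository is richer.

import java.lang.reflect.Field;
import java.sql.Connection;
import java.sql.PreparedStatement;

// Builds and executes SQL for any @Entity-annotated object using reflection.
class QueryBuilder {

    static void insert(Connection con, Object entity) throws Exception {
        Class<?> type = entity.getClass();
        Entity e = type.getAnnotation(Entity.class);
        String table = e.table().isEmpty() ? type.getSimpleName() : e.table();

        StringBuilder cols = new StringBuilder();
        StringBuilder params = new StringBuilder();
        Field[] fields = type.getDeclaredFields();
        for (Field f : fields) {
            Column c = f.getAnnotation(Column.class);
            if (c == null) continue;                       // skip unmapped fields
            if (cols.length() > 0) { cols.append(", "); params.append(", "); }
            cols.append(c.name().isEmpty() ? f.getName() : c.name());
            params.append("?");
        }

        String sql = "INSERT INTO " + table + " (" + cols + ") VALUES (" + params + ")";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            int index = 1;
            for (Field f : fields) {
                if (f.getAnnotation(Column.class) == null) continue;
                f.setAccessible(true);                     // read private field values
                ps.setObject(index++, f.get(entity));
            }
            ps.executeUpdate();
        }
    }

    // SELECT, UPDATE and DELETE are generated the same way from the metadata;
    // they are left out of this sketch.
    static <T> T selectById(Connection con, Class<T> type, Object id) throws Exception {
        throw new UnsupportedOperationException("left out of this sketch");
    }
    static void update(Connection con, Object entity) throws Exception {
        throw new UnsupportedOperationException("left out of this sketch");
    }
    static void delete(Connection con, Object entity) throws Exception {
        throw new UnsupportedOperationException("left out of this sketch");
    }
}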




Thank you for visiting the blog. Please feel free to share your comments/suggestions.





Friday 26 February 2016

4 x 4 Sliding Block Puzzle

Simple steps to have fun :
  • Download the file Sliding Block Puzzle
  • Make sure the file is saved as a .html file; if not, rename it to <filename>.html
  • Open the downloaded file in a browser
  • Select any pattern you want to solve from the left side, then start the game
     Enjoy !!!


Sunday 14 February 2016

Spring Data MongoDB Integration

A brief on MongoDB:
  • MongoDB is a cross-platform, document-oriented database and a leading NoSQL database.
  • It provides high performance, high availability and easy scalability. It works on the concepts of collections and documents.
A few MongoDB terms and their RDBMS equivalents:
  • Database -- Database
  • Collection -- Table
  • Document -- Record (row)
  • Field -- Column
For more on MongoDB, please visit : MongoDB Tutorials

A simple example with MongoDB :
     Get the source here https://github.com/PrabhatKJena/SpringDataMongo
  • Installation guide : Install MongoDB 
  • Libraries : include mongo-java-driver-2.11.0.jar in the classpath
  • Create a database named "booksDB"
  • Create a collection named "books"
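A minimal sketch of the driver-level code, assuming MongoDB is running locally on the default port 27017; the class name and the book fields here are illustrative, the full source is in the repository above.

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.MongoClient;

public class MongoJavaExample {
    public static void main(String[] args) throws Exception {
        // Connect to the local MongoDB instance (default port 27017).
        MongoClient mongo = new MongoClient("localhost", 27017);
        DB db = mongo.getDB("booksDB");                   // database created above
        DBCollection books = db.getCollection("books");   // collection created above

        // Insert one document; the fields are just illustrative.
        BasicDBObject book = new BasicDBObject("title", "Effective Java")
                .append("author", "Joshua Bloch")
                .append("price", 550);
        books.insert(book);

        // Read every document back and print it.
        DBCursor cursor = books.find();
        while (cursor.hasNext()) {
            System.out.println(cursor.next());
        }
        mongo.close();
    }
}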
Output : 
Spring Data Integration with MongoDB:

Here we will see an example using MongoTemplate (similar to JdbcTemplate) and a Repository.

Defining the document class (like an entity class).
Book.java
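The exact fields are not reproduced here; a minimal sketch of Book.java, assuming title, author and price fields:

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

// Maps this class to documents in the "books" collection.
@Document(collection = "books")
public class Book {

    @Id
    private String id;          // mapped to the _id field of the document
    private String title;
    private String author;
    private double price;

    public Book() { }

    public Book(String title, String author, double price) {
        this.title = title;
        this.author = author;
        this.price = price;
    }

    // getters/setters omitted for brevity

    @Override
    public String toString() {
        return "Book[id=" + id + ", title=" + title
                + ", author=" + author + ", price=" + price + "]";
    }
}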
Defining the annotation-based Spring configuration class
SpringConfiguration.java
Note : Add @EnableMongoRepositories to enable all repositories defined
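A sketch of such a configuration class, assuming the database booksDB on a local MongoDB instance; the real class in the repository may wire things differently.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories;

import com.mongodb.MongoClient;

@Configuration
@EnableMongoRepositories   // scans for repository interfaces (use basePackages to point elsewhere)
public class SpringConfiguration {

    // MongoTemplate plays the same role for MongoDB that JdbcTemplate plays for JDBC.
    @Bean
    public MongoTemplate mongoTemplate() throws Exception {
        return new MongoTemplate(new MongoClient("localhost", 27017), "booksDB");
    }
}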

Defining the repository for the Book entity
BookRepository.java
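A sketch of the repository interface; the derived finder method shown here is an assumption.

import java.util.List;

import org.springframework.data.mongodb.repository.MongoRepository;

// Spring Data generates the implementation of this interface at runtime.
public interface BookRepository extends MongoRepository<Book, String> {

    // Derived query: finds all books by the given author.
    List<Book> findByAuthor(String author);
}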
Defining the test class (main class). In a real application this could be a service class.
Main.java
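A sketch of the main class, wiring the annotation-based configuration above; the data values are illustrative.

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class Main {
    public static void main(String[] args) {
        AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(SpringConfiguration.class);

        BookRepository repository = context.getBean(BookRepository.class);

        // Save one document and read everything back.
        repository.save(new Book("Effective Java", "Joshua Bloch", 550));
        for (Book book : repository.findAll()) {
            System.out.println(book);
        }

        context.close();
    }
}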
Output:
Thank you for visiting the blog. Please feel free to share your comments/suggestions.

Saturday 13 February 2016

Secrets of Double Brace Initialization

What is Double Brace Initialization (DBI)? 
Using this feature, we can create and initialize an object in a single statement.
Confused? Let's see a simple example: we have a requirement to initialize all the items of a vegetable shop with their prices.
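The original listing is not reproduced here; a sketch of the conventional version might look like this. The class name is an assumption, and since HashMap is used the printed order can differ slightly from the output shown below.

import java.util.HashMap;
import java.util.Map;

public class VegetableShopConventional {
    public static void main(String[] args) {
        // The conventional way: three maps, created and filled step by step.
        Map<String, Integer> fruits = new HashMap<String, Integer>();
        fruits.put("Apple", 120);
        fruits.put("Grapes", 80);
        fruits.put("Orange", 40);

        Map<String, Integer> vegetables = new HashMap<String, Integer>();
        vegetables.put("Potato", 22);
        vegetables.put("Onion", 30);
        vegetables.put("Tomato", 60);

        Map<String, Map<String, Integer>> items = new HashMap<String, Map<String, Integer>>();
        items.put("FRUITS", fruits);
        items.put("VEGETABLES", vegetables);

        System.out.println("Items available with Price : ");
        System.out.println(items);
    }
}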

Output:
Items available with Price :
{FRUITS={Apple=120, Grapes=80, Orange=40}, VEGETABLES={Potato=22, Onion=30, Tomato=60}}

This is how we normally do it, and it needs 11 lines of code to create and initialize 3 maps.

Now we will rewrite the same requirement using the DBI feature and see how handy it is for developers.
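A sketch of the DBI version; the class name DoubleBraceInitialization1 is taken from the class-file discussion further down, and with three double-brace blocks it produces exactly the one main class plus three inner classes mentioned there.

import java.util.HashMap;
import java.util.Map;

public class DoubleBraceInitialization1 {
    public static void main(String[] args) {
        // Each {{ ... }} pair creates an anonymous subclass of HashMap whose
        // initializer runs the put() calls -- 3 maps, 3 inner classes.
        Map<String, Map<String, Integer>> items = new HashMap<String, Map<String, Integer>>() {{
            put("FRUITS", new HashMap<String, Integer>() {{
                put("Apple", 120);
                put("Grapes", 80);
                put("Orange", 40);
            }});
            put("VEGETABLES", new HashMap<String, Integer>() {{
                put("Potato", 22);
                put("Onion", 30);
                put("Tomato", 60);
            }});
        }};

        System.out.println("Items available with Price : ");
        System.out.println(items);
    }
}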
Output: (Same as previous)
Items available with Price : 
{FRUITS={Apple=120, Grapes=80, Orange=40}, VEGETABLES={Potato=22, Onion=30, Tomato=60}}

Now see the secret behind this.
Let's look at a simple example with DBI.
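A sketch consistent with the output shown below; the package and class name come from the printed class name edu.pk.core.DoubleBraceInitialization2$1.

package edu.pk.core;

import java.util.HashMap;
import java.util.Map;

public class DoubleBraceInitialization2 {
    public static void main(String[] args) {
        Map<String, Integer> vegetables = new HashMap<String, Integer>() {{
            put("Potato", 22);
            put("Onion", 30);
            put("Tomato", 60);
        }};

        // The map is actually an instance of a compiler-generated anonymous
        // subclass of HashMap, hence the $1 in the printed class name.
        System.out.println(vegetables.getClass().getName());
        System.out.println(vegetables);
    }
}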
Output:
edu.pk.core.DoubleBraceInitialization2$1

{Potato=22, Onion=30, Tomato=60}

We know that when we use an anonymous class, the compiler creates a separate .class file for the inner class of the enclosing class, containing the provided overridden methods. In a similar way, for the DBI feature the compiler also creates a separate .class file for the inner class, and it copies the statements written inside {{ and }} into the constructor of that inner class.
In the output, we can see that it prints the class name as edu.pk.core.DoubleBraceInitialization2$1
And the inner class will look like this:
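Roughly, the generated DoubleBraceInitialization2$1.class corresponds to source like the following; a legal stand-in name is used here, since a name containing $ cannot be written directly.

import java.util.HashMap;

// Source-level equivalent of the generated DoubleBraceInitialization2$1 class.
class GeneratedVegetableMap extends HashMap<String, Integer> {
    GeneratedVegetableMap() {
        super();              // implicit call to the HashMap constructor
        // the statements written between {{ and }} end up here
        put("Potato", 22);
        put("Onion", 30);
        put("Tomato", 60);
    }
}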
Observe that the compiler simply copied all the lines written inside {{ and }} and put them inside the constructor body.

So if we understand the secret behind it, we can be sure that we can instantiate and initialize an object of our own class using the DBI feature, provided the class has a non-private constructor that the anonymous subclass can use, because inheritance is not possible if the superclass has only private constructors.

So it is cool, right? But wait a minute... I want to show you something.
See how many .class files are created when using DBI for the class DoubleBraceInitialization1.java: 1 for the main class and 3 for the inner classes used for the map initialization. So imagine how many class files would be created if we used this kind of code throughout an application. These files are generated at compile time, but every extra class has to be loaded, which is a burden on the class loader, and each DBI expression creates an instance of an anonymous subclass which, when used inside an instance method, also holds a hidden reference to its enclosing object. Because of these issues, it is not recommended to use this type of initialization.

Warning:
Use of this can lead to performance issues and memory leak problems.


Thank you for visiting the blog. Please feel free to share your comments/suggestions.

Monday 8 February 2016

Secret of java.lang.ThreadLocal - Part II

What is ThreadLocal ?

The ThreadLocal class is provided by the Java core API as part of the java.lang package. It is used to maintain an individual local copy of a value for each thread. Even if two threads execute the same code and that code holds a reference to a ThreadLocal variable, the two threads cannot see each other's ThreadLocal values.

Let's see an example to understand better.
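A minimal sketch of what the shared-counter example might look like; the class, method and thread names are assumptions. Both threads increment one shared currentId, so together they produce 101..110 (interleaved as in the output below) instead of each counting from 100.

public class ThreadLocalDemo {

    static class IdGenerator {
        private int currentId;

        IdGenerator(int initialValue) {
            this.currentId = initialValue;
        }

        // Not thread-confined: every thread increments the same field.
        synchronized int nextId() {
            return ++currentId;
        }
    }

    public static void main(String[] args) {
        final IdGenerator generator = new IdGenerator(100);

        Runnable first = new Runnable() {
            public void run() {
                for (int i = 0; i < 5; i++) {
                    System.out.println("First generated : " + generator.nextId());
                }
            }
        };
        Runnable second = new Runnable() {
            public void run() {
                for (int i = 0; i < 5; i++) {
                    System.out.println("Second generated : " + generator.nextId());
                }
            }
        };

        new Thread(first).start();
        new Thread(second).start();
    }
}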


Output:
First generated : 101
Second generated : 102
Second generated : 104
Second generated : 105
Second generated : 106
Second generated : 107
First generated : 103
First generated : 108
First generated : 109
First generated : 110

Here we expect the two generator threads to generate IDs starting from the supplied initial value (100). But observe in the output that, since the threads share the common variable currentId, the program fails to generate the required IDs, as there is only one copy of the variable for all threads. To resolve this, we have to maintain an individual copy for each thread so that there is no conflict between the threads.

Example with our own custom ThreadLocal class (MyThreadLocal.java)
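A minimal sketch of such a MyThreadLocal: a simplified stand-in that keeps one value per thread in a map keyed by the Thread object (the real java.lang.ThreadLocal stores values in a per-thread map inside Thread itself, and also cleans up after dead threads).

import java.util.HashMap;
import java.util.Map;

// A simplified ThreadLocal: each thread sees only its own value.
public class MyThreadLocal<T> {

    private final Map<Thread, T> values = new HashMap<Thread, T>();

    // Subclasses override this to supply the starting value for each thread.
    protected T initialValue() {
        return null;
    }

    public synchronized T get() {
        Thread current = Thread.currentThread();
        if (!values.containsKey(current)) {
            values.put(current, initialValue());
        }
        return values.get(current);
    }

    public synchronized void set(T value) {
        values.put(Thread.currentThread(), value);
    }
}

In the generator, currentId then becomes a MyThreadLocal<Integer> whose initialValue() returns 100, and nextId() does set(get() + 1) and returns the new value; each thread therefore counts 101, 102, ... on its own copy, which is what the output below shows.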


Output:
First generated : 101
First generated : 102
First generated : 103
First generated : 104
Second generated : 101
First generated : 105
Second generated : 102
Second generated : 103
Second generated : 104
Second generated : 105

Now each thread generates its IDs starting from 100 independently, using the same shared variable currentId, but declared as a MyThreadLocal.

java.lang.ThreadLocal works in a similar fashion. So instead of creating your own custom ThreadLocal class, you can use the built-in one, as it has more flexibility and other benefits. I wrote this MyThreadLocal class only to demonstrate how java.lang.ThreadLocal works internally.



Thank you for visiting this blog. Please feel free to share your comments/suggestions.


Tuesday 2 February 2016

Implementing an LRU (Least Recently Used) cache is simpler than you actually think

What is a Cache?
A cache is an area of local memory that holds a copy of frequently accessed data (a limited amount of data) so that future requests for that data can be served faster.

Next, what is LRU?
This is the eviction technique which discards the least recently used items from the cache first.

Implementing LRU cache in Java is as simple as just instantiating java.util.LinkedHashMap.
So before going forward, first of all we need to know the secret of LinkedHashMap
(Refer my previous blog http://secretsinjava.blogspot.in/2015/11/know-linkedhashmap-in-depth.html  for more details).

Now the Secrets :
#1. accessOrder : a boolean flag which is passed to the LinkedHashMap constructor while instantiating. When it is true, every accessed entry is moved to the tail of the doubly linked list which LinkedHashMap uses internally, so the least recently used entry is always at the head.

#2. protected boolean removeEldestEntry(Map.Entry<K, V> eldest)
This is the method of LinkedHashMap which decides whether the map should remove its eldest entry or not. It is invoked by put() and putAll() after a new entry is inserted into the map. By default this method returns false, so every entry added to the map is kept. It can be overridden by the implementor to limit the number of entries kept in the map.

Example : 
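A sketch of such an example; the first three values are not shown in the post, so placeholders are used, but with five puts and a limit of 2 only the last two entries survive, matching the output below.

import java.util.LinkedHashMap;
import java.util.Map;

public class RemoveEldestEntryDemo {
    public static void main(String[] args) {
        // Keep at most 2 entries: evict the eldest whenever the size exceeds 2.
        Map<String, Integer> map = new LinkedHashMap<String, Integer>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > 2;
            }
        };

        map.put("A1", 1111);   // placeholder value
        map.put("A2", 1222);   // placeholder value
        map.put("A3", 1333);   // placeholder value
        map.put("A4", 2552);
        map.put("A5", 2444);

        System.out.println(map);   // only the last two entries remain
    }
}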
Output :
{A4=2552, A5=2444}

Here the overridden method returns true when the size is more than 2, so the map deletes its eldest entry whenever a new entry is put into it beyond that limit.

Now that we understand the secrets of LinkedHashMap, implementing the LRU cache is just a matter of creating a LinkedHashMap with the above two properties.
Example :
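The exact sequence of puts and gets from the post is not reproduced here, so the output differs, but the construction is the point: accessOrder = true plus a removeEldestEntry() capped at CACHE_SIZE = 4.

import java.util.LinkedHashMap;
import java.util.Map;

// An LRU cache built directly on LinkedHashMap: accessOrder = true keeps the
// most recently used entry at the tail, and removeEldestEntry() caps the size.
public class LRUCache<K, V> extends LinkedHashMap<K, V> {

    private static final int CACHE_SIZE = 4;

    public LRUCache() {
        super(16, 0.75f, true);    // initial capacity, load factor, accessOrder
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > CACHE_SIZE;   // evict the LRU entry beyond CACHE_SIZE
    }

    public static void main(String[] args) {
        LRUCache<String, String> cache = new LRUCache<String, String>();
        cache.put("A1", "Theta345");
        cache.put("A2", "Theta234");
        cache.put("A3", "Theta11");
        cache.put("A4", "Theta4");
        cache.get("A1");              // A1 becomes the most recently used entry
        cache.put("A5", "Theta5");    // size exceeds 4, so the LRU entry (A2) is evicted
        System.out.println(cache);    // the 4 surviving entries, least recently used first
    }
}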

Output : 
{A6=Theta6, A1=Theta345, A3=Theta11, A2=Theta234}

Here we have set CACHE_SIZE = 4, so the map deletes the eldest entry whenever the size exceeds 4. The final content of the map is the last 4 accessed keys.

That's all about the LRU implementation using java.util.LinkedHashMap. Instead of LinkedHashMap, you can also implement your own Map that uses a doubly linked list internally.


Please feel free to give your comment/suggestion. Thank you.