Friday, October 5, 2007

Understanding "inverse" mapping attribute

Generality

This page intends to give an internal view and understanding of inverse="true". Please, please, please read the Hibernate reference guide and especially:

  • Mapping a collection
  • Bidirectional Association
  • Parent Child Relationships

and the FAQs (the official ones and the one from the Wiki) before reading this.

Inverse defines which side is responsible for maintaining the association. The side with inverse="false" (the default value) has this responsibility (and will generate the appropriate SQL queries: insert, update or delete). Changes made to the association on the inverse="true" side are not persisted to the database.

The inverse attribute is not related in any way to navigating the relationship. It is related to the way Hibernate generates SQL queries to update the association data. The association data are:

  • the foreign key column in a one-to-many association
  • a row in the association table in a many-to-many association

A unidirectional association is managed by the only side available through navigation. When the association is bidirectional, choosing which side manages it allows better SQL optimization; this is the recommended approach.
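
Whichever side Hibernate treats as the owner, both sides still have to be kept consistent in Java. A small convenience method on the parent is a common way to do that; the sketch below is my own illustration (the method and field names are assumptions, not part of the samples that follow):

// Hypothetical helper on the Parent class: keeps both sides of the
// bidirectional association in sync in memory.
public void addChild(Child child) {
    if (children == null) {
        children = new HashSet();
    }
    children.add(child);   // parent -> child (the inverse side in the mapping below)
    child.setParent(this); // child -> parent (the side Hibernate persists)
}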

one-to-many sample

Let's have a look at a simple one-to-many sample. Setting inverse="true" is recommended and allows SQL optimization.

Note that <many-to-one> is always inverse="false" (the attribute does not exist).

<class name="net.sf.test.Parent" table="parent">
    <id name="id" column="id" type="long" unsaved-value="null">
        <generator class="sequence">
            <param name="sequence">SEQ_DEFAULT</param>
        </generator>
    </id>
    <set name="children" lazy="true" inverse="true">
        <key column="parent_id"/>
        <one-to-many class="net.sf.test.Child"/>
    </set>
</class>

<class name="net.sf.test.Child" table="child">
    <id name="id" column="id" type="long" unsaved-value="null">
        <generator class="sequence">
            <param name="sequence">SEQ_DEFAULT</param>
        </generator>
    </id>
    <many-to-one name="parent" column="parent_id" not-null="true"/>
</class>

Here inverse="true" is set on the "one" side (the parent's collection).

Proper code

Parent p = new Parent();
Child c = new Child();
p.setChildren(new HashSet());
p.getChildren().add(c);
c.setParent(p);

session.save(p);
session.save(c);
session.flush();

This will issue the following SQL queries:

Hibernate: select SEQ_DEFAULT.nextval from dual
Hibernate: select SEQ_DEFAULT.nextval from dual
Hibernate: insert into parent (id) values (?)
Hibernate: insert into child (parent_id, id) values (?, ?)

Hibernate inserts the parent, then inserts the child. Note that my database has a NOT NULL foreign key constraint on child(parent_id); the inserts work fine because I set <many-to-one not-null="true"/>.

Note that I explicitly save the parent and child objects. A better way is to use the cascade="save-update" attribute. I did not do it here to keep this explanation easier to understand and to avoid mixing concepts.
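
For reference, here is a hedged sketch of how the saving code could shrink once cascade="save-update" is declared on the <set> mapping; this is my illustration, not part of the original sample:

// Assuming the Parent mapping declares:
//   <set name="children" lazy="true" inverse="true" cascade="save-update"> ...
// saving the parent cascades to the child, so no explicit session.save(c) is needed.
Parent p = new Parent();
Child c = new Child();
p.setChildren(new HashSet());
p.getChildren().add(c);
c.setParent(p);

session.save(p); // cascades to c through the children collection
session.flush();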

inverse="true" sample

Insert

Parent p = new Parent();
Child c = new Child();
p.setChildren(new HashSet());
p.getChildren().add(c);
c.setParent(p);

session.save(p);
session.flush(); //flush to DB
System.out.println("Parent saved");

session.save(c);
System.out.println("Child saved");
session.flush(); //flush to DB

This will issue the following SQL queries:

Hibernate: select SEQ_DEFAULT.nextval from dual
Hibernate: insert into parent (id) values (?)
Parent saved
Hibernate: select SEQ_DEFAULT.nextval from dual
Hibernate: insert into child (parent_id, id) values (?, ?)
Child saved

As you can see, the relationship (materialized by the parent_id column) is set when the child is saved: that is the child's responsibility. When the parent is saved, nothing is done to the relationship.

Update

Let's have a look at a relationship update

Parent p = (Parent) session.load(Parent.class, parentId);
Parent p2 = (Parent) session.load(Parent.class, parentId2);

c = (Child) session.find(
"from Child as child where child.parent = ?",
p, Hibernate.entity(Parent.class)).get(0);

// change parent of child c from p to p2
p.getChildren().remove(c);
p2.getChildren().add(c);
c.setParent(p2);

This will issue the following SQL queries:

Hibernate: select parent0_.id as id from parent parent0_ where parent0_.id=? //get parent 1
Hibernate: select parent0_.id as id from parent parent0_ where parent0_.id=? //get parent 2
Hibernate: select child0_.id as id, child0_.parent_id as parent_id from child child0_ where (child0_.parent_id=? ) //get children of parent 1

Hibernate: select child0_.id as id__, child0_.id as id, child0_.parent_id as parent_id from child child0_ where child0_.parent_id=?
Hibernate: select child0_.id as id__, child0_.id as id, child0_.parent_id as parent_id from child child0_ where child0_.parent_id=?
//load children of Parent 1 and 2 (can't avoid this with a set, see FAQ)

Hibernate: update child set parent_id=? where id=?

After properly setting both sides of the parent/child relationship in Java, Hibernate sets the parent_id column to the proper value. As you can see, only one update is executed.

Now, we'll see inverse="true" in action ;-)

Parent p = (Parent) session.load(Parent.class, parentId);
Parent p2 = (Parent) session.load(Parent.class, parentId2);

c = (Child) session.find(
"from Child as child where child.parent = ?",
p, Hibernate.entity(Parent.class)).get(0);

c.setParent(p2);

This will issue the following SQL queries:

Hibernate: select parent0_.id as id from parent parent0_ where parent0_.id=? //get parent 1
Hibernate: select parent0_.id as id from parent parent0_ where parent0_.id=? //get parent 2
Hibernate: select child0_.id as id, child0_.parent_id as parent_id from child child0_ where (child0_.parent_id=? ) //get children

Hibernate: update child set parent_id=? where id=?

The relationship is updated because I changed it on the child side. Note that the object tree is no longer consistent with the database (the children collections are not up to date). This is not recommended.

On the contrary,

Parent p = (Parent) session.load(Parent.class, parentId);
Parent p2 = (Parent) session.load(Parent.class, parentId2);

c = (Child) session.find(
"from Child as child where child.parent = ?",
p, Hibernate.entity(Parent.class)).get(0);

p.getChildren().remove(c);
p2.getChildren().add(c);

This will issue the following SQL queries:

Hibernate: select parent0_.id as id from parent parent0_ where parent0_.id=? //get parent 1
Hibernate: select parent0_.id as id from parent parent0_ where parent0_.id=? //get parent 2
Hibernate: select child0_.id as id, child0_.parent_id as parent_id from child child0_ where (child0_.parent_id=? ) //get children

Hibernate: select child0_.id as id__, child0_.id as id, child0_.parent_id as parent_id from child child0_ where child0_.parent_id=?
Hibernate: select child0_.id as id__, child0_.id as id, child0_.parent_id as parent_id from child child0_ where child0_.parent_id=?
//load children of Parent 1 and 2 (can't avoid this, see FAQ)

No relationship update is executed, because the change was made only on the parent (inverse) side.

inverse="false"

inverse="false" (the default value) is not optimized for bidirectional relationships.

<class name="net.sf.test.Parent" table="parent">
    <id name="id" column="id" type="long" unsaved-value="null">
        <generator class="sequence">
            <param name="sequence">SEQ_DEFAULT</param>
        </generator>
    </id>
    <set name="children" lazy="true" inverse="false">
        <key column="parent_id"/>
        <one-to-many class="net.sf.test.Child"/>
    </set>
</class>

<class name="net.sf.test.Child" table="child">
    <id name="id" column="id" type="long" unsaved-value="null">
        <generator class="sequence">
            <param name="sequence">SEQ_DEFAULT</param>
        </generator>
    </id>
    <many-to-one name="parent" column="parent_id" not-null="true"/>
</class>

Here inverse="false" is set on the "one" side (the parent's collection).

insert

Parent p = new Parent();
Child c = new Child();
p.setChildren(new HashSet());
p.getChildren().add(c);
c.setParent(p);

session.save(p);
session.save(c);
session.flush();

This will issue the following SQL queries:

Hibernate: select SEQ_DEFAULT.nextval from dual
Hibernate: select SEQ_DEFAULT.nextval from dual
Hibernate: insert into parent (id) values (?)
Hibernate: insert into child (parent_id, id) values (?, ?)
Hibernate: update child set parent_id=? where id=?

The parent is responsible for the relationship. Hibernate inserts the parent, inserts the child, then updates the relationship (on behalf of the parent). Two SQL statements touch the association (one insert and one update) instead of one.

Note that I cannot flush between session.save(p) and session.save(c), because the parent, which is responsible for the relationship, needs a persistent child to work with.

update

Let's have a look at a relationship update

Parent p = (Parent) session.load(Parent.class, parentId);
Parent p2 = (Parent) session.load(Parent.class, parentId2);

c = (Child) session.find(
"from Child as child where child.parent = ?",
p, Hibernate.entity(Parent.class)).get(0);

p.getChildren().remove(c);
p2.getChildren().add(c);
c.setParent(p2);

This will issue the following SQL queries:

Hibernate: select parent0_.id as id from parent parent0_ where parent0_.id=?    //get parent 1
Hibernate: select parent0_.id as id from parent parent0_ where parent0_.id=? //get parent 2
Hibernate: select child0_.id as id, child0_.parent_id as parent_id from child child0_ where (child0_.parent_id=? )
//get first child for parent 1

Hibernate: select child0_.id as id__, child0_.id as id, child0_.parent_id as parent_id from child child0_ where child0_.parent_id=?
Hibernate: select child0_.id as id__, child0_.id as id, child0_.parent_id as parent_id from child child0_ where child0_.parent_id=?
//load children of Parent 1 and 2 (can't avoid this, see FAQ)

Hibernate: update child set parent_id=? where id=? // child.setParent
Hibernate: update child set parent_id=null where parent_id=? //remove
Hibernate: update child set parent_id=? where id=? // add

As you can see, with inverse="false" the relationship is managed by both the parent side AND the child side. Several updates to the association data are executed. This is inefficient compared to the inverse="true" equivalent.

Parent p = (Parent) session.load(Parent.class, parentId);
Parent p2 = (Parent) session.load(Parent.class, parentId2);

c = (Child) session.find(
"from Child as child where child.parent = ?",
p, Hibernate.entity(Parent.class)).get(0);

p2.getChildren().add(c);

This will issue the following SQL queries:

Hibernate: select parent0_.id as id from parent parent0_ where parent0_.id=?    //get parent 1
Hibernate: select parent0_.id as id from parent parent0_ where parent0_.id=? //get parent 2
Hibernate: select child0_.id as id, child0_.parent_id as parent_id from child child0_ where (child0_.parent_id=? )
//get first child for parent 1

Hibernate: select child0_.id as id__, child0_.id as id, child0_.parent_id as parent_id from child child0_ where child0_.parent_id=?
Hibernate: select child0_.id as id__, child0_.id as id, child0_.parent_id as parent_id from child child0_ where child0_.parent_id=?
//load children of Parent 1 and 2 (can't avoid this, see FAQ)

Hibernate: update child set parent_id=? where id=? // add

The relationship is properly set, but the object model is in an inconsistent state. This is not recommended.

Conclusion

Using and understanding inverse="true" is essential to optimize your code. Prefer inverse="true" on bidirectional associations. After this tutorial it will be soooo easy ;-)

Source: simoes.org

Tuesday, September 25, 2007

Java: Few important points when it comes to Strings

How many times have you coded a check for a String being null or empty? Countless times, right? I have. We use ready-to-use classes from open source frameworks or we write our own StringUtils class. More or less they all implement the same thing, and it always looks similar to the following code snippet:

String s = ...
if (s == null || s.equals(""))...

or similar to the following, which also trims leading and trailing whitespace:

String s = ...
if (s == null || s.trim().equals(""))...

Of course you could also do this:

"".equals(s)

which is the case when you do not care whether String s is null, and you don't have to worry about an NPE as it won't happen ("" is never null, whereas s could be). But that's another story.

I have had "extra" warnings turned on in my IDE for a couple of days. But today my IDE surprised me when it highlighted

[1] s.equals("")

and suggested that I could optimize it by changing it to

[2] s.length() == 0

And guess what?! The IDE was right! I looked at the suggested code briefly, gave it a bit of thought and agreed that it would probably be faster. Form [1] compares against an empty String instance (yes, I know that all "" literals are resolved at compile time and all refer to the same interned instance). Just to be on the safe side I looked at the source of the String class.

And here is what I found. The length() method returns an int primitive, which is not calculated on each call to length(). It is simply a member variable (effectively a constant, as Strings are immutable) of the String class, set when the String instance is created. So this method is super fast.

public int length() {
    return count;
}

On the other hand, there is the equals() method, which is fast as well, but not as fast as length(). It has to do a type check, a cast and a comparison of the count members (which is what length() returns), and possibly compare the characters one by one.

public boolean equals(Object anObject) {
    if (!(anObject instanceof String))
        return false;
    String str2 = (String) anObject;
    if (count != str2.count)
        return false;
    if (value == str2.value && offset == str2.offset)
        return true;
    int i = count;
    int x = offset;
    int y = str2.offset;
    while (--i >= 0)
        if (value[x++] != str2.value[y++])
            return false;
    return true;
}
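
Putting the two observations together, a null-safe empty check could look like the sketch below; this is my own illustration and the class and method names are made up:

// Hypothetical utility combining the null check with the cheap length() test.
public final class StringChecks {

    private StringChecks() {
    }

    // true if the String is null or has zero length
    public static boolean isNullOrEmpty(String s) {
        return s == null || s.length() == 0;
    }

    // variant that also treats whitespace-only Strings as empty
    public static boolean isNullOrBlank(String s) {
        return s == null || s.trim().length() == 0;
    }
}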

And remember the few important points when it comes to Strings:

  • Do not compare Strings with the == operator unless you really want to compare object references. Use the equals() method.
  • Do not construct new instances like new String("abc"). A simple "abc" will do, unless you really need a new String instance with the same value.
  • Do not concatenate Strings in loops using the + operator. It is faster to use StringBuffer (or StringBuilder, which arrived in Tiger and is not synchronized) with append() and then toString(), because the plus (+) operator constructs a new String object each time; see the sketch below.
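
To make the last point concrete, a minimal sketch (mine, not from the original post):

// Concatenating in a loop: the + operator builds a new String on every pass,
// while a single StringBuilder keeps appending into the same buffer.
public static String joinSlow(String[] parts) {
    String result = "";
    for (int i = 0; i < parts.length; i++) {
        result = result + parts[i]; // creates a new String each iteration
    }
    return result;
}

public static String joinFast(String[] parts) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < parts.length; i++) {
        sb.append(parts[i]);
    }
    return sb.toString();
}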

Source: hanuska.blogspot.com

Friday, August 24, 2007

Struts2 + Spring + JUnit

Hopefully this entry serves as some search engine friendly documentation on how one might unit test Struts 2 actions configured using Spring, something I would think many, many people want to do. This used to be done using StrutsTestCase in the Struts 1.x days, but Webwork/Struts provides enough flexibility in its architecture to accommodate unit testing fairly easily. I'm not going to go over how the Spring configuration is set up. I'm assuming you have a struts.xml file which has actions configured like this:

<struts>
    <package namespace="/site" extends="struts-default">
        <action name="deletePerson" class="personAction"
                method="deletePerson">
            <result name="success">/WEB-INF/pages/person.jsp</result>
        </action>
    </package>
    ...
</struts>

You also might have an applicationContext.xml file where you might define your Spring beans like this.

<beans>
    <bean id="personAction"
          class="com.arsenalist.action.PersonAction"/>
    ...
</beans>

Then of course you also need to have an action which you want to test which might look something like:

public class PersonAction extends ActionSupport {

    private int id;

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String deletePerson() {
        ....
        return SUCCESS;
    }
}

Remember that in Struts 2, an action is usually invoked with various interceptors running before and after it. Interceptor configuration is usually specified in the struts.xml file. At this point we need to cover three different ways you might want to call your actions.



  1. Specify request parameters which are translated and mapped to the action's domain objects (id in the PersonAction class), then execute the action while also executing all configured interceptors.
  2. Instead of specifying request parameters, directly specify the values of the domain objects, then execute the action while also executing all configured interceptors.
  3. Finally, you might just want to execute the action and not worry about executing the interceptors. Here you'll specify the values of the action's domain objects and then execute the action.

Depending on what you’re testing and what scenario you want to reproduce, you should pick the one that suits the case. There’s an example of all three cases below. The best way I find to test all your action classes is to have one base class which sets up the Struts 2 environment and then your action test classes can extend it. Here’s a class that could be used as one of those base classes.


See the comments for a little more detail about what's going on. One point to note is that the class being extended here is junit.framework.TestCase and not org.apache.struts2.StrutsTestCase as one might expect. The reason for this is that StrutsTestCase is not really a well-written class and does not provide enough flexibility in how we want the very core Dispatcher object to be created. Also, the interceptor example shown in the Struts documentation does not compile, as there seems to have been some sort of API change. It's been fixed in this example.

public class BaseStrutsTestCase extends TestCase {

private Dispatcher dispatcher;
protected ActionProxy proxy;
protected MockServletContext servletContext;
protected MockHttpServletRequest request;
protected MockHttpServletResponse response;

/**
* Created action class based on namespace and name
*/
protected <T> T createAction(Class<T> clazz, String namespace, String name)
throws Exception {

// create a proxy class which is just a wrapper around the action call.
// The proxy is created by checking the namespace and name against the
// struts.xml configuration
proxy = dispatcher.getContainer().getInstance(ActionProxyFactory.class).
createActionProxy(
namespace, name, null, true, false);

// set to true if you want to process Freemarker or JSP results
proxy.setExecuteResult(false);
// by default, don't pass in any request parameters
proxy.getInvocation().getInvocationContext().
setParameters(new HashMap());

// set the actions context to the one which the proxy is using
ServletActionContext.setContext(
proxy.getInvocation().getInvocationContext());
request = new MockHttpServletRequest();
response = new MockHttpServletResponse();
ServletActionContext.setRequest(request);
ServletActionContext.setResponse(response);
ServletActionContext.setServletContext(servletContext);
return (T) proxy.getAction();
}

protected void setUp() throws Exception {
String[] config = new String[] { "META-INF/applicationContext-aws.xml" };

// Link the servlet context and the Spring context
servletContext = new MockServletContext();
XmlWebApplicationContext appContext = new XmlWebApplicationContext();
appContext.setServletContext(servletContext);
appContext.setConfigLocations(config);
appContext.refresh();
servletContext.setAttribute(WebApplicationContext.
ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE, appContext);

// Use spring as the object factory for Struts
StrutsSpringObjectFactory ssf = new StrutsSpringObjectFactory(
null, null, servletContext);
ssf.setApplicationContext(appContext);
//ssf.setServletContext(servletContext);
StrutsSpringObjectFactory.setObjectFactory(ssf);

// Dispatcher is the guy that actually handles all requests. Pass in
// an empty Map as the parameters but if you want to change stuff like
// what config files to read, you need to specify them here
// (see Dispatcher's source code)
dispatcher = new Dispatcher(servletContext,
new HashMap());
dispatcher.init();
Dispatcher.setInstance(dispatcher);
}
}

By extending the above class for our action test classes we can easily simulate any of the three scenarios listed above. I’ve added three methods to PersonActionTest which illustrate how to test the above three cases: testInterceptorsBySettingRequestParameters, testInterceptorsBySettingDomainObjects() and testActionAndSkipInterceptors(), respectively.

public class PersonActionTest extends BaseStrutsTestCase { 

/**
* Invoke all interceptors and specify value of the action
* class' domain objects directly.
* @throws Exception Exception
*/
public void testInterceptorsBySettingDomainObjects()
throws Exception {
PersonAction action = createAction(PersonAction.class,
"/site", "deletePerson");
action.setId(123);
String result = proxy.execute();
assertEquals(result, "success");
}

/**
* Invoke all interceptors and specify value of action class'
* domain objects through request parameters.
* @throws Exception Exception
*/
public void testInterceptorsBySettingRequestParameters()
throws Exception {
createAction(PersonAction.class, "/site", "deletePerson");
Map params = new HashMap();
params.put("id", "123");
proxy.getInvocation().getInvocationContext().setParameters(params);
String result = proxy.execute();
assertEquals(result, "success");
}

/**
* Skip interceptors and specify value of action class'
* domain objects by setting them directly.
* @throws Exception Exception
*/
public void testActionAndSkipInterceptors() throws Exception {
PersonAction action = createAction(PersonAction.class,
"/site", "deletePerson");
action.setId(123);
String result = action.deletePerson();
assertEquals(result, "success");
}
}

The source code for Dispatcher is probably a good thing to look at if you want to configure your actions more specifically. There are options to specify zero-configuration, alternate XML files and others. Ideally the StrutsTestCaseHelper should be doing a lot more than what it does right now (creating a badly configured Dispatcher) and should allow creation of custom dispatchers and object factories. That’s the reason why I’m not using StrutsTestCase since all that does is make a couple calls using StrutsTestCaseHelper.


If you want to test your validation, it's pretty easy. Here's a snippet of code that might do that:

public void testValidation() throws Exception {
    SomeAction action = createAction(SomeAction.class,
            "/site", "someAction");
    // lets forget to set a required field: action.setId(123);
    String result = proxy.execute();
    assertEquals(result, "input");
    assertTrue("Must have one field error",
            action.getFieldErrors().size() == 1);
}

This example uses Struts 2.0.8 and Spring 2.0.5.

Wednesday, August 22, 2007

Using DAO Design Pattern

DAO Pattern Definition

Access to data varies depending on the source of the data. Access to persistent storage, such as to a database, varies greatly depending on the type of storage (relational databases, object-oriented databases, flat files, and so forth) and the vendor implementation.
Reference: Blue Prints

Introduction

When you are creating your application framework, you would like to persist your data using smart techniques, and I am not talking about Hibernate, JDO, OJB or anything else. However, let me ask you: can your application survive a replacement of its storage system without any problem? If your answer is NO, this text will be useful for you.

Define an Interface as the foundation for DAOs

You can define a simple interface describing "WHAT" you would like to do, not "HOW" to do it. With this idea in mind, we can think in terms of interfaces. Take a look at the following code:

package framework.dao;

import java.util.Collection;

public interface IGenericDAO {
    public void save(Object object) throws DAOException;
    public void update(Object object) throws DAOException;
    public void remove(Object object) throws DAOException;
    public Object findByPrimaryKey(Object pk) throws DAOException;
    public Collection findAll() throws DAOException;
}

You may notice that this interface reminds you of the EJB EntityBeans model; if you are thinking that, you are correct! EntityBeans is a great idea, so we can keep using nice concepts like that.
In fact, this interface simply states that it can save, update, remove or find results against the underlying information storage.


We also need a simple extension of java.lang.Exception called DAOException, which can be thrown by any of these methods; its source can be as simple as the following:

package framework.dao; 

public class DAOException extends Exception {
    public DAOException(String message) {
        super(message);
    }

    public DAOException(Throwable e) {
        super(e);
    }
}

The Implementation

We talked previously about smart techniques, so we need a way to create and use DAO implementations automatically and easily. You have two choices:

  • Create a DAOFactory class
  • Use Spring Framework


DAOFactory

This class will implement a couple of patterns:

  • Singleton - we will use one and only one instance;
  • Factory Method - the method always returns an interface, but at runtime it returns a concrete class that implements this interface, in this case IGenericDAO.

See the following code for DAOFactory:

import java.io.IOException;
import java.util.Properties;

public class DAOFactory {

    private static DAOFactory me = null;

    private Properties props = null;

    private DAOFactory() {
        try {
            props = new Properties();
            props.load(DAOFactory.class.getResourceAsStream("daos.properties"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static DAOFactory getInstance() {
        if (null == me) {
            me = new DAOFactory();
        }
        return me;
    }

    public IGenericDAO getDAO(String name) {
        IGenericDAO retorno = null;
        try {
            retorno = (IGenericDAO) Class.forName(props.getProperty(name))
                    .newInstance();
        } catch (InstantiationException e) {
            e.printStackTrace();
        } catch (IllegalAccessException e) {
            e.printStackTrace();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        return retorno;
    }
}

The DAOFactory uses a properties file to discover the real implementations of the DAOs requested by applications.
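
For illustration only, usage could look like the hedged sketch below; the property key, the Person class and PersonDAOImpl are made-up names, not from the original text:

// daos.properties is assumed to map logical names to implementation classes, e.g.:
//   personDAO=com.example.dao.PersonDAOImpl
IGenericDAO dao = DAOFactory.getInstance().getDAO("personDAO");
dao.save(new Person("John")); // callers only ever see the IGenericDAO interface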

You can build a much smarter reading mechanism, for example one that auto-detects changes to the file and updates the properties in memory.

Using Spring Framework

You can work with Dependency Injection, using the context.xml file found in all Spring applications to describe this dependency.
Another nice Spring feature is the ability to create DAO implementations that extend the HibernateDaoSupport class, which offers a lot of things that make your development easier.
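
As a rough sketch of what such a DAO might look like when built on Spring's HibernateDaoSupport (my own example; the Person domain class and the DAO name are assumptions):

import java.util.Collection;

import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

import framework.dao.DAOException;
import framework.dao.IGenericDAO;

// Hypothetical Hibernate-backed DAO; Person is an assumed domain class.
public class PersonHibernateDAO extends HibernateDaoSupport implements IGenericDAO {

    public void save(Object object) throws DAOException {
        getHibernateTemplate().save(object);
    }

    public void update(Object object) throws DAOException {
        getHibernateTemplate().update(object);
    }

    public void remove(Object object) throws DAOException {
        getHibernateTemplate().delete(object);
    }

    public Object findByPrimaryKey(Object pk) throws DAOException {
        return getHibernateTemplate().get(Person.class, (java.io.Serializable) pk);
    }

    public Collection findAll() throws DAOException {
        return getHibernateTemplate().loadAll(Person.class);
    }
}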

The point of this entry is to make you think about how your applications can use dynamic configuration and reduce the coupling between your layers. Whether you use Spring or your own IoC framework, the most important thing is to adopt a useful DAO strategy such as the one this text describes (more information about DAO in the Spring Framework - here).

Tuesday, August 21, 2007

Java: PermGen OutOfMemory

This blog entry relates to the Java PermGen OutOfMemory issue described in Frank Kieviet's blog entries:

    Classloader leaks and How to fix Classloader leaks? 

To summarize, a new instance of a custom Classloader is created by the Application Server whenever a new application (.ear, .jar, .war) is deployed to the server, and this Classloader is used to load all the classes and resources contained in that application. The benefit of this approach is that applications are self-contained and isolated from each other, and there are no conflicts between different applications. When an application is undeployed from the server, its associated Classloader is also unloaded and becomes subject to garbage collection by the JVM.

As described in Frank's blog, there are situations in which Classloaders cannot be garbage-collected because of dangling references to them through the most unexpected places, and this causes a memory leak in the PermGen space (a special section of the heap). To find the cause of this problem, I used JDK 6.0's jmap and jhat utilities to generate and analyze a memory dump, respectively.
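
To make the "dangling reference" idea concrete, here is a hedged, made-up sketch of one classic way a Classloader gets pinned (the class name is mine, not from the original posts):

import java.util.ArrayList;
import java.util.List;

// Imagine this class lives in a shared, server-level library, so it is loaded
// by a parent classloader and survives application undeployment.
public class LeakRegistry {

    private static final List<Object> CACHE = new ArrayList<Object>();

    // If a web application registers one of its own objects here, the CACHE keeps
    // the object alive, the object keeps its Class alive, and the Class keeps the
    // web application's Classloader alive - so that Classloader (and every class
    // it loaded) can never be garbage-collected after undeploy.
    public static void register(Object o) {
        CACHE.add(o);
    }
}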

Orphaned Classloader

The jhat utility can easily be extended to include your own queries over the heap snapshot; you need to download and modify the jhat source code, though. I added a new query to find all the orphaned Classloaders in memory and display all the reference chains to them. By orphaned, I mean Classloader instances that have no strong-reference chains to them from the root set, except for chains that go through an instance of a Class loaded by that Classloader. To illustrate this, see the diagram below (solid line = strong reference, dashed line = weak reference):

The yellow Classloader instance is orphaned, because the only strong-reference chain to it from the root set is the chain that goes through B.class, and B.class is loaded by this Classloader (all the red lines). All other references that do not go through classes loaded by this Classloader are weak references. This scenario is a likely suspect for a Classloader leak, because orphaned Classloaders are usually not what the programmer intended, although there are exceptions. Using this query, we can easily find all the possible suspects and then go through each one to determine whether it is a real memory leak or not.

Source: blogs.sun.com

Monday, August 6, 2007

Java research: Anonymous Inner Classes

There are a lot of articles on the Internet that contain mistakes regarding anonymous inner classes in Java. They typically claim that an anonymous inner class:

  • has no name;
  • can’t be declared as static;
  • can be instantiated only once.

Let me show you the truth.
Consider the following code:

public class Anonymous {
public static void main(String[] args) {
Runnable anonym = new Runnable() {
public void run() {
}
};
}
}

In order to get the name of the inner class, print the following:

anonym.getClass().toString()

You'll get something like this: Anonymous$1.
An anonymous class can be either static or non-static; it depends on the block in which the class is declared. In the previous example the anonymous class was static. In this case we can create a second instance of the class in the following way:

Runnable anonym2 = (Runnable) anonym
        .getClass().newInstance();

There is no need for the type cast in JDK 1.5.
If the anonymous class was declared in a non-static block, we have to pass a reference to the outer class instance to the proper constructor (in reflection veritas!). Otherwise we'll get an InstantiationException.
Here is an example (determining the proper constructor and exception handling are not shown below):

import java.lang.reflect.Constructor;

public class Anonymous {

    public void nonStaticMethod() throws Exception {
        Runnable anonym = new Runnable() {
            public void run() {
            }
        };
        Constructor[] constructors = anonym.getClass()
                .getDeclaredConstructors();
        Object[] params = new Object[1];
        params[0] = this;

        Runnable anonym2 = (Runnable) constructors[0]
                .newInstance(params);
    }

    public static void main(String[] args) throws Exception {
        Anonymous example = new Anonymous();
        example.nonStaticMethod();
    }
}

In this example we have to use getDeclaredConstructors instead of getConstructors. The getConstructors method returns only public constructors, while the constructor we need is not public.


Have a nice day.



Wednesday, August 1, 2007

Singleton Pattern in Java

Design patterns are descriptions of problems and possible ways of solving them during object-oriented design (OOD).
Perhaps the most popular design pattern is the Singleton pattern. It is used to guarantee that there is only one instance of a particular object in the application. An implementation of this pattern can be useful when creating a Connection Pool, a Factory, a Configuration Manager, etc.
In this article you will find a basic description of this pattern and an example of its practical usage in Java.
Look through the following code:

public class Singleton {
    private static Singleton _instance = null;

    private Singleton() {}

    public synchronized static Singleton getInstance() {
        if (_instance == null)
            _instance = new Singleton();
        return _instance;
    }
}

The constructor of this class has to be declared private. This modifier, together with the getInstance method, prevents users from creating several instances of the class. We can also add the final modifier to the class declaration.
As mentioned above, the getInstance() method creates the one and only instance of the Singleton class. Note that this method is synchronized! This guarantees that in a multi-threaded environment there is only one instance of the Singleton class, just as in a single-threaded application.
We can get rid of the synchronized keyword. To do that, the _instance field must be initialized like this:

private static Singleton _instance = new Singleton();

Of course, the "if" check in getInstance() is then unnecessary.
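
Put together, the eager-initialization variant could look like the following sketch (my own summary of the change just described, not code from the article):

// Eager initialization: the instance is created when the class is loaded,
// so getInstance() needs neither synchronization nor a null check.
public final class Singleton {
    private static final Singleton _instance = new Singleton();

    private Singleton() {}

    public static Singleton getInstance() {
        return _instance;
    }
}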


I use an implementation of this pattern when working with project configuration. For example, suppose the configuration file "props.txt" contains a set of properties.
Consider the following code:

import java.util.*;
import java.io.*;

public class Configuration {
    private static Configuration _instance = null;

    private Properties props = null;

    private Configuration() {
        props = new Properties();
        try {
            FileInputStream fis = new FileInputStream(new File("props.txt"));
            props.load(fis);
        }
        catch (Exception e) {
            // catch Configuration Exception right here
        }
    }

    public synchronized static Configuration getInstance() {
        if (_instance == null)
            _instance = new Configuration();
        return _instance;
    }

    // get property value by name
    public String getProperty(String key) {
        String value = null;
        if (props.containsKey(key))
            value = (String) props.get(key);
        else {
            // the property is absent
        }
        return value;
    }
}

Use the following code to get the value for the specified property:

String propValue = Configuration.getInstance()
        .getProperty(propKey);

You can also provide some useful constants for your properties:

public static final String PROP_KEY = "propKey";

and get their values like this:

String propValue = Configuration.getInstance()
        .getProperty(Configuration.PROP_KEY);

That’s all. Best regards.

