Wednesday, January 16, 2008

Acegi Security for Spring Framework

Acegi Security provides comprehensive security services for J2EE-based enterprise software applications. There is a particular emphasis on supporting projects built using The Spring Framework, which is the leading J2EE solution for enterprise software development. If you're not using Spring for developing enterprise applications, we warmly encourage you to take a closer look at it. Some familiarity with Spring - and in particular dependency injection principles - will help you get up to speed with Acegi Security more easily.

People use Acegi Security for many reasons, but most are drawn to the project after finding the security features of J2EE's Servlet Specification or EJB Specification lack the depth required for typical enterprise application scenarios. Whilst mentioning these standards, it's important to recognise that they are not portable at a WAR or EAR level. Therefore, if you switch server environments, it is typically a lot of work to reconfigure your application's security in the new target environment. Using Acegi Security overcomes these problems, and also brings you dozens of other useful, entirely customisable security features.

As you probably know, security comprises two major operations. The first is known as "authentication", which is the process of establishing that a principal is who they claim to be. A "principal" generally means a user, device or some other system which can perform an action in your application. "Authorization" refers to the process of deciding whether a principal is allowed to perform an action in your application. By the time an authorization decision is needed, the identity of the principal has already been established by the authentication process. These concepts are common, and not at all specific to Acegi Security.
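To ground these two concepts in Acegi Security's own classes, here is a minimal sketch (assuming the Acegi 1.x org.acegisecurity packages) of how application code can inspect the authenticated principal and its granted authorities once authentication has completed; it is only an illustration, not a required usage pattern from the guide.

import org.acegisecurity.Authentication;
import org.acegisecurity.GrantedAuthority;
import org.acegisecurity.context.SecurityContextHolder;

public class SecurityContextExample {

    // Prints the current principal and its roles, if authentication has taken place.
    public static void describeCurrentPrincipal() {
        // Acegi stores the outcome of authentication in a thread-local SecurityContext
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        if (auth == null || !auth.isAuthenticated()) {
            System.out.println("No authenticated principal on this thread");
            return;
        }
        System.out.println("Principal: " + auth.getName());
        // Authorization decisions are ultimately based on these granted authorities
        for (GrantedAuthority authority : auth.getAuthorities()) {
            System.out.println("  granted: " + authority.getAuthority());
        }
    }
}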

At an authentication level, Acegi Security supports a wide range of authentication models. Most of these authentication models are either provided by third parties, or are developed by relevant standards bodies such as the Internet Engineering Task Force. In addition, Acegi Security provides its own set of authentication features. Specifically, Acegi Security currently supports authentication with all of these technologies:

  • HTTP BASIC authentication headers (an IETF RFC-based standard)

  • HTTP Digest authentication headers (an IETF RFC-based standard)

  • HTTP X.509 client certificate exchange (an IETF RFC-based standard)

  • LDAP (a very common approach to cross-platform authentication needs, especially in large environments)

  • Form-based authentication (for simple user interface needs)

  • Computer Associates Siteminder

  • JA-SIG Central Authentication Service (otherwise known as CAS, which is a popular open source single sign on system)

  • Transparent authentication context propagation for Remote Method Invocation (RMI) and HttpInvoker (a Spring remoting protocol)

  • Automatic "remember-me" authentication (so you can tick a box to avoid re-authentication for a predetermined period of time)

  • Anonymous authentication (allowing every call to automatically assume a particular security identity)

  • Run-as authentication (which is useful if one call should proceed with a different security identity)

  • Java Authentication and Authorization Service (JAAS)

  • Container integration with JBoss, Jetty, Resin and Tomcat (so you can still use Container Managed Authentication if desired)

  • Your own authentication systems (see below)

Many independent software vendors (ISVs) adopt Acegi Security because of this rich choice of authentication models. Doing so allows them to quickly integrate their solutions with whatever their end clients need, without undertaking a lot of engineering or requiring the client to change their environment. If none of the above authentication mechanisms suit your needs, Acegi Security is an open platform and it is quite simple to write your own authentication mechanism. Many corporate users of Acegi Security need to integrate with "legacy" systems that don't follow any particular security standards, and Acegi Security is happy to "play nicely" with such systems.

Sometimes the mere process of authentication isn't enough. Sometimes you need to also differentiate security based on the way a principal is interacting with your application. For example, you might want to ensure requests only arrive over HTTPS, in order to protect passwords from eavesdropping or end users from man-in-the-middle attacks. Or, you might want to ensure that an actual human being is making the requests and not some robot or other automated process. This is especially helpful to protect password recovery processes from brute force attacks, or simply to make it harder for people to duplicate your application's key content. To help you achieve these goals, Acegi Security fully supports automatic "channel security", together with JCaptcha integration for human user detection.

Irrespective of how authentication was undertaken, Acegi Security provides a deep set of authorization capabilities. There are three main areas of interest in respect of authorization: authorizing web requests, authorizing whether methods can be invoked, and authorizing access to individual domain object instances. To help you understand the differences, consider the authorization capabilities found in the Servlet Specification web pattern security, EJB Container Managed Security and file system security respectively. Acegi Security provides deep capabilities in all of these important areas, which we'll explore later in this reference guide.

Source: Official Reference Guide

Friday, August 24, 2007

Struts2 + Spring + JUnit

Hopefully this entry serves as some search-engine-friendly documentation on how one might unit test Struts 2 actions configured using Spring, something I would think many, many people want to do. This used to be done using StrutsTestCase in the Struts 1.x days, but Webwork/Struts provides enough flexibility in its architecture to accommodate unit testing fairly easily. I'm not going to go over how the Spring configuration is set up. I'm assuming you have a struts.xml file which has actions configured like this:

<struts>
  <package name="site" namespace="/site" extends="struts-default">
    <action name="deletePerson" class="personAction"
            method="deletePerson">
      <result name="success">/WEB-INF/pages/person.jsp</result>
    </action>
  </package>
  ...
</struts>

You might also have an applicationContext.xml file where you define your Spring beans like this:

<beans>
  <bean id="personAction"
        class="com.arsenalist.action.PersonAction"/>
  ...
</beans>

Then, of course, you need an action that you want to test, which might look something like this:

public class PersonAction extends ActionSupport {

    private int id;

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String deletePerson() {
        // ... delete the person identified by id ...
        return SUCCESS;
    }
}

Remember that in Struts 2, various other interceptors are usually invoked before and after an action is executed. Interceptor configuration is usually specified in the struts.xml file. At this point we need to cover three different ways in which you might want to call your actions.



  1. Specify request parameters which are translated and mapped to the action's domain objects (id in the PersonAction class) and then execute the action while also executing all configured interceptors.
  2. Instead of specifying request parameters, directly specify the values of the domain objects and then execute the action while also executing all configured interceptors.
  3. Finally, you might just want to execute the action and not worry about executing the interceptors. Here you'll specify the values of the action's domain objects and then execute the action.

Depending on what you’re testing and what scenario you want to reproduce, you should pick the one that suits the case. There’s an example of all three cases below. The best way I find to test all your action classes is to have one base class which sets up the Struts 2 environment and then your action test classes can extend it. Here’s a class that could be used as one of those base classes.


See the comments for a little more detail about what's going on. One point to note is that the class being extended here is junit.framework.TestCase and not org.apache.struts2.StrutsTestCase as one might expect. The reason for this is that StrutsTestCase is not a particularly well-written class and does not provide enough flexibility in how we want the very core Dispatcher object to be created. Also, the interceptor example shown in the Struts documentation does not compile, as there seems to have been some sort of API change. It's been fixed in this example.

import java.util.HashMap;

import junit.framework.TestCase;

import org.apache.struts2.ServletActionContext;
import org.apache.struts2.dispatcher.Dispatcher;
import org.apache.struts2.spring.StrutsSpringObjectFactory;
import org.springframework.mock.web.MockHttpServletRequest;
import org.springframework.mock.web.MockHttpServletResponse;
import org.springframework.mock.web.MockServletContext;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.support.XmlWebApplicationContext;

import com.opensymphony.xwork2.ActionProxy;
import com.opensymphony.xwork2.ActionProxyFactory;

public class BaseStrutsTestCase extends TestCase {

    private Dispatcher dispatcher;
    protected ActionProxy proxy;
    protected MockServletContext servletContext;
    protected MockHttpServletRequest request;
    protected MockHttpServletResponse response;

    /**
     * Creates the action class based on namespace and name.
     */
    @SuppressWarnings("unchecked")
    protected <T> T createAction(Class<T> clazz, String namespace, String name)
            throws Exception {

        // create a proxy class which is just a wrapper around the action call.
        // The proxy is created by checking the namespace and name against the
        // struts.xml configuration
        proxy = dispatcher.getContainer().getInstance(ActionProxyFactory.class)
                .createActionProxy(namespace, name, null, true, false);

        // set to true if you want to process Freemarker or JSP results
        proxy.setExecuteResult(false);
        // by default, don't pass in any request parameters
        proxy.getInvocation().getInvocationContext()
                .setParameters(new HashMap());

        // set the action's context to the one which the proxy is using
        ServletActionContext.setContext(
                proxy.getInvocation().getInvocationContext());
        request = new MockHttpServletRequest();
        response = new MockHttpServletResponse();
        ServletActionContext.setRequest(request);
        ServletActionContext.setResponse(response);
        ServletActionContext.setServletContext(servletContext);
        return (T) proxy.getAction();
    }

    protected void setUp() throws Exception {
        String[] config = new String[] { "META-INF/applicationContext-aws.xml" };

        // Link the servlet context and the Spring context
        servletContext = new MockServletContext();
        XmlWebApplicationContext appContext = new XmlWebApplicationContext();
        appContext.setServletContext(servletContext);
        appContext.setConfigLocations(config);
        appContext.refresh();
        servletContext.setAttribute(WebApplicationContext.
                ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE, appContext);

        // Use Spring as the object factory for Struts
        StrutsSpringObjectFactory ssf = new StrutsSpringObjectFactory(
                null, null, servletContext);
        ssf.setApplicationContext(appContext);
        //ssf.setServletContext(servletContext);
        StrutsSpringObjectFactory.setObjectFactory(ssf);

        // Dispatcher is the guy that actually handles all requests. Pass in
        // an empty Map as the parameters but if you want to change stuff like
        // what config files to read, you need to specify them here
        // (see Dispatcher's source code)
        dispatcher = new Dispatcher(servletContext, new HashMap());
        dispatcher.init();
        Dispatcher.setInstance(dispatcher);
    }
}

By extending the above class for our action test classes we can easily simulate any of the three scenarios listed above. I've added three methods to PersonActionTest which illustrate how to test the above three cases: testInterceptorsBySettingRequestParameters(), testInterceptorsBySettingDomainObjects() and testActionAndSkipInterceptors(), respectively.

import java.util.HashMap;
import java.util.Map;

public class PersonActionTest extends BaseStrutsTestCase {

    /**
     * Invoke all interceptors and specify the values of the action
     * class' domain objects directly.
     * @throws Exception Exception
     */
    public void testInterceptorsBySettingDomainObjects()
            throws Exception {
        PersonAction action = createAction(PersonAction.class,
                "/site", "deletePerson");
        action.setId(123);
        String result = proxy.execute();
        assertEquals("success", result);
    }

    /**
     * Invoke all interceptors and specify the values of the action class'
     * domain objects through request parameters.
     * @throws Exception Exception
     */
    public void testInterceptorsBySettingRequestParameters()
            throws Exception {
        createAction(PersonAction.class, "/site", "deletePerson");
        Map params = new HashMap();
        params.put("id", "123");
        proxy.getInvocation().getInvocationContext().setParameters(params);
        String result = proxy.execute();
        assertEquals("success", result);
    }

    /**
     * Skip interceptors and specify the values of the action class'
     * domain objects by setting them directly.
     * @throws Exception Exception
     */
    public void testActionAndSkipInterceptors() throws Exception {
        PersonAction action = createAction(PersonAction.class,
                "/site", "deletePerson");
        action.setId(123);
        String result = action.deletePerson();
        assertEquals("success", result);
    }
}

The source code for Dispatcher is probably a good thing to look at if you want to configure your actions more specifically. There are options to specify zero-configuration, alternate XML files and so on. Ideally, StrutsTestCaseHelper should be doing a lot more than what it does right now (creating a badly configured Dispatcher) and should allow creation of custom dispatchers and object factories. That's the reason I'm not using StrutsTestCase, since all it does is make a couple of calls using StrutsTestCaseHelper.


If you want to test your validation, it's pretty easy. Here's a snippet of code that might do that:

public void testValidation() throws Exception {
    SomeAction action = createAction(SomeAction.class,
            "/site", "someAction");
    // let's forget to set a required field: action.setId(123);
    String result = proxy.execute();
    assertEquals("input", result);
    assertTrue("Must have one field error",
            action.getFieldErrors().size() == 1);
}

This example uses Struts 2.0.8 and Spring 2.0.5.

Wednesday, August 22, 2007

Using DAO Design Pattern

DAO Pattern Definition

Access to data varies depending on the source of the data. Access to persistent storage, such as to a database, varies greatly depending on the type of storage (relational databases, object-oriented databases, flat files, and so forth) and the vendor implementation.
Reference: Blue Prints

Introduction

When you are creating your application framework, you want to persist your data using smart techniques. I am not talking about Hibernate, JDO, OJB or anything else here; instead, let me ask you a question: can your application survive the replacement of its storage system without any problems? If your answer is NO, this text will be useful for you.

Define an Interface as a Foundation for DAOs

You can define a simple interface describing "WHAT" you would like to do, not "HOW" to do it. With this idea in mind, we can think in terms of interfaces. Take a look at the following code:

package framework.dao;

import java.util.Collection;

public interface IGenericDAO {
    public void save(Object object) throws DAOException;
    public void update(Object object) throws DAOException;
    public void remove(Object object) throws DAOException;
    public Object findByPrimaryKey(Object pk) throws DAOException;
    public Collection findAll() throws DAOException;
}

You may notice that this interface resembles the EJB EntityBean model; if you are thinking that, you are correct! EntityBeans are a great idea, so we can keep reusing good concepts like that. In effect, this interface says that an implementation can save, update, remove, or find results against an information store.


We need a simple extension of java.lang.Exception called DAOException, which can be thrown by any of these methods. Its source can be as simple as the following code section:

package framework.dao;

public class DAOException extends Exception {

    public DAOException(String message) {
        super(message);
    }

    public DAOException(Throwable e) {
        super(e);
    }
}

The Implementation

We talked previously about smart code, so we need a way to create and use DAO implementations automatically and easily. You have two choices:

  • Create a DAOFactory class
  • Use Spring Framework


DAOFactory

This class implements a couple of patterns:

  • Singleton - we use one, and only one, instance.
  • Factory Method - the method always returns an interface, but at run time it returns a concrete class that implements that interface, in this case IGenericDAO.

See the following code for DAOFactory:

package framework.dao;

import java.io.IOException;
import java.util.Properties;

public class DAOFactory {

    private static DAOFactory me = null;

    private Properties props = null;

    private DAOFactory() {
        try {
            // daos.properties maps logical DAO names to implementation class names
            props = new Properties();
            props.load(DAOFactory.class.getResourceAsStream("daos.properties"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Singleton: one and only one factory instance
    public static DAOFactory getInstance() {
        if (null == me) {
            me = new DAOFactory();
        }
        return me;
    }

    // Factory Method: the caller receives the IGenericDAO interface, while the
    // concrete class is looked up from the properties file and instantiated here
    public IGenericDAO getDAO(String name) {
        IGenericDAO retorno = null;
        try {
            retorno = (IGenericDAO) Class.forName(props.getProperty(name))
                    .newInstance();
        } catch (InstantiationException e) {
            e.printStackTrace();
        } catch (IllegalAccessException e) {
            e.printStackTrace();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        return retorno;
    }
}

The DAOFactory uses a properties file to discover the real implementations of the DAOs requested by the application.
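For illustration, a hypothetical daos.properties entry and the corresponding client lookup might look like the sketch below; the key name person and the implementation class HibernatePersonDAO are assumptions, not part of the original example.

package framework.dao;

public class DAOFactoryClient {

    public static void main(String[] args) throws DAOException {
        // Hypothetical daos.properties, placed on the classpath next to DAOFactory:
        //   person=framework.dao.hibernate.HibernatePersonDAO
        //
        // The client only ever sees the IGenericDAO interface; swapping the storage
        // technology means editing the properties file, not the calling code.
        IGenericDAO personDAO = DAOFactory.getInstance().getDAO("person");
        Object person = personDAO.findByPrimaryKey(Integer.valueOf(42));
        personDAO.remove(person);
    }
}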

You could build a much better reading mechanism, for example one that auto-detects changes and updates the properties in memory, among other improvements.

Using Spring Framework

You can work with dependency injection, using the context XML file found in all Spring applications to describe these dependencies.
Another nice Spring feature is the ability to create DAO implementations that extend the HibernateDaoSupport class, which offers a lot of conveniences to make your development easier.
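As a rough sketch of that combination (assuming Spring 2.0 with Hibernate 3), a DAO built on HibernateDaoSupport could implement the IGenericDAO interface from above in a few lines; the class name, the bean definition in the comment, and the entity class passed in at construction time are all illustrative assumptions.

package framework.dao.hibernate;

import java.util.Collection;

import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

import framework.dao.DAOException;
import framework.dao.IGenericDAO;

// Wired in the Spring context roughly like this:
//   <bean id="personDAO" class="framework.dao.hibernate.GenericHibernateDAO">
//     <constructor-arg value="com.example.model.Person"/>
//     <property name="sessionFactory" ref="sessionFactory"/>
//   </bean>
public class GenericHibernateDAO extends HibernateDaoSupport implements IGenericDAO {

    private final Class persistentClass;

    public GenericHibernateDAO(Class persistentClass) {
        this.persistentClass = persistentClass;
    }

    public void save(Object object) throws DAOException {
        getHibernateTemplate().save(object);
    }

    public void update(Object object) throws DAOException {
        getHibernateTemplate().update(object);
    }

    public void remove(Object object) throws DAOException {
        getHibernateTemplate().delete(object);
    }

    public Object findByPrimaryKey(Object pk) throws DAOException {
        return getHibernateTemplate().get(persistentClass, (java.io.Serializable) pk);
    }

    public Collection findAll() throws DAOException {
        return getHibernateTemplate().loadAll(persistentClass);
    }
}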

The point of this entry is to make you think about how your applications can use dynamic configuration and reduce the coupling between layers. Whether you use Spring or your own IoC framework, the most important thing is to adopt a useful DAO strategy like the one this text describes (more information about DAO support in the Spring Framework is available in its reference documentation).

Friday, July 27, 2007

Using Open Source Framework Layers in Your Java Web Application

This article will discuss one strategy for combining three popular open source frameworks. For the presentation layer we will use Struts; for our business layer we will use Spring; and for our persistence layer we will use Hibernate. You should be able to substitute any one of these frameworks in your application and get the same effect. Figure 1 shows what this looks like at a high level when the frameworks are combined.

Figure 1. Overview of framework architecture with Struts, Spring, and Hibernate.

Application Layer

Most non-trivial web applications can be divided into at least four layers of responsibility. These layers are the presentation, persistence, business, and domain model layers. Each layer has a distinct responsibility in the application and should not mix functionality with other layers. Each application layer should be isolated from other layers but allow an interface for communication between them. Let's start by inspecting each of these layers and discuss what these layers should provide and what they should not provide.

At one end of a typical web application is the presentation layer. Many Java developers understand what Struts provides. However, too often, coupled code such as business logic is placed into an org.apache.struts.Action. So, let's agree on what a framework like Struts should provide. Here is what Struts is responsible for:

- Managing requests and responses for a user.

- Providing a controller to delegate calls to business logic and other upstream processes.

- Handling exceptions from other tiers that throw exceptions to a Struts Action.

- Assembling a model that can be presented in a view.

- Performing UI validation.

Here are some items that are often coded using Struts but should not be associated with the presentation layer:

- Direct communication with the database, such as JDBC calls.

- Business logic and validation related to your application.

- Transaction management.

Introducing this type of code in the presentation layer leads to tight coupling and cumbersome maintenance.

The Persistence Layer

At the other end of a typical web application is the persistence layer. This is usually where things get out of control fast. Developers underestimate the challenges in building their own persistence frameworks. A custom, in-house persistence layer not only requires a great amount of development time, but also often lacks functionality and becomes unmanageable. There are several open source object-to-relational mapping (ORM) frameworks that solve much of this problem. In particular, the Hibernate framework allows object-to-relational persistence and query service for Java. Hibernate has a medium learning curve for Java developers who are already familiar with SQL and the JDBC API. Hibernate persistent objects are based on plain-old Java objects and Java collections. Furthermore, using Hibernate does not interfere with your IDE. The following list contains the type of code that you would write inside a persistence framework:

- Querying relational information into objects. Hibernate does this through an OO query language called HQL, or by using an expressive criteria API. HQL is very similar to SQL, except that you use objects instead of tables and fields instead of columns. There are some new HQL-specific language elements to learn; however, they are easy to understand and well documented. HQL is a natural language to use for querying objects and requires only a small learning curve (see the short sketch after this list).

- Saving, updating, and deleting information stored in a database.

- Advanced object-to-relational mapping frameworks like Hibernate have support for most major SQL databases, and they support parent/child relationships, transactions, inheritance, and polymorphism.
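To make the HQL point concrete, here is a minimal sketch; the Product entity, its name property, and the session-handling style are assumptions for illustration rather than code from this article.

import java.util.List;

import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class ProductQueries {

    private final SessionFactory sessionFactory;

    public ProductQueries(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // HQL is written against the mapped Product class and its properties,
    // not against table and column names.
    public List findProductsByName(String namePrefix) {
        Session session = sessionFactory.getCurrentSession();
        Query query = session.createQuery(
                "from Product p where p.name like :name order by p.name");
        query.setString("name", namePrefix + "%");
        return query.list();
    }
}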

Here are some items that should be avoided in the persistence layer:

- Business logic should be in a higher layer of your application. Only data access operations should be permitted.

- You should not have persistence logic coupled with your presentation logic. Avoid logic in presentation components such as JSPs or servlet-based classes that communicate with data access directly. By isolating persistence logic into its own layer, the application becomes flexible to change without affecting code in other layers. For example, Hibernate could be replaced with another persistence framework or API without modification to the code in any other layer.

The Business Layer

The middle component of a typical web application is the business or service layer. This service layer is often the most ignored layer from a coding perspective. It is not uncommon to find this type of code scattered around in the UI layer or in the persistence layer. This is not the correct place because it leads to tightly coupled applications and code that can be hard to maintain over time. Fortunately, several frameworks exist that address these issues. Two of the most popular frameworks in this space are Spring and PicoContainer. These are referred to as microcontainers; they have a very small footprint and determine how you wire your objects together. Both of these frameworks work on a simple concept of dependency injection (also known as inversion of control). This article will focus on Spring's use of setter injection through bean properties for named configuration parameters; Spring also allows a sophisticated form of constructor injection as an alternative to setter injection. The objects are wired together by a simple XML file that contains references to objects such as the transaction management handler, object factories, service objects that contain business logic, and data access objects (DAOs).
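To illustrate the setter-injection style described here, below is a minimal, self-contained sketch; the OrderService and OrderDAO names and the bean definitions in the comment are hypothetical and are not taken from this article's configuration.

// Hypothetical collaborator contracts, shown only to make the wiring concrete.
interface OrderDAO {
    void save(Object order);
}

interface OrderService {
    void placeOrder(Object order);
}

// Wired together in Spring's XML configuration, roughly:
//   <bean id="orderDAO" class="example.HibernateOrderDAO"/>
//   <bean id="orderService" class="example.OrderServiceImpl">
//     <property name="orderDAO" ref="orderDAO"/>
//   </bean>
public class OrderServiceImpl implements OrderService {

    private OrderDAO orderDAO;

    // Spring calls this JavaBeans setter at container startup; the service never
    // constructs or looks up its collaborator itself.
    public void setOrderDAO(OrderDAO orderDAO) {
        this.orderDAO = orderDAO;
    }

    public void placeOrder(Object order) {
        // business validation and transaction demarcation belong at this layer
        orderDAO.save(order);
    }
}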

The way Spring uses these concepts will be made clearer with examples later in this article. The business layer should be responsible for the following:

- Handling application business logic and business validation

- Managing transactions

- Allowing interfaces for interaction with other layers

- Managing dependencies between business level objects

- Adding flexibility between the presentation and the persistence layer so they do not directly communicate with each other

- Exposing a context to the business layer from the presentation layer to obtain business services

- Managing implementations from the business logic to the persistence layer

The Domain Model Layer

Finally, since we are addressing non-trivial, web-based applications, we need a set of objects that can move between the different layers. The domain object layer consists of objects that represent real-world business objects such as an Order, OrderLineItem, Product, and so on. This layer allows developers to stop building and maintaining unnecessary data transfer objects, or DTOs, to match their domain objects. For example, Hibernate allows you to read database information into an object graph of domain objects, so that you can present it to your UI layer in a disconnected manner. Those objects can be updated and sent back across to the persistence layer and updated within the database. Furthermore, you do not have to transform objects into DTOs, which can get lost in translation as they are moved between different application layers. This model allows Java developers to work with objects naturally in an OO fashion without additional coding.
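As a sketch of what such a domain object might look like (the class and its fields are illustrative assumptions, not code from the article), an Order is just a plain Java object that can travel between layers unchanged.

import java.util.ArrayList;
import java.util.List;

// A plain Java object: no framework interfaces and no parallel DTO, so the same
// instance can be persisted by Hibernate, manipulated by the service layer,
// and rendered by the view.
public class Order {

    private Long id;
    private String customerName;
    private List orderLineItems = new ArrayList();

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getCustomerName() {
        return customerName;
    }

    public void setCustomerName(String customerName) {
        this.customerName = customerName;
    }

    public List getOrderLineItems() {
        return orderLineItems;
    }

    public void setOrderLineItems(List orderLineItems) {
        this.orderLineItems = orderLineItems;
    }
}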

First, we will create our domain objects since they will interoperate with each layer. These objects will allow us to define what should be persisted, what business logic should be provided, and what type of presentation interface should be designed. Next, we will configure the persistence layer and define object-to-relational mappings with Hibernate for our domain objects. Then we will define and configure our business objects. After we have these components we can discuss wiring these layers using Spring. Finally, we will provide a presentation layer that knows how to communicate with the business service layer and knows how to handle exceptions that arise from other layers.

Conclusion

This article covers a lot of ground in terms of technology and architecture. The main concept to take away is how to better separate your application, user interface, persistence logic, and any other application layer you require. Doing this will decouple your code, allow new code components to be added, and make your application more maintainable in the future. The technologies covered here address specific problems well. However, by using this type of architecture you can replace application layers with other technologies.

Tuesday, July 24, 2007

Data Validation in Hibernate

While it's important to build data validation into as many layers of a Web application as possible, it's traditionally been very time-consuming to do so, leading many developers to just skip it - which can lead to a host of problems down the road. But with the introduction of annotations in the latest version of the Java™ platform, validation got a lot easier. In this article, the author shows you how to use the Validator component of Hibernate Annotations to build and maintain validation logic easily in your Web apps.

Java SE 5 brought many needed enhancements to the Java language, none with more potential than annotations. With annotations, you finally have a standard, first-class metadata framework for your Java classes. Hibernate users have been manually writing *.hbm.xml files for years (or using XDoclet to automate this task). If you manually create XML files, you must update two files (the class definition and the XML mapping document) for each persistent property needed. Using HibernateDoclet simplifies this (see Listing 1 for an example) but requires you to verify that your version of HibernateDoclet supports the version of Hibernate you wish to use. The doclet information is also unavailable at run time, as it is coded into Javadoc-style comments. Hibernate Annotations, illustrated in Listing 2, improve on these alternatives by providing a standard, concise manner of mapping classes with the added benefit of run-time availability.
Listing 1. Hibernate mapping code using HibernateDoclet

/**
 * @hibernate.property column="NAME" length="60" not-null="true"
 */
public String getName() {
    return this.name;
}

/**
 * @hibernate.many-to-one column="AGENT_ID" not-null="true" cascade="none"
 *     outer-join="false" lazy="true"
 */
public Agent getAgent() {
    return agent;
}

/**
 * @hibernate.set lazy="true" inverse="true" cascade="all" table="DEPARTMENT"
 * @hibernate.collection-one-to-many class="com.triview.model.Department"
 * @hibernate.collection-key column="DEPARTMENT_ID" not-null="true"
 */
public List getDepartment() {
    return department;
}
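Listing 2 is referenced above but is not reproduced in this excerpt. An annotation-based mapping equivalent to Listing 1 would look roughly like the sketch below; the property names are carried over from Listing 1, while the exact annotation attributes (in particular the mappedBy value) are assumptions.

@Column(name = "NAME", length = 60, nullable = false)
public String getName() {
    return this.name;
}

@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "AGENT_ID", nullable = false)
public Agent getAgent() {
    return agent;
}

// "parent" is a guess at the Department property that points back to this entity
@OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
public List getDepartment() {
    return department;
}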

If you use HibernateDoclet, you won't be able to catch mistakes until you generate the XML files or until run time. With annotations, you can detect many errors at compile time, or, if you're using a good IDE, during editing. When creating an application from scratch, you can take advantage of the hbm2ddl utility to generate the DDL for your database from the hbm.xml files. Important information -- that the name property must have a maximum length of 60 characters, say, or that the DDL should add a not null constraint -- is added to the DDL from the HibernateDoclet entries. When you use annotations, you can generate the DDL automatically in a similar manner.

While both code mapping options are serviceable, annotations offer some clear advantages. With annotations, you can use constants to specify lengths or other values. You have a much faster build cycle without the need to generate XML files. The biggest advantage is that you can access useful information such as a not null annotation or length at run time. In addition to the annotations illustrated in Listing 2, you can specify validation constraints. Some of the included constraints available are:

  • @Max(value = 100)
  • @Min(value = 0)
  • @Past
  • @Future
  • @Email

When appropriate, these annotations will cause check constraints to be generated with the DDL. (Obviously, @Future is not an appropriate case.) You can also create custom constraint annotations as needed.

Validation and application layers

Writing validation code can be a tedious, time-consuming process. Often, lead developers will forgo addressing validation in a given layer to save time, but it is debatable whether or not the drawbacks of cutting corners in this area are offset by the time savings. If the time investment needed to create and maintain validation across all application layers can be greatly reduced, the debate swings toward having validation in more layers. Suppose you have an application that lets a user create an account with a username, password, and credit card number. The application components into which you would ideally like to incorporate validation are as follows:

  • View: Validation through JavaScript is desirable to avoid server round trips, providing a better user experience. Users can disable JavaScript, so while this level of validation is good to have, it's not reliable. Simple validations of required fields are a must.
  • Controller: Validation must be processed in the server-side logic. Code at this layer can handle validations in a manner appropriate for the specific use case. For example, when adding a new user, the controller may check to see if the specified username already exists before proceeding.
  • Service: Relatively complex business logic validation is often best placed into the service layer. For example, once you have a credit card object that appears to be valid, you should verify the card information with your credit card processing service.
  • DAO: By the time data reaches this layer, it really should be valid. Even so, it would be beneficial to perform a quick check to make sure that required fields are not null and that values fall into specified ranges or adhere to specified formats -- for instance, an e-mail address field should contain a valid e-mail address. It's better to catch a problem here than to cause avoidable SQLExceptions.
  • DBMS: This is a common place to find validation being ignored. Even if the application you're building is the only client of the database today, other clients may be added in the future. If the application has bugs (and most do), invalid data may reach the database. In this event, if you are lucky, you will find the invalid data and need to figure out if and how it can be cleaned up.
  • Model: An ideal place for validations that do not require access to outside services or knowledge of the persistent data. For example, your business logic may dictate that users provide at least one form of contact information, either a phone number or an e-mail address; you can use model layer validation to make sure that they do.

A typical approach to validation is to use Commons Validator for simple validations and write additional validations in the controller. Commons Validator has the benefit of generating JavaScript to process validations in the view. But Commons Validator does have its drawbacks: it can only handle simple validations and it stores the validation definition in an XML file. Commons Validator was designed to be used with Struts and does not provide an easy way to reuse the validation declarations across application layers.

When planning your validation strategy, choosing to simply handle errors as they occur is not sufficient. A good design also aims to prevent errors by generating a friendly user interface. Taking a proactive approach to validation can greatly enhance a user's perception of an application. Unfortunately, Commons Validator fails to offer support for this. Suppose you want your HTML to set the maxlength attribute of text fields to match the validation or place a percent sign (%) after text fields meant to collect percentage values. Typically, that information is hard-coded into HTML documents. If you decide to change your name property to support 75 characters instead of 60, in how many places will you need to make changes? In many applications, you will need to:

  • Update the DDL to increase the database column length (via HibernateDoclet, hbm.xml, or Hibernate Annotations).
  • Update the Commons Validator XML file to increase the max to 75.
  • Update all HTML forms related to this field to change the maxlength attribute.

A better approach is to use Hibernate Validator. Validation definitions are added to the model layer through annotations, with support for processing the validations included. If you choose to leverage all of Hibernate, the validator helps provide validation in the DAO and DBMS layers as well. In the code samples that follow, you will take this a step further by using reflection and JSP 2.0 tag files, leveraging the annotations to dynamically generate code for the view layer. This will remove the practice of hard-coding business logic in the view.
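Here is a minimal sketch of that approach using the org.hibernate.validator annotations; the User class, its fields, and the NAME_LENGTH constant are assumptions for illustration.

import org.hibernate.validator.Email;
import org.hibernate.validator.Length;
import org.hibernate.validator.NotNull;

public class User {

    // A single source of truth: the same constant can drive the generated DDL,
    // the run-time validation, and (via reflection) the maxlength attribute
    // rendered in the view.
    public static final int NAME_LENGTH = 60;

    private String name;
    private String email;

    @NotNull
    @Length(max = NAME_LENGTH)
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Email
    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }
}

With this in place, increasing the limit to 75 characters becomes a one-constant change: the DDL, the validator, and any reflection-driven view code all pick up the new value.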

Hibernate DAO implementations use the validation annotations as well, if you want them to. All you need to do is specify Hibernate event-based validation in the hibernate.cfg.xml file.

Saturday, July 21, 2007

Introducing Spring

The first time AOP expert Nicholas Lesiecki explained AOP to me, I didn't understand a word he was saying; and I felt much the same the first time I considered the possibility of using IOC containers. The conceptual basis of each technology alone is a lot to digest, and the myriad of new acronyms applied to each one doesn't help -- particularly given that many of them are variations on stuff we already use.

Like many technologies, these two are much easier to understand in practice than in theory. Having done my own research on AOP and IOC container implementations (namely, XWork, PicoContainer, and Spring), I've found that these technologies help me gain functionality without adding code-based dependencies on multiple frameworks. They'll both be a part of my development projects going forward.

In a nutshell, AOP allows developers to create non-behavioral concerns, called crosscutting concerns, and insert them in their application code. With AOP, common services like logging, persistence, transactions, and the like can be factored into aspects and applied to domain objects without complicating the object model of the domain objects. 
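As a tiny, hypothetical sketch of a crosscutting concern expressed with Spring's AOP support (the advice and the beans it would be applied to are made up, not the article's example), a logging aspect can be written once and applied to many objects.

import java.lang.reflect.Method;

import org.springframework.aop.MethodBeforeAdvice;

// A crosscutting logging concern, kept out of the domain objects themselves.
public class LoggingAdvice implements MethodBeforeAdvice {

    public void before(Method method, Object[] args, Object target) throws Throwable {
        System.out.println("Calling " + target.getClass().getName()
                + "." + method.getName());
    }
}

The advice is then attached to existing beans through configuration (for example with Spring's ProxyFactoryBean), so the objects being advised remain unaware of the logging.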

Developing with a new framework without unit testing is like walking on a new trapeze wire without a net: sure you could do it, but you're going to bleed. I prefer to develop with a net, and for me that net is TDD. Before DbUnit came along, testing code that was dependent on a database could be a little tough. DbUnit is an extension of JUnit that provides a framework for unit tests dependent on a database. I used DbUnit to write the test code for the example classes in this article. While not present in the article, the DbUnit code is part of the article source code (see Resources). Or for an introduction to DbUnit, see "Control your test-environment with DbUnit and Anthill" (developerWorks, April 2004) by Philippe Girolami.

IOC allows me to create an application context where I can construct objects, and then pass to those objects their collaborating objects. As the word inversion implies, IOC is like JNDI turned inside out. Instead of using a tangle of abstract factories, service locators, singletons, and straight construction, each object is constructed with its collaborating objects. Thus, the container manages the collaborators.

Spring is both an AOP framework and an IOC container. I believe it was Grady Booch who said the great thing about objects is that they can be replaced; and the great thing about Spring is that it helps you replace them. With Spring, you simply inject dependencies (collaborating objects) using JavaBeans properties and configuration files. Then it's easy enough to switch out collaborating objects with a similar interface when you need to.
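As a small, hypothetical sketch of that point (none of these names come from the article), the class below depends only on an interface, so switching its collaborator is a configuration change rather than a code change.

// Two interchangeable collaborators behind one interface.
interface MessageSender {
    void send(String message);
}

class SmtpMessageSender implements MessageSender {
    public void send(String message) {
        // would talk to a real SMTP server here
    }
}

class LoggingMessageSender implements MessageSender {
    public void send(String message) {
        System.out.println("would have sent: " + message);
    }
}

public class AlertService {

    private MessageSender messageSender;

    // Injected via a JavaBeans property; the Spring configuration decides whether
    // this is the SMTP or the logging implementation, for example:
    //   <property name="messageSender"><ref bean="smtpMessageSender"/></property>
    public void setMessageSender(MessageSender messageSender) {
        this.messageSender = messageSender;
    }

    public void raiseAlert(String message) {
        messageSender.send(message);
    }
}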

Spring provides an excellent on-ramp to both IOC containers and AOP. As such, you don't need to be familiar with AOP in order to follow the examples in this article. All you need to know is that you'll be using AOP to declaratively add transactional support to your example application, much the same way that you would use EJB technology. See Resources to learn more about Spring.