Monday, December 03, 2007

TestNG versus JUnit4

Comparing JUnit 4 and TestNG 5.7

Excerpts from http://www.ibm.com/developerworks/java/library/j-cq08296/ by Andy Glover.

JUnit is geared more towards unit testing - testing a class in isolation.
TestNG provides more features and flexibility, facilitating its use not only for unit testing but also for integration, regression, functional, and acceptance testing.

1. The setup method (annotated with @BeforeClass) must be public and static in JUnit 4, but TestNG imposes no such requirement. This makes TestNG the more flexible of the two.

2. Dependency testing:
Unlike JUnit, TestNG welcomes test dependencies through the dependsOnMethods attribute of the Test annotation. With this handy feature, you can easily specify dependent methods, which will execute before a desired method. What's more, if the dependent method fails, then all subsequent tests will be skipped, not marked as failed.

In JUnit 4, you can approximate test ordering using fixtures, but if test A fails, then a test B that depends on test A will also be marked as failed.

TestNG's trick of skipping, rather than failing, can really take the pressure off in large test suites. Rather than trying to figure out why 50 percent of the test suite failed, your team can concentrate on why 50 percent of it was skipped! Better yet, TestNG complements its dependency testing setup with a mechanism for rerunning only failed tests.
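As a sketch, a dependent test method is declared like this. Note this is illustrative: the @Test annotation below is a local stub standing in for org.testng.annotations.Test, declared only so the snippet compiles without TestNG on the classpath; the class and method names are hypothetical.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Local stub standing in for org.testng.annotations.Test, declared here
// only so this sketch compiles without TestNG on the classpath.
@Retention(RetentionPolicy.RUNTIME)
@interface Test {
    String[] dependsOnMethods() default {};
}

class ServerTest {
    @Test
    public void serverStartedOk() {
        // check that the server is up; under real TestNG, a failure here
        // causes the dependent method below to be SKIPPED, not FAILED
    }

    @Test(dependsOnMethods = { "serverStartedOk" })
    public void method1() {
        // executes only after serverStartedOk() has passed
    }

    // Helper for inspecting the declared dependencies via reflection.
    static String[] dependenciesOf(String methodName) {
        try {
            return ServerTest.class.getMethod(methodName)
                                   .getAnnotation(Test.class)
                                   .dependsOnMethods();
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
    }
}
```

With the real annotation, TestNG orders method1() after serverStartedOk() and skips it if the dependency fails.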

3. Fail and rerun:
The ability to rerun failed tests is especially handy in large test suites, and it's a feature you'll only find in TestNG. In JUnit 4, if your test suite consists of 1000 tests and 3 of them fail, you'll likely be forced to rerun the entire suite (with fixes). Needless to say, this sort of thing can take hours.

Any time there is a failure, TestNG creates an XML configuration file (testng-failed.xml) that delineates the failed tests. Running the TestNG runner with this file causes TestNG to run only the failed tests. So, in the previous example, you would only have to rerun the three failed tests and not the whole suite.

This feature doesn't seem like such a big deal when you're running smaller test suites, but you quickly come to appreciate it as your test suites grow in size.

4. Parametric testing:
By placing parametric data in TestNG's XML configuration files, you can reuse a single test case with different data sets and even get different results. This technique is perfect for avoiding tests that only assert sunny-day scenarios or don't effectively verify bounds.

JUnit testers often turn to a framework like FIT in this case because it lets you drive tests with tabular data. But TestNG provides a similar feature right out of the box.

This feature not only facilitates reuse of the test case code but also allows non-programmers to specify test data (since the test data lives in an XML file).

public class TestWebServer {
@Parameters({ "number-of-times" })
@Test
public void accessPage(int numberOfTimes) {
while (numberOfTimes-- > 0) {
// access the web page
}
}
}
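The parameter value itself comes from the suite file. Assuming a suite file of roughly this shape (suite and test names here are illustrative), the number-of-times parameter is supplied as:

```xml
<suite name="Web suite">
<parameter name="number-of-times" value="10"/>
<test name="Web test">
<classes>
<class name="TestWebServer"/>
</classes>
</test>
</suite>
```

TestNG matches the parameter by name and converts the value to the test method's declared type.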



5. Advanced Parametric testing:
While pulling data values into an XML file can be quite handy, tests occasionally require complex types, which can't be represented as a String or a primitive value. TestNG handles this scenario with its @DataProvider annotation, which facilitates the mapping of complex parameter types to a test method.

Example:

//This method will provide data to any test method that declares that its Data Provider
//is named "test1"
@DataProvider(name = "test1")
public Object[][] createData1() {
return new Object[][] {
{ "Cedric", new Integer(36) },
{ "Anne", new Integer(37)},
};
}

//This test method declares that its data should be supplied by the Data Provider
//named "test1"
@Test(dataProvider = "test1")
public void verifyData1(String n1, Integer n2) {
System.out.println(n1 + " " + n2);
}



6. Groups:

You can define groups at the class level and then add groups at the method level. You can also specify groups and methods to be included and excluded.

@Test(groups = { "checkin-test" })
public class All {

@Test(groups = { "func-test" })
public void method1() { ... }

public void method2() { ... }
}


and then in testng.xml:


<test name="Simple example">
<groups>
<run>
<include name="checkin-test"/>
<exclude name="broken"/>
</run>
</groups>

<classes>
<class name="example1.Test1">
<methods>
<include name="testMethod" />
</methods>
</class>
</classes>
</test>

Saturday, December 01, 2007

TestNG - java testing framework

Recently I got introduced to TestNG (version 5.7) at work. I was familiar with JUnit from the past, and I knew of the existence of TestNG and that it had improvements over JUnit, but I never thought it would gain so much traction that I would be made to use it so soon. Here are some of its features:
  • JDK 5 Annotations (JDK 1.4 is also supported with JavaDoc annotations).
  • Flexible test configuration - using multiple testng XML configuration files one per test suite.
  • Support for data-driven testing (with @DataProvider).
  • Support for parameters - you can pass parameters to test methods from the testng.xml file.
  • Allows distribution of tests on slave machines - support for parallel execution of tests and methods.
  • Powerful execution model (no more TestSuite) - test classes are annotated POJOs and don't have to extend any class or implement interface to have test methods.
  • Supported by a variety of tools and plug-ins (Eclipse, IDEA, Ant, Maven, etc...).
  • Embeds BeanShell for further flexibility.
  • Default JDK functions for runtime and logging (no dependencies).
  • Dependent methods for application server testing - one can specify the dependsOnMethods attribute of the @Test annotation to list methods that should execute before a certain test method executes. This is a powerful feature and is required for any kind of dependent testing. If a dependent method fails, then all subsequent tests are skipped, not marked as failed (unlike JUnit).
A good article stating improvements in TestNG over JUnit 4 is found at http://www.ibm.com/developerworks/java/library/j-cq08296/.

I used TestNG today for the first time and found the framework very easy to use; within a day I had it integrated into our build system and made a presentation to the team about its usage in our project. In this post, I detail the steps I performed to start using TestNG:

1. Wrote a class using just the 3 basic TestNG annotations to start with:
  • @BeforeClass
  • @Test (groups = {"xyz.groupname"}) - at the class level, where it gets inherited by all public methods in the class.
  • @AfterClass
See http://testng.org/doc/documentation-main.html#annotations for the complete list of supported annotations.

2. Wrote a master testng.xml which included all project suite files and was referenced from the Ant build script. Also wrote a testng-regression.xml which was imported into the master testng.xml and defined the test runs in that suite. Each such testng-xxx.xml file is for the test suite named xxx, and each suite can have one or more test runs. Each test run identifies the class(es) or package(s) in which to look up annotated test methods, as well as filter criteria based on groups to include and exclude in the run. See http://testng.org/doc/documentation-main.html#testng-xml for more on testng.xml.
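A master file of this shape can pull in the per-suite files via the <suite-files> element (the second suite-file path here is a hypothetical example):

```xml
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="master">
<suite-files>
<suite-file path="testng-regression.xml"/>
<suite-file path="testng-smoke.xml"/>
</suite-files>
</suite>
```

Pointing the Ant task at this one file then runs every included suite.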

One powerful feature I found was that group names can be specified in dot-separated (Java-package-like) notation and follow a hierarchy akin to the Log4j Logger naming hierarchy. So you can use wildcards in testng.xml to include not only all classes of a group but also classes from child groups. For example, I could just say xyz.* to include the xyz.abc and xyz.def group classes.
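A toy matcher capturing that hierarchical reading (this is not TestNG code - TestNG actually matches group patterns as regular expressions, which covers patterns like xyz.* as well; the sketch below is just the prefix semantics described above):

```java
// Toy sketch: the hierarchical, Log4j-style reading of a group pattern
// like "xyz.*" -- match the group itself or any dotted child group.
class GroupMatcher {
    static boolean matches(String pattern, String group) {
        if (pattern.endsWith(".*")) {
            String prefix = pattern.substring(0, pattern.length() - 2);
            return group.equals(prefix) || group.startsWith(prefix + ".");
        }
        return group.equals(pattern);
    }
}
```

So matches("xyz.*", "xyz.abc") holds, while an unrelated group like abc.def does not match.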

3. Lastly, used the Ant build file to call the testng Ant task and pass it the testng.xml location so that TestNG could execute the tests we wanted. We can define multiple targets for the different types of tests we may want to automate. See http://testng.org/doc/ant.html for examples.

The latest releases of JUnit 4 also use JDK 5 annotations, making up for some of the shortcomings that led Cédric Beust to develop the TestNG framework.

If you have not had a chance to explore TestNG so far, then I hope after reading this post you will have the good sense to do so now :).


Saturday, November 10, 2007

Case for Web services with JSON RPC

I have recently been working on developing JSON RPC based web services (over https) with a Java client. The server-side JSON RPC services were developed using the JSON-RPC-Java and later also the JSON-RPC C libraries.

The only client-side JSON RPC stack in Java available at the time of this writing is http://code.google.com/p/json-rpc-client/. It supports JSON RPC over http (using the Apache Commons HttpClient library) and was easily extensible to support JSON RPC over https. In this post, I am going to put down my experiences of using JSON RPC.

  1. JSON is a fat-free XML. (Read more at http://json.org/xml.html).
  2. JSON RPC is an alternative RPC mechanism over http (or https).
  3. JSON RPC is simpler to learn and implement than SOAP. The stacks are far fewer lines of code than SOAP stacks.
  4. JSON RPC is simple in that it does not include an Interface Definition Language (like WSDL for SOAP based web services). So there is no contract definition between client and server in an IDL; rather, the contract is defined on paper and then implemented in the respective languages of the server and client sides.
  5. The JSON RPC spec is very loosely written and hence leaves a lot of room for vendors to come up with their own solutions. For example, the Metaparadigm folks have their proprietary way of implementing class hinting (i.e., a way to identify the class type to the other end so that the JSON message can be mapped to a class type and an instance of that class can be created from the values passed in the JSON stream).
  6. The interoperability between a JSON RPC C/C++ service and a Java client is limited in the following aspects:
    1. No Java collections can be used. The same is true even for SOAP web services. The root cause of this limitation is that pre-JDK 1.5 Java had no generics, so a collection class (like ArrayList) could hold more than one object type and it was hard to tell the type of the elements held in the collection. Proprietary class-hinting implementations solve this when both client and server are in Java, but across languages it remains an issue. So the solution is to use arrays instead.
    2. Enum types are not supported by the Metaparadigm JSON-RPC-Java stack at present, as they are a newer JDK 1.5 feature. So use int instead.
  7. Security: Though several approaches may be possible, the simplest solution is to implement JSON RPC over https with basic authentication for the client. You may use a self-signed certificate for the web service to keep deployments simple. But if you really want the most security possible, then go for a trusted CA-signed certificate for the web service - though you will then require a certificate-signing infrastructure in place to be able to create a certificate for each installed instance of the web service.
  8. The JSON RPC spec does not have anything to say about intermediary message handlers. It is easy to imagine JSON RPC intermediary nodes, but the spec has no provisions for extensible message control headers like the SOAP spec has. So JSON RPC is pretty much limited to being used between two nodes (the client and the server) - the message source and the message destination or end point. It is not really meant for "document" style messaging, for which SOAP is used in B2B applications.
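The arrays-instead-of-collections point above can be made concrete with a transfer type shared with a C/C++ peer (the type and field names here are hypothetical):

```java
// Hypothetical transfer object for a cross-language JSON RPC service:
// arrays carry their element type in the Java signature, so the peer
// can tell the element type without proprietary class hinting.
class ServerStats {
    String host;
    int[] responseTimesMs; // instead of ArrayList / List<Integer>
    String[] recentErrors; // instead of List<String>
}
```

Both fields serialize to plain JSON arrays, which any JSON RPC stack can interpret.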
So if you want to build a robust, fat-free (read faster) distributed RPC infrastructure then you can base it on JSON RPC.

JSON RPC makes the most sense in web applications where the client is in JavaScript, as JSON maps directly to JavaScript objects, so you don't need to parse the message and extract the data - it's done automatically. But other than AJAXing your web pages, you can also use it for straightforward RPC architectures where SOAP may be overkill. You will have a working JSON RPC solution much sooner, and it is of course much easier to comprehend and implement than SOAP. So when you are about to use SOAP web services in RPC style, think twice, as you have a more capable alternative approach in JSON RPC.

Let me know your thoughts by leaving your comments.

Tuesday, November 06, 2007

Using Basic authentication and HTTPS (w/ self-signed certificates) in Java

1. Client Authentication is in practice only used for B2B type applications.
2. In some cases we may even be okay with not authenticating the server on the client end during the SSL handshake, for the sake of:
o simplicity (no certificate signing infrastructure is required) and
o performance (we only use SSL for encryption and not for server authentication).

This is the self-signed certificate approach: the server signs a certificate for itself and the client bypasses server authentication.

3. We first need to configure the web server for SSL. Tomcat currently operates only on JKS, PKCS11 or PKCS12 format keystores.
4. We can use the JDK keytool to generate self-signed certificate for the host running tomcat as shown below:

$ keytool -genkey -alias tomcat -keyalg RSA -keystore example.keystore
Enter keystore password: secret
Re-enter new password: secret
What is your first and last name?
[Unknown]: localhost
What is the name of your organizational unit?
[Unknown]:
What is the name of your organization?
[Unknown]: <My Company Name>
What is the name of your City or Locality?
[Unknown]: <City>
What is the name of your State or Province?
[Unknown]: <State>
What is the two-letter country code for this unit?
[Unknown]: <Country Code>
Is CN=localhost, OU=Unknown, O=<My Company Name>, L=<City>, ST=<State>, C=<Country> correct?
[no]: yes

Enter key password for <tomcat>
(RETURN if same as keystore password): <Enter>


The example.keystore is then generated and is in JKS (Java Key Store) format.

5. Copy it to the Tomcat root directory, e.g. the C:\Program Files\Apache Software Foundation\Tomcat 6.0 path.

6. The final step is to configure your secure socket in the $CATALINA_HOME/conf/server.xml file, where $CATALINA_HOME represents the directory into which you installed Tomcat 6.

<Connector protocol="org.apache.coyote.http11.Http11Protocol"
port="8443" minSpareThreads="5" maxSpareThreads="75"
enableLookups="true" disableUploadTimeout="true"
acceptCount="100" maxThreads="200"
scheme="https" secure="true" SSLEnabled="true"
keystoreFile="./example.keystore" keystorePass="secret"
clientAuth="false" sslProtocol="TLS"/>

NOTE: You can refer to the http://tomcat.apache.org/tomcat-6.0-doc/ssl-howto.html for more configuration options.

With the above settings you can verify that browsing to https://localhost:8443 returns the Tomcat splash page.

7. We will also make sure that Tomcat has a role named "manager" and some user associated with the role. We can edit tomcat-users.xml for that:

<?xml version='1.0' encoding='utf-8'?>
<tomcat-users>
<role rolename="manager"/>
<user username="admin" password="admin" roles="manager"/>
</tomcat-users>

8. Now, we can enforce that a certain URL pattern for our web application always requires https access. To do this, we need to edit the web.xml of the web application:


<security-constraint>
<display-name>some name for service</display-name>
<web-resource-collection>
<web-resource-name>My Service</web-resource-name>
<description/>
<url-pattern>/secure/XYZ/*</url-pattern>
<http-method>GET</http-method>
<http-method>POST</http-method>
<http-method>HEAD</http-method>
<http-method>PUT</http-method>
<http-method>OPTIONS</http-method>
<http-method>TRACE</http-method>
<http-method>DELETE</http-method>
</web-resource-collection>
<auth-constraint>
<role-name>manager</role-name>
</auth-constraint>
<user-data-constraint>
<description/>
<transport-guarantee>CONFIDENTIAL</transport-guarantee>
</user-data-constraint>
</security-constraint>
<login-config>
<auth-method>BASIC</auth-method>
<realm-name>MY_SECURE_REALM</realm-name>
</login-config>
<security-role>
<description>manager api can use this role.</description>
<role-name>manager</role-name>
</security-role>

With the above configuration, we have Basic authentication and HTTPS enabled for all resources accessed by the URL pattern /secure/XYZ/*.

So even if you try to access the resource at /secure/XYZ/* using http, Tomcat will redirect you to the page using the https scheme and thus enforce secure use. And since we also use Basic authentication, the browser client will prompt you to enter user credentials.

9. If you are using API-based http client access from, say, a J2SE client (using Apache Commons HttpClient 3.x), then you will need to set the credentials for the realm MY_SECURE_REALM (which defines the authentication scope on the web server), which are then sent in the http Authorization header.

HttpState state = new HttpState();
state.setCredentials(new AuthScope(AuthScope.ANY_HOST, AuthScope.ANY_PORT,
"MY_SECURE_REALM"), new UsernamePasswordCredentials(user, passwd));

You will also need to use org.apache.commons.httpclient.contrib.ssl.EasySSLProtocolSocketFactory to be able to bypass server certificate authentication on the client side. Apache Commons HttpClient ships this contrib code with the source distro, but it is not bundled in the jar file, so you will need to pull the source out of the contrib/ssl path and use it in your project.

Basically, you will need to check whether the URI in use has the https scheme, and if so, associate EasySSLProtocolSocketFactory as the protocol handler for that scheme.

if (uri.getScheme().equals("https")) {
Protocol easyhttps = new Protocol(uri.getScheme(), new EasySSLProtocolSocketFactory(), uri.getPort());

Protocol.registerProtocol("https", easyhttps);
}

The way it works is that EasySSLProtocolSocketFactory in turn uses EasyX509TrustManager (again from contrib/ssl), which only validates the certificate's from and to validity dates (so that the certificate is not expired and is not used before its validity start date). As long as the certificate in use by the server is within its validity window, EasyX509TrustManager will bypass any further authentication, even for a self-signed certificate.
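What such a date-only trust manager amounts to can be sketched with JDK classes alone (a sketch, not the actual contrib source; the class name is made up):

```java
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

// Sketch of a "date-only" trust manager: accept any server certificate
// (including self-signed) that is within its validity window; no chain
// or CA verification is performed. Do not use this where real
// authentication of the server is required.
class DateOnlyTrustManager implements X509TrustManager {
    public void checkClientTrusted(X509Certificate[] chain, String authType) {
        // client certificates are not used in this setup
    }

    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        for (X509Certificate cert : chain) {
            cert.checkValidity(); // throws if expired or not yet valid
        }
    }

    public X509Certificate[] getAcceptedIssuers() {
        // no trusted CA list -- issuers are never checked
        return new X509Certificate[0];
    }
}
```

Installing such a trust manager in the SSL context is what lets the client complete the handshake against a self-signed server certificate.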

That completes the simple discourse on how to use Basic authentication with HTTPS (using self-signed certificate for the server end).

Friday, October 26, 2007

Salient points about log4j

0. Log4j has three main components: loggers, appenders and layouts.

1. Loggers are named entities which follow hierarchical naming.
2. A logger is said to be an ancestor of another logger if its name followed by a dot is a prefix of the descendant logger name. A logger is said to be a parent of a child logger if there are no ancestors between itself and the descendant logger. For example, the logger named "com.foo" is a parent of the logger named "com.foo.Bar".
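The naming rule above can be written out as a one-line check (a sketch, not log4j code):

```java
// "com.foo" is an ancestor of "com.foo.Bar" because "com.foo." (the
// ancestor's name followed by a dot) is a prefix of the descendant name.
class LoggerNames {
    static boolean isAncestor(String ancestor, String descendant) {
        return descendant.startsWith(ancestor + ".");
    }
}
```

Note the dot matters: "com.foo" is not an ancestor of "com.foobar.Baz".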
3. The root logger resides at the top of the logger hierarchy. It is exceptional in two ways:

1. it always exists,
2. it cannot be retrieved by name.
Invoking the class static Logger.getRootLogger method retrieves it.
4. Loggers may be assigned levels. The set of possible levels, that is:

TRACE,
DEBUG,
INFO,
WARN,
ERROR and
FATAL

are defined in the org.apache.log4j.Level class.

5. The inherited level for a given logger C, is equal to the first non-null level in the logger hierarchy, starting at C and proceeding upwards in the hierarchy towards the root logger.
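A toy model of that lookup (not log4j code; here the root logger's level is stored under the empty name and is assumed to always be set):

```java
import java.util.Map;

// Toy model of level inheritance: walk from the logger toward the root
// and return the first explicitly assigned (non-null) level.
class LevelResolver {
    static String effectiveLevel(String name, Map<String, String> assigned) {
        String n = name;
        while (true) {
            String level = assigned.get(n);
            if (level != null) {
                return level;
            }
            if (n.isEmpty()) {
                return null; // unreachable if the root ("") has a level
            }
            // drop the last dotted segment to move up the hierarchy
            int dot = n.lastIndexOf('.');
            n = (dot < 0) ? "" : n.substring(0, dot);
        }
    }
}
```

So with the root at DEBUG and "com.foo" at WARN, the logger "com.foo.Bar" resolves to WARN while "org.other" falls back to DEBUG.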

6. Here are the basic Logger class methods:

package org.apache.log4j;

public class Logger {

// Creation & retrieval methods:
public static Logger getRootLogger();
public static Logger getLogger(String name);

// printing methods:
public void trace(Object message);
public void debug(Object message);
public void info(Object message);
public void warn(Object message);
public void error(Object message);
public void fatal(Object message);

// generic printing method:
public void log(Level l, Object message);
}

7. A logging request is said to be enabled if its level is higher than or equal to the level of its logger.

A log request of level p in a logger with (either assigned or inherited, whichever is appropriate) level q, is enabled if p >= q.

This rule is at the heart of log4j. It assumes that levels are ordered. For the standard levels, we have TRACE < DEBUG < INFO < WARN < ERROR < FATAL.

// get a logger instance named "com.foo"
Logger logger = Logger.getLogger("com.foo");

// Now set its level. Normally you do not need to set the
// level of a logger programmatically. This is usually done
// in configuration files.
logger.setLevel(Level.INFO);

Logger barlogger = Logger.getLogger("com.foo.Bar");

// This request is enabled, because WARN >= INFO.
logger.warn("Low fuel level.");

// This request is disabled, because DEBUG < INFO.
logger.debug("Starting search for nearest gas station.");

// The logger instance barlogger, named "com.foo.Bar",
// will inherit its level from the logger named
// "com.foo" Thus, the following request is enabled
// because INFO >= INFO.
barlogger.info("Located nearest gas station.");

// This request is disabled, because DEBUG < INFO.
barlogger.debug("Exiting gas station search");


8. In fundamental contradiction to biological parenthood, where parents always precede their children, log4j loggers can be created and configured in any order. In particular, a "parent" logger will find and link to its descendants even if it is instantiated after them.

9. Log4j makes it easy to name loggers by software component. This can be accomplished by statically instantiating a logger in each class, with the logger name equal to the fully qualified name of the class. This is a useful and straightforward method of defining loggers. As the log output bears the name of the generating logger, this naming strategy makes it easy to identify the origin of a log message. The developer is free to name the loggers as desired. Nevertheless, naming loggers after the class where they are located seems to be the best strategy known so far.

10. Log4j allows logging requests to print to multiple destinations. In log4j speak, an output destination is called an appender.

Currently, appenders exist for the console, files, GUI components, remote socket servers, JMS, NT Event Loggers, and remote UNIX Syslog daemons. It is also possible to log asynchronously.

11. More than one appender can be attached to a logger.

The addAppender method adds an appender to a given logger.

12. Each enabled logging request for a given logger will be forwarded to all the appenders in that logger as well as the appenders higher in the hierarchy.

In other words, appenders are inherited additively from the logger hierarchy. For example, if a console appender is added to the root logger, then all enabled logging requests will at least print on the console. If in addition a file appender is added to a logger, say C, then enabled logging requests for C and C's children will print on a file and on the console.

It is possible to override this default behavior so that appender accumulation is no longer additive by setting the additivity flag to false.

13. The layout is responsible for formatting the logging request according to the user's wishes, whereas an appender takes care of sending the formatted output to its destination.

The PatternLayout, part of the standard log4j distribution, lets the user specify the output format according to conversion patterns similar to the C language printf function.

See the conversion characters to use in the link below:
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html

For example, the PatternLayout with the conversion pattern "%r [%t] %-5p %c - %m%n" will output something akin to:

176 [main] INFO org.foo.Bar - Located nearest gas station.

The first field is the number of milliseconds elapsed since the start of the program.
The second field is the thread making the log request.
The third field is the level of the log statement.
The fourth field is the name of the logger associated with the log request.
The text after the '-' is the message of the statement.

14. To use log4j by reading the configuration from a properties file:

import com.foo.Bar;

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class MyApp {

static Logger logger = Logger.getLogger(MyApp.class.getName());

public static void main(String[] args) {


// BasicConfigurator replaced with PropertyConfigurator.
PropertyConfigurator.configure(args[0]);

logger.info("Entering application.");
Bar bar = new Bar();
bar.doIt();
logger.info("Exiting application.");
}
}

And a sample configuration file:

log4j.rootLogger=DEBUG, A1
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout

# Print the date in ISO 8601 format
log4j.appender.A1.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

# Print only messages of level WARN or above in the package com.foo.
log4j.logger.com.foo=WARN

Example output:

2000-09-07 14:07:41,508 [main] INFO MyApp - Entering application.

This will print only warn, error or fatal messages - not info, debug or trace - for all loggers under the com.foo naming hierarchy.

Example using multiple appenders:

log4j.rootLogger=debug, stdout, R

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout

# Pattern to output the caller's file name and line number.
log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n

log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=example.log

log4j.appender.R.MaxFileSize=100KB
# Keep one backup file
log4j.appender.R.MaxBackupIndex=1

log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%p %t %c - %m%n

Example output:

INFO [main] (MyApp2.java:12) - Entering application.
DEBUG [main] (Bar.java:8) - Doing it again!
INFO [main] (MyApp2.java:15) - Exiting application.


15. Under certain well-defined circumstances, however, the static initializer of the Logger class will attempt to automatically configure log4j.

The exact default initialization algorithm is defined as follows:

1. Setting the log4j.defaultInitOverride system property to any value other than "false" will cause log4j to skip the default initialization procedure (this procedure).
2. Set the resource string variable to the value of the log4j.configuration system property. The preferred way to specify the default initialization file is through the log4j.configuration system property. In case the system property log4j.configuration is not defined, then set the string variable resource to its default value "log4j.properties".
3. Attempt to convert the resource variable to a URL.
4. If the resource variable cannot be converted to a URL, for example due to a MalformedURLException, then search for the resource from the classpath by calling org.apache.log4j.helpers.Loader.getResource(resource, Logger.class) which returns a URL. Note that the string "log4j.properties" constitutes a malformed URL. See Loader.getResource(java.lang.String) for the list of searched locations.
5. If no URL could be found, abort default initialization. Otherwise, configure log4j from the URL. The PropertyConfigurator will be used to parse the URL to configure log4j unless the URL ends with the ".xml" extension, in which case the DOMConfigurator will be used. You can optionally specify a custom configurator. The value of the log4j.configuratorClass system property is taken as the fully qualified class name of your custom configurator. The custom configurator you specify must implement the Configurator interface.

Under Tomcat 3.x and 4.x, you should place the log4j.properties under the WEB-INF/classes directory of your web-applications. Log4j will find the properties file and initialize itself. This is easy to do and it works.

Generally one would want the flexibility to choose between different logging implementations. In that case, one can use Apache Commons Logging, which provides an interface similar to Log4j's described above (only it names the interface Log instead of Logger and uses LogFactory.getLog() to get the named Log instance) but comes with adapters for several other logging implementations in Java (Avalon, the JDK's logging, etc.).

Monday, September 03, 2007

EJB 3.0 with JBoss 4.2.1 GA and Netbeans IDE 5.5.1

I have started to read O'Reilly's EJB 3.0, 5th Edition by Bill Burke and Richard Monson-Haefel. The book covers EJB 3.0 and Java Persistence 1.0 in detail and comes with a JBoss workbook for the JBoss 4.0.3 release. I installed the current stable release, JBoss 4.2.1 GA, for my practice. Unlike the 4.0.x releases, the JBoss 4.2.x release has EJB 3 enabled by default. I used the NetBeans 5.5.1 IDE for development. It supports JBoss 4.x and even 5.x (which is still in beta), and it readily recognized my JBoss 4.2.1 installation in the server manager. I created a project as an Enterprise Application (with both web and ejb modules). The persistence configuration was simple and I used the default datasource, HSQL DB 1.8.

There were some gotchas before I could get the chapter 4 "Developing your first bean" examples working:
1. One has to change "DefaultDS" to "java:/DefaultDS" in the META-INF/persistence.xml of the ejb module.
2. Also, in the client application, you will need to reference "TravelAgentBean/remote" as "<Your EAR application name>/TravelAgentBean/remote". To be sure where in the JNDI tree your session bean got registered, you can browse to the jmx-console (http://localhost:8080/jmx-console) and look for "service=JNDIView". Click on the link and, in the following page, invoke the list() method to see the list of names in the JNDI tree. From there you can know for sure what name to use at the client end to look up your session facade.
3. I had mistakenly annotated my entity bean's id with @GeneratedValue, in which case calling the setId() method in my session facade bean made the EntityManager's persist() throw an exception saying the entity instance is a detached one. So I could either remove the @GeneratedValue annotation or not set the id.
4. At the client end, there is no need for the PortableRemoteObject.narrow() method anymore. You can simply use Java casting.

For my simple example, I did not have to pass the JNDI bootstrap params as a Properties instance to the InitialContext() constructor.

Eclipse 3.3 (Europa) has a very nice OR mapping tool called Dali, but I am yet to figure out how to make it work for me. Basically, if I don't have the corresponding table already created in the DB, then Eclipse is unable to map the entity type's members to columns in the database table, and so far I don't know how to turn that error off. With NetBeans I did not get that issue: when I first ran my application, the table was automatically created by Hibernate, as I had declared it to do by setting the Hibernate property hibernate.hbm2ddl.auto to "update". The Europa release supports the JBoss 4.2.x release. Red Hat is also developing a comprehensive IDE solution for JBoss in partnership with Exadel to facilitate easier rich web application development.

Wednesday, July 25, 2007

My new Honda Element 2007

I recently bought a Honda Element 2007 compact SUV, and after having driven it for a week now, I feel a very satisfied owner of my first four-wheeled vehicle ever. This model of SUV is unique, with capacity to seat only 4 passengers, and the doors are wide-opening clamshell type with no pillar between the front and back seats (but one has to first open the front doors to be able to open the back doors). Other auto makers also have SUVs of a similar kind, like Toyota's FJ Cruiser and Nissan's Xterra. The mileage is 21 mpg in the city and 26 mpg on the highway (which is decent compared to other compact SUVs). The engine is 166 hp and it's a 4-wheel-drive vehicle. It comes fully loaded with power windows, mirrors, steering and doors. There is an AM/FM/CD player and a skylight glass top. The thing I love about my new Element is that it's very spacious (a lot of leg room and room for cargo). With the wide-opening doors and no pillar between the seats, loading cargo is easy. The rear seats can also be folded to the sides, making extra space for cargo. And the look is offbeat and trendy (in my opinion).

Sunday, June 17, 2007

Charting the web with Cewolf/JFreeChart - Producing Time Series plots

I recently had an opportunity to use Cewolf 1.0 at work for some time series plots (a variant of the XY chart where the X-axis represents time values). This blog is about Cewolf and how to create time series plots with it.

Cewolf is a JSP tag library that uses JFreeChart for rendering the charts. It comes with a controller servlet that interprets the parameters passed through the JSP tags, generates the chart image in memory (no files are created on the server's file system), and embeds the image as an img tag in the HTML output sent to the client response stream.

IMO, Cewolf/JFreeChart is the best free charting package for a Java EE web application that needs to draw charts. It supports several different chart types, one of which is the time series plot. Here is some code that produces a simple time series plot using Cewolf.

1. To install Cewolf, just copy the jars from its lib/ path (which includes the JFreeChart jar) to the WEB-INF/lib of your web application.

2. We need to write a data producer that builds the data set ({time, value} pairs for the time series plot). A typical time series data producer is given below:


//~--- non-JDK imports --------------------------------------------------------

import de.laures.cewolf.DatasetProduceException;
import de.laures.cewolf.DatasetProducer;

import org.jfree.data.time.Minute;
import org.jfree.data.time.TimeSeries;
import org.jfree.data.time.TimeSeriesCollection;

//~--- JDK imports ------------------------------------------------------------

import java.io.Serializable;
import java.util.Date;
import java.util.Map;

/**
 * A sample data producer for the time series plot.
 */
public class MyDataProducer implements DatasetProducer, Serializable
{
    public MyDataProducer()
    {
    }

    public Object produceDataset(Map map) throws DatasetProduceException
    {
        /*
         * To this time series collection we can add more than one time
         * series, where each time series will be represented by its own
         * line on the plot.
         */
        TimeSeriesCollection ts = new TimeSeriesCollection();

        try {
            String[] allSeries = { "series1", "series2" };

            // Loop through all series, add the data to each series, and add
            // each series to the time series collection (ts).
            for (int i = 0; i < allSeries.length; i++) {

                // Get data for the series from some kind of data source
                MyDataSet[] myDataSet = getDataForSeries(allSeries[i]);
                TimeSeries mySeries = new TimeSeries("My Data Series " + i, Minute.class);

                // Add data to the series
                for (MyDataSet data : myDataSet) {
                    mySeries.add(new Minute(new Date(data.getTimestamp().getTime())), data.getYValue());
                }

                // Add the series to the collection
                ts.addSeries(mySeries);
            }
        } catch (Exception e) {
            e.printStackTrace();

            throw new DatasetProduceException();
        }

        return ts;
    }

    public boolean hasExpired(Map map, Date date)
    {
        return false;
    }

    public String getProducerId()
    {
        return "My Data Producer";
    }

    private MyDataSet[] getDataForSeries(String seriesName)
    {
        // Query the DB or some other data source and return an array of
        // time/value pairs (for instance, as an array of MyDataSet
        // instances).
        return new MyDataSet[0];    // stub: replace with real data access
    }
}


The MyDataSet class is:


import java.sql.Timestamp;

/**
 * A sample time/value pair. An array/list of this type constitutes
 * the data set for the plot.
 */
public class MyDataSet
{
    private Timestamp timestamp;    // Other Java SE date/time types work here too.
    private double yValue;

    public MyDataSet()
    {
    }

    public Timestamp getTimestamp() {
        return timestamp;
    }

    public void setTimestamp(Timestamp timestamp) {
        this.timestamp = timestamp;
    }

    public double getYValue() {
        return yValue;
    }

    public void setYValue(double yValue) {
        this.yValue = yValue;
    }
}
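For quick experimentation, the data producer's private data-access method can return synthetic data instead of hitting a database. The sketch below is a hypothetical stand-in (the class and method names here are my own, not part of Cewolf's API); it repeats a minimal MyDataSet so it compiles on its own:

```java
import java.sql.Timestamp;

// Minimal copy of the MyDataSet bean so this sketch is self-contained.
class MyDataSet {
    private Timestamp timestamp;
    private double yValue;

    public Timestamp getTimestamp() { return timestamp; }
    public void setTimestamp(Timestamp timestamp) { this.timestamp = timestamp; }
    public double getYValue() { return yValue; }
    public void setYValue(double yValue) { this.yValue = yValue; }
}

class SyntheticSeries {
    /**
     * Hypothetical stand-in for the real data access: returns points
     * spaced one minute apart with a simple linear ramp of values.
     */
    public static MyDataSet[] getDataForSeries(String seriesName, long startMillis, int count) {
        MyDataSet[] points = new MyDataSet[count];
        for (int i = 0; i < count; i++) {
            MyDataSet p = new MyDataSet();
            p.setTimestamp(new Timestamp(startMillis + i * 60000L));    // one minute apart
            p.setYValue(10.0 + i);                                      // ramp: 10, 11, 12, ...
            points[i] = p;
        }
        return points;
    }
}
```

Plugging something like this into the producer lets you verify the JSP wiring and chart rendering before the real data source is available.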


3. Now include the chart in the JSP page:


<%@ taglib uri="/WEB-INF/cewolf.tld" prefix="cewolf" %>

<jsp:useBean id="myPlotData" class="com.mycompany.MyDataProducer" />

<cewolf:chart
    id="MyChart"
    type="timeseries"
    title="My Plot Title"
    xaxislabel="Time"
    yaxislabel="My Data Value">
    <cewolf:data>
        <cewolf:producer id="myPlotData" usecache="false" />
    </cewolf:data>
</cewolf:chart>
<cewolf:img chartid="MyChart" renderer="/cewolf" width="1000" height="400" />


Lastly, one needs to configure the Cewolf rendering servlet in web.xml:

<servlet>
    <servlet-name>CewolfServlet</servlet-name>
    <servlet-class>de.laures.cewolf.CewolfRenderer</servlet-class>
</servlet>

<servlet-mapping>
    <servlet-name>CewolfServlet</servlet-name>
    <url-pattern>/cewolf/*</url-pattern>
</servlet-mapping>


Generating the charts on Tomcat does require one to increase the JVM heap size to at least 256MB from the default 64MB.
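One way to do that (assuming a standard Tomcat installation; the exact variable and script names depend on your setup) is to set CATALINA_OPTS before starting Tomcat:

```shell
# Raise the maximum JVM heap for Tomcat to 256MB before startup
export CATALINA_OPTS="-Xmx256m"
$CATALINA_HOME/bin/startup.sh
```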

The pros of using Cewolf/JFreeChart in a Java EE web application:
1. Compared to the other open source packages, JFreeChart is the best in terms of the look and feel of the plots and the ease of use of its API.
2. Cewolf adds to this by making JFreeChart available as a JSP tag library. And as far as I know, there isn't a better option for plotting in the open source Java world.

The Cewolf/JFreeChart duo is lacking a few important features:
1. AJAX support for real-time plots. If we want the server to refresh the chart in (near) real time, asynchronously or via background polling from the client, that is not supported today in Cewolf/JFreeChart. If this feature is required, JViews Charts and Chart Director are the two commercial offerings I know of that can do AJAX-based live charts.
2. Zoom and pan interactions are not easily supported by the API.

Sunday, June 10, 2007

Working with JMaki

I recently had an opportunity to use some of the JMaki UI components, and it took me some googling to figure out how to pass data to them dynamically (which is what you will want most of the time; unfortunately, all the examples use static data in JSON format). JMaki's integration with NetBeans makes it really simple to get those nice web UI components (grid, tree, menu, captcha, autocomplete, etc.) working for you in a jiffy. Though I am a big fan of DWR for Ajax support in my work (having used the reverse Ajax in DWR 2.0 for an event browser application that shows events in real time), I did like the Ajax-enabled UI components that come with JMaki. Another nice thing about JMaki is that it provides a common data model for multiple implementations of a given UI component. For example, there is a Yahoo UI tree and a Dojo toolkit tree component; since JMaki keeps the data models the same for both, we have the option to switch between these implementations with almost no change to code.

Now, back to the point (the reason I am writing this post after all): the JMaki components accept dynamic data in JSON format, and though one can hand-build the JSON string to pass as the value attribute of a widget, for widgets like trees or grids it becomes cumbersome to escape the quotes and construct the strings. To make constructing dynamic JSON data easier, JMaki comes bundled with the org.json.* classes (JSONObject, JSONArray, etc.), with which one can create the data in an elegant and maintainable way. You will then need to convert the JSONObject or JSONArray to its object-literal form using the following code, which Greg Murray (JMaki project manager) posted in reply to a question on the JMaki users forum:


/**
 * Converts a JSONObject to an object literal.
 *
 * @param jo   the JSON object to convert
 * @param buff the buffer to append to (may be null)
 *
 * @return the object-literal form as a String
 *
 * @throws JSONException
 */
public static String jsonToObjectLiteral(JSONObject jo, StringBuffer buff)
    throws JSONException
{
    if (buff == null) {
        buff = new StringBuffer("{");
    } else {
        buff.append("{");
    }

    JSONArray names = jo.names();

    for (int l = 0; (names != null) && (l < names.length()); l++) {
        String key = names.getString(l);
        String value = null;

        if (jo.optJSONObject(key) != null) {
            value = key + ":";
            buff.append(value);
            jsonToObjectLiteral(jo.optJSONObject(key), buff);
        } else if (jo.optJSONArray(key) != null) {
            value = key + ":";
            buff.append(value);
            jsonArrayToString(jo.optJSONArray(key), buff);
        } else if (jo.optLong(key, -1) != -1) {
            value = key + ":" + jo.get(key);
            buff.append(value);
        } else if (jo.optDouble(key, -1) != -1) {
            value = key + ":" + jo.get(key);
            buff.append(value);
        } else if (jo.opt(key) != null) {
            Object obj = jo.opt(key);

            if (obj instanceof Boolean) {
                value = key + ":" + jo.getBoolean(key);
            } else {
                value = key + ":" + "'" + jo.get(key) + "'";
            }

            buff.append(value);
        }

        if (l < names.length() - 1) {
            buff.append(",");
        }
    }

    buff.append("}");

    return buff.toString();
}

/**
 * Converts a JSONArray to its array-literal form.
 *
 * @param ja   the JSON array to convert
 * @param buff the buffer to append to (may be null)
 *
 * @return the array-literal form as a String
 *
 * @throws JSONException
 */
public static String jsonArrayToString(JSONArray ja, StringBuffer buff)
    throws JSONException
{
    if (buff == null) {
        buff = new StringBuffer("[");
    } else {
        buff.append("[");
    }

    for (int key = 0; (ja != null) && (key < ja.length()); key++) {
        String value = null;

        if (ja.optJSONObject(key) != null) {
            jsonToObjectLiteral(ja.optJSONObject(key), buff);
        } else if (ja.optJSONArray(key) != null) {
            jsonArrayToString(ja.optJSONArray(key), buff);
        } else if (ja.optLong(key, -1) != -1) {
            value = ja.get(key) + "";
            buff.append(value);
        } else if (ja.optDouble(key, -1) != -1) {
            value = ja.get(key) + "";
            buff.append(value);
        } else if (ja.optBoolean(key)) {
            value = ja.getBoolean(key) + "";
            buff.append(value);
        } else if (ja.opt(key) != null) {
            Object obj = ja.opt(key);

            if (obj instanceof Boolean) {
                value = ja.getBoolean(key) + "";
            } else {
                value = "'" + ja.get(key) + "'";
            }

            buff.append(value);
        }

        if (key < ja.length() - 1) {
            buff.append(",");
        }
    }

    buff.append("]");

    return buff.toString();
}


So, after you have put your dynamic data into a JSONObject or JSONArray, you can invoke the corresponding conversion method above to get the String form of your JSON data, ready to be passed to the component.

For instance, in your tree builder code you would do the following (the example is from a post on the JMaki users forum, where the solution was provided by Greg Murray):


public static String buildTreeData(AuthorizedTeams ateams)
    throws JSONException {

    JSONObject retValue = new JSONObject();
    JSONObject root = new JSONObject();
    root.put("title", "Organizations");
    root.put("expanded", true);
    JSONArray data = new JSONArray();

    Team[] teams = ateams.getTeams();

    for (int i = 0; i < teams.length; i++) {
        JSONObject teamObj = new JSONObject();
        teamObj.put("title", teams[i].getTeamName());
        teamObj.put("expanded", true);

        JSONArray children = new JSONArray();

        User[] teamUsers = teams[i].getMembers();
        for (int j = 0; j < teamUsers.length; j++) {
            JSONObject childObj = new JSONObject();
            childObj.put("title", teamUsers[j].getUserName());
            children.put(childObj);
        }
        teamObj.put("children", children);
        data.put(teamObj);
    }
    root.put("children", data);
    retValue.put("root", root);

    // jsonToObjectLiteral() returns a String, so this method must be
    // declared to return String rather than JSONObject.
    return jsonToObjectLiteral(retValue, new StringBuffer());
}

Here is the JSP snippet:

<jsp:useBean id="teams"
             class="com.myapp.assignment.AuthorizedTeams"
             scope="request"/>
<a:widget name="dojo.tree" value="${teams.teamsData}" />

Tuesday, May 08, 2007

JavaServer Faces Part 1 - Introduction

This is first in the series of blogs on JSF.

JSF = JavaServer Faces.

It’s a web framework. The 3 independent elements that make up a usable JSF component in a page are:

  1. UIComponent class – defines the behavior of the component, e.g. UISelectOne.
  2. Renderer class – provides specific renderings of the component. For example, a UISelectOne can be rendered in HTML as either a group of radio buttons or a select menu.
  3. A JSP tag – associates a Renderer with a UIComponent and makes them usable in a JSP as a single tag, e.g. <h:selectOneMenu>.

JSF UI components are bound to server-side Java beans (which are registered as managed beans in faces-config.xml). In the JSP pages, the UI components are bound to managed beans using the JSF Expression Language (which in JSF 1.2 is the same as JSP 2.1’s EL and is now called the Unified EL). Once bound, updating bean properties or invoking bean methods from the web interface is handled automatically by the JSF request processing lifecycle. This ability to automatically synchronize server-side Java bean properties with a hierarchical set of components that mirror the UI presented to the client is a major advantage of JSF over other web frameworks like Struts.

JSF Request Processing Lifecycle

  1. When a JSP page with JSF components is requested for the first time, the JSF runtime creates an in-memory component tree on the server side.
  2. Between requests, when nothing is happening in the application, the component tree is cached on the server.
  3. Upon a subsequent request, the component tree is reconstituted, and if form input values are sent in the request, they are processed and validations are executed.
  4. Upon successful validation, server-side managed bean properties are updated.
  5. Once all event processing and updates are over, the response is sent to the client.

To enable JSF support in a Java EE web application, the following needs to be done:

  1. An entry for the Faces Servlet in web.xml, with a mapping of this servlet to *.faces or /faces/*, etc. (A request that uses the appropriate faces URL pattern is considered a faces request; when the faces controller receives it, it processes the request by preparing an object known as the JSF context, which contains all accessible application data, and routes the client to the appropriate view page based on the navigation rules defined in faces-config.xml.)
  2. A JSF configuration file, faces-config.xml, in the WEB-INF/ path.
  3. The following jar files in the WEB-INF/lib path:
    1. JSF jars – jsf-api.jar and jsf-impl.jar
    2. Apache commons jars – commons-beanutils.jar, commons-collections.jar, commons-digester.jar, and commons-logging.jar
    3. JSTL jars – standard.jar and jstl.jar

For a JSP page to be JSF enabled,

  • we need to include at least the following taglibs from Sun’s JSF RI (you may also use the Apache MyFaces implementation of the JSF spec):

<%@ taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
<%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %>




  • In the JSP page body, we must add the <f:view> tag, which becomes the base UI component of the component tree held in memory on the server side when the page is requested for viewing.

  • If the page processes form input, we add an <h:form> tag.

Example code:


inputname.jsp – shows a form to the user to enter a name.

If the outcome is “greeting”, then greeting.jsp is shown to the user.

The name entered is shared between the two pages via the PersonBean’s personName field. PersonBean is registered as a managed bean, and JSF’s EL is used in the JSP pages to access the personName field’s value.



inputname.jsp:

<%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
<%@ taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
<f:loadBundle basename="jsfks.bundle.messages" var="msg"/>

<html>
  <head>
    <title>enter your name page</title>
  </head>
  <body>
    <f:view>
      <h1>
        <h:outputText value="#{msg.inputname_header}"/>
      </h1>
      <h:form id="helloForm">
        <h:outputText value="#{msg.prompt}"/>
        <h:inputText value="#{personBean.personName}" />
        <h:commandButton action="greeting" value="#{msg.button_text}" />
      </h:form>
    </f:view>
  </body>
</html>



The message bundle is defined in a messages.properties file (which, given the basename jsfks.bundle.messages, goes under the WEB-INF/classes/jsfks/bundle/ path in your web application’s WAR) as:




inputname_header=JSF KickStart
prompt=Tell us your name:
greeting_text=Welcome to JSF
button_text=Say Hello
sign=!



We bind PersonBean to the inputText field in helloForm. To do so we also need to register PersonBean as a managed bean in faces-config.xml, and to define the navigation rule from inputname.jsp to greeting.jsp:



<?xml version="1.0"?>
<!DOCTYPE faces-config PUBLIC
  "-//Sun Microsystems, Inc.//DTD JavaServer Faces Config 1.1//EN"
  "http://java.sun.com/dtd/web-facesconfig_1_1.dtd">

<faces-config>
  <navigation-rule>
    <from-view-id>/pages/inputname.jsp</from-view-id>
    <navigation-case>
      <from-outcome>greeting</from-outcome>
      <to-view-id>/pages/greeting.jsp</to-view-id>
    </navigation-case>
  </navigation-rule>

  <managed-bean>
    <managed-bean-name>personBean</managed-bean-name>
    <managed-bean-class>jsfks.PersonBean</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
  </managed-bean>
</faces-config>

And here’s what the greeting.jsp is:



<%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
<%@ taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
<f:loadBundle basename="jsfks.bundle.messages" var="msg"/>

<html>
  <head>
    <title>greeting page</title>
  </head>
  <body>
    <f:view>
      <h3>
        <h:outputText value="#{msg.greeting_text}" />,
        <h:outputText value="#{personBean.personName}" />
        <h:outputText value="#{msg.sign}" />
      </h3>
    </f:view>
  </body>
</html>



And the managed bean PersonBean.java:



package jsfks;

public class PersonBean {

    String personName;

    /**
     * @return the person's name
     */
    public String getPersonName() {
        return personName;
    }

    /**
     * @param name the person's name
     */
    public void setPersonName(String name) {
        personName = name;
    }
}



This completes the short introduction to JSF 1.1.
