SKM IT World


My Lessons Learned From Doing the Gilded Rose Kata

I’d like to share some of my thoughts about my approach to solving the Gilded Rose Refactoring Kata by Emily Bache. If you don’t know this kata, read the description for a better understanding. I have published my whole solution on GitHub. I tried to make a commit after every step, so you can follow my steps in the Git log. The chosen programming language is Java.

Solving Gilded Rose Step-By-Step

Let’s have a look at what I have done step-by-step.

Before adding the new feature, I wanted to refactor the given code base. Therefore, I started writing tests until I had 100% line and branch coverage. While writing the tests, I had the idea that the calculation of the quality depends on the name of the item. Hence, the idea arose to use something similar to the Strategy Pattern. When I reached 100% coverage, I tried to start with the implementation of the first strategy (“Aged Brie”). But I was unsure what the limit values for this first strategy were. My problem was that I had no tests for the limit values. So my first lesson learned was that 100% line or branch coverage doesn’t mean all test cases are covered. I added tests for the limit values, finished implementing the “Aged Brie” strategy, added it to the original updateQuality method (see the code snippet below) and ran the tests. All tests were green.


ItemStrategy itemStrategy = new ItemStrategy();
...
for (int i = 0; i < items.length; i++) {
   if("Aged Brie".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForAgedBrieItem(items[i]);
      continue;
   }

// original code follows
}

I repeated this cycle four times: find missing test cases (mostly for limit values); add new tests for these cases; implement a further strategy; add this new strategy to the original updateQuality method and run the tests. When the tests are green, the next cycle with a new strategy begins. At the end, the extended updateQuality method looked like the following code snippet.

ItemStrategy itemStrategy = new ItemStrategy();

...
for (int i = 0; i < items.length; i++) {
   if("Aged Brie".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForAgedBrieItem(items[i]);
      continue;
   } else if ("Sulfuras, Hand of Ragnaros".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForSulfurasItem(items[i]);
      continue;
   } else if("Backstage passes to a TAFKAL80ETC concert".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForBackstagePassItem(items[i]);
      continue;
   } else {
      items[i] = itemStrategy.updateQualityForNormalItem(items[i]);
      continue;
   }

// commented out original code
}

My second lesson learned was “refactoring needs time”, and the refactoring wasn’t finished yet. The next steps were cleaning up unnecessary code and refactoring the strategy implementations, like replacing if-else constructs with ternary operators and extracting if-conditions into private methods.

After that I implemented the new feature “conjured item” following the workflow described above. After this step I could have said “ready”, but I was unhappy with the if-else-if-else chain. Therefore, I decided to extract each strategy implementation into its own class (following the “classic” Strategy Pattern). That made it possible to replace the if-else-if-else chain with an itemStrategyMap. So the next lesson learned was “the status ‘ready’ depends on the definition”.
The last step was cleaning up and choosing better names for the interface and its method.


static Map<String, ItemStrategy> itemStrategyMap = new HashMap<>();

static {
   itemStrategyMap.put("Aged Brie", new AgedBrieItemStrategy());
   itemStrategyMap.put("Sulfuras, Hand of Ragnaros", new SulfurasItemStrategy());
   itemStrategyMap.put("Backstage passes to a TAFKAL80ETC concert", new BackstagePassItemStrategy());
   itemStrategyMap.put("Conjured", new ConjuredItemStrategy());
}

public void updateQuality() {
   for (int i = 0; i < items.length; i++) {
      ItemStrategy itemStrategy = itemStrategyMap.getOrDefault(items[i].name, new NormalItemStrategy());
      items[i] = itemStrategy.updateItem(items[i]);
   }
}
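
For completeness, the strategy interface behind this map could look roughly like the following sketch (grounded in the snippet above; the exact code is in my GitHub repository):

public interface ItemStrategy {

   Item updateItem(Item item);
}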

Let’s summarize the lessons learned:
1) 100% line or branch coverage doesn’t mean all test cases are covered.
2) Refactoring needs time.
3) The status ‘ready’ depends on the definition.
These insights aren’t really new for me. I observe them often in my daily work. Nevertheless, it was good to have these insights again, following the rule “learning through repetition” ☺

What I forgot

I stopped after that step. Thinking about it some days later, I realized that there are more possible improvements. For example, the tests from the GildedRoseTest class could be extracted into separate test classes corresponding to the specific strategy classes.
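
Such a separate test class could look roughly like the following sketch (the test class and method names are assumptions, not the names from my repository):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AgedBrieItemStrategyTest {

   private final ItemStrategy itemStrategy = new AgedBrieItemStrategy();

   @Test
   public void qualityOfAgedBrieIncreasesBeforeSellDate() {
      // the kata's Item class: name, sellIn, quality
      Item item = new Item("Aged Brie", 5, 10);

      Item updatedItem = itemStrategy.updateItem(item);

      assertEquals(11, updatedItem.quality);
   }
}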




My Notes From Conference “Herbstcampus 2015”

In September I visited the conference “Herbstcampus” in Nuremberg. I took notes on some of the sessions that I’d like to share with you. The notes are in German.

SOLIDes Design – Kriterien für objektorientiertes Design by David Tanzer

[scanned notes: Solides Design]

Wie geht’s? Was geht? Wann hat es Sinn? – Portierung von COBOL-Programmen nach Java by Carsten Siedentop

[scanned notes: Cobol]

Überzogen – Technische Schulden by Gerrit Beine

[scanned notes: Technische Schulden]

Follow the Leader – Wie Sie Ihr Team beeinflussen und gestalten können by Sabine Zehnder

[scanned notes: Follow the Leader]

So sieht’s aus! – Architekturüberblicke: Tipps und Tricks by Stefan Zörner

[scanned notes: Architekturüberblicke]



Commons VFS, SSHJ and JSch in Comparison

Some weeks ago I evaluated some SSH libraries for Java. The main requirements for them were file transfer and file operations on a remote machine. For this purpose there is a network protocol based on SSH, the SSH File Transfer Protocol (SFTP). So I needed an SSH library that supports SFTP.

A quick research shows that there are many SSH libraries for Java. I reduced the number of libraries to three for the comparison. I chose JSch, SSHJ and Apache’s Commons VFS for a deeper look. All of them support SFTP. JSch seems to be the de-facto standard for Java. SSHJ is a newer library. Its goal is to have a clear Java API for SSH. The goal of Commons VFS is to have a clear API for virtual file systems, and SFTP is one of the supported protocols. Under the hood it uses JSch for the SFTP protocol. The libraries should cover the following requirements:

  • client authentication over password
  • client authentication over public key
  • server authentication
  • upload files from local host over SFTP
  • download files to local host over SFTP
  • file operations on the remote host like move, delete, list all children of a given folder (filtering by type like file or folder) over SFTP
  • execute plain shell commands

Let’s have a deeper look at how the three libraries cover these requirements.

Client Authentication

All three libraries support both required authentication methods. SSHJ has the clearest API for authentication (SSHClient.authPassword(), SSHClient.authPublickey()).


SSHClient sshClient = new SSHClient();
sshClient.connect(host);

// only for public key authentication
sshClient.authPublickey("user", "location to private key file");

// only for password authentication
sshClient.authPassword("user", "password");

In Commons VFS the authentication configuration depends on which kind of authentication is used. For the public key authentication, the private key has to be set in the FileSystemOptions and the user name is a part of the connection URL. For the password authentication, user name and password are a part of the connection URL.


StandardFileSystemManager fileSystemManager = new StandardFileSystemManager();
fileSystemManager.init();

// only for public key authentication
SftpFileSystemConfigBuilder sftpConfigBuilder = SftpFileSystemConfigBuilder.getInstance();
FileSystemOptions connectionOptions = new FileSystemOptions();
sftpConfigBuilder.setIdentities(connectionOptions, new File[]{privateKey.toFile()});
String connectionUrl = String.format("sftp://%s@%s", user, host);

// only for password authentication
String connectionUrl = String.format("sftp://%s:%s@%s", user, password, host);

// Connection set-up
FileObject remoteRootDirectory = fileSystemManager.resolveFile(connectionUrl, connectionOptions);

The authentication configuration in JSch is similar to Commons VFS: it depends on which kind of authentication is used. The private key for the public key authentication has to be configured in the JSch object, and the password for the password authentication has to be set in the Session object. For both, the user name is set when the Session object is created from the JSch object.


JSch sshClient = new JSch();

// only for public key authentication
sshClient.addIdentity("location to private key file");

Session session = sshClient.getSession(user, host);

// only for password authentication
session.setPassword(password);

session.connect();

Server Authentication

All three libraries support server authentication. In SSHJ the server authentication can be enabled with SSHClient.loadKnownHosts. It is possible to pass your own location of the known_hosts file; otherwise the default location is used, which depends on the platform.


SSHClient sshClient = new SSHClient();
sshClient.loadKnownHosts(); // or sshClient.loadKnownHosts(knownHosts.toFile());
sshClient.connect(host);

In Commons VFS the server authentication configuration is, like the public key authentication, a part of the FileSystemOptions. There, the location of the known_hosts file can be set.


SftpFileSystemConfigBuilder sftpConfigBuilder = SftpFileSystemConfigBuilder.getInstance();
FileSystemOptions opts = new FileSystemOptions();
sftpConfigBuilder.setKnownHosts(opts, new File("location of the known_hosts file"));

In JSch there are two possibilities to configure the server authentication. One possibility is to use the OpenSSHConfig (see the JSch example for OpenSSHConfig). The other possibility is easier: the location of the known_hosts file can be set directly in the JSch object.


JSch sshClient = new JSch();
sshClient.setKnownHosts("location of known-hosts file");

Upload/Download Files Over SFTP

All three libraries support uploading and downloading files over SFTP. SSHJ has a very clear API for these operations. The SSHClient object creates an SFTPClient object. This object is responsible for the upload (SFTPClient.put) and for the download (SFTPClient.get).


SSHClient sshClient = new SSHClient();
// ... connection

try (SFTPClient sftpClient = sshClient.newSFTPClient()) {
  // download
  sftpClient.get(remotePath, new FileSystemFile(local.toFile()));
  // upload
  sftpClient.put(new FileSystemFile(local.toFile()), remotePath);
}

In Commons VFS, uploading and downloading files is abstracted as an operation on a file system. So both are represented by the copyFrom method of a FileObject object. An upload is a copyFrom operation on the remote FileObject and a download is a copyFrom operation on a LocalFile.


StandardFileSystemManager fileSystemManager = new StandardFileSystemManager();
// ... configuration
remoteRootDirectory = fileSystemManager.resolveFile(connectionUrl, connectionOptions);

LocalFile localFileObject = (LocalFile) fileSystemManager.resolveFile(local.toUri().toString());
FileObject remoteFileObject = remoteRootDirectory.resolveFile(remotePath);
try {
  // download
  localFileObject.copyFrom(remoteFileObject, new AllFileSelector());

  // upload
  remoteFileObject.copyFrom(localFileObject, new AllFileSelector());
} finally {
  localFileObject.close();
  remoteFileObject.close();
}

JSch also supports an SFTP client. In JSch it is called ChannelSftp. It has two methods, one for download (ChannelSftp.get) and one for upload (ChannelSftp.put).


// here: creation and configuration of session

ChannelSftp sftpChannel = null;
try {
  sftpChannel = (ChannelSftp) session.openChannel("sftp");
  sftpChannel.connect();

  // download
  try (InputStream inputStream = sftpChannel.get(remotePath)) {
    Files.copy(inputStream, localPath);
  }

  // upload
  try (OutputStream outputStream = sftpChannel.put(remotePath)) {
    Files.copy(localPath, outputStream);
  }
} catch (SftpException | JSchException ex) {
  throw new IOException(ex);
} finally {
  if (sftpChannel != null) {
    sftpChannel.disconnect();
  }
}

Execute Shell Commands

Only Commons VFS doesn’t support executing plain shell commands. In SSHJ it is a two-liner: the SSHClient starts a new Session object, and this object executes the shell command. It is very intuitive.


// creation and configuration of sshClient

try (Session session = sshClient.startSession()) {
  session.exec("ls");
}
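
If the output of the executed command is needed, the Command object returned by Session.exec provides it as a stream. A sketch along the lines of SSHJ’s documentation examples:

// here: creation and configuration of sshClient

try (Session session = sshClient.startSession()) {
  Command cmd = session.exec("ls");
  // read the whole command output, then wait for the command to terminate
  String output = IOUtils.readFully(cmd.getInputStream()).toString();
  cmd.join(5, TimeUnit.SECONDS);
  System.out.println(output);
}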

In JSch the ChannelExec is responsible for executing shell commands over SSH. First the command is set in the channel, and then the channel is connected, which starts the execution. It isn’t as intuitive as in SSHJ.


// here: creation and configuration of session object

ChannelExec execChannel = null;
try {
  execChannel = (ChannelExec) session.openChannel("exec");
  execChannel.setCommand(command);
  // connect() starts the execution of the configured command
  execChannel.connect();
} catch (JSchException ex) {
  throw new IOException(ex);
} finally {
  if (execChannel != null) {
    execChannel.disconnect();
  }
}

File Operations on the Remote Host

All libraries support file operations over SFTP on remote machines, more or less completely. In SSHJ the SFTPClient also has methods for file operations. The names of the methods are the same as the file operations on a Linux system. The following code snippet shows how to delete a file.


//here: creation and configuration of sshClient

try (SFTPClient sftpClient = sshClient.newSFTPClient()) {
  sftpClient.rm(remotePath);
}
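
Listing the children of a remote folder, filtered by type, is similarly compact. A minimal sketch (remoteDirectory is an assumed path variable):

// here: creation and configuration of sshClient

try (SFTPClient sftpClient = sshClient.newSFTPClient()) {
  // list all children of the given folder and keep only the directories
  for (RemoteResourceInfo child : sftpClient.ls(remoteDirectory)) {
    if (child.isDirectory()) {
      System.out.println(child.getName());
    }
  }
}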

Commons VFS’s core functionality is file operations. The usage takes some getting used to: a file object has to be resolved first, and the file operations can then be done on it.


// here: creation and configuration of remoteRootDirectory

FileObject remoteFileObject = remoteRootDirectory.resolveFile(remotePath);
try {
  remoteFileObject.delete();
} finally {
  remoteFileObject.close();
}

JSch’s SFTP client ChannelSftp also has methods for file operations. Most file operations are supported by this channel, but not all: for example, copying a file on the remote machine has to be done with plain shell commands over the ChannelExec.

// here: creation and configuration of session
ChannelSftp sftpChannel = null;
try {
  sftpChannel = (ChannelSftp) session.openChannel("sftp");
  sftpChannel.connect();
  sftpChannel.rm(remotePath);
} catch (SftpException | JSchException ex) {
  throw new IOException(ex);
} finally {
  if (sftpChannel != null) {
    sftpChannel.disconnect();
  }
}

Conclusion

After this comparison I have two favourites, SSHJ and Commons VFS. SSHJ has a very clear API, and I would choose it if I need a general SSH client or if file operation support over SFTP is sufficient. I would choose Commons VFS if I work with file operations over many file system protocols and a general SSH client is not needed. In the case that I need both, I could use JSch directly to execute commands over SSH. The API of Commons VFS takes some getting used to, but after understanding the concept behind it, the usage of the API is straightforward.

The whole source code of this comparison is hosted on GitHub.

Useful Links

  1. SSHJ homepage
  2. JSch homepage
  3. Commons VFS homepage
  4. Wikipedia page about SFTP
  5. Source code of this comparison on GitHub



Unit And Integration Test Reports For Maven Projects In SonarQube

Since SonarQube 4.2, the test report isn’t generated by the Sonar Maven Plugin during a Maven build anymore (see SonarQube’s blog post). Therefore, the test report has to be generated by another plugin before the Sonar Maven Plugin collects the information for the SonarQube server. Here, the Jacoco Maven Plugin can help. It can generate test reports that are understandable for SonarQube. The Jacoco Maven Plugin goes one step further: it can also generate a test report for integration tests.

In the following sections, a solution is presented that meets the following criteria:

  • Maven is used as build tool.
  • The project can be a multi module project.
  • Unit tests and integration tests are parts of each module. Here, integration tests are tests that test the integration between classes in a module.
  • Test reports are separated into a unit test report and an integration test report.

The road map for the next sections is as follows: first, the Maven project structure for the separation of unit and integration tests is shown. Then the Maven project configuration for separate unit test runs and integration test runs is shown. After that, we have a look at the Maven project configuration for the test report generation, separated into unit tests and integration tests. At the end, SonarQube’s configuration for the test report visualization in SonarQube’s dashboard is shown.

Maven Project Structure

At first, we look at what a default Maven project structure looks like for a single module project.

my-app
├── pom.xml
├── src
│   ├── main
│   │   └── java
│   │       └──
│   └── test
│       └── java
│           └──

The directory src/main/java contains the production source code and the directory src/test/java contains the test source code. We could put unit tests and integration tests together in this directory. But we want to separate these two types of tests into separate directories. Therefore, we add a new directory called src/it/java. Then unit tests are put into the directory src/test/java and integration tests are put into the directory src/it/java, so the new project structure looks like the following one.

my-app
├── pom.xml
├── src
│   ├── it
│   │   └── java
│   │       └──
│   ├── main
│   │   └── java
│   │       └──
│   └── test
│       └── java
│           └──

Unit And Integration Test Runs

Fortunately, the unit test run configuration is a part of the Maven default project configuration. Maven runs these tests automatically if the following criteria are met:

  • The tests are in the directory src/test/java and
  • the test class name either starts with Test or ends with Test or TestCase.

Maven runs these tests during the test phase of Maven’s build lifecycle.

The integration test run configuration has to be done manually. There are Maven plugins that can help. We want the following criteria to be met:

  • integration tests are stored in the directory src/it/java and
  • the integration test class name either starts with IT or ends with IT or ITCase and
  • integration tests run during the integration-test phase of Maven’s build lifecycle.

First, Maven has to know that it has to include the directory src/it/java in its test class path. Here, the Build Helper Maven Plugin can help. It adds the directory src/it/java to the test class path.


<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>1.8</version>
  <executions>
    <execution>
      <id>add-test-source</id>
      <phase>process-test-sources</phase>
      <goals>
        <goal>add-test-source</goal>
      </goals>
      <configuration>
        <sources>
          <source>src/it/java</source>
        </sources>
      </configuration>
     </execution>
     <execution>
       <id>add-test-resources</id>
       <phase>generate-test-resources</phase>
       <goals>
         <goal>add-test-resource</goal>
       </goals>
       <configuration>
          <resources>
            <resource>
              <directory>src/it/resources</directory>
            </resource>
          </resources>
       </configuration>
     </execution>
   </executions>
 </plugin>

The above code snippet has to be inserted into the section <project><build><plugins> in the project root pom.

Maven’s build lifecycle contains a phase called integration-test. In this phase, we want to run the integration tests. Therefore, we bind the Maven Failsafe Plugin to the phase integration-test:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.13</version>
  <configuration>
    <encoding>${project.build.sourceEncoding}</encoding>
  </configuration>
  <executions>
    <execution>
      <id>failsafe-integration-tests</id>
      <phase>integration-test</phase>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Again, the above code snippet also has to be inserted into the section <project><build><plugins> in the project root pom. The Maven Failsafe Plugin then runs the integration tests automatically, if their class name either starts with IT or ends with IT or ITCase.
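
For illustration, a minimal integration test class could look like the following sketch (the class is hypothetical; only the IT suffix matters for the Maven Failsafe Plugin):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// hypothetical integration test located in src/it/java; the IT suffix
// makes the Maven Failsafe Plugin run it in the integration-test phase
public class GreetingIT {

  @Test
  public void buildsTheGreetingFromItsParts() {
    assertEquals("Hello World", "Hello" + " " + "World");
  }
}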

Test Report Generation

We want to use the Jacoco Maven Plugin for the test report generation. It should generate two test reports, one for the unit tests and one for the integration tests. Therefore, the plugin needs two separate agents that have to be prepared. They then generate the reports during the test runs. Maven’s build lifecycle contains its own preparation phases before the test phases (test and integration-test). The preparation phase for the test phase is called process-test-classes and the preparation phase for the integration-test phase is called pre-integration-test. We bind the Jacoco Maven Plugin into these two phases, so the configuration of this plugin looks like the following code snippet (again, it is a part of the section <project><build><plugins>):

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.7.2.201409121644</version>
  <executions>
    <execution>
      <configuration>
        <destFile>${sonar.jacoco.reportPath}</destFile>
      </configuration>
      <id>pre-test</id>
      <phase>process-test-classes</phase>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <!-- we want to execute jacoco:prepare-agent-integration in the
         pre-integration-test phase, before the Maven Failsafe Plugin runs -->
    <execution>
      <configuration>
        <destFile>${sonar.jacoco.itReportPath}</destFile>
      </configuration>
      <id>pre-itest</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>prepare-agent-integration</goal>
      </goals>
    </execution>
  </executions>
</plugin>

The configuration element destFile contains the path to the location where the test reports should be stored. It is important to use the properties ${sonar.jacoco.reportPath} and ${sonar.jacoco.itReportPath}. These properties are used by SonarQube to find the test reports for the visualization.
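
These two properties are not predefined by any plugin. They could be defined, for example, in the <properties> section of the root pom (the concrete paths are only an assumption and can be chosen freely):

<properties>
  <sonar.jacoco.reportPath>${project.basedir}/target/jacoco.exec</sonar.jacoco.reportPath>
  <sonar.jacoco.itReportPath>${project.basedir}/target/jacoco-it.exec</sonar.jacoco.itReportPath>
</properties>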

Now, we can run mvn install and our project is built including the unit and integration test runs and the generation of the two test reports.

SonarQube Test Report Visualization

Now, we want to visualize our test reports in SonarQube. Therefore, we have to run the Sonar Maven 3 Plugin (command mvn sonar:sonar) in our project after a successful build.

When we open our project in the SonarQube dashboard, we only see the report for the unit tests per module. The reason is that the report visualization for the integration tests has to be configured separately in SonarQube. These configuration steps are described very well in the SonarQube documentation.

Summary

This blog post describes how to generate test reports for unit and integration tests during a Maven build. On GitHub, I host a sample project that demonstrates all configuration steps. As technical environment I use

  • Maven 3.2.5
  • Maven Plugins:
    • Maven Surefire Plugin
    • Maven Failsafe Plugin
    • Build Helper Maven Plugin
    • Jacoco Maven Plugin
    • Sonar Maven 3 Plugin
  • SonarQube 4.5.1
  • Java 7

Links

  1. SonarQube’s blog post Unit Test Execution in SonarQube
  2. Jacoco Maven plugin project site
  3. Introduction to Maven’s build lifecycle
  4. Maven Failsafe Plugin project site
  5. Build Helper Maven Plugin project site
  6. SonarQube documentation about Code Coverage by Integration Tests for Java Project
  7. A sample Maven project on GitHub



Configuration over JNDI in Spring Framework

From a certain point on, an application has to be configurable. The Spring Framework has had a nice auxiliary tool for this issue since its first version 0.9, the class PropertyPlaceholderConfigurer, and since Spring Framework 3.1 the class PropertySourcesPlaceholderConfigurer. When you start a Google search for PropertyPlaceholderConfigurer, you will find many examples where the configuration items are saved in properties files. But in many Java enterprise applications, it is common that the configuration items are loaded over JNDI look ups. I’d like to demonstrate how the PropertyPlaceholderConfigurer (before Spring Framework 3.1) and accordingly the PropertySourcesPlaceholderConfigurer (since Spring Framework 3.1) can help to ease the configuration over JNDI look ups in our application.

Initial Situation

We have a web application that has a connection to a database. This database connection has to be configurable. The configuration items are defined in a web application context file.


<Context docBase="/opt/tomcat/warfiles/jndi-sample-war.war" antiResourceLocking="true">
  <Environment name="username" value="demo" type="java.lang.String" override="false"/>
  <Environment name="password" value="demo" type="java.lang.String" override="false"/>
  <Environment name="url" value="jdbc:mysql://192.168.56.101:3306/wicket_demo" type="java.lang.String" override="false"/>
</Context> 

For loading these configuration items, the JNDI look up mechanism is used.
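
Without Spring’s help, such a look up would have to be coded manually for every single configuration item, roughly like this (using javax.naming.InitialContext):

Context initialContext = new InitialContext();
String url = (String) initialContext.lookup("java:comp/env/url");
String username = (String) initialContext.lookup("java:comp/env/username");
String password = (String) initialContext.lookup("java:comp/env/password");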

In our application we define a data source bean in a Spring context XML file. This bean represents the database connection.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
     http://www.springframework.org/schema/context
         http://www.springframework.org/schema/context/spring-context.xsd">

  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
        destroy-method="close">
        <property name="url" value="${url}" />
        <property name="username" value="${username}" />
        <property name="password" value="${password}" />
  </bean>
</beans> 

Every value that starts with ${ and ends with } should be replaced by the PropertyPlaceholderConfigurer and accordingly the PropertySourcesPlaceholderConfigurer when the application is launched. The next step is to set up the PropertyPlaceholderConfigurer and accordingly the PropertySourcesPlaceholderConfigurer.

Before Spring Framework 3.1 – PropertyPlaceholderConfigurer Set Up for JNDI Look Up

We define a PropertyPlaceholderConfigurer bean in a Spring context XML file. This bean contains an inner bean that maps the property names of the data source bean to the corresponding JNDI names. A JNDI name consists of two parts. The first part is the name of the context in which the resource is located (in our case java:comp/env/) and the second part is the name of the resource (in our case either username, password or url).

<bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="properties">
        <bean class="java.util.Properties">
            <constructor-arg>
                <map>
                    <entry key="username">
                        <jee:jndi-lookup jndi-name="java:comp/env/username" />
                    </entry>
                    <entry key="password">
                        <jee:jndi-lookup jndi-name="java:comp/env/password" />
                    </entry>
                    <entry key="url">
                        <jee:jndi-lookup jndi-name="java:comp/env/url" />
                    </entry>
                </map>
            </constructor-arg>
        </bean>
    </property>
</bean>

Since Spring Framework 3.1 – PropertySourcesPlaceholderConfigurer Set Up for JNDI Look Up

Since Spring 3.1, PropertySourcesPlaceholderConfigurer should be used instead of PropertyPlaceholderConfigurer. One effect is that since Spring 3.1 the <context:property-placeholder/> namespace element registers an instance of PropertySourcesPlaceholderConfigurer (the namespace definition must be spring-context-3.1.xsd) instead of PropertyPlaceholderConfigurer (you can simulate the old behaviour if you use the namespace definition spring-context-3.0.xsd). So our Spring XML context configuration is very short if you comply with some conventions (based on the principle of convention over configuration).

<context:property-placeholder/>

The default behavior is that the PropertySourcesPlaceholderConfigurer iterates through a set of PropertySource objects to collect all property values. In a Spring-based web application, this set contains a JndiPropertySource by default. By default, JndiPropertySource looks up JNDI resource names prefixed with java:comp/env. This means that if your property is ${url}, the corresponding JNDI resource name has to be java:comp/env/url.

The source code of the sample web application is hosted on GitHub.



Spring Web Application With Hessian Services As a Maven Project

This post describes how to set up a Maven project for a Spring web application with a Hessian service. It also shows how to set up the deployment for exposing the Hessian service and how to set up a client to consume the Hessian service. The Spring Framework version is 3.1.1.RELEASE and the Hessian version is 4.0.7.

Maven Set Up For Server

Our Maven project has three modules:

  • hello-world-api
  • hello-world-impl
  • hello-world-war

<modules>
  <module>hello-world-api</module>
  <module>hello-world-impl</module>
  <module>hello-world-war</module>
</modules>

The module hello-world-api contains the interfaces of the services that the Hessian server and the Hessian client need for the communication. The module hello-world-impl contains the implementation of the services that are deployed on the server side. The module hello-world-war contains the configuration for the servlet container and the configuration of which services should be exported as Hessian services.

The Service Interface Definition

The module hello-world-api should become a JAR artifact, so the packaging of this Maven module is jar (Maven’s default):


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.github.skosmalla.spring.hessian</groupId>
    <artifactId>hello-world-spring-hessian</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>

  <artifactId>hello-world-api</artifactId>

  <name>Hello World Api</name>

</project>

The jar contains the interfaces of the services. An example:


package com.github.skosmalla.hello.world.spring.hessian;

public interface HelloWorld {

  public String welcome();

}

The Service Implementation

The module hello-world-impl also becomes a JAR artifact. This jar contains the service implementation for the server. The service implementation could look like the following code:


package com.github.skosmalla.hello.world.spring.hessian;

public class HelloWorldImpl implements HelloWorld {

  @Override
  public String welcome() {
    return "Hello World";
  }

}

So that this implementation can be used as a Hessian service, it has to be defined as a Spring bean. Therefore we need a Spring configuration file hello-world-service-config.xml. The location for this file is src/main/resources/META-INF/spring. The bean configuration looks like the following code:


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

  <bean id="helloWorldService" class="com.github.skosmalla.hello.world.spring.hessian.HelloWorldImpl"/>

</beans>

In this module we have two dependencies, one to the API module and one to the spring-beans module.


<dependencies>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-beans</artifactId>
  </dependency>
  <dependency>
    <groupId>com.github.skosmalla.spring.hessian</groupId>
    <artifactId>hello-world-api</artifactId>
  </dependency>
</dependencies>

The WAR Deployment

The module hello-world-war describes the configuration for the server deployment. The artifact of this module becomes a WAR file. Therefore, the packaging of this Maven module is war.


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.github.skosmalla.spring.hessian</groupId>
    <artifactId>hello-world-spring-hessian</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>

  <artifactId>hello-world-war</artifactId>

  <name>Hello World WAR </name>

  <packaging>war</packaging>

  <build>
    <defaultGoal>install</defaultGoal>
  </build>
</project>

Now, we have to do two things for running our server application on a servlet container:

  1. Add the configuration of the Spring application context with our implementation to the servlet container.
  2. Add the configuration to dispatch requests to our Hessian service.

To create an ApplicationContext instance in a web application, we have to configure a ContextLoaderListener in our Web Application Deployment Descriptor (location: src/main/webapp/WEB-INF/web.xml):

<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>classpath:META-INF/spring/*.xml</param-value>
</context-param>

<listener>
  <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

The ContextLoaderListener builds the root application context from all Spring configuration files located in the classpath under META-INF/spring (that pattern matches our hello-world-service-config.xml). But so far no request can be processed. Therefore we have to configure a servlet that dispatches the requests to the service. Here, the Spring Framework supports us with the DispatcherServlet. To use it, we have to add that servlet to our Web Application Deployment Descriptor (location: src/main/webapp/WEB-INF/web.xml):

<servlet>
  <servlet-name>hessian</servlet-name>
  <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
  <servlet-name>hessian</servlet-name>
  <url-pattern>/hessian/*</url-pattern>
</servlet-mapping>

This configuration means that the servlet is named hessian and is responsible for all requests to the URL http://<URL to the Tomcat instance>/<webapp-context>/hessian/*. Now, we have to configure the Hessian service interfaces that are dispatched by the servlet hessian. For this, we have to add a Spring configuration file in the same location as the Web Application Deployment Descriptor (src/main/webapp/WEB-INF). The name of that file has to be hessian-servlet.xml (the pattern is <servlet name>-servlet.xml). Here, we configure the Hessian service interface:


<bean name="/HelloWorldService" class="org.springframework.remoting.caucho.HessianServiceExporter">
  <property name="service" ref="helloWorldService" />
  <property name="serviceInterface" value="com.github.skosmalla.hello.world.spring.hessian.HelloWorld" />
</bean>

That Spring configuration file defines a new application context. It is a child application context of the root application context loaded by the ContextLoaderListener. A child application context can see every bean of the root application context, but the root application context cannot see the beans of its child application contexts (for more information, have a look at the Spring Framework reference). The HessianServiceExporter has a reference to the service implementation, defined in the root application context.
The URL of that Hessian service interface is http://<URL to the Tomcat instance>/<webapp-context>/hessian/HelloWorldService (the pattern is http://<URL to the Tomcat instance>/<webapp-context>/hessian/<bean name of the HessianServiceExporter>).

So that these configurations can run in a servlet container, we have to add the dependencies that contain the HessianServiceExporter, the DispatcherServlet and the ContextLoaderListener to the pom.xml:


<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-webmvc</artifactId>
</dependency>
<dependency>
  <groupId>com.github.skosmalla.spring.hessian</groupId>
  <artifactId>hello-world-impl</artifactId>
</dependency>
<dependency>
  <groupId>com.caucho</groupId>
  <artifactId>hessian</artifactId>
</dependency>

The Hessian dependency is needed at runtime, and the hello-world-impl dependency contains our business logic and the Spring configuration file for the root application context.

With mvn clean install, Maven builds a WAR file in the project’s target folder. This WAR file can be deployed on a Tomcat.

The Hessian Test Client With Spring

Now we write a client to test the Hessian service. Therefore, we create a new Maven module.


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.github.skosmalla.spring.hessian</groupId>
    <artifactId>hello-world-spring-hessian</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>

  <artifactId>hello-world-client</artifactId>

  <name>Hello World Client</name>

</project>

The Spring Framework offers a HessianProxyFactoryBean for calling the remote HelloWorld service. The configuration for this HessianProxyFactoryBean could look like the following code snippet:


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

<bean id="helloWorldService"
  class="org.springframework.remoting.caucho.HessianProxyFactoryBean">
  <property name="serviceUrl"
    value="http:/localhost:8080/hello-world/hessian/HelloWorldService" />
  <property name="serviceInterface"
    value="com.github.skosmalla.hello.world.spring.hessian.HelloWorld" />
</bean>
</beans>

In the property serviceInterface we define the interface of the Hessian service, here com.github.skosmalla.hello.world.spring.hessian.HelloWorld. In the property serviceUrl we define the URL of the Hessian service deployed on the Tomcat. In our sample, the Tomcat runs on localhost with port number 8080 and the web application is hello-world.

Now, this factory bean creates a Hessian service proxy for us:


package com.github.skosmalla.hello.world.spring.hessian;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class HessianClient {

  public static void main(String[] args) {
    ApplicationContext appContext = new ClassPathXmlApplicationContext("META-INF/spring/hessian-config.xml");

    HelloWorld service = (HelloWorld) appContext.getBean("helloWorldService");

    String welcomeMessage = service.welcome();

    System.out.println(welcomeMessage);

  }
}

The dependencies for the client are the following:

<dependencies>
  <dependency>
    <groupId>com.github.skosmalla.spring.hessian</groupId>
    <artifactId>hello-world-api</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
  </dependency>
  <dependency>
    <groupId>com.caucho</groupId>
    <artifactId>hessian</artifactId>
    <scope>runtime</scope>
  </dependency>
</dependencies>

If we start this client, we will get “Hello World” on our command line.

Now we have seen a full example of how to set up a Hessian service in a Spring web application and how to call such a service remotely. The full code can be found on GitHub.


Read Classpath Resource from Jar Files with Spring

To read resources from the classpath, you can use the Spring class ClassPathResource. The constructor with one argument reads only resources from the file system. But sometimes you want to read a resource from a jar file in the classpath. For this case, you must build your own URLClassLoader from the URL of the jar file and use the ClassPathResource constructor with the arguments path and classLoader. The path value is the path of the resource in the jar file.

An Example

We have a jar file, named example.jar. The content looks as follows:

example.jar
|_ META-INF
   |_ sample-resource.txt

We want to read the sample-resource.txt. The source code for this example looks as follows:

URL jarUrl = new File("path_to_jar_file").toURI().toURL();
URLClassLoader jarLoader = new URLClassLoader(new URL[]{jarUrl}, Thread.currentThread().getContextClassLoader());
ClassPathResource sampleResource = new ClassPathResource("META-INF/sample-resource.txt", jarLoader);
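
The content of the resource can then be read over its InputStream, for example with Spring’s FileCopyUtils (a minimal sketch; copyToString closes the given reader when it is done):

InputStream inputStream = sampleResource.getInputStream();
String content = FileCopyUtils.copyToString(new InputStreamReader(inputStream, "UTF-8"));
System.out.println(content);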