
My Lesson Learned From Doing Gilded Rose Kata

I’d like to share some thoughts about my approach to solving the Gilded Rose Refactoring Kata by Emily Bache. If you don’t know this kata, read the description for a better understanding. I have published my whole solution on GitHub. I tried to make a commit after every step, so you can follow my steps in the Git log. The chosen programming language is Java.

Solving Gilded Rose Step-By-Step

Let’s have a look at what I have done step-by-step.

Before adding the new feature, I wanted to refactor the given code base. Therefore, I started writing tests until I had 100% line and branch coverage. While writing the tests, I had the idea that the calculation of the quality depends on the name of the item. Hence, the idea arose to use something similar to the Strategy Pattern. When I reached 100% coverage, I tried to start with the implementation of the first strategy (“Aged Brie”). But I was unsure what the limit values for this first strategy were. My problem was that I had no tests for the limit values. So my first lesson learned was that 100% line or branch coverage doesn’t mean all test cases are covered. I added tests for the limit values, finished implementing the “Aged Brie” strategy, added it to the original updateQuality method (see the code snippet below) and ran the tests. All tests were green.


ItemStrategy itemStrategy = new ItemStrategy();
...
for (int i = 0; i < items.length; i++) {
   if("Aged Brie".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForAgedBrieItem(items[i]);
      continue;
   }

// original code follows
}
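
For illustration, a limit-value test of the kind I was missing could look like the following sketch inside the GildedRoseTest class (GildedRose and Item come from the kata; the JUnit 4 style and the method name are my assumptions):

@Test
public void agedBrieQualityNeverExceedsFifty() {
   // limit value from the kata description: the quality of an item is never more than 50
   Item[] items = new Item[] { new Item("Aged Brie", 2, 50) };
   GildedRose app = new GildedRose(items);

   app.updateQuality();

   assertEquals(50, app.items[0].quality);
}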

I repeated this cycle four times: find missing test cases (mostly for limit values); add new tests for these cases; implement a further strategy; add this new strategy to the original updateQuality method and run the tests. If the tests were green, the next cycle with a new strategy began. At the end, the extended updateQuality method looked like the following code snippet.

ItemStrategy itemStrategy = new ItemStrategy();

...
for (int i = 0; i < items.length; i++) {
   if("Aged Brie".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForAgedBrieItem(items[i]);
      continue;
   } else if ("Sulfuras, Hand of Ragnaros".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForSulfurasItem(items[i]);
      continue;
   } else if("Backstage passes to a TAFKAL80ETC concert".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForBackstagePassItem(items[i]);
      continue;
   } else {
      items[i] = itemStrategy.updateQualityForNormalItem(items[i]);
      continue;
   }

// commented out original code
}

My second lesson learned was “refactoring needs time”, and the refactoring wasn’t finished yet. The next steps were cleaning up unnecessary code and refactoring the strategy implementations, like replacing if-else constructs with ternary operators and extracting if-conditions into private methods.

After that, I implemented the new feature “conjured item” following the workflow described above. After this step I could have said “ready”, but I was unhappy with the if-else if-else chain. Therefore, I decided to extract each strategy implementation into its own class (following the “classic” Strategy Pattern). That made it possible to replace the if-else if-else chain with an itemStrategyMap. So the next lesson learned was “the status ‘ready’ depends on the definition”.
The last step was cleaning up and choosing better names for the interface and its method.


static Map<String, ItemStrategy> itemStrategyMap = new HashMap<>();

static {
   itemStrategyMap.put("Aged Brie", new AgedBrieItemStrategy());
   itemStrategyMap.put("Sulfuras, Hand of Ragnaros", new SulfurasItemStrategy());
   itemStrategyMap.put("Backstage passes to a TAFKAL80ETC concert", new BackstagePassItemStrategy());
   itemStrategyMap.put("Conjured", new ConjuredItemStrategy());
}

public void updateQuality() {
   for (int i = 0; i < items.length; i++) {
      ItemStrategy itemStrategy = itemStrategyMap.getOrDefault(items[i].name, new NormalItemStrategy());
      items[i] = itemStrategy.updateItem(items[i]);
   }
}
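
The strategy interface behind this map could look like the following minimal sketch (derived from the snippet above; the actual names in my published solution may differ):

public interface ItemStrategy {

   // every concrete strategy knows how to update quality and sellIn for its item type
   Item updateItem(Item item);

}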

Let’s summarize the lessons learned:
1) 100% line or branch coverage doesn’t mean all test cases are covered.
2) Refactoring needs time.
3) The status ‘ready’ depends on the definition.
These insights aren’t really new to me. I often observe them in my daily work. Nevertheless, it was good to have these insights confirmed again, following the rule “learning through repetition” ☺

What I forgot

I stopped after that step. Thinking about it some days later, I realized that there are more possible improvements. For example, the tests in the GildedRoseTest class could be extracted into separate test classes corresponding to the specific strategy classes.


Commons VFS, SSHJ and JSch in Comparison

Some weeks ago I evaluated some SSH libraries for Java. The main requirements were file transfers and file operations on a remote machine. For this there is a network protocol based on SSH, the SSH File Transfer Protocol (SFTP). So I needed an SSH library that supports SFTP.

Some research showed that there are many SSH libraries for Java. I reduced the number of libraries to three for the comparison. I chose JSch, SSHJ and Apache’s Commons VFS for a deeper look. All of them support SFTP. JSch seems to be the de-facto standard for Java. SSHJ is a newer library. Its goal is to have a clear Java API for SSH. The goal of Commons VFS is to have a clear API for virtual file systems, and SFTP is one of the supported protocols. Under the hood it uses JSch for the SFTP protocol. The libraries should cover the following requirements:

  • client authentication over password
  • client authentication over public key
  • server authentication
  • upload files from local host over SFTP
  • download files to local host over SFTP
  • file operations on the remote host like move, delete, list all children of a given folder (filtering by type like file or folder) over SFTP
  • execute plain shell commands

Let’s take a deeper look at how the three libraries cover the requirements.

Client Authentication

All three libraries support both required authentication methods. SSHJ has the clearest API for authentication (SSHClient.authPassword(), SSHClient.authPublickey()).


SSHClient sshClient = new SSHClient();
sshClient.connect(host);

// only for public key authentication
sshClient.authPublickey("user", "location to private key file");

// only for password authentication
sshClient.authPassword("user", "password");

In Commons VFS the authentication configuration depends on which kind of authentication is used. For public key authentication, the private key has to be set in the FileSystemOptions, and the user name is part of the connection URL. For password authentication, user name and password are part of the connection URL.


StandardFileSystemManager fileSystemManager = new StandardFileSystemManager();
fileSystemManager.init();

// only for public key authentication
SftpFileSystemConfigBuilder sftpConfigBuilder = SftpFileSystemConfigBuilder.getInstance();
FileSystemOptions opts = new FileSystemOptions();
sftpConfigBuilder.setIdentities(opts, new File[]{privateKey.toFile()});
String connectionUrl = String.format("sftp://%s@%s", user, host);

// only for password authentication
String connectionUrl = String.format("sftp://%s:%s@%s", user, password, host);

// Connection set-up
FileObject remoteRootDirectory = fileSystemManager.resolveFile(connectionUrl, opts);

The authentication configuration in JSch is similar to Commons VFS: it depends on which kind of authentication is used. The private key for public key authentication has to be configured on the JSch object, and the password for password authentication has to be set on the Session object. For both, the user name is set when the Session object is requested from the JSch object.


JSch sshClient = new JSch();

// only for public key authentication
sshClient.addIdentity("location to private key file");

Session session = sshClient.getSession(user, host);

// only for password authentication
session.setPassword(password);

session.connect();

Server Authentication

All three libraries support server authentication. In SSHJ the server authentication can be enabled with SSHClient.loadKnownHosts. It is possible to pass your own location of the known_hosts file; otherwise the default location is used, which depends on the platform.


SSHClient sshClient = new SSHClient();
sshClient.loadKnownHosts(); // or sshClient.loadKnownHosts(knownHosts.toFile());
sshClient.connect(host);

In Commons VFS the server authentication configuration is also part of the FileSystemOptions, just like the public key authentication. There, the location of the known_hosts file can be set.


SftpFileSystemConfigBuilder sftpConfigBuilder = SftpFileSystemConfigBuilder.getInstance();
FileSystemOptions opts = new FileSystemOptions();
sftpConfigBuilder.setKnownHosts(opts, new File("location of the known_hosts file"));

In JSch there are two possibilities to configure server authentication. One possibility is to use OpenSSHConfig (see the JSch example for OpenSSHConfig). The other possibility is easier: the location of the known_hosts file can be set directly on the JSch object.


JSch sshClient = new JSch();
sshClient.setKnownHosts("location of known-hosts file");

Upload/download Files Over SFTP

All three libraries support uploading and downloading files over SFTP. SSHJ has a very clear API for these operations. The SSHClient object creates an SFTPClient object. This object is responsible for the upload (SFTPClient.put) and for the download (SFTPClient.get).


SSHClient sshClient = new SSHClient();
// ... connection

try (SFTPClient sftpClient = sshClient.newSFTPClient()) {
  // download
  sftpClient.get(remotePath, new FileSystemFile(local.toFile()));
  // upload
  sftpClient.put(new FileSystemFile(local.toFile()), remotePath);
}

In Commons VFS, uploading and downloading files is abstracted as an operation on a file system. So both are represented by the copyFrom method of a FileObject: an upload is a copyFrom operation on a remote file object and a download is a copyFrom operation on a LocalFile.


StandardFileSystemManager fileSystemManager = new StandardFileSystemManager();
// ... configuration
remoteRootDirectory = fileSystemManager.resolveFile(connectionUrl, connectionOptions);

LocalFile localFileObject = (LocalFile) fileSystemManager.resolveFile(local.toUri().toString());
FileObject remoteFileObject = remoteRootDirectory.resolveFile(remotePath);
try {
  // download
  localFileObject.copyFrom(remoteFileObject, new AllFileSelector());

  // upload
  remoteFileObject.copyFrom(localFileObject, new AllFileSelector());
} finally {
  localFileObject.close();
  remoteFileObject.close();
}

JSch also provides an SFTP client; in JSch it is called ChannelSftp. It has two methods, one for the download (ChannelSftp.get) and one for the upload (ChannelSftp.put).


// here: creation and configuration of session

ChannelSftp sftpChannel = null;
try {
  sftpChannel = (ChannelSftp) session.openChannel("sftp");
  sftpChannel.connect();

  // download
  InputStream inputStream = sftpChannel.get(remotePath);
  Files.copy(inputStream, localPath);

  // upload
  OutputStream outputStream = sftpChannel.put(remotePath);
  Files.copy(localPath, outputStream);
} catch (SftpException | JSchException ex) {
  throw new IOException(ex);
} finally {
  if (sftpChannel != null) {
    sftpChannel.disconnect();
  }
}

Execute Shell Commands

Only Commons VFS doesn’t support executing plain shell commands. In SSHJ it is a two-liner: the SSHClient starts a new Session object, and this object executes the shell command. It is very intuitive.


// creation and configuration of sshClient

try (Session session = sshClient.startSession()) {
  session.exec("ls");
}
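
If the command output is needed, it can be read from the returned Command object. A small sketch, assuming SSHJ’s IOUtils helper from net.schmizz.sshj.common is available:

try (Session session = sshClient.startSession()) {
  Session.Command cmd = session.exec("ls");
  // read the command's standard output completely
  String output = IOUtils.readFully(cmd.getInputStream()).toString();
  cmd.join(); // wait for the command to finish
  System.out.println(output);
}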

In JSch the ChannelExec is responsible for executing shell commands over SSH. First the command is set on the channel, and then the channel has to be started. It isn’t as intuitive as in SSHJ.


// here: creation and configuration of session object

ChannelExec execChannel = null;
try {
  execChannel = (ChannelExec) session.openChannel("exec");
  // the command has to be set before the channel is connected
  execChannel.setCommand(command);
  execChannel.connect();
} catch (JSchException ex) {
  throw new IOException(ex);
} finally {
  if (execChannel != null) {
    execChannel.disconnect();
  }
}

File Operations On the Remote Hosts

All libraries support file operations over SFTP on remote machines, more or less completely. In SSHJ the SFTPClient also has methods for file operations. The names of the methods are the same as the corresponding file operations on a Linux system. The following code snippet shows how to delete a file.


//here: creation and configuration of sshClient

try (SFTPClient sftpClient = sshClient.newSFTPClient()) {
  sftpClient.rm(remotePath);
}
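
One requirement from the list above was listing all children of a given folder filtered by type. A sketch of how this could look with SSHJ (RemoteResourceInfo comes from SSHJ’s sftp package; remoteDirPath is assumed to point to an existing remote directory):

try (SFTPClient sftpClient = sshClient.newSFTPClient()) {
  // list all children of the remote directory and keep only the folders
  for (RemoteResourceInfo child : sftpClient.ls(remoteDirPath)) {
    if (child.isDirectory()) {
      System.out.println(child.getName());
    }
  }
}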

Commons VFS’s core functionality is file operations. The usage takes some getting used to: a file object has to be resolved first, and the file operations can then be performed on it.


// here: creation and configuration of remoteRootDirectory

FileObject remoteFileObject = remoteRootDirectory.resolveFile(remotePath);
try {
  remoteFileObject.delete();
} finally {
  remoteFileObject.close();
}

JSch’s SFTP client, ChannelSftp, also has methods for file operations. Most file operations are supported by this channel; some, however, are not. For example, copying a file on the remote machine has to be done with plain shell commands over the ChannelExec.

// here: creation and configuration of session
ChannelSftp sftpChannel = null;
try {
  sftpChannel = (ChannelSftp) session.openChannel("sftp");
  sftpChannel.connect();
  sftpChannel.rm(remotePath);
} catch (SftpException | JSchException ex) {
  throw new IOException(ex);
} finally {
  if (sftpChannel != null) {
    sftpChannel.disconnect();
  }
}

Conclusion

After this comparison I have two favourites, SSHJ and Commons VFS. SSHJ has a very clear API, and I would choose it if I needed a common SSH client or if file operation support over SFTP were sufficient. I would choose Commons VFS if I had to perform file operations over many file system protocols and a common SSH client were not needed. In case I needed both, I could use JSch directly to execute commands over SSH. The API of Commons VFS takes some getting used to, but after understanding the concept behind it, its usage is straightforward.

The whole source code of this comparison is hosted on GitHub.

Useful Links

  1. SSHJ homepage
  2. JSch homepage
  3. Commons VFS homepage
  4. Wikipedia page about SFTP
  5. Source code of this comparison on GitHub



Unit And Integration Test Reports For Maven Projects In SonarQube

Since SonarQube 4.2, the test report isn’t generated by the Sonar Maven Plugin during a Maven build (see SonarQube’s blog post). Therefore, the test report has to be generated by another plugin before the Sonar Maven Plugin collects the information for the SonarQube server. Here, the Jacoco Maven Plugin can help. It can generate test reports that SonarQube understands. The Jacoco Maven Plugin even goes one step further: it can also generate a test report for integration tests.

In the following sections, a solution is presented that meets the following criteria:

  • Maven is used as build tool.
  • The project can be a multi module project.
  • Unit tests and integration tests are parts of each module. Here, integration tests are tests that test the integration between classes in a module.
  • Test reports are separated into a unit test report and an integration test report.

The road map for the next sections: first, the Maven project structure for separating unit and integration tests is shown. Then the Maven project configuration for separate unit test runs and integration test runs follows. After that, we have a look at the Maven project configuration for the test report generation, separated into unit tests and integration tests. At the end, SonarQube’s configuration for the test report visualization in SonarQube’s dashboard is shown.

Maven Project Structure

At first, let’s look at what a default Maven project structure looks like for a single-module project.

my-app
├── pom.xml
├── src
│   ├── main
│   │   └── java
│   │       └──
│   └── test
│       └── java
│           └──

The directory src/main/java contains the production source code and the directory src/test/java contains the test source code. We could put unit tests and integration tests together in this directory. But we want to separate these two types of tests in separate directories. Therefore, we add a new directory called src/it/java. Then unit tests are put in the directory src/test/java and the integration tests are put in the directory src/it/java, so the new project structure looks like the following one.

my-app
├── pom.xml
├── src
│   ├── it
│   │   └── java
│   │       └──
│   ├── main
│   │   └── java
│   │       └──
│   └── test
│       └── java
│           └──

Unit And Integration Test Runs

Fortunately, the unit test run configuration is part of the default Maven project configuration. Maven runs these tests automatically if the following criteria are met:

  • The tests are in the directory src/test/java and
  • the test class name either starts with Test or ends with Test or TestCase.

Maven runs these tests during the build lifecycle phase test.

The integration test run configuration has to be done manually. There are Maven plugins that can help. We want the following criteria to be met:

  • integration tests are stored in the directory src/it/java and
  • the integration test class name either starts with IT or ends with IT or ITCase and
  • integration tests run during Maven’s build lifecycle phase integration-test.

Firstly, Maven has to know that it has to include the directory src/it/java in its test class path. Here, the Build Helper Maven Plugin can help. It adds the directory src/it/java to the test class path.


<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>1.8</version>
  <executions>
    <execution>
      <id>add-test-source</id>
      <phase>process-test-sources</phase>
      <goals>
        <goal>add-test-source</goal>
      </goals>
      <configuration>
        <sources>
          <source>src/it/java</source>
        </sources>
      </configuration>
    </execution>
    <execution>
      <id>add-test-resources</id>
      <phase>generate-test-resources</phase>
      <goals>
        <goal>add-test-resource</goal>
      </goals>
      <configuration>
        <resources>
          <resource>
            <directory>src/it/resources</directory>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>

The above code snippet has to be inserted into the section <project><build><plugins> in the project root pom.

Maven’s build lifecycle contains a phase called integration-test. In this phase, we want to run the integration tests. Therefore, we bind the Maven Failsafe Plugin to the phase integration-test:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.13</version>
  <configuration>
    <encoding>${project.build.sourceEncoding}</encoding>
  </configuration>
  <executions>
    <execution>
      <id>failsafe-integration-tests</id>
      <phase>integration-test</phase>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Again, the above code snippet has to be inserted into the section <project><build><plugins> in the project root pom. The Maven Failsafe Plugin then runs the integration tests automatically when their class name either starts with IT or ends with IT or ITCase.
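
For illustration, a minimal integration test class that the Maven Failsafe Plugin would pick up could look like this (the package and class names are made up; JUnit 4 is assumed):

package com.example;

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// picked up by the Failsafe Plugin because the class name ends with "IT"
// and the class lives under src/it/java
public class GreetingServiceIT {

  @Test
  public void assemblesGreetingFromItsParts() {
    assertEquals("Hello World", "Hello" + " " + "World");
  }
}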

Test Report Generation

We want to use the Jacoco Maven Plugin for the test report generation. It should generate two test reports, one for the unit tests and one for the integration tests. Therefore, the plugin has two separate agents that have to be prepared; they then generate the reports during the test runs. Maven’s build lifecycle contains preparation phases before the test phases (test and integration-test). The preparation phase for the test phase is called process-test-classes, and the preparation phase for the integration-test phase is called pre-integration-test. We bind the Jacoco Maven Plugin into these two phases, so the configuration of this plugin looks like the following code snippet (again, it is part of the section <project><build><plugins>):

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.7.2.201409121644</version>
  <executions>
    <execution>
      <configuration>
        <destFile>${sonar.jacoco.reportPath}</destFile>
      </configuration>
      <id>pre-test</id>
      <phase>process-test-classes</phase>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <!-- we want to execute jacoco:prepare-agent-integration in the test phase, but before executing the maven failsafe plugin -->
    <execution>
      <configuration>
        <destFile>${sonar.jacoco.itReportPath}</destFile>
      </configuration>
      <id>pre-itest</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>prepare-agent-integration</goal>
      </goals>
    </execution>
  </executions>
</plugin>

The configuration element destFile is the path to the location where the test report should be stored. It is important to use the properties ${sonar.jacoco.reportPath} and ${sonar.jacoco.itReportPath}. These properties are used by SonarQube to find the test reports for the visualization.
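
These properties are not predefined; they have to be given values in the pom, typically in the properties section of the root pom. A sketch of how this could look (the concrete paths are assumptions; adjust them to your project):

<properties>
  <!-- example locations where the Jacoco agents write their reports -->
  <sonar.jacoco.reportPath>${project.build.directory}/jacoco.exec</sonar.jacoco.reportPath>
  <sonar.jacoco.itReportPath>${project.build.directory}/jacoco-it.exec</sonar.jacoco.itReportPath>
</properties>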

Now we can run mvn install, and our project is built including unit and integration test runs and the generation of the two test reports.

SonarQube Test Report Visualization

Now we want to visualize our test reports in SonarQube. Therefore, we have to run the Sonar Maven 3 Plugin (command mvn sonar:sonar) in our project after a successful build.

When we open our project in the SonarQube dashboard, we only see the unit test report per module. The reason is that the report visualization for the integration tests has to be configured separately in SonarQube. These configuration steps are described very well in the SonarQube documentation.

Summary

This post describes how to generate test reports for unit and integration tests during a Maven build. On GitHub, I host a sample project that demonstrates all configuration steps. As the technical environment I use

  • Maven 3.2.5
  • Maven Plugins:
    • Maven Surefire Plugin
    • Maven Failsafe Plugin
    • Build Helper Maven Plugin
    • Jacoco Maven Plugin
    • Sonar Maven 3 Plugin
  • SonarQube 4.5.1
  • Java 7

Links

  1. SonarQube’s blog post Unit Test Execution in SonarQube
  2. Jacoco Maven plugin project site
  3. Introduction to Maven’s build lifecycle
  4. Maven Failsafe Plugin  project site
  5. Build Helper Maven Plugin project site
  6. SonarQube documentation about Code Coverage by Integration Tests for Java Project
  7. A sample Maven project on GitHub



Configuration over JNDI in Spring Framework

From a certain point on, an application has to be configurable. The Spring Framework has had a nice auxiliary tool for this since the very first version 0.9, the class PropertyPlaceholderConfigurer, and since Spring Framework 3.1 the class PropertySourcesPlaceholderConfigurer. When you start a Google search for PropertyPlaceholderConfigurer, you will find many examples where the configuration items are stored in properties files. But in many Java enterprise applications, it is common that the configuration items are loaded via JNDI look-ups. I’d like to demonstrate how the PropertyPlaceholderConfigurer (before Spring Framework 3.1) and the PropertySourcesPlaceholderConfigurer (since Spring Framework 3.1) can ease the configuration via JNDI look-ups in our application.

Initial Situation

We have a web application that has a connection to a database. This database connection has to be configurable. The configuration items are defined in a web application context file.


<Context docBase="/opt/tomcat/warfiles/jndi-sample-war.war" antiResourceLocking="true">
  <Environment name="username" value="demo" type="java.lang.String" override="false"/>
  <Environment name="password" value="demo" type="java.lang.String" override="false"/>
  <Environment name="url" value="jdbc:mysql://192.168.56.101:3306/wicket_demo" type="java.lang.String" override="false"/>
</Context> 

For loading these configuration items, the JNDI look up mechanism is used.

In our application we define a data source bean in a Spring context XML file. This bean represents the database connection.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
     http://www.springframework.org/schema/context
         http://www.springframework.org/schema/context/spring-context.xsd">

  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
        destroy-method="close">
        <property name="url" value="${url}" />
        <property name="username" value="${username}" />
        <property name="password" value="${password}" />
  </bean>
</beans> 

Every value that is enclosed in ${} should be replaced by the PropertyPlaceholderConfigurer or PropertySourcesPlaceholderConfigurer when the application is launched. The next step is to set up the PropertyPlaceholderConfigurer or, respectively, the PropertySourcesPlaceholderConfigurer.

Before Spring Framework 3.1 – PropertyPlaceholderConfigurer Set Up for JNDI Look Up

We define a PropertyPlaceholderConfigurer bean in a Spring context XML file. This bean contains an inner bean that maps the property names of the data source bean to the corresponding JNDI names. A JNDI name consists of two parts. The first part is the name of the context in which the resource is located (in our case java:comp/env/) and the second part is the name of the resource (in our case either username, password or url).

<bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="properties">
        <bean class="java.util.Properties">
            <constructor-arg>
                <map>
                    <entry key="username">
                        <jee:jndi-lookup jndi-name="java:comp/env/username" />
                    </entry>
                    <entry key="password">
                        <jee:jndi-lookup jndi-name="java:comp/env/password" />
                    </entry>
                    <entry key="url">
                        <jee:jndi-lookup jndi-name="java:comp/env/url" />
                    </entry>
                </map>
            </constructor-arg>
        </bean>
    </property>
</bean>
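
Note that the <jee:jndi-lookup/> element requires the jee namespace to be declared in the context file. The header of that file would have to look something like this sketch:

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jee="http://www.springframework.org/schema/jee"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd">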

Since Spring Framework 3.1 – PropertySourcesPlaceholderConfigurer Set Up for JNDI Look Up

Since Spring 3.1, PropertySourcesPlaceholderConfigurer should be used instead of PropertyPlaceholderConfigurer. One effect is that since Spring 3.1 the <context:property-placeholder/> namespace element registers an instance of PropertySourcesPlaceholderConfigurer (the namespace definition must be spring-context-3.1.xsd) instead of PropertyPlaceholderConfigurer (you can simulate the old behaviour by using the namespace definition spring-context-3.0.xsd). So our Spring XML context configuration is very short, as long as you comply with some conventions (based on the principle of convention over configuration).

<context:property-placeholder/>

The default behavior is that the PropertySourcesPlaceholderConfigurer iterates through a set of PropertySource objects to collect all property values. In a Spring-based web application, this set contains a JndiPropertySource by default. By default, JndiPropertySource looks up JNDI resource names prefixed with java:comp/env. This means that if your property is ${url}, the corresponding JNDI resource name has to be java:comp/env/url.

The source code of the sample web application is hosted on GitHub.



QA Milestones since the Foundation of an eHealth Framework

“Quality exists when the price is long forgotten”, said Mr Royce to Mr Rolls with his petrol-smeared face, or Mr Rolls to Mr Royce – but anyway.

The question is: which price is he talking about?

Money, yes of course! But what about Pain, Features, Competition, Time, Returns, No-Family, Consumer Satisfaction, Feelgood Management, Lifecycle…. – I think you get the point.

The reason for the pain I feel right now is that, for the third time, I have to move my contributions for the Developer Network of ICW to another webspace.

So I was quite surprised and, more importantly, very thankful for the offer from SKM (big hugs!) to finally be part of this environment full of motivation, positive energy and knowledge, and to link the following stuff for eternity to my Xing account.

To all who are magically attracted to quality, fighting against superficiality, sensitised to sense-of-self vs. awareness-of-others and promoters of best practises for Continuous Integration and Continuous Delivery.

Let’s go…

QA Milestones since the Foundation of the eHealth Framework (2010)

Imagine that you are responsible for the quality of the underlying technical platform in the scope of an electronic health record that is being developed from scratch. The platform is the eHealth Framework (eHF). In addition, this solution is part of an integrated health suite with a software development kit (SDK) and various professional services products. On 2 December 2009 the tenth version of the eHealth Framework, with the version number 2.9, was released. Therefore it is a good occasion to take a look back and remind ourselves of the challenges we have faced so far in QA, how we solved them and how we generated value. As the saying goes: on time, on scope, on budget! The role that quality plays in this mix is the subject of this article.

QA from Scratch to First Release

When developing software from scratch you first need to lay the foundations so that when it is time to roll out the software all the features and usage scenarios can be tested. Sounds easy but it isn’t!

Here is a list of the highlights:

  • Product Management produced the Software Requirement Specification (SRS) with an overview of the priorities for developing the basis of the new LifeSensor, a personal health record.
  • Based on the Software Requirement Specification, QA extracted all the relevant test cases; these were then reviewed by Product Management.
  • The tooling to carry out the necessary tests was built. To ensure the web services testing could be carried out, we built a Java-based web services test client that catered for maximum flexibility and automation. The test data was stored in an MS Excel sheet.
  • The development cycle was such that every Thursday there was a new version to be tested. Based on our testing we created new bugs as necessary.
  • The respective Product Manager created acceptance tests for the newly developed features that had to be completed for the current week. In this way everyone was involved in an operational feature. In such a situation, be careful with statements like “…I’m 90% finished…” when you are already in the 3rd day of finalizing the requested missing 10%. I don’t want to criticize anyone, because requirements can and do change, and that causes effects from the DB straight to the GUI and its usability.
  • To steer the project there was a clear focus on pragmatic and sensible reporting: tested, not tested and accepted bugs, including top-ten risks.

Lessons learned:

  • It was quickly proved in practice that integrative tests between WS-to-GUI and GUI-to-WS were valuable. Yes, there was a GUI too.
  • Another important point to share: feature-ready or QA-ready is NOT production-ready. So involve the operating administrators immediately to keep everything maintainable.
  • Take care of these guys! Make the operating administrators your friends. Do everything!

QA after Initial Roll-out

Once the software has gone into production, each new feature must fit into the existing application both conceptually and functionally, without introducing any unwanted side effects. In addition, we were confronted with the mandatory requirement that each new version of a framework with existing clients must face.

The keyword here is: backwards compatibility. Here it is also important to mention that LifeSensor development was then split between developing its own specific components and its basis components, the latter becoming the backbone of ICW’s health care platform: this was the birth of the eHealth Framework with its own development team.

Here are some of the highlights of this new team:

  • We changed our software development process to agile inside a V-Model and introduced Scrum. This means that we QAs worked in parallel with the development team and product managers. We were no longer on the receiving end of getting a piece of software thrown over the fence and having to test it somewhere in isolation. The whole team was responsible for delivering the features. The whole team was responsible for quality; it was no longer QA alone. This is the modern QA approach, like in modern soccer: if the Bavarians have the ball, then Lewandowski becomes the first line of defense. If you take this thought further, our approach and reason for existing is that QA has to deliver added value and support to development and product management in order to be accepted day by day. For instance, this means no longer being able to hide behind theoretical metrics discussions.
  • We refactored the architecture of our test client and separated common components like object creation from the test data in MS Excel and test result reporting. The added value here was that other teams could easily reuse our test data and checks when using the eHealth Framework, or if they wanted to build their own test client.
  • We started to build up a continuous integration environment based on CruiseControl, where every test in our test client was executed during the night.
  • We developed further features in our test client in order to minimize duplication of test data based on localization needs (for instance, different test data was necessary for Austria, Switzerland…).
  • Once again: we had already released a software version and would have to release further software versions. So please think about the attached diagram from the Software Lifecycle Management session of the ICW Developer Conference 2008 and what that means for ensuring backwards compatibility.

[Diagram: eHF Lifecycle]

Lessons learned:

  • Do not underestimate the Software Lifecycle. And it is good to have the checks in a continuous integration process.
  • QA is not only testing. QA is not Build Management.
  • Build Management is an independent IT topic with special in-depth know-how and provides a crucial service for the development and the QA team.
  • With software in production, be prepared that QA will take on new roles as supporter and investigator of potential bugs.

QA Between Version 2.1 and 2.8

The first part dealt with the historical roots of QA. We now make a big jump over the released versions. Meanwhile, the amount of test data had multiplied. We also supported new platforms such as GlassFish. Now the existing tools had to be made manageable and usable with this amount of necessary test data, meaning usability is not only a topic for developing a GUI. We carried out about 50,000 tests per night on the web services level, per code line, and rising.

Here is a collection of highlights and ideas:

  • The test data in the Excel sheets was clustered into fragments where it made sense. For example, it is no longer necessary to include the address in each Emergency Contact as test data; you can pick the one you want from a pool of addresses and reference it. This saves lots of maintenance work when the interface changes.
  • With each negative test, we check the expected exception. However, exceptions and their messages are also subject to change. So what happens then to our backward compatibility tests? They will all fail. What have we done about this? We reused the localization feature that I talked about in “QA after Initial Roll-out” and included there the newly expected exceptions for backwards compatibility testing. And how does the test now know its context, whether it is a backward compatibility test or a normal test for its own initial version? Talk with your build management team; they will provide you with a switch.

The continuous integration (CI) environment itself was also continuously developed. To deal with the quantity and to scale better, we moved to Hudson. Appropriate monitoring solutions were designed, and a “merge assistant” was developed for more control over this process. Additionally, the Tomcat server logs are now published by Hudson for every test that is run. In order to test auditing and encryption on the database level, we developed direct access to the DB, reusing the already written and automated test cases. Of course, everything can be run on local machines as well as in the CI environment. We carried out a test tool evaluation, but at that time the support for the agile Java domain and an artifact-based approach did not look so good.

Lessons learned:

  • Automation is king. Period! But be careful, because automation can make you stupid. Systems and approaches change, and therefore the test cases have to change accordingly. Just configuring the test system and pushing one BIG button is not QA but a hit-and-run approach – even when the test results are all “executed ok” in the log files. So we have to ask ourselves every time: are we still testing the right things?
  • Greatness must be worked for hard every day and is not a default value. Establish such a likeable environment.
  • Now take a deep breath and think about what the process is worth when time gets tight. Everybody should think about it at least once more after a release is delivered.
  • Establish a weekly bug smashing session. Here it is important that not only the QA team works to resolve the bugs, but that decision makers from PM and Dev are also represented. The whole project will benefit greatly from that.

QA for the Latest Release – 2.9

It will not surprise anyone if I say the amount of work is no less. What was not mentioned so far is that we also have an outstanding development team in Bulgaria, which means the communication must take this into account. But there are enough tools to cater for this situation; they only need to be properly implemented. Imagine that the whole team is located on the same floor, but everyone stays at their desk. Fortunately, Scrum addresses this issue. It is not complicated.
So we’re into the final stretch. Here are the main highlights and some associated ideas:

  • We achieved the goal of including the whole system documentation (based on DITA) in our CI environment (for example, part of this documentation set is included on ICW’s partner DVDs).
  • Not only bugs, but all story planning as well as version tracking is completely mapped in Jira. This means we had absolutely everything that we needed to create a release in our CI environment.
  • Some bugs need to be fixed not only for future versions but also in older versions or code lines. How can I ensure that everything is remembered? If we determine during bug smashing that we need to provide a fix for an older version, we record this information as an additional subtask in the Jira entry. Everything is now transparent; if there are side effects, everyone can see whether and when a fix was checked in, and QA can start with retesting.

As far as our CI environment went, we had to solve the problem that in the meantime we carried out so many tests that they were not finished in the morning, but rather at midday. In this regard, our build management worked together with development and QA to improve the feedback time by optimizing the build time. Some intermediate steps towards the required solution included different load balancing and tuning of the integration tests.

How to improve further:
Have you ever experienced the situation where somebody calls you up and nervously asks “…QA! Do you have a test which checks the… (fill in some scenario you hope is covered)”? Whenever the customer claims to have found a bug, suddenly, in that second, this becomes for every stakeholder such a clear and logical, never-to-be-forgotten test. So. Hmmmmm! Why not identify these “obvious” ones a little bit sooner? How? We have product managers, developers and QAs, and everybody brings their own perspective to every single feature. We can use these perspectives, and you will be surprised how your test cases increase. Every test case you identify in such sessions is an additional safety net and a good day for the quality of your product. Donate this to your system as much as possible. It is of great benefit.

One of the next challenges will be adding micro-benchmarking tests to our test scenarios. This will never replace the mandatory performance test by the specially skilled people in our performance lab for every release, but it gives you early information about possible side effects while your system is growing.
I hope you liked this short overview of our work and that you can find something here that will help you in your own daily work. Thanks to my colleagues in technical writing and build management for their input and help with this post.

Finished.

PS: There is also a PDF version for you to download in the mediathek.

Spring Web Application With Hessian Services As a Maven Project

This post describes how to set up a Maven project for a Spring web application with a Hessian service. It also shows how to set up the deployment for exposing the Hessian service and how to set up a client to consume it. The Spring Framework version is 3.1.1.RELEASE and the Hessian version is 4.0.7.

Maven Set Up For Server

Our Maven project has three modules:

  • hello-world-api
  • hello-world-impl
  • hello-world-war

<modules>
  <module>hello-world-api</module>
  <module>hello-world-impl</module>
  <module>hello-world-war</module>
</modules>

The module hello-world-api contains the interfaces of the services that the Hessian server and the Hessian client need for communication. The module hello-world-impl contains the implementation of the services that are deployed on the server side. The module hello-world-war contains the configuration for the servlet container and the configuration of which services should be exported as Hessian services.

The Service Interface Definition

The module hello-world-api should become a JAR artifact, so the packaging of this Maven module is jar (Maven’s default, which is why it is not declared explicitly):


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.github.skosmalla.spring.hessian</groupId>
    <artifactId>hello-world-spring-hessian</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>

  <artifactId>hello-world-api</artifactId>

  <name>Hello World Api</name>

</project>

The JAR contains the interfaces of the services. An example:


package com.github.skosmalla.hello.world.spring.hessian;

public interface HelloWorld {

  public String welcome();

}

The Service Implementation

The module hello-world-impl also becomes a JAR artifact. This JAR contains the service implementation for the server. The service implementation could look like the following code:


package com.github.skosmalla.hello.world.spring.hessian;

public class HelloWorldImpl implements HelloWorld {

  @Override
  public String welcome() {
    return "Hello World";
  }

}

So that this implementation can be used as a Hessian service, it has to be defined as a Spring bean. Therefore, we need a Spring configuration file hello-world-service-config.xml. The location for this file is src/main/resources/META-INF/spring. The bean configuration looks like the following code:


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

  <bean id="helloWorldService" class="com.github.skosmalla.hello.world.spring.hessian.HelloWorldImpl"/>

</beans>

In this module we have two dependencies, one on the API module and one on the spring-beans module.


<dependencies>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-beans</artifactId>
  </dependency>
  <dependency>
    <groupId>com.github.skosmalla.spring.hessian</groupId>
    <artifactId>hello-world-api</artifactId>
  </dependency>
</dependencies>

The WAR Deployment

The module hello-world-war describes the configuration for the server deployment. The artifact of this module becomes a WAR file. Therefore, the packaging of this Maven module is war.


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.github.skosmalla.spring.hessian</groupId>
    <artifactId>hello-world-spring-hessian</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>

  <artifactId>hello-world-war</artifactId>

  <name>Hello World WAR </name>

  <packaging>war</packaging>

  <build>
    <defaultGoal>install</defaultGoal>
  </build>
</project>

Now, we have to do two things to run our server application in a servlet container:

  1. Add the configuration of the Spring application context with our implementation to the servlet container.
  2. Add the configuration to dispatch requests to our Hessian service.

To create an ApplicationContext instance in a web application, we have to configure a ContextLoaderListener in our Web Application Deployment Descriptor (location: src/main/webapp/WEB-INF/web.xml):

<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>classpath:META-INF/spring/*.xml</param-value>
</context-param>

<listener>
  <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

The ContextLoaderListener builds the root application context from all Spring configuration files located in the classpath under META-INF/spring (that pattern matches our hello-world-service-config.xml). But no requests can be processed yet. Therefore, we have to configure a servlet that dispatches the requests to the service. Here, the Spring Framework supports us with the DispatcherServlet. To use it, we have to add that servlet to our Web Application Deployment Descriptor (location: src/main/webapp/WEB-INF/web.xml):

<servlet>
  <servlet-name>hessian</servlet-name>
  <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
  <servlet-name>hessian</servlet-name>
  <url-pattern>/hessian/*</url-pattern>
</servlet-mapping>

That configuration means that the servlet is named hessian and is responsible for all requests to the URL http://<URL to the Tomcat instance>/<webapp-context>/hessian/*. Now, we have to configure the Hessian service interfaces that are dispatched by the servlet hessian. For this, we have to add a Spring configuration file in the same location as the Web Application Deployment Descriptor (src/main/webapp/WEB-INF). The name of that file has to be hessian-servlet.xml (the pattern is <servlet name>-servlet.xml). Here, we configure the Hessian service interface:


<bean name="/HelloWorldService" class="org.springframework.remoting.caucho.HessianServiceExporter">
  <property name="service" ref="helloWorldService" />
  <property name="serviceInterface" value="com.github.skosmalla.hello.world.spring.hessian.HelloWorld" />
</bean>

That Spring configuration file defines a new application context. It is a child application context of the root application context loaded by the ContextLoaderListener. A child application context can see every bean of the root application context, but the root application context cannot see the beans of its child application contexts (for more information, have a look at the Spring Framework reference). The HessianServiceExporter has a reference to the service implementation, defined in the root application context.
The URL of that Hessian service interface is http://<URL to the Tomcat instance>/<webapp-context>/hessian/HelloWorldService (the pattern is http://<URL to the Tomcat instance>/<webapp-context>/hessian/<bean name of the HessianServiceExporter>).

So that these configurations can run in a servlet container, we have to add the dependencies that contain the HessianServiceExporter, DispatcherServlet and ContextLoaderListener to the pom.xml:


<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-webmvc</artifactId>
</dependency>
<dependency>
  <groupId>com.github.skosmalla.spring.hessian</groupId>
  <artifactId>hello-world-impl</artifactId>
</dependency>
<dependency>
  <groupId>com.caucho</groupId>
  <artifactId>hessian</artifactId>
</dependency>

The Hessian dependency is needed at runtime; the hello-world-impl dependency contains our business logic and the Spring configuration file for the root application context.

With mvn clean install, Maven builds a WAR file in the project’s target folder. This WAR file can be deployed on a Tomcat.

The Hessian Test Client With Spring

Now we write a client to test the Hessian service. Therefore, we create a new Maven module.


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.github.skosmalla.spring.hessian</groupId>
    <artifactId>hello-world-spring-hessian</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>

  <artifactId>hello-world-client</artifactId>

  <name>Hello World Client</name>

</project>

Spring Framework offers a HessianProxyFactoryBean for calling the remote HelloWorld service. The configuration for this HessianProxyFactoryBean could look like the following code snippet:


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

<bean id="helloWorldService"
  class="org.springframework.remoting.caucho.HessianProxyFactoryBean">
  <property name="serviceUrl"
    value="http:/localhost:8080/hello-world/hessian/HelloWorldService" />
  <property name="serviceInterface"
    value="com.github.skosmalla.hello.world.spring.hessian.HelloWorld" />
</bean>
</beans>

In the property serviceInterface we define the interface of the Hessian service, here com.github.skosmalla.hello.world.spring.hessian.HelloWorld. In the property serviceUrl we define the URL of the Hessian service deployed on the Tomcat. In our sample, the Tomcat runs on localhost with port number 8080 and the web application is hello-world.

Now this factory bean creates the Hessian service proxy for us:


package com.github.skosmalla.hello.world.spring.hessian;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class HessianClient {

  public static void main(String[] args) {
    ApplicationContext appContext = new ClassPathXmlApplicationContext("META-INF/spring/hessian-config.xml");

    HelloWorld service = (HelloWorld) appContext.getBean("helloWorldService");

    String welcomeMessage = service.welcome();

    System.out.println(welcomeMessage);

  }
}

The dependencies for the client are the following ones:

<dependencies>
  <dependency>
    <groupId>com.github.skosmalla.spring.hessian</groupId>
    <artifactId>hello-world-api</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
  </dependency>
  <dependency>
    <groupId>com.caucho</groupId>
    <artifactId>hessian</artifactId>
    <scope>runtime</scope>
  </dependency>
</dependencies>

If we start this client we will get “Hello World” on our command line.

Now we have seen a full example of how to set up a Hessian service in a Spring web application and how to call such a service remotely. You can find the full code on GitHub.


Read Classpath Resource from Jar Files with Spring

To read resources from the classpath, you can use the Spring class ClassPathResource. The constructor with one argument reads only resources from the file system. But sometimes you want to read a resource from a JAR file in the classpath. For this case, you must build your own URLClassLoader from the URL of the JAR file and use the ClassPathResource constructor with the arguments path and classLoader. The path value is the path of the resource within the JAR file.

An Example

We have a JAR file named example.jar. Its content looks as follows:

example.jar
|_ META-INF
   |_ sample-resource.txt

We want to read sample-resource.txt. The source code for this example looks as follows:

// build a class loader that includes the jar ("path_to_jar_file" is a placeholder)
URL jarUrl = new File("path_to_jar_file").toURI().toURL();
URLClassLoader jarLoader = new URLClassLoader(new URL[]{jarUrl}, Thread.currentThread().getContextClassLoader());
// the path is resolved relative to the jar root
ClassPathResource sampleResource = new ClassPathResource("META-INF/sample-resource.txt", jarLoader);
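
To check that the look-up works, the resource content can then be read from the ClassPathResource. A small sketch using Spring’s FileCopyUtils from org.springframework.util (the UTF-8 charset is an assumption):

// read the whole resource into a String; copyToString closes the reader
String content = FileCopyUtils.copyToString(
    new InputStreamReader(sampleResource.getInputStream(), "UTF-8"));
System.out.println(content);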