
Generate P2 Repository From Maven Artifacts In 2017

Some years ago, I wrote a blog post about how to generate a P2 repository based on Maven artifacts. The approach described there is obsolete nowadays, so I'd like to show a new approach based on the p2-maven-plugin, which was created to solve exactly this problem.

P2-Maven-Plugin Integration in Maven Build Life Cycle

First of all, we bind the p2-maven-plugin's goal site to Maven's lifecycle phase package. This goal is responsible for the generation of the P2 repository.

<plugin>
  <groupId>org.reficio</groupId>
  <artifactId>p2-maven-plugin</artifactId>
  <version>1.3.0</version>
  <executions>
    <execution>
      <id>default-cli</id>
      <phase>package</phase>
      <goals>
        <goal>site</goal>
      </goals>
      <!--... -->
    </execution>
  </executions>
</plugin>

Generating P2 Repository

Now we can define which Maven artifacts should be part of the new P2 repository. It is irrelevant for the p2-maven-plugin whether the defined artifacts already have an OSGi manifest or not. If no OSGi manifest exists, the plugin will generate one.


<execution>
<!-- ... -->
<configuration>
  <artifacts>
    <!-- specify your dependencies here -->
    <!-- groupId:artifactId:version -->
    <artifact>
      <id>com.google.guava:guava:jar:23.0</id>
      <!-- Artifact with existing OSGi-Manifest-->
    </artifact>
    <artifact>
      <id>commons-io:commons-io:1.3</id>
      <!-- Artifact without existing OSGi-Manifest-->
    </artifact>
  </artifacts>
</configuration>
</execution>

The artifacts are specified by the pattern groupId:artifactId:version (optionally with the packaging, e.g. groupId:artifactId:jar:version, as in the example above). If you want to save some typing, use the Buildr tab on the MVN Repository website to copy the dependency declaration in the right format.

This sample configuration creates a P2 repository that looks like the following one:


target/repository
├── artifacts.jar
├── category.xml
├── content.jar
└── plugins
    ├── com.google.code.findbugs.jsr305_1.3.9.jar
    ├── com.google.errorprone.error_prone_annotations_2.0.18.jar
    ├── com.google.guava_23.0.0.jar
    ├── com.google.j2objc.annotations_1.1.0.jar
    ├── commons-io_1.3.0.jar
    └── org.codehaus.mojo.animal-sniffer-annotations_1.14.0.jar

1 directory, 9 files

 

By default, the plugin also downloads all transitive dependencies of the defined artifacts and packs them into the P2 repository. If you don't want this, you have to set the option transitive to false in the corresponding artifact declaration. If you also need the sources of a defined artifact in the P2 repository (provided they exist in the Maven repository), you have to set the option source to true in the corresponding artifact declaration.

<!-- ... -->
<artifact>
  <id>com.google.guava:guava:jar:23.0</id>
  <transitive>false</transitive>
  <source>true</source>
</artifact>
<!-- ... -->

Then the generated P2 repository looks like the following one:


target/repository
├── artifacts.jar
├── category.xml
├── content.jar
└── plugins
    ├── com.google.guava.source_23.0.0.jar
    ├── com.google.guava_23.0.0.jar
    └── commons-io_1.3.0.jar

1 directory, 6 files

Generating P2 Repository With Grouped Artifacts

In some situations, you want to group artifacts into so-called features. The p2-maven-plugin provides an option that allows grouping Maven artifacts directly into features. The artifacts are defined in the same way as above; the difference is that the definition has to be placed inside the corresponding feature. In addition, the feature definition needs some metadata such as a feature ID, a feature version, a description, etc.


<!-- ...-->
<configuration>
  <featureDefinitions>
    <feature>
      <!-- Generate a feature including artifacts that are listed below inside the feature element-->
      <id>spring.feature</id>
      <version>4.3.11</version>
      <label>Spring Framework 4.3.11 Feature</label>
      <providerName>A provider</providerName>
      <description>${project.description}</description>
      <copyright>A copyright</copyright>
      <license>A licence</license>
      <artifacts>
        <artifact>
          <id>org.springframework:spring-core:jar:4.3.11.RELEASE</id>
        </artifact>
        <artifact>
          <id>org.springframework:spring-context:jar:4.3.11.RELEASE</id>
          <source>true</source>
        </artifact>
      </artifacts>
    </feature>
    <!--...-->
  </featureDefinitions>
  <!-- ... -->
</configuration>

Then the generated P2 repository looks like the following one:


target/repository
├── artifacts.jar
├── category.xml
├── content.jar
├── features
│   └── spring.feature_4.3.11.jar
└── plugins
    ├── org.apache.commons.logging_1.2.0.jar
    ├── org.springframework.spring-aop.source_4.3.11.RELEASE.jar
    ├── org.springframework.spring-aop_4.3.11.RELEASE.jar
    ├── org.springframework.spring-beans.source_4.3.11.RELEASE.jar
    ├── org.springframework.spring-beans_4.3.11.RELEASE.jar
    ├── org.springframework.spring-context.source_4.3.11.RELEASE.jar
    ├── org.springframework.spring-context_4.3.11.RELEASE.jar
    ├── org.springframework.spring-core_4.3.11.RELEASE.jar
    ├── org.springframework.spring-expression.source_4.3.11.RELEASE.jar
    └── org.springframework.spring-expression_4.3.11.RELEASE.jar

2 directories, 14 files

Of course, both options (generating a P2 repository with features and with plain plugins only) can be mixed.

The p2-maven-plugin provides more options, such as excluding specific transitive dependencies, referencing other Eclipse features, and so on. For more information, please have a look at the p2-maven-plugin homepage.

Now we can generate P2 repositories from Maven artifacts. What is still missing is how to deploy this P2 repository to a repository manager like Artifactory or Sonatype Nexus. Both repository managers support P2 repositories: Artifactory in the Professional edition (which costs money) and Sonatype Nexus in the OSS edition (free). For Nexus, it's important that you use version 2.x; the newest version, 3.x, doesn't support P2 repositories yet.

Deploying P2 Repository to a Repository Manager

First of all, we want our generated P2 repository to be packed into a zip file. Therefore, we add the tycho-p2-repository-plugin to the Maven build lifecycle:


<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-p2-repository-plugin</artifactId>
  <version>1.0.0</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>archive-repository</goal>
      </goals>
    </execution>
  </executions>
</plugin>


Then we have to attach this zip file so that Maven recognizes that it has to deploy it to a repository manager during the deploy phase. For this, we add the build-helper-maven-plugin to the Maven build lifecycle.

<!-- Attach zipped P2 repository to be installed and deployed in the Maven repository during the deploy phase. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>3.0.0</version>
  <executions>
    <execution>
      <goals>
        <goal>attach-artifact</goal>
      </goals>
      <configuration>
        <artifacts>
          <artifact>
            <file>target/${project.artifactId}-${project.version}.zip</file>
            <type>zip</type>
          </artifact>
        </artifacts>
      </configuration>
    </execution>
  </executions>
</plugin>

Now the generated P2 repository can be addressed by other projects. For more information about how to address the P2 repository, please have a look at the documentation of your repository manager.

A complete pom.xml sample can be found on GitHub.


Automatic Tomcat 8.5 Installation and Configuration as Windows Service

If you want to install Tomcat as a service on a Windows system, you'll get the recommendation to use the 32-bit/64-bit Windows Service Installer. If you want to install Tomcat manually, that's fine. But you can't use this installer for an automatic installation and configuration of Tomcat, because the installer is UI-based. The next sections explain how you can install and configure Tomcat from the command line.

Tomcat Installation

  1. Download the Core 64-bit Windows zip (or the 32-bit zip) from the Apache Tomcat 8.5 download page.
  2. Unzip it (for example to C:\tomcat\)

That’s it. Now we have a ready-to-use Tomcat with default configuration values. But it isn’t installed as a service yet.

Installation and Configuration As Windows Service

  1. Go to the bin folder in the installation folder of Tomcat (in the example it’s C:\tomcat\apache-tomcat-8.5.11\bin).
  2. Install Tomcat as a service named tomcat8 by calling service.bat install <servicename>:
    C:\tomcat\apache-tomcat-8.5.11\bin>service.bat install tomcat8
    
  3. Configure the Tomcat service with tomcat8.exe //US//<servicename> followed by configuration parameters. For example:
    C:\tomcat\apache-tomcat-8.5.11\bin>tomcat8.exe //US//tomcat8 --Startup=auto --JavaHome="C:\Program Files\Java\jre1.8.0_112" --JvmMs=2048 --JvmMx=4096 ++JvmOptions=-Dkey=value
    
  4. Start the Tomcat service with net start <servicename>
    net start tomcat8
    
  5. You can check on http://localhost:8080 whether Tomcat is installed correctly.

The configuration example (step 3) shows how to configure the JVM (heap space, Java options, etc.), where Java is installed, and which startup type the service should use. The full list of possible configuration parameters for the Tomcat service can be found in Apache Tomcat’s Windows Service documentation.

Now we have everything together to write a PowerShell script that performs these steps automatically.

 



My Lesson Learned From Doing Gilded Rose Kata

I’d like to share some thoughts about my approach to solving the Gilded Rose Refactoring Kata by Emily Bache. If you don’t know this kata, read its description for a better understanding. I have published my whole solution on GitHub. I tried to make a commit after every step, so you can follow my steps in the Git log. The chosen programming language is Java.

Solving Gilded Rose Step-By-Step

Let’s have a look at what I have done step-by-step.

Before adding the new feature, I wanted to refactor the given code base. Therefore, I started writing tests until I had 100% line and branch coverage. While writing the tests, I noticed that the calculation of the quality depends on the name of the item. Hence, the idea arose to use something similar to the Strategy Pattern. When I had reached 100% coverage, I tried to start with the implementation of the first strategy (“Aged Brie”). But I was unsure what the limit values for this first strategy were, because I had no tests for them. So my first lesson learned was that 100% line or branch coverage doesn’t mean all test cases are covered. I added tests for the limit values, finished implementing the “Aged Brie” strategy, added it to the original updateQuality method (see the code snippet below) and ran the tests. All tests were green.


ItemStrategy itemStrategy = new ItemStrategy();
...
for (int i = 0; i < items.length; i++) {
   if("Aged Brie".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForAgedBrieItem(items[i]);
      continue;
   }

// original code follows
}
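For illustration, one of the missing limit-value tests mentioned above might have looked roughly like this (a sketch assuming the kata's standard GildedRose and Item API plus JUnit 4; it is not taken from my published solution):


import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class GildedRoseLimitValueTest {

   @Test
   public void agedBrieQualityNeverExceedsTheUpperLimitOfFifty() {
      // limit value: the quality is already at its maximum of 50
      Item[] items = new Item[] { new Item("Aged Brie", 2, 50) };
      GildedRose app = new GildedRose(items);

      app.updateQuality();

      // "Aged Brie" normally increases in quality, but never beyond 50
      assertEquals(50, app.items[0].quality);
   }
}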

I repeated this cycle four times: find missing test cases (mostly for limit values), add new tests for these cases, implement a further strategy, add this new strategy to the original updateQuality method and run the tests. When the tests were green, the next cycle with a new strategy began. At the end, the extended updateQuality method looked like the following code snippet.

ItemStrategy itemStrategy = new ItemStrategy();

...
for (int i = 0; i < items.length; i++) {
   if("Aged Brie".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForAgedBrieItem(items[i]);
      continue;
   } else if ("Sulfuras, Hand of Ragnaros".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForSulfurasItem(items[i]);
      continue;
   } else if("Backstage passes to a TAFKAL80ETC concert".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForBackstagePassItem(items[i]);
      continue;
   } else {
      items[i] = itemStrategy.updateQualityForNormalItem(items[i]);
      continue;
   }

// commented out original code
}

My second lesson learned was “Refactoring needs time”, and the refactoring wasn’t finished yet. The next steps were cleaning up unnecessary code and refactoring the strategy implementations, for example replacing if-else constructs with the ternary operator and extracting if-conditions into private methods, as sketched below.
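For instance (a hypothetical snippet for illustration, not code from my repository), such a clean-up could turn a small if-else block inside a strategy implementation into a ternary expression with an extracted, well-named condition:


// before: if-else construct inside a strategy implementation
private void decreaseQuality(Item item) {
   if (item.sellIn < 0) {
      item.quality = item.quality - 2;
   } else {
      item.quality = item.quality - 1;
   }
}

// after: ternary operator plus the if-condition extracted into a private method
private void decreaseQuality(Item item) {
   item.quality = item.quality - (isExpired(item) ? 2 : 1);
}

private boolean isExpired(Item item) {
   return item.sellIn < 0;
}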

After that, I implemented the new feature “conjured item” following the workflow described above. After this step I could have said “ready”, but I was unhappy with the if-else-if chain. Therefore, I decided to extract each strategy implementation into its own class (following the “classic” Strategy Pattern). That helped to replace the if-else-if chain with an itemStrategyMap. So the next lesson learned was “The status ‘ready’ depends on its definition”.
The last step was cleaning up and choosing better names for the interface and its method.


static Map<String, ItemStrategy> itemStrategyMap = new HashMap<>();

static {
   itemStrategyMap.put("Aged Brie", new AgedBrieItemStrategy());
   itemStrategyMap.put("Sulfuras, Hand of Ragnaros", new SulfurasItemStrategy());
   itemStrategyMap.put("Backstage passes to a TAFKAL80ETC concert", new BackstagePassItemStrategy());
   itemStrategyMap.put("Conjured", new ConjuredItemStrategy());
}

public void updateQuality() {
   for (int i = 0; i < items.length; i++) {
      ItemStrategy itemStrategy = itemStrategyMap.getOrDefault(items[i].name, new NormalItemStrategy());
      items[i] = itemStrategy.updateItem(items[i]);
   }
}
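To give an impression of the final structure, a single strategy class might look roughly like the following sketch. It is based on the interface method used above and the kata's rules for "Aged Brie"; the actual implementation in my repository may differ in its details.


public class AgedBrieItemStrategy implements ItemStrategy {

   private static final int MAX_QUALITY = 50;

   @Override
   public Item updateItem(Item item) {
      item.sellIn = item.sellIn - 1;
      // "Aged Brie" increases in quality, twice as fast after the sell-by date,
      // but never beyond the upper limit of 50
      int increase = item.sellIn < 0 ? 2 : 1;
      item.quality = Math.min(MAX_QUALITY, item.quality + increase);
      return item;
   }
}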

Let’s summarize the lessons learned:
1) 100% line or branch coverage doesn’t mean all test cases are covered.
2) Refactoring needs time.
3) The status ‘ready’ depends on its definition.
These insights aren’t really new to me; I can often observe them in my daily work. Nevertheless, it was good to gain these insights again, following the rule “learning through repetition” ☺

What I forgot

I stopped after that step. Thinking about it some days later, I realized that there are further possible improvements. For example, the tests from the GildedRoseTest class could be extracted into separate test classes corresponding to the specific strategy classes.



How To Debug Groovy Script From Shell

Groovy is a scripting language, so it is possible to run Groovy code without compiling it to Java byte code first. The only precondition is that Groovy is installed on your machine. Then, running a Groovy script in a shell looks like the following line.


~>groovy TestScript.groovy

Now suppose something is wrong with the script, but only in a specific environment, so you want to debug your Groovy script from the shell. Fortunately, this works for Groovy just like for Java: you only have to export the Java options for debugging.


~>export JAVA_OPTS="-Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=4000,suspend=y"

Now we can debug our script, started from the shell, with our favorite IDE.


~>groovy TestScript.groovy
Listening for transport dt_socket at address: 4000



Commons VFS, SSHJ and JSch in Comparison

Some weeks ago I evaluated some SSH libraries for Java. The main requirements for them are file transfer and file operations on a remote machine. For this purpose there is a network protocol based on SSH, the SSH File Transfer Protocol (SFTP). So I needed an SSH library that supports SFTP.

Some research showed that there are many SSH libraries for Java. I reduced the number of libraries to three for the comparison and chose JSch, SSHJ and Apache Commons VFS for a deeper look. All of them support SFTP. JSch seems to be the de facto standard for Java. SSHJ is a newer library; its goal is to provide a clear Java API for SSH. The goal of Commons VFS is to provide a clear API for virtual file systems, and SFTP is one of the supported protocols; under the hood, it uses JSch for the SFTP protocol. The libraries should cover the following requirements:

  • client authentication over password
  • client authentication over public key
  • server authentication
  • upload files from local host over SFTP
  • download files to local host over SFTP
  • file operations on the remote host over SFTP, like move, delete, and listing all children of a given folder (filtered by type, e.g. file or folder)
  • execute plain shell commands

Let’s have a deeper look at how the three libraries cover these requirements.

Client Authentication

All three libraries support both required authentication methods. SSHJ has the clearest API for authentication (SSHClient.authPassword(), SSHClient.authPublickey()).


SSHClient sshClient = new SSHClient();
sshClient.connect(host);

// only for public key authentication
sshClient.authPublickey("user", "location to private key file");

// only for password authentication
sshClient.authPassword("user", "password");

In Commons VFS, the authentication configuration depends on which kind of authentication should be used. For public key authentication, the private key has to be set in the FileSystemOptions and the user name is part of the connection URL. For password authentication, the user name and password are part of the connection URL.


StandardFileSystemManager fileSystemManager = new StandardFileSystemManager();
fileSystemManager.init();

// only for public key authentication
SftpFileSystemConfigBuilder sftpConfigBuilder = SftpFileSystemConfigBuilder.getInstance();
FileSystemOptions opts = new FileSystemOptions();
sftpConfigBuilder.setIdentities(opts, new File[]{privateKey.toFile()});
String connectionUrl = String.format("sftp://%s@%s", user, host);

// only for password authentication
String connectionUrl = String.format("sftp://%s:%s@%s", user, password, host);

// Connection set-up
FileObject remoteRootDirectory = fileSystemManager.resolveFile(connectionUrl, opts);

The authentication configuration in JSch is similar to Commons VFS: it depends on which kind of authentication should be used. The private key for public key authentication has to be configured in the JSch object, and the password for password authentication has to be set in the Session object. In both cases, the user name is set when the Session object is obtained from the JSch object.


JSch sshClient = new JSch();

// only for public key authentication
sshClient.addIdentity("location to private key file");

session = sshClient.getSession(user, host);

// only for password authentication
session.setPassword(password);

session.connect();

Server Authentication

All three libraries support server authentication. In SSHJ, server authentication can be enabled with SSHClient.loadKnownHosts(). It is possible to pass your own location of the known_hosts file; otherwise a default location is used that depends on the platform.


SSHClient sshClient = new SSHClient();
sshClient.loadKnownHosts(); // or sshClient.loadKnownHosts(knownHosts.toFile());
sshClient.connect(host);

In Commons VFS, the server authentication configuration is, like the public key authentication, part of the FileSystemOptions. There, the location of the known_hosts file can be set.


SftpFileSystemConfigBuilder sftpConfigBuilder = SftpFileSystemConfigBuilder.getInstance();
FileSystemOptions opts = new FileSystemOptions();
sftpConfigBuilder.setKnownHosts(opts, new File("location of the known_hosts file"));

In JSch there are two possibilities to configure server authentication. One possibility is to use OpenSSHConfig (see the JSch example for OpenSSHConfig). The other possibility is easier: the location of the known_hosts file can be set directly in the JSch object.


JSch sshClient = new JSch();
sshClient.setKnownHosts("location of known-hosts file");

Upload/download Files Over SFTP

All three libraries support uploading and downloading files over SFTP. SSHJ has a very clear API for these operations. The SSHClient object creates an SFTPClient object, which is responsible for the upload (SFTPClient.put) and the download (SFTPClient.get).


SSHClient sshClient = new SSHClient();
// ... connection

try (SFTPClient sftpClient = sshClient.newSFTPClient()) {
  // download
  sftpClient.get(remotePath, new FileSystemFile(local.toFile()));
  // upload
  sftpClient.put(new FileSystemFile(local.toFile()), remotePath);
}

In Commons VFS, uploading and downloading files is abstracted as an operation on a file system. Both are therefore represented by the copyFrom method of a FileObject: upload is a copyFrom operation on the remote file object and download is a copyFrom operation on the local file object (LocalFile).


StandardFileSystemManager fileSystemManager = new StandardFileSystemManager();
// ... configuration
remoteRootDirectory = fileSystemManager.resolveFile(connectionUrl, connectionOptions);

LocalFile localFileObject = (LocalFile) fileSystemManager.resolveFile(local.toUri().toString());
FileObject remoteFileObject = remoteRootDirectory.resolveFile(remotePath);
try {
  // download
  localFileObject.copyFrom(remoteFileObject, new AllFileSelector());

  // upload
  remoteFileObject.copyFrom(localFileObject, new AllFileSelector());
} finally {
  localFileObject.close();
  remoteFileObject.close();
}

JSch also provides an SFTP client; in JSch it is called ChannelSftp. It has methods for download (ChannelSftp.get) and upload (ChannelSftp.put).


// here: creation and configuration of session

ChannelSftp sftpChannel = null;
try {
  sftpChannel = (ChannelSftp) session.openChannel("sftp");
  sftpChannel.connect();

  // download
  InputStream inputStream = sftpChannel.get(remotePath);
  Files.copy(inputStream, localPath);

  // upload
  OutputStream outputStream = sftpChannel.put(remotePath);
  Files.copy(localPath, outputStream);
} catch (SftpException | JSchException ex) {
  throw new IOException(ex);
} finally {
  if (sftpChannel != null) {
    sftpChannel.disconnect();
  }
}

Execute Shell Commands

Only Commons VFS doesn’t support executing plain shell commands. In SSHJ it is a two-liner: the SSHClient starts a new Session object, and this object executes the shell command. It is very intuitive.


// creation and configuration of sshClient

try (Session session = sshClient.startSession()) {
  session.exec("ls");
}

In JSch, the ChannelExec is responsible for executing shell commands over SSH. First the command is set on the channel, and then the channel has to be started. It isn’t as intuitive as in SSHJ.


// here: creation and configuration of session object

ChannelExec execChannel = null;
try {
  execChannel = (ChannelExec) session.openChannel("exec");
  execChannel.setCommand(command);
  // connecting the exec channel starts the execution of the command
  execChannel.connect();
} catch (JSchException ex) {
  throw new IOException(ex);
} finally {
  if (execChannel != null) {
    execChannel.disconnect();
  }
}

File Operations On the Remote Hosts

All libraries support file operations over SFTP on remote machines more or less completely. In SSHJ, the SFTPClient also has methods for file operations. The names of the methods are the same as those of the corresponding file operations on a Linux system. The following code snippet shows how to delete a file.


//here: creation and configuration of sshClient

try (SFTPClient sftpClient = sshClient.newSFTPClient()) {
  sftpClient.rm(remotePath);
}
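Listing the children of a remote folder and filtering them by type, another requirement from the list above, could look like this in SSHJ (a sketch analogous to the snippets above; it assumes SFTPClient.ls(...) returning RemoteResourceInfo objects, so please check it against the current SSHJ API):


//here: creation and configuration of sshClient

try (SFTPClient sftpClient = sshClient.newSFTPClient()) {
  // list all children of the remote folder and keep only the sub-directories
  for (RemoteResourceInfo child : sftpClient.ls(remotePath)) {
    if (child.isDirectory()) {
      System.out.println(child.getName());
    }
  }
}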

Commons VFS’s core functionality is file operations. The usage takes some getting used to: a file object has to be resolved first, and then the file operations can be performed on it.


// here: creation and configuration of remoteRootDirectory

FileObject remoteFileObject = remoteRootDirectory.resolveFile(remotePath);
try {
  remoteFileObject.delete();
} finally {
  remoteFileObject.close();
}

JSch’s SFTP client, ChannelSftp, also has methods for file operations, and most file operations are supported by this channel. However, the file copy operation on the remote machine, for example, has to be done with plain shell commands over the ChannelExec.

// here: creation and configuration of session
ChannelSftp sftpChannel = null;
try {
  sftpChannel = (ChannelSftp) session.openChannel("sftp");
  sftpChannel.connect();
  sftpChannel.rm(remotePath);
} catch (SftpException | JSchException ex) {
  throw new IOException(ex);
} finally {
  if (sftpChannel != null) {
    sftpChannel.disconnect();
  }
}

Conclusion

After this comparison I have two favourites, SSHJ and Commons VFS. SSHJ has a very clear API, and I would choose it if I need a general SSH client or if file operation support over SFTP is sufficient. I would choose Commons VFS if I need file operations over many file system protocols and a general SSH client is not needed. In case I need both, I could use JSch directly to execute commands over SSH. The API of Commons VFS takes some getting used to, but after understanding the concept behind it, its usage is straightforward.

The complete source code of this comparison is hosted on GitHub.

Useful Links

  1. SSHJ homepage
  2. JSch homepage
  3. Commons-vfs homepage
  4. Wikipedia page about SFTP
  5. Source Code of this comparison on Github



Unit And Integration Test Reports For Maven Projects In SonarQube

Since SonarQube 4.2, the test report is no longer generated by the Sonar Maven Plugin during a Maven build (see SonarQube’s blog post). Therefore, the test report has to be generated by another plugin before the Sonar Maven Plugin collects the information for the SonarQube server. Here, the Jacoco Maven Plugin can help. It can generate test reports in a format that SonarQube understands. The Jacoco Maven Plugin goes one step further: it can also generate a test report for integration tests.

In the following sections, a solution is presented that meets following criteria:

  • Maven is used as build tool.
  • The project can be a multi module project.
  • Unit tests and integration tests are parts of each module. Here, integration tests are tests that test the integration between classes in a module.
  • Test reports are separated into a unit test report and an integration test report.

The road map for the next sections is as follows: first, the Maven project structure for separating unit and integration tests is shown. Then the Maven project configuration for having separate unit test runs and integration test runs is shown. After that, we have a look at the Maven project configuration for generating the test reports, separated into unit tests and integration tests. At the end, the SonarQube configuration for visualizing the test reports in the SonarQube dashboard is shown.

Maven Project Structure

First, let’s look at what a default Maven project structure looks like for a single-module project.

my-app
├── pom.xml
├── src
│   ├── main
│   │   └── java
│   │       └──
│   └── test
│       └── java
│           └──

The directory src/main/java contains the production source code and the directory src/test/java contains the test source code. We could put unit tests and integration tests together in this directory, but we want to keep these two types of tests in separate directories. Therefore, we add a new directory called src/it/java. The unit tests are put into the directory src/test/java and the integration tests are put into the directory src/it/java, so the new project structure looks like the following one.

my-app
├── pom.xml
├── src
│   ├── it
│   │   └── java
│   │       └──
│   ├── main
│   │   └── java
│   │       └──
│   └── test
│       └── java
│           └──

Unit And Integration Test Runs

Fortunately, the unit test run configuration is part of the default Maven project configuration. Maven runs these tests automatically if the following criteria are met:

  • The tests are in the directory src/test/java and
  • the test class name either starts with Test or ends with Test or TestCase.

Maven runs these tests during the build lifecycle phase test.

The integration test run configuration has to be done manually. There are Maven plugins that can help. We want the following criteria to be met:

  • integration tests are stored in the directory src/it/java and
  • the integration test class name either starts with IT or ends with IT or ITCase, and
  • integration tests run during the Maven build lifecycle phase integration-test.

First, Maven has to know that it has to add the directory src/it/java to its test sources. Here, the Build Helper Maven Plugin can help: it adds the directory src/it/java to the test sources.


<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>1.8</version>
  <executions>
    <execution>
      <id>add-test-source</id>
      <phase>process-test-sources</phase>
      <goals>
        <goal>add-test-source</goal>
      </goals>
      <configuration>
        <sources>
          <source>src/it/java</source>
        </sources>
      </configuration>
     </execution>
     <execution>
       <id>add-test-resources</id>
       <phase>generate-test-resources</phase>
       <goals>
         <goal>add-test-resource</goal>
       </goals>
       <configuration>
          <resources>
            <resource>
              <directory>src/it/resources</directory>
            </resource>
          </resources>
       </configuration>
     </execution>
   </executions>
 </plugin>

The above code snippet has to be inserted into the section <project><build><plugins> in the project root pom.

Maven’s build lifecycle contains a phase called integration-test. In this phase, we want to run the integration tests. Therefore, we bind the Maven Failsafe Plugin to the phase integration-test:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.13</version>
  <configuration>
    <encoding>${project.build.sourceEncoding}</encoding>
  </configuration>
  <executions>
    <execution>
      <id>failsafe-integration-tests</id>
      <phase>integration-test</phase>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Again, the above code snippet has to be inserted into the section <project><build><plugins> in the project root pom. The Maven Failsafe Plugin then runs the integration tests automatically if their class names either start with IT or end with IT or ITCase.

Test Report Generation

We want to use the Jacoco Maven Plugin for the test report generation. It should generate two test reports, one for the unit tests and one for the integration tests. Therefore, the plugin needs two separate agents that have to be prepared; they then generate the reports during the test runs. Maven’s build lifecycle contains dedicated preparation phases before the test phases (test and integration-test). The preparation phase for the test phase is called process-test-classes, and the preparation phase for the integration-test phase is called pre-integration-test. We bind the Jacoco Maven Plugin to these two phases, so the configuration of this plugin looks like the following code snippet (again, it is part of the section <project><build><plugins>):

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.7.2.201409121644</version>
  <executions>
    <execution>
      <configuration>
        <destFile>${sonar.jacoco.reportPath}</destFile>
      </configuration>
      <id>pre-test</id>
      <phase>process-test-classes</phase>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <!-- we want to execute jacoco:prepare-agent-integration before the integration-test phase, i.e. before the maven failsafe plugin is executed -->
    <execution>
      <configuration>
        <destFile>${sonar.jacoco.itReportPath}</destFile>
      </configuration>
      <id>pre-itest</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>prepare-agent-integration</goal>
      </goals>
    </execution>
  </executions>
</plugin>

The configuration element destFile is the path to the location where the test report should be stored. It is important to use the properties ${sonar.jacoco.reportPath} and ${sonar.jacoco.itReportPath}, because SonarQube uses these properties to find the test reports for the visualization.

Now we can run mvn install, and our project is built including the unit and integration tests and the generation of the two test reports.

SonarQube Test Report Visualization

Now we want to visualize our test reports in SonarQube. Therefore, we have to run the Sonar Maven 3 Plugin (command mvn sonar:sonar) in our project after a successful build.

When we open our project in the SonarQube dashboard, we only see the unit test report per module. The reason is that the visualization of the integration test report has to be configured separately in SonarQube. These configuration steps are described very well in the SonarQube documentation.

Summary

This post describes how to generate test reports for unit and integration tests during a Maven build. On GitHub, I host a sample project that demonstrates all configuration steps. As technical environment I use:

  • Maven 3.2.5
  • Maven Plugins:
    • Maven Surefire Plugin
    • Maven Failsafe Plugin
    • Build Helper Maven Plugin
    • Jacoco Maven Plugin
    • Sonar Maven 3 Plugin
  • SonarQube 4.5.1
  • Java 7

Links

  1. SonarQube’s blog post Unit Test Execution in SonarQube
  2. Jacoco Maven plugin project site
  3. Introduction to Maven’s build lifecycle
  4. Maven Failsafe Plugin  project site
  5. Build Helper Maven Plugin project site
  6. SonarQube documentation about Code Coverage by Integration Tests for Java Project
  7. A sample Maven project on GitHub



Configuration over JNDI in Spring Framework

From a certain point on, an application has to be configurable. The Spring Framework has had a nice auxiliary tool for this since its very first version 0.9: the class PropertyPlaceholderConfigurer, and since Spring Framework 3.1 the class PropertySourcesPlaceholderConfigurer. When you do a Google search for PropertyPlaceholderConfigurer, you will find many examples where the configuration items are stored in properties files. But in many Java enterprise applications, it is common that the configuration items are loaded via JNDI look-ups. I’d like to demonstrate how the PropertyPlaceholderConfigurer (before Spring Framework 3.1) and the PropertySourcesPlaceholderConfigurer (since Spring Framework 3.1), respectively, can help to ease configuration via JNDI look-ups in our application.

Initial Situation

We have a web application that has a connection to a database. This database connection has to be configurable. The configuration items are defined in a web application context file.


<Context docBase="/opt/tomcat/warfiles/jndi-sample-war.war" antiResourceLocking="true">
  <Environment name="username" value="demo" type="java.lang.String" override="false"/>
  <Environment name="password" value="demo" type="java.lang.String" override="false"/>
  <Environment name="url" value="jdbc:mysql://192.168.56.101:3306/wicket_demo" type="java.lang.String" override="false"/>
</Context> 

For loading these configuration items, the JNDI look-up mechanism is used.
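Done by hand with the plain JNDI API, such a look-up would look roughly like the following snippet (shown only for illustration; this is exactly the boilerplate that the placeholder configurers described below will save us):


import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class DataSourceSettings {

    public String lookUpUrl() throws NamingException {
        // resources defined in the web application context file live below java:comp/env/
        Context context = new InitialContext();
        return (String) context.lookup("java:comp/env/url");
    }
}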

In our application, we define a data source bean in a Spring context XML file. This bean represents the database connection.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
     http://www.springframework.org/schema/context
         http://www.springframework.org/schema/context/spring-context.xsd">

  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
        destroy-method="close">
        <property name="url" value="${url}" />
        <property name="username" value="${username}" />
        <property name="password" value="${password}" />
  </bean>
</beans> 

Every value written as a ${...} placeholder should be replaced by the PropertyPlaceholderConfigurer (or PropertySourcesPlaceholderConfigurer, respectively) when the application is launched. The next step is to set up the PropertyPlaceholderConfigurer or PropertySourcesPlaceholderConfigurer.

Before Spring Framework 3.1 – PropertyPlaceholderConfigurer Set Up for JNDI Look Up

We define a PropertyPlaceholderConfigurer bean in a Spring context XML file. This bean contains an inner bean that maps the property names of the data source bean to the corresponding JNDI names. A JNDI name consists of two parts: the first part is the name of the context in which the resource resides (in our case java:comp/env/) and the second part is the name of the resource (in our case either username, password or url).

<bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="properties">
        <bean class="java.util.Properties">
            <constructor-arg>
                <map>
                    <entry key="username">
                        <jee:jndi-lookup jndi-name="java:comp/env/username" />
                    </entry>
                    <entry key="password">
                        <jee:jndi-lookup jndi-name="java:comp/env/password" />
                    </entry>
                    <entry key="url">
                        <jee:jndi-lookup jndi-name="java:comp/env/url" />
                    </entry>
                </map>
            </constructor-arg>
        </bean>
    </property>
</bean>

Since Spring Framework 3.1 – PropertySourcesPlaceholderConfigurer Set Up for JNDI Look Up

Since Spring 3.1, PropertySourcesPlaceholderConfigurer should be used instead of PropertyPlaceholderConfigurer. As a consequence, since Spring 3.1 the <context:property-placeholder/> namespace element registers an instance of PropertySourcesPlaceholderConfigurer (the namespace definition must be spring-context-3.1.xsd) instead of PropertyPlaceholderConfigurer (you can simulate the old behaviour by using the namespace definition spring-context-3.0.xsd). So our Spring XML context configuration is very short, as long as you comply with some conventions (based on the principle of Convention over Configuration).

<context:property-placeholder/>

The default behavior is that the PropertySourcesPlaceholderConfigurer iterates through a set of PropertySource instances to collect all property values. In a Spring-based web application, this set contains a JndiPropertySource by default. By default, the JndiPropertySource looks up JNDI resource names prefixed with java:comp/env. This means that if your property is ${url}, the corresponding JNDI resource name has to be java:comp/env/url.

The source code of the sample web application is hosted on GitHub.