SKM IT World

Just another blog about IT



Pimp My Git – Git Mergetool

I like to work with Git on the command line, but in some cases I prefer UI support. Solving merge conflicts is such a case. Git has a command, mergetool, which can open a graphical tool to solve merge conflicts. But before you can use this command, you have to configure it. In this blog post I’d like to show you how to configure mergetool and how to use it.

Configuration

First of all, open a shell on Linux; on Windows, open Git Bash. Then choose a graphical tool that should support you in solving merge conflicts. git mergetool --tool-help shows a list of the tools that are supported on your machine:


 sparsick@sparsick-ThinkPad-T430s > git mergetool --tool-help
'git mergetool --tool=<tool>' may be set to one of the following:
                araxis
                kdiff3
                meld

The following tools are valid, but not currently available:
                bc
                bc3
                codecompare
                deltawalker
                diffmerge
                diffuse
                ecmerge
                emerge
                gvimdiff
                gvimdiff2
                gvimdiff3
                opendiff
                p4merge
                tkdiff
                tortoisemerge
                vimdiff
                vimdiff2
                vimdiff3
                winmerge
                xxdiff

Some of the tools listed above only work in a windowed
environment. If run in a terminal-only session, they will fail.

This command shows two lists. The first list contains all tools that are supported by Git and available on your machine (in the sample, these are araxis, kdiff3 and meld). The second list contains tools that are also supported by Git, but aren’t installed on your machine.

I use Meld as my graphical tool. It runs on Windows and Linux. If you haven’t installed Meld on your machine, now is the right time to do it, or choose another tool.
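
On a Debian-based Linux, for example, the installation is a one-liner (assuming the apt package manager; on Windows, use the installer from the Meld homepage):

sudo apt-get install meld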

We want to set mergetool globally for all our repositories.


sparsick@sparsick-ThinkPad-T430s > git config --global merge.tool meld
sparsick@sparsick-ThinkPad-T430s > git mergetool
No files need merging

If git mergetool returns more than No files need merging, then the path to your graphical tool isn’t set in your $PATH environment variable (the normal case on Windows systems). It’s possible to set the path to the graphical tool directly in Git.

sparsick@sparsick-ThinkPad-T430s > git config --global mergetool.meld.path /c/Program\ Files\ \(x86\)/Meld/Meld.exe

Bear two important things in mind: mergetool is written without a dot between merge and tool, and meld is a placeholder for the name of the graphical tool in the above sample. If you use another tool, such as vimdiff, the config key is called mergetool.vimdiff.path.
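
For example, if vimdiff weren’t on your $PATH, the corresponding configuration could look like this (the binary location is an assumption; check it with which vimdiff):

sparsick@sparsick-ThinkPad-T430s > git config --global mergetool.vimdiff.path /usr/bin/vimdiff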

Now git mergetool is ready to use.

Usage

Now I’d like to demonstrate how to use git mergetool. It is used when we have merge conflicts during a merge action. Let’s say we want to merge the branch branch1 into master and this merge will have some merge conflicts.


sparsick@sparsick-ThinkPad-T430s > git merge branch1

Auto-merging test
CONFLICT (content): Merge conflict in test
Automatic merge failed; fix conflicts and then commit the result.

Now, we want to solve these conflicts with a graphical tool (in the example, it’s Meld). Running git mergetool on the command line opens the graphical tool of our choice.

sparsick@sparsick-ThinkPad-T430s > git mergetool

Merging:
test

Normal merge conflict for 'test':
{local}: modified file
{remote}: modified file

After solving the merge conflicts, the changes have to be committed.

sparsick@sparsick-ThinkPad-T430s > git status

On branch master
All conflicts fixed but you are still merging.
(use "git commit" to conclude merge)

Changes to be committed:

modified:   test

Untracked files:
(use "git add <file>..." to include in what will be committed)

test.orig
sparsick@sparsick-ThinkPad-T430s > git commit

You can see that we have a new untracked file test.orig. This is a backup of the merged file created by mergetool. You can configure Git to remove this backup after a successful merge.

sparsick@sparsick-ThinkPad-T430s > git config --global mergetool.keepBackup false

Further files are created when using git mergetool:

sparsick@sparsick-ThinkPad-T430s > git status

On branch master
Untracked files:
(use "git add ..." to include in what will be committed)

test.BACKUP.7344
test.BASE.7344
test.LOCAL.7344
test.REMOTE.7344

If only these files are currently untracked, then a git clean can help. Otherwise they have to be removed manually.

sparsick@sparsick-ThinkPad-T430s > git clean -f

Removing test.BACKUP.7344
Removing test.BASE.7344
Removing test.LOCAL.7344
Removing test.REMOTE.7344

Links

  1. Meld Homepage
  2. git mergetool Documentation



Automatic Tomcat 8.5 Installation and Configuration as Windows Service

If you want to install Tomcat on a Windows system as a service, you’ll get the recommendation to use the 32-/64-bit Windows Service Installer. That’s fine if you want to install Tomcat manually. But you can’t use this installer for an automatic installation and configuration of Tomcat, because the installer is UI-based. The next sections explain how you can install and configure Tomcat from the command line.

Tomcat Installation

  1. Download the Core 64-bit Windows zip (or the 32-bit zip) from the Apache Tomcat 8.5 download page.
  2. Unzip it (for example to C:\tomcat\)

That’s it. Now we have a ready-to-use Tomcat with default configuration values. But it isn’t installed as a service yet.

Installation and Configuration As Windows Service

  1. Go to the bin folder in the installation folder of Tomcat (in the example it’s C:\tomcat\apache-tomcat-8.5.11\bin)
  2. Install Tomcat as service named tomcat8 by calling service.bat install <servicename>
    C:\tomcat\apache-tomcat-8.5.11\bin>service.bat install tomcat8
    
  3. tomcat8.exe //US//<servicename> followed by configuration parameters configures the Tomcat service. For example:
    C:\tomcat\apache-tomcat-8.5.11\bin>tomcat8.exe //US//tomcat8 --Startup=auto --JavaHome="C:\Program Files\Java\jre1.8.0_112" --JvmMs=2048 --JvmMx=4096 ++JvmOptions=-Dkey=value
    
  4. Start the Tomcat service with net start <servicename>
    net start tomcat8
    
  5. You can check on http://localhost:8080 whether Tomcat is installed correctly.

The configuration example (step 3) shows how to configure the JVM (heap space, Java options etc.), where Java is installed and which start type the service should use. The full list of possible configuration parameters for the Tomcat service can be found in Apache Tomcat’s Windows Service documentation.

Now we have everything together for writing a PowerShell script that does these steps automatically.
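
A minimal sketch of such a script could look like the following (the zip location, installation directory, Java path and service name are assumptions; adjust them to your environment):

# install-tomcat.ps1 - a minimal sketch, assuming the Core zip is already downloaded
$tomcatZip   = "C:\Downloads\apache-tomcat-8.5.11-windows-x64.zip"
$installDir  = "C:\tomcat"
$serviceName = "tomcat8"
$javaHome    = "C:\Program Files\Java\jre1.8.0_112"

# Unzip the Tomcat archive (Expand-Archive requires PowerShell 5+)
Expand-Archive -Path $tomcatZip -DestinationPath $installDir

# Install and configure the service from Tomcat's bin folder
Set-Location (Join-Path $installDir "apache-tomcat-8.5.11\bin")
.\service.bat install $serviceName
.\tomcat8.exe //US//$serviceName --Startup=auto --JavaHome=$javaHome --JvmMs=2048 --JvmMx=4096

# Start the service
net start $serviceName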




Migrating Sonatype Nexus 2 to Nexus 3

I’d like to share my experience with migrating Sonatype Nexus 2 to Nexus 3.

Starting Point

I used two Nexus instances:

  • A Nexus 2 Instance for Maven Repositories (2.13)
  • A Nexus 3 Instance for Bower Repositories (3.0.1)

Both instances had several types of repositories (hosted, proxy, group). The reason for this setup was that Sonatype recommended not to use Nexus 3 (pre 3.1) for Maven repositories, and Nexus 2 doesn’t support Bower repositories.

Migration Path

  1. Update the Nexus 2 instance to version 2.14.1 (Update Guide)
  2. Update the Nexus 3 instance to version 3.1 (Update Guide). It’s important that you migrate to the new working directory layout.
  3. Follow the migration steps for upgrading from version 2 to version 3 (Update Guide).
    • My scenario was Nexus 2 and Nexus 3 running on the same system.
    • I selected “File system copy” as the upgrade method.
    • I chose that only the “repository configuration content” has to be migrated.
  4. After a successful migration, I had to adjust some configuration in Nexus and in the systems that use Nexus:
    • Set up a new user in Nexus 3 for deploying artifacts.
    • Adjust the URLs to the repositories in Jenkins, Maven settings and deployment scripts (see the example below).
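
The URL adjustment is mostly mechanical, because Nexus 3 uses a different repository URL layout than Nexus 2. A hedged example of what changes (host, port and repository names are assumptions from a default setup):

# Nexus 2 style repository URL
http://nexus.example.com:8081/nexus/content/groups/public
# Nexus 3 style repository URL
http://nexus.example.com:8081/repository/maven-public/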



Continuous Integration Infrastructure With Windows – Scripting With PowerShell

In one of my current projects, I deal with how to run a Continuous Integration (CI) infrastructure on Windows machines. I have five years of experience in running a CI infrastructure, but always on Linux machines. So in the next months I will write some blog posts about my challenges with Windows machines from the perspective of a Linux fan girl :-). This first blog post is about shell scripting on Windows. But bear in mind: this blog post isn’t a tutorial for PowerShell scripting. It only explains the features that are striking when you come from a Linux background.

When you run a CI infrastructure, it’s a frequent practice to write little shell scripts to automate repeatable tasks. On a Linux system, you would write your scripts in Bash or in a scripting language like Perl or Python. I usually write my scripts in Bash or in Groovy. I chose Groovy because I’m a Java developer and it’s possible to write Groovy scripts in Java style in the beginning. The second argument for me was that the administration of Jenkins is easier with Groovy scripts: Jenkins supports a Groovy console for administration tasks, and build job steps can also be automated with Groovy directly in Jenkins. So I use Groovy for other automated tasks as well, to avoid using too many scripting languages at the same time. Now you could say: OK, what’s the problem? Groovy can be used on a Windows system. The problem is the requirements in my project: only Java, C# or PowerShell are allowed as programming languages. But I want to write little scripts, so from this point of view, only PowerShell remains.

Switch from Bash to PowerShell – What is striking

The good news first: working with PowerShell isn’t as creepy as working with the DOS shell. But it’s different compared to the Bash shell. In the next sections, I will report what I noticed when I started writing PowerShell scripts with a Bash shell background.

First of all, it helps a lot when writing PowerShell scripts from a Linux point of view to understand the difference between how Linux and Windows handle their configuration. On Linux, you only have to adjust some text files to change the system’s configuration. The configuration is text driven, and the Linux shells are optimized for handling text. On Windows, on the other hand, you use an API to adjust properties of objects to change the system’s configuration. Here, the configuration is API driven. The next important point is that Microsoft provides a large class library, the .NET Framework, that models Windows’ system configuration as objects. PowerShell reuses this object model as the base for its type system, so scripting in PowerShell feels more like object-oriented programming. Fortunately, we can reuse all the functionality of the .NET Framework in our PowerShell scripts. So if you’re familiar with C# programming, getting started with PowerShell scripting is very easy for you.
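
This object orientation is directly visible in the pipeline. A small example (Get-Process is just a convenient stand-in; any cmdlet behaves the same): the pipeline passes objects with typed properties, not lines of text as in Bash:

# Filter and project by object properties instead of cutting text columns
Get-Process | Where-Object { $_.WorkingSet64 -gt 100MB } | Select-Object Name, Id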

So let’s look at some code samples for typical situations to see the difference between scripting for Bash and for PowerShell.

Writing Something to the Standard Output Stream

On the Bash side, you have the built-in command echo for that:

echo "Hello World"

For PowerShell, you have a so-called Cmdlet Write-Output:

Write-Output "Hello World"

Now, we want to write the value of a variable to standard output.

Bash

message="Hello World"
echo $message

PowerShell

$message="Hello World"
Write-Output $message

That was easy, wasn’t it?

Parsing Files for a Pattern

In our example, we want to parse only XML files for a specific pattern (in our case “search-pattern”) and count how often this pattern matches in all files.

On the Bash side, we typically use the pipeline pattern on Linux. First, we use grep to search for the pattern, and then we pipe the result of grep to wc to count the matches.

grep -w -e "search-pattern" *.xml | wc -l

In PowerShell, it looks a little bit different. First, we list all XML files with dir. This result is piped to the ForEach-Object Cmdlet, which gets a script block. In our case, the script block reads the content of a file and pipes it to the Select-String Cmdlet. This Cmdlet is responsible for filtering by the given pattern, and the filter result is piped to Measure-Object, which can calculate numeric properties of objects. In our case, it should only count the matches. At the end, all counts are added together. The important thing is that the result of every Cmdlet is an object that has properties.

$sum = 0
dir *.xml | ForEach-Object {
    $sum += (Get-Content $_ | Select-String -Pattern 'search-pattern' | Measure-Object).Count
}
Write-Output $sum

Conditions

Conditions in Bash and in PowerShell look very similar. The main difference, in my opinion, is the use of brackets. In PowerShell, conditions look more like they do in C#.

Bash

if true && { true || false; } && [ 2 -gt 1 ] && [ 1 -eq 1 ] && [ 1 -lt 3 ] && ! false;
then
  # do something if the condition is true
else
  # do something if the condition is false
fi

PowerShell

if ( 1 -and ( 1 -or 0) -and (2 -gt 1) -and ( 1 -eq 1) -and (1 -lt 3) -and (-not 0)) {
  # do something if the condition is true
} else {
  # do something if the condition is false
}

Setting System Environment Variable

On both systems, Linux and Windows, it is possible to set environment variables in different contexts: system-wide, per process or per user.

Bash

Setting an environment variable in Bash works like the following line:


export BASH_EXAMPLE="some value"

This variable is active during the current process. If you want the variable to be active system-wide, you have to edit the file /etc/profile, or you create executable shell scripts in the directory /etc/profile.d. If the variable should only be active for a specific user session, you have to edit the file ~/.bash_profile. You can do this programmatically with sed or awk.
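
For the system-wide variant, a short sketch could look like this (the script name bash_example.sh is an assumption; every executable script in /etc/profile.d is sourced at login):

# Create an executable script in /etc/profile.d that exports the variable
echo 'export BASH_EXAMPLE="some value"' | sudo tee /etc/profile.d/bash_example.sh
sudo chmod +x /etc/profile.d/bash_example.sh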

Using the variable works like this:


echo "$BASH_EXAMPLE"

PowerShell

In PowerShell, you can call the .NET Framework API to set a system environment variable.


[Environment]::SetEnvironmentVariable("WindowsExample", "Some value.", "Machine")

If the variable should only be active for a specific user or process, write User or Process instead of Machine. For the process level, there is also a shorter command:

$env:WindowsExample = "Some value."

The .NET Framework has a GetEnvironmentVariable method for reading the variable.

[Environment]::GetEnvironmentVariable("WindowsExample", "Machine")
$Env:WindowsExample  # shortcut

When setting system environment variables, you can see the big difference between the two configuration approaches: text-driven (Linux) and API-driven (Windows).

Tool Support

I was pleasantly surprised when I found out that a PowerShell “IDE” exists, called PowerShell Integrated Scripting Environment (ISE). It supports you with code completion and has complete documentation of the Cmdlets.
(Screenshot: the PowerShell ISE)

PowerShell Integration in Jenkins

One use case for writing scripts is running them in a build job in Jenkins. Jenkins can’t run PowerShell scripts out of the box. Fortunately, there is a Jenkins plugin for that, called PowerShell Plugin. After installing this plugin, a build step Windows PowerShell is available in the job configuration. This step has a command field where you can add your script code directly, or you add the path to your script (see the example below). In the second variant, it is important to add exit $LASTEXITCODE, otherwise the build doesn’t recognize that the PowerShell script failed.

my-first-script.ps1
exit $LASTEXITCODE

Further Information

  1. Manning’s book PowerShell in Action
  2. Jenkins’ PowerShell Plugin
  3. Microsoft TechNet about System Environment Variable.
  4. nixCraft’s blog post Linux: Set Environment Variable



Installation Cheat Sheet For LivingDoc

We wanted to evaluate the new Confluence plugin LivingDoc as a replacement for FitNesse in order to execute automated web GUI tests. Our technical environment:

  • Confluence 5.7.1 (Non-Cloud version)
  • Inside Confluence HSQL In-Memory DB for evaluation purpose
  • LivingDoc plugin 1.0.0.jar
  • Selenium WebDriver for automated web testing of
  • Spring Petclinic running inside a Tomcat 8
  • Java SDK 1.7.0_80
  • Maven 3.3.1

The following steps are an extension to the LivingDoc documentation. This documentation is very detailed, but if you get stuck on some steps, check the following out. We recommend using the search function of your browser to find the relevant parts. Additionally, this is not about best practices, but only about getting the setup running fast. Let’s start:

  1. Starting point is the LivingDoc Documentation under https://testit-livingdoc.atlassian.net/wiki > Current Documentation > Getting Started
  2. After following these steps, go to Current Documentation > Confluence plugin
  3. Unfortunately, there is no direct link to the current livingdoc-confluence5-plugin.jar, so go to https://github.com/testIT-LivingDoc/livingdoc-confluence/releases, choose livingdoc-confluence5-plugin-1.0.0.jar (even though there is already a version livingdoc-confluence5-plugin-1.1.0.jar) and download it.
  4. Next is the configuration of the runner. Please look at the following picture:
    (Screenshot: LivingDoc runner configuration)
    We replaced the classpath default value with the path where you downloaded the livingdoc-confluence5-plugin-1.0.0.jar on your machine.
  5. Next is Project Management:
    (Screenshot: LivingDoc project management)
    There, we choose the runner prepared above. Under classpaths, we copy the path to the jar of our Selenium project including its dependencies. Step 7 below describes how this jar is built. This is needed so that the Selenium tests can be executed by LivingDoc.
  6. Next is the remote agent: in order to get our scenario running, we got the advice from the LivingDoc developer team to start a remote agent. You will find the complete remote agent jar under https://github.com/testIT-LivingDoc/livingdoc-core/releases . At that time, we chose livingdoc-remote-agent-1.0.0-complete.jar and downloaded it. Please start it as described in the LivingDoc documentation (Current Documentation > Confluence Plugin > Advanced > Remote Agent).
  7. In order for the Selenium tests to be executed by LivingDoc, we need a jar file with all our tests including their dependencies. Therefore, we use the Maven Assembly Plugin to build a jar with all dependencies. Below is the configuration of the Maven Assembly Plugin (link to the whole POM):
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <executions>
            <execution>
                <goals>
                    <goal>test-jar</goal>
                </goals>
            </execution>
        </executions>
    </plugin>
    <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.6</version>
        <executions>
            <execution>
                <id>jar-with-dep</id>
                <goals>
                    <goal>single</goal>
                </goals>
                <phase>package</phase>
                <configuration>
                    <descriptors>
                        <descriptor>src/assembly/src.xml</descriptor>
                    </descriptors>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
            </execution>
        </executions>
    </plugin>
    

    The descriptor format jar-with-dependencies can be found on the Maven Assembly Plugin site. Building the jar is then a single Maven call, as shown below.
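
With this configuration in place, the build is one command. The resulting file name is an assumption based on the usual naming scheme of the assembly plugin:

mvn clean package
# produces target/<artifactId>-<version>-jar-with-dependencies.jar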

Summary

Bringing it all together and keeping the remote agent running, we can execute our Selenium tests from Confluence. Now we are ready to rumble.

In the meantime, there is a VirtualBox image with everything inside under LivingDoc documentation > Showcases. Furthermore, there is a new release 1.1 that supports Confluence 5.9.3. But if you want to install everything by yourself, we hope this can save you some time.

(Screenshot: PetClinic acceptance tests in Confluence)

Links

  1. Spring Pet Clinic Project
  2. LivingDoc on GitHub
  3. LivingDoc documentation



Git Resources for Beginners

In this post I’d like to share resources that helped me learn and understand Git.

Links

Do you have some more resources that you can recommend? Let me know and write a comment.



Unit And Integration Test Reports For Maven Projects In SonarQube

Since SonarQube 4.2, the test report isn’t generated by the Sonar Maven Plugin during a Maven build (see SonarQube’s blog post). Therefore, the test report has to be generated by another plugin before the Sonar Maven Plugin collects the information for the SonarQube server. Here, the Jacoco Maven Plugin can help. It can generate test reports that SonarQube understands. And the Jacoco Maven Plugin goes one step further: it can also generate a test report for integration tests.

In the following sections, a solution is presented that meets the following criteria:

  • Maven is used as build tool.
  • The project can be a multi module project.
  • Unit tests and integration tests are parts of each module. Here, integration tests are tests that test the integration between classes in a module.
  • Test reports are separated into a unit test report and an integration test report.

The road map for the next sections is as follows: first, the Maven project structure for the separation of unit and integration tests is shown. Then the Maven project configuration for separate unit test runs and integration test runs is shown. After that, we have a look at the Maven project configuration for the test report generation, separated into unit tests and integration tests. At the end, SonarQube’s configuration for the test report visualization in SonarQube’s dashboard is shown.

Maven Project Structure

At first, we look at what a default Maven project structure looks like for a single-module project.

my-app
├── pom.xml
├── src
│   ├── main
│   │   └── java
│   │       └──
│   └── test
│       └── java
│           └──

The directory src/main/java contains the production source code and the directory src/test/java contains the test source code. We could put unit tests and integration tests together in this directory. But we want to separate these two types of tests into separate directories. Therefore, we add a new directory called src/it/java. The unit tests are put in the directory src/test/java and the integration tests are put in the directory src/it/java, so the new project structure looks like the following one.

my-app
├── pom.xml
├── src
│   ├── it
│   │   └── java
│   │       └──
│   ├── main
│   │   └── java
│   │       └──
│   └── test
│       └── java
│           └──

Unit And Integration Test Runs

Fortunately, the unit test run configuration is part of the default Maven project configuration. Maven runs these tests automatically if the following criteria are met:

  • The tests are in the directory src/test/java and
  • the test class name either starts with Test or ends with Test or TestCase.

Maven runs these tests during the test phase of Maven’s build lifecycle.

The integration test run configuration has to be done manually. There are Maven plugins that can help. We want the following criteria to be met:

  • integration tests are stored in the directory src/it/java and
  • the integration test class name either starts with IT or ends with IT or ITCase, and
  • integration tests run during the integration-test phase of Maven’s build lifecycle.

Firstly, Maven has to know that it has to add the directory src/it/java to its test class path. Here, the Build Helper Maven Plugin can help. It adds the directory src/it/java to the test class path.


<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>1.8</version>
  <executions>
    <execution>
      <id>add-test-source</id>
      <phase>process-test-sources</phase>
      <goals>
        <goal>add-test-source</goal>
      </goals>
      <configuration>
        <sources>
          <source>src/it/java</source>
        </sources>
      </configuration>
    </execution>
    <execution>
      <id>add-test-resources</id>
      <phase>generate-test-resources</phase>
      <goals>
        <goal>add-test-resource</goal>
      </goals>
      <configuration>
        <resources>
          <resource>
            <directory>src/it/resources</directory>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>

The above code snippet has to be inserted into the section <project><build><plugins> in the project root pom.

Maven’s build lifecycle contains a phase called integration-test. In this phase, we want to run the integration tests. Therefore, we bind the Maven Failsafe Plugin to the phase integration-test:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.13</version>
  <configuration>
    <encoding>${project.build.sourceEncoding}</encoding>
  </configuration>
  <executions>
    <execution>
      <id>failsafe-integration-tests</id>
      <phase>integration-test</phase>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Again, the above code snippet has to be inserted into the section <project><build><plugins> in the project root pom. The Maven Failsafe Plugin then runs the integration tests automatically when their class name either starts with IT or ends with IT or ITCase.

Test Report Generation

We want to use the Jacoco Maven Plugin for the test report generation. It should generate two test reports, one for the unit tests and one for the integration tests. Therefore, the plugin needs two separate agents that have to be prepared. They then generate the reports during the test runs. Maven’s build lifecycle contains its own preparation phases before the test phases (test and integration-test): the preparation phase for the test phase is called process-test-classes, and the preparation phase for the integration-test phase is called pre-integration-test. We bind the Jacoco Maven Plugin to these two phases, so the configuration of this plugin looks like the following code snippet (again, it is part of the section <project><build><plugins>):

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.7.2.201409121644</version>
  <executions>
    <execution>
      <configuration>
        <destFile>${sonar.jacoco.reportPath}</destFile>
      </configuration>
      <id>pre-test</id>
      <phase>process-test-classes</phase>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
<!-- we want to execute jacoco:prepare-agent-integration in the pre-integration-test phase,
     but before the Maven Failsafe Plugin executes -->
    <execution>
      <configuration>
        <destFile>${sonar.jacoco.itReportPath}</destFile>
      </configuration>
      <id>pre-itest</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>prepare-agent-integration</goal>
      </goals>
    </execution>
  </executions>
</plugin>

The configuration element destFile is the path to the location where the test reports should be stored. It is important to use the properties ${sonar.jacoco.reportPath} and ${sonar.jacoco.itReportPath}, because SonarQube uses these properties to find the test reports for the visualization (see the snippet below).
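
If these properties aren’t defined anywhere yet, they can be set in the <properties> section of the root pom. The paths below are assumptions; any location works as long as the Jacoco agents write and SonarQube reads the same files:

<properties>
  <sonar.jacoco.reportPath>${project.basedir}/target/jacoco.exec</sonar.jacoco.reportPath>
  <sonar.jacoco.itReportPath>${project.basedir}/target/jacoco-it.exec</sonar.jacoco.itReportPath>
</properties>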

Now we can run the goal mvn install, and our project is built including unit and integration tests, and the two test reports are generated.

SonarQube Test Report Visualization

Now we want to visualize our test reports in SonarQube. Therefore, we have to run the Sonar Maven 3 Plugin (command mvn sonar:sonar) in our project after a successful build.

When we open our project in the SonarQube dashboard, we only see the report for the unit tests per module. The reason is that the report visualization for the integration tests has to be configured in SonarQube separately. These configuration steps are described very well in the SonarQube documentation.

Summary

This blog post describes how to generate test reports for unit and integration tests during a Maven build. On GitHub, I host a sample project that demonstrates all configuration steps. As technical environment I use

  • Maven 3.2.5
  • Maven Plugins:
    • Maven Surefire Plugin
    • Maven Failsafe Plugin
    • Build Helper Maven Plugin
    • Jacoco Maven Plugin
    • Sonar Maven 3 Plugin
  • SonarQube 4.5.1
  • Java 7

Links

  1. SonarQube’s blog post Unit Test Execution in SonarQube
  2. Jacoco Maven plugin project site
  3. Introduction to Maven’s build lifecycle
  4. Maven Failsafe Plugin project site
  5. Build Helper Maven Plugin project site
  6. SonarQube documentation about Code Coverage by Integration Tests for Java Project
  7. A sample Maven project on GitHub