SKM IT World

Just another blog about IT



How to Format a Large Code Base Automatically

If you introduce code formatting rules retroactively, you have to solve the problem of how to format the existing code base according to the new formatting rules. You could check out every code repository one by one in your IDE and click on Autoformat for the whole project. But this is boring and a waste of time. Fortunately, IntelliJ IDEA ships a format CLI tool in its installation. You can locate it in <your IntelliJ IDEA installation>/bin. It’s called format.sh.

In the next sections I’d like to show you how to automate formatting a big code base. First, I will show the preparation steps, like exporting your code formatting settings from the IDE. Then, I will demonstrate how to use the CLI tool format.sh. At the end, I will show a small Groovy script that queries all repositories (in this case, Git repositories), formats the code, and pushes it back to the remote SCM.

Preparations

First of all, we need the code formatting settings exported from IntelliJ IDEA. In IntelliJ IDEA, follow these steps:

  1. Open File -> Settings -> Editor -> Code Style
  2. Click on Export…
  3. Choose a name for the XML file (for example, Default.xml) and a location where this file should be saved (for example, /home/foo).

Then, check out or clone your SCM repository and remember the location where you checked it out or cloned it (for example, /home/foo/myrepository).
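For example, cloning with Git could look like this (the repository URL is a placeholder):

> git clone https://scm/myrepository.git /home/foo/myrepository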

Format Code Base Via format.sh CLI Tool

Three parameters are important for format.sh:

  • -s : Set the path to the IntelliJ IDEA code style settings .xml file (in our example: /home/foo/Default.xml).
  • -r : Set that directories should be scanned recursively.
  • path<n> : Set a path to a file or a directory that should be formatted (in our example: /home/foo/myrepository).

> ./format.sh
IntelliJ IDEA 2018.2.4, build IC-182.4505.22 Formatter
Usage: format [-h] [-r|-R] [-s|-settings settingsPath] path1 path2...
-h|-help Show a help message and exit.
-s|-settings A path to Intellij IDEA code style settings .xml file.
-r|-R Scan directories recursively.
-m|-mask A comma-separated list of file masks.
path<n> A path to a file or a directory.

> ./format.sh -r -s ~/Default.xml ~/myrepository

It’s possible that the tool aborts scanning because of a java.lang.OutOfMemoryError: Java heap space. In that case, you have to increase Java’s maximum memory size (-Xmx) in <your IntelliJ IDEA installation>/bin/idea64.vmoptions.


> nano idea64.vmoptions
-Xms128m
-Xmx750m // <- here increase the maximum memory size
-XX:ReservedCodeCacheSize=240m
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=50
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-Djdk.http.auth.tunneling.disabledSchemes=""
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-Dawt.useSystemAAFontSettings=lcd
-Dsun.java2d.renderer=sun.java2d.marlin.MarlinRenderingEngine

Groovy Script For Formatting Many Repositories In a Row

Now, we want to bring everything together. The script should do four things:

  1. Find all repository URLs whose code has to be formatted.
  2. Check out / Clone the repositories.
  3. Format the code in all branches of each repository.
  4. Commit and push the change to the remote SCM.

I chose Git as the SCM in my example. How you find the repository URLs depends on the Git management system (like BitBucket, GitLab, SCM-Manager, etc.) that you use, but the approach is the same in all systems:

  1. Call the RESTful API of your Git Management System.
  2. Parse the JSON object in the response for the URLs.

For example, in BitBucket it looks like this:



@Grab('org.codehaus.groovy.modules.http-builder:http-builder:0.7.1')
import groovyx.net.http.*

import groovy.json.JsonSlurper

import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*

def cloneUrlsForProject() {

    def projectUrl = "https://scm/rest/api/1.0/projects/PROJECT_KEY/repos?limit=1000"
    def token = "BITBUCKET_TOKEN"

    def projects = []
    def cloneUrls = []

    def http = new HTTPBuilder(projectUrl)
    http.request(GET) {
        headers."Accept" = "application/json"
        headers."Authorization" = "Bearer ${token}"

        response.success = { resp -> projects = new JsonSlurper().parseText(resp.entity.content.text)}

        response.failure = { resp ->
            throw new RuntimeException("Error fetching clone urls from '${projectUrl}': ${resp.statusLine}")
        }
    }

    projects.values.each { value ->
        def cloneLink = value.links.clone.find { it.name == "ssh" }
        cloneUrls.add(cloneLink.href)
    }

    return cloneUrls
}
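As a small usage sketch (using the names from the snippet above), the returned clone URLs can feed the formatting script below:

def repositoryUrls = cloneUrlsForProject()
repositoryUrls.each { println "Found clone URL: ${it}" }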

Then, we have to clone the repositories and check out each branch. In each branch, format.sh has to be called. For the Git operations, we use the JGit library, and for the format.sh call, we use a Groovy feature for calling processes: in Groovy, it’s possible to define the command as a String and then call the method execute() on this String, like “ls -l”.execute(). So the Groovy script for the last three tasks would look like this:

#!/usr/bin/env groovy
@Grab('org.eclipse.jgit:org.eclipse.jgit:5.1.2.201810061102-r')
import org.eclipse.jgit.api.CreateBranchCommand
import org.eclipse.jgit.api.Git
import org.eclipse.jgit.api.ListBranchCommand
import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider

import java.nio.file.Files


def intellijHome = 'path to your idea home folder'
def codeFormatterSetting = 'path to your exported code formatter setting file'
def allRepositoriesUrls = ["http://scm/repo1","http://scm/repo2"] // for simplifying

allRepositoriesUrls.each { repository ->
    def repositoryName = repository.split('/').flatten().findAll { it != null }.last()
    File localPath = Files.createTempDirectory("${repositoryName}-").toFile()
    println "Clone ${repository} to ${localPath}"
    Git.cloneRepository()
       .setURI(repository)
       .setDirectory(localPath)
       .setNoCheckout(true)
       .setCredentialsProvider(new UsernamePasswordCredentialsProvider("user", "password")) // only needed when clone url is https / http
       .call()
       .withCloseable { git ->
        def remoteBranches = git.branchList().setListMode(ListBranchCommand.ListMode.REMOTE).call()
        def remoteBranchNames = remoteBranches.collect { it.name.replace('refs/remotes/origin/', '') }

        println "Found the following branches: ${remoteBranchNames}"

        remoteBranchNames.each { remoteBranch ->
            println "Checkout branch $remoteBranch"
            git.checkout()
               .setName(remoteBranch)
               .setCreateBranch(true)
               .setUpstreamMode(CreateBranchCommand.SetupUpstreamMode.TRACK)
               .setStartPoint("origin/" + remoteBranch)
               .call()

            def formatCommand = "$intellijHome/bin/format.sh -r -s $codeFormatterSetting $localPath"

            println formatCommand.execute().text

            git.add()
               .addFilepattern('.')
               .call()
            git.commit()
               .setAuthor("Automator", "no-reply@yourcompany.com")
               .setMessage('Format code according to IntelliJ setting.')
               .call()

            println "Commit successful!"
        }

        git.push()
           .setCredentialsProvider(new UsernamePasswordCredentialsProvider("user", "password")) // only needed when clone url is https / http
           .setPushAll()
           .call()

        println "Push is done"

    }
}
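A note on the process call: String.execute() returns a java.lang.Process, and reading only its standard output (as done above with .text) can hide errors from format.sh. As a sketch, a more defensive variant of the call in the script above could capture both streams and check the exit code:

def out = new StringBuffer()
def err = new StringBuffer()
def process = formatCommand.execute()
process.consumeProcessOutput(out, err) // collect stdout and stderr concurrently
process.waitFor()                      // wait until format.sh has finished
if (process.exitValue() != 0) {
    throw new RuntimeException("format.sh failed for ${localPath}: ${err}")
}
println out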

Do you have another approach? Let me know and write a comment below.




Running Ansible on a Windows System

At my last conference talk (it was about Ansible and Docker at DevOpsCon in Berlin), I was asked about the best way to run Ansible on a Windows system. Ansible itself requires a Linux-based system as the control machine. When I have to develop on a Windows machine, I install a Linux-based virtual machine and run the Ansible playbooks inside it. I set up the virtual machine with VirtualBox and Vagrant. These tools allow me to share the playbooks easily between the host and the virtual machine, so I can develop the playbooks on the Windows system while the virtual machine has a headless setup. The next sections show you how to set up this tool chain.

Tool Chain Setup

At first, install VirtualBox and Vagrant on your machine. I additionally use Babun, a Windows shell based on Cygwin and oh-my-zsh, for a better shell experience on Windows, but this isn’t necessary. Then, go to the directory (let’s call it ansible-workspace) where your Ansible playbooks are located. There, create a Vagrant configuration file with the command vagrant init:
ansible-workspace
├── demo-app-ansible-deploy-1.0-SNAPSHOT.war
├── deploy-demo.yml
├── inventories
│   ├── production
│   └── test
├── README.md
├── roles
│   ├── deploy-on-tomcat
│   │   ├── defaults
│   │   │   └── main.yml
│   │   └── tasks
│   │       ├── cleanup-webapp.yml
│   │       ├── deploy-webapp.yml
│   │       ├── main.yml
│   │       ├── start-tomcat.yml
│   │       └── stop-tomcat.yml
│   ├── jdk
│   │   └── tasks
│   │       └── main.yml
│   └── tomcat8
│       ├── defaults
│       │   └── main.yml
│       ├── files
│       │   └── init.d
│       │       └── tomcat
│       ├── tasks
│       │   └── main.yml
│       └── templates
│           └── setenv.sh.j2
├── setup-app-roles.yml
├── setup-app.yml
└── Vagrantfile


Now, we have to choose a so-called Vagrant box from the Vagrant Cloud. A box is the package format for a Vagrant environment. Which box to use depends on the provider and the operating system that you choose. In our case, it is a VirtualBox VM image based on a minimal Ubuntu 18.04 system (the box name is bento/ubuntu-18.04). This box is configured in our Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"
end

The next step is to ensure that Ansible will be installed in the box. Thus, we use the shell provisioner of Vagrant. The Vagrantfile is extended by the provisioning code:

Vagrant.configure("2") do |config|
  # ... other Vagrant configuration
  config.vm.provision "shell", inline: &lt;&lt;-SHELL
    sudo apt-get update -y
    sudo apt-get install -y software-properties-common
    sudo apt-add-repository ppa:ansible/ansible
    sudo apt-get update -y
    sudo apt-get install -y ansible
    # ... other Vagrant provision steps
  SHELL
end
end

The last step is to copy the SSH credentials into the Vagrant box. For that, we mount the SSH credentials folder of the host system as a synced folder, so that we can copy the keys into the SSH config folder inside the box.
Vagrant.configure("2") do |config|
 
  # ... other Vagrant configuration
  config.vm.synced_folder ".", "/vagrant"
  config.vm.synced_folder "path to your ssh config", "/home/vagrant/ssh-host"
  # ... other Vagrant configuration

  config.vm.provision "shell", inline: <<-SHELL
    # ... other Vagrant provision steps
    cp /home/vagrant/ssh-host/* /home/vagrant/.ssh/.
  SHELL
end

You can find the whole Vagrantfile in a Gist on GitHub.

Workflow

After setting up the tool chain, let’s have a look at how to work with it. I write my Ansible playbooks on the Windows system and run them from the Linux guest system against the remote hosts. To run the Ansible playbooks, we have to start the Vagrant box.
> cd ansible-workspace
> vagrant up

When the Vagrant box is ready to use, we can jump into the box with:
 
> vagrant ssh 

You can find the Ansible playbooks inside the box in the folder /vagrant. In this folder, run Ansible:
 
> cd" /vagrant
> ansible-playbook -i inventories/test -u tekkie setup-db.yml
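If you want to verify that the provisioning worked, you can, for example, print the installed Ansible version inside the box; when you are done, vagrant halt (run on the host again) shuts the box down:

> ansible --version
> exit
> vagrant halt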

Outlook

Maybe on Windows 10 it’s possible to use Ansible natively thanks to the Linux subsystem, but I haven’t tried it out. Some Docker fans would prefer a container instead of a virtual machine. But remember, before Windows 10, Docker ran on Windows inside a virtual machine, so I don’t see a benefit in using Docker instead of a virtual machine there. Of course, with Windows 10’s native container support, a setup with Docker is a good alternative if Ansible doesn’t run on the Linux subsystem.
Do you have another idea or approach? Let me know and write a comment.

Links

  1. VirtualBox
  2. Vagrant
  3. Whole Vagrantfile on GitHub.

 



Apache2 as Reverse Proxy for NPM Registry Proxies in Sonatype Nexus 3

I use an NPM registry proxy in Sonatype Nexus 3 behind Apache2 as a reverse proxy. With the “standard” Apache2 VirtualHost configuration


<VirtualHost *:80>

  ProxyRequests Off
  <Proxy *>
    Order deny,allow
    Allow from all
  </Proxy>
  ProxyPass / http://localhost:8081/
  ProxyPassReverse / http://localhost:8081/

</VirtualHost>

I got the following failure when I tried to install the dependency @sinonjs/formatio:

$ yarn add @sinonjs/formatio --verbose
yarn add v1.3.2
warning package.json: No license field
verbose 0.337 Checking for configuration file "/home/sparsick/dev/workspace/yarn-test-module/.npmrc".
verbose 0.337 Checking for configuration file "/home/sparsick/.npmrc".
verbose 0.337 Found configuration file "/home/sparsick/.npmrc".
verbose 0.337 Checking for configuration file "/usr/etc/npmrc".
verbose 0.338 Found configuration file "/usr/etc/npmrc".
verbose 0.338 Checking for configuration file "/home/sparsick/dev/workspace/yarn-test-module/.npmrc".
verbose 0.338 Checking for configuration file "/home/sparsick/dev/workspace/.npmrc".
verbose 0.338 Checking for configuration file "/home/sparsick/dev/.npmrc".
verbose 0.338 Checking for configuration file "/home/sparsick/.npmrc".
verbose 0.338 Found configuration file "/home/sparsick/.npmrc".
verbose 0.338 Checking for configuration file "/home/.npmrc".
verbose 0.341 Checking for configuration file "/home/sparsick/dev/workspace/yarn-test-module/.yarnrc".
verbose 0.342 Found configuration file "/home/sparsick/dev/workspace/yarn-test-module/.yarnrc".
verbose 0.343 Checking for configuration file "/home/sparsick/.yarnrc".
verbose 0.344 Found configuration file "/home/sparsick/.yarnrc".
verbose 0.344 Checking for configuration file "/usr/etc/yarnrc".
verbose 0.344 Checking for configuration file "/home/sparsick/dev/workspace/yarn-test-module/.yarnrc".
verbose 0.345 Found configuration file "/home/sparsick/dev/workspace/yarn-test-module/.yarnrc".
verbose 0.345 Checking for configuration file "/home/sparsick/dev/workspace/.yarnrc".
verbose 0.345 Checking for configuration file "/home/sparsick/dev/.yarnrc".
verbose 0.345 Checking for configuration file "/home/sparsick/.yarnrc".
verbose 0.345 Found configuration file "/home/sparsick/.yarnrc".
verbose 0.345 Checking for configuration file "/home/.yarnrc".
verbose 0.347 current time: 2018-02-27T08:04:43.357Z
warning yarn-test-module: No license field
[1/4] Resolving packages...
verbose 0.45 Performing "GET" request to "http://mycompany/repository/npm-public/@sinonjs%2fformatio".
verbose 0.55 Request "http://mycompany/repository/npm-public/@sinonjs%2fformatio" finished with status code 404.
verbose 0.551 Error: Couldn't find package "@sinonjs/formatio" on the "npm" registry.
at /usr/lib/node_modules/yarn/lib/cli.js:49061:15
at Generator.next (<anonymous>)
at step (/usr/lib/node_modules/yarn/lib/cli.js:92:30)
at /usr/lib/node_modules/yarn/lib/cli.js:103:13
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
error Couldn't find package "@sinonjs/formatio" on the "npm" registry.
info Visit https://yarnpkg.com/en/docs/cli/add for documentation about this command.

The problem is that Apache2 canonicalizes URLs by default. So I have to configure Apache2 not to canonicalize URLs and, additionally, to allow encoded slashes:


<VirtualHost *:80>

  ProxyRequests Off
  <Proxy *>
    Order deny,allow
    Allow from all
  </Proxy>
  ProxyPass / http://localhost:8081/ nocanon
  ProxyPassReverse / http://localhost:8081/

  AllowEncodedSlashes NoDecode

</VirtualHost>

With the above Apache2 VirtualHost configuration, I could install my dependency via the NPM registry proxy:


$ yarn add @sinonjs/formatio

yarn add v1.3.2
warning package.json: No license field
info No lockfile found.
warning yarn-test-module: No license field
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
success Saved 2 new dependencies.
├─ @sinonjs/formatio@2.0.0
└─ samsam@1.3.0
warning yarn-test-module: No license field
Done in 0.60s.

Big thanks to the Sonatype support team, who gave me this advice.



Mocking SecurityContext in Jersey Tests

Jersey offers a great way to write integration tests for REST APIs written with Jersey: just extend the class JerseyTest and go for it.

I ran into an issue where I had to mock a SecurityContext, so that the SecurityContext includes a special UserPrincipal. The challenge is that in tests, Jersey wraps the SecurityContext in its own class, SecurityContextInjectee. So I have to add my SecurityContext mock to this Jersey wrapper class. Let me demonstrate it with an example.

Let’s say I have the following Jersey resource:


import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("hello/world")
public class MyJerseyResource {

    @GET
    public Response helloWorld(@Context final SecurityContext context) {
        String name = context.getUserPrincipal().getName();
        return Response.ok("Hello " + name, MediaType.TEXT_PLAIN).build();
    }

}

In my test, I have to mock the SecurityContext so that a predefined user principal can be used during the tests. I use Mockito as the mocking framework. My mock looks like the following one:

import java.security.Principal;
import javax.ws.rs.core.SecurityContext;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

final SecurityContext securityContextMock = mock(SecurityContext.class);
when(securityContextMock.getUserPrincipal()).thenReturn(new Principal() {
    @Override
    public String getName() {
        return "Alice";
    }
});

To add this mocked SecurityContext to the wrapper class SecurityContextInjectee, I have to register a ContainerRequestFilter in a ResourceConfig in my Jersey test. The filter sets the mocked SecurityContext on the ContainerRequestContext, and from there it is used by the wrapper class:


import java.io.IOException;
import java.security.Principal;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Application;
import javax.ws.rs.core.SecurityContext;

import org.glassfish.jersey.server.ResourceConfig;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

@Override
public Application configure() {
    final SecurityContext securityContextMock = mock(SecurityContext.class);
    when(securityContextMock.getUserPrincipal()).thenReturn(new Principal() {
        @Override
        public String getName() {
            return "Alice";
        }
    });

    ResourceConfig config = new ResourceConfig();
    config.register(new ContainerRequestFilter(){
        @Override
        public void filter(final ContainerRequestContext containerRequestContext) throws IOException {
            containerRequestContext.setSecurityContext(securityContextMock);
        }
    });
    return config;
}

Then, the whole test for my resource looks like the following one:

import java.io.IOException;
import java.security.Principal;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Application;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.SecurityContext;

import org.apache.commons.httpclient.HttpStatus;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.test.JerseyTest;
import org.junit.Test;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class MyJerseyResourceTest extends JerseyTest {

    @Test
    public void helloWorld() throws Exception {
        Response response = target("hello/world").request().get();

        assertThat(response.getStatus()).isEqualTo(HttpStatus.SC_OK);
        assertThat(response.readEntity(String.class)).isEqualTo("Hello Alice");
    }

    @Override
    public Application configure() {
        final SecurityContext securityContextMock = mock(SecurityContext.class);
        when(securityContextMock.getUserPrincipal()).thenReturn(new Principal() {
            @Override
            public String getName() {
                return "Alice";
            }
        });

        ResourceConfig config = new ResourceConfig();
        config.register(new ContainerRequestFilter() {
            @Override
            public void filter(final ContainerRequestContext containerRequestContext) throws IOException {
                containerRequestContext.setSecurityContext(securityContextMock);
            }
        });
        return config;
    }
}

Do you have a smarter solution for this problem? Let me know and write a comment below.



Pimp My Git – Generate Content for .gitignore From Scratch

When I start a new Git repository, I lose a lot of time setting up my .gitignore file, and normally I don’t match everything on the first shot. Fortunately, there are some tools that help to bootstrap it. I’d like to show two of them: one is a website that can also be used from the command line, and the other is a plugin for the IDE IntelliJ IDEA.

Website gitignore.io

There is a website, http://gitignore.io, that lists the common ignore patterns for your specific programming languages, tools, IDEs, etc.
The usage is very simple: fill the search with the names of the tools, frameworks, programming languages, etc. that you want to use in your Git project, and the website generates the content for your .gitignore file.

You can also run gitignore.io from your command line. For that, you need an active internet connection and a shell function. I’ll demonstrate the integration of gitignore.io in zsh. For the integration in other shells or clients, please look into the documentation.

Firstly, we have to create a function gi in our ~/.zshrc:


echo "function gi() { curl -L -s https://www.gitignore.io/api/\$@ ;}" >> ~/.zshrc && source ~/.zshrc

Now, we can use it on the command line.


$ gi java,maven # Preview of the content for .gitignore

# Created by https://www.gitignore.io/api/java,maven

### Java ###
# Compiled class file
*.class

# Log file
*.log

# BlueJ files
*.ctxt

# Mobile Tools for Java (J2ME)
.mtj.tmp/

# Package Files #
*.jar
*.war
*.ear
*.zip
*.tar.gz
*.rar

# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
hs_err_pid*

### Maven ###
target/
pom.xml.tag
pom.xml.releaseBackup
pom.xml.versionsBackup
pom.xml.next
release.properties
dependency-reduced-pom.xml
buildNumber.properties
.mvn/timing.properties

# Avoid ignoring Maven wrapper jar file (.jar files are usually ignored)
!/.mvn/wrapper/maven-wrapper.jar

# End of https://www.gitignore.io/api/java,maven

$ gi list # list currently available templates
1c-bitrix,a-frame,actionscript,ada,adobe
advancedinstaller,agda,alteraquartusii,altium,android
androidstudio,angular,anjuta,ansible,apachecordova
apachehadoop,appbuilder,appceleratortitanium,appcode,appcode+all
appcode+iml,appengine,aptanastudio,arcanist,archive
archives,archlinuxpackages,aspnetcore,assembler,atmelstudio
ats,audio,automationstudio,autotools,backup
basercms,basic,batch,bazaar,bazel
bitrix,bittorrent,blackbox,bluej,bower
bricxcc,buck,c,c++,cake
.... furthermore

$ gi java,maven >> .gitignore # append the content in your project's .gitignore

IntelliJ IDEA Plugin – .ignore

There is a plugin for IntelliJ IDEA that helps create a .gitignore file with content for your selected tools, programming languages, etc. At first, you have to install the plugin .ignore (go to File -> Settings -> Plugins and search for .ignore).

You can now create a .gitignore file via the .ignore plugin. By the way, the plugin can also create ignore files for other tools like Docker or Mercurial. A file generator opens where you can choose templates for the tools, programming languages, etc. that you will use in the Git project. A preview shows you the prospective content. A click on Generate and you are ready.

Do you have other tips and tricks to boost the initialization time of a Git project? Share them and write a comment below.

Links

  1. gitignore.io
  2. Website of .ignore



How to Mark a Jenkins Job Red When Tests Fail In A Maven Build

The default setting in Jenkins is to mark a job yellow when a Maven build fails because of failing tests. If you don’t want to have three statuses for your jobs, you can configure Jenkins so that jobs are also marked red regardless of why a Maven build fails.

For this you will need administration rights on your Jenkins instance. The following steps have to be done:

  1. Go to Manage Jenkins -> Manage system.
  2. Add -Dmaven.test.failure.ignore=false to Maven Project Configuration -> Global MAVEN_OPTS.
  3. Save this change and that’s it.

Your next job runs will respect this configuration. Unfortunately, this configuration only affects Maven jobs. Freestyle jobs ignore it (see also this bug).

But a workaround exists:

  1. Install the TextFinder plugin via Manage Jenkins -> Manage Plugins.
  2. Open the Freestyle job’s configuration that should be marked red, when Maven tests fail.
  3. Click on Add a post-build action (in section Post-build Action) and select Jenkins Text Finder.
  4. Activate the check box Also search the console output.
  5. Add the value There are test failures to Regular expression.
  6. Save this change.
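Alternatively, if you control the projects’ pom.xml files, the same behavior can be enforced in the build itself. As a sketch: pinning the maven-surefire-plugin’s testFailureIgnore parameter (the parameter behind the maven.test.failure.ignore property) to false makes the build fail whenever tests fail, which in turn marks the job red:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Fail the build on test failures so the Jenkins job turns red -->
    <testFailureIgnore>false</testFailureIgnore>
  </configuration>
</plugin>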

 

 

 



Generate P2 Repository From Maven Artifacts In 2017

Some years ago, I wrote a blog post about how to generate a P2 repository based on Maven artifacts. The approach described there is obsolete nowadays, and I’d like to show a new approach that is based on the p2-maven-plugin, which was created to solve exactly this problem.

P2-Maven-Plugin Integration in Maven Build Life Cycle

First of all, we bind the p2-maven-plugin’s goal site to Maven’s life cycle phase package. This goal is responsible for the generation of the P2 repository.

<plugin>
  <groupId>org.reficio</groupId>
  <artifactId>p2-maven-plugin</artifactId>
  <version>1.3.0</version>
  <executions>
    <execution>
      <id>default-cli</id>
      <phase>package</phase>
      <goals>
        <goal>site</goal>
      </goals>
      <!--... -->
    </execution>
  </executions>
</plugin>

Generating P2 Repository

Now, we can define which Maven artifacts should be part of the new P2 repository. It is irrelevant to the p2-maven-plugin whether the defined artifacts already have an OSGi manifest or not. If no OSGi manifest exists, the plugin will generate one.


<execution>
<!-- ... -->
<configuration>
  <artifacts>
    <!-- specify your dependencies here -->
    <!-- groupId:artifactId:version -->
    <artifact>
      <id>com.google.guava:guava:jar:23.0</id>
      <!-- Artifact with existing OSGi-Manifest-->
    </artifact>
    <artifact>
      <id>commons-io:commons-io:1.3</id>
      <!-- Artifact without existing OSGi-Manifest-->
    </artifact>
  </artifacts>
</configuration>
</execution>

The artifacts are specified by the pattern groupId:artifactId:version. If you want to save some typing, use the Buildr tab on the MVN Repository website to copy the right dependency declaration format.

This sample configuration creates a P2 repository that looks like the following one:


target/repository
├── artifacts.jar
├── category.xml
├── content.jar
└── plugins
    ├── com.google.code.findbugs.jsr305_1.3.9.jar
    ├── com.google.errorprone.error_prone_annotations_2.0.18.jar
    ├── com.google.guava_23.0.0.jar
    ├── com.google.j2objc.annotations_1.1.0.jar
    ├── commons-io_1.3.0.jar
    └── org.codehaus.mojo.animal-sniffer-annotations_1.14.0.jar

1 directory, 9 files

 

The default behavior of the plugin is that all transitive dependencies of the defined artifacts are also downloaded and packed into the P2 repository. If you don’t want that, you have to set the option transitive to false in the corresponding artifact declaration. If you need the sources of a defined artifact in the P2 repository (provided they exist in the Maven repository), you have to set the option source to true in the corresponding artifact declaration.

<!-- ... -->
<artifact>
  <id>com.google.guava:guava:jar:23.0</id>
  <transitive>false</transitive>
  <source>true</source>
</artifact>
<!-- ... -->

Then the generated P2 repository looks like the following one:


target/repository
├── artifacts.jar
├── category.xml
├── content.jar
└── plugins
    ├── com.google.guava.source_23.0.0.jar
    ├── com.google.guava_23.0.0.jar
    └── commons-io_1.3.0.jar

1 directory, 6 files

Generating P2 Repository With Grouped Artifacts

In some situations, you want to group artifacts into so-called features. The p2-maven-plugin provides an option that allows grouping the Maven artifacts directly into features. The definition of the artifacts is the same as above; the difference is that it has to be inside the corresponding feature. The feature definition additionally needs some metadata like a feature ID, feature version, description, etc.


<!-- ...-->
<configuration>
  <featureDefinitions>
    <feature>
      <!-- Generate a feature including artifacts that are listed below inside the feature element-->
      <id>spring.feature</id>
      <version>4.3.11</version>
      <label>Spring Framework 4.3.11 Feature</label>
      <providerName>A provider</providerName>
      <description>${project.description}</description>
      <copyright>A copyright</copyright>
      <license>A licence</license>
      <artifacts>
        <artifact>
          <id>org.springframework:spring-core:jar:4.3.11.RELEASE</id>
        </artifact>
        <artifact>
          <id>org.springframework:spring-context:jar:4.3.11.RELEASE</id>
          <source>true</source>
        </artifact>
      </artifacts>
    </feature>
    <!--...-->
  </featureDefinitions>
  <!-- ... -->
</configuration>

Then the generated P2 repository looks like the following one:


target/repository
├── artifacts.jar
├── category.xml
├── content.jar
├── features
│   └── spring.feature_4.3.11.jar
└── plugins
    ├── org.apache.commons.logging_1.2.0.jar
    ├── org.springframework.spring-aop.source_4.3.11.RELEASE.jar
    ├── org.springframework.spring-aop_4.3.11.RELEASE.jar
    ├── org.springframework.spring-beans.source_4.3.11.RELEASE.jar
    ├── org.springframework.spring-beans_4.3.11.RELEASE.jar
    ├── org.springframework.spring-context.source_4.3.11.RELEASE.jar
    ├── org.springframework.spring-context_4.3.11.RELEASE.jar
    ├── org.springframework.spring-core_4.3.11.RELEASE.jar
    ├── org.springframework.spring-expression.source_4.3.11.RELEASE.jar
    └── org.springframework.spring-expression_4.3.11.RELEASE.jar

2 directories, 14 files

Of course, both options (generating a P2 repository with features and with plugins only) can be mixed.

The p2-maven-plugin provides more options, like excluding specific transitive dependencies, referencing other Eclipse features, and so on. For more information, please look at the p2-maven-plugin homepage.

Now, we can generate P2 repositories from Maven artifacts. What is still missing is how to deploy this P2 repository to a repository manager like Artifactory or Sonatype Nexus. Both repository managers support P2 repositories: Artifactory in the Professional edition (costs money) and Sonatype Nexus in the OSS edition (free). For Nexus, it’s important that you use version 2.x; the newest version, 3.x, doesn’t support P2 repositories yet.

Deploying P2 Repository to a Repository Manager

First of all, we want our generated P2 repository to be packed into a zip file. Therefore, we add the tycho-p2-repository-plugin to the Maven build life cycle:


<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-p2-repository-plugin</artifactId>
  <version>1.0.0</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>archive-repository</goal>
      </goals>
    </execution>
  </executions>
</plugin>


Then, we have to mark this zip file so that Maven recognizes that it has to deploy it to a repository manager during the deploy phase. For this, we add the build-helper-maven-plugin to the Maven build life cycle.

<!-- Attach zipped P2 repository to be installed and deployed in the Maven repository during the deploy phase. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>3.0.0</version>
  <executions>
    <execution>
      <goals>
        <goal>attach-artifact</goal>
      </goals>
      <configuration>
        <artifacts>
          <artifact>
            <file>target/${project.artifactId}-${project.version}.zip</file>
            <type>zip</type>
          </artifact>
        </artifacts>
      </configuration>
    </execution>
  </executions>
</plugin>

Now, the generated P2 repository can be consumed by other projects. For more information about how to address the P2 repository, please have a look at the documentation of your repository manager.

A whole pom.xml sample can be found on GitHub.

Links