SKM IT World

Just another blog about IT



QA Milestones since the Foundation of an eHealth Framework

“Quality exists when the price is long forgotten”, said Mr Royce, with his petrol-smeared face, to Mr Rolls (or was it Mr Rolls to Mr Royce?). But anyway.

The question is: which price is he talking about?

Money, yes, of course! But what about pain, features, competition, time, returns, no family, consumer satisfaction, feel-good management, lifecycle… I think you get the point.

The reason for the pain I feel right now is that, for the third time, I have to move my contributions for the Developer Network of ICW to another webspace.

So I was quite surprised and, more importantly, I am very thankful for the offer from SKM (big hugs!) to finally be part of this environment full of motivation, positive energy and knowledge, and to link the following content for eternity to my Xing account.

To all who are magically attracted to quality, fighting against superficiality, sensitised to sense-of-self vs. awareness-of-others, and promoting best practices for Continuous Integration and Continuous Delivery:

Let's go…

QA Milestones since the Foundation of the eHealth Framework (2010)

Imagine that you are responsible for the quality of the underlying technical platform in the scope of an electronic health record that is being developed from scratch. The platform is the eHealth Framework (eHF). In addition, this solution is part of an integrated health suite with a software development kit (SDK) and various professional services products. On 2 December 2009 the tenth version of the eHealth Framework, carrying the version number 2.9, was released. That makes it a good occasion to look back and remind ourselves of the challenges we have faced so far in QA, how we solved them and how we generated value. As the saying goes: on time, on scope, on budget! The role that quality plays in this mix is the subject of this article.

 

QA from Scratch to First Release

When developing software from scratch you first need to lay the foundations so that, when it is time to roll out the software, all the features and usage scenarios can be tested. Sounds easy, but it isn't!

Here is a list of the highlights:
  • Product Management produced the Software Requirement Specification (SRS) with an overview of the priorities for developing the basis of the new LifeSensor, a personal health record.
  • Based on the Software Requirement Specification, QA extracted all the relevant test cases; these were then reviewed by Product Management.
  • The tooling to carry out the necessary tests was built. To enable web services testing we built a Java-based web services test client that catered for maximum flexibility and automation. The test data was stored in an MS Excel sheet (a minimal sketch of this idea follows after this list).
  • The development cycle was such that every Thursday there was a new version to be tested. Based on our testing we created new bugs as necessary.
  • The respective Product Manager created acceptance tests for the newly developed features that had to be completed for the current week. In this way everyone was involved in an operational feature.
  • In such a situation, be careful with statements like “…I'm 90% finished…” when you are already in the third day of finalizing the requested missing 10%. I don't want to criticize anyone, because requirements can and do change, and that causes effects from the DB straight through to the GUI and its usability.
  • To steer the project there was a clear focus on pragmatic and sensible reporting: tested, not tested and accepted bugs, including the top-ten risks.
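
To give an idea of how such an Excel-driven test client can work, here is a minimal, hypothetical Groovy sketch that reads test data rows with Apache POI and feeds them into a web service call. The file name, sheet name, column layout and the callCreatePatient method are assumptions for illustration only, not the actual eHF test client.

import org.apache.poi.ss.usermodel.WorkbookFactory

// Hypothetical sketch: read test data from an Excel sheet and run one web service test per row.
def workbook = WorkbookFactory.create(new File('testdata/patient-tests.xls'))
def sheet = workbook.getSheet('CreatePatient')    // assumed sheet name

sheet.each { row ->
  if (row.rowNum == 0) return                     // skip the header row
  def testCaseId   = row.getCell(0).stringCellValue
  def firstName    = row.getCell(1).stringCellValue
  def lastName     = row.getCell(2).stringCellValue
  def expectedCode = row.getCell(3).stringCellValue

  def response = callCreatePatient(firstName, lastName)
  assert response.code == expectedCode : "Test case ${testCaseId} failed"
  println("Test case ${testCaseId}: OK")
}

// Placeholder standing in for the real web service client call (assumption for this sketch).
def callCreatePatient(String firstName, String lastName) {
  return [code: 'OK']
}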

Lessons learned:

  • It was quickly proved in practice that integrative tests between WS-to-GUI and GUI-to-WS were valuable. Yes, there was a GUI too.
  • Another important point to share: feature-ready or QA-ready is NOT production-ready. So involve the operating administrators immediately to keep everything maintainable.
  • Take care of these guys! Make the operating administrators your friends. Do everything for that!

 

QA after Initial Roll-out

Once the software has gone into production, each new feature must fit into the existing application both conceptually and functionally without introducing any unwanted side effects. In addition, we were confronted with the mandatory requirement that every new version of a framework with existing clients must face.

The keyword here is: backwards compatibility. It is also important to mention that LifeSensor development was then split between its own specific components and its basis components, the latter becoming the backbone of ICW's health care platform: this was the birth of the eHealth Framework with its own development team.

Here are some of the highlights of this new team:
  • We changed our software development process to agile inside a V-Model and introduced Scrum. This means that we QAs worked in parallel with the development team and the product managers. We were no longer on the receiving end, getting a piece of software thrown over the fence and having to test it somewhere in isolation.
  • The whole team was responsible for delivering the features, and the whole team was responsible for quality; QA was no longer the only party responsible for quality. This is the modern QA approach, like in modern soccer: if the Bavarians have the ball, then Lewandowski becomes the first line of defense.
  • Going deeper with this thought, our approach and reason for existing is that QA has to deliver added value and support to development and product management in order to be accepted day by day. For instance, this means no longer being able to hide behind theoretical metrics discussions.
  • We refactored the architecture of our test client and separated common components like object creation from the test data in MS Excel and from the test result reporting. The added value here was that other teams could easily reuse our test data and checks when using the eHealth Framework, or if they wanted to build their own test client.
  • We started to build up a continuous integration environment based on CruiseControl, where every test in our test client was executed during the night.
  • We developed further features in our test client in order to minimize duplication of test data based on localization needs (for instance, different test data was necessary for Austria, Switzerland and so on).
  • Once again: we had already released a software version and would have to release further software versions. So please think about the attached diagram from the Software Lifecycle Management session at the ICW Developer Conference 2008 and what that means for ensuring backwards compatibility.

 

[Diagram: eHF Lifecycle]

Lessons learned:

  • Do not underestimate the software lifecycle. And it is good to have the corresponding checks in a continuous integration process.
  • QA is not only testing. QA is not Build Management.
  • Build Management is an independent IT topic with special in-depth know-how and provides a crucial service for the development and the QA team.
  • With software in production, be prepared for QA to take on new roles as supporter and investigator of potential bugs.

QA Between Version 2.1 and 2.8

The first part dealt with the historical roots of QA. We now make a big jump over the released versions. In the meantime, the amount of test data multiplied. We also supported new platforms such as GlassFish. Now the existing tools had to be made manageable and usable with this amount of necessary test data, which shows that usability is not only a topic for developing a GUI. We carried out about 50,000 tests per night on the web services level, per code line, and rising.

Here is a collection of highlights and ideas:
  • The test data in the Excel sheets was clustered into fragments where it made sense. For example, it is no longer necessary to include the address in each Emergency Contact as test data; you can pick the one you want from a pool of addresses and reference it. This saves a lot of maintenance work when the interface changes.
  • With each negative test we check the expected exception. However, exceptions and their messages are also subject to change. So what happens now to our backwards compatibility tests? They will all fail. What have we done about this? We reused the localization feature that I talked about in “QA after Initial Roll-out” and included there the newly expected exceptions for backwards compatibility testing.
  • And how does a test now know its context: is it a backwards compatibility test or a normal test for its own initial version? Talk to your build management team; they will provide you with a switch (a minimal sketch of the idea follows below).
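
As a rough illustration of such a switch, here is a hypothetical Groovy sketch: the build passes a system property (here called test.mode, an assumed name) and the test picks the expected exception message for either the current version or the backwards compatibility run. In the real setup the expected messages came from the localized Excel test data; the messages and the called method below are invented for this example.

// Hypothetical sketch of a backwards compatibility switch driven by a system property,
// e.g. -Dtest.mode=backward-compat set by the build.
def mode = System.getProperty('test.mode', 'current')

// Expected exception message per mode (in reality maintained in the localized test data).
def expectedMessages = [
  'current'        : 'Patient record must contain a valid birth date',
  'backward-compat': 'Invalid birth date'
]
def expected = expectedMessages[mode]

try {
  callCreatePatientWithInvalidBirthDate()   // placeholder for the real negative test call
  assert false : 'Expected an exception, but the call succeeded'
} catch (RuntimeException e) {
  assert e.message == expected : "Unexpected message in mode '${mode}': ${e.message}"
  println("Negative test passed in mode '${mode}'")
}

// Placeholder standing in for the real web service call under test.
def callCreatePatientWithInvalidBirthDate() {
  throw new RuntimeException('Patient record must contain a valid birth date')
}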

The Continuous Integration (CI) environment itself was also continuously developed. To deal with the quantity and to scale better we moved to Hudson. Appropriate monitoring solutions were designed, and a “merge assistant” was developed for more control over this process. Additionally, the Tomcat server logs are now published by Hudson for every test that is run. In order to test auditing and encryption on the database level we developed direct access to the DB, reusing the already written and automated test cases (see the sketch below). Of course everything can be run on local machines as well as in the CI environment. We did a test tool evaluation, but at that time it did not look so good for support in the agile Java domain and an artifact-based approach.
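
To make the database-level checks a bit more concrete, here is a minimal, hypothetical sketch using groovy.sql.Sql: it reads a record that was created through the web service test client directly from the database and checks that a sensitive column is not stored in plain text and that an audit entry exists. The JDBC URL, driver, credentials and the table and column names are assumptions for illustration.

import groovy.sql.Sql

// Hypothetical sketch: verify encryption and auditing directly on the database level.
def sql = Sql.newInstance(
  'jdbc:postgresql://localhost:5432/ehf',   // assumed JDBC URL
  'ehf_test', 'secret',                     // assumed credentials
  'org.postgresql.Driver')                  // assumed driver

try {
  // The patient was created beforehand through the automated web service test case.
  def row = sql.firstRow('SELECT last_name FROM patient WHERE external_id = ?', ['TC-4711'])
  assert row != null : 'Patient not found on the database level'
  assert row.last_name != 'Mustermann' : 'Sensitive column seems to be stored in plain text'

  // Check that the create operation left an audit trail entry.
  def audit = sql.firstRow(
    'SELECT count(*) AS cnt FROM audit_log WHERE entity = ? AND operation = ?',
    ['patient', 'CREATE'])
  assert audit.cnt > 0 : 'No audit entry found for the create operation'
} finally {
  sql.close()
}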

Lessons learned:

  • Automation is king. Period! But be careful, because automation can make you stupid. Systems and approaches change, and therefore the test cases have to change accordingly. Just configuring the test system and pressing one BIG button is not QA but a hit-and-run approach, even when the test results are all “executed ok” in the log files. So we have to ask ourselves every time: are we still testing the right things?
  • Greatness must be worked for hard every day and is not a default value. Establish such a likeable environment.
  • Now take a deep breath and think about what the process is worth when time gets tight. Everybody should think about this at least once more after a release is delivered.
  • Establish a weekly bug smashing session. Here it is important that not only the QA team works to resolve the bugs, but that decision makers from PM and Dev are also represented. The whole project will benefit greatly from that.

 

QA for the Latest Release – 2.9

It will not surprise anyone if I say that the amount of work is no less. What has not been mentioned so far is that we also have an outstanding development team in Bulgaria, which means communication must take this into account. There are enough tools to cater for this situation; they only need to be properly implemented. Imagine that the whole team is located on the same floor, but everyone stays at their desk. Fortunately Scrum addresses this issue. It is not complicated.
So we’re into the final stretch. Here are the main highlights and some associated ideas:
  • We achieved the goal of including the whole system documentation (based on DITA) in our CI environment (for example, part of this documentation set is included on ICW's partner DVDs).
  • Not only bugs, but all story planning as well as version tracking is completely mapped in Jira. This means we have absolutely everything that we need to create a release in our CI environment.
  • Some bugs need to be fixed not only in future versions but also in older versions or code lines. How can I ensure that nothing is forgotten? During bug smashing, if we determine that we need to provide a fix for an older version, we record this information as an additional subtask in the Jira entry (a hypothetical sketch of automating this follows below). Everything is now transparent: if there are side effects, everyone can see whether and when a fix was checked in, and QA can start retesting.
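
If you want to automate this bookkeeping, the sketch below shows one hypothetical way to create such a backport subtask via Jira's REST API from a Groovy script. The base URL, credentials, project and issue keys are invented, and depending on your Jira version the available API may differ.

import groovy.json.JsonOutput

// Hypothetical sketch: create a "backport" subtask for an existing bug via Jira's REST API.
def baseUrl = 'https://jira.example.com'                      // assumed Jira base URL
def auth    = 'qa.user:secret'.bytes.encodeBase64().toString()

def payload = JsonOutput.toJson([
  fields: [
    project  : [key: 'EHF'],                                  // assumed project key
    parent   : [key: 'EHF-1234'],                             // the original bug
    summary  : 'Backport fix to code line 2.7',
    issuetype: [name: 'Sub-task']
  ]
])

def connection = new URL("${baseUrl}/rest/api/2/issue").openConnection()
connection.requestMethod = 'POST'
connection.doOutput = true
connection.setRequestProperty('Authorization', "Basic ${auth}")
connection.setRequestProperty('Content-Type', 'application/json')
connection.outputStream.withWriter('UTF-8') { it << payload }

println("Jira responded with HTTP ${connection.responseCode}")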

As far as our CI environment went, we had to solve the problem that in the meantime we carried out so many tests that they were no longer finished in the morning but rather at midday. Here our build management, development and QA worked together to improve the feedback time by optimizing the build time. Some intermediate steps towards the required solution included different load balancing and tuning of the integration tests.

How to improve further:
Have you ever experienced the situation where somebody calls you up and nervously asks, “…QA! Do you have a test which checks… (fill in some scenario you hope is covered)?” Whenever a customer claims to have found a bug, suddenly, in that second, this becomes for every stakeholder such a clear and logical, never-to-be-forgotten test. So. Hmmmm! Why not identify these “obvious” ones a little bit sooner? How? We have product managers, developers and QAs, and everybody brings their own perspective to every single feature. Use these perspectives and you will be surprised how your test cases increase. Every test case you identify in such sessions is an additional safety net and a good day for the quality of your product. Donate this to your system as much as possible. It is of great benefit.

One of the next challenges will be adding micro-benchmarking tests to our test scenarios. This will never replace the mandatory performance test by the specially skilled people in our performance lab for every release, but it gives you early information about possible side effects as your system grows (a minimal sketch follows below).
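
As a rough idea of what such a micro-benchmark could look like, here is a hypothetical Groovy sketch that warms up, measures the average duration of a repeated call and fails when it exceeds a threshold. The warm-up, the threshold and the measured operation are assumptions; a real benchmark harness would control JIT warm-up and measurement noise far more carefully.

// Hypothetical micro-benchmark sketch: warm up, measure, and fail on a possible regression.
def iterations  = 1000
def warmUp      = 200
def thresholdMs = 5                 // assumed upper bound per call, in milliseconds

// Placeholder for the operation under test, e.g. a service call or a mapping step.
def operationUnderTest = {
  (1..100).sum()
}

// Warm-up phase so that JIT compilation does not distort the measurement.
warmUp.times { operationUnderTest() }

def start = System.nanoTime()
iterations.times { operationUnderTest() }
def averageMs = (System.nanoTime() - start) / 1000000.0 / iterations

println("Average duration: ${averageMs} ms")
assert averageMs < thresholdMs : "Possible performance regression: ${averageMs} ms per call"
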
I hope you liked this short overview of our work and that you can find something here that will help you in your own daily work. Thanks to my colleagues in technical writing and build management for their input and help with this post.

Finished.

PS: There is also a PDF version for you to download in the mediathek.

 

 




Migration Path for Jenkins Subversion Plugin from Version 1.53 to 2.2

Introduction

The Jenkins Subversion Plugin changed its credential management. Before version 2.2 you had to set your Subversion credentials globally, depending on the domain name. Since version 2.2 this has changed: credential management is still central, but now you have to configure in every job which credentials the job should use for Subversion authentication.

If you have many jobs, you don't want to touch every job to change the Subversion configuration. A Groovy script for the Jenkins Script Console [1] can help. In the next section I describe what the migration path looks like.

Migration Path

  1. Update the Subversion Plugin in your Jenkins instance to version 2.2.
  2. Add your Subversion credentials to the global credential store:
    1. Go to Jenkins -> Credentials -> Global credentials -> Add credentials
    2. Add your credentials. The scope has to be global.
  3. Go to the installation path of your Jenkins instance and open the file credentials.xml.
  4. Search in credentials.xml for the element <com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl> that contains the credentials you just created, and copy the value of the element <id>.
    Example:

    <com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>
      <scope>GLOBAL</scope>
      <id>63c3793c-e3fb-49eb-b45b-f6f8e7364876</id>
      <description></description>
      <username>jenkinssvnuser</username>
      <password>HyDUenzpyDkbL9xMoQ0pxdK10l20VkXKEiy4+ZnjL9c=</password>
    </com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>
    
  5. Go to the Jenkins Script Console and run the following Groovy script. Assign your credential id to the variable credentialId before running the script.

    def credentialId = '63c3793c-e3fb-49eb-b45b-f6f8e7364876'

    Hudson.instance.items.each { item ->

      if(item.scm instanceof hudson.scm.SubversionSCM) {
        println("JOB : " + item.name)

        def newLocations = new ArrayList<hudson.scm.SubversionSCM.ModuleLocation>()

        item.scm.locations.each { location ->
          newLocations.add(new hudson.scm.SubversionSCM.ModuleLocation(location.remote, credentialId, location.local, location.depthOption, location.ignoreExternalsOption))
        }

        def newScm = new hudson.scm.SubversionSCM(newLocations, item.scm.workspaceUpdater,
          item.scm.browser, item.scm.excludedRegions, item.scm.excludedUsers, item.scm.excludedRevprop, item.scm.excludedCommitMessages,
          item.scm.includedRegions, item.scm.ignoreDirPropChanges, item.scm.filterChangelog, item.scm.additionalCredentials)

        item.scm = newScm
        item.save()
        println("\n=======\n")
      }
    }
    
  6. Finish.

This script is tested with Jenkins version 1.555 and Jenkins Subversion Plugin version 2.2. On GitHub [2] you can find more Groovy scripts for the Jenkins Script Console.

Links

[1] Jenkins Script Console
[2] More Groovy Script for Jenkins on Github



Groovy Script for Jenkins Script Console to Add or Modify Discard Old Builds Setting

When your Jenkins server runs for a while, your hard disk usage grows, because the default setting of each job is that every build and every build artifact obtained from it are stored in the build history. Limiting this build history can save hard disk space. Each job has a setting option called Discard Old Builds. With this setting you can configure how long (in days) old builds are kept in the build history (criterion 1) or how many old builds are kept at most in the build history (criterion 2). Jenkins first applies criterion 1 and then criterion 2. Beyond that, there are advanced setting options. These advanced options configure how long and how many builds with build artifacts are kept in the build history. After that Jenkins deletes the build artifacts, but the logs, history, reports, etc. for the build are kept, as long as they match the criteria in the normal setting options.

When you have many jobs, you don't want to configure every job manually. Instead, you can modify the Discard Old Builds setting of each job with a Groovy script in one go. Jenkins has a so-called Script Console [1]. In this console you run the following Groovy script, and every job is modified to discard its old builds.

def daysToKeep = 28
def numToKeep = 10
def artifactDaysToKeep = -1
def artifactNumToKeep = -1

Jenkins.instance.items.each { item ->
  println("=====================")
  println("JOB: " + item.name)
  println("Job type: " + item.getClass())

  if(item.buildDiscarder == null) {
    println("No BuildDiscarder")
    println("Set BuildDiscarder to LogRotator")
  } else {
    println("BuildDiscarder: " + item.buildDiscarder.getClass())
    println("Found setting: " + "days to keep=" + item.buildDiscarder.daysToKeepStr + "; num to keep=" + item.buildDiscarder.numToKeepStr + "; artifact day to keep=" + item.buildDiscarder.artifactDaysToKeepStr + "; artifact num to keep=" + item.buildDiscarder.artifactNumToKeepStr)
    println("Set new setting")
  }

  item.buildDiscarder = new hudson.tasks.LogRotator(daysToKeep,numToKeep, artifactDaysToKeep, artifactNumToKeep)
  item.save()
  println("")

}

This script is tested with Jenkins version 1.534 and Jenkins Subversion Plugin version 1.53.

In my last posts ([2], [3]) I showed two other use cases for the Jenkins Script Console. All Groovy scripts can be found on GitHub [4].

Links

[1] Jenkins Script Console
[2] Post about how to rename Subversion host name in every job
[3] Post about how to add or modify Subversion repository browser in every job
[4] GitHub repository with several Groovy scripts for Jenkins script console



Groovy Script for Jenkins Script Console to Add or Replace Subversion Repository Browser

In my last post I showed how to rename the host name for Subversion in all jobs with one script. In this post I will show you how to set the repository browser in all jobs with one script. This script is based on the script from my last post.

def newRepositoryBrowserRootUrl = new URL("http://root.url.to.your.sventon.instance")
def newRepositoryInstance = "repository-instance-name"
def newRepositoryBrowser = new hudson.scm.browsers.Sventon2(newRepositoryBrowserRootUrl, newRepositoryInstance)

Hudson.instance.items.each { item ->

    if(item.scm instanceof hudson.scm.SubversionSCM) {
        println("JOB: " + item.name)

        def newScm = new hudson.scm.SubversionSCM(Arrays.asList(item.scm.locations), item.scm.workspaceUpdater,
            newRepositoryBrowser, item.scm.excludedRegions, item.scm.excludedUsers, item.scm.excludedRevprop, item.scm.excludedCommitMessages,
            item.scm.includedRegions, item.scm.ignoreDirPropChanges, item.scm.filterChangelog)

        item.scm = newScm
        item.save()

        println("New Repository Browser: " +  item.scm.browser.class)
        println("\n=================\n")

    }
}

As mentioned before, the above Groovy script uses Sventon 2.x as the Subversion repository browser. However, Jenkins natively supports more Subversion repository browsers, such as:

  • Assembla
  • CollabNetSVN
  • FishEyeSVN
  • SVNWeb
  • Sventon 1.x
  • ViewSVN
  • WebSVN

Jenkins supports further Subversion repository browsers via plugins, such as:

  • Polarion WebClient  for Subversion
  • WebSVN 2.x

If you want to use another Subversion repository browser, you have to change the first three lines:

def newRepositoryBrowserRootUrl = new URL("http://root.url.to.your.sventon.instance")
def newRepositoryInstance = "repository-instance-name"
def newRepositoryBrowser = new hudson.scm.browsers.Sventon2(newRepositoryBrowserRootUrl, newRepositoryInstance)

For example, if you want to use SVNWeb as the Subversion repository browser, you have to use the following lines instead:

def newRepositoryBrowserUrl = new URL("http://root.url.to.your.svn")
def newRepositoryBrowser = new hudson.scm.browsers.SVNWeb(newRepositoryBrowserUrl)

This script is tested with Jenkins version 1.534 and Jenkins Subversion Plugin version 1.53.

Links

  1. Blog Post – Groovy Script for Jenkins Script Console to Rename the Subversion Host Name
  2. Overview about supported Subversion repository browser by Jenkins
  3. Polarion Plugin
  4. WebSVN2 Plugin



Groovy Script for Jenkins Script Console to Rename the Subversion Host Name

The host name of a Subversion instance has been renamed, and now many Jenkins jobs have to be adjusted. One possibility is to change every job by hand, but this approach is very time-consuming and error-prone. A better possibility is a script-based approach that renames the Subversion host name in all jobs in one go. Jenkins has a feature that helps us here: the Script Console, which runs Groovy scripts on the Jenkins server [1].

Below you can see a Groovy script that renames the Subversion host name in all jobs.


def oldHostName = "old.hostname.com"
def newHostName = "new.hostname.com"

Hudson.instance.items.each { item ->

  if(item.scm instanceof hudson.scm.SubversionSCM) {
    println("JOB : "+item.name)

    def newLocations = new ArrayList<hudson.scm.SubversionSCM.ModuleLocation>()

    item.scm.locations.each {location ->

      println("SCM Location Remote : " + location.remote)
      def newRemoteUrl = location.remote.replaceFirst(oldHostName, newHostName)

      newLocations.add(new hudson.scm.SubversionSCM.ModuleLocation(newRemoteUrl, location.local, location.depthOption,location.ignoreExternalsOption))
    }

    def newScm = new hudson.scm.SubversionSCM(newLocations, item.scm.workspaceUpdater,
    item.scm.browser, item.scm.excludedRegions, item.scm.excludedUsers, item.scm.excludedRevprop, item.scm.excludedCommitMessages,
    item.scm.includedRegions, item.scm.ignoreDirPropChanges, item.scm.filterChangelog)

    newScm.locations.each { location ->
      println("New SCM Location Remote : " + location.remote)
    }

    item.scm = newScm
    item.save()
    println("\n=======\n")
  }
}

This script is tested with Jenkins version 1.534 and Jenkins Subversion Plugin version 1.53.

Links

[1] Jenkins Script Console



SaxException During a Successful Maven Build in Jenkins

While inspecting the output of several successful Maven 3 builds, the following SAXException caught my eye:

[INFO] Parsing file:/home/skosmalla/.m2/repository/repository.xml
[Fatal Error] repository.xml:3:1: Premature end of file.
org.xml.sax.SAXParseException: Premature end of file.
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:249)
at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:208)
at org.apache.felix.obrplugin.ObrUpdate.parseFile(ObrUpdate.java:347)
at org.apache.felix.obrplugin.ObrUpdate.parseRepositoryXml(ObrUpdate.java:324)
at org.apache.felix.obrplugin.ObrInstall.execute(ObrInstall.java:140)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:319)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.jvnet.hudson.maven3.launcher.Maven3Launcher.main(Maven3Launcher.java:79)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchStandard(Launcher.java:329)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:239)
at org.jvnet.hudson.maven3.agent.Maven3Main.launch(Maven3Main.java:158)
at hudson.maven.Maven3Builder.call(Maven3Builder.java:122)
at hudson.maven.Maven3Builder.call(Maven3Builder.java:74)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:287)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)

I didn't find a solution for this problem with Google, but I found a hint that the repository.xml is generated by Maven at the end of a build. I noticed that only those Maven builds that ran in parallel had this SaxException (Jenkins had two build processors), and all Maven builds used the same M2 repository. So my idea was that the reason for this exception could be that two almost finished Maven builds tried to generate the repository.xml at the same time. I activated the option Use private Maven repository in every job, and the SaxException didn't turn up anymore (a hypothetical Script Console sketch for doing this in all jobs at once follows below).
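
In the spirit of the Script Console posts above, here is a hypothetical Groovy sketch that switches every Maven job to a private local repository in one go. It assumes the Maven project type hudson.maven.MavenModuleSet exposes the usePrivateRepository setting via usesPrivateRepository()/setUsePrivateRepository(); depending on your Jenkins version the API may differ (newer versions use a local repository locator instead), so treat this as a sketch, not a drop-in solution.

// Hypothetical sketch: enable "Use private Maven repository" for every Maven job.
// API assumption: hudson.maven.MavenModuleSet#setUsePrivateRepository(boolean).
Hudson.instance.items.each { item ->
  if (item instanceof hudson.maven.MavenModuleSet) {
    println("JOB : " + item.name)
    println("Private repository before: " + item.usesPrivateRepository())
    item.setUsePrivateRepository(true)
    item.save()
    println("Private repository after : " + item.usesPrivateRepository())
    println("\n=======\n")
  }
}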