SKM IT World

Just another blog about IT


Summary of SoCraTes 2016 Session “Hey dude, where is my tool chain?” – Working on Windows as a Linux User aka Let’s talk about Windows

At this year's SoCraTes conference I hosted a session for the first time. It was about working on a Windows system from the perspective of a Linux user. A big thank you to @ndrssmn, who motivated me to host this session.

@yooogan was so kind as to summarize the session in the SoCraTes wiki (a big thank you for that). Since the wiki page is only accessible to SoCraTes participants, we decided that I would republish it on my blog. Enjoy!

Console & Shell

  • Babun – Based on Cygwin, includes a CLI package manager (pact – like apt, yum, …) and preconfigured oh-my-zsh as shell
  • ConEmu – feature rich console emulator with tabs
  • Console2 (original)/(modified fork) – console emulator, multi tabs, configurable mouse behavior
  • PuTTY – SSH client (when you don’t have Babun/Cygwin anyway)

File Management

Text Editors

  • Notepad++ – all-purpose editor, syntax highlighting, file monitoring (tail -f)
  • Atom – editor; same settings in all your environments

  • Zim – Organize notes, saves to plain text
  • Greenshot – Screenshots, including obfuscation / comments, for documentation, connects to JIRA
  • yEd – multi-platform (Win/Linux/MacOS) graph editor, extensible palette, useful pictograms
  • Paint.NET – free image editor
  • GIMP – Swiss army knife for images


Disk Usage

  • RidNacs – graphical du
  • WinDirStat – even more colorful graphical du
  • ncdu – CLI, can be installed from Cygwin/Babun

Keyboard

  • WinCompose – a compose key for Windows – type äöëß€«»←↑↓→¡☺♥… like a boss!
  • SharpKeys – remap keyboard: CapsLock → Ctrl, ~ → Esc, etc.
  • AutoHotkey – very sophisticated keyboard macros / automation – full-fledged scripting language

Clipboard

  • PureText – remove formatting from pasted text
  • Ditto – clipboard manager

Pictures (taken from Twitter):

Let's talk about Windows, pt. 1
Let's talk about Windows, pt. 2
Let's talk about Windows, pt. 3



My Lessons Learned From Doing the Gilded Rose Kata

I’d like to share some of my thoughts about my approach to solving the Gilded Rose Refactoring Kata by Emily Bache. If you don’t know this kata, read the description for a better understanding. I have published my whole solution on GitHub. I tried to make a commit after every step, so you can keep track of my steps in the Git log. The chosen programming language is Java.

Solving Gilded Rose Step-By-Step

Let’s have a look at what I have done step-by-step.

Before adding the new feature, I wanted to refactor the given code base. Therefore, I started writing tests until I had 100% line and branch coverage. While writing the tests, I noticed that the calculation of the quality depends on the name of the item. Hence, the idea arose to use something similar to the Strategy Pattern. When I had reached 100% coverage, I started with the implementation of the first strategy (“Aged Brie”). But I was unsure what the limit values for this first strategy were, because I had no tests for the limit values. So my first lesson learned was that 100% line or branch coverage doesn’t mean all test cases are covered. I added tests for the limit values, finished implementing the “Aged Brie” strategy, added it to the original updateQuality method (see the code snippet below) and ran the tests. All tests were green.

ItemStrategy itemStrategy = new ItemStrategy();
for (int i = 0; i < items.length; i++) {
   if ("Aged Brie".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForAgedBrieItem(items[i]);
   }
   // original code follows
}
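
Such a limit-value test can look like the following minimal JUnit sketch. It assumes the standard Java starting code of the kata (a GildedRose class holding an items array and an Item class with public fields); the actual tests in my repository may look different.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AgedBrieLimitValueTest {

   // Limit value: the quality of an item is never more than 50,
   // even though "Aged Brie" increases in quality over time.
   @Test
   public void agedBrieQualityNeverExceedsFifty() {
      Item[] items = new Item[] { new Item("Aged Brie", 2, 50) };
      GildedRose app = new GildedRose(items);

      app.updateQuality();

      assertEquals(50, app.items[0].quality);
   }
}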

I repeated this cycle four times: find missing test cases (mostly for limit values); add new tests for these cases; implement a further strategy; add this new strategy to the original updateQuality method and run the tests. When the tests were green, the next cycle with a new strategy began. At the end, the extended updateQuality method looked like the following code snippet.

ItemStrategy itemStrategy = new ItemStrategy();

for (int i = 0; i < items.length; i++) {
   if("Aged Brie".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForAgedBrieItem(items[i]);
   } else if ("Sulfuras, Hand of Ragnaros".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForSulfurasItem(items[i]);
   } else if("Backstage passes to a TAFKAL80ETC concert".equals(items[i].name)) {
      items[i] = itemStrategy.updateQualityForBackstagePassItem(items[i]);
   } else {
      items[i] = itemStrategy.updateQualityForNormalItem(items[i]);
   }

   // commented out original code
}

My second lesson learned was “refactoring needs time”, and the refactoring wasn’t finished yet. The next steps were cleaning up unnecessary code and refactoring the strategy implementations, for example replacing if-else constructs with the ternary operator and extracting if-conditions into private methods.

After that, I implemented the new feature “conjured item” following the workflow described above. After this step I could say “ready”, but I was unhappy with the if-else-if-else chain. Therefore, I decided to extract each strategy implementation into its own class (following the “classic” Strategy Pattern). That made it possible to replace the if-else-if-else chain with an itemStrategyMap. So the next lesson learned was “the status ‘Ready’ depends on the definition”.
The last step was doing some clean-up and choosing better names for the interface and its method.

static Map<String, ItemStrategy> itemStrategyMap = new HashMap<>();

static {
   itemStrategyMap.put("Aged Brie", new AgedBrieItemStrategy());
   itemStrategyMap.put("Sulfuras, Hand of Ragnaros", new SulfurasItemStrategy());
   itemStrategyMap.put("Backstage passes to a TAFKAL80ETC concert", new BackstagePassItemStrategy());
   itemStrategyMap.put("Conjured", new ConjuredItemStrategy());
}

public void updateQuality() {
   for (int i = 0; i < items.length; i++) {
      ItemStrategy itemStrategy = itemStrategyMap.getOrDefault(items[i].name, new NormalItemStrategy());
      items[i] = itemStrategy.updateItem(items[i]);
   }
}
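
The ItemStrategy interface behind this map boils down to roughly the following sketch (the exact names in my repository may differ after the final renaming step):

public interface ItemStrategy {
   // Calculates the item's sellIn and quality values for the next day.
   Item updateItem(Item item);
}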

Let’s summarize the lessons learned:
1) 100% line or branch coverage doesn’t mean all test cases are covered.
2) Refactoring needs time.
3) The status ‘Ready’ depends on the definition.
These insights aren’t really new to me; I observe them in my daily work quite often. Nevertheless, it was good to have these insights again, following the rule “learning through repetition” ☺

What I forgot

I stopped after that step. Thinking about it some days later, I realized that there are more possible improvements. For example, the tests from the GildedRoseTest class could be extracted into separate test classes corresponding to the specific strategy classes.


Continuous Integration Infrastructure With Windows – Scripting With PowerShell

In one of my current projects, I deal with the question of how to run a Continuous Integration (CI) infrastructure on Windows machines. I have five years of experience in running a CI infrastructure, but it was always on Linux machines. So in the next months I will write some blog posts about my challenges with Windows machines from the perspective of a Linux fan girl 🙂. This first blog post is about shell scripting on Windows. But bear in mind: this blog post isn’t a tutorial for PowerShell scripting. It only explains the features that are striking when you come from a Linux background.

When you run a CI infrastructure, it’s a common practice to write little shell scripts to automate repeatable tasks. On a Linux system you would write your scripts in Bash or in a scripting language like Perl or Python. I usually write my scripts in Bash or in Groovy. I chose Groovy because I’m a Java developer and it is possible to write Groovy scripts in a Java style in the beginning. The second argument for me was that the administration of Jenkins is easier with Groovy scripts: Jenkins provides a Groovy console for administration tasks, and build job steps can also be automated with Groovy directly in Jenkins. So I use Groovy for other automated tasks as well, in order not to use too many scripting languages at the same time. Now you may say: OK, what’s the problem? Groovy can be used on a Windows system, too. The problem is the requirements in my project: only Java, C# or PowerShell are allowed as programming languages. But I want to write little scripts, so from this point of view only PowerShell remains.

Switch from Bash to PowerShell – What is striking

Good news first: working with PowerShell isn’t as creepy as working with the DOS shell. But it is different from the Bash shell. In the next sections, I will describe what I noticed when I started writing PowerShell scripts with a Bash background.

First of all, it helps a lot when writing PowerShell scripts from a Linux point of view to understand the difference between how Linux and Windows handle their configuration. In Linux, you only have to adjust some text files to change the system’s configuration; the configuration is text driven, and the shells in Linux are optimized for handling text. In Windows, on the other hand, you use an API to adjust properties of objects in order to change the system’s configuration; here, the configuration is API driven. The next important point is that Microsoft provides a large class library, the .NET Framework, that models Windows’ system configuration. PowerShell reuses this object model as the base of its type system, so scripting in PowerShell feels more like object-oriented programming. Fortunately, we can reuse all the functionality of the .NET Framework in our PowerShell scripts. So if you’re familiar with C# programming, getting started with PowerShell scripting is very easy.
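
Two small examples of this object-oriented feel (just a sketch, not taken from a real script): every Cmdlet returns objects whose properties you can read directly, and .NET classes can be called straight from the shell.

# The result of Get-Item is a FileInfo object; we read its Length property directly.
(Get-Item C:\Windows\notepad.exe).Length

# .NET Framework classes are directly available in PowerShell.
[System.Math]::Max(40, 2)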

So let’s look at some code samples for typical situations to see the difference between scripting for Bash and for PowerShell.

Writing Something to the Standard Output Stream

On the Bash side, you have the built-in command echo for that:

echo "Hello World"

For PowerShell, you have a so-called Cmdlet Write-Output:

Write-Output "Hello World"

Now, we want to write the value of a variable to standard output.

Bash:

message="Hello World"
echo $message

PowerShell:

$message="Hello World"
Write-Output $message

That was easy, wasn’t it?

Parsing Files for a Pattern

In our example, we want to search only XML files for a specific pattern (in our case “search-pattern”) and count how often this pattern matches in total.

On the Bash side, we use the pipeline pattern that is typical for Linux. First, we use grep to search for the pattern, and then we pipe the result of grep to wc to count the matches.

grep -w *.xml -e "search-pattern" | wc -l

In PowerShell, it looks a little bit different. First, we have to list all XML files with dir. This result is piped to the Cmdlet ForEach-Object, which takes a script block. In our case, the script block reads the content of a file and pipes it to the Select-String Cmdlet, which is responsible for filtering by the given pattern. The filter result is piped to Measure-Object, which can calculate numeric properties of objects; in our case, it should only count the matches. At the end, all counts are added together. The important thing is that the result of every Cmdlet is an object that has properties.

$sum = 0
dir *.xml | ForEach-Object {
   $sum += (Get-Content $_ | Select-String -Pattern 'search-pattern' | Measure-Object).Count
}
Write-Output $sum


Conditions

Conditions in Bash and in PowerShell look very similar. The main difference, in my opinion, is the use of brackets. In PowerShell, conditions look more like they do in C#.

Bash:

if [ true -a true -o false -a 2 -gt 1 -a 1 -eq 1 -a 1 -lt 3 -a !false ]; then
  # do something if the condition is true
else
  # do something if the condition is false
fi

PowerShell:

if ( 1 -and ( 1 -or 0) -and (2 -gt 1) -and ( 1 -eq 1) -and (1 -lt 3) -and (-not 0)) {
  # do something if the condition is true
} else {
  # do something if the condition is false
}

Setting System Environment Variable

In both systems, Linux and Windows, it is possible to set environment variables in different contexts: system-wide, per process or per user.


Setting an environment variable with Bash works like the following line:

export BASH_EXAMPLE="some value"

This variable is active during the current process. If you want the variable to be active system-wide, you have to edit the file /etc/profile or create executable shell scripts in the directory /etc/profile.d. If the variable should only be active for a specific user’s session, you have to edit the file ~/.bash_profile. You can do this programmatically with sed or awk.
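
For example, a system-wide variable could be created with a small script in /etc/profile.d (a sketch; the file name is chosen for illustration):

echo 'export BASH_EXAMPLE="some value"' | sudo tee /etc/profile.d/bash_example.sh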

Using the variable works like this:
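
echo $BASH_EXAMPLE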



In PowerShell, you can call the .NET Framework API to set a system environment variable:

[Environment]::SetEnvironmentVariable("WindowsExample", "Some value.", "Machine")

If the variable should only be active for a specific user or process, write User or Process instead of Machine. For the process level, there is also another notation:

$env:WindowsExample = "Some value."

The .NET Framework has a GetEnvironmentVariable method for reading the variable.

[Environment]::GetEnvironmentVariable("WindowsExample", "Machine")
$Env:WindowsExample   # shortcut

When setting environment variables, you can clearly see the big difference between the two configuration approaches: text driven (Linux) and API driven (Windows).

Tool Support

I was pleasantly surprised to find out that a PowerShell “IDE” exists, called the PowerShell Integrated Scripting Environment (ISE). It supports you with code completion and includes complete documentation of the Cmdlets.

PowerShell Integration in Jenkins

One use case for writing scripts is running them in a build job in Jenkins. Jenkins can’t run PowerShell scripts out of the box. Fortunately, there is a Jenkins plugin for that, called the PowerShell Plugin. After installing this plugin, a build step Windows PowerShell is available in the job configuration. This step has a command field where you can either add your script code directly or add the path to your script (see the example below). In the second variant it is important to add exit $LASTEXITCODE, otherwise the build doesn’t recognize that the PowerShell script failed.
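
A minimal example of what the command field can look like in the second variant (the script path is made up for illustration):

# Call the actual build script and propagate its exit code to Jenkins.
& "C:\ci\scripts\build.ps1"
exit $LASTEXITCODE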


Further Information

  1. Manning’s book PowerShell in Action
  2. Jenkins’ PowerShell Plugin
  3. Microsoft TechNet about System Environment Variable.
  4. nixCraft’s blog post Linux: Set Environment Variable


How to Install Serverspec in the Current Version on Ubuntu 14.04 LTS (Trusty)

If you google “serverspec install ubuntu”, you will find the information that a package called ruby-serverspec from the standard package repository can be used to install Serverspec on an Ubuntu 14.04 LTS based system. Unfortunately, this package installs an outdated version of Serverspec. Furthermore, if you try to install the newest version of Serverspec with gem (the way it is described on the Serverspec homepage), you will get the following error message:

~> sudo gem install serverspec
ERROR:  Error installing serverspec:
net-ssh requires Ruby version >= 2.0.


The problem is that when you install Ruby with sudo apt-get install ruby, the package manager installs Ruby version 1.9.1.

Therefore, the next sections explain how to install the newest versions of Ruby and Serverspec on an Ubuntu 14.04 LTS based system. Let’s start with Ruby, which is required for Serverspec.

Ruby Installation

The cloud hosting service Brightbox provides Ruby package repositories for several Ubuntu versions and several Ruby versions. I chose the repository for the Ruby 2.3 packages, so the installation steps are:

~> sudo apt-get install software-properties-common
~> sudo apt-add-repository ppa:brightbox/ruby-ng
~> sudo apt-get update
~> sudo apt-get install ruby2.3
~> ruby --version
ruby 2.3.0p0 (2015-12-25 revision 53290) [x86_64-linux-gnu]

Serverspec Installation

Now we can install Serverspec as explained on the Serverspec homepage. In my case, I had to install rake separately.

~> sudo gem install serverspec rake
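
If the installation was successful, you can bootstrap a first test suite with serverspec-init, as described on the Serverspec homepage:

~> serverspec-init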

Further Information

  1. Serverspec Homepage
  2. Brightbox Ruby package repositories for Ubuntu documentation


Installation Cheat Sheet For LivingDoc

We wanted to evaluate the new Confluence plugin LivingDoc as a replacement for FitNesse in order to execute automated web GUI tests.

Our setup for the evaluation:

  • Confluence 5.7.1 (Non-Cloud version)
  • Inside Confluence HSQL In-Memory DB for evaluation purpose
  • LivingDoc plugin 1.0.0.jar
  • Selenium Webdriver for automated Web Testing of
  • Spring Petclinic running inside a Tomcat 8
  • Java Version SDK
  • Maven 3.3.1

The following steps are an extension to the LivingDoc documentation. This documentation is very detailed, but if you get stuck on some steps, check the following out. We recommend using the search function of your browser to find the relevant parts. Additionally, this is not about best practices, but only about getting the setup running quickly. Let’s start:

  1. The starting point is the LivingDoc documentation under > Current Documentation > Getting Started.
  2. After following these steps, go to Current Documentation > Confluence plugin.
  3. Unfortunately, there is no direct link to the current livingdoc-confluence5-plugin.jar, so go to and choose livingdoc-confluence5-plugin-1.0.0.jar (even though there is already a version livingdoc-confluence5-plugin-1.1.0.jar) and download it.
  4. Next is the configuration of the Runner. Please look at the following picture:
    1_2016-03-06 14_20_37-LivingDoc Configuration - Confluence
    We replaced the classpath default value with the path where you downloaded the livingdoc-confluence5-plugin-1.0.0.jar on your machine.
  5. Next is Project Management:
    2_2016-03-06 16_37_48-LivingDoc Configuration - Confluence
    There, we choose the runner prepared above. Under classpaths, we add the path to the jar of our Selenium project including its dependencies. Step 7 below describes how this jar is built. It is needed so that the Selenium tests can be executed by LivingDoc.
  6. Next is the remote agent: in order to get our scenario running, the LivingDoc developer team advised us to start a remote agent. You can get the complete remote agent jar under . At that time, we chose livingdoc-remote-agent-1.0.0-complete.jar and downloaded it. Please start it as described in the LivingDoc documentation (Current Documentation > Confluence Plugin > Advanced > Remote Agent).
  7. So that the Selenium tests can be executed by LivingDoc, we need a jar file with all our tests including their dependencies. Therefore, we use the Maven Assembly Plugin to build a jar with all dependencies. Below is the configuration of the Maven Assembly Plugin (link to the whole POM).

    The descriptor format jar-with-dependencies can be found on the Maven Assembly Plugin site.
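
    A typical configuration with the jar-with-dependencies descriptor looks roughly like the following sketch (see the linked POM for the project's actual configuration):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
  <executions>
    <execution>
      <id>make-assembly</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>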


Bringing it all together and keeping the remote agent running, we can execute our Selenium tests from Confluence. Now we are ready to rumble.

In the meantime, there is a VirtualBox image with everything inside under LivingDoc documentation > Showcases. Furthermore, there is a new release 1.1 that supports Confluence 5.9.3. But if you want to install everything by yourself, we hope this can save you some time.

Screenshot: PetClinic (a Spring Framework demonstration), PetClinicOwners acceptance tests

Further Information

  1. Spring Pet Clinic Project
  2. LivingDoc on GitHub
  3. LivingDoc documentation






My Notes From Conference “Herbstcampus 2015”

In September I visited the conference “Herbstcampus” in Nuremberg. I took notes on some of the sessions that I would like to share with you. The notes are in German.

SOLIDes Design – Kriterien für objektorientiertes Design by David Tanzer

Solides Design

Wie geht’s? Was geht? Wann hat es Sinn? – Portierung von COBOL-Programmen nach Java by Carsten Siedentop


Überzogen – Technische Schulden by Gerrit Beine

Technische Schulden

Follow the Leader – Wie Sie Ihr Team beeinflussen und gestalten können by Sabine Zehnder

Follow the leader (notes, part 1 and 2)

So sieht’s aus! – Architekturüberblicke: Tipps und Tricks by Stefan Zörner