SKM IT World

Just another blog about IT



Running Ansible on a Windows System

At my last conference talk (it was about Ansible and Docker at DevOpsCon in Berlin), I was asked what the best way is to run Ansible on a Windows system. Ansible itself requires a Linux-based system as the control machine. When I have to develop on a Windows machine, I install a Linux-based virtual machine and run the Ansible playbooks inside it. I set up the virtual machine with VirtualBox and Vagrant. These tools allow me to share the playbooks easily between the host and the virtual machine, so I can develop the playbooks on the Windows system while the virtual machine runs with a headless setup. The next section shows you how to set up this tool chain.

 Tool Chain Setup

At first, install VirtualBox and Vagrant on your machine. I additionally use Babun, a Windows shell based on Cygwin and oh-my-zsh, for a better shell experience on Windows, but this isn’t necessary. Then, go to the directory where your Ansible playbooks are located (let’s call it ansible-workspace) and create a Vagrant configuration file there with the command vagrant init.
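For example (in any shell, e.g. Babun):

> cd ansible-workspace
> vagrant init

Afterwards, the workspace looks like this: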
ansible-workspace
├── demo-app-ansible-deploy-1.0-SNAPSHOT.war
├── deploy-demo.yml
├── inventories
│   ├── production
│   └── test
├── README.md
├── roles
│   ├── deploy-on-tomcat
│   │   ├── defaults
│   │   │   └── main.yml
│   │   └── tasks
│   │       ├── cleanup-webapp.yml
│   │       ├── deploy-webapp.yml
│   │       ├── main.yml
│   │       ├── start-tomcat.yml
│   │       └── stop-tomcat.yml
│   ├── jdk
│   │   └── tasks
│   │       └── main.yml
│   └── tomcat8
│       ├── defaults
│       │   └── main.yml
│       ├── files
│       │   └── init.d
│       │       └── tomcat
│       ├── tasks
│       │   └── main.yml
│       └── templates
│           └── setenv.sh.j2
├── setup-app-roles.yml
├── setup-app.yml
└── Vagrantfile


Now, we have to choose a so-called Vagrant box on Vagrant Cloud. A box is the package format for a Vagrant environment. It depends on the provider and the operating system that you choose to use. In our case, it is a VirtualBox VM image based on a minimal Ubuntu 18.04 system (the box name is bento/ubuntu-18.04). This box is configured in our Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"
end

The next step is to ensure that Ansible will be installed in the box. For this, we use the shell provisioner of Vagrant. The Vagrantfile is extended by the provisioning code:

Vagrant.configure("2") do |config|
  # ... other Vagrant configuration
  config.vm.provision "shell", inline: <<-SHELL
    sudo apt-get update -y
    sudo apt-get install -y software-properties-common
    sudo apt-add-repository ppa:ansible/ansible
    sudo apt-get update -y
    sudo apt-get install -y ansible
    # ... other Vagrant provision steps
  SHELL
end

The last step is to copy the SSH credentials into the Vagrant box. For this, we mark the SSH credential folder of the host system as a shared folder, so that we can copy the credentials to the SSH config folder inside the box.
Vagrant.configure("2") do |config|
 
  # ... other Vagrant configuration
  config.vm.synced_folder ".", "/vagrant"
  config.vm.synced_folder "path to your ssh config", "/home/vagrant/ssh-host"
  # ... other Vagrant configuration

  config.vm.provision "shell", inline: <<-SHELL
    # ... other Vagrant provision steps
    cp /home/vagrant/ssh-host/* /home/vagrant/.ssh/.
  SHELL
end
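Depending on how the credentials arrive in the box, SSH may refuse the copied private key because its permissions are too open. A possible addition to the provision block is sketched below; the file name id_rsa is only an example and depends on your setup:

    # tighten permissions so that SSH accepts the copied keys
    chown -R vagrant:vagrant /home/vagrant/.ssh
    chmod 700 /home/vagrant/.ssh
    chmod 600 /home/vagrant/.ssh/id_rsa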

On GitHub Gist you can find the whole Vagrantfile.

Workflow

After setting up the tool chain, let’s have a look at how to work with it. I write my Ansible playbooks on the Windows system and run them from the Linux guest system against the remote hosts. To run the Ansible playbooks, we have to start the Vagrant box.
> cd ansible-workspace
> vagrant up

When the Vagrant box is ready to use, we can jump into the box with:
 
> vagrant ssh 

You can find the Ansible playbooks inside the box in the folder /vagrant. In this folder, run Ansible:
 
> cd /vagrant
> ansible-playbook -i inventories/test -u tekkie setup-db.yml

Outlook

Maybe it’s possible to use Ansible natively on Windows 10, thanks to the Linux subsystem, but I haven’t tried it out. Some Docker fans would prefer a container instead of a virtual machine. But remember, before Windows 10, Docker ran on Windows inside a virtual machine, so I don’t see a benefit in using Docker instead of a virtual machine there. Of course, with the native container support in Windows 10, a setup with Docker is a good alternative if Ansible doesn’t run on the Linux subsystem.
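If you want to experiment with the Linux subsystem route, a rough, untested sketch for an Ubuntu-based WSL shell could look like this (it simply mirrors the provisioning commands from the Vagrantfile above):

> sudo apt-get update -y
> sudo apt-get install -y software-properties-common
> sudo apt-add-repository ppa:ansible/ansible
> sudo apt-get update -y
> sudo apt-get install -y ansible
> ansible --version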
Do you have another idea or approach? Let me know and write a comment.

Links

  1. VirtualBox
  2. Vagrant
  3. Whole Vagrantfile on Github.

 




How to Install Serverspec in the Current Version on Ubuntu 14.04 LTS (Trusty)

If you google “serverspec install ubuntu”, you find the information that a package called ruby-serverspec from the standard package repository can be used to install Serverspec on an Ubuntu 14.04 LTS based system. Unfortunately, this package installs an outdated version of Serverspec. Furthermore, if you try to install the newest version of Serverspec with gem (the way it is described on the Serverspec homepage), you get the following error message:


~> sudo gem install serverspec
ERROR:  Error installing serverspec:
net-ssh requires Ruby version >= 2.0.

 

The problem is that when you install Ruby with sudo apt-get install ruby, the package manager installs Ruby in version 1.9.1.
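If you want to check beforehand which Ruby version the default package maps to on your system, apt can tell you:

~> apt-cache policy ruby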

Therefore, the next sections explain how to install Ruby and Serverspec in the newest version on an Ubuntu 14.04 LTS based system. Let’s start with Ruby, which is required for Serverspec.

Ruby Installation

The cloud hosting service Brightbox provides Ruby package repositories for several Ubuntu versions and several Ruby versions. I chose the repository for Ruby 2.3 packages, so the installation steps are:


~> sudo apt-get install software-properties-common
~> sudo apt-add-repository ppa:brightbox/ruby-ng
~> sudo apt-get update
~> sudo apt-get install ruby2.3
~> ruby --version
ruby 2.3.0p0 (2015-12-25 revision 53290) [x86_64-linux-gnu]

Serverspec Installation

Now, we can install Serverspec as explained on the Serverspec homepage. In my case, I had to install rake separately.

~> sudo gem install serverspec rake
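To verify the installation, you can list the installed gem and scaffold a first spec directory with serverspec-init, which ships with the gem and asks a few interactive questions about the target OS and backend:

~> gem list serverspec
~> serverspec-init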

Links

  1. Serverspec Homepage
  2. Brightbox Ruby package repositories for Ubuntu documentation



Salt SSH Installation on Centos 5.5

Salt has the option to manage servers agentlessly. Agentless means that the targets don’t need an agent process; the master orchestrates the target systems over SSH. For this, a dedicated command called salt-ssh exists. The following sections explain how to install Salt SSH on CentOS 5.5 and how to minimally configure a master and its targets for a test connection. This how-to was tested with Salt version 2014.1.11.

Installation

On Master Node

Salt SSH is a part of the master package, so we have to install salt-master.

sudo yum install salt-master

It also installs the optional dependencies. In agent mode, these dependencies can cause trouble (see Salt Installation on CentOS 5.5 for more information). In our case, however, they can be ignored because the communication between master and target systems runs over SSH.
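To double-check which Salt version the package manager installed (this how-to was tested with 2014.1.11), you can ask the command itself:

salt-ssh --version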

On Target Nodes

On the target nodes, we have to ensure that Python 2.6 and some Python 2.6 modules are installed (see the Salt dependency page). These are needed because the master copies Python scripts to the targets and runs them there. So the following steps have to be done.

  1. Enable the EPEL release
    sudo yum install epel-release
    
  2. Install the Python 2.6 package and the needed Python modules
    sudo yum install python26 python26-msgpack python26-PyYAML python26-jinja2 python26-markupsafe python-libcloud python26-requests
    

Configuration

This section describes only the configuration that is important for running the first command from a master against its targets. For further configuration possibilities, please read the Salt documentation about configuration.

The configuration depends on whether the authentication uses a password or public/private keys.

Password Authentication

  1. Go to the target nodes.
  2. Enable SSH password authentication:
    1. Open /etc/ssh/sshd_config with your favorite editor.
    2. Ensure that the line PasswordAuthentication yes is active.
    3. Restart SSH.
      sudo service sshd restart
      
  3. Go to the master node.
  4. Configure the connection to the targets:
    1. Open /etc/salt/roster with your favorite editor.
    2. Add the following content for every target (a filled-in example follows after this list).
          <Salt ID>:   # The id to reference the target system with
              host:    # The IP address or DNS name of the remote host
              user:    # The user to log in as
              passwd:  # The password to log in with
      
    3. Save the file.
  5. Test the communication.
    salt-ssh <Salt ID> test.ping
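For illustration, a filled-in roster entry and the corresponding test call could look like this; the Salt ID web1, the IP address, the user tekkie and the password are made-up placeholder values:

    web1:
        host: 192.168.56.101
        user: tekkie
        passwd: secret

    salt-ssh web1 test.ping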
    

Public/Private Key Authentication

  1. Go to the master node.
  2. Prepare SSH for key authentication:
    1. Call
      ssh-keygen
      
    2. Answer the following questions
          Enter file in which to save the key (/home/skosmalla/.ssh/id_rsa):
          Enter passphrase (empty for no passphrase):
          Enter same passphrase again:
      
    3. Keep the following information in mind.
          Your identification has been saved in /home/skosmalla/.ssh/id_rsa.
          Your public key has been saved in /home/skosmalla/.ssh/id_rsa.pub.
          The key fingerprint is:
          44:3e:ef:58:94:15:52:c2:88:ca:ab:21:43:53:3d:42 skosmalla@computer
      
    4. Copy the public key (in our example id_rsa.pub) to the targets.
      ssh-copy-id -i /home/skosmalla/.ssh/id_rsa.pub username@target_host
      
    5. Check whether the SSH access works without a password.
      ssh username@target_host
      
  3. Configure the connection to the targets.
    1. Open /etc/salt/roster with your favorite editor.
    2. Add the following content for every target.
          <Salt ID>:   # The id to reference the target system with
              host:    # The IP address or DNS name of the remote host
              priv:    # File path to ssh private key, defaults to salt-ssh.rsa, in our example it is /home/skosmalla/.ssh/id_rsa.
      
  4. Test the communication.
    salt-ssh <Salt ID> test.ping
    




Salt Installation On Centos 5.5

When you follow the installation steps in the installation guide for CentOS 5.5, the package manager automatically installs ZeroMQ in version 2.2. This ZeroMQ version causes some trouble. Therefore, Salt recommends using ZeroMQ in version >= 3.2. There is no package of this version available in the official CentOS 5 repositories, nor in EPEL. So the community has prepared some RPMs for CentOS 5 that install ZeroMQ in version 3.2.2.

This post describes the steps that have to be done to install Salt 2014.1.11 on a CentOS 5.5 64-bit system.

Common Installation Steps on Salt Master Node and Salt Minion Nodes

  1. Enable the EPEL repository
    sudo yum install epel-release
    
  2. Install the Python 2.6 development package
    sudo yum install python26-devel.x86_64
    
  3. Install all RPMs listed on this site.

    sudo rpm -Uvh http://docs.saltstack.com/downloads/cent5/libzmq3-3.2.2-13.1.x86_64.rpm
    sudo rpm -Uvh http://docs.saltstack.com/downloads/cent5/python-zmq-debuginfo-13.1.0-1.x86_64.rpm
    sudo rpm -Uvh http://docs.saltstack.com/downloads/cent5/python26-zmq-13.1.0-1.x86_64.rpm
    sudo rpm -Uvh http://docs.saltstack.com/downloads/cent5/python26-zmq-tests-13.1.0-1.x86_64.rpm
    sudo rpm -Uvh http://docs.saltstack.com/downloads/cent5/zeromq-3.2.2-13.1.x86_64.rpm
    sudo rpm -Uvh http://docs.saltstack.com/downloads/cent5/zeromq-debuginfo-3.2.2-13.1.x86_64.rpm
    sudo rpm -Uvh http://docs.saltstack.com/downloads/cent5/zeromq-devel-3.2.2-13.1.x86_64.rpm
    

Specific Installation Steps on Salt Master Node

Now, we can follow the official installation steps.

  1. Install Salt Master
    sudo yum install salt-master
    

Specific Installation Steps on Salt Minion Node

Again, we can follow the official installation step.

  1. Install Salt Minion
    sudo yum install salt-minion
    

Configuration

This section describes only the configuration that is important for running the first command from a master to its minions. For all configuration possibilities, please check the Salt configuration documentation.

For a successful communication between master and minions, two configurations are important:

  • the firewall setup on the master side and
  • the key exchange between master and minions (because the communication is encrypted).

Firewall Configuration

By default Salt listens on ports 4505 and 4506. Therefore, the firewall has to be configured to accept incoming communication on these ports.

  1. Open /etc/sysconfig/iptables as root with your favorite editor.
  2. Add the following lines
    -A INPUT -m state --state new -m tcp -p tcp --dport 4505 -j ACCEPT
    -A INPUT -m state --state new -m tcp -p tcp --dport 4506 -j ACCEPT
    
  3. Restart the iptables service
    sudo service iptables restart
    

Key Exchange Configuration

The master can only send commands to minions whose keys have been accepted by the master.

  1. Start the minion on the minion node.
    salt-minion
    
  2. Ensure that the master runs on the master node.
    salt-master
    
  3. On the master node, check which keys aren’t accepted yet
    salt-key -L
    
  4. To accept all unaccepted keys, call on the master node
    salt-key -A
    
  5. To test whether the minion is reachable from the master, call on the master node
    salt name-of-minion test.ping
    

Further Information