Virtual Machine Orchestration on GCE

Summary

In this article we tackle VM orchestration. As I touched on in other articles, the goal is to dynamically spin up VMs as necessary. The Google Cloud constructs we use are instance templates, instance groups, load balancers, and health checks, along with Salt (both state and reactor).

First Things First

In order to dynamically spin up VMs we need an instance group. For an instance group to work dynamically we need an instance template.

Instance Template

For this instance template, I will name it web-test. The name for this is important but we’ll touch on that later on.

GCE – Instance Template – Name
GCE – Instance Template – CentOS

For this demonstration we use CentOS 8. It can be any OS, but our Salt state is tuned for CentOS.

GCE – Automation

As we touched on in the Cloud-init on Google Compute Engine article, we need to automate the provisioning and configuration of these instances. Since Google’s CentOS image does not ship with cloud-init, we use the startup script to load it. Once loaded and rebooted, cloud-init configures the local machine as a salt-minion and points it at the master.

Startup Script below

#!/bin/bash

if ! type cloud-init > /dev/null 2>&1 ; then
  # Log startup of script
  echo "Ran - `date`" >> /root/startup
  sleep 30
  yum install -y cloud-init

  if [ $? == 0 ]; then
    echo "Ran - yum success - `date`" >> /root/startup
    systemctl enable cloud-init
    # Sometimes GCE metadata URI is inaccessible after the boot so start this up and give it a minute
    systemctl start cloud-init
    sleep 10
  else
    echo "Ran - yum fail - `date`" >> /root/startup
  fi

  # Reboot either way
  reboot
fi

cloud-init.yaml below

#cloud-config

yum_repos:
    salt-py3-latest:
        baseurl: https://repo.saltstack.com/py3/redhat/$releasever/$basearch/latest
        name: SaltStack Latest Release Channel Python 3 for RHEL/Centos $releasever
        enabled: true
        gpgcheck: true
        gpgkey: https://repo.saltstack.com/py3/redhat/$releasever/$basearch/latest/SALTSTACK-GPG-KEY.pub

salt_minion:
    pkg_name: 'salt-minion'
    service_name: 'salt-minion'
    config_dir: '/etc/salt'
    conf:
        master: saltmaster263.us-central1-c.c.woohoo-blog-2414.internal
    grains:
        role:
            - web
GCE – Instance Template – Network tags – allow-health-checks

The network tag itself does not do anything at this point. Later on we will tie this into a firewall ACL to allow the Google health checks to pass.

Now we have an instance template. From our Intro to Salt Stack article we should have a salt server.

SaltStack Server

We have a state file here to provision the web server, but based on our experience so far we need Salt to automagically do a few things.

Salt is a fairly complex setup so I have provided some of the files at the very bottom. I did borrow many ideas from this page of SaltStack’s documentation – https://docs.saltstack.com/en/latest/topics/tutorials/states_pt4.html

The first thing is to accept new minions automatically, as this is usually a manual step. We then need Salt to apply a state. Please keep in mind there are security implications to auto-accepting keys. These scripts do not take that into consideration; they are just a baseline to get this working.

To have this happen automatically, we use the Salt reactor, which listens for events and acts on them. Our reactor file looks like this. We could add some validation, particularly on the accept, such as checking that the minion name contains “web” before pushing the wordpress state.

{# test server is sending new key -- accept this key #}
{% if 'act' in data and data['act'] == 'pend' %}
minion_add:
  wheel.key.accept:
  - match: {{ data['id'] }}
{% endif %}
{% if data['act'] == 'accept' %}
initial_load:
  local.state.sls:
    - tgt: {{ data['id'] }}
    - arg:
      - wordpress
{% endif %}
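
The reactor file by itself does nothing until the master is told which events to feed it. A minimal sketch of that hookup in /etc/salt/master (the path to the reactor file is an assumption; use wherever you actually saved yours), followed by a restart of salt-master so it takes effect:

# /etc/salt/master (excerpt) - route minion auth events to the reactor file above
reactor:
  - 'salt/auth':
    - /srv/reactor/auth-accept.sls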

The reactor logic itself is fairly simple: when a minion authenticates for the first time, accept it and then apply the wordpress state we worked on in our article on Salt State. Since we may have multiple servers rotating up and down, we will use Google’s Load Balancer as the target to point Cloudflare at.

Cloudflare does offer load balancing, but for the integration we want it’s easier to use Google’s. The load balancer requires an instance group, so we need to set that up first.

Instance Groups

Instance groups are one of the constructs you can point a load balancer at. Google has two types of instance groups: managed, which can autoscale and heal itself based on health checks, and unmanaged, which you have to manually add VMs to. We will choose managed.

GCE – New Managed Instance

This name is not too important, so it can be anything you like.

GCE – Instance Group

Here we set the port name and number and select the instance template. For this lab we disabled autoscaling, but in the real world autoscaling is exactly why you set all of this up.

Instance Group – Health Check

The health check expects to receive an HTTP 200 for the all clear. It is much better than a TCP check because it validates that the web server is actually responding with expected content. Since WordPress sends a 301 redirect, we do have to set the Host HTTP header here, otherwise the check will fail. Other load balancers only fail on 400-599, but Google expects exactly an HTTP 200 per their documentation – https://cloud.google.com/load-balancing/docs/health-check-concepts
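
A quick way to see what the health check will get back is to hit an instance directly with the Host header set; the hostname and IP below are placeholders for your own site and instance:

# Expect "HTTP/1.1 200 OK" on the first line of the response;
# a 301/302 redirect here means the check will fail.
curl -I -k -H "Host: blog.example.com" https://INSTANCE_IP/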

Instance Group Provisioning

And here you can see it is provisioning! While it does that, let’s move over to the load balancer.

Firewall Rules

The health checks for the load balancer come from a set range of Google IPs that we need to allow. We can scope the firewall rule to the right machines using the network tag we set earlier. Per Google’s health check documentation, the HTTP checks come from two ranges.

VPC – Allow Health Checks!

Here we only allow the health checks from the Google-identified IP ranges to reach machines tagged with “allow-health-checks” on port 443.
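
For reference, a roughly equivalent rule created with gcloud might look like the following; the rule name and network are assumptions for this lab, and the source ranges are the two health check ranges Google documents:

gcloud compute firewall-rules create allow-health-checks \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:443 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=allow-health-checks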

Google Load Balancer

Initial

This is a crash course in load balancers if you have never set one up before. It is expected that you have some understanding of front ends, back ends, and health checks. The health check ranges were already allowed in the Firewall Rules section above.

Google Load Balancer – Start configuration
Google Load Balancer – Internet

Back End Configuration

Google’s load balancers can be used for internal only or external to internal. We want to load balance external connections.

Google Load Balancer – Back End Create

We will need to create a back end endpoint.

Luckily this is simple. We point it at a few objects we already created and set session affinity so that traffic is persistent to a single web server. We do not want it hopping between servers as it may confuse the web services.

Front End Configuration

Health Check Validation

Give the load balancer a few minutes to provision. It should then show healthy if all is well. This never comes up the first time. Not even in a lab!

Google Load Balancer – Healthy!

Troubleshooting

The important part is to walk through the process from beginning to end when something does not work. Here’s a quick run-through; a few of the corresponding commands are sketched after the list.

  • On provisioning, is the instance group provisioning the VM?
  • What is the status of cloud-init?
  • Is salt-minion installing on the VM and starting?
  • Does the salt-master see the minion?
  • Reapply the state and check for errors
  • Does the load balancer see health?
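
A minimal set of commands for those checks, assuming the minion naming and wordpress state from this article (adjust the target to your own minion IDs):

# On the newly provisioned VM (SSH or serial console)
sudo cloud-init status --long      # did cloud-init finish cleanly?
sudo systemctl status salt-minion  # is the minion installed and running?

# On the salt master
sudo salt-key -L                                  # does the master see/accept the minion?
sudo salt 'web*' state.apply wordpress test=True  # dry-run the state and look for errors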

Final Words

If it does come up healthy, the last step is to point your DNS at the load balancer public IP and be on your way!

Since Salt is such a complex beast, I have provided most of the framework and configs here. Some of the more sensitive files are truncated but left in so that you know they exist. The standard disclaimer applies: I cannot guarantee the outcome of these files on your system or that they are best practice from a security standpoint.

SaltStack – Assisting Windows Updates With win_wua

Summary

Today I got to play with the Salt win_wua module. Anyone who manages Windows servers knows all about the second Tuesday of the month, and the win_wua module can help greatly. I have recently been toying with Salt, as mentioned in some of my other articles like Introduction to SaltStack.

Update Methodology

In the environments I manage, we typically implement Microsoft Windows Server Update Services (WSUS), but in a manual fashion so that we can control the installation of the patches. WSUS acts more as a gatekeeper against bad patches. We approve updates immediately, but only to test servers. This lets us burn them in for a few weeks; then, when we’re comfortable, we push them to production. This greatly helped us mitigate conflicts such as this one with Windows Updates – https://community.sophos.com/kb/en-us/133945

The process to actually install, though, is manual since we need to trigger the installation ourselves. It involved logging into various servers to push the install button and then reboot. In my past complaints about this, I was unable to find something to easily trigger the installation of Windows updates.

Win_wua to the rescue

I originally thought I would need a salt state to perform this but the command line module is so easy, I did not bother.

salt TESTSERVER win_wua.list
TESTSERVER:
    ----------
    9bc4dbf1-3cdf-4708-a004-2d6e60de2e3a:
        ----------
        Categories:
            - Security Updates
            - Windows Server 2012 R2
        Description:
            Install this update to resolve issues in Windows. For a complete listing of the issues that are included in this update, see the associated Microsoft Knowledge Base article for more information. After you install this item, you may have to restart your computer.
        Downloaded:
.....

It then spews a ton of data related to the pending updates. Luckily it has an option for a summary. Surprisingly, we use the same “list” function to install by setting a flag. The install function expects a list of specific updates you wish to install, but we just want to install all pending ones.
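
If you did want to target specific updates instead of everything pending, the install function can be handed a list; a rough sketch (the KB is one from the output later in this article, and the exact argument format may vary by Salt version, so check the win_wua module docs):

salt TESTSERVER win_wua.install '["KB4525106"]'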

Before we install, check out the summary output

salt TESTSERVER win_wua.list summary=True
TESTSERVER:
    ----------
    Available:
        0
    Categories:
        ----------
        Security Updates:
            4
        Windows Server 2012 R2:
            4
    Downloaded:
        4
    Installed:
        0
    Severity:
        ----------
        Critical:
            3
        Moderate:
            1
    Total:
        4

Ok so let’s install and only see the summary

salt -t 60 LV-PSCADS01 win_wua.list summary=True install=True
LV-PSCADS01:
    Passed invalid arguments to win_wua.list: 'int' object is not callable
    
        .. versionadded:: 2017.7.0
    
        Returns a detailed list of available updates or a summary. If download or
        install is True the same list will be downloaded and/or installed.

Well, that’s no fun! Not quite what we expected. It appears it’s a known bug in 2017.7.1 that has since been fixed. Update your salt-minion or perform the manual fix it listed and run again!
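
Before re-running, it can help to confirm which version the minion is actually on; test.version, used elsewhere in this series, does the trick:

salt TESTSERVER test.version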

salt -t 60 TESTSERVER win_wua.list summary=True install=True
TESTSERVER:
    ----------
    Download:
        ----------
        Success:
            True
        Updates:
            Nothing to download
    Install:
        ----------
        Message:
            Installation Succeeded
        NeedsReboot:
            True
        Success:
            True
        Updates:
            ----------
            9bc4dbf1-3cdf-4708-a004-2d6e60de2e3a:
                ----------
                AlreadyInstalled:
                    False
                RebootBehavior:
                    Never Reboot
                Result:
                    Installation Succeeded
                Title:
                    2019-11 Servicing Stack Update for Windows Server 2012 R2 for x64-based Systems (KB4524445)
            9d665242-c74c-4905-a6f4-24f2b12c66e6:
                ----------
                AlreadyInstalled:
                    False
                RebootBehavior:
                    Poss Reboot
                Result:
                    Installation Succeeded
                Title:
                    2019-11 Cumulative Security Update for Internet Explorer 11 for Windows Server 2012 R2 for x64-based systems (KB4525106)
            a30c9519-8359-48e1-86d4-38791ad95200:
                ----------
                AlreadyInstalled:
                    False
                RebootBehavior:
                    Poss Reboot
                Result:
                    Installation Succeeded
                Title:
                    2019-11 Security Only Quality Update for Windows Server 2012 R2 for x64-based Systems (KB4525250)
            a57cd1d3-0038-466b-9341-99f6d488d84b:
                ----------
                AlreadyInstalled:
                    False
                RebootBehavior:
                    Poss Reboot
                Result:
                    Installation Succeeded
                Title:
                    2019-11 Security Monthly Quality Rollup for Windows Server 2012 R2 for x64-based Systems (KB4525243)

Of course, this is Windows, so we need a reboot. By default win_system.reboot waits 5 minutes before rebooting. With the flags below we can shorten that.

salt TESTSERVER system.reboot timeout=30 in_seconds=True

Salt State

If I wanted to automate the reboot after the update install, I could make this a state and have the updates trigger a reboot. In my scenario I do not need it, but if you want to try, check out this section for the win_wua states. The syntax is slightly different from the execution module we have been working with in this article.
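
For anyone who does want the state route, a rough sketch of what it might look like, assuming the win_wua state module is available under the “wua” name as in the Salt docs (the argument names here are from memory and may differ by version, so verify against your version’s documentation):

# Install pending updates in the listed severities; chain a reboot off this
# with an onchanges requisite, or handle the reboot separately as shown above.
install_pending_updates:
  wua.uptodate:
    - severities:
      - Critical
      - Moderate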

Updating Multiple Servers

If you want to update multiple servers at once you can do something like the following. The -L flag lets you specify multiple targets as a comma-separated list.

salt -t 60 -L TESTSERVER,TESTSERVER2,TESTSERVER3 win_wua.list summary=True install=True

salt -L TESTSERVER,TESTSERVER2,TESTSERVER3 system.reboot timeout=30 in_seconds=True

We could even set a salt grain to group these:

salt -L TESTSERVER,TESTSERVER2,TESTSERVER3 grains.set wua_batch testservers
salt -G wua_batch:testservers win_wua.list summary=True install=True
salt -G wua_batch:testservers system.reboot timeout=30 in_seconds=True

Throttling

If you are running this on prem or just flat out want to avoid an update and boot storm, you can throttle it using “salt -b” as mentioned in Salt’s documentation.

# This would limit the install to 2 servers at a time
salt -b 2 -G wua_batch:testservers win_wua.list summary=True install=True

Final Words

This article is likely only useful if you already have Salt in your environment somewhere but never thought about using it on Windows. It is a great tool for configuration management on Windows, even though most Windows admins reach for other tools like GPO, SCCM, etc. to manage Windows.

Salt State – Intro to SaltStack Configuration

Summary

This article picks up from Configuration Management – Introduction to SaltStack and dives into Salt states. It assumes you have an installed and working SaltStack. I call this an intro to SaltStack configuration because this is the bulk of Salt: setting up the salt states and configuration. Understanding the configuration files, where they go, and their format is the most important part of Salt.

The way I learn is via a guided tour with a purpose, and we will be doing just that. Our first goal is to create a salt state that installs Apache.

Prepping Salt – Configuration

We need to modify “/etc/salt/master” and uncomment the following

#file_roots:
#  base:
#    - /srv/salt
#
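
Uncommented, that section of “/etc/salt/master” should read:

file_roots:
  base:
    - /srv/salt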

This is where the salt states will be stored. Then we want to actually create that directory.

mkdir /srv/salt

vi /srv/salt/webserver.sls

The contents of webserver.sls are as follows

httpd:
  pkg:
    - installed

This is fairly simple. We declare a state with the ID “httpd” and define that the package should be installed. We can apply it specifically as follows.

Applying Our First Salt State

# salt saltmaster1.woohoosvcs.com state.apply webserver
[WARNING ] /usr/lib/python3.6/site-packages/salt/transport/zeromq.py:42: VisibleDeprecationWarning: zmq.eventloop.minitornado is deprecated in pyzmq 14.0 and will be removed.
    Install tornado itself to use zmq with the tornado IOLoop.
    
  import zmq.eventloop.ioloop

saltmaster1.woohoosvcs.com:
----------
          ID: httpd
    Function: pkg.installed
      Result: True
     Comment: The following packages were installed/updated: httpd
     Started: 16:31:35.154411
    Duration: 21874.957 ms
     Changes:   
              ----------
              apr:
                  ----------
                  new:
                      1.6.3-9.el8
                  old:
              apr-util:
                  ----------
                  new:
                      1.6.1-6.el8
                  old:
              apr-util-bdb:
                  ----------
                  new:
                      1.6.1-6.el8
                  old:
              apr-util-openssl:
                  ----------
                  new:
                      1.6.1-6.el8
                  old:
              centos-logos-httpd:
                  ----------
                  new:
                      80.5-2.el8
                  old:
              httpd:
                  ----------
                  new:
                      2.4.37-12.module_el8.0.0+185+5908b0db
                  old:
              httpd-filesystem:
                  ----------
                  new:
                      2.4.37-12.module_el8.0.0+185+5908b0db
                  old:
              httpd-tools:
                  ----------
                  new:
                      2.4.37-12.module_el8.0.0+185+5908b0db
                  old:
              mailcap:
                  ----------
                  new:
                      2.1.48-3.el8
                  old:
              mod_http2:
                  ----------
                  new:
                      1.11.3-3.module_el8.0.0+185+5908b0db
                  old:

Summary for saltmaster1.woohoosvcs.com
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:  21.875 s

You’ll note 1) the annoying warning, which I will be truncating from further output, and 2) that it installed httpd (Apache). You can see it also installed quite a few other dependencies that Apache required.

Let’s validate quickly

# systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:httpd.service(8)

# ls -la /etc/httpd/
total 12
drwxr-xr-x.  5 root root  105 Nov 10 16:31 .
drwxr-xr-x. 80 root root 8192 Nov 10 16:31 ..
drwxr-xr-x.  2 root root   37 Nov 10 16:31 conf
drwxr-xr-x.  2 root root   82 Nov 10 16:31 conf.d
drwxr-xr-x.  2 root root  226 Nov 10 16:31 conf.modules.d
lrwxrwxrwx.  1 root root   19 Oct  7 16:42 logs -> ../../var/log/httpd
lrwxrwxrwx.  1 root root   29 Oct  7 16:42 modules -> ../../usr/lib64/httpd/modules
lrwxrwxrwx.  1 root root   10 Oct  7 16:42 run -> /run/httpd
lrwxrwxrwx.  1 root root   19 Oct  7 16:42 state -> ../../var/lib/httpd

Looks legit to me! We just applied our first salt state. You can “yum remove” httpd and apply the state again and it will reinstall it. The real power of configuration management is that it knows the desired state and will repeatedly get you there; it is not just a one-and-done. This is the main difference between provisioning platforms and configuration management.
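
One small aside before adding more packages: the “pkg: - installed” shorthand above is equivalent to the dotted form that the state output reports (Function: pkg.installed); either spelling works:

httpd:
  pkg.installed: []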

Installing More Dependencies

WordPress also needs “php-gd” so let’s modify the salt state to add it and then reapply.

httpd:
  pkg:
    - installed

php-gd:
  pkg:
    - installed

Here you can see it did not try to reinstall apache but did install php-gd.

# salt saltmaster1.woohoosvcs.com state.apply webserver

saltmaster1.woohoosvcs.com:
----------
          ID: httpd
    Function: pkg.installed
      Result: True
     Comment: All specified packages are already installed
     Started: 16:41:36.742596
    Duration: 633.359 ms
     Changes:   
----------
          ID: php-gd
    Function: pkg.installed
      Result: True
     Comment: The following packages were installed/updated: php-gd
     Started: 16:41:37.376211
    Duration: 20622.985 ms
     Changes:   
              ----------
              dejavu-fonts-common:
                  ----------
                  new:
                      2.35-6.el8
                  old:
              dejavu-sans-fonts:
                  ----------
                  new:
                      2.35-6.el8
                  old:
              fontconfig:
                  ----------
                  new:
                      2.13.1-3.el8
                  old:
              fontpackages-filesystem:
                  ----------
                  new:
                      1.44-22.el8
                  old:
              gd:
                  ----------
                  new:
                      2.2.5-6.el8
                  old:
              jbigkit-libs:
                  ----------
                  new:
                      2.1-14.el8
                  old:
              libX11:
                  ----------
                  new:
                      1.6.7-1.el8
                  old:
              libX11-common:
                  ----------
                  new:
                      1.6.7-1.el8
                  old:
              libXau:
                  ----------
                  new:
                      1.0.8-13.el8
                  old:
              libXpm:
                  ----------
                  new:
                      3.5.12-7.el8
                  old:
              libjpeg-turbo:
                  ----------
                  new:
                      1.5.3-7.el8
                  old:
              libtiff:
                  ----------
                  new:
                      4.0.9-13.el8
                  old:
              libwebp:
                  ----------
                  new:
                      1.0.0-1.el8
                  old:
              libxcb:
                  ----------
                  new:
                      1.13-5.el8
                  old:
              php-common:
                  ----------
                  new:
                      7.2.11-1.module_el8.0.0+56+d1ca79aa
                  old:
              php-gd:
                  ----------
                  new:
                      7.2.11-1.module_el8.0.0+56+d1ca79aa
                  old:

Summary for saltmaster1.woohoosvcs.com
------------
Succeeded: 2 (changed=1)
Failed:    0
------------
Total states run:     2
Total run time:  21.256 s

The output of state.apply is rather long so we likely will not post too many more. With that said, I wanted to give you a few examples of the output and what it looks like.

Downloading Files

Next we need to download the WordPress files. The latest version is always available via https://wordpress.org/latest.tar.gz. Salt has a state for managing downloaded files, but it requires us to know the hash of the file to ensure it is correct. Since “latest” changes from time to time, we do not know what that hash will be. We have two options. The first is to store a specific version on the salt server and provide that. The second is to use curl to download the file.
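
For completeness, here is a sketch of the first option using file.managed; the version and hash below are placeholders for whichever release you pin.

/tmp/wp-latest.tar.gz:
  file.managed:
    - source: https://wordpress.org/wordpress-5.3.tar.gz
    - source_hash: sha256=<sha256 of the pinned tarball>

This article takes the curl route instead, shown next.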

download_wordpress:
  cmd.run:
    - name: curl -L https://wordpress.org/latest.tar.gz -o /tmp/wp-latest.tar.gz
    - creates: /tmp/wp-latest.tar.gz

In order for Salt not to download the file every time, we need to tell it what the command will create. By specifying “creates: /tmp/wp-latest.tar.gz”, the command only runs if that file does not already exist.

Downloading is not all we need to do, though; we also need to extract it.

Extracting

extract_wordpress:
  archive.extracted:
    - name: /tmp/www/
    - source: /tmp/wp-latest.tar.gz
    - user: apache
    - group: apache

The archive.extracted module allows us to specify a few important parameters. Applying the state, we can see it’s there!

[root@saltmaster1 salt]# ls -la /tmp/www/
total 4
drwxr-xr-x.  3 apache root     23 Nov 10 16:57 .
drwxrwxrwt. 11 root   root    243 Nov 10 16:59 ..
drwxr-xr-x.  5 apache apache 4096 Oct 14 15:37 wordpress

We actually want it in the “/var/www/html” document root, so webserver.sls was modified to the following.

extract_wordpress:
  archive.extracted:
    - name: /var/www/html
    - source: /tmp/wp-latest.tar.gz
    - user: apache
    - group: apache
    - options: "--strip-components=1"
    - enforce_toplevel: False

The issue is that the tar file has a “wordpress” directory as its root and we want to strip that off. We pass options to tar to strip it. We also need to set enforce_toplevel to False, as Salt otherwise expects a single top-level folder. I found this neat trick via https://github.com/saltstack/salt/issues/54012

# Before
# ls -la /var/www/html/wp-config*
-rw-r--r--. 1 apache apache 2898 Jan  7  2019 /var/www/html/wp-config-sample.php
# salt saltmaster1.woohoosvcs.com state.apply webserver

# After
# ls -la /var/www/html/wp-config*
-rw-r-----. 1 apache apache 2750 Nov 10 18:03 /var/www/html/wp-config.php
-rw-r--r--. 1 apache apache 2898 Jan  7  2019 /var/www/html/wp-config-sample.php

Sourcing the Config

We now have a stock WordPress install but we need to configure it to connect to the database.

For that I took a production wp-config.php and placed it in “/srv/salt/wordpress/wp-config.php” on the salt master. I then used the following salt state to push it out

/var/www/html/wp-config.php:
  file.managed:
  - source: salt://wordpress/wp-config.php
  - user: apache
  - group: apache
  - mode: 640

Set Running Salt State

What good would Apache do if it weren’t running? We need a salt state to enable and start it!

start_webserver:
  service.running:
  - name: httpd
  - enable: True

After applying the state again, Apache is enabled and running:

# systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-11-10 18:27:24 CST; 1min 5s ago

Final Words

Through this we configured a webserver.sls salt state. We used it to install Apache and a necessary PHP module, as well as push out a configuration file. As you can likely tell from these instructions, configuring a salt state for your needs is an iterative process.

This first iteration of the webserver.sls is far from complete or best practice. It is meant as a beginner’s guide to walking through the thought process. Below is the full webserver.sls file for reference

httpd:
  pkg:
    - installed

php-gd:
  pkg:
    - installed

download_wordpress:
  cmd.run:
    - name: curl -L https://wordpress.org/latest.tar.gz -o /tmp/wp-latest.tar.gz
    - creates: /tmp/wp-latest.tar.gz

extract_wordpress:
  archive.extracted:
    - name: /var/www/html
    - source: /tmp/wp-latest.tar.gz
    - user: apache
    - group: apache
    - options: "--strip-components=1"
    - enforce_toplevel: False

/var/www/html/wp-config.php:
  file.managed:
  - source: salt://wordpress/wp-config.php
  - user: apache
  - group: apache
  - mode: 640

start_webserver:
  service.running:
  - name: httpd
  - enable: True
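
One refinement worth calling out (and one reason this is not yet best practice): the file relies on Salt’s top-down ordering. Explicit require requisites make the dependencies clear; a sketch for the extract step, using the state IDs above:

extract_wordpress:
  archive.extracted:
    - name: /var/www/html
    - source: /tmp/wp-latest.tar.gz
    - user: apache
    - group: apache
    - options: "--strip-components=1"
    - enforce_toplevel: False
    # only run once the package and the download are in place
    - require:
      - pkg: httpd
      - cmd: download_wordpress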

Configuration Management – Intro to SaltStack

Summary

SaltStack, or Salt for short, is an open source configuration management platform. It was first released in the early 2010s as a potential replacement for Chef and Puppet. In this guide we will walk through some high-level details of Salt and a basic install. If you already have Salt installed, please skip ahead to the next article when it is published.

Next – Salt State – Intro to SaltStack Configuration

What is Configuration Management?

A configuration management tool allows you to remotely configure and dictate configurations of machines. Through this multi-part series we will work through that with the use case of https://blog.woohoosvcs.com/. At some point this site may need multiple front ends. It has not been decided if that method will be Kubernetes, Google App Engine or VMs. If the VM route is chosen it will make sense to have an easy template to use.

What Configuration Management is not

Configuration management typically does not involve the original provisioning of the server. There are other tools for that, such as Terraform.

Salt Architecture

Salt has three main components to achieve configuration management: the salt master, the minion, and the client. Salt can be configured to be highly available with multiple masters, but it is not necessary to start out there. For the sake of this document, and per Salt’s best practices, we can add that later if necessary. – https://docs.saltstack.com/en/latest/topics/development/architecture.html

Salt Client

The salt client is a command line client that accepts commands to be issued to the salt master. It typically runs on the salt master itself. You can use it to trigger the expected states.

Salt Master

The Salt Master is the broker and brains of all configuration management. Requests and commands received from the client make their way to the master, which then pushes them to the minions.

Salt Minion

The minion is typically loaded onto each machine you wish to perform configuration management on. In our case, it will be the new front ends we spin up as we need them.

Firewall Ports

The Salt Master needs ports TCP/4505-4506 opened; the minions check in and connect to the master on those ports. No inbound ports are needed on the minions, as they do not listen for connections.

More on this can be found on their firewall page – https://docs.saltstack.com/en/latest/topics/tutorials/firewall.html

Where to install

Typically you want the master to be well connected since the minions will be connecting to it. Even if you are primarily on prem, it is not a bad idea to put a salt master in the cloud.

Installing Salt

Prerequisites

OS: CentOS 8
VM: 1 core, 1 GB RAM, 12 GB HDD
Install: Minimal

Installation

For the installation we will be closely following Salt’s documentation on installing for RHEL 8 – https://repo.saltstack.com/#rhel

For the sake of this lab we will have the client, master and minion all on the same server but it will allow us to build out the topology.

Now to the install!

# I always like to start out with the latest up to date OS
sudo yum update

# Install the salt repo for RHEL/CentOS
sudo yum install https://repo.saltstack.com/py3/redhat/salt-py3-repo-latest.el8.noarch.rpm

# Install minion and master
sudo yum install salt-master salt-minion

# Reboot for OS updates to take effect
reboot

Post Install Configuration

We need to setup a few things first. The Salt guide is great at pointing these out – https://docs.saltstack.com/en/latest/ref/configuration/index.html

On the minion we need to edit “/etc/salt/minion”. The following changes need to be made. If/when you roll this out into production you can use DNS hostnames.

#master: salt
master: 192.168.116.186

We will also open up the firewall ports on the master

firewall-cmd --permanent --zone=public --add-port=4505-4506/tcp
firewall-cmd --reload

Start up Configuration Management Tools

Now we are ready to start up and see how it runs!

sudo systemctl enable salt-master salt-minion
sudo systemctl start salt-master salt-minion

Wait about 5 minutes; it takes a little bit to initialize. Once it has, you can run “sudo salt-key -L”. When a minion connects to the master, the master does not trust it automatically; its key has to be accepted. salt-key can be used to list minion keys and accept them.

$ sudo salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
saltmaster1.woohoosvcs.com
Rejected Keys:

$ sudo salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
saltmaster1.woohoosvcs.com
Proceed? [n/Y] Y
Key for minion saltmaster1.woohoosvcs.com accepted.
[dwcjr@saltmaster1 ~]$ sudo salt-key -L
Accepted Keys:
saltmaster1.woohoosvcs.com
Denied Keys:
Unaccepted Keys:
Rejected Keys:

We used salt-key -A to accept all unaccepted keys.
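
If you would rather accept one minion at a time, salt-key also takes a specific key ID:

sudo salt-key -a saltmaster1.woohoosvcs.com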

Testing

$ sudo salt saltmaster1 test.version
[WARNING ] /usr/lib/python3.6/site-packages/salt/transport/zeromq.py:42: VisibleDeprecationWarning: zmq.eventloop.minitornado is deprecated in pyzmq 14.0 and will be removed.
    Install tornado itself to use zmq with the tornado IOLoop.
    
  import zmq.eventloop.ioloop

No minions matched the target. No command was sent, no jid was assigned.
ERROR: No return received
[root@saltmaster1 ~]# salt '*' test.version
[WARNING ] /usr/lib/python3.6/site-packages/salt/transport/zeromq.py:42: VisibleDeprecationWarning: zmq.eventloop.minitornado is deprecated in pyzmq 14.0 and will be removed.
    Install tornado itself to use zmq with the tornado IOLoop.
    
  import zmq.eventloop.ioloop

saltmaster1.woohoosvcs.com:
    2019.2.2

The first attempt returned “No minions matched” because the minion ID is the full FQDN (saltmaster1.woohoosvcs.com), so the short name did not match; targeting ‘*’ (or the full ID) works. As for that ugly warning, it seems to have been introduced in 2019.2.1 but not properly fixed in 2019.2.2. My guess is the next release will fix it, but it seems harmless – https://github.com/saltstack/salt/issues/54759. We do, however, get the response, so this is a success.

Final Words

At this point we have a salt-master and salt-minion setup, albeit on the same host. We have accepted the minion on the master and they are communicating. The next article will start to tackle setting up Salt states and other parts of the salt configuration.

Next – Salt State – Intro to SaltStack Configuration