Viper - Setting up an installation server

Here's a guide to setting up a completely functional Viper server on a system running Debian GNU/Linux or Ubuntu.

The procedure can be followed either standalone on a physical machine, or inside an OpenVZ container (for OpenVZ, see Viper in OpenVZ container first).

Download

The easiest way to download the files is to clone them from the Git repository and place them in /etc/ldap/viper/.

apt-get install git-core

mkdir -p /etc/ldap
cd /etc/ldap
git clone git://github.com/docelic/Viper.git viper
cd viper

Net::LDAP::FilterMatch fix

IMPORTANT

Net::LDAP's FilterMatch module contains a bug that you have to patch manually until it is fixed in the official distribution (track the bug report's progress here).

The patch is simple, and is included in the Viper distribution as support/FilterMatch.pm.patch. Apply it with:

patch -p0 < support/FilterMatch.pm.patch

Installation

To set everything up, you will use the script scripts/viper-setup.sh, either by running it directly, or by opening it, reading it, and choosing which steps you want to execute.

In any case, the purpose of the script is to get Viper up and running quickly, with a complete default config.

The default config is quite self-contained and can be run on just about any machine that does not already run an LDAP, DHCP, or Puppet server. (If you do run any of those servers on the host and they hold important data that cannot be deleted or forgotten, you have probably not chosen a good machine for installing Viper for the first time.)

One of the script's tasks is installing all config files from the etc/ subdirectory onto the real /etc/ on the filesystem. This is done by creating the /etc/... directories and symlinking the needed config files to their counterparts in Viper's etc/ subdirectory. I find that approach more useful for the moment. If you do not want symlinks and prefer to actually copy the files, edit the top of scripts/viper-setup.sh and set CP_ARG="".
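For illustration, the setting looks like this (the exact surrounding lines may differ between Viper versions):

# At the top of scripts/viper-setup.sh:
CP_ARG=""   # empty value: copy the config files instead of symlinking them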

To install, you can run the setup script as follows:

sh scripts/viper-setup.sh

It's worth noting that the script is idempotent, that is, it can be run multiple times with no adverse effects. So if any part of the process fails, you can fix the problem and run the script again.

Once the setup script completes, you will have a clean, known-good base on which you can run the test suite, and upon which you can start creating your own configuration.

Testing

After installation, you should have a working setup populated with default data. This includes a client named "c1.com" and three hosts: h1, h2 and h3.

Based on that default data, there are tests you can run:

Testing with ldapsearch

ldapsearch -x -b ou=dhcp
ldapsearch -x -b ou=defaults
ldapsearch -x -b ou=clients

ldapsearch -x -b cn=h2,ou=hosts,o=c1.com,ou=clients

ldapsearch -x -b cn=popularity-contest/participate,ou=hosts,ou=defaults
ldapsearch -x -b cn=debian-installer/locale,cn=h2,ou=hosts,o=c1.com,ou=clients
ldapsearch -x -b cn=ntp/servers,cn=h2,ou=hosts,o=c1.com,ou=clients

ldapsearch test results

An ldapsearch query for cn=h2,ou=hosts,o=c1.com,ou=clients is a pretty good way of determining whether everything is working alright. Here's how the output from the command should look (the exact attributes are not important; what matters is that there are no unprocessed values in the output, that is, nothing containing '$' and nothing with only half-populated information).

$ ldapsearch -x -b cn=h2,ou=hosts,o=c1.com,ou=clients

# extended LDIF
#
# LDAPv3
# base  with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# h2, hosts, c1.com, clients
dn: cn=h2,ou=hosts,o=c1.com,ou=clients
objectClass: top
objectClass: device
objectClass: dhcpHost
objectClass: ipHost
objectClass: ieee802Device
objectClass: puppetClient
cn: h2
ipHostNumber: 10.0.1.8
macAddress: 00:11:6b:34:ae:8d
puppetclass: test
puppetclass: ntp::server
dhcpHWAddress: ethernet 00:11:6b:34:ae:8d
dhcpOption: host-name "h2"
dhcpOption: routers 10.0.1.1
dhcpOption: domain-name-servers 192.168.1.254
dhcpOption: nis-domain "c1.com"
dhcpOption: domain-name "c1.com"
dhcpOption: subnet-mask 255.255.255.0
dhcpOption: broadcast-address 10.0.1.255
dhcpStatements: fixed-address 10.0.1.8
hostName: h2
ipNetmaskNumber: 255.255.255.0
clientName: c1.com
ipNetworkNumber: 10.0.1.0
ipBroadcastNumber: 10.0.1.255
domainName: c1.com

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1

Testing with scripts/node_data

perl scripts/node_data h2.c1.com

Testing with scripts/preseed

QUERY_STRING=ip=10.0.1.8 perl scripts/preseed

Testing with HTTP client

wget http://10.0.1.1/cgi-bin/preseed.cfg?ip=10.0.1.8 -O /tmp/preseed.cfg
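To sanity-check the downloaded file, you can look for preseed directives; most lines in a valid preseed file start with the "d-i" owner field (a rough check, not a full validation):

grep -c '^d-i ' /tmp/preseed.cfg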

Post-setup

After Viper has been installed and tested, there are a couple of final things to be done that will allow real clients to connect and perform installation and configuration. Here's the list:

HTTP setup

Client hosts which are candidates for installation need to be able to retrieve the preseed file over HTTP, so the preseed CGI script needs to be linked into the cgi-bin directory.

If your cgi-bin dir is in the standard location, /usr/lib/cgi-bin/, this has already been done by the setup script.
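If it is elsewhere, a symlink along these lines should do (the source path assumes the clone location from the Download section; the link name matches the URL used below; adjust the target directory to your setup):

ln -s /etc/ldap/viper/scripts/preseed /usr/lib/cgi-bin/preseed.cfg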

The client hosts will then reach the preseed file at location url=http://SERVER/cgi-bin/preseed.cfg.

You do not need to specify this location explicitly, because DHCP has been configured to send the "filename" parameter to the client host, informing it of the preseed file URL.
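For illustration, in ISC dhcpd syntax the statement looks roughly like this (a sketch using the test-data addresses, not Viper's actual generated config):

subnet 10.0.1.0 netmask 255.255.255.0 {
	option routers 10.0.1.1;
	filename "http://10.0.1.1/cgi-bin/preseed.cfg";
}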

This allows you to connect a client host to the network, take standard Debian boot media, choose Advanced -> Automatic installation, and complete the installation without any input, be it at the d-i boot prompt or during installation.

Note that if for some reason you decide not to use Viper's DHCP and/or do not specify the "filename" option, the d-i installer will try to load the preseed file from the default location. That location is http://SERVER/d-i/DISTNAME/preseed.cfg, and you will have to configure the web server accordingly.
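With Apache, one way to map that default location onto the same CGI script is a ScriptAlias (the DISTNAME "squeeze" and the script path here are assumptions; adjust both to your setup):

# In the Apache site config:
ScriptAlias /d-i/squeeze/preseed.cfg /usr/lib/cgi-bin/preseed.cfg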

EthX interface config

By default, Viper expects that each configured client site is on some subnet, and that the Viper server is accessible at address .1 in that subnet (for example, client "c1.com" that gets installed as part of test data is on subnet 10.0.1.0/24 and expects Viper server at 10.0.1.1).

Changing this isn't impossible (or even hard), but it is out of the scope of this document.

So, while you were able to run the tests without caring about this, you will have to make Viper available on 10.0.1.1 before real clients can connect.

Here's how to do it for 10.0.1.1 (the procedure should be repeated for all other configured subnets). The example scenario shows a Viper host with two network interfaces: eth0, which uses DHCP and connects to whatever the host's parent network and Internet gateway happen to be, and eth1, which is intended for Viper and the client subnets.

ifconfig eth1 inet 10.0.1.1 netmask 255.255.255.0
invoke-rc.d ipmasq restart # (If you have it installed)

To configure eth1 on every boot, add it to /etc/network/interfaces with a stanza like this:

allow-hotplug eth1
iface eth1 inet static
	address 10.0.1.1
	netmask 255.255.255.0

Note: to support further client subnets on the same eth1 interface, you would use eth1 aliases, such as eth1:1, eth1:2, etc., as sketched below. (Or, in case you installed Viper in an OpenVZ container, you would create additional ethX devices in the container and add them all to the bridge vzbr1.)
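A minimal sketch of such an alias stanza, assuming a hypothetical second client subnet 10.0.2.0/24 (not part of the shipped test data):

auto eth1:1
iface eth1:1 inet static
	address 10.0.2.1
	netmask 255.255.255.0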

Gatewaying

Client hosts will most probably need some access to the Internet, even if you create a local mirror of the Debian packages.

In the default configuration, clients are configured to access the net through Viper server via NAT/IP forwarding.

To make that work, you will need to apt-get install ipmasq on the Viper server. (In case of Viper running in an OpenVZ container, install ipmasq on the physical host, not the container.)

Ipmasq will install and start automatically, properly configuring everything, but you will have to make a small change to the iptables rules as follows:

# Print out how many lines the FORWARD chain listing has. Subtract 2
# (for the two header lines) to get the number of the last rule
# (e.g. 8 - 2 = 6).

iptables -L FORWARD -v | wc -l

# Delete the last (DROP) rule from the FORWARD chain and
# change the policy to ACCEPT

iptables -D FORWARD 6
iptables -P FORWARD ACCEPT

Also, when you're running Viper as a container under OpenVZ, the container itself will have the .1 addresses (i.e. 10.0.1.1), but you will not be able to use it as a router, because the container can't do forwarding (the 'nat' table only exists on the physical host). The physical host in that case will be configured to have eth1 at address 10.0.1.254, so you will need to edit ldifs/c1.com.ldif, search for "router: ", and change the value from 10.0.1.1 to 10.0.1.254 (and likewise for other subnets), instructing client hosts to use Viper's physical host for forwarding. Don't forget to run 'make' in the ldifs/ subdirectory to apply the change.
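For example, assuming the value is spelled exactly "router: 10.0.1.1" in the LDIF, as described above:

cd /etc/ldap/viper/ldifs
sed -i 's/^router: 10.0.1.1$/router: 10.0.1.254/' c1.com.ldif
make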

Changing the configuration / adding new clients

After Viper installation, testing and preparing for clients to connect, you can move onto adding new clients and hosts.

See Adding clients and hosts.