20110222

Puppet/Facter Question: How to determine if you are running puppet in a chroot environment

Update: solved


Solution:

Thanks to Daniel Pittman from Puppetlabs, the solution is really easy:
export FACTER_chroot=whatever
chroot /path/to/newroot puppetd -vdt --waitforcert 60 -l /var/log/puppetrun.log

Et voilà, facter gives you the $chroot fact, and you can use it in your manifests.
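If you'd rather not set the variable by hand every time, here's a minimal detection sketch (my own assumption, not part of Daniel's solution): on Linux you can compare the device:inode of your own / with that of PID 1's root. This needs /proc mounted inside the chroot and usually root privileges to read /proc/1/root; if the check can't run, the sketch falls back to false.

```shell
#!/bin/sh
# Heuristic chroot detection: compare the device:inode of our "/"
# with that of PID 1's root. Inside a chroot they normally differ.
# Caveat: needs /proc mounted and root privileges to stat
# /proc/1/root; if that fails, we fall back to "false".
my_root=$(stat -c '%d:%i' /)
init_root=$(stat -c '%d:%i' /proc/1/root/. 2>/dev/null)

if [ -n "$init_root" ] && [ "$my_root" != "$init_root" ]; then
    export FACTER_chroot=true
else
    export FACTER_chroot=false
fi
echo "FACTER_chroot=$FACTER_chroot"
```

The same idea could be packaged as a custom facter fact; the shell version just exports the FACTER_chroot variable the workaround above relies on.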

Thanks again, Daniel, you made a happy puppet user even happier :)



Dear Lazyweb,

Imagine running puppet in a chroot environment.

Your recipes were written to trigger some resource deployment when a service resource (using SysV, Upstart, or systemd) says "Yes, I'm running".

But starting a service inside a chroot via Upstart will normally fail, therefore your dependency won't be triggered.

Now, we have facter, and facter is a nice tool to determine if you are deploying on real hardware or on a virtual machine (like ESX or other virtualization solutions).

But, honestly, I didn't find any facter variable that tells me: "Yes, this is a chroot".

And right now it's already late, and I can't find a good way to determine that I'm doing some work inside a chroot and not on the live system.

Dear Lazyweb, if you know a good solution (facter plugin, whatever), please leave a comment.

Thank you in advance.

20110218

New Year, New Company ;)

A new era starts for me.

Since yesterday (2011-02-17) I'm not working for my old company anymore.

As some of you have heard, a global SaaS company bought my old employer Netviewer AG.

Therefore, many of my colleagues had/have/will change companies, and I had to decide whether to do the same.

It took me quite a few days and hours to think about this step, but finally I decided to follow my colleagues.

So yesterday I signed a new contract with the new company, and also signed the cancellation agreement with my old one.

As of today, I'm a happy employee of Citrix Online, Germany.

And as I'm now working for a, well, US-controlled company, I have to state here that everything I write on my private blog is my own opinion and doesn't, in any way, represent the opinion of the company I'm working for.

Let's see what'll happen. The future awaits me.

20110215

sudo over ssh magic

Imagine,

you have a datacenter full of Ubuntu Servers. 

Imagine,

you are the guy with sudo rights.

Imagine,

you need to run a command on all those servers, 
but this command needs to run with superuser privileges.

Imagine,

you didn't tweak your /etc/sudoers to allow 
this command to run without a password.

Imagine,

you try this: ssh $host sudo command_to_run

Realise,

this will always ask you for your sudo password,
and it echoes your password back to your output device.

But,

there is hope!

Find,

ssh -t -t -t $host sudo -S command <<EOF
<enter your password here>
EOF


Prerequisites for this to work:
  1. ssh authentication via public key without a passphrase (you have an account for such purposes with a holy secret ssh key without a passphrase)
  2. you are sitting alone in front of your workstation to enter your sudo password without anyone seeing it.
Explanation:

  1. ssh $host sudo command
    will echo the sudo password back to your terminal, which is nothing you want
  2. ssh -t forces the allocation of a pseudo-tty (read ssh(1))
  3. ssh -t -t -t forces tty allocation even if ssh has no local tty (read ssh(1))
  4. sudo -S causes sudo to read the password from stdin instead of the terminal device
  5. ssh -t -t -t $host in combination with sudo -S <command> <<EOF\nyour password\nEOF\n
    is what you really need to execute a sudo command on a remote host over ssh.


Conclusion:

You have a file with a list of IPs or hostnames for remote hosts you need to do something on with sudo.
A little script like the following will help you here:


#!/bin/bash

for i in $(cat ip.lst) ; do
     ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -t -t -t ${i} "sudo -S command <<EOF
<your password>
EOF
"
done
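A variant sketch of the same loop that avoids keeping the password in the script itself (my own twist, untested against your setup): prompt for it once with read -s and feed it through the heredoc. command and ip.lst are placeholders as above; the remote sudo -S -p '' just suppresses the password prompt string.

```shell
#!/bin/bash

# Read the sudo password once, without echoing it to the terminal.
read -r -s -p "sudo password: " SUDO_PASS
echo

# The heredoc becomes ssh's stdin, so the loop's own stdin (ip.lst)
# is not eaten by ssh.
while read -r host; do
    ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
        -o UserKnownHostsFile=/dev/null -t -t -t "${host}" \
        "sudo -S -p '' command" <<EOF
${SUDO_PASS}
EOF
done < ip.lst
```

The while read loop also copes with one hostname per line, whereas the for-over-cat version splits on any whitespace.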

20110204

Ubuntu 10.04 LTS + Portchannel Bonds + Active-Passive Bonds

Update 3: It has nothing to do with Upstart; I'm sure about it now, after spending 4 hours of debugging.

Oh hell, I wonder why I'm always running into strange situations regarding Ubuntu Server, networking and Upstart (I hope it's Upstart ;))

Ok, here we go with the setup:

Imagine you have a server with several ethernet interfaces.

eth0, eth1, eth2, eth3

Now imagine further that eth0 and eth1 will be bonded as a portchannel with LACP (bond-mode 4). Forget the xmit_hash_policy for now (it will be layer3+4, but that's not important right now).

Having Lucid and Upstart in place, the config looks like this:

auto bond0
iface bond0 inet static
   address 192.168.1.10
   netmask 255.255.255.0
   bond-slaves none
   bond-mode 4
   bond-miimon 100

auto bond1
iface bond1 inet static
   address 192.168.1.11
   netmask 255.255.255.0
   bond-slaves none
   bond-mode 4
   bond-miimon 100

auto eth0
iface eth0 inet manual
    bond-master bond0
    bond-primary eth0 eth1

auto eth1
iface eth1 inet manual
    bond-master bond0
    bond-primary eth0 eth1

auto eth2
iface eth2 inet manual
    bond-master bond1
    bond-primary eth2 eth3

auto eth3
iface eth3 inet manual
   bond-master bond1
   bond-primary eth2 eth3

The machine comes up, and I can ping the default interfaces just fine.
So, this setup is correct, the access vlans on the Cisco switch are set correctly, and the etherchannel config on the Cisco switch is also correct. There we go.

Now I want an active-passive bond on top of the two portchannel bonds. So I'm going to change the config like this:


auto bond0
iface bond0 inet static
   address 0.0.0.1
   netmask 255.255.255.255
   bond-slaves none
   bond-mode 4
   bond-miimon 100

auto bond1
iface bond1 inet static
   address 0.0.0.2
   netmask 255.255.255.255
   bond-slaves none
   bond-mode 4
   bond-miimon 100

auto bond2
iface bond2 inet static
   address 192.168.1.10
   netmask 255.255.255.0
   bond-slaves bond0 bond1
   bond-mode 1
   bond-miimon 100

auto eth0
iface eth0 inet manual
    bond-master bond0
    bond-primary eth0 eth1

auto eth1
iface eth1 inet manual
    bond-master bond0
    bond-primary eth0 eth1

auto eth2
iface eth2 inet manual
    bond-master bond1
    bond-primary eth2 eth3

auto eth3
iface eth3 inet manual
   bond-master bond1
   bond-primary eth2 eth3


On Ubuntu Jaunty, this setup worked out of the box (except that the manual eth* stanzas were not necessary; I had the bond-slaves configured directly on bond0 and bond1, but on Lucid it needs to be this way).

Ok, reboot the machine: it comes up, and no ping is possible, although all interfaces are up and running.
Even bond2 is correctly enslaved with bond0 and bond1.

So now I'm stuck. I think it has something to do with the startup order of the NICs and bonds.

The way it should be:

  1. Upstart will start /etc/init/networking.conf on "local-filesystems and stopped udevtrigger".
    This will bring up the bond interfaces bond0, bond1 and bond2
  2. Upstart will then bring up the hardware interfaces eth0, eth1, eth2 and eth3 and put them correctly as slaves to bond0 and bond1.
But what about interface bond2?
bond2 will come up with bond0 and bond1 as slaves, but bond0 and bond1 don't have their own bond-slaves ready yet. So bond2 doesn't know anything about the needed hardware interfaces.
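To verify that theory on the booted box, the bonding driver's status files show which slaves each bond actually picked up (standard Linux bonding /proc interface; the three bond names match the config above):

```shell
#!/bin/sh
# Print mode and enslaved interfaces for each bond. If bond0/bond1
# show no "Slave Interface" lines at the time bond2 enslaved them,
# the ordering problem described above is confirmed.
for b in bond0 bond1 bond2; do
    f="/proc/net/bonding/$b"
    echo "== $b =="
    if [ -r "$f" ]; then
        grep -E '^(Bonding Mode|Slave Interface)' "$f"
    else
        echo "(no bonding status file)"
    fi
done
```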

How can I tell Upstart to wait for the hardware interfaces before the virtual interfaces are started?
In other words, I need to defer /etc/init/networking.conf to be executed after the hardware interfaces are up and running.
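One untested workaround sketch in that direction (my assumption, not a verified fix): force the hardware NICs up from bond2's own stanza via a pre-up hook, so the slaves exist before bond2 enslaves bond0/bond1. pre-up itself is standard ifupdown; whether nested ifup calls behave sanely on Lucid is exactly what would need testing.

```
auto bond2
iface bond2 inet static
   address 192.168.1.10
   netmask 255.255.255.0
   bond-slaves bond0 bond1
   bond-mode 1
   bond-miimon 100
   # untested: bring the physical slaves up before enslaving bond0/bond1
   pre-up ifup eth0 eth1 eth2 eth3 || true
```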

If this worked somehow, I could even get rid of the unneeded manual configurations for eth0/eth1/eth2/eth3 and go back to a more sane /etc/network/interfaces configuration.

Help is appreciated.

UPDATE: I uploaded an image of the setup that worked out of the box on Ubuntu Jaunty, so you can see what I'm trying to achieve.


UPDATE 2: Found another guy on the Novell forums who had the same problem (http://forums.novell.com/novell-product-support-forums/suse-linux-enterprise-server-sles/sles-networking/398736-bond-bonds-bonding-2-aggregate-bonds-active-backup.html), but in 2009 that setup worked for me (on Ubuntu Jaunty).