Time flies...

yes, time flies.

And I haven't written anything on this blog since April, just before I started again at COL.
But that's the issue when you are busy with Life and Work, right?

So what happened during the last couple of months?

First of all, as already said, I started working again for COL as an Architect in the OPS department. Right after my second day back in the Karlsruhe office, I was already working on a product and company acquisition. Success: the product was migrated to our infrastructure in less than 3 weeks and was ready to be shown during the Citrix Synergy event in San Francisco (while you fellow Ubuntians were enjoying UDS in May in the Bay Area :))

At the beginning of June I traveled to the USA. Oh well, me and the States. Difficult topic. But, you know what, I am actually here:

(Courtesy of Wikipedia Author John Wiley)

Yes, this is California, this is Santa Barbara. And no, I'm not on holiday here, but it actually feels like it.
Sadly, I had to leave my family back in Germany. Well, next time this will change.

To be honest, I had to change my whole opinion about the American people, especially here in California. I didn't meet anybody who was unfriendly or nasty to me, this German bad boy.

I met good, friendly, open-minded people, and some of them I already call 'friends'.
Today actually marks my second month here, and I still have one month to go.

Right now I'm working on different projects. One of these projects is to support Ubuntu in our datacenters. Working on that is a challenge, because right now it's Red Hat only, so we have to change a lot of infrastructure: a distribution-agnostic deployment system, Puppet, and other cool topics.

One of the coolest topics, though, is working with one of the FreeIPA maintainers. We need to support FreeIPA on Ubuntu, which is not that easy right now, because the state of FreeIPA in Ubuntu is far from perfect.

"But FreeIPA is a RedHat/Fedora project", you will say. I have to admit that FreeIPA works (on Fedora), and it's more than cool.
But where there's a will, there's a way. And there we go.

Collaboration to the rescue. 

The FreeIPA upstream team is very helpful. And having one of the contributors sitting 2 or 3 cubes away from me is even better.

Anyways, I already filed some nasty bug reports, and we will fix them for the future.
One of the next steps should be to provide support for FreeIPA 2.2.0 (or even FreeIPA 2.3.x, which is in beta state).

Anyways, I don't want to bore you with technical details, this will be another blog post, I'm already writing :) 

So, how is the life here in SBA, CA as a German?


Life's easy :) One day you go deep sea fishing, and the other day you fly to Silicon Valley (San Jose) with your boss to do some datacenter work.

My Greek boss :)

Beautiful, isn't it?

Happy Happy Smile

And if this is not enough action, SBA actually has more to offer.

At the Canary Hotel, you'll get German Erdinger Weissbier on tap.

When you are around this area, just visit the Bouchon restaurant. It is awesome, not cheap, but the food is really, really, really (do I repeat myself???) [sounds like Jono] AWESOME!!
Especially when you are a lucky guy like me, not paying for it ;) And when you are even luckier, as I was, the owner of this restaurant will speak German to you and present you this good Mexican "water":

Anyhow, if you are missing your own country, especially Germany, go to Brummis restaurant and enjoy the German hospitality.

Is it just fun to be here? No, not at all. But when you are surrounded by the sun and have palm trees in front of your house, it is fun. Promise.

Anyhow, I'm not just here for the fun. I have to work, and I'm actually doing a lot of work. 

DC² will actually see a 1.0.0 release in the upcoming weeks.
There are some bugfixes and improvements to several Debian and Ubuntu packages waiting on my disk to be pushed out to the bugtrackers.
There is integration work between Ubuntu and FreeIPA, as well as some documentation on how to communicate with the FreeIPA server via remote APIs in a secure manner (with Kerberos ticket delegation from one host to another and then executing commands on that host without being logged in; sounds weird? Yes. Impossible? No!! And it's fun, believe me).

And some other work on projects you can find on Launchpad, but not in Ubuntu or Debian, like the Percona MySQL project. They could use some help, by the way, with regards to a clean build system and preparing packages from upstream source for Debian/Ubuntu and RPM-based distributions.

AND!!! There is CloudStack, the Cloud Project of Citrix Inc. which was handed over to the Apache Foundation. 

Anyhow, I don't want to brag more... but I love being here in this area. So, let's see; it is my first trip to the US in ages, but it won't be my last. I could even imagine moving here.

Well, anyways, in the meantime I'm working and waiting for the next big bang on August 15th in Mountain View: Kiss, Mötley Crüe and The Treatment will play a concert in the Shoreline Amphitheatre,

and this is "just around the corner", so I had to book some tickets. So if you are going too, just let me know; maybe we can meet before or after the concert in Mountain View and have a drink or two :)
On the 16th I'll be in San Francisco (at least that's the plan), so if this is a place for you to meet up, you know how to get in contact ;)

Anyways, I heard and read today that the next UDS will be in Copenhagen, Denmark. This is awesome, and maybe I can combine a business trip to Copenhagen with visiting UDS to have some good discussions with some key people :)

There we go. A lot of updates, good stories, awesome country.

Rock On!


It's time to make some noise

It's April and I didn't touch this weblog for more than 4 months now.
What happened? Why is "SAdig" so silent?

'Cause I'm busy, that's why.

So what happened during the last months? A lot I have to say...

The December 2011 Story

December 2011 was my last month of being employed at Citrix Online Germany, formerly known as Netviewer. Sadly, after all the fuss and stress about my resignation I became sick and my doctor recommended staying at home. Actually, he put me on sick leave for almost half of December. I don't know if I should thank him or hate him for that. Anyhow, during this time I really had a hard time. I had a fever, and my whole body refused to work properly. Going from 250% to 0% working power is just like cold turkey for a drug addict.
Really, I wasn't ok, and my Christmas holiday wasn't really nice.

But this is the past.

The January to March 2012 Story

From January 2012 on I started to work for a consulting company named Inovex. The good thing about this story was that I took my work with me: as a result of my resignation from Citrix Online/Netviewer, my former bosses hired me from Inovex as a consultant for the Netviewer system.
Normally people would be happy: you start a new job, you have your probation time, and you can work on the old daily business again, for better money and with less stress. Furthermore, I was able to work on DC² more often and finally had the chance to create good installation packages.

But did this change of job change anything for me?

You know, it's hard when you come back to your old company as a consultant. No one will see you as an external resource, and most likely you don't see the others as customers. Maybe it's a personal issue, but this is how I felt. Therefore, nothing changed for me.
Furthermore, the work I had to do was not that complicated; I built this system in cooperation with my fellow OPS buddies. We created a system which just runs. No big incidents, no serious crashes of the hardware, no serious crashes of the software. The OS runs like a charm, thanks to Ubuntu 10.04 LTS, so there was not much to do.
My job wasn't to do anything exceptional, except writing documentation. Documentation which was either already there or not needed anymore, because this system has only a limited lifetime and some of the things we created are no longer needed. Sitting there writing documentation for something which isn't needed anymore doesn't make sense, and worse, it didn't tickle my brain. I felt like "Crap, I don't do anything".

After some discussions with my family and with myself, I decided to quit this job, too.

Honestly, the people at Inovex are smart people. Friendly, cooperative, fast, and living on the edge of technology. Let me just say it was not an issue with Inovex but with me; that's why I quit.

I learned in the past that it's more important to be happy for yourself, and not for anything or anybody else. If you are not happy, you can't make others happy.

The Future

Honestly, I really don't know what the future holds for me, but I'm not without a "new" job.
Sometimes you need some luck, good friends and a company which can learn from the past.

So, I was hired again by Citrix Online, but in a completely different working area. This deal was made on short notice, and I never expected to do that, but hey, my family and I need some money for food, drinks and fuel.

I can't write about what I'm doing or what I will be doing, because all of this is very confidential.
I won't work anymore on our old Netviewer System, that's the past.

There are new opportunities showing up, there are really great projects coming along and I'm being involved. There is also Ubuntu work involved.

Furthermore, it is good to hear and read that Citrix Systems Inc. is moving their CloudStack project to the Apache Foundation and that Citrix Systems is now a Platinum Sponsor of the Apache Foundation.

A Short Notice To The Fellow Readers

I know that some people will be surprised, while others will say "I knew that" or "I thought about this already".

All this wasn't planned. It just happened like many other situations are just happening in life.

I won't enable comments for this entry. So if you want to get in touch, you know my email address, so write an email.


Life changes

So, the end of the year is near, and with New Year's Eve my life will change in some areas.

First of all, I'll be leaving Netviewer (now: Citrix Online) after roughly 4 years of giving my energy to the company. During these 4 years we (StephanT, SvenW, FelixR and JensG) accomplished a lot.

StephanT and I designed and built 2 new products on the datacenter level. Long nights, lots of brainfck and (of course we are talking about the real IT people: Operations) a lot of beer. Together with SvenW we worked out a nice, not so complicated, redundant, highly available and secure MySQL infrastructure; we introduced a new monitoring system (OpenNMS); we added Puppet magic to our deployment; and we (or let's just say I) developed, on top of FAI, a new and easy system to deploy bare metal, VMs and everything which is able to run Linux (Debian/Ubuntu/RH/SuSE/...) and do a PXE boot.

We changed the way OPS and Development work together, we introduced on-duty weeks, and we managed to redeploy a complete datacenter infrastructure in less than 48 hours, with less than 15 minutes of downtime.

During these 4 years my son Sean Ryan was born and I got married to the most adorable woman in the world. I changed my name, and I finally stopped touching a computer in the evenings (even when this was really hard).

From next year on, I'll be working for a company named Inovex. It's a consulting company, and I'll join the Systems Engineering team. I'm really happy to work for them, because some of my former colleagues from ComBOTS (now: Kizoo) are working there too. And what's really amazing: FelixR and JensG are joining Inovex, too.

So, the next couple of months will be filled with new duties, ideas and fresh work.

Furthermore, I'm working on the DC² side of life to get an easy installation process for it, so I can keep my promise to Thomas Lange (the FAI maintainer) that he can release the Enterprise Version of FAI next year ;)

Ubuntu work is also on the list of todos; hopefully I can dedicate more time to Ubuntu again. I'm really missing it.

This is my last post for this year, and I wish all of you around the world a good time during the upcoming holidays and a good start into the new year 2012.


[20111025] Ubuntu 12.04 Merge/Sync Report

Today's merges and syncs:

  • libunwind (merge, uploaded)
  • fai (nosync, nomerge, waiting for 4.0 release)
  • abiword (sync)
  • agave (merge, uploaded)
  • gnome-mousetrap (sync)
  • hoichess (sync)
  • lasso (sync)
  • libgwenhywfar (merge, uploaded)


DC² goes Android

Somehow I wanted to learn something about Android Development.

So, DC² goes Android. Check the video:


Or click on the source video 

As always, you'll find the source of the application on launchpad.

Have fun.


(DC)²: Going forward to a 1.0 Release

It has been a bit quiet on this blog during the last 2 months, but I was really busy with some projects at work.

Today, I would like to write some bits and pieces about the "DataCenter Deployment Control" Project aka (DC)².
In my last article you could see (DC)² in action, deploying some virtual machines on a XenServer (or on VMware, or on bare metal, or on every device which is able to do PXE).

At that time, (DC)² was using the Django framework as its backend and the fantastic RPC4Django as its RPC module. MySQL was in use as the database engine. We used the very well working tftpy as TFTP server and PXELinux from Syslinux as PXE bootloader. As frontend development framework I used the "Qooxdoo" JavaScript framework.

Now I have improved all of this.

The Backend

First of all, I replaced Django and RPC4Django with web.py and a self-developed XMLRPC and JSON-RPC module. With less overhead, all RPC calls are much faster now.
Furthermore, I revisited the whole RPC namespace and refactored most of it.
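To give an idea of what such a hand-rolled RPC layer boils down to, here is a minimal, stdlib-only JSON-RPC dispatcher. This is only a sketch: the registry, the decorator and the "dc2.ping" method are made up for illustration and are not the real (DC)² code.

```python
import json

# toy method registry -- the real (DC)2 RPC namespace looks different
REGISTRY = {}

def rpc_method(name):
    """Register a function under a dotted RPC method name."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@rpc_method("dc2.ping")
def ping():
    return "pong"

def handle(raw_request):
    """Dispatch one JSON-RPC request string and return the response string."""
    req = json.loads(raw_request)
    fn = REGISTRY.get(req["method"])
    if fn is None:
        return json.dumps({"id": req.get("id"), "result": None,
                           "error": "unknown method: %s" % req["method"]})
    result = fn(*req.get("params", []))
    return json.dumps({"id": req.get("id"), "result": result, "error": None})
```

Because there is no ORM and no framework dispatch chain in between, a call like `handle('{"method": "dc2.ping", "id": 1}')` is just one dict lookup and one function call.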

Another important change was to move away from the relational database (MySQL), as it was introducing more complexity to the project.
When I started to think about moving away from the relational model to a document-oriented model, I first gave CouchDB a try. But CouchDB wasn't the best candidate for this, so I had a look at MongoDB.

And MongoDB it is.

So, with MongoDB and PyMongo you can work without special table models, but if you want, you are able to implement a relational DB style, which was needed by some workflows in my case.
Furthermore, the replication and sharding functionality of MongoDB was exactly what I was looking for. Easy to set up and configure.

And MongoDB gives you JSON output, or, when you work with PyMongo, native dictionary types, which was important to me, because one feature I wanted for (DC)² was that its documents can be easily extended.


We do auto inventory for servers. That means I needed some information from the servers which is unique.
I defined my server document roughly like this (only the required fields shown; the values are placeholders):

{
    "uuid": "<product_uuid>",
    "serial_number": "<product_serial>"
}

Reading this, we just need a server UUID (which you can normally find under /sys/class/dmi/id/product_uuid; if this displays 00000 or something other than a UUID nowadays, you should stone your hardware manufacturer) and a serial number (/sys/class/dmi/id/product_serial).
This information is needed to identify a server. Any other info is not necessary (during the inventory job I try to gather more, but it's just not that important).

But this record is not complete. Some server admins need more information, like "How many CPU cores does the server have?" or "How much memory does the server have?" If you want to add this information, you just add it in the inventory job (how you do that is a topic for another article). You simply push the record with the required fields plus your added fields to the same RPC call, and (DC)² will just save it to MongoDB.

And this is possible all over the system. I defined some information which is needed for the standard work, which is really enough to help you deploy servers and do the bookkeeping, but you can add as much information as you need on top. Without changing one bit of backend code.
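As an illustration of this idea, an inventory job could look roughly like the sketch below. The two /sys paths are the ones mentioned above; the RPC endpoint URL and the method name, however, are assumptions for illustration, not the real (DC)² API.

```python
from xmlrpc.client import ServerProxy  # xmlrpclib on Python 2

def read_dmi(name, default="unknown"):
    """Read one DMI attribute from sysfs, e.g. product_uuid."""
    try:
        with open("/sys/class/dmi/id/" + name) as f:
            return f.read().strip()
    except IOError:
        return default

def collect_inventory():
    # the two required fields (DC)2 needs to identify the server...
    record = {
        "uuid": read_dmi("product_uuid"),
        "serial_number": read_dmi("product_serial"),
    }
    # ...plus any extra fields you care about; the backend stores them as-is
    record["cpu_cores"] = 8          # e.g. counted from /proc/cpuinfo
    record["memory_mb"] = 16384      # e.g. parsed from /proc/meminfo
    return record

if __name__ == "__main__":
    # hypothetical endpoint and method name -- adjust to your setup
    dc2 = ServerProxy("http://dc2.example.com/xmlrpc")
    dc2.inventory.save_server(collect_inventory())
```

The point is that adding `cpu_cores` or `memory_mb` requires no schema change anywhere; the extended dict goes through the same RPC call into the same MongoDB collection.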

The Middleware

Well, (DC)² is mostly bookkeeping and configuration management, helping you to control your server fleet.
The deployment itself is done by FAI - Fully Automatic Installation - an easy tool to deploy Debian-based distros as well as RPM-based distros like RHEL, CentOS, SLES, Fedora, Scientific Linux etc.

So, how does it interact with (DC)²?

As said before, the backend speaks XMLRPC and JSON-RPC. The JSON-RPC part is for the frontend; the XMLRPC part is for the middleware and all tools needing data from (DC)².

The PXE booting is also improved. Instead of using TFTP for loading the kernel and initrd, I switched from the old pxelinux to the new gpxelinux (included in Syslinux 4.02).
gpxelinux needs TFTP only for loading the gpxelinux.0 file; all other files are downloaded via HTTP.
This gives you a wonderful possibility to scale your deployment infrastructure cheaply.
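For illustration, a gpxelinux config entry can point straight at an HTTP server for kernel and initrd; the host name and paths below are made up:

```
# pxelinux.cfg/default -- only gpxelinux.0 itself still comes via TFTP
DEFAULT install
LABEL install
    KERNEL http://deploy.example.com/boot/vmlinuz
    APPEND initrd=http://deploy.example.com/boot/initrd.img
```

Since HTTP servers are trivial to load-balance and cache, this is exactly where the cheap scaling comes from.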

The Frontend

The frontend didn't change as dramatically as the backend, but there are still good things to be found.
First of all, I put most of the code into separate modules. So, right now, there are modules for the models, which are used for JSON-RPC calls and for pushing the data back to the widgets.
There is a module for all globally used widgets. You'll see that there is one widget which is used the most. It's called "TableWidget" and contains most of the functionality.

You can put any widget you need into the tab view.

You see that the web frontend looks just like a desktop application. Which was indeed the purpose of using Qooxdoo and not an "HTML add-on framework" like Dojo or YUI. I needed a real developer's framework, and Qooxdoo is really one of the best. You can code like with Nokia's Qt, and it follows the OOP paradigm most of the time.

And even for me, someone who had no clue about JavaScript, it was easy to learn and adapt.

To show you how easy it is to add a new tab with a TableWidget, here is the JavaScript code of the Servers tab.

Code Example:

_showInventoryServers:function(e) {
    if ("inventory_server" in this._tabPages) {
        // the tab page already exists, nothing to create
    } else {
        var server_tbl=new dc2.models.Servers(dc2.helpers.BrowserCheck.RPCUrl(false));
        var server_search_dlg=new dc2.dialogs.search.Servers();
        var server_edit_dialog=new dc2.dialogs.EditServer();
        var server_table_options={
            searchFunctions: {
                // search callbacks elided here
            }
        };
        var server_table=new dc2.widgets.TableWidget(server_table_options);
    }
}

You can have a closer look at the rest of the code on the DC² code browsing page on Launchpad.
The current frontend version of (DC)² is using Qooxdoo version 1.5.

New Features


As you can see on the screenshots, there is another menu entry with the name "CS²".

(CS)² means "Centralized SSL CA System" and helps you to manage your SSL host keys, CSRs, certs and CRLs. It is mostly used in the deployment system for Puppet or FreeIPA or whatever tools you are using which are in need of SSL authorization.
(CS)² can be integrated into (DC)², but is also usable as a standalone application. Just like (DC)², it has an XMLRPC and JSON-RPC backend, has a Qooxdoo frontend and is completely written in Python. Check out the screenshots.

RPM Based Distributions

Thanks to the work of the great Michael Goetze, FAI is able to install RPM-based distros like CentOS or Scientific Linux. I converted the CentOS deployment to RHEL 5 and RHEL 6, so now you are able to deploy most of the widely used RPM-based distributions with FAI.
Thanks also to Thomas Lange, the new maintainer of Rinse, for adding my patch to it.

What's still going to come?

I'm working on a Xen Management Center for (DC)², so you can provision Xen VMs (HVMs/PVs) in one tool without using any other tool.

This is a bit tricky, but it's coming along.
This module will also be available as an integration into (DC)² and as a standalone application.
It will also have an XMLRPC and JSON-RPC backend.
Possibly (this is not decided yet) this RPC backend will also handle VMware ESX server provisioning. We'll see.


Quick tip for installing Ubuntu as Paravirtualized Guest on XenServer via PXE Boot

Most of the time, when you are using your Amazon Cloud instances, you are working on XenServer.
And most of the time, all your Ubuntu instances are paravirtualized (PV) and not fully hardware virtualized (HVM) like the Windows instances.

Well, let's imagine you have your own XenServer and you want to install Ubuntu with your already-in-place deployment solution, which uses the standard PXE/TFTP way... (Ubuntu is just an example; actually this works for almost all Linux distros which can be deployed via network).

The first question you need to ask is: what's the difference between PV and HVM machines?
To answer that, you just have to have a look at the Xen wiki:

Quote from http://wiki.xensource.com/xenwiki/XenOverview:

Xen supported virtualization types

Xen supports running two different types of guests. Xen guests are often called as domUs (unprivileged domains). Both guest types (PV, HVM) can be used at the same time on a single Xen system.

Xen Paravirtualization (PV)

Paravirtualization is an efficient and lightweight virtualization technique introduced by Xen, later adopted also by other virtualization solutions. Paravirtualization doesn't require virtualization extensions from the host CPU. However paravirtualized guests require special kernel that is ported to run natively on Xen, so the guests are aware of the hypervisor and can run efficiently without emulation or virtual emulated hardware. Xen PV guest kernels exist for Linux, NetBSD, FreeBSD, OpenSolaris and Novell Netware operating systems.
PV guests don't have any kind of virtual emulated hardware, but graphical console is still possible using guest pvfb (paravirtual framebuffer). PV guest graphical console can be viewed using VNC client, or Redhat's virt-viewer. There's a separate VNC server in dom0 for each guest's PVFB.
Upstream kernel.org Linux kernels since Linux 2.6.24 include Xen PV guest (domU) support based on the Linux pvops framework, so every upstream Linux kernel can be automatically used as Xen PV guest kernel without any additional patches or modifications.
See XenParavirtOps wiki page for more information about Linux pvops Xen support.

Xen Full virtualization (HVM)

Fully virtualized aka HVM (Hardware Virtual Machine) guests require CPU virtualization extensions from the host CPU (Intel VT, AMD-V). Xen uses modified version of Qemu to emulate full PC hardware, including BIOS, IDE disk controller, VGA graphic adapter, USB controller, network adapter etc for HVM guests. CPU virtualization extensions are used to boost performance of the emulation. Fully virtualized guests don't require special kernel, so for example Windows operating systems can be used as Xen HVM guest. Fully virtualized guests are usually slower than paravirtualized guests, because of the required emulation.
To boost performance fully virtualized HVM guests can use special paravirtual device drivers to bypass the emulation for disk and network IO. Xen Windows HVM guests can use the opensource GPLPV drivers. See XenLinuxPVonHVMdrivers wiki page for more information about Xen PV-on-HVM drivers for Linux HVM guests.

So, using a naïve approach, the difference is that an HVM machine "simulates" a real hardware server, while a PV machine uses the hardware resources of the XenServer host.
An HVM machine provides a BIOS; the PV machine does not. I don't want to go into other details, and this description is a gross simplification, but it helps to see the difference.

Well, now we come to the problem: how can you do a PXE install on a PV machine, when the PV machine does not provide a boot BIOS or whatever it needs to do the initial boot request?

There are some howtos on how to deploy a Linux OS on a PV machine on XenServer via PXE (e.g. the XEN PXE Boot Howto by Zhigang Wang), but they go too far. It can be easier.

Assuming you have a template for a PV machine on your XenServer and you provision one PV machine from this template, we can start with the experiment.

In your Xen console you can see that during bootup there is no BIOS message or PXE boot message, as you would see on a normal HVM machine.
But when you check in your XenCenter under the VM -> Start/Shutdown menu, you see one entry below the Reboot entry. It's labeled "Start in Recovery Mode".

When your machine is stopped and you click on this menu item, the machine boots with a BIOS, or better said, it boots with a PXE bootloader and behaves exactly like an HVM machine.
What? You provisioned a PV machine, and now you have an HVM?

Right, that's all there is to it. When you stop the machine now, it goes back to the normal PV state. How cool is that?

But, what is the magic behind this special "Recovery Mode"?

Honestly, it took me some time to find the solution.

To find out more about this, I dug into the XenServer XMLRPC API to get some more detailed information about the VMs.

The devs of XenServer are really cool: they provide an XMLRPC API server, and they also provide a Python XMLRPC API wrapper.
(I won't go into details about all the methods and calls; you should read the XenServer XMLRPC API documentation and also the Python examples. You can also download the XenAPI.py module from there.)

Let's do some easy hacking:

First, get your Python XenAPI source and start connecting to your XenServer (host name and credentials below are placeholders):

from XenAPI import Session

if __name__=="__main__":
    s=Session("https://your-xenserver")
    s.xenapi.login_with_password("username","password")

Now you are connected and authenticated.
To make things easier, you should write down your machine's title/label. Let's imagine our PV machine is named "PV-Test".

To get the information we need, we first have to fetch the VM record from the XenServer (these are the standard XenAPI calls):

vm_ref=s.xenapi.VM.get_by_name_label("PV-Test")[0]
vm_rec=s.xenapi.VM.get_record(vm_ref)

Now we actually have the whole description of this VM in our "vm_rec" variable.
The type is a dict, so it's easy to iterate through it and get all the information we need:

for i in vm_rec.keys():
    print "%s => %s" % (i, vm_rec[i])

The important information we need is in the following keys:

  • PV_args
  • PV_bootloader => pygrub
  • PV_ramdisk
  • PV_kernel
  • PV_bootloader_args
  • PV_legacy_args
  • HVM_boot_params
  • HVM_boot_policy
On my test machine the values are like this:

  • PV_args => 
  • PV_bootloader => pygrub
  • PV_ramdisk => 
  • PV_kernel => 
  • PV_bootloader_args => 
  • PV_legacy_args => 
  • HVM_boot_params => {}
  • HVM_boot_policy => 
The PV_bootloader => pygrub entry tells us that Xen will use a dedicated menu.lst from your machine's /boot/grub (grub-legacy format, not grub-pc).
This is the default way of booting your Ubuntu instances on Amazon today.
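For reference, such a pygrub-readable menu.lst is plain grub-legacy syntax; the kernel version and UUID below are placeholders:

```
# /boot/grub/menu.lst (grub-legacy syntax, read by pygrub)
default 0
timeout 2

title Ubuntu
    root (hd0,0)
    kernel /boot/vmlinuz-... root=UUID=... ro console=hvc0
    initrd /boot/initrd.img-...
```

pygrub parses this file from the guest's disk and hands kernel and initrd to Xen, so no BIOS or PXE step is involved in a normal PV boot.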

To simulate the Recovery Mode programmatically, you need to switch from the PV pygrub boot method to the HVM boot method. And thanks to some magic of Xen - or better, what I realized - HVM boot methods always take precedence over PV boot methods.

To enable HVM network boot from your Python tool, you just have to do this ("vm_ref" being the VM's opaque reference, not the record):

s.xenapi.VM.set_HVM_boot_policy(vm_ref,"BIOS order")

When you now start your machine, you will see that it boots via PXE.

To switch back to your normal PV boot method, you just empty this setting again:

s.xenapi.VM.set_HVM_boot_policy(vm_ref,"")

Now you have successfully simulated the Recovery Boot of your XenCenter.

But hey, there are some things to know:

All releases of Ubuntu that use UUIDs in fstab for their disks are easy to deploy. During installation in HVM mode, you will see the normal disk names like /dev/sda etc.
After switching back to PV mode, you don't have /dev/sda etc. anymore, but other device names. This is no problem for your Ubuntu install, because it can map the UUIDs to the new device names. No problems here. But make sure you have the "grub-legacy-ec2" package installed. I think I'll ask for a rename of this package, because it's not EC2-specific, but Xen pygrub-specific.

Other Linux distros, which don't use UUIDs for device mounting, will have problems here. You need to rewrite your fstab to use the new device names.

But it's good to know that you can use your PXE deployment solution to deploy better-performing PV machines on your XenServer without changing a thing.