Time flies…

yes, time flies.

I haven't written anything on this blog since April, just before I started again at COL.
But that’s the issue when you are busy with Life and Work, right?

So what happened during the last couple of months?

First of all, as already said, I started working for COL again, as an Architect in the OPS department. And right after my second day back in the Karlsruhe office, I was already working on a product and company acquisition. Success: the product was migrated to our infrastructure in less than 3 weeks and was ready to be shown during the Citrix Synergy event in San Francisco (while you, fellow Ubuntians, were enjoying UDS in May in the Bay Area :))

At the beginning of June I traveled to the USA. Oh well, me and the States. Difficult topic. But, you know what, I'm actually here:

Yes, this is California, this is Santa Barbara. And no, I'm not on holiday here, but it actually feels like it.
Sadly, I had to leave my family back in Germany. Well, next time that will change.

To be honest, I had to revise my whole opinion about the American people, especially here in California. I didn't meet anybody who was unfriendly or nasty to me, this German bad boy.

I met good, friendly, open-minded people, and some of them I already call 'friends'.
Today actually marks my second month here, and I still have one month to go.

Right now I'm working on different projects. One of these projects is to support Ubuntu in our datacenters. Working on that is a challenge, because right now it's Red Hat only, so we have to change a lot of infrastructure: a distribution-agnostic deployment system, Puppet, and other cool topics.

One of the coolest topics, though, is working with one of the FreeIPA maintainers. We need to support FreeIPA on Ubuntu, which is not that easy right now, because the state of FreeIPA in Ubuntu is far from perfect.

“But FreeIPA is a RedHat/Fedora project” you will say, but I have to admit that FreeIPA works (on Fedora), and it's more than cool.
But where there is a will, there is also a way. And there we go.

Collaboration to the rescue.
The FreeIPA upstream team is very helpful. And having one of the contributors sitting 2 or 3 cubes away from me is even better.

Anyways, I already filed some nasty bug reports, and we will fix them for the future.
One of the next steps should be to provide support for FreeIPA 2.2.0 (or even FreeIPA 2.3.x, which is in beta state).

Anyways, I don't want to bore you with technical details; that will be another blog post, which I'm already writing 🙂

So, how is the life here in SBA, CA as a German?

Life's easy 🙂 One day you go deep-sea fishing, and the next day you fly to Silicon Valley (San Jose) with your boss to do some datacenter work.

My Greek boss 🙂

Beautiful, isn’t it?

Happy Happy Smile

And if that is not enough action, you can actually do even better in SBA.

At the Canary Hotel, you'll get German Erdinger Weissbier on tap.

When you are around this area, just visit the Bouchon restaurant. It is awesome, not cheap, but the food is really, really, really (do I repeat myself???) [sounds like Jono] AWESOME!!
Especially when you are a lucky guy like me, not paying for it 😉 And when you are even luckier, as I was, the owner of the restaurant will speak German to you and present you with this good Mexican “water”:

Anyhow, if you are missing your own country, especially Germany, go to Brummis restaurant and enjoy the German hospitality.

Is it just fun to be here? No, not at all. But when you are surrounded by the sun, with palm trees in front of your house, it is fun. Promise.

Anyhow, I’m not just here for the fun. I have to work, and I’m actually doing a lot of work.

(DC)² is actually getting a 1.0.0 release in the upcoming weeks.
There are some bugfixes and improvements to several Debian and Ubuntu packages waiting on my disk to be pushed out to the bug trackers.
There is integration work between Ubuntu and FreeIPA, as well as some documentation on how to communicate with the FreeIPA server through its remote APIs in a secure manner (with Kerberos ticket delegation from one host to another host, and then executing commands on that one without being logged in; sounds weird? Yes. Impossible? No!! And it's fun, believe me).

And there is some other work on projects you can find on Launchpad, but not in Ubuntu or Debian, like the Percona MySQL project. They could use some help, by the way, with regards to a clean build system and preparing packages from upstream source for Debian/Ubuntu and RPM-based distributions.

AND!!! There is CloudStack, the cloud project of Citrix Inc., which was handed over to the Apache Foundation.

Anyhow, I don't want to brag more… but I love being here in this area. So, let's see; it is my first trip to the US in ages, but it won't be my last. I could even imagine moving here.

Well, anyway, in the meantime I'm working and waiting for the next big bang on August 15th in Mountain View: KISS, Mötley Crüe and The Treatment will play a concert at the Shoreline Amphitheatre,

and this is “just around the corner”, so I had to book some tickets. So if you are going too, just let me know; maybe we can meet before or after the concert in Mountain View and have a drink or two 🙂
On the 16th I'll be in San Francisco (at least that's the plan), so if that is a place for you to meet up, you know how to get in contact 😉

Anyways, I heard and read today that the next UDS will be in Copenhagen, Denmark. This is awesome, and maybe I can combine a business trip to Copenhagen with visiting UDS, to have some good discussions with some key people 🙂

There we go. A lot of updates, good stories, awesome country.

Rock On!

(DC)²: Going forward to a 1.0 Release

It has been a bit quiet on this blog during the last 2 months, but I was really busy with some projects at work.

Today I would like to write a few bits and pieces about the “DataCenter Deployment Control” project, aka (DC)².
In my last article you could see (DC)² in action, deploying some virtual machines on a Xen server (or on VMware, or on bare metal, or on any device which is able to do PXE).

At that time, (DC)² was using the Django framework as backend and the fantastic RPC4Django as RPC module. MySQL was in use as the database engine. We used the very well working tftpy as TFTP server and PXELinux from Syslinux as PXE bootloader. As frontend development framework I used the Qooxdoo JavaScript framework.

Now I have improved all of this.

The Backend

First of all, I replaced Django and RPC4Django with web.py and a self-developed XMLRPC and JSON-RPC module. With less overhead, all RPC calls are much faster now.
Furthermore, I revisited the whole RPC namespace and refactored most of it.
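To give an idea of why such a module can stay lightweight, here is a minimal JSON-RPC-style dispatcher in plain Python. This is only a sketch; the method name and the registry are hypothetical examples, not (DC)²'s actual code, and the real module also handles the web.py wiring and XMLRPC:

```python
import json

# Registry of callable RPC methods; in a real backend this would be
# the full RPC namespace. "servers.list" is an invented example.
METHODS = {
    "servers.list": lambda: ["srv01", "srv02"],
}

def dispatch(raw_request):
    """Take a raw JSON-RPC request string, return a JSON response string."""
    req = json.loads(raw_request)
    method = METHODS.get(req["method"])
    if method is None:
        return json.dumps({"result": None,
                           "error": "unknown method",
                           "id": req.get("id")})
    result = method(*req.get("params", []))
    return json.dumps({"result": result, "error": None, "id": req.get("id")})
```

A call then is just `dispatch('{"method": "servers.list", "params": [], "id": 1}')`; there is no ORM and no framework machinery between the request and the method, which is where the speedup comes from.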

Another important change was to move away from the relational database (MySQL), as it was introducing more complexity to the project.
When I started to think about moving from the relational model to a document-oriented model, I first gave CouchDB a try. But CouchDB wasn't the best candidate for this, so I had a look at MongoDB.

And MongoDB it is.

So, with MongoDB and PyMongo you can work without special table models, but if you want, you are able to implement a relational-DB style, which some workflows in my case needed.
Furthermore, the replication and sharding functionality of MongoDB was exactly what I was looking for: easy to set up and configure.

And MongoDB gives you JSON output, or, when you work with PyMongo, native dictionary types. This was important to me, because one feature I wanted for (DC)² was that its documents can be easily extended.

Example:

We do auto-inventory for servers. That means I needed some pieces of information from the servers which are unique.
I defined my server document like this:

SERVER_RECORD = {
    "uuid": True,
    "serial_no": True,
    "product_name": False,
    "manufacturer": False,
    "location": False,
    "asset_tags": False
}

Reading this, we just need a server UUID (which you can normally find under /sys/class/dmi/id/product_uuid; if this displays 00000 or something else that is not a UUID nowadays, you should stone your hardware manufacturer) and a serial number (/sys/class/dmi/id/product_serial).
This information is needed to identify a server. Any other info is not necessary (during the inventory job I try to get that info, but it is actually just not that important).
But this record is not complete. Some server admins need more information, like “how many CPU cores does the server have?” or “how much memory does the server have?”. If you want to add this information, you just add it to the inventory job (how you do that is a topic for another article). You simply push the record, with the needed fields plus your added fields, to the same RPC call, and (DC)² will just save it to MongoDB.
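The sanity check on the DMI UUID can be sketched like this; a small Python helper that rejects the all-zeros (or otherwise broken) values some manufacturers ship. The function name is mine, for illustration, not from the (DC)² code:

```python
import uuid

def is_usable_uuid(raw):
    """Return True only if raw parses as a UUID and is not the
    all-zeros value some BIOS vendors put into product_uuid."""
    try:
        parsed = uuid.UUID(raw.strip())
    except (ValueError, AttributeError):
        return False
    return parsed.int != 0

# On a real server you would feed it the DMI value, e.g.:
#   with open("/sys/class/dmi/id/product_uuid") as f:
#       ok = is_usable_uuid(f.read())
```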

And this is possible all over the system. I defined the information which is needed for the standard work, which is really enough to help you deploy servers and help you with the bookkeeping, but you can add as much information as you need on top, without changing one bit of backend code.
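As a sketch of how such a required/optional scheme can be enforced before pushing a record to MongoDB: required fields are checked, anything extra passes straight through. The helper below is illustrative, not the actual (DC)² backend code:

```python
SERVER_RECORD = {
    "uuid": True,           # True  = required field
    "serial_no": True,
    "product_name": False,  # False = optional field
    "manufacturer": False,
    "location": False,
    "asset_tags": False,
}

def check_record(record, schema=SERVER_RECORD):
    """Return the record unchanged if all required fields are present.
    Extra fields (e.g. cpu_cores, memory) pass through untouched,
    which is what makes the documents easily extensible."""
    missing = [key for key, required in schema.items()
               if required and key not in record]
    if missing:
        raise ValueError("missing required fields: %s" % ", ".join(missing))
    return record

# An inventory job can simply add its own fields:
#   check_record({"uuid": "...", "serial_no": "...", "cpu_cores": 8})
```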

The Middleware

Well, (DC)² is mostly bookkeeping and configuration management, helping you to control your server fleet.
The deployment itself is done by FAI (Fully Automatic Installation), an easy tool to deploy Debian-based distros as well as RPM-based distros like RHEL, CentOS, SLES, Fedora, Scientific Linux, etc.

So, how does it interact with (DC)²?

As said before, the backend speaks XMLRPC and JSON-RPC. The JSON-RPC part is for the frontend; the XMLRPC part is for the middleware and all tools needing data from (DC)².

The PXE booting is also improved. Instead of using TFTP for loading the kernel and initrd, I switched from the old pxelinux to the new gpxelinux (included in Syslinux 4.02).
GPXELinux needs TFTP only for loading the gpxelinux.0 file; all other files are downloaded via the HTTP protocol.
This gives you a wonderful possibility to scale your deployment infrastructure cheaply.
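A typical boot entry would then look roughly like the sketch below (the host name is a placeholder): only gpxelinux.0 comes over TFTP, while kernel and initrd live on any HTTP server, so the HTTP side can be scaled with ordinary web servers or caches.

```
DEFAULT fai
LABEL fai
  KERNEL http://deploy.example.com/boot/vmlinuz-install
  APPEND initrd=http://deploy.example.com/boot/initrd.img FAI_ACTION=install
```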

The Frontend

The frontend did not change as dramatically as the backend, but good things are still to be found.

First of all, I put most of the code into separate modules. So, right now, there are modules for the models, which are used for JSON-RPC calls and for pushing the data back to the widgets.
There is a module for all globally used widgets. You'll see that there is one widget which is used the most: it's called “TableWidget” and has most of the functionality in it.

But you can put any widget you need into the tab view.

You see that the web frontend looks just like a desktop application, which was indeed the purpose of using Qooxdoo rather than an “HTML add-on framework” like Dojo or YUI. I needed a real developer's framework, and Qooxdoo is really one of the best. You can code like with Nokia's Qt, and it follows the OOP paradigm most of the time.

And even for me, someone who had no clue about JavaScript, it was easy to learn and adapt to.

To show you how easy it is to add a new tab with a TableWidget, here is the JavaScript code of the Servers tab.

Code Example:

_showInventoryServers: function(e) {
    if ("inventory_server" in this._tabPages) {
        this._tabView.setSelection([this._tabPages["inventory_server"]]);
    } else {
        var server_tbl = new dc2.models.Servers(dc2.helpers.BrowserCheck.RPCUrl(false));
        var server_search_dlg = new dc2.dialogs.search.Servers();
        var server_edit_dialog = new dc2.dialogs.EditServer();
        var server_table_options = {
            enableAddEntry: false,
            enableEditEntry: true,
            enableDeleteEntry: true,
            enableReloadEntry: true,
            editDialog: server_edit_dialog,
            searchFunctions: {
                searchDialog: server_search_dlg
            },
            tableModel: server_tbl,
            columnVisibilityButton: false,
            columnVisibility: [
                {
                    column: 0,
                    visible: false
                }
            ]
        };
        var server_table = new dc2.widgets.TableWidget(server_table_options);
        this._addTabPage("inventory_server", 'Servers', server_table);
        server_table.showData();
    }
},

You can take a closer look at the rest of the code on the (DC)² code-browsing page on Launchpad; you'll find all of it there.
The current frontend version of (DC)² is using Qooxdoo version 1.5.

New Features

CS²

As you can see on the screenshots, there is another menu entry with the name “CS²”.

(CS)² means “Centralized SSL CA System” and helps you to manage your SSL host keys, CSRs, certs and CRLs. It is mostly used in the deployment system for Puppet or FreeIPA, or whatever tools you are using which need SSL authorization.
(CS)² can be integrated into (DC)² but is also usable as a standalone application. Just like (DC)², it has an XMLRPC and JSON-RPC backend, has a Qooxdoo frontend, and is completely written in Python. Check out the screenshots.

RPM Based Distributions

Thanks to the work of the great Michael Goetze, FAI is able to install RPM-based distros like CentOS or Scientific Linux. I converted the CentOS deployment to RHEL 5 and RHEL 6, so now you are able to deploy most of the widely used RPM-based distributions with FAI.
Thanks also to Thomas Lange, the new maintainer of Rinse, who added my patch to it.

What’s still going to come?

Netflix and Geo-blocking Content – What you may have missed

With the Internet, we get a world free of any boundaries. Everyone gets access to everything they require. There is no discrimination at all. But there is.

Video streaming services such as Netflix provide different content to users in different regions.

You can watch all your favorite Netflix shows here in the US, but the same may not be available when you travel to Australia, for example. On Netflix Australia, the movies and content are quite different from the US catalog. It caters to a different set of viewers.

The content of US Netflix is geo-blocked there, in the same way that Australian content is blocked for us. What is geo-blocking, and how is it done? Let's answer these questions here.

What is geo-blocking?

While Netflix has some shows available in many regions, if you try to access content exclusive to Australia, you will get an error: an error message saying that this content is not available in your region.

You can also see this sort of thing when you want to watch content from, say, the BBC, the broadcaster in the UK. They have their content available only for residents of the UK.

This is known as geo-blocking. It is a system used to restrict your access to certain content on the internet, based on your geographical location.

How is geo-blocking done?

Every device that you use to go online has a unique identification number. This number, the IP address, is used to identify a device connected to the Internet.

Each time you visit a website, your device makes a request to the server to access the content. With each request, the device also sends its IP address, so that the server knows where to deliver the content.

But how does the Netflix server determine your location? The answer is: with the help of your ISP. Whether you use AT&T, Verizon or Comcast, each of them has a certain set of IP addresses to allocate. When you buy service from them, you get one of those IP addresses.

There are databases to map IP addresses to countries. This is how a server knows the geographical location of a device. The server checks the database with each request and then decides to approve or reject the request.
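The lookup a server performs against such a database can be sketched in a few lines of Python; the two ranges below are invented for illustration (real services use databases with millions of entries, such as MaxMind's GeoIP):

```python
import bisect
import ipaddress

# Toy geo database: (start_of_range, country), sorted by start address.
# These two entries are made-up examples, not real allocations.
RANGES = [
    (int(ipaddress.ip_address("1.128.0.0")), "AU"),
    (int(ipaddress.ip_address("8.0.0.0")), "US"),
]

def country_for(ip):
    """Find the last range starting at or below ip; return its country."""
    key = int(ipaddress.ip_address(ip))
    idx = bisect.bisect_right(RANGES, (key, "\uffff")) - 1
    if idx < 0:
        return None  # address lies before every known range
    return RANGES[idx][1]
```

The server runs this kind of check on the source address of each request and then approves or rejects it.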

Why does Netflix block content?

Like many other streaming services, Netflix has licensing limitations. The licensing agreements with the content providers restrict what Netflix can stream in different markets.

While most of the content is available to the US market, there are movies and shows which are made for Australian tastes. To limit these to the local market and implement the licensing terms, Netflix employs geo-blocking.

These geo-blocking terms apply even to the Netflix originals.

Can we unblock the content?

Yes, you can. The key to geo-blocking is your IP address. If you change the address, you can bypass the geo-block. All you need is a Virtual Private Network.

A VPN encrypts your traffic and hides your IP address. You get an Australian IP address when you connect to an Australian server.

Thus, with an Australian IP address, you can access Netflix Australia right from your home.

Self-driving Cars – Why are they a coder's nightmare?

We already know self-driving cars are becoming a thing in this modern and exponentially growing technological world. We would not be surprised if they become indispensable in the everyday environment of futuristic cities, as these cities will be looking for numerous ways to make living a lot more efficient.

We have already seen big names have a go at self-driving cars, such as Google; Intel, who reportedly created their first self-driving chip technology; or Domino's, who are already the flagship brand bringing pizzas to your home in a self-driven car. And let's not forget Tesla, all these just to name a few.

Even some states have unveiled their own set of rules specifically for self-driven cars. There is absolutely no doubt that self-driving cars are coming, even though some big car manufacturers like BMW and Porsche are strongly against them, allegedly because they create cars for the human experience, and the driving pleasure cannot be taken away by machines.

We believe self-driving cars are a good thing, but like everything in this world, with great power comes great responsibility. The arrival of self-driving cars has brought some questions that are still unanswered and that have even become an ethics issue. The problem is that when self-driving cars become the standard way of transport, they will inevitably run into situations that will be catastrophic for either the passenger or the people outside the car.

The classic ethics problem of which one is the correct thing to do: kill the passenger and save three pedestrians, or three people in another car, or the other way around? Will the AI that these cars possess be able to identify the most moral way to act, or in this case to steer, when in a situation where death is imminent? Will the AI ever learn how to mimic human behavior or even human ethics accurately?

All these questions will, if they aren't already, be directly linked to the coders who will be in charge of programming these self-driving cars to make that sort of decision. And that is not the only weight they will have on their shoulders.

How will manufacturers guarantee the security of self-driving cars, ensuring that the code written by the coder, as ordered by the higher-ups (be it the government or the manufacturers), will not be hacked or compromised into doing something completely different from what it is programmed to do?

Coders will have the responsibility to program and to bulletproof their work.

How will the government secure the perimeter against malicious coders? They could use their programming skills and knowledge to create code that overrides the reactions above, or even program a car to find a group of people and drive directly towards them.

We have already seen people using Ubuntu to create self-driving cars, and Uber is already testing self-driving cars to make pickups.

In the end, regardless of any problem that might arise, the government, or whoever will be in charge of those coding decisions, will need to take into consideration ethics, logic and what best serves the public in order to make a decision.

We believe that self-driving cars are imminent and that the issues we now encounter will be addressed accordingly, making self-driving cars the most popular way of public transportation and changing the idea that owning a private car is a luxury.

Teach yourself how to code with these terrific resources

So, I want to learn to code. Now what?

Many people out there, like me, were thinking at some point about this rising and fascinating IT world. There are thousands of new IT students every day, and it seems like there will never be enough. I was an absolute beginner when I entered this new world, and the first thing that I noticed made my motivation and desire to improve even bigger.

We all have friends who are somehow connected with the IT world; it doesn't matter if it is web development, web design or any other branch. Every one of them was very supportive and welcoming, which was very strange at the beginning, because they might be talking to new competition.

Contrary to that, everybody is aware that there is so much work that nobody is a potential threat to their own business. Every person that you know will recommend you something for better and quicker improvement, but the truth is there is no quick way.

Learning how to code requires hard work and dedication, but it is not impossible, and you can even do it by yourself. Friends of mine who code for a living assured me that it is possible to learn how to code at home and that many successful coders started that way.

How to choose the best way to learn to code?

It is essential to try different ways of learning how to code. There are many books, classes and online courses that will help you get into code. I can only speak for myself, but I'm sure that many would agree: the best way to learn programming is through an online school.

If you are disciplined enough to sit through a course and practice, you will improve quickly. At the beginning I tried everything, from reading books to watching different videos online, and I was even considering going back to college.

All of that was not necessary, because there are so many great online courses, like Code Academy, w3schools and many others. It doesn't matter if you have Ubuntu, Windows 7 or any other system; you can take any of those courses with it. Some of them are free, and some of them charge a small fee but offer hours of video tutorials, e-books, blogs, etc. It is a perfect way to learn how to code, because there are explanations, exercises and examples.

I was filled with enthusiasm, because that looked like a straightforward way to learn to code. By now I had books and hours of online classes, and I found myself learning every day for at least 5 hours, while also holding a full-time job. It was like a mystery video game which I couldn't get enough of.

When can I start working?

It will take some time to learn how to code, but all of that depends on how quickly you are improving and how much time you spend learning and practicing.
There are people who start very quickly, in a matter of months, but the average length is around one year. This is not much, considering that this is one of the best business branches and it is only getting bigger.

Like in every other job, the beginning is the hardest. When you feel more comfortable and natural with coding, you will learn that this is very exciting and challenging work to do.

Is there a future for Ubuntu? You might be surprised

Linux has been going through constant change and development for many years, and I have been a user for more than a few of them.

At first, I was freaking out when I heard about complete changes to my favorite operating system. But what I have learned from previous changes is that it will be good eventually and that I have nothing to worry about.

Will new changes be good for Ubuntu?

Since I've been coding for a living for many years now, I was asking the same question as millions of Ubuntu users: is there a future for it? The answer is YES, but it is not so simple. This system is frequently changing, and if you are looking for a static operating system, you should look elsewhere. There are many who program on other systems like Windows 10 or macOS, but the popularity of Linux is growing fast.

We all heard that there would be some changes regarding GNOME succeeding Unity as the default desktop. That might be true, but this is not the first time that a significant change like this one has happened. Ubuntu has been changing and experimenting almost all the time since I've been using it. Nevertheless, I was skeptical about this one, but if you think a little bit more about it, you will realize that this is a good thing.

So far, I am more than satisfied with the changes that have been made. Older users might remember a similar situation in 2003, when Red Hat Linux was dropped and Red Hat Enterprise Linux was developed. We saw many changes like these, and after all of them we “recovered” successfully. It will still be the most usable open-source desktop, just with slight changes.

Some changes might surprise you!

Users who know nothing about Ubuntu might be amazed when they update to the new version. New applications, a completely new desktop interface and a new interaction method are just some of the many changes.

Since Ubuntu GNOME and Ubuntu Desktop are pooling their development resources and focusing on one platform, this might be an idea with many benefits. But what excites me the most is the fact that there will be a bigger focus on Snaps. Snaps are one of the most popular package formats, and I was happy to use them.

What to expect from Ubuntu in the future?

Since I've been a user for many years now, I can only expect changes and a constant search for perfection. New ideas and learning from previous mistakes make us better. I'm sure the same will happen with the new Ubuntu.
My advice for everybody would be to be patient and to be prepared for and informed about those new changes.

It is not something that we haven't dealt with before, and it certainly won't be the last time. It wasn't as scary as we thought it might be. It will take some time to get used to the new changes, but overall the future for Ubuntu looks bright, and with the right moves they will grow even better and bigger. I know that I am excited.

AWESOME!

Oh wow… it's not the first time that I'm reading a really big flamewar… but this one is… AWESOME!!

We are not only making fools of ourselves; we are also giving everybody outside of our business the picture that most open-source people are just kids in a sandpit.

I know, I know, commercial interests come first, but honestly, do we need to nitpick?

There is Mark, with a clear view of what he wants to achieve on the Linux desktop, and a prominent spokesman for Canonical and Ubuntu.
There are GNOME, KDE, Jeff, Aaron, Jono, GregKH, Jef… oh, I'm too lazy to write down all the names involved.

Seriously, I truly believe that the open-source business as we know it is not going to survive. We need to change things, as we did in the past, as we do now, and as we are going to do in the future, community-wise and especially in commercial business.

But what’s going on here?

We're digging our own grave. Nobody will take us as a serious business partner if we go on as we do right now. Not in the server market, not in the desktop market.

Really, open source is about choice. If someone has a view of doing things one way, and others have another view, let's fork it, change it, see if it works. Other projects or stakeholders or companies will take what they need and leave the rest to the sharks.

Nobody, right now, is without sin. Everyone involved has something to say; for whatever reason, someone is pissed off personally, another one is pissed off because it's not what he or she expects to see from the other party, and so on and so on.

And then there are the fanboys and fangirls. They have their own views, and they rattle, too. (I include myself in this group, but I'm really a fan of Ubuntu, RH and SuSE, so I could rattle a lot.)

But really, right now, I see more destruction than cure. It's more “you poked my eye, I'll slice your nose”. This is really not going to help here.

We are destroying ourselves, we are throwing away a good reputation.

Hopefully we can settle all this sh*t during a conference sitting around a table with some cool drinks and smoking a pipe of peace.

Anyways, what's amazing to see is that only the pawns are fighting, not the kings or queens. There is no Jane (Canonical) or Jim (RedHat) or Ronald (Novell).

I really would like to see us going back to business. Let's get Unity rolling, let's improve GNOME Shell and Plasma; there is still so much to do, and we can all participate and have a win-win situation.

But please, let us stop this celebrity death match…there will be no winner.

My 2 Euro Cent

s/FAI Manager/\(DC\)²/

Dear Datacenter Community,

I would like to present to you a new project of mine (and hopefully yours in the future):

(DC)² is the new name of my “FAI Manager” project.
(DC)² is the short form of “DataCenter Deployment Control”

As Michael Prokop (GRML Lead/Debian Developer/FAI Developer) reported on his blog, I presented this project during the FAI Developer Workshop at LinuxHotel, Essen, Germany.

The feedback from all attendees was very positive, as was that from Thomas Lange (Chief FAI Developer and FAI Project Lead).

I promised to release the project as open source, and this will happen during the coming days.

The project will be hosted on Launchpad.net (https://launchpad.net/dc2) and therefore we will be using the different tools of Launchpad to maintain it.

Meanwhile, I'm receiving a lot of emails because of the announcement Michael prepared, and I have to thank everyone who is interested in doing some work on it.

As mentioned before, there is an early video of the tool in action.

If you are interested in this project, you should follow the FAI mailing list (directions are on the FAI website).

When the source is released on Launchpad, you will get the message on this blog or on the FAI mailing list.

I have to thank the many people who helped to get this project rolling, especially my employer Netviewer AG, which is already using the first version of this project.
Furthermore, I have to thank Thomas Lange for “approving” this project as the start of a better administration console for FAI.

My Life, My Work, My OpenSource

Breaking News: Google is going to acquire Novell and the Unix copyright

A secret source inside Google Inc. reported to us that Google is going to acquire Novell, the company who brought you Novell Netware.

After all the court fights between SCO and Novell regarding the Unix copyright, the Google board was afraid of being sued by SCO too, as most of Google's infrastructure is running on the illegal Unix derivative “Linux”.

Eric Schmidt, Google board member and CEO of Google, said, as reported by our secret source, to Larry Page:
“I have neither the time nor the money to fight against SCO. After the disaster of Google China, we need to save the money for bribing the old farts in the Chinese government so that they open up the Great Firewall. Therefore we need to buy Novell and their staff, because Novell is the copyright holder of Unix, and you know I was at Novell, and when we have the copyright, we will be evil!”

It's not known when this will happen, but there is already a plan for how Google will make revenue out of the new Unix copyright.

Nikesh Arora, President of Global Sales Operations and Business Development, will start to enforce Unix licensing on all Linux, BSD, Solaris, AIX, Xenix and Windows distributors and users who won't sign up for an account with a Google service like GMail or Google Buzz, or port their software towards Google's Apps API.

“If those people won't share their private data with us, we will enforce our rights. There is no way to avoid Google; there is only the easy way or the hard way. Google hates users Google doesn't know. All Microsoft users, and especially the Microsoft board, including the fat, dancing Ballmer and the funky Gates, will be sued, because Microsoft uses a good portion of the Unix sources inside their crappy kernel.”

Chris DiBona, public sector engineering manager at Google Inc., wasn't surprised, as our insider reports:
“We have known for some years already that Microsoft is using a lot of GPL- and BSD-licensed software snippets inside their Microsoft products. We reverse engineered the Microsoft kernel and found that they still have portions of Minix source. This changed for Windows 7. The Windows 7 kernel is mostly the Linux kernel with some adjustments from BSD and removed license and copyright statements. Therefore, Microsoft will have to pay a lot of copyright fees to us. This will be fun! But we still have a problem with something we found while disassembling the kernel. We found a piece of code named »jbacon_robot.c«. Somehow it always sends out messages at random intervals to Twitter and Facebook with a random amount of “awesome”, “rock” or “horsemen” plus more random content. It has no real functionality; it's just there. We wonder if this is the real trojan horse the world speaks about when a user's Windows computer is crashing. Another surprise was to see that Microsoft Office is actually an older branch of StarOffice with a refined UI.” (StarOffice became OpenOffice over time.)

When we (my colleagues and I) heard this news, we were shocked.
But where there is bad news, there is also good news.
I shouldn't be writing about this, but after this bad news from Google, I have to tell you this secret (sorry, Mark!):
Ubuntu and Canonical won't be hit by this war between Google and the users.
Canonical and their Launchpad team are already rewriting the Linux kernel in Python, so there will be no C code anymore, and nobody will sue Canonical or Ubuntu users.
The new Python kernel will be shipped to the Ubuntu community after the Ubuntu 10.04 LTS release.

Oh, and I’m happy to report that the lucid+1 release will be named “Monster Marble”.