
dautostart, a standalone freedesktop-compliant application starter

I couldn't find a standalone application/script that implements freedesktop compliant (XDG based) autostarting of applications, so I decided to write my own.
The project is at .

Right now, all the basics seem to work (except "Autostart Of Applications After Mount" of the spec).
It's probably not bug-free. I hacked it together in a few hours (but it works for me :-). Bug reports welcome!
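The core of the spec is simple enough to sketch in a few lines of bash (a simplified illustration, not dautostart's actual code): entries are collected from the XDG autostart directories, a user entry overrides a system entry with the same filename, and `Hidden=true` disables an entry.

```shell
#!/bin/bash
# Simplified sketch of XDG autostart handling (an illustration, not
# dautostart's actual code): collect .desktop entries from the system
# config dirs, let the user dir override entries with the same filename,
# and skip anything marked Hidden=true.
list_autostart() {
    local -A entries
    local dir f name dirs
    # system dirs first, user dir last, so user entries win
    IFS=: read -ra dirs <<< "${XDG_CONFIG_DIRS:-/etc/xdg}"
    dirs+=("${XDG_CONFIG_HOME:-$HOME/.config}")
    for dir in "${dirs[@]}"; do
        for f in "$dir/autostart"/*.desktop; do
            [ -e "$f" ] && entries[$(basename "$f")]=$f
        done
    done
    for name in "${!entries[@]}"; do
        f=${entries[$name]}
        grep -q '^Hidden=true' "$f" && continue   # entry explicitly disabled
        sed -n 's/^Exec=//p' "$f" | head -n1      # real code would strip %-codes and exec this
    done
}

list_autostart
```

The real spec has more to it (OnlyShowIn/NotShowIn, %-field codes in Exec), but the directory precedence above is the heart of it.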

read more

Opening files automatically on mainstream Linux desktops

Xfce/GNU/Linux works amazingly well on my mom's workstation, with one exception: opening files automatically with the correct program.

The two biggest culprits are:

  • Gtk's "open file with" dialog: if a Gtk program doesn't know how to open a file, it brings up this dialog, which is horrible to use. You can search through your entire VFS for the right executable, but there are no thumbnails, no use of .desktop files or $PATH, no autocompletion, and not even a limiting of the scope to directories such as /usr/bin
  • Mozilla software such as Firefox and Thunderbird: they only seem to differentiate files by their mimetype, not by extension. There are add-ons to make it easier to edit these preferences, but eventually you hit a dead end, because you get files with correct extensions but useless mimetypes (application/octet-stream)

Luckily the fd.o guys have come up with .desktop files.
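As a taste of what that buys you: with the xdg-utils tools, the mimetype of a file and the .desktop handler registered for it can be queried (and changed) from the shell. A sketch that bails out politely when xdg-utils isn't installed; the Evince .desktop name in the comment is just an example:

```shell
#!/bin/bash
# Query a file's mimetype and the .desktop entry that handles it,
# using xdg-utils (sketch; skips gracefully if the tools are missing).
f=$(mktemp --suffix=.txt)
echo hello > "$f"

if command -v xdg-mime >/dev/null 2>&1; then
    mimetype=$(xdg-mime query filetype "$f" 2>/dev/null || echo unknown)  # e.g. text/plain
    echo "mimetype: $mimetype"
    xdg-mime query default "$mimetype" || true   # the registered handler .desktop, if any
    # changing the handler would look like (example .desktop name):
    # xdg-mime default org.gnome.Evince.desktop application/pdf
else
    mimetype="(xdg-utils not installed)"
fi
rm -f "$f"
```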
read more

Poor man's dmenu benchmark

I wanted to know how responsive dmenu and awk, sort, uniq are on a 50MB file (625000 entries of 80 1-byte chars each).
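In the same poor-man's spirit, here is roughly what such a benchmark can look like, scaled down to 10,000 lines so it runs quickly (the run described above used 625,000 lines of 80 chars):

```shell
#!/bin/bash
# Generate a file of fixed-width random lines, then time the kinds of
# pipelines that typically feed dmenu (scaled-down sketch of the idea).
lines=10000                                # the real run used 625000
f=$(mktemp)
LC_ALL=C tr -dc 'a-z' < /dev/urandom | fold -w 80 | head -n "$lines" > "$f"

time sort "$f" > /dev/null                 # plain sort
time sort -u "$f" > /dev/null              # sort + dedup
time awk '!seen[$0]++' "$f" > /dev/null    # dedup, preserving order
size=$(wc -c < "$f")
echo "input size: $size bytes"
rm -f "$f"
```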

read more

Announcing the Netlog Developer Pages

At work, we've set up the Netlog Developer Pages.

It is the place where you can/will find all information about our OpenSocial implementation, our own API, skin development, sample code and so on.
We've also launched a group where you can communicate with fellow developers and Netlog employees.
The page also features a blog where you can follow what is going on in the Netlog Tech team.

PS: We've also updated our jobs page

GTK dialogs for (shell) scripts with Zenity, and the ask-pass GUI tools for ssh-add

Phew! Where to start? Probably at this blog post. It's about making it very easy to work with external encrypted volumes. I'm not going to talk about the article itself, but about a great tool I discovered thanks to it: Zenity.
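Zenity wraps common GTK dialogs so a plain shell script can present GUI prompts. A small sketch of the idea (it falls back to a terminal prompt when zenity isn't available or there is no display; the flags used are standard zenity options):

```shell
#!/bin/bash
# Ask for a passphrase via a GTK dialog when possible, otherwise fall
# back to a plain terminal read (sketch of typical Zenity usage).
ask_passphrase() {
    if command -v zenity >/dev/null 2>&1 && [ -n "$DISPLAY" ]; then
        zenity --entry --hide-text --title="Unlock volume" \
               --text="Enter the passphrase:"
    else
        read -r -s p && echo "$p"
    fi
}

pass=$(echo "demo-fallback" | ask_passphrase)
```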
read more

AIF: the brand new Arch Linux Installation Framework

Recently I started thinking about writing my own automatic installer that would set up my system exactly the way I want.
(See rethinking_the_backup_paradigm_a_higher-level...)

I looked at the official Arch install scripts to see if I could reuse parts of their code, but unfortunately the code was just one big chunk of bash, with the main program and "flow control" (you must first do this step, then that), UI code (dialogs etc.) and backend logic (creating filesystems, ...) all mangled up and mixed very closely together.
Functionality-wise the installer works fine, but I guess the code behind it is the result of years of adding features and quick fixes without refactoring, making it impossible to reuse any of the code.
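To make concrete what untangling those concerns can look like, here is a toy sketch (my illustration, not AIF's actual code) where UI, backend logic and flow control each live in their own layer:

```shell
#!/bin/bash
# Toy sketch of an installer split into layers (not AIF's actual code):
# UI functions, backend workers, and flow control kept separate.

# --- UI layer: the only code that talks to the user ---
notify() { echo ">> $*"; }

# --- backend layer: does the work; placeholders instead of real mkfs/pacman ---
create_filesystem() { echo "(pretend) mkfs $1"; }
install_packages()  { echo "(pretend) installing: $*"; }

# --- flow control: defines the steps and their order ---
prepare_disk() { create_filesystem /dev/sda1; }
install_base() { install_packages base linux; }
steps=(prepare_disk install_base)

for step in "${steps[@]}"; do
    notify "running step: $step"
    "$step"
done
```

With this shape, the UI layer can be swapped (plain echo vs. dialogs) without touching the backend, which was exactly the problem with the monolithic scripts.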

So I started to write AIF: the Arch Linux Installation Framework
read more

Why rewriting git history? And why should commits be in imperative present tense?

There are tons of articles describing how you can rewrite history with git, but they don't answer "why should I do it?". A similar question is "what are the tradeoffs / how do I apply this in my distributed workflow?".
Also, git developers strongly encourage (command, even) you to write commit messages in the imperative present tense, but don't say why. So, why?
I'll try to answer these to the best of my abilities, largely based on how I see things. I won't get too detailed (there are enough manuals and tutorials for the exact concepts and commands).
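As a practical aside: the most common history rewrite is rewording an unpushed commit, for example turning a past-tense subject into the imperative. A throwaway demo (the messages are made up; only amend commits you haven't shared yet):

```shell
#!/bin/bash
# Throwaway demo: commit with a past-tense message, then reword it into
# imperative present tense. Amending rewrites history, so only do this
# on commits that haven't been pushed/shared yet.
repo=$(mktemp -d)
cd "$repo" || exit 1
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Added UI abstraction layer"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --amend --allow-empty -m "Add UI abstraction layer"
git log -1 --format=%s
```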
read more

Handling a remote rename/move with Git

I recently had to rename a repo on my Github account. Github has made this very easy, but that's just one side of the issue. Obviously you must also update any references to this remote in other clones, otherwise pushes, fetches etc. won't work anymore.

You can do this in two ways:

  • open .git/config and modify the url for the remote manually
  • git remote rm origin && git remote add origin$user/$project.git

That's it! All will work fine again.
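Since git 1.7.0 there is also a third way that edits the URL in place, keeping any other per-remote configuration: `git remote set-url`. A throwaway demo (the URLs are placeholders):

```shell
#!/bin/bash
# Point an existing remote at a renamed repository with `git remote set-url`
# (placeholder URLs; adapt to your own account/project).
repo=$(mktemp -d)
cd "$repo" || exit 1
git init -q
git remote add origin git@example.com:user/oldname.git
git remote set-url origin git@example.com:user/newname.git
git config --get remote.origin.url
```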

Dvcs-autosync: an open source Dropbox clone... well... almost

I found the Dvcs-autosync project on the vcs-home mailing list, which by the way is a great list for folks who are doing things like maintaining their home directory in a VCS.
In short, the use cases:
read more

Fosdem 2009

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

I'm particularly interested in:

Fosdem 2007 review

Every year, during a special weekend in February, the Université Libre de Bruxelles suddenly becomes a little more geeky.
It's that time of the year when many European (and some inter-continental) colleagues join us at
Fosdem: the Free and Open source Software Developers' European Meeting (more info here).

read more

Libui-sh: a library providing UI functions for shell scripts


When you write bash/shell scripts, do you write your own error/debug/logging/abort functions?
Logic that requests the user to input a boolean, string, password, selection out of a list,
date/time, integer, ... ?

Libui-sh is written to take care of all that.
libui-sh is meant to be a general-purpose UI abstraction library for shell scripts.
Low impact, easy to use, but still flexible.
CLI by default, it can optionally use ncurses dialogs as well.
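To give an idea of the kind of helpers this covers, here is a minimal sketch (the function names are hypothetical illustrations, not necessarily libui-sh's actual API):

```shell
#!/bin/bash
# Minimal sketch of shell UI helpers of the kind libui-sh provides
# (hypothetical names, not necessarily its actual API).
die() { echo "FATAL: $*" >&2; exit 2; }

# ask a yes/no question; returns 0 for yes, 1 for no
ask_bool() {
    local answer
    read -r -p "$1 [y/n] " answer
    [ "$answer" = y ] || [ "$answer" = Y ]
}

# let the user pick one item from a list; echoes the choice
ask_option() {
    local PS3="$1: "
    shift
    local choice
    select choice in "$@"; do
        [ -n "$choice" ] && { echo "$choice"; return 0; }
    done
}
```

A dialog/ncurses backend would reimplement these same functions, so calling scripts don't change.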

read more

I'm done with Gnome/Gconf

I'm managing my ~ in svn, but using Gnome & gconf makes this rather hard.
They mangle cache data together with user data and user preferences, and spread that mix over several directories in your home (.gconf, .gnome2 etc).
The .gconf directory is the worst. This is where many applications store all their stuff: user preferences, but also various %gconf.xml files, which seem to be updated automatically every time 'something' happens. They keep track of timestamps for various events, such as when you press numlock or become available on Pidgin.
I'm fine with the fact that they do that. I'm sure it enables them to provide some additional functionality. But they need to do it in clearly separated places (such as xdg's $XDG_CACHE_HOME directory)
read more

Asymmetric keys instead of passwords for SSH authentication, to increase security and convenience

I've been using OpenSSH for a while already, and although I've seen mentions of "public key authentication" and "RSA encryption" several times in its config files, I never decided to figure out what it did exactly, and stuck to password authentication. But now the guys at work explained how it works, and after reading more about it, I'm totally hooked!

It's a feature of SSH protocol version 2 (so it has been around for a while, i.e. we can all use it without updating anything) which essentially comes down to this: you generate an asymmetric key pair and distribute the public key to all remote hosts. When logging in to a host, the host encrypts a random integer which only you can decrypt (using the private key), and hence you prove your identity. To secure your private key, you store it encrypted with a password. Ssh-agent (which is bundled with OpenSSH) is the tool that interacts with ssh to perform this task: when logging in to a host, ssh-agent will open the private key for you automatically if it can decrypt it with the password it receives from you. The problem is that you have to load it (enter your password and decrypt the key) again every time.
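In practice, generating such a key pair is one command (the path and passphrase below are demo placeholders; normally the key lands in ~/.ssh, and ssh-copy-id pushes the public half to a host):

```shell
#!/bin/bash
# Generate an asymmetric key pair: the private key stays on disk encrypted
# with the passphrase; the .pub file is what you copy to remote hosts.
# Demo values throughout: normally you'd use ~/.ssh/id_rsa and a real passphrase.
keyfile=$(mktemp -u)
ssh-keygen -q -t rsa -b 2048 -N "demo-passphrase" -C "demo key" -f "$keyfile"
ls -l "$keyfile" "$keyfile.pub"

# then install the public key on each remote host (one last password login):
# ssh-copy-id user@host
```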

This is where keychain comes in; or, if you're a Mac user and like GUIs, you can use SSH Agent (don't confuse this with the ssh-agent that comes with OpenSSH). These tools basically ask you the passwords for all private keys you wish to use in a session (by session I mean "the whole time you're using your computer"), decrypt the encrypted keys on your hard disk, and cache the decrypted keys in RAM, so they can be used in all terminals you open.

For more information:
OpenSSH key management, Part 1: Understanding RSA/DSA authentication
OpenSSH key management, Part 2: Introducing ssh-agent and keychain
OpenSSH key management, Part 3: Agent forwarding and keychain improvements (freaks only ;-))

Have fun

new AIF release

My holidays present for Arch devs and users: AIF alpha-0.6 !

* Changes since alpha 0.5:
read more

PhpDeliciousClient, a php CLI client to administer Delicious accounts

PhpDeliciousClient is a console-based client for doing maintenance on Delicious accounts.
I wrote it because - to my knowledge - there currently is no good program (including the personalized web page itself) that lets you make changes to your data in a powerful, productive manner. (With data I primarily mean tags; posts and bundles are considered less important.)

You are probably familiar with the fact that a Delicious account (or any tag-based metadata organizing system, for that matter) can soon become bloated: it gets filled with way too many tags. Among those tags, several mean the same thing (fun, funny, humor, ...) or include one another (humor, jokes, ...). You can group them in bundles, but even then you need to add all the tags to a post if you want it to appear in the results for each tag. Not very convenient. Also, if you have your bookmarks available in Firefox, you'd have a menu with several hundred entries (one for each tag), each submenu usually containing just a few entries (or worse: just one).

When I got in this situation I tried to fix it, but it was a hell of a job to do on the Delicious web page itself, and although I found some other tools, they were far too basic, outdated, dependent on other stuff, or just not meant for this kind of task. So I decided to write my own.

The result is a php command line program called PhpDeliciousClient (as you can see, I added it to the menu on the left too), which uses the PhpDelicious library to access the api.

The primary focus of the program is to help you bring your tags into balance, as efficiently as possible. Other stuff, which can be done just fine on the Delicious page (editing single posts, changing your password, ...), is not implemented.

It's a bit hacky and I don't give any guarantees, but I can tell you I used it to edit my own page, going from about 400 tags to about 80, without any problems.

That said, head over to the PhpDeliciousClient project page for some more information, and to download it ;-)

My favorite bash tricks

Hello everyone.
This post is about bash, the shell providing so many users easy access to the underlying power of their system.
(not bash the quote database, although I really like that website too ;-) )
Most people know the basics, but getting to know bash better can really increase your productivity. And when that happens, you might start loving it as much as I do ;-)

I assume you have a basic knowledge of bash, the history mechanism, and ~/.bash* files.
So here they are: my favorite tricks, key combos and some bonus stuff:


  • "cd -" takes you back to the previous directory, wherever that was. Press again to cycle back.
  • putting arguments between braces {like,so} will execute the command multiple times, once for each "argument". Bash computes the cartesian product when you use multiple brace groups in one expression. Some less-obvious tricks with this method are mentioned here
  • HISTIGNORE: with this variable you control which entries are saved in your history. Here is a nice explanation. Especially the space trick is very useful imho.
  • CDPATH: Here is a great explanation ;-)
  • readline (the library used by bash) trick: put this in your ~/.inputrc (or /etc/inputrc):

        "\e[5~": history-search-backward
        "\e[6~": history-search-forward

    This way you can do *something*+pageup/pagedown to cycle through your history for commands starting with *something*.
    You can use the up/down arrows too; their codes are "\e[A" and "\e[B"

  • for more "natural" history saving behavior (when having several terminals open): put this in .bash_profile:

        PROMPT_COMMAND='history -a'

    (write each command separately in a new entry, instead of all at shell exit).
    And type

        shopt -s histappend

    to append instead of overwrite. (This might be the default on some distros; I think it was on Gentoo.)


  • ctrl+r : search through your history. Repeatedly press ctrl+r to cycle through hits.
  • ctrl-u : cut everything on the current line before the cursor.
  • ctrl-y : paste text that was cut using ctrl-u. (starting at the cursor)
  • !$: equals the last word of the previous command. (great when performing several operations on the same file)
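A quick demonstration of the brace trick from the list above (the cp line is the classic way to make a backup copy of a config file):

```shell
#!/bin/bash
# Brace expansion: one command line expands into several arguments;
# two groups in one word yield the cartesian product.
echo file.{txt,bak}      # prints: file.txt file.bak
echo {a,b}{1,2}          # prints: a1 a2 b1 b2

# the classic "backup before editing" one-liner:
conf=$(mktemp)
cp "$conf"{,.orig}       # expands to: cp "$conf" "$conf.orig"
ls "$conf" "$conf.orig"
```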

Bonus material

  • Bash completion, an add-on for bash which adds completion for the arguments of your commands. It's pretty smart and configurable. (It's in portage, and probably in the repos of all popular distros out there.)
  • This script provides an interface to the rafb pastebin!
  • Recursively delete all .svn folders in this folder, and in folders below. "find . -name .svn -print0 | xargs -0 rm -rf"
  • Recursively count number of files in a directory: "find -not -type d | wc -l"


Those were all the important tricks I'm currently using. On the web you'll find lots more useful tips :-).
If that still isn't enough, there is always man bash :o

With aliases and scripts (and tools like sed or awk) the possibilities become pretty much endless. But for that I refer you to your favorite web search engine.

Jobhunt over.

What better way to launch the new year than starting work as a System Engineer/Developer at a consulting firm where everyone breathes Linux and Open Source?
Next week I'll start at Kangaroot. Woohoo.

DDM v0.4 released

DDM v0.4 has been released.
Since the last release many, many things have been changed/fixed/added.

read more

Rsyncbench, an rsync benchmarking tool

Background info:
I'm currently in the process of evaluating (V)PS hosting providers and backup solutions. The idea being: I want a (V)PS to run my stuff, which doesn't need much disk space, but in the meantime it might be a good idea to look for online backup solutions (oops, did I say "online"? I meant "cloud"), either on the (V)PS itself or as a separate solution.
But I've got a diverse set of data (my personal data is mostly a lot of small plaintext files; my mom has a Windows VM for which I considered syncing the entire vdi file).
At this point the biggest contenders are:

  • Linode, which offers quite some flexibility and management tools, but becomes expensive when you want extra disk space ($2 per GB per month)
  • Rackspace, whose backup gives you 10GB for $5/month; they have nice backup tools, so I could back up only the important files from within the Windows VM (~200MB)
  • Hetzner, which offers powerful physical private servers with a lot of storage (160GB) for 29 EUR/month, but less flexibility (e.g. KVM-over-IP costs an extra 15 EUR/month)

Another issue: given the limited capacity of Belgian internet connections, I needed to figure out how much bandwidth rsync really needs, so I can calculate whether the duration of a backup run that includes syncing the full vdi file is still reasonable.

I couldn't find an rsync benchmarking tool, so I wrote my own.


  • simple
  • non-invasive: you specify the source and destination hosts (just localhost is fine too), and file locations
  • measures time spent, bytes sent (measured with tcpdump), and data sent (rsync's statistics, which take compression into account)
  • supports plugins
  • generates png graphs using Gnuplot
  • two current plugins: one uses files of various sizes, both randomly generated (/dev/urandom) and easily compressible (/dev/zero), and covers use cases like the initial sync, a second sync (no-op), and syncing with a data block appended or prepended. The other collects vdi files from rsnapshot directories and measures rsyncing from each image to the next
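The core of the measurement is rsync's own `--stats` output; the bytes-on-the-wire number additionally comes from tcpdump. A tiny standalone sketch of the `--stats` part (not rsyncbench itself):

```shell
#!/bin/bash
# Sketch of the measurement core: let rsync report bytes sent for an
# initial sync and for a no-op second sync (rsyncbench also captures
# on-the-wire bytes with tcpdump, which this sketch omits).
command -v rsync >/dev/null || { echo "rsync not installed, skipping"; exit 0; }
src=$(mktemp -d); dst=$(mktemp -d)
dd if=/dev/urandom of="$src/blob" bs=1k count=512 2>/dev/null

rsync -a --stats "$src/" "$dst/" | grep -i 'total bytes sent'   # ~ all 512K
rsync -a --stats "$src/" "$dst/" | grep -i 'total bytes sent'   # nearly nothing
```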

read more

Snip: a dead-simple but quite powerful text expander and more

Inspired by Snippy and snippits, I wrote a simple tool called snip.
It automatically fills in text for you (which can be dynamically created) and/or performs custom keypresses and operations.

read more

An rss2email fork that sucks less

Rss2email is a great tool. I like getting all my news messages in my mailbox, and using SMTP to make the "news delivery" process more robust makes sense.
However, there are some things I didn't like about it, so I made a github repo where I maintain an alternative version which (imho) contains several useful improvements, both for end users and for developers/downstreams.
Also, this was a nice opportunity for me to improve my python skills :)

Here is how it compares:

read more

Requirements for the perfect GTD tool

I've been reading GTD lately and it's absolutely a great and inspiring book.
Having turned my home office space into a real Zen space, I want to start implementing GTD in my digital life, but it seems very hard to find a tool that fully implements GTD (even though there are a lot of tools out there).

The most interesting ones (each for different reasons) I've looked at so far are Thinkingrock, tracks and yagtd (the latter requires the most work before it does everything I need, but it also has the code base that's easiest to dive into). I'm keeping my eyes open, because there are certainly more things to discover.

Even though there are probably no applications out there that can do everything I want, I just wanted to share my feature wishlist. These are the requirements I think a really good tool should comply with:

read more

What the open source community can learn from Devops

Being active as both a developer and an ops person in my professional life, and as both an open source developer and packager in my spare time, I've noticed some common ground between the two worlds, and I think the open source community can learn from the Devops movement, which is solving problems in the professional tech world.

For the sake of getting a point across, I'll simplify some things.

First, a crash course on Devops...

read more

I just became a "System & Network Architect"

I just signed my contract at Incrowd, the company behind sites such as redbox and facebox.

I will be working there in a team of all young, enthusiastic people. Among those, some people are already familiar to me: my old friend Lieven (we've played in a band together but kept in touch afterwards) and my ex-classmate Jurriaan. Both of them love their jobs btw :-).

My official title is "System & Network architect".
Things I will be doing there include:
read more

CakePHP and a paradigm shift to a code generation based approach?

At my new job, I'm writing a quite full-featured web application.
I've chosen to use CakePHP.
Why? Well, it may be 2 years since I last used it, but I've followed the project and its planet, and it seems to have matured and gained even more momentum.
I want to use something that is widely used so there is plenty of stuff available for it, it's RAD, it's flexible and powerful.
I noticed things such as CLI support and documentation have improved tremendously too.

However, I find that the recommended (or at least "most commonly used") practices are still not as efficient as they could be, and that emphasis is placed on the wrong aspects.
See, even though the bake tool has come a long way since I last used it, it's still used to "generate some standard models/controllers/views", and the developer takes it from there [further editing the resulting files himself].
Fine-tuning generated code by editing the templates (in fact, only views have templates; the PHP code of models and controllers is hardcoded in the scripts that generate them) is still an obscure practice...
Also, there are very few command-line switches (right now you can choose your app dir, whether you want to bake a model, controller or view, and its name).
All other things (validation rules, associations, index/view/edit/add actions/views, which components, overwrite yes/no, etc.) are handled interactively.
There are also some smaller annoyances, such as: when you specify one option, like the name of the model, it assumes you don't want interactivity and produces a model containing nothing more than the class definition and the member variable $name, which is usually worthless.
One thing that is pretty neat, though: if you update $this->recursive in a model, the baked views will contain stuff for the associated things. But so much more could be done...

read more

Fosdem 2010

I'll be at FOSDEM - the 10th edition - again this year.
I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

I'll be presenting a lightning talk about uzbl.
Also, Arch Linux guys Roman, JGC, Thomas and I will hang out at the distro miniconf. We might join the infrastructure round-table panel, but there is no concrete information yet.

More stuff I'm looking forward to:
read more

DDM : a Distributed Data Manager

UPDATE: this information is outdated. See for latest information.


If you have multiple sets of data (e.g. music, images, documents, movies, ...) and you use them on more than one system (e.g. a laptop and a file server), then you probably also have some 'rules' about how you use them on your systems. For example, after capturing new images you may put them on your laptop first, but you like to sync them to your file server frequently. On the other hand, you also want all your high-res images (stored on the server) available for editing on the laptop. And to make it more complicated, you might have the same images in a smaller format on your server (for gallery programs etc.) and want these (or a select few albums) available on the road.

The more different types of data you have, and the more specific your work flows are, the harder it becomes to keep your data as up to date and consistent as possible across your boxes. You could manually rsync/(s)cp your data, but you end up with a mess (at least that's how it turned out on my boxes). Putting everything under version control is great for text files and such, but it's not an option for bigger (binary) files.

I wanted to keep all my stuff neatly organised in my home directories, and I want to create good work flows with as little hassle as possible, so I decided to write DDM: the Distributed Data Manager.
read more

Poor man's pickle implementations benchmark

Hello world!

Finally, my own website...
I've wanted to get this up for a long time. My initial idea was to write (and style) it all from scratch using the marvelous CakePHP framework, along with an authentication system I wrote, dAuth.
However, due to lack of time I decided to use the excellent Drupal platform, which I'm quite sure will get the job done equally well while drastically freeing up my time, so I can invest it in other projects :-)
Dries Buytaert's talk at FOSDEM this year really helped in making that decision ;-)

So, what will this site be about?

  • me
  • my interests
    • Free & Opensource software, and the thoughts/ideals behind it
    • PHP scripting/programming (I like C(++), bash and J2SE too, but I'm not as good at them as I am at PHP)
    • Audio recording/mixing/production
    • Drumming, one of my greatest hobbies
    • Music, bands, movies,... I like
    • productivity (TDD, automation scripts, shell/DE tweaks, ...)
    • ethics and philosophy, these aspects are really important in my life
    • Jeugdhuis SjaTOo, our local youth club

Now let's get started ;-)

The perfect GTK music player: can Exaile replace Amarok?

I've always liked Amarok: it does everything I always wanted, and more. It looks perfect in every way...
But... it uses the Qt library, and although there are tricks to make Qt applications fit in better with your GTK desktop/theme, it will never fit in perfectly. Not only graphically, but also because you still need to load the Qt libraries when you want to listen to some music, and it is built to interact with the KDE desktop environment.

So, I've been looking for an alternative, a GTK application strong enough to actually be able to replace Amarok, the king of all software music players.
read more

Zenoss & Mysql monitoring

I've been playing with Zenoss (2.4) for the first time. Here are my thoughts:
read more

I'm not going to Fosdem 2008

I wish I could put this on my webpage :

I’m going to FOSDEM, the Free and Open Source Software Developers’ European Meeting

read more

Restoring ssh connections on resume

I use pm-utils for hibernation support.
It has a hooks system which can execute stuff upon hibernate/suspend/thaw/resume/..., but the hooks run as root.
If you want to run stuff as a regular user you could do something like


..but these commands have no access to your user environment.
In my user environment I have a variable I need access to, namely SSH_AUTH_SOCK, which points to my agent holding some unlocked ssh keys. Obviously you don't want to re-enter your ssh key passwords every time you resume.
(In fact, I started using hibernate/resume because I got tired of entering 4 passwords on boot - 1 for dm_crypt, 1 for login, 2 for ssh keys - not because it is much faster.)
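For context, the bare shape of such a root hook with the su workaround looks roughly like this (user name and command are placeholders; as said above, this does not see your session's environment, which is exactly the problem):

```shell
#!/bin/bash
# Sketch of a pm-utils hook, e.g. /etc/pm/sleep.d/99-user-resume.
# pm-utils calls every executable in that directory with one argument:
# suspend|hibernate|resume|thaw. User and command below are placeholders.
run_as_user() {
    if [ "$(id -u)" -eq 0 ] && id someuser >/dev/null 2>&1; then
        su - someuser -c "$1"        # hooks run as root, so drop privileges
    else
        echo "(would run as someuser: $1)"
    fi
}

case "$1" in
    resume|thaw)       run_as_user 'echo re-establishing ssh connections' ;;
    suspend|hibernate) : ;;          # nothing to do before sleeping
esac
```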

The solution is very simple. Use this:

This way, it will be executed when you resume, after the complete environment has been restored.
Ideal to kill old ssh processes, and setup tunnels and ssh connections again.
I'm probably gonna integrate this into my microDE

the "Community Contributions" section on the Arch Linux forums is a goldmine

The Community contributions subforum of the Arch Linux forums is awesome.
It is the birthplace of many applications, most of them not Arch Linux specific.
File managers, media players, browsers, window managers, text editors, todo managers, and so on. Many shell scripts, urxvt extensions and dwm patches as well.
Most of the apps are designed after suckless/KISS principles, but there are also some GUI programs.

If you like to discover new apps and tools, check it out.

Where are the new Arch Linux release images?

Open source and software patents from an ethical perspective

For school I had to write an ethics report.
It had to contain three elements:

  • An introductory text / personal reflection on professional and business ethics
  • Ethical considerations regarding my thesis
  • A more extensive essay with ethical reflections on a question of my choice from the domain of science, technology, professional and business life. In my case, this question of choice became "open source and software patents"

As some of you know, I consider ethics a very important aspect of life, so I'm glad I was able to explore the issue of open source and software patents in more depth, because this is something I'm interested in.

You can download my paper here. Feel free to read it through and let me know what you think!

A very interesting article I came across (well, actually my Ethics lecturer mailed it to me ;-) is Community Ethics and Challenges to Intellectual Property, written by Kaido Kikkas. Definitely read this one too, because the guy hits the nail on the head!
The text above also brings up the life story of Edward Howard Armstrong. When I read about it, I immediately went through his entire biography on Wikipedia. In short, it comes down to this: the man made some brilliant inventions, but the company he worked for screwed him over and ruined him through expensive lawsuits. In the end it destroyed his marriage and his whole life, and he committed suicide.
It is unbelievable how far the egoism of a company can go. So let's learn from history and take a critical stance towards the new issues we are facing, such as intellectual property and patents. Surely we don't want to make the same mistakes?

In my search for more information about this man, I stumbled upon this page: doomed engineers. Have a look at it!

Last but not least, I'd like to recommend the movie Revolution OS to everyone. It's been a long time since I saw it, but it gives an excellent picture of the rise of free software (and Linux in particular), and above all you'll learn about some unethical practices of software giants, you know which one I mean...

Tweaking Lighttpd stat() performance with fcgi-stat-accel

If you serve lots of (small) files with Lighttpd, you might notice you're not getting the throughput you would expect. Other factors (such as latencies caused by random read patterns) aside, a real show-stopper is the stat() system call, which blocks (no parallelism). Some clever guys thought of a way to solve this: a FastCGI program that does the stat(), so that by the time it returns, Lighty doesn't have to wait, because the stat information is already in the Linux cache. And in the meanwhile your Lighty thread can do other stuff.
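The caching effect itself is easy to reproduce from a shell (a toy illustration of the principle, not the fcgi-stat-accel code): stat the files ahead of time, and subsequent stat() calls are answered from the kernel's dentry/inode cache instead of waiting on the disk.

```shell
#!/bin/bash
# Toy illustration of pre-warming stat() results: touch the metadata of
# many files in parallel so later stat() calls are served from cache.
dir=$(mktemp -d)
for i in $(seq 1 500); do echo x > "$dir/f$i"; done

# warm the cache: stat everything, a few processes in parallel
find "$dir" -type f -print0 | xargs -0 -P 4 stat > /dev/null

# a later stat of any of these files is now answered from memory
stat -c '%n %s' "$dir/f1"
```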
read more

Uzbl. A browser that adheres to the unix philosophy.

I need a browser that is fast, not bloated, stores my data (bookmarks, history, account settings, preferences, ...) in simple text files that I can keep under version control, something that does not reinvent the wheel, something that I can control.

Well, I could not find it.
So I started the uzbl browser project.
read more

Mysql status variables caveats

While setting up Zenoss and reading Mysql documentation about status variables I learned:

  • All select_* variables ("Select statistics" graph in Zenoss) are actually about joins, not (all) selects. This also explains why there is no clear relation to com_select, which shows the number of SELECTs ("Command statistics: selects" graph in Zenoss)
  • Com_select does not count all incoming select commands: if you have a hit on your query cache, com_select is not incremented. So I thought we were doing fewer qps, while in fact we were just getting more cache hits. Qcache_hits gets incremented on cache hits (but is not monitored by Zenoss)

Arch Linux 2009.08 & Froscon 2009

So, the Arch Linux 2009.08 release is now behind us, nicely on schedule.
I hope people will like AIF, because it was a lot of work and we didn't receive much feedback. I personally like using it to apply my fancy backup restoration approach.
But I'm sure that if more people looked at the code, we would find quite some design and implementation things that could be improved. (With uzbl I was amazed how much difference it can make when many people have ideas and opinions about every little detail.)

Later this week I'm off to the Counting Cows festival in France, and the week after that (August 22-23) I'm going to FrOSCon in Germany, where I will meet some of my Arch Linux colleagues in real life, which I'm really looking forward to.

If anyone wants a ride to FrOSCon, let me know. But note that I'll try to maximize my time there (leaving early on Saturday and coming back late on Sunday; I even took a day off on Monday, so I might stay a day longer if I find more interested people to hang out with there).

My projects are on github now

I've put my (somewhat interesting) projects on GitHub.
Git is a damn cool VCS for distributed development, and I think Github integrates with it really nicely, adding some useful features for following and collaborating on projects.
The projects I have migrated to my GitHub profile are:
read more

Nagios monitoring in your desktop panel aka Xfce Genmon panel plugin rules!

FOSS is written by users, for users, and what I've been doing/experiencing this afternoon is a perfect example of that.
read more