Wed, 24 Dec 2008

Jobhunt over.

What better way to launch the new year than to start working as a System Engineer/Developer at a consulting firm where everyone breathes Linux and Open Source?
Next week I'll start at Kangaroot. Woohoo.

Sun, 08 Jul 2007

The perfect GTK music player: can Exaile replace Amarok?

I've always liked Amarok: it does everything I always wanted, and more. It looks perfect in every way ...
But... it uses the Qt library, and although there are tricks to make Qt applications blend in better with your GTK desktop/theme, it will never fit in perfectly. Not only graphically: you also still need to load the Qt libraries just to listen to some music, and the player is built to interact with the KDE desktop environment.

So, I've been looking for an alternative, a GTK application strong enough to actually be able to replace Amarok, the king of all software music players.

::Read more

Tue, 29 May 2007

Open source and software patents from an ethical perspective

For school I had to write a paper on ethics.
It had to contain three elements:

  • An introductory text / personal reflection on professional and business ethics
  • Ethical considerations regarding my final project
  • A more extensive essay with ethical considerations on a topic of choice from the domain of science, technology, professional and business life. In my case that topic of choice became "open source and software patents"

As some of you know, I consider ethics a very important aspect of life, so I'm glad I was able to explore the issue of open source and software patents in more depth, because it is something I'm really interested in.

You can download my paper here. Feel free to read through it and let me know what you think!

A very interesting article I came across (well, actually my Ethics lecturer mailed it to me ;-) is Community Ethics and Challenges to Intellectual Property by Kaido Kikkas. Definitely read that one too, because that guy hits the nail on the head!
The text above also brings up the life story of Edwin Howard Armstrong. When I read about it, I immediately went through his entire biography on Wikipedia. In short, the man made some brilliant inventions, but the company he worked for screwed him over and ruined him with expensive lawsuits. In the end his marriage and his whole life fell apart, and he committed suicide.
It's unbelievable how far the selfishness of a company can go. So let's learn from history and stay critical of the new issues we are facing, such as intellectual property and patents. We don't want to make the same mistakes, do we?

While searching for more information about this man I stumbled upon this page: doomed engineers. Have a look at it!

Last but not least, I want to recommend the movie Revolution OS to everyone. It has been a long time since I saw it, but it gives an excellent picture of the rise of free software (and Linux in particular), and above all you learn about some unethical practices of software giants, you know which one I mean...

Wed, 14 Mar 2007

My favorite bash tricks

Hello everyone.
This post is about bash, the shell providing so many users easy access to the underlying power of their system.
(not bash the quote database, although I really like that website too ;-) )
Most people know the basics, but getting to know it better can really increase your productivity. And when that happens, you might start loving bash as much as I do ;-)

I assume you have a basic knowledge of bash, the history mechanism, and ~/.bash* files.
So here they are: my favorite tricks, key combos and some bonus stuff:

Tricks

  • "cd -" takes you back to the previous directory, wherever that was. Press again to cycle back.
  • putting alternatives between braces {like,so} makes bash expand them into one argument per alternative before running the command; "cp file{,.bak}", for example, becomes "cp file file.bak". When you use several brace groups in one expression, bash generates the Cartesian product. Some less-obvious tricks with this method are mentioned here
  • HISTIGNORE : with this variable you control which commands get saved in your history. Here is a nice explanation. Especially the space trick is very useful imho.
  • CDPATH : Here is a great explanation ;-)
  • readline (library used by bash) trick: put this in your ~/.inputrc (or /etc/inputrc) :
    "\e[5~": history-search-backward
    "\e[6~": history-search-forward
    

    This way you can do *something*+pageup/pagedown to cycle through your history for commands starting with *something*
    You can use the up/down arrows too, their codes are "\e[A" and "\e[B"

  • for more "natural" history-saving behavior when you have several terminals open, put this in .bash_profile:

    PROMPT_COMMAND='history -a'
    

    (this appends each command to the history file right away, instead of writing everything at shell exit).
    And type

    shopt -s histappend
    

    to append to the history file instead of overwriting it. (This might be the default on some distros; I think it was on Gentoo.) A combined snippet with all these settings follows after this list.
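
For reference, here is a minimal sketch of how these settings could sit together in your .bash_profile (or .bashrc); the values are only examples, tune them to your own taste:

# skip trivial commands and anything starting with a space
export HISTIGNORE="ls:bg:fg:exit:[ ]*"
# let 'cd somedir' also look for somedir in these directories
export CDPATH=".:~:~/projects"
# write every command to the history file immediately...
PROMPT_COMMAND='history -a'
# ...and append to that file instead of overwriting it
shopt -s histappend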

Shortcuts/keycombos

  • ctrl+r : search through your history. Repeatedly press ctrl+r to cycle through hits.
  • ctrl-u : cut everything on the current line before the cursor.
  • ctrl-y : paste text that was cut using ctrl-u. (starting at the cursor)
  • !$ : expands to the last word of the previous command (great when performing several operations on the same file); see the small example below.
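
A quick illustration of that last one (the directory name is just an example):

mkdir -p ~/music/new_album
cd !$          # !$ expands to ~/music/new_album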

Bonus material

  • Bash completion, an add-on for bash which adds completion for arguments of your commands. It's pretty smart and configurable. (it's in portage, and probably in repos of all popular distros out there)
  • This script provides you an interface to the rafb pastebin!
  • Recursively delete all .svn folders in this folder, and in folders below. "find . -name .svn -print0 | xargs -0 rm -rf"
  • Recursively count number of files in a directory: "find -not -type d | wc -l"

Conclusion

Those are the most important tricks I'm currently using. On the web you'll find lots more useful tips :-).
If that still isn't enough, there is always man bash :o

With aliases and scripts (and tools like sed or awk thrown in) the possibilities become pretty much endless. But for that I refer you to tldp.org and your favorite web search engine.

Sun, 04 Mar 2007

Hello world!

Finally, my own website...
I've wanted to get this up for a long time. My initial idea was to write (and style) it all from scratch using the marvelous CakePHP framework, along with an authentication system I wrote, dAuth.
However, due to my lack of time I decided to use the excellent Drupal platform, which I'm quite sure will get the job done equally well while drastically freeing up my time, so I can invest it in other projects :-)
Dries Buytaert's talk at FOSDEM this year really helped in making that decision ;-)

So, what will this site be about?

  • me
  • my interests
    • Free & Opensource software, and the thoughts/ideals behind it
    • PHP scripting/programming (I like C(++), bash and J2SE too, but I'm not as good at those as I am at PHP)
    • Audio recording/mixing/production
    • Drumming, one of my greatest hobbies
    • Music, bands, movies,... I like
    • productivity (TDD, automation scripts, shell/DE tweaks, ...)
    • ethics and philosophy, these aspects are really important in my life
    • Jeugdhuis SjaTOo, our local youth club

Now let's get started ;-)

Sat, 24 May 2008

Announcing the Netlog Developer Pages

At work, we've set up the Netlog Developer Pages.

It is the place where you can (and will) find all information about our OpenSocial implementation, our own API, skin development, sample code and so on.
We've also launched a group where you can communicate with fellow developers and Netlog employees.
The page also features a blog where you can follow what is going on in the Netlog Tech team.

PS: We've also updated our jobs page

Mon, 21 Jul 2008

Rethinking the backup paradigm: a higher-level approach

In this post I explain my vision on backups, and why several common practices are in my opinion suboptimal: they become unnecessary, or at least a lot easier, when you manage your data at a higher level, using patterns such as versioning important directories and distributed data management.

::Read more

Thu, 15 Mar 2007

Fosdem 2007 review

Every year, during a special weekend in February, the Université Libre de Bruxelles suddenly becomes a little more geeky.
It's that time of the year when many European (and some inter-continental) colleagues join us at
Fosdem: the Free and Open source Software Developers' European Meeting (more info here).



::Read more

Sun, 01 Nov 2009

ext3 logical partition resizing

You probably know you can resize a primary partition by deleting it and recreating it, keeping the starting block the same but using a higher block as the end point, and then growing the filesystem.
But what about logical partitions? A while back I had to resize an ext3 logical partition (the last one inside the extended partition). I learned some useful stuff, but I only made some quick scratch notes and I don't remember all the details, so:
do not expect a nice tutorial here; it's more of a commented dump of my scratch notes and some vague memories.
The information in this post is not 100% accurate.

I wondered if I could just drop and recreate the extended partition (and, if needed, recreate all contained logical partitions, with the last one bigger of course), but I couldn't find any information about that anywhere.
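
For the straightforward primary-partition case from the first paragraph, the procedure looks roughly like this; the device names are just an example, and you want a backup plus a double-check of the start sector before writing anything:

fdisk /dev/sda         # delete /dev/sda2, recreate it with the same start and a later end, then write
partprobe /dev/sda     # or reboot, so the kernel re-reads the partition table
e2fsck -f /dev/sda2    # check the filesystem first
resize2fs /dev/sda2    # grow the ext3 filesystem to fill the enlarged partition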


::Read more

Wed, 16 Jun 2010

Restoring ssh connections on resume

I use pm-utils for hibernation support.
It has a hooks system which can execute stuff upon hibernate/suspend/thaw/resume/..., but those hooks run as root.
If you want to run stuff as a regular user you could do something like

su $user -c '<command>'

..but these commands have no access to your user environment.
In my user environment I have a variable I need access to, namely SSH_AUTH_SOCK, which points to my agent holding some unlocked ssh keys. Obviously you don't want to re-enter your ssh key passphrases every time you resume.
(In fact, I started using hibernate/resume not because it is much faster, but because I got tired of having to enter 4 passwords on boot: 1 for dm_crypt, 1 for login and 2 for ssh keys.)

The solution is very simple. Use this:

sudo pm-hibernate && do-my-stuff.sh

This way, do-my-stuff.sh will be executed when you resume, after the complete environment has been restored.
Ideal for killing old ssh processes and setting up tunnels and ssh connections again.
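
As an illustration, do-my-stuff.sh could look something like this; the hostnames, ports and exact cleanup are hypothetical, adjust them to your own setup:

#!/bin/bash
# runs as my own user, so SSH_AUTH_SOCK and the rest of my environment are available
# get rid of ssh sessions that died while the machine was hibernated
pkill -u "$USER" -x ssh
# set up tunnels/connections again
ssh -f -N -L 3307:localhost:3306 tunnelhost.example.org
ssh -f -N -D 8080 proxyhost.example.org
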
I'm probably gonna integrate this into my microDE

Sun, 25 Mar 2012

Lighttpd socket Arch Linux /var/run tmpfs tmpfiles.d

On Arch Linux, and probably many other distros, /run is now a tmpfs and /var/run is a symlink to it. With Lighttpd you might have a fastcgi socket defined as something like "/var/run/lighttpd/sockets/mywebsite.sock". This won't work anymore: after each reboot /var/run is an empty directory, lighttpd won't start, and /var/log/lighttpd/error.log will tell you:
2012-03-16 09:21:34: (log.c.166) server started 
2012-03-16 09:21:34: (mod_fastcgi.c.977) bind failed for: unix:/var/run/lighttpd/sockets/mywebsite.sock-0 No such file or directory 
2012-03-16 09:21:34: (mod_fastcgi.c.1397) [ERROR]: spawning fcgi failed. 
2012-03-16 09:21:34: (server.c.945) Configuration of plugins failed. Going down.
That's where the new tmpfiles.d mechanism comes in. It creates files and directories as described in its config snippets, and gets invoked on boot. Like so:
$ cat /etc/tmpfiles.d/lighttpd.conf 
d /run/lighttpd/sockets 0700 http http
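
For reference, the socket mentioned above comes from a fastcgi definition roughly like this in lighttpd.conf (the extension, socket path and backend binary are just an example):

fastcgi.server = ( ".php" =>
    (( "socket"   => "/var/run/lighttpd/sockets/mywebsite.sock",
       "bin-path" => "/usr/bin/php-cgi"
    ))
)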

Tue, 10 Apr 2007

Debian changing route: the end of the perfect server linux distribution?

From the very little experience I have with Debian, and from what I've been reading about it, I think I can safely say Debian has always been a special distribution: packages take very long to get into the stable tree, because Debian wants to be a rock-solid system where packages go through a lot of testing ("we release it when it's done"). The end result is a distro where you don't have the latest software, nor as much flexibility as, say, Gentoo or Arch: you'll often need to adapt your way of doing things to the "Debian way" (or be prepared to look for help in really obscure places and probably break things). But you do get a stable distro where everything works very decently. That, combined with the absence of licensing fees (unlike, for example, Red Hat), makes it the perfect choice for a server in small companies, where money is more important than things like professional support or official certifications.

However, it seems like Debian is taking a route that will make it lose its advantages over other distributions in the server market:

::Read more

Tue, 17 Apr 2007

I just became a "System & Network Architect"

I just signed my contract at Incrowd, the company behind sites such as redbox and facebox.

I will be working there in a team of all young, enthusiastic people. Among them are some people I already know: my old friend Lieven (we used to play in a band together and kept in touch afterwards) and my ex-classmate Jurriaan. Both of them love their jobs btw :-).

My official title is "System & Network architect".
My job there will be to keep the "lower level" (hardware, network, databases) secure, stable and performing well.

::Read more

Mon, 17 Nov 2008

AIF: the brand new Arch Linux Installation Framework

Recently I started thinking about writing my own automatic installer that would set up my system exactly the way I want.
(See rethinking_the_backup_paradigm_a_higher-level...)

I looked at the official Arch install scripts to see if I could reuse parts of their code, but unfortunately it was just one big chunk of bash, with the main program and "flow control" (you must first do this step, then that), the UI code (dialogs etc.) and the backend logic (create filesystems, ...) all tangled up and mixed together very closely.
Functionality-wise the installer works fine, but I guess the code behind it is the result of years of adding features and quick fixes without refactoring, making it impossible to reuse any of it.

So I started to write AIF: the Arch Linux Installation Framework (actually it had another name until recently), with these 3 goals in mind:

::Read more

Mon, 24 Nov 2008

Looking for a new job

The adventure at Netlog didn't work out entirely, so I'm looking for a new challenge!

My ideal new (slightly utopian) job would be:

  • Conceptual engineering, while still staying close to the technical side, most notably system engineering and development.
  • Innovative: go where no one has gone before.
  • Integrated in the open-source world. (Bonus points for companies where open source is key in their business model)

To get a detailed overview of my interests and skills, I refer to:

Tue, 17 May 2011

Where are the new Arch Linux release images?

This is a question I've been getting a lot lately. The latest official images are a year old. That is not inherently bad, unless you pick the wrong mirror from the outdated mirrorlist during a netinstall, or you're using hardware that is not supported by the year-old kernel/drivers. A core install will yield a system that needs drastic updating, which is a bit cumbersome, and there are probably other problems I'm not aware of. Many of these can be worked around (with 'pacman -Sy mirrorlist' on the install CD, for example), but it's not exactly convenient.

Over the past years, in the spare time left over between the band, my search for an apartment in Ghent and a bunch of other things, I've worked towards fully refactoring and overhauling how releases are done. Most of that is visible in the releng build environment repository. Every 3 days, the following happens automatically:

  • the packages used to build the images (archiso) and some that are included on the images (aif and libui-sh) get rebuilt. These are actually git versions; the latter two build from a separate develop branch. Normal packages get updated the normal way.
  • the images are rebuilt, and the dual images get generated
  • the images, the packages and their sources are synced to the public on http://releng.archlinux.org

Actually, things are a bit more involved, but this is the gist of it. All of this now runs on a dedicated VPS donated by airVM.

I never really completed the aif automatic test suite; somewhere along the way I decided to focus on crowdsourcing test results instead. The weight of testing images (and all possible combinations of features) has always been huge, and trying to script those tests would either get way too complicated or remain insufficient. So the new approach is largely inspired by the core and testing repositories: we automatically build testing images, people report feedback, and when there is sufficient feedback for a certain set of images (or a bunch of similar sets) to conclude we have some good material, we can promote that set to official media.
The latest piece of the puzzle is the new releng feedback application which Tom Willemsen contributed (again: outsourcing FTW). It is still fairly basic, but should already be useful enough. It lists pretty much all the features you can use with archiso/AIF-based images and automatically updates the list of isos based on what it sees appearing online, so I think it will be a good indicator of what works and what doesn't, for each known set of isos.

So there. Bleeding edge images for everyone, and for those who want some quality assurance: the more you contribute, the more likely you'll see official releases.

While contributing feedback is now certainly very easy, don't think that providing feedback alone is sufficient: it also takes time to maintain and improve aif and archiso, and contributions in that department are still very welcome. I don't think we'll get to the original plan of official archiso releases for each stable kernel version; that seems like a lot of work, despite all of the above.

As for what is new: again too much to list. Here is a changelog, but I stopped updating it at some point. I guess the most visible interesting stuff is the friendlier package dialogs (with package descriptions), support for nilfs, btrfs and syslinux (thanks Matthew Gyurgyik), and an issue reporting tool. Under the hood we refactored quite a bit: mostly block device related stuff and config generation, and the "execution plan" in AIF (how each function calls the others and how failures are tracked) has been simplified considerably.

Sat, 14 Jul 2007

Asymmetric keys instead of passwords for SSH authentication to increase security and convenience

I've been using OpenSSH for a while already, and although I've seen mentions of "public key authentication" and "RSA encryption" several times in its config files, I never took the time to figure out what they did exactly, and stuck to password authentication. But now the guys at work explained how it works, and after reading more about it, I'm totally hooked!

It's a feature of ssh protocol version 2 (so it has been around for a while, i.e. we can all use it without updating anything) which essentially comes down to this: you generate an asymmetric key pair and distribute the public key to all remote hosts. When you log in to such a host, the host encrypts a random value which only you can decrypt (using the private key), and by decrypting it you prove your identity. To secure your private key, you store it encrypted with a passphrase. ssh-agent (which is bundled with OpenSSH) is the tool that interacts with ssh to perform this task: when you log in to a host, ssh-agent will open the private key for you automatically if it can decrypt it with the passphrase it receives from you. The problem is that you have to load the key (enter your passphrase and decrypt it) again for every new session.
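
In practice the basic setup boils down to something like this (the key type and hostname are only examples):

ssh-keygen -t rsa              # generate the key pair; protect the private key with a good passphrase
ssh-copy-id user@remotehost    # install the public key in ~/.ssh/authorized_keys on the remote host
ssh user@remotehost            # from now on the key is used; you'll be asked for its passphrase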

This is where keychain comes in, or, if you're a Mac user and like GUIs, you can use SSH Agent (don't confuse it with the ssh-agent that comes with OpenSSH). These tools basically ask you for the passphrases of all private keys you wish to use in a session (by "session" I mean the whole time you're using your computer), decrypt the encrypted keys from your hard disk and cache the decrypted keys in RAM, so they can be used in every terminal you open.
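
The classic keychain setup (described in more detail in the articles below) is to put something like this in your .bash_profile; the key path is just an example:

# ask for the passphrase once, then reuse the cached key in every new shell
keychain ~/.ssh/id_rsa
. ~/.keychain/$HOSTNAME-sh     # pulls SSH_AUTH_SOCK etc. into this shell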

For more information:
OpenSSH key management, Part 1: Understanding RSA/DSA authentication
OpenSSH key management, Part 2: Introducing ssh-agent and keychain
OpenSSH key management, Part 3: Agent forwarding and keychain improvements (freaks only ;-))

Have fun