Bash script for averaging a fio field

Whenever I’m doing fio experiments, it is good to run them multiple times to account for variance. In the past, I’ve used a spreadsheet to handle the averages, manually inserting and moving rows around to do the appropriate calculations. This is very efficient for studying the data and deciding how you want to present it. However, once you know which numbers you need, it’s good to automate the work, as you might change the underlying kernel or system and need to rerun the experiments.

For that, I made this script that runs a fio job (in this case using libaio, iodepth 1, 1MB reads; change it to your needs). fio is capable of running without a configuration file and exporting the results as a comma-separated row. In the script, we execute the same fio job ITER number of times, then take the generated files and average the read field of the returned fio result rows, leaving a nice averaged number to use.

If you’re doing writes, you want to make sure the correct column is selected (currently column 7 -> $7).

I know that this isn’t a very flexible way to do it. However, the script came out of a competition to find the easiest way to get the averaged numbers.
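The core of the approach can be sketched as follows. This is a minimal illustration, not the actual script: the result rows here are synthetic, the field layout is simplified, and the fio invocation is only shown as a comment (the exact column of the read field depends on your fio version's terse output format), so treat every parameter as an assumption:

```shell
#!/bin/sh
# Sketch of the averaging approach. A real run would append one terse
# result row per iteration, e.g. (hypothetical job parameters):
#   fio --name=job --ioengine=libaio --iodepth=1 --rw=read --bs=1M \
#       --filename=/dev/sdX --minimal >> runs.csv
# Here we use three synthetic rows so the averaging step is runnable.
printf 'job;0;0;0;0;0;1000\njob;0;0;0;0;0;1100\njob;0;0;0;0;0;1200\n' > runs.csv

# Average field 7 (the read field in this made-up layout; adjust for writes).
awk -F';' '{ sum += $7; n++ } END { print sum / n }' runs.csv
```

Running this prints 1100, the average of the three synthetic read values.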

Systor 2013 Presentation of the Linux multi-queue block layer

Just arrived home from Systor 2013 in Israel. The conference was well-organized and had a great feel to it. The people attending were split between locals and international researchers, which made for a great environment for discussing ideas and meeting new people.

The presentations were great, and each had some very thoughtful questions asked. The questions ranged from understanding a specific part of a presentation to “Shut up and take my money!” questions. All in all, a very pleasant experience.

I presented our work on the Linux multi-queue block layer. The presentation went very well. There were a lot of questions afterwards, both very detailed ones about the implementation and questions about how the work can be extended with a multi-queue-aware I/O scheduler.

The slides from the presentation can be downloaded here.

Paper accepted (Multi-queue) and when do we see it in mainline?

Yay! Our multi-queue paper (Linux Block IO: Introducing Multi-queue SSD Access on Multi-core Systems) was accepted to Systor. I’ll be presenting the work in late June. It’s going to be a blast.

We’re working on getting it upstream as soon as possible. It is mostly ready to be posted to the kernel list, but minor stuff still needs to be done.

One part is driver support: block device drivers have to be rewritten for the new layer. Depending on how the device driver is implemented, this can be more or less troublesome. If the driver implements a request-based model, its setup can easily be replaced with the multi-queue initialization code paths, while if the device driver is based on the raw architecture (i.e. implements its own block layer), it will have to let I/O be handled by the multi-queue block layer instead. Specifically, the mapping of hardware queues and the way tagging is implemented must be adapted. For inspiration on how to do this, please see either the null driver or the mtip (Micron p320) driver, for which we have already implemented the necessary logic.

Currently I am working on removing libata's dependency on libscsi. This could seem like a bad idea, as libata benefits from the libscsi framework, using its shared device initialization and error handling. However, libata must implement a translation layer from SCSI to ATA; this translates directly into overhead for each I/O submitted through the driver, and it also has scalability issues from locking and from accessing memory in a non-NUMA-friendly way. Thus, cutting the ties from libata to libscsi will allow us to remove some of these issues and optimize further.

Another missing part of the multi-queue block layer is an I/O scheduler. Feel free to write one. Previous work has shown that SSDs can benefit from one. It is, however, a good chunk of work that requires some great engineering effort to implement.

Paper accepted!

We got our “The Necessary Death of the Block Device Interface” paper accepted at CIDR 2013 for their Outrageous Ideas and Visions track. Exciting times!

On that note, I’ll be in California and attending the conference.

Patch for FusionIO’s IOMemory VSL on Linux kernel 2.6.39 and 3.0+

(Short story: apply this patch to the source code. The long story follows.)

Today I decided that it was time to get our FusionIO ioDrive up and running, as we needed it for some benchmarks.

I installed the device in my workstation, and as my workstation runs the latest version of Ubuntu (11.10 as of this writing), I needed the FusionIO VSL driver that fit that version. As I went looking for the drivers on FusionIO’s site, I noticed that the latest supported version was Ubuntu 10.10. I knew that this probably meant that I would be in for a treat (Wee :D).

I downloaded the newest driver source. As it had been a few months since we last used the device, I also upgraded the firmware to the latest version. After that, I went to the README of the VSL driver and went with building the driver as a Debian package.

On compiling, I first got this error:

  CC [M] iomemory-vsl-
iomemory-vsl- In function 
iomemory-vsl- error: 
implicit declaration of function "path_lookup"

Great, I thought; that’s easy to fix. I went into the code, found that the code path was only needed for older kernels, and commented it out.

Then I was met with the next error:

  CC [M]  iomemory-vsl-
iomemory-vsl- In function 'kfio_get_gd_in_flight':
iomemory-vsl- warning: return makes integer 
from pointer without a cast [enabled by default]

This one was a little more tricky. The kernel gendisk structure had been changed to define its in-flight data as an atomic_t structure instead of an int. That was fixed by packing the “before” int into an atomic_t and then passing it to the in_flight structure.

Then I had these bundles of joy.

iomemory-vsl- In function 'kfio_set_gd_in_flight':
iomemory-vsl- error: incompatible types 
when assigning to type 'struct atomic_t[2]' from type 'int'
iomemory-vsl- In function 'kfio_alloc_queue':
iomemory-vsl- error: 'struct request_queue' 
has no member named 'unplug_fn'
iomemory-vsl- In function 'kfio_unplug':
iomemory-vsl- error: implicit declaration
 of function 'blk_remove_plug' [-Werror=implicit-function-declaration]
iomemory-vsl- In function 'kfio_make_request':
iomemory-vsl- error: implicit declaration
 of function 'blk_queue_plugged' [-Werror=implicit-function-declaration]
iomemory-vsl- In function 'kfio_blk_plug':
iomemory-vsl- error: implicit declaration
 of function 'blk_plug_device' [-Werror=implicit-function-declaration]

We see that they all touch the plugging mechanism for block devices. Jens Axboe (who is the maintainer of the I/O layer in the Linux kernel) made plugging explicit in the 2.6.39 release and changed the interface along the way. Axboe documented the change here and also noted that code using the old plugging could mostly be removed. So I went ahead, removed the hooks, and changed the interfaces where necessary.

That’s about it; the driver compiled and afterwards installed. Remember, if you apply this patch, you are on your own. The driver compiles, but both the changes I made and changes that I’m not aware of might break when you use the device. You may want to wait for FusionIO to release a new set of drivers, or install an older Ubuntu version that FusionIO supports.

Setup of authentication of FTP users using WordPress database

This article describes how Pure-FTPd can use a WordPress database with phpass password hashing. It is part of our work on repeatability of research papers in computer science.

It is motivated by WordPress users who would like to archive their files on FTP storage. Normally, this would be as easy as pointing Pure-FTPd to the database with users and passwords and telling it which table and attributes to access. Unfortunately, as you probably already guessed since I had to write a blog post about it, it is not that simple. WordPress uses a framework for hashing its passwords, and that approach is not compatible with the hashing techniques Pure-FTPd supports.

The framework, the PHP password hashing framework (phpass), is used to perform the hashing. It has different fallback modes, and thus, if we wanted to implement our own hashing technique, we would have to support the same fallback techniques in Pure-FTPd. As this would be somewhat wasted time, we instead implement an authentication method for Pure-FTPd that uses phpass as the hashing back-end.

Pure-FTPd supports a custom authentication model, through which an external application can decide whether an FTP user may access the FTP server. The model exposes several parameters as environment variables on user login: username, password, local IP address, etc. When a user logs in, an external application is executed and can use these values to perform the authentication. As this model relies on us accessing the WordPress database to fetch the password hash, we also considered another approach: extending Pure-FTPd itself with phpass hashing.

The latter solution offered a simple way to authenticate a hash from the database (as we may use the existing database plugin for Pure-FTPd). However, the cost is having to update the source code for each new release. Heck, I know this is bad practice, but what is the fun in life if we can’t break the rules once in a while…

So we went along with the latter approach. First, we had to get phpass up and running as a simple application/script. This was achieved by researching how WordPress stores and authenticates its users, as we wanted to mimic that approach. We then created a simple script that may be called with two arguments (the cleartext password and the hashed password from the database) and exits with 2 if the authentication was successful and 1 otherwise.

The source code is as follows (you may put it in /usr/bin/phpass-wrapper.php and chmod +x it):


#!/usr/bin/php
<?php

$PHPASS_PATH = "/path/to/your/wordpress/wp-includes/class-phpass.php";
require_once($PHPASS_PATH);

if (count($argv) < 3)
    exit(0);

$ph = new PasswordHash(8, TRUE);

$clear = base64_decode($argv[1]);
$hash = base64_decode($argv[2]);

$return = $ph->CheckPassword($clear, $hash);

/* exit code 2 on success, 1 on failure */
exit($return + 1);


The wrapper can stand alone and works for both of the previously mentioned solutions.

The next step is to connect Pure-FTPd and the phpass wrapper. In our latter approach, this is accomplished by adding a phpass authentication method.

diff -rup pure-ftpd-1.0.36/src/crypto.c pure-ftpd-1.0.36-phpass/src/crypto.c
--- pure-ftpd-1.0.36/src/crypto.c 2011-04-17 08:05:54.000000000 -0700
+++ pure-ftpd-1.0.36-phpass/src/crypto.c 2012-04-20 02:07:08.288870553 -0700
@@ -47,6 +47,78 @@ static char *hexify(char * const result,
     return result;
 }
 
+/*
+ * characters used for Base64 encoding
+ */
+const char *BASE64_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
+
+/*
+ * encode three bytes using base64 (RFC 3548)
+ *
+ * @param triple three bytes that should be encoded
+ * @param result buffer of four characters where the result is stored
+ */
+void _base64_encode_triple(unsigned char triple[3], char result[4])
+{
+    int tripleValue, i;
+
+    tripleValue = triple[0];
+    tripleValue *= 256;
+    tripleValue += triple[1];
+    tripleValue *= 256;
+    tripleValue += triple[2];
+
+    for (i = 0; i < 4; i++)
+    {
+        result[3 - i] = BASE64_CHARS[tripleValue % 64];
+        tripleValue /= 64;
+    }
+}
+
+/*
+ * encode an array of bytes using Base64 (RFC 3548)
+ *
+ * @param source the source buffer
+ * @param sourcelen the length of the source buffer
+ * @param target the target buffer
+ * @param targetlen the length of the target buffer
+ * @return 1 on success, 0 otherwise
+ */
+int base64_encode(unsigned char *source, size_t sourcelen, char *target, size_t targetlen)
+{
+    /* check if the result will fit in the target buffer */
+    if ((sourcelen + 2) / 3 * 4 > targetlen - 1)
+        return 0;
+
+    /* encode all full triples */
+    while (sourcelen >= 3)
+    {
+        _base64_encode_triple(source, target);
+        sourcelen -= 3;
+        source += 3;
+        target += 4;
+    }
+
+    /* encode the last one or two characters */
+    if (sourcelen > 0)
+    {
+        unsigned char temp[3];
+        memset(temp, 0, sizeof(temp));
+        memcpy(temp, source, sourcelen);
+        _base64_encode_triple(temp, target);
+        target[3] = '=';
+        if (sourcelen == 1)
+            target[2] = '=';
+        target += 4;
+    }
+
+    /* terminate the string */
+    target[0] = 0;
+
+    return 1;
+}
+
 /* Encode a buffer to Base64 */
 static char *base64ify(char * const result, const unsigned char *digest,
@@ -167,7 +239,6 @@ char *crypto_hash_sha1(const char *strin
     return hexify(result, digest, sizeof result, sizeof digest);
 }
 
 /* Compute a simple hex MD5 digest of a C-string */
 char *crypto_hash_md5(const char *string, const int hex)
diff -rup pure-ftpd-1.0.36/src/log_mysql.c pure-ftpd-1.0.36-phpass/src/log_mysql.c
--- pure-ftpd-1.0.36/src/log_mysql.c 2012-03-15 18:01:37.000000000 -0700
+++ pure-ftpd-1.0.36-phpass/src/log_mysql.c 2012-04-20 02:25:53.020895435 -0700
@@ -324,7 +324,7 @@ void pw_mysql_check(AuthResult * const r
     char *escaped_decimal_ip = NULL;
     int committed = 1;
     int crypto_crypt = 0, crypto_mysql = 0, crypto_md5 = 0, crypto_sha1 = 0,
- crypto_plain = 0;
+ crypto_plain = 0, crypto_phpass = 0;
     unsigned long decimal_ip_num = 0UL;
     char decimal_ip[42];
     char hbuf[NI_MAXHOST];
@@ -419,6 +419,7 @@ void pw_mysql_check(AuthResult * const r
+ crypto_phpass++;
     } else if (strcasecmp(crypto, PASSWD_SQL_CRYPT) == 0) {
     } else if (strcasecmp(crypto, PASSWD_SQL_MYSQL) == 0) {
@@ -427,6 +428,8 @@ void pw_mysql_check(AuthResult * const r
     } else if (strcasecmp(crypto, PASSWD_SQL_SHA1) == 0) {
+ } else if (strcasecmp(crypto, PASSWD_SQL_PHPASS) == 0) {
+ crypto_phpass++;
     } else { /* default to plaintext */
@@ -484,6 +487,25 @@ void pw_mysql_check(AuthResult * const r
             goto auth_ok;
+    if (crypto_phpass != 0) {
+        char str_clear_base64[512];
+        char str_hashe_base64[512];
+        char cmd[1200];
+        int r;
+
+        base64_encode((unsigned char *) password, strlen(password), str_clear_base64, sizeof str_clear_base64);
+        base64_encode((unsigned char *) spwd, strlen(spwd), str_hashe_base64, sizeof str_hashe_base64);
+        snprintf(cmd, sizeof cmd, "phpass-wrapper.php \"%s\" \"%s\"", str_clear_base64, str_hashe_base64);
+        /* system() returns the wrapper's exit status shifted left by 8 bits:
+         * 512 means exit(2) (authenticated), anything else is a reject */
+        r = system(cmd);
+        if (r == 512)
+            goto auth_ok;
+        goto bye;
+    }
     if (crypto_plain != 0) {
         if (*password != 0 && /* refuse null cleartext passwords */
             strcmp(password, spwd) == 0) {
diff -rup pure-ftpd-1.0.36/src/log_mysql.h pure-ftpd-1.0.36-phpass/src/log_mysql.h
--- pure-ftpd-1.0.36/src/log_mysql.h 2011-05-01 18:22:54.000000000 -0700
+++ pure-ftpd-1.0.36-phpass/src/log_mysql.h 2012-04-20 01:59:00.248859759 -0700
@@ -6,6 +6,7 @@
 #define PASSWD_SQL_MYSQL "password"
 #define PASSWD_SQL_MD5 "md5"
 #define PASSWD_SQL_SHA1 "sha1"
+#define PASSWD_SQL_PHPASS "phpass"
 #define PASSWD_SQL_ANY "any"
 #define MYSQL_DEFAULT_SERVER "localhost"
 #define MYSQL_DEFAULT_PORT 3306

The source expects the phpass-wrapper.php file to be accessible through the $PATH variable on execution. Also note that communication between the two processes is handled by base64-encoding the password and hash. This avoids having to escape special characters in the variables and thus makes it safe to pass them as parameters; the variables are decoded in the wrapper. The Pure-FTPd approach of putting them in environment variables is also a possibility, but this is better, as it doesn’t require any global state before the wrapper is executed.
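As a small, self-contained sketch of why the base64 hand-off is convenient, the round-trip below shows a value with shell-hostile characters surviving the encode/decode intact (the password value is made up for the example; in the real setup the decode happens in the wrapper via PHP's base64_decode):

```shell
# A password containing quoting-sensitive characters.
password='p@ss"wo;rd'

# Encode before passing it on the command line, decode on the other side.
encoded=$(printf '%s' "$password" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)

# The value survives the round trip untouched.
[ "$decoded" = "$password" ] && echo round-trip-ok
```

Because the encoded form only contains characters from the Base64 alphabet, no quoting of the arguments can break the command line that is handed to system().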

Now that we have the baseline, the next step is to compile the Pure-FTPd project and install the new binary. If you’re on Ubuntu, you might want to execute configure with the following parameters: ./configure --with-mysql --with-rfc2640 --with-cookie --with-altlog. Install the binary by either creating a new Debian package or simply copying it over an existing installed binary (hackish).

At last, we are ready for the final setup. In /etc/pure-ftpd/db/mysql.conf (or your favorite place for your Pure-FTPd configuration), we add the following:

MYSQLCrypt phpass
MYSQLGetPW SELECT user_pass FROM wp_users WHERE user_login='\L'
# Use your favorite user/group id, or update the uid/gid queries.
MYSQLDefaultUID 1000
MYSQLDefaultGID 1000

Then restart Pure-FTPd and enjoy your newly configured FTP server with support for phpass-hashed passwords.

Laziness at its best!

A short guide on how to set up PulseAudio to stream sound to Windows from two Linux workstations.


So, at work I have three computers that I use interchangeably. Two of them are stationary workstations (sitting on the floor, hidden away), which both run Linux, while a laptop (sitting next to me) has Windows installed.

Having three computers gives me a huge first-world problem… How do I connect all the sound sources to one place? I usually take my headphones home with me when I leave work, so they have to be disconnected. As I mainly use my Linux machines, I had to crawl onto the floor to unplug the headphones, looking like a crawling maniac, twice a day. After a couple of days of this act, I grew tired of it.

I remembered that the audio stack in Linux works through pipes and file descriptors. Aha, so there might be a way to stream the sound from the two Linux machines to the Windows machine, which had an easily accessible audio port. Luckily, I use PulseAudio on both my Linux machines, and as it is a sound system for POSIX-compatible OSs, there had to be a Windows port as well.

And yes, there is. The company Cendio provides Windows binaries for the hungry PulseAudio user. The binaries can be found here.

Get PulseAudio to work on Windows

We want to play the audio from the two Linux boxes through the Windows audio server. First, download the binaries for Windows and extract them to your favorite place. Then we have to configure the PulseAudio daemon to accept connections from the network.

To do this, you create a file (in the same directory as PulseAudio) called In it, you put the following lines:

# The acl subnet below is only an example; substitute your own network.
load-module module-native-protocol-tcp listen= auth-ip-acl=;
load-module module-waveout

The first line makes the daemon listen on all interfaces and allows connections from localhost and your local subnet. The next line specifies the output; we leave that to the default waveout module.

Then you just run pulseaudio.exe. That sums it up. You’re now ready to configure the two Linux workstations. (To save you a couple of minutes: remember to allow connections to your Windows machine in the firewall. PulseAudio uses TCP port 4713.)

Get PulseAudio on Linux to connect to the Windows PulseAudio server

This is the easiest part. I assume that you use an Ubuntu-based distribution. Open a terminal and go into the /etc/pulse/ directory. In it, you’ll find various configuration files (client.conf, daemon.conf,, etc.). The one we are interested in is client.conf. Open the file with your favorite editor (remember to either sudo or be root when editing).

In the file, you insert the following line:

default-server = tcp:ip-address:4713

where ip-address is substituted with the IP address of the Windows machine.

As PulseAudio runs on a per-session basis in Ubuntu, it is not enough to just restart the PulseAudio service. Instead, you should log out of your current X session and log in again to let it stream the audio to the Windows machine.

That’s about it. Really easy, but it did take me half an hour to figure it all out; therefore I wrote this short guide on how to achieve it. Have fun!

My knowledge tools

It is important for any good student to manage their time properly and have good tools for maintaining the information they gather throughout their career.

This both helps you reach higher and makes it a lot easier when you want to put all your hard work into a paper. I usually switch between three states: deep thinking, remembering, and structuring.


For deep thinking, nothing beats a simple notebook and pen. It helps you sketch ideas quickly and gives you a visualization of what you are working on. Furthermore, you can use it as a history of the work you have done throughout the year.

However, not all information works well on paper. It can’t be efficiently searched without a good index, and it is not something you usually have around your scratch book. A lot of today’s information is tracked through web pages and articles. To efficiently track what you have, you might want to maintain bookmarks, etc. in your browser. However, links may go dead, you may want to attach a summary of the article/webpage, or you may want to put it in context with other information. Bookmarks are missing the crucial metadata needed to be really effective. Instead, I use two different tools: Mendeley and Evernote.

Mendeley is a tool to maintain references in a social context. It supports teams and public lists. Articles, papers, etc. can be shared both publicly and within a small team. This is effective for keeping everyone updated on the current papers. Whenever you want to write a paper, you can handpick the relevant papers and export them to the paper’s bibliography; Mendeley can even continuously update the bibliography if needed. It furthermore supports highlighting of PDF documents, which is very helpful. However, there is not yet an Android/iPhone application that allows you to read a document on your device and highlight its text more naturally.

Evernote is the other tool that I use. Whenever I find an article on the web interesting and useful for what I’m doing, I upload it to Evernote. From there I can highlight interesting text, attach a summary, files, etc., and categorize it with tags.

At first, maintaining a good list of knowledge might feel like a lot of work. But as you continuously add information, you gain a structured approach to your thinking, and you can easily find information when you need it.

The Evernote / Mendeley Combo

The combo becomes even more powerful when used to write papers. By adapting the Cal Newport approach of creating a paper research wiki, we may structure our research as a hierarchy.

We maintain three levels for a paper. Each level may only reference predefined information.

  • The top level (primary sources) holds all the information that is relevant to your paper. This includes other authors’ papers, benchmarks, your own results, etc.
  • The next level is ideas (secondary sources). In it, describe the idea that is the core of your paper. Remember that the ideas must only build on the information in your primary sources.
  • At last, you define the structure of your paper and begin writing the abstract, context, related work, your work, results, and conclusion. When you have all the information stored, you are ready to put it into the template of the venue where you will submit your paper.

This process of creating a three-level structure is very effective for quickly structuring knowledge and is better than having a pile of annotated papers in the corner. Furthermore, when you have to structure the information as a hierarchy, it forces you to think in more high-level terms, which helps clarify your paper.