Blog Archives

Accessing Linux ext2/ext3 partitions from MS Windows
July 2nd 2009

Accessing both Windows FAT and NTFS file systems from Linux is quite easy, with tools like NTFS-3G. However (following the MS tradition of making itself incompatible with everything else, to thwart competition), doing the opposite (accessing Linux file systems from Windows) is more complicated. One has to wonder why (and how!) closed, proprietary and technically inferior file systems can be read by free software tools, whereas proprietary software with such a big corporation behind it is incapable (or unwilling) to interact with superior, free file systems. Why should Windows users be deprived of the choice of JFS, XFS or ReiserFS, when they are free? Are MS techs too dumb to implement them? Or too evil to give their users the choice? Or, maybe, too scared that if choice is possible, their users will dump NTFS? None of these explanations makes one feel much love for MS, does it?

This stupid inability of Windows to read any of the many formats Linux can use gives rise to problems not only for Windows users, but also for Linux users. For example, when I format my external hard disks or pendrives, I end up wondering if I should reserve some space for a FAT partition, so I could put there data to share with hypothetical Windows users I might lend the disk to. And, seriously, I abhor wasting my hardware on such lousy file systems, when I could use Linux ones.

Anyway, there are some third-party tools to help us with such a task. I found at least two: Ext2IFS and Ext2Fsd.

I have used the first one, but as some blogs point out (e.g. BloggUccio), Ext2Fsd is required if the inode size is bigger than 128 B (it is 256 B in some modern Linux distros).

Getting Ext2IFS

It is a simple .exe file you can download from the project's page. Installing it consists of the typical Windows next-next-finish click-dance. In principle the defaults are OK. It will ask you about activating "read-only" (which I declined; it's less safe, but I want to be able to write too), and something about large file support (which I accepted, because it's only an issue with Linux kernels older than 2.2... Middle Age stuff).

Formatting the hard drive

In principle, Ext2IFS can read ext2/ext3 partitions with no problem. In practice, if the partition was created with an inode size of more than 128 bytes, Ext2IFS won't read it. To create a "compatible" partition, you can mkfs it with the -I flag, as follows:

# mkfs.ext3 -I 128 /dev/whatever

I found out about the 128 B inode thing from this forum thread [es].
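If you want to rehearse this from Linux before trusting the disk to a Windows box, you can do it on a loopback image instead of a real device; mkfs.ext3 and tune2fs come with e2fsprogs, and the image path below is just an example:

```shell
# Rehearse on a 64 MB file-backed image (no root needed; -F forces
# mkfs to work on a regular file):
dd if=/dev/zero of=/tmp/ext3test.img bs=1M count=64 2>/dev/null
mkfs.ext3 -q -F -I 128 /tmp/ext3test.img
# Verify the inode size before handing the disk to Windows:
tune2fs -l /tmp/ext3test.img | grep -i 'inode size'
```

The same `tune2fs -l` check on the real partition tells you whether an already-formatted disk will be readable by Ext2IFS.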

Practical use

What I have done, and tested, is the following: I format my external drives with almost all of the space as ext3, as described, leaving a couple of gigabytes (you could cut down to a couple of megabytes if you really wanted) for a FAT partition. Then I copy the Ext2IFS_1_11a.exe executable to that partition.

Whenever you want to use that drive, Linux will see two partitions (the ext3 one and the FAT one); you can ignore the second. From Windows, you will see only the 2 GB FAT partition. However, you will be able to open it, find the exe, double-click, and install Ext2IFS. After that, unplug the drive and plug it back in: voilà, you will see the ext3 partition just fine.



LWD - June 2009
June 2nd 2009

This is a continuation post for my Linux World Domination project, started in this May 2008 post. You can read the previous post in the series here.

In the following data, T2D means "time to domination" (the expected time for the Windows/Linux shares to cross, counting from the present date), DT2D means the difference (increase/decrease) in T2D with respect to the last report, CLP means "current Linux percent", as given by the last logged data, and DD means "domination day" (in YYYY-MM-DD format).

For the first time, data for PrimeGrid is included.

Project         T2D         DT2D         DD          CLP            Confidence %
Einstein        4.5 months  +3.5 months  2009-10-14  44.51 (+2.42)  6.4
MalariaControl  >10 years   -            -           12.64 (+0.09)  -
POEM            >10 years   -            -           10.66 (+0.19)  -
PrimeGrid       75 months   -            2015-07-22  9.61           1.3
Rosetta         >10 years   -            -           8.37 (+0.28)   -
QMC             >10 years   -            -           7.92 (+0.05)   -
SETI            >10 years   -            -           8.00 (+0.06)   -
Spinhenge       >10 years   -            -           3.87 (+0.28)   -

Mmm, the numbers seem quite discouraging, but the data is what it is. On the bright side, all CLPs have gone up, some by almost 0.3% in 3 months. The Linux tide seems unstoppable; its forward speed, however, is not necessarily high.

As promised, today I'm showing the plots for PrimeGrid; in the next issue, QMC@home.

Number of hosts percent evolution for PrimeGrid (click to enlarge)

Accumulated credit percent evolution for PrimeGrid (click to enlarge)


John "maddog" Hall and OpenMoko at DebConf9 in Cáceres, Spain
May 15th 2009

The annual Debian developers meeting, DebConf, is being held this year in Cáceres (Spain), from July 23rd to 30th. Apart from just promoting the event, I am posting this to mention that the Spanish OpenMoko distributor Tuxbrain will participate, and will sell discounted Neo FreeRunner phones. As a masochistic proud owner of one such phone, I feel compelled to spread the word (and help infect other people with FLOSS virii).

You can read a post about it, made by Martin Krafft, on the debconf-announce and debian-devel-announce lists. Also, David Samblas of Tuxbrain uploaded a video of maddog Hall promoting the event:


Poor Intel graphics performance in Ubuntu Jaunty Jackalope, and a fix for it
April 29th 2009

Update: read second comment

I recently upgraded to Ubuntu Jaunty Jackalope, and have experienced a much slower desktop response since. The problem seems to affect Intel GMA chips, like the one my computer has. The reason for the poor performance is that Canonical Ltd. decided not to include UXA acceleration in Jaunty, for stability reasons (read more at Phoronix).

The issue is discussed at the Ubuntu wiki, along with some solutions. For me, the fix involved just enabling UXA, by including the following in the xorg.conf file, as recommended in the wiki:

Section "Device"
        Identifier    "Configured Video Device"
        # ...
        Option        "AccelMethod" "uxa"
EndSection


Brief MoinMoin howto
April 19th 2009

I recently started looking for some system/format to dump personal stuff on. I checked my own comparison of wiki software, and chose MoinMoin.

I have already installed some MediaWiki wikis for personal use, and I consider it a really nice wiki system. However, one of its strengths is also a drawback for me: the backend is a database. I want to be able to migrate the wiki painlessly, and with MediaWiki this is not possible. There is no end to the files and database dumps one has to move around, and it is never clear whether something is still missing (like the edit history or some setting). I want a single dir holding all the data required to replicate the wiki, so I can rsync just that dir to another computer and have an instant clone of the wiki elsewhere. MoinMoin provides just that (I think; I might have to change my mind when I use it more).

So here are the steps I took to have MM up and running on my Ubuntu 8.10 PC.


Ubuntu has packages for MM, so you can just install them:

% aptitude install python-moinmoin moinmoin-common


Create a dir to put your wiki. For example, if you want to build a wiki called wikiname:

% mkdir -p ~/MoinMoin/wikiname

We made it a subdir of a global dir "MoinMoin", so we can create a wiki farm in the future.

Next you have to copy some files over:

% cd ~/MoinMoin/wikiname
% cp -vr /usr/share/moin/data .
% cp -vr /usr/share/moin/underlay .
% cp /usr/share/moin/config/wikiconfig.py .
% cp /usr/share/moin/server/moin.py .

If installing a wiki farm, you could be interested in the contents of /usr/share/moin/config/wikifarm/, but this is out of the scope of this post.

The next step is to edit wikiconfig.py to our liking. The following lines could be of interest:

sitename = u'Untitled Wiki'
logo_string = u'MoinMoin Logo'
page_front_page = u"MyStartingPage"
data_dir = './data/'
data_underlay_dir = './underlay/'
superuser = [u"yourusername", ]
acl_rights_before = u"yourusername:read,write,delete,revert,admin"


You just need to run moin.py; there is no need to have Apache running or anything (unlike with, e.g., MediaWiki):

% cd ~/MoinMoin/wikiname/
% python moin.py &

Then open your favourite browser and go to http://localhost:8080, and you will be greeted by the starting page.



Temperature and fan speed control on the Asus Eee PC
March 15th 2009

I noticed that after my second eeebuntu install (see a previous post for the reason behind this reinstall), my Eee PC was a wee bit noisier. Most probably it had always been like that, and I just noticed it after the reinstall.

I put some sensor output in my Xfce panel, and noticed that the CPU temperature hovered around 55 degrees C, and the fan would continuously spin at around 1200 rpm. I searched the web about it, and found out that usually fans are stopped at computer boot, then start spinning when the temperature goes up. This is logical. The small catch is that when the temperature in the Eee PC goes down, the fan does not stop automatically. This means that, in the long run, the fan is almost always spinning.

I searched for methods to fix that, and read a post about it. From there I took the idea of taking over control of the fan, making it spin according to the current temperature. For that, I wrote the following script:



#!/bin/bash
MANFILE=/proc/eee/fan_manual    # 1 = manual control, 0 = automatic
SPEEDFILE=/proc/eee/fan_speed   # percent of maximum fan speed
TEMPFILE=/proc/eee/temperature  # CPU temperature in C (path may differ per setup)

# Get temperature:
TEMP=$(cat $TEMPFILE)

# Choose fan speed (the percentages are illustrative; tune to taste):
if [ $TEMP -gt 65 ]; then SPEED=100
elif [ $TEMP -gt 60 ]; then SPEED=60
elif [ $TEMP -gt 55 ]; then SPEED=30
else SPEED=0
fi

# Impose fan speed:
echo 1 > $MANFILE
echo $SPEED > $SPEEDFILE

The file /proc/eee/fan_manual controls whether fans are under manual (file contains a "1") or automatic (file contains a "0") control. File /proc/eee/fan_speed must contain an integer number from 0 to 100 (a percent of max fan speed).

I am running this script every minute with cron, and thus far it works OK.
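For reference, the cron side is a single line in root's crontab; the script path below is made up, use wherever you saved it:

```
# Edit root's crontab with "crontab -e", then add:
* * * * * /usr/local/bin/eee_fan.sh
```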



LWD - March 2009
March 12th 2009

Did I say "bimonthly" in my last report? Mmm, that was 3 months ago... You can read an intro for my Linux World Domination project in this May 2008 post.

As usual, D2D means "days to domination" (the expected time for the Windows/Linux shares to cross, counting from the present date), DD2D means the difference (increase/decrease) in D2D with respect to the last report, CLP means "current Linux percent", as given by the last logged data, and DD means "domination day" (in YYYY-MM-DD format).

Project         D2D   DD2D  DD          CLP            Confidence %
Einstein        107   -144  2009-06-26  42.09 (+4.61)  17.3
MalariaControl  >10k  -     -           12.55 (+0.10)  -
POEM            5345  +325  2023-10-30  10.47 (+0.42)  2.5
Rosetta         >10k  -     -           8.09 (+0.10)   -
QMC             >10k  -     -           7.87 (-0.04)   -
SETI            >10k  -     -           7.94 (+0.06)   -
Spinhenge       >10k  -     -           3.59 (+0.24)   -

As promised, today I'm showing the plots for POEM@home; in the next issue, PrimeGrid.

Number of hosts percent evolution for POEM@home (click to enlarge)

Accumulated credit percent evolution for POEM@home (click to enlarge)


Free software woes
March 11th 2009

Yes, FLOSS also has its quirks and problems, and I am going to rant about some of those that I ran into last week.

Problem 1: fsck on laptops

The reader might know that Linux comes with a collection of file system checkers/fixers, under the name fsck.* (where * = ext2/3, reiserfs, jfs, xfs...). When one formats a new partition (or tunes an existing one), some parameters are set, for example under what circumstances fsck should be run automatically (you can always run it by hand). The typical setting is to run the command on each partition (just before mounting it) every N times it is mounted, or every M days.
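For ext2/3, those N and M are what tune2fs calls the maximum mount count and the check interval. A sketch on a scratch image (on a real system you would point tune2fs at the partition device, e.g. /dev/sdXN):

```shell
# Scratch ext3 image to play with (no root needed):
dd if=/dev/zero of=/tmp/fscktest.img bs=1M count=64 2>/dev/null
mkfs.ext3 -q -F /tmp/fscktest.img
# fsck every 25 mounts or every 30 days, whichever comes first:
tune2fs -c 25 -i 30d /tmp/fscktest.img
# Inspect the resulting settings:
tune2fs -l /tmp/fscktest.img | grep -E 'Maximum mount count|Check interval'
```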

It is also set that if a filesystem is not shut down cleanly (e.g., by crashing the computer or directly unplugging it), fsck will be run automatically on next boot (hey, that's so nice!).

However, here's the catch: on laptops, with the aim of saving power, fsck will (typically) not run automatically when on batteries. This seems a great idea, but you can imagine a scenario where it fails: shut down the laptop uncleanly, then power it up on batteries, and... voilà, you are presented with a system that seems to boot, but gives a lot of problems, X doesn't work... because the disk was corrupt, and wasn't fixed on boot.

When this happened to me, I fixed it by booting while plugged in. In principle you could also boot in single user mode, then choose "Check the filesystem" in the menu you will be presented with (I'm talking about Ubuntu here), and fix the problem even on batteries. But still, it's annoying. IMHO fsck should run after unclean shutdowns, whether plugged in or on batteries.

Problem 2: failed hibernate can seriously screw your system

I tried hibernating my laptop (a feature I keep finding problems with), but it was taking too long, and I was forced to shut it down using the power button. This, in itself, is a serious issue, but I could live with it.

But what I can't live with is that after that event, I had no way of booting back up! I tried all I could, and finally had to reinstall the OS. It happened to me, and I still find it hard to believe: Linux so fucked up that you have to reinstall. I thought reinstalling belonged to the Windows Dark Ages!

Problem 3: faulty SD card

Since problems tend to come together, it's no surprise that I came across this error when trying to reinstall the machine borked by the previous problem. The thing is that I was using an SD card as installation medium, burning the ISO onto it with UNetbootin. The burning didn't burp any error, but the installation failed, usually (but not always) at the same point.

After minutes (hours?) of going crazy, I burned the ISO onto another SD card, and it worked like a charm.

My complaint is not that the SD card was faulty, which I can understand (hardware fails). What I am angry at is the fact that I checked (with the aforementioned fsck command) the FS on the card many times, and reformatted it (with mkfs) many more times, and Linux would always say that the formatting had been correct, and that all checks were fine. I understand that things are sometimes OK, sometimes KO. I just want to know which is which!



Miniblogging from Catania
March 3rd 2009

Right now I'm at the 4th EGEE User Forum/OGF25 conference, being held in Catania, Sicily.

I have some random thoughts to write down, and my lately little-attended blog seems the right place to do so.

Random thought of the moment: everyone, I mean every boy and girl and their pets, has a laptop. Everyone listens to the talks with a laptop on their knees. Also, an amazing fraction of these (from 1 in 4 to 1 in 3, maybe) are Macs. Linux machines are also relatively abundant, although a sad majority of the laptops seem to run Windows.

Might this mean that techies favor Apple? Maybe it just means that geeks can also be posh, as shown by the equally high amount of iPhones I've seen around.


Save HD space by using compressed files directly
January 14th 2009

Maybe the constant increases in hard disk capacity provide us with ever more space to waste with our files, but there is always a situation in which we would like to squeeze as much data into as little space as possible. Besides, it is always good practice to keep disk usage as low as possible, just for tidiness.

The first and most important advice for saving space: for $GOD's sake, delete the stuff you don't need!

Now, assuming you want to keep all you presently have, the second tool is data compression. Linux users have long-time friends in the gzip and bzip2 commands. One would use the former for fast (and reasonably good) compression, and the latter for when saving space is really vital (although bzip2 is really slow). A more recent entry in the "perfect compression tool" contest is the Lempel-Ziv-Markov chain algorithm (LZMA). It can compress even more than bzip2, and is usually faster (although never as fast as gzip).
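A rough illustration of the size trade-off, on a trivially compressible sample (ratios vary wildly with the input; I leave LZMA out of the sketch because the tool name varies between installs):

```shell
# Generate ~600 kB of highly compressible text:
seq 1 100000 > /tmp/sample.dat
# Compress with gzip (fast) and bzip2 (tighter, slower):
gzip  -c /tmp/sample.dat > /tmp/sample.dat.gz
bzip2 -c /tmp/sample.dat > /tmp/sample.dat.bz2
# Compare the resulting sizes:
ls -l /tmp/sample.dat /tmp/sample.dat.gz /tmp/sample.dat.bz2
```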

One problem with compression is that, although it is a good way of storing files, they usually have to be uncompressed to be modified, and then re-compressed, which is very slow. However, there are some tools that interact with compressed files directly (internally decompressing "on the fly" only the part that needs to be read or edited). I would like to mention a few of them here:

Shell commands

We can use zcat, zgrep and zdiff as replacements for cat, grep and diff, but for gzipped files. These account for a huge fraction of all the interaction I have with text files from the command line. If you are like me, they can save you tons of time.
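A quick demonstration (the file name is just an example):

```shell
# Keep a text file gzipped and still cat/grep it:
printf 'alpha\nbeta\ngamma\n' > /tmp/notes.txt
gzip -f /tmp/notes.txt            # leaves only /tmp/notes.txt.gz
zcat  /tmp/notes.txt.gz           # like cat, but for .gz files
zgrep beta /tmp/notes.txt.gz      # like grep, but for .gz files
```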


Vim

Vim can be instructed to open some files using a decompression tool, showing the contents of the file and letting you work on them transparently. Once we :wq out of the file, we get the original compressed file back. This cycle is incredibly fast: almost as fast as opening the uncompressed file, and nowhere near as slow as gunzipping, viming and gzipping sequentially.

You can add the following to your .vimrc config file for the above:

" Only do this part when compiled with support for autocommands.
if has("autocmd")

 augroup gzip
  " Remove all gzip autocommands
  au!

  " Enable editing of gzipped files
  " set binary mode before reading the file
  autocmd BufReadPre,FileReadPre	*.gz,*.bz2,*.lz set bin

  autocmd BufReadPost,FileReadPost	*.gz call GZIP_read("gunzip")
  autocmd BufReadPost,FileReadPost	*.bz2 call GZIP_read("bunzip2")
  autocmd BufReadPost,FileReadPost	*.lz call GZIP_read("unlzma -S .lz")

  autocmd BufWritePost,FileWritePost	*.gz call GZIP_write("gzip")
  autocmd BufWritePost,FileWritePost	*.bz2 call GZIP_write("bzip2")
  autocmd BufWritePost,FileWritePost	*.lz call GZIP_write("lzma -S .lz")

  autocmd FileAppendPre			*.gz call GZIP_appre("gunzip")
  autocmd FileAppendPre			*.bz2 call GZIP_appre("bunzip2")
  autocmd FileAppendPre			*.lz call GZIP_appre("unlzma -S .lz")

  autocmd FileAppendPost		*.gz call GZIP_write("gzip")
  autocmd FileAppendPost		*.bz2 call GZIP_write("bzip2")
  autocmd FileAppendPost		*.lz call GZIP_write("lzma -S .lz")

  " After reading compressed file: Uncompress text in buffer with "cmd"
  fun! GZIP_read(cmd)
    let ch_save = &ch
    set ch=2
    execute "'[,']!" . a:cmd
    set nobin
    let &ch = ch_save
    execute ":doautocmd BufReadPost " . expand("%:r")
  endfun

  " After writing compressed file: Compress written file with "cmd"
  fun! GZIP_write(cmd)
    if rename(expand("<afile>"), expand("<afile>:r")) == 0
      execute "!" . a:cmd . " <afile>:r"
    endif
  endfun

  " Before appending to compressed file: Uncompress file with "cmd"
  fun! GZIP_appre(cmd)
    execute "!" . a:cmd . " <afile>"
    call rename(expand("<afile>:r"), expand("<afile>"))
  endfun

 augroup END
endif " has("autocmd")

I first found the above in my (default) .vimrc file, allowing gzipped and bzipped files to be edited. I added the "support" for LZMAed files quite trivially, as can be seen in the lines containing "lz" in the code above (I use .lz as the extension for LZMAed files, instead of the default .lzma; see man lzma for more info).

Non-plaintext files

Other files that I have been able to successfully use in compressed form are PostScript and PDF. Granted, PDFs are already quite compact, but sometimes gzipping them saves some space. In general, PS and EPS files save a lot of space when gzipped.

As far as I have tried, the Evince document viewer can read gzipped PS, EPS and PDF files with no problem (probably DVI files as well).




  • The contents of this blog are under a Creative Commons License.
