Hardware compatibility is better with Windows... not
January 3rd 2010

One of the (few, but legitimate) reasons some Windows users give for not switching to Linux is that many pieces of hardware are not recognized by the latter. Sure enough, 99.9%, if not all, of the devices sold in shops are "Windows compatible". Device manufacturers make damn sure their product, be it a pendrive or a printer, a computer screen or a keyboard, will work on any PC running Windows. They will even ship a CD with the drivers in the same package, so that installation of the device is as smooth as possible on Microsoft's platform. Linux compatibility? Well, they usually just don't care. Those hackers will make it work anyway, so why bother? And their market share is too small to take into account.

Now, on to some personal experience with a webcam. I bought a webcam for my girlfriend's laptop, which doesn't have one integrated. It was a cheap Logitech USB one, with "Designed for Skype" and "Windows compatible" written all over the box. It even came with a CD, prominently marked as "Windows drivers". My girlfriend's laptop runs Windows Vista, so I decided to give it a chance and plugged the webcam in without further consideration. A message from our beloved OS informed me that a new device had been plugged in (brilliant!) but that Windows lacked the necessary drivers to make it work (bummer!). OK, no problem. We had the drivers, right? I unplugged the camera, inserted the CD, and followed the instructions to get the drivers installed. Everything went fine, except that the installation progress bar took more than 12 minutes (checked on my watch) to reach 100%. After installation, Windows informed me that a system reboot was necessary, so I rebooted. After the reboot, the camera worked.

As I had my Asus Eee at hand, I decided to try the webcam on it. I plugged it, and nothing happened. I just saw the green light on the camera turn on. Well, maybe it worked... I opened Cheese, a Linux program to show the output of webcams. I was a bit wary, because the Eee has an integrated webcam, so maybe there would be some interference or something. Not so. Cheese showed me immediately the output of the webcam I had just plugged, and offered me a menu with two entries (USB webcam and integrated one), so I could choose. That's it. No CD with drivers, no 12-minute installation, no reboot, no nothing. Just plug and play.

Perhaps it is worth mentioning that the next time I tried to use the webcam on the Vista laptop, it would ask me for driver installation again! I don't know why... I must have done something wrong in the first installation... With Windows, who knows?



Accessing Linux ext2/ext3 partitions from MS Windows
July 2nd 2009

Accessing both Windows FAT and NTFS file systems from Linux is quite easy, with tools like NTFS-3G. However (following the MS tradition of making itself incompatible with everything else, to thwart competition), doing the opposite (accessing Linux file systems from Windows) is more complicated. One has to wonder why (and how!) closed, proprietary and technically inferior file systems can be read by free software tools, whereas proprietary software with such a big corporation behind it is incapable of (or unwilling to) interacting with superior and free file systems. Why should Windows users be deprived of the choice of JFS, XFS or ReiserFS, when they are free? Are MS techs too dumb to implement them? Or too evil to give their users the choice? Or, maybe, too scared that if choice is possible, their users will dump NTFS? None of these explanations makes one feel much love for MS, does it?

This stupid inability of Windows to read any of the many formats Linux can use causes problems not only for Windows users, but also for Linux users. For example, when I format my external hard disks or pendrives, I end up wondering whether I should reserve some space for a FAT partition, so I could put data there to share with hypothetical Windows users I might lend the disk to. And, seriously, I abhor wasting my hardware on such lousy file systems when I could use Linux ones.

Anyway, there are some third-party tools to help us with such a task. I found at least two: Ext2IFS and Ext2Fsd.

I have used the first one, but as some blogs point out (e.g. BloggUccio), Ext2Fsd is required if the inode size is bigger than 128 B (it is 256 B in some modern Linux distros).

Getting Ext2IFS

It is a simple exe file you can download from fs-driver.org. Installing it consists of the typical Windows next-next-finish click-dance. In principle the defaults are OK. It will ask you about activating "read-only" mode (which I declined; it's less safe, but I want to be able to write too), and about large file support (which I accepted, because it's only an issue with Linux kernels older than 2.2... Middle Ages stuff).

Formatting the hard drive

In principle, Ext2IFS can read ext2/ext3 partitions with no problem. In practice, if the partition was created with an inode size of more than 128 bytes, Ext2IFS won't read it. To create a "compatible" partition, you can mkfs it with the -I flag, as follows:

# mkfs.ext3 -I 128 /dev/whatever

I found out about the 128 B inode thing from this forum thread [es].
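If you want to check what inode size a filesystem already has, tune2fs will tell you. A quick sketch, run against a throwaway image file instead of a real device so no root access is needed (all the paths here are just examples):

```shell
# Create a small scratch image and format it with 128-byte inodes,
# as described above (-F lets mkfs work on a regular file):
dd if=/dev/zero of=/tmp/ext3test.img bs=1M count=16 status=none
mkfs.ext3 -q -F -I 128 /tmp/ext3test.img

# Ask tune2fs for the inode size; Ext2IFS needs it to be 128:
tune2fs -l /tmp/ext3test.img | grep "Inode size"
```

On a real disk you would point tune2fs at the partition (e.g. /dev/sdb1) instead of the image file.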

Practical use

What I have done, and tested, is the following: I format my external drives almost entirely as ext3, as described above, leaving a couple of gigabytes (you could cut this down to a couple of megabytes if you really wanted to) for a FAT partition. Then I copy the Ext2IFS_1_11a.exe executable to that partition.

Whenever you want to use that drive, Linux will see two partitions (the ext3 and the FAT one), the second one of which you can ignore. From Windows, you will see only a 2GB FAT partition. However, you will be able to open it, find the exe, double-click, and install Ext2IFS. After that, you can unplug the drive and plug it again...et voilà, you will see the ext3 partition just fine.



Microsoft produces crap, AMD eats it
June 16th 2009

It's old news, but I just read about it in the Wikipedia article on the Phenom II processor.

Apparently Phenom processors had the ability to scale the CPU frequency independently for each core in multicore systems. Now, Phenom II processors lack this feature: the CPU frequency can be scaled, but all cores must share the same frequency.

Did this happen for technical reasons? Did AMD think it was better this way? No. As Wikipedia says:

Another change from the original Phenom is that Cool 'n Quiet is now applied to the processor as a whole, rather than on a per-core basis. This was done in order to address the mishandling of threads by Windows Vista, which can cause single-threaded applications to run on a core that is idling at half-speed.

The situation is explained in an article at anandtech.com, where the author mistakes a flaw of Vista's for a flaw in the Phenom processor (bolding of the text is mine):

In theory, the AMD design made sense. If you were running a single threaded application, the core that your thread was active on would run at full speed, while the remaining three cores would run at a much lower speed. AMD included this functionality under the Cool 'n' Quiet umbrella. In practice however, Phenom's Cool 'n' Quiet was quite flawed. Vista has a nasty habit of bouncing threads around from one core to the next, which could result in the following phenomenon (no pun intended): when running a single-threaded application, the thread would run on a single core which would tell Vista that it needed to run at full speed. Vista would then move the thread to the next core, which was running at half-speed; now the thread is running on a core that's half the speed as the original core it started out on.

Phenom II fixes this by not allowing individual cores to run at clock speeds independently of one another; if one core must run at 3.0GHz, then all four cores will run at 3.0GHz. In practice this is a much better option as you don't run into the situations where Phenom performance is about half what it should be thanks to your applications running on cores that are operating at half speed. In the past you couldn't leave CnQ enabled on a Phenom system and watch an HD movie, but this is no longer true with Phenom II.

Recall how the brilliant author ascribes the "flaw" to CnQ, instead of to Vista, and how it was AMD who "fixed" the problem!

The plain truth is that AMD developed a technology (independent core scaling) that would save energy (which means money and ecology) with zero effect on performance (since the cores actually running jobs run at full speed), and MS Vista, being a pile of crap, forced them to revert it.

Now, if you have a computer with 4 or 8 cores and watch an HD movie (which needs one core at full speed to decode it, but only one), all 8 cores will be running at full speed, wasting power, producing CO2, and charging you money at 8 times the rate actually required!

The obviously right solution would be to fix Vista so that threads don't dance from core to core unnecessarily, so that AMD's CnQ technology could be used to its full extent. AMD's move with the Phenom II just fixed the performance problem, by basically destroying the whole point of CnQ.
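For what it's worth, on Linux you don't have to wait for the scheduler to behave: you can pin a process to a single core by hand with taskset (from util-linux), so that per-core frequency scaling keeps doing its job. A quick sketch (the command being pinned is just a stand-in, and the PID in the comment is hypothetical):

```shell
# Launch a single-threaded job bound to core 0; the scheduler will
# never bounce it to another core:
taskset -c 0 sh -c 'echo "running pinned to core 0"'

# An already-running process can be pinned too:
# taskset -cp 0 1234
```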

Now take a second to reflect on how the monstrous domination of MS over the OS market leads to problems like this one. In a really competitive market, if a stupid OS provider gets it wrong and their OS does not support something like CnQ properly, customers will migrate to other OSs, and the rogue provider will be forced to fix their OS. The dominance of MS (plus their stupidity) has just held back precious technological advances!



Jon "maddog" Hall and OpenMoko at DebConf9 in Cáceres, Spain
May 15th 2009

The annual Debian developers meeting, DebConf, is being held this year in Cáceres (Spain), from July 23 to 30. Apart from just promoting the event, I am posting this to mention that the Spanish OpenMoko distributor Tuxbrain will participate, and sell discounted Neo FreeRunner phones. As the masochistic proud owner of one such phone, I feel compelled to spread the word (and help infect other people with FLOSS virii).

You can read a post about it on the debconf-announce and debian-devel-announce lists, made by Martin Krafft. Also, David Samblas of Tuxbrain uploaded a video of maddog Hall promoting the event.


Poor Intel graphics performance in Ubuntu Jaunty Jackalope, and a fix for it
April 29th 2009

Update: read second comment

I recently upgraded to Ubuntu Jaunty Jackalope, and have experienced a much slower desktop response since. The problem seems to affect Intel GMA chips, which is what my computer has. The reason for the poor performance is that Canonical Ltd. decided not to enable UXA acceleration in Jaunty, for stability reasons (read more at Phoronix).

The issue is discussed at the Ubuntu wiki, along with some solutions. For me, the fix involved just making X.org use UXA, by including the following in the xorg.conf file, as they recommend in the wiki:

Section "Device"
        Identifier    "Configured Video Device"
        # ...
        Option        "AccelMethod" "uxa"
EndSection


Temperature and fan speed control on the Asus Eee PC
March 15th 2009

I noticed that after my second eeebuntu install (see a previous post for the reason behind this reinstall), my Eee PC was a wee bit noisier. Most probably it had always been like that, and I only noticed after the reinstall.

I put some sensor output in my Xfce panel, and noticed that the CPU temperature hovered around 55 degrees C, and the fan would continuously spin at around 1200 rpm. I searched the web about it, and found out that usually the fan is stopped at boot, then starts spinning when the temperature goes up. This is logical. The small catch is that when the temperature in the Eee PC goes down again, the fan does not stop automatically. This means that in the long run the fan is almost always spinning.

I searched for methods to fix that, and I read this post at hartvig.de. From there I took the idea of taking over the control of the fans, and making them spin according to the current temperature. For that, I wrote the following script:



#!/bin/bash

MANFILE=/proc/eee/fan_manual
SPEEDFILE=/proc/eee/fan_speed

# Get temperature (the sensor file location may vary; adjust to your system):
TEMP=$(cat /proc/eee/temperature)

# Choose fan speed (percent of maximum; the thresholds are my choice):
if [ $TEMP -gt 65 ]; then SPEED=100
elif [ $TEMP -gt 60 ]; then SPEED=65
elif [ $TEMP -gt 55 ]; then SPEED=30
else SPEED=0
fi

# Impose fan speed:
echo 1 > $MANFILE
echo $SPEED > $SPEEDFILE

The file /proc/eee/fan_manual controls whether fans are under manual (file contains a "1") or automatic (file contains a "0") control. File /proc/eee/fan_speed must contain an integer number from 0 to 100 (a percent of max fan speed).

I am running this script every minute with cron, and thus far it works OK.
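For reference, the cron entry can look like the following (a sketch: the script path is made up, and it goes in the system crontab because writing to /proc requires root):

```
# /etc/crontab entry (note the extra "user" field of the system crontab):
* * * * *  root  /usr/local/bin/eee_fan.sh
```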



Free software woes
March 11th 2009

Yes, FLOSS also has its quirks and problems, and I am going to rant about some of them that I ran into last week.

Problem 1: fsck on laptops

The reader might know that Linux comes with a collection of file system checkers/fixers, under the name fsck.* (where * = ext2/3, reiserfs, jfs, xfs...). When one formats a new partition (or tunes an existing one), some parameters are set, such as the circumstances under which fsck should be run automatically (you can always run it by hand). The typical setting is to run the command on each partition (just before mounting it) every N times it is mounted, or every M days.
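These parameters can be inspected and changed with tune2fs. A sketch on a scratch image file, so no real device or root access is needed (the values and paths are just examples):

```shell
# Build a throwaway ext3 image standing in for a real partition:
dd if=/dev/zero of=/tmp/fsckdemo.img bs=1M count=16 status=none
mkfs.ext3 -q -F /tmp/fsckdemo.img

# Check every 25 mounts, or every 30 days, whichever comes first:
tune2fs -c 25 -i 30d /tmp/fsckdemo.img

# Verify the settings:
tune2fs -l /tmp/fsckdemo.img | grep -E "Maximum mount count|Check interval"
```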

It is also set that if a filesystem is not shut down cleanly (e.g., by crashing the computer or directly unplugging it), fsck will be run automatically on next boot (hey, that's so nice!).

However, here's the catch: on laptops, with the aim of saving power, fsck will typically not run automatically when on batteries. This seems a great idea, but you can imagine a scenario where it fails: shut down the laptop uncleanly, then power it up on batteries, and... voilà, you are presented with a system that seems to boot, but gives a lot of problems, X doesn't work... because the disk was corrupt, and wasn't fixed on boot.

When this happened to me, I fixed it by booting while plugged in. In principle you could also boot in single-user mode, then choose "Check the filesystem" in the menu you are presented with (I'm talking about Ubuntu here), and fix the problem, even on batteries. But still, it's annoying. IMHO fsck should run after unclean shutdowns, no matter whether plugged in or on batteries.

Problem 2: failed hibernate can seriously screw your system

I tried hibernating my laptop (a feature I keep finding problems with), but it was taking too long, and I was forced to shut it down using the power button. This, in itself, is a serious issue, but I could live with it.

But what I can't live with is that after this event, I had no way of booting back! I tried all I could, and finally had to reinstall the OS. It happened to me, and I still find it hard to believe: Linux so fucked up that you have to reinstall. I thought reinstalling belonged to the Windows Dark Ages!

Problem 3: faulty SD card

Since problems tend to come together, it's no surprise that I came across this error when trying to reinstall the machine borked by the previous problem. The thing is that I was using an SD card as installation media, burning the ISO onto it with UNetbootin. The burning didn't report any error, but the installation failed, usually (but not always) at the same point.

After minutes (hours?) of going crazy, I burned the ISO into another SD card, and it worked like a charm.

My complaint is not that the SD card was faulty, which I can understand (hardware fails). What I am angry at is the fact that I checked (with the aforementioned fsck command) the FS on the card many times, and reformatted it (with mkfs) many more times, and Linux would always say that the formatting had been correct and that all checks were fine. I understand that things are sometimes OK, sometimes KO. I just want to know which is which!
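In fairness, fsck and mkfs check and write filesystem metadata, not the health of the medium itself, so a dying card can pass both. A read-only surface scan with badblocks (also from e2fsprogs) has a better chance of catching it. A sketch, demonstrated on a scratch image file; on the real card you would point it at the device node instead:

```shell
# Scratch image standing in for the SD card (for the real thing use its
# device node, e.g. /dev/sdb -- double-check the name with dmesg first!):
dd if=/dev/zero of=/tmp/sdcard.img bs=1M count=8 status=none

# Read-only scan: -s shows progress, -v reports a summary; any bad
# block numbers found are printed to stdout:
badblocks -sv /tmp/sdcard.img
```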



My eeePC at the EGEE UF4
March 3rd 2009

I just posted about the abundance of laptops at the conference I am attending this week. Now I feel like commenting on my experience with the Asus eeePC 901 I acquired some weeks ago.

I have seen a couple of other eeePCs, a black 9xx one and a 7xx one. Apart from these, most other computers are laptops, not netbooks. I actually expected to find more, for a plethora of reasons. There are some pretty small Vaios around, but they are the only ones that compete with the eeePC in size and weight. Not even the Macs do. Not even the MacBook Airs I have seen. Yes, the screen of the eeePC is tiny, but I would hate carrying around those monsters just to have a big screen on the road.

Secondly, my battery can last for 6h of work. Since I only use it during the breaks, and intermittently during the talks (closing the lid to suspend it when not in use), I can easily use it the whole day without plugging it at all. Other people can't live w/o plugs. In 3.5h this morning, I spent less than 30% of the battery.

Thirdly, there is the price. I would expect the Vaios I mention above to cost easily 5-6 times more than my sub-300-euro jewel. The other laptops are probably cheaper, but still in the range of 2-3x the price of mine. This is not negligible! I am not missing any functionality; I can do everything the others do, but at a fraction of the price, a fraction of the space in my bag, a fraction of the weight on my back when carrying it, and on my knees when using it.


Miniblogging from Catania
March 3rd 2009

Right now I'm in the 4th EGEE User Forum/OGF25 conference being held in Catania, Sicily.

I have some random thoughts to write down, and my lately little-attended blog seems the right place to do so.

Random thought of the moment: everyone, I mean every boy and girl and their pets, has a laptop. Everyone listens to talks with a laptop on their knees. Also, an amazing fraction of these (from 1 in 4 to 1 in 3, maybe) are Macs. Linux machines are also relatively abundant, although a sad majority of laptops seems to run Windows.

Might this mean that techies favor Apple? Maybe it just means that geeks can also be posh, as shown by the equally high amount of iPhones I've seen around.


First impressions on a Neo FreeRunner
January 13th 2009

Yes, as the title implies, I am the fortunate owner of a Neo FreeRunner. For those not in the know, the NFR is a kind of mobile phone/PDA running free software, and aimed at being open, from both a software and a hardware perspective.

I bought it last week, and I already have things that I love, and others that I don't love that much. First thing that sucks: my 128kB Movistar SIM card is not supported, so I can't use the NFR to make calls! Apparently older versions of the SIM card are supported, so I will try to get hold of one (by the way, the simyo card I posted about some time ago works perfectly).

Another thing that is not so good is the stability of the software. However, I expected that, and I have no problem with it. Being open source, the software will evolve day by day, and I will love to see the evolution.

On the bright side: it is really great to be able to install different distros on your phone! I tried OpenMoko, FDOM, QtExtended (formerly Qtopia) and SHR, and all of them have good and bad points. It is like going back to the days when I tried different distros for my computers (now I mostly stick to Ubuntu or Debian). By the way, you can install Debian on the NFR (I haven't tried it yet, because you have to install it on the microSD card, not in the main memory, since it's too big for the latter). You can even try Google's Android, if you so wish.

But the really nice thing about it is that you can create your own apps for it. You can install Perl or Python interpreters, and then use the command-line interface (yes, it does have a command line) to run scripts. Or create icons on the desktop and link them to an action. For example, I created an icon that switches from portrait to landscape orientation when pressed, and back when pressed again. I created another icon that launches mplayer, so I can watch a video by just pressing the icon.
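As an illustration, an orientation-toggle icon like that can point at a tiny script along these lines (a sketch: the state-file name is made up, and the xrandr call is guarded so the script does nothing harmful where X isn't available):

```shell
#!/bin/sh
# Flip between portrait and landscape on each invocation,
# remembering the current mode in a state file.
STATE="$HOME/.landscape"

if [ -f "$STATE" ]; then
    # Currently landscape: rotate back to portrait
    command -v xrandr >/dev/null 2>&1 && xrandr -o normal
    rm -f "$STATE"
else
    # Currently portrait: rotate to landscape
    command -v xrandr >/dev/null 2>&1 && xrandr -o left
    touch "$STATE"
fi
```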

I expect to blog more about the gadget, so stay tuned.



  • The contents of this blog are under a Creative Commons License.