Malicious BitTorrent clients

Another post stressing the fact that freeware is not free software.

A while ago I warned about Browsezilla (a freeware web browser infected with malware), and now I warn about Bitroll and Torrent101. They are freeware, but, since they are proprietary and closed source, no one can read the code behind them. Is this important? Does anyone actually read the code of free software programs? Well, it seems it is important, and it seems that free software programs do get read, because I have yet to see these problems in free BitTorrent clients.

Comments

PDF exploits for all readers and platforms?

I have read in Kriptopolis some posts about new PDF exploits (in Spanish). The articles say that web browser PDF plugins are vulnerable, that dedicated PDF readers are also vulnerable, and that new exploits may be created. The Kriptopolis site keeps talking about new vulnerabilities in PDF documents, and how they affect all platforms. Do they?

If you go to the SecurityFocus site, where they cover the news, you can download an example PDF that exploits this vulnerability. If you open it with any (vulnerable) PDF reader, the program will freeze, and the CPU usage will go through the roof.

Well, bold as I am, I did the test. I opened it with Acroread 7.0 for GNU/Linux and… it froze, and… the CPU usage hit the roof. I could not Ctrl-C the beast, and a kill would not kill it. Fortunately, a kill -9 did the job :^(

Now, I tried Evince:


Heracles[~/Downloads]: evince MOAB-06-01-2007.pdf
Error (3659): Illegal character ')'
Error (0): PDF file is damaged - attempting to reconstruct xref table...
Segmentation fault

and Xpdf:


Heracles[~/Downloads]: xpdf MOAB-06-01-2007.pdf
Error (3659): Illegal character ')'
Error (0): PDF file is damaged - attempting to reconstruct xref table...
Segmentation fault

Ta-da!! Yes, they crash, but only while refusing to open the damned thing! They both complain about the broken file, and don’t fall for it.

Perhaps it’s worth reminding the reader that Evince and Xpdf are free software, whereas Acroread is not. Acroread is merely free of charge, but not free as in freedom.

Comments

My backups with rsync

In previous posts I have introduced the use of rsync for making incremental backups, and then mentioned an occasion when I made use of such backups. However, I have realized that I haven’t actually explained my backup scheme! Let’s go for it:

Backup plan

I make a backup of my $home directory, say /home/isilanes. Each “backup” will be a set of 18 directories:

  • Current (last day)
  • 7 daily
  • 4 weekly
  • 6 monthly

Each such dir holds an apparently complete copy of how /home/isilanes looked at the moment of making the backup. However, by making use of hard links, only the new bits of info are actually written. All the redundant parts are written to disk once, and then linked from all the places referring to them.

Result: 18 copies of a $home of 3.8 GB in a total of 8.7 GB (14% of the apparent size of 63 GB, and 13% of 18x the info size, 68.4 GB).
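
To illustrate the hard-link trick, here is a minimal sketch of the idea on the command line (the paths are illustrative, not the ones my script actually uses):


# Clone yesterday's snapshot as hard links: takes (almost) no extra space:
cp -al backup.yesterday backup.today

# rsync then rewrites only the files that changed; unchanged files stay shared:
rsync -a --delete /home/isilanes/ backup.today/

Because rsync writes a changed file anew (and renames it over the old one), the copy in backup.yesterday keeps the old content, while backup.today gets the new one.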

Perl script for making the backup

Update (Jun 5, 2008): You can find a much refined version of the script here. It no longer requires a certain auxiliary script to be installed on the remote machine, and is “better” in general (or it should be!)

Below is the commented Perl script I use. Machine names, directories and IPs are invented. Bart is the name of my computer.


#!/usr/bin/perl -w

use strict;

my $rsync = "rsync -a -e ssh --delete --delete-excluded";
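# (-a: archive mode; -e ssh: go over ssh; --delete and --delete-excluded:
#  remove from the destination whatever is gone from, or excluded at, the source)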
my $home = "/home/isilanes";
my $logfile = "$home/.LOGs/backup_log";

#
# $where -> where to make the backup
#
# $often -> whether this is a daily, weekly or monthly backup
#
my $where = $ARGV[0] || 'none';
my $often = $ARGV[1] || 'none';

my ($source,$remote,$destdir,$excluded,$to,$from);

# Possible "$where"s:
my @wheres = qw /machine1 machine2/;

# Possible "$often"s:
my @oftens = qw /daily weekly monthly/;

# Check remote machine:
my $pass = 0;
foreach my $w (@wheres) { $pass = 1 if ($where eq $w) };
die "$where is an incorrect option for \"where\"!\n" unless $pass;

# Check how-often:
$pass = 0;
foreach my $o (@oftens) { $pass = 1 if ($often eq $o) };
die "$often is an incorrect option for \"often\"!\n" unless $pass;

# Set variables:
if ($where eq 'machine1')
{
    # Defaults:
    $source   = $home;
    $remote   = '0.0.0.1';
    $destdir  = '/disk2/backup/isilanes/bart.home.current';
    $excluded = "--exclude-from $home/.LOGs/excludes_backup.dat";
    $to       = 'machine1';
    $from     = 'bart';
}
elsif ($where eq 'machine2')
{
    # Defaults:
    $source   = $home;
    $remote   = '0.0.0.2';
    $destdir  = '/scratch/backup/isilanes/bart.home.current';
    $excluded = "--exclude-from $home/.LOGs/excludes_backup.dat";
    $to       = 'machine2';
    $from     = 'bart';
}

# Do the job:
unless ($where eq 'none')
{
    unless ($often eq 'none')
    {
        # Connect to the remote machine, and run ANOTHER script there, making a rotation
        # of the backup dirs:
        system "ssh $remote \"/home/isilanes/MyTools/rotate_backups.pl $often\"";

        # Actually make the backup:
        system "$rsync $excluded $source/ $remote:$destdir/";

        # "touch" the backup dir, to give it present timestamp:
        system "ssh $remote \"touch $destdir\"";

        # Enter a line in the log file defined above ($logfile):
        &writelog($from, $often, $to);
    };
};

sub writelog
{
    my $from  = ucfirst($_[0]);
    my $often = $_[1];
    my $to    = uc($_[2]);
    my $date  = `date`;

    open(LOG, ">>$logfile") or die "Cannot open $logfile: $!\n";
    printf LOG "home\@%-10s %-7s backup at %-10s on %1s", $from, $often, $to, $date;
    close(LOG);
};

As can be seen, this script relies on the remote machine having a rotate_backups.pl Perl script, located at /home/isilanes/MyTools/. That script makes the rotation of the 18 backups (moving current to yesterday, yesterday to 2-days-ago, 2-days-ago to 3-days-ago and so on). The code for that:


#!/usr/bin/perl -w

use strict;

# Whether daily, weekly or monthly:
my $type = $ARGV[0] || 'daily';

# Backup directory:
my $bdir = '/disk4/backup/isilanes/bart.home';

# Max number of copies:
my %nmax = ( 'daily'   => 7,
             'weekly'  => 4,
             'monthly' => 6 );

# Choose one of the above:
my $nmax = $nmax{$type} || 7;

# Rotate N->tmp, N-1->N, ..., 1->2, current->1:
system "mv $bdir.$type.$nmax $bdir.tmp" if (-d "$bdir.$type.$nmax");

for (my $i = $nmax-1; $i > 0; $i--)
{
    my $j = $i + 1;
    system "mv $bdir.$type.$i $bdir.$type.$j" if (-d "$bdir.$type.$i");
};

system "mv $bdir.current $bdir.$type.1" if (-d "$bdir.current");

# Restore last (tmp) backup, and then refresh it:
system "mv $bdir.tmp $bdir.current" if (-d "$bdir.tmp");
system "cp -alf --reply=yes $bdir.$type.1/. $bdir.current/" if (-d "$bdir.$type.1");

Comments

Does the popularization of Free Software increase bug exploitation?

[This entry is also available in English|English PDF|Spanish PDF]

FLOSS skeptics often claim that Free Software has, in general, fewer exploited bugs than proprietary software only because it is less popular. They argue that, since FLOSS has fewer users, crackers will be less interested in wasting their time trying to exploit whatever bugs it might have. The larger user base of proprietary software would also, according to them, give its bugs more publicity, and their exploits would spread faster. The corollary of this theory would be that the popularization of FLOSS applications (e.g. Firefox) would lead to an increase in the number of bugs discovered and exploited, eventually reaching a state similar to that of today’s proprietary software (e.g. “When Firefox is as popular as Internet Explorer, it will have as many bugs as Internet Explorer.”).

The aim of this article is to show mathematically the utter nonsense of such a theory. Specifically, I will argue that an increase in the size of the community of a FLOSS project improves it in at least 3 ways:

  1. Faster development
  2. Shorter average life of open bugs
  3. Shorter average life of exploited bugs

A more extensive explanation is available in PDF format. I am sorry that the mathematical formulas in HTML are of frankly poor quality, but I have neither the time nor the skill to improve them. If the reader is interested in pretty formulas, I recommend going to the PDF.

This blog entry, and the PDF I link to, are released under the following license:


Creative Commons License

This work is licensed under a Creative Commons license.

What this basically means is that you are free to copy and/or modify this work as you please, and to redistribute it as much as you want, with only two limitations: that you do not put it to commercial use, and that you cite its author (or at least link to this blog).

1 – Propositions and derivation

We have a FLOSS project P, new versions of which are released every T time. Each version is assumed to incorporate G new bugs, and successive versions are released when all the bugs of the previous one have been patched. At any given moment there will be B open bugs (out of the original G).

I assume that the patching speed is proportional to the size of the user community (U):

dB/dt=-KpU

1.1 – Faster development

From the above, the time dependence of the number of open bugs:

B = G – Kp U t

The inter-release time (T), from B = 0:

T = G/(Kp U)

So the inter-release time gets shorter as U grows.

1.2 – Shorter average life of open bugs

In a time period dt, (-dB/dt)dt bugs are patched, their age being t. If we call τ the average lifetime of bugs, we have the definition:

τ = (∫t(-dB/dt)dt)/(∫dB)

From that it follows:

τ = T/2

That is: the average life of bugs is always half the inter-release time, which (as mentioned) is inversely proportional to U.
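
For readers who skip the PDF, the intermediate step is short. Here it is as a sketch in LaTeX notation (using only the constant patching rate -dB/dt = Kp U assumed above):

\tau = \frac{\int_0^T t\,(-dB/dt)\,dt}{\int_0^T (-dB/dt)\,dt}
     = \frac{K_p U\,T^2/2}{K_p U\,T}
     = \frac{T}{2}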

1.3 – Fraction of bugs exploited before being patched

We define the following bug exploitation speed, where Bx is the total number of exploited bugs, Kx is the “exploiting efficiency” of the crackers (whose number is assumed proportional to U), and Bou is the number of open and unexploited bugs:

dBx/dt = Kx U Bou

We also define α = Box/B, where Box is the number of open and exploited bugs.

The time evolution of α can be derived:

α(t) = 1 – exp(-Kx U t)

We then define γ = Kp/(Kx G), and derive the fraction of the G bugs that end up exploited by time t:

Bx(t)/G = 1 – γ + (γ – 1 + t/T)exp(-Kx U t)

Solving for t=T, and taking into account that T = G/(Kp U), we obtain the fraction of the total bugs that are exploited at some point during the inter-release period (Fx):

Fx = 1 – γ + γ exp(-1/γ)

Note that Fx is independent of U; that is, even if the size of the user community grows, the fraction of all bugs that end up exploited does not grow (even though a growing community brings with it a collateral increase in the number of crackers).

1.4 – Shorter average life of exploited bugs

We want to know how long exploited bugs remain unpatched, and we call this time τx. After a slightly involved derivation, always starting from the previously defined equations (see the PDF version), we obtain a remarkably simple expression for τx:

τx = Fxτ

That is, the average exploitation time of exploited bugs is proportional to τ, which in turn is proportional to T, or inversely proportional to U.

2 – Conclusions

The “More popularity = more bugs” slogan is a non sequitur. According to the simple model sketched here, the broader the user community of a FLOSS program, the faster its bugs get patched, even admitting that more users means more crackers among them, intent on bringing the program down. Even for crackers who are more effective at their cracking than the bona fide users are at their patching (Kx >>> Kp), increasing the community size reduces how long bugs stay open, and also how long already-exploited bugs take to be patched. No matter how clumsy the users, and how rapacious the crackers, the free model (whereby users are given access to the code, and thus the power to contribute to the program) ensures that popularization is positive, both for the program and for the community itself.

Compare this with a closed model, in which a larger user base may increase the number of crackers attacking a program, but certainly adds little or nothing to the speed at which the code is patched and corrected. It is actually proprietary software that should fear popularization. It is easy to see that when a given piece of proprietary software grows past a certain “critical mass” of users, crackers can potentially derail its evolution (say, by making τx = T, Fx = 1), because (unlike with FLOSS) G, Kp, and therefore T, are constants (since they depend only on the software vendor).

Comments (1)

Popularity of Free Software generating bug exploitation?

[This entry is also available in Spanish|English PDF|Spanish PDF]

It is often said (by FLOSS skeptics) that Free Software has fewer exploited bugs than proprietary software only because it is less popular. They argue that, since fewer people use FLOSS, crackers are less inclined to waste their time exploiting the bugs it might have. The greater user base of proprietary software would also, in their words, make bugs more prominent, and their exploits spread faster. The corollary of this theory would be that popularization of FLOSS applications (e.g. Firefox) would lead to an increase in the number of bugs discovered and exploited, eventually reaching a proprietary-like state (e.g. “Firefox will have as many bugs as IE, when Firefox is as popular as IE”).

In this blog entry I will try to outline a mathematical model intended to demonstrate the utter nonsense of this theory. Specifically, I will argue that an increase in community size benefits a FLOSS project in at least 3 ways:

  1. Faster development
  2. Shorter average life of open bugs
  3. Shorter average life of exploited bugs

A more thorough explanation is available in PDF format. Recall that math display in HTML is generally poor (all the more so when I have neither the time nor the skills to tune it). If you like pretty formulas, the PDF is for you.

Both this blog entry and the linked PDF are released under the following license:

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 2.5 License.

What this basically means is that you are free to copy and/or modify this work, and redistribute it freely. The only limitations are that you cannot use it for profit, and that you have to cite its original author (or at least link to this blog).

1 – Propositions and derivations

We have a FLOSS project P, new versions being released every T time, and each version incorporating G new bugs. Each new version will be released when all bugs from the previous release have been patched. At any point in time, there will be B open bugs (remaining from G).

The patching speed is assumed proportional to the size of the community of users (U):

dB/dt=-KpU

1.1 – Faster development

From above, the time dependency of open bugs:

B = G – Kp U t

The inter-release period (T), from B = 0:

T = G/(Kp U)

So the inter-release time (T) is shortened for growing U.

1.2 – Shorter average life of open bugs

In a dt time period, (-dB/dt)dt bugs are patched, their age being t. If we call τ the average lifetime of bugs, we have the definition:

τ = (∫t(-dB/dt)dt)/(∫dB)

From that it follows:

τ = T/2

So, the average life of open bugs equals half the inter-release time (bugs are patched at a constant rate, so their ages are spread uniformly between 0 and T), which (as stated above) is inversely proportional to U.

1.3 – Fraction of bugs exploited before being patched

We define the following bug exploitation speed, where Bx is the total amount of exploited bugs, Kx is the “exploiting efficiency” of the crackers (whose amount will be proportional to U), and Bou is the amount of open and unexploited bugs:

dBx/dt = Kx U Bou

We also define α = Box/B, where Box is the amount of open and exploited bugs.

The evolution of α with time can be derived:

α(t) = 1 – exp(-Kx U t)

We then define γ = Kp/(Kx G), and derive the fraction of the G bugs that end up exploited by time t:

Bx(t)/G = 1 – γ + (γ – 1 + t/T)exp(-Kx U t)

Solving for t=T, and taking into account that T = G/(Kp U), we get the fraction of the total bugs that ever get exploited during the inter-release period (Fx):

Fx = 1 – γ + γ exp(-1/γ)

Note that Fx is independent of U; that is, increasing the size of the user community does not increase the fraction of total bugs that ever get exploited, even though the number of crackers increases along with the user base.
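
The step from Bx(t)/G to Fx is quick; here it is as a sketch in LaTeX notation (using only the definitions above):

F_x = \frac{B_x(T)}{G}
    = 1 - \gamma + (\gamma - 1 + 1)\,e^{-K_x U T}
    = 1 - \gamma + \gamma\,e^{-1/\gamma},
\quad \text{since} \quad
K_x U T = K_x U\,\frac{G}{K_p U} = \frac{K_x G}{K_p} = \frac{1}{\gamma}.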

1.4 – Shorter average life of exploited bugs

We want to find out how long exploited bugs stay unpatched, calling this time τx. After some slightly complex algebra, but always deriving from the previously defined equations (see the PDF version), we obtain a fairly simple expression for τx:

τx = Fxτ

That is, the average exploitation time of exploited bugs is proportional to τ, which in turn is proportional to T, or inversely proportional to U.

2 – Conclusions

The “Increasing popularity = Increasing bugginess” motto is a non sequitur. According to the simple model outlined here, the broader the user community of a FLOSS program, the faster bugs will be patched, even admitting that an increase in the user base brings an equal increase in the number of crackers committed to dooming it. Even for crackers that are more effective in their cracking work than the bona fide users in their patching work (Kx >>> Kp), increasing the community size does reduce how long bugs stay open, and also how long exploited bugs stay unpatched. No matter how clumsy the users, and how rapacious the crackers, the free model (whereby the users are granted access to the code, and thus empowered to contribute to the program) ensures that popularization is positive, for both the program and the community itself.

Compare that with a closed model, in which an increased user base may boost the number of crackers attacking a program, but certainly adds little, if anything, to the code patching and correcting speed. It is actually proprietary software that should fear popularization. It is easy to see that when a particular piece of proprietary software grows past a certain “critical mass” of users, the crackers could potentially disrupt its evolution (say, τx = T, Fx = 1), because G, Kp, and thus T, are kept constant (they depend only on the vendor of the code).

Comments (1)

Article in Science

I have just read a rather interesting article in Science about the economics of information security (R. Anderson and T. Moore, Science, 2006, 314, 610), and I would like to comment on some quotes from it:

There has been a vigorous debate between software vendors and security researchers over whether actively seeking and disclosing vulnerabilities is socially desirable. Rescorla has argued that for software with many latent vulnerabilities (e.g. Windows), removing one bug makes little difference to the likelihood of an attacker finding another one later[1].

Quite interesting! First, even a paper in Science not only regards Windows as a piece of software with a virtually endless reservoir of internal errors, but even uses it as the paradigmatic example of such a case. Second, it deems such software not worth patching, and its bugs not worth disclosing (security through obscurity), because there are so many of them.

[…] [Rescorla] argued against disclosure and frequent patching unless the same vulnerabilities are likely to be rediscovered later. Ozment found that for FreeBSD[2] […] vulnerabilities are indeed likely to be rediscovered[3]. Ozment and Schechter also found that the rate at which unique vulnerabilities were disclosed for the core and unchanged FreeBSD operating system has decreased over a 6-year period[4]. These findings suggest that vulnerability disclosure can improve system security over the long term.

I have read [1] and [3] very briefly, and Ozment seems very critical of Rescorla’s results. However, the comparison between Windows and FreeBSD (I think they mean OpenBSD), which is FLOSS, is quite nice. Windows is so buggy that patching it is hopeless. FreeBSD has seen a decline in the number of disclosed bugs (remember that, being FLOSS, all the bugs found by developers, maintainers and users are disclosed), related to the fact that each bug fixed actually means a reduced probability of finding new bugs (because the total is not endless).

The bottom line is that, for a good piece of software (one that is not so bug-ridden that crackers never “rediscover” an old bug, because there are sooo many new ones to discover), disclosing the bugs is better. It is so because it speeds up the patching rate, which in turn reduces the number of exploitable bugs, which in turn improves security. The connection between patching bugs and significantly reducing the number of exploitable bugs can be made when the number of bugs is small enough that new crackers are likely to rediscover old bugs, in which case it would have paid to patch those bugs. Notice also that this is an auto-catalytic (self-accelerating) process: the more bugs are disclosed and patched, the fewer bugs remain, so the more it pays to further disclose and patch the remaining ones.

Vulnerability disclosure also helps to give vendors an incentive to fix bugs in subsequent product releases[5]. Arora et al. have shown through quantitative analysis that public disclosure made vendors respond with fixes more quickly; the number of attacks increased, but the number of reported vulnerabilities declined over time[6].

Good point! Not only is disclosing the bugs good for consumers because it directly increases the quality of the software, but also because it helps enforce better behavior from the vendors. This is a key idea in the article, which dwells on the fact that security policies work best when the party enforcing them is the one suffering from their errors. However, nowadays there is little pressure on vendors to produce more secure software, because the buyer has little knowledge to judge this aspect of quality, and ends up favoring a product for its looks or its alleged features, regardless of stability or security. Disclosing the bugs helps the buyer assess the security of a program, and thus make a better-balanced choice when buying. This, in turn, leads to more secure software in general, because vendors will have a big incentive to make their products more secure (which they don’t really have now).

[1] E. Rescorla, paper presented in the Third Workshop on the Economics of Information Security, Minneapolis, 13 to 14 May 2004 (PDF)
[2] I suspect the authors are mistaking OpenBSD for FreeBSD
[3] A. Ozment, paper presented at the Fourth Workshop on the Economics of Information Security, Cambridge, MA, 2 to 3 June 2005 (PDF)
[4] A. Ozment, S.E. Schechter, paper presented at the 15th USENIX Security Symposium, Vancouver, 31 July to 4 August 2006 (HTML).
[5] A. Arora, R. Telang, H. Xu, paper presented at the Third Workshop on the Economics of Information Security, Minneapolis, 13 to 14 May 2004 (PDF)
[6] A. Arora, R. Krishnan, A. Nandkumar, R. Telang, Y. Yang, paper presented at the Third Workshop on the Economics of Information Security, Minneapolis, 13 to 14 May 2004 (PDF)

Comments

SSH connection without password

[Update (24/03/2007): see new post on subject]

Following Txema’s wonderful explanations, and translating from Basque a Dec 2, 2002 e-mail, here are the instructions to connect from computer A to computer B via SSH, without computer B ever asking for our password.

Notice that this is not a security breach, because we are allowing only a certain computer A (and user) to connect to B. Of course, if A is somehow compromised, then applying this recipe would give the attacker the ability to connect from A to B with no hassle. If you fear computer A being compromised, then don’t do it.

On the other hand, it can actually harden the security of computer B. If only a certain user of A is allowed to connect to B without a password, and remote password logins are then deactivated (so that, if you would need to type a password, you simply cannot connect), then a cracker breaking into A would first have to break into the account of that particular user to access B. No other user is allowed even to try to connect to B from A.

Whatever…. Let’s get going:

On computer A, generate a DSA key for that machine (and account):

ssh-keygen -t dsa

This creates, among others, the following file in ~/.ssh/ (leave the passphrase empty when prompted if you want truly password-less logins; otherwise you will be asked for the passphrase instead of the password):

id_dsa.pub

The contents of that file should be copy-pasted (beware of line breaks, because it is a single, very long line) into B, namely into a file called ~/.ssh/authorized_keys2 (create it if it doesn’t exist, append to it if it does).
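
If you prefer not to copy-paste by hand, something along these lines should do the same job (the user and host names are made up for the example):


cat ~/.ssh/id_dsa.pub | ssh isilanes@machineB "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys2"

This also sidesteps the line-breaking problem mentioned above.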

Now the user on A whose ~/.ssh/ holds id_dsa.pub will be able to connect, without a password, to the account on computer B of the user whose ~/.ssh/ holds the authorized_keys2 file.

Comments (2)

Browsezilla: when freeware comes at a price

Just a week after Stallman’s talk, I read at Kriptopolis (Spanish) about an (alleged) piece of malware hidden in some freeware by the name of Browsezilla. This is a perfect example of something free of cost not being half as good as something free/libre. This Browsezilla might be zero-cost to the user (freeware), but it is a piece of shit all the same, which stresses the fact that it is the FREEDOM of Free Software that makes it great, not the PRICE.

It seems that the computer security company Panda Software warned about the freeware internet browser Browsezilla “visiting” porn sites in the background, a fact unknown to the unsuspecting user. Its aim would be to increase the number of hits for those pages (and thus have them obtain higher revenues from advertising).

The lame idiots at browsezilla.org seem to be defending themselves, in such bad English that it is hard to take them seriously.

Now, both sides could keep flaming each other until the end of time. Maybe in this case the issue is clear: Panda is not expected to spread FUD for the sake of it, whereas Browsezilla’s credibility is thin at best. However, imagine a security company not being completely honest, a freeware producer looking apparently serious, and a bug/malware being veeery subtle to spot… endless debate, never fully establishing the complete truth.

On the other hand, were this Browsezilla free software, inspection of the code would settle the matter within minutes.

Stuff malware, stuff freeware, and stuff all non-free software.

Comments

OpenOffice.org vulnerable?

A couple of weeks ago I submitted a news item to Menéame (a collaborative news site in Spanish), and today I said to myself: WTF? I am running out of good ideas for blog entries (if I ever had any), so I might as well copy-paste that item here :^)

Basically, it follows the line of my first post in this blog (and a later one), dismantling stupid accusations of “vulnerabilities” of FLOSS programs (then, Firefox, now, OpenOffice.org).

Kaspersky Labs announced some time ago that OpenOffice.org was vulnerable to a malicious script attack, something they tagged as a “virus”, but which it definitely is not. The answer from OOo can be read at LinuxWeeklyNews.net.

This is not to say that FLOSS is devoid of bugs and vulnerabilities. I only want to point out lame FUD campaigns, no doubt sponsored by commercial software companies (you know who). The only aim of these misinformation campaigns is to make the average user think that FLOSS is not so good after all, and that, if Linux doesn’t even have this invulnerability they speak of, then, what good is it?

Now, how lame is that? Instead of pulling themselves together and fixing their pathetic crap of an OS, they spend their money throwing shit at FLOSS, in the hope that both will be regarded as rubbish, instead of neither.

Comments

Bug wars: FLOSS vs Proprietary

I read in Kriptópolis, via a Basque blog, that the companies Coverity and Symantec, along with Stanford University, have carried out a study on the number of bugs in both free and proprietary software. The study was funded by the US Department of Homeland Security.

The study focused on comparing the number of bugs per line of code of similar free/non-free programs, one to one. Many previous (non-independent, Microsoft-funded) studies simply counted the total number of reported bugs in, say, Windows XP and a given Linux distro. That method is clearly biased against the particular Linux distro studied, because any Linux distro ships many different programs that perform the same task (being able to choose is important for us FLOSS hippies, you know), and adding up the bugs of all those programs seems unfair.

The results of the study give FLOSS a crushing victory (surprised?). Firstly, of the 32 program pairs, the free partners showed an average of 0.43 bugs per 1000 lines of code. The non-free ones turned out to have a shameful average of 20 to 30 bugs per 1000 lines (some 45 times more).

Secondly, not only was the number of bugs lower in FLOSS programs, but the speed at which they were fixed also turned out to be much higher. As an example, Amanda (a FLOSS backup program) was found to have 1.22 bugs per 1000 lines of code (the highest of all the FLOSS programs in the study, yet still much lower than any non-free program in it). Apparently, the Amanda developers read the study, felt ashamed, and one week later had fixed most of the aforementioned bugs, going from the most bug-ridden FLOSS program of the study to the least bug-ridden one! It seems that pointing out where the errors are is veeery healthy for any FLOSS project.

Comments
