¿Popularización de Software Libre acentúa explotación de bugs?

[This entry is also available in English|English PDF|PDF en castellano]

Los escépticos del movimiento FLOSS suelen decir que el Software Libre tiene, en general, menos bugs explotados que el software privativo solamente porque es menos popular que este. Argumentan que, como el FLOSS tiene menos usuarios, los crackers estarán menos interesados en malgastar su tiempo intentando explotar los bugs que pudiera tener. La mayor base de usuarios del software privativo daría además, según ellos, una mayor publicidad a sus bugs, y sus modos de explotación se difundirían más rápido. El corolario de esta teoría sería que la popularización de aplicaciones FLOSS (p.e. Firefox) llevaría a un incremento en el número de bugs descubiertos y explotados, llegando eventualmente a un estado similar al del software privativo actual (p.e. “Cuando Firefox sea tan popular como Internet Explorer, tendrá tantos bugs como Internet Explorer.”).

El objetivo de este artículo es demostrar matemáticamente la total insensatez de tal teoría. Específicamente, argumentaré que un aumento en el tamaño de la comunidad de un proyecto FLOSS lo mejora de al menos 3 formas:

  1. Desarrollo acelerado
  2. Menor vida media de bugs abiertos
  3. Menor vida media de bugs explotados

Una explicación más extensa está disponible en formato PDF. Lamento que las fórmulas matemáticas en HTML sean de calidad francamente pobre, pero no tengo ni tiempo ni habilidad para mejorarlas. Si el lector está interesado en fórmulas bonitas, recomiendo acudir al PDF.

Este blog, y el PDF que enlazo, están liberados bajo la siguiente licencia:


Creative Commons License

Esta obra está bajo una licencia Creative Commons Reconocimiento-NoComercial-CompartirIgual 2.5.

Lo que esto significa, básicamente, es que eres libre para copiar y/o modificar este trabajo como gustes, y redistribuirlo cuanto quieras, con dos únicas limitaciones: que no le des uso comercial, y que cites a su autor (o al menos enlaces a este blog).

1 – Proposiciones y derivación

Tenemos un proyecto FLOSS P, del cual se liberan nuevas versiones cada tiempo T. Se asume que cada versión incorpora G nuevos bugs, y que cada nueva versión se libera cuando todos los bugs de la anterior han sido parcheados. En un momento determinado habrá B bugs abiertos (de los G originales).

Asumo que la velocidad de parcheo es proporcional al tamaño de la comunidad de usuarios (U):

dB/dt = -Kp U

1.1 – Desarrollo acelerado

De arriba, la dependencia temporal del número de bugs abiertos:

B = G – Kp U t

El tiempo entre versiones (T), de B = 0:

T = G/(Kp U)

De manera que el tiempo entre versiones se acorta para U creciente.

1.2 – Menor vida media de bugs abiertos

En un período de tiempo dt, se parchean (-dB/dt)dt bugs, siendo su edad t. Si llamamos τ a la vida media de los bugs, tenemos la definición:

τ = (∫t(-dB/dt)dt)/(∫(-dB/dt)dt)

De ahí se deduce:

τ = T/2

Esto es: la vida media de los bugs es siempre la mitad del tiempo entre versiones, el cual (como se ha mencionado) tiene una proporcionalidad inversa con U.

1.3 – Fracción de bugs explotados antes de ser parcheados

Definimos la siguiente velocidad de explotación de bugs, donde Bx es el total de bugs explotados, Kx es la “eficiencia de explotación” de los crackers (cuya cantidad se asume proporcional a U), y Bou es la cantidad de bugs abiertos y sin explotar:

dBx/dt = Kx U Bou

También definimos α = Box/B, donde Box es la cantidad de bugs abiertos y explotados.

Se puede derivar la evolución temporal de α:

α(t) = 1 – exp(-Kx U t)

Tras ello definimos γ = Kp/(Kx G), y derivamos la fracción de los bugs G que terminan siendo explotados para un tiempo t:

Bx(t)/G = 1 – γ + (γ – 1 + t/T)exp(-Kx U t)

Resolviendo para t = T, y tomando en cuenta que T = G/(Kp U), obtenemos la fracción de los bugs totales que son explotados en algún momento durante el período entre versiones (Fx):

Fx = 1 – γ + γ exp(-1/γ)

Nótese que Fx es independiente de U, esto es, aunque aumente el tamaño de la comunidad de usuarios, no aumenta la fracción del total de bugs que acaban siendo explotados (aunque un crecimiento de la comunidad traiga asociado un aumento colateral del número de crackers).

1.4 – Menor vida media de bugs explotados

Deseamos saber cuánto tiempo permanecen sin parchear los bugs explotados, y llamamos a este tiempo τx. Tras un desarrollo ligeramente complejo, pero derivando siempre de ecuaciones previamente definidas (ver la versión PDF), obtenemos una expresión realmente simple para τx:

τx = Fxτ

Esto es, el tiempo medio de explotación de los bugs explotados es proporcional a τ, el cual a su vez es proporcional a T, o inversamente proporcional a U.

2 – Conclusiones

El eslogan “Más popularidad = más bugs” es un non sequitur. De acuerdo con el simple modelo que bosquejo aquí, cuanto más amplia sea la comunidad de usuarios de un programa FLOSS, más rápido serán parcheados los bugs, incluso admitiendo que a más usuarios, más crackers entre ellos, dispuestos a acabar con él. Incluso para crackers que sean más efectivos en su desempeño que los usuarios de buena fe en su trabajo de parcheo (Kx >>> Kp), aumentar el tamaño de la comunidad reduce el tiempo que los bugs permanecen abiertos y también cuánto tiempo tardan en parchearse los bugs ya explotados. No importa cuán torpes sean los usuarios, y cuán rapaces los crackers, el modelo libre (por medio del cual se da acceso al código a los usuarios, dándoles así poder para contribuir al programa) asegura que la popularización es positiva, tanto para el programa como para la propia comunidad.

Comparemos esto con un modelo cerrado, en el que una base de usuarios mayor puede incrementar el número de crackers atacando a un programa, pero ciertamente añade poco o nada a la velocidad con que el código es parcheado y corregido. Es de hecho el software privativo el que debe temer su popularización. Es fácil ver que cuando una pieza determinada de software privativo alcanza una cierta “masa crítica” de usuarios, los crackers pueden potencialmente desmoronar su evolución (digamos, haciendo τx = T, Fx = 1), porque (a diferencia del FLOSS) G, P, y por lo tanto T, son constantes (ya que dependen únicamente de los vendedores del software).


Popularity of Free Software generating bug exploitation?

[Esta entrada está también disponible en castellano|English PDF|PDF en castellano]

It is often said (by FLOSS skeptics) that Free Software has fewer exploited bugs than proprietary software only because it is less popular. They argue that, since fewer people use FLOSS, crackers are less inclined to waste their time exploiting whatever bugs it may have. The greater user base of proprietary software would also, in their words, make its bugs more prominent, and their exploits spread faster. The corollary of this theory would be that the popularization of FLOSS applications (e.g. Firefox) would lead to an increase in the number of bugs discovered and exploited, eventually reaching a proprietary-like state (e.g. “Firefox will have as many bugs as IE, when Firefox is as popular as IE”).

In this blog entry I will try to outline a mathematical model intended to demonstrate the utter nonsense of this theory. Specifically, I will argue that an increase in community size benefits a FLOSS project in at least 3 ways:

  1. Faster development
  2. Shorter average life of open bugs
  3. Shorter average life of exploited bugs

A more thorough explanation is available in PDF format. Note that the math display in HTML is generally poor (all the more so since I have neither the time nor the skills to tune it). If you like pretty formulas, the PDF is for you.

Both this blog entry and the linked PDF are released under the following license:

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 2.5 License.

What this basically means is that you are free to copy and/or modify this work, and redistribute it freely. The only limitations are that you cannot use it for profit, and that you have to cite its original author (or at least link to this blog).

1 – Propositions and derivations

We have a FLOSS project P, with new versions released every period T, each version incorporating G new bugs. Each new version will be released when all bugs from the previous release have been patched. At any point in time, there will be B open bugs (remaining from G).

The patching speed is assumed proportional to the size of the community of users (U):

dB/dt = -Kp U

1.1 – Faster development

From above, the time dependency of open bugs:

B = G – Kp U t

The inter-release period (T), from B = 0:

T = G/(Kp U)

So the inter-release time (T) is shortened for growing U.

1.2 – Shorter average life of open bugs

In a dt time period, (-dB/dt)dt bugs are patched, their age being t. If we call τ the average lifetime of bugs, we have the definition:

τ = (∫t(-dB/dt)dt)/(∫(-dB/dt)dt)

From that it follows:

τ = T/2

So, the average life of open bugs equals half the inter-release time, which (as stated above) has an inverse proportionality with U.
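The full derivation is in the PDF, but the step is short: over one release cycle all G bugs get patched at the constant rate Kp U, so

\tau \;=\; \frac{\int_0^T t\,K_p U\,dt}{\int_0^T K_p U\,dt} \;=\; \frac{K_p U\,T^2/2}{K_p U\,T} \;=\; \frac{T}{2}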

1.3 – Fraction of bugs exploited before being patched

We define the following bug exploitation speed, where Bx is the total amount of exploited bugs, Kx is the “exploiting efficiency” of the crackers (whose amount will be proportional to U), and Bou is the amount of open and unexploited bugs:

dBx/dt = Kx U Bou

We also define α = Box/B, where Box is the amount of open and exploited bugs.

The time evolution of α can then be derived:

α(t) = 1 – exp(-Kx U t)
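The PDF carries the actual derivation; one way to recover this result (under the extra assumption, mine, that patches remove exploited and unexploited bugs in proportion to their current fractions) is to note that the patching terms cancel when differentiating α = Box/B:

\frac{d\alpha}{dt} \;=\; \frac{1}{B}\frac{dB_{ox}}{dt} - \frac{B_{ox}}{B^2}\frac{dB}{dt}
\;=\; \frac{K_x U\,B_{ou} - K_p U\,\alpha}{B} + \frac{\alpha\,K_p U}{B}
\;=\; K_x U\,(1-\alpha)

which, with α(0) = 0, integrates to the exponential above.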

We then define γ = Kp/(Kx G), and derive the fraction of G bugs that end up exploited by time t:

Bx(t)/G = 1 – γ + (γ – 1 + t/T)exp(-Kx U t)

Solving for t = T, and taking into account that T = G/(Kp U), we get the fraction of all bugs that are ever exploited during the inter-release period (Fx):

Fx = 1 – γ + γ exp(-1/γ)
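To see where this comes from, substitute t = T in the previous expression and note that the exponent collapses to 1/γ:

K_x U\,T \;=\; K_x U\,\frac{G}{K_p U} \;=\; \frac{K_x G}{K_p} \;=\; \frac{1}{\gamma},
\qquad
F_x \;=\; \frac{B_x(T)}{G} \;=\; 1-\gamma+(\gamma-1+1)\,e^{-1/\gamma} \;=\; 1-\gamma+\gamma\,e^{-1/\gamma}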

Note that Fx is independent of U: increasing the size of the user community does not increase the fraction of total bugs that ever get exploited, even though the number of crackers grows along with the user base.

1.4 – Shorter average life of exploited bugs

We want to find out how long exploited bugs stay unpatched, calling this time τx. After some slightly complex algebra, but always deriving from the previously defined equations (see the PDF version), we obtain a fairly simple expression for τx:

τx = Fxτ

That is, the average exploitation time of exploited bugs is proportional to τ, which is to say proportional to T, or inversely proportional to U.

2 – Conclusions

The “Increasing popularity = Increasing bugginess” motto is a non sequitur. According to the simple model outlined here, the broader the user community of a FLOSS program, the faster bugs will be patched, even admitting that an increase in user base brings an equal increase in the number of crackers committed to dooming it. Even for crackers that are more effective in their cracking work than the bona fide users in their patching work (Kx >>> Kp), increasing the community size does reduce how long bugs stay unpatched, and also how long the exploited bugs stay unpatched. No matter how clumsy the users, and how rapacious the crackers, the free model (whereby the users are granted access to the code, and thus empowered to contribute to the program) ensures that popularization is positive, for both the program and the community itself.

Compare that with a closed model, in which an increased user base may boost the number of crackers attacking a program, but certainly adds little, if anything, to the code patching and correcting speed. It is actually proprietary software that should fear popularization. It is easy to see that when a particular piece of proprietary software grows over a certain “critical mass” of users, the crackers could potentially disrupt its evolution (say, τx = T, Fx = 1), because G, P and thus T are kept constant (they depend only on the software vendor).


Article in Science

I have just read a rather interesting article in Science about the economics of information security (R. Anderson and T. Moore, Science, 2006, 314, 610), and I would like to comment on some quotes from it:

There has been a vigorous debate between software vendors and security researchers over whether actively seeking and disclosing vulnerabilities is socially desirable. Rescorla has argued that for software with many latent vulnerabilities (e.g. Windows), removing one bug makes little difference to the likelihood of an attacker finding another one later[1].

Quite interesting! First, even a paper in Science not only regards Windows as a piece of software with a virtually endless reservoir of internal errors, but even uses it as a paradigmatic example of such a case. Second, it deems such software not worth patching, and its bugs not worth disclosing (security through obscurity), because there are so many of them.

[…] [Rescorla] argued against disclosure and frequent patching unless the same vulnerabilities are likely to be rediscovered later. Ozment found that for FreeBSD[2] […] vulnerabilities are indeed likely to be rediscovered[3]. Ozment and Schecher also found that the rate at which unique vulnerabilities were disclosed for the core and unchanged FreeBSD operating system has decreased over a 6-year period[4]. These findings suggest that vulnerability disclosure can improve system security over the long term.

I have read [1] and [3] very briefly, and Ozment seems very critical of Rescorla’s results. However, the comparison between Windows and FreeBSD (I think they mean OpenBSD), which is FLOSS, is quite nice. Windows is so buggy that patching it is hopeless. FreeBSD has seen a decline in the number of disclosed bugs (remember that, being FLOSS, all the bugs found by developers, maintainers and users are disclosed), related to the fact that each bug fixed actually means a reduced probability of finding new bugs (because the total is not endless).

The bottom line is that, for a good piece of software (one that is not so bug-ridden that crackers never “rediscover” an old bug, because there are sooo many new ones to discover), disclosing the bugs is better. It is so because it speeds up the patching rate, which in turn reduces the amount of exploitable bugs, which in turn improves security. The connection between patching bugs and significantly reducing the amount of exploitable bugs can be made when the amount of bugs is small enough that new crackers are likely to rediscover old bugs, in which case it would have paid to patch those bugs. Notice also that this is an auto-catalytic (self-accelerating) process: the more bugs are disclosed and patched, the fewer bugs remain, so the more it pays to keep disclosing and patching the remaining ones, because the fewer the bugs, the relatively more it pays to patch each of them.

Vulnerability disclosure also helps to give vendors an incentive to fix bugs in subsequent product releases[5]. Arora et al. have shown through quantitative analysis that public disclosure made vendors respond with fixes more quickly; the number of attacks increased, but the number of reported vulnerabilities declined over time[6]

Good point! Disclosing the bugs is good for consumers not only because it directly increases software quality, but also because it helps enforce better behavior from the vendors. This is a key idea in the article, which delves into the fact that security policies work best when the party enforcing them is the one suffering from their errors. However, nowadays there is little pressure on vendors to produce more secure software, because the buyer has little knowledge to judge this aspect of quality, and ends up favoring a product for its looks or its alleged features, regardless of stability or security. Disclosing the bugs helps the buyer assess the security of a program, and thus make a better-balanced choice when buying. This, in turn, leads to more secure software in general, because vendors will have a big incentive to make their products more secure (which they don’t really have now).

[1] E. Rescorla, paper presented in the Third Workshop on the Economics of Information Security, Minneapolis, 13 to 14 May 2004 (PDF)
[2] I suspect the authors are mistaking OpenBSD for FreeBSD
[3] A. Ozment, paper presented at the Fourth Workshop on the Economics of Information Security, Cambridge, MA, 2 to 3 June 2005 (PDF)
[4] A. Ozment, S.E. Schechter, paper presented at the 15th USENIX Security Symposium, Vancouver, 31 July to 4 August 2006 (HTML).
[5] A. Arora, R. Telang, H. Xu, paper presented at the Third Workshop on the Economics of Information Security, Minneapolis, 13 to 14 May 2004 (PDF)
[6] A. Arora, R. Krishnan, A. Nandkumar, R. Telang, Y. Yang, paper presented at the Third Workshop on the Economics of Information Security, Minneapolis, 13 to 14 May 2004 (PDF)


Quit Windows, my friend

Bruce Lee, on Windows (based on his Be water, my friend speech):

Empty your hard disk.

Don’t be pointless, hopeless… like Windows.

You put Windows in a bottle… it’s still a bottle,
you put it in a computer… it becomes a teapot!

Windows can crawl, or it can crash

Quit Windows, my friend.

¿Te gusta reiniciar?


Comparison of Wiki software

I am setting up a Wiki for a small group of users of a supercomputer at the UPV/EHU.

You might find this comparison useful.

My impressions so far:

MoinMoin

See a more comprehensive HowTo in this more recent post

To install it, create a directory for it (e.g., in your /home), then copy some files to it (after installing the python-moinmoin and moinmoin-common packages, in Debian):

mkdir my_moinmoin_dir
cp /usr/share/moin/config/wikiconfig.py my_moinmoin_dir
cp /usr/share/moin/server/moin.py my_moinmoin_dir
cp /etc/moin/mywiki.py my_moinmoin_dir

Then edit the files (mainly wikiconfig.py), and run my_moinmoin_dir/moin.py to start up the server.

If you want to make a single Wiki (not a “farm”), then remove (or better, just rename) the file /etc/moin/farmconfig.py (so that it is not read).

This one was easy to install, but has a “small” drawback: the CamelCase internal links. How freaking silly is that? First off, it makes it impossible to write CamelCase words that are not links. Second, how can one make a link that displays a text X, but points to page Y? If only CamelCase generates links, ThisText will link to the page called ThisText, which means there is no way to put a custom string as a link pointing to a custom page. This is frustrating, to say the least. Third, how does one make a one-word link?

These three concerns are taken care of, fortunately. A custom string (not CamelCase) can be used as link like that:

["Custom string here"] (links to page called Custom string here”)

The text of the link can differ from the title of the referred page like this:

[:The Refered:The Text] (displays “The Text”, while pointing to page “The Refered”)

I found out about this workaround after I started to write this page, so sue me for complaining.

It is also problematic (for a dumbass like me) to make the Wiki accessible to machines other than localhost. That is, over the intranet or the Internet. I’m working on it.

DidiWiki

Pros: it is very simple. It is a breeze to install and run. Under Debian, just aptitude install didiwiki, then run didiwiki -p 8080, open a web browser, and put http://localhost:8080 at the location bar. The default port is 8000 (if you run just didiwiki), but for me it fails. The -p can be used to attach DidiWiki to any port.

Cons: it is very simple. Editing is very easy, but… there is no preview! Is there a way to hack a preview into it? I do not know, and the project having made no progress since 2004 smells like there will never be such an upgrade.

More importantly: there is no “history” of the edits to a page. You can see a list of “recently edited pages”, but there is no such list for each single page, no diff between two arbitrary versions, and no reversion capabilities.

On the bright side, it is immediate to access the Wiki from any other computer… I just don’t know if this is a feature or a security hole :^)

DokuWiki

To install under Debian, do aptitude install dokuwiki, answer the questions it asks, then run dpkg-reconfigure dokuwiki to see if it asks for some more options (e.g. whether the Wiki will be accessible from localhost, a subnet, or the whole Internet, or what directory to put it under). Then restart the web server (if you are running Apache2: /etc/init.d/apache2 restart), and you are done! Now simply point your browser to http://localhost/dokuwiki/, and you can start using it (replace dokuwiki/ with whatever dir you chose when configuring, if you changed the default).

At first sight it looks good. However, it keeps giving me errors when saving a page. The page gets full of the following:

Warning: preg_match() [function.preg-match]: Compilation failed: repeated subpattern is too long at offset 17093 in /usr/share/dokuwiki/inc/common.php on line 391

It actually saves the page… but the error is annoying at best, dangerous at worst. I suppose I could try to read the source code and fix it (it is PHP, and I have spotted the line with the error… hehehe, line 391, that is), but I fear I do not have the programming skills.

I’ll give it a try…

Okay, I might have corrected the first bug of a FLOSS program in my life: the offending line 391 reads:

if( preg_match('#('.join('|',$re).')#si',$TEXT, $match=array()) ) {

I read the PHP manual for preg_match, and found out that this function chokes on long strings. The manual says that you can use strpos instead, if you are only using the function to find out whether some substring exists inside some string (strpos is faster and more efficient than preg_match). So I commented out the line above, and wrote instead:

/*if( preg_match('#('.join('|',$re).')#si',$TEXT, $match=array()) ) {*/
if( strpos('#('.join('|',$re).')#si',$TEXT, $match=array()) ) {

Now it works (or pretends it does) like a charm!! UPDATE: the above is rubbish :^( Find a better solution at the DokuWiki bugtracker.

Apart from that, DokuWiki seems to have a decent page edit history, and you can compare different versions with the current one. Pity it doesn’t seem to be possible to compare two old revisions with each other, as you can with MediaWiki (the engine behind Wikipedia). DokuWiki also looks a bit ugly, but I guess one can correct that with CSS, skins or whatever.

It also looks more difficult to configure than MoinMoin; for example, I do not see an easy way to create users. Probably I should just RTFM, as there must be an easy explanation for all that… but I’m too lazy, and MoinMoin is more intuitive on this account (and looks prettier).

ErfurtWiki

It goes under the name ewiki, as a Debian package. However, when I tried to install it, it requested PHP4 (I have PHP5 installed), so I refused to downgrade my PHP and ewiki would not install.

Kwiki

It runs as a Perl CGI application. After installing it, add the following to your /etc/apache2/apache2.conf:

ScriptAlias /kwiki/ /var/www/kwiki/
<Directory /var/www/kwiki/>
    Options +ExecCGI
</Directory>

The ScriptAlias makes the browser go to /var/www/kwiki/ when pointed to /kwiki. The Directory block lets Apache execute scripts in that dir.

One then needs to install Kwiki’s modules from CPAN.
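For reference, pulling the core Kwiki distribution from the CPAN shell looks something like this (the extra plugin modules you need depend on the Kwiki version, so take it as a sketch):

perl -MCPAN -e 'install Kwiki'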

All in all, not too easy, a bit annoying, and a bit buggy. Didn’t work well for me.

MediaWiki

Time to give the engine behind Wikipedia itself a try. It is probably overkill for my needs, but what the heck…

It is a breeze to install under Debian: aptitude install mediawiki will automatically install mediawiki1.7, php5-cli and php5-mysql, plus PHP5 and MySQL, if you don’t have them installed. Installing memcached is also suggested.

After the aptitude installation, steer your browser to http://localhost/mediawiki/config/index.php (as the README.Debian.gz file says), and fill in the required data. Once everything is correctly set, copy the settings file to its final location (also stated in the README):

cp /var/lib/mediawiki1.7/config/LocalSettings.php /etc/mediawiki1.7/

URL beautification HowTo

Following instructions at the WikiMedia site.

The default MediaWiki URL is:

http://mywiki.site.tld/wiki/index.php?title=Article_name

This can be rewritten as:

http://mywiki.site.tld/wiki/Article_name

To achieve that, add the following to /etc/apache2/httpd.conf:

AcceptPathInfo On

#These must come last, and in this order!
Alias /wiki /usr/share/mediawiki1.7/index.php
Alias /index.php /usr/share/mediawiki1.7/index.php

Then the following to /var/lib/mediawiki1.7/LocalSettings.php:

$wgScriptPath = "/mediawiki";
$wgArticlePath = "/wiki/$1";

Then enable the Apache rewrite module, and reload Apache:

ln -s /etc/apache2/mods-available/rewrite.load /etc/apache2/mods-enabled/rewrite.load
/etc/init.d/apache2 reload

Now just point your browser to http://localhost/wiki/, and you are done.


Gaussian shared memory

If you are running Gaussian in shared memory mode (in parallel in a multi-CPU computer, for example), you might get the following error (last line of output file):

shmget failed

It means that it was not possible to get the amount of shared memory required by the input. This can mean that the computer does not have so much physical RAM, but usually it is just a somewhat silly system setting.

Check the file /proc/sys/kernel/shmmax. It should contain a single number: the maximum amount of shared memory that can be allocated (in bytes). If you need more (in my computer it was like 32MB… puaff), just echo xxx > /proc/sys/kernel/shmmax, where xxx is the desired amount of bytes (e.g. 500000000).
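For example, to check the current limit, raise it to roughly 2 GB for the running session, and make the change survive a reboot through /etc/sysctl.conf (run as root; the actual value is up to you):

cat /proc/sys/kernel/shmmax                            # current limit, in bytes
echo 2147483648 > /proc/sys/kernel/shmmax              # raise it to ~2 GB right now
echo "kernel.shmmax = 2147483648" >> /etc/sysctl.conf  # apply it again on every boot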


SSH connection without password

[Update (24/03/2007): see new post on subject]

Following Txema’s wonderful explanations, and translating from Basque a Dec 2, 2002 e-mail, here are the instructions to connect from computer A to B via SSH, without computer B ever asking for our password.

Notice that this is not a security breach in itself, because we are only allowing a certain computer A (and user) to connect to B. Of course, if A is somehow compromised, then applying this recipe would give the attacker the ability to connect from A to B with no hassle. If you fear computer A being compromised, then don’t do it.

On the other hand, it can actually harden the security of computer B. If only a certain user of A is allowed to connect to B without a password, and password logins are then deactivated on B (so that any connection that needs a password is simply refused), a cracker breaking into A would first have to break into that particular user’s account to access B; no other user would be allowed to connect to B from A at all.

Whatever…. Let’s get going:

In computer A, generate a DSA key for that machine (and account):

ssh-keygen -t dsa

This creates the id_dsa/id_dsa.pub key pair under ~/.ssh/; the file we need is:

id_dsa.pub

The contents of that file should be copy-pasted (beware of line-breaking, because it is a single, very long line) into B, namely into a file called ~/.ssh/authorized_keys2 (create it if it doesn’t exist, append to it if it does).
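If both machines are reachable over the network, the copy-paste can be done in one go with something along these lines (user and B are placeholders for the remote account and host, and it assumes ~/.ssh already exists on B):

cat ~/.ssh/id_dsa.pub | ssh user@B 'cat >> ~/.ssh/authorized_keys2'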

Now the user on A whose ~/.ssh/ holds id_dsa.pub will be able to connect, without a password, to the account on B whose ~/.ssh/ holds the authorized_keys2 file.


Xgl with Xfce

I previously posted about running Xgl under GNOME. Well, it seems that the Xgl/Beryl duo can be run smoothly under any other desktop environment, e.g. Xfce.

To attain that (after you have GNOME/Xgl running), just create two files:

/usr/local/bin/startxgl_xfce

#!/bin/bash
# Start the Xgl server on display :1 and give it a moment to come up
Xgl -fullscreen :1 -ac -accel glx:pbuffer -accel xv:pbuffer &
sleep 2
# Start Xfce on the Xgl display
DISPLAY=:1 exec xfce4-session

and

/usr/share/xsessions/xfce4-xgl.desktop

[Desktop Entry]
Encoding=UTF-8
Name=Xfce-Xgl
Exec=/usr/local/bin/startxgl_xfce
Icon=
Type=Application

The latter inserts an “Xfce-Xgl” entry in the GDM session list, which calls the former; that one actually starts Xgl and opens Xfce. Nice, huh?


Private networks for dummies

Maybe you have two computers at home, with no router, and no wireless, and want both to share an Internet connection. Or maybe you want to set up a home LAN with non-public addresses. If so, read on.

I have set up my laptop to use my desktop computer as a NAT to connect to the Internet.

Requirements and setup

Your NAT computer needs two Ethernet cards: one to connect it to the Internet, and another one to connect it to the laptop. You also need a crossover cable, to connect the laptop to the desktop computer.

The physical setup is easy: plug the Internet connection into the desktop’s first NIC (eth0), and join the desktop’s second NIC (eth1) to the laptop’s NIC with the crossover cable.

Desktop computer setup

You need to set up eth1 (the second NIC) of your computer to connect to the LAN, and leave eth0 as it was (connected to the Internet). Edit /etc/network/interfaces (in Debian; other distros, edit the corresponding file), and add the following (supposing the network you want to create is 192.168.10.0):

iface eth1 inet static
  address 192.168.10.1
  netmask 255.255.255.0
  broadcast 192.168.10.255
  gateway 192.168.10.1
  post-up route del -net 0.0.0.0 gw 192.168.10.1 eth1
  post-up route add -net 192.168.10.0 netmask 255.255.255.0 gw 192.168.10.1 eth1

The last two lines (see man interfaces) remove the default route that bringing up eth1 sets, because we want eth0 to remain the default; only traffic for the 192.168.10.0 network should be routed through eth1, which is what the last line sets up.

Laptop setup

We have to modify /etc/network/interfaces, too. Here it’s eth0 that we set up:

iface eth0 inet static
  address 192.168.10.2
  netmask 255.255.255.0
  broadcast 192.168.10.255
  gateway 192.168.10.1
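With the addresses in place, the desktop still has to forward and masquerade the laptop’s traffic for the Internet connection to actually be shared; the entry does not show that step, but a minimal sketch (run as root on the desktop, assuming iptables is available and eth0 is the Internet-facing NIC) would be:

echo 1 > /proc/sys/net/ipv4/ip_forward                 # let the kernel forward packets between NICs
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # NAT everything leaving through eth0

The laptop also needs a valid nameserver entry in its /etc/resolv.conf to resolve hostnames.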


What I’ve done to my laptop

OK, this entry is just a reminder for myself.

Install ATI drivers

I followed the instructions at this wiki. For the record, I used method 1, and it worked.

Update: The link above seems dead. Read a more recent post about Compiz Fusion under Debian Lenny for info on ATI driver installation.

Install a SMP kernel

My CPU is an Intel Core 2 Duo T7200… I want an SMP kernel, otherwise I am wasting one of the two cores!

Problem is, the friggin Ubuntu has no 2.6 kernels labeled “SMP”. Why, oh why!? OK, I found out: all 2.6.*-686 kernels are actually SMP, even if they don’t say anything. If you have 1 CPU, fine. If you have more, they’ll be detected at boot time. No more “-smp” in the kernel names.

Wireless with 686 kernel

The default 2.6.15-686 supports the wireless just fine, but installing a 686 kernel (required for SMP, see above) seems to break the wireless. However, the solution is easy. As stated in this Ubuntu forum thread, one just needs to install the “restricted” kernel modules corresponding to her kernel (in my case 2.6.15-27-686):

% aptitude install linux-restricted-modules-2.6.15-27-686

After that, reboot. I guess that the new module is loadable (try modprobe ipw3945), without having to reboot… dunno. Also, if you want to have the restricted modules package upgrade automatically, install linux-restricted-modules-686.

WPA encryption for WiFi

Update: Read a more recent article: WPA under Ubuntu/Debian.

Install a 64-bit kernel

OK, installing the mainstream 32-bit Ubuntu was a success. Now I have given Ubuntu amd64 a try (amd64 is for both EM64T (Intel) and AMD64 (AMD)).

Everything went smoothly, except installing the ATI drivers (as explained above): the screen froze to black when loading GDM. To solve this, I read the troubleshooting section in the link above, and found out that I could add either:

Load "extmod"

or:

SubSection "extmod"
  Option "omit XVideo"
  Option "omit XVideo-MotionCompensation"
  Option "omit XFree86-VidModeExtension"
EndSubSection

to the Section "Modules" of /etc/X11/xorg.conf (beware, it’s one OR the other, not both). For me the Load "extmod" did not work, but the SubSection "extmod" did.

Now, for the Xgl thing in 64-bits…

Xgl for 64-bits

I followed the instructions in a previous post, but I found out that some packages were missing, so I manually downloaded them from the Xgl.compiz site. Namely, I downloaded them from the “Edgy” section. However, it didn’t work for me :^(

Update: Compiz Fusion under Debian Lenny in a more recent post.

