Archive for Free software and related beasts

Popularity of Free Software generating bug exploitation?

[This entry is also available in Spanish|English PDF|Spanish PDF]

It is often said (by FLOSS skeptics) that Free Software has fewer exploited bugs than proprietary software only because it is less popular. They argue that, since fewer people use FLOSS, crackers are less inclined to waste their time exploiting whatever bugs it may have. The larger user base of proprietary software would also, in their words, make its bugs more prominent and their exploits spread faster. The corollary of this theory would be that the popularization of FLOSS applications (e.g. Firefox) should lead to an increase in the number of bugs discovered and exploited, eventually reaching a proprietary-like state (e.g. “Firefox will have as many bugs as IE when Firefox is as popular as IE”).

In this blog entry I will try to outline a mathematical model intended to demonstrate the utter nonsense of this theory. Specifically, I will argue that an increase in community size benefits a FLOSS project in at least three ways:

  1. Faster development
  2. Shorter average life of open bugs
  3. Shorter average life of exploited bugs

A more thorough explanation is available in PDF format. Bear in mind that math display in HTML is generally poor (all the more so when I have neither the time nor the skills to tune it). If you like pretty formulas, the PDF is for you.

Both this blog entry and the linked PDF are released under the following license:

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 2.5 License.

What this basically means is that you are free to copy and/or modify this work, and redistribute it freely. The only limitations are that you cannot do so for profit, and that you have to cite its original author (or at least link to this blog).

1 – Propositions and derivations

We have a FLOSS project P, with new versions released every T time units, and each version incorporating G new bugs. Each new version is released when all bugs from the previous release have been patched. At any point in time there are B open bugs (those remaining from the initial G).

The patching speed is assumed proportional to the size of the community of users (U):

dB/dt = -Kp U

1.1 – Faster development

From the above, the time dependency of the number of open bugs is:

B = G – Kp U t

The inter-release period (T) follows from setting B = 0:

T = G/(Kp U)

So the inter-release time (T) shrinks as U grows.

1.2 – Shorter average life of open bugs

In a dt time period, (-dB/dt)dt bugs are patched, their age being t. If we call τ the average lifetime of bugs, we have the definition:

τ = (∫ t (-dB/dt) dt)/(∫ (-dB/dt) dt)

From that it follows:

τ = T/2

So the average life of an open bug equals half the inter-release time, which (as shown above) is inversely proportional to U.
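For completeness, here is the integral behind that result, spelled out in LaTeX (using that -dB/dt = Kp U is constant over a release period, and that Kp U T = G bugs get patched in total):

\tau \;=\; \frac{\int_0^T t \, K_p U \, \mathrm{d}t}{\int_0^T K_p U \, \mathrm{d}t}
     \;=\; \frac{K_p U \, T^2/2}{K_p U \, T}
     \;=\; \frac{T}{2}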

1.3 – Fraction of bugs exploited before being patched

We define the following bug exploitation speed, where Bx is the total number of exploited bugs, Kx is the “exploiting efficiency” of the crackers (whose numbers we take to be proportional to U), and Bou is the number of open and unexploited bugs:

dBx/dt = Kx U Bou

We also define α = Box/B, where Box is the number of open and exploited bugs.

From these definitions, the evolution of α with time can be derived:

α(t) = 1 – exp(-Kx U t)

We then define γ = Kp/(Kx G), and derive the fraction of the G bugs that have been exploited by time t:

Bx(t)/G = 1 – γ + (γ – 1 + t/T)exp(-Kx U t)

Evaluating at t = T, and taking into account that T = G/(Kp U), we get the fraction of bugs that ever get exploited during an inter-release period (Fx):

Fx = 1 – γ + γ exp(-1/γ)
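The exponential collapses to a function of γ alone because Kx U T = Kx U · G/(Kp U) = Kx G/Kp = 1/γ. In LaTeX:

\frac{B_x(T)}{G}
  \;=\; 1 - \gamma + \left(\gamma - 1 + \tfrac{T}{T}\right) e^{-K_x U T}
  \;=\; 1 - \gamma + \gamma \, e^{-1/\gamma}
  \;=\; F_x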

Note that Fx is independent of U: increasing the size of the user community does not increase the fraction of bugs that ever get exploited, even though the number of crackers grows along with the user base.

1.4 – Shorter average life of exploited bugs

We want to find out how long exploited bugs stay unpatched; call this time τx. After some slightly complex algebra, always deriving from the previously defined equations (see the PDF version), we obtain a fairly simple expression for τx:

τx = Fx τ

That is, the average exploitation time of exploited bugs is proportional to τ, which is to say proportional to T, or inversely proportional to U.

2 – Conclusions

The “increasing popularity = increasing bugginess” motto is a non sequitur. According to the simple model outlined here, the broader the user community of a FLOSS program, the faster bugs are patched, even granting that an increase in user base brings an equal increase in the number of crackers committed to dooming it. Even if the crackers are far more effective at their cracking work than the bona fide users are at their patching work (Kx >>> Kp), increasing the community size still reduces how long bugs stay unpatched, and also how long exploited bugs stay unpatched. No matter how clumsy the users and how rapacious the crackers, the free model (whereby users are granted access to the code, and thus empowered to contribute to the program) ensures that popularization is positive, for both the program and the community itself.

Compare that with a closed model, in which an increased user base may boost the number of crackers attacking a program, but certainly adds little, if anything, to the patching and correcting speed. It is actually proprietary software that should fear popularization. It is easy to see that when a particular piece of proprietary software grows beyond a certain “critical mass” of users, the crackers could potentially disrupt its evolution (say, τx = T, Fx = 1), because G, Kp, and thus T, are kept constant (they depend only on the vendor of the code).

Comments (1)

Article in Science

I have just read a rather interesting article in Science about the economics of information security (R. Anderson and T. Moore, Science, 2006, 314, 610), and I would like to comment on some quotes from it:

There has been a vigorous debate between software vendors and security researchers over whether actively seeking and disclosing vulnerabilities is socially desirable. Rescorla has argued that for software with many latent vulnerabilities (e.g. Windows), removing one bug makes little difference to the likelihood of an attacker finding another one later[1].

Quite interesting! First, a paper in Science not only regards Windows as a piece of software with a virtually endless reservoir of internal errors, but even uses it as a paradigmatic example of such a case. Second, it deems such software not worth patching, and its bugs not worth disclosing (security through obscurity), because they are so many.

[…] [Rescorla] argued against disclosure and frequent patching unless the same vulnerabilities are likely to be rediscovered later. Ozment found that for FreeBSD[2] […] vulnerabilities are indeed likely to be rediscovered[3]. Ozment and Schechter also found that the rate at which unique vulnerabilities were disclosed for the core and unchanged FreeBSD operating system has decreased over a 6-year period[4]. These findings suggest that vulnerability disclosure can improve system security over the long term.

I have read [1] and [3] very briefly, and Ozment seems very critical of Rescorla’s results. However, the comparison between Windows and FreeBSD (I think they mean OpenBSD), which is FLOSS, is quite nice. Windows is so buggy that patching it is hopeless. FreeBSD has seen a decline in the number of disclosed bugs (remember that, being FLOSS, all the bugs found by developers, maintainers, and users are disclosed), related to the fact that each bug fixed actually reduces the probability of finding new bugs (because the total is not endless).

The bottom line is that, for a good piece of software (one that is not so bug-ridden that crackers never “rediscover” an old bug because there are sooo many new ones to discover), disclosing the bugs is better. Disclosure speeds up the patching rate, which in turn reduces the number of exploitable bugs, which in turn improves security. The connection between patching bugs and significantly reducing the number of exploitable bugs holds when the number of bugs is small enough that new crackers are likely to rediscover old bugs; in that regime, patching those bugs pays off. Notice also that this is an autocatalytic (self-accelerating) process: the more bugs are disclosed and patched, the fewer remain, so the relatively more it pays to disclose and patch the rest.

Vulnerability disclosure also helps to give vendors an incentive to fix bugs in subsequent product releases[5]. Arora et al. have shown through quantitative analysis that public disclosure made vendors respond with fixes more quickly; the number of attacks increased, but the number of reported vulnerabilities declined over time[6]

Good point! Disclosing bugs is good for consumers not only because it directly increases software quality, but also because it helps enforce better behavior from the vendors. This is a key idea in the article, which dwells on the fact that security policies work best when the party enforcing them is the one that suffers from their failures. Nowadays there is little pressure on vendors to produce more secure software, because the buyer has little knowledge to judge this aspect of quality, and ends up favoring a product for its looks or its alleged features, regardless of stability or security. Disclosing bugs helps the buyer assess the security of a program, and thus make a better-balanced choice when buying. This, in turn, leads to more secure software in general, because vendors will have a real incentive to make their products more secure (which they do not have now).

[1] E. Rescorla, paper presented at the Third Workshop on the Economics of Information Security, Minneapolis, 13 to 14 May 2004 (PDF)
[2] I suspect the authors are mistaking OpenBSD for FreeBSD
[3] A. Ozment, paper presented at the Fourth Workshop on the Economics of Information Security, Cambridge, MA, 2 to 3 June 2005 (PDF)
[4] A. Ozment, S.E. Schechter, paper presented at the 15th USENIX Security Symposium, Vancouver, 31 July to 4 August 2006 (HTML).
[5] A. Arora, R. Telang, H. Xu, paper presented at the Third Workshop on the Economics of Information Security, Minneapolis, 13 to 14 May 2004 (PDF)
[6] A. Arora, R. Krishnan, A. Nandkumar, R. Telang, Y. Yang, paper presented at the Third Workshop on the Economics of Information Security, Minneapolis, 13 to 14 May 2004 (PDF)

Comments

Comparison of Wiki software

I am working out a Wiki page for a small group of users of a supercomputer at the UPV/EHU.

You might find this comparison useful.

My impressions so far:

MoinMoin

See a more comprehensive HowTo in this more recent post.

To install it, create a directory for it (e.g., in your /home), then copy some files to it (after installing the python-moinmoin and moinmoin-common packages, in Debian):

mkdir my_moinmoin_dir
cp /usr/share/moin/config/wikiconfig.py my_moinmoin_dir
cp /usr/share/moin/server/moin.py my_moinmoin_dir
cp /etc/moin/mywiki.py my_moinmoin_dir

Then edit the files (mainly wikiconfig.py), and run my_moinmoin_dir/moin.py to start up the server.

If you want to make a single Wiki (not a “farm”), then remove (or better, just rename) the file /etc/moin/farmconfig.py, so that it is not read.
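For instance, renaming rather than deleting keeps the file around in case you ever want a farm after all:

mv /etc/moin/farmconfig.py /etc/moin/farmconfig.py.disabled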

This one was easy to install, but has a “small” drawback: CamelCase internal links. How freaking silly is that? First off, it makes it impossible to write CamelCase words that are not links. Second, how can one make a link that displays text X but points to page Y? If only CamelCase generates links, ThisText will link to the page called ThisText, so there is no way to have a custom string as a link pointing to a custom page. This is frustrating, to say the least. Third, how does one make a one-word link?

These three concerns are taken care of, fortunately. A custom string (not CamelCase) can be used as a link like this:

["Custom string here"] (links to page called Custom string here”)

The text of the link can differ from the title of the referred page like this:

[:The Referred:The Text] (displays “The Text”, while pointing to the page “The Referred”)

I found out about this workaround after I started to write this page, so sue me for complaining.

It is also problematic (for a dumbass like me) to make the Wiki accessible to machines other than localhost, that is, over the intranet or the Internet. I’m working on it.
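In the meantime, a quick way to check which address the server is actually bound to (assuming the wiki listens on port 8080; adjust to your port):

netstat -tlnp | grep :8080
# 127.0.0.1:8080 in the local-address column means only localhost can reach it;
# 0.0.0.0:8080 would mean other machines can connect too.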

DidiWiki

Pros: it is very simple. It is a breeze to install and run. Under Debian, just aptitude install didiwiki, then run didiwiki -p 8080, open a web browser, and put http://localhost:8080 in the location bar. The default port is 8000 (if you run just didiwiki), but for me it fails; the -p flag can be used to attach DidiWiki to any port.

Cons: it is very simple. Editing is very easy, but… there is no preview! Is there a way to hack a preview into it? I do not know, and the project having made no progress since 2004 suggests there will never be such an upgrade.

More important: there is no “history” of the edits to a page. You can see a list of “recently edited pages”, but there is no such list for each single page, no diff between two arbitrary versions, and no reversion capability.

On the bright side, it is immediate to access the Wiki from any other computer… I just don’t know if this is a feature or a security hole :^)

DokuWiki

To install under Debian, do aptitude install dokuwiki, answer the questions it asks, then run dpkg-reconfigure dokuwiki to see if it asks for some more options (e.g. whether the Wiki will be accessible from localhost, a subnet, or the whole Internet, or what directory to put it under). Then restart the web server (if you are running Apache2: /etc/init.d/apache2 restart), and you are done! Now simply point your browser to http://localhost/dokuwiki/, and you can start using it (replace dokuwiki/ with whatever directory you chose when configuring, if you changed the default).
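In block form, the whole dance (as root):

aptitude install dokuwiki       # answer the debconf questions
dpkg-reconfigure dokuwiki       # revisit the options (access scope, directory)
/etc/init.d/apache2 restart     # restart the web server (Apache2 in my case)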

At first sight it looks good. However, it keeps giving me errors when saving a page. The page gets full of the following:

Warning: preg_match() [function.preg-match]: Compilation failed: repeated subpattern is too long at offset 17093 in /usr/share/dokuwiki/inc/common.php on line 391

It actually saves the page… but the error is annoying at best, dangerous at worst. I suppose I could try to read the source code and fix it (it is PHP, and I have spotted the line with the error… hehehe, line 391, that is), but I fear I do not have the programming skills.

I’ll give it a try…

Okay, I might have fixed the first bug of a FLOSS program in my life: line 391, the one giving the error, reads:

if( preg_match('#('.join('|',$re).')#si',$TEXT, $match=array()) ) {

I read the PHP manual for preg_match, and found out that this function chokes on long strings. The manual says you can use strpos instead, if you are only using the function to find out whether some substring exists inside some string (strpos is faster and more efficient than preg_match). So I commented out the line above, and wrote instead:

/*if( preg_match('#('.join('|',$re).')#si',$TEXT, $match=array()) ) {*/
/* (Broken: strpos() expects (haystack, needle, offset), not these arguments.) */
if( strpos('#('.join('|',$re).')#si',$TEXT, $match=array()) ) {

Now it works (or pretends it does) like a charm!! UPDATE: the above is rubbish :^( Find a better solution at the DokuWiki bug tracker.

Apart from that, DokuWiki seems to have a decent page edit history, and you can compare different versions with the current one. Pity that it doesn’t seem possible to compare two old revisions with each other, as one can with MediaWiki (the engine behind Wikipedia). DokuWiki also looks a bit ugly, but I guess one can correct that with CSS, skins or whatever.

It also looks more difficult to configure than MoinMoin; for example, I do not see an easy way to create users. Probably I should just RTFM, as there must be an easy explanation for all that… but I’m too lazy, and MoinMoin is more intuitive on this account (and looks prettier).

ErfurtWiki

It goes under the name ewiki as a Debian package. However, when I tried to install it, it requested PHP4 (I have PHP5 installed); I refused to downgrade my PHP, so ewiki would not install.

Kwiki

It needs to run as a Perl CGI script. After installing, add the following to your /etc/apache2/apache2.conf:

ScriptAlias /kwiki/ /var/www/kwiki/
<Directory /var/www/kwiki/>
  Options +ExecCGI
</Directory>

The ScriptAlias makes the browser go to /var/www/kwiki/ when pointed to /kwiki. The Directory block lets Apache execute scripts in that dir.

One then needs to install its Perl modules from CPAN.

All in all, not too easy, a bit annoying, and a bit buggy. Didn’t work well for me.

MediaWiki

Time to give the engine behind Wikipedia itself a try. It is probably overkill for my needs, but what the heck…

It is a breeze to install under Debian: aptitude install mediawiki will automatically pull in mediawiki1.7, php5-cli and php5-mysql, plus PHP5 and MySQL, if you don’t have them installed. Installing memcached is also suggested.

After the aptitude installation, steer your browser to http://localhost/mediawiki/config/index.php (as the README.Debian.gz file says), and fill in the required data. Once everything is correctly set, copy the settings file to its final location (also described in the README):

cp /var/lib/mediawiki1.7/config/LocalSettings.php /etc/mediawiki1.7/

URL beautification HowTo

Following instructions at the WikiMedia site.

The default MediaWiki URL is:

http://mywiki.site.tld/mediawiki/index.php?title=Article_name

This can be rewritten as:

http://mywiki.site.tld/wiki/Article_name

To achieve that, add the following to /etc/apache2/httpd.conf:

AcceptPathInfo On

#These must come last, and in this order!
Alias /wiki /usr/share/mediawiki1.7/index.php
Alias /index.php /usr/share/mediawiki1.7/index.php

Then add the following to /var/lib/mediawiki1.7/LocalSettings.php:

$wgScriptPath = "/mediawiki";
$wgArticlePath = "/wiki/$1";

Then enable the Apache rewrite module, and reload Apache:

ln -s /etc/apache2/mods-available/rewrite.load /etc/apache2/mods-enabled/rewrite.load
/etc/init.d/apache2 reload
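Incidentally, Debian ships a small helper that creates exactly that symlink for you, in case you prefer it:

a2enmod rewrite
/etc/init.d/apache2 reload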

Now just point your browser to http://localhost/wiki/, and you are done.

Comments (1)

Default Ghostscript paper size

The thrice god-forsaken Ghostscript suite (I use the Debian package gs-afpl) ships worldwide with US letter as the default paper size. So when you use it (e.g. to convert PS to PDF), and the source file does not specify a paper size, the output file will come out letter-sized, instead of the saner A4.

You can specify A4 size at runtime, with the -sPAPERSIZE=a4 flag:

ps2pdf -sPAPERSIZE=a4 input.ps

However, if you want A4 as the default always, you can edit the gs_init.ps file (locate gs_init.ps), and uncomment the following line (remove the leading ‘%’):

% /DEFAULTPAPERSIZE (a4) def

Beware that in Debian the variable has a different name, so you will have to use:

/DEFPAPERSIZE (a4) def

You only need to edit the gs_init.ps file (as root), make the change, and save it; subsequent gs runs (e.g. via ps2pdf) will default to A4 page size.
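If you prefer a one-liner to hand-editing, something like this should do (GNU sed assumed; the path is a placeholder, so use whatever locate gs_init.ps reports, and mind the Debian variable-name caveat above):

sed -i.bak 's|^% */DEFAULTPAPERSIZE (a4) def|/DEFAULTPAPERSIZE (a4) def|' /path/to/gs_init.ps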

Comments (2)

SSH connection without password

[Update (24/03/2007): see new post on subject]

Following Txema’s wonderful explanations, and translating from Basque an e-mail from Dec 2, 2002, here are the instructions to connect from computer A to computer B via SSH, without B ever asking for our password.

Notice that it is not a security breach, because we are allowing a certain computer A (and user) to connect to B. Of course, if A is somehow compromised, then applying this recipe would give the attacker the ability to connect from A to B with no hassle. If you fear computer A being compromised, then don’t do it.

On the other hand, it can actually harden the security of computer B. If only a certain user of A is allowed to connect to B without a password, and remote password logins are then deactivated (so that any connection requiring a typed password is simply refused), a cracker breaking into A would first have to break into that particular user’s account in order to access B; no other user would even be allowed to try to connect to B from A.

Whatever…. Let’s get going:

On computer A, generate a DSA key for that machine (and account):

ssh-keygen -t dsa

Among other things, this creates the following file in ~/.ssh/ (the public half of the key pair):

id_dsa.pub

The contents of that file should be copy-pasted into B (beware of line breaking, because it is a single, very long line), namely into a file called ~/.ssh/authorized_keys2 (create it if it doesn’t exist, append to it if it does).
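As a shortcut, most OpenSSH installations ship an ssh-copy-id helper that does the copying for you (note that it appends to ~/.ssh/authorized_keys rather than authorized_keys2; servers of this vintage generally read both files):

ssh-copy-id -i ~/.ssh/id_dsa.pub user@B    # run on A; asks for the password one last time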

Now the user on A whose ~/.ssh/ holds id_dsa.pub will be able to connect, without a password, to the account on B whose ~/.ssh/ holds the authorized_keys2 file.

Comments (2)

Xgl with Xfce

I previously posted about running Xgl under GNOME. Well, it seems that the Xgl/Beryl duo can be run smoothly under any other desktop environment, e.g. Xfce.

To attain that (after you have GNOME/Xgl running), just create two files:

/usr/local/bin/startxgl_xfce

Xgl -fullscreen :1 -ac -accel glx:pbuffer -accel xv:pbuffer &
sleep 2
export DISPLAY=:1
# Start Xfce
exec xfce4-session

and

/usr/share/xsessions/xfce4-xgl.desktop

[Desktop Entry]
Encoding=UTF-8
Name=Xfce-Xgl
Exec=/usr/local/bin/startxgl_xfce
Icon=
Type=Application

The latter inserts an “Xfce-Xgl” entry in the GDM session list, which calls the former; that one actually starts Xgl and launches Xfce. Nice, huh?
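As in the GNOME/Xgl recipe, the script must be executable (the .desktop entry only needs to be readable):

chmod +x /usr/local/bin/startxgl_xfce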

Comments

Private networks for dummies

Maybe you have two computers at home, with no router and no wireless, and want them to share an Internet connection. Or maybe you want to set up a home LAN with non-public addresses. If so, read on.

I have set up my laptop to use my desktop computer as a NAT gateway to connect to the Internet.

Requirements and setup

Your NAT computer needs two Ethernet cards: one to connect it to the Internet, and another one to connect it to the laptop. You also need a crossover cable, to connect the laptop to the desktop computer.

The physical setup is easy: plug eth0 of the desktop into the Internet connection as usual, and join eth1 (the second NIC) to the laptop’s NIC with the crossover cable.

Desktop computer setup

You need to set eth1 (the second NIC) of your desktop to connect to the LAN, and leave eth0 as it was (connecting to the Internet). Edit /etc/network/interfaces (in Debian; in other distros, edit the corresponding file), and add the following (supposing the network you want to create is 192.168.10.0):

iface eth1 inet static
  address 192.168.10.1
  netmask 255.255.255.0
  broadcast 192.168.10.255
  gateway 192.168.10.1
  post-up route del -net 0.0.0.0 gw 192.168.10.1 eth1
  post-up route add -net 192.168.10.0 netmask 255.255.255.0 gw 192.168.10.1 eth1

The last two lines (see man interfaces) remove the default route that bringing up eth1 would set, because we want eth0 to remain the default, with only traffic for the 192.168.10.0 network routed through eth1; the latter rule is what the last line sets.

Laptop setup

We have to modify /etc/network/interfaces, too. Here it’s eth0 that we set up:

iface eth0 inet static
  address 192.168.10.2
  netmask 255.255.255.0
  broadcast 192.168.10.255
  gateway 192.168.10.1
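Note that the interfaces above only wire the two machines together: for the laptop to actually reach the Internet, the desktop also has to forward packets and masquerade the LAN traffic. A minimal sketch, run as root on the desktop (assuming eth0 is the Internet-facing NIC, as above):

# Enable IP forwarding (add net.ipv4.ip_forward=1 to /etc/sysctl.conf to make it permanent)
echo 1 > /proc/sys/net/ipv4/ip_forward
# Masquerade traffic from the 192.168.10.0/24 LAN out through eth0
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o eth0 -j MASQUERADE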

Comments

What I’ve done to my laptop

OK, this entry is just a reminder for myself.

Install ATI drivers

I followed the instructions at this wiki. For the record, I used method 1, and it worked.

Update: The link above seems dead. Read a more recent post about Compiz Fusion under Debian Lenny for info on ATI driver installation.

Install a SMP kernel

My CPU is an Intel Core 2 Duo T7200… I want an SMP kernel; otherwise I am wasting one of the two cores!

Problem is, the friggin Ubuntu has no 2.6 kernels labeled “SMP”. Why, oh why!? OK, I found out: all 2.6.*-686 kernels are actually SMP, even if the name doesn’t say so. If you have one CPU, fine. If you have more, they’ll be detected at boot time. No more “-smp” in the kernel names.

Wireless with 686 kernel

The default 2.6.15 kernel supports the wireless just fine, but installing a -686 kernel (required for SMP, see above) seems to break it. However, the solution is easy. As stated in this Ubuntu forum thread, one just needs to install the “restricted” kernel modules corresponding to her kernel (in my case 2.6.15-27-686):

% aptitude install linux-restricted-modules-2.6.15-27-686

After that, reboot. I guess the new module is loadable without rebooting (try modprobe ipw3945)… dunno. Also, if you want the restricted-modules package to be upgraded automatically, install linux-restricted-modules-686.

WPA encryption for WiFi

Update: Read a more recent article: WPA under Ubuntu/Debian.

Install a 64-bit kernel

OK, installing the mainstream 32-bit Ubuntu was a success. Now I have given Ubuntu amd64 a try (amd64 covers both EM64T (Intel) and AMD64 (AMD)).

Everything went smoothly, except installing the ATI drivers (as explained above): the screen froze to black when GDM loaded. To solve this, I read the troubleshooting section in the link above, and found out that I could add either:

Load "extmod"

or:

SubSection "extmod"
  Option "omit XVideo"
  Option "omit XVideo-MotionCompensation"
  Option "omit XFree86-VidModeExtension"
EndSubSection

to the Section "Modules" of /etc/X11/xorg.conf (beware, it’s one OR the other, not both). For me the Load "extmod" did not work, but the SubSection "extmod" did.

Now, for the Xgl thing in 64-bits…

Xgl for 64-bits

I followed the instructions in a previous post, but found that some packages were missing, so I manually downloaded them from the Xgl.compiz site (namely, from the “Edgy” section). However, it didn’t work for me :^(

Update: Compiz Fusion under Debian Lenny in a more recent post.

Comments (2)

Xgl with GNOME, under Ubuntu Dapper Drake

OMG!! Xgl is so pretty!!

First things first: I have to say how I’ve made it run. I said in a previous post (which I actually wrote a few minutes ago) that I had given Ubuntu a try, to test how good that Xgl thing is. And man, is it good!

Xgl is a graphics server: something that interprets data and displays it on the screen (like XFree86 and X.org). It basically allows the 2D effects of a desktop environment to be rendered by the powerful engine of the graphics card, which until now only accelerated 3D applications such as games. However, one needs a window manager that takes advantage of these capabilities to create the effects. The first such WM was Compiz. Sadly, I was not able to install it, but I did install Beryl, which is a fork of Compiz.

I mostly followed the instructions in Fred.cpp’s blog[es].

It basically boils down to:

As root, or with the infamous sudo:

aptitude remove compiz compiz-gnome cgwd cgwd-themes xserver-xgl csm

Add to /etc/apt/sources.list (the last line only if you have a 64-bit CPU):

deb http://www.beerorkid.com/compiz/ dapper main
deb http://xgl.compiz.info/ dapper main
deb-src http://xgl.compiz.info/ dapper main
deb http://xgl.compiz.info/ dapper main main-amd64

Get the GPG keys for the repositories:

wget http://www.beerorkid.com/compiz/quinn.key.asc -O - | sudo apt-key add -

Then:

aptitude update && aptitude upgrade

Install Xgl, Beryl and Emerald (the theme manager for Beryl):

aptitude install xserver-xgl libgl1-mesa xserver-xorg libglitz-glx1 beryl beryl-core beryl-manager beryl-plugins beryl-plugins-data beryl-settings emerald emerald-themes

Now that everything is installed, we need to create two files:

/usr/local/bin/startxgl, our startx replacement. Its contents:

Xgl -fullscreen :1 -ac -accel glx:pbuffer -accel xv:pbuffer &
sleep 2
export DISPLAY=:1
# Start GNOME
exec gnome-session

/usr/share/xsessions/gnome-xgl.desktop, a new entry for the GDM session menu. Its contents:

[Desktop Entry]
Encoding=UTF-8
Name=gnome-xgl
Exec=/usr/local/bin/startxgl
Icon=
Type=Application

Then chmod +x them both.

We then need to enter GNOME as a regular user (if we are not already in it), go to System/Preferences/Sessions/Autostart programs, and add beryl-manager there. At the next GDM login we will have a gnome-xgl session option. Choose it, and there you are.

Second, the screenshots (click to enlarge):

[Screenshot: A window being minimized, fading away.]
[Screenshot: Two windows being shown MacOS Exposé-style.]
[Screenshot: Two semitransparent windows. You can see my blog through a terminal :^)]
[Screenshot: A video being played on the edge of a cube (the faces of which represent different desktops).]
[Screenshot: A video being played semitransparent. We can see an icon below it!]
[Screenshot: The video in the corner, plus it is raining all around!]

Comments (2)

Installing Ubuntu Dapper Drake

After failing miserably to run Xgl in Debian Etch, I decided to install Ubuntu Dapper Drake (which allegedly supports it) on a spare partition of my hard disk. Below is the timeline of the installation:

13:27

Turn on the computer, insert the Ubuntu CD. Choose “run the CD as a LiveCD”. Watch it load.

13:31

The LiveCD has booted, and I already have a fully functioning GNOME desktop. I spend 2 minutes playing around.

13:33

Select a link for “install Ubuntu on the hard disk”, and answer a couple of questions (username, password, language, time zone, keyboard layout), and off it goes…

13:36

It starts copying files to the hard disk.

13:44

Everything done. I am asked whether I want to keep using the LiveCD for now, or restart directly into the Ubuntu installed on disk. I choose the latter.

13:46

I am presented with GDM, which asks me to log in.

13:47

I am already inside GNOME, running my freshly installed Ubuntu OS!!

Summary

4 minutes to a fully working LiveCD, 20 minutes for the complete installation.

Comments (1)
