Malware: Vista Capable

I read, via Kriptopolis (es), that “Tim Eades, senior vice-president of sales at security company Sana Security said that 38 per cent of malware is already Vista-compatible.”

Apparently, according to an article at ITPro.co.uk, more malware than anti-malware has already been ported to Windows Vista.

Go, Vista, go!


Malicious BitTorrent clients

Another post stressing the fact that freeware is not free software.

A while ago I warned about Browsezilla (a freeware web browser infected with malware), and now I warn about Bitroll and Torrent101. They are freeware, but, since they are proprietary and closed source, no one can read the code behind them. Is this important? Does anyone actually read the code of free software programs? Well, it seems it is important, and it seems that free software programs do get read, because I have yet to see these problems in free BitTorrent clients.


PDF exploits for all readers and platforms?

I have read some posts in Kriptopolis (in Spanish) about new PDF exploits. The articles say that web browser PDF plugins are vulnerable, dedicated PDF readers are also vulnerable, and new exploits may be created. The Kriptopolis site keeps on talking about new vulnerabilities in PDF documents, and how they affect all platforms. Do they?

If you go to the SecurityFocus site, where they cover the news, you can download an example PDF that exploits this vulnerability. If you open it with any (vulnerable) PDF reader, the program will freeze, and the CPU usage will go through the roof.

Well, bold as I am, I did the test. I opened it with Acroread 7.0 for GNU/Linux and… it froze, and… the CPU usage hit the roof. I could not Ctrl-C the beast, and a kill would not kill it. Fortunately, a kill -9 did the job :^(

Now, I tried Evince:


Heracles[~/Downloads]: evince MOAB-06-01-2007.pdf
Error (3659): Illegal character ')'
Error (0): PDF file is damaged - attempting to reconstruct xref table...
Segmentation fault

and Xpdf:


Heracles[~/Downloads]: xpdf MOAB-06-01-2007.pdf
Error (3659): Illegal character ')'
Error (0): PDF file is damaged - attempting to reconstruct xref table...
Segmentation fault

Ta-da!! Yes, they crash, but while refusing to open the damned thing! They both complain about the broken file and don’t fall for it: no frozen process, no runaway CPU.

Perhaps it’s worth reminding the reader that Evince and Xpdf are free software, whereas Acroread is not. Acroread is merely free of charge, but not free as in freedom.


My backups with rsync

In previous posts I introduced the use of rsync for making incremental backups, and then mentioned an occasion on which I actually made use of such backups. However, I have realized that I haven’t actually explained my backup scheme! Let’s go for it:

Backup plan

I make a backup of my $home directory, say /home/isilanes. Each “backup” will be a set of 18 directories:

  • Current (last day)
  • 7 daily
  • 4 weekly
  • 6 monthly

Each such dir holds what looks like a complete copy of /home/isilanes as it was at the moment of making the backup. However, thanks to hard links, only the new bits of info are actually written: the redundant parts are written once on disk, and then linked from all the places referring to them.

Result: 18 copies of a $home of 3.8 GB in a total of 8.7 GB (14% of the apparent size of 63 GB, and 13% of 18x the info size, 68.4 GB).
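If anyone wants to check such numbers on their own backups: GNU du counts hard-linked files only once by default, whereas its -l (--count-links) option counts them every time they appear. So something along these lines (the path is invented, like everything else here) gives the real vs. the apparent disk usage:


# Real disk usage (hard-linked content counted once):
du -shc /disk2/backup/isilanes/bart.home.*

# Apparent size (hard-linked content counted every time it appears):
du -shcl /disk2/backup/isilanes/bart.home.*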

Perl script for making the backup

Update (Jun 5, 2008): You can find a much refined version of the script here. It no longer requires a certain auxiliary script to be installed on the remote machine, and is “better” in general (or it should be!)

Below is the commented Perl script I use. Machine names, directories and IPs are invented. Bart is the name of my computer.


#!/usr/bin/perl -w

use strict;

my $rsync = "rsync -a -e ssh --delete --delete-excluded";
my $home = "/home/isilanes";
my $logfile = "$home/.LOGs/backup_log";

#
# $where -> where to make the backup
#
# $often -> whether this is a daily, weekly or monthly backup
#
my $where = $ARGV[0] || 'none';
my $often = $ARGV[1] || 'none';

my ($source,$remote,$destdir,$excluded,$to,$from);

# Possible "$where"s:
my @wheres = qw /machine1 machine2/;

# Possible "$often"s:
my @oftens = qw /daily weekly monthly/;

# Check remote machine:
my $pass = 0;
foreach my $w (@wheres) { $pass = 1 if ($where eq $w) };
die "$where is an incorrect option for \"where\"!\n" unless $pass;

# Check how-often:
$pass = 0;
foreach my $o (@oftens) { $pass = 1 if ($often eq $o) };
die "$often is an incorrect option for \"often\"!\n" unless $pass;

# Set variables:
if ($where eq 'machine1')
{
    # Defaults:
    $source   = $home;
    $remote   = '0.0.0.1';
    $destdir  = '/disk2/backup/isilanes/bart.home.current';
    $excluded = "--exclude-from $home/.LOGs/excludes_backup.dat";
    $to       = 'machine1';
    $from     = 'bart';
}
elsif ($where eq 'machine2')
{
    # Defaults:
    $source   = $home;
    $remote   = '0.0.0.2';
    $destdir  = '/scratch/backup/isilanes/bart.home.current';
    $excluded = "--exclude-from $home/.LOGs/excludes_backup.dat";
    $to       = 'machine2';
    $from     = 'bart';
}

# Do the job:
unless ($where eq 'none')
{
    unless ($often eq 'none')
    {
        # Connect to the remote machine, and run ANOTHER script there, making a rotation
        # of the backup dirs:
        system "ssh $remote \"/home/isilanes/MyTools/rotate_backups.pl $often\"";

        # Actually make the backup:
        system "$rsync $excluded $source/ $remote:$destdir/";

        # "touch" the backup dir, to give it present timestamp:
        system "ssh $remote \"touch $destdir\"";

        # Enter a line in the log file defined above ($logfile):
        &writelog($from,$often,$to);
    };
};

sub writelog
{
    my $from  = ucfirst($_[0]);
    my $often = $_[1];
    my $to    = uc($_[2]);
    my $date  = `date`;

    open(LOG,">>$logfile");
    printf LOG "home@%-10s %-7s backup at %-10s on %1s",$from,$often,$to,$date;
    close(LOG);
};

As can be seen, this script relies on the remote machine having a rotate_backups.pl Perl script, located at /home/isilanes/MyTools/. That script makes the rotation of the 18 backups (moving current to yesterday, yesterday to 2-days-ago, 2-days-ago to 3-days-ago and so on). The code for that:


#!/usr/bin/perl -w

use strict;

# Whether daily, weekly or monthly:
my $type = $ARGV[0] || 'daily';

# Backup directory:
my $bdir = '/disk4/backup/isilanes/bart.home';

# Max number of copies:
my %nmax = ( 'daily'   => 7,
             'weekly'  => 4,
             'monthly' => 6 );

# Choose one of the above:
my $nmax = $nmax{$type} || 7;

# Rotate N->tmp, N-1->N, ..., 1->2, current->1:
system "mv $bdir.$type.$nmax $bdir.tmp" if (-d "$bdir.$type.$nmax");

my $i;
for ($i=$nmax-1;$i>0;$i--)
{
    my $j = $i+1;
    system "mv $bdir.$type.$i $bdir.$type.$j" if (-d "$bdir.$type.$i");
};

system "mv $bdir.current $bdir.$type.1" if (-d "$bdir.current");

# Restore last (tmp) backup, and then refresh it:
system "mv $bdir.tmp $bdir.current" if (-d "$bdir.tmp");
system "cp -alf --reply=yes $bdir.$type.1/. $bdir.current/" if (-d "$bdir.$type.1");


Beware of new UDEV rules!

As some of you might know, udev is a nice program that lets the user give persistent names to hotplugged items (e.g. USB devices) in GNU/Linux systems.

When a USB device is plugged in, the kernel “finds” it and gives it a device name. This “name” is a special file (located in the /dev directory) through which communication with the USB device is done. For example, this is the name that has to be used for mounting the device:

# mount /dev/devicename /mnt/mountpoint

Now, the old devfs (superseded by udev) gave subsequently plugged USB devices sequential names (e.g., the first one sda, the second one sdb…). So the device name did not correspond to the physical device you were plugging in: an external HD and a portable music player would be given the devices sda and sdb respectively, or the other way around, depending on the plugging order!

To fix this, udev allows for creating rules, so that a device matching these rules will always be given the same device name. Each USB device passes some info to the kernel when it is plugged in, so udev can use that info to identify the device. For example, here is an excerpt of dmesg on my Debian box, when I connect my external HD:

scsi6 : SCSI emulation for USB Mass Storage devices
usb-storage: device found at 9
usb-storage: waiting for device to settle before scanning
Vendor: FUJITSU Model: MHT2080AT Rev: 0811
Type: Direct-Access ANSI SCSI revision: 00
SCSI device sda: 156301488 512-byte hdwr sectors (80026 MB)

The relevant point is that the HD identifies itself as a FUJITSU product with the model name MHT2080AT. I can now tell udev to create a /dev/woxter device node each time I plug in a device made by “FUJITSU” with the model name “MHT2080AT”. To do so, I create a file /etc/udev/rules.d/myrules.rules with this content:

# My Woxter disk:
BUS=="scsi", SYSFS{vendor}=="FUJITSU", SYSFS{model}=="MHT2080AT", NAME="woxter"

Now, the original reason to write this post: do you see the '==' after 'BUS' and 'SYSFS'? They have the usual meaning: ‘BUS=="scsi"’ means ‘if BUS equals "scsi"’, whereas ‘NAME="woxter"’ means ‘assign the value "woxter" to NAME’.

However, in previous versions of udev (I don’t know when they changed), all equal signs in the udev rules were single ‘=’s, and that is the way I had them.
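
In other words, my old rule looked something like this, which newer udev no longer treats as a match:

# My Woxter disk (old syntax, no longer correct):
BUS="scsi", SYSFS{vendor}="FUJITSU", SYSFS{model}="MHT2080AT", NAME="woxter"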

Now, all of a sudden, I update udev, and my USB devices do not get the name they should, according to my rules, because my rules are wrong! Man, they should give a warning or something! Something like:

Warning: file X, line Y. Use of '=' where '==' is expected!

Oh, well. In the end I found out by myself.


Why J2EE is complex

I read in O’Reillynet a comment on AurigaLogic’s Blogic.

Blogic comments on why J2EE is so complex and tedious to use. Their main argument in favor of that complexity is… hold your breath… fasten your seatbelts…: if it were easier, more stupid people would be using it! Ta-da!!

Amazing, the “blogic” of these people.


The meaning of Vista

From markdbd’s blog, the image says it all:


Parsing command line arguments

In UNIX-like environments, such as GNU/Linux, the command line is often used to operate on a bunch of files, for example:

rm -f *.dat

In the command above, “*.dat” is expanded by the shell (the command interpreter) to all matching files in the current directory (e.g. “file1.dat file2.dat dont_delete_me.dat this_file_is_rubbish.dat“). This expansion is performed as a first step, and then the expanded command line is executed, i.e.:

rm -f file1.dat file2.dat dont_delete_me.dat this_file_is_rubbish.dat

This behaviour can potentially fail if a lot of files match *.dat, because there is an upper limit to how long a command line can be (brutally high, but finite). This can happen, for example, if you try to delete the contents of a directory with 100,000 files, and use rm -f * (yes, this can happen). For example, an ls in a directory with 100,000 files works fine, but an rm * does not:

Bart[~/kk]: rm *
/bin/rm: Argument list too long.
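
The limit itself comes from the kernel; if you are curious, on GNU/Linux you can ask for it (the maximum number of bytes of arguments plus environment passed to a new program) with:

getconf ARG_MAX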

To avoid this problem, we can make use of xargs and echo (echo is typically a shell builtin, so the kernel limit on exec arguments does not apply to it), in the following way:

echo * | xargs rm -f

Now, xargs takes care of the argument list, and processes it so that rm does not see an endless argument list.
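
As an aside, if any of the file names could contain spaces or other funny characters, the more robust idiom is find with null-separated output; a generic sketch, not needed for the simple case above:

find . -maxdepth 1 -name '*.dat' -print0 | xargs -0 rm -f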

xargs can also be put to other uses. For example, consider the following: we want to convert some OGG files to MP2 (I won’t be caught dead near an MP3, due to its patent issues), so that a Windows luser can hear them. We can use oggdec to convert OGG to WAV, then toolame to convert WAV to MP2. Now, oggdec accepts multiple arguments as input files to convert, so the following works:

oggdec *.ogg

The above generates a .wav for each .ogg in the argument list. However, toolame does not work like that; it expects a single argument, or, if it finds two, the second one is interpreted as the desired output file, so the following fails (too many arguments):

toolame *.wav

This is where xargs can come in handy, with its -n option (see the xargs man page). This option tells xargs how many arguments to process at the same time. If we want each single argument to be processed separately, the following will do the trick:

echo *.wav | xargs -n 1 toolame

In the above example, toolame is called once per WAV file, and sees a single WAV file as argument each time, so it generates a .mp2 file per .wav file, as we wanted.


TeX capacity exceeded error

I am definitely dumb. Well, LaTeX has its part in it, too.

It turns out that all of a sudden, I started having this error when compiling a .tex file:

! TeX capacity exceeded, sorry [input stack size=1500].

After googling for an answer, I found out that the “stack size” limit is defined in the following file:

/usr/share/texmf/web2c/texmf.cnf
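
The exact path may differ between TeX distributions; kpsewhich will tell you which copy is actually in use:

kpsewhich texmf.cnf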

However, changing the value did no good: any limit, no matter how large, would be “exceeded”. The usual reason for this error (found after a little more hitting my head against the wall) is an infinite loop in the input .tex (maybe an \input{file.tex} inside file.tex itself, or some such). But 10 hours (well, 5 minutes, actually) of head-banging later, when I was pretty sure no freaking infinite loop was there, I found the answer:

I had deleted the \end{document} tag!!

Now, yes, how stupid am I? And… how stupid is LaTeX to give that silly error, instead of:

TeX warning: You are too dumb, and forgot an \end{document}


Custom style in PowerDot

Remember I mentioned PowerDot for LaTeX? PowerDot is a LaTeX class to produce PowerPoint-like presentations. It creates PDFs that can be read fullscreen with any PDF reader, and they turn out to be very nice-looking presentations.

I am now fiddling with it, and wanted to do a custom style. I have read the PowerDot Manual[PDF], and it says all you have to do is to copy and rename an existing style, then modify it:

% cd /usr/share/texmf-texlive/tex/latex/powerdot/
% cp powerdot-default.sty powerdot-isilanes.sty
% vi powerdot-isilanes.sty

Then, put style=isilanes in your .tex, et voilà! Well, it fails miserably, saying (among the usual garbage):

! Class powerdot Error: unknown style `isilanes'.

But the .sty is there!

OK, the problem is that LaTeX “doesn’t know” that you added the style. To make it aware, on my Debian Etch box:

% dpkg-reconfigure tetex-base

or, much better (thanks to a comment by bjacquem):

% texhash

This seems to “refresh” the internal LaTeX database, and now it works.
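
For reference, selecting the custom style in the presentation itself is just a class option; a minimal document would look something like this (the class option and slide environment are as documented in the PowerDot manual; the content is, of course, made up):

\documentclass[style=isilanes]{powerdot}

\begin{document}

\begin{slide}{A slide title}
Hello, PowerDot!
\end{slide}

\end{document}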

