Please, choose the right format to send me that text. Thanks.

I just received an e-mail with a very interesting text (recipes for [[Pincho|pintxos]]), and it prompted a little experiment. The issue is that the text came inside a [[DOC (computing)|DOC]] file (of course!), which raises some questions and concerns on my side. The size of the file was 471 kB.

I thought one could make the document more portable by exporting it to [[PDF]] (using [[OpenOffice.org]]). The resulting file has a size of 364 kB (1.29 times smaller than the original DOC).

Furthermore, text formatting could be dropped altogether by using a [[plain text]] format. A copy/paste of the contents of the DOC into a TXT file yielded a 186 kB file (2.53x smaller).

Once in the mood, we can go one step further and compress the TXT file: with [[gzip]] we get a 51 kB file (9.24x), and with [[xz]] a 42 kB one (11.2x).
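For reference, something along these lines reproduces the comparison from a shell (the file name is made up; the -c flag leaves the original TXT untouched):

% gzip -c pintxos.txt > pintxos.txt.gz
% xz -c pintxos.txt > pintxos.txt.xz
% ls -l pintxos.txt pintxos.txt.gz pintxos.txt.xz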

So far, so good. No surprise. The surprise came when, just for fun, I exported the DOC to [[OpenDocument|ODT]]. I obtained a document equivalent to the original one, but with a 75 kB size! (6.28x smaller than the DOC).

So, to summarize:

DOC

Pros

  • Editable.
  • Allows for text formatting.

Cons

  • Proprietary. In principle only MS Office can open it. OpenOffice.org can too, but only thanks to reverse engineering.
  • If opened with OpenOffice.org, or even with a different version of MS Office, the reader cannot be sure of seeing the same formatting the writer intended.
  • Size. 6 times bigger than ODT. Even bigger than PDF.
  • MS invented and owns it. Do you need more reasons?

PDF

Pros

  • Portability. You can open it in any OS (Windows, Linux, Mac, BSD…), on account of there being so many free PDF readers.
  • Smaller than the DOC.
  • Allows for text formatting, and the format the reader sees will be exactly the one the writer intended.

Cons

  • Not editable (I really don’t see the point in editing PDFs. For me a PDF is the product of an underlying format (e.g. LaTeX), just as what you see in your browser is the product of some HTML/PHP, or an exe is the product of some source code. But I digress.)
  • Could be smaller.

TXT

Pros

  • Portability. You can’t get much more portable than a plain text file. You can edit it anywhere, with your favorite text editor.
  • Size. You can’t get much smaller than a plain text file (as it contains the mere text content), and you can compress it further with ease.

Cons

  • Formatting. If you need text formatting, or to include pictures or content other than text, then plain text is not for you.

ODT

Pros

  • Portability. It can be edited with OpenOffice.org (and probably others), which is [[free software]], and has versions for Windows, Linux, and Mac.
  • Editability. Every bit as editable as DOC.
  • Size. 6 times smaller files than DOC.
  • It’s a free standard, not some proprietary rubbish.

Cons

  • None I can think of.

So please, if you send me some text, first consider whether plain text will suffice. If not, and I am not expected to edit it, PDF is fine. If editing is important (or size, because it’s smaller than PDF), then ODT is the way to go.


ChopZip: a parallel implementation of arbitrary compression algorithms

Remember plzma.py? I made a wrapper script for running [[LZMA]] in parallel. The script could be readily generalized to use any compression algorithm, following the principle of breaking the file into parts (one per CPU), compressing the parts, and then [[tar (file format)|tarring]] them together. In other words: chop the file, zip the parts. Hence the name of the program that evolved from plzma.py: ChopZip.

Introduction

Currently ChopZip supports [[LZMA|lzma]], [[XZ Utils|xz]], [[gzip]] and lzip. Of them, lzip deserves a brief comment. It was brought to my attention by a reader of this blog. It is based on the LZMA algorithm, as are lzma and xz. Unlike them, apparently, multiple files compressed with lzip can be concatenated to form a single valid lzip-compressed file. Uncompressing the latter generates a concatenation of the original files.

To illustrate the point, check the following shell action:

% echo hello > head
% echo bye > tail
% lzip head
% lzip tail
% cat head.lz tail.lz > all.lz
% lzip -d all.lz
% cat all
hello
bye

However, I just discovered that gzip, bzip2 and xz all do that already! It seems that lzma is advertised as capable of doing it, but it doesn’t work for me. Sometimes it will uncompress the concatenated file to the original file just fine, other times it will decompress it to just the first chunk of the set, and yet other times it will complain that the “data is corrupt” and refuse to uncompress. For that reason, chopzip uses two working modes: simple concatenation (gzip, lzip, xz) and tarring (lzma). The relevant mode is chosen transparently for the user.

Also, if you use Ubuntu, this bug will apply to you, making it impossible to have xz-utils, lzma and lzip installed at the same time.

The really nice thing about concatenability is that it allows for trivial parallelization of the compression, while maintaining compatibility with the serial compression tool, which can still uncompress the product of a parallel compression. Unfortunately, for non-concatenatable compression formats, the output of chopzip is a tar file of the compressed chunks, making it impossible to uncompress with the original compressor alone (first an untar would be needed, then uncompressing, then concatenation of the chunks; or just use chopzip to decompress).
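For a concatenatable compressor like xz, the whole trick can be done by hand as in the following sketch (file names are made up, and split -n needs a reasonably recent coreutils); the key point is that the last step uses the plain, serial xz:

% split -n 4 -d bigfile chunk.
% for f in chunk.??; do xz "$f" & done; wait
% cat chunk.*.xz > bigfile.xz
% xz -dc bigfile.xz | cmp - bigfile

If cmp prints nothing, the round trip gave back the original file, even though the compression itself was done in four independent pieces.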

The rationale behind plzma/chopzip is simple: multi-core computers are commonplace nowadays, but the most common compression programs still do not take advantage of this fact. At least the ones that I know and use don’t. There are at least two initiatives that tackle the issue, but I still think ChopZip has a niche to exploit. The most consolidated one is pbzip2 (which I mention in my plzma post). pbzip2 is great if you want to use bzip2. It scales really nicely (almost linearly), and pbzipped files are valid bzip2 files. The main drawback is that it uses bzip2 as its compression method. bzip2 has always been the “extreme” brother of gzip: it compresses more, but it’s so slow that you would only resort to it if compression size is vital. LZMA-based programs (lzma, xz, lzip) are both faster and compress even more, so for me bzip2 is out of the equation.

A second contender in parallel compression is pxz. As its name suggests, it compresses using xz. Drawbacks? It’s not in the official repositories yet, and I couldn’t manage to compile it, even though it comprises a single C file and a Makefile. It also lacks the ability to use different encoders (which is not necessarily bad), and it’s a compiled program, versus ChopZip, which is a much more portable script.

Scalability benchmark

Anyway, let’s get into ChopZip. I have run a simple test with a moderately large file (a 374 MB tar file of the whole /usr/bin dir). The table below shows the speedup results for running ChopZip on that file, using various numbers of chunks (and, consequently, threads). The tests were conducted on a 4 GB RAM Intel Core 2 Quad Q8200 computer. Speedups are calculated as how many times faster a given number of chunks performed with respect to just 1 chunk. It is noteworthy that in every case running ChopZip with a single chunk is virtually identical in performance to running the original compressor directly. Also, decompression times (not shown) were identical, irrespective of the number of chunks. The ChopZip version used was r18.

#chunks    xz       gzip     lzma     lzip
1          1.000    1.000    1.000    1.000
2          1.862    1.771    1.907    1.906
4          3.265    1.910    3.262    3.430
8          3.321    1.680    3.247    3.373
16         3.248    1.764    3.312    3.451

Note how increasing the number of chunks beyond the number of actual cores (4 in this case) can have a small benefit. This happens because N equal-sized chunks of a file will not be compressed at equal speed, so the more chunks there are, the smaller the overall effect of the slowest-compressing chunks.

Conclusion

ChopZip speeds up the compression of arbitrary files quite noticeably, and with arbitrary compressors. In the case of concatenatable compressors (see above), the resulting compressed file is an ordinary compressed file, apt to be decompressed with the regular compressor (xz, lzip, gzip), as well as with ChopZip. This makes ChopZip a valid alternative to them, with the added advantage of parallelization.


plzma.py: a wrapper for parallel implementation of LZMA compression

Update: this script has been superseded by ChopZip

Introduction

I discovered the [[Lempel-Ziv-Markov chain algorithm|LZMA]] compression algorithm some time ago, and have been thrilled by its capacity since. It has higher compression ratios than even [[bzip2]], with a faster decompression time. However, although decompressing is fast, compressing is not: LZMA is even slower than bzip2. On the other hand, [[gzip]] remains blazing fast in comparison, while providing a decent level of compression.

More recently I have discovered the interesting pbzip2, which is a parallel implementation of bzip2. With the increasing popularity of multi-core processors (I have a quad-core at home myself), parallelizing the compression tools is a very good idea. pbzip2 performs really well, producing bzip2-compatible files with near-linear scaling with the number of CPUs.

LZMA being such a high-performance compressor, I wondered if its speed could be boosted by running it in parallel. Although the [[Lempel-Ziv-Markov chain algorithm|Wikipedia article]] states that the algorithm can be parallelized, I found no such implementation in Ubuntu 9.04, where the utility provided by the lzma package is exclusively serial. Not finding one, I set out to produce it.

About plzma.py

Any compression can be parallelized as follows:

  1. Split the original file into as many pieces as CPU cores available
  2. Compress (simultaneously) all the pieces
  3. Create a single file by joining all the compressed pieces, and call the result “the compressed file”

In a Linux environment, these three tasks can be carried out easily by split, lzma itself, and tar, respectively. I just made a [[Python (programming language)|Python]] script to automate these tasks, called it plzma.py, and put it on my web site for anyone to download (it’s GPLed). Please notice that plzma.py has been superseded by chopzip, which starts at revision 12, whereas the latest plzma is revision 6.
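Done by hand, and leaving all the bookkeeping aside, the three steps would look something like this sketch (not the actual plzma.py code; the chunk names and the 4-way split are just illustrative, and split -n needs a reasonably recent coreutils):

% split -n 4 -d bigfile bigfile.
% for f in bigfile.0?; do lzma -S .lz "$f" & done; wait
% tar -cf bigfile.plz bigfile.0?.lz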

I must remark that, while pbzip2 generates bzip2-compatible compressed files, that is not the case with plzma. The products of plzma compression must be decompressed with plzma as well. The actual format of a plzma file is just a TAR file containing as many LZMA-compressed chunks as CPUs used for compression. These chunks, once decompressed individually, can be concatenated (with the cat command) to form the original file.

Benchmarks

What review of compression tools lacks benchmarks? No matter how inaccurate or silly, none of them do. And neither does mine :^)

I used three (single) files as reference:

  • molekel.tar – a 108 MB tar file of the (GPL) [[Molekel]] 5.0 source code
  • usr.bin.tar – a 309 MB tar file of the contents of my /usr/bin/ dir
  • hackable.tar – a 782 MB tar file of the hackable:1 [[Debian]]-based distro for the [[Neo FreeRunner]]

The second case is intended as an example of binary file compression, whereas the other two are more like “real-life” examples. I didn’t test text-only files… I might in the future, but don’t expect the conclusions to change much. The testbed was my Frink desktop PC (Intel Q8200 quad-core).

The options for each tool were:

  • gzip/bzip2/pbzip2: compression level 6
  • lzma/plzma: compression level 3
  • pbzip2/plzma: 4 CPUs

Compressed size

The most important feature of a compressor is the size of the resulting file. After all, we used it in the first place to save space. No matter how fast an algorithm is, if the resulting file is bigger than the original file I wouldn’t use it. Would you?

The graph below shows the compressed size ratio for compression of the three test files with each of the five tools considered. The compressed size ratio is defined as the compressed size divided by the original size for each file.

This test doesn’t hold many surprises: gzip is the least effective and LZMA the most effective. The point to make here is that the parallel implementations perform exactly as well (or as badly) as their serial counterparts.

If you are unimpressed by the supposedly higher performance of bzip2 and LZMA over gzip, when in the picture all final sizes do not look very different, recall that gzip compressed molekel.tar ~ 3 times (to a 0.329 ratio), whereas LZMA compressed it ~ 4.3 times (to a 0.233 ratio). You could stuff 13 LZMAed files where only 9 gzipped ones fit (and just 3 uncompressed ones).
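(Check: 13 × 0.233 ≈ 3.03 and 9 × 0.329 ≈ 2.96, so both piles take up roughly the space of 3 uncompressed copies.)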

Compression time

However important the compressed size is, compression time is also an important subject. Actually, that’s the very issue I try to address by parallelizing LZMA: to make it faster while keeping its high compression ratio.

The graph below shows the normalized times for compression of the three test files with each of the five tools considered. The normalized time is taken as the total time divided by the time it took gzip to finish (an arbitrary scale with t(gzip)=1.0).

Roughly speaking, we could say that in my setting pbzip2 makes bzip2 as fast as gzip, and plzma makes LZMA as fast as serial bzip2.

The speedups for bzip2/pbzip2 and LZMA/plzma are given in the following table:

File           pbzip2   plzma
molekel.tar    4.00     2.72
usr.bin.tar    3.61     3.38
hackable.tar   3.80     3.04

The performance of plzma is nowhere near that of pbzip2, but I’d call it acceptable (wouldn’t I? I’m the author!). There are two reasons I can think of to explain the lower-than-linear scalability. The first one is the overhead imposed by cutting the file into pieces and assembling them back. The second one, maybe more important, is disk performance. Each core can compress its chunk independently, but the disk I/O for reading the chunks and writing them back compressed happens simultaneously on the same disk, which the four processes share.

Update: I think that a good deal of the under-linearity comes from the fact that files of equal size will not be compressed in equal time. Each chunk compression will take a slightly different time to complete, because some chunks are easier to compress than others. The program waits for the last compression to finish, so it’s as slow as the slowest one. It is also true that pieces of 1/N size might take more than 1/N of the time to complete, so the more chunks, the slower the total compression (the opposite could also be true, though).

Decompression times

Usually we pay less attention to it, because it is much faster (and because we often compress things never to open them again, in which case we had better have deleted them in the first place… but I digress).

The following graph shows the decompression data equivalent to the compression times graph above.

The most noteworthy point is that pbzip2 decompresses pbzip2-compressed files faster than bzip2 does bzip2-compressed files. That is, both compression and decompression benefit from the parallelization. However, for plzma that is not the case: decompression is slower than with serial LZMA. This is due to two effects: first, the decompression part is still not parallelized in my script (it soon will be), which by itself would only bring decompression speeds close to those of serial LZMA. The second effect makes it actually slower: the overhead caused by splitting and then joining.

Another result worth noting is that, although LZMA is much slower than even bzip2 at compressing, its decompression is actually faster. This is not random: LZMA was designed with fast decompression in mind, so that it could be used in, e.g., software distribution, where a single person compresses the original data (however painstakingly), and then the users can download the result (the smaller, the faster) and uncompress it to use it.

Conclusions

While there is room for improvement, plzma seems like a viable option to speed up general compression tasks where a high compression ratio (LZMA level) is desired.

I would like to stress the point that plzma files cannot be uncompressed with LZMA alone. If you don’t use plzma to decompress, you can follow these steps:

% tar -xf file.plz
% lzma -d file.0[1-4].lz
% cat file.0[1-4] > file
% rm file.0[1-4] file.plz


Accessing Linux ext2/ext3 partitions from MS Windows

Accessing both Windows [[File Allocation Table|FAT]] and [[NTFS]] file systems from Linux is quite easy, with tools like [[NTFS-3G]]. However (following the [[shit|MS]] tradition of making itself incompatible with everything else, to thwart competition), doing the opposite (accessing Linux file systems from Windows) is more complicated. One would have to guess why (and how!) [[closed source software|closed]], [[proprietary software|proprietary]] and technically inferior file systems can be read by free software tools, whereas proprietary software with such a big corporation behind it is unable (or unwilling) to interact with superior and [[free software]] file systems. Why should Windows users be deprived of the choice of [[JFS (file system)|JFS]], [[XFS]] or [[ReiserFS]], when they are free? Are MS techs too dumb to implement them? Or too evil to give their users the choice? Or, maybe, too scared that if the choice were possible, their users would dump NTFS? None of these explanations makes one feel much love for MS, does it?

This stupid inability of Windows to read any of the many formats Linux can use creates problems not only for Windows users, but also for Linux users. For example, when I format my external hard disks or pendrives, I end up wondering whether I should reserve some space for a FAT partition, so I could put data there to share with hypothetical Windows users I might lend the disk to. And, seriously, I abhor wasting my hardware on such lousy file systems when I could use Linux ones.

Anyway, there are some third-party tools to help us with such a task. I found at least two:

  • Ext2IFS
  • Ext2Fsd

I have used the first one, but as some blogs point out (e.g. BloggUccio), Ext2Fsd is required if the [[inode]] size is bigger than 128 B (256 B in some modern Linux distros).

Getting Ext2IFS

It is a simple exe file you can download from fs-driver.org. Installing it consists of the typical Windows next-next-finish click-dance. In principle the defaults are OK. It will ask you about activating “read-only” access (which I declined: it’s less safe, but I want to be able to write too), and something about large file support (which I accepted, because it’s only an issue with Linux kernels older than 2.2… Middle Age stuff).

Formatting the hard drive

In principle, Ext2IFS can read ext2/ext3 partitions with no problem. In practice, if the partition was created with an [[inode]] size of more than 128 bytes, Ext2IFS won’t read it. To create a “compatible” partition, you can mkfs it with the -I flag, as follows:

# mkfs.ext3 -I 128 /dev/whatever

I found out about the 128 B inode thing from this forum thread [es].
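By the way, if you want to check the inode size of an already formatted partition, tune2fs (from e2fsprogs) can tell you:

# tune2fs -l /dev/whatever | grep -i 'inode size'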

Practical use

What I have done, and tested, is the following: I format my external drives with almost all of the space as ext3, as described above, leaving a couple of gigabytes (you could cut that down to a couple of megabytes if you really wanted to) for a FAT partition. Then I copy the Ext2IFS_1_11a.exe executable to that partition.
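For the record, the partitioning itself can be sketched more or less like this (everything here is a placeholder: /dev/sdX, the 95%/5% split… and it wipes the whole drive, so double-check the device name):

# parted -s /dev/sdX mklabel msdos
# parted -s /dev/sdX mkpart primary ext3 0% 95%
# parted -s /dev/sdX mkpart primary fat32 95% 100%
# mkfs.ext3 -I 128 /dev/sdX1
# mkfs.vfat -F 32 /dev/sdX2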

Whenever you want to use that drive, Linux will see two partitions (the ext3 one and the FAT one), the second of which you can ignore. From Windows, you will see only a 2 GB FAT partition. However, you will be able to open it, find the exe, double-click, and install Ext2IFS. After that, you can unplug the drive and plug it in again… et voilà, you will see the ext3 partition just fine.


Save HD space by using compressed files directly

Maybe the constant increases in hard disk capacity provide us with more space to waste on our files, but there is always a situation in which we would like to squeeze as much data into as little space as possible. Besides, it is always good practice to keep disk usage as low as possible, just for tidiness.

The first and most important advice for saving space: for $GOD’s sake, delete the stuff you don’t need!

Now, assuming you want to keep everything you presently have, the second tool is [[data compression]]. Linux users have long-time friends in the [[gzip]] and [[bzip2]] commands. One would use the former for fast (and reasonably good) compression, and the latter when saving space is really vital (although bzip2 is really slow). A more recent entry in the “perfect compression tool” contest would be the [[Lempel-Ziv-Markov chain algorithm]] (LZMA). This one can compress even more than bzip2, and is usually faster (although never as fast as gzip).

One problem with compression is that, although it is a good way of storing files, they usually have to be uncompressed to be modified and then re-compressed, which is very slow. However, we have some tools that interact with compressed files directly (internally decompressing “on the fly” only what we need to edit). I would like to just mention them here:

Shell commands

We can use zcat, zgrep and zdiff as replacements for cat, grep and diff on gzipped files. These account for a huge fraction of all the interaction I have with text files from the command line. If you are like me, they can save you tons of time.
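A couple of made-up examples of what I mean:

% zcat notes.txt.gz | less
% zgrep -i error /var/log/syslog.2.gz
% zdiff report_old.txt.gz report_new.txt.gz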

Vim

[[Vim (text editor)|Vim]] can be instructed to open some files using a decompression tool, showing the contents of the file and letting you work on them transparently. Once we :wq out of the file, we get the original compressed file back. This cycle is incredibly fast: almost as fast as opening the uncompressed file, and nowhere near as slow as gunzipping, viming and gzipping sequentially.

You can add the following to your .vimrc config file for the above:

" Only do this part when compiled with support for autocommands.
if has("autocmd")

 augroup gzip
  " Remove all gzip autocommands
  au!

  " Enable editing of gzipped files
  " set binary mode before reading the file
  autocmd BufReadPre,FileReadPre	*.gz,*.bz2,*.lz set bin

  autocmd BufReadPost,FileReadPost	*.gz call GZIP_read("gunzip")
  autocmd BufReadPost,FileReadPost	*.bz2 call GZIP_read("bunzip2")
  autocmd BufReadPost,FileReadPost	*.lz call GZIP_read("unlzma -S .lz")

  autocmd BufWritePost,FileWritePost	*.gz call GZIP_write("gzip")
  autocmd BufWritePost,FileWritePost	*.bz2 call GZIP_write("bzip2")
  autocmd BufWritePost,FileWritePost	*.lz call GZIP_write("lzma -S .lz")

  autocmd FileAppendPre			*.gz call GZIP_appre("gunzip")
  autocmd FileAppendPre			*.bz2 call GZIP_appre("bunzip2")
  autocmd FileAppendPre			*.lz call GZIP_appre("unlzma -S .lz")

  autocmd FileAppendPost		*.gz call GZIP_write("gzip")
  autocmd FileAppendPost		*.bz2 call GZIP_write("bzip2")
  autocmd FileAppendPost		*.lz call GZIP_write("lzma -S .lz")

  " After reading compressed file: Uncompress text in buffer with "cmd"
  fun! GZIP_read(cmd)
    let ch_save = &ch
    set ch=2
    execute "'[,']!" . a:cmd
    set nobin
    let &ch = ch_save
    execute ":doautocmd BufReadPost " . expand("%:r")
  endfun

  " After writing compressed file: Compress written file with "cmd"
  fun! GZIP_write(cmd)
    if rename(expand("<afile>"), expand("<afile>:r")) == 0
      execute "!" . a:cmd . " <afile>:r"
    endif
  endfun

  " Before appending to compressed file: Uncompress file with "cmd"
  fun! GZIP_appre(cmd)
    execute "!" . a:cmd . " "
    call rename(expand(":r"), expand(""))
  endfun

 augroup END
endif " has("autocmd")

I first found the above in my (default) .vimrc file, allowing gzipped and bzipped files to be edited. I added the “support” for LZMAed files quite trivially, as can be seen in the lines containing “lz” in the code above (I use .lz as the extension for LZMAed files, instead of the default .lzma; see man lzma for more info).

Non-plaintext files

Other files that I have been able to successfully use in compressed form are [[PostScript]] and [[Portable Document Format|PDF]]. Granted, PDFs are already quite compact, but sometimes gzipping them saves space. In general, PS and EPS files save a lot of space when gzipped.

As far as I have tried, the [[Evince]] document viewer can read gzipped PS, EPS and PDF files with no problem (probably [[Device_independent_file_format|DVI]] files as well).
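For example (file name made up), gzipping a figure and opening the result directly is as simple as:

% gzip -9 figure.eps
% evince figure.eps.gz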


4 GB SD cards properly read under Linux

Remember how MacOS X ate my pics? In that post I explained that Linux had problems with >1 GB [[Secure Digital card|Secure Digital cards]].

Well, apparently the problem was in the card reader, not in Linux (or the card itself). I recently acquired an SD card reader on [[eBay]] (actually, one like this one), and with it I can mount and read 2 and 4 GB SD cards without problems.


DreamHost makes me happy again: free backups

Perhaps you are aware of my first (and, so far, last) gripe with [[DreamHost]]: as I wrote a couple of months ago, they wouldn’t let me use my account space for non-web content.

Well, it seems that they really do work to make their users happy. Probably other people requested something like that too, because here is what the August DH newsletter says about it:

In keeping with my no-theme theme, uh oh, I think I just made a destroy-the-universe-LHC-style self-contradiction, here’s a new feature that pretty much has nothing to do with anything I said in the introduction!

https://panel.dreamhost.com/?tree=users.backup

Now, you know how we give out a LOT of disk space with our hosting? Well technically that space is only supposed to be used for your _actual_ web site (and email / database stuff) .. not as an online backup for your music, pictures, videos, other servers, etc!

Well, just like every other web host does, we’ve been sort of cracking down on that some lately, and it seems to catch some people by surprise! Nobody likes being surprised, especially in the shower, which is where we typically brought it up, and so now we offer a solution:

You CAN use 50GB of your disk space for backups now! The only caveat is, it’s a separate ftp (or sftp) user on a separate server and it can’t serve any web pages. There are also NO BACKUPS kept of THESE backups (they should already BE your backups, not your only copy), and if you go over 50GB, extra space is only 10 cents a GB a month (a.k.a. cheap)!

Thanks, DreamHost, for showing me that I made a good choice when I chose you!

Update: apparently only [[SSH file transfer protocol|SFTP]] works (or [[File Transfer Protocol|FTP]] if you are idiot enough to enable it), but not scp or any [[Secure Shell|SSH]]-related thing (rsync, …). I hope I find some workaround, because if not that would be a showstopper for me.


MacOSX ate my pics

OK, so the 2 GB [[Secure Digital card|SD]] card I use in my camera has its problems. Apparently, only 1 GB of it is recognized by the OS (Linux) and/or the card readers I own. The problem seems to be common, as you can see by googling about it. More info, from the Wikipedia article for SD cards:

Devices that use SD cards identify the card by requesting a 128-bit identification string from the card. For standard-capacity SD cards, 12 of the bits are used to identify the number of memory clusters (ranging from 1 to 4096) and 3 of the bits are used to identify the number of blocks per cluster (which decode to 4, 8, 16, 32, 64, 128, 256 or 512 blocks per cluster).

In older 1.x implementations the standard capacity block was exactly 512 bytes. This gives 4096 x 512 x 512 = 1 gigabyte of storage memory. A later revision of the 1.x standard allowed a 4-bit field to indicate 1024 or 2048 bytes per block instead, yielding more than 1 gigabyte of memory storage. Devices designed before this change may incorrectly identify such cards, usually by misidentifying the card as having a lower capacity than it really does, by assuming 512 bytes per block rather than 1024 or 2048.

For the new SDHC high capacity card (2.0) implementation, 22 bits of the identification string are used to indicate the memory size in increments of 512 KBytes. Currently 16 of the 22 bits are allowed to be used, giving a maximum size of 32 GB. All SDHC 4-GB and larger cards must be 2.0 implementations. Two bits that were previously reserved and fixed at 0 are now used for identifying the type of card, 0=standard, 1=HC, 2=reserved, 3=reserved. Non-HC devices are not programmed to read this code and therefore cannot correctly read the identification of the card.

All SDHC readers work with standard SD cards.[ref]

Many older devices will not accept the 2 or 4 GB size even though it is in the revised standard. The following statement is from the SD association specification:

“To make 2 GByte card, the Maximum Block Length (READ_BL_LEN=WRITE_BL_LEN) shall be set to 1024 bytes. However, the Block Length, set by CMD16, shall be up to 512 bytes to keep consistency with 512 bytes Maximum Block Length cards (Less than and equal 2 Gbyte cards[ref].”

I have had problems in the past with this issue, and could only retrieve the photos beyond the “first GB” of the card with a tedious operation: copy some photos from the card to the internal memory of the camera (28 MB), remove the card from the camera, connect the camera to the computer, download the photos in the internal memory (and empty it), unplug, and repeat. 1 GB / 28 MB = boring as hell.

However, yesterday I realized that I had a shiny MacBook with its MacOSX intact. I read somewhere that Windows would sometimes read 2 GB cards that Linux couldn’t, by the simple method of happily ignoring the f*cked up file system in them. Maybe MacOSX could, too.

Well, it turns out it couldn’t. When I inserted the card into the card reader (I didn’t try with the camera directly), MacOSX found the device and mounted it, giving me a nice icon in the Mac version of “My PC”. However, the directory the icon led to was empty. I thought “Tough luck, Mac reads nothing in the card”, unmounted it and unplugged it. Then I placed it in the camera again, turned it on and… there was no photo in the darned thing! MacOSX had erased them!

There is still the (likely) possibility that I made some deep mistake, but the explanation for now seems that MacOSX ate my pics. Sad.
