-The MD5 hashing is not satisfactory. It is computed for a file only if
-the said file has to be read fully for a comparison (i.e. two files
-match and we have to read them completely).
-
-Hence, in practice lot of partial MD5s are computed, which costs a lot
-of cpu and is useless. This often hurts more than it helps. The only
-case when it should really be useful is when you have plenty of
-different files of same size, and lot of similar ones, which does not
-happen often.
-
-Forcing the files to be read fully so that the MD5s are properly
-computed is not okay neither, since it would fully read certain files,
-even if we will never need their MD5s.
-
-Anyway, it has to be compiled in with 'make WITH_MD5=yes', and even in
-that case it will be off by default
+The current algorithm is naive: it does not use any hashing of the
+file content. I tried MD5 on the whole file, which is not
+satisfactory because files are often never read entirely, hence the
+MD5 cannot be properly computed. I also tried XORing the first 4, 16
+and 256 bytes, with rejection as soon as one signature did not
+match. That did not help either.