-.TH "FINDDUP" 1 "Mar 2010" "Francois Fleuret" "User Commands"
+.TH "FINDDUP" "1.0" "Mar 2010" "Francois Fleuret" "User Commands"
.\" This man page was written by Francois Fleuret <francois@fleuret.org>
.\" and is distributed under a Creative Commons Attribution-Share Alike
.SH "SYNOPSIS"
-\fBfinddup\fP [OPTION]... DIR1 [[and:|not:]DIR2]
+\fBfinddup\fP [OPTION]... [DIR1 [[and:|not:]DIR2]]
.SH "DESCRIPTION"
-With a single directory argument, \fBfinddup\fP prints the duplicated
-files found in it.
+With one directory as argument, \fBfinddup\fP prints the duplicated
+files found in it. If no directory is provided, it uses the current
+one as default.
With two directories, it prints either the files common to both DIR1
and DIR2 or, with the `not:' prefix, the ones present in DIR1 and not
in DIR2.
.SH "BUGS"
None known, probably many. Valgrind does not complain though.
+The current algorithm is dumb, as it does not use any hashing of the
+file content.
+
+Here are the things I tried, which did not help at all: (1) Computing
+md5s on the whole files, which is not satisfactory because files are
+often not read entirely, hence the md5s cannot be properly computed,
+(2) computing XORs of the first 4, 16 and 256 bytes with rejection as
+soon as one does not match, (3) reading files in parts of increasing
+sizes so that rejection could be done with only a small fraction read
+when possible, (4) using mmap instead of open/read.
+
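As a rough illustration of idea (3) above, here is a minimal C sketch of pairwise comparison with increasing read sizes, so that differing files can be rejected after reading only a small prefix. The function name and chunk sizes are illustrative assumptions, not finddup's actual code.

```c
/* Sketch of rejection with increasing read sizes: compare two files
 * chunk by chunk, doubling the chunk size after every match, so most
 * non-duplicates are rejected after reading only a small prefix.
 * Names and sizes are hypothetical, not taken from finddup itself. */

#include <stdio.h>
#include <string.h>

/* Return 1 if the two files have identical content, 0 otherwise
 * (including when either file cannot be opened). */
static int same_content(const char *name1, const char *name2) {
  static char buf1[65536], buf2[65536];
  FILE *f1 = fopen(name1, "rb"), *f2 = fopen(name2, "rb");
  size_t chunk = 16; /* start small, double on every matching chunk */
  int result = 1;

  if (!f1 || !f2) { result = 0; goto done; }

  while (1) {
    size_t n1 = fread(buf1, 1, chunk, f1);
    size_t n2 = fread(buf2, 1, chunk, f2);
    if (n1 != n2 || memcmp(buf1, buf2, n1) != 0) { result = 0; break; }
    if (n1 < chunk) break; /* both files ended at the same offset */
    if (chunk < sizeof(buf1)) chunk *= 2;
  }

done:
  if (f1) fclose(f1);
  if (f2) fclose(f2);
  return result;
}
```

Note that even with this scheme, true duplicates must still be read in full, which is why the author reports no speed-up overall.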
.SH "WISH LIST"
The format of the output should definitely be improved. Not clear how.
-The comparison algorithm could maybe be improved with some MD5 kind of
-signature. However, most of the time is taken by comparison for
-matching files, which are required even when using a hash.
-
There could be some fancy option to link two instances of the command
-running on different machines to reduce network disk accesses. Again,
-this may not help much, for the reason given above.
+running on different machines to reduce network disk accesses. This
+may not help much though.
.SH "EXAMPLES"