X-Git-Url: https://www.fleuret.org/cgi-bin/gitweb/gitweb.cgi?a=blobdiff_plain;f=finddup.1;h=62ab0b878c2410dff0b0ae3835c30f84129fd315;hb=ad1b1b341fdfabc708e174cbc29006988b1df342;hp=46a4326fb9f2e1244aef1bfe9a93588f43505880;hpb=e4133d06373b48e8509afd0811bb0a726d74f8a8;p=finddup.git

diff --git a/finddup.1 b/finddup.1
index 46a4326..62ab0b8 100644
--- a/finddup.1
+++ b/finddup.1
@@ -1,4 +1,4 @@
-.TH "FINDDUP" 1 "Mar 2010" "Francois Fleuret" "User Commands"
+.TH "FINDDUP" "1.0" "Mar 2010" "Francois Fleuret" "User Commands"
 
 \" This man page was written by Francois Fleuret
 \" and is distributed under a Creative Commons Attribution-Share Alike
@@ -10,12 +10,13 @@ finddup \- Find files common to two directories (or not)
 
 .SH "SYNOPSIS"
 
-\fBfinddup\fP [OPTION]... DIR1 [[and:|not:]DIR2]
+\fBfinddup\fP [OPTION]... [DIR1 [[and:|not:]DIR2]]
 
 .SH "DESCRIPTION"
 
-With a single directory argument, \fBfinddup\fP prints the duplicated
-files found in it.
+With one directory as argument, \fBfinddup\fP prints the duplicated
+files found in it. If no directory is provided, it uses the current
+one as default.
 
 With two directories, it prints either the files common to both DIR1
 and DIR2 or, with the `not:' prefix, the ones present in DIR1 and not
@@ -61,35 +62,29 @@ show the real path of the files
 .TP
 \fB-i\fR, \fB--same-inodes-are-different\fR
 files with same inode are considered as different
-.TP
-\fB-m\fR, \fB--md5\fR
-use MD5 hashing
 
 .SH "BUGS"
 
 None known, probably many. Valgrind does not complain though.
 
-The MD5 hashing is not satisfactory. It is computed for a file only if
-the said file has to be read fully for a comparison (i.e. two files
-match and we have to read them completely).
-
-Hence, in practice lot of partial MD5s are computed, which costs a lot
-of cpu and is useless. This often hurts more than it helps, hence it
-is off by default. The only case when it should really be useful is
-when you have plenty of different files of same size, and lot of
-similar ones, which does not happen often.
+The current algorithm is dumb, as it does not use any hashing of the
+file content.
 
-Forcing the files to be read fully so that the MD5s are properly
-computed is not okay neither, since it would fully read certain files,
-even if we will never need their MD5s.
+Here are the things I tried, which did not help at all: (1) Computing
+md5s on the whole files, which is not satisfactory because files are
+often not read entirely, hence the md5s can not be properly computed,
+(2) computing XORs of the first 4, 16 and 256 bytes with rejection as
+soon as one does not match, (3) reading files in parts of increasing
+sizes so that rejection could be done with only a small fraction read
+when possible, (4) using mmap instead of open/read.
 
 .SH "WISH LIST"
 
 The format of the output should definitely be improved. Not clear how.
 
 Their could be some fancy option to link two instances of the command
-running on different machines to reduce network disk accesses. Again,
-this may not help much, for the reason given above.
+running on different machines to reduce network disk accesses. This
+may not help much though.
 
 .SH "EXAMPLES"
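Strategy (3) from the BUGS paragraph above — reading files in parts of increasing sizes so that mismatching pairs are rejected after only a small fraction has been read — can be sketched roughly as follows. This is an illustrative sketch only, not finddup's actual implementation; the function name `same_content` and the chunk-size schedule are invented for the example:

```c
/* Sketch of strategy (3): compare two files chunk by chunk, starting
   with a small chunk and doubling its size, so that files differing
   early are rejected after reading only a short prefix.
   Illustrative only; not taken from finddup's source. */

#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Returns 1 if the two files have identical content, 0 otherwise
   (including when either file cannot be opened). */
static int same_content(const char *name1, const char *name2) {
  FILE *f1 = fopen(name1, "rb"), *f2 = fopen(name2, "rb");
  static char buf1[65536], buf2[65536];
  size_t chunk = 16; /* start small ... */
  int result = 1;
  if (!f1 || !f2) { result = 0; goto done; }
  for (;;) {
    size_t n1 = fread(buf1, 1, chunk, f1);
    size_t n2 = fread(buf2, 1, chunk, f2);
    if (n1 != n2 || memcmp(buf1, buf2, n1) != 0) { result = 0; break; }
    if (n1 < chunk) break;       /* EOF reached on both, all bytes matched */
    if (chunk < sizeof buf1) chunk *= 2; /* ... then grow geometrically */
  }
done:
  if (f1) fclose(f1);
  if (f2) fclose(f2);
  return result;
}
```

As the BUGS section notes, this did not help in practice: when most same-size files actually are duplicates, they must be read fully anyway, so the early-rejection machinery only adds overhead.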