.TH "FINDDUP" "1.1" "Apr 2010" "Francois Fleuret" "User Commands"
.\" This man page was written by Francois Fleuret <francois@fleuret.org>
.\" and is distributed under a Creative Commons Attribution-Share Alike
.SH "NAME"
finddup \- Find files common to two directories (or not)
.SH "SYNOPSIS"
\fBfinddup\fP [OPTION]... [DIR1 [[and:|not:]DIR2]]
.SH "DESCRIPTION"
With one directory as argument, \fBfinddup\fP prints the duplicated
files found in it. If no directory is provided, it uses the current
directory.
.PP
With two directories, it prints either the files common to both DIR1
and DIR2 or, with the `not:' prefix, the ones present in DIR1 and not
in DIR2. The `and:' prefix is assumed by default and necessary only if
you have a directory name starting with `not:'.
.PP
This command compares files by first comparing their sizes, hence goes
through large sets of files quickly.
.PP
When looking for identical files, \fBfinddup\fP associates a group ID
to every content, and prints it along the file names. Use the \fB-g\fP
switch to hide these group IDs.
.SH "OPTIONS"
.TP
\fB-v\fR, \fB--version\fR
print the version number and exit
.TP
\fB-h\fR, \fB--help\fR
print the help and exit
.TP
\fB-d\fR, \fB--ignore-dots\fR
ignore files and directories starting with a dot
.TP
\fB-0\fR, \fB--ignore-empty\fR
ignore empty files
.TP
\fB-c\fR, \fB--hide-matchings\fR
do not show which files from DIR2 correspond to files from DIR1
(hence, show only the files from DIR1 which have an identical twin
in DIR2)
.TP
\fB-g\fR, \fB--no-group-ids\fR
do not show the file group IDs
.TP
\fB-t\fR, \fB--time-sort\fR
sort files in each group according to their modification times
.TP
\fB-p\fR, \fB--show-progress\fR
show progress information on stderr
.TP
\fB-r\fR, \fB--real-paths\fR
show the real paths of the files
.TP
\fB-i\fR, \fB--same-inodes-are-different\fR
consider files with the same inode as different
.SH "BUGS"
None known, probably many. Valgrind does not complain, though.
.PP
Since files with the same inode are considered different when looking
for duplicates in a single directory, there are strange behaviors --
not bugs -- with hard links.
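The hard-link behavior above comes down to inode identity. As an
illustration only (a Python sketch, not part of \fBfinddup\fP; the
helper name is made up):

```python
import os

def same_inode(path_a, path_b):
    # Hypothetical helper, not finddup's code: two paths refer to the
    # same underlying file (e.g. hard links to each other) when they
    # share both device and inode numbers.
    sa, sb = os.stat(path_a), os.stat(path_b)
    return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)
```

Hard links made with ln(1) satisfy this test; two independent copies
with identical content do not.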
.PP
The current algorithm is dumb, as it does not use any hashing of the
file content.
.PP
Here are the things I tried, which did not help at all: (1) computing
MD5s on the whole files, which is not satisfactory because files are
often not read entirely, hence the MD5s cannot be properly computed;
(2) computing XORs of the first 4, 16, and 256 bytes, with rejection
as soon as one does not match; (3) reading files in parts of
increasing sizes, so that rejection could be done with only a small
fraction read when possible; (4) using mmap instead of open/read.
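The size-first strategy described above can be sketched as follows.
This is a minimal Python illustration of the idea, not \fBfinddup\fP's
actual C implementation; the function name is made up, and it naively
reads whole files where finddup compares contents more carefully:

```python
import os
from collections import defaultdict

def find_duplicate_groups(root):
    # Sketch of size-first duplicate detection: files can only be
    # identical if their sizes match, so content is compared only
    # within groups of files that share a size.
    by_size = defaultdict(list)
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path) and not os.path.islink(path):
                by_size[os.path.getsize(path)].append(path)

    groups = []
    for paths in by_size.values():
        if len(paths) < 2:
            continue  # a unique size cannot have a duplicate
        by_content = defaultdict(list)
        for p in paths:
            with open(p, 'rb') as f:
                by_content[f.read()].append(p)
        groups.extend(g for g in by_content.values() if len(g) > 1)
    return groups
```

Each returned group plays the role of one content group ID in the
output described above.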
.PP
The format of the output should definitely be improved. Not clear how.
.PP
There could be some fancy option to link two instances of the command
running on different machines, to reduce network disk accesses. This
may not help much, though.
.SH "EXAMPLES"
List duplicated files in directory \fB./blah/\fR, show a progress
bar, ignore empty files, and ignore files and directories starting
with a dot.
.PP
.B finddup sources not:/mnt/backup
.PP
List all files found in \fB./sources/\fR which do not have a
content-matching equivalent in \fB/mnt/backup/\fR.
.PP
.B finddup -g tralala cuicui
.PP
List groups of files with the same content which exist both in
\fB./tralala/\fR and \fB./cuicui/\fR. Do not show group IDs; instead,
write empty lines between groups of files with the same content.
.SH "AUTHOR"
Written by Francois Fleuret <francois@fleuret.org> and distributed
under the terms of the GNU General Public License version 3 as
published by the Free Software Foundation. This is free software: you
are free to change and redistribute it. There is NO WARRANTY, to the
extent permitted by law.