+None known, probably many. Valgrind does not complain, though.
+
+Since files with the same inode are considered different when looking
+for duplicates in a single directory, hard links can lead to weird
+behaviors -- not bugs.
+
+The current algorithm is dumb, as it does not use any hashing of the
+file content.
+
+Here are the things I tried which did not help at all: (1) computing
+MD5 checksums of whole files, which is unsatisfactory because files
+are often not read entirely, hence the checksums cannot be computed
+without extra reads; (2) computing XORs of the first 4, 16, and 256
+bytes, with rejection as soon as one does not match; (3) reading
+files in chunks of increasing size, so that when possible rejection
+happens after reading only a small fraction; (4) using mmap instead
+of open/read.
+
+.SH "WISH LIST"
+
+The format of the output should definitely be improved, though it is
+not clear how.
+
+There could be some fancy option to link two instances of the command
+running on different machines to reduce network disk accesses. This
+may not help much, though.