-######################################################################
-## INTRODUCTION
+I. INTRODUCTION
- This is the C++ implementation of the folded hierarchy of
- classifiers for cat detection described in
+ This is the open-source C++ implementation of the folded hierarchy
+ of classifiers for cat detection described in
F. Fleuret and D. Geman, "Stationary Features and Cat Detection",
Journal of Machine Learning Research (JMLR), 2008, to appear.
- Please cite this paper when referring to this software.
+ Please use that citation when referring to this software.
-######################################################################
-## INSTALLATION
+ Contact Francois Fleuret at fleuret@idiap.ch for comments and bug
+ reports.
- This program was developed on Debian GNU/Linux computers with the
- following main tool versions
-
- * GNU bash, version 3.2.39
- * g++ 4.3.2
- * gnuplot 4.2 patchlevel 4
+II. INSTALLATION
If you have installed the RateMyKitten images provided on
- http://www.idiap.ch/folded-ctf
+ http://www.idiap.ch/folded-ctf
in the source directory, everything should work seamlessly by
- invoking the ./run.sh script. It will
+ invoking the ./run.sh script.
+
+ It will
* Compile the source code entirely
* Run 20 rounds of training / test (ten rounds for each of HB and
H+B detectors with different random seeds)
- You can also run the full thing with the following commands if you
- have wget installed
+ You can run the full thing with the following commands if you have
+ wget installed
- > wget http://www.idiap.ch/folded-ctf/not-public-yet/data/folding-gpl.tgz
+ > wget http://www.idiap.ch/folded-ctf/data/folding-gpl.tgz
> tar zxvf folding-gpl.tgz
> cd folding
- > wget http://www.idiap.ch/folded-ctf/not-public-yet/data/rmk.tgz
+ > wget http://www.idiap.ch/folded-ctf/data/rmk.tgz
> tar zxvf rmk.tgz
> ./run.sh
Note that every one of the twenty rounds of training/testing takes
more than three days on a powerful PC. However, the script detects
already running computations by looking at the presence of the
- corresponding result directory. Hence, it can be run in parallel on
- several machines as long as they see the same result directory.
+ corresponding result directories. Hence, it can be run in parallel
+ on several machines as long as they see the same result directory.
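
   The skip-if-present logic can be sketched as follows. This is only
   an illustration of the mechanism, not the actual run.sh: the result
   directory naming scheme and the seed list are invented for the
   example.

```shell
# Hypothetical sketch of the guard described above: a round is skipped
# when its result directory already exists, so several machines sharing
# the same filesystem never start the same round twice. Directory names
# and seeds are invented for illustration; the real run.sh may differ.
for seed in 0 1 2 3 4 5 6 7 8 9; do
  dir="results/hb-seed-${seed}"   # hypothetical result directory name
  if [ -d "${dir}" ]; then
    echo "Skipping seed ${seed}: ${dir} already exists."
    continue
  fi
  mkdir -p "${dir}"               # claims the round for this machine
  echo "Would launch the training round for seed ${seed} in ${dir}."
done
```

   Creating the directory before the long computation starts is what
   makes the parallel launch safe in this sketch: a second machine
   scanning the same shared directory sees the round as taken almost
   immediately.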
When all or some of the experimental rounds are over, you can
- generate the ROC curves by invoking the ./graph.sh script.
+ generate the ROC curves by invoking the ./graph.sh script. You need
+ a fairly recent version of Gnuplot.
+
+ This program was developed on Debian GNU/Linux computers with the
+ following main tool versions
+
+ * GNU bash, version 3.2.39
+ * g++ 4.3.2
+ * gnuplot 4.2 patchlevel 4
- You are welcome to send bug reports and comments to fleuret@idiap.ch
+ Due to approximations in the optimized arithmetic operations with
+ g++, results may vary with different versions of the compiler
+ and/or different levels of optimization.
-######################################################################
-## PARAMETERS
+III. PARAMETERS
- To set the value of a parameter during an experiment, just add an
- argument of the form --parameter-name=value before the commands that
- should take into account that value.
+ To set the value of a parameter, just add an argument of the form
+ --parameter-name=value before the commands that should take it into
+ account.
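
   As an illustration only (the executable name, the pool file name,
   and the particular values below are assumptions for the example,
   not taken from this README), a command line mixing parameters and
   commands could look like this, with each --parameter-name=value
   applying to the commands that follow it:

```shell
# Illustrative only: "./folding" and "kitten.pool" are assumed names.
# Parameters placed before a command apply to that command and to the
# later ones on the same command line.
./folding \
    --pool-name=kitten.pool \
    --proportion-for-train=0.75 \
    open-pool \
    --wanted-true-positive-rate=0.75 \
    compute-thresholds
```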
   For every parameter below, the default value is given in
   parentheses.
* pool-name (no default)
- Where are the data to use
+ The scene pool file name.
* test-pool-name (no default)
- Should we use a separate pool file, and ignore proportion-for-test
- then.
+ Should we use a separate test pool file. If none is given, then
+ the test scenes are taken at random from the main pool file
+ according to proportion-for-test.
* detector-name ("default.det")
* tree-depth-max (1)
Maximum depth of the decision trees used as weak learners in the
- classifier. The default value corresponds to stumps.
+ classifier. The default value of 1 corresponds to stumps.
* proportion-negative-cells-for-training (0.025)
Overall proportion of negative cells to use during learning (we
- sample among them)
+     sample among them for boosting).
* nb-negative-samples-per-positive (10)
* nb-features-for-boosting-optimization (10000)
- How many pose-indexed features to use at every step of boosting.
+ How many pose-indexed features to look at for optimization at
+ every step of boosting.
* force-head-belly-independence ("no")
Should we force the independence between the two levels of the
     detector (i.e. make an H+B detector).
- * nb-weak-learners-per-classifier (10)
+ * nb-weak-learners-per-classifier (100)
- This parameter corresponds to the value U in the JMLR paper, and
- should be set to 100.
+ This parameter corresponds to the value U in the article.
* nb-classifiers-per-level (25)
- This parameter corresponds to the value B in the JMLR paper.
+ This parameter corresponds to the value B in the article.
- * nb-levels (1)
+ * nb-levels (2)
- How many levels in the hierarchy. This should be 2 for the JMLR
- paper experiments.
+ How many levels in the hierarchy.
- * proportion-for-train (0.5)
+ * proportion-for-train (0.75)
The proportion of scenes from the pool to use for training.
* write-parse-images ("no")
Should we save one image for every test scene with the resulting
- alarms.
+ alarms. This option generates a lot of images for every round and
+ is switched off by default. Switch it on to produce images such as
+ the full page of results in the paper.
* write-tag-images ("no")
Should we save the (very large) tag images when saving the
materials.
- * wanted-true-positive-rate (0.5)
+ * wanted-true-positive-rate (0.75)
What is the target true positive rate. Note that this is the rate
     without post-processing and without pose tolerance in the
     evaluation.
* progress-bar ("yes")
- Should we display a progress bar.
+ Should we display a progress bar during long computations.
-######################################################################
-## COMMANDS
+IV. COMMANDS
* open-pool
* compute-thresholds
- Compute the thresholds of the detector classifiers to obtain the
- required wanted-true-positive-rate
+ Compute the thresholds of the detector classifiers from the
+ validation set to obtain the required wanted-true-positive-rate.
* test-detector
Visit nb-wanted-true-positive-rates rates between 0 and
wanted-true-positive-rate, for each compute the detector
- thresholds on the validation set, estimate the error rate on the
- test set.
+ thresholds on the validation set and estimate the error rate on
+ the test set.
* write-detector
* write-pool-images
- Write PNG images of the scenes in the pool.
+ For every of the first nb-images of the pool, save one PNG image
+ with the ground truth, one with the corresponding referential at
+ the reference scale, and one with the feature material-feature-nb
+ from the detector. This last image is not saved if either no
+ detector has been read/trained or if no feature number has been
+ specified.
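
   For example (the executable name, the pool file name, and the value
   5 below are illustrative assumptions), dumping images for the first
   few scenes of the pool could look like:

```shell
# Illustrative only: "./folding" and "kitten.pool" are assumed names.
# --nb-images and --material-feature-nb are the parameters referred to
# in the description above.
./folding \
    --pool-name=kitten.pool \
    --nb-images=5 \
    open-pool \
    write-pool-images
```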
--
Francois Fleuret