+#' Summary of the function execution flow:
+#' \enumerate{
+#' \item Compute and serialize all contributions, obtained through discrete wavelet
+#' decomposition (see Antoniadis et al. [2013])
+#' \item Divide series into \code{ntasks} groups to process in parallel. In each task:
+#' \enumerate{
+#' \item iterate the first clustering algorithm on inputs of size
+#' \code{nb_series_per_chunk}, then on its aggregated outputs
+#' \item optionally, if WER=="mix":
+#' a) compute the K1 synchrone curves,
+#' b) compute the WER distances (a K1 x K1 matrix) between medoids, and
+#' c) apply the second clustering algorithm (output: K2 indices)
+#' }
+#' \item Launch a final task on the aggregated outputs of all previous tasks:
+#' \code{ntasks}*K1 if WER=="end", \code{ntasks}*K2 otherwise
+#' \item Compute synchrones (sum of series within each final group)
+#' }
+#' \cr
+#' The main argument -- \code{series} -- has a somewhat misleading name, since it can
+#' be either a [big.]matrix, a CSV file, a connection, or a user function to retrieve
+#' the series. When \code{series} is given as a function, it must take a single
+#' argument, \code{indices}: an integer vector giving the indices of the curves to
+#' retrieve; see the SQLite example.
+#' WARNING: the return value must be a matrix (series in columns), or NULL if there
+#' are no matches.
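+#' As an illustration only -- a hedged sketch, where \code{ref_series} is a
+#' hypothetical in-memory matrix holding the data (series in columns) -- such an
+#' accessor could look like:
+#' \preformatted{
+#' getSeries <- function(indices)
+#' {
+#'   # Keep only indices that exist in the reference matrix
+#'   indices <- indices[indices <= ncol(ref_series)]
+#'   if (length(indices) == 0)
+#'     return (NULL)  # no matches: return NULL, not an empty matrix
+#'   as.matrix(ref_series[, indices])  # one series per column
+#' }
+#' }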
+#' \cr
+#' Note: since we make no assumptions on the initial data, contributions may not fit
+#' in RAM even once serialized. For example, 30e6 series of length 100,000 would
+#' lead to a contribution matrix of more than 4 GB. Therefore, it's safer to store
+#' contributions in (binary) files; that's what we do.
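+#' A rough sanity check of that figure (an assumption-laden sketch: one 8-byte
+#' double contribution per wavelet level, about floor(log2(100,000)) = 16 levels
+#' for series of length 100,000):
+#' \preformatted{
+#' n <- 30e6              # number of series
+#' d <- floor(log2(1e5))  # ~16 wavelet levels, one contribution each
+#' n * d * 8 / 1e9        # ~3.8e9 bytes, on the order of the 4 GB above
+#' }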
+#'
+#' @param series Access to the (time-)series, which can be of one of the three