X-Git-Url: https://git.auder.net/?a=blobdiff_plain;f=TODO;h=3c1fd78ec8aadcc1ff7a9df076932c2db515f99b;hb=dc1aa85a96bbf815b0d896c22a9b4a539a9e8a9c;hp=662635d48ac9904c026063d48781fb4f8c4d01dc;hpb=7709d507dfab9256a401f2c77ced7bc70d90fec3;p=epclust.git

diff --git a/TODO b/TODO
index 662635d..3c1fd78 100644
--- a/TODO
+++ b/TODO
@@ -15,9 +15,54 @@ geometric structure of high dim data and dim reduction 2011
 https://docs.docker.com/engine/getstarted/step_one/
 
 To do:
- - finish the experiments (on number of classes, number of curves / chunk, number of procs)
+ - finish the experiments (on number of classes, number of curves / chunk, number of procs) and on other architectures
- - matrix -> binary interface
+
+in old_C_code/build:
+cmake ../stage1/src
+make
+
+in data/, launch R then:
+source("../old_C_code/wrapper.R")
+serialize("../old_C_code/build", "2009.csv", "2009.bin", 1)
+library(parallel)
+np = detectCores()
+nbSeriesPerChunk = 3000
+nbClusters = 20
+ppam_exe("../old_C_code/build", np, "2009.bin", nbSeriesPerChunk, nbClusters)
+C = getMedoids("../old_C_code/build", "ppamResult.xml", "ppamFinalSeries.bin")
+first100series = deserialize("../old_C_code/build", "2009.bin", "2009.csv.part", "1-100")
+distor = getDistor("../old_C_code/build", "ppamResult.xml", "2009.bin")
+
+- matrix -> binary interface
+  OK
+ - synchronous curve
+  ??
+
+Lead to explore for the comparisons: H2O
+
+return the number of individuals per class? (+ their sum?)
+assumption: data already ordered, 48 half-hours over 365 days
+use mixmod with elongated models
+must run on a fairly standard machine, for an ordinary ("lambda") user
+use Rcpp?
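The R session above drives the old C executable over chunks of 3,000 series with 20 clusters per chunk, then retrieves the medoids. As a language-neutral illustration of that chunked scheme, here is a minimal Python sketch; it substitutes a simplified Voronoi-iteration k-medoids for the real PAM implementation, and every name and size in it is illustrative, not the project's API:

```python
# Illustrative sketch of chunked clustering (NOT the ppam_exe pipeline):
# cluster each chunk with k-medoids, pool the per-chunk medoids, then
# run one more k-medoids pass on the pool.
import random

def kmedoids(series, k, n_iter=10, seed=0):
    """Simplified k-medoids (Voronoi iteration), a stand-in for PAM."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(series)), k)

    def dist(a, b):  # squared Euclidean distance between two curves
        return sum((x - y) ** 2 for x, y in zip(a, b))

    for _ in range(n_iter):
        # assign every series to its nearest medoid
        clusters = {m: [] for m in medoids}
        for i, s in enumerate(series):
            best = min(medoids, key=lambda m: dist(s, series[m]))
            clusters[best].append(i)
        # each medoid becomes the in-cluster point with minimal total distance
        medoids = [min(idxs, key=lambda c: sum(dist(series[c], series[j])
                                               for j in idxs))
                   for idxs in clusters.values() if idxs]
    return medoids

def cluster_in_chunks(series, chunk_size, k_per_chunk, k_final):
    """First stage per chunk, second stage on the pooled medoids."""
    pooled = []
    for start in range(0, len(series), chunk_size):
        chunk = series[start:start + chunk_size]
        k = min(k_per_chunk, len(chunk))
        pooled.extend(chunk[m] for m in kmedoids(chunk, k))
    return [pooled[m] for m in kmedoids(pooled, min(k_final, len(pooled)))]
```

For example, 20 short curves at two well-separated levels, clustered in two chunks of 10, reduce to 2 final medoids.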
+
+=====
+
+Strategies for upscaling
+From 25K to 25M: in 1000 chunks of 25K
+Reference values:
+  K0 = 200 super consumers (SC)
+  K* = 15 final clusters
+1st strategy
+  Do 1000 times ONLY Energycon's 1st-step strategy on 25K clients
+  With the 1000 × K0 SC, perform a 2-step run leading to K* clusters
+
+--> we need to arrange things so that
-Lead to explore for the comparisons: H2O
+2nd strategy
+  Do 1000 times Energycon's 2-step strategy on 25K clients, leading to
+  1000 × K* intermediate clusters
+  Treat the intermediate clusters as individual curves and perform a
+  single 2-step run to get K* final clusters
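A back-of-envelope check of what each strategy feeds into its second stage, using only the reference values quoted above (1000 chunks of 25K clients, K0 = 200, K* = 15); this is plain arithmetic, not code from the repository:

```python
# Sizes implied by the two upscaling strategies in the notes above.
n_chunks = 1000       # chunks of clients
chunk_size = 25_000   # clients per chunk
K0 = 200              # super consumers (SC) kept per chunk, 1st step
K_star = 15           # final clusters

total_clients = n_chunks * chunk_size           # 25M clients overall

# 1st strategy: only the 1st step per chunk, then one 2-step run on all SC.
sc_pool_strategy1 = n_chunks * K0               # series entering the final run

# 2nd strategy: a full 2-step run per chunk, then one 2-step run on the
# intermediate clusters treated as individual curves.
intermediate_strategy2 = n_chunks * K_star      # series entering the final run

print(total_clients, sc_pool_strategy1, intermediate_strategy2)
```

So the second stage of strategy 1 must handle 200,000 SC curves, while strategy 2 re-clusters only 15,000 intermediate curves, at the cost of running the full 2-step procedure on every chunk.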