X-Git-Url: https://git.auder.net/?a=blobdiff_plain;f=vignettes%2Freport.Rmd;h=4de2b4dad48217ab20dbe617ebab76a89318368b;hb=5859426b074bdb7084627f8eeba806f479f04f05;hp=41497177de20a8c05c9e2bafbbe7bb51cd3dd9c6;hpb=3d5b50604d7e63fa0a0d6c37d34f6a4595bcfd34;p=morpheus.git

diff --git a/vignettes/report.Rmd b/vignettes/report.Rmd
index 4149717..4de2b4d 100644
--- a/vignettes/report.Rmd
+++ b/vignettes/report.Rmd
@@ -1,5 +1,5 @@
 ---
-title: morpheus...........
+title: Use the morpheus package
 
 output:
   pdf_document:
@@ -13,150 +13,100 @@ knitr::opts_chunk$set(echo = TRUE,
   include = TRUE,
   out.width = "100%", fig.align = "center")
 ```
 
-0) Tell that we try to learn classification parameters in a non-EM way, using algebric manipulations.
-1) Model.
-2) Algorithm (as in article)
-3) Experiments: show package usage
-
-# Expériences
-
-```{r, results="show", include=TRUE, echo=TRUE}
-library(Rmixmod) #to get clustering probability matrix
-library(ClusVis)
-library(FactoMineR) #for PCA
-
-plotPCA <- function(prob)
-{
-  par(mfrow=c(2,2), mar=c(4,4,2,2), mgp=c(2,1,0))
-  partition <- apply(prob, 1, which.max)
-  n <- nrow(prob)
-  K <- ncol(prob)
-  palette <- rainbow(K, s=.5)
-  cols <- palette[partition]
-  tmp <- PCA(rbind(prob, diag(K)), ind.sup=(n+1):(n+K), scale.unit=F, graph=F)
-  scores <- tmp$ind$coord[,1:3] #samples coords, by rows
-  ctrs <- tmp$ind.sup$coord #projections of indicator vectors (by cols)
-  for (i in 1:2)
-  {
-    for (j in (i+1):3)
-    {
-      absc <- scores[,i]
-      ords <- scores[,j]
-      xrange <- range(absc)
-      yrange <- range(ords)
-      plot(absc, ords, col=c(cols,rep(colors()[215],K),rep(1,K)),
-        pch=c(rep("o",n),rep(as.character(1:K),2)),
-        xlim=xrange, ylim=yrange,
-        xlab=paste0("Dim ", i, " (", round(tmp$eig[i,2],2), "%)"),
-        ylab=paste0("Dim ", j, " (", round(tmp$eig[j,2],2), "%)"))
-      ctrsavg <- t(apply(as.matrix(palette), 1,
-        function(cl) c(mean(absc[cols==cl]), mean(ords[cols==cl]))))
-      text(ctrsavg[,1], ctrsavg[,2], as.character(1:K), col=colors()[215])
-      text(ctrs[,i], ctrs[,j], as.character(1:K), col=1)
-      title(paste0("PCA ", i, "-", j, " / K=",K))
-    }
-  }
-  # TODO:
-  plot(0, xaxt="n", yaxt="n", xlab="", ylab="", col="white", bty="n")
-}
-
-plotClvz <- function(xem, alt=FALSE)
-{
-  par(mfrow=c(2,2), mar=c(4,4,2,2), mgp=c(2,1,0))
-  if (alt) {
-    resvisu <- clusvis(log(xem@bestResult@proba), xem@bestResult@parameters@proportions)
-  } else {
-    resvisu <- clusvisMixmod(xem)
-  }
-  plotDensityClusVisu(resvisu, positionlegend=NULL)
-  plotDensityClusVisu(resvisu, add.obs=TRUE, positionlegend=NULL)
-  # TODO:
-  plot(0, xaxt="n", yaxt="n", xlab="", ylab="", col="white", bty="n")
-  plot(0, xaxt="n", yaxt="n", xlab="", ylab="", col="white", bty="n")
-}
-
-grlplot <- function(x, K, alt=FALSE) #x: data, K: nb classes
-{
-  xem <- mixmodCluster(x, K, strategy=mixmodStrategy(nbTryInInit=500,nbTry=25))
-  plotPCA(xem@results[[1]]@proba)
-  plotClvz(xem, alt)
-}
-```
+## Introduction
+
-## Iris data
+*morpheus* is a contributed R package which estimates the parameters of a
+mixture of logistic classifiers.
+When the data under study come from several groups with different characteristics,
+mixture models are a very popular way to handle heterogeneity.
+Many algorithms have therefore been developed to deal with various mixture models.
+Most of them rely on maximum-likelihood or Bayesian methods, which are
+likelihood-based.
+*flexmix* is an R package which implements algorithms of this kind.
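+As an illustration of this likelihood-based approach, a two-component mixture of
+binary logistic regressions can be fitted by EM with *flexmix* as follows (a minimal
+sketch: the data frame `df` and its columns `Y`, `X1`, `X2` are hypothetical):
+
+```{r, eval=FALSE}
+library(flexmix)
+# EM fit of a mixture of two binary logistic regressions; the response is
+# given as a two-column matrix of counts (successes, failures).
+fm <- flexmix(cbind(Y, 1 - Y) ~ X1 + X2, data=df, k=2,
+  model=FLXMRglm(family="binomial"))
+parameters(fm) # regression coefficients, one column per component
+prior(fm)      # estimated mixture weights
+```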
-```{r, results="show", include=TRUE, echo=TRUE} -data(iris) -x <- iris[,-5] #remove class info -for (i in 3:5) -{ - print(paste("Resultats en", i, "classes")) - grlplot(x, i) -} -``` +However, one problem of such methods is that they can converge to local maxima, +so several starting points must be explored. +Recently, spectral methods were developed to bypass EM algorithms and they were proved +able to recover the directions of the regression parameter +in models with known link function and random covariates (see [XX]). +Our package extends such moment methods using least squares to get estimators of the +whole parameters (with theoretical garantees, see [XX]). +Currently it can handle only binary output $-$ which is a common case. + +## Model + +Let $X\in \R^{d}$ be the vector of covariates and $Y\in \{0,1\}$ be the binary output. +A binary regression model assumes that for some link function $g$, the probability that +$Y=1$ conditionally to $X=x$ is given by $g(\langle \beta, x \rangle +b)$, where +$\beta\in \R^{d}$ is the vector of regression coefficients and $b\in\R$ is the intercept. +Popular examples of link functions are the logit link function where for any real $z$, +$g(z)=e^z/(1+e^z)$ and the probit link function where $g(z)=\Phi(z),$ with $\Phi$ +the cumulative distribution function of the standard normal ${\cal N}(0,1)$. +Both are implemented in the package. + +If now we want to modelise heterogeneous populations, let $K$ be the number of +populations and $\omega=(\omega_1,\cdots,\omega_K)$ their weights such that +$\omega_{j}\geq 0$, $j=1,\ldots,K$ and $\sum_{j=1}^{K}\omega{j}=1$. +Define, for $j=1,\ldots,K$, the regression coefficients in the $j$-th population +by $\beta_{j}\in\R^{d}$ and the intercept in the $j$-th population by +$b_{j}\in\R$. Let $\omega =(\omega_{1},\ldots,\omega_{K})$, +$b=(b_1,\cdots,b_K)$, $\beta=[\beta_{1} \vert \cdots,\vert \beta_K]$ the $d\times K$ +matrix of regression coefficients and denote $\theta=(\omega,\beta,b)$. +The model of population mixture of binary regressions is given by: + +\begin{equation} +\label{mixturemodel1} +\PP_{\theta}(Y=1\vert X=x)=\sum^{K}_{k=1}\omega_k g(<\beta_k,x>+b_k). +\end{equation} + +## Algorithm, theoretical garantees + +The algorithm uses spectral properties of some tensor matrices to estimate the model +parameters $\Theta = (\omega, \beta, b)$. Under rather mild conditions it can be +proved that the algorithm converges to the correct values (its speed is known too). +For more informations on that subject, however, please refer to our article [XX]. +In this vignette let's rather focus on package usage. + +## Usage + + +The two main functions are: + * computeMu(), which estimates the parameters directions, and + * optimParams(), which builds an object \code{o} to estimate all other parameters + when calling \code{o$run()}, starting from the directions obtained by the + previous function. +A third function is useful to run Monte-Carlo or bootstrap estimations using +different models in various contexts: multiRun(). We'll show example for all of them. + +### Estimation of directions + +In a real situation you would have (maybe after some pre-processing) the matrices +X and Y which contain vector inputs and binary output. 
+### Monte-Carlo and bootstrap
+
-### finance dataset (from Rmixmod package)
-#
-#This dataset has two categorical attributes (the year and financial status), and four continuous ones.
-#
-#Warnings, some probabilities of classification are exactly equal to zero then we cannot use ClusVis
-#
-#```{r, results="show", include=TRUE, echo=TRUE}
-#data(finance)
-#x <- finance[,-2]
-#for (i in 3:5)
-#{
-#  print(paste("Resultats en", i, "classes"))
-#  grlplot(x, i, TRUE)
-#}
-#```
-#
-### "Cathy dataset" (12 clusters)
-#
-#Warnings, some probabilities of classification are exactly equal to zero then we cannot use ClusVis
-#
-#```{r, results="hide", include=TRUE, echo=TRUE}
-#cathy12 <- as.matrix(read.table("data/probapostCatdbBlocAtrazine-K12.txt"))
-#resvisu <- clusvis(log(cathy12), prop = colMeans(cathy12))
-#par(mfrow=c(2,2), mar=c(4,4,2,2), mgp=c(2,1,0))
-#plotDensityClusVisu(resvisu, positionlegend = NULL)
-#plotDensityClusVisu(resvisu, add.obs = TRUE, positionlegend = NULL)
-#plotPCA(cathy12)
-#```
-#
-### Pima indian diabete
-#
-#[Source.](https://gist.github.com/ktisha/c21e73a1bd1700294ef790c56c8aec1f)
-#
-#```{r, results="show", include=TRUE, echo=TRUE}
-#load("data/pimaData.rda")
-#for (i in 3:5)
-#{
-#  print(paste("Resultats en", i, "classes"))
-#  grlplot(x, i)
-#}
-#```
-#
-### Breast cancer
-#
-#[Source.](http://archive.ics.uci.edu/ml/datasets/breast+cancer+wisconsin+\%28diagnostic\%29)
-#
-#```{r, results="show", include=TRUE, echo=TRUE}
-#load("data/wdbc.rda")
-#for (i in 3:5)
-#{
-#  print(paste("Resultats en", i, "classes"))
-#  grlplot(x, i)
-#}
-#```
-#
-### House-votes
-#
-#```{r, results="show", include=TRUE, echo=TRUE}
-#load("data/house-votes.rda")
-#for (i in 3:5)
-#{
-#  print(paste("Resultats en", i, "classes"))
-#  grlplot(x, i)
-#}
-#```
+multiRun() repeats an estimation procedure (morpheus, or a competitor such as
+*flexmix*) on N samples, either drawn from the model (Monte-Carlo) or resampled
+from the data (bootstrap), so that the distributions of the resulting estimators
+can be compared; see `?multiRun` for its exact interface. A hand-rolled comparison
+with *flexmix* is sketched below.
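+The following sketch runs such a Monte-Carlo comparison by hand; multiRun() automates
+and parallelizes this pattern. The morpheus calls rely on the same assumed signatures
+as above, and since component labels are arbitrary, estimates are compared to the
+truth up to label switching:
+
+```{r, eval=FALSE}
+library(flexmix)
+N <- 20 # number of Monte-Carlo replications (small, for illustration)
+beta <- matrix(c(1,0,0,1), ncol=2) # true coefficients (d = K = 2)
+# Distance between two 2-column coefficient matrices, up to label switching
+err <- function(est, truth)
+  min(norm(est - truth, "F"), norm(est[,2:1] - truth, "F"))
+errs <- matrix(NA, N, 2, dimnames=list(NULL, c("morpheus","flexmix")))
+for (i in 1:N)
+{
+  io <- generateSampleIO(n=5000, p=1/2, beta=beta, b=c(0,0), link="logit")
+  # morpheus: spectral directions, then least-squares refinement
+  mu <- computeMu(io$X, io$Y, optargs=list(K=2))
+  o <- optimParams(io$X, io$Y, K=2, link="logit")
+  theta <- o$run(list(p=1/2, beta=mu, b=c(0,0)))
+  errs[i,"morpheus"] <- err(theta$beta, beta)
+  # flexmix: EM on the same sample (no intercept, as b = 0 in the simulation;
+  # assumes flexmix keeps both components)
+  df <- data.frame(Y=io$Y, io$X)
+  fm <- flexmix(cbind(Y, 1 - Y) ~ 0 + ., data=df, k=2,
+    model=FLXMRglm(family="binomial"))
+  errs[i,"flexmix"] <- err(parameters(fm), beta)
+}
+boxplot(errs, ylab="Frobenius error on beta, up to label switching")
+```
+
+Replacing generateSampleIO() by a resampling of the rows of (X, Y) turns the same
+loop into a bootstrap assessment of the estimators on a real dataset.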