
A common rule of thumb is to look for an inflection point in the scree plot to determine the number of components to keep (the rest are noise). One step of the alternating minimization algorithm is $C^{(n+1)} \leftarrow$ minimize over $C$ while holding $W^{(n+1)}$ constant. If the top two principal components capture a large majority of the variance, then the dataset is more-or-less two-dimensional. There are $n = 20$ observations, each with $p = 3$ features. For example, we might record $I$ neurons and estimate their firing rate at $J$ timepoints, or we might measure the expression of $J$ genes across $I$ cells. An example data matrix (left) with $n = 12$ observations and $p = 8$ features is approximated by the outer product $\mathbf{w}\mathbf{c}^T$ (middle), which produces a rank-one matrix (right). Many scientists are familiar with organizing and handling data in 2D tables.
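The rank-one approximation in that figure can be sketched in a few lines of Julia. The snippet below is an illustrative sketch (not the post's linked code): it builds a synthetic $12 \times 8$ data matrix and approximates it with the outer product $\mathbf{w}\mathbf{c}^T$ taken from the leading singular vectors; the random data and all variable names are assumptions made here.

```julia
using LinearAlgebra, Random

# Illustrative sketch: best rank-one approximation of a synthetic data matrix.
Random.seed!(1)
n, p = 12, 8                 # 12 observations, 8 features, as in the example figure
X = randn(n, p)              # synthetic data standing in for real measurements

# The leading singular vectors give the best rank-one approximation
# in the least-squares sense (Eckart–Young).
U, S, V = svd(X)
w = S[1] * U[:, 1]           # loading weights, one per observation
c = V[:, 1]                  # principal component, one entry per feature

X1 = w * c'                  # rank-one approximation of X
println("rank of approximation: ", rank(X1))
println("relative approximation error: ", norm(X - X1) / norm(X))
```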

The classic approach would be to compute the eigenvalues of $\mathbf{X}^T \mathbf{X}$ (the covariance matrix, with dimensions $p \times p$) and set $\mathbf{c}$ to the eigenvector associated with the largest eigenvalue. Gavish & Donoho (2014) present a long-overdue result on this problem, and their answer is surprisingly simple and concrete. We can organize the top $r$ principal components into a matrix $C = [\mathbf{c}_1, \mathbf{c}_2, \ldots, \mathbf{c}_r]$ and the loading weights into $W = [\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_r]$. Another popular dimensionality reduction technique is non-negative matrix factorization (NMF), which is similar to non-negative least-squares regression.
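As a hedged sketch of that classic eigenvector route, the Julia snippet below centers a synthetic $20 \times 3$ data matrix, takes the top-$r$ eigenvectors of $\mathbf{X}^T \mathbf{X}$ as the columns of $C$, and forms the loading weights $W = XC$; the synthetic data, the choice $r = 2$, and all variable names are assumptions for illustration only.

```julia
using LinearAlgebra, Statistics, Random

# Illustrative sketch: top-r principal components via the eigendecomposition
# of X'X (proportional to the covariance matrix once X is mean-centered).
Random.seed!(2)
n, p, r = 20, 3, 2
X = randn(n, p)
X = X .- mean(X, dims=1)          # center each feature (column)

E = eigen(Symmetric(X' * X))      # eigenvalues returned in ascending order
order = sortperm(E.values, rev=true)
C = E.vectors[:, order[1:r]]      # top-r principal components (p × r)
W = X * C                         # loading weights for each observation (n × r)

# Scree-plot quantity: fraction of variance captured by the top-r components.
explained = sum(E.values[order[1:r]]) / sum(E.values)
println("variance explained by top $r components: ", round(explained, digits=3))
```

In practice one would not form $\mathbf{X}^T \mathbf{X}$ explicitly but would take the SVD of $X$ directly, which is what library PCA routines typically do.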

This suggests an alternating minimization algorithm, which can work very well in practice. Let's assume that we solve the optimization problem (1) by some method. Even better, we can exploit the fact that these optimization problems are biconvex. This is more-or-less what happens under the hood when you call pca in MATLAB or Python: the eigendecomposition of the covariance matrix is computed via the singular value decomposition (SVD). This product is at most a rank-$r$ matrix (in this example, $r = 3$). Interestingly, the rest of the PCA variants listed in this post cannot be solved analytically. Opening up the black box on a statistical technique is worthwhile in and of itself, but the real reason I'm motivated to write this is the number of seriously cool and super useful extensions/variations of PCA, e.g., non-negative matrix factorization, sparse PCA (which encourages components to have many zero entries), and tensor decompositions. I aimed to be as pedagogical as possible in this post, but you will need to be familiar with some linear algebra to follow along. See Julia code here to reproduce this figure.
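To make the biconvex, alternating idea concrete, here is a minimal Julia sketch, assuming the model $X \approx W C^T$ with $W$ of size $n \times r$ and $C$ of size $p \times r$: each update is an ordinary least-squares solve for one factor while the other is held constant. The function name, the synthetic data, and the fixed iteration count are illustrative assumptions, not the post's linked code.

```julia
using LinearAlgebra, Random

# Illustrative sketch of alternating minimization for X ≈ W*C'.
# Each update solves a linear least-squares problem exactly.
function alternating_min(X, r; iters=100)
    n, p = size(X)
    W = randn(n, r)                 # random initialization
    C = randn(p, r)
    for _ in 1:iters
        W = X * C / (C' * C)        # minimize ‖X - W*C'‖² over W, holding C constant
        C = X' * W / (W' * W)       # minimize ‖X - W*C'‖² over C, holding W constant
    end
    return W, C
end

Random.seed!(3)
X = randn(20, 3) * randn(3, 8) + 0.1 * randn(20, 8)   # synthetic low-rank data plus noise
W, C = alternating_min(X, 3)
println("relative reconstruction error: ", norm(X - W * C') / norm(X))
```

This also shows why NMF is described above as similar to non-negative least-squares regression: constraining $W$ and $C$ to be non-negative turns each of these subproblems into a non-negative least-squares problem.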
