Jan Draisma
Factor analysis addresses the problem of testing whether n observed random variables are conditionally independent given k hidden variables, called the factors. In the case where the joint distribution of all n + k variables is multivariate Gaussian, the parameter space F_{n,k} for the k-factor model is the set of (n × n)-covariance matrices of the form D + S, where D is diagonal and positive definite and S is positive semidefinite of rank at most k. An algebraic approach to factor analysis seeks to determine all polynomial relations among the matrix entries in F_{n,k}; these relations are called model invariants. Clearly, model invariants for F_{m,k} yield model invariants for F_{n,k} for any n ≥ m by taking principal (m × m)-submatrices. This leads to the natural question of whether, for fixed k and varying n, the model invariants are finitely generated in the natural manner. I will discuss a set-theoretic version of this question, which we recently settled.
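As a small illustration (not part of the abstract), the sketch below constructs a matrix in F_{4,1} with hypothetical numeric values and checks a classical model invariant of the one-factor model: for Σ = D + λλ^T, the off-diagonal entries are σ_{ij} = λ_i λ_j, so the tetrad σ_{12}σ_{34} − σ_{13}σ_{24} vanishes identically on F_{n,1}.

```python
import numpy as np

# Sketch: a matrix in F_{4,1}, i.e. Sigma = D + S with D diagonal
# positive definite and S = lam lam^T of rank at most 1.
# The numeric values below are arbitrary (hypothetical) choices.
rng = np.random.default_rng(0)
lam = rng.standard_normal(4)            # loading vector, S = lam lam^T
D = np.diag(rng.uniform(0.5, 2.0, 4))   # diagonal, positive definite
Sigma = D + np.outer(lam, lam)          # covariance matrix in F_{4,1}

# Off-diagonal entries of Sigma equal lam_i * lam_j, so the tetrad
# sigma_12 * sigma_34 - sigma_13 * sigma_24 is a model invariant:
# it vanishes (up to floating-point rounding) on all of F_{n,1}.
tetrad = Sigma[0, 1] * Sigma[2, 3] - Sigma[0, 2] * Sigma[1, 3]
print(abs(tetrad) < 1e-12)
```

Restricting to any principal (4 × 4)-submatrix of a larger Σ ∈ F_{n,1} gives the same vanishing, which is the inheritance of invariants from F_{m,k} to F_{n,k} mentioned above.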