Affiliations: Mathematical Institute of the Serbian Academy of Sciences and Arts
Title: Fast sparse Gaussian Markov Random fields learning based on Cholesky factorization
Journal: IJCAI International Joint Conference on Artificial Intelligence
First page: 2758
Last page: 2764
Conference: 26th International Joint Conference on Artificial Intelligence, IJCAI 2017; Melbourne, Australia; 19–25 August 2017
Issue Date: 1-Jan-2017
Rank: M30
ISBN: 978-0-999-24110-3
ISSN: 1045-0823
DOI: 10.24963/ijcai.2017/384
Abstract:
Learning a sparse Gaussian Markov Random Field, or equivalently, estimating a sparse inverse covariance matrix, is an approach to uncovering the underlying dependency structure in data. Most current methods solve the problem by optimizing the maximum likelihood objective with an L1 penalty (a Laplace prior) on the entries of the precision matrix. We propose a novel objective whose regularization term penalizes an approximate product of the Cholesky-decomposed precision matrix. This reparametrization of the penalty term allows efficient coordinate descent optimization, which, in synergy with an active-set approach, yields a very fast method for learning the sparse inverse covariance matrix. We evaluated the speed and solution quality of the proposed SCHL method on problems with up to 24,840 variables; our approach was several times faster than three state-of-the-art approaches. We also demonstrate that SCHL can be used to discover interpretable networks, by applying it to a high-impact problem from the health informatics domain.
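The abstract references the standard L1-penalized maximum likelihood objective for sparse precision estimation, and the idea of parametrizing the precision matrix through its Cholesky factor. Below is a minimal numpy sketch of that baseline objective evaluated through a Cholesky factor L (so that Theta = L Lᵀ is positive definite by construction, and log det Theta is cheap). This is only an illustration of the penalized objective from the abstract's setup; it is not the paper's SCHL penalty or optimizer, and the function name is hypothetical.

```python
import numpy as np

def penalized_neg_log_likelihood(L, S, lam):
    """L1-penalized negative Gaussian log-likelihood (baseline objective,
    not the SCHL penalty), parametrized by a lower-triangular Cholesky
    factor L with positive diagonal, so Theta = L @ L.T is positive
    definite:  -log det(Theta) + tr(S @ Theta) + lam * ||Theta||_1.
    Given L, log det(Theta) = 2 * sum(log(diag(L))) is cheap to compute.
    """
    theta = L @ L.T
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return -log_det + np.trace(S @ theta) + lam * np.abs(theta).sum()

# Toy usage: sample covariance of 100 draws over 3 variables,
# starting from the unpenalized MLE, Theta = S^{-1}.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
S = np.cov(X, rowvar=False)
L = np.linalg.cholesky(np.linalg.inv(S))
print(penalized_neg_log_likelihood(L, S, lam=0.1))
```

At lam = 0 and Theta = S⁻¹ the objective reduces to log det(S) + p (since tr(S S⁻¹) = p), which gives a quick sanity check on the implementation.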
Publisher: International Joint Conferences on Artificial Intelligence
Projects:
- DARPA, Grants FA9550-12-1-0406 and 66001-11-1-4183
- AFOSR, DARPA and the ARO, Grant No. W911NF-16-C-0050
- NSF BIG-DATA, Grant 14476570
- ONR, Grant N00014-15-1-2729