Penalized Estimation in Large-Scale Generalized Linear Array Models

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Penalized Estimation in Large-Scale Generalized Linear Array Models. / Lund, Adam; Vincent, Martin; Hansen, Niels Richard.

In: Journal of Computational and Graphical Statistics, Vol. 26, No. 3, 2017, p. 709-724.

Research output: Contribution to journal › Journal article › Research › peer-review

Harvard

Lund, A, Vincent, M & Hansen, NR 2017, 'Penalized Estimation in Large-Scale Generalized Linear Array Models', Journal of Computational and Graphical Statistics, vol. 26, no. 3, pp. 709-724. https://doi.org/10.1080/10618600.2017.1279548

APA

Lund, A., Vincent, M., & Hansen, N. R. (2017). Penalized Estimation in Large-Scale Generalized Linear Array Models. Journal of Computational and Graphical Statistics, 26(3), 709-724. https://doi.org/10.1080/10618600.2017.1279548

Vancouver

Lund A, Vincent M, Hansen NR. Penalized Estimation in Large-Scale Generalized Linear Array Models. Journal of Computational and Graphical Statistics. 2017;26(3):709-724. https://doi.org/10.1080/10618600.2017.1279548

Author

Lund, Adam ; Vincent, Martin ; Hansen, Niels Richard. / Penalized Estimation in Large-Scale Generalized Linear Array Models. In: Journal of Computational and Graphical Statistics. 2017 ; Vol. 26, No. 3. pp. 709-724.

Bibtex

@article{80717681f4d44499909b7e6590e9beef,
title = "Penalized Estimation in Large-Scale Generalized Linear Array Models",
abstract = "Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage ofits tensor product design matrix can be impossible due to time and memory constraints, and previously considereddesign matrix free algorithms do not scale well with the dimension of the parameter vector. A newdesign matrix free algorithm is proposed for computing the penalized maximum likelihood estimate forGLAMs, which, in particular, handles nondifferentiable penalty functions. The proposed algorithm is implementedand available via the R package glamlasso. It combines several ideas—previously consideredseparately—to obtain sparse estimates while at the same time efficiently exploiting the GLAM structure.In this article, the convergence of the algorithm is treated and the performance of its implementation isinvestigated and compared to that of glmnet on simulated as well as real data. It is shown that the computationtime for glamlasso scales favorably with the size of the problem when compared to glmnet.Supplementary materials, in the form of R code, data, and visualizations of results, are available online",
keywords = "Generalized linear array models, Multidimensional smoothing, Penalized estimation, Proximal gradient algorithm",
author = "Adam Lund and Martin Vincent and Hansen, {Niels Richard}",
year = "2017",
doi = "10.1080/10618600.2017.1279548",
language = "English",
volume = "26",
pages = "709--724",
journal = "Journal of Computational and Graphical Statistics",
issn = "1061-8600",
publisher = "Taylor & Francis",
number = "3",

}

RIS

TY - JOUR

T1 - Penalized Estimation in Large-Scale Generalized Linear Array Models

AU - Lund, Adam

AU - Vincent, Martin

AU - Hansen, Niels Richard

PY - 2017

Y1 - 2017

N2 - Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension of the parameter vector. A new design matrix free algorithm is proposed for computing the penalized maximum likelihood estimate for GLAMs, which, in particular, handles nondifferentiable penalty functions. The proposed algorithm is implemented and available via the R package glamlasso. It combines several ideas—previously considered separately—to obtain sparse estimates while at the same time efficiently exploiting the GLAM structure. In this article, the convergence of the algorithm is treated and the performance of its implementation is investigated and compared to that of glmnet on simulated as well as real data. It is shown that the computation time for glamlasso scales favorably with the size of the problem when compared to glmnet. Supplementary materials, in the form of R code, data, and visualizations of results, are available online.

AB - Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension of the parameter vector. A new design matrix free algorithm is proposed for computing the penalized maximum likelihood estimate for GLAMs, which, in particular, handles nondifferentiable penalty functions. The proposed algorithm is implemented and available via the R package glamlasso. It combines several ideas—previously considered separately—to obtain sparse estimates while at the same time efficiently exploiting the GLAM structure. In this article, the convergence of the algorithm is treated and the performance of its implementation is investigated and compared to that of glmnet on simulated as well as real data. It is shown that the computation time for glamlasso scales favorably with the size of the problem when compared to glmnet. Supplementary materials, in the form of R code, data, and visualizations of results, are available online.

KW - Generalized linear array models

KW - Multidimensional smoothing

KW - Penalized estimation

KW - Proximal gradient algorithm

U2 - 10.1080/10618600.2017.1279548

DO - 10.1080/10618600.2017.1279548

M3 - Journal article

VL - 26

SP - 709

EP - 724

JO - Journal of Computational and Graphical Statistics

JF - Journal of Computational and Graphical Statistics

SN - 1061-8600

IS - 3

ER -

ID: 184322927
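
Note: the abstract states that computing and storing the tensor product design matrix of a GLAM can be infeasible, which motivates design matrix free algorithms. The R sketch below is a minimal illustration of that point only; it is not code from the paper or from the glamlasso package, and the dimensions and matrices are made up. It compares forming the full Kronecker product design matrix with computing the same linear predictor by multiplying each marginal design matrix onto its own dimension of the parameter array (in the spirit of the rotated H-transform used for array models).

set.seed(1)
n <- c(20, 15, 10)   # dimensions of the observation array
p <- c(6, 5, 4)      # dimensions of the parameter array
X1 <- matrix(rnorm(n[1] * p[1]), n[1], p[1])   # marginal design matrices
X2 <- matrix(rnorm(n[2] * p[2]), n[2], p[2])
X3 <- matrix(rnorm(n[3] * p[3]), n[3], p[3])
Theta <- array(rnorm(prod(p)), dim = p)        # parameter array

# Dense approach: the full tensor product design matrix. Its size grows as
# prod(n) x prod(p), which is what becomes infeasible for large-scale GLAMs.
X_full <- X3 %x% X2 %x% X1                     # 3000 x 120 in this small example
eta_dense <- X_full %*% as.vector(Theta)

# Design matrix free approach: never form X_full; instead multiply each
# marginal matrix onto the first dimension of the array and rotate.
rotate <- function(A) aperm(A, c(2, 3, 1))     # cycle the array dimensions
rh <- function(X, A) {                         # multiply X onto dimension 1, then rotate
  d <- dim(A)
  rotate(array(X %*% matrix(A, d[1], d[2] * d[3]), c(nrow(X), d[2], d[3])))
}
eta_array <- rh(X3, rh(X2, rh(X1, Theta)))

# Both routes give the same linear predictor.
all.equal(as.vector(eta_dense), as.vector(eta_array))   # TRUE

The sketch only illustrates the memory and computation argument for avoiding the full design matrix; the penalized estimation itself, including the proximal gradient treatment of nondifferentiable penalties, is what the article and the glamlasso package provide.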