UCPH Statistics Seminar: Joseph Meyer and Ricardo Blum

There will be two talks, each 20 minutes long, with some time for discussion.


First talk

Speaker: Ricardo Blum from Heidelberg University

Title: Nonparametric regression using a variant of Random Forests

Abstract: We present a modification of the Random Forests algorithm for regression which aims, in particular, at better performance when the regression function contains interaction terms of order two or higher. Our method is motivated by finding an optimal double split, i.e. the best partition of a rectangular cell in the feature space into four cells using rectangular cuts. We present some simulation results. Furthermore, we discuss consistency of our method by adapting a consistency result for Random Forests from Chi et al. (2022).
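
As a rough illustration of the double-split idea (a sketch only, not the speaker's implementation), the following Python snippet brute-forces the pair of features and thresholds whose four resulting rectangular sub-cells minimise the within-cell sum of squared errors; the function names, the squared-error criterion, and the exhaustive search are assumptions made for the example.

import numpy as np

def sse(y):
    # Sum of squared errors around the cell mean (0 for an empty cell).
    return 0.0 if y.size == 0 else float(np.sum((y - y.mean()) ** 2))

def best_double_split(X, y):
    # Brute-force search over feature pairs (j1, j2) and thresholds (t1, t2);
    # returns the split whose four rectangular sub-cells have minimal total SSE.
    n, p = X.shape
    best_split, best_loss = None, np.inf
    for j1 in range(p):
        for j2 in range(j1 + 1, p):
            for t1 in np.unique(X[:, j1])[:-1]:
                left = X[:, j1] <= t1
                for t2 in np.unique(X[:, j2])[:-1]:
                    lower = X[:, j2] <= t2
                    loss = (sse(y[left & lower]) + sse(y[left & ~lower])
                            + sse(y[~left & lower]) + sse(y[~left & ~lower]))
                    if loss < best_loss:
                        best_split, best_loss = (j1, t1, j2, t2), loss
    return best_split

# Toy example: a pure interaction f(x) = x_1 * x_2, which no single
# axis-aligned split captures well but a double split can.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 3))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=100)
print(best_double_split(X, y))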


Second talk

Speaker: Joseph Meyer from Heidelberg University

Title: Optimal Convergence Rates of Deep Neural Networks in a Classification Setting

Abstract: We establish convergence rates, optimal up to a logarithmic factor, for a class of deep neural networks in a classification setting under a restriction sometimes referred to as the Tsybakov noise condition. We construct classifiers in a general setting in which the boundary of the Bayes rule can be approximated well by neural networks. Corresponding rates of convergence are proven with respect to the misclassification error. It is then shown that these rates are optimal in the minimax sense if the boundary satisfies a smoothness condition. Non-optimal convergence rates for this setting already exist in the literature; our main contribution lies in improving these rates and showing optimality, which was an open problem. Furthermore, we show almost optimal rates under some additional restrictions which circumvent the curse of dimensionality. For our analysis we require a condition which gives new insight into the restriction used: in a sense, it acts as a requirement for the "correct noise exponent" for a class of functions.
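
For orientation, one common formulation of the noise condition mentioned above (the exact variant used in the talk may differ) is stated in terms of the conditional class probability $\eta(x) = P(Y = 1 \mid X = x)$: there exist an exponent $q \ge 0$ and a constant $C > 0$ such that
\[
  P\bigl( \lvert \eta(X) - \tfrac{1}{2} \rvert \le t \bigr) \;\le\; C\, t^{q}
  \qquad \text{for all sufficiently small } t > 0 .
\]
Larger $q$ means the data concentrate less near the decision boundary, which typically yields faster achievable convergence rates.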