Georgios is a postdoctoral researcher at the Institut de la Vision. Previously, he was a postdoctoral researcher in Prof. Dr. Stéphane Mallat's DATA team at École Normale Supérieure. He received the degree of Dr. rer. nat. from the University of Oldenburg on 1 November 2016. During his doctoral studies, he worked with Prof. Dr. Roland Memisevic at the University of Frankfurt (2012-2013), Prof. Dr. Bruno Olshausen at the Redwood Center for Theoretical Neuroscience at UC Berkeley (2013-2014), and Prof. Dr. Jörg Lücke at the University of Oldenburg (2014-2016). He holds an M.Sc. in Computational Science from the Department of Theoretical Physics at Goethe University Frankfurt and a Diploma from the Department of Mathematics of the Aristotle University of Thessaloniki.

His research interests revolve around sparse coding, graphical models, signal processing, deep learning, and theoretical neuroscience. He has studied their applications primarily on spike sorting, receptive field estimation, quantum chemistry, and natural image statistics.

- Artificial Intelligence
- Machine Learning
- Theoretical Neuroscience

Dr. rer. nat. in Machine Learning, 2016

Carl von Ossietzky University of Oldenburg

M.Sc. in Computational Science, 2012

Goethe University Frankfurt

Diploma in Mathematics, 2008

Aristotle University of Thessaloniki

- Machine Learning I and II @ Oldenburg.
- Machine and Deep Learning @ École Polytechnique.

Kymatio is a Python module for computing wavelet and scattering transforms.
It is built on top of PyTorch, but also has a fast CUDA …

We investigate the optimization of two probabilistic generative models with binary latent variables using a novel variational EM approach. The approach distinguishes itself from previous variational approaches by using latent states as variational parameters. Here we use efficient and general-purpose sampling procedures to vary the latent states, and investigate the black-box applicability of the resulting optimization procedure. For general-purpose applicability, samples are drawn from approximate marginal distributions of the considered generative model as well as from the model's prior distribution. As such, variational sampling is defined in a generic form and is directly executable for a given model. As a proof of concept, we then apply the novel procedure (A) to Binary Sparse Coding (a model with continuous observables), and (B) to basic Sigmoid Belief Networks (which are models with binary observables). Numerical experiments verify that the investigated approach efficiently and effectively increases a variational free-energy objective without requiring any additional analytical steps.
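The Binary Sparse Coding setting above can be illustrated with a minimal numpy sketch: binary latents drawn from a Bernoulli prior generate data through a linear dictionary, and candidate latent states are proposed by sampling from that prior, as in the prior-based variational sampling the abstract describes. All dimensions, the activation probability, and the acceptance rule here are hypothetical choices for illustration, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

D, H, N = 16, 8, 100          # observed dim, latent dim, data points (hypothetical)
W = rng.normal(size=(D, H))   # dictionary (generative weights)
pi = 0.2                      # Bernoulli prior activation probability
sigma = 0.1                   # observation noise std

# Generate data from the Binary Sparse Coding model: y = W s + Gaussian noise
S = (rng.random((N, H)) < pi).astype(float)      # binary latent states
Y = S @ W.T + sigma * rng.normal(size=(N, D))

def log_joint(y, s):
    """Unnormalized log p(y, s) under the toy model."""
    recon = W @ s
    ll = -0.5 * np.sum((y - recon) ** 2) / sigma**2          # Gaussian likelihood
    lp = np.sum(s * np.log(pi) + (1 - s) * np.log(1 - pi))   # Bernoulli prior
    return ll + lp

# Crude stand-in for variational sampling: propose latent states from the
# prior and keep a proposal whenever it raises the joint log-probability.
y = Y[0]
s = (rng.random(H) < pi).astype(float)
for _ in range(200):
    proposal = (rng.random(H) < pi).astype(float)
    if log_joint(y, proposal) > log_joint(y, s):
        s = proposal
```

The point of the sketch is only that proposals come from the model's own prior, so no model-specific derivations are needed to run the loop; the paper's actual procedure optimizes a variational free energy rather than this greedy acceptance rule.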

We present a machine learning algorithm for the prediction of molecule properties inspired by ideas from density functional theory (DFT). Using Gaussian-type orbital functions, we create surrogate electronic densities of the molecule from which we compute invariant “solid harmonic scattering coefficients” that account for different types of interactions at different scales. Multilinear regressions of various physical properties of molecules are computed from these invariant coefficients. Numerical experiments show that these regressions have near state-of-the-art performance, even with relatively few training examples. Predictions over small sets of scattering coefficients can reach a DFT precision while being interpretable.
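The surrogate-density idea can be sketched in a few lines of numpy: place a normalized Gaussian on each atom, weighted by its nuclear charge, and sum them on a grid. Real Gaussian-type orbital bases use several contracted Gaussians per atom with element-specific exponents; the single isotropic Gaussian and its width below are purely illustrative.

```python
import numpy as np

# Toy surrogate electronic density: one isotropic Gaussian per atom,
# weighted by nuclear charge. Geometry and width are hypothetical.
positions = np.array([[0.0, 0.0, 0.0], [0.74, 0.0, 0.0]])  # H2-like geometry
charges = np.array([1.0, 1.0])
width = 0.5

# 32^3 grid covering both atoms, shape (32, 32, 32, 3)
axis = np.linspace(-2.0, 3.0, 32)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)

density = np.zeros(grid.shape[:3])
for r, z in zip(positions, charges):
    d2 = np.sum((grid - r) ** 2, axis=-1)
    # Normalized 3D Gaussian, so each term integrates to its charge z
    density += z * np.exp(-d2 / (2 * width**2)) / ((2 * np.pi * width**2) ** 1.5)
```

On a density like this, the paper computes solid harmonic scattering coefficients at multiple scales; the sketch stops at the density itself, whose integral recovers the total charge up to grid error.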

Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.
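A minimal sketch of the generative side of discrete sparse coding: each latent takes a value from a finite set according to learned prior probabilities, with most mass on zero to enforce sparsity, and observations are a noisy linear combination of dictionary elements. The alphabet, probabilities, and dimensions below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite latent alphabet with a learnable probability per value;
# concentrating mass on zero yields sparse codes. Values are illustrative.
values = np.array([0.0, 1.0, 2.0])
probs = np.array([0.8, 0.15, 0.05])   # prior probabilities, sum to 1

D, H, N = 12, 6, 500                  # observed dim, latent dim, data points
W = rng.normal(size=(D, H))           # dictionary
sigma = 0.05                          # observation noise std

# Generate data: discrete sparse latents, then y = W s + Gaussian noise
S = rng.choice(values, size=(N, H), p=probs)
Y = S @ W.T + sigma * rng.normal(size=(N, D))

# Empirical sparsity should be close to the prior mass on zero
sparsity = np.mean(S == 0.0)
```

Learning inverts this process: the paper fits both W and the prior probabilities with a truncated EM scheme (expectation truncation) adapted to a general discrete prior, so the prior's shape is estimated from data rather than assumed.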


ProSper is a Python library containing probabilistic algorithms to learn dictionaries. Given a set of data points, the implemented …

The wavelet scattering transform is an invariant signal representation suitable for many signal processing and machine learning …

We investigate the optimization of two probabilistic generative models with binary latent variables using a novel variational EM …

We present a machine learning algorithm for the prediction of molecule properties inspired by ideas from density functional theory …

We introduce a solid harmonic wavelet scattering representation, which is invariant to rigid movements and stable to deformations, for …