## Borja Balle

Lancaster University

### Differential Privacy and Secure Multi-Party Computation in Linear Regression

I will present two recent works involving complementary approaches to privacy-preserving algorithms for linear regression. The first work is motivated by the use of reinforcement learning in medical domains, and addresses the problem of differentially private linear regression with correlated data. The second part of the talk will focus on the problem of multi-party private learning where sensitive training data is distributed among several parties. In this setting we will show how multiple cryptographic primitives can be combined to obtain a scalable linear regression protocol with privacy guarantees inspired by secure multi-party computation.
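As background for the first part, the standard way to make a regression estimate differentially private is to perturb its output with calibrated noise. The sketch below is a generic output-perturbation baseline for ridge regression using the Gaussian mechanism, not the speaker's algorithm; the L2 sensitivity bound is assumed to be known, which in practice requires bounding the data.

```python
import numpy as np

def dp_ridge(X, y, lam=1.0, epsilon=1.0, delta=1e-5, sensitivity=1.0, seed=0):
    """Ridge regression with (epsilon, delta)-DP via output perturbation.

    `sensitivity` is an assumed upper bound on how much the solution can
    change between neighbouring datasets (illustrative, not derived here).
    """
    d = X.shape[1]
    # Non-private ridge solution: (X^T X + lam I)^{-1} X^T y
    theta = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    # Gaussian-mechanism noise scale for the given privacy budget
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    rng = np.random.default_rng(seed)
    return theta + rng.normal(scale=sigma, size=d)
```

With a large privacy budget the noise vanishes and the estimate approaches the exact ridge solution; the talk's contributions concern the harder correlated-data and multi-party settings, which this baseline does not address.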

## Michele Donini

University College London

### Learning Deep Kernels in the Space of Dot Product Polynomials

Recent literature has shown the merits of having deep representations in the context of neural networks. An emerging challenge in kernel learning is the definition of similar deep representations. I will present a general methodology to define a hierarchy of base kernels with increasing expressiveness and combine them via Multiple Kernel Learning (MKL) with the aim of generating overall deeper kernels. As a leading example, this methodology is applied to learning the kernel in the space of Dot-Product Polynomials (DPPs), that is, positive combinations of homogeneous polynomial kernels (HPKs). I will show theoretical properties of the expressiveness of HPKs that make their combination empirically very effective. This can also be seen as learning the coefficients of the Maclaurin expansion of any positive definite dot-product kernel, thus making the proposed method generally applicable.
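The DPP construction described above can be sketched in a few lines: a DPP kernel is a non-negatively weighted sum of homogeneous polynomial kernels of increasing degree. In this minimal sketch the weights are fixed by hand for illustration, whereas the methodology in the talk learns them via MKL.

```python
import numpy as np

def hpk(X, Z, degree):
    # Homogeneous polynomial kernel of the given degree: (x . z)^degree
    return (X @ Z.T) ** degree

def dpp_kernel(X, Z, coeffs):
    # Dot-Product Polynomial: K = sum_s a_s (x . z)^s with a_s >= 0,
    # i.e. a positive combination of HPKs of degrees 0, 1, 2, ...
    return sum(a * hpk(X, Z, s) for s, a in enumerate(coeffs))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
coeffs = [0.5, 0.3, 0.2]  # hand-picked illustrative weights, not learned
K = dpp_kernel(X, X, coeffs)
```

Since each HPK Gram matrix is positive semidefinite (by the Schur product theorem) and the weights are non-negative, the combined matrix `K` is a valid kernel; learning `coeffs` amounts to learning a truncated Maclaurin expansion of a dot-product kernel.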

## Rémi Eyraud

University of Aix-Marseille

### Designing and Learning Substitutable Plane Graph Grammars

Grammar learning usually relies on a different framework than mainstream statistical machine learning. Indeed, even the simplest grammar classes (e.g. finite state automata) have infinite VC-dimension, which implies that principles like ERM are of little use. In this context, learning algorithms are often studied in a particular learning paradigm, namely 'identification in the limit', which under certain conditions has been shown to have links with the well-known PAC paradigm, while at the same time allowing interesting results for grammars.
Researchers in the field have mainly focused their attention on string grammars, with great positive results over the last decades. However, though graph grammars have been widely investigated for 40 years in the context of language and graph theory, few learning results exist for them. The main reasons come from complexity issues that are inherent when graphs, and a fortiori graph grammars, are considered. The picture is however different if one considers drawings of graphs rather than the graphs themselves. For instance, it has recently been proved that the isomorphism and pattern searching problems can be solved in polynomial time for plane graphs, that is, planar embeddings of planar graphs.
In this talk, after giving some global ideas about grammatical induction, I will introduce Plane Graph Grammars (PGGs) and detail how they differ from usual graph grammar formalisms, while at the same time sharing important properties with string context-free grammars. I will try to convince the audience that PGGs are well suited for learning by showing how recent results on string grammars can be extended to PGGs, providing a learning algorithm that identifies in the limit the class of substitutable plane graph languages. This algorithm runs in polynomial time under a reasonable restriction used for polynomial parsing, and the amount of data needed for convergence is comparable to that required in the case of strings.

## Massimiliano Pontil

University College London & IIT

### Multilinear models for machine learning

I will present some recent work on multilinear models, with particular focus on regularization methods that encourage low-rank tensors, and discuss their application to machine learning.