My research activities are centered around *Decision Theory*. More
precisely, I am interested in the mathematical models underlying our decision
making processes. In a nutshell, suppose I ask you whether you want to play
a certain game (decision \(d_1\)) or not (decision \(d_2\)). If I do not tell you
which game it is, it will be difficult for you to tell me which decision
you prefer. Now suppose I explain that playing the game simply consists in
giving you $100. You will certainly tell me that you prefer decision \(d_1\) to
\(d_2\), which can be denoted mathematically by \(d_1 \succsim d_2\), where
\(\succsim\) is a binary relation over decisions. One question that naturally arises
is: why did you prefer \(d_1\) to \(d_2\)? Simply because the consequence (or outcome)
of taking this decision, i.e. winning $100, seems more appealing than that of
decision \(d_2\), i.e. winning nothing. Hence \(\succsim\), our preference relation
over decisions, reflects our preferences over the consequences of the decisions.
This explains why one aspect of my research concerns the mathematical models
representing preference relations over consequence sets.

Let us now change the above game as follows: instead of giving you $100, I toss a coin; if it lands heads, I give you $100, but if it lands tails, you lose $200. Now which decision do you prefer? Here, if the coin is fair, you will probably prefer \(d_2\) to \(d_1\) because there is a substantial chance that \(d_1\) results in losing $200. However, if the coin is such that the probability of heads is 99.99%, you may prefer \(d_1\) because the chance of losing $200 is very small. This means that the preference relation \(\succsim\) does not only take into account your preferences over outcomes; it also takes into account the uncertainty about the realisation of the outcomes. Probabilities are a popular model of uncertainty and, since Pearl's seminal book in the late 1980s, graphical models have been widely used to encode and compute them. This explains why my second research field concerns graphical models. It turns out that graphical models also prove useful for encoding preferences.
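The comparison above can be made concrete with a short numerical sketch. Assume, for illustration, that the Decision Maker simply ranks gambles by their expected monetary value (the function name and amounts below are just this example's):

```python
def expected_value(p_heads, win=100, loss=-200):
    """Expected monetary outcome of playing the coin-toss game (decision d1)."""
    return p_heads * win + (1 - p_heads) * loss

# Fair coin: playing has negative expectation, so d2 (not playing) looks better.
print(expected_value(0.5))      # -50.0

# Heavily biased coin: the expectation is now clearly positive, favouring d1.
print(expected_value(0.9999))   # ~ 99.97
```

Not playing yields $0, so under this (simplistic) criterion the fair coin makes \(d_2\) preferable and the biased coin makes \(d_1\) preferable, matching the intuition above.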

Mathematically, modeling preferences over a set of decisions or over a set of consequences merely amounts to finding a binary relation over pairs of decisions or pairs of consequences. For instance, assume that a Decision Maker (DM) has some preferences over a set \(X\) = {eat some chicken, eat some fish, eat an apple pie, eat some soup}. For any pair \((x,y)\) of elements of \(X\), the following may happen:

- the Decision Maker may not be able to compare \(x\) and \(y\), i.e., she cannot say whether she prefers \(x\) to \(y\) or \(y\) to \(x\). This may be the case for \(x\)="eat some soup" and \(y\)="eat an apple pie" since one is a starter and the other a dessert.
- the Decision Maker may prefer \(x\) to \(y\) or \(y\) to \(x\), or even be indifferent between \(x\) and \(y\).

Mathematically, the Decision Maker's preferences may be represented by a binary relation \(\succsim\) on \(X\) (i.e., a subset of \(X \times X\)) such that \(x \succsim y\) means the DM prefers \(x\) to \(y\) or is indifferent between \(x\) and \(y\). Thus \(x\) and \(y\) being incomparable is equivalent to Not (\(x \succsim y\)) and Not (\(y \succsim x\)). The strict preference of \(x\) over \(y\) is represented by \(x \succsim y\) and Not (\(y \succsim x\)), and the indifference between \(x\) and \(y\) is captured by \(x \succsim y\) and \(y \succsim x\).
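These definitions can be sketched in a few lines of code. The weak relation is stored as a set of pairs, and strict preference, indifference and incomparability are derived from it exactly as above (the menu items and the particular preferences are illustrative):

```python
X = ["chicken", "fish", "apple pie", "soup"]

# The weak relation as a set of pairs (x, y) meaning "x is at least as good as y".
# Here the DM weakly prefers chicken to fish, and cannot compare the soup
# (a starter) with the apple pie (a dessert): neither pair is in the relation.
weak = {(x, x) for x in X} | {("chicken", "fish")}

def prefers(x, y):            # x ≿ y
    return (x, y) in weak

def strictly_prefers(x, y):   # x ≻ y : x ≿ y and Not (y ≿ x)
    return prefers(x, y) and not prefers(y, x)

def indifferent(x, y):        # x ~ y : x ≿ y and y ≿ x
    return prefers(x, y) and prefers(y, x)

def incomparable(x, y):       # neither x ≿ y nor y ≿ x
    return not prefers(x, y) and not prefers(y, x)
```

For instance, `strictly_prefers("chicken", "fish")` holds, while `incomparable("soup", "apple pie")` captures the starter/dessert example.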

The qualitative nature of the binary relation \(\succsim\) makes it not particularly
well suited for fast computation, in particular when one wishes to find the best
decision under a given set of constraints. Hence, in practical situations, this
relation often needs to be encoded numerically. One of the most popular encodings is that
of *utility functions*, a.k.a. *utilities*. The idea is to use a
mapping \(u : X \to \mathbb{R}\) (the set of real numbers) assigning a number
to each element of \(X\), such that the higher the number, the more preferred the
element. More formally, \(u\) is defined by:
\[x \succsim y \Longleftrightarrow u(x) \geq u(y), \forall\ x,y \in X.\]
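Concretely, a small utility table suffices to recover \(\succsim\); the numbers below are purely illustrative. Note that a real-valued utility makes every pair of elements comparable, so genuine incomparability (as in the soup/apple-pie example) cannot be captured this way; indifference corresponds to equal utilities:

```python
# Illustrative utility table: the higher the number, the more preferred.
u = {"chicken": 3.0, "fish": 2.0, "apple pie": 1.5, "soup": 1.5}

def prefers(x, y):
    # x ≿ y  iff  u(x) >= u(y)
    return u[x] >= u[y]

# Strict preference: u(chicken) > u(fish).
assert prefers("chicken", "fish") and not prefers("fish", "chicken")
# Indifference: equal utilities make the two dishes mutually preferred.
assert prefers("apple pie", "soup") and prefers("soup", "apple pie")
```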

In practical situations, the DM's decisions take into account multiple conflicting objectives; hence the outcome set is a multidimensional space and the outcomes are tuples (of attributes) \(x = (x_1,\ldots,x_n)\). Thus utilities are functions of tuples, which raises a major problem when they are to be constructed. Indeed, as each Decision Maker has her own preferences, utilities differ from one DM to another, so the only way to construct the DM's utilities is to ask her questions. But when questions involve too many different attributes, they become too complicated for the human brain to handle, and the DM cannot give accurate answers. To simplify the questions, assumptions on the structure of the utilities need to be made. One of the most popular is the additive decomposition: \[u(x_1,\ldots,x_n) = u_1(x_1) + \cdots + u_n(x_n).\]
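As a sketch with three attributes (main course, dessert, drink; all names and numbers are made up), an additive utility only requires one small table per attribute rather than one value per full tuple:

```python
# Per-attribute utilities: one small table per attribute (illustrative values).
u1 = {"chicken": 2.0, "fish": 3.0}         # main course
u2 = {"apple pie": 1.0, "ice cream": 2.5}  # dessert
u3 = {"water": 0.0, "wine": 1.5}           # drink

def utility(main, dessert, drink):
    # Additive decomposition: u(x1, x2, x3) = u1(x1) + u2(x2) + u3(x3).
    return u1[main] + u2[dessert] + u3[drink]

# Eliciting 2 + 2 + 2 = 6 numbers is enough to rank all 2 * 2 * 2 = 8 meals.
print(utility("fish", "ice cream", "wine"))  # 7.0
```

The gain grows quickly: with \(n\) attributes of \(k\) values each, elicitation requires \(n \cdot k\) numbers rather than \(k^n\), which is what keeps the questions simple enough for the DM.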

During my PhD thesis, I studied testable conditions on the DM's preference relation ensuring the existence of additively decomposable utility functions (additive utilities for short). I am still interested in decompositions of utilities, but I now study more general decompositions such as generalized additive (GAI) decompositions: instead of splitting the overall utility into pieces, each depending on a single attribute, a GAI decomposition splits it into utilities over some (possibly overlapping) sets of attributes. For instance: \[u(a,b,c,d,e,f) = u_1(a,b,c) + u_2(c,d,e) + u_3(d,e,f).\]
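The GAI decomposition above can be sketched with binary attributes; the sub-utility tables below are purely illustrative:

```python
import itertools

# Illustrative sub-utilities over the overlapping subsets {a,b,c}, {c,d,e}, {d,e,f}.
u1 = {(a, b, c): a + 2*b + c for a, b, c in itertools.product((0, 1), repeat=3)}
u2 = {(c, d, e): c*d + e     for c, d, e in itertools.product((0, 1), repeat=3)}
u3 = {(d, e, f): d + e*f     for d, e, f in itertools.product((0, 1), repeat=3)}

def utility(a, b, c, d, e, f):
    # u(a,b,c,d,e,f) = u1(a,b,c) + u2(c,d,e) + u3(d,e,f);
    # attribute c is shared by u1 and u2, d and e by u2 and u3.
    return u1[(a, b, c)] + u2[(c, d, e)] + u3[(d, e, f)]

# "What is the DM's preferred alternative?" -- brute force here for clarity;
# GAI-nets exploit the graph structure to avoid enumerating all tuples.
best = max(itertools.product((0, 1), repeat=6), key=lambda t: utility(*t))
print(best)  # (1, 1, 1, 1, 1, 1)
```

Each sub-utility involves only three attributes, so eliciting it only requires questions about small tuples, even though the overall utility depends on six attributes.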

Among graphical models, I am especially interested in
Bayesian networks (BN) and GAI-nets. The former are powerful tools of the
*Artificial Intelligence* community enabling fast computation of
probabilities (marginal, conditional, a priori or a posteriori probabilities).
To achieve this result, BNs use graphical structures that encode knowledge
about the decomposition of joint probability distributions of interest.
GAI-nets are rather similar in spirit, except that instead of decomposing a
probability distribution over random variables, they decompose a utility
function over the attributes or criteria describing the alternatives.
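As a sketch of the decomposition a BN encodes, consider a hypothetical chain \(A \to B \to C\) over binary variables, whose joint distribution factors as \(P(a,b,c) = P(a)\,P(b \mid a)\,P(c \mid b)\); all the numbers below are made up:

```python
# Conditional probability tables for the chain A -> B -> C (illustrative).
p_a = {0: 0.4, 1: 0.6}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}  # p_b_given_a[a][b]
p_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}  # p_c_given_b[b][c]

def joint(a, b, c):
    # P(a,b,c) = P(a) * P(b|a) * P(c|b): the factorisation the graph encodes.
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# Marginal P(C=1), obtained by summing out A and B. In a BN this summation is
# organised along the graph, which is what makes inference fast on large models:
# small conditional tables replace a joint table that grows exponentially.
p_c1 = sum(joint(a, b, 1) for a in (0, 1) for b in (0, 1))
print(p_c1)  # ~ 0.34
```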

My current activities on graphical models can be divided into three main classes:

- So far, I have mainly been concerned with improving BN computation algorithms. In the literature, roughly two main approaches have emerged: directed and undirected methods. Undirected methods are commonly thought to outperform directed ones. One of my goals was to unify both types of methods and to show that directed methods can compete with undirected ones. By combining the advantages of each kind of method, this unification led to improvements in all the algorithms.
- As I mentioned, Bayesian networks are graphical structures representing the decomposition of joint probabilities; hence they are specific to each problem of interest. In theory, if you possess a sufficiently *good* database of values of the random variables you are interested in, computer programs should be able to learn the graph structure of the BN (using, for instance, statistical independence tests). In practice, however, such programs do not always produce high-quality graphs. This explains why I am working on learning methods.
- My last main research activity concerns GAI-nets. The idea is to take advantage of some decomposition of the Decision Maker's preference relation to elicit her preferences efficiently, and to quickly answer questions such as "does the Decision Maker prefer this alternative to that one?" or "what is the Decision Maker's preferred alternative?". Efficient elicitation procedures could prove useful, for instance, for web shopping sites.