Planned talks

Abstract: Monte Carlo simulation is the evaluation tool of choice when the assumptions on the mathematical model are not strong enough to allow an exact solution or classical numerical analysis, but also when the state space is too large for those same methods to give an answer in reasonable time. Estimating the probability of rare events by simulation is nevertheless difficult: to validate a probability of 10^-9, for instance, simulating the basic model requires a sample of size 10^9 to observe the event once on average, and far more to obtain a confidence interval.

In this talk, we will first present several contexts in which rare events typically arise. We will then explain why simulation is often required, and introduce the two main so-called acceleration techniques, i.e. techniques for improving efficiency. These are importance sampling, which modifies the probability law of the sample, introducing a bias that can be cancelled by changing the random variable under consideration, and trajectory splitting, which decomposes the rare event into a succession of less rare events: as soon as an intermediate event is reached, the trajectory is split into several new trajectories that attempt to reach the next event. We will also describe recent lines of research in these areas.
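As a concrete illustration of the first technique (a minimal sketch of my own, not taken from the talk), the following Python fragment estimates the Gaussian tail probability P(X > 6), which is of the order of 10^-9, by sampling from a shifted distribution and cancelling the bias with the likelihood ratio; naive simulation with the same sample size essentially never observes the event.

    import numpy as np

    rng = np.random.default_rng(0)
    a = 6.0      # rare-event threshold: P(X > 6) ~ 1e-9 for X ~ N(0, 1)
    n = 100_000  # far smaller than the ~1e9 samples naive simulation needs

    # Naive Monte Carlo: the event is almost never observed.
    naive = (rng.standard_normal(n) > a).mean()

    # Importance sampling: draw from N(a, 1), which hits the event about half
    # the time, and cancel the bias with the likelihood ratio
    #   w(y) = phi(y) / phi(y - a) = exp(a^2/2 - a*y).
    y = rng.standard_normal(n) + a
    weights = np.exp(a**2 / 2 - a * y)
    is_est = np.mean((y > a) * weights)

    print(f"naive estimate:               {naive:.3e}")
    print(f"importance sampling estimate: {is_est:.3e}")
    # Exact value for comparison: 1 - Phi(6) ~ 9.87e-10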

Abstract: Model checking is by now a well-established technique for verifying properties of discrete event systems. Properties are expressed as temporal logic formulas, and verification combines graph construction with graph exploration. In this talk, I will recall the basics of model checking and then indicate how to adapt the technique to discrete- and continuous-time Markov chains.
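To give a flavour of the probabilistic case (a sketch of mine under the standard definitions, not material from the talk), checking a reachability formula such as "eventually reach the goal state" on a discrete-time Markov chain reduces to solving a linear system for the reachability probabilities:

    import numpy as np

    # Hypothetical 4-state chain; state 3 is the target, state 2 is an
    # absorbing state from which the target is unreachable.  (Real model
    # checkers find the "never" states by graph analysis.)
    P = np.array([
        [0.5, 0.3, 0.1, 0.1],
        [0.2, 0.5, 0.2, 0.1],
        [0.0, 0.0, 1.0, 0.0],   # "bad" absorbing state
        [0.0, 0.0, 0.0, 1.0],   # target state
    ])
    target, never = {3}, {2}
    maybe = [s for s in range(len(P)) if s not in target | never]

    # x[s] = sum_t P[s,t] * x[t] on the "maybe" states, with x = 1 on the
    # target and x = 0 on "never"; rearranged: (I - P_maybe) x = b.
    A = np.eye(len(maybe)) - P[np.ix_(maybe, maybe)]
    b = P[np.ix_(maybe, list(target))].sum(axis=1)
    x = np.zeros(len(P))
    x[list(target)] = 1.0
    x[maybe] = np.linalg.solve(A, b)
    print(x)   # probability of eventually reaching the target, per state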

Abstract: The asymptotic behaviour of the eigenvalues of large random matrices has been extensively studied since the fifties. One of the first related results was the work of Eugene Wigner in 1955, who remarked that the eigenvalue distribution of a standard Gaussian Hermitian matrix converges to a deterministic probability distribution, the so-called semi-circular law, as the dimensions of the matrix grow to infinity. Since that time, the study of the eigenvalue distribution of random matrices has triggered numerous works, in the theoretical physics community as well as in probability theory. However, as far as communication systems are concerned, until 1996 exhaustive simulations were thought to be the only hope of getting some hints on how communication systems with many users/antennas behave, even with respect to very basic performance measures such as the bit error probability. All this changed in 1997, when large system analysis based on random matrix theory was discovered to be an appropriate tool to regain intuitive insight into communication systems, even if their performance measures happen to depend on thousands of parameters. The results led to very active research in many fields such as MIMO, CDMA and relay networks. In this talk, we will describe recent advances in the field and show the potential of random matrix theory as a tool for the design of wireless networks.
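Wigner's result is easy to observe numerically; the following short sketch (my illustration, not part of the abstract) compares the empirical spectrum of a large Gaussian Hermitian matrix, scaled by 1/sqrt(n), with the semi-circular density (1/(2*pi))*sqrt(4 - x^2) on [-2, 2]:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Hermitian matrix with complex Gaussian entries (off-diagonal variance 1).
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (A + A.conj().T) / 2
    eigs = np.linalg.eigvalsh(H) / np.sqrt(n)

    # Compare the histogram of eigenvalues to the semicircle density.
    hist, edges = np.histogram(eigs, bins=11, range=(-2.2, 2.2), density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    semicircle = np.sqrt(np.maximum(4 - centers**2, 0)) / (2 * np.pi)
    for x, h, s in zip(centers, hist, semicircle):
        print(f"x = {x:+.2f}   empirical = {h:.3f}   semicircle = {s:.3f}")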

Network Calculus can be presented as a theory for deriving deterministic bounds on network performance measures such as delay or backlog. The ability to analyse the worst-case behaviour of a network in this way makes the theory attractive for guaranteeing quality of service, both in large communication networks (such as the Internet) and in networks dedicated to real-time applications (for which safe performance guarantees are critical).

The theory rests on a formalism built on the (min,+) and (max,+) algebras. Its core idea is to combine curves that locally describe the shape of the traffic and of the service in order to derive the global behaviour of the network, and in particular bounds on end-to-end measures.
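As a minimal illustration (my example, using the standard textbook curves rather than anything specific to the talk): with a token-bucket arrival curve alpha(t) = b + r*t and a rate-latency service curve beta(t) = R*max(t - T, 0), the backlog bound is the maximal vertical deviation between the two curves and the delay bound the maximal horizontal deviation, with closed forms b + r*T and T + b/R when r <= R.

    import numpy as np

    b, r = 5.0, 1.0   # burst and sustained rate of the flow: alpha(t) = b + r*t
    R, T = 2.0, 3.0   # guaranteed rate and latency: beta(t) = R*max(t - T, 0)

    alpha = lambda t: b + r * t
    beta = lambda t: R * np.maximum(t - T, 0.0)

    t = np.linspace(0, 50, 100_001)              # long enough for beta to catch up
    backlog_bound = np.max(alpha(t) - beta(t))   # maximal vertical deviation

    # Horizontal deviation: for each start time s, the virtual delay is the
    # first time at which beta reaches the level alpha(s).
    s_grid = t[::1000]
    delays = [t[np.argmax(beta(t) >= a)] - s
              for s, a in zip(s_grid, alpha(s_grid))]
    delay_bound = max(delays)

    print(f"backlog bound: {backlog_bound:.2f}  (closed form b + r*T = {b + r*T:.2f})")
    print(f"delay bound:   {delay_bound:.2f}  (closed form T + b/R = {T + b/R:.2f})")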

After a brief introduction to Network Calculus, I will present various algorithmic questions raised by this theory. (min,+) algebra will of course play a role, but so will linear algebra, computational geometry and min-max theorems.

A very good introduction to Network Calculus is to browse the EPFL site starting from http://ica1www.epfl.ch/netcal/, which gathers a recent state of the art, tutorials and a reference book, “Network Calculus – A Theory of Deterministic Queuing Systems for the Internet” by J.-Y. Le Boudec and P. Thiran.

Abstract: We consider models of N interacting objects, where the interaction is via a common resource and the distribution of states of all objects. We introduce the key scaling concept of intensity; informally, the expected number of transitions per object per time slot is of the order of the intensity. We consider the case of vanishing intensity, i.e. the expected number of object transitions per time slot is o(N). We show that, under mild assumptions and for large N, the occupancy measure converges, in mean square (and thus in probability) over any finite horizon, to a deterministic dynamical system. The mild assumption is essentially that the coefficient of variation of the number of object transitions per time slot remains bounded with N. No independence assumption is needed anywhere. The convergence results allow us to derive properties valid in the stationary regime. We discuss when one can guarantee that a stationary point of the ODE is the large-N limit of the stationary probability distribution of the state of one object for the system with N objects. We use this to develop a critique of the fixed point method sometimes used in conjunction with the decoupling assumption.

This is joint work with Michel Benaim.
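For intuition, here is a toy two-state instance of this setting (my construction, not the paper's model): intensity 1/N is enforced by letting a single uniformly chosen object attempt a transition per slot, with jump probabilities driven by the occupancy measure m; after rescaling time (N slots per time unit), the simulated occupancy measure tracks the ODE dm/dt = (1 - m)(a*m + c) - b*m.

    import numpy as np

    rng = np.random.default_rng(0)
    N, horizon = 1000, 10.0
    a, b, c = 0.8, 0.8, 0.05            # illustrative transition parameters

    state = np.zeros(N, dtype=int)      # all objects start in state 0
    for _ in range(int(N * horizon)):
        i = rng.integers(N)             # intensity 1/N: one candidate per slot
        m = state.mean()                # occupancy measure (fraction in state 1)
        if state[i] == 0 and rng.random() < a * m + c:
            state[i] = 1
        elif state[i] == 1 and rng.random() < b:
            state[i] = 0

    # Euler integration of the limiting ODE on the rescaled time grid.
    m_ode, dt = 0.0, 1.0 / N
    for _ in range(int(N * horizon)):
        m_ode += dt * ((1 - m_ode) * (a * m_ode + c) - b * m_ode)

    print(f"simulated m(T) = {state.mean():.3f},  ODE m(T) = {m_ode:.3f}")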

Abstract: In this talk we will provide an overview of the main results available in the literature on optimal scheduling in a stochastic single server queue. The setting is as follows: jobs arrive at the server according to some stochastic process and request a randomly distributed service time. Upon receiving this amount of service, jobs depart and leave the queue. We will consider both the single-class and the multi-class setting. The objective is to determine the optimal scheduling policy that minimizes the number of jobs in the system.

We will first consider a single-class queue. Scheduling policies are typically classified as anticipating or non-anticipating. A policy is anticipating (non-anticipating) if information on the service times of jobs is (is not) available to the scheduler. We will show that, within the set of anticipating policies, the so-called Shortest Remaining Processing Time (SRPT) policy minimizes the number of jobs at every instant of time, for any arrival process and any service time distribution. Within the set of non-anticipating policies, the optimal policy depends on the characteristics of the service time distribution, but not on those of the arrival process. We will show that the so-called Gittins index policy minimizes the mean number of jobs in the system. When the service time distribution has a decreasing hazard rate, the so-called Least-Attained-Service (LAS) discipline, which gives full priority to the job with the least attained service, stochastically minimizes the number of jobs in the system. It was widely believed that size-based disciplines like SRPT or LAS would cause starvation of long jobs. Recent results on SRPT and LAS indicate that this is not the case, and that the performance perceived by long jobs remains satisfactory.
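To see the effect of SRPT concretely (a small event-driven sketch of my own, not from the talk), the following simulation compares the time-average number of jobs at load 0.9, with exponential service times, under FCFS and under preemptive SRPT:

    import heapq
    import numpy as np

    def simulate(policy, lam=0.9, n_jobs=100_000, seed=0):
        rng = np.random.default_rng(seed)
        arrivals = np.cumsum(rng.exponential(1 / lam, n_jobs))
        sizes = rng.exponential(1.0, n_jobs)   # mean size 1, so the load is lam

        t, area, queue, i = 0.0, 0.0, [], 0   # queue entries: (priority key, remaining work)
        while i < n_jobs or queue:
            next_arr = arrivals[i] if i < n_jobs else np.inf
            next_dep = t + queue[0][1] if queue else np.inf
            t_next = min(next_arr, next_dep)
            area += len(queue) * (t_next - t)  # integral of the number in system
            if queue:                          # the head was served during [t, t_next]
                key, rem = heapq.heappop(queue)
                rem -= t_next - t
                if rem > 1e-12:                # not finished: put it back
                    heapq.heappush(queue, (rem if policy == "SRPT" else key, rem))
            t = t_next
            if t_next == next_arr:             # new job joins the queue
                # SRPT priority: remaining work; FCFS priority: arrival time.
                key = sizes[i] if policy == "SRPT" else t
                heapq.heappush(queue, (key, sizes[i]))
                i += 1
        return area / t

    for policy in ("FCFS", "SRPT"):
        print(f"{policy}: mean number in system ~ {simulate(policy):.2f}")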

We will next consider a multi-class queue. The results we will cover apply to (i) a multi-class queue with exponential service times and a non-anticipating scheduling discipline, or (ii) a multi-class queue with general service times and a non-preemptive scheduling discipline. We will show that, under these assumptions, certain performance measures of interest (for example the per-class unfinished work) satisfy so-called strong conservation laws: the total performance over all job types is invariant with respect to the scheduling discipline, and the total performance over any subset (say A) of job classes is minimized by giving absolute priority to the classes in A over all other classes. Strong conservation laws have led to the development of the achievable region approach. Given a performance vector of interest and a system-wide performance measure expressed as a function of that vector, the achievable region approach determines the optimal control policy minimizing the system-wide performance by (1) characterizing the space of all achievable performance vectors and (2) optimizing the system-wide performance measure over that space. For example, under assumptions (i) or (ii), the achievable region approach provides an elegant proof of the optimality of the cμ rule.
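The cμ rule can be checked directly on a small example (my illustration, using Cobham's classical formula for mean waiting times in a non-preemptive multi-class M/M/1 priority queue, not material from the talk): brute force over all priority orders recovers the order by decreasing c_k*mu_k.

    from itertools import permutations

    # Cobham's formula for the mean waiting time of class k under a fixed
    # priority order:  W_k = W0 / ((1 - sigma_{k-1}) * (1 - sigma_k)),
    # sigma_k = rho_1 + ... + rho_k, W0 = sum_i lambda_i * E[S_i^2] / 2
    # (= lambda_i / mu_i^2 for exponential service times).
    lam = [0.2, 0.3, 0.1]   # per-class arrival rates (total load 0.55 < 1)
    mu  = [1.0, 2.0, 0.5]   # per-class service rates
    c   = [3.0, 2.0, 4.0]   # per-class holding costs

    W0 = sum(l / m**2 for l, m in zip(lam, mu))

    def weighted_cost(order):
        """Sum of c_k * lambda_k * W_k when classes get priority in 'order'."""
        cost, sigma = 0.0, 0.0
        for k in order:
            sigma_prev, sigma = sigma, sigma + lam[k] / mu[k]
            cost += c[k] * lam[k] * W0 / ((1 - sigma_prev) * (1 - sigma))
        return cost

    best = min(permutations(range(3)), key=weighted_cost)
    cmu_order = tuple(sorted(range(3), key=lambda k: -c[k] * mu[k]))
    print("optimal priority order:", best)        # brute force over all orders
    print("c*mu order:            ", cmu_order)   # coincides with the above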
