15 results for Monotone splines
at Université de Lausanne, Switzerland
Abstract:
In this paper we propose a highly accurate approximation procedure for ruin probabilities in the classical collective risk model, based on a quadrature/rational approximation procedure proposed in [2]. For a certain class of claim size distributions (which contains the completely monotone distributions) we give a theoretical justification for the method. We also show that under weaker assumptions on the claim size distribution, the method may still perform reasonably well in some cases. This in particular provides an efficient alternative to a related method proposed in [3]. A number of numerical illustrations of the performance of this procedure are provided for both completely monotone and other types of random variables.
Abstract:
Motivation. The study of human brain development in its early stage is today possible thanks to in vivo fetal magnetic resonance imaging (MRI) techniques. A quantitative analysis of the fetal cortical surface represents a new approach which can be used as a marker of cerebral maturation (such as gyration) and for studying central nervous system pathologies [1]. However, this quantitative approach poses several major challenges. First, movement of the fetus inside the amniotic cavity requires very fast MRI sequences to minimize motion artifacts, resulting in poor spatial resolution and/or lower SNR. Second, due to the ongoing myelination and cortical maturation, the appearance of the developing brain differs greatly from the homogeneous tissue types found in adults. Third, due to low resolution, fetal MR images suffer considerably from partial volume (PV) effects, sometimes over large areas. Today, extensive efforts are devoted to the reconstruction of high-resolution 3D fetal volumes [2,3,4] to cope with intra-volume motion and low SNR. However, few studies address the automated segmentation of fetal MR images. [5] and [6] work on the segmentation of specific areas of the fetal brain such as the posterior fossa, brainstem or germinal matrix. First attempts at automated brain tissue segmentation were presented in [7] and in our previous work [8]. Both methods apply the Expectation-Maximization Markov Random Field (EM-MRF) framework, but contrary to [7] we do not require any anatomical atlas prior. Data set & Methods. Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single-shot fast spin echo (ssFSE) sequences (TR 7000 ms, TE 180 ms, FOV 40 x 40 cm, slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). Each fetus has 6 axial volumes (around 15 slices per volume), each acquired in about 1 min. Each volume is shifted by 1 mm with respect to the previous one. Gestational age (GA) ranges from 29 to 32 weeks.
The mother is under sedation. Each volume is manually segmented to extract the fetal brain from surrounding maternal tissues. Then, intensity inhomogeneity correction is performed using [9], and linear intensity normalization brings intensity values into the range 0 to 255. Note that due to intra-tissue variability of the developing brain, some intensity variability still remains. For each fetus, a high-spatial-resolution image with an isotropic voxel size of 1.09 mm is created applying [2], using B-splines for the scattered data interpolation [10] (see Fig. 1). Then, basal ganglia (BG) segmentation is performed on this super-reconstructed volume. An active contour framework with a Level Set (LS) implementation is used. Our LS follows a slightly different formulation from the well-known Chan-Vese formulation [11]: in our case, the LS evolves forcing the mean of the inside of the curve to be the mean intensity of the basal ganglia. Moreover, we add a local spatial prior through a probabilistic map created by fitting an ellipsoid onto the basal ganglia region. Some user interaction is needed to set the mean intensity of the BG (green dots in Fig. 2) and the initial fitting points for the probabilistic prior map (blue points in Fig. 2). Once the basal ganglia are removed from the image, brain tissue segmentation is performed as described in [8]. Results. The case study presented here has a GA of 29 weeks. The high-resolution reconstructed volume is presented in Fig. 1. The steps of BG segmentation are shown in Fig. 2. Overlap with manual segmentation is quantified by the Dice similarity index (DSI), equal to 0.829 (values above 0.7 are considered very good agreement). This BG segmentation has been applied to 3 other subjects ranging from 29 to 32 weeks GA, with DSI values of 0.856, 0.794 and 0.785. Our segmentation of the inner (red and blue contours) and outer cortical surface (green contour) is presented in Fig. 3.
Finally, to refine the results we feed our WM segmentation into the FreeSurfer software [12] and apply some manual corrections to obtain Fig. 4. Discussion. Precise cortical surface extraction of the fetal brain is needed for quantitative studies of early human brain development. Our work combines the well-known statistical classification framework with active contour segmentation for central gray matter extraction. A main advantage of the presented procedure for fetal brain surface extraction is that we do not include any spatial prior coming from anatomical atlases. The results presented here are preliminary but promising. Our efforts now focus on testing this approach on a wider range of gestational ages, to be included in the final version of this work, and on studying its generalization to different scanners and different types of MRI sequences. References. [1] Guibaud, Prenatal Diagnosis 29(4) (2009). [2] Rousseau, Acad. Rad. 13(9), 2006. [3] Jiang, IEEE TMI, 2007. [4] Warfield, IADB, MICCAI 2009. [5] Claude, IEEE Trans. Bio. Eng. 51(4) (2004). [6] Habas, MICCAI (Pt. 1), 2008. [7] Bertelsen, ISMRM 2009. [8] Bach Cuadra, IADB, MICCAI 2009. [9] Styner, IEEE TMI 19(3) (2000). [10] Lee, IEEE Trans. Visual. and Comp. Graph. 3(3), 1997. [11] Chan, IEEE Trans. Img. Proc. 10(2), 2001. [12] FreeSurfer, http://surfer.nmr.mgh.harvard.edu.
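The Dice similarity index used above to score the BG segmentation against the manual reference can be computed directly from two binary masks; a minimal sketch (the function name is ours):

```python
import numpy as np

def dice_index(mask_a, mask_b):
    """Dice similarity index between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

Applied voxel-wise to the automated and manual BG masks, values above 0.7 are conventionally read as very good agreement, as in the abstract.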
Abstract:
OBJECTIVES: To analyse the prevalence of lifetime recourse to prostitution (LRP) among men in the general population of Switzerland from a trend and cohort perspective. METHODS: Using nine repeated representative cross-sectional surveys from 1987 to 2000, age-specific estimates of LRP were computed. Trends and period effects were analysed as the evolution of cross-sectional population estimates within age groups and overall. Cohort analysis relied on cohorts constructed from the 1989 survey and followed in subsequent waves. Age and cohort effects were modelled using logistic regression and non-parametric monotone regression. RESULTS: As expected, prevalence was lower in the younger groups, but there was no consistent increasing or decreasing trend over the years and no significant period effect. For the 17-30 year age group, the mean estimate over 1987-2000 was 11.5% (range 8.3 to 12.7%); for the 31-45 year group, the mean was 21.5% (range over 1989-2000: 20.3 to 23.0%). Regarding cohort analysis, the prevalence of LRP was found to increase steeply at the youngest ages before reaching a plateau near the age of 40 years. At the age of 43 years, the prevalence was estimated to be 22.6% (95% CI 21.1% to 24.1%). CONCLUSIONS: The steep increase in the cohort-wise prevalence of LRP at younger ages calls for a concentration of prevention activities on young people. If the plateauing at approximately 40 years of age is not followed by a further increase later in life, which is not known, then consumers of paid sex would be repeat buyers only, a fact that should be taken into account by prevention.
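Non-parametric monotone regression of the kind used for the age and cohort effects is typically fitted with the pool-adjacent-violators algorithm (PAVA); a minimal pure-Python sketch, assuming an unweighted least-squares fit and a non-decreasing target (the function name is ours):

```python
def pava(y):
    """Pool Adjacent Violators Algorithm: least-squares fit of a
    non-decreasing sequence to the observations y (unweighted)."""
    blocks = []  # each block: [mean value, number of pooled points]
    for v in y:
        blocks.append([float(v), 1])
        # merge backwards while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, n2 = blocks.pop()
            v1, n1 = blocks.pop()
            blocks.append([(v1 * n1 + v2 * n2) / (n1 + n2), n1 + n2])
    fitted = []
    for v, n in blocks:
        fitted.extend([v] * n)
    return fitted
```

Applied to age-ordered prevalence estimates, this produces the monotone-then-plateau profile described in the results.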
Abstract:
We show that a simple mixing idea allows one to establish a number of explicit formulas for ruin probabilities and related quantities in collective risk models with dependence among claim sizes and among claim inter-occurrence times. Examples include compound Poisson risk models with completely monotone marginal claim size distributions that are dependent according to Archimedean survival copulas as well as renewal risk models with dependent inter-occurrence times.
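One way the mixing idea is commonly made explicit (a sketch under standard assumptions, not necessarily the paper's exact construction): by Bernstein's theorem, a completely monotone claim-size survival function is a mixture of exponentials,

\bar F(x) = \int_0^\infty e^{-\theta x}\, dG(\theta),

and if all dependence enters through a common latent mixing variable \Theta \sim G, then conditioning on \Theta = \theta reduces the model to a classical independent one, so the ruin probability is obtained by averaging the explicit conditional answers:

\psi(u) = \int_0^\infty \psi_\theta(u)\, dG(\theta),

where \psi_\theta is the classical ruin probability under exponential claims with rate \theta.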
Abstract:
OBJECTIVES: To examine trends in the prevalence of congenital heart defects (CHDs) in Europe and to compare these trends with the recent decrease in the prevalence of CHDs in Canada (Quebec) that was attributed to the policy of mandatory folic acid fortification. STUDY DESIGN: We used data for the period 1990-2007 for 47 508 cases of CHD not associated with a chromosomal anomaly from 29 population-based European Surveillance of Congenital Anomalies registries in 16 countries covering 7.3 million births. We estimated trends for all CHDs combined and separately for 3 severity groups using random-effects Poisson regression models with splines. RESULTS: We found that the total prevalence of CHDs increased during the 1990s and the early 2000s until 2004 and decreased thereafter. We found essentially no trend in total prevalence of the most severe group (group I), whereas the prevalence of severity group II increased until about 2000 and decreased thereafter. Trends for severity group III (the most prevalent group) paralleled those for all CHDs combined. CONCLUSIONS: The prevalence of CHDs decreased in recent years in Europe in the absence of a policy for mandatory folic acid fortification. One possible explanation for this decrease may be an as-yet-undocumented increase in folic acid intake of women in Europe following recommendations for folic acid supplementation and/or voluntary fortification. However, alternative hypotheses, including reductions in risk factors of CHDs (eg, maternal smoking) and improved management of maternal chronic health conditions (eg, diabetes), must also be considered for explaining the observed decrease in the prevalence of CHDs in Europe or elsewhere.
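A Poisson regression with spline terms of the kind used for the trend analysis can be sketched as follows; this simplified version omits the random effects and uses a hand-rolled linear spline basis fitted by Newton-Raphson (all names are ours, and the actual study used random-effects models):

```python
import numpy as np

def linear_spline_basis(x, knots):
    """Design matrix: intercept, x, and one hinge max(0, x - k) per knot."""
    cols = [np.ones_like(x), x] + [np.maximum(0.0, x - k) for k in knots]
    return np.column_stack(cols)

def fit_poisson(X, y, n_iter=50):
    """Poisson regression with log link, fitted by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # fitted means
        grad = X.T @ (y - mu)            # score vector
        hess = X.T @ (X * mu[:, None])   # observed information
        beta = beta + np.linalg.solve(hess, grad)
    return beta
```

With knots placed near the turning points (around 2000 and 2004 in the abstract), such a model can capture a prevalence trend that rises and then falls.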
Abstract:
Introduction. Development of the fetal brain surface with concomitant gyrification is one of the major maturational processes of the human brain. First delineated by postmortem studies or by ultrasound, MRI has recently become a powerful tool for studying in vivo the structural correlates of brain maturation. However, the quantitative measurement of fetal brain development is a major challenge because of the movement of the fetus inside the amniotic cavity, the poor spatial resolution, the partial volume effect and the changing appearance of the developing brain. Today, extensive efforts are made to deal with the "post-acquisition" reconstruction of high-resolution 3D fetal volumes based on several acquisitions with lower resolution (Rousseau, F., 2006; Jiang, S., 2007). We here propose a framework devoted to the segmentation of the basal ganglia, the gray-white tissue segmentation, and in turn the 3D cortical reconstruction of the fetal brain. Method. Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single-shot fast spin echo (ssFSE) sequences in fetuses aged from 29 to 32 gestational weeks (slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). For each fetus, 6 axial volumes shifted by 1 mm were acquired (about 1 min per volume). First, each volume is manually segmented to extract the fetal brain from surrounding fetal and maternal tissues. Intensity inhomogeneity correction and linear intensity normalization are then performed. A high-spatial-resolution image with an isotropic voxel size of 1.09 mm is created for each fetus as previously published by others (Rousseau, F., 2006). B-splines are used for the scattered data interpolation (Lee, 1997). Then, basal ganglia segmentation is performed on this super-reconstructed volume using an active contour framework with a Level Set implementation (Bach Cuadra, M., 2010). Once the basal ganglia are removed from the image, brain tissue segmentation is performed (Bach Cuadra, M., 2009).
The resulting white matter image is then binarized and further given as input to the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/) to provide accurate three-dimensional reconstructions of the fetal brain. Results. High-resolution images of the fetal brain, as obtained from the low-resolution acquired MRI, are presented for 4 subjects ranging in age from 29 to 32 weeks GA. An example is depicted in Figure 1. Accuracy of the automated basal ganglia segmentation is compared with manual segmentation using the Dice similarity index (DSI), with values above 0.7 considered very good agreement. In our sample we observed DSI values between 0.785 and 0.856. We further show the results of gray-white matter segmentation overlaid on the high-resolution gray-scale images. The results are visually checked for accuracy using the same principles as commonly accepted in adult neuroimaging. Preliminary 3D cortical reconstructions of the fetal brain are shown in Figure 2. Conclusion. We hereby present a complete pipeline for the automated extraction of an accurate three-dimensional cortical surface of the fetal brain. These results are preliminary but promising, with the ultimate goal of providing a "movie" of normal gyral development. In turn, a precise knowledge of normal fetal brain development will allow the quantification of subtle, early but clinically relevant deviations. Moreover, a precise understanding of the gyral development process may help to build hypotheses to understand the pathogenesis of several neurodevelopmental conditions in which gyrification has been shown to be altered (e.g. schizophrenia, autism...). References. Rousseau, F. (2006), 'Registration-Based Approach for Reconstruction of High-Resolution In Utero Fetal MR Brain Images', IEEE Transactions on Medical Imaging, vol. 13, no. 9, pp. 1072-1081. Jiang, S.
(2007), 'MRI of Moving Subjects Using Multislice Snapshot Images With Volume Reconstruction (SVR): Application to Fetal, Neonatal, and Adult Brain Studies', IEEE Transactions on Medical Imaging, vol. 26, no. 7, pp. 967-980. Lee, S. (1997), 'Scattered data interpolation with multilevel B-splines', IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 3, pp. 228-244. Bach Cuadra, M. (2010), 'Central and Cortical Gray Matter Segmentation of Magnetic Resonance Images of the Fetal Brain', ISMRM Conference. Bach Cuadra, M. (2009), 'Brain tissue segmentation of fetal MR images', MICCAI.
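The scattered-data interpolation step that fuses the shifted low-resolution slices onto an isotropic grid is, in the pipeline above, the multilevel B-spline method of Lee (1997). As a deliberately simpler stand-in that illustrates the same idea of interpolating scattered samples at arbitrary query positions, here is an inverse-distance-weighting sketch (not the method actually used; names are ours):

```python
import numpy as np

def idw_interpolate(points, values, queries, power=2.0, eps=1e-12):
    """Inverse-distance weighting of scattered samples at query sites:
    each query gets a weighted average of all sample values, with
    weights 1 / distance**power (near-zero distances dominate)."""
    d = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)
    return (w @ values) / w.sum(axis=1)
```

Unlike multilevel B-splines, this is O(samples x queries) and has no smoothness control, which is why hierarchical B-spline lattices are preferred for volume reconstruction.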
Abstract:
Study of probabilized Markovian Whittle models. Abstract: The probabilized Markovian Whittle model is a first-order simultaneous autoregressive spatial field model which expresses each variable of the field as a random weighted average of the adjacent variables of the field, damped by a multiplicative coefficient ρ and augmented by an error term (a spatially independent, homoscedastic Gaussian variable that is not directly measurable). In our case, the weighted average is an arithmetic mean that is random by virtue of two conditions: (a) two variables are adjacent (in the sense of a graph) with probability 1 − p if the distance separating them is below a certain threshold; (b) there is no adjacency for distances above this threshold. These conditions determine an adjacency model (or connectivity model) of the spatial field. A probabilized Markovian Whittle model with p = 0 yields a classical Whittle model, more familiar in geography, spatial econometrics, ecology, sociology, etc., whose ρ is the autoregression coefficient. Our model is thus a version of the classical Whittle models probabilized at the level of the field's connectivity, providing an innovative description of spatial autocorrelation. We begin by describing our spatial model, showing the effects of the complexity introduced by the connectivity model on the pattern of variances and the spatial correlation of the field. We then study the problem of estimating the autoregression coefficient ρ, for which we first carry out a thorough analysis of its information in the sense of Fisher and of Kullback-Leibler. We show that an efficient unbiased estimator of ρ has an efficiency that varies with the parameter p, generally in a non-monotone way, and with the structure of the adjacency network.
When the connectivity of the field is not observed, we show that a misspecified maximum likelihood estimator of ρ can be biased as a function of p. In this context we propose other approaches for estimating ρ. Finally, we study the power of significance tests for ρ whose test statistics are classical variants of Moran's I (the Cliff-Ord test) and of the maximal Moran's I (inspired by Kooijman's method). We observe how power varies with the parameter p and the coefficient ρ, thereby exhibiting the duality of spatial autocorrelation between intensity and connectivity in the context of autoregressive models.
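The probabilized Whittle field described above can be simulated directly: draw a random adjacency among sites within the distance threshold (each link kept with probability 1 − p), row-normalize it, and solve the simultaneous autoregression x = ρWx + ε. A sketch under these assumptions (function and parameter names are ours):

```python
import numpy as np

def sample_sar_field(coords, rho, threshold, p, sigma=1.0, seed=None):
    """Simulate a probabilized Whittle field x = rho * W x + eps:
    sites closer than `threshold` are linked with probability 1 - p,
    W row-normalizes the resulting random adjacency matrix, and
    eps is i.i.d. Gaussian noise. p = 0 gives the classical model."""
    rng = np.random.default_rng(seed)
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    close = (d > 0) & (d < threshold)
    link = np.triu(rng.random((n, n)) < 1.0 - p, 1)
    link = link | link.T                         # undirected links
    A = (close & link).astype(float)
    deg = A.sum(axis=1, keepdims=True)
    W = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)
    eps = rng.normal(0.0, sigma, n)
    return np.linalg.solve(np.eye(n) - rho * W, eps)
```

Repeated draws at fixed ρ but varying p make visible the interplay of autocorrelation intensity and connectivity that the thesis analyses.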
Abstract:
Introduction. This dissertation consists of three essays in equilibrium asset pricing. The first chapter studies the asset pricing implications of a general equilibrium model in which real investment is reversible at a cost. Firms face higher costs in contracting than in expanding their capital stock and decide to invest when their productive capital is scarce relative to the overall capital of the economy. Positive shocks to the capital of the firm increase the size of the firm and reduce the value of growth options. As a result, the firm is burdened with more unproductive capital and its value falls relative to its accumulated capital. The optimal consumption policy alters the optimal allocation of resources and affects the firm's value, generating mean-reverting dynamics for M/B ratios. The model (1) captures the convergence of price-to-book ratios (negative for growth stocks and positive for value stocks: firm migration), (2) generates deviations from the classic CAPM in line with the cross-sectional variation in expected stock returns, and (3) generates a non-monotone relationship between Tobin's q and conditional volatility consistent with the empirical evidence. The second chapter studies a standard portfolio-choice problem with transaction costs and mean reversion in expected returns. In the presence of transaction costs, no matter how small, arbitrage activity does not necessarily render all riskless rates of return equal. When two such rates follow stochastic processes, it is not optimal to immediately arbitrage out any discrepancy that arises between them. The reason is that immediate arbitrage would induce a definite expenditure on transaction costs, whereas, without arbitrage intervention, there exists some, perhaps sufficient, probability that the two interest rates will come back together without any costs having been incurred.
Hence, one can surmise that in equilibrium the financial market will permit the coexistence of two riskless rates that are not equal to each other. For analogous reasons, randomly fluctuating expected rates of return on risky assets will be allowed to differ even after correction for risk, leading to important violations of the Capital Asset Pricing Model. The combination of randomness in expected rates of return and proportional transaction costs is a serious blow to existing frictionless pricing models. Finally, in the last chapter I propose a two-country, two-good general equilibrium economy with uncertainty about the fundamentals' growth rates to study the joint behavior of equity volatilities and correlations at the business cycle frequency. I assume that dividend growth rates jump from one state to another, while the countries' switches are possibly correlated. The model is solved in closed form and analytical expressions for stock prices are reported. When calibrated to empirical data for the United States and the United Kingdom, the results show that, given the existing degree of synchronization across these business cycles, the model captures the historical patterns of stock return volatilities quite well. Moreover, I can explain the time behavior of the correlation, but only under the assumption of a global business cycle.
Abstract:
ABSTRACT: This thesis lies at the frontier of research in development economics and international trade, and aims to integrate the contributions of economic geography. The first chapter looks at trade creation and trade diversion effects within regional agreements between developing countries, combining a gravity approach with a non-parametric estimation of trade effects. This analysis confirms a non-monotone trade effect for six regional agreements covering Africa, Latin America and Asia (AFTA, CAN, CACM, CEDEAO, MERCOSUR and SADC) over the period 1960-1996. The agreements signed in the 1990s (AFTA, CAN, MERCOSUR and SADC) seem to have improved the welfare of their members, with a variable impact on the rest of the world, whereas the older agreements (CEDEAO and CACM) suggest that trade and welfare effects shrink and eventually vanish as the number of years of membership of the member states increases. The second chapter asks how geography affects South-South trade. It innovates relative to classical estimation methods by deriving a trade equation from the Armington hypothesis and incorporating a transport cost function that accounts for the specificities of the UEMOA countries. The estimates give convincing effects for the role of landlockedness and infrastructure: two landlocked UEMOA countries trade 92% less than any two other countries, crossing a transit country within the UEMOA area raises transport costs by 6%, and paving all inter-state roads of the Union would induce three times more intra-UEMOA trade. Chapter 3 examines the persistence of development differences within regional agreements between developing countries.
It shows that the differentiated geography of the Southern countries that are members of an agreement induces an asymmetric impact of the agreement on its members. A stylized three-country model is used, two of the countries having concluded a regional agreement. Simulation results show that a better infrastructure endowment allows a member of the regional agreement to attract a larger industrial share as transport costs within the agreement are lowered, leading to unequal development among the members. If domestic transport infrastructure levels are harmonized across the member countries of the integration agreement, their industrial shares may converge at the expense of the countries remaining outside the union. Chapter 4 turns to urban economics, studying how the interaction between increasing returns and transport costs determines the location of activities and workers within a country or region. The model developed reproduces a stylized fact observed within US metropolitan centers: over a long period (1850-1990), urban centers and their peripheries become increasingly specialized, while the population of urban centers relative to their peripheries first rises and then falls. This result can be transferred to a developing-country context with a core zone and a peripheral zone: as the accessibility of regions improves, they specialize, and the main region, initially larger (in terms of number of workers), eventually shrinks to a size identical to that of the peripheral region.
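The gravity approach invoked in the first chapters is conventionally written as (a generic textbook specification, not necessarily the thesis's exact equation):

\ln T_{ij} = \beta_0 + \beta_1 \ln Y_i + \beta_2 \ln Y_j - \beta_3 \ln \tau_{ij} + \varepsilon_{ij},

where T_{ij} is the trade flow from country i to country j, Y_i and Y_j are the partners' incomes, and \tau_{ij} is a transport-cost term that can be augmented with landlockedness, transit and road-infrastructure variables of the kind estimated for the UEMOA countries in Chapter 2.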
Abstract:
In this thesis, we study the behavioural aspects of agents interacting in queueing systems, using simulation models and experimental methodologies. Each period, customers must choose a service provider. The objective is to analyse the impact of the customers' and providers' decisions on the formation of queues. In a first setting we consider customers with a certain degree of risk aversion. Based on their perception of the average waiting time and of its variability, they form an estimate of the upper bound of the waiting time at each provider. Each period, they choose the provider for which this estimate is lowest. Our results indicate that there is no monotone relationship between the degree of risk aversion and overall performance. Indeed, a population of customers with an intermediate degree of risk aversion generally incurs a higher average waiting time than a population of risk-neutral or highly risk-averse agents. Next, we incorporate the providers' decisions by allowing them to adjust their service capacity based on their perception of the average arrival rate. The results show that customer behaviour and provider decisions exhibit strong path dependence. Furthermore, we show that the providers' decisions make the weighted average waiting time converge to the market's reference waiting time. Finally, a laboratory experiment in which subjects play the role of a service provider allowed us to conclude that capacity installation and dismantling lags significantly affect performance and the subjects' decisions.
In particular, the provider's decisions are influenced by its order backlog, its currently available service capacity, and the capacity adjustment decisions already taken but not yet implemented. - Queuing is a fact of life that we witness daily. We all have had the experience of waiting in line for some reason and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queuing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queuing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. The individual behaviour of the agents involved in queueing systems and their decision-making process have received little attention. Although this work has been useful to improve the efficiency of many queueing systems, or to design new processes in social and physical systems, it has only provided us with a limited ability to explain the behaviour observed in many real queues. In this dissertation we differ from this traditional research by analysing how the agents involved in the system make decisions instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010). We focus on studying behavioural aspects of queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results.
In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk aversion who take uncertainty into account. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that they do not wish to switch facility. In this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk aversion and system performance. Customers with an intermediate degree of risk aversion typically incur higher sojourn times; in particular they rarely achieve the Nash equilibrium.
Risk-neutral customers have the highest probability of achieving the Nash equilibrium. Chapter 3 considers a service system similar to the previous one but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherent they are in accounting for their previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked in" to a monopoly or duopoly situation. The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we found that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, take decisions in a laboratory regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010).
This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who currently do not patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns. These groups are labelled gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we illustrate that these decisions are positively correlated with the decisions taken one period earlier. Subsequently we formulate a heuristic to model the decision rule used by the subjects in the laboratory. We found that this decision rule fits very well those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects in the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical literature on queueing systems, which focuses on optimising performance measures and analysing equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages and accordingly there is large potential for further work spanning several research topics. Interesting extensions to this work include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information to take their decisions (e.g.
service price, quality, customers' profile); analysing different decision rules and studying other characteristics which determine the profile of customers and managers.
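The adaptive-expectations mechanism described above, in which a manager updates his perceived arrival rate as a weighted blend of his previous perception and the latest observation, can be sketched in a few lines. This is a minimal illustration, not the thesis's actual model: the function name, the parameter `alpha` (the updating speed), and the toy numbers are assumptions for the sketch.

```python
def update_perception(perceived: float, observed: float, alpha: float) -> float:
    """Adaptive expectations: move the old perception towards the new observation.

    alpha in (0, 1] is the updating speed; a small alpha corresponds to a
    conservative manager who discounts new information heavily.
    """
    return perceived + alpha * (observed - perceived)

# Illustrative run: a manager whose facility suddenly faces a higher arrival rate.
perceived = 10.0   # initial perceived arrival rate (customers/hour), assumed
observed = 16.0    # actual arrival rate after the shift, assumed
for _ in range(20):
    perceived = update_perception(perceived, observed, alpha=0.3)
# The perception converges geometrically towards the observed rate.
```

With a lower `alpha` the same loop converges more slowly, which is the sense in which a "conservative" manager reacts sluggishly to new information.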
Resumo:
The objective of this study was to evaluate the performance of stacked species distribution models in predicting the alpha and gamma species diversity patterns of two important plant clades along elevation in the Andes. We modelled the distribution of the species in the genus Anthurium (53 species) and the family Bromeliaceae (89 species) using six modelling techniques. We combined all of the predictions for the same species in ensemble models based on two different criteria: the average of the rescaled predictions by all techniques and the average of the best techniques. The rescaled predictions were then reclassified into binary predictions (presence/absence). By stacking either the original predictions or the binary predictions for both ensemble procedures, we obtained four different species richness models per taxon. The gamma and alpha diversity per 500 m elevation band were also computed. To evaluate the predictive ability of the four species richness and gamma diversity predictions, the models were compared with observed data along an elevation gradient compiled independently by specialists. Finally, we also tested whether our richness models performed better than a null model of altitudinal changes of diversity based on the literature. Stacking the ensemble predictions of the individual species models generated richness models that proved to be well correlated with the observed alpha diversity patterns along elevation and with the gamma diversity derived from the literature. Overall, these models tend to overpredict species richness. The use of ensemble predictions from species models built with different techniques seems very promising for modelling species assemblages. Stacking the binary models reduced the overprediction, although more research is needed. The randomisation test proved to be a promising method for testing the performance of the stacked models, but other implementations may still be developed.
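The stacking procedure described above can be sketched as follows: per-species ensemble probabilities at a site are either summed directly (stacked original predictions) or first thresholded into presence/absence and then counted (stacked binary predictions, which reduces overprediction). The threshold value and the toy probabilities below are illustrative assumptions, not values from the study.

```python
def ensemble(predictions):
    """Average the rescaled (0-1) predictions of several modelling techniques."""
    return sum(predictions) / len(predictions)

def stack_richness(species_probs, threshold=None):
    """Stack per-species predictions at one site into a richness estimate.

    With threshold=None the continuous probabilities are summed; with a
    threshold each species is first converted to binary presence/absence.
    """
    if threshold is None:
        return sum(species_probs)
    return sum(1 for p in species_probs if p >= threshold)

# Toy site: ensemble probabilities for five species, six techniques each.
site = [ensemble(p) for p in [
    [0.9, 0.8, 0.85, 0.9, 0.7, 0.95],   # clearly present
    [0.2, 0.1, 0.15, 0.3, 0.25, 0.2],   # clearly absent
    [0.6, 0.5, 0.55, 0.65, 0.4, 0.6],   # borderline
    [0.05, 0.1, 0.0, 0.1, 0.05, 0.0],
    [0.8, 0.9, 0.7, 0.85, 0.9, 0.75],
]]

continuous_richness = stack_richness(site)             # sums probabilities
binary_richness = stack_richness(site, threshold=0.5)  # counts thresholded species
```

The continuous sum typically exceeds the binary count at sites with many low-probability species, which is one mechanism behind the overprediction noted in the abstract.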
Resumo:
In vivo fetal magnetic resonance imaging provides a unique approach for the study of early human brain development [1]. In utero cerebral morphometry could potentially be used as a marker of cerebral maturation and help to distinguish between normal and abnormal development in ambiguous situations. However, this quantitative approach is a major challenge because of the movement of the fetus inside the amniotic cavity, the poor spatial resolution provided by very fast MRI sequences, and the partial volume effect. Extensive efforts are made to deal with the reconstruction of high-resolution 3D fetal volumes based on several acquisitions with lower resolution [2,3,4]. Frameworks were developed for the segmentation of specific regions of the fetal brain such as the posterior fossa, brainstem or germinal matrix [5,6], or for the entire brain tissue [7,8], applying the Expectation-Maximization Markov Random Field (EM-MRF) framework. However, many of these previous works focused on the young fetus (i.e. before 24 weeks) and use anatomical atlas priors to segment the different tissues or regions. As most of the gyral development takes place after the 24th week, a comprehensive and clinically meaningful study of the fetal brain should not dismiss the third trimester of gestation. To cope with the rapidly changing appearance of the developing brain, some authors proposed a dynamic atlas [8]. In our opinion, however, this approach faces a risk of circularity: each brain is analysed and deformed using the template of its biological age, potentially biasing the estimated developmental delay. Here, we expand our previous work [9] to propose a prior-free post-processing pipeline that allows a comprehensive set of morphometric measurements devoted to clinical applications.
Data set & Methods: Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single shot fast spin echo (ssFSE) sequences (TR 7000 ms, TE 180 ms, FOV 40 x 40 cm, slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). For each fetus, 6 axial volumes shifted by 1 mm were acquired under the mother's sedation (about 1 min per volume). First, each volume is segmented semi-automatically using region-growing algorithms to extract the fetal brain from the surrounding maternal tissues. Inhomogeneity intensity correction [10] and linear intensity normalization are then performed. Brain tissues (CSF, GM and WM) are then segmented based on the low-resolution volumes as presented in [9]. A high-resolution image with an isotropic voxel size of 1.09 mm is created as proposed in [2], using B-splines for the scattered data interpolation [11]. Basal ganglia segmentation is performed using a level set implementation on the high-resolution volume [12]. The resulting white matter image is then binarized and given as input to the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu) to provide topologically accurate three-dimensional reconstructions of the fetal brain according to the local intensity gradient. References: [1] Guibaud, Prenatal Diagnosis 29(4), 2009. [2] Rousseau, Acad. Rad. 13(9), 2006. [3] Jiang, IEEE TMI, 2007. [4] Warfield, IADB, MICCAI 2009. [5] Claude, IEEE Trans. Bio. Eng. 51(4), 2004. [6] Habas, MICCAI 2008. [7] Bertelsen, ISMRM 2009. [8] Habas, Neuroimage 53(2), 2010. [9] Bach Cuadra, IADB, MICCAI 2009. [10] Styner, IEEE TMI 19(3), 2000. [11] Lee, IEEE Trans. Visual. and Comp. Graph. 3(3), 1997. [12] Bach Cuadra, ISMRM 2010.
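The first step of the pipeline above, region growing to separate the fetal brain from surrounding maternal tissue, can be sketched as a breadth-first flood fill from a seed voxel with an intensity-similarity criterion. This is a simplified 2D sketch under assumed data, not the semi-automatic tool actually used: the toy "slice", the seed, and the tolerance are all illustrative.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    differs from the seed intensity by at most `tol` (BFS flood fill)."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Toy 2D "slice": a bright blob (brain) surrounded by darker maternal tissue.
slice_ = [
    [10, 10, 10, 10, 10],
    [10, 80, 85, 10, 10],
    [10, 82, 88, 84, 10],
    [10, 10, 86, 10, 10],
    [10, 10, 10, 10, 10],
]
brain = region_grow(slice_, seed=(2, 2), tol=15)  # extracts the 6-pixel blob
```

In practice the criterion, connectivity and seed placement are tuned interactively, which is why the abstract describes this step as semi-automatic.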
Resumo:
This study is a metrical and stylistic analysis of Vincenzo Monti's La Pulcella d'Orléans, a translation-rewriting of Voltaire's poem of the same name, La Pucelle d'Orléans, begun in Milan in 1798 and completed in Chambéry, Savoy, in 1799. The Italian text has been treated as an autonomous version with respect to the French one, given the particular choice to reduce the original philosophical and ideological component and to relate the model to a specific Italian literary tradition, chiefly through the adoption of a strongly marked stanzaic scheme. La Pulcella is translated into ottava rima, a chivalric metre that had possessed its own "grammar" for at least three centuries, along with a formidable tradition of reference. Moreover, with his translation the author sought to emphasise the most amusing and provocative aspects of the story of Joan of Arc, already narrated by Voltaire in an ironic and irreverent tone, with a view to extensive experimentation in language, metre and syntax. The translation of the Pucelle is in fact tied to a hedonistic and bookish dimension: it is neither a pretext for making a foreign work known, nor a text conceived for publication; it is rather a personal exercise, a private diversion, which remained in the author's drawer. Whereas for Voltaire the principal aim of the poem is the underlying ideological polemic, expressed in a sharply satirical register, for Monti the rewriting is a stylistic game, a literary indulgence, resting as much on the desacralising and provocative components as on the poetic and idyllic elements.
The French model is thus reworked, first of all, at the level of tone: on the one hand the translation narrows the ideological horizon and the historical perspective of the events; on the other it heightens Voltaire's most hedonistic and playful aspects by foregrounding the comic element, more colourful and open. Owing to the intimate dimension of this translation, the tradition of the Italian Pulcella today rests on only three manuscript witnesses, one of which, rediscovered in 1984, reopened the philological debate. For my thesis I used the critical edition currently available, printed in 1982 under the direction of M. Mari and G. Barbarisi, which is based on only two witnesses of the text; my work has nevertheless also tried to take the newly discovered autograph into account. This thesis on the Pulcella is organised into several chapters reflecting the structure of the analysis, based on the different levels of elaboration of the text. It opens with a general introduction, in which I situate the two versions, French and Italian, within literary history, while giving indications on the philological question concerning Monti's text. The following chapters then analyse four different aspects of the translation. First, the hendecasyllables of the poem: that is, the rhythm of the lines, the prosody, and the distribution of the different rhythmic modules across the positions of the octave. Voltaire's Pucelle is in fact written in decasyllables, a line traditionally rather rigid because its rhythm is cut by the caesura; in the translation the French line is rendered by the most celebrated measure of the Italian literary tradition, the hendecasyllable, a line that corresponds to the decasyllable only in its number of syllables but offers greater rhythmic freedom in the placement of accents.
The second chapter considers the metre of the octave, focusing on the internal syntactic organisation of the stanzas and on the links between them; it emerges that the stanzas are treated differently than in Voltaire. Indeed, unlike Monti's octaves, the French narration unfolds in each canto as an uninterrupted succession of lines, without breaks, thus delineating very unitary and linear textual structures. The third chapter analyses the enjambments of the Pulcella with the aim of revealing the syntactic links between lines and between octaves, links that are almost always absent in Voltaire. Finally, I studied the vocabulary of the poem, looking closely at the most expressive words in their comic and parodic aspect. Monti indeed seems to push the French text further by using a highly varied vocabulary embracing all registers of the Italian language, from the lowest, trivial, popular dimension up to the lyrical and literary level (less exploited by Voltaire), with a view to effects of comic and burlesque pastiche. From this stylistic analysis of the translation emerges a very interesting and singular aspect of Monti's rewriting, concerning the use of the hendecasyllable, of the octave, and of the vocabulary of the text. It is a constant play on voice, or rather a continuous variation between the different intonational planes, and on the word, which becomes more expressive, denser. Reading the text indeed presupposes an incessant melodic variation between the author's voice (in the form of narration and commentary) and the voices of the characters, heard in the numerous dialogues, but also a variation of tone between the literary lexical dimension and the lowest registers of the popular language.
From the point of view of syntax, compared with the French model (which is rather monotone and linear, based on a normal syntactic order, on the regular rhythm of the decasyllable and on a fairly ordinary language), Monti varies and ennobles the tone of the discourse through refined syntactic movements, more or less regular period constructions, and the introduction of clauses straddling the lines. The Italian discourse is indeed complicated by continual interruptions (which do not occur at the canonical places, but rather in the first part of the line or near its close) that mark changes of pace in the text (dialogues, narration, commentary): in short, continual accelerations and decelerations of the narrative occur, together with a play on the openings and closings of each line. All this is done through a search for expressiveness which, by working on the combination and collision of the different registers, destabilises the word and makes the writing unpredictable.
Resumo:
STUDY QUESTION: What are the long-term trends in the total (live births, fetal deaths, and terminations of pregnancy for fetal anomaly) and live birth prevalence of neural tube defects (NTD) in Europe, where many countries have issued recommendations for folic acid supplementation but a policy for mandatory folic acid fortification of food does not exist? METHODS: This was a population-based observational study using data on 11 353 cases of NTD not associated with chromosomal anomalies, including 4162 cases of anencephaly and 5776 cases of spina bifida, from 28 EUROCAT (European Surveillance of Congenital Anomalies) registries covering approximately 12.5 million births in 19 countries between 1991 and 2011. The main outcome measures were total and live birth prevalence of NTD, as well as of anencephaly and spina bifida, with time trends analysed using random effects Poisson regression models to account for heterogeneities across registries and splines to model non-linear time trends. SUMMARY ANSWER AND LIMITATIONS: Overall, the pooled total prevalence of NTD during the study period was 9.1 per 10 000 births. Prevalence of NTD fluctuated slightly but without an obvious downward trend, with the final estimate of the pooled total prevalence of NTD in 2011 similar to that in 1991. Estimates from Poisson models that took registry heterogeneities into account showed an annual increase of 4% (prevalence ratio 1.04, 95% confidence interval 1.01 to 1.07) in 1995-99 and a decrease of 3% per year in 1999-2003 (0.97, 0.95 to 0.99), with stable rates thereafter. The trend patterns for anencephaly and spina bifida were similar, but neither anomaly decreased substantially over time. The live birth prevalence of NTD generally decreased, especially for anencephaly. Registration problems or other data artefacts cannot be excluded as a partial explanation of the observed trends (or lack thereof) in the prevalence of NTD.
WHAT THIS STUDY ADDS: In the absence of mandatory fortification, the prevalence of NTD has not decreased in Europe despite longstanding recommendations aimed at promoting peri-conceptional folic acid supplementation and the existence of voluntary folic acid fortification. FUNDING, COMPETING INTERESTS, DATA SHARING: The study was funded by the European Public Health Commission, EUROCAT Joint Action 2011-2013. HD and ML received support from the European Commission DG Sanco during the conduct of this study. No additional data available.
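As a quick sanity check on the headline figure, the pooled total prevalence can be recomputed directly from the counts quoted in the abstract (11 353 NTD cases, of which 4162 anencephaly and 5776 spina bifida, among approximately 12.5 million births); the helper below is just that arithmetic, using the approximate denominator from the abstract.

```python
def prevalence_per_10000(cases: int, births: float) -> float:
    """Total prevalence expressed per 10 000 births."""
    return cases / births * 10_000

BIRTHS = 12_500_000  # approximate total births covered by the 28 registries

ntd = prevalence_per_10000(11_353, BIRTHS)          # ~9.1, matching the abstract
anencephaly = prevalence_per_10000(4_162, BIRTHS)   # ~3.3
spina_bifida = prevalence_per_10000(5_776, BIRTHS)  # ~4.6
```

The recomputed NTD figure rounds to 9.1 per 10 000 births, consistent with the pooled estimate reported above; the per-anomaly figures are approximations since the denominator is only quoted to two significant figures.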