Abstract:
Traditional culture-dependent methods to quantify and identify airborne microorganisms are limited by factors such as short sampling times and the inability to count nonculturable or non-viable bacteria. Consequently, quantitative assessments of bioaerosols often underestimate the true load. Real-time quantitative polymerase chain reaction (Q-PCR) offers an alternative method for quantifying bacteria in environmental samples that should overcome this problem. The aim of this study was to evaluate the performance of a real-time Q-PCR assay as a simple and reliable way to quantify the airborne bacterial load within poultry houses and sewage treatment plants, in comparison with epifluorescence microscopy and culture-dependent methods. The bacterial load estimates obtained from real-time Q-PCR and epifluorescence methods are comparable; however, our analysis of sewage treatment plants indicates that these methods give values 270- to 290-fold greater than those obtained by the "impaction on nutrient agar" method. The culture-dependent method of air impaction on nutrient agar was also inadequate in poultry houses, as was the impinger-culture method, which gave a bacterial load estimate 32-fold lower than that obtained by Q-PCR. Real-time quantitative PCR thus proves to be a reliable, discerning, and simple method that could be used to estimate airborne bacterial load in a broad variety of other environments expected to carry high numbers of airborne bacteria.
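The fold differences above come from comparing copy-number estimates across methods. As a hedged illustration of how Q-PCR counts are typically derived, the sketch below converts a cycle-threshold (Ct) value to a copy number through a standard curve; the slope, intercept and Ct values are hypothetical, not taken from the study.

```python
# Hedged sketch: converting Q-PCR Ct values to gene copy numbers via a
# standard curve of the form Ct = slope * log10(copies) + intercept.
# The slope/intercept/Ct values below are hypothetical illustrations.
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Invert the standard curve to recover the copy number."""
    return 10 ** ((ct - intercept) / slope)

# Example: fold difference between two quantification methods
qpcr_est = copies_from_ct(20.0)   # ~2.6e5 copies (hypothetical Ct)
culture_est = qpcr_est / 280      # a culture count ~280-fold lower
fold = qpcr_est / culture_est
print(round(fold))                # 280
```

A slope of about -3.32 corresponds to 100% amplification efficiency, which is why it is a common default in such sketches.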
Abstract:
The Radioimmunotherapy Network (RIT-N) is a Web-based, international registry collecting long-term observational data on radioimmunotherapy-treated patients with malignant lymphoma outside randomized clinical studies. The RIT-N collects unbiased data on treatment indications, disease stages, patients' conditions, lymphoma subtypes, and hematologic side effects of radioimmunotherapy treatment. Methods: RIT-N is located at the University of Göttingen, Germany, and collected data from 14 countries. Data were entered by investigators into a Web-based central database managed by an independent clinical research organization. Results: A total of 1,075 patients were enrolled from December 2006 until November 2009, and 467 patients with an observation time of at least 12 mo were included in the following analysis. Diagnoses were as follows: 58% follicular lymphoma and 42% other B-cell lymphomas. The mean overall survival was 28 mo for follicular lymphoma and 26 mo for other lymphoma subtypes. Hematotoxicity was mild for hemoglobin (World Health Organization grade II), with a median nadir of 10 g/dL, but severe (World Health Organization grade III) for platelets and leukocytes, with median nadirs of 7,000/µL and 2.2/µL, respectively. Conclusion: Clinical usage of radioimmunotherapy differs from the labeled indications and can be assessed by this registry, enabling analyses of outcome and toxicity data beyond clinical trials. This analysis shows that radioimmunotherapy in follicular lymphoma and other lymphoma subtypes is a safe and effective treatment option.
Abstract:
Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: the Continuous Empirical Characteristic Function (ECF) estimator, based on the unconditional characteristic function, which in contrast to the other methods requires neither discretization nor simulation of the process. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each is written as an independent, self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index with a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, there is no way to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets.
The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has the ability to do so. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question arises immediately: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, due to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated. Thus, the computational effort can in some cases be reduced without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data.
It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for estimating the parameters of stochastic volatility jump-diffusion models.
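The core idea of the Continuous ECF estimator, matching a model characteristic function to the empirical one computed from the data, can be illustrated in miniature. The sketch below is a deliberately simplified toy, not the thesis's procedure: it fits only the volatility of a plain Gaussian return model by grid search, whereas the thesis works with the joint unconditional characteristic function of stochastic volatility jump-diffusion models. All numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=20000)   # simulated "returns", true sigma = 2

u = np.linspace(0.1, 1.0, 20)          # grid of characteristic-function arguments
ecf = np.exp(1j * np.outer(u, x)).mean(axis=1)   # empirical characteristic function

def model_cf(u, mu, sigma):
    """Characteristic function of a N(mu, sigma^2) variable."""
    return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2)

# Grid search minimizing the summed squared distance between the
# empirical and model characteristic functions over the u-grid.
sigmas = np.linspace(0.5, 4.0, 351)
obj = [np.sum(np.abs(ecf - model_cf(u, 0.0, s)) ** 2) for s in sigmas]
sigma_hat = sigmas[int(np.argmin(obj))]
print(round(sigma_hat, 2))             # close to the true value 2.0
```

In the thesis, this distance is integrated over a continuum of arguments (hence "Continuous" ECF) and the model characteristic function is the closed-form joint unconditional one, but the matching principle is the same.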
Abstract:
Recognition and identification processes for deceased persons. Determining the identity of deceased persons is a routine task performed essentially by police departments and forensic experts. This thesis highlights the processes necessary for the proper and transparent determination of the civil identities of deceased persons. The identity of a person is defined as the establishment of a link between that person ("the source") and information pertaining to the same individual ("identifiers"). Various forms of identity can emerge, depending on the nature of the identifiers; two distinct types are retained, namely civil identity and biological identity. The thesis examines four processes: identification by witnesses (the recognition process) and comparisons of fingerprints, dental data and DNA profiles (the identification processes). For the recognition process, the functioning of memory is examined, which helps to clarify the circumstances that may give rise to errors. To make the process more rigorous, a body-presentation procedure is proposed to investigators. Before examining the other processes, three general concepts specific to forensic science are considered with regard to the identification of a deceased person: matter divisibility (Inman and Rudin), transfer (Locard) and uniqueness (Kirk). These concepts can be applied to the task at hand, although some require a slightly broader scope of application. A cross-comparison of common forensic fields and the identification of deceased persons reveals certain differences, including (1) reverse positioning of the source (i.e. the source is not sought from traces; rather, the identifiers are obtained from the source); (2) the need for civil identity determination in addition to the individualisation stage; and (3) a more restricted population (a closed set rather than an open one).
For fingerprints, dental and DNA data, intravariability and intervariability are examined, as well as post-mortem (PM) changes in these identifiers. Ante-mortem (AM) identifiers are located and AM-PM comparisons are made. For DNA, it is shown that direct identifiers (taken from a person whose civil identity has been alleged) tend to lead to the determination of civil identity, whereas indirect identifiers (obtained from a close relative) lead towards a determination of biological identity. For each process, a Bayesian model is presented that includes the sources of uncertainty deemed relevant. The results of the different processes are then combined into a structured, summarised overall outcome and methodology. The modelling of dental data presents a specific difficulty with respect to intravariability, which is not in itself quantifiable. The concept of "validity" is therefore suggested as a possible solution to the problem: validity draws on various parameters that have an acknowledged impact on dental intravariability. In cases where identifying deceased persons proves extremely difficult because of the limited discriminating power of certain procedures, a Bayesian approach is of great value in providing a transparent, synthesised assessment.
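The Bayesian combination of identification evidence described above can be reduced to the standard posterior-odds form: posterior odds equal the likelihood ratio times the prior odds. The sketch below is a minimal illustration, not the thesis's actual models; the likelihood ratio and the closed-set prior are hypothetical numbers.

```python
# Hedged sketch of the Bayesian reasoning behind deceased-person
# identification: posterior odds = likelihood ratio * prior odds.
# All numerical values here are hypothetical illustrations.
def posterior_prob(lr, prior_prob):
    """Posterior probability of identity given a likelihood ratio and prior."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    post_odds = lr * prior_odds
    return post_odds / (1.0 + post_odds)

# Closed set of 50 candidates -> prior 1/50; DNA comparison with LR = 1e6
p = posterior_prob(1e6, 1.0 / 50.0)
print(round(p, 6))   # ≈ 0.999951
```

The closed-set structure noted in the abstract matters here: in a restricted population the prior probability per candidate is far higher than in an open set, so even evidence with modest discriminating power can shift the posterior substantially.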
Abstract:
Velopharyngeal insufficiency (VPI) is a structural or functional disorder that causes hypernasal speech. Velopharyngeal flaps, speech therapy and augmentation pharyngoplasty, using different implants, have all been used to address this problem. We present our results following rhinopharyngeal autologous fat injection in 18 patients with mild velopharyngeal insufficiency (12 soft palate clefts, 4 functional VPI, 2 myopathies). Twenty-eight injections were carried out between 2004 and 2007. The degree of hypernasal speech was evaluated pre- and postoperatively by a speech therapist and an ENT specialist and quantified by acoustic nasometry (Kay Elemetrics). All patients had been treated extensively with preoperative speech therapy (average, 8 years). The mean nasalance score was 37% preoperatively and 23% postoperatively (p = 0.015). Hypernasality was reduced postoperatively in all patients (1-3 degrees on the Borel-Maisonny score). There were no major complications and two minor complications (one hematoma, one case of cervical pain). Autologous fat injection is a simple, safe, minimally invasive procedure. It proves to be effective in cases of mild velopharyngeal insufficiency or after a suboptimal velopharyngoplasty.
Abstract:
INTRODUCTION: Radiosurgery (RS) is gaining increasing acceptance in the upfront management of brain metastases (BM). It was initially used in so-called radioresistant metastases (melanoma, renal cell, sarcoma) because it allowed a higher dose to be delivered to the tumor. Now, RS is also used for BM of other cancers. The risk of a high incidence of new BM raises the question of whether associated whole-brain radiotherapy (WBRT) is needed. Recent evidence suggests that RS alone avoids the cognitive impairment related to WBRT, and that the latter should be reserved for salvage therapy. Thus, the increasing use of RS for single and multiple BM raises new technical challenges for treatment delivery and dosimetry. We present our single-institution experience, focusing on the criteria that led to patients being selected for RS treatment with Gamma Knife (GK) in lieu of Linac. METHODS: A Leksell Gamma Knife Perfexion (Elekta, Sweden) was installed in July 2010. Currently, Swiss federal health care covers the costs of RS for BM with Linac but not with GK. Therefore, in our center, we always consider first the possibility of using Linac for this indication, and select patients for GK only in specific situations. All cases of BM treated with GK were retrospectively reviewed for the criteria leading to the GK indication, clinical information, and treatment data. Work in progress includes an a posteriori dosimetry comparison with our Linac planning system (Brainscan V.5.3, Brainlab, Germany). RESULTS: From July 2010 to March 2012, 20 patients had RS for BM with GK (7 patients with single BM, and 13 with multiple BM). During the same period, 31 had Linac-based RS. The primary tumor was melanoma in 9, lung in 7, renal in 2, and gastrointestinal tract in 2 patients.
In single BM, the reason for choosing GK was an anatomical location close to, or in, highly functional areas (1 motor cortex, 1 thalamic, 1 ventricular, 1 mesio-temporal, 3 deep cerebellar close to the brainstem), especially since most of these tumors were intended to be treated with high-dose RS (24 Gy at the margin) because of their histology (3 melanomas, 1 renal cell). In multiple BM, the reason for choosing GK in relation to the anatomical location of the lesions was either technical (limitations of Linac movements, especially in lower posterior fossa locations) or the closeness of multiple lesions to highly functional areas (typically, multiple posterior fossa BM close to the brainstem), precluding optimal dosimetry with Linac. Again, this was more critical for multiple BM needing high-dose RS (6 melanoma, 2 hypernephroma). CONCLUSION: Radiosurgery for BM may represent a technical challenge in relation to the anatomical location and multiplicity of the lesions. These considerations may be accentuated for so-called radioresistant BM, when higher-dose RS is needed. In our experience, the Leksell Gamma Knife Perfexion proves to be useful in addressing these challenges for the treatment of BM.
Abstract:
In the context of a warming climate, a "geosystemic" study of permafrost distribution across a whole alpine periglacial hillslope, from the rockwall to the rockglacier, is of great importance. With respect to this problem, the general objective of this PhD thesis is the study of talus slopes located within the discontinuous alpine permafrost belt along two different research axes: the analysis of the internal structure and permafrost distribution of high-altitude talus slopes and of the related processes; and the reconstruction of the palaeoenvironmental history of the alpine periglacial belt during the Lateglacial and the Holocene. The stratigraphy and permafrost distribution were studied in five talus slopes of the Valais Alps (Switzerland), using borehole data (from three of the five talus slopes) and detailed geophysical prospecting with thermal, electrical resistivity tomography (ERT), refraction seismic tomography (RST) and nuclear well-logging methods. The collected data show that, in all of the studied talus slopes, permafrost distribution is discontinuous and that none of the hillslopes is entirely occupied by permafrost. In particular, these data prove by direct investigation that, in a talus slope, permafrost can be present in the lower parts of the hillslope while absent in the upper parts.
Permafrost distribution in alpine talus slopes depends on three main controlling factors, acting alone or in combination, whose respective importance is variable: the chimney effect (ascending ventilation), the downslope increase in grain size, and the redistribution of snow by wind and avalanches. Depending on the size of the talus and on topographical and geomorphological heterogeneities, various cases are possible: one dominant controlling factor or a combination of several. Nevertheless, it would be an error to consider each controlling factor independently, without considering their relationships. Among these factors, the relationship between the chimney effect and grain size seems to be the most important in controlling the presence of permafrost in the lower part of periglacial talus slopes and its absence in the upper parts. Finally, the analysis of the talus structure shows that the permafrost stratigraphy may be an important element in interpreting the palaeoclimatic significance of an alpine talus slope. The second research axis focused on establishing a chronology of the Lateglacial glacier retreat and dating the development of rockglaciers and talus slopes in four regions of the Swiss Alps (the Mont Gelé - Mont Fort, Fontanesses and Chamosentse regions, in the Valais Alps, and the Cima di Gana Bianca Massif, in the Ticino Alps). The compilation of the dates acquired through the combined use of the palaeogeographical method and the Schmidt hammer indicates that most of the investigated active rockglaciers started to develop during the early phases of the Holocene or, at the latest, just after the early-to-mid Holocene Climatic Optimum (9.5-6.3 ka cal BP). Of the dated relict rockglaciers, most started to develop in the second half of the Lateglacial and probably became inactive at the beginning of the Holocene Climatic Optimum.
For the investigated talus slopes, the relative dating shows that their surfaces date from the period between the Boreal and the end of the late Atlantic, pointing out that rockwall retreat after the end of the Holocene Climatic Optimum must have been weak, and that the interval between the maximal and minimal ages is in most cases relatively short (4-6 millennia); the rockwall retreat during the development period of the talus slopes must therefore have been considerable. Thanks to the calculation of rockwall erosion rates based on the volume of the talus accumulations for four of the investigated hillslopes, it was possible to find evidence of the existence of "paraperiglacial rockfall phases" related to permafrost degradation in the rockwalls. These phases coincide with periods of rapid climate warming, as at the beginning of the Bølling, from the Preboreal to the end of the late Atlantic and, perhaps, since the 1980s.
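The rockwall erosion-rate calculation mentioned above essentially divides the porosity-corrected talus volume by the contributing rockwall area and the accumulation period. The sketch below illustrates that arithmetic with purely hypothetical values, not data from the studied slopes.

```python
# Hedged sketch of a rockwall retreat-rate calculation from a talus volume.
# All input values are hypothetical illustrations, not study data.
def retreat_rate_mm_per_yr(talus_volume_m3, porosity,
                           rockwall_area_m2, duration_yr):
    """Porosity-corrected talus volume / rockwall area / accumulation time."""
    rock_volume = talus_volume_m3 * (1.0 - porosity)   # solid rock only
    return rock_volume / rockwall_area_m2 / duration_yr * 1000.0

rate = retreat_rate_mm_per_yr(500_000, 0.3, 70_000, 10_000)
print(round(rate, 2))   # mm per year
```

The short accumulation intervals inferred for the talus slopes (4-6 millennia) enter this calculation as the duration term, which is why short intervals imply high retreat rates for a given volume.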
Abstract:
Background: Well-conducted behavioural surveillance (BS) is essential for policy planning and evaluation. Data should be comparable across countries. In 2008, the European Centre for Disease Prevention and Control (ECDC) began a programme to support Member States in the implementation of BS for Second Generation Surveillance. Methods: Data from a mapping exercise on current BS activities in EU/EFTA countries led to recommendations for establishing national BS systems and international coordination, and the definition of a set of core and transversal (UNGASS-Dublin compatible) indicators for BS in the general and eight specific populations. A toolkit for establishing BS has been developed and a BS needs-assessment survey has been launched in 30 countries. Tools for BS self-assessment and planning are currently being tested during interactive workshops with country representatives. Results: The mapping exercise revealed extreme diversity between countries. Around half had established a BS system, but this did not always correspond to the epidemiological situation. Challenges to implementation and harmonisation at all levels emerged from survey findings and workshop feedback. These include: absence of synergy between biological and behavioural surveillance and of actors having an overall view of all system elements; lack of awareness of the relevance of BS and of coordination between agencies; insufficient use of available data; financial constraints; poor sustainability, data quality and access to certain key populations; unfavourable legislative environments. Conclusions: There is widespread need in the region not only for technical support but also for BS advocacy: BS remains the neglected partner of second generation surveillance and requires increased political support and capacity-building in order to become effective. Dissemination of validated tools for BS, developed in interaction with country experts, proves feasible and acceptable.
Abstract:
This paper investigates the use of ensembles of predictors to improve the performance of spatial prediction methods. Support vector regression (SVR), a popular method from the field of statistical machine learning, is used. Several instances of SVR are combined using different data sampling schemes (bagging and boosting). Bagging shows good performance and proves to be more computationally efficient than training a single SVR model, while also reducing error. Boosting, however, does not improve results on this specific problem.
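The bagging scheme referred to above can be sketched as: fit base learners on bootstrap resamples of the training data and average their predictions. To keep the example dependency-free, an ordinary least-squares learner stands in for SVR below, and the data are synthetic; this is a sketch of the technique, not the paper's implementation.

```python
import numpy as np

# Minimal numpy-only sketch of bagging for regression. The paper combines
# SVR models; here an ordinary least-squares base learner stands in for SVR.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + rng.normal(0, 0.5, size=200)   # noisy linear target

def fit_predict(X_tr, y_tr, X_te):
    """Fit a least-squares line with intercept and predict on X_te."""
    A = np.c_[X_tr, np.ones(len(X_tr))]
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.c_[X_te, np.ones(len(X_te))] @ coef

def bagging_predict(X_tr, y_tr, X_te, n_models=25):
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X_tr), size=len(X_tr))  # bootstrap sample
        preds.append(fit_predict(X_tr[idx], y_tr[idx], X_te))
    return np.mean(preds, axis=0)                         # aggregate by averaging

X_test = np.array([[0.0], [0.5]])
print(np.round(bagging_predict(X, y, X_test), 1))
```

Averaging over bootstrap resamples mainly reduces the variance of the base learner, which is one reason bagging can cut error while each resampled model remains cheap to train.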
Abstract:
This paper presents a very fine grid hydrological model based on the spatiotemporal distribution of precipitation and on the topography. The goal is to estimate the flood on a catchment area, using a Probable Maximum Precipitation (PMP) leading to a Probable Maximum Flood (PMF). The spatiotemporal distribution of the precipitation was realized using six clouds modeled by the advection-diffusion equation, which describes the movement of the clouds over the terrain and gives the evolution of the rain intensity in time. This hydrological modeling is followed by hydraulic modeling of the surface and subterranean flows, taking into account the factors that contribute to the hydrological cycle, such as infiltration, exfiltration and snowmelt. The model was applied to several Swiss basins using measured rain, and the results show a good correlation between the simulated and observed flows. This good correlation indicates that the model is valid and gives us confidence that the results can be extrapolated to extreme rainfall phenomena of the PMP type. In this article we present some results obtained using a PMP rainfall and the developed model.
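The advection-diffusion equation used for the cloud movement, ∂c/∂t = -v ∂c/∂x + D ∂²c/∂x², can be illustrated with a one-dimensional explicit finite-difference sketch. The grid, velocity and diffusivity below are illustrative assumptions, not the model's actual settings, and the real model works over 2-D terrain.

```python
import numpy as np

# Hedged 1-D sketch of moving a rain cell with the advection-diffusion
# equation dc/dt = -v dc/dx + D d2c/dx2, using an explicit upwind scheme.
# Grid sizes, velocity and diffusivity are illustrative only.
nx, dx, dt = 200, 1.0, 0.1
v, D = 2.0, 0.5                            # advection speed, diffusivity
x = np.arange(nx) * dx
c = np.exp(-0.5 * ((x - 50) / 5.0) ** 2)   # initial rain-intensity cell

for _ in range(200):                       # integrate to t = 20
    adv = -v * (c - np.roll(c, 1)) / dx                        # upwind advection
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2 # diffusion
    c = c + dt * (adv + dif)

print(round(x[int(np.argmax(c))]))         # peak has moved downwind of x = 50
```

The explicit scheme is stable here because the Courant number v·dt/dx = 0.2 and the diffusion number D·dt/dx² = 0.05 are both well below their stability limits; advection shifts the cell downwind while diffusion spreads and lowers it, mimicking a cloud's evolving rain intensity.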
Abstract:
This work consists of three essays investigating the ability of structural macroeconomic models to price zero-coupon U.S. government bonds. 1. A small-scale three-factor DSGE model implying a constant term premium is able to provide a reasonable fit for the term structure only at the expense of the persistence parameters of the structural shocks. A test of the structural model against one with constant but unrestricted prices-of-risk parameters shows that the exogenous-prices-of-risk model is only weakly preferred. We provide an MLE-based variance-covariance matrix for the Metropolis proposal density that improves convergence speed in MCMC chains. 2. A prices-of-risk specification that is affine in observable macro-variables is excessively flexible and fits the term structure without significantly altering the structural parameters. The exogenous component of the SDF separates the macro part of the model from the term structure, and the good term-structure fit is driven by an extremely volatile SDF and an implied average short rate that is implausible. We conclude that the no-arbitrage restrictions alone do not suffice to temper the SDF, so further restrictions are needed. We introduce a penalty-function methodology that proves useful in showing that affine prices-of-risk specifications can reconcile stable macro-dynamics with a good term-structure fit and a plausible SDF. 3. The level factor is reproduced most importantly by the preference shock, to which it is strongly and positively related, but technology and monetary shocks, with negative loadings, also contribute to its replication. The slope factor is related only to monetary policy shocks and is poorly explained. We find that there are gains in in- and out-of-sample forecasts of consumption and inflation when term-structure information is used in a time-varying hybrid prices-of-risk setting.
In-sample yield forecasts are better in models with non-stationary shocks for the period 1982-1988. After this period, time-varying market-price-of-risk models provide better in-sample forecasts. For the period 2005-2008, out-of-sample forecasts of consumption and inflation are better if term-structure information is incorporated in the DSGE model, but yields are better forecast by a pure macro DSGE model.
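The idea behind the MLE-based proposal covariance mentioned in essay 1 can be sketched on a toy problem: a random-walk Metropolis sampler whose proposal covariance is the curvature of the target at its mode, scaled by the usual 2.38^2/d factor. The 2-D Gaussian target below is a stand-in for a DSGE posterior; everything in the sketch is an illustrative assumption, not the essay's actual estimation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "posterior": a correlated 2-D Gaussian (stand-in for a DSGE posterior).
target_cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(target_cov)

def log_post(th):
    return -0.5 * th @ prec @ th

# Stand-in for an MLE-based proposal covariance: the inverse Hessian at the
# mode (here equal to target_cov), scaled by 2.38^2 / d.
d = 2
prop_cov = (2.38**2 / d) * target_cov
L = np.linalg.cholesky(prop_cov)

theta = np.zeros(d)
accepted = 0
draws = []
for _ in range(5000):
    cand = theta + L @ rng.standard_normal(d)    # random-walk proposal
    # Metropolis accept/reject on the log scale.
    if np.log(rng.random()) < log_post(cand) - log_post(theta):
        theta = cand
        accepted += 1
    draws.append(theta)

draws = np.asarray(draws)
rate = accepted / 5000
```

Matching the proposal covariance to the target's curvature is what speeds up mixing: an isotropic proposal would waste moves along the correlated direction of the posterior.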
Abstract:
CONTEXT: Communication guidelines often advise physicians to disclose to their patients medical uncertainty regarding the diagnosis, origin of the problem, and treatment. However, studies of the effect of expressing such uncertainty on patient outcomes (e.g. satisfaction) have produced conflicting results, indicating either no effect or a negative effect. The differences in the results of past studies may be explained by the fact that potential gender effects on the link between physician-expressed uncertainty and patient outcomes have not been investigated systematically. OBJECTIVES: On the basis of previous research documenting indications that patients may judge female physicians by more severe criteria than they do male physicians, and that men are more prejudiced than women towards women, we predicted that physician-expressed uncertainty would have a more negative impact on patient satisfaction when the physician in question was female rather than male, and especially when the patient was a man. METHODS: We conducted two studies with complementary designs. Study 1 was a randomised controlled trial conducted in a simulated setting with 120 analogue patients (healthy participants asked to put themselves in the shoes of real medical patients by imagining being the patients of physicians shown on videos); Study 2 was a field study conducted in real medical interviews (36 physicians, 69 patients). In Study 1, participants were presented with vignettes that varied in terms of the physician's gender and physician-expressed uncertainty (high versus low). In Study 2, physicians were filmed during real medical consultations and the level of uncertainty they expressed was coded by an independent rater from the videos. In both studies, patient satisfaction was assessed using a questionnaire.
RESULTS: The results confirmed that expressed uncertainty was negatively related to patient satisfaction only when the physician was a woman (Studies 1 and 2) and when the patient was a man (Study 2). CONCLUSIONS: We believe that patients have the right to be fully informed of any medical uncertainties. If our results are confirmed in further research, the important question will be not whether female physicians should communicate uncertainty, but how they should communicate it. For instance, if it proves true that uncertainty negatively affects (male) patients' satisfaction, female physicians might want to counterbalance this impact by emphasising other communication skills.
Abstract:
The classic study of responsibility attributions initiated by Heider in social psychology has mainly approached this psychosocial process from an individualistic perspective confined to the intra-individual and interpersonal levels (following the distinction made by Doise). The reflections and empirical work presented in this thesis pursue two objectives. The first is to broaden this perspective to the sociological and ideological levels (drawing in particular on the social-attributions approach and on Fauconnet's propositions concerning the rules of responsibility). The second is to test the relevance of such an approach in a particular context: group work, in which the nature of the social relations presented was manipulated by means of scenarios administered to students at the University of Lausanne. The main objective of this thesis is therefore to test a model of the anchoring of responsibility attributions that highlights the underlying representational dynamics in terms of legitimating or challenging the organisation of groups. Overall, the results indicate that while the nature of the social relations (re)presented in a group is a powerful determinant of the way group organisation is legitimated or called into question, the individual level of adherence to dominant ideological beliefs, such as justification of the economic system, moderates respondents' positions. Moreover, these processes appear to evolve over time, revealing socialisation phenomena rather more complex than current research in this field suggests.
Indeed, while ideological knowledge about the world is acquired in university curricula and does not always intervene in the formation of representations of group work, knowledge specific to the disciplines and to the university's selection policy does seem to intervene, at the level of attributions, in the process of legitimating social relations in groups. By attempting to articulate the concepts of the anchoring of social representations, attribution, and socialisation, this thesis underlines the relevance of introducing a problematic framed in terms of ideological beliefs into the study of social groups.