96 results for intractable empirical likelihood
at Université de Lausanne, Switzerland
Abstract:
OBJECTIVE: Barbiturate-induced coma can be used in patients to treat intractable intracranial hypertension when other therapies, such as osmotic therapy and sedation, have failed. Despite control of intracranial pressure, cerebral infarction may still occur in some patients, and the effect of barbiturates on outcome remains uncertain. In this study, we examined the relationship between barbiturate infusion and brain tissue oxygen (PbtO2). METHODS: Ten volume-resuscitated brain-injured patients who were treated with pentobarbital infusion for intracranial hypertension and underwent PbtO2 monitoring were studied in a neurosurgical intensive care unit at a university-based Level I trauma center. PbtO2, intracranial pressure (ICP), mean arterial pressure, cerebral perfusion pressure (CPP), and brain temperature were continuously monitored and compared in settings in which barbiturates were or were not administered. RESULTS: Data were available from 1595 hours of PbtO2 monitoring. When pentobarbital administration began, the mean ICP, CPP, and PbtO2 were 18 ± 10, 72 ± 18, and 28 ± 12 mm Hg, respectively. During the 3 hours before barbiturate infusion, the maximum ICP was 24 ± 13 mm Hg and the minimum CPP was 65 ± 20 mm Hg. In the majority of patients (70%), we observed an increase in PbtO2 associated with pentobarbital infusion. Within this group, logistic regression analysis demonstrated that a higher likelihood of compromised brain oxygen (PbtO2 < 20 mm Hg) was associated with a decrease in pentobarbital dose after controlling for ICP and other physiological parameters (P < 0.001). In the remaining 3 patients, pentobarbital was associated with lower PbtO2 levels. These patients had higher ICP, lower CPP, and later initiation of barbiturates compared with patients whose PbtO2 increased. CONCLUSION: Our preliminary findings suggest that pentobarbital administered for intractable intracranial hypertension is associated with a significant and independent increase in PbtO2 in the majority of patients. However, in some patients with more compromised brain physiology, pentobarbital may have a negative effect on PbtO2, particularly if administered late. Larger studies are needed to examine the relationship between barbiturates and cerebral oxygenation in brain-injured patients with refractory intracranial hypertension and to determine whether PbtO2 responses can help guide therapy.
Abstract:
It has long been recognized that highly polymorphic genetic markers can lead to underestimation of divergence between populations when migration is low. Microsatellite loci, which are characterized by extremely high mutation rates, are particularly likely to be affected. Here, we report genetic differentiation estimates in a contact zone between two chromosome races of the common shrew (Sorex araneus), based on 10 autosomal microsatellites, a newly developed Y-chromosome microsatellite, and mitochondrial DNA. These results are compared with previous data on proteins and karyotypes. Estimates of genetic differentiation based on F- and R-statistics are much lower for autosomal microsatellites than for all other genetic markers. We show by simulations that this discrepancy stems mainly from the high mutation rate of microsatellite markers for F-statistics and from deviations from a single-step mutation model for R-statistics. The sex-linked genetic markers show that all gene exchange between races is mediated by females. The absence of male-mediated gene flow most likely results from male hybrid sterility.
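As a minimal, purely illustrative sketch (not taken from the paper), the following Python snippet computes Nei's G_ST for two demes from allele frequency vectors. It shows the standard effect the abstract invokes: because G_ST ≤ 1 − H_S, the high within-population heterozygosity typical of microsatellites bounds F-statistic-type differentiation, so hypervariable loci can yield much lower estimates than less polymorphic markers. The example frequencies are hypothetical.

```python
# Minimal sketch (not from the paper): Nei's G_ST from allele frequencies
# in two demes, illustrating how the high within-population heterozygosity
# typical of microsatellites bounds differentiation (G_ST <= 1 - H_S).
import numpy as np

def gst(freqs_pop1, freqs_pop2):
    """G_ST = (H_T - H_S) / H_T for two equally weighted demes.

    freqs_pop1 / freqs_pop2: allele frequency vectors summing to 1.
    """
    p1 = np.asarray(freqs_pop1, dtype=float)
    p2 = np.asarray(freqs_pop2, dtype=float)
    h_s = 1.0 - 0.5 * ((p1 ** 2).sum() + (p2 ** 2).sum())  # mean within-deme heterozygosity
    p_bar = 0.5 * (p1 + p2)                                 # pooled allele frequencies
    h_t = 1.0 - (p_bar ** 2).sum()                          # total heterozygosity
    return (h_t - h_s) / h_t

# Hypothetical example: a hypervariable locus with many alleles shared across
# demes yields a much lower G_ST than a diallelic locus, despite comparable
# frequency shifts between demes.
print(gst([0.9, 0.1], [0.1, 0.9]))                     # diallelic locus: ~0.64
print(gst([0.3, 0.25, 0.2, 0.15, 0.1],
          [0.1, 0.15, 0.2, 0.25, 0.3]))                # multiallelic locus: ~0.03
```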
Abstract:
Objectives: To study the outcome of disconnective epilepsy surgery for intractable hemispheric and sub-hemispheric pediatric epilepsy. Methods: A retrospective analysis of the epilepsy surgery database was performed for all children (age <18 years) who underwent a peri-insular hemispherotomy (PIH) or a peri-insular posterior quadrantectomy (PIPQ) from April 2000 to March 2011. All patients underwent a detailed presurgical evaluation. Seizure outcome was assessed by the Engel classification and cognitive skills by appropriate measures of intelligence that were repeated annually. Results: There were 34 patients in all. Epilepsy was due to Rasmussen's encephalitis (RE), infantile hemiplegia seizure syndrome (IHSS), hemimegalencephaly (HM), Sturge-Weber syndrome (SWS), or post-encephalitic sequelae (PES). Twenty-seven (79.4%) patients underwent PIH and seven (20.6%) underwent PIPQ. The mean follow-up was 30.5 months. At the last follow-up, 31 (91.1%) were seizure free. The age of seizure onset and the etiology of the disease causing epilepsy were predictors of a Class I seizure outcome. Conclusions: Seizure outcome is excellent following disconnective epilepsy surgery for intractable hemispheric and sub-hemispheric pediatric epilepsy. An older age of seizure onset, RE, SWS and PES were good predictors of a Class I seizure outcome.
Abstract:
This paper extends previous research and discussion on the use of multivariate continuous data, which are about to become more prevalent in forensic science. As an illustrative example, attention is drawn here to the area of comparative handwriting examinations. Multivariate continuous data can be obtained in this field by analysing the contour shape of loop characters through Fourier analysis. This methodology, based on existing research in this area, allows one to describe in detail the morphology of character contours through a set of variables. This paper uses data collected from female and male writers to conduct a comparative analysis of likelihood ratio based evidence assessment procedures in both evaluative and investigative proceedings. While the use of likelihood ratios in the former situation is now rather well established (typically, in order to discriminate between propositions of authorship by a given individual versus another, unknown individual), the investigative setting has so far received little consideration in practice. This paper seeks to highlight that investigative settings, too, can represent an area of application in which the likelihood ratio can offer logical support. As an example, the inference of the gender of the writer of a questioned handwritten text is put forward, analysed and discussed in this paper. The more general viewpoint according to which likelihood ratio analyses can be helpful in investigative proceedings is supported here through various simulations. These offer a characterisation of the robustness of the proposed likelihood ratio methodology.
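To make the kind of multivariate continuous data involved more concrete, here is a hedged Python sketch of one common way to turn a closed character contour into Fourier-based shape variables (complex Fourier descriptors). The paper's exact parameterisation may differ, and the contour, harmonic count and function name are hypothetical; descriptors of this kind could then feed a multivariate likelihood-ratio computation.

```python
# Illustrative sketch only: complex Fourier descriptors of a closed contour,
# the kind of multivariate continuous data the paper derives from loop
# characters. Contour points and the number of harmonics are hypothetical.
import numpy as np

def fourier_descriptors(contour_xy, n_harmonics=5):
    """Return magnitudes of the first n_harmonics Fourier coefficients of a
    closed contour, normalised by the first harmonic for scale invariance."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # complex representation of (x, y) points
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs)
    # Drop the DC term (position) and divide by the first harmonic (scale).
    return mags[1:n_harmonics + 1] / mags[1]

# Hypothetical loop: a flattened ellipse sampled at 128 points.
t = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
loop = np.column_stack((np.cos(t), 0.6 * np.sin(t)))
print(fourier_descriptors(loop))
```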
Abstract:
The thesis examines the impact of collective war victimization on individuals' readiness to accept or assign collective guilt for past war atrocities. As a complement to previous studies, its aim is to articulate an integrated approach to collective victimization, which distinguishes between individual-, communal-, and societal-level consequences of warfare. Building on a social representations approach, it is guided by the assumption that individuals form beliefs about a conflict through their personal experiences of victimization, the communal experiences of warfare that occur in their proximal surroundings, and the mass-mediatized narratives that circulate in a society's public sphere. Four empirical studies test the hypothesis that individuals' beliefs about the conflict depend on the level and type of war experiences to which they have been exposed, that is, on the informative and normative micro and macro contexts in which they are embedded. The studies were conducted in the context of the Yugoslav wars that accompanied the breakup of Yugoslavia, a series of wars fought between 1991 and 2001 during which numerous war atrocities were perpetrated, causing massive victimization of the population. To examine the content and impact of war experiences at each level of analysis, the empirical studies employed various methodological strategies, from quantitative analyses of a representative public opinion survey to qualitative analyses of media content and political speeches. Study 1 examines the impact of individual- and communal-level war experiences on individuals' acceptance and assignment of collective guilt. It further examines the impact of the type of communal-level victimization: exposure to symmetric violence (i.e., violence that similarly affects members of different ethnic groups, including adversaries) versus asymmetric violence. The main goal of Study 2 is to examine the structural and political circumstances that enhance collective guilt assignment. While the previous studies emphasize the role of past victimization, Study 2 tests the assumption that the political demobilization strategy employed by elites facing public discontent under collectively system-threatening circumstances can fuel out-group blame. Studies 3 and 4 were conducted predominantly in the context of Croatia and examine the rhetorical construction of the dominant politicized narrative of war in the public sphere (Study 3) and its maintenance through public delegitimization of alternative (critical) representations (Study 4). Study 4 further examines the likelihood that highly identified group members adhere to publicly delegitimized critical stances on war.
Abstract:
Background: Individual signs and symptoms are of limited value for the diagnosis of influenza. Objective: To develop a decision tree for the diagnosis of influenza based on a classification and regression tree (CART) analysis. Methods: Data from two previous similar cohort studies were assembled into a single dataset. The data were randomly divided into a development set (70%) and a validation set (30%). We used CART analysis to develop three models that maximize the number of patients who do not require diagnostic testing prior to treatment decisions. The validation set was used to evaluate overfitting of the model to the development set. Results: Model 1 has seven terminal nodes based on temperature, the onset of symptoms and the presence of chills, cough and myalgia. Model 2 was a simpler tree with only two splits based on temperature and the presence of chills. Model 3 was developed with temperature as a dichotomous variable (≥38°C) and had only two splits, based on the presence of fever and myalgia. The areas under the receiver operating characteristic curve (AUROCC) for the development and validation sets, respectively, were 0.82 and 0.80 for Model 1, 0.75 and 0.76 for Model 2, and 0.76 and 0.77 for Model 3. Model 2 classified 67% of patients in the validation group into a high- or low-risk group, compared with only 38% for Model 1 and 54% for Model 3. Conclusions: A simple decision tree (Model 2) classified two-thirds of patients as low or high risk and had an AUROCC of 0.76. After further validation in an independent population, this CART model could support clinical decision making regarding influenza, with low-risk patients requiring no further evaluation for influenza and high-risk patients being candidates for empiric symptomatic or drug therapy.
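As a hedged illustration of the modeling strategy described (not the authors' code or data), the sketch below fits a shallow classification tree on simulated predictors resembling those named in the abstract and reports a validation AUROCC. All variable names, effect sizes and sample sizes are invented for the example.

```python
# Hedged sketch (not the authors' code or data): fitting a small classification
# tree on hypothetical influenza predictors with scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors: temperature (°C), chills, cough, myalgia (0/1),
# and hours since symptom onset.
X = np.column_stack((
    rng.normal(37.5, 0.8, n),        # temperature
    rng.integers(0, 2, n),           # chills
    rng.integers(0, 2, n),           # cough
    rng.integers(0, 2, n),           # myalgia
    rng.uniform(0, 72, n),           # onset (hours)
))
# Hypothetical outcome loosely driven by fever and chills, for illustration only.
p = 1 / (1 + np.exp(-(1.5 * (X[:, 0] - 38.0) + 1.0 * X[:, 1] - 0.5)))
y = rng.binomial(1, p)

# 70/30 development/validation split, mirroring the design described above.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=50).fit(X_dev, y_dev)
print("validation AUROCC:", roc_auc_score(y_val, tree.predict_proba(X_val)[:, 1]))
```

Limiting the tree depth and leaf size plays the role of keeping the model as simple as Model 2 in the abstract, so that most patients fall into a small number of interpretable risk groups.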
Abstract:
Quantitative ultrasound (QUS) appears to be developing into an acceptable, low-cost and readily accessible alternative to dual-energy X-ray absorptiometry (DXA) measurement of bone mineral density (BMD) in the detection and management of osteoporosis. Perhaps the major difficulty with its widespread use is that many different QUS devices exist that differ substantially from each other, in terms of the parameters they measure and the strength of empirical evidence supporting their use. Another problem is that virtually no data exist outside of Caucasian or Asian populations. In general, heel QUS appears to be the most tested and most effective. Some, but not all, heel QUS devices are effective in assessing fracture risk in some, but not all, populations; the evidence is strongest for Caucasian females > 55 years old, though some evidence exists for Asian females > 55 and for Caucasian and Asian males > 70. Certain devices may allow estimation of the likelihood of osteoporosis, but very limited evidence supports the use of QUS in initiating or monitoring osteoporosis treatment. QUS is probably most effective when combined with an assessment of clinical risk factors (CRF), with DXA reserved for individuals who are not identified as either high or low risk using QUS and CRF. However, monitoring and maintenance of test and instrument accuracy, precision and reproducibility are essential if QUS devices are to be used in clinical practice, and further research in non-Caucasian, non-Asian populations is clearly needed to validate this tool for more widespread use.
Multimodel inference and multimodel averaging in empirical modeling of occupational exposure levels.
Abstract:
Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. Traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori by taking into account the sample size and previous knowledge of variables that influence exposure levels. The Akaike information criterion is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, interpreted as the probability that the model is the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors and estimate multimodel-averaged effects of determinants. The whole approach is illustrated with the analysis of a data set of 1500 volatile organic compound exposure levels collected by the Institute for Work and Health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels: it incorporates the notion that several models can be supported by the data and makes it possible to evaluate, to some extent, model-selection uncertainty, which is seldom addressed in current practice.
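The core computation of the approach, Akaike weights over an a priori model set, can be sketched as follows. This is a minimal illustration with simulated exposure data and hypothetical determinants, not the Institute's data set, and it assumes ordinary least-squares models fitted with statsmodels.

```python
# Minimal sketch of Burnham & Anderson-style Akaike weights for a set of
# candidate linear models; the exposure data and determinants are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
# Hypothetical log-transformed, OEL-standardised exposure levels.
year = rng.uniform(0, 20, n)
ventilation = rng.integers(0, 2, n)
task = rng.integers(0, 2, n)
log_exposure = -0.05 * year - 0.8 * ventilation + rng.normal(0, 1, n)

# Candidate model set defined a priori (design matrices with an intercept).
candidates = {
    "year":             sm.add_constant(np.column_stack((year,))),
    "year+ventilation": sm.add_constant(np.column_stack((year, ventilation))),
    "year+vent+task":   sm.add_constant(np.column_stack((year, ventilation, task))),
}

# Akaike weight: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
# where delta_i = AIC_i - min(AIC).
aic = {name: sm.OLS(log_exposure, X).fit().aic for name, X in candidates.items()}
delta = {name: a - min(aic.values()) for name, a in aic.items()}
denom = sum(np.exp(-0.5 * d) for d in delta.values())
weights = {name: np.exp(-0.5 * d) / denom for name, d in delta.items()}
for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} AIC={aic[name]:7.1f}  weight={w:.2f}")
```

The resulting weights can then be used to rank the candidate models or to form model-averaged estimates of the determinants' effects, as the abstract describes.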
Abstract:
A better understanding of the factors that mould ecological community structure is required to accurately predict community composition and to anticipate threats to ecosystems due to global changes. We tested how well stacked climate-based species distribution models (S-SDMs) could predict butterfly communities in a mountain region. It has been suggested that climate is the main force driving butterfly distribution and community structure in mountain environments, and that, as a consequence, climate-based S-SDMs should yield unbiased predictions. In contrast to this expectation, at lower altitudes, climate-based S-SDMs overpredicted butterfly species richness at sites with low plant species richness and underpredicted species richness at sites with high plant species richness. According to two indices of composition accuracy, the Sorensen index and a matching coefficient considering both absences and presences, S-SDMs were more accurate in plant-rich grasslands. Butterflies display strong and often specialised trophic interactions with plants. At lower altitudes, where land use is more intense, considering climate alone without accounting for land use influences on grassland plant richness leads to erroneous predictions of butterfly presences and absences. In contrast, at higher altitudes, where climate is the main force filtering communities, there were fewer differences between observed and predicted butterfly richness. At high altitudes, even if stochastic processes decrease the accuracy of predictions of presence, climate-based S-SDMs are able to better filter out butterfly species that are unable to cope with severe climatic conditions, providing more accurate predictions of absences. Our results suggest that predictions should account for plants in disturbed habitats at lower altitudes but that stochastic processes and heterogeneity at high altitudes may limit prediction success of climate-based S-SDMs.
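For reference, the two composition-accuracy indices mentioned above can be computed as in this short sketch. The observed and predicted presence/absence vectors are hypothetical, and the "matching coefficient" is taken here to be the simple matching coefficient (shared presences plus shared absences over all species), which is one reasonable reading of the abstract.

```python
# Sketch of the two composition-accuracy indices mentioned above, applied to
# hypothetical observed vs. S-SDM-predicted presence/absence vectors at a site.
import numpy as np

def sorensen(obs, pred):
    """Sorensen index: 2a / (2a + b + c), based on presences only."""
    a = np.sum((obs == 1) & (pred == 1))   # shared presences
    b = np.sum((obs == 1) & (pred == 0))   # observed but not predicted
    c = np.sum((obs == 0) & (pred == 1))   # predicted but not observed
    return 2 * a / (2 * a + b + c)

def simple_matching(obs, pred):
    """Matching coefficient counting both shared presences and shared absences."""
    return np.mean(obs == pred)

obs  = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # hypothetical butterfly community at one site
pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])   # hypothetical S-SDM prediction
print(sorensen(obs, pred), simple_matching(obs, pred))
```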
Abstract:
The paper follows on from earlier work [Taroni F and Aitken CGG. Probabilistic reasoning in the law, Part 1: assessment of probabilities and explanation of the value of DNA evidence. Science & Justice 1998; 38: 165-177]. Different explanations of the value of DNA evidence were presented to students from two schools of forensic science and to members of fifteen laboratories around the world. The responses were divided into two groups: those which came from a school or laboratory identified as Bayesian and those which came from a school or laboratory identified as non-Bayesian. The paper analyses these responses using a likelihood approach. This approach is more consistent with a Bayesian analysis than the frequentist-based analysis reported in the earlier paper by Taroni and Aitken (Science & Justice 1998; 38: 165-177).
Abstract:
Independent regulatory agencies are one of the main institutional features of the 'rising regulatory state' in Western Europe. Governments are increasingly willing to abandon their regulatory competencies and to delegate them to specialized institutions that are at least partially beyond their control. This article examines the empirical consistency of one particular explanation of this phenomenon, namely the credibility hypothesis, claiming that governments delegate powers so as to enhance the credibility of their policies. Three observable implications are derived from the general hypothesis, linking credibility and delegation to veto players, complexity and interdependence. An independence index is developed to measure agency independence, which is then used in a multivariate analysis where the impact of credibility concerns on delegation is tested. The analysis relies on an original data set comprising independence scores for thirty-three regulators. Results show that the credibility hypothesis can explain a good deal of the variation in delegation. The economic nature of regulation is a strong determinant of agency independence, but is mediated by national institutions in the form of veto players.