106 results for two stage quantile regression
at Université de Lausanne, Switzerland
Abstract:
Albitization is a common process during which hydrothermal fluids convert plagioclase and/or K-feldspar into nearly pure albite; however, its specific mechanism in granitoids is not well understood. The c. 1700 Ma A-type metaluminous ferroan granites in the Khetri complex of Rajasthan, NW India, have been albitized to a large extent by two metasomatic fronts: an initial transformation of oligoclase to nearly pure albite and a subsequent replacement of microcline by albite, with sharp contacts between the microcline-bearing and microcline-free zones. Albitization has bleached the original pinkish grey granite and turned it white. The mineralogical changes include transformation of oligoclase (~An12) and microcline (~Or95) to almost pure albite (~An0.5-2), amphibole from potassian ferropargasite (XFe 0.84-0.86) to potassic hastingsite (XFe 0.88-0.97) and actinolite (XFe 0.32-0.67), and biotite from annite (XFe 0.71-0.74) to annite (XFe 0.90-0.91). Whole-rock isocon diagrams show that, during albitization, the granites experienced major hydration, a slight gain in Si and a major gain in Na, whereas K, Mg, Fe and Ca were lost along with Rb, Ba, Sr, Zn, light rare earth elements and U. Whole-rock Sm-Nd isotope data plot on an apparent isochron of 1419 ± 98 Ma and reveal significant disturbance and at least partial resetting of the intrusion age. Severe scatter in the whole-rock Rb-Sr isochron plot reflects the extreme Rb loss in the completely albitized samples, effectively freezing 87Sr/86Sr ratios in the albite granites at very high values (0.725-0.735). This indicates either infiltration of highly radiogenic Sr from the country rock or, more likely, radiogenic ingrowth during a considerable time lag (estimated to be at least 300 Myr) between original intrusion and albitization.
The albitization took place at ~350-400 °C. It was caused by the infiltration of an ascending hydrothermal fluid that had acquired high Na/K and Na/Ca ratios during migration through metamorphic rocks at even lower temperatures in the periphery of the plutons. Oxygen isotope ratios increase from δ18O = 7‰ in the original granite to values of 9-10‰ in completely albitized samples, suggesting that the fluid had equilibrated with the surrounding metamorphosed crust. A metasomatic model, using the chromatographic theory of fluid infiltration, explains the observed zonation in terms of a leading metasomatic front, where oligoclase of the original granite is converted to albite, and a second, trailing front, where microcline is also converted to albite. The temperature gradients driving the fluid infiltration may have been produced by the high heat production of the granites themselves. The confinement of the albitized granites along the NE-SW-trending Khetri lineament and the pervasive nature of the albitization suggest that the albitizing fluids possibly originated during reactivation of the lineament. More generally, steady-state temperature gradients induced by the high internal heat production of A-type granites may provide the driving force for similar metasomatic and ore-forming processes in other highly enriched granitoid bodies.
Abstract:
On Île de Groix, Variscan metamorphosed former tholeiitic and alkaline basalts occur as glaucophane-eclogites, blueschists and greenschists in isolated lenses and layers within metapelites. Whole-rock δ18O (SMOW) values of the metabasites show limited variation (10.4-12.0‰) and no systematic differences among rock types and metamorphic grades. This provides no argument for large-scale blueschist-to-greenschist transformation driven by infiltration of externally derived fluids; the metamorphic mineralogical changes were more likely triggered by internal fluids. Element variations in interlayered blueschists and greenschists can be attributed to magmatic fractionation. Assemblages with garnet, clinopyroxene and glaucophane of a high-pressure/low-temperature (HP-LT) metamorphism M1, and NaCa-amphiboles (barroisite, magnesiohornblende, actinolite) of a medium-pressure/medium-temperature metamorphism M2, crystallized during deformation D1. Detailed core-rim zonation profiles display increasing and then decreasing AlIV in glaucophane of M1. NaCa-amphiboles of M2, mantling glaucophane and crystallized in porphyroblasts, likewise show first increasing, then decreasing AlIV. Empirically calibrated thermobarometers allowed P-T path reconstructions. In glaucophane-eclogites of metamorphic zone I, a prograde evolution to M1 peak conditions at 400-500 °C/10-12 kbar was followed by a retrograde P-T path within the glaucophane stability field. The subsequent M2 evolution was again prograde, up to >600 °C at 8 kbar, and then retrograde. Similarly, in metamorphic zones II and III, prograde and retrograde paths of M1 and M2 occur at lower maximum temperatures and pressures. The almost complete metamorphic cycle during M2 signals that the HP-LT rocks escaped early erosion through a moderate second burial event, and explains the long-lasting slow uplift with low average cooling rates.
Abstract:
Different therapeutic options for prosthetic joint infections exist, but surgery remains key. With a two-stage exchange procedure, a success rate above 90% can be expected. Currently, there is no consensus regarding the optimal duration between explantation and reimplantation in a two-stage procedure. The aim of this study was to retrospectively compare treatment outcomes between short-interval and long-interval two-stage exchanges. Patients having a two-stage exchange for a hip or knee prosthetic joint infection at Lausanne University Hospital (Switzerland) between 1999 and 2013 were included. Patient satisfaction, joint function and eradication of infection were compared between patients having a short (2 to 4 weeks) versus a long (4 weeks or more) interval during a two-stage procedure. Patient satisfaction was defined as good if the patient did not have pain and bad if the patient had pain. Functional outcome was defined as good if the patient had a prosthesis in place and could walk, medium if the prosthesis was in place but the patient could not walk, and bad if the prosthesis was no longer in place. Infection outcome was considered good if there had been no re-infection and bad if there had been a re-infection of the prosthesis. 145 patients (100 hips, 45 knees) were identified, with a median age of 68 years (range 19-103). The median hospital stay was 58 days (range 10-402). The median follow-up was 12.9 months (range 0.5-152). 28% and 72% of the patients had a short-interval and long-interval exchange of the prosthesis, respectively. Patient satisfaction, functional outcome and infection outcome for patients having a short versus a long interval are reported in the Table. Patient satisfaction was higher when a long interval was used, whereas the functional and infection outcomes were better when a short interval was used.
According to this study, a short-interval exchange appears preferable to a long-interval one, especially in view of treatment effectiveness and functional outcome.
Abstract:
Social scientists often estimate models from correlational data, where the independent variable has not been exogenously manipulated; they also make implicit or explicit causal claims based on these models. When can these claims be made? We answer this question by first discussing design and estimation conditions under which model estimates can be interpreted, using the randomized experiment as the gold standard. We show how endogeneity, which includes omitted variables, omitted selection, simultaneity, common-methods bias, and measurement error, renders estimates causally uninterpretable. Second, we present methods that allow researchers to test causal claims in situations where randomization is not possible or when causal interpretation is confounded, including fixed-effects panel, sample selection, instrumental variable, regression discontinuity, and difference-in-differences models. Third, we take stock of the methodological rigor with which causal claims are being made in a social science discipline by reviewing a representative sample of 110 articles on leadership published in the previous 10 years in top-tier journals. Our key finding is that researchers fail to address at least 66% and up to 90% of the design and estimation conditions that make causal claims invalid. We conclude by offering 10 suggestions on how to improve non-experimental research.
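The two-stage logic behind the instrumental-variable remedy discussed above can be illustrated with a minimal simulation (hypothetical data and coefficients, not taken from any of the reviewed articles): an unobserved confounder biases OLS, while regressing the outcome on the first-stage fitted values recovers the causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
beta = 2.0                               # true causal effect (chosen for illustration)

z = rng.normal(size=n)                   # instrument: shifts x, has no direct path to y
u = rng.normal(size=n)                   # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)     # endogenous regressor
y = beta * x + u + rng.normal(size=n)    # outcome

def ols(X, y):
    """Least-squares fit with an intercept; returns [intercept, slope]."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_ols = ols(x, y)[1]                     # biased upward: x and y share the confounder u

# Stage 1: project the endogenous regressor onto the instrument
x_hat = np.column_stack([np.ones(n), z]) @ ols(z, x)
# Stage 2: regress the outcome on the fitted values only
b_2sls = ols(x_hat, y)[1]                # consistent for beta
```

With these settings the OLS slope sits well above 2.0 while the two-stage estimate lands close to it; in practice the second-stage standard errors would need the usual 2SLS correction rather than naive OLS ones.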
Abstract:
Objectives: Imatinib has been increasingly proposed for therapeutic drug monitoring (TDM), as trough concentrations (Cmin) correlate with response rates in CML patients. This analysis aimed to evaluate the impact of imatinib exposure on optimal molecular response rates in a large European cohort of patients followed by centralized TDM. Methods: Sequential PK/PD analysis was performed in NONMEM 7 on 2230 plasma (PK) samples obtained along with molecular response (PD) data from 1299 CML patients. Model-based individual Bayesian estimates of exposure, parameterized as initial dose-adjusted, log-normalized Cmin (log-Cmin) or clearance (CL), were investigated as potential predictors of optimal molecular response, while accounting for time under treatment (stratified at 3 years), gender, CML phase, age, potentially interacting comedication, and TDM frequency. The PK/PD analysis used mixed-effect logistic regression (iterative two-stage method) to account for intra-patient correlation. Results: In univariate analyses, CL, log-Cmin, time under treatment, TDM frequency, gender (all p<0.01) and CML phase (p=0.02) were significant predictors of the outcome. In multivariate analyses, all but log-Cmin remained significant (p<0.05). Our model estimates a 54.1% probability of optimal molecular response in a female patient with a median CL of 14.4 L/h, increasing by 4.7% with a 35% decrease in CL (10th percentile of the CL distribution) and decreasing by 6% with a 45% increase in CL (90th percentile), respectively. Male patients were less likely than female patients to be in optimal response (odds ratio: 0.62, p<0.001), with an estimated probability of 42.3%. Conclusions: Beyond CML phase and time on treatment, which were expected to correlate with the outcome, an effect of initial imatinib exposure on the probability of achieving optimal molecular response was confirmed under field conditions by this multivariate analysis.
Interestingly, male patients had a higher risk of suboptimal response, which may derive not only from their 18.5% higher CL but also from the lower treatment adherence reported in men. A prospective longitudinal study would be desirable to confirm the clinical importance of the identified covariates and to exclude biases possibly affecting this observational survey.
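The reported figures are internally consistent, which can be checked with simple odds arithmetic: applying the odds ratio of 0.62 to the 54.1% female response probability reproduces the stated male probability up to rounding. The helper below is purely illustrative, not part of the study's model.

```python
def apply_odds_ratio(p, odds_ratio):
    """Convert a probability to odds, scale by an odds ratio, convert back."""
    odds = p / (1.0 - p)
    new_odds = odds * odds_ratio
    return new_odds / (1.0 + new_odds)

p_female = 0.541   # reported probability of optimal response for a female patient
or_male = 0.62     # reported odds ratio, male vs. female
p_male = apply_odds_ratio(p_female, or_male)   # close to the reported 42.3%
```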
Abstract:
Leaders must scan the internal and external environment, chart strategic and task objectives, and provide performance feedback. These instrumental leadership (IL) functions go beyond the motivational and quid pro quo leader behaviors that comprise the full-range (transformational, transactional, and laissez-faire) leadership model. In four studies we examined the construct validity of IL. We found evidence for a four-factor IL model that was highly prototypical of good leadership. IL predicted top-level leader emergence controlling for the full-range factors, initiating structure, and consideration. It also explained unique variance in outcomes beyond the full-range factors; the effects of transformational leadership were vastly overstated when IL was omitted from the model. We discuss the importance of a "fuller full-range" leadership model for theory and practice. We also showcase our methodological contributions regarding corrections for common method variance (i.e., endogeneity) bias using two-stage least squares (2SLS) regression and Monte Carlo split-sample designs.
Abstract:
Although the relationship between serum uric acid (SUA) and adiposity is well established, the direction of causality remains unclear in the presence of conflicting evidence. We used a bidirectional Mendelian randomization approach to explore the nature and direction of causality between SUA and adiposity in a population-based study of Caucasians aged 35 to 75 years. We used, as instrumental variables, rs6855911 within the SUA gene SLC2A9 in one direction, and combinations of SNPs within the adiposity genes FTO, MC4R and TMEM18 in the other. Adiposity markers included weight, body mass index, waist circumference and fat mass. We applied two-stage least squares regression: a regression of SUA/adiposity markers on our instruments in the first stage, and a regression of the response of interest on the fitted values from the first-stage regression in the second stage. SUA explained by the SLC2A9 instrument was not associated with fat mass (regression coefficient [95% confidence interval]: 0.05 [-0.10, 0.19]), in contrast with the ordinary least squares estimate (0.37 [0.34, 0.40]). By contrast, fat mass explained by genetic variants of the FTO, MC4R and TMEM18 genes was positively and significantly associated with SUA (0.31 [0.01, 0.62]), similar to the ordinary least squares estimate (0.27 [0.25, 0.29]). Results were similar for the other adiposity markers. Using a bidirectional Mendelian randomization approach in adult Caucasians, our findings suggest that elevated SUA is a consequence rather than a cause of adiposity.
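The Mendelian-randomization logic of one such direction can be sketched with simulated data (hypothetical allele counts and effect sizes, not the study's): a variant that affects the exposure but has no direct path to the outcome acts as an instrument, and the Wald ratio of the gene-outcome to gene-exposure associations recovers the causal effect despite confounding.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
beta = 0.3   # assumed true causal effect of fat mass on SUA (illustrative)

g = rng.binomial(2, 0.3, size=n)            # allele count (stand-in for an adiposity SNP score)
u = rng.normal(size=n)                      # unobserved confounder
fat = 0.5 * g + u + rng.normal(size=n)      # exposure: fat mass
sua = beta * fat + u + rng.normal(size=n)   # outcome: serum uric acid

# Naive (observational) association is inflated by the shared confounder
b_naive = np.cov(fat, sua)[0, 1] / np.var(fat, ddof=1)

# Two-stage / Wald ratio estimate: (gene -> outcome) over (gene -> exposure)
b_mr = np.cov(g, sua)[0, 1] / np.cov(g, fat)[0, 1]
```

The contrast mirrors the abstract's pattern: the instrumented estimate stays near the causal value while the ordinary association is biased by confounding.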
Abstract:
The carbon isotope ratio of androgens in urine specimens is routinely determined to exclude abuse of testosterone or testosterone prohormones by athletes. The increasing application of gas chromatography/combustion/isotope ratio mass spectrometry (GC/C/IRMS) in recent years for targeted and systematic investigations has created demand for rapid sample throughput as well as high selectivity in the extraction process, particularly for conspicuous samples. For that purpose, we present herein the complementary use of an SPE-based assay and an HPLC fractionation method as a two-stage strategy for the isolation of testosterone metabolites and endogenous reference compounds prior to GC/C/IRMS analyses. Assay validation demonstrated acceptable performance in terms of intermediate precision (range: 0.1-0.4‰), and Bland-Altman analyses revealed no significant bias (0.2‰). For further validation of this two-stage analysis strategy, all specimens (n=124) collected during a major sport event were processed.
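The Bland-Altman bias cited above is simply the mean of the paired between-method differences, bracketed by limits of agreement. A minimal sketch with made-up δ13C values (not the validation data) shows the computation:

```python
import numpy as np

# Hypothetical paired delta13C measurements (per mil) from the two workflows
spe  = np.array([-27.1, -26.4, -28.0, -25.9, -27.5])   # SPE-based assay
hplc = np.array([-27.0, -26.6, -27.9, -26.1, -27.4])   # HPLC fractionation

diff = hplc - spe
bias = diff.mean()                     # Bland-Altman bias: mean paired difference
sd = diff.std(ddof=1)
loa_low = bias - 1.96 * sd             # 95% limits of agreement
loa_high = bias + 1.96 * sd
```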
Abstract:
We investigated the association between diet and head and neck cancer (HNC) risk using data from the International Head and Neck Cancer Epidemiology (INHANCE) consortium. The INHANCE pooled data included 22 case-control studies with 14,520 cases and 22,737 controls. Center-specific quartiles among the controls were used for food groups, and frequencies per week were used for single food items. A dietary pattern score combining high fruit and vegetable intake and low red meat intake was created. Odds ratios (OR) and 95% confidence intervals (CI) for the dietary items on the risk of HNC were estimated with a two-stage random-effects logistic regression model. An inverse association was observed for higher-frequency intake of fruit (4th vs. 1st quartile OR = 0.52, 95% CI = 0.43-0.62, p(trend) < 0.01) and vegetables (OR = 0.66, 95% CI = 0.49-0.90, p(trend) = 0.01). Intake of red meat (OR = 1.40, 95% CI = 1.13-1.74, p(trend) < 0.01) was positively associated with HNC risk. Higher dietary pattern scores, reflecting high fruit/vegetable and low red meat intake, were associated with reduced HNC risk (per score increment OR = 0.90, 95% CI = 0.84-0.97).
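A two-stage pooled analysis of this kind can be sketched as follows: stage 1 produces a per-study log odds ratio and standard error, and stage 2 combines them with DerSimonian-Laird random-effects weights. The per-study numbers below are hypothetical, not INHANCE estimates.

```python
import numpy as np

# Stage 1 output (hypothetical): per-study log odds ratios and standard errors
log_or = np.array([-0.70, -0.51, -0.82, -0.61])
se = np.array([0.20, 0.15, 0.25, 0.18])

# Stage 2: DerSimonian-Laird random-effects pooling
w = 1.0 / se**2                                  # fixed-effect (inverse-variance) weights
mu_fe = np.sum(w * log_or) / np.sum(w)
q = np.sum(w * (log_or - mu_fe) ** 2)            # Cochran's Q heterogeneity statistic
df = len(log_or) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                    # between-study variance estimate
w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
mu_re = np.sum(w_re * log_or) / np.sum(w_re)
pooled_or = np.exp(mu_re)                        # pooled odds ratio
```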
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems, ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information, and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how to choose the size of the learning set and identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a saline intrusion problem in a coastal aquifer.
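The two-stage (delayed-acceptance) MCMC idea can be sketched with a toy one-dimensional target: a cheap proxy density screens each proposal, and only survivors trigger an "exact" evaluation, with a second acceptance step that keeps the chain sampling the exact target. Both densities below are simple stand-ins, not the thesis's flow models.

```python
import math, random

random.seed(1)

def log_proxy(x):
    """Cheap approximate log-density (stand-in for the proxy flow model)."""
    return -0.5 * (x / 1.1) ** 2

def log_exact(x):
    """Expensive log-density (stand-in for the exact flow model)."""
    return -0.5 * x ** 2

n_iter, step = 20_000, 1.0
x, samples, exact_calls = 0.0, [], 0
lp_exact_x = log_exact(x); exact_calls += 1

for _ in range(n_iter):
    y = x + random.gauss(0.0, step)
    # Stage 1: screen the proposal using only the cheap proxy
    a1 = min(1.0, math.exp(log_proxy(y) - log_proxy(x)))
    if random.random() < a1:
        # Stage 2: promote to the exact model; the correction factor below
        # keeps the chain targeting the exact density despite the screening
        lp_exact_y = log_exact(y); exact_calls += 1
        a2 = min(1.0, math.exp((lp_exact_y - lp_exact_x)
                               - (log_proxy(y) - log_proxy(x))))
        if random.random() < a2:
            x, lp_exact_x = y, lp_exact_y
    samples.append(x)
```

Proposals rejected at stage 1 never touch the expensive model, so `exact_calls` stays well below the iteration count while the samples still follow the exact (here standard normal) target.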
Abstract:
The costs related to the treatment of infected total joint arthroplasties represent an ever growing burden to society. Different patient-adapted therapeutic options exist, such as débridement and retention, or one- or two-stage exchange. If a two-stage exchange is used, a short (2-4 weeks) or long (>4-6 weeks) interval has to be considered. The Swiss DRG (Diagnosis Related Groups) system determines the reimbursement the hospital receives for the treatment of an infected total arthroplasty. This review assesses the cost-effectiveness of hospitalisation practices linked to surgical treatment in the two-stage exchange of a prosthetic joint infection. The aim of this retrospective study was to compare the economic impact of a short (2 to 4 weeks) versus a long (6 weeks and above) interval during a two-stage procedure. We retrospectively studied patients with a two-stage procedure for a hip or knee prosthetic joint infection at CHUV hospital, Lausanne (Switzerland), between 2012 and 2013, analysing the correlation between the interval length and the length of the hospital stay, as well as the costs and revenues per hospital stay. On average, there is a loss of 40,000 euros per hospitalisation for the treatment of a prosthetic joint infection. Revenues never cover all the costs, even with a short-interval procedure, and this economic loss increases with the length of the hospital stay when a long interval is chosen. The review explores potential for improvement in reimbursement and hospitalisation practices in the current Swiss healthcare setting. Alternative set-ups should be considered to decrease the burden of medical costs, either by a) increasing the reimbursement for the treatment of infected total joints, or by b) splitting the hospital stay with partner institutions (rapid transfer after the first operation from the centre hospital to a level 2 hospital, and retransfer to the centre for the second operation) in order to increase revenues.
Abstract:
Asbestos exposure can result in serious and frequently lethal diseases, including malignant mesothelioma. The host sensor for asbestos-induced inflammation is the NLRP3 inflammasome, and it is widely assumed that this complex is essential for asbestos-induced cancers. Here, we report that acute interleukin-1β production and recruitment of immune cells into the peritoneal cavity were significantly decreased in NLRP3-deficient mice after the administration of asbestos. However, NLRP3-deficient mice displayed a similar incidence of malignant mesothelioma and similar survival times to wild-type mice. Thus, early inflammatory reactions triggered by asbestos are NLRP3-dependent, but NLRP3 is not critical in the chronic development of asbestos-induced mesothelioma. Notably, in a two-stage carcinogenesis-induced papilloma model, NLRP3-deficient mice showed a resistance phenotype in two different strain backgrounds, suggesting a tumour-promoting role of NLRP3 in certain chemically-induced cancer types.