91 results for Hierarchical regression model


Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Artemether-lumefantrine is the most widely used artemisinin-based combination therapy for malaria, although treatment failures occur in some regions. We investigated the effect of dosing strategy on efficacy in a pooled analysis from trials done in a wide range of malaria-endemic settings. METHODS: We searched PubMed for clinical trials that enrolled and treated patients with artemether-lumefantrine and were published from 1960 to December, 2012. We merged individual patient data from these trials by use of standardised methods. The primary endpoint was the PCR-adjusted risk of Plasmodium falciparum recrudescence by day 28. Secondary endpoints consisted of the PCR-adjusted risk of P falciparum recurrence by day 42, PCR-unadjusted risk of P falciparum recurrence by day 42, early parasite clearance, and gametocyte carriage. Risk factors for PCR-adjusted recrudescence were identified using Cox's regression model with frailty shared across the study sites. FINDINGS: We included 61 studies done between January, 1998, and December, 2012, and included 14 327 patients in our analyses. The PCR-adjusted therapeutic efficacy was 97·6% (95% CI 97·4-97·9) at day 28 and 96·0% (95·6-96·5) at day 42. After controlling for age and parasitaemia, patients prescribed a higher dose of artemether had a lower risk of having parasitaemia on day 1 (adjusted odds ratio [OR] 0·92, 95% CI 0·86-0·99 for every 1 mg/kg increase in daily artemether dose; p=0·024), but not on day 2 (p=0·69) or day 3 (p=0·087). In Asia, children weighing 10-15 kg who received a total lumefantrine dose less than 60 mg/kg had the lowest PCR-adjusted efficacy (91·7%, 95% CI 86·5-96·9). In Africa, the risk of treatment failure was greatest in malnourished children aged 1-3 years (PCR-adjusted efficacy 94·3%, 95% CI 92·3-96·3). 
A higher artemether dose was associated with a lower gametocyte presence within 14 days of treatment (adjusted OR 0·92, 95% CI 0·85-0·99; p=0·037 for every 1 mg/kg increase in total artemether dose). INTERPRETATION: The recommended dose of artemether-lumefantrine provides reliable efficacy in most patients with uncomplicated malaria. However, therapeutic efficacy was lowest in young children from Asia and young underweight children from Africa; a higher dose regimen should be assessed in these groups. FUNDING: Bill & Melinda Gates Foundation.
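To make the dose-response arithmetic concrete, here is a minimal Python sketch of how a per-unit odds ratio compounds over a dose increase. Only the OR of 0·92 per mg/kg comes from the analysis above; the baseline day-1 parasitaemia risk is hypothetical.

```python
def adjusted_odds_ratio(or_per_unit: float, units: float) -> float:
    """Cumulative odds ratio for a `units` increase, assuming the log-odds
    are linear in dose (so the per-unit OR multiplies)."""
    return or_per_unit ** units

def prob_from_odds(odds: float) -> float:
    return odds / (1.0 + odds)

# Reported: adjusted OR 0.92 per 1 mg/kg increase in daily artemether dose.
or_3mg = adjusted_odds_ratio(0.92, 3.0)        # effect of a 3 mg/kg higher dose
baseline_prob = 0.20                           # hypothetical day-1 parasitaemia risk
baseline_odds = baseline_prob / (1.0 - baseline_prob)
adj_prob = prob_from_odds(baseline_odds * or_3mg)  # risk at the higher dose
```

Under these assumptions, a 3 mg/kg higher daily dose lowers the illustrative day-1 risk from 20% to roughly 16%.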

Relevance:

80.00%

Publisher:

Abstract:

The aim of the present study was to determine the impact of trabecular bone score on the probability of fracture above that provided by the clinical risk factors used in FRAX. We performed a retrospective cohort study of 33,352 women aged 40-99 years from the province of Manitoba, Canada, with baseline measurements of lumbar spine trabecular bone score (TBS) and FRAX risk variables. The analysis was cohort-specific rather than based on the Canadian version of FRAX. The associations between trabecular bone score, the FRAX risk factors, and the risk of fracture or death were examined using an extension of the Poisson regression model and used to calculate 10-year probabilities of fracture with and without TBS and to derive an algorithm to adjust fracture probability to take account of the independent contribution of TBS to fracture and mortality risk. During a mean follow-up of 4.7 years, 1754 women died, 1639 sustained one or more major osteoporotic fractures excluding hip fracture, and 306 women sustained one or more hip fractures. When fully adjusted for FRAX risk variables, TBS remained a statistically significant predictor of major osteoporotic fractures excluding hip fracture (HR/SD 1.18, 95 % CI 1.12-1.24), death (HR/SD 1.20, 95 % CI 1.14-1.26) and hip fracture (HR/SD 1.23, 95 % CI 1.09-1.38). Models adjusting major osteoporotic fracture and hip fracture probability were derived, accounting for age and trabecular bone score with death considered as a competing event. Lumbar spine texture analysis using TBS is a risk factor for osteoporotic fracture and a risk factor for death. The predictive ability of TBS is independent of FRAX clinical risk factors and femoral neck BMD. Adjustment of fracture probability to take account of the independent contribution of TBS to fracture and mortality risk requires validation in independent cohorts.
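The published adjustment algorithm is cohort-derived and also accounts for age and competing mortality; still, the core proportional-hazards arithmetic can be sketched as follows. All numeric inputs except the HR/SD of 1.18 are hypothetical.

```python
def tbs_adjusted_probability(p10: float, hr_per_sd: float, tbs_z: float) -> float:
    """Scale a 10-year fracture probability by TBS under a simplified
    proportional-hazards assumption: survival S becomes S ** m, where the
    hazard multiplier m is the HR per SD raised to the number of SDs the
    TBS lies *below* the cohort mean."""
    m = hr_per_sd ** (-tbs_z)
    return 1.0 - (1.0 - p10) ** m

# Hypothetical case: baseline 10-year probability of 10%, TBS 2 SD below the
# mean, using the reported HR/SD of 1.18 for major osteoporotic fracture.
p_adj = tbs_adjusted_probability(0.10, 1.18, -2.0)
```

With these inputs the adjusted probability rises from 10% to about 13.6%, while a TBS at the cohort mean (z = 0) leaves the probability unchanged.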

Relevance:

80.00%

Publisher:

Abstract:

Free induction decay (FID) navigators were found to qualitatively detect rigid-body head movements, yet it is unknown to what extent they can provide quantitative motion estimates. Here, we acquired FID navigators at different sampling rates and simultaneously measured head movements using a highly accurate optical motion tracking system. This strategy allowed us to estimate the accuracy and precision of FID navigators for quantification of rigid-body head movements. Five subjects were scanned with a 32-channel head coil array on a clinical 3T MR scanner during several resting and guided head movement periods. For each subject we trained a linear regression model based on FID navigator and optical motion tracking signals. FID-based motion model accuracy and precision were evaluated using cross-validation. FID-based prediction of rigid-body head motion achieved mean translational and rotational errors of 0.14±0.21 mm and 0.08±0.13°, respectively. Robust model training with sub-millimeter and sub-degree accuracy could be achieved using 100 data points with motion magnitudes of ±2 mm and ±1° for translation and rotation. The obtained linear models appeared to be subject-specific, as inter-subject application of a "universal" FID-based motion model resulted in poor prediction accuracy. The results show that substantial rigid-body motion information is encoded in FID navigator signal time courses. Although the applied method currently requires the simultaneous acquisition of FID signals and optical tracking data, the findings suggest that multi-channel FID navigators have the potential to complement existing tracking technologies for accurate rigid-body motion detection and correction in MRI.
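As a sketch of the training-and-validation step, here is a pure-Python single-channel analogue: fit a linear model mapping a navigator-like signal to translation, then check it by leave-one-out cross-validation. The data are synthetic; the study used multi-channel FID signals, optical ground truth, and its own cross-validation scheme.

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def loocv_rmse(xs, ys):
    """Leave-one-out cross-validated RMSE of the linear model."""
    errs = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append((ys[i] - (a + b * xs[i])) ** 2)
    return (sum(errs) / len(errs)) ** 0.5

# Synthetic stand-in: FID navigator amplitude vs. head translation (mm).
random.seed(0)
xs = [i / 10 for i in range(30)]
ys = [0.5 + 0.8 * x + random.gauss(0, 0.05) for x in xs]
a, b = fit_line(xs, ys)
```

The cross-validated error plays the role of the sub-millimeter accuracy check reported above: a small LOOCV RMSE indicates the linear mapping generalizes beyond the training points.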

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Due to the underlying diseases and the need for immunosuppression, patients after lung transplantation are particularly at risk for gastrointestinal (GI) complications that may negatively influence long-term outcome. The present study assessed the incidence and impact of GI complications after lung transplantation and aimed to identify risk factors. METHODS: We retrospectively analyzed all 227 consecutive single- and double-lung transplantations performed at the University Hospitals of Lausanne and Geneva between January 1993 and December 2010. Logistic regressions were used to test the effect of potentially influencing variables on the binary outcomes (overall, severe, and surgery-requiring complications), followed by a multiple logistic regression model. RESULTS: The final analysis included 205 patients; 22 patients were excluded due to re-transplantation, multiorgan transplantation, or incomplete datasets. GI complications were observed in 127 patients (62 %). Gastro-esophageal reflux disease was the most commonly observed complication (22.9 %), followed by inflammatory or infectious colitis (20.5 %) and gastroparesis (10.7 %). Major GI complications (Dindo/Clavien III-V) were observed in 83 (40.5 %) patients and were fatal in 4 patients (2.0 %). Multivariate analysis identified double-lung transplantation (p = 0.012) and the early (1993-1998) transplantation period (p = 0.008) as independent risk factors for developing major GI complications. Forty-three (21 %) patients required surgery, such as colectomy, cholecystectomy, and fundoplication in 6.8, 6.3, and 3.9 % of the patients, respectively. Multivariate analysis identified a Charlson comorbidity index of ≥3 as an independent risk factor for developing GI complications requiring surgery (p = 0.015). CONCLUSION: GI complications after lung transplantation are common. Outcome was rather encouraging in the setting of our transplant center.

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Gemcitabine plus cisplatin (GC) has been adopted as a neoadjuvant regimen for muscle-invasive bladder cancer despite the lack of Level I evidence in this setting. METHODS: Data were collected using an electronic data-capture platform from 28 international centers. Eligible patients had clinical T-classification 2 (cT2) through cT4aN0M0 urothelial cancer of the bladder and received neoadjuvant GC or methotrexate, vinblastine, doxorubicin, plus cisplatin (MVAC) before undergoing cystectomy. Logistic regression was used to compute propensity scores as the predicted probabilities of patients being assigned to MVAC versus GC given their baseline characteristics. These propensity scores were then included in a new logistic regression model to estimate an adjusted odds ratio comparing the odds of attaining a pathologic complete response (pCR) between patients who received MVAC and those who received GC. RESULTS: In total, 212 patients (146 patients in the GC cohort and 66 patients in the MVAC cohort) met criteria for inclusion in the analysis. The majority of patients in the MVAC cohort (77%) received dose-dense MVAC. The median age of patients was 63 years, they were predominantly men (74%), and they received a median of 3 cycles of neoadjuvant chemotherapy. The pCR rate was 29% in the MVAC cohort and 31% in the GC cohort. There was no significant difference in the pCR rate when adjusted for propensity scores between the 2 regimens (odds ratio, 0.91; 95% confidence interval, 0.48-1.72; P = .77). In an exploratory analysis evaluating survival, the hazard ratio comparing hazard rates for MVAC versus GC adjusted for propensity scores was not statistically significant (hazard ratio, 0.78; 95% confidence interval, 0.40-1.54; P = .48). CONCLUSIONS: Patients who received neoadjuvant GC and MVAC achieved comparable pCR rates in the current analysis, providing evidence to support what has become routine practice. Cancer 2015;121:2586-2593. 
© 2015 American Cancer Society.
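The two-step propensity adjustment described in METHODS can be sketched in pure Python: fit a logistic model for treatment assignment, then include the resulting propensity score as a covariate in the outcome model. The data below are synthetic with a single confounder and a hand-rolled gradient-ascent fit; the actual analysis used the full set of baseline characteristics.

```python
import math
import random

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, iters=1500):
    """Logistic regression by plain gradient ascent; each row of X starts
    with a constant 1 for the intercept."""
    w = [0.0] * len(X[0])
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            for j, xj in enumerate(xi):
                grad[j] += (yi - p) * xj
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

# Synthetic cohort: standardised age drives both regimen choice
# (1 = MVAC, 0 = GC) and the outcome (pCR); treatment itself has no effect.
random.seed(1)
ages  = [random.gauss(0.0, 1.0) for _ in range(200)]
treat = [1 if random.random() < sigmoid(0.8 * a) else 0 for a in ages]
pcr   = [1 if random.random() < sigmoid(-0.5 + 0.3 * a) else 0 for a in ages]

# Step 1: propensity scores = predicted P(MVAC | baseline characteristics).
w_ps = fit_logistic([[1.0, a] for a in ages], treat)
ps   = [sigmoid(w_ps[0] + w_ps[1] * a) for a in ages]

# Step 2: outcome model with the propensity score as a covariate;
# exp(treatment coefficient) is the propensity-adjusted odds ratio for pCR.
w_out = fit_logistic([[1.0, t, p] for t, p in zip(treat, ps)], pcr)
adjusted_or = math.exp(w_out[1])
```

Because treatment has no true effect in this synthetic cohort, the adjusted odds ratio should not differ systematically from 1, mirroring the null comparison reported above.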

Relevance:

80.00%

Publisher:

Abstract:

Notre consommation en eau souterraine, en particulier comme eau potable ou pour l'irrigation, a considérablement augmenté au cours des années. De nombreux problèmes font alors leur apparition, allant de la prospection de nouvelles ressources à la remédiation des aquifères pollués. Indépendamment du problème hydrogéologique considéré, le principal défi reste la caractérisation des propriétés du sous-sol. Une approche stochastique est alors nécessaire afin de représenter cette incertitude en considérant de multiples scénarios géologiques et en générant un grand nombre de réalisations géostatistiques. Nous rencontrons alors la principale limitation de ces approches qui est le coût de calcul dû à la simulation des processus d'écoulements complexes pour chacune de ces réalisations. Dans la première partie de la thèse, ce problème est investigué dans le contexte de propagation de l'incertitude, où un ensemble de réalisations est identifié comme représentant les propriétés du sous-sol. Afin de propager cette incertitude à la quantité d'intérêt tout en limitant le coût de calcul, les méthodes actuelles font appel à des modèles d'écoulement approximés. Cela permet l'identification d'un sous-ensemble de réalisations représentant la variabilité de l'ensemble initial. Le modèle complexe d'écoulement est alors évalué uniquement pour ce sous-ensemble, et, sur la base de ces réponses complexes, l'inférence est faite. Notre objectif est d'améliorer la performance de cette approche en utilisant toute l'information à disposition. Pour cela, le sous-ensemble de réponses approximées et exactes est utilisé afin de construire un modèle d'erreur, qui sert ensuite à corriger le reste des réponses approximées et prédire la réponse du modèle complexe. Cette méthode permet de maximiser l'utilisation de l'information à disposition sans augmentation perceptible du temps de calcul. La propagation de l'incertitude est alors plus précise et plus robuste. 
La stratégie explorée dans le premier chapitre consiste à apprendre d'un sous-ensemble de réalisations la relation entre les modèles d'écoulement approximé et complexe. Dans la seconde partie de la thèse, cette méthodologie est formalisée mathématiquement en introduisant un modèle de régression entre les réponses fonctionnelles. Comme ce problème est mal posé, il est nécessaire d'en réduire la dimensionnalité. Dans cette optique, l'innovation du travail présenté provient de l'utilisation de l'analyse en composantes principales fonctionnelles (ACPF), qui non seulement effectue la réduction de dimensionnalité tout en maximisant l'information retenue, mais permet aussi de diagnostiquer la qualité du modèle d'erreur dans cet espace fonctionnel. La méthodologie proposée est appliquée à un problème de pollution par une phase liquide non-aqueuse et les résultats obtenus montrent que le modèle d'erreur permet une forte réduction du temps de calcul tout en estimant correctement l'incertitude. De plus, pour chaque réponse approximée, une prédiction de la réponse complexe est fournie par le modèle d'erreur. Le concept de modèle d'erreur fonctionnel est donc pertinent pour la propagation de l'incertitude, mais aussi pour les problèmes d'inférence bayésienne. Les méthodes de Monte Carlo par chaîne de Markov (MCMC) sont les algorithmes les plus communément utilisés afin de générer des réalisations géostatistiques en accord avec les observations. Cependant, ces méthodes souffrent d'un taux d'acceptation très bas pour les problèmes de grande dimensionnalité, résultant en un grand nombre de simulations d'écoulement gaspillées. Une approche en deux temps, le "MCMC en deux étapes", a été introduite afin d'éviter les simulations inutiles du modèle complexe par une évaluation préliminaire de la réalisation. Dans la troisième partie de la thèse, le modèle d'écoulement approximé couplé à un modèle d'erreur sert d'évaluation préliminaire pour le "MCMC en deux étapes". 
Nous démontrons une augmentation du taux d'acceptation par un facteur de 1.5 à 3 en comparaison avec une implémentation classique de MCMC. Une question reste sans réponse : comment choisir la taille de l'ensemble d'entraînement et comment identifier les réalisations permettant d'optimiser la construction du modèle d'erreur. Cela requiert une stratégie itérative afin que, à chaque nouvelle simulation d'écoulement, le modèle d'erreur soit amélioré en incorporant les nouvelles informations. Ceci est développé dans la quatrième partie de la thèse, où cette méthodologie est appliquée à un problème d'intrusion saline dans un aquifère côtier. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. 
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method) both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. 
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
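The two-stage (delayed-acceptance) MCMC idea can be illustrated with a toy one-dimensional sampler: a cheap, slightly biased proxy density screens proposals, and the "exact" density is evaluated only for proposals that pass. Both densities below are hypothetical stand-ins for the flow models in the thesis.

```python
import math
import random

def log_exact(x: float) -> float:
    """Expensive 'exact' model: log-density of N(2, 1) up to a constant."""
    return -0.5 * (x - 2.0) ** 2

def log_proxy(x: float) -> float:
    """Cheap approximation, deliberately biased."""
    return -0.5 * (x - 2.2) ** 2

def two_stage_mcmc(n: int, step: float = 1.0, seed: int = 0):
    random.seed(seed)
    x, lx_e, lx_p = 0.0, log_exact(0.0), log_proxy(0.0)
    chain, exact_calls = [], 0
    for _ in range(n):
        prop = x + random.gauss(0.0, step)
        lp_p = log_proxy(prop)
        # Stage 1: screen the proposal with the cheap proxy only.
        if math.log(random.random()) < lp_p - lx_p:
            # Stage 2: the only expensive call, with the corrected ratio.
            exact_calls += 1
            lp_e = log_exact(prop)
            if math.log(random.random()) < (lp_e - lx_e) - (lp_p - lx_p):
                x, lx_e, lx_p = prop, lp_e, lp_p
        chain.append(x)
    return chain, exact_calls

chain, calls = two_stage_mcmc(20000)
```

The stage-2 ratio corrects for the proxy screening, so the chain still targets the exact density while the exact model is evaluated only for proposals the proxy accepts.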

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Shared Decision Making (SDM) is increasingly advocated as a model for medical decision making. However, there is still low use of SDM in clinical practice. High impact factor journals might represent an efficient way for its dissemination. We aimed to identify and characterize publication trends of SDM in 15 high impact medical journals. METHODS: We selected the 15 general and internal medicine journals with the highest impact factor publishing original articles, letters and editorials. We retrieved publications from 1996 to 2011 through the full-text search function on each journal website and abstracted bibliometric data. We included publications of any type containing the phrase "shared decision making" or five other variants in their abstract or full text. These were referred to as SDM publications. A polynomial Poisson regression model with logarithmic link function was used to assess the evolution across the period of the number of SDM publications according to publication characteristics. RESULTS: We identified 1285 SDM publications out of 229,179 publications in 15 journals from 1996 to 2011. The absolute number of SDM publications by journal ranged from 2 to 273 over 16 years. SDM publications increased both in absolute and relative numbers per year, from 46 (0.32% relative to all publications from the 15 journals) in 1996 to 165 (1.17%) in 2011. This growth was exponential (P < 0.01). We found fewer research publications (465, 36.2% of all SDM publications) than non-research publications, which included non-systematic reviews, letters, and editorials. The increase of research publications across time was linear. Full-text search retrieved ten times more SDM publications than a similar PubMed search (1285 vs. 119 respectively). CONCLUSION: This review in full-text showed that SDM publications increased exponentially in major medical journals from 1996 to 2011. 
This growth might reflect an increased dissemination of the SDM concept to the medical community.
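The exponential-growth finding can be illustrated with a crude stand-in for the Poisson log-link fit: ordinary least squares on log counts recovers the annual growth rate. The counts below are synthetic, generated to match the reported rise from 46 to about 165 publications over 1996-2011.

```python
import math

# Synthetic yearly counts following exp growth at ~8.5%/year (46 -> ~165).
years  = list(range(1996, 2012))
counts = [round(46 * math.exp(0.085 * (y - 1996))) for y in years]

# Crude stand-in for a Poisson regression with log link: OLS on log(count).
xs = [y - 1996 for y in years]
ys = [math.log(c) for c in counts]
n  = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
annual_growth = math.exp(slope) - 1.0   # fractional growth per year
```

A proper Poisson fit would weight observations by their expected counts, but on clean exponential data the log-linear slope recovers the same growth rate.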

Relevance:

80.00%

Publisher:

Abstract:

Objectives: We present the retrospective analysis of a single-institution experience with radiosurgery (RS) for brain metastasis (BM) with Gamma Knife (GK) and Linac. Methods: From July 2010 to July 2012, 28 patients (with 83 lesions) had RS with GK and 35 patients (with 47 lesions) with Linac. The primary outcome was local progression-free survival (LPFS). The secondary outcome was overall survival (OS). Apart from the standard statistical analysis, we included a Cox regression model with shared frailty to model the within-patient correlation (a preliminary evaluation showed a significant frailty effect, meaning that the within-patient correlation could not be ignored). Results: The mean follow-up period was 11.7 months (median 7.9, 1.7-22.7) for GK and 18.1 (median 17, 7.5-28.7) for Linac. The median number of lesions per patient was 2.5 (1-9) in GK compared with 1 (1-3) in Linac. There were more radioresistant lesions (melanoma) and more lesions located in functional areas in the GK group. The median dose was 24 Gy (GK) compared with 20 Gy (Linac). The actuarial LPFS rate for GK at 3, 6, 9, 12, and 17 months was 96.96, 96.96, 96.96, 88.1, and 81.5%, and remained stable till 32 months; for Linac at 3, 6, 12, 17, 24, and 33 months, it was 91.5, 91.5, 91.5, 79.9, 55.5, and 17.1%, respectively (p = 0.03, chi-square test). After the Cox regression analysis with shared frailty, the p-value was not statistically significant between groups. The median overall survival was 9.7 months for GK and 23.6 months for the Linac group. Uni- and multivariate analyses showed that a lower GPA score and noncontrolled systemic status were associated with lower OS. Cox regression analysis adjusting for these two parameters showed comparable OS rates. Conclusions: In this comparative report between GK and Linac, preliminary analysis showed that more difficult cases are treated by GK, with patients harboring more lesions, radioresistant tumors, and lesions in highly functional locations. 
The groups look, in this sense, very heterogeneous at baseline. After the Cox frailty model, the LPFS rates appeared very similar (p > 0.05). The OS was similar after adjusting for systemic status and GPA score (p > 0.05). The technical reasons for choosing GK instead of Linac were anatomical locations in highly functional areas, histology, technical limitations of Linac movements (especially for lower posterior fossa locations), or closeness of multiple lesions to highly functional areas, which precludes optimal dosimetry with Linac.
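The actuarial LPFS figures quoted above are product-limit (Kaplan-Meier) estimates; a minimal sketch with toy per-lesion data follows (the study additionally used a shared-frailty Cox model to handle within-patient correlation, which this sketch ignores).

```python
from collections import Counter

def kaplan_meier(times, events):
    """Product-limit survival estimate; events: 1 = progression, 0 = censored.
    Returns (time, survival) pairs at each progression time."""
    deaths  = Counter(t for t, e in zip(times, events) if e)
    leavers = Counter(times)          # everyone exits the risk set at their time
    s, curve, at_risk = 1.0, [], len(times)
    for t in sorted(leavers):
        d = deaths.get(t, 0)
        if d:
            s *= 1.0 - d / at_risk    # multiply by conditional survival at t
            curve.append((t, s))
        at_risk -= leavers[t]
    return curve

# Toy per-lesion follow-up (months) and progression indicators.
curve = kaplan_meier([3, 6, 6, 9, 12, 12, 17, 20], [0, 1, 0, 0, 1, 0, 1, 0])
```

Censored lesions leave the risk set without forcing a drop in the curve, which is why the estimate stays flat between progression events, as in the "remained stable till 32 months" observation above.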

Relevance:

80.00%

Publisher:

Abstract:

Over the past few decades, age estimation of living persons has represented a challenging task for many forensic services worldwide. In general, the process for age estimation includes the observation of the degree of maturity reached by some physical attributes, such as dentition or several ossification centers. The estimated chronological age or the probability that an individual belongs to a meaningful class of ages is then obtained from the observed degree of maturity by means of various statistical methods. Among these methods, those developed in a Bayesian framework offer to users the possibility of coherently dealing with the uncertainty associated with age estimation and of assessing in a transparent and logical way the probability that an examined individual is younger or older than a given age threshold. Recently, a Bayesian network for age estimation has been presented in the scientific literature; this kind of probabilistic graphical tool may facilitate the use of the probabilistic approach. Probabilities of interest in the network are assigned by means of transition analysis, a statistical parametric model, which links the chronological age and the degree of maturity by means of specific regression models, such as logit or probit models. Since different regression models can be employed in transition analysis, the aim of this paper is to study the influence of the model on the classification of individuals. The analysis was performed using a dataset related to the ossification status of the medial clavicular epiphysis, and the results support that the classification of individuals is not dependent on the choice of the regression model.
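The insensitivity to the link function can be seen in a toy version of the classification step: with slopes matched by the usual factor of about 1.7 between probit and logit, both models classify every hypothetical individual identically against an age-18 threshold. All parameters below are illustrative, not fitted to clavicular data.

```python
import math
from statistics import NormalDist

def p_mature_logit(age: float, a: float, b: float) -> float:
    """Transition model: probability that the ossification stage has been
    reached by a given age, logit link."""
    return 1.0 / (1.0 + math.exp(-(a + b * age)))

def p_mature_probit(age: float, a: float, b: float) -> float:
    return NormalDist().cdf(a + b * age)

# Both curves cross 0.5 at age 18; the probit parameters are the logit ones
# divided by ~1.7 to match the curve shapes.
ages = [14.25 + 0.5 * i for i in range(24)]          # 14.25 to 25.75 years
logit_cls  = [p_mature_logit(x, -18.0, 1.0) > 0.5 for x in ages]
probit_cls = [p_mature_probit(x, -18.0 / 1.7, 1.0 / 1.7) > 0.5 for x in ages]
agreement  = sum(l == p for l, p in zip(logit_cls, probit_cls)) / len(ages)
```

The two link functions differ in the tails, but threshold classification only depends on where the curve crosses the decision probability, which is why the paper finds the choice of model immaterial.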

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Delirium and frailty - both potentially reversible geriatric syndromes - are seldom studied together, although they often occur jointly in older patients discharged from hospitals. This study aimed to explore the relationship between delirium and frailty in older adults discharged from hospitals. METHODS: Of the 221 patients aged >65 years who were invited to participate, only 114 gave their consent to take part in this study. Delirium was assessed using the Confusion Assessment Method (CAM), in which patients were classified dichotomously as delirious or nondelirious according to its algorithm. Frailty was assessed using the Edmonton Frail Scale, which classifies patients dichotomously as frail or nonfrail. In addition to the sociodemographic characteristics, covariates such as scores from the Mini-Mental State Examination, Instrumental Activities of Daily Living scale, and Cumulative Illness Rating Scale for Geriatrics and details regarding polymedication were collected. A multiple linear regression model was used for analysis. RESULTS: Almost 20% of participants had delirium (n=22), and 76.3% were classified as frail (n=87); 31.5% of the variance in the delirium score was explained by frailty (R²=0.315). Age; polymedication; scores on the CAM, Instrumental Activities of Daily Living scale, and Cumulative Illness Rating Scale for Geriatrics; and frailty together increased the explained variance of delirium from 32% to 64% (R²=0.64). CONCLUSION: Frailty is strongly related to delirium in older patients after discharge from the hospital.

Relevance:

80.00%

Publisher:

Abstract:

In a cohort study of 182 consecutive patients with active endogenous Cushing's syndrome, the only predictor of fracture occurrence after adjustment for age, gender, bone mineral density (BMD), and trabecular bone score (TBS) was the 24-h urinary free cortisol (24hUFC) level, with a threshold of 1472 nmol/24 h (odds ratio, 3.00 (95 % confidence interval (CI), 1.52-5.92); p = 0.002). INTRODUCTION: The aim was to estimate the risk factors for fracture in subjects with endogenous Cushing's syndrome (CS) and to evaluate the value of the TBS in these patients. METHODS: All enrolled patients with CS (n = 182) were interviewed in relation to low-traumatic fractures and underwent lateral X-ray imaging from T4 to L5. BMD measurements were performed using a DXA Prodigy device (GEHC Lunar, Madison, Wisconsin, USA). The TBS was derived retrospectively from existing BMD scans, blinded to clinical outcome, using TBS iNsight software v2.1 (Medimaps, Merignac, France). Urinary free cortisol (24hUFC) was measured by immunochemiluminescence assay (reference range, 60-413 nmol/24 h). RESULTS: Among the enrolled patients with CS (149 females; 33 males; mean age, 37.8 years (95 % confidence interval, 34.2-39.1); 24hUFC, 2370 nmol/24 h (2087-2632)), fractures were confirmed in 81 (44.5 %) patients, with 70 suffering from vertebral fractures, which were multiple in 53 cases; 24 patients reported non-vertebral fractures. The mean spine TBS was 1.207 (1.187-1.228), and the TBS Z-score was -1.86 (-2.07 to -1.65); the area under the curve (AUC) for fracture prediction by mean spine TBS was 0.548 (95 % CI, 0.454-0.641). In the final regression model, the only predictor of fracture occurrence was the 24hUFC level (p = 0.001), with an odds ratio of 1.041 (95 % CI, 1.019-1.063) for every 100 nmol/24-h cortisol elevation (AUC (24hUFC) = 0.705 (95 % CI, 0.629-0.782)). CONCLUSIONS: Young patients with CS have a low TBS. 
However, the only predictor of low traumatic fracture is the severity of the disease itself, indicated by high 24hUFC levels.
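Since the model is linear on the log-odds scale, the reported OR of 1.041 per 100 nmol/24 h compounds multiplicatively; a tiny worked sketch follows (the 1000 nmol/24 h elevation is an illustrative figure, not a study value).

```python
def fracture_or(delta_nmol: float, or_per_100: float = 1.041) -> float:
    """Cumulative odds ratio for a cortisol elevation of `delta_nmol`,
    assuming the log-odds are linear in 24hUFC (as in the fitted model)."""
    return or_per_100 ** (delta_nmol / 100.0)

# Illustrative elevation of 1000 nmol/24 h above a reference level.
or_1000 = fracture_or(1000.0)
```

A 1000 nmol/24 h excess thus corresponds to roughly a 1.5-fold increase in the odds of fracture under this model.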

Relevance:

80.00%

Publisher:

Abstract:

AIMS: There is no standard test to determine the fatigue resistance of denture teeth. With the increasing number of patients with implant-retained dentures, the mechanical strength of denture teeth requires more attention and valid laboratory test set-ups. The purpose of the present study was to determine the fatigue resistance of various denture teeth using a dynamic load testing machine. METHODS: Four denture teeth were used: Bonartic II (Candulor), Physiodens (Vita), SR Phonares II (Ivoclar Vivadent) and Trubyte (Dentsply). For dynamic load testing, first upper molars with a similar shape and cusp inclination were selected. The molar teeth were embedded in cylindrical steel molds with denture base material (ProBase, Ivoclar Vivadent). Dynamic fatigue loading was carried out on the mesio-buccal cusp at a 45° angle using dynamic testing machines, with 2,000,000 cycles at 2 Hz in water (37°C). Three specimens per group were tested at each load level, and the load was decreased over at least four levels until none of the three specimens failed. All the specimens were evaluated under a stereo microscope (20× magnification). The number of cycles reached before a failure was observed, and its dependence on the load and on the material, was modeled using a parametric survival regression model with a lognormal distribution. This allowed us to estimate the fatigue resistance for a given material as the maximal load for which one would observe less than 1% failure after 2,000,000 cycles. RESULTS: The failure pattern was similar for all denture teeth, showing a large chipping of the loaded mesio-buccal cusp. In our regression model, there were statistically significant differences among the materials, with SR Phonares II and Bonartic II showing a higher resistance than Physiodens and Trubyte; the fatigue resistance was estimated at around 110 N for the former two and at about 60 N for the latter two materials. 
CONCLUSION: The fatigue resistance may be a useful parameter to assess and to compare the clinical risk of chipping and fracture of denture tooth materials.
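The "<1% failure at 2,000,000 cycles" criterion can be made explicit with a lognormal accelerated-failure-time sketch; the coefficients below are hypothetical placeholders, not the fitted values from the study.

```python
import math
from statistics import NormalDist

def fatigue_limit(b0: float, b_load: float, sigma: float,
                  cycles: int = 2_000_000, fail_frac: float = 0.01) -> float:
    """Load at which a lognormal AFT model, log N = b0 + b_load*load + sigma*Z
    with Z ~ N(0, 1), predicts a `fail_frac` failure probability by `cycles`."""
    z = NormalDist().inv_cdf(fail_frac)          # about -2.33 for 1 %
    return (math.log(cycles) - b0 - sigma * z) / b_load

# Hypothetical coefficients (b_load < 0: higher load means shorter life).
limit = fatigue_limit(b0=18.0, b_load=-0.04, sigma=1.0)
```

Solving the fitted model for the load at the 1% quantile of cycles-to-failure is how estimates of this kind (such as the reported ~110 N vs. ~60 N) would be read off a lognormal survival regression.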

Relevance:

80.00%

Publisher:

Abstract:

The detailed in-vivo characterization of subcortical brain structures is essential not only to understand the basic organizational principles of the healthy brain but also for the study of the involvement of the basal ganglia in brain disorders. The particular tissue properties of the basal ganglia, most importantly their high iron content, strongly affect the contrast of magnetic resonance imaging (MRI) images, hampering the accurate automated assessment of these regions. This technical challenge explains the substantial controversy in the literature about the magnitude, directionality and neurobiological interpretation of basal ganglia structural changes estimated from MRI and computational anatomy techniques. My scientific project addresses the pertinent need for accurate automated delineation of the basal ganglia using two complementary strategies:
- empirical testing of the utility of novel imaging protocols to provide superior contrast in the basal ganglia and to quantify brain tissue properties;
- improvement of the algorithms for the reliable automated detection of the basal ganglia and thalamus.
Previous research demonstrated that MRI protocols based on magnetization transfer (MT) saturation maps provide optimal grey-white matter contrast in subcortical structures compared with the widely used T1-weighted (T1w) images (Helms et al., 2009). Under the assumption of a direct impact of brain tissue properties on MR contrast, my first study addressed the mechanisms underlying the region-specific effects in the basal ganglia. I used established whole-brain voxel-based methods to test for grey matter volume differences between MT and T1w imaging protocols, with an emphasis on subcortical structures. I applied a regression model to explain the observed grey matter differences by the regionally specific impact of brain tissue properties on the MR contrast. 
The results of my first project prompted further methodological developments to create adequate priors for the basal ganglia and thalamus, allowing optimal automated delineation of these structures in a probabilistic tissue classification framework. I established a standardized workflow for manual labelling of the basal ganglia, thalamus, and cerebellar dentate to create new tissue probability maps from quantitative MR maps featuring optimal grey-white matter contrast in subcortical areas. The validation step of the new tissue priors included a comparison of the classification performance with the existing probability maps. In my third project I continued investigating the factors impacting automated brain tissue classification that result in interpretational shortcomings when using T1w MRI data in the framework of computational anatomy. While the intensity in T1w images is predominantly
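A minimal sketch of how spatial tissue priors enter a probabilistic classification, assuming a single voxel, three classes (GM, WM, CSF), and invented Gaussian intensity models; real frameworks iterate this voxel-wise together with bias-field and parameter estimation:

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Gaussian likelihood of intensity x under a class intensity model."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical class intensity parameters (GM, WM, CSF) for an MT map;
# the numbers are illustrative, not fitted values.
means  = np.array([0.9, 1.8, 0.2])
sigmas = np.array([0.2, 0.2, 0.1])

# Spatial tissue priors at this voxel, e.g. taken from the new subcortical
# tissue probability maps (here favouring grey matter).
prior = np.array([0.70, 0.25, 0.05])

intensity = 1.1
likelihood = gaussian(intensity, means, sigmas)
posterior = likelihood * prior        # posterior ∝ likelihood × prior
posterior /= posterior.sum()          # normalise over classes
print("P(GM), P(WM), P(CSF):", posterior)
```

The quality of the priors directly shapes the posterior, which is why priors built from maps with good subcortical contrast improve delineation of the basal ganglia.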

Relevância:

80.00%

Publicador:

Resumo:

Wastewater-based epidemiology consists in acquiring relevant information about the lifestyle and health status of a population through the analysis of wastewater samples collected at the influent of a wastewater treatment plant. Whilst being a very young discipline, it has experienced an astonishing development since its first application in 2005. The possibility of gathering community-wide information about drug use has been among its major fields of application. The wide resonance of the first results sparked the interest of scientists from various disciplines, and research has since broadened in innumerable directions. Although praised as a revolutionary approach, there was a need to critically assess its added value with regard to the existing indicators used to monitor illicit drug use. The main, and explicit, objective of this research was to evaluate the added value of wastewater-based epidemiology with regard to two particular, although interconnected, dimensions of illicit drug use. The first is the epidemiological, or societal, dimension: to evaluate if and how the discipline completes our current vision of the extent of illicit drug use at the population level, and whether it can guide the planning of future prevention measures and drug policies. The second dimension is the criminal one, with a particular focus on the networks which develop around the large demand for illicit drugs. The goal here was to assess whether wastewater-based epidemiology, combined with indicators stemming from the epidemiological dimension, could provide additional clues about the structure of drug distribution networks and the size of their market. This research also had an implicit objective, which focused on initiating the path of wastewater-based epidemiology at the Ecole des Sciences Criminelles of the University of Lausanne.
This consisted of gathering the necessary knowledge about the collection, preparation, and analysis of wastewater samples and, most importantly, of understanding how to interpret the acquired data and produce useful information. In the first phase of this research, it was possible to determine that ammonium loads, measured directly in the wastewater stream, could be used to monitor the dynamics of the population served by the wastewater treatment plant. Furthermore, it was shown that, in the long term, population dynamics did not have a substantial impact on consumption patterns measured through wastewater analysis. Focusing on methadone, for which precise prescription data were available, it was possible to show that reliable consumption estimates could be obtained via wastewater analysis. This validated the selected sampling strategy, which was then used to monitor the consumption of heroin through the measurement of morphine. The latter, in combination with prescription and sales data, provided estimates of heroin consumption in line with other indicators. These results, combined with epidemiological data, highlighted the good correspondence between measurements and expectations and, furthermore, suggested that the dark figure of heroin users evading harm-reduction programmes, who would thus not be measured by conventional indicators, is likely limited. The third part, a collaborative study aiming to extensively investigate geographical differences in drug use, showed wastewater analysis to be a useful complement to existing indicators. In particular for stigmatised drugs, such as cocaine and heroin, it helped decipher the complex picture derived from surveys and crime statistics. Globally, it provided relevant information to better understand the drug market, from both an epidemiological and a repressive perspective.
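The back-calculation underlying such estimates can be sketched as below; the per-capita ammonium excretion, the excretion fraction, the molar-mass correction, and all measured values are illustrative assumptions rather than the study's data:

```python
# Back-calculation of drug consumption from a 24 h composite wastewater sample.
# All numeric values are illustrative assumptions, not measured data.

def population_from_ammonium(nh4_load_g_day, per_capita_nh4_g_day=7.0):
    """Estimate the contributing population from the daily ammonium load."""
    return nh4_load_g_day / per_capita_nh4_g_day

def drug_consumption_mg_day_1000(conc_ng_l, flow_m3_day, excretion_fraction,
                                 molar_ratio, population):
    """Population-normalised parent-drug consumption (mg/day/1000 inhabitants)."""
    load_mg_day = conc_ng_l * flow_m3_day * 1e3 * 1e-6   # ng/L x m3/day -> mg/day
    parent_mg_day = load_mg_day / excretion_fraction * molar_ratio
    return parent_mg_day / population * 1000

pop = population_from_ammonium(nh4_load_g_day=1_400_000)   # ~200,000 inhabitants
# Morphine as a marker of heroin use (after subtracting the prescribed share):
est = drug_consumption_mg_day_1000(conc_ng_l=400, flow_m3_day=80_000,
                                   excretion_fraction=0.75,
                                   molar_ratio=369.4 / 285.3,  # MW heroin/morphine
                                   population=pop)
print(f"population: {pop:.0f}, estimated heroin use: {est:.1f} mg/day/1000 inh.")
```

The two steps mirror the text: the ammonium load normalises for the fluctuating contributing population, and the metabolite load is then converted back to parent-drug consumption.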
The fourth part focused on cannabis and on the potential of combining wastewater and survey data to overcome some of their respective limitations. Using a hierarchical inference model, it was possible to refine current estimates of cannabis prevalence in the metropolitan area of Lausanne. Wastewater results suggested that the actual prevalence is substantially higher than existing figures, thus supporting the common belief that surveys tend to underestimate cannabis use. Whilst affected by several biases, the information collected through surveys made it possible to overcome some of the limitations linked to the analysis of cannabis markers in wastewater (i.e., stability and limited excretion data). These findings highlighted the importance and utility of combining wastewater-based epidemiology with existing indicators of drug use. Similarly, the fifth part of the research centred on assessing the potential uses of wastewater-based epidemiology from a law enforcement perspective. Through three concrete examples, it was shown that results from wastewater analysis can be used to produce highly relevant intelligence, allowing drug enforcement to assess the structure and operations of drug distribution networks and, ultimately, to guide their decisions at the tactical and/or operational level. Finally, the potential of wastewater-based epidemiology to monitor the use of harmful, prohibited, and counterfeit pharmaceuticals was illustrated through the analysis of sibutramine, and its urinary metabolite, in wastewater samples. The results of this research have highlighted that wastewater-based epidemiology is a useful and powerful approach with numerous scopes. Faced with the complexity of measuring a hidden phenomenon like illicit drug use, it is a major addition to the panoply of existing indicators.
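A minimal, grid-based sketch of combining a survey-derived prior with a wastewater-derived likelihood; this is a deliberately simplified stand-in for the hierarchical inference model, and all distributions and numbers are invented:

```python
import numpy as np

# Candidate prevalence values on a grid.
theta = np.linspace(0.001, 0.30, 1000)

# Prior from surveys: Beta(8, 92)-shaped, peaking near 7-8% prevalence
# (unnormalised; the posterior is normalised below).
prior = theta**7 * (1 - theta)**91

# Wastewater-derived likelihood: centred on ~12% prevalence, with wide
# uncertainty reflecting marker stability and limited excretion data.
likelihood = np.exp(-0.5 * ((theta - 0.12) / 0.03) ** 2)

# Posterior combines both sources; surveys anchor, wastewater pulls upward.
posterior = prior * likelihood
d = theta[1] - theta[0]
posterior /= (posterior * d).sum()
post_mean = (theta * posterior * d).sum()
print(f"posterior mean prevalence: {post_mean:.3f}")
```

The posterior lands between the survey and wastewater estimates, weighted by their respective uncertainties, which is the qualitative behaviour the combined model exploits.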

Relevância:

80.00%

Publicador:

Resumo:

BACKGROUND: Current cancer mortality statistics are important for public health decision making and resource allocation. Age-standardized rates and numbers of deaths are predicted for 2016 in the European Union. PATIENTS AND METHODS: Population and death certification data for stomach, colorectum, pancreas, lung, breast, uterus, prostate, leukemia and total cancers were obtained from the World Health Organisation database and Eurostat. Figures were derived for the EU, France, Germany, Italy, Poland, Spain, and the UK. Projected numbers of deaths by age group were obtained for 2016 by linear regression on estimated numbers of deaths over the most recent time period identified by a joinpoint regression model. RESULTS: Projected total cancer mortality trends for 2016 in the EU are favourable in both sexes, with rates of 133.5/100,000 men and 85.2/100,000 women (falls of 8% and 3% since 2011), corresponding to 753,600 and 605,900 deaths in men and women and a total of 1,359,500 projected cancer deaths (+3% compared with 2011, owing to population ageing). In men, lung, colorectal, and prostate cancer rates fell 11%, 5%, and 8% since 2011. Breast and colorectal cancer trends in women are favourable (falls of 8% and 7%, respectively), but lung and pancreatic cancer rates rose 5% and 4% since 2011, reaching 14.4 and 5.6/100,000 women. Leukemia shows favourable projected mortality for both sexes and all age groups, with stronger falls in the younger age groups; rates are 4.0/100,000 men and 2.5/100,000 women, corresponding to falls of 14% and 12%, respectively. CONCLUSION: The 2016 predictions for EU cancer mortality confirm the favourable trends in rates, particularly for men. Lung cancer is likely to remain the leading site for female cancer mortality rates. Continuing falls in mortality, larger in children and young adults, are predicted in leukemia, essentially due to advances in management and therapy and their subsequent adoption across Europe.
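The projection and standardisation steps can be sketched as follows, with invented death counts and an invented standard population; the joinpoint step that selects the most recent fitting window is omitted:

```python
import numpy as np

# Fit a linear trend to the most recent certified death counts for one
# hypothetical age group and extrapolate to 2016 (counts are invented).
years = np.array([2009, 2010, 2011, 2012])
deaths = np.array([10450, 10280, 10150, 9990])

slope, intercept = np.polyfit(years, deaths, 1)
pred_2016 = slope * 2016 + intercept
print(f"projected deaths in 2016: {pred_2016:.0f}")

# Age-standardised rate: weight age-specific rates by a standard population
# so that rates are comparable across countries and over time.
rates = np.array([2.0, 15.0, 80.0, 450.0])        # deaths per 100,000, by age band
std_weights = np.array([0.40, 0.30, 0.20, 0.10])  # standard population shares
asr = (rates * std_weights).sum()
print(f"age-standardised rate: {asr:.1f}/100,000")
```

In the actual analysis this extrapolation is done per age group and cancer site, and the age-specific projections are then standardised and summed; the sketch shows only the arithmetic of one such step.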