926 results for smooth transition regression model


Relevance: 100.00%

Abstract:

Free induction decay (FID) navigators were found to qualitatively detect rigid-body head movements, yet it is unknown to what extent they can provide quantitative motion estimates. Here, we acquired FID navigators at different sampling rates and simultaneously measured head movements using a highly accurate optical motion tracking system. This strategy allowed us to estimate the accuracy and precision of FID navigators for the quantification of rigid-body head movements. Five subjects were scanned with a 32-channel head coil array on a clinical 3T MR scanner during several resting and guided head movement periods. For each subject we trained a linear regression model based on FID navigator and optical motion tracking signals. FID-based motion model accuracy and precision were evaluated using cross-validation. FID-based prediction of rigid-body head motion achieved mean translational and rotational errors of 0.14±0.21 mm and 0.08±0.13°, respectively. Robust model training with sub-millimeter and sub-degree accuracy could be achieved using 100 data points with motion magnitudes of ±2 mm and ±1° for translation and rotation. The obtained linear models appeared to be subject-specific, as inter-subject application of a "universal" FID-based motion model resulted in poor prediction accuracy. The results show that substantial rigid-body motion information is encoded in FID navigator signal time courses. Although the applied method currently requires the simultaneous acquisition of FID signals and optical tracking data, the findings suggest that multi-channel FID navigators have the potential to complement existing tracking technologies for accurate rigid-body motion detection and correction in MRI.
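A minimal sketch of the per-subject model training and cross-validation described above, in Python with scikit-learn; the array shapes, channel count, and random stand-in data are assumptions for illustration, not the study's acquisition details.

```python
# Per-subject linear model: multi-channel FID magnitudes -> 6 rigid-body
# motion parameters (3 translations in mm, 3 rotations in degrees).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
fid = rng.normal(size=(100, 32))      # 100 time points, 32-channel head coil
motion = rng.normal(size=(100, 6))    # ground truth from optical tracking

model = LinearRegression()
predicted = cross_val_predict(model, fid, motion, cv=5)

# Accuracy/precision as mean +/- SD of the absolute prediction error
err = np.abs(predicted - motion)
print("translation error:", err[:, :3].mean(), "+/-", err[:, :3].std())
print("rotation error:   ", err[:, 3:].mean(), "+/-", err[:, 3:].std())
```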

Relevance: 100.00%

Abstract:

BACKGROUND: Due to the underlying diseases and the need for immunosuppression, patients are particularly at risk for gastrointestinal (GI) complications after lung transplantation, which may negatively influence long-term outcome. The present study assessed the incidence and impact of GI complications after lung transplantation and aimed to identify risk factors. METHODS: A retrospective analysis was performed of all 227 consecutive single- and double-lung transplantations carried out at the University Hospitals of Lausanne and Geneva between January 1993 and December 2010. Logistic regressions were used to test the effect of potentially influencing variables on the binary outcomes of overall, severe, and surgery-requiring complications, followed by a multiple logistic regression model. RESULTS: Twenty-two patients were excluded due to re-transplantation, multiorgan transplantation, or incomplete datasets, leaving 205 patients for the final analysis. GI complications were observed in 127 patients (62 %). Gastro-esophageal reflux disease was the most commonly observed complication (22.9 %), followed by inflammatory or infectious colitis (20.5 %) and gastroparesis (10.7 %). Major GI complications (Dindo/Clavien III-V) were observed in 83 patients (40.5 %) and were fatal in 4 patients (2.0 %). Multivariate analysis identified double-lung transplantation (p = 0.012) and the early (1993-1998) transplantation period (p = 0.008) as independent risk factors for developing major GI complications. Forty-three patients (21 %) required surgery, most commonly colectomy, cholecystectomy, and fundoplication (in 6.8, 6.3, and 3.9 % of patients, respectively). Multivariate analysis identified a Charlson comorbidity index of ≥3 as an independent risk factor for developing GI complications requiring surgery (p = 0.015). CONCLUSION: GI complications after lung transplantation are common. Outcome in the setting of our transplant center was rather encouraging.
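A hedged sketch of the multiple logistic regression step in Python/statsmodels, on synthetic stand-in data; the predictor names (double_lung, early_era, charlson) are hypothetical encodings of the risk factors named above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 205
df = pd.DataFrame({
    "major_gi": rng.integers(0, 2, n),    # Dindo/Clavien III-V complication
    "double_lung": rng.integers(0, 2, n),
    "early_era": rng.integers(0, 2, n),   # transplanted 1993-1998
    "charlson": rng.integers(0, 7, n),    # Charlson comorbidity index
})

fit = smf.logit("major_gi ~ double_lung + early_era + charlson", data=df).fit()
print(np.exp(fit.params))       # odds ratios
print(np.exp(fit.conf_int()))   # 95% confidence intervals
```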

Relevance: 100.00%

Abstract:

BACKGROUND: Gemcitabine plus cisplatin (GC) has been adopted as a neoadjuvant regimen for muscle-invasive bladder cancer despite the lack of Level I evidence in this setting. METHODS: Data were collected using an electronic data-capture platform from 28 international centers. Eligible patients had clinical T-classification 2 (cT2) through cT4aN0M0 urothelial cancer of the bladder and received neoadjuvant GC or methotrexate, vinblastine, doxorubicin, plus cisplatin (MVAC) before undergoing cystectomy. Logistic regression was used to compute propensity scores as the predicted probabilities of patients being assigned to MVAC versus GC given their baseline characteristics. These propensity scores were then included in a new logistic regression model to estimate an adjusted odds ratio comparing the odds of attaining a pathologic complete response (pCR) between patients who received MVAC and those who received GC. RESULTS: In total, 212 patients (146 patients in the GC cohort and 66 patients in the MVAC cohort) met criteria for inclusion in the analysis. The majority of patients in the MVAC cohort (77%) received dose-dense MVAC. The median age of patients was 63 years, they were predominantly men (74%), and they received a median of 3 cycles of neoadjuvant chemotherapy. The pCR rate was 29% in the MVAC cohort and 31% in the GC cohort. There was no significant difference in the pCR rate when adjusted for propensity scores between the 2 regimens (odds ratio, 0.91; 95% confidence interval, 0.48-1.72; P = .77). In an exploratory analysis evaluating survival, the hazard ratio comparing hazard rates for MVAC versus GC adjusted for propensity scores was not statistically significant (hazard ratio, 0.78; 95% confidence interval, 0.40-1.54; P = .48). CONCLUSIONS: Patients who received neoadjuvant GC and MVAC achieved comparable pCR rates in the current analysis, providing evidence to support what has become routine practice. Cancer 2015;121:2586-2593. © 2015 American Cancer Society.
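The propensity-score adjustment described above can be sketched as two chained logistic regressions; the covariates and data below are synthetic placeholders, and the study's actual baseline variables are richer.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 212
df = pd.DataFrame({
    "mvac": rng.integers(0, 2, n),   # 1 = MVAC, 0 = GC
    "pcr": rng.integers(0, 2, n),    # pathologic complete response
    "age": rng.normal(63, 8, n),
    "male": rng.integers(0, 2, n),
})

# Step 1: propensity score = P(MVAC | baseline characteristics)
ps_fit = smf.logit("mvac ~ age + male", data=df).fit()
df["ps"] = ps_fit.predict(df)

# Step 2: outcome model including the propensity score as a covariate
out = smf.logit("pcr ~ mvac + ps", data=df).fit()
print(np.exp(out.params["mvac"]))          # adjusted OR, MVAC vs GC
print(np.exp(out.conf_int().loc["mvac"]))  # 95% CI
```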

Relevance: 100.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems, ranging from the prospection of new resources to the sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of simulating the complex flow processes for each realization.

In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, and inference is based on these complex responses. Our objective is to improve the performance of this approach by using all of the available information, not solely the subset of exact responses. Error models are proposed to correct the approximate responses following a machine learning approach: for the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known, and this information is used to construct an error model that corrects the remaining approximate responses and predicts the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to more accurate and more robust uncertainty propagation.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the proxy and exact flow models. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functional responses. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid; the error model strongly reduces the computational cost while correctly estimating the uncertainty, and the individual correction of each proxy response yields an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is relevant not only for uncertainty propagation but also, and maybe even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) methods are the most commonly used algorithms to generate geostatistical realizations in accordance with the observations. However, these methods suffer from a very low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. Two-stage MCMC was introduced to avoid unnecessary simulations of the exact flow model through a preliminary evaluation of each proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for two-stage MCMC. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 compared with a classical one-stage MCMC implementation.

An open question remains: how to choose the size of the training set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy, such that, as each new flow simulation is performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline intrusion problem in a coastal aquifer.
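A simplified sketch of the functional error model idea, assuming synthetic proxy/exact curves; plain PCA on the discretized curves stands in for FPCA here, and an ordinary linear regression links the two sets of scores.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_real, n_train, n_times = 500, 40, 200
t = np.linspace(0, 1, n_times)
amp = rng.lognormal(size=n_real)
proxy = amp[:, None] * np.exp(-3 * t)               # cheap-model curves
train_idx = np.arange(n_train)                      # exact model run here only
exact = 1.1 * proxy[train_idx] + 0.05 * rng.normal(size=(n_train, n_times))

n_comp = 5
pca_p = PCA(n_components=n_comp).fit(proxy[train_idx])
pca_e = PCA(n_components=n_comp).fit(exact)

# Error model: regression from proxy scores to exact scores (training subset)
reg = LinearRegression().fit(pca_p.transform(proxy[train_idx]),
                             pca_e.transform(exact))

# Correct every proxy curve: predicted exact responses for the whole ensemble
exact_pred = pca_e.inverse_transform(reg.predict(pca_p.transform(proxy)))
```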

Relevance: 100.00%

Abstract:

BACKGROUND: Shared Decision Making (SDM) is increasingly advocated as a model for medical decision making. However, there is still low use of SDM in clinical practice. High impact factor journals might represent an efficient way for its dissemination. We aimed to identify and characterize publication trends of SDM in 15 high impact medical journals. METHODS: We selected the 15 general and internal medicine journals with the highest impact factor publishing original articles, letters and editorials. We retrieved publications from 1996 to 2011 through the full-text search function on each journal website and abstracted bibliometric data. We included publications of any type containing the phrase "shared decision making" or five other variants in their abstract or full text. These were referred to as SDM publications. A polynomial Poisson regression model with logarithmic link function was used to assess the evolution of the number of SDM publications across the period according to publication characteristics. RESULTS: We identified 1285 SDM publications out of 229,179 publications in 15 journals from 1996 to 2011. The absolute number of SDM publications by journal ranged from 2 to 273 over 16 years. SDM publications increased both in absolute and relative numbers per year, from 46 (0.32% relative to all publications from the 15 journals) in 1996 to 165 (1.17%) in 2011. This growth was exponential (P < 0.01). We found fewer research publications (465; 36.2% of all SDM publications) than non-research publications, which included non-systematic reviews, letters, and editorials. The increase in research publications over time was linear. Full-text search retrieved ten times more SDM publications than a similar PubMed search (1285 vs. 119, respectively). CONCLUSION: This full-text review showed that SDM publications increased exponentially in major medical journals from 1996 to 2011. This growth might reflect an increased dissemination of the SDM concept to the medical community.
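The trend model named above, a Poisson GLM with log link, a polynomial time term, and the yearly publication total as exposure, can be sketched as follows; the counts are toy values, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

years = np.arange(1996, 2012)
df = pd.DataFrame({
    "t": years - 1996,
    # toy counts growing ~8%/year, roughly 46 -> ~150
    "n_sdm": np.round(46 * 1.083 ** (years - 1996)).astype(int),
    # toy total publications across the 15 journals per year
    "n_total": np.full(years.size, 14_300),
})

# log link: a linear term in t implies exponential growth in counts;
# the quadratic term tests for departure from that trend.
fit = smf.glm("n_sdm ~ t + I(t**2)", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["n_total"])).fit()
print(fit.summary())
```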

Relevance: 100.00%

Abstract:

Objectives: We present a retrospective analysis of a single-institution experience with radiosurgery (RS) for brain metastases (BM) with Gamma Knife (GK) and Linac. Methods: From July 2010 to July 2012, 28 patients (with 83 lesions) underwent RS with GK and 35 patients (with 47 lesions) with Linac. The primary outcome was local progression-free survival (LPFS). The secondary outcome was overall survival (OS). Apart from a standard statistical analysis, we included a Cox regression model with shared frailty to model the within-patient correlation (preliminary evaluation showed a significant frailty effect, meaning that the correlation within patients could not be ignored). Results: The mean follow-up period was 11.7 months (median 7.9, 1.7-22.7) for GK and 18.1 months (median 17, 7.5-28.7) for Linac. The median number of lesions per patient was 2.5 (1-9) for GK compared with 1 (1-3) for Linac. There were more radioresistant lesions (melanoma) and more lesions located in functional areas in the GK group. The median dose was 24 Gy (GK) compared with 20 Gy (Linac). The actuarial LPFS rate for GK at 3, 6, 9, 12, and 17 months was 96.96, 96.96, 96.96, 88.1, and 81.5%, and remained stable until 32 months; for Linac at 3, 6, 12, 17, 24, and 33 months, it was 91.5, 91.5, 91.5, 79.9, 55.5, and 17.1%, respectively (p = 0.03, chi-square test). After the Cox regression analysis with shared frailty, the difference between groups was no longer statistically significant. The median overall survival was 9.7 months for the GK group and 23.6 months for the Linac group. Uni- and multivariate analyses showed that a lower GPA score and noncontrolled systemic status were associated with lower OS. Cox regression analysis adjusting for these two parameters showed comparable OS rates. Conclusions: In this comparative report between GK and Linac, preliminary analysis showed that more difficult cases are treated by GK, with patients harboring more lesions, radioresistant tumors, and lesions in highly functional locations. In this sense, the groups are very heterogeneous at baseline. After a Cox frailty model, the LPFS rates appeared very similar (p > 0.05). The OS was also similar after adjusting for systemic status and GPA score (p > 0.05). The technical reasons for choosing GK over Linac were anatomical locations in highly functional areas, histology, technical limitations of Linac movements (especially for lower posterior fossa locations), or the closeness of multiple lesions to highly functional areas, precluding optimal dosimetry with Linac.
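The shared-frailty Cox model above accounts for several lesions per patient. lifelines has no shared-frailty (random-effect) term, so the sketch below substitutes a Cox model with cluster-robust standard errors via cluster_col, a common approximation; the lesion-level data are synthetic.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 130                                    # lesions, several per patient
lesions = pd.DataFrame({
    "months": rng.exponential(12, n),      # follow-up / time to progression
    "progressed": rng.integers(0, 2, n),   # local progression event
    "gamma_knife": rng.integers(0, 2, n),  # 1 = GK, 0 = Linac
    "patient_id": rng.integers(0, 63, n),  # cluster identifier
})

cph = CoxPHFitter()
cph.fit(lesions, duration_col="months", event_col="progressed",
        cluster_col="patient_id")   # sandwich SEs for within-patient correlation
cph.print_summary()
```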

Relevance: 100.00%

Abstract:

The decision to settle a motor insurance claim by either negotiation or trial is analysed. This decision may depend on how risk- and confrontation-averse or pessimistic the claimant is. The extent to which these behavioural features of the claimant might influence the final compensation amount is examined. An empirical analysis, fitting a switching regression model to a Spanish database, is conducted in order to analyse whether the choice of the conflict resolution procedure is endogenous to the compensation outcomes. The results show that compensations awarded by courts are always higher, although 95% of cases are settled by negotiation. We show that this is because claimants are averse to risk and confrontation, and are pessimistic about their chances at trial. By contrast, insurers are neutral towards risk and confrontation and more objective in relation to the expected trial compensation. During the negotiation, insurers agree to pay the subjective compensation values of claimants, since these values are lower than their own estimates of the compensation at trial.
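Endogenous switching regressions are usually estimated by full-information maximum likelihood; a classic two-step approximation (probit for the procedure choice, then per-regime OLS with inverse-Mills-ratio correction terms) is sketched below on synthetic data. A significant coefficient on the correction term signals endogenous selection.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 1000
severity = rng.normal(size=n)                                  # case trait
trial = (0.3 * severity + rng.normal(size=n) > 1).astype(int)  # channel choice
comp = 10 + 2 * severity + 3 * trial + rng.normal(size=n)      # compensation

# Step 1: probit for P(trial | case characteristics)
probit = sm.Probit(trial, sm.add_constant(severity)).fit()
xb = probit.fittedvalues                       # linear index Z @ gamma

# Step 2: per-regime OLS with inverse-Mills-ratio correction terms
imr_t = norm.pdf(xb) / norm.cdf(xb)            # trial regime
imr_n = -norm.pdf(xb) / (1 - norm.cdf(xb))     # negotiation regime
Xt = sm.add_constant(np.column_stack([severity, imr_t])[trial == 1])
Xn = sm.add_constant(np.column_stack([severity, imr_n])[trial == 0])
print(sm.OLS(comp[trial == 1], Xt).fit().params)
print(sm.OLS(comp[trial == 0], Xn).fit().params)
```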

Relevance: 100.00%

Abstract:

Hospital expenses are a major cost driver of healthcare systems in Europe, with motor injuries being the leading mechanism of hospitalizations. This paper investigates the injury characteristics which explain the hospitalization of victims of traffic accidents that took place in Spain. Using a motor insurance database with 16,081 observations, a generalized Tobit regression model is applied to analyse the factors that influence both the likelihood of being admitted to hospital after a motor collision and the length of hospital stay in the event of admission. The consistency of the Tobit estimates relies on the normality of the perturbation terms. Here a semi-parametric regression model was fitted to test the consistency of the estimates, concluding that a normal distribution of the errors cannot be rejected. Among other results, it was found that older men with fractures and injuries located in the head and lower torso are more likely to be hospitalized after the collision, and that they also have a longer expected length of hospital recovery stay.
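statsmodels has no built-in generalized (type II) Tobit, so the sketch below fits the two equations as a simple two-part model on synthetic data: admission as a logit and log length-of-stay among the admitted as OLS. The full generalized Tobit additionally lets the two error terms correlate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(45, 15, n),
    "male": rng.integers(0, 2, n),
    "fracture": rng.integers(0, 2, n),
})
df["admitted"] = (rng.random(n) < 0.1 + 0.2 * df["fracture"]).astype(int)
df["los_days"] = np.where(df["admitted"] == 1,
                          np.exp(1 + 0.5 * df["fracture"]
                                 + rng.normal(0, 0.5, n)), 0)

# Part 1: probability of hospital admission after the collision
admit = smf.logit("admitted ~ age + male + fracture", data=df).fit()

# Part 2: length of stay (log scale) among the admitted
stayed = df[df["admitted"] == 1].assign(log_los=lambda d: np.log(d["los_days"]))
los = smf.ols("log_los ~ age + male + fracture", data=stayed).fit()
print(admit.params, los.params, sep="\n")
```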

Relevance: 100.00%

Abstract:

BACKGROUND: Delirium and frailty - both potentially reversible geriatric syndromes - are seldom studied together, although they often occur jointly in older patients discharged from hospitals. This study aimed to explore the relationship between delirium and frailty in older adults discharged from hospitals. METHODS: Of the 221 patients aged >65 years who were invited to participate, only 114 consented to take part in this study. Delirium was assessed using the Confusion Assessment Method (CAM), in which patients were classified dichotomously as delirious or nondelirious according to its algorithm. Frailty was assessed using the Edmonton Frail Scale, which classifies patients dichotomously as frail or nonfrail. In addition to sociodemographic characteristics, covariates such as scores from the Mini-Mental State Examination, Instrumental Activities of Daily Living scale, and Cumulative Illness Rating Scale for Geriatrics, as well as details regarding polymedication, were collected. A multidimensional linear regression model was used for the analysis. RESULTS: Almost 20% of participants had delirium (n=22), and 76.3% were classified as frail (n=87); 31.5% of the variance in the delirium score was explained by frailty (R²=0.315). Age; polymedication; scores on the CAM, Instrumental Activities of Daily Living scale, and Cumulative Illness Rating Scale for Geriatrics; and frailty together increased the explained variance of delirium from 32% to 64% (R²=0.64). CONCLUSION: Frailty is strongly related to delirium in older patients after discharge from the hospital.
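The reported jump in explained variance can be read as a comparison of nested linear models. A sketch with synthetic data and hypothetical variable names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 114
df = pd.DataFrame({
    "frailty": rng.normal(8, 3, n),     # Edmonton Frail Scale score
    "age": rng.normal(80, 7, n),
    "n_drugs": rng.integers(0, 12, n),  # polymedication
    "mmse": rng.normal(24, 4, n),
})
df["delirium_score"] = (0.5 * df["frailty"] + 0.1 * df["n_drugs"]
                        + rng.normal(0, 2, n))

base = smf.ols("delirium_score ~ frailty", data=df).fit()
full = smf.ols("delirium_score ~ frailty + age + n_drugs + mmse",
               data=df).fit()
print(f"R2 frailty only: {base.rsquared:.3f}")   # ~0.32 in the study
print(f"R2 full model:   {full.rsquared:.3f}")   # ~0.64 in the study
```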

Relevance: 100.00%

Abstract:

In a cohort study of 182 consecutive patients with active endogenous Cushing's syndrome, the only predictor of fracture occurrence after adjustment for age, gender, bone mineral density (BMD), and trabecular bone score (TBS) was the 24-h urinary free cortisol (24hUFC) level, with a threshold of 1472 nmol/24 h (odds ratio, 3.00 (95 % confidence interval (CI), 1.52-5.92); p = 0.002). INTRODUCTION: The aim was to estimate the risk factors for fracture in subjects with endogenous Cushing's syndrome (CS) and to evaluate the value of the TBS in these patients. METHODS: All enrolled patients with CS (n = 182) were interviewed in relation to low-traumatic fractures and underwent lateral X-ray imaging from T4 to L5. BMD measurements were performed using a DXA Prodigy device (GEHC Lunar, Madison, Wisconsin, USA). The TBS was derived retrospectively from existing BMD scans, blinded to clinical outcome, using TBS iNsight software v2.1 (Medimaps, Merignac, France). Urinary free cortisol (24hUFC) was measured by immunochemiluminescence assay (reference range, 60-413 nmol/24 h). RESULTS: Among the enrolled patients with CS (149 females; 33 males; mean age, 37.8 years (95 % CI, 34.2-39.1); 24hUFC, 2370 nmol/24 h (2087-2632)), fractures were confirmed in 81 (44.5 %) patients, with 70 suffering from vertebral fractures, which were multiple in 53 cases; 24 patients reported non-vertebral fractures. The mean spine TBS was 1.207 (1.187-1.228), and the TBS Z-score was -1.86 (-2.07 to -1.65); the area under the curve (AUC) for fracture prediction based on mean spine TBS was 0.548 (95 % CI, 0.454-0.641). In the final regression model, the only predictor of fracture occurrence was the 24hUFC level (p = 0.001), with the odds of fracture increasing by a factor of 1.041 (95 % CI, 1.019-1.063) for every 100 nmol/24 h increase in cortisol (AUC (24hUFC) = 0.705 (95 % CI, 0.629-0.782)). CONCLUSIONS: Young patients with CS have a low TBS. However, the only predictor of low-traumatic fracture is the severity of the disease itself, indicated by high 24hUFC levels.
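The final model, fracture regressed on 24hUFC with the odds ratio rescaled per 100 nmol/24 h and an ROC AUC, can be sketched as follows; the cortisol and fracture data are simulated.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
n = 182
ufc = rng.lognormal(mean=7.6, sigma=0.6, size=n)   # nmol/24 h, study scale
p = 1 / (1 + np.exp(-(-2.0 + 0.0004 * ufc)))       # toy fracture risk
fracture = (rng.random(n) < p).astype(int)

X = sm.add_constant(ufc / 100.0)   # per-100 nmol/24 h scaling of the OR
fit = sm.Logit(fracture, X).fit()
or_100 = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])
auc = roc_auc_score(fracture, fit.predict(X))
print(f"OR per 100 nmol/24 h: {or_100:.3f} "
      f"({ci[0]:.3f}-{ci[1]:.3f}); AUC {auc:.3f}")
```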

Relevance: 100.00%

Abstract:

AIMS: There is no standard test to determine the fatigue resistance of denture teeth. With the increasing number of patients with implant-retained dentures, the mechanical strength of denture teeth requires more attention and valid laboratory test set-ups. The purpose of the present study was to determine the fatigue resistance of various denture teeth using a dynamic load testing machine. METHODS: Four denture teeth were used: Bonartic II (Candulor), Physiodens (Vita), SR Phonares II (Ivoclar Vivadent) and Trubyte (Dentsply). For dynamic load testing, first upper molars with a similar shape and cusp inclination were selected. The molar teeth were embedded in cylindrical steel molds with denture base material (ProBase, Ivoclar Vivadent). Dynamic fatigue loading was carried out on the mesio-buccal cusp at a 45° angle using dynamic testing machines, running 2,000,000 cycles at 2 Hz in water (37°C). Three specimens per group and load level were tested at decreasing load levels (at least four) until none of the three specimens showed any failure. All specimens were evaluated under a stereo microscope (20× magnification). The number of cycles reached before a failure was observed, and its dependence on the load and on the material, was modeled using a parametric survival regression model with a lognormal distribution. This allowed us to estimate the fatigue resistance of a given material as the maximal load for which one would observe less than 1% failure after 2,000,000 cycles. RESULTS: The failure pattern was similar for all denture teeth, showing a large chipping of the loaded mesio-buccal cusp. In our regression model, there were statistically significant differences among the materials, with SR Phonares II and Bonartic II showing a higher resistance than Physiodens and Trubyte; the fatigue resistance was estimated at around 110 N for the former two and at about 60 N for the latter two materials. CONCLUSION: The fatigue resistance may be a useful parameter to assess and compare the clinical risk of chipping and fracture of denture tooth materials.
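A sketch of the lognormal survival regression with run-outs right-censored at 2,000,000 cycles, written as an explicit likelihood so the censoring is visible; loads, cycle counts, and starting values are toy assumptions. The last line inverts the fitted model to obtain the load with a 1% failure probability at 2e6 cycles.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(9)
load = np.repeat([40.0, 60.0, 80.0, 100.0], 3)             # N, 3 specimens each
log_n = 18 - 0.06 * load + rng.normal(0, 0.5, load.size)   # toy true model
cycles = np.minimum(np.exp(log_n), 2_000_000)              # run-outs capped
failed = np.exp(log_n) < 2_000_000                         # censoring indicator

def neg_loglik(theta):
    b0, b1, log_s = theta                 # log(cycles) ~ N(b0 + b1*load, s)
    s = np.exp(log_s)
    z = (np.log(cycles) - (b0 + b1 * load)) / s
    ll = np.where(failed,
                  norm.logpdf(z) - np.log(s),  # observed failures
                  norm.logsf(z))               # censored run-outs
    return -ll.sum()

res = minimize(neg_loglik, x0=[18.0, -0.05, 0.0], method="Nelder-Mead")
b0, b1, s = res.x[0], res.x[1], np.exp(res.x[2])
# Fatigue resistance: load with a 1% failure probability at 2e6 cycles
load_1pct = (np.log(2e6) - norm.ppf(0.01) * s - b0) / b1
print(f"estimated fatigue resistance: {load_1pct:.0f} N")
```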

Relevance: 100.00%

Abstract:

The detailed in-vivo characterization of subcortical brain structures is essential not only to understand the basic organizational principles of the healthy brain but also for the study of the involvement of the basal ganglia in brain disorders. The particular tissue properties of the basal ganglia - most importantly their high iron content - strongly affect the contrast of magnetic resonance imaging (MRI) images, hampering the accurate automated assessment of these regions. This technical challenge explains the substantial controversy in the literature about the magnitude, directionality and neurobiological interpretation of basal ganglia structural changes estimated from MRI and computational anatomy techniques. My scientific project addresses the pertinent need for accurate automated delineation of the basal ganglia using two complementary strategies:

- Empirical testing of the utility of novel imaging protocols to provide superior contrast in the basal ganglia and to quantify brain tissue properties;
- Improvement of the algorithms for the reliable automated detection of the basal ganglia and thalamus.

Previous research demonstrated that MRI protocols based on magnetization transfer (MT) saturation maps provide optimal grey-white matter contrast in subcortical structures compared with the widely used T1-weighted (T1w) images (Helms et al., 2009). Under the assumption of a direct impact of brain tissue properties on MR contrast, my first study addressed the question of the mechanisms underlying the regionally specific contrast effects in the basal ganglia. I used established whole-brain voxel-based methods to test for grey matter volume differences between MT and T1w imaging protocols, with an emphasis on subcortical structures. I applied a regression model to explain the observed grey matter differences from the regionally specific impact of brain tissue properties on the MR contrast. The results of my first project prompted further methodological developments to create adequate priors for the basal ganglia and thalamus, allowing optimal automated delineation of these structures in a probabilistic tissue classification framework. I established a standardized workflow for manual labelling of the basal ganglia, thalamus and cerebellar dentate to create new tissue probability maps from quantitative MR maps featuring optimal grey-white matter contrast in subcortical areas. The validation step for the new tissue priors included a comparison of the classification performance with the existing probability maps. In my third project I continued investigating the factors impacting automated brain tissue classification that result in interpretational shortcomings when using T1w MRI data in the framework of computational anatomy. While the intensity in T1w images is predominantly

Relevance: 100.00%

Abstract:

BACKGROUND: Current cancer mortality statistics are important for public health decision making and resource allocation. Age-standardized rates and numbers of deaths are predicted for 2016 in the European Union (EU). PATIENTS AND METHODS: Population and death certification data for stomach, colorectum, pancreas, lung, breast, uterus, prostate, leukemia and total cancers were obtained from the World Health Organisation database and Eurostat. Figures were derived for the EU, France, Germany, Italy, Poland, Spain and the UK. Projected numbers of deaths by age group were obtained for 2016 by linear regression on the estimated numbers of deaths over the most recent time period identified by a joinpoint regression model. RESULTS: Projected total cancer mortality trends for 2016 in the EU are favourable in both sexes, with rates of 133.5/100,000 men and 85.2/100,000 women (8% and 3% falls since 2011), corresponding to 753,600 deaths in men and 605,900 in women, for a total of 1,359,500 projected cancer deaths (+3% compared with 2011, due to population ageing). In men, lung, colorectal and prostate cancer rates fell 11%, 5% and 8% since 2011. Breast and colorectal cancer trends in women are favourable (8% and 7% falls, respectively), but lung and pancreatic cancer rates rose 5% and 4% since 2011, reaching 14.4 and 5.6/100,000 women. Leukemia shows favourable projected mortality for both sexes and all age groups, with stronger falls in the younger age groups; rates are 4.0/100,000 men and 2.5/100,000 women, with falls of 14% and 12%, respectively. CONCLUSION: The 2016 predictions for EU cancer mortality confirm the favourable trends in rates, particularly for men. Lung cancer is likely to remain the leading site for female cancer mortality rates. Continuing falls in mortality, larger in children and young adults, are predicted for leukemia, essentially due to advancements in management and therapy and their subsequent adoption across Europe.
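The projection step, linear regression of deaths over the most recent segment identified by the joinpoint model, reduces to ordinary least squares per age group; the segment and counts below are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

years = np.arange(2006, 2012)   # most recent linear segment (hypothetical)
deaths = np.array([410.0, 405.0, 398.0, 396.0, 390.0, 386.0])  # one age group

fit = sm.OLS(deaths, sm.add_constant(years)).fit()
pred_2016 = fit.predict([[1.0, 2016.0]])[0]
print(f"projected deaths in 2016: {pred_2016:.0f}")
# Summing such per-age-group projections, weighted by population, yields the
# projected numbers of deaths and age-standardized rates.
```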

Relevance: 100.00%

Abstract:

Objective: The objective of the present study was to evaluate current radiographic parameters designed to investigate adenoid hypertrophy and nasopharyngeal obstruction, and to present an alternative radiographic assessment method. Materials and Methods: To do so, children (4 to 14 years old) who presented with nasal obstruction or oral breathing complaints were submitted to cavum radiographic examination. One hundred and twenty records were evaluated according to quantitative radiographic parameters, and the data were correlated with a gold-standard videonasopharyngoscopic study in relation to the percentage of choanal obstruction. Subsequently, a regression analysis was performed in order to create an original model by which the percentage of choanal obstruction could be predicted. Results: The quantitative parameters demonstrated moderate, if not weak, correlation with the real percentage of choanal obstruction. The regression model (110.119*A/N) demonstrated a satisfactory ability to “predict” the actual percentage of choanal obstruction. Conclusion: Since current adenoid quantitative radiographic parameters present limitations, the model presented in this study may be considered an alternative assessment method in cases where videonasopharyngoscopic evaluation is unavailable.
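The reported model, obstruction% = 110.119 × (A/N), is a regression through the origin on the adenoid-to-nasopharynx ratio; a sketch with made-up measurements:

```python
import numpy as np
import statsmodels.api as sm

an_ratio = np.array([0.55, 0.62, 0.71, 0.80, 0.88])  # adenoid/nasopharynx ratio
obstruction = np.array([58.0, 67.0, 79.0, 90.0, 97.0])  # % from endoscopy

fit = sm.OLS(obstruction, an_ratio).fit()   # no constant: regression through origin
print(f"obstruction% ≈ {fit.params[0]:.3f} * (A/N)")   # ~110.119 in the study
```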

Relevance: 100.00%

Abstract:

Trabecular bone score (TBS) is a gray-level textural index of bone microarchitecture derived from lumbar spine dual-energy X-ray absorptiometry (DXA) images. TBS is a bone mineral density (BMD)-independent predictor of fracture risk. The objective of this meta-analysis was to determine whether TBS predicted fracture risk independently of FRAX probability and to examine their combined performance by adjusting the FRAX probability for TBS. We utilized individual-level data from 17,809 men and women in 14 prospective population-based cohorts. Baseline evaluation included TBS and the FRAX risk variables, and outcomes during follow-up (mean 6.7 years) comprised major osteoporotic fractures. The association between TBS, FRAX probabilities, and the risk of fracture was examined using an extension of the Poisson regression model in each cohort and for each sex and expressed as the gradient of risk (GR; hazard ratio per 1 SD change in risk variable in direction of increased risk). FRAX probabilities were adjusted for TBS using an adjustment factor derived from an independent cohort (the Manitoba Bone Density Cohort). Overall, the GR of TBS for major osteoporotic fracture was 1.44 (95% confidence interval [CI] 1.35-1.53) when adjusted for age and time since baseline and was similar in men and women (p > 0.10). When additionally adjusted for FRAX 10-year probability of major osteoporotic fracture, TBS remained a significant, independent predictor for fracture (GR = 1.32, 95% CI 1.24-1.41). The adjustment of FRAX probability for TBS resulted in a small increase in the GR (1.76, 95% CI 1.65-1.87 versus 1.70, 95% CI 1.60-1.81). A smaller change in GR for hip fracture was observed (FRAX hip fracture probability GR 2.25 vs. 2.22). TBS is a significant predictor of fracture risk independently of FRAX. The findings support the use of TBS as a potential adjustment for FRAX probability, though the impact of the adjustment remains to be determined in the context of clinical assessment guidelines. © 2015 American Society for Bone and Mineral Research.
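The gradient of risk is the hazard ratio per 1 SD of the risk variable. The abstract's "extension of the Poisson regression model" is approximated below by a plain Poisson GLM with a log follow-up offset on simulated data, with TBS standardized and signed so that +1 SD points in the direction of increased risk.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 5000
tbs = rng.normal(1.3, 0.1, n)
age = rng.normal(70, 9, n)
followup = rng.uniform(3, 10, n)   # years of observation
rate = np.exp(-4.0 - 3.5 * (tbs - 1.3) + 0.02 * (age - 70))  # toy hazard/yr
fracture = rng.poisson(rate * followup)

tbs_z = -(tbs - tbs.mean()) / tbs.std()   # +1 SD = lower TBS (higher risk)
X = sm.add_constant(np.column_stack([tbs_z, age]))
fit = sm.GLM(fracture, X, family=sm.families.Poisson(),
             offset=np.log(followup)).fit()
print(f"GR per 1 SD decrease in TBS: {np.exp(fit.params[1]):.2f}")
# comparable in spirit to the GR of 1.44 reported above
```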