910 results for Negative Binomial Regression Model (NBRM)
                                
Abstract:
BACKGROUND: The aim of the current study was to assess whether widely used nutritional parameters are correlated with the nutritional risk score (NRS-2002) to identify postoperative morbidity and to evaluate the role of nutritionists in nutritional assessment. METHODS: A randomized trial on preoperative nutritional interventions (NCT00512213) provided the study cohort of 152 patients at nutritional risk (NRS-2002 ≥3) with a comprehensive phenotyping including diverse nutritional parameters (n=17), elaborated by nutritional specialists, and potential demographic and surgical (n=5) confounders. Risk factors for overall, severe (Dindo-Clavien 3-5) and infectious complications were identified by univariate analysis; parameters with P<0.20 were then entered in a multiple logistic regression model. RESULTS: Final analysis included 140 patients with complete datasets. Of these, 61 patients (43.6%) were overweight, and 72 patients (51.4%) experienced at least one complication of any degree of severity. Univariate analysis identified a correlation between few (≤3) active co-morbidities (OR=4.94; 95% CI: 1.47-16.56, p=0.01) and overall complications. Patients screened as malnourished by nutritional specialists presented fewer overall complications than those not malnourished (OR=0.47; 95% CI: 0.22-0.97, p=0.043). Severe postoperative complications occurred more often in patients with low lean body mass (OR=1.06; 95% CI: 1.00-1.12, p=0.028). Few (≤3) active co-morbidities (OR=8.8; 95% CI: 1.12-68.99, p=0.008) were associated with postoperative infections. Patients screened as malnourished by nutritional specialists presented fewer infectious complications (OR=0.28; 95% CI: 0.10-0.78, p=0.014) than those not malnourished.
Multivariate analysis identified few co-morbidities (OR=6.33; 95% CI: 1.75-22.84, p=0.005), low weight loss (OR=1.08; 95% CI: 1.02-1.14, p=0.006) and low hemoglobin concentration (OR=2.84; 95% CI: 1.22-6.59, p=0.021) as independent risk factors for overall postoperative complications. Compliance with nutritional supplements (OR=0.37; 95% CI: 0.14-0.97, p=0.041) and supplementation of malnourished patients as assessed by nutritional specialists (OR=0.24; 95% CI: 0.08-0.69, p=0.009) were independently associated with decreased infectious complications. CONCLUSIONS: Nutritional support based upon NRS-2002 screening might result in overnutrition, with potentially deleterious clinical consequences. We emphasize the importance of detailed assessment of the nutritional status by a dedicated specialist before deciding on early nutritional intervention for patients with an initial NRS-2002 score of ≥3.
                                
Abstract:
BACKGROUND AND PURPOSE: To compare the safety and efficacy of a bridging approach with intravenous (IV) thrombolysis in patients with acute anterior strokes and proximal occlusions. PATIENTS AND METHODS: Consecutive patients with ischemic anterior strokes admitted within a 4 h 30 min window in two different centers were included. The first center performed IV therapy (alteplase 0.6 mg/kg) over 30 min and, in the absence of clinical improvement, mechanical thrombectomy with flow restoration using a Solitaire stent (StS); the second carried out IV thrombolysis (alteplase 0.9 mg/kg) alone. Only T, M1 or M2 occlusions present on CT angiography were considered. Endpoints were clinical outcome and mortality at 3 months. RESULTS: There were 63 patients in the bridging and 163 in the IV group. No significant differences regarding baseline characteristics were observed. At 3 months, 46% (n = 29) of the patients treated in the combined group and 23% (n = 38) of those treated in the IV group had a modified Rankin scale (mRS) score of 0-1 (P < 0.001). A statistically significant difference was observed for all sites of occlusion. In a logistic regression model, National Institutes of Health Stroke Scale (NIHSS) score and bridging therapy were independent predictors of good outcome (P = 0.001 and P = 0.0018, respectively). Symptomatic hemorrhage was documented in 6.3% vs 3.7% in the bridging and IV groups, respectively (P = 0.32). There was no difference in mortality. CONCLUSIONS: Our results suggest that patients treated with a bridging approach were more likely to have minimal or no deficit at 3 months compared with the IV-treated group.
                                
Abstract:
BACKGROUND AND PURPOSE: Statins display anti-inflammatory and anti-epileptogenic properties in animal models and may reduce the epilepsy risk in elderly humans; however, a possible modulating role on outcome in patients with status epilepticus (SE) has not been assessed. METHODS: This cohort study was based on a prospective registry including all consecutive adults with incident SE treated in our center between April 2006 and September 2012. SE outcome was categorized at hospital discharge into 'return to baseline', 'new disability' and 'mortality'. The role of potential predictors, including statin treatment on admission, was evaluated using a multinomial logistic regression model. RESULTS: Amongst 427 patients identified, information on statins was available in 413 (97%). Mean age was 60.9 (±17.8) years; 201 (49%) were women; 211 (51%) had a potentially fatal SE etiology; and 191 (46%) experienced generalized-convulsive or non-convulsive SE in coma. Statins (simvastatin, atorvastatin or pravastatin) were prescribed prior to admission in 76 (18%) subjects, mostly elderly. Whilst 208 (50.4%) patients returned to baseline, 58 (14%) died. After adjustment for established SE outcome predictors (age, etiology, SE severity score), statins correlated significantly with lower mortality (relative risk ratio 0.38, P = 0.046). CONCLUSION: This study suggests for the first time that exposure to statins before an SE episode is related to its outcome, consistent with a possible anti-epileptogenic role. Other studies are needed to confirm this intriguing finding.
                                
Abstract:
OBJECTIVE: Alcohol expectancies (AEs) are positively associated with drinking behaviors, whereas the use of protective behavioural strategies (PBS) is negatively related to alcohol outcomes among young adults. PBS have been shown to weaken relationships between some alcohol risk factors and alcohol outcomes. This study aimed to examine longitudinally the moderating effect of PBS on the relationships between AEs and alcohol outcomes among young adults. METHOD: Participants (N = 188; 61.7% female) were U.S. young adults participating in a larger longitudinal study. Measures of PBS, AEs, alcohol use, and related consequences were used from the baseline and 12-month follow-up assessments. RESULTS: Negative binomial hurdle models found that PBS (total score) significantly moderated the relationship between positive AEs and consequences, such that among high school seniors endorsing higher positive AEs, those using more PBS in high school reported fewer negative consequences 1 year later. PBS (Manner of Drinking) also moderated the relationship between negative AEs and alcohol use, revealing the use of PBS in high school as having a protective function against later drinking among participants with high positive AEs. Last, PBS (Serious Harm Reduction) significantly moderated the associations between positive AEs and alcohol use and between negative AEs and consequences, such that participants with higher AEs and higher PBS use in high school were at greatest risk for drinking and experiencing negative consequences later. CONCLUSIONS: Overall, these findings suggest that PBS use may be protective by weakening relationships between positive AEs and alcohol outcomes. Limitations and future directions are discussed.
                                
Abstract:
BACKGROUND: Artemether-lumefantrine is the most widely used artemisinin-based combination therapy for malaria, although treatment failures occur in some regions. We investigated the effect of dosing strategy on efficacy in a pooled analysis from trials done in a wide range of malaria-endemic settings. METHODS: We searched PubMed for clinical trials that enrolled and treated patients with artemether-lumefantrine and were published from 1960 to December, 2012. We merged individual patient data from these trials by use of standardised methods. The primary endpoint was the PCR-adjusted risk of Plasmodium falciparum recrudescence by day 28. Secondary endpoints consisted of the PCR-adjusted risk of P falciparum recurrence by day 42, PCR-unadjusted risk of P falciparum recurrence by day 42, early parasite clearance, and gametocyte carriage. Risk factors for PCR-adjusted recrudescence were identified using Cox's regression model with frailty shared across the study sites. FINDINGS: We included 61 studies done between January, 1998, and December, 2012, comprising 14 327 patients in our analyses. The PCR-adjusted therapeutic efficacy was 97·6% (95% CI 97·4-97·9) at day 28 and 96·0% (95·6-96·5) at day 42. After controlling for age and parasitaemia, patients prescribed a higher dose of artemether had a lower risk of having parasitaemia on day 1 (adjusted odds ratio [OR] 0·92, 95% CI 0·86-0·99 for every 1 mg/kg increase in daily artemether dose; p=0·024), but not on day 2 (p=0·69) or day 3 (p=0·087). In Asia, children weighing 10-15 kg who received a total lumefantrine dose less than 60 mg/kg had the lowest PCR-adjusted efficacy (91·7%, 95% CI 86·5-96·9). In Africa, the risk of treatment failure was greatest in malnourished children aged 1-3 years (PCR-adjusted efficacy 94·3%, 95% CI 92·3-96·3). 
A higher artemether dose was associated with a lower gametocyte presence within 14 days of treatment (adjusted OR 0·92, 95% CI 0·85-0·99; p=0·037 for every 1 mg/kg increase in total artemether dose). INTERPRETATION: The recommended dose of artemether-lumefantrine provides reliable efficacy in most patients with uncomplicated malaria. However, therapeutic efficacy was lowest in young children from Asia and young underweight children from Africa; a higher dose regimen should be assessed in these groups. FUNDING: Bill & Melinda Gates Foundation.
                                
Abstract:
The aim of the present study was to determine the impact of trabecular bone score on the probability of fracture above that provided by the clinical risk factors utilized in FRAX. We performed a retrospective cohort study of 33,352 women aged 40-99 years from the province of Manitoba, Canada, with baseline measurements of lumbar spine trabecular bone score (TBS) and FRAX risk variables. The analysis was cohort-specific rather than based on the Canadian version of FRAX. The associations between trabecular bone score, the FRAX risk factors and the risk of fracture or death were examined using an extension of the Poisson regression model and used to calculate 10-year probabilities of fracture with and without TBS and to derive an algorithm to adjust fracture probability to take account of the independent contribution of TBS to fracture and mortality risk. During a mean follow-up of 4.7 years, 1754 women died, 1639 sustained one or more major osteoporotic fractures (excluding hip fracture), and 306 women sustained one or more hip fractures. When fully adjusted for FRAX risk variables, TBS remained a statistically significant predictor of major osteoporotic fractures excluding hip fracture (HR/SD 1.18, 95% CI 1.12-1.24), death (HR/SD 1.20, 95% CI 1.14-1.26) and hip fracture (HR/SD 1.23, 95% CI 1.09-1.38). Models adjusting major osteoporotic fracture and hip fracture probability were derived, accounting for age and trabecular bone score with death considered as a competing event. Lumbar spine texture analysis using TBS is a risk factor for osteoporotic fracture and a risk factor for death. The predictive ability of TBS is independent of FRAX clinical risk factors and femoral neck BMD. Adjustment of fracture probability to take account of the independent contribution of TBS to fracture and mortality risk requires validation in independent cohorts.
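Computing a 10-year fracture probability with death treated as a competing event, as done above, rests on a standard competing-risk integral. A minimal numeric sketch under assumed constant hazards (the hazard values are purely illustrative, not the study's):

```python
import numpy as np

# Hypothetical constant annual hazards for fracture and death
h_frac, h_death = 0.015, 0.020

# 10-year fracture probability with death as a competing event:
#   P = integral over [0, 10] of h_f * exp(-(h_f + h_d) * t) dt
t = np.linspace(0.0, 10.0, 10001)
integrand = h_frac * np.exp(-(h_frac + h_death) * t)
p_frac = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))

# Closed form for constant hazards, as a consistency check
p_closed = h_frac / (h_frac + h_death) * (1 - np.exp(-(h_frac + h_death) * 10))
print(round(p_frac, 4), round(p_closed, 4))
```

Note that the competing-risk probability is lower than the naive 1 - exp(-10 h_f), because some women die before they can fracture.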
                                
Abstract:
Short-term forecasting of electricity consumption has been studied for a long time. The deregulation of the Nordic electricity markets has affected consumption forecasting. The work began with a review of the relevant literature. The behaviour of electricity consumption was examined at different times, and the usefulness of temperature statistics was assessed with the consumption forecast in mind. Consumption forecasts were made on an hourly basis, with a forecast horizon of one week. The availability and quality of consumption and temperature data in the Nord Pool market area were investigated; the properties of the input data affect hourly consumption forecasting. Two approaches were modelled for forecasting electricity consumption: the tested models were a regression model and an autoregressive model with exogenous inputs (ARX). The model parameters were estimated with the least-squares method. The results show that consumption and temperature data must be verified after the fact, because the quality of real-time input data is poor. Temperature affects consumption in winter, but it can be ignored during the summer season. The regression model is more stable than the ARX model. The error term of the regression model can be modelled using a time-series model.
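Fitting an ARX model by least squares, as described above, can be sketched in a few lines. The hourly load and temperature series below are simulated, and all coefficients are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500

# Hypothetical hourly data: temperature as the exogenous input
temp = rng.normal(size=T)
load = np.zeros(T)
for t in range(1, T):
    load[t] = 0.7 * load[t - 1] - 0.5 * temp[t] + 0.1 * rng.normal()

# ARX(1) with one exogenous input: load_t = a*load_{t-1} + b*temp_t + e_t
A = np.column_stack([load[:-1], temp[1:]])
y = load[1:]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a_hat, b_hat = coef
print(a_hat, b_hat)
```

The same design-matrix construction extends to higher lag orders and additional exogenous inputs such as calendar indicators.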
                                
Abstract:
Free induction decay (FID) navigators were found to qualitatively detect rigid-body head movements, yet it is unknown to what extent they can provide quantitative motion estimates. Here, we acquired FID navigators at different sampling rates and simultaneously measured head movements using a highly accurate optical motion tracking system. This strategy allowed us to estimate the accuracy and precision of FID navigators for quantification of rigid-body head movements. Five subjects were scanned with a 32-channel head coil array on a clinical 3T MR scanner during several resting and guided head movement periods. For each subject we trained a linear regression model based on FID navigator and optical motion tracking signals. FID-based motion model accuracy and precision were evaluated using cross-validation. FID-based prediction of rigid-body head motion achieved mean translational and rotational errors of 0.14±0.21 mm and 0.08±0.13°, respectively. Robust model training with sub-millimeter and sub-degree accuracy could be achieved using 100 data points with motion magnitudes of ±2 mm and ±1° for translation and rotation. The obtained linear models appeared to be subject-specific, as inter-subject application of a "universal" FID-based motion model resulted in poor prediction accuracy. The results show that substantial rigid-body motion information is encoded in FID navigator signal time courses. Although the applied method currently requires the simultaneous acquisition of FID signals and optical tracking data, the findings suggest that multi-channel FID navigators have the potential to complement existing tracking technologies for accurate rigid-body motion detection and correction in MRI.
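A per-subject linear motion model of this kind (multi-channel navigator signals regressed onto tracker-measured motion, then validated on held-out samples) might be sketched as follows. The mixing matrix, noise level, and held-out split are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_channels = 300, 32

# Hypothetical ground-truth translations (mm) from an optical tracker
motion = rng.uniform(-2, 2, size=(n_samples, 3))

# FID navigator signals simulated as linear mixtures of motion plus noise
mixing = rng.normal(size=(3, n_channels))
fid = motion @ mixing + 0.05 * rng.normal(size=(n_samples, n_channels))

# Train a per-subject linear model on the first 200 samples, test on the rest
F = np.column_stack([fid, np.ones(n_samples)])  # add intercept column
W, *_ = np.linalg.lstsq(F[:200], motion[:200], rcond=None)
pred = F[200:] @ W
mae = np.abs(pred - motion[200:]).mean()
print(mae)
```

The held-out error plays the role of the cross-validated accuracy reported in the abstract; a full k-fold scheme simply repeats this split.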
                                
Abstract:
BACKGROUND: Due to the underlying diseases and the need for immunosuppression, patients after lung transplantation are particularly at risk for gastrointestinal (GI) complications that may negatively influence long-term outcome. The present study assessed the incidence and impact of GI complications after lung transplantation and aimed to identify risk factors. METHODS: A retrospective analysis was performed of all 227 consecutive single- and double-lung transplantations carried out at the University Hospitals of Lausanne and Geneva between January 1993 and December 2010. Logistic regressions were used to test the effect of potentially influencing variables on the binary outcomes (overall, severe, and surgery-requiring complications), followed by a multiple logistic regression model. RESULTS: The final analysis included 205 patients; 22 patients were excluded due to re-transplantation, multiorgan transplantation, or incomplete datasets. GI complications were observed in 127 patients (62%). Gastro-esophageal reflux disease was the most commonly observed complication (22.9%), followed by inflammatory or infectious colitis (20.5%) and gastroparesis (10.7%). Major GI complications (Dindo/Clavien III-V) were observed in 83 patients (40.5%) and were fatal in 4 patients (2.0%). Multivariate analysis identified double-lung transplantation (p = 0.012) and the early (1993-1998) transplantation period (p = 0.008) as independent risk factors for developing major GI complications. Forty-three patients (21%) required surgery, such as colectomy, cholecystectomy, and fundoplication in 6.8%, 6.3%, and 3.9% of the patients, respectively. Multivariate analysis identified a Charlson comorbidity index of ≥3 as an independent risk factor for developing GI complications requiring surgery (p = 0.015). CONCLUSION: GI complications after lung transplantation are common. Outcome was rather encouraging in the setting of our transplant center.
                                
Abstract:
BACKGROUND: Gemcitabine plus cisplatin (GC) has been adopted as a neoadjuvant regimen for muscle-invasive bladder cancer despite the lack of Level I evidence in this setting. METHODS: Data were collected using an electronic data-capture platform from 28 international centers. Eligible patients had clinical T-classification 2 (cT2) through cT4aN0M0 urothelial cancer of the bladder and received neoadjuvant GC or methotrexate, vinblastine, doxorubicin, plus cisplatin (MVAC) before undergoing cystectomy. Logistic regression was used to compute propensity scores as the predicted probabilities of patients being assigned to MVAC versus GC given their baseline characteristics. These propensity scores were then included in a new logistic regression model to estimate an adjusted odds ratio comparing the odds of attaining a pathologic complete response (pCR) between patients who received MVAC and those who received GC. RESULTS: In total, 212 patients (146 patients in the GC cohort and 66 patients in the MVAC cohort) met criteria for inclusion in the analysis. The majority of patients in the MVAC cohort (77%) received dose-dense MVAC. The median age of patients was 63 years, they were predominantly men (74%), and they received a median of 3 cycles of neoadjuvant chemotherapy. The pCR rate was 29% in the MVAC cohort and 31% in the GC cohort. There was no significant difference in the pCR rate when adjusted for propensity scores between the 2 regimens (odds ratio, 0.91; 95% confidence interval, 0.48-1.72; P = .77). In an exploratory analysis evaluating survival, the hazard ratio comparing hazard rates for MVAC versus GC adjusted for propensity scores was not statistically significant (hazard ratio, 0.78; 95% confidence interval, 0.40-1.54; P = .48). CONCLUSIONS: Patients who received neoadjuvant GC and MVAC achieved comparable pCR rates in the current analysis, providing evidence to support what has become routine practice. Cancer 2015;121:2586-2593. 
© 2015 American Cancer Society.
                                
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has increased considerably over the years, and groundwater is becoming an increasingly scarce and endangered resource. Numerous problems follow, ranging from the prospection of new resources to the sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains the characterization of the subsurface properties, given our incomplete knowledge of them. A stochastic approach is therefore needed to represent this uncertainty, by considering multiple geological scenarios and generating a large number of geostatistical realizations. The principal limitation of such approaches is the computational cost of simulating the complex flow processes for each of these realizations. In the first part of the thesis, this problem is investigated in the context of uncertainty propagation, where an ensemble of realizations is identified as representing the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water) while limiting the computational cost, state-of-the-art methods rely on approximate flow models to identify a subset of realizations that represents the variability of the full ensemble. The complex flow model is then run only for this subset, and inference is made on the basis of these exact responses. Our objective is to improve the performance of this approach by using all of the available information, not solely the subset of exact responses. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known; this information is used to construct an error model, which then corrects the remaining approximate responses and predicts the "expected" responses of the exact model. The proposed methodology makes maximal use of the available information without perceptible additional computational cost, and yields a more accurate and more robust uncertainty propagation. 
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and exact flow models. In the second part of the thesis, this methodology is formalized mathematically by introducing a regression model between the functional responses. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows the quality of the error model to be diagnosed in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model strongly reduces the computational cost while correctly estimating the uncertainty. Moreover, for each approximate response, the error model provides a prediction of the exact response, opening the door to many applications. 
The concept of a functional error model is therefore relevant not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) methods are the most commonly used algorithms for generating geostatistical realizations in accordance with the observations. However, they suffer from very low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. Two-stage MCMC was introduced to avoid unnecessary simulations of the complex model through a preliminary evaluation of each proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as this preliminary evaluation in the two-stage MCMC. We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 compared with a classical one-stage MCMC implementation. An open question remains: how to choose the size of the training set, and how to identify the realizations that optimize the construction of the error model. This calls for an iterative strategy in which, with each new flow simulation, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline intrusion problem in a coastal aquifer.
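The functional error model described (dimension reduction of the proxy and exact response families, plus a regression between the score spaces learned on a small training subset) might look like this in outline. Ordinary PCA stands in for FPCA, and the response curves, bias, and noise are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
n_real, n_t = 200, 50
t = np.linspace(0, 1, n_t)

# Hypothetical functional responses (e.g., concentration curves), one per
# geostatistical realization
a = rng.normal(size=(n_real, 1))
b = rng.normal(size=(n_real, 1))
exact = a * np.sin(2 * np.pi * t) + b * np.cos(2 * np.pi * t)
# Cheap proxy solver: systematically biased, slightly noisy version
proxy = 0.8 * exact + 0.05 + 0.01 * rng.normal(size=exact.shape)

def pca_scores(Y, k):
    mean = Y.mean(axis=0)
    U, S, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    return (Y - mean) @ Vt[:k].T, mean, Vt[:k]

sp, _, _ = pca_scores(proxy, 3)
se, mean_e, basis_e = pca_scores(exact, 3)

# Error model: linear map from proxy scores to exact scores, learned on a
# training subset of 30 realizations where both solvers were run
train = np.arange(30)
design = np.column_stack([sp[train], np.ones(len(train))])
B, *_ = np.linalg.lstsq(design, se[train], rcond=None)

# Correct every proxy response and compare with the exact solver
pred = mean_e + (np.column_stack([sp, np.ones(n_real)]) @ B) @ basis_e
rmse_corrected = np.sqrt(((pred - exact) ** 2).mean())
rmse_raw = np.sqrt(((proxy - exact) ** 2).mean())
print(rmse_corrected, rmse_raw)
```

The corrected ensemble should track the exact responses far more closely than the raw proxy does, which is the gain the thesis exploits for uncertainty propagation and for the two-stage MCMC screening step.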
                                
Abstract:
BACKGROUND: Shared Decision Making (SDM) is increasingly advocated as a model for medical decision making, yet its use in clinical practice remains low. High-impact-factor journals might represent an efficient way for its dissemination. We aimed to identify and characterize publication trends of SDM in 15 high-impact medical journals. METHODS: We selected the 15 general and internal medicine journals with the highest impact factor publishing original articles, letters and editorials. We retrieved publications from 1996 to 2011 through the full-text search function on each journal website and abstracted bibliometric data. We included publications of any type containing the phrase "shared decision making" or five other variants in their abstract or full text; these were referred to as SDM publications. A polynomial Poisson regression model with logarithmic link function was used to assess the evolution of the number of SDM publications over the period according to publication characteristics. RESULTS: We identified 1285 SDM publications out of 229,179 publications in the 15 journals from 1996 to 2011. The absolute number of SDM publications per journal ranged from 2 to 273 over the 16 years. SDM publications increased both in absolute and in relative numbers per year, from 46 (0.32% of all publications from the 15 journals) in 1996 to 165 (1.17%) in 2011. This growth was exponential (P < 0.01). We found fewer research publications (465, 36.2% of all SDM publications) than non-research publications, which included non-systematic reviews, letters, and editorials. The increase of research publications across time was linear. The full-text search retrieved ten times more SDM publications than a similar PubMed search (1285 vs. 119). CONCLUSION: This full-text review showed that SDM publications increased exponentially in major medical journals from 1996 to 2011. 
This growth might reflect an increased dissemination of the SDM concept to the medical community.
                                
Abstract:
Objectives: We present the retrospective analysis of a single-institution experience for radiosurgery (RS) in brain metastasis (BM) with Gamma Knife (GK) and Linac. Methods: From July 2010 to July 2012, 28 patients (with 83 lesions) had RS with GK and 35 patients (with 47 lesions) with Linac. The primary outcome was the local progression-free survival (LPFS). The secondary outcome was the overall survival (OS). Apart a standard statistical analysis, we included a Cox regression model with shared frailty, to modulate the within-patient correlation (preliminary evaluation showed a significant frailty effect, meaning that the correlation within patient could be ignored). Results: The mean follow-up period was 11.7 months (median 7.9, 1.7-22.7) for GK and 18.1 (median 17, 7.5-28.7) for Linac. The median number of lesions per patient was 2.5 (1-9) in GK compared with 1 (1-3) in Linac. There were more radioresistant lesions (melanoma) and more lesions located in functional areas for the GK group. The median dose was 24 Gy (GK) compared with 20 Gy (Linac). The LPFS actuarial rate was as follows: for GK at 3, 6, 9, 12, and 17 months: 96.96, 96.96, 96.96, 88.1, and 81.5%, and remained stable till 32 months; for Linac at 3, 6, 12, 17, 24, and 33 months, it was 91.5, 91.5, 91.5, 79.9, 55.5, and 17.1%, respectively (p = 0.03, chi-square test). After the Cox regression analysis with shared frailty, the p-value was not statistically significant between groups. The median overall survival was 9.7 months for GK and 23.6 months for Linac group. Uni- and multivariate analysis showed a lower GPA score and noncontrolled systemic status were associated with lower OS. Cox regression analysis adjusting for these two parameters showed comparable OS rate. Conclusions: In this comparative report between GK and Linac, preliminary analysis showed that more difficult cases are treated by GK, with patients harboring more lesions, radioresistant tumors, and highly functional located. 
The groups were, in this sense, very heterogeneous at baseline. After fitting the Cox frailty model, the LPFS rates appeared very similar (p > 0.05). The OS was also similar after adjusting for systemic status and GPA score (p > 0.05). The technical reasons for choosing GK over Linac were anatomical locations in highly functional areas, histology, technical limitations of Linac movements (especially for lower posterior fossa locations), or closeness of multiple lesions to highly functional areas, which precludes optimal dosimetry with Linac.
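The actuarial LPFS rates quoted above are Kaplan-Meier product-limit estimates computed on lesion-level follow-up data. A minimal sketch of that estimator is below; the lesion times and event indicators are illustrative assumptions, not data from the study, and (unlike the shared-frailty Cox model the authors use) this naive version ignores the within-patient correlation between lesions.

```python
# Minimal Kaplan-Meier product-limit estimator, the basis of the
# actuarial LPFS rates reported in the abstract. Data below are
# illustrative, NOT taken from the study.

def kaplan_meier(times, events):
    """Return (time, survival) pairs of the product-limit estimate.

    times  : follow-up time per lesion (months)
    events : 1 = local progression observed, 0 = censored
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    for i in order:
        if events[i]:
            # Step down by the conditional survival at this event time.
            surv *= (at_risk - 1) / at_risk
            curve.append((times[i], surv))
        at_risk -= 1
    return curve

# Illustrative lesion-level data (months to progression or censoring).
times  = [3, 5, 7, 9, 12, 14, 17, 20, 24, 30]
events = [0, 1, 0, 1, 0, 1, 0, 0, 1, 0]

for t, s in kaplan_meier(times, events):
    print(f"t = {t:>2} mo  LPFS = {s:.3f}")
```

Because several lesions can belong to one patient, a shared-frailty Cox model (as in the abstract) multiplies each patient's hazard by a latent random effect, which is what restores valid between-group p-values.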
                                
Abstract:
Over the past few decades, age estimation of living persons has represented a challenging task for many forensic services worldwide. In general, the process of age estimation includes observing the degree of maturity reached by certain physical attributes, such as the dentition or several ossification centers. The estimated chronological age, or the probability that an individual belongs to a meaningful class of ages, is then obtained from the observed degree of maturity by means of various statistical methods. Among these methods, those developed in a Bayesian framework offer users the possibility of coherently dealing with the uncertainty associated with age estimation and of assessing, in a transparent and logical way, the probability that an examined individual is younger or older than a given age threshold. Recently, a Bayesian network for age estimation has been presented in the scientific literature; this kind of probabilistic graphical tool may facilitate the use of the probabilistic approach. Probabilities of interest in the network are assigned by means of transition analysis, a parametric statistical model that links chronological age and degree of maturity through specific regression models, such as logit or probit models. Since different regression models can be employed in transition analysis, the aim of this paper is to study the influence of the model on the classification of individuals. The analysis was performed using a dataset on the ossification status of the medial clavicular epiphysis, and the results support the conclusion that the classification of individuals does not depend on the choice of the regression model.
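The logit-vs-probit insensitivity claimed above can be illustrated with a toy transition model: a link function gives P(stage reached | age), and Bayes' rule with a uniform age prior gives P(age ≥ threshold | stage reached). All coefficients, the age grid, and the 18-year threshold below are illustrative assumptions, not the paper's estimates; the probit coefficients are rescaled by the usual ~1.7 logit/probit scale factor so the two links describe the same underlying transition.

```python
# Toy transition analysis: probability that an ossification stage
# ("fused") has been reached given age, under a logit vs a probit
# link, then a Bayesian classification against an age threshold.
# Coefficients and prior are illustrative assumptions.
import math

def p_logit(age, a, b):
    """Logit link: P(fused | age) = 1 / (1 + exp(-(a + b*age)))."""
    return 1.0 / (1.0 + math.exp(-(a + b * age)))

def p_probit(age, a, b):
    """Probit link: P(fused | age) = Phi(a + b*age)."""
    z = a + b * age
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_over_threshold(link, a, b, threshold=18, ages=range(10, 31)):
    """P(age >= threshold | stage observed) with a uniform age prior."""
    lik = {age: link(age, a, b) for age in ages}
    total = sum(lik.values())
    return sum(p for age, p in lik.items() if age >= threshold) / total

# Probit coefficients rescaled by ~1.7, the usual logit/probit
# scale factor, so both links encode the same transition curve.
post_logit  = p_over_threshold(p_logit,  a=-9.0,       b=0.5)
post_probit = p_over_threshold(p_probit, a=-9.0 / 1.7, b=0.5 / 1.7)
print(f"P(age >= 18 | fused), logit : {post_logit:.3f}")
print(f"P(age >= 18 | fused), probit: {post_probit:.3f}")
```

The two posterior probabilities come out nearly identical, mirroring the paper's finding that the classification is robust to the choice of regression model.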
                                
Abstract:
The decision to settle a motor insurance claim by either negotiation or trial is analysed. This decision may depend on how risk- and confrontation-averse, or how pessimistic, the claimant is. The extent to which these behavioural features of the claimant might influence the final compensation amount is examined. An empirical analysis, fitting a switching regression model to a Spanish database, is conducted to determine whether the choice of conflict resolution procedure is endogenous to the compensation outcomes. The results show that compensations awarded by courts are always higher, although 95% of cases are settled by negotiation. We show that this is because claimants are averse to risk and confrontation and are pessimistic about their chances at trial. By contrast, insurers are risk and confrontation neutral and more objective about the expected trial compensation. During negotiation, insurers agree to pay the claimants' subjective compensation values, since these values are lower than their own estimates of the compensation at trial.
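The settlement logic described above can be made concrete with a small numeric sketch: a risk-averse, pessimistic claimant values the trial lottery at its certainty equivalent, while the risk-neutral insurer values it at its objective expectation. Every figure below (awards, probabilities, the risk-aversion coefficient) is an illustrative assumption, not an estimate from the Spanish database.

```python
# Numeric sketch of the negotiation-vs-trial decision: a risk-averse,
# pessimistic claimant compares the certainty equivalent (CE) of trial
# with a negotiated offer; the risk-neutral insurer compares offers
# with the objective expected trial award. All figures are
# illustrative assumptions.
import math

def certainty_equivalent(outcomes, probs, r):
    """CE of a lottery under CARA utility u(x) = -exp(-r*x)."""
    eu = sum(p * math.exp(-r * x) for x, p in zip(outcomes, probs))
    return -math.log(eu) / r

# Possible trial awards (compensation, in thousands of euros).
awards = [0.0, 30.0, 60.0]

# The pessimistic claimant overweights the bad outcome; the insurer
# uses "objective" probabilities.
claimant_probs = [0.4, 0.4, 0.2]
insurer_probs  = [0.2, 0.4, 0.4]

ce_claimant = certainty_equivalent(awards, claimant_probs, r=0.05)
ev_insurer  = sum(x * p for x, p in zip(awards, insurer_probs))

print(f"claimant CE of trial : {ce_claimant:.1f}")
print(f"insurer expected cost: {ev_insurer:.1f}")
```

Any offer between the claimant's certainty equivalent and the insurer's expected trial payout makes both parties prefer negotiation, which is consistent with the abstract's finding that court awards are higher yet 95% of cases settle.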