989 results for Severity Scoring Systems
Abstract:
Background: Cellulite refers to skin relief alterations in women's thighs and buttocks, causing dissatisfaction and a search for treatment. Its physiopathology is complex and not completely understood. Many therapeutic options have been reported without scientific evidence of benefit. The majority of studies are neither controlled nor randomized; most efficacy endpoints are subjective, such as poorly standardized photographs and investigator opinion. Objective measures could improve severity assessment. Our purpose was to correlate non-invasive instrumental measures with standardized clinical evaluation. Methods: Twenty-six women presenting cellulite on the buttocks, aged 25 to 41, were evaluated by: body mass index; standardized photography analysis (10-point severity and 5-point photonumeric scales) by five dermatologists; cutometry; and high-frequency ultrasonography (dermal density and dermis/hypodermis interface length). Quality-of-life impact was assessed. Correlations between clinical and instrumental parameters were computed. Results: Good agreement between the dermatologists' and the main investigator's perceptions was detected. Positive correlations: body mass index and clinical scores; ultrasonographic measures. Negative correlation: cutometry and clinical scores. The quality-of-life score was correlated with dermal collagen density. Conclusion: Cellulite had an impact on quality of life. Poor correlation between objective measures and clinical evaluation was detected. Cellulite severity assessment remains a challenge, and objective parameters should be optimized for clinical trials.
Abstract:
Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information-theoretical measures, such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. As a case study, the methodology is also illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed no statistically significant difference in predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, there is a marked difference between the distributions of the estimated default probabilities from these two statistical modeling techniques, with the naive logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples.
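The predictive-quality measures named in this abstract (sensitivity, specificity, accuracy) all derive from the confusion matrix of a binary classifier. A minimal illustrative sketch, not taken from the paper:

```python
def classification_measures(y_true, y_pred):
    """Sensitivity, specificity and accuracy for binary labels (1 = default)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0  # true negative rate
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy
```

The same counts also yield the positive and negative predictive values mentioned in the abstract (tp/(tp+fp) and tn/(tn+fn)).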
Abstract:
Background: The prevalence and severity of tooth wear and dental erosion are rising in children, and there is no consensus about an index to be employed. Aim: To assess the reliability of an epidemiological scoring system, the dental wear index (DWI), to measure tooth wear and dental erosive wear. Design: An epidemiological cross-sectional survey was conducted to evaluate and compare tooth wear and dental erosion using the dental wear index and the erosion wear index (EWI). The study was conducted with randomised samples of 2,371 children aged between 4 and 12 years selected from the State of São Paulo, Brazil. Records were used for calculating tooth wear and dental erosion; the incisal edge and canine cusp were excluded. Results: As the schoolchildren's ages increased, the severity of primary tooth wear increased in canines (P = 0.0001, OR = 0.34) and molars (P = 0.0001, OR = 2.47), and erosive wear increased on incisal/occlusal surfaces (P = 0.0001, OR = 5.18) and molars (P = 0.0001, OR = 2.47). There was an increased prevalence of wear in the permanent teeth of older schoolchildren, particularly on the incisal/occlusal surfaces (P = 0.0001, OR = 7.03). Conclusion: The prevalence of tooth wear and dental erosion increased with age in children. The epidemiological scoring system Dental Wear Index is able to measure both tooth wear and dental erosive wear. This index should be used to monitor the progression of non-carious lesions and to evaluate the levels of disease in the population.
Abstract:
OBJECTIVE: This study proposes a new approach that considers uncertainty in predicting and quantifying the presence and severity of diabetic peripheral neuropathy. METHODS: A rule-based fuzzy expert system was designed by four experts in diabetic neuropathy. The model variables were used to classify neuropathy in diabetic patients as mild, moderate, or severe. System performance was evaluated by means of the kappa agreement measure, comparing the results of the model with those generated by the experts in an assessment of 50 patients. Accuracy was evaluated by an ROC curve analysis based on 50 other cases; the results of those clinical assessments were considered the gold standard. RESULTS: According to the kappa analysis, the model was in moderate agreement with expert opinion. The ROC analysis (evaluation of accuracy) determined an area under the curve of 0.91, demonstrating very good consistency in classifying patients with diabetic neuropathy. CONCLUSION: The model efficiently classified diabetic patients with different degrees of neuropathy severity. In addition, the model provides a way to quantify diabetic neuropathy severity and allows a more accurate assessment of the patient's condition.
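A fuzzy classifier of the kind described grades patients by degree of membership in overlapping severity classes rather than by hard thresholds. The sketch below is purely illustrative: the abstract does not report the system's variables or rule base, so the 0-10 symptom score and the triangular membership functions here are invented for demonstration.

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_neuropathy(score):
    """Map a hypothetical 0-10 symptom score to a fuzzy severity grade."""
    memberships = {
        "mild": triangular(score, -1, 0, 5),
        "moderate": triangular(score, 2, 5, 8),
        "severe": triangular(score, 5, 10, 11),
    }
    # Defuzzify by taking the class with the highest membership degree.
    return max(memberships, key=memberships.get), memberships
```

A real rule-based system would combine several such fuzzified inputs through expert-authored IF-THEN rules before defuzzification.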
Abstract:
Introduction: The survival of patients admitted to an emergency department is determined by the severity of the acute illness and the quality of care provided. The high number and wide spectrum of severity of illness of admitted patients make an immediate assessment of all patients unrealistic. The aim of this study was to evaluate a scoring system based on readily available physiological parameters, recorded immediately after admission to an emergency department (ED), for the purpose of identifying at-risk patients. Methods: This prospective observational cohort study includes 4,388 consecutive adult patients admitted via the ED of a 960-bed tertiary referral hospital over a period of six months. The occurrence of each of seven potential vital sign abnormalities (threat to airway, abnormal respiratory rate, oxygen saturation, systolic blood pressure, heart rate, low Glasgow Coma Scale and seizures) was recorded and summed to generate the vital sign score (VSS). VSSinitial was defined as the VSS in the first 15 minutes after admission, VSSmax as the maximum VSS throughout the stay in the ED. The occurrence of single vital sign abnormalities in the first 15 minutes, VSSinitial and VSSmax were evaluated as potential predictors of hospital mortality. Results: Logistic regression analysis identified all evaluated single vital sign abnormalities except seizures and abnormal respiratory rate as independent predictors of hospital mortality. Increasing VSSinitial and VSSmax were significantly correlated with hospital mortality (odds ratio (OR) 2.80, 95% confidence interval (CI) 2.50 to 3.14, P < 0.0001 for VSSinitial; OR 2.36, 95% CI 2.15 to 2.60, P < 0.0001 for VSSmax). The predictive power of the VSS was highest when collected in the first 15 minutes after ED admission (log-rank chi-square 468.1, P < 0.0001 for VSSinitial; log-rank chi-square 361.5, P < 0.0001 for VSSmax).
Conclusions: Vital sign abnormalities and the VSS collected in the first minutes after ED admission can identify patients at risk of an unfavourable outcome.
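The VSS described above is an unweighted count: one point per abnormal vital sign. A minimal sketch of that summation; the exact abnormality thresholds are not given in the abstract, so the cut-offs below are illustrative assumptions only:

```python
def vital_sign_score(obs):
    """Sum of abnormal findings among seven vital signs (0-7).

    obs: dict of observations; missing keys default to normal values.
    All numeric limits are assumed for illustration, not the study's criteria.
    """
    abnormalities = [
        obs.get("airway_threatened", False),
        not (10 <= obs.get("respiratory_rate", 16) <= 24),   # assumed limits
        obs.get("oxygen_saturation", 98) < 90,               # assumed limit
        obs.get("systolic_bp", 120) < 90,                    # assumed limit
        not (50 <= obs.get("heart_rate", 80) <= 120),        # assumed limits
        obs.get("gcs", 15) < 13,                             # assumed limit
        obs.get("seizures", False),
    ]
    return sum(abnormalities)  # bools count as 0/1
```

VSSinitial would be this score computed on the first set of observations within 15 minutes of admission; VSSmax the maximum over all observation sets during the ED stay.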
Abstract:
OBJECTIVE: To assess the relationship between early laboratory parameters, disease severity, type of management (surgical or conservative) and outcome in necrotizing enterocolitis (NEC). STUDY DESIGN: Retrospective collection and analysis of data from infants treated in a single tertiary care center (1980 to 2002). Data were collected on disease severity (Bell stage), birth weight (BW), gestational age (GA) and pre-intervention laboratory parameters (leukocyte and platelet counts, hemoglobin, lactate, C-reactive protein). RESULTS: Data from 128 infants were sufficient for analysis. Factors significantly associated with survival were Bell stage (P<0.05), lactate (P<0.05), BW and GA (P<0.01 and P<0.001, respectively). From receiver operating characteristic curves, the highest predictive value resulted from a 0-to-8-point score combining BW, Bell stage, lactate and platelet count (P<0.001). At a cutoff level of 4.5, sensitivity and specificity for predicting survival were 0.71 and 0.72, respectively. CONCLUSION: Some single parameters were associated with poor outcome in NEC. Optimal risk stratification was achieved by combining several parameters in a score.
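A combined 0-to-8-point score of this shape can be built by awarding up to 2 points per parameter and thresholding at the ROC-derived cutoff. The per-parameter cut-points below are illustrative assumptions; the abstract does not report the actual weighting:

```python
def nec_score(birth_weight_g, bell_stage, lactate_mmol_l, platelets_per_nl):
    """Illustrative 0-8 NEC risk score: 0-2 points per parameter (assumed bins)."""
    score = 0
    score += 2 if birth_weight_g < 1000 else (1 if birth_weight_g < 1500 else 0)
    score += 2 if bell_stage >= 3 else (1 if bell_stage == 2 else 0)
    score += 2 if lactate_mmol_l > 4 else (1 if lactate_mmol_l > 2 else 0)
    score += 2 if platelets_per_nl < 50 else (1 if platelets_per_nl < 100 else 0)
    return score

def high_risk(score, cutoff=4.5):
    """Flag scores above the ROC-derived cutoff reported in the abstract."""
    return score > cutoff
```

With this structure, the 4.5 cutoff separates infants with at most 4 points from those with 5 or more.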
Abstract:
OBJECTIVES: We sought to compare the diagnostic performance of screen-film radiography, storage-phosphor radiography, and a flat-panel detector system in detecting forearm fractures, and to classify distal radius fractures according to the Müller-AO and Frykman classifications compared with the true extent depicted by anatomic preparation. MATERIALS AND METHODS: A total of 71 cadaver arms were fractured in a material testing machine, creating different fractures of the radius and ulna as well as of the carpal bones. Radiographs of the complete forearm were evaluated by 3 radiologists, and anatomic preparation was used as the standard of reference in a receiver operating characteristic (ROC) curve analysis. RESULTS: The highest diagnostic performance was obtained for the detection of distal radius fractures, with areas under the ROC curve (AUC) of 0.959 for screen-film radiography, 0.966 for storage-phosphor radiography, and 0.971 for the flat-panel detector system (P > 0.05). Exact classification was slightly better for the Frykman (kappa values of 0.457-0.478) than for the Müller-AO classification (kappa values of 0.404-0.447), but agreement can be considered moderate for both classifications. CONCLUSIONS: The 3 imaging systems showed comparable diagnostic performance in detecting forearm fractures. A high diagnostic performance was demonstrated for distal radius fractures, and conventional radiography can be routinely performed for fracture detection. However, compared with anatomic preparation, depiction of the true extent of distal radius fractures was limited, and the severity of distal radius fractures tends to be underestimated.
Abstract:
BACKGROUND: Prophylactic measures are key components of dairy herd mastitis control programs, but some are only relevant in specific housing systems. To assess the association between management practices and mastitis incidence, data collected in 2011 by a survey among 979 randomly selected Swiss dairy farms, and information from the regular test day recordings of 680 of these farms, were analyzed. RESULTS: The median incidence of farmer-reported clinical mastitis (ICM) was 11.6 (mean 14.7) cases per 100 cows per year. The median annual proportion of milk samples with a composite somatic cell count (PSCC) above 200,000 cells/ml was 16.1% (mean 17.3%). A multivariable negative binomial regression model was fitted for each of the mastitis indicators, for farms with tie-stall and free-stall housing systems separately, to study the effect of other (than housing system) management practices on ICM and PSCC events (above 200,000 cells/ml). The results differed substantially by housing system and outcome. In tie-stall systems, clinical mastitis incidence was mainly affected by region (mountainous production zone; incidence rate ratio (IRR) = 0.73), the dairy herd replacement system (IRR = 1.27) and farmer's age (IRR = 0.81). The proportion of high SCC was mainly associated with dry cow udder controls (IRR = 0.67), clean bedding material at calving (IRR = 1.72), using total merit values to select bulls (IRR = 1.57) and body condition scoring (IRR = 0.74). In free-stall systems, the IRR for clinical mastitis was mainly associated with stall climate/temperature (IRR = 1.65), comfort mats as resting surface (IRR = 0.75) and the absence of feed analysis (IRR = 1.18). The proportion of high SCC was only associated with hand and arm cleaning after calving (IRR = 0.81) and using beef producing value to select bulls (IRR = 0.66). CONCLUSIONS: There were substantial differences in the identified risk factors across the four models.
Some of the factors were in agreement with the reported literature, while others were not. This highlights the multifactorial nature of the disease and the differences in the risks for the two mastitis manifestations. Understanding these multifactorial associations within larger management groups continues to play an important role in mastitis control programs.
Abstract:
BACKGROUND: The most effective decision support systems are integrated with clinical information systems, such as inpatient and outpatient electronic health records (EHRs) and computerized provider order entry (CPOE) systems. PURPOSE: The goal of this project was to describe and quantify the results of a study of decision support capabilities in Certification Commission for Health Information Technology (CCHIT) certified electronic health record systems. METHODS: The authors conducted a series of interviews with representatives of nine commercially available clinical information systems, evaluating their capabilities against 42 different clinical decision support features. RESULTS: Six of the nine reviewed systems offered all the applicable event-driven, action-oriented, real-time clinical decision support triggers required for initiating clinical decision support interventions. Five of the nine systems could access all the patient-specific data items identified as necessary. Six of the nine systems supported all the intervention types identified as necessary to allow clinical information systems to tailor their interventions based on the severity of the clinical situation and the user's workflow. Only one system supported all the offered choices identified as key to allowing physicians to take action directly from within the alert. DISCUSSION: The principal finding relates to system-by-system variability. The best system in our analysis had only a single missing feature (of 42 total), while the worst was missing eighteen. This dramatic variability in CDS capability among commercially available systems was unexpected and is a cause for concern. CONCLUSIONS: These findings have implications for four distinct constituencies: purchasers of clinical information systems, developers of clinical decision support, vendors of clinical information systems, and certification bodies.
Abstract:
BACKGROUND: No reliable tool to predict the outcome of acute kidney injury (AKI) exists. HYPOTHESIS: A statistically derived scoring system can accurately predict outcome in dogs with AKI managed with hemodialysis. ANIMALS: One hundred and eighty-two client-owned dogs with AKI. METHODS: Logistic regression analyses were performed initially on clinical variables available on the first day of hospitalization for relevance to outcome. Variables with P ≤ .1 were considered for further analyses. Continuous variables outside the reference range were divided into quartiles to yield quartile-specific odds ratios (ORs) for survival. Models were developed by incorporating weighting factors assigned to each quartile based on the OR, using either the integer value of the OR (Model A) or the exact OR (Models B or C, when the etiology was known). A predictive score for each model was calculated for each dog by summing all weighting factors. In Model D, actual values for continuous variables were used in a logistic regression model. Receiver operating characteristic curve analyses were performed to assess sensitivities, specificities, and optimal cutoff points for all models. RESULTS: Higher scores were associated with a decreased probability of survival (P < .001). Models A, B, C, and D correctly classified outcomes in 81, 83, 87, and 76% of cases, respectively, and optimal sensitivities/specificities were 77/85, 81/85, 83/90 and 92/61%, respectively. CONCLUSIONS AND CLINICAL RELEVANCE: The models allowed outcome prediction that corresponded with actual outcome in our cohort. However, each model should be validated further in independent cohorts. The models may also be useful to assess AKI severity.
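The score construction described in Methods (bin each out-of-range variable into quartiles, attach an OR-derived weighting factor to each quartile, sum the factors per dog) can be sketched generically. Variable names, bin edges and weights below are invented for illustration; the study's actual values are not in the abstract:

```python
def aki_score(values, quartile_edges, weights):
    """Sum of quartile-specific weighting factors across clinical variables.

    values: {var: measured value}
    quartile_edges: {var: [q1, q2, q3]} boundaries splitting the range in four
    weights: {var: [w_q1, w_q2, w_q3, w_q4]}, e.g. integer ORs (Model A)
    """
    total = 0.0
    for var, x in values.items():
        edges = quartile_edges[var]
        q = sum(x > e for e in edges)  # 0..3: which quartile x falls into
        total += weights[var][q]
    return total
```

Under Model A the weights would be the integer parts of the quartile-specific ORs; under Models B/C, the exact ORs.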
Abstract:
Histopathologic determination of tumor regression provides important prognostic information for locally advanced gastroesophageal carcinomas after neoadjuvant treatment. Regression grading systems mostly refer to the amount of therapy-induced fibrosis in relation to residual tumor, or to the estimated percentage of residual tumor in relation to the former tumor site. Although these methods are generally accepted, there is currently no common standard for reporting tumor regression in gastroesophageal cancers. We compared the application of these 2 major principles for assessment of tumor regression: hematoxylin and eosin-stained slides from 89 resection specimens of esophageal adenocarcinomas following neoadjuvant chemotherapy were independently reviewed by 3 pathologists from different institutions. Tumor regression was determined by the 5-tiered Mandard system (fibrosis/tumor relation) and the 4-tiered Becker system (residual tumor in %). Interobserver agreement for the Becker system showed better weighted κ values than the Mandard system (0.78 vs. 0.62). Evaluation of the whole embedded tumor site showed improved results (Becker: 0.83; Mandard: 0.73) compared with only 1 representative slide (Becker: 0.68; Mandard: 0.71). Modification into simplified 3-tiered systems showed comparable interobserver agreement but better prognostic stratification for both systems (log rank; Becker: P=0.015; Mandard: P=0.03), with independent prognostic impact on overall survival (modified Becker: P=0.011, hazard ratio=3.07; modified Mandard: P=0.023, hazard ratio=2.72). In conclusion, both systems provide substantial to excellent interobserver agreement for estimation of tumor regression after neoadjuvant chemotherapy in esophageal adenocarcinomas. A simple 3-tiered system estimating residual tumor in % (complete regression / 1% to 50% residual tumor / >50% residual tumor) maintains the highest reproducibility and prognostic value.
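Interobserver agreement on ordered grading systems like these is typically quantified with Cohen's weighted kappa, which discounts near-miss disagreements. A self-contained sketch for two raters (illustrative, not the authors' code; linear or quadratic disagreement weights):

```python
def weighted_kappa(r1, r2, k, quadratic=False):
    """Cohen's weighted kappa for two raters grading the same cases 0..k-1."""
    n = len(r1)
    obs = [[0.0] * k for _ in range(k)]          # observed proportion matrix
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    row = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 if quadratic else abs(i - j)  # disagreement weight
            num += w * obs[i][j]          # observed weighted disagreement
            den += w * row[i] * col[j]    # disagreement expected by chance
    return 1.0 - num / den
```

Perfect agreement gives 1.0; agreement no better than chance gives 0; values near the 0.78 and 0.62 reported above would indicate substantial agreement for both grading systems.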
Abstract:
We present and test a conceptual and methodological approach for interdisciplinary sustainability assessments of water governance systems based on what we call the sustainability wheel. The approach combines transparent identification of sustainability principles, their regional contextualization through sub-principles (indicators), and the scoring of these indicators through deliberative dialogue within an interdisciplinary team of researchers, taking into account their various qualitative and quantitative research results. The approach was applied to a sustainability assessment of a complex water governance system in the Swiss Alps. We conclude that the applied approach is advantageous for structuring complex and heterogeneous knowledge, gaining a holistic and comprehensive perspective on water sustainability, and communicating this perspective to stakeholders.
Abstract:
Chrysophyte cysts are recognized as powerful proxies of cold-season temperatures. In this paper we use the relationship between chrysophyte assemblages and the number of days below 4 °C (DB4°C) in the epilimnion of a lake in northern Poland to develop a transfer function and to reconstruct winter severity in Poland for the last millennium. DB4°C is a climate variable related to the length of the winter. Multivariate ordination techniques were used to study the distribution of chrysophytes from sediment traps of 37 lowland lakes distributed along a variety of environmental and climatic gradients in northern Poland. Of all the environmental variables measured, stepwise variable selection and individual redundancy analyses (RDA) identified DB4°C as the most important variable for chrysophytes, explaining a portion of variance independent of variables related to water chemistry (conductivity, chlorides, K, sulfates), which were also important. A quantitative transfer function was created to estimate DB4°C from sedimentary assemblages using partial least squares regression (PLS). The two-component model (PLS-2) had a cross-validated coefficient of determination of R²cross = 0.58, with a root mean squared error of prediction (RMSEP, based on leave-one-out cross-validation) of 3.41 days. The resulting transfer function was applied to an annually varved sediment core from Lake Żabińskie, providing a new sub-decadal quantitative reconstruction of DB4°C with high chronological accuracy for the period AD 1000–2010. During Medieval Times (AD 1180–1440) winters were generally shorter (warmer), except for a decade with very long and severe winters around AD 1260–1270 (following the AD 1258 volcanic eruption). The 16th and 17th centuries and the beginning of the 19th century experienced very long, severe winters.
Comparison with other European cold-season reconstructions and atmospheric indices for this region indicates that a large part of the winter variability (reconstructed DB4°C) is due to the interplay between the oscillations of the zonal flow controlled by the North Atlantic Oscillation (NAO) and the influence of continental anticyclonic systems (Siberian High, East Atlantic/Western Russia pattern). Differences from other European records are attributed to geographic climatological differences between Poland and Western Europe (Low Countries, Alps). The striking correspondence between the combined volcanic and solar forcing and the DB4°C reconstruction prior to the 20th century suggests that winter climate in Poland responds mostly to naturally forced variability (volcanic and solar) and that the influence of unforced variability is low.
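The leave-one-out RMSEP used to validate the transfer function is straightforward to compute: refit the calibration with each sample held out, predict the held-out value, and take the root mean of the squared prediction errors. The sketch below substitutes a one-predictor linear calibration for the study's two-component PLS model, as an illustrative simplification:

```python
def rmsep_loo(x, y):
    """Leave-one-out RMSEP for a simple linear calibration y ~ x."""
    n = len(x)
    sq_err = 0.0
    for i in range(n):
        # Refit on all samples except i.
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
                 / sum((a - mx) ** 2 for a in xs))
        # Predict the held-out sample and accumulate its squared error.
        pred = my + slope * (x[i] - mx)
        sq_err += (y[i] - pred) ** 2
    return (sq_err / n) ** 0.5
```

For the study, x would be the sedimentary assemblage scores and y the observed DB4°C values, giving the reported RMSEP of 3.41 days.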
Abstract:
AIMS: Information on tumour border configuration (TBC) in colorectal cancer (CRC) is currently not included in most pathology reports, owing to a lack of reproducibility and/or established evaluation systems. The aim of this study was to investigate whether an alternative scoring system based on the percentage of the infiltrating component may represent a reliable method for assessing TBC. METHODS AND RESULTS: Two hundred and fifteen CRCs with complete clinicopathological data were evaluated by two independent observers, both 'traditionally', by assigning the tumours to pushing/infiltrating/mixed categories, and alternatively, by scoring the percentage of infiltrating margin. With the pushing/infiltrating/mixed pattern method, interobserver agreement (IOA) was moderate (κ = 0.58), whereas with the percentage-of-infiltrating-margin method, IOA was excellent (intraclass correlation coefficient of 0.86). A higher percentage of infiltrating margin correlated with adverse features such as higher grade (P = 0.0025), higher pT (P = 0.0007), pN (P = 0.0001) and pM classification (P = 0.0063), high-grade tumour budding (P < 0.0001), lymphatic invasion (P < 0.0001), vascular invasion (P = 0.0032), and shorter survival (P = 0.0008), and was significantly associated with an increased probability of lymph node metastasis (P < 0.001). CONCLUSIONS: Information on TBC adds prognostic value to pathology reports on CRC. The proposed scoring system, using the percentage of infiltrating margin, outperforms the 'traditional' way of reporting TBC. It is also reproducible and simple to apply, and can therefore be easily integrated into daily diagnostic practice.
Abstract:
Maritime accidents involving ships carrying passengers may pose a high risk with respect to human casualties. For effective risk mitigation, insight into the process of risk escalation is needed. This requires a proactive approach to risk modelling for maritime transportation systems. Most existing models are based on historical data on maritime accidents, and thus can be considered reactive rather than proactive. This paper introduces a systematic, transferable and proactive framework for estimating the risk for maritime transportation systems, meeting the requirements stemming from the adopted formal definition of risk. The framework focuses on ship-ship collisions in the open sea, with a RoRo/Passenger ship (RoPax) considered as the struck ship. First, it covers the identification of the events that follow a collision between two ships in the open sea; second, it evaluates the probabilities of these events, concluding by determining the severity of a collision. The risk framework is developed with the use of Bayesian belief networks and utilizes a set of analytical methods for the estimation of the risk model parameters. The model can be run with the GeNIe software package. Finally, a case study is presented in which the risk framework is applied to a maritime transportation system operating in the Gulf of Finland (GoF). The results obtained are compared to historical data and available models in which a RoPax was involved in a collision, and good agreement with the available records is found.