989 results for "comparison of tools"

Relevance: 100.00%

Abstract:

Objective: International nutritional screening tools are recommended for screening hospitalized patients for nutritional risk, but no tool has been specifically evaluated in the Brazilian population. The aim of this study was to identify the most appropriate nutritional screening tool for predicting unfavorable clinical outcomes in patients admitted to a Brazilian public university hospital. Methods: The Nutritional Risk Screening 2002 (NRS 2002), Mini-Nutritional Assessment-Short Form (MNA-SF), and Malnutrition Universal Screening Tool (MUST) were administered to 705 patients within 48 h of hospital admission. Tool performance in predicting complications, very long length of hospital stay (LOS), and death was analyzed using receiver operating characteristic (ROC) curves. Results: NRS 2002, MUST, and MNA-SF identified nutritional risk in 27.9%, 39.6%, and 73.2% of the patients, respectively. NRS 2002 (complications: 0.6531; very long LOS: 0.6508; death: 0.7948) and MNA-SF (complications: 0.6495; very long LOS: 0.6197; death: 0.7583) had the largest areas under the ROC curve compared with MUST (complications: 0.6036; very long LOS: 0.6109; death: 0.6363). For elderly patients, NRS 2002 was not significantly different from MNA-SF (P > 0.05) in predicting outcomes. Conclusion: Under current criteria for nutritional risk, NRS 2002 and MNA-SF perform similarly in predicting outcomes, but NRS 2002 seems to provide a better yield.
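As a rough illustration of the ROC analysis described in this abstract, the sketch below computes areas under the curve for three screening scores against a binary outcome. The scores and outcomes are simulated, not the study's data, and scikit-learn's roc_auc_score is assumed to be available.

```python
# Minimal sketch of an AUC comparison between screening tools (synthetic data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 705
death = rng.binomial(1, 0.1, n)                 # hypothetical outcome labels
# Hypothetical screening scores, loosely associated with the outcome.
nrs2002 = rng.normal(0.0, 1.0, n) + 1.2 * death
mna_sf = rng.normal(0.0, 1.0, n) + 1.0 * death
must = rng.normal(0.0, 1.0, n) + 0.5 * death

for name, score in [("NRS 2002", nrs2002), ("MNA-SF", mna_sf), ("MUST", must)]:
    print(f"{name}: AUC for predicting death = {roc_auc_score(death, score):.3f}")
```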

Relevance: 100.00%

Abstract:

1. We compared the baseline phosphorus (P) concentrations inferred by diatom-P transfer functions and export coefficient models at 62 lakes in Great Britain to assess whether the techniques produce similar estimates of historical nutrient status.
2. There was a strong linear relationship between the two sets of values over the whole total P (TP) gradient (2-200 µg TP L-1). However, a systematic bias was observed, with the diatom model producing the higher values in 46 lakes (of which the values differed by more than 10 µg TP L-1 in 21). The export coefficient model gave the higher values in 10 lakes (of which the values differed by more than 10 µg TP L-1 in only 4).
3. The difference between baseline and present-day TP concentrations was calculated to compare the extent of eutrophication inferred by the two sets of model output. There was generally poor agreement between the amounts of change estimated by the two approaches. The discrepancy in both the baseline values and the degree of change inferred by the models was greatest in the shallow and more productive sites.
4. Both approaches were applied to two lakes in the English Lake District where long-term P data exist, to assess how well the models track measured P concentrations since approximately 1850. There was good agreement between the pre-enrichment TP concentrations generated by the models. The diatom model paralleled the steeper rise in maximum soluble reactive P (SRP) more closely than the gradual increase in annual mean TP in both lakes. The export coefficient model produced a closer fit to observed annual mean TP concentrations for both sites, tracking the changes in total external nutrient loading.
5. A combined approach is recommended, with the diatom model employed to reflect the nature and timing of the in-lake response to changes in nutrient loading, and the export coefficient model used to establish the origins and extent of changes in the external load and to assess potential reductions in loading under different management scenarios.
6. However, caution must be exercised when applying these models to shallow lakes, where the export coefficient model TP estimate will not include internal P loading from lake sediments and where the diatom TP inferences may over-estimate TP concentrations because of the high abundance of benthic taxa, many of which are poor indicators of trophic state.
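For illustration only, a minimal sketch of the kind of paired comparison behind points 1-2, using invented baseline TP values for a handful of lakes rather than the 62-lake data set:

```python
# Paired comparison of two baseline TP estimates (values are made up, µg TP L-1).
import numpy as np

diatom_tp = np.array([12.0, 35.0, 60.0, 95.0, 150.0])   # diatom-inferred baselines
export_tp = np.array([10.0, 22.0, 48.0, 80.0, 155.0])   # export coefficient baselines

diff = diatom_tp - export_tp
print("mean bias (diatom - export):", diff.mean())
print("lakes where the diatom model is higher:", int((diff > 0).sum()))
print("lakes differing by more than 10 µg TP L-1:", int((np.abs(diff) > 10).sum()))
print("linear correlation between the two sets:", np.corrcoef(diatom_tp, export_tp)[0, 1])
```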

Relevance: 100.00%

Abstract:

OBJECTIVE: To assess the effect of using different risk calculation tools on how general practitioners and practice nurses evaluate the risk of coronary heart disease with clinical data routinely available in patients' records. DESIGN: Subjective estimates of the risk of coronary heart disease and the results of four different methods of risk calculation were compared with each other and with a reference standard calculated with the Framingham equation; calculations were based on a sample of patients' records, randomly selected from groups at risk of coronary heart disease. SETTING: General practices in central England. PARTICIPANTS: 18 general practitioners and 18 practice nurses. MAIN OUTCOME MEASURES: Agreement of the results of risk estimation and risk calculation with the reference calculation; agreement of general practitioners with practice nurses; sensitivity and specificity of the different methods of risk calculation in detecting patients at high or low risk of coronary heart disease. RESULTS: Only a minority of patients' records contained all of the risk factors required for the formal calculation of the risk of coronary heart disease (concentrations of high density lipoprotein (HDL) cholesterol were present in only 21%). Agreement of the risk calculations with the reference standard was moderate (kappa = 0.33-0.65 for practice nurses and 0.33-0.65 for general practitioners, depending on the calculation tool), with a trend towards underestimation of risk. Moderate agreement was seen between the risks calculated by general practitioners and practice nurses for the same patients (kappa = 0.47-0.58). The British charts gave the most sensitive results for risk of coronary heart disease (practice nurses 79%, general practitioners 80%) and the most specific results for practice nurses (100%), whereas the Sheffield table was the most specific method for general practitioners (89%). CONCLUSIONS: Routine calculation of the risk of coronary heart disease in primary care is hampered by poor availability of data on risk factors. General practitioners and practice nurses are able to evaluate the risk of coronary heart disease with only moderate accuracy. Data on risk factors need to be collected systematically to allow the use of the most appropriate calculation tools.
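A minimal sketch of the agreement and detection statistics quoted above (kappa, sensitivity, specificity), assuming hypothetical high/low risk categorisations rather than the study's patient records; scikit-learn is used for the kappa and confusion-matrix calculations.

```python
# Agreement and detection statistics for a risk-calculation method (toy data).
from sklearn.metrics import cohen_kappa_score, confusion_matrix

reference = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]   # Framingham reference: 1 = high risk
estimated = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]   # e.g. a chart-based estimate by a nurse

kappa = cohen_kappa_score(reference, estimated)
tn, fp, fn, tp = confusion_matrix(reference, estimated).ravel()
sensitivity = tp / (tp + fn)   # high-risk patients correctly flagged
specificity = tn / (tn + fp)   # low-risk patients correctly not flagged
print(f"kappa = {kappa:.2f}, sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```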

Relevance: 100.00%

Abstract:

High precision manufacturers continuously seek out disruptive technologies to improve the quality, cost, and delivery of their products. With the advancement of machine tool and measurement technology, many companies are ready to capitalise on the opportunity of on-machine measurement (OMM). Given a supporting business case, manufacturing engineers are now questioning whether OMM can soon eliminate the need for post-process inspection systems. Metrologists will, however, argue that the machining environment is too hostile and that there are numerous process variables which need consideration before traceable measurement on the machine can be achieved. In this paper we test the measurement capability of five new multi-axis machine tools enabled as OMM systems via on-machine probing. All systems are tested under various operating conditions in order to better understand the effects of potentially significant variables. This investigation has found that key process variables such as machine tool warm-up and tool-change cycles can have an effect on machine tool measurement repeatability. The new data presented here are important to the many manufacturers who are considering utilising their high precision multi-axis machine tools for both the creation and verification of their products.
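As one way of quantifying the repeatability effects mentioned above, the sketch below compares the spread of repeated probe readings recorded under two machine states; the readings and condition names are invented for illustration.

```python
# Repeatability of on-machine probe readings under two hypothetical conditions.
import statistics

readings_mm = {
    "cold start":    [10.0012, 10.0018, 10.0009, 10.0021, 10.0015],
    "after warm-up": [10.0010, 10.0011, 10.0012, 10.0010, 10.0011],
}

for condition, values in readings_mm.items():
    spread = statistics.stdev(values)          # sample standard deviation
    print(f"{condition}: mean = {statistics.fmean(values):.4f} mm, "
          f"repeatability (1 s.d.) = {spread * 1000:.2f} µm")
```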

Relevance: 100.00%

Abstract:

Introduction: Human immunodeficiency virus (HIV)-associated lipodystrophy syndrome (LS) includes body composition and metabolic alterations. The lack of validated criteria and tools makes it difficult to evaluate body composition in this group. Objective: The aim of the study was to compare different methods of evaluating body composition between Brazilian HIV subjects with LS (HIV+LIPO+), HIV subjects without LS (HIV+LIPO-), and healthy subjects (Control). Methods: In a cross-sectional analysis, body composition was measured by bioelectrical impedance analysis (BIA), skinfold thickness (SF), and dual-energy x-ray absorptiometry (DXA) in 10 subjects in the HIV+LIPO+ group, 22 subjects in the HIV+LIPO- group, and 12 in the Control group. Results: There were no differences in age or body mass index (BMI) between groups. The fat mass (FM) (%) estimated by SF did not correlate with DXA in the HIV+LIPO+ group (r = 0.46, p > 0.05) and showed only fair agreement in both HIV groups (HIV+LIPO+ = 0.35; HIV+LIPO- = 0.40). BIA showed significant correlation in all groups (p < 0.05) and strong agreement, mainly in the HIV groups, for FM (HIV+LIPO+ = 0.79; HIV+LIPO- = 0.85; Control = 0.60) and for fat-free mass (FFM) (HIV+LIPO+ = 0.93; HIV+LIPO- = 0.92; Control = 0.73). Discussion: Total fat mass can be measured by BIA with good precision, but not by SF, in HIV-infected patients with LS. Segmental BIA, triceps SF, and circumferences of the arms, waist, and legs may be alternatives that require further study.
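The abstract does not state which agreement statistic was used, so the sketch below shows one common choice, Lin's concordance correlation coefficient, alongside Pearson correlation, for two hypothetical sets of fat mass measurements (e.g. DXA as reference versus BIA).

```python
# Method-agreement sketch for two body-composition measurements (invented values).
import numpy as np

dxa = np.array([28.0, 31.5, 22.4, 35.1, 27.8, 30.2])   # reference method, fat mass (%)
bia = np.array([27.1, 32.0, 23.5, 34.0, 28.5, 29.0])   # method under test, fat mass (%)

r = np.corrcoef(dxa, bia)[0, 1]
# Lin's concordance correlation coefficient penalises both scatter and systematic bias.
ccc = (2 * np.cov(dxa, bia, ddof=1)[0, 1]) / (
    dxa.var(ddof=1) + bia.var(ddof=1) + (dxa.mean() - bia.mean()) ** 2
)
print(f"Pearson r = {r:.2f}, Lin's CCC = {ccc:.2f}")
```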

Relevance: 100.00%

Abstract:

In an increasingly competitive and globalized world, companies need effective training methodologies and tools for their employees. However, selecting the most suitable ones is not an easy task: the choice depends on the requirements of the target group (namely time restrictions), on the specificities of the contents, and on other factors. This is typically the case for training in Lean, the waste-elimination manufacturing philosophy. This paper presents and compares two different approaches to Lean training methodologies and tools: a simulation game based on a single realistic manufacturing platform, involving production and assembly operations, that allows learning by playing; and a digital game that helps trainees understand Lean tools. This paper shows that both tools have advantages in terms of trainee motivation and knowledge acquisition. Furthermore, they can be used in a complementary way, reinforcing the acquired knowledge.

Relevance: 100.00%

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance: 100.00%

Abstract:

Introduction: Epidemiological studies on amebiasis have been reassessed since Entamoeba histolytica and E. dispar were first recognized as distinct species. Because the morphological similarity of these species renders microscopic diagnosis unreliable, additional tools are required to discriminate between Entamoeba species. The objectives of our study were to compare microscopy with an ELISA kit (IVD®) for diagnosing E. histolytica infection and to determine the prevalence of amebiasis in a sample of students from southeastern Brazil. Methods: In this study, diagnosis was based on microscopy, due to its capacity for revealing potential cysts/trophozoites, and on two commercial kits for antigen detection in stool samples. Results: For 1,403 samples collected from students aged 6 to 14 years who were living in Divinópolis, Minas Gerais, Brazil, microscopy underestimated the number of individuals infected with E. histolytica/E. dispar (5.7% prevalence) compared with the ELISA kit (IVD®)-based diagnoses (15.7% for E. histolytica/E. dispar). A comparison of the ELISA (IVD®) and light microscopy results yielded 20% sensitivity, 97% specificity, a low positive predictive value, and a high negative predictive value for microscopy. An ELISA kit (TechLab®) specific for E. histolytica detected a 3.1% (43/1,403) prevalence of E. histolytica infection. Conclusions: The ELISA kit (IVD®) can be used as an alternative screening tool. The high prevalence of E. histolytica infection detected in this study warrants the implementation of actions directed toward health promotion and preventive measures.
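For reference, the diagnostic indices quoted above (sensitivity, specificity, and the predictive values) follow from a standard 2x2 table; the counts in the sketch below are purely illustrative and are not taken from the study.

```python
# Standard diagnostic indices from a 2x2 table (index test vs. reference test).
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from true/false positive/negative counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Purely illustrative counts for an index test judged against a reference.
for name, value in diagnostic_metrics(tp=8, fp=2, fn=32, tn=158).items():
    print(f"{name}: {value:.1%}")
```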

Relevance: 100.00%

Abstract:

Grasslands in semi-arid regions, like the Mongolian steppes, are facing desertification and degradation processes due to climate change. Mongolia's main economic activity is extensive livestock production, so this is a pressing concern for decision makers. Remote sensing and Geographic Information Systems provide the tools for advanced ecosystem management and have been widely used for the monitoring and management of pasture resources. This study investigates the highest level of thematic detail that can be achieved through remote sensing to map the steppe vegetation, using medium-resolution Earth observation imagery in three districts (soums) of Mongolia: Dzag, Buutsagaan, and Khureemaral. After different thematic levels of detail for classifying the steppe vegetation were considered, the existing pasture types within the steppe were chosen to be mapped. In order to investigate which combination of data sets yields the best results and which classification algorithm is more suitable for incorporating these data sets, different classification methods were compared for the study area. Sixteen classifications were performed using different combinations of predictors, drawn from Landsat-8 data (spectral bands and Landsat-8-derived NDVI) and geophysical data (elevation, mean annual precipitation, and mean annual temperature), with two classification algorithms: maximum likelihood and decision tree. Results showed that the best-performing model was the one that incorporated Landsat-8 bands with mean annual precipitation and mean annual temperature (Model 13), using the decision tree. For maximum likelihood, the model that incorporated Landsat-8 bands with mean annual precipitation (Model 5) and the one that incorporated Landsat-8 bands with mean annual precipitation and mean annual temperature (Model 13) achieved the highest accuracies for this algorithm. The decision tree models consistently outperformed the maximum likelihood ones.
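A minimal sketch of the decision-tree classification set-up described above, stacking spectral bands and climate variables as per-pixel predictors; all data here are synthetic, and scikit-learn's DecisionTreeClassifier stands in for whatever implementation the study used.

```python
# Decision-tree classification of per-pixel features (synthetic stand-in data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_pixels = 1000
# Hypothetical per-pixel predictors: 6 reflectance bands + precipitation + temperature.
X = np.column_stack([
    rng.random((n_pixels, 6)),            # Landsat-8 surface reflectance bands
    rng.uniform(100, 300, n_pixels),      # mean annual precipitation (mm)
    rng.uniform(-2, 4, n_pixels),         # mean annual temperature (°C)
])
y = rng.integers(0, 4, n_pixels)          # pasture-type labels from reference data

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```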

Relevance: 100.00%

Abstract:

A table showing a comparison and classification of tools (intelligent tutoring systems) for e-learning of Logic at a college level.

Relevance: 100.00%

Abstract:

Significant advances were made in the diagnosis of filariasis in the 1990s with the emergence of three new alternative tools: ultrasound and tests to detect circulating antigen using two monoclonal antibodies, Og4C3 and AD12-ICT-card. This study aimed to identify which of these methods is the most sensitive for diagnosis of infection. A total of 256 individuals, all male and carrying microfilariae (1-15,679 MF/mL), diagnosed by nocturnal venous blood samples, were tested by all three techniques. The tests for circulating filarial antigen concurred 100% and correctly identified 246/256 (96.69%) of the positive individuals, while ultrasound detected only 186/256 (73.44%). Of the circulating antigen tests, ICT-card was the most convenient method for identification of Wuchereria bancrofti carriers. It was easy to perform, practical and quick.

Relevance: 100.00%

Abstract:

This study aimed to evaluate the use of conventional polymerase chain reaction (cPCR) and real-time quantitative PCR (qPCR) in the diagnosis of human strongyloidiasis from stool samples in tropical areas. Stool samples were collected from individuals and were determined to be positive for Strongyloides stercoralis (group I), negative for S. stercoralis (group II) and positive for other enteroparasite species (group III). DNA specific to S. stercoralis was found in 76.7% of group I samples by cPCR and in 90% of group I samples by qPCR. The results show that molecular methods can be used as alternative tools for detecting S. stercoralis in human stool samples in tropical areas.

Relevance: 100.00%

Abstract:

In the first part of this research, three stages were defined for a program to increase the information extracted from ink evidence and maximise its usefulness to the criminal and civil justice system. These stages are (a) to develop a standard methodology for analysing ink samples by high-performance thin layer chromatography (HPTLC) in a reproducible way, even when ink samples are analysed at different times, in different locations, and by different examiners; (b) to compare ink samples automatically and objectively; and (c) to define and evaluate a theoretical framework for the use of ink evidence in a forensic context. This report focuses on the second of the three stages. Using the calibration and acquisition process described in the previous report, mathematical algorithms are proposed to compare ink samples automatically and objectively. The performance of these algorithms is systematically studied under various chemical and forensic conditions using standard performance tests commonly employed in biometrics studies. The results show that different algorithms are best suited for different tasks. Finally, this report demonstrates how modern analytical and computer technology can be used in the field of ink examination, and how tools developed and successfully applied in other fields of forensic science can help maximise its impact within the field of questioned documents.
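The abstract does not specify which comparison algorithms were evaluated, so the sketch below illustrates one simple possibility: scoring the similarity of two HPTLC densitometric profiles with a Pearson-correlation measure on synthetic traces.

```python
# Correlation-based similarity score between two ink chromatogram profiles (synthetic).
import numpy as np

def similarity(profile_a, profile_b):
    """Pearson correlation between two densitometric profiles, in [-1, 1]."""
    a = (profile_a - profile_a.mean()) / profile_a.std()
    b = (profile_b - profile_b.mean()) / profile_b.std()
    return float(np.mean(a * b))

x = np.linspace(0, 1, 200)
ink_1 = np.exp(-((x - 0.3) ** 2) / 0.002) + 0.5 * np.exp(-((x - 0.7) ** 2) / 0.002)
ink_2 = ink_1 + np.random.default_rng(1).normal(0, 0.02, x.size)   # same ink, noisy rerun
ink_3 = np.exp(-((x - 0.5) ** 2) / 0.002)                          # different ink

print("same-source score:     ", round(similarity(ink_1, ink_2), 3))
print("different-source score:", round(similarity(ink_1, ink_3), 3))
```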