885 results for fuzzy based evaluation method
Abstract:
INTRODUCTION: In the Americas, mucosal leishmaniasis is primarily associated with infection by Leishmania (Viannia) braziliensis. However, Leishmania (Viannia) guyanensis is another important cause of this disease in the Brazilian Amazon. In this study, we aimed at detecting Leishmania deoxyribonucleic acid (DNA) within paraffin-embedded fragments of mucosal tissues, and characterizing the infecting parasite species. METHODS: We evaluated samples collected from 114 patients treated at a reference center in the Brazilian Amazon by polymerase chain reaction (PCR) and restriction fragment length polymorphism (RFLP) analyses. RESULTS: Direct examination of biopsy imprints detected parasites in 10 of the 114 samples, while evaluation of hematoxylin and eosin-stained slides detected amastigotes in an additional 17 samples. Meanwhile, 31/114 samples (27.2%) were positive for Leishmania spp. kinetoplast DNA (kDNA) by PCR analysis. Of these, 17 (54.8%) yielded amplification of the mini-exon PCR target, thereby allowing for PCR-RFLP-based identification. Six of the samples were identified as L. (V.) braziliensis, while the remaining 11 were identified as L. (V.) guyanensis. CONCLUSIONS: The results of this study demonstrate the feasibility of applying molecular techniques for the diagnosis of human parasites within paraffin-embedded tissues. Moreover, our findings confirm that L. (V.) guyanensis is a relevant causative agent of mucosal leishmaniasis in the Brazilian Amazon.
Abstract:
The aim of this study is to perform a thorough comparison of quantitative susceptibility mapping (QSM) techniques and their dependence on the assumptions made. The compared methodologies were: two iterative single-orientation methodologies, minimizing respectively the l2 and the l1-TV norm of the prior knowledge of the edges of the object; one over-determined multiple-orientation method (COSMOS); and a newly proposed modulated closed-form solution (MCF). The performance of these methods was compared using a numerical phantom and in-vivo high-resolution (0.65 mm isotropic) brain data acquired at 7 T using a new coil combination method. For all QSM methods, the relevant regularization and prior-knowledge parameters were systematically varied in order to evaluate the optimal reconstruction in the presence and absence of a ground truth. Additionally, the QSM contrast was compared to conventional gradient recalled echo (GRE) magnitude and R2* maps obtained from the same dataset. The QSM reconstruction results of the single-orientation methods show comparable performance. The MCF method has the highest correlation (correlation = 0.95, r² = 0.97) with the state-of-the-art method (COSMOS), with the additional advantage of extremely fast computation. The L-curve method gave the most visually satisfactory balance between reduction of streaking artifacts and over-regularization, with the latter being overemphasized when using the COSMOS susceptibility maps as ground truth. Although based on distinct features of the data, R2* and susceptibility maps calculated from the same datasets have a comparable ability to distinguish deep gray matter structures.
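As a rough illustration of the closed-form, single-orientation family of QSM reconstructions compared above (a minimal sketch, not the paper's MCF algorithm), the following Python code performs a Tikhonov-regularized dipole inversion in k-space. The regularization weight `lam`, the voxel size, and the function names are illustrative assumptions.

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(0.65, 0.65, 0.65)):
    """Unit dipole kernel in k-space, with B0 along the z axis."""
    ks = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*ks, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                 # placeholder; the DC term is zeroed below
    D = 1.0 / 3.0 - kz**2 / k2
    D[0, 0, 0] = 0.0                  # the dipole kernel is undefined at k = 0
    return D

def closed_form_qsm(field_map, lam=0.05, voxel_size=(0.65, 0.65, 0.65)):
    """chi = argmin ||D*chi - phi||^2 + lam*||chi||^2, solved per k-space voxel."""
    D = dipole_kernel(field_map.shape, voxel_size)
    chi_k = D * np.fft.fftn(field_map) / (D**2 + lam)  # regularized deconvolution
    return np.real(np.fft.ifftn(chi_k))

# Example on random data; a real field map would come from the GRE phase.
chi = closed_form_qsm(np.random.rand(64, 64, 64))
```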
Abstract:
Background: Retrospective analyses suggest that personalized PK-based dosage might be useful for imatinib, as treatment response correlates with trough concentrations (Cmin) in cancer patients. Our objectives were to improve the interpretation of randomly measured concentrations and to confirm the efficiency of this approach before evaluating the clinical usefulness of systematic PK-based dosage in chronic myeloid leukemia patients. Methods and Results: A Bayesian method was validated for the prediction of individual Cmin on the basis of a single random observation, and was applied in a prospective multicenter randomized controlled clinical trial. 28 out of 56 patients were enrolled in the systematic dosage individualization arm and had 44 follow-up visits (their clinical follow-up is ongoing). Dose adjustments were proposed at 39% of visits, where the predicted Cmin deviated significantly from the target (1000 ng/ml). Recommendations were taken up by physicians in 57% of cases; patients were considered non-compliant in 27%. Median Cmin at study inclusion was 754 ng/ml and differed significantly from the target (p = 0.02, Wilcoxon test). On follow-up, Cmin was 984 ng/ml (p = 0.82) in the compliant group. The coefficient of variation (CV) decreased from 46% to 27% (p = 0.02, F-test). Conclusion: PK-based (Bayesian) dosage adjustment is able to bring individual drug exposure closer to a given therapeutic target. Its influence on therapeutic response remains to be evaluated.
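The abstract does not give the underlying population PK model, so as a hedged sketch of how a single random level can drive a Bayesian Cmin prediction, the code below fits a maximum a posteriori (MAP) clearance for a one-compartment oral model at steady state. All population parameters (POP_CL, OMEGA_CL, V, KA, SIGMA) and the observed values are invented for illustration, not the trial's model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical population values for illustration only (not the trial's model):
POP_CL, OMEGA_CL = 14.0, 0.30      # clearance (L/h) and its SD on the log scale
V, KA, SIGMA = 250.0, 0.6, 0.25    # volume (L), absorption rate (1/h), residual SD
DOSE, TAU = 400_000.0, 24.0        # 400 mg daily in ug, dosing interval (h)

def conc(cl, t):
    """Steady-state concentration (ug/L = ng/mL) of a 1-compartment oral model."""
    ke = cl / V
    return (DOSE * KA / (V * (KA - ke))) * (
        np.exp(-ke * t) / (1 - np.exp(-ke * TAU))
        - np.exp(-KA * t) / (1 - np.exp(-KA * TAU)))

def map_cl(c_obs, t_obs):
    """MAP estimate of individual clearance from one random observation."""
    def neg_log_post(log_cl):
        pred = conc(np.exp(log_cl), t_obs)
        return ((np.log(c_obs) - np.log(pred)) / SIGMA) ** 2 \
             + ((log_cl - np.log(POP_CL)) / OMEGA_CL) ** 2
    return np.exp(minimize_scalar(neg_log_post, bounds=(1.0, 4.0),
                                  method="bounded").x)

cl_i = map_cl(c_obs=620.0, t_obs=6.0)      # sample drawn 6 h post-dose
print(f"predicted Cmin ~ {conc(cl_i, TAU):.0f} ng/mL (target 1000)")
```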
Abstract:
The aim of this study was to make a comprehensive evaluation of organ-specific out-of-field doses using Monte Carlo (MC) simulations for different breast cancer irradiation techniques and to compare the results with a commercial treatment planning system (TPS). Three breast radiotherapy techniques using 6 MV tangential photon beams were compared: (a) 2DRT (open rectangular fields), (b) 3DCRT (conformal wedged fields), and (c) hybrid IMRT (open conformal + modulated fields). Over 35 organs were contoured in a whole-body CT scan, and organ-specific dose distributions were determined with MC and the TPS. Large differences in out-of-field doses were observed between MC and TPS calculations, even for organs close to the target volume such as the heart, the lungs and the contralateral breast (up to 70% difference). MC simulations showed that a large fraction of the out-of-field dose comes from the out-of-field head scatter fluence (>40%), which is not adequately modeled by the TPS. Based on MC simulations, the 3DCRT technique using external wedges yielded significantly higher doses (up to a factor of 4-5 in the pelvis) than the 2DRT and the hybrid IMRT techniques, which yielded similar out-of-field doses. In sharp contrast to popular belief, the IMRT technique investigated here does not increase the out-of-field dose compared to conventional techniques and may offer the best overall plan. The 3DCRT technique with external wedges yields the largest out-of-field doses. For accurate out-of-field dose assessment, a commercial TPS should not be used, even for organs near the target volume (contralateral breast, lungs, heart).
Abstract:
RATIONALE AND OBJECTIVE: The information assessment method (IAM) permits health professionals to systematically document the relevance, cognitive impact, use and health outcomes of information objects delivered by or retrieved from electronic knowledge resources. The companion review paper (Part 1) critically examined the literature and proposed a 'Push-Pull-Acquisition-Cognition-Application' evaluation framework, which is operationalized by IAM. The purpose of the present paper (Part 2) is to examine the content validity of the IAM cognitive checklist when linked to email alerts. METHODS: A qualitative component of a mixed-methods study was conducted with 46 doctors reading and rating research-based synopses sent by email. The unit of analysis was a doctor's explanation of a rating of one item regarding one synopsis. Interviews with participants provided 253 units that were analysed to assess concordance with item definitions. RESULTS AND CONCLUSION: The content relevance of seven items was supported. For three items, revisions were needed. Interviews suggested one new item. This study has yielded a 2008 version of IAM.
Abstract:
Aim: Recently developed parametric methods in historical biogeography allow researchers to integrate temporal and palaeogeographical information into the reconstruction of biogeographical scenarios, thus overcoming a known bias of parsimony-based approaches. Here, we compare a parametric method, dispersal-extinction-cladogenesis (DEC), against a parsimony-based method, dispersal-vicariance analysis (DIVA), which does not incorporate branch lengths but accounts for phylogenetic uncertainty through a Bayesian empirical approach (Bayes-DIVA). We analyse the benefits and limitations of each method using the cosmopolitan plant family Sapindaceae as a case study. Location: World-wide. Methods: Phylogenetic relationships were estimated by Bayesian inference on a large dataset representing generic diversity within Sapindaceae. Lineage divergence times were estimated by penalized likelihood over a sample of trees from the posterior distribution of the phylogeny to account for dating uncertainty in biogeographical reconstructions. We compared biogeographical scenarios between Bayes-DIVA and two different DEC models: one with no geological constraints and another that employed a stratified palaeogeographical model in which dispersal rates were scaled according to area connectivity across four time slices, reflecting the changing continental configuration over the last 110 million years. Results: Despite differences in the underlying biogeographical model, Bayes-DIVA and DEC inferred similar biogeographical scenarios. The main differences were: (1) the timing of dispersal events, which in Bayes-DIVA sometimes conflicts with palaeogeographical information, and (2) the lower frequency of terminal dispersal events inferred by DEC. Uncertainty in divergence time estimations influenced both the inference of ancestral ranges and the decisiveness with which an area can be assigned to a node. Main conclusions: By considering lineage divergence times, the DEC method gives more accurate reconstructions that are in agreement with palaeogeographical evidence. In contrast, Bayes-DIVA showed the highest decisiveness in unequivocally reconstructing ancestral ranges, probably reflecting its ability to integrate phylogenetic uncertainty. Care should be taken in defining the palaeogeographical model in DEC because of the possibility of overestimating the frequency of extinction events, or of inferring ancestral ranges that lie outside the extant species ranges, owing to dispersal constraints enforced by the model. The wide-spanning spatial and temporal model proposed here could prove useful for testing large-scale biogeographical patterns in plants.
Abstract:
PURPOSE: Few studies compare the variabilities that characterize environmental monitoring (EM) and biological monitoring (BM) data. Comparing their respective variabilities can help to identify the best strategy for evaluating occupational exposure. The objective of this study is to quantify the biological variability associated with 18 bio-indicators currently used in work environments. METHODS: Intra-individual (BVintra), inter-individual (BVinter), and total biological variability (BVtotal) were quantified using validated physiologically based toxicokinetic (PBTK) models coupled with Monte Carlo simulations. Two environmental exposure profiles with different levels of variability were considered (GSD of 1.5 and 2.0). RESULTS: PBTK models coupled with Monte Carlo simulations were successfully used to predict the biological variability of biological exposure indicators. The predicted values follow a lognormal distribution, characterized by GSDs ranging from 1.1 to 2.3. Our results show a link between biological variability and the half-life of bio-indicators, since BVintra and BVtotal both decrease as bio-indicator half-lives increase. BVintra is always lower than the variability in the air concentrations. On an individual basis, this means that the variability associated with the measurement of biological indicators is always lower than the variability characterizing airborne levels of contaminants. For a group of workers, BM is less variable than EM for bio-indicators with half-lives longer than 10-15 h. CONCLUSION: The variability data obtained in the present study can be useful in the development of BM strategies for exposure assessment and can be used to calculate the number of samples required to guide industrial hygienists or medical doctors in decision-making.
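The study's PBTK models are not reproduced here; as a minimal sketch of why a bio-indicator's intra-individual variability falls below the air-level variability as its half-life grows, the following Monte Carlo simulation uses a crude one-compartment stand-in (the half-life, shift length, and units are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
N_DAYS, GSD_AIR = 5000, 2.0          # simulated workdays, air-level variability
HALF_LIFE_H = 13.0                   # hypothetical bio-indicator half-life
k = np.log(2) / HALF_LIFE_H

# Daily air exposures: lognormal with GM = 1 (arbitrary units), GSD = 2.0.
air = rng.lognormal(mean=0.0, sigma=np.log(GSD_AIR), size=N_DAYS)

# End-of-shift biomarker from a simplified 1-compartment model: carry-over
# from previous days damps day-to-day swings when the half-life is long.
biomarker = np.zeros(N_DAYS)
for day in range(1, N_DAYS):
    carry = biomarker[day - 1] * np.exp(-k * 24.0)              # decay since last shift
    biomarker[day] = carry + air[day] * (1 - np.exp(-k * 8.0))  # 8-h shift uptake

gsd_bio = np.exp(np.std(np.log(biomarker[100:])))    # discard burn-in days
print(f"air GSD = {GSD_AIR:.2f}, biomarker GSD = {gsd_bio:.2f}")
```

Increasing HALF_LIFE_H strengthens the carry-over term and shrinks the simulated biomarker GSD, mirroring the trend reported above.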
Abstract:
OBJECTIVE: To extract and validate a brief version of the DISCERN which could identify mental health-related websites with good content quality. METHOD: The present study is based on the analysis of data issued from six previous studies which used DISCERN and a standardized tool for the evaluation of the content quality (evidence-based health information) of 388 mental health-related websites. After extracting the Brief DISCERN, several psychometric properties (content validity through a factor analysis, internal consistency by Cronbach's alpha, predictive validity through diagnostic tests, concurrent validity by the strength of association between the Brief DISCERN and the original DISCERN scores) were investigated to ascertain its general applicability. RESULTS: A Brief DISCERN composed of two factors and six items was extracted from the original 16-item version of the DISCERN. Cronbach's alpha coefficients were more than acceptable for the complete questionnaire (alpha = 0.74) and for the two distinct domains: treatment information (alpha = 0.87) and reliability (alpha = 0.83). Sensitivity and specificity of the Brief DISCERN cut-off score ≥ 16 in the detection of good content quality websites were 0.357 and 0.945, respectively. Its positive and negative predictive values were 0.98 and 0.83, respectively. A statistically significant linear correlation was found between the total scores of the Brief DISCERN and those of the original DISCERN (r = 0.84, p < 0.0005). CONCLUSION: The Brief DISCERN seems to be a reliable and valid instrument able to discriminate between websites with good and poor content quality. PRACTICE IMPLICATIONS: The Brief DISCERN is a simple tool which could facilitate the identification of good information on the web by patients and general consumers.
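For readers who want to recompute such diagnostic properties, the standard definitions from a 2x2 confusion table are sketched below. The counts are hypothetical, chosen only so the sensitivity and specificity match the values reported above; the paper's actual table (and hence its PPV/NPV, which depend on prevalence) is not given in the abstract.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts: 25/70 good-quality sites detected (Se = 0.357),
# 70/74 poor-quality sites correctly excluded (Sp = 0.945).
print(diagnostic_metrics(tp=25, fp=4, fn=45, tn=70))
```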
Abstract:
One of the most important problems in optical pattern recognition by correlation is the appearance of sidelobes in the correlation plane, which causes false alarms. We present a method that eliminates sidelobes of up to a given height if certain conditions are satisfied. The method can be applied to any generalized synthetic discriminant function filter and is capable of rejecting lateral peaks that are even higher than the central correlation peak. Satisfactory results were obtained in both computer simulations and optical implementation.
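The sidelobe-elimination conditions themselves are not detailed in the abstract. As background, here is a minimal Python sketch of the conventional synthetic discriminant function (SDF) filter that such methods build on: the filter is constrained to produce prescribed central correlation values for each training image, with no control over sidelobes.

```python
import numpy as np

def sdf_filter(train_images, peaks):
    """Conventional SDF filter h = X (X^H X)^-1 u: each training image
    yields its prescribed central correlation value in `peaks`."""
    X = np.stack([img.ravel() for img in train_images], axis=1)  # pixels x N
    u = np.asarray(peaks, dtype=complex)
    h = X @ np.linalg.solve(X.conj().T @ X, u)
    return h.reshape(train_images[0].shape)

def correlate(scene, h):
    """Circular cross-correlation of a scene with the filter, via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(h))))

# Example: two 32x32 training patterns, both constrained to peak value 1.
rng = np.random.default_rng(1)
imgs = [rng.standard_normal((32, 32)) for _ in range(2)]
h = sdf_filter(imgs, peaks=[1.0, 1.0])
plane = correlate(imgs[0], h)       # plane[0, 0] is ~1.0, the prescribed peak;
                                    # off-center values are the sidelobes.
```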
Abstract:
Health assessment and medical surveillance of workers exposed to combustion nanoparticles are challenging. The aim was to evaluate the feasibility of using exhaled breath condensate (EBC) from healthy volunteers for (1) assessing the lung-deposited dose of combustion nanoparticles and (2) determining the resulting oxidative stress by measuring hydrogen peroxide (H2O2) and malondialdehyde (MDA). Methods: Fifteen healthy nonsmoker volunteers were exposed to three different levels of sidestream cigarette smoke under controlled conditions. EBC was repeatedly collected before, during, and 1 and 2 hr after exposure. Exposure variables were measured by direct-reading instruments and by active sampling. The EBC samples were analyzed for particle number concentration (light-scattering-based method) and for selected compounds considered oxidative stress markers. Results: Subjects were exposed to an average airborne concentration of up to 4.3×10⁵ particles/cm³ (average geometric size ∼60-80 nm). Up to 10×10⁸ particles/mL could be measured in the collected EBC with a broad size distribution (50th percentile ∼160 nm), but these biological concentrations were not related to the exposure level of cigarette smoke particles. Although H2O2 and MDA concentrations in EBC increased during exposure, only H2O2 showed a transient normalization 1 hr after exposure and increased afterward. In contrast, MDA levels stayed elevated during the 2 hr post exposure. Conclusions: The use of diffusion light scattering for particle counting proved to be sufficiently sensitive to detect objects in EBC, but lacked specificity for carbonaceous tobacco smoke particles. Our results suggest two phases of oxidation markers in EBC: first, the initial deposition of particles and gases in the lung lining liquid, and later the start of oxidative stress with associated cell membrane damage. Future studies should extend the follow-up time and should remove gases or particles from the air to allow differentiation between the different sources of H2O2 and MDA.
Abstract:
PLFC is a first-order possibilistic logic dealing with fuzzy constants and fuzzily restricted quantifiers. The refutation proof method in PLFC is mainly based on a generalized resolution rule which allows an implicit graded unification among fuzzy constants. However, unification for precise object constants is classical. In order to use PLFC for similarity-based reasoning, in this paper we extend a Horn-rule sublogic of PLFC with similarity-based unification of object constants. The Horn-rule sublogic of PLFC we consider deals only with disjunctive fuzzy constants and it is equipped with a simple and efficient version of PLFC proof method. At the semantic level, it is extended by equipping each sort with a fuzzy similarity relation, and at the syntactic level, by fuzzily “enlarging” each non-fuzzy object constant in the antecedent of a Horn-rule by means of a fuzzy similarity relation.
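To make the graded-unification idea concrete, here is a toy Python sketch of similarity-based unification of object constants in the spirit of the extension described above. The similarity relation, constant names, and cut value are all invented; PLFC's actual resolution-based proof method is not reproduced.

```python
# A fuzzy similarity relation on one sort (symmetric; values are invented).
SIM = {
    ("ana", "ana"): 1.0,
    ("ana", "anna"): 0.8,
    ("ana", "bob"): 0.0,
}

def sim(a, b):
    """Symmetric lookup with reflexivity: identical constants unify to 1."""
    return 1.0 if a == b else max(SIM.get((a, b), 0.0), SIM.get((b, a), 0.0))

def unify_degree(c1, c2):
    """Degree to which two object constants unify under the similarity."""
    return sim(c1, c2)

def enlarge(constant, cut=0.7):
    """Fuzzily 'enlarge' a constant: all constants similar above the cut."""
    names = {a for pair in SIM for a in pair}
    return {c for c in names if sim(constant, c) >= cut}

print(unify_degree("ana", "anna"))   # 0.8: graded, not crisp, unification
print(enlarge("ana"))                # {'ana', 'anna'}
```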
Abstract:
The aim of this Master's thesis was to evaluate the suitability of e-learning in the target company and to determine whether classroom training can be replaced by e-learning courses. An e-learning course on information system reporting was produced and piloted. After the pilot, a user survey was conducted, usage data on the course were collected, and interviews were carried out. Based on the evaluation of the pilot users' experiences, e-learning is suitable for training on straightforward topics in the target company, but it cannot completely replace classroom training. Classroom training should focus on more complex topics and problem solving. Based on the positive results, the company decided to continue developing e-learning. The e-learning course yields cost savings in the target company when the number of users exceeds 66. If the entire target audience of the piloted course completes it electronically, the costs are only about 15% of the corresponding costs of classroom delivery. In addition, the effectiveness of e-learning was studied, and the piloted course was assessed as positive according to the Consensus model developed in this work.
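The cost comparison above reduces to a fixed-versus-variable-cost break-even. A minimal sketch follows; the per-user and production costs are placeholders chosen only so that the break-even falls at 66 users, as the thesis reports (the ~15% figure for the full target audience is not derivable from these invented numbers).

```python
def classroom_cost(users, per_user=300.0):
    """Classroom delivery scales linearly with the number of trainees."""
    return per_user * users

def elearning_cost(users, production=19_800.0, per_user=0.0):
    """E-learning is dominated by a one-time course production cost."""
    return production + per_user * users

for n in (50, 66, 200):
    cheaper = elearning_cost(n) <= classroom_cost(n)
    print(f"{n} users: e-learning cheaper or equal? {cheaper}")
```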
Abstract:
Polygala cyparissias is a plant widespread in southern Latin America. Recently, we demonstrated the gastroprotective activity of the extract, as well as of one of the isolated metabolites, 1,7-dihydroxy-2,3-methylenedioxyxanthone (MDX). In this study, an HPLC method for the quantification of MDX was validated. The HPLC method was linear (0.5-24 µg mL⁻¹ of MDX) with good accuracy, precision and robustness. The content of MDX in the extracts from the whole plant and from its different parts ranged from 0 to 5.4 mg g⁻¹, and the gastroprotective index ranged from 72.1 to 99.1%. Thus, the method might be used for the standardization of the extracts based on the MDX marker.
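Quantification over a validated linear range of this kind rests on an ordinary calibration line. The sketch below fits one and back-calculates a sample concentration; the peak areas are invented for illustration, only the 0.5-24 µg/mL range comes from the abstract.

```python
import numpy as np

# Calibration standards across the validated range (0.5-24 ug/mL of MDX);
# the peak areas are invented placeholders, not the paper's chromatograms.
std_conc = np.array([0.5, 2.0, 6.0, 12.0, 24.0])        # ug/mL
peak_area = np.array([13.1, 51.8, 155.2, 309.7, 621.4])  # arbitrary units

slope, intercept = np.polyfit(std_conc, peak_area, 1)
r2 = np.corrcoef(std_conc, peak_area)[0, 1] ** 2         # linearity check

def quantify(area):
    """Back-calculate MDX concentration (ug/mL) from a sample's peak area."""
    return (area - intercept) / slope

print(f"r^2 = {r2:.4f}; sample at area 200 -> {quantify(200.0):.2f} ug/mL")
```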
Abstract:
This article describes the isolation and identification of flavonoids in the hydroethanolic extract of the aerial parts of Tonina fluviatilis and the evaluation of their antiradical activity. A method based on HPLC-DAD was developed and validated for detecting and quantifying flavonoids in hydroethanolic extracts. The flavonoids identified and quantified in the extract were 6,7-dimethoxyquercetin-3-O-β-D-glucopyranoside (1), 6-hydroxy-7-methoxyquercetin-3-O-β-D-glucopyranoside (2), and 6-methoxyquercetin-3-O-β-D-glucopyranoside (3). The developed method presented good validation parameters, showing that the results are consistent and reliable for the quantification of these constituents in the extracts. Compounds 2 and 3 showed strong antiradical activity when compared with the positive controls (quercetin and gallic acid).
Abstract:
Coronary artery disease (CAD) is a worldwide leading cause of death. The standard method for evaluating critical partial occlusions is coronary arteriography, a catheterization technique which is invasive, time consuming, and costly. There are noninvasive approaches for the early detection of CAD. The basis for the noninvasive diagnosis of CAD has been laid in a sequential analysis of the risk factors and the results of the treadmill test and myocardial perfusion scintigraphy (MPS). Many investigators have demonstrated that the diagnostic applications of MPS are appropriate for patients who have an intermediate likelihood of disease. Although this information is useful, it is only partially utilized in clinical practice due to the difficulty of properly classifying patients. Since the seminal work of Lotfi Zadeh, fuzzy logic has been applied in numerous areas. In the present study, we proposed and tested a model to select patients for MPS based on fuzzy set theory. A group of 1053 patients was used to develop the model and another group of 1045 patients was used to test it. Receiver operating characteristic curves were used to compare the performance of the fuzzy model against expert physician opinions, and showed that the performance of the fuzzy model was equal or superior to that of the physicians. Therefore, we conclude that the fuzzy model could be a useful tool to assist the general practitioner in the selection of patients for MPS.
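The abstract does not specify the fuzzy model itself. As a generic illustration of the fuzzy-set machinery such a patient-selection model rests on, here is a minimal sketch of a membership function for "intermediate likelihood of CAD"; the trapezoid breakpoints are invented, not the paper's calibrated values.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function, a common fuzzy-set building block."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (d - x) / (d - c + 1e-9)), 0.0, 1.0)

def intermediate_likelihood(pretest_prob):
    """Membership in the fuzzy set 'intermediate likelihood of CAD';
    the breakpoints below are illustrative assumptions."""
    return trapezoid(pretest_prob, 0.05, 0.20, 0.80, 0.95)

# Patients with high membership would be candidates for MPS referral.
for p in (0.05, 0.20, 0.50, 0.80, 0.95):
    print(f"pre-test {p:.2f} -> membership {intermediate_likelihood(p):.2f}")
```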