12 results for fuzzy based evaluation method
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
A new concept for in vitro visual evaluation of the sun protection factor (SPF) of cosmetic formulations, based on a supramolecular ultraviolet (UV) dosimeter, was demonstrated. The method closely parallels the method validated for in vivo evaluation and relies on the determination of the slightest perceptible bleaching of an iron-complex dye/nanocrystalline titanium dioxide interface (UV dosimeter), in combination with an artificial skin substrate simulating actual human skin, in the presence and absence of a cosmetic formulation. The successful evaluation of SPF was ensured by the similarity of the erythema response of our dosimeter and of human skin to UV irradiation. A good linear correlation of in vitro and in vivo data up to SPF 40 confirmed the effectiveness of this simple, cheap, and fast method. In short, we present a convenient and accessible visual SPF evaluation method that can help improve the control of cosmetic products, contributing to the reduction of skin cancer, one of today's critical public health issues. (C) 2011 Wiley Periodicals, Inc. and the American Pharmacists Association. J Pharm Sci 101:726-732, 2012
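The in vitro/in vivo agreement reported above comes down to a linear correlation between paired SPF readings. As a minimal sketch, with hypothetical readings rather than the paper's data, the Pearson coefficient of such paired measurements can be computed as:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient between paired measurements.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired SPF readings (in vitro vs. in vivo) up to SPF 40.
in_vitro = [8, 15, 22, 30, 38]
in_vivo = [10, 16, 20, 32, 40]
print(round(pearson_r(in_vitro, in_vivo), 3))
```

A value close to 1 would correspond to the "good linear correlation" the abstract reports.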
Abstract:
Technical evaluation of analytical data is highly relevant, since such data are used for comparison with environmental quality standards and for decision-making in the management and disposal of dredged sediments and in the evaluation of salt and brackish water quality under CONAMA Resolution 357/05. It is therefore essential that the project manager discuss the environmental agency's technical requirements with the contracted laboratory, both to follow up the ongoing analyses and with a view to possible re-analysis when anomalous data are identified. The main technical requirements are: (1) method quantitation limits (QLs) should fall below environmental standards; (2) analyses should be carried out in laboratories whose analytical scope is accredited by the National Institute of Metrology (INMETRO) or qualified or accepted by a licensing agency; (3) a chain of custody should be provided to ensure sample traceability; (4) control charts should be provided to prove method performance; (5) certified reference material analysis or, if that is not available, matrix spike analysis should be undertaken; and (6) chromatograms should be included in the analytical report. Within this context, and with a view to helping environmental managers evaluate analytical reports, this work aims to discuss the limitations of applying SW-846 US EPA methods to marine samples and the consequences of reporting data based on method detection limits (MDLs) rather than sample quantitation limits (SQLs), and to present possible modifications of the principal method applied by laboratories in order to comply with environmental quality standards.
Abstract:
Abstract Background Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, creating a need to replace traditional passive learning methodologies with an active, multisensory, experiential learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as a means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing Pathology undergraduate students. Methods Students were randomized to one of the learning methods, and the data analyst was blinded to which method each student had received. Students' prior knowledge (i.e., before undergoing the learning method), short-term knowledge retention, and long-term knowledge retention (i.e., six months after undergoing the learning method) were assessed with a multiple-choice questionnaire. Students' performance was compared across the three assessment moments, both for the mean total score and for separate mean scores on the Anatomy questions and the Physiology questions. Results Students who received the game-based method performed better in the post-test assessment only on the Anatomy questions section. Students who received the traditional lecture performed better in both the post-test and the long-term post-test on both the Anatomy and Physiology questions. Conclusions The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective at improving students' short- and long-term knowledge retention.
Abstract:
Traditional abduction imposes as a precondition the restriction that the background information may not derive the goal data. In first-order logic such a precondition is, in general, undecidable. To avoid this problem, we present a first-order cut-based abduction method that has KE-tableaux as its underlying inference system. This inference system allows for the automation of non-analytic proofs in a tableau setting, which permits a generalization of traditional abduction that avoids the undecidable-precondition problem. After demonstrating the correctness of the method, we show how it can be dynamically iterated in a process that leads to the construction of non-analytic first-order proofs and, in some terminating cases, to refutations as well.
Abstract:
The realization that statistical physics methods can be applied to analyze written texts represented as complex networks has led to several developments in natural language processing, including automatic summarization and evaluation of machine translation. Importantly, only a few metrics of complex networks have been used so far, and therefore there is ample opportunity to enhance statistics-based methods as new measures of network topology and dynamics are created. In this paper, we employ for the first time the metrics betweenness, vulnerability, and diversity to analyze written texts in Brazilian Portuguese. Using strategies based on diversity metrics, better performance in automatic summarization is achieved in comparison with previous work employing complex networks. With an optimized method, the ROUGE score (an automatic evaluation measure used in summarization) was 0.5089, the best value ever achieved for an extractive summarizer with statistical methods based on complex networks for Brazilian Portuguese. Furthermore, the diversity metric can detect keywords with high precision, which is why we believe it is suitable for producing good summaries. It is also shown that incorporating linguistic knowledge through a syntactic parser does enhance the performance of the automatic summarizers, as expected, but the increase in the ROUGE score is only minor. These results reinforce the suitability of complex network methods for improving automatic summarizers in particular, and for treating text in general. (C) 2011 Elsevier B.V. All rights reserved.
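The network-based extractive strategy can be illustrated with a toy stand-in: model the text as a sentence-similarity graph and score each sentence by its connectivity. Plain word overlap and degree-style scores are used here in place of the diversity metric studied in the paper:

```python
def rank_sentences(sentences):
    # Rank sentences by their total word overlap with all other sentences,
    # i.e., node degree (edge-weight sum) in a sentence-similarity network.
    toks = [set(s.lower().split()) for s in sentences]
    n = len(sentences)
    scores = [sum(len(toks[i] & toks[j]) for j in range(n) if j != i)
              for i in range(n)]
    return sorted(range(n), key=lambda i: -scores[i])

sents = ["complex networks model texts",
         "networks capture word statistics",
         "the weather is nice today"]
print(rank_sentences(sents))  # off-topic sentence ranks last
```

An extractive summary would then keep the top-ranked sentences in their original document order.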
Abstract:
Hierarchical multi-label classification is a complex classification task in which the classes involved are hierarchically structured and each example may simultaneously belong to more than one class at each hierarchical level. In this paper, we extend our previous works, in which we investigated a new local-based classification method that incrementally trains a multi-layer perceptron for each level of the classification hierarchy. Predictions made by a neural network at a given level are used as inputs to the neural network responsible for the prediction at the next level. We compare the proposed method with a state-of-the-art global decision-tree induction method and two local decision-tree induction methods, using several hierarchical multi-label classification datasets. We perform a thorough experimental analysis, showing that our method obtains results competitive with a robust global method regarding both precision and recall evaluation measures.
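The cascade described above, where level-i predictions are fed as extra inputs to the level-(i+1) model, can be sketched as follows. The per-level models here are hypothetical fixed threshold rules standing in for the incrementally trained multi-layer perceptrons:

```python
def make_level_model(weights, threshold=0.5):
    # Hypothetical stand-in for a trained per-level model: a fixed
    # linear rule emitting a one-class membership vector.
    def predict(features):
        score = sum(w * f for w, f in zip(weights, features))
        return [1.0 if score > threshold else 0.0]
    return predict

def cascade_predict(models, features):
    # Local-based cascade: each level's prediction vector is appended
    # to the input of the next level's model.
    x = list(features)
    outputs = []
    for model in models:
        y = model(x)
        outputs.append(y)
        x = x + y  # feed level-i predictions into level i+1
    return outputs

models = [make_level_model([0.6, 0.4]),
          make_level_model([0.2, 0.3, 0.8])]  # sees 2 features + 1 prediction
print(cascade_predict(models, [1.0, 0.5]))  # → [[1.0], [1.0]]
```

Note how the second model's weight vector has one extra entry to receive the first level's output, which is what makes the hierarchy constraint explicit in the architecture.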
Abstract:
Background: Great efforts have been made to increase the accessibility of HIV antiretroviral therapy (ART) in low- and middle-income countries. The threat of wide-scale emergence of drug resistance could severely hamper ART scale-up efforts. Population-based surveillance of transmitted HIV drug resistance ensures the use of appropriate first-line regimens to maximize the efficacy of ART programs where drug options are limited. However, traditional HIV genotyping is extremely expensive, posing a cost barrier to wide-scale and frequent HIV drug resistance surveillance. Methods/Results: We have developed a low-cost, laboratory-scale, next-generation sequencing-based genotyping method to monitor drug resistance. We designed primers specifically to amplify protease and reverse transcriptase from Brazilian HIV subtypes and developed a multiplexing scheme using multiplex identifier tags to minimize cost while providing more robust data than traditional genotyping techniques. Using this approach, we characterized drug resistance from the plasma of 81 HIV-infected individuals collected in São Paulo, Brazil. We describe the complexities of analyzing next-generation sequencing data and present a simplified open-source workflow for analyzing drug resistance data. From these data, we identified drug resistance mutations in 20% of treatment-naive individuals in our cohort, which is similar to frequencies identified using traditional genotyping in Brazilian patient samples. Conclusion: The ultra-wide sequencing approach described here allows multiplexing of at least 48 patient samples per sequencing run, 4 times more than the current genotyping method. This method is also 4-fold more sensitive (5% vs. 20% minimal detection frequency) at a cost 3-5x lower than the traditional Sanger-based genotyping method. Lastly, by using a benchtop next-generation sequencer (Roche/454 GS Junior), this approach can be more easily implemented in low-resource settings.
These data provide proof of concept that next-generation HIV drug resistance genotyping is a feasible and low-cost alternative to current genotyping methods and may be particularly beneficial for in-country surveillance of transmitted drug resistance.
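The sensitivity gain comes from counting individual reads: a resistance mutation is called when its per-position frequency clears a detection threshold. A minimal sketch, with hypothetical read counts and the 5% vs. 20% thresholds mentioned above:

```python
def variant_frequencies(read_counts):
    # Per-base frequencies at one position, from raw read counts.
    total = sum(read_counts.values())
    return {base: count / total for base, count in read_counts.items()}

def call_resistant(read_counts, resistant_base, threshold=0.05):
    # A mutation is called when its frequency reaches the detection threshold.
    return variant_frequencies(read_counts).get(resistant_base, 0.0) >= threshold

reads = {"A": 930, "T": 70}  # hypothetical: 7% minority variant at a position
print(call_resistant(reads, "T"))        # detected at the 5% NGS threshold
print(call_resistant(reads, "T", 0.20))  # missed at a Sanger-like 20% cutoff
```

This is why deep sequencing can flag minority resistant variants that population-level Sanger genotyping averages away.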
Abstract:
The Strategic Environmental Assessment (SEA) of the sugar and alcohol sector guides territorial and sectoral planning that benefits most of the local society and supports this economic activity in all its stages. Accordingly, the present work aims to determine an index aggregating the indicators generated in the baseline of the SEA process, called the Index of Sustainability of Expansion of the Sugar and Alcohol Sector (IScana). To this end, the indicators of each city were normalized using fuzzy logic, and their weights were assigned using the Analytic Hierarchy Process (AHP). The IScana values were then spatialized over the 'Grande Dourados' region of Mato Grosso do Sul State. The northern portion concentrated the highest IScana values, 0.48 and 0.55, for the cities of Nova Alvorada do Sul and Rio Brilhante, while in the central portion the city of Dourados presented the lowest value, 0.10. The selected set of indicators forming the IScana, and their relative importance, proved satisfactory for the application of the fuzzy logic and AHP techniques. The IScana index provides objective information for the diagnosis of the region in the application of SEA.
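A hedged sketch of this aggregation scheme: min-max (fuzzy membership) normalization of each indicator across cities, AHP weights approximated by the common geometric-mean shortcut on a pairwise comparison matrix, then a weighted sum per city. All indicator values and the comparison matrix below are hypothetical, not the study's data:

```python
from math import prod

def fuzzy_normalize(values):
    # Map each value to [0, 1] relative to the min/max across cities.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def ahp_weights(pairwise):
    # Geometric-mean approximation of the principal eigenvector of the
    # AHP pairwise comparison matrix.
    gmeans = [prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

indicators = {  # hypothetical raw indicator values per city
    "City A": [12.0, 0.8, 300.0],
    "City B": [30.0, 0.4, 150.0],
    "City C": [21.0, 0.6, 225.0],
}
pairwise = [[1, 3, 5],
            [1/3, 1, 2],
            [1/5, 1/2, 1]]  # relative importance of the three indicators

weights = ahp_weights(pairwise)
norm = [fuzzy_normalize(col) for col in zip(*indicators.values())]
index = {city: sum(w * norm[i][j] for i, w in enumerate(weights))
         for j, city in enumerate(indicators)}
print(max(index, key=index.get))
```

The resulting per-city scores play the role the IScana values play in the study; spatializing them is then a mapping step over the region's municipalities.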
Abstract:
This paper presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the image foresting transform with a never-exploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation with other popular methods such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points), as compared to live wire for objects with complex shapes. This paper also includes a discussion about how to combine different methods in order to take advantage of their complementary strengths.
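The riverbed connectivity function itself is not reproduced here; the sketch below instead uses the classic fmax (maximum arc weight) connectivity on a 4-connected grid, a standard starting point for image foresting transform methods, to show how an optimum path between two user anchor points is computed with a Dijkstra-style propagation:

```python
import heapq

def min_max_path_cost(weights, src, dst):
    # Optimum-path cost from src to dst on a 4-connected grid, where a
    # path's cost is the maximum arc weight along it (fmax connectivity).
    rows, cols = len(weights), len(weights[0])
    cost = {src: 0}
    heap = [(0, src)]
    while heap:
        c, (r, col) = heapq.heappop(heap)
        if (r, col) == dst:
            return c
        if c > cost.get((r, col), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, ncol = r + dr, col + dc
            if 0 <= nr < rows and 0 <= ncol < cols:
                new_cost = max(c, weights[nr][ncol])
                if new_cost < cost.get((nr, ncol), float("inf")):
                    cost[(nr, ncol)] = new_cost
                    heapq.heappush(heap, (new_cost, (nr, ncol)))
    return float("inf")

grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]  # hypothetical gradient magnitudes (9 = strong edge)
print(min_max_path_cost(grid, (0, 0), (0, 2)))  # → 1, detouring around the 9s
```

In a boundary-tracking tool, the low-cost path between consecutive anchor points snaps to the object contour, which is what lets riverbed and live wire get by with few user interactions.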
Abstract:
Lipid peroxidation (LPO) has been associated with periodontal disease, and the evaluation of malondialdehyde (MDA) in the gingival crevicular fluid (GCF), an inflammatory exudate from the surrounding tissue of the periodontium, may be useful to clarify the role of LPO in the pathogenesis of periodontal disease. We describe the validation of a method to measure MDA in the GCF using high-performance liquid chromatography. MDA calibration curves were prepared with phosphate-buffered solution spiked with increasing known concentrations of MDA. Healthy and diseased GCF samples were collected from the same patient to avoid interindividual variability. The MDA response was linear over the range measured, and excellent agreement was observed between added and detected concentrations of MDA. Intra- and inter-day coefficients of variation were below 6.3% and 12.4%, respectively. The limit of quantitation (signal/noise = 5) was 0.03 µM. When the validated method was applied to the GCF, excellent agreement was observed in MDA quantitation from healthy and diseased sites, and diseased sites presented more MDA than healthy sites (P < 0.05). In this study, a validated method for MDA quantitation in GCF was established with satisfactory sensitivity, precision, and accuracy. (C) 2012 Elsevier Inc. All rights reserved.
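The calibration step described above amounts to an ordinary least-squares fit of detector response against spiked concentration, after which an unknown sample's concentration is back-calculated from its response. A sketch with hypothetical numbers, not the study's data:

```python
def fit_line(conc, resp):
    # Ordinary least-squares slope and intercept for a calibration curve.
    n = len(conc)
    mx, my = sum(conc) / n, sum(resp) / n
    slope = (sum(x * y for x, y in zip(conc, resp)) - n * mx * my) / \
            (sum(x * x for x in conc) - n * mx * mx)
    return slope, my - slope * mx

conc = [0.05, 0.1, 0.2, 0.4, 0.8]        # spiked MDA, micromolar (hypothetical)
resp = [10.2, 20.1, 40.5, 79.8, 160.3]   # detector response (hypothetical)

slope, intercept = fit_line(conc, resp)
unknown = (60.0 - intercept) / slope     # back-calculate an unknown sample
print(round(unknown, 3))
```

Comparing such back-calculated values against the spiked (added) concentrations is what yields the "agreement between added and detected concentrations" reported in the validation.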
Abstract:
Abstract Background Spotted cDNA microarrays generally employ co-hybridization of fluorescently labeled RNA targets to produce gene expression ratios for subsequent analysis. Direct comparison of two RNA samples in the same microarray provides the highest level of accuracy; however, due to the number of combinatorial pair-wise comparisons, the direct method is impractical for studies including a large number of individual samples (e.g., tumor classification studies). For such studies, indirect comparisons using a common reference standard have been the preferred method. Here we evaluated the precision and accuracy of reconstructed ratios from three indirect methods relative to ratios obtained from direct hybridizations, herein considered the gold standard. Results We performed hybridizations using a fixed amount of Cy3-labeled reference oligonucleotide (RefOligo) against distinct Cy5-labeled targets from prostate, breast, and kidney tumor samples. Reconstructed ratios between all tissue pairs were derived from ratios between each tissue sample and RefOligo. Reconstructed ratios were compared to (i) ratios obtained in parallel from direct pair-wise hybridizations of tissue samples, and to (ii) reconstructed ratios derived from hybridization of each tissue against a reference RNA pool (RefPool). To evaluate the effect of the external references, reconstructed ratios were also calculated directly from intensity values of single-channel (One-Color) measurements derived from tissue sample data collected in the RefOligo experiments. We show that the average coefficients of variation of ratios between intra- and inter-slide replicates derived from RefOligo, RefPool, and One-Color were similar and 2- to 4-fold higher than those of ratios obtained in direct hybridizations. Correlation coefficients calculated for all three tissue comparisons were also similar.
In addition, the performance of all indirect methods, in terms of their robustness in identifying genes deemed differentially expressed based on direct hybridizations, as well as their false-positive and false-negative rates, was found to be comparable. Conclusion RefOligo produces ratios as precise and accurate as ratios reconstructed from an RNA pool, thus representing a reliable alternative in reference-based hybridization experiments. In addition, One-Color measurements alone can reconstruct expression ratios without loss of precision or accuracy. We conclude that both methods are adequate options in large-scale projects, where the amount of a common reference RNA pool is usually restrictive.
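The indirect design rests on a simple identity: the ratio between two tissues is recovered from each tissue's ratio against the common reference, since (A/Ref) / (B/Ref) = A/B. A minimal sketch with hypothetical intensities:

```python
import math

def log2_ratio(a, b):
    # Expression ratios are conventionally compared on a log2 scale.
    return math.log2(a / b)

ref = 500.0                        # common reference intensity (hypothetical)
tissue_a, tissue_b = 1200.0, 300.0 # two tissue intensities (hypothetical)

direct = log2_ratio(tissue_a, tissue_b)                       # direct hybridization
reconstructed = log2_ratio(tissue_a, ref) - log2_ratio(tissue_b, ref)
print(round(direct, 3), round(reconstructed, 3))  # → 2.0 2.0
```

In practice the two estimates differ by measurement noise rather than by construction, which is why the study compares their coefficients of variation instead of their point values.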
Abstract:
Background Genotyping of hepatitis C virus (HCV) has become an essential tool for prognosis and for predicting treatment duration. The aim of this study was to compare two HCV genotyping methods: the reverse hybridization line probe assay (LiPA v.1) and partial sequencing of the NS5B region. Methods Plasma samples from 171 patients with chronic hepatitis C were screened using both a commercial method (LiPA HCV Versant, Siemens, Tarrytown, NY, USA) and different primers targeting the NS5B region for PCR amplification and sequencing analysis. Results Comparison of the HCV genotyping methods showed no difference in classification at the genotype level. However, a total of 82/171 samples (47.9%), including misclassified, non-subtypable, discrepant, and inconclusive results, were not classified by LiPA at the subtype level but could be discriminated by NS5B sequencing. Of these samples, 34 samples of genotype 1a and 6 samples of genotype 1b were classified at the subtype level using sequencing of NS5B. Conclusions Sequence analysis of NS5B for genotyping HCV provides precise genotype and subtype identification and an accurate epidemiological representation of circulating viral strains.