871 results for "Automatic adjustment"
Abstract:
This paper presents a Computer Aided Diagnosis (CAD) system that automatically classifies microcalcifications detected on digital mammograms into one of the five types proposed by Michele Le Gal, a classification scheme that allows radiologists to determine whether a breast tumor is malignant without the need for surgery. The developed system uses a combination of wavelets and Artificial Neural Networks (ANN) and runs on an Altera DE2-115 Development Kit, which contains a Field-Programmable Gate Array (FPGA) that allows the system to be smaller, cheaper and more energy-efficient. Results show that the system correctly classified 96.67% of the test samples, so it can serve as a second opinion for radiologists in the early diagnosis of breast cancer. (C) 2013 The Authors. Published by Elsevier B.V.
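A minimal software sketch of a wavelet-plus-ANN pipeline of the kind described above, assuming hypothetical 64x64 grayscale patches and random labels; the paper's FPGA implementation, wavelet family and network topology are not reproduced here.

```python
# Sketch only: wavelet sub-band energies as features, fed to a small MLP.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(patch, wavelet="db4", level=2):
    """Energy of each wavelet sub-band as a compact feature vector."""
    coeffs = pywt.wavedec2(patch, wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]            # approximation energy
    for detail in coeffs[1:]:                    # (cH, cV, cD) per level
        feats.extend(np.mean(d ** 2) for d in detail)
    return np.array(feats)

# Hypothetical training data: patches and their Le Gal types (1..5).
rng = np.random.default_rng(0)
patches = rng.random((60, 64, 64))
labels = rng.integers(1, 6, size=60)

X = np.array([wavelet_features(p) for p in patches])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, labels)
print(clf.predict(X[:5]))
```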
Abstract:
Image categorization by means of bags of visual words has received increasing attention from the image processing and computer vision communities in recent years. In these approaches, each image is represented by its invariant points of interest, which are mapped to a Hilbert space representing a visual dictionary that aims to comprise the most discriminative features in a set of images. Nevertheless, the main problem of such approaches is finding a compact and representative dictionary, and finding such a dictionary automatically, with no user intervention, is an even more difficult task. In this paper, we propose a method to find this dictionary automatically by employing a recently developed graph-based clustering algorithm called Optimum-Path Forest, which makes no assumption about the visual dictionary's size and is more efficient and effective than the state-of-the-art techniques used for dictionary generation.
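A sketch of the general bag-of-visual-words pipeline discussed above. The paper builds the dictionary with Optimum-Path Forest clustering, which is not available in scikit-learn, so MiniBatchKMeans is used here purely as a stand-in (and, unlike OPF, it requires the dictionary size k up front); the descriptors are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(1)
# Hypothetical local descriptors (e.g., SIFT-like 128-D vectors) per image.
descriptors_per_image = [rng.random((rng.integers(50, 80), 128)) for _ in range(10)]

# 1) Build the visual dictionary from all descriptors pooled together.
all_desc = np.vstack(descriptors_per_image)
k = 32                                    # OPF would infer this automatically
codebook = MiniBatchKMeans(n_clusters=k, random_state=1, n_init=3).fit(all_desc)

# 2) Represent each image as a normalized histogram of visual-word assignments.
def bovw_histogram(desc):
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

histograms = np.array([bovw_histogram(d) for d in descriptors_per_image])
print(histograms.shape)   # (10, 32)
```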
Abstract:
The Princeton WordNet (WN.Pr) lexical database has motivated efficient compilations of bulky relational lexicons since its inception in the 1980s. The EuroWordNet project, the first multilingual initiative built upon WN.Pr, opened up ways of building individual wordnets and interrelating them by means of the so-called Inter-Lingual-Index, an unstructured list of the WN.Pr synsets. Another important initiative, relying on a slightly different method of building multilingual wordnets, is the MultiWordNet project, whose key strategy is building language-specific wordnets that keep as much as possible of the semantic relations available in WN.Pr. This paper stresses that an additional advantage of using the WN.Pr lexical database as a resource for building wordnets for other languages is the possibility of implementing an automatic procedure to map WN.Pr conceptual relations such as hyponymy, co-hyponymy, troponymy, meronymy, cause, and entailment onto the lexical database of the wordnet under construction. This is viable because these are language-independent relations that hold between lexicalized concepts, not between lexical units. Accordingly, combining methods from both initiatives, this paper presents the ongoing implementation of the WN.Br lexical database and the aforementioned automation procedure, illustrated with a sample of the automatic encoding of the hyponymy and co-hyponymy relations.
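A small illustration, using NLTK's interface to Princeton WordNet, of the kind of language-independent relations (hyponymy, co-hyponymy) that the mapping procedure above transports onto the wordnet under construction. The WN.Br database itself is not used here, and the synset-to-synset transfer is only sketched with a hypothetical inter-lingual entry.

```python
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

synset = wn.synset('dog.n.01')

# Conceptual relations are read off the WN.Pr synsets...
hypernyms = synset.hypernyms()
hyponyms = synset.hyponyms()
co_hyponyms = [s for h in hypernyms for s in h.hyponyms() if s != synset]

print('hypernyms:', [s.name() for s in hypernyms])
print('some hyponyms:', [s.name() for s in hyponyms[:5]])
print('some co-hyponyms:', [s.name() for s in co_hyponyms[:5]])

# ...and, because they hold between concepts rather than lexical units, they
# could be re-attached to the equivalent synsets of another language's wordnet
# via an inter-lingual index (hypothetical mapping shown as a plain dict).
ili = {'dog.n.01': 'cachorro.n.01'}     # illustrative entry only
print(ili.get(synset.name()))
```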
Abstract:
This paper reports research that evaluates the potential and the effects of using annotated paraconsistent logic in automatic indexing. This logic attempts to deal with contradictions and is concerned with studying and developing inconsistency-tolerant systems of logic. Because it is flexible and contains logical states that go beyond the yes/no dichotomy, it permits the hypothesis that indexing results could be better than those obtained by traditional methods. Interactions between different disciplines, such as information retrieval, automatic indexing, information visualization, and non-classical logics, were considered in this research. From the methodological point of view, an algorithm for the treatment of uncertainty and imprecision, developed under paraconsistent logic, was used to modify the weights assigned to the indexing terms of the text collections. The tests were performed on an information visualization system named Projection Explorer (PEx), created at the Institute of Mathematics and Computer Science (ICMC - USP São Carlos), with source code available. PEx uses the traditional vector space model to represent the documents of a collection. The results were evaluated by criteria built into the information visualization system itself and demonstrated measurable gains in the quality of the displays, confirming the hypothesis that the para-analyser, under the conditions of the experiment, can generate more effective clusters of similar documents. This is noteworthy, since the constitution of more meaningful clusters can be used to enhance information indexing and retrieval. It can be argued that the adoption of non-dichotomous (non-exclusive) parameters provides new possibilities for relating similar information.
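A minimal sketch of how a two-valued para-analyser (annotated paraconsistent logic) could adjust indexing-term weights, in the spirit of the experiment above. The certainty and contradiction degrees follow the usual annotated-logic definitions; the evidence sources, the adjustment policy and all numbers are hypothetical and do not reproduce the paper's algorithm.

```python
def para_analyser(mu, lam):
    """Certainty and contradiction degrees for an evidence pair (mu, lam)."""
    gc = mu - lam            # certainty degree
    gct = mu + lam - 1.0     # contradiction degree
    return gc, gct

def adjust_weight(weight, mu, lam, alpha=0.5):
    """Boost or attenuate a term weight according to the certainty degree,
    ignoring highly contradictory evidence (a hypothetical policy)."""
    gc, gct = para_analyser(mu, lam)
    if abs(gct) > alpha:     # too contradictory: leave the weight unchanged
        return weight
    return weight * (1.0 + gc)

# Term weight 0.4, strong favourable evidence (0.9), weak unfavourable (0.2):
print(adjust_weight(0.4, mu=0.9, lam=0.2))
```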
Automatic location of control points in aerial images based on vertical ground scenes
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
In vitro production has been employed for bovine embryos, and the quantification of lipids is fundamental to understanding the metabolism of these embryos. This paper presents an unsupervised segmentation method for histological images of bovine embryos. In this method, an anisotropic filter was applied to the different RGB components. After the pre-processing step, a thresholding technique based on maximum entropy was applied to separate the lipid droplets in the histological slides at different stages: early cleavage, morula and blastocyst. In the post-processing step, false positives are removed using a connected-components technique that identifies regions with excess dye near the zona pellucida. The proposed segmentation method was applied to 30 histological images of bovine embryos. Experiments were performed with the images, and statistical measures of sensitivity, specificity and accuracy were calculated based on reference images (gold standard). The accuracy of the proposed method was 96%, with a standard deviation of 3%.
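A rough sketch of the three stages described above (per-channel smoothing, maximum-entropy thresholding, connected-component post-processing). Gaussian smoothing stands in for the anisotropic filter, Kapur's criterion is used for the maximum-entropy threshold, and the image and all parameter values are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def max_entropy_threshold(gray):
    """Kapur's maximum-entropy threshold for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

rgb = (np.random.default_rng(2).random((128, 128, 3)) * 255).astype(np.uint8)
gray = gaussian_filter(rgb[:, :, 0].astype(float), sigma=1.5).astype(np.uint8)
binary = gray > max_entropy_threshold(gray)

# Post-processing: keep only connected components above a minimum area.
labels, n = label(binary)
sizes = np.bincount(labels.ravel())
mask = np.isin(labels, np.where(sizes >= 20)[0][1:])  # drop background (label 0)
print(n, mask.sum())
```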
Abstract:
In this paper we present a classification system that uses a combination of texture features from stromal regions: Haralick features and Local Binary Patterns (LBP) in the wavelet domain. The system has five steps for classification of the tissues. First, the stromal regions were detected and extracted using segmentation techniques based on thresholding and the RGB colour space. Second, wavelet decomposition was applied to the extracted regions to obtain the wavelet coefficients. Third, the Haralick and LBP features were extracted from the coefficients. Fourth, relevant features were selected using the ANOVA statistical method. The classification (fifth step) was performed with Radial Basis Function (RBF) networks. The system was tested on 105 prostate images, which were divided into three groups of 35 images: normal, hyperplastic and cancerous. The system performance was evaluated using the area under the ROC curve and resulted in 0.98 for normal versus cancer, 0.95 for hyperplasia versus cancer and 0.96 for normal versus hyperplasia. Our results suggest that texture features can be used as discriminators for stromal tissue in prostate images. Furthermore, the system was effective at classifying prostate images, especially the hyperplastic class, which is the most difficult type in diagnosis and prognosis.
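A compact sketch of the five-step pipeline above, under stated substitutions: GLCM properties stand in for the full Haralick feature set, and an RBF-kernel SVM replaces the RBF network (scikit-learn has no RBF network). The regions, labels and parameter values are hypothetical.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

def texture_features(region):
    cA, (cH, cV, cD) = pywt.dwt2(region, 'haar')           # wavelet domain
    sub = np.clip(cA, 0, 255).astype(np.uint8)
    glcm = graycomatrix(sub, distances=[1], angles=[0], levels=256, normed=True)
    haralick_like = [graycoprops(glcm, p)[0, 0]
                     for p in ('contrast', 'homogeneity', 'energy', 'correlation')]
    lbp = local_binary_pattern(sub, P=8, R=1, method='uniform')
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([haralick_like, lbp_hist])

rng = np.random.default_rng(3)
regions = (rng.random((30, 64, 64)) * 255).astype(np.uint8)   # stromal regions
labels = rng.integers(0, 3, size=30)              # normal / hyperplasia / cancer

X = np.array([texture_features(r) for r in regions])
X = SelectKBest(f_classif, k=8).fit_transform(X, labels)      # ANOVA selection
clf = SVC(kernel='rbf', probability=True).fit(X, labels)
print(clf.score(X, labels))
```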
Abstract:
One approach to verifying the adequacy of methods for estimating reference evapotranspiration is comparison with the Penman-Monteith method, recommended by the Food and Agriculture Organization of the United Nations (FAO) as the standard method for estimating ET0. This study aimed to compare the Makkink (MK), Hargreaves (HG) and Solar Radiation (RS) methods for estimating ET0 with Penman-Monteith (PM). For this purpose, we used daily data of global solar radiation, air temperature, relative humidity and wind speed for the year 2010, obtained from the automatic meteorological station of the National Institute of Meteorology (latitude 18° 91' 66 S, longitude 48° 25' 05 W, altitude 869 m), located on the campus of the Federal University of Uberlândia, MG, Brazil. Results for the period were analysed on a daily basis using regression analysis, considering the linear model y = ax, where the dependent variable was the Penman-Monteith estimate and the independent variable was the ET0 estimate produced by each evaluated method. A methodology was used to check the influence of the standard deviation of daily ET0 on the comparison of methods. The evaluation indicated that the Solar Radiation and Penman-Monteith methods cannot be compared, while the Hargreaves method provided the best adjustment for estimating ET0.
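A small sketch of the comparison step described above: fitting the no-intercept model y = ax, with Penman-Monteith ET0 as the dependent variable and each evaluated method as the independent variable. The daily values below are hypothetical, not the 2010 station data used in the study.

```python
import numpy as np

def fit_through_origin(x, y):
    """Least-squares slope for y = a*x and the corresponding R^2."""
    a = np.sum(x * y) / np.sum(x * x)
    residuals = y - a * x
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)
    return a, r2

rng = np.random.default_rng(4)
et0_hargreaves = rng.uniform(2.0, 6.0, size=365)             # mm/day (hypothetical)
et0_penman = 0.95 * et0_hargreaves + rng.normal(0, 0.3, 365)

a, r2 = fit_through_origin(et0_hargreaves, et0_penman)
print(f"slope a = {a:.3f}, R^2 = {r2:.3f}")   # a near 1 indicates good adjustment
```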
Automatic method to classify images based on multiscale fractal descriptors and paraconsistent logic
Abstract:
This study presents an automatic method to classify images using fractal descriptors, such as multiscale fractal dimension and lacunarity, as decision rules. The proposed methodology was divided into three steps: quantification of the regions of interest with fractal dimension and lacunarity, computed under a multiscale approach; definition of reference patterns, which are the limits of each studied group; and classification of each group, combining the reference patterns with signal maximization (an approach commonly used in paraconsistent logic). The proposed method was used to classify histological prostate images, aiming at the diagnosis of prostate cancer. The accuracy levels were significant, surpassing those obtained with Support Vector Machine (SVM) and Best-first Decision Tree (BFTree) classifiers. The proposed approach makes it possible to recognize and classify patterns, offering the advantage of giving comprehensive results to the specialists.
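A simplified sketch of the descriptors used in the first step above: box-counting fractal dimension and gliding-box lacunarity of a binary region of interest. The multiscale formulation and the paraconsistent decision step of the paper are not reproduced, and the test image is hypothetical.

```python
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = binary.shape
        trimmed = binary[:h - h % s, :w - w % s]
        th, tw = trimmed.shape
        blocks = trimmed.reshape(th // s, s, tw // s, s)
        counts.append((blocks.sum(axis=(1, 3)) > 0).sum())
    # slope of log(count) vs log(1/size) estimates the fractal dimension
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

def lacunarity(binary, box=8):
    h, w = binary.shape
    masses = [binary[i:i + box, j:j + box].sum()
              for i in range(0, h - box + 1, box)
              for j in range(0, w - box + 1, box)]
    masses = np.array(masses, dtype=float)
    return masses.var() / masses.mean() ** 2 + 1.0   # E[M^2] / E[M]^2

roi = np.random.default_rng(5).random((128, 128)) > 0.7
print(box_counting_dimension(roi), lacunarity(roi))
```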
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Most consumers consider the fat of chicken meat undesirable for a healthy diet, due to its high levels of saturated fatty acids and cholesterol. The purpose of this experiment was to investigate the influence of changes in dietary metabolizable energy level, associated with a proportional variation in nutrient density, on broiler chicken performance and on the lipid composition of the meat. Male and female Cobb 500 broilers were evaluated separately. Performance evaluation followed a completely randomized design with a 6x3 factorial arrangement: six energy levels (2,800, 2,900, 3,000, 3,100, 3,200 and 3,300 kcal/kg) and three slaughter ages (42, 49 and 56 days). Response surface methodology was used to establish a mathematical model explaining live weight, feed intake and feed conversion. Total lipids and cholesterol were determined in skinned breast meat and in thigh meat, with and without skin. For the lipid composition analysis, a 3x3x2 factorial arrangement in a completely randomized design was used: three metabolizable energy levels of the ration (2,800, 3,000 and 3,300 kcal/kg), three slaughter ages (42, 49 and 56 days) and two sexes. Reducing the metabolizable energy of the diet down to about 3,000 kcal/kg did not affect live weight, but below this value live weight decreased. Feed intake was lower when the dietary energy level was higher. Feed conversion improved in direct proportion to the increase in the energy level of the diet. The performance of all birds was within the range considered appropriate for the lineage. Breast meat had less total lipids and cholesterol than thigh meat. Thigh with skin had more than double the total lipids of skinned thigh, but the cholesterol content did not differ with the removal of the skin, suggesting that cholesterol content is not associated with subcutaneous fat. Intramuscular fat content was lower in the meat from birds fed diets with a lower energy level. These results may help to define the most appropriate nutritional management. Despite the decrease in the birds' productive performance, restricting the energy in broiler chicken feed may be a viable alternative if consumers are willing to pay more for meat with less fat.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The American Recovery and Reinvestment Act (ARRA) of 2009 re-authorized and modified the Trade Adjustment Assistance (TAA) for Farmers program. The statute authorizes an appropriation of not more than $90 million per year for the next three fiscal years. The TAA for Farmers program helps producers of raw agricultural commodities (farmers, ranchers or fishermen) who have experienced significant declines in price or production adjust to the changing economic environment brought on by import competition. The program provides benefits to eligible producers in the form of educational assistance, as well as up to $12,000 per producer in cash benefits, to help create and implement business adjustment plans.
Abstract:
Background: This paper addresses the prediction of the free energy of binding of a drug candidate to the enzyme InhA, associated with Mycobacterium tuberculosis. This problem arises in rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that can be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design-related applications, especially considering that decision trees are simple to understand, interpret, and validate. Several decision-tree induction algorithms are available for general use, but each has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision-tree accuracy, comprehensibility, and biological relevance. Results: The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide a biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. Conclusions: We conclude that automatically designing a decision-tree algorithm tailored to molecular docking data is a promising alternative for predicting the free energy of binding of a drug candidate to a flexible receptor.
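Not the hyper-heuristic method proposed above, only a baseline analogue: a single scikit-learn decision tree (CART, roughly comparable to C4.5) trained on hypothetical docking descriptors to predict a discretized free-energy-of-binding class, illustrating the kind of interpretable rules discussed. The feature names and data are invented for the sketch.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(6)
# Hypothetical per-conformation features: distances from the candidate to
# selected InhA residues (the real descriptor set is not reproduced here).
X = rng.uniform(2.0, 12.0, size=(200, 4))
feb = -1.5 * X[:, 0] + 0.4 * X[:, 2] + rng.normal(0, 1.0, 200)
y = np.where(feb < np.median(feb), 'favourable', 'unfavourable')

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f'dist_residue_{i}' for i in range(4)]))
```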
Abstract:
The attributes describing a data set can often be arranged in meaningful subsets, each of which corresponds to a different aspect of the data. An unsupervised algorithm (SCAD) that simultaneously performs fuzzy clustering and aspect weighting was proposed in the literature. However, SCAD may fail and halt under certain conditions. To fix this problem, its steps are modified and then reordered to reduce the number of parameters that the user must set. In this paper we prove that each step of the resulting algorithm, named ASCAD, globally minimizes its cost function with respect to the argument being optimized. The asymptotic analysis of ASCAD leads to a time complexity that is the same as that of fuzzy c-means. A hard version of the algorithm and a novel validity criterion that considers aspect weights in order to estimate the number of clusters are also described. The proposed method is assessed over several artificial and real data sets.
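A minimal fuzzy c-means sketch, included only because it is the base algorithm whose complexity the abstract compares against; ASCAD's aspect-weight updates and its modified step ordering are not reproduced here. The data, the number of clusters c and the fuzzifier m are hypothetical.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)               # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]            # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))                      # membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

X = np.vstack([np.random.default_rng(7).normal(mu, 0.3, size=(50, 4))
               for mu in (0.0, 2.0, 4.0)])
U, centers = fuzzy_c_means(X)
print(centers.round(2))
```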