891 results for Computer Imaging, Vision, Pattern Recognition and Graphics
Abstract:
The design of translation-invariant and locally defined binary image operators over large windows is made difficult by decreased statistical precision and increased training time. We present a complete framework for the application of stacked design, a recently proposed technique to create two-stage operators that circumvents that difficulty. We propose a novel algorithm, based on information theory, to find groups of pixels that should be used together to predict the output value. We employ this algorithm to automate the process of creating a set of first-level operators that are later combined into a global operator. We also propose a principled way to guide this combination, using feature selection and model comparison. Experimental results show that the proposed framework leads to better results than single-stage design. (C) 2009 Elsevier B.V. All rights reserved.
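The information-theoretic grouping step can be illustrated with a minimal sketch (not the authors' actual algorithm): rank candidate pixels by their empirical mutual information with the output value, so that informative pixels are kept for a first-level operator while uninformative ones are discarded. All data below are synthetic.

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two binary arrays."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((x == a) & (y == b))
            p_a = np.mean(x == a)
            p_b = np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)                 # output value at the window centre
informative = y ^ (rng.random(1000) < 0.1)   # pixel correlated with the output
noise = rng.integers(0, 2, 1000)             # uninformative pixel

# Pixels with high mutual information with the output would be grouped together
assert mutual_information(informative, y) > mutual_information(noise, y)
```

In a full design procedure, the ranking would be computed over all window pixels (and pixel groups), with the top-scoring groups assigned to first-level operators.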
Abstract:
The design of binary morphological operators that are translation-invariant and locally defined by a finite neighborhood window corresponds to the problem of designing Boolean functions. As in any supervised classification problem, morphological operators designed from a training sample also suffer from overfitting. Large neighborhoods tend to degrade the performance of the designed operator. This work proposes a multilevel design approach to deal with the issue of designing large neighborhood-based operators. The main idea, inspired by stacked generalization (a multilevel classifier design approach), consists of combining, at each training level, the outcomes of the previous-level operators. The final operator is a multilevel operator that ultimately depends on a larger neighborhood than that of the individual operators that have been combined. Experimental results show that two-level operators obtained by combining operators designed on subwindows of a large window consistently outperform single-level operators designed on the full window. They also show that iterating two-level operators is an effective multilevel approach to obtain better results.
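The two-level structure can be sketched under toy assumptions (a 3x3 binary window whose target operator is the window majority; this is an invented example, not the paper's experimental setup): first-level operators are designed on disjoint 3-pixel subwindows, and a second level combines their outputs, yielding an operator that effectively depends on the full window.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: each sample is a 9-pixel (3x3) binary window; the target is the
# majority value of the window (a simple translation-invariant operator).
X = rng.integers(0, 2, (2000, 9))
y = (X.sum(axis=1) >= 5).astype(int)

# First level: one operator per 3-pixel subwindow (majority of the subwindow)
subwindows = [slice(0, 3), slice(3, 6), slice(6, 9)]
level1 = np.column_stack(
    [(X[:, s].sum(axis=1) >= 2).astype(int) for s in subwindows]
)

# Second level: combine the first-level outputs (majority vote again); the
# composed operator depends on all 9 pixels although each first-level
# operator sees only 3.
y_hat = (level1.sum(axis=1) >= 2).astype(int)

accuracy = (y_hat == y).mean()
assert accuracy > 0.8  # the stacked operator recovers most of the target
```

In the actual framework, both levels are learned from training data rather than fixed to majority votes; the sketch only shows how the two-level composition enlarges the effective neighborhood.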
Abstract:
Spodoptera frugiperda beta-1,3-glucanase (SLam) was purified from larval midgut. It has a molecular mass of 37.5 kDa, an alkaline optimum pH of 9.0, is active against beta-1,3-glucan (laminarin), but cannot hydrolyze yeast beta-1,3-1,6-glucan or other polysaccharides. The enzyme is an endoglucanase with low processivity (0.4), and is not inhibited by high concentrations of substrate. In contrast to other digestive beta-1,3-glucanases from insects, SLam is unable to lyse Saccharomyces cerevisiae cells. The cDNA encoding SLam was cloned and sequenced, showing that the protein belongs to glycosyl hydrolase family 16, as do other insect glucanases and glucan-binding proteins. Multiple sequence alignment of beta-1,3-glucanases and beta-glucan-binding proteins supports the assumption that the beta-1,3-glucanase gene duplicated in the ancestor of mollusks and arthropods. One copy gave rise to the derived beta-1,3-glucanases through the loss of an extended N-terminal region, and to the beta-glucan-binding proteins through the loss of the catalytic residues. SLam homology modeling suggests that E228 may affect the ionization of the catalytic residues, thus displacing the enzyme pH optimum. SLam antiserum reacts with a single protein in the insect midgut. Immunocytolocalization shows that the enzyme is present in secretory vesicles and the glycocalyx of columnar cells. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
A low-cost method is proposed to classify wine and whisky samples using a disposable voltammetric electronic tongue, fabricated from gold and copper substrates, together with a pattern recognition technique (Principal Component Analysis). The proposed device was successfully used to discriminate between expensive and cheap whisky samples and to detect adulteration using only a copper electrode. For wines, the electronic tongue was composed of copper and gold working electrodes and was able to classify three different brands of wine and to distinguish wine types, i.e., dry red, soft red, dry white, and soft white. Crown Copyright (C) 2011 Published by Elsevier B.V. All rights reserved.
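A minimal sketch of how Principal Component Analysis separates such sensor responses, using synthetic data in place of real voltammograms (the class offset, noise level, and array sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical voltammograms: 20 samples per class, 50 current readings each.
# The two "brands" differ by a systematic offset over part of the curve.
brand_a = rng.normal(0.0, 0.1, (20, 50))
brand_b = rng.normal(0.0, 0.1, (20, 50))
brand_b[:, 10:25] += 1.0          # simulated electrochemical signature of brand B

X = np.vstack([brand_a, brand_b])
Xc = X - X.mean(axis=0)           # centre the data

# PCA via SVD: rows of Vt are the principal axes; project onto the first one
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]

# The first principal component cleanly separates the two classes
assert (pc1[:20].max() < pc1[20:].min()) or (pc1[20:].max() < pc1[:20].min())
```

With real electronic-tongue data, the score plot of the first two components is typically inspected visually to judge how well sample classes cluster.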
Abstract:
An analytical procedure for the separation and quantification of ethyl acetate, ethyl butyrate, ethyl hexanoate, ethyl lactate, ethyl octanoate, ethyl nonanoate, ethyl decanoate, isoamyl octanoate, and ethyl laurate in cachaca, rum, and whisky by direct injection gas chromatography-mass spectrometry was developed. The analytical method is simple, selective, and appropriate for the determination of esters in distilled spirits. The limit of detection ranged from 29 (ethyl hexanoate) to 530 (ethyl acetate) mu g L-1, whereas the standard deviation for repeatability was between 0.774% (ethyl hexanoate) and 5.05% (isoamyl octanoate). Relative standard deviation values for accuracy varied from 90.3 to 98.5% for ethyl butyrate and ethyl acetate, respectively. Ethyl acetate was shown to be the major ester in cachaca (median content of 22.6 mg 100 mL(-1) anhydrous alcohol), followed by ethyl lactate (median content of 8.32 mg 100 mL(-1) anhydrous alcohol). Cachaca produced in copper and hybrid alembics presents a higher content of ethyl acetate and ethyl lactate than that produced in a stainless-steel column, whereas cachaca produced by distillation in a stainless-steel column presents a higher content of ethyl octanoate, ethyl decanoate, and ethyl laurate. As expected, ethyl acetate is the major ester in whiskey and rum, followed by ethyl lactate in samples of rum. Nevertheless, whiskey samples exhibit ethyl lactate at contents lower than or of the same order of magnitude as the fatty acid esters.
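Detection and quantification limits of this kind are commonly estimated from a calibration curve, e.g., as 3.3 and 10 times the residual standard deviation divided by the slope (ICH-style estimates). A hedged sketch with invented calibration data (the concentrations and peak areas are not from the paper):

```python
import numpy as np

# Hypothetical calibration data for one ester: concentration (ug/L) vs peak area.
conc = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
area = np.array([510.0, 1040.0, 1980.0, 4010.0, 7990.0])

# Least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sd = residuals.std(ddof=2)  # residual standard deviation of the regression

# Common calibration-based estimates (ICH guideline style):
lod = 3.3 * sd / slope      # limit of detection
loq = 10.0 * sd / slope     # limit of quantification
```

Other routes to the same figures exist (e.g., signal-to-noise ratio of blank injections); the calibration-based estimate above is only one accepted convention.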
Abstract:
In this work, two different docking programs were used, AutoDock and FlexX, which use different types of scoring functions and search methods. The docking poses of all quinone compounds studied stayed in the same region of trypanothione reductase, a hydrophobic pocket near the Phe396, Pro398, and Leu399 amino acid residues. The compounds studied display a higher affinity for trypanothione reductase (TR) than for glutathione reductase (GR), since only two out of 28 quinone compounds presented more favorable docking energy in the site of the human enzyme. The interaction of quinone compounds with the TR enzyme is in agreement with other studies, which showed binding sites different from the one formed by cysteines 52 and 58. To verify the results obtained by docking, we carried out a molecular dynamics simulation with the compounds that presented the highest and lowest docking energies. The results showed that the root mean square deviation (RMSD) between the initial and final poses was very small. In addition, the hydrogen bond pattern was conserved along the simulation. In the parasite enzyme, the amino acid residues Leu399, Met400, and Lys402 are replaced in the human enzyme by Met406, Tyr407, and Ala409, respectively. In view of the fact that Leu399 is an amino acid of the Z site, this difference could be explored to design selective inhibitors of TR.
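The RMSD criterion used to compare initial and final poses can be computed directly from matched atomic coordinates; a minimal sketch with invented coordinates (real workflows typically superimpose the structures first, e.g., via the Kabsch algorithm, which is omitted here):

```python
import numpy as np

def rmsd(a, b):
    """Root mean square deviation between matched coordinate sets (N x 3)."""
    return np.sqrt(((a - b) ** 2).sum(axis=1).mean())

# Toy ligand pose: three atoms, coordinates in angstroms (illustrative only)
initial = np.array([[0.0, 0.0, 0.0],
                    [1.5, 0.0, 0.0],
                    [1.5, 1.5, 0.0]])
final = initial + 0.1   # small uniform drift, as for a pose stable in MD

# A uniform 0.1 A shift along each axis gives RMSD = sqrt(3) * 0.1 ~ 0.17 A
assert abs(rmsd(initial, final) - np.sqrt(3) * 0.1) < 1e-9
```

A small RMSD over the simulation, as reported here, indicates that the docked pose remains close to its starting geometry.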
Abstract:
This master's thesis describes the development of signal processing and pattern recognition techniques for monitoring Parkinson's disease. It involves the development of a signal processing algorithm whose output is passed to a pattern recognition algorithm. These algorithms are used to analyse, predict, and draw conclusions in the study of Parkinson's disease, providing insight into how the disease manifests in humans.
Abstract:
This thesis presents a system to recognise and classify road and traffic signs for the purpose of developing an inventory of them which could assist the highway engineers' tasks of updating and maintaining them. It uses images taken by a camera from a moving vehicle. The system is based on three major stages: colour segmentation, recognition, and classification. Four colour segmentation algorithms are developed and tested: a shadow and highlight invariant algorithm, a dynamic threshold algorithm, a modification of de la Escalera's algorithm, and a fuzzy colour segmentation algorithm. All algorithms are tested using hundreds of images, and the shadow-highlight invariant algorithm is eventually chosen as the best performer because it is immune to shadows and highlights. It is also robust, having been tested under different lighting conditions, weather conditions, and times of the day. Approximately 97% successful segmentation was achieved using this algorithm.

Recognition of traffic signs is carried out using a fuzzy shape recogniser. Based on four shape measures - rectangularity, triangularity, ellipticity, and octagonality - fuzzy rules were developed to determine the shape of the sign. Among these shape measures, octagonality has been introduced in this research. The final decision of the recogniser is based on the combination of both the colour and shape of the sign. The recogniser was tested in a variety of conditions, giving an overall performance of approximately 88%.

Classification was undertaken using a Support Vector Machine (SVM) classifier, carried out in two stages: classification of the rim shape followed by classification of the interior of the sign. The classifier was trained and tested using binary images in addition to five different types of moments: geometric moments, Zernike moments, Legendre moments, orthogonal Fourier-Mellin moments, and binary Haar features. The performance of the SVM was tested using different features, kernels, SVM types, SVM parameters, and moment orders. The average classification rate achieved is about 97%. Binary images show the best testing results, followed by Legendre moments. The linear kernel gives the best testing results, followed by RBF. C-SVM shows very good performance, but ν-SVM gives better results in some cases.
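The shape measures underlying the fuzzy recogniser can be illustrated with a simplified rectangularity measure (region area over bounding-box area). The thesis's measures are rotation-invariant, so this axis-aligned version is only a sketch on synthetic binary masks:

```python
import numpy as np

def rectangularity(mask):
    """Region area divided by the area of its axis-aligned bounding box.
    A simplified stand-in for the rotation-invariant measures fed to the
    fuzzy rules; close to 1 for rectangular signs, lower for triangles."""
    ys, xs = np.nonzero(mask)
    box = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return mask.sum() / box

square = np.ones((10, 10), dtype=int)
triangle = np.tril(np.ones((10, 10), dtype=int))  # right triangle fills half+diag of the box

assert rectangularity(square) == 1.0
assert rectangularity(triangle) < rectangularity(square)
```

A fuzzy recogniser would evaluate all four such measures on a segmented region and fire rules like "IF rectangularity is high AND colour is blue THEN sign is informational".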
Abstract:
The ever-increasing spurt in digital crimes such as image manipulation, image tampering, signature forgery, image forgery, illegal transactions, etc. has intensified the demand to combat these forms of criminal activity. In this direction, biometrics - the computer-based validation of a person's identity - is becoming more and more essential, particularly for high-security systems. The essence of biometrics is the measurement of a person's physiological or behavioral characteristics, which enables authentication of that person's identity. Biometric-based authentication is also becoming increasingly important in computer-based applications because the amount of sensitive data stored in such systems is growing. The new demands on biometric systems are robustness, high recognition rates, the capability to handle imprecision and uncertainties of a non-statistical kind, and considerable flexibility. It is exactly here that the role of soft computing techniques comes into play. The main aim of this write-up is to present a pragmatic view of applications of soft computing techniques in biometrics and to analyze their impact. It is found that soft computing has already made inroads, in terms of individual methods or in combination. Applications of varieties of neural networks top the list, followed by fuzzy logic and evolutionary algorithms. In a nutshell, the soft computing paradigms are used for biometric tasks such as feature extraction, dimensionality reduction, pattern identification, pattern mapping, and the like.
Abstract:
Vegetation growing on railway trackbeds and embankments presents potential problems. The presence of vegetation threatens the safety of personnel inspecting the railway infrastructure. In addition, vegetation growth clogs the ballast and results in inadequate track drainage, which in turn could lead to the collapse of the railway embankment. Assessing vegetation within the realm of railway maintenance is mainly carried out manually by making visual inspections along the track. This is done either on-site or by watching videos recorded by maintenance vehicles mainly operated by the national railway administrative body. A need for the automated detection and characterisation of vegetation on railways (a subset of vegetation control/management) has been identified in collaboration with local railway maintenance subcontractors and Trafikverket, the Swedish Transport Administration (STA). The latter is responsible for long-term planning of the transport system for all types of traffic, as well as for the building, operation, and maintenance of public roads and railways. The purpose of this research project was to investigate how vegetation can be measured and quantified by human raters and how machine vision can automate the same process. Data were acquired at railway trackbeds and embankments during field measurement experiments. All field data (such as images) in this thesis work were acquired on operational, lightly trafficked railway tracks, mostly trafficked by goods trains. Data were also generated by letting (human) raters conduct visual estimates of plant cover and/or count the number of plants, either on-site or in-house by making visual estimates of the images acquired from the field experiments. Later, the degree of reliability of the (human) raters' visual estimates was investigated and compared against machine vision algorithms. The overall results of the investigations involving human raters showed their estimates to be inconsistent, and therefore unreliable.
As a result of the exploration of machine vision, computational methods and algorithms enabling automatic detection and characterisation of vegetation along railways were developed. The results achieved in the current work have shown that the use of image data for detecting vegetation is indeed possible and that such results could form the basis for decisions regarding vegetation control. The machine vision algorithm which quantifies the vegetation cover was able to process 98% of the image data. Investigations of classifying plants from images were conducted in order to recognise the species; the classification accuracy was 95%.

Objective measurements such as the ones proposed in this thesis offer easy access to the measurements for all the involved parties and make the subcontracting process easier, i.e., both the subcontractors and the national railway administration are given the same reference framework concerning vegetation before signing a contract, which can then be cross-checked post maintenance.

A very important issue which comes with an increasing ability to recognise species is the maintenance of biological diversity. Biological diversity along the trackbeds and embankments can be mapped, and maintained, through better and more robust monitoring procedures. Continuously monitoring the state of vegetation along railways is highly recommended in order to identify the need for maintenance actions and, in addition, to keep track of biodiversity. The computational methods and algorithms developed form the foundation of an automatic inspection system capable of objectively supporting manual inspections, or replacing them.
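Vegetation-cover quantification from images is often approached with a colour index; as a hedged baseline sketch (not the thesis's algorithm), the Excess Green index with a fixed threshold gives a per-image cover fraction:

```python
import numpy as np

def vegetation_cover(rgb):
    """Fraction of pixels classified as vegetation using the Excess Green
    index (ExG = 2g - r - b), a common machine-vision baseline. The
    threshold of 20 is chosen for illustration, not taken from the thesis."""
    img = rgb.astype(float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    exg = 2 * g - r - b
    return (exg > 20).mean()

# Synthetic 2x2 image: two green (plant) pixels, one grey ballast pixel,
# one brown soil pixel
img = np.array([[[30, 120, 30], [40, 140, 35]],
                [[100, 100, 100], [120, 90, 60]]], dtype=np.uint8)
assert vegetation_cover(img) == 0.5
```

A cover fraction computed this way per trackbed image is the kind of objective, repeatable measurement that could replace the inconsistent human ratings discussed above.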
Abstract:
In this thesis, the basic research of Chase and Simon (1973) is questioned, and we seek new results by analyzing the errors of expert and beginner chess players in experiments on reproducing chess positions. Chess players with different levels of expertise participated in the study. The results were analyzed by a Brazilian grandmaster, and quantitative analysis was performed using statistical methods and data mining. The results significantly challenge current theories of expertise, memory, and decision making in this area: the present theory predicts piece-on-square encoding, in which players recognize the strategic situation and reproduce it faithfully, yet players commit several errors that the theory cannot explain. The current theory cannot fully explain the encoding used by players to register a board. The errors of intermediate players preserved fragments of the strategic situation, although the players committed a series of errors in the reconstruction of the positions. The encoding of chunks therefore includes more information than that predicted by current theories. Currently, research on perception, judgment, and decision is heavily concentrated on the idea of "pattern recognition". Based on the results of this research, we explore a change of perspective. The idea of "pattern recognition" presupposes that the processing of relevant information operates on "patterns" (or data) that exist independently of any interpretation. We propose instead a view of decision-making via the recognition of experience.
Abstract:
Chemical sensors made from nanostructured films of poly(o-ethoxyaniline) (POEA) and poly(sodium 4-styrene sulfonate) (PSS) are produced and used to detect and distinguish 4 chemicals in solution at 20 mM, including sucrose, NaCl, HCl, and caffeine. These substances are used in order to mimic the 4 basic tastes recognized by humans, namely sweet, salty, sour, and bitter, respectively. The sensors are produced by the deposition of POEA/PSS films on top of interdigitated microelectrodes via the layer-by-layer technique, using POEA solutions containing different dopant acids. Besides the different characteristics of the POEA/PSS films investigated by UV-Vis and Raman spectroscopies, and by atomic force microscopy, it is observed that their electrical response to the different chemicals in liquid media is very fast, in the order of seconds, systematic, reproducible, and extremely dependent on the type of acid used for film fabrication. The responses of the as-prepared sensors are reproducible and repetitive after many cycles of operation. Furthermore, the use of an "electronic tongue" composed of an array of these sensors, with principal component analysis as the pattern recognition tool, allows one to reasonably distinguish test solutions according to their chemical composition. (c) 2007 Published by Elsevier B.V.