866 results for Computer Vision and Pattern Recognition


Relevance: 100.00%

Abstract:

2D electrophoresis is a well-known method for protein separation which is extremely useful in the field of proteomics. Each spot in the image represents a protein accumulation, and the goal is to perform a differential analysis between pairs of images to study changes in protein content. It is thus necessary to register the two images by finding spot correspondences. Although it may seem a simple task, the manual processing of this kind of image is generally very cumbersome, especially when strong variations between corresponding sets of spots are expected (e.g. strong non-linear deformations and outliers). To solve this problem, this paper proposes a new quadratic assignment formulation together with a correspondence estimation algorithm based on graph matching that takes into account the structural information between the detected spots. Each image is represented by a graph, and the task is to find a maximum common subgraph. Successful experimental results using real data are presented, including an extensive comparative performance evaluation against ground-truth data. (C) 2010 Elsevier B.V. All rights reserved.
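To make the structural-matching idea concrete, here is a minimal Python sketch, not the paper's algorithm: it brute-forces the one-to-one assignment that best preserves pairwise inter-spot distances, a toy stand-in for the quadratic assignment formulation. The spot coordinates are hypothetical.

```python
from itertools import permutations

def match_spots(spots_a, spots_b):
    """Brute-force structural matching of two small spot sets.

    Scores each one-to-one assignment by how well it preserves
    pairwise inter-spot distances (a toy stand-in for a quadratic
    assignment objective) and returns the best assignment.
    """
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    n = len(spots_a)
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(spots_b)), n):
        # Sum of discrepancies between corresponding pairwise distances.
        cost = sum(
            abs(dist(spots_a[i], spots_a[j]) -
                dist(spots_b[perm[i]], spots_b[perm[j]]))
            for i in range(n) for j in range(i + 1, n)
        )
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best), best_cost

# Toy example: spots_b is spots_a translated by (10, 10) and re-ordered.
a = [(0, 0), (4, 0), (0, 3)]
b = [(10, 13), (10, 10), (14, 10)]
print(match_spots(a, b))  # -> ([1, 2, 0], 0.0)
```

Real gels contain hundreds of spots plus outliers, which is why the paper replaces this exhaustive search with a graph-matching algorithm that seeks a maximum common subgraph.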

Relevance: 100.00%

Abstract:

The design of translation-invariant and locally defined binary image operators over large windows is made difficult by decreased statistical precision and increased training time. We present a complete framework for the application of stacked design, a recently proposed technique for creating two-stage operators that circumvents this difficulty. We propose a novel algorithm, based on information theory, to find groups of pixels that should be used together to predict the output value. We employ this algorithm to automate the process of creating a set of first-level operators that are later combined into a global operator. We also propose a principled way to guide this combination, using feature selection and model comparison. Experimental results show that the proposed framework leads to better results than single-stage design. (C) 2009 Elsevier B.V. All rights reserved.
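The information-theoretic grouping step can be illustrated with a small sketch, assuming binary pixels and labels. The mutual-information estimator below is standard; the training observations are invented for illustration only.

```python
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from paired discrete observations."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n
        mi += pj * log2(pj / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy training data: pixel1 copies the output bit, pixel2 is independent,
# so pixel1 is the informative one to keep in a predictive group.
output = [0, 1, 0, 1, 0, 1, 0, 1]
pixel1 = [0, 1, 0, 1, 0, 1, 0, 1]   # fully informative -> 1 bit
pixel2 = [0, 0, 1, 1, 0, 0, 1, 1]   # independent       -> 0 bits
print(mutual_information(pixel1, output))  # -> 1.0
print(mutual_information(pixel2, output))  # -> 0.0
```

A grouping algorithm of the kind the abstract describes would rank candidate pixel sets by scores like these before training the first-level operators.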

Relevance: 100.00%

Abstract:

In this paper, we present a 3D face photography system based on a facial expression training dataset composed of both facial range images (3D geometry) and facial texture (2D photography). The proposed system allows one to obtain a 3D geometry representation of a given face provided as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase of the system, facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photographs. Principal component analysis (PCA) is then used to represent the face dataset, thus defining an orthonormal basis of texture and another of geometry. In the reconstruction phase, the input is a face image to which the ASM is matched. The extracted facial landmarks and the face image are fed through the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using a new dataset of 70 facial expressions belonging to ten subjects as the training set show rapidly reconstructed 3D faces that maintain spatial coherence similar to human perception, thus corroborating the efficiency and applicability of the proposed system.
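The PCA step amounts to projecting the input onto an orthonormal texture basis and re-expanding the coefficients in a paired geometry basis. A toy sketch, with hypothetical two- and three-dimensional bases standing in for the learned texture and geometry spaces (the real system learns these from the range/texture training pairs):

```python
def project(x, mean, basis):
    """Coefficients of (x - mean) on an orthonormal basis (list of vectors)."""
    centered = [xi - mi for xi, mi in zip(x, mean)]
    return [sum(ci * ei for ci, ei in zip(centered, e)) for e in basis]

def reconstruct(coeffs, mean, basis):
    """mean + sum_i coeffs[i] * basis[i]."""
    out = list(mean)
    for c, e in zip(coeffs, basis):
        out = [oi + c * ei for oi, ei in zip(out, e)]
    return out

# Hypothetical orthonormal texture basis in R^2, paired geometry basis in R^3.
tex_mean, tex_basis = [1.0, 1.0], [[1.0, 0.0], [0.0, 1.0]]
geo_mean, geo_basis = [0.0, 0.0, 5.0], [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]

c = project([3.0, 2.0], tex_mean, tex_basis)   # texture coefficients [2.0, 1.0]
print(reconstruct(c, geo_mean, geo_basis))      # -> [2.0, 1.0, 5.0]
```

The key design point is that the same coefficient vector indexes both bases, so a 2D observation yields a 3D estimate.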

Relevance: 100.00%

Abstract:

The design of binary morphological operators that are translation-invariant and locally defined by a finite neighborhood window corresponds to the problem of designing Boolean functions. As in any supervised classification problem, morphological operators designed from a training sample also suffer from overfitting: large neighborhoods tend to lead to performance degradation of the designed operator. This work proposes a multilevel design approach to deal with the issue of designing operators over large neighborhoods. The main idea, inspired by stacked generalization (a multilevel classifier design approach), consists of combining, at each training level, the outcomes of the previous-level operators. The final operator is a multilevel operator that ultimately depends on a larger neighborhood than that of any of the individual operators that have been combined. Experimental results show that two-level operators obtained by combining operators designed on subwindows of a large window consistently outperform single-level operators designed on the full window. They also show that iterating two-level operators is an effective multilevel approach for obtaining better results.
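A two-level operator of this kind can be sketched as lookup tables: each first-level operator sees only a subwindow, and a second-level operator combines their outputs, so the composite effectively depends on the full window. The tables below are invented for illustration, not trained from data.

```python
def make_operator(table, default=0):
    """A locally defined operator as a lookup table: window tuple -> bit."""
    return lambda pattern: table.get(pattern, default)

# First level: two operators, each seeing a 2-pixel subwindow of a
# 4-pixel window.
op_left  = make_operator({(1, 1): 1})   # fires when the left pair is all 1s
op_right = make_operator({(1, 1): 1})   # fires when the right pair is all 1s

# Second level: combines the two first-level outputs (here, a logical OR
# encoded as a table), so the final operator depends on all 4 pixels.
op_top = make_operator({(0, 1): 1, (1, 0): 1, (1, 1): 1})

def two_level(window):
    first = (op_left(window[:2]), op_right(window[2:]))
    return op_top(first)

print(two_level((1, 1, 0, 0)))  # -> 1  (left sub-operator fires)
print(two_level((0, 1, 0, 1)))  # -> 0  (neither fires)
```

In the designed setting, each table is estimated from training images on its own subwindow, which is what keeps the statistical precision manageable compared with training a single table over the full window.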

Relevance: 100.00%

Abstract:

Spodoptera frugiperda beta-1,3-glucanase (SLam) was purified from the larval midgut. It has a molecular mass of 37.5 kDa and an alkaline optimum pH of 9.0, is active against beta-1,3-glucan (laminarin), but cannot hydrolyze yeast beta-1,3-1,6-glucan or other polysaccharides. The enzyme is an endoglucanase with low processivity (0.4) and is not inhibited by high concentrations of substrate. In contrast to other digestive beta-1,3-glucanases from insects, SLam is unable to lyse Saccharomyces cerevisiae cells. The cDNA encoding SLam was cloned and sequenced, showing that the protein belongs to glycosyl hydrolase family 16, like other insect glucanases and glucan-binding proteins. Multiple sequence alignment of beta-1,3-glucanases and beta-glucan-binding proteins supports the assumption that the beta-1,3-glucanase gene duplicated in the ancestor of mollusks and arthropods. One copy gave rise to the derived beta-1,3-glucanases by the loss of an extended N-terminal region, and to the beta-glucan-binding proteins by the loss of the catalytic residues. SLam homology modeling suggests that E228 may affect the ionization of the catalytic residues, thus displacing the enzyme's pH optimum. SLam antiserum reacts with a single protein in the insect midgut. Immunocytolocalization shows that the enzyme is present in secretory vesicles and the glycocalyx of columnar cells. (C) 2010 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

A low-cost method is proposed to classify wine and whisky samples using a disposable voltammetric electronic tongue that was fabricated using gold and copper substrates and a pattern recognition technique (Principal Component Analysis). The proposed device was successfully used to discriminate between expensive and cheap whisky samples and to detect adulteration processes using only a copper electrode. For wines, the electronic tongue was composed of copper and gold working electrodes and was able to classify three different brands of wine and to make distinctions regarding the wine type, i.e., dry red, soft red, dry white and soft white brands. Crown Copyright (C) 2011 Published by Elsevier B.V. All rights reserved.
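The Principal Component Analysis step can be sketched for the two-feature case, where the direction of maximum variance of a 2x2 covariance matrix has a closed form. The "peak current" features below are hypothetical, not measured voltammetric data:

```python
from math import atan2, cos, sin

def first_pc_scores(samples):
    """Score each 2-feature sample on the first principal component."""
    n = len(samples)
    mx = sum(s[0] for s in samples) / n
    my = sum(s[1] for s in samples) / n
    cxx = sum((s[0] - mx) ** 2 for s in samples) / n
    cyy = sum((s[1] - my) ** 2 for s in samples) / n
    cxy = sum((s[0] - mx) * (s[1] - my) for s in samples) / n
    # Angle of the direction of maximum variance for a 2x2 covariance matrix.
    theta = 0.5 * atan2(2 * cxy, cxx - cyy)
    ux, uy = cos(theta), sin(theta)
    return [(s[0] - mx) * ux + (s[1] - my) * uy for s in samples]

# Hypothetical peak currents at two potentials: the first three samples
# mimic one beverage brand, the last three another.
samples = [(1.0, 2.0), (1.1, 2.1), (0.9, 1.9),
           (3.0, 4.0), (3.1, 4.1), (2.9, 3.9)]
scores = first_pc_scores(samples)
print([round(s, 2) for s in scores])
# The two groups land on opposite sides of zero along PC1.
```

In the paper's setting, the features would be currents sampled along the voltammogram from the copper and gold electrodes, with the PC scores plotted to separate sample classes.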

Relevance: 100.00%

Abstract:

An analytical procedure for the separation and quantification of ethyl acetate, ethyl butyrate, ethyl hexanoate, ethyl lactate, ethyl octanoate, ethyl nonanoate, ethyl decanoate, isoamyl octanoate, and ethyl laurate in cachaça, rum, and whisky by direct-injection gas chromatography-mass spectrometry was developed. The analytical method is simple, selective, and appropriate for the determination of esters in distilled spirits. The limit of detection ranged from 29 µg L⁻¹ (ethyl hexanoate) to 530 µg L⁻¹ (ethyl acetate), whereas the standard deviation for repeatability was between 0.774% (ethyl hexanoate) and 5.05% (isoamyl octanoate). Relative standard deviation values for accuracy varied from 90.3% to 98.5% for ethyl butyrate and ethyl acetate, respectively. Ethyl acetate was shown to be the major ester in cachaça (median content of 22.6 mg 100 mL⁻¹ of anhydrous alcohol), followed by ethyl lactate (median content of 8.32 mg 100 mL⁻¹ of anhydrous alcohol). Cachaça produced in copper and hybrid alembics presents a higher content of ethyl acetate and ethyl lactate than that produced in a stainless-steel column, whereas cachaça produced by distillation in a stainless-steel column presents a higher content of ethyl octanoate, ethyl decanoate, and ethyl laurate. As expected, ethyl acetate is the major ester in whisky and rum, followed by ethyl lactate in rum samples. Nevertheless, whisky samples exhibit ethyl lactate at contents lower than or of the same order of magnitude as the fatty acid esters.

Relevance: 100.00%

Abstract:

This master's thesis describes the development of signal processing and pattern recognition for monitoring Parkinson's disease. It involves the development of a signal processing algorithm whose output is passed to a pattern recognition algorithm. These algorithms are used to determine, predict, and draw conclusions in the study of Parkinson's disease, helping us understand the nature of the disease in humans.

Relevance: 100.00%

Abstract:

The objective of this thesis work is to propose an algorithm to detect faces in digital images with complex backgrounds. A lot of work has already been done in the area of face detection, but a drawback of some face detection algorithms is their inability to detect faces with closed eyes or an open mouth. Facial features thus form an important basis for detection, and the current thesis work focuses on the detection of faces based on facial objects. The procedure is composed of three phases: a segmentation phase, a filtering phase, and a localization phase. In the segmentation phase, the algorithm uses color segmentation to isolate human skin based on its chrominance properties. In the filtering phase, Minkowski-addition-based object removal (morphological operations) is used to remove non-skin regions. In the last phase, image processing and computer vision methods are used to find facial components in the skin regions. This method is effective at detecting face regions with closed eyes, an open mouth, or a half-profile face. Experimental results demonstrated a detection accuracy of around 85.4% and a faster detection speed than neural network methods and other techniques.
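The chrominance-based skin test of the segmentation phase can be sketched as follows. The RGB-to-YCbCr conversion uses the standard ITU-R BT.601 coefficients; the Cb/Cr ranges are commonly cited defaults from the skin-detection literature, not necessarily the thesis's exact thresholds.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Chrominance threshold test on a single pixel.

    The default Cb/Cr windows are commonly used in the literature;
    a real detector would tune them and follow up with morphological
    filtering of the resulting binary mask.
    """
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

print(is_skin(220, 170, 140))  # typical light skin tone -> True
print(is_skin(30, 90, 200))    # blue background pixel   -> False
```

Classifying by chrominance alone makes the test largely insensitive to brightness, which is why the luma channel is discarded in the comparison.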

Relevance: 100.00%

Abstract:

In this thesis, the basic research of Chase and Simon (1973) is questioned, and we seek new results by analyzing the errors of expert and beginner chess players in experiments on reproducing chess positions. Chess players with different levels of expertise participated in the study. The results were analyzed by a Brazilian grandmaster, and quantitative analysis was performed using statistical and data mining methods. The results significantly challenge the current theories of expertise, memory, and decision making in this area: the present theory predicts piece-on-square encoding, in which players recognize the strategic situation and reproduce it faithfully, yet players commit several errors that the theory cannot explain. The current theory cannot fully explain the encoding used by players to register a board. The errors of intermediate players preserved fragments of the strategic situation, although they committed a series of errors in the reconstruction of the positions. The encoding of chunks therefore includes more information than that predicted by current theories. Currently, research on perception, judgment, and decision is heavily concentrated on the idea of 'pattern recognition'. Based on the results of this research, we explore a change of perspective. The idea of 'pattern recognition' presupposes that the processing of relevant information operates on 'patterns' (or data) that exist independently of any interpretation. We propose instead a view of decision making via the recognition of experience.


Relevance: 100.00%

Abstract:

In many creative and technical areas, professionals make use of paper sketches for developing and expressing concepts and models. Paper offers an almost constraint-free environment where they have as much freedom to express themselves as they need. However, paper does have some disadvantages, such as its size and the inability to manipulate content (other than removing or scratching it), which can be overcome by systems that offer the same freedom as paper without its disadvantages and limitations. Only in recent years, with the development of touch-sensitive screens that can also interact with a stylus, has the technology that allows precisely this become widely available. In this project, a prototype was created with the objective of finding a set of the most useful and usable interactions composed of combinations of multi-touch and pen input. The project selected Computer Aided Software Engineering (CASE) tools as its application domain, because it is a solid and well-defined discipline with sufficient room for new developments. This choice resulted from research conducted to find an application domain, which involved analyzing sketching tools from several possible areas and domains. User studies were conducted using Model Driven Inquiry (MDI) to gain a better understanding of human sketch-creation activities and the concepts devised. The prototype was then implemented, making it possible to run user evaluations of the interaction concepts created. The results validated most interactions, although only limited testing was possible at the time. Users had more problems using the pen; however, handwriting and ink recognition were very effective, and users quickly learned the manipulations and gestures of the Natural User Interface (NUI).

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)