985 results for Structural feature
Abstract:
Background: The multitude of motif detection algorithms developed to date have largely focused on the detection of patterns in primary sequence. Since sequence-dependent DNA structure and flexibility may also play a role in protein-DNA interactions, the simultaneous exploration of sequence- and structure-based hypotheses about the composition of binding sites and the ordering of features in a regulatory region should be considered as well. The consideration of structural features requires the development of new detection tools that can deal with data types other than primary sequence. Results: GANN (available at http://bioinformatics.org.au/gann) is a machine learning tool for the detection of conserved features in DNA. The software suite contains programs to extract different regions of genomic DNA from flat files and convert these sequences to indices that reflect sequence and structural composition or the presence of specific protein binding sites. The machine learning component allows the classification of different types of sequences based on subsamples of these indices, and can identify the best combinations of indices and machine learning architecture for sequence discrimination. Other key features of GANN are the replicated splitting of data into training and test sets and the implementation of negative controls. In validation experiments, GANN successfully merged important sequence and structural features to yield good predictive models for synthetic and real regulatory regions. Conclusion: GANN is a flexible tool that can search through large sets of sequence and structural feature combinations to identify those that best characterize a set of sequences.
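As a rough illustration of the replicated train/test splitting and negative-control scheme described above, here is a minimal Python sketch; the feature matrix, classifier and split ratio are hypothetical placeholders, not GANN's actual implementation.

```python
# Minimal sketch: replicated train/test splits plus a shuffled-label
# negative control. All data and model choices here are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))      # e.g., sequence/structure indices (fake)
y = rng.integers(0, 2, size=200)    # binding site vs. background labels (fake)

def replicated_accuracy(X, y, n_replicates=10):
    scores = []
    for seed in range(n_replicates):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=seed, stratify=y)
        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500,
                            random_state=seed).fit(X_tr, y_tr)
        scores.append(clf.score(X_te, y_te))
    return np.mean(scores)

real = replicated_accuracy(X, y)
control = replicated_accuracy(X, rng.permutation(y))  # negative control
print(f"real: {real:.3f}  shuffled-label control: {control:.3f}")
```

A real signal should score well above its shuffled-label control across replicates; comparable scores indicate the model is fitting noise.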
Abstract:
Relaxin-3 is the most recently discovered member of the relaxin family of peptide hormones. In contrast to relaxin-1 and -2, whose main functions are associated with pregnancy, relaxin-3 is involved in neuropeptide signaling in the brain. Here, we report the solution structure of human relaxin-3, the first structure of a relaxin family member to be solved by NMR methods. Overall, relaxin-3 adopts an insulin-like fold, but the structure differs crucially from the crystal structure of human relaxin-2 near the B-chain terminus. In particular, the B-chain C terminus folds back, allowing Trp(B27) to interact with the hydrophobic core. This interaction partly blocks the conserved RXXXRXXI motif identified as a determinant for the interaction with the relaxin receptor LGR7, and may account for the lower affinity of relaxin-3, relative to relaxin, for this receptor. This structural feature is likely important for the activation of its endogenous receptor, GPCR135.
Abstract:
The concept of ontological security has a remarkable echo in current sociology as a way of describing the emotional condition of individuals in late modernity. However, the concept, created by Giddens in the eighties, has been little used in empirical research covering various sources of risk or uncertainty. In this paper, a scale for ontological security is proposed. To do this, we start from the results of a research project focused on the relationship between risk, uncertainty and vulnerability in the context of the economic crisis in Spain. These results were produced through nine focus groups and a telephone survey with a standardized questionnaire applied to a national sample of 2,408 individuals over 18 years of age. This work is divided into three main sections. In the first, a scale is built from the results of the application of different items present in the questionnaire used. The second part explores the relationships between the scale obtained and the variables that best approximate the emotional dimensions of individuals. The third part examines the variables that contribute to variation in the scale; these variables show the structural feature of ontological security.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Features of the homologous relationships of proteins can provide us with a general picture of the protein universe, assist protein design and analysis, and further our comprehension of the evolution of organisms. Here we carried out a study of the evolution of protein molecules by investigating homologous relationships among residue segments. The motive was to identify detailed topological features of homologous relationships for short residue segments in the whole protein universe. Based on the data of a large number of non-redundant proteins, the universe of non-membrane polypeptides was analyzed by considering both residue mutations and structural conservation. By connecting homologous segments with edges, we obtained a homologous relationship network of the whole universe of short residue segments, which we named the graph of polypeptide relationships (GPR). Since the network is extremely complicated for topological transitions, to obtain an in-depth understanding, only subgraphs composed of vital nodes of the GPR were analyzed. Such analysis of vital subgraphs of the GPR revealed a donut-shaped fingerprint. Utilization of this topological feature revealed the switch sites (residues 188-202, where exposure of previously hidden fibril-forming "hot spots" begins, providing a further opportunity for protein aggregation) of the conformational conversion of the normal alpha-helix-rich prion protein PrPC to the beta-sheet-rich PrPSc that is thought to be responsible for a group of fatal neurodegenerative diseases, the transmissible spongiform encephalopathies. Efforts in analyzing other proteins related to various conformational diseases are also introduced. (C) 2009 Elsevier Ltd. All rights reserved.
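The graph construction described above can be illustrated with a short sketch; the segments, the identity-based similarity test and the 0.8 threshold are hypothetical stand-ins for the paper's mutation and structural-conservation criteria.

```python
# Illustrative GPR-style homology network: nodes are short residue
# segments, edges connect pairs judged homologous by a toy identity test.
import itertools
import networkx as nx

segments = ["MKTAYIAKQR", "MKTAYIAKQK", "GVLKEYGVSA", "GVLREYGVSA"]

def identity(a, b):
    # Fraction of matching positions between two equal-length segments.
    return sum(x == y for x, y in zip(a, b)) / len(a)

G = nx.Graph()
G.add_nodes_from(segments)
for a, b in itertools.combinations(segments, 2):
    if identity(a, b) >= 0.8:        # hypothetical homology threshold
        G.add_edge(a, b)

# Degree picks out "vital" (highly connected) nodes for subgraph analysis.
vital = [n for n, d in G.degree() if d >= 1]
print(nx.subgraph(G, vital).edges())
```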
Abstract:
Recent studies showed that features extracted from brain MRIs can well discriminate Alzheimer's disease from Mild Cognitive Impairment. This study provides an algorithm that sequentially applies advanced feature selection methods for finding the best subset of features in terms of binary classification accuracy. The classifiers that provided the highest accuracies have then been used for solving a multi-class problem by the one-versus-one strategy. Although several approaches based on Regions of Interest (ROIs) extraction exist, the predictive power of features has not yet been investigated by comparing filter and wrapper techniques. The findings of this work suggest that (i) IntraCranial Volume (ICV) normalization can lead to overfitting and worsen the prediction accuracy on the test set, and (ii) the combined use of a Random Forest-based filter with a Support Vector Machines-based wrapper improves the accuracy of binary classification.
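A minimal sketch of the filter-then-wrapper idea described above: a Random Forest importance filter followed by an SVM-based recursive wrapper (RFE), then one-versus-one multi-class classification. The synthetic dataset and feature counts are placeholders, not the study's actual ROI features.

```python
# Filter (RF importances) -> wrapper (SVM-based RFE) -> one-vs-one SVM.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel, RFE
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for ROI features with three diagnostic classes.
X, y = make_classification(n_samples=300, n_features=50, n_informative=8,
                           n_classes=3, random_state=0)

pipe = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),
    RFE(SVC(kernel="linear"), n_features_to_select=10),
    OneVsOneClassifier(SVC(kernel="linear")),
)
print(cross_val_score(pipe, X, y, cv=5).mean())
```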
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
This paper presents an approach for structural health monitoring (SHM) using adaptive filters. Experimental signals from different structural conditions, provided by piezoelectric actuators/sensors bonded to the test structure, are modeled by a discrete-time recursive least squares (RLS) filter. The biggest advantage of using an RLS filter is the possibility of performing an online SHM procedure, since the identification is also valid for non-stationary linear systems. An online damage-sensitive index is computed from the autoregressive (AR) portion of the coefficients, normalized by the square root of the sum of their squares. The proposed method is then applied in a laboratory test involving an aeronautical panel coupled with piezoelectric sensors/actuators (PZTs) in different positions. A hypothesis test employing the t-test is used to reach the damage decision. The proposed algorithm was able to identify and localize the damage simulated in the structure. The results show the applicability and drawbacks of the method, and the paper concludes with suggestions to improve it. ©2010 Society for Experimental Mechanics Inc.
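A minimal sketch of the damage-sensitive index described above, assuming an ordinary least-squares AR fit as a stand-in for the RLS recursion; the synthetic signals, model order and decision feature are placeholders.

```python
# Normalized AR-coefficient feature plus a t-test damage decision.
import numpy as np
from scipy import stats

def ar_coeffs(x, order=4):
    # Fit x[n] ~ a1*x[n-1] + ... + ap*x[n-p] by least squares,
    # then normalize by the square root of the sum of squares.
    X = np.column_stack([x[order - k: len(x) - k] for k in range(1, order + 1)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a / np.sqrt(np.sum(a ** 2))

def ar1(n, phi, rng):
    # Synthetic AR(1) signal standing in for a PZT measurement.
    x = np.zeros(n)
    e = rng.normal(size=n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + e[i]
    return x

rng = np.random.default_rng(1)
baseline = [ar_coeffs(ar1(500, 0.5, rng)) for _ in range(20)]
damaged = [ar_coeffs(ar1(500, 0.7, rng)) for _ in range(20)]  # altered dynamics

# t-test on the first normalized coefficient as the decision feature.
t, p = stats.ttest_ind([b[0] for b in baseline], [d[0] for d in damaged])
print(f"t = {t:.2f}, p = {p:.3f}")
```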
Abstract:
The mortality caused by snakebites is more damaging than that of many tropical diseases, such as dengue haemorrhagic fever, cholera, leishmaniasis, schistosomiasis and Chagas disease. For this reason, snakebite envenoming adversely affects the health services of tropical and subtropical countries and is recognized as a neglected disease by the World Health Organization. One of the main components of snake venoms is the Lys49-phospholipase A2, which is catalytically inactive but possesses other toxic and pharmacological activities. Preliminary studies with MjTX-I from Bothrops moojeni snake venom revealed intriguing new structural and functional characteristics compared to other bothropic Lys49-PLA2s. We present in this article a comprehensive study of MjTX-I using several techniques, including crystallography, small-angle X-ray scattering, analytical size-exclusion chromatography, dynamic light scattering, myographic studies, bioinformatics and molecular phylogenetic analyses. Based on all these experiments, we demonstrated that MjTX-I is probably a unique Lys49-PLA2, which may adopt different oligomeric forms depending on the physical-chemical environment. Furthermore, we showed that its myotoxic activity is dramatically low compared to other Lys49-PLA2s, probably due to the novel oligomeric conformations and important mutations in the C-terminal region of the protein. The phylogenetic analysis also showed that this toxin is clearly distinct from other bothropic Lys49-PLA2s, in conformity with the peculiar oligomeric characteristics of MjTX-I and the possible emergence of new functionalities in response to environmental changes and adaptation to new prey. © 2013 Salvador et al.
Abstract:
Hybrid face recognition, using image (2D) and structural (3D) information, has explored the fusion of Nearest Neighbour classifiers. This paper examines the effectiveness of feature modelling for each individual modality, 2D and 3D. Furthermore, it is demonstrated that the fusion of feature modelling techniques for the 2D and 3D modalities yields performance improvements over the individual classifiers. By fusing the feature modelling classifiers for each modality with equal weights, the average Equal Error Rate improves from 12.60% for the 2D classifier and 12.10% for the 3D classifier to 7.38% for the hybrid 2D+3D classifier.
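The equal-weight fusion reported above amounts to averaging the per-modality match scores; a tiny sketch follows, where the z-score normalization step and the score values are assumptions, not details from the paper.

```python
# Equal-weight score-level fusion of 2D and 3D classifier outputs.
import numpy as np

def znorm(s):
    # Z-score normalization so the two score scales are comparable
    # (an assumed preprocessing step, not specified by the paper).
    return (s - s.mean()) / s.std()

scores_2d = np.array([0.82, 0.40, 0.91, 0.33])   # hypothetical match scores
scores_3d = np.array([0.75, 0.52, 0.88, 0.29])

fused = 0.5 * znorm(scores_2d) + 0.5 * znorm(scores_3d)
print(fused)
```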
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
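The generalized Gaussian model mentioned in the abstract above can be sketched briefly. The thesis formulates a least-squares estimate of the shape parameter; this illustrative version instead uses a simpler moment-matching estimate solved with a scalar root finder.

```python
# Fit the shape parameter of a generalized Gaussian to subband coefficients
# by matching the ratio E|x| / sqrt(E x^2) (a Mallat-style moment method,
# shown here in place of the thesis's least-squares formulation).
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def gg_ratio(beta):
    # Theoretical E|x| / sqrt(E x^2) for a generalized Gaussian.
    return gamma(2 / beta) / np.sqrt(gamma(1 / beta) * gamma(3 / beta))

def fit_shape(coeffs):
    r = np.mean(np.abs(coeffs)) / np.sqrt(np.mean(coeffs ** 2))
    return brentq(lambda b: gg_ratio(b) - r, 0.1, 10.0)

rng = np.random.default_rng(2)
subband = rng.laplace(size=5000)   # Laplacian = generalized Gaussian, beta = 1
print(f"estimated shape parameter: {fit_shape(subband):.2f}")
```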
Abstract:
Uncooperative iris identification systems at a distance suffer from poor resolution of the acquired iris images, which significantly degrades iris recognition performance. Super-resolution techniques have been employed to enhance the resolution of iris images and improve the recognition performance. However, most existing super-resolution approaches proposed for the iris biometric super-resolve pixel intensity values, rather than the actual features used for recognition. This paper thoroughly investigates transferring super-resolution of iris images from the intensity domain to the feature domain. By directly super-resolving only the features essential for recognition, and by incorporating domain specific information from iris models, improved recognition performance compared to pixel domain super-resolution can be achieved. A framework for applying super-resolution to nonlinear features in the feature-domain is proposed. Based on this framework, a novel feature-domain super-resolution approach for the iris biometric employing 2D Gabor phase-quadrant features is proposed. The approach is shown to outperform its pixel domain counterpart, as well as other feature domain super-resolution approaches and fusion techniques.
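The 2D Gabor phase-quadrant features named above reduce each complex filter response to two bits given by the signs of its real and imaginary parts; a small sketch follows, with random complex numbers standing in for actual Gabor filter outputs.

```python
# Phase-quadrant encoding of complex Gabor responses into a binary code,
# compared by fractional Hamming distance. Responses here are faked.
import numpy as np

rng = np.random.default_rng(3)
responses = rng.normal(size=(8, 64)) + 1j * rng.normal(size=(8, 64))

bits_re = (responses.real > 0).astype(np.uint8)   # 1st bit: sign of real part
bits_im = (responses.imag > 0).astype(np.uint8)   # 2nd bit: sign of imag part
iris_code = np.stack([bits_re, bits_im], axis=-1)  # shape (8, 64, 2)

# Dissimilarity between two codes is the fractional Hamming distance.
other = rng.integers(0, 2, size=iris_code.shape, dtype=np.uint8)
hd = np.mean(iris_code != other)
print(f"Hamming distance: {hd:.3f}")
```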
Abstract:
In recent years, Web 2.0 has provided considerable facilities for people to create, share and exchange information and ideas. As a result, user-generated content, such as reviews, has exploded. Such data provide a rich source to exploit in order to identify the information associated with specific reviewed items. Opinion mining has been widely used to identify the significant features of items (e.g., cameras) based upon user reviews. Feature extraction is the most critical step in identifying useful information from texts. Most existing approaches only find individual features of a product without revealing the structural relationships between the features, which usually exist. In this paper, we propose an approach to extract features and feature relationships, represented as a tree structure called a feature taxonomy, based on frequent patterns and associations between patterns derived from user reviews. The generated feature taxonomy profiles the product at multiple levels and provides more detailed information about the product. Our experimental results based on some popularly used review datasets show that our proposed approach is able to capture product features and relations effectively.
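One plausible reading of the taxonomy construction is sketched below, under the assumption that a pattern's parent is its largest proper subset among the frequent patterns; the pattern list is hypothetical, whereas the paper derives its patterns from review text.

```python
# Toy feature taxonomy: link each frequent feature pattern to its largest
# proper subset among the mined patterns (assumed parent relation).
patterns = [frozenset(p) for p in (
    {"camera"}, {"camera", "lens"}, {"camera", "battery"},
    {"camera", "lens", "zoom"},
)]

def parent(p, patterns):
    subs = [q for q in patterns if q < p]      # proper subsets of p
    return max(subs, key=len) if subs else None

for p in sorted(patterns, key=len):
    par = parent(p, patterns)
    print(set(p), "-> parent:", set(par) if par else None)
```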
Abstract:
As of today, opinion mining has been widely used to identify the strengths and weaknesses of products (e.g., cameras) or services (e.g., services in medical clinics or hospitals) based upon people's feedback, such as user reviews. Feature extraction is a crucial step for opinion mining and has been used to collect useful information from user reviews. Most existing approaches only find individual features of a product without the structural relationships between the features, which usually exist. In this paper, we propose an approach to extract features and feature relationships, represented as a tree structure called a feature hierarchy, based on frequent patterns and associations between patterns derived from user reviews. The generated feature hierarchy profiles the product at multiple levels and provides more detailed information about the product. Our experimental results based on some popularly used review datasets show that the proposed feature extraction approach can identify more correct features than the baseline model. Even though the datasets used in the experiment are about cameras, our work can be applied to generate features for a service, such as the services in hospitals or clinics.