964 results for norm-based coding


Relevance:

30.00%

Publisher:

Abstract:

Many vision problems, such as motion segmentation and face clustering, deal with high-dimensional data. However, these high-dimensional data usually lie in a low-dimensional structure. Sparse representation is a powerful principle for solving a number of clustering problems with high-dimensional data. This principle is motivated by an ideal model of data points grounded in linear algebra. However, real data in computer vision are unlikely to follow the ideal model perfectly. In this paper, we exploit mixed-norm regularization for sparse subspace clustering. This regularization term is a convex combination of the l1 norm, which promotes sparsity at the individual level, and the block norm l2/1, which promotes group sparsity. Combining these powerful regularization terms provides a more accurate model and, subsequently, a better solution for the affinity matrix used in sparse subspace clustering, which helps us achieve better performance on motion segmentation and face clustering problems. The formulation also caters for different types of data corruption. We derive a provably convergent and computationally efficient algorithm based on the alternating direction method of multipliers (ADMM) framework to solve the formulation. We demonstrate that this formulation outperforms other state-of-the-art methods on both motion segmentation and face clustering.
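Below is a minimal sketch, not the authors' implementation, of the mixed-norm penalty described above and of the proximal step an ADMM solver would typically use for it; the weighting parameter alpha, the step size lam, and the choice of column-wise groups are illustrative assumptions.

```python
# Hypothetical sketch of the mixed-norm penalty: a convex combination of the
# entrywise l1 norm and the l2/1 (column-group) norm. Parameters and the
# column grouping are assumptions, not taken from the paper.
import numpy as np

def mixed_norm_penalty(C: np.ndarray, alpha: float = 0.5) -> float:
    """alpha * ||C||_1 + (1 - alpha) * ||C||_{2,1} with column groups."""
    l1 = np.abs(C).sum()                   # promotes entrywise sparsity
    l21 = np.linalg.norm(C, axis=0).sum()  # promotes column-group sparsity
    return alpha * l1 + (1.0 - alpha) * l21

def prox_mixed_norm(C: np.ndarray, lam: float, alpha: float = 0.5) -> np.ndarray:
    """Proximal step usable inside a generic ADMM loop: soft-threshold the
    entries (l1 part), then shrink whole columns (l2/1 part)."""
    S = np.sign(C) * np.maximum(np.abs(C) - lam * alpha, 0.0)
    norms = np.linalg.norm(S, axis=0, keepdims=True)
    scale = np.maximum(1.0 - lam * (1.0 - alpha) / np.maximum(norms, 1e-12), 0.0)
    return S * scale
```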

Relevance:

30.00%

Publisher:

Abstract:

As a popular heuristic for the matrix rank minimization problem, nuclear norm minimization has attracted intensive research attention. Matrix factorization based algorithms can reduce the expensive computational cost of the SVD required for nuclear norm minimization. However, most matrix factorization based algorithms fail to provide a theoretical convergence guarantee because of their non-unique factorizations. This paper proposes an efficient and accurate Linearized Grassmannian Optimization (Lingo) algorithm, which adopts matrix factorization and the Grassmann manifold structure to alternately minimize the subproblems. More specifically, the linearization strategy makes auxiliary variables unnecessary and guarantees closed-form solutions with low per-iteration complexity. Lingo then converts the linearized objective function into a nuclear norm minimization over the Grassmann manifold, which remedies the non-uniqueness of the low-rank matrix factorization. Extensive comparison experiments demonstrate the accuracy and efficiency of the Lingo algorithm. The global convergence of Lingo is guaranteed by theoretical proof, which also verifies its effectiveness.
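The sketch below is not the Lingo algorithm; it only illustrates singular value thresholding, the SVD-based proximal operator of the nuclear norm that standard solvers rely on and whose cost factorization-based methods such as Lingo aim to avoid.

```python
# Minimal singular value thresholding: prox_{tau * ||.||_*}(X).
import numpy as np

def svt(X: np.ndarray, tau: float) -> np.ndarray:
    """Shrink the singular values of X by tau (set negatives to zero)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt
```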

Relevance:

30.00%

Publisher:

Abstract:

Spectral unmixing (SU) is an emerging problem in remote sensing image processing. Since both the endmember signatures and their abundances have nonnegative values, it is natural to employ the attractive nonnegative matrix factorization (NMF) methods to solve this problem. Motivated by the fact that the abundances are sparse, NMF with a local smoothness constraint (NMF-LSC) is proposed in this paper. In the proposed method, the smoothness constraint is used to impose sparseness, instead of the traditional L1 norm, which is restricted by the underlying column-sum-to-one requirement on the abundance matrix. Simulations show the advantages of our algorithm over the compared methods.
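As context, the following is a minimal sketch of plain multiplicative-update NMF (Frobenius loss) for the factorization the abstract refers to; the proposed local smoothness constraint is not reproduced here, and the rank, iteration count, and initialization are illustrative assumptions.

```python
# Baseline Lee-Seung NMF: V ~= W H, with W playing the role of endmember
# signatures and H the abundances. No smoothness/sparseness constraint here.
import numpy as np

def nmf(V: np.ndarray, rank: int, n_iter: int = 200, eps: float = 1e-9, seed: int = 0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update abundances
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update endmembers
    return W, H
```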

Relevance:

30.00%

Publisher:

Abstract:

Audio coding is used to compress digital audio signals, thereby reducing the number of bits needed to transmit or store an audio signal. This is useful when network bandwidth or storage capacity is very limited. Audio compression algorithms are based on an encoding and a decoding process. In the encoding step, the uncompressed audio signal is transformed into a coded representation, thereby compressing the audio signal. Thereafter, the coded audio signal eventually needs to be restored (e.g., for playback) by decoding it: the decoder receives the bitstream and converts it back into an uncompressed signal. ISO-MPEG is a standard for high-quality, low-bit-rate video and audio coding. The audio part of the standard comprises algorithms for high-quality, low-bit-rate audio coding, i.e., algorithms that reduce the original bit rate while guaranteeing high quality of the audio signal. The audio coding algorithms consist of MPEG-1 (with three different layers), MPEG-2, MPEG-2 AAC, and MPEG-4. This work presents a study of the MPEG-4 AAC audio coding algorithm. In addition, it presents implementations of the AAC algorithm on different platforms and comparisons among them. The implementations are in C, in Intel Pentium assembly, in C on a DSP processor, and in HDL. Since each implementation has its own application niche, each one is valid as a final solution. Moreover, another purpose of this work is the comparison among these implementations, considering estimated costs, execution time, and the advantages and disadvantages of each one.
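A back-of-the-envelope calculation illustrates the bit-rate reduction at stake; the 128 kbit/s AAC target below is an illustrative assumption, not a figure taken from this work.

```python
# Uncompressed CD-quality PCM versus an assumed AAC target bit rate.
sample_rate_hz = 44_100
bits_per_sample = 16
channels = 2

pcm_kbps = sample_rate_hz * bits_per_sample * channels / 1000   # 1411.2 kbit/s
aac_kbps = 128                                                   # assumed target
print(f"PCM: {pcm_kbps:.1f} kbit/s, AAC: {aac_kbps} kbit/s, "
      f"ratio = {pcm_kbps / aac_kbps:.1f}:1")                    # ~11:1
```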

Relevance:

30.00%

Publisher:

Abstract:

Exchange rate misalignment assessment has become more relevant in the recent period, particularly after the financial crisis of 2008. There are different methodologies to address real exchange rate misalignment, which is defined as the difference between the actual real effective exchange rate and some equilibrium norm. Different norms are available in the literature. Our paper aims to contribute to the literature by showing that the Behavioral Equilibrium Exchange Rate (BEER) approach adopted by Clark & MacDonald (1999), Ubide et al. (1999), Faruqee (1994), Aguirre & Calderón (2005) and Kubota (2009), among others, can be improved in the following two ways. The first consists of jointly modeling the real effective exchange rate, the trade balance, and the net foreign asset position. The second has to do with the possibility of explicitly testing the over-identifying restrictions implied by economic theory, allowing the analyst to show that these restrictions are not falsified by the empirical evidence. If the theory-based identifying restrictions are not rejected, it is also possible to decompose exchange rate misalignment into two pieces, one related to the long-run fundamentals of the exchange rate and the other related to external account imbalances. We also discuss some necessary conditions that should be satisfied for discarding trade balance information without compromising the exchange rate misalignment assessment. A statistical (but not theoretical) identification strategy for calculating exchange rate misalignment is also discussed. We illustrate the advantages of our approach by analyzing the Brazilian case. We show that the traditional approach disregards important information on external account equilibrium for this economy.
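The following is a minimal BEER-style sketch, not the joint model proposed in the paper: it regresses the real effective exchange rate on a set of fundamentals and reads the misalignment off the gap between actual and fitted values; the variable names (tot, nfa, prod) are hypothetical placeholders.

```python
# Simplified BEER-style misalignment: actual (log) REER minus the fitted
# equilibrium norm from a regression on fundamentals.
import pandas as pd
import statsmodels.api as sm

def beer_misalignment(df: pd.DataFrame) -> pd.Series:
    """df must contain columns: reer (log), tot, nfa, prod (hypothetical names)."""
    X = sm.add_constant(df[["tot", "nfa", "prod"]])
    fit = sm.OLS(df["reer"], X).fit()
    equilibrium = fit.fittedvalues          # estimated equilibrium norm
    return df["reer"] - equilibrium         # positive values suggest overvaluation
```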

Relevance:

30.00%

Publisher:

Abstract:

Background: The genome-wide identification of both morbid genes, i.e., genes whose mutations cause hereditary human diseases, and druggable genes, i.e., genes coding for proteins whose modulation by small molecules elicits phenotypic effects, requires experimental approaches that are time-consuming and laborious. Thus, a computational approach that could accurately predict such genes on a genome-wide scale would be invaluable for accelerating the pace of discovery of causal relationships between genes and diseases, as well as for determining the druggability of gene products.

Results: In this paper we propose a machine learning based computational approach to predict morbid and druggable genes on a genome-wide scale. For this purpose, we constructed a decision tree-based meta-classifier and trained it on datasets containing, for each morbid and druggable gene, network topological features, tissue expression profile and subcellular localization data as learning attributes. This meta-classifier correctly recovered 65% of known morbid genes with a precision of 66% and correctly recovered 78% of known druggable genes with a precision of 75%. It was then used to assign morbidity and druggability scores to genes not known to be morbid and druggable, and we showed a good match between these scores and literature data. Finally, we generated decision trees by training the J48 algorithm on the morbidity and druggability datasets to discover cellular rules for morbidity and druggability; among the rules, we found that the number of regulating transcription factors and plasma membrane localization are the most important factors for morbidity and druggability, respectively.

Conclusions: We were able to demonstrate that network topological features, together with tissue expression profile and subcellular localization, can reliably predict human morbid and druggable genes on a genome-wide scale. Moreover, by constructing decision trees based on these data, we could discover cellular rules governing morbidity and druggability.
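The sketch below is a hedged stand-in for the kind of decision-tree classifier described above, using scikit-learn's DecisionTreeClassifier in place of Weka's J48 (C4.5); the feature matrix and labels are hypothetical, and this is not the authors' meta-classifier pipeline.

```python
# Train a single decision tree on per-gene features (e.g. network topology,
# expression breadth, localization) and report precision/recall on a held-out set.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score

def train_morbidity_tree(X, y):
    """X: per-gene feature matrix, y: 1 = known morbid gene, 0 = otherwise."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
    y_hat = clf.predict(X_te)
    return clf, precision_score(y_te, y_hat), recall_score(y_te, y_hat)
```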

Relevance:

30.00%

Publisher:

Abstract:

Data mining of the Eucalyptus EST genome identified four clusters (EGCEST2257E11.g, EGBGRT3213F11.g, and EGCCFB1223H11.g) from the highly conserved 14-3-3 protein family, which modulates a wide variety of cellular processes. Multiple alignments were built from twenty-four 14-3-3 protein sequences retrieved from the GenBank databases and from the four pools of the Eucalyptus genome programs. The alignment showed two highly conserved regions corresponding to protein phosphorylation motifs and nine highly conserved regions corresponding to the linkage regions of the alpha-helix structure, based on the three-dimensional structure of the functional dimer. The amino acid differences within the structural and functional domains of the 14-3-3 plant proteins were identified and can explain the functional diversity of the different isoforms. The phylogenetic protein trees were built by maximum parsimony and neighbor-joining procedures from Clustal X alignments, using the PAUP software for phylogenetic analysis.
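A minimal neighbor-joining sketch with Biopython is shown below for illustration; the alignment file name is hypothetical, and this does not reproduce the Clustal X/PAUP workflow used in the study.

```python
# Build a neighbor-joining tree from a Clustal-format multiple alignment.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("14_3_3_proteins.aln", "clustal")   # hypothetical file
dm = DistanceCalculator("identity").get_distance(alignment)  # pairwise distances
tree = DistanceTreeConstructor().nj(dm)                      # neighbor-joining tree
Phylo.draw_ascii(tree)
```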

Relevance:

30.00%

Publisher:

Abstract:

Linear Matrix Inequalities (LMIs) are a powerful tool that has been used in many areas ranging from control engineering to system identification and structural design. There are many factors that make LMIs appealing. One is the fact that many design specifications and constraints can be formulated as LMIs [1]. Once formulated in terms of LMIs, a problem can be solved efficiently by convex optimization algorithms. The basic idea of the LMI method is to formulate a given problem as an optimization problem with a linear objective function and linear matrix inequality constraints. An intelligent structure involves distributed sensors and actuators and a control law to apply localized actions, in order to minimize or reduce the response at selected conditions. The objective of this work is to implement LMI-based control techniques applied to smart structures.
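As an illustration of the LMI idea, not of the controller designed in this work, the sketch below certifies stability of x' = Ax by searching for P > 0 with A'P + PA < 0, posed as a semidefinite feasibility problem in CVXPY; the matrix A and the margin eps are illustrative assumptions.

```python
# Lyapunov LMI feasibility: find symmetric P with P >= eps*I and
# A^T P + P A <= -eps*I, which certifies asymptotic stability of x' = A x.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # example stable system matrix (assumed)
n, eps = A.shape[0], 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]     # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)          # pure feasibility problem
prob.solve(solver=cp.SCS)
print(prob.status, P.value)
```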

Relevance:

30.00%

Publisher:

Abstract:

Nowadays there is great interest in damage identification using non-destructive tests. Predictive maintenance is one of the most important techniques based on vibration analysis, and it consists basically of monitoring the condition of structures or machines. A complete procedure should be able to detect the damage, foresee its probable time of occurrence, and diagnose the type of fault in order to plan the maintenance operation in a convenient form and at a convenient occasion. In practical problems, it is frequently necessary to solve nonlinear equations. Such processes have been studied for a long time due to their great utility. Among the available methods there are different approaches, for instance numerical (classic) methods, intelligent methods (artificial neural networks), evolutionary methods (genetic algorithms), and others. For better agreement, damage characterization can be classified by levels. A recent classification uses seven levels: detect the existence of the damage; detect and locate the damage; detect, locate and quantify the damage; predict the equipment's working life; self-diagnosis; control for structural self-repair; and simultaneous control and monitoring. Neural networks are computational models or systems for information processing that, in a general way, can be thought of as black-box devices that accept an input and produce an output. Artificial neural networks (ANNs) are based on biological neural networks and are capable of function identification and pattern classification. In this paper a methodology for locating structural damage is presented. The procedure is divided into two phases. The first uses system norms to localize the damage positions; the second uses an ANN to quantify the severity of the damage. The paper concludes with a numerical application in a beam-like structure with five cases of structural damage with different levels of severity. The results show the applicability of the presented methodology. A great advantage is the possibility of applying this approach to the identification of simultaneous damage.
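The sketch below illustrates one possible "system norm" ingredient for the first phase: the H2 norm of a state-space model computed from its controllability Gramian; the specific norm and damage indicator used in the paper are not reproduced here, and comparing healthy versus damaged models is only one conceivable use.

```python
# H2 norm of a continuous-time state-space model (A, B, C):
# ||G||_H2 = sqrt(trace(C Wc C^T)), where A Wc + Wc A^T + B B^T = 0.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A: np.ndarray, B: np.ndarray, C: np.ndarray) -> float:
    Wc = solve_continuous_lyapunov(A, -B @ B.T)   # controllability Gramian
    return float(np.sqrt(np.trace(C @ Wc @ C.T)))
```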

Relevance:

30.00%

Publisher:

Abstract:

The manufacture of automotive parts by machining with silicon nitride-based ceramic tools developed in Brazil is already a reality. Si3N4-based ceramic cutting tools offer high productivity due to their excellent hot hardness, which allows high cutting speeds. Under such conditions the cutting tool must be resistant to a combination of mechanical, thermal and chemical attacks. Silicon nitride based ceramic materials constitute a mature technology with a very broad base of current and potential applications. The best opportunities for Si3N4-based ceramics include ballistic armor, composite automotive brakes, diesel particulate filters, joint replacement products and others. The goal of this work was to present the latest advances in silicon nitride manufacture and its recent evolution in the machining of gray cast iron, compacted graphite iron and Ti-6Al-4V. Material characterization and machining tests were analyzed by X-ray diffraction, scanning electron microscopy, Vickers hardness, fracture toughness and the relevant technical norms. In recent works the authors have demonstrated advances in the control of microstructural, mechanical and physical properties. These facts prove that silicon nitride-based ceramic has enough resistance to withstand the impacts inherent to the machining of gray cast iron (CI), compacted graphite iron (CGI) and Ti-6Al-4V (6-4). Copyright © 2008 SAE International.

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Reaction norm models have been widely used to study genotype by environment interaction (G × E) in animal breeding. The objective of this study was to describe environmental sensitivity across first lactation in Brazilian Holstein cows using a reaction norm approach. A total of 50,168 individual monthly test-day (TD) milk yields (10 test days) from 7476 complete first lactations of Holstein cattle were analyzed. The statistical models for all traits (the 10 TDs and 305-day milk yield) included the fixed effects of contemporary group, age of cow (linear and quadratic effects), and days in milk (linear effect), except for 305-day milk yield. A hierarchical reaction norm model (HRNM) based on an unknown covariate was used. The present study showed the presence of G × E in milk yield across the first lactation of Holstein cows. The variation in the heritability estimates implies differences in the response to selection depending on the environment in which the animals of this population are evaluated. In the average environment, the heritabilities for all traits were rather similar, ranging from 0.02 to 0.63. The scaling effect of G × E predominated throughout most of lactation. Particularly during the first two months of lactation, G × E caused reranking of breeding values. It is therefore important to include the environmental sensitivity of animals according to the phase of lactation in the genetic evaluations of Holstein cattle in tropical environments.
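As a simplified illustration only: a classical reaction norm with a known environmental gradient can be fit as a random-slope mixed model, whereas the HRNM above treats the covariate as unknown; the column names (milk, env, sire) are hypothetical.

```python
# Random intercept and slope per sire across an assumed-known environmental
# gradient; the sire-specific slopes describe environmental sensitivity.
import pandas as pd
import statsmodels.formula.api as smf

def fit_reaction_norm(df: pd.DataFrame):
    model = smf.mixedlm("milk ~ env", data=df,
                        groups=df["sire"], re_formula="~env")
    return model.fit()
```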

Relevance:

30.00%

Publisher:

Abstract:

Prior studies of phylogenetic relationships among phocoenids based on morphology and on molecular sequence data are in conflict and yield unresolved relationships among species. This study evaluates a comprehensive set of cranial, postcranial, and soft anatomical characters to infer interrelationships among extant species and several well-known fossil phocoenids, using two different methods to analyze polymorphic data: polymorphic coding and a frequency step matrix. Our phylogenetic results confirmed phocoenid monophyly. The previously proposed division of Phocoenidae into two subfamilies was rejected, as was the alliance of the two extinct genera Salumiphocaena and Piscolithax with Phocoena dioptrica and Phocoenoides dalli. Extinct phocoenids are basal to all extant species. We also examined the origin and distribution of porpoises within the context of this phylogenetic framework. Phocoenid phylogeny, together with the available geologic evidence, suggests that the early history of phocoenids was centered in the North Pacific during the middle Miocene, with subsequent dispersal into the southern hemisphere in the middle Pliocene. A cooling period in the Pleistocene allowed dispersal of the southern ancestor of Phocoena sinus into the North Pacific (Gulf of California).

Relevance:

30.00%

Publisher:

Abstract:

Intron splicing is one of the most important steps in the maturation of a pre-mRNA. Although the sequence profiles around the splice sites have been studied extensively, the levels of sequence identity between the exonic sequences preceding the donor sites and the intronic sequences preceding the acceptor sites have not been examined as thoroughly. In this study we investigated identity patterns between the last 15 nucleotides of the exonic sequence preceding the 5' splice site and the intronic sequence preceding the 3' splice site in a set of human protein-coding genes that do not exhibit intron retention. We found that almost 60% of consecutive exons and introns in human protein-coding genes share at least two identical nucleotides at their 3' ends and that, on average, the sequence identity length is 2.47 nucleotides. Based on our findings we conclude that the 3' ends of exons and introns tend to have longer identical sequences within a gene than when they are taken from different genes. Our results hold even if the pairs are non-consecutive in the transcription order. © 2012 Elsevier Ltd. All rights reserved.
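A minimal sketch of the measurement described above, i.e. the length of the common suffix shared by the 3' end of an exon and the 3' end of the following intron, capped at the 15-nucleotide window:

```python
# Count identical nucleotides shared at the 3' ends of two sequences,
# comparing from the last position backwards within a fixed window.
def shared_3prime_identity(exon_3prime: str, intron_3prime: str, window: int = 15) -> int:
    a, b = exon_3prime[-window:], intron_3prime[-window:]
    count = 0
    for x, y in zip(reversed(a), reversed(b)):
        if x != y:
            break
        count += 1
    return count

# Example: shared_3prime_identity("GTTCAG", "CCTTAG") -> 2 (shared "AG")
```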