928 results for Iterative Implementation Model
Abstract:
This paper addresses the problem of model reduction for uncertain discrete-time systems with convex bounded (polytope-type) uncertainty. A precisely known reduced-order model is obtained in such a way that the H2 and/or H∞ guaranteed norm of the error between the original (uncertain) system and the reduced one is minimized. The optimization problems are formulated in terms of coupled (non-convex) Linear Matrix Inequalities (LMIs) and solved through iterative algorithms. Examples illustrate the results.
Abstract:
The iterative quadratic maximum likelihood (IQML) and method of direction estimation (MODE) algorithms are well-known high-resolution direction-of-arrival (DOA) estimation methods. Their solutions lead to a constrained optimization problem. The usual linear constraint performs poorly for certain DOA values. This work proposes a new linear constraint applicable to both DOA methods and compares its performance with two others: the unit-norm constraint and the usual linear constraint. It is shown that the proposed alternative outperforms the other constraints. The resulting computational complexity is also investigated.
Abstract:
Linear mixed effects models have been widely used in the analysis of data where responses are clustered around some random effects, so that it is not reasonable to assume independence between observations in the same cluster. In most biological applications, it is assumed that the distributions of the random effects and of the residuals are Gaussian, which makes inferences vulnerable to the presence of outliers. Here, linear mixed effects models with normal/independent residual distributions are described for robust inference. Specific distributions examined include univariate and multivariate versions of the Student-t, the slash, and the contaminated normal. A Bayesian framework is adopted, and Markov chain Monte Carlo (MCMC) is used to carry out the posterior analysis. The procedures are illustrated using birth weight data on rats in a toxicological experiment. Results from the Gaussian and robust models are contrasted, and it is shown how the implementation can be used for outlier detection. The thick-tailed distributions provide an appealing robust alternative to the Gaussian assumption in linear mixed models, and they are easily implemented using data augmentation and MCMC techniques.
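The normal/independent (scale-mixture) representation mentioned in this abstract is what makes the MCMC implementation straightforward: conditional on latent weights, a Student-t error model reduces to a weighted Gaussian one. The following is a minimal sketch of that data-augmentation idea for a simple location model with fixed scale and degrees of freedom, not the authors' full mixed-effects implementation; all names and settings are illustrative.

```python
import numpy as np

def gibbs_t_location(y, nu=4.0, sigma2=1.0, n_iter=2000, burn=500, seed=1):
    """Minimal Gibbs sampler for y_i = mu + e_i with Student-t errors,
    using the normal/independent (scale-mixture) representation:
    e_i | w_i ~ N(0, sigma2 / w_i),  w_i ~ Gamma(nu/2, nu/2).
    sigma2 and nu are held fixed; flat prior on mu."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    mu = np.median(y)                     # robust starting value
    draws = []
    for it in range(n_iter):
        # Augmentation weights: outlying points get small w_i and are
        # downweighted in the conditional for mu.
        resid2 = (y - mu) ** 2 / sigma2
        w = rng.gamma(shape=(nu + 1) / 2, scale=2.0 / (nu + resid2))
        # Conditional for mu is normal with precision sum(w) / sigma2.
        mu = rng.normal((w @ y) / w.sum(), np.sqrt(sigma2 / w.sum()))
        if it >= burn:
            draws.append(mu)
    return np.array(draws)
```

With a few gross outliers in the data, the posterior mean of mu stays near the bulk of the observations, unlike the ordinary sample mean; this is the outlier-robustness the abstract describes.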
Abstract:
Purpose - The purpose of this paper is to present designs for an accelerated life test (ALT). Design/methodology/approach - Bayesian methods and Markov chain Monte Carlo (MCMC) simulation were used. Findings - The paper proposes a Bayesian method based on MCMC for ALT under an Exponentiated-Weibull (EW) distribution (for lifetime) and an Arrhenius model (relating the stress variable and parameters). The paper concludes that this is a reasonable alternative to classical statistical methods, since the implementation of the proposed method is simple, not requiring advanced computational understanding, and inferences on the parameters can be made easily. Using the predictive density of a future observation, a procedure was developed to plan ALTs and also to verify whether the conformance fraction of the manufacturing process reaches some desired level of quality. This procedure is useful for statistical process control in many industrial applications. Research limitations/implications - The results may be applied in a semiconductor manufacturer. Originality/value - The Exponentiated-Weibull-Arrhenius model has never before been used to plan an ALT. © Emerald Group Publishing Limited.
Abstract:
When searching for prospective novel peptides, it is difficult to determine the biological activity of a peptide based only on its sequence. The trial-and-error approach is generally laborious, expensive and time consuming, due to the large number of different experimental setups required to cover a reasonable number of biological assays. To build a virtual model for Hymenoptera insects, 166 peptides were selected from the venoms and hemolymphs of wasps, bees and ants and submitted to a multivariate analysis based on nine different chemometric descriptors: GRAVY, aliphaticity index, number of disulfide bonds, total residues, net charge, pI value, Boman index, percentage of alpha helix, and flexibility prediction. Principal component analysis (PCA) using the non-linear iterative partial least-squares (NIPALS) algorithm was performed, without including any information about the biological activity of the peptides. This analysis permitted the grouping of peptides in a way that strongly correlated with their biological function. Six different groupings were observed, which seemed to correspond to the following groups: chemotactic peptides, mastoparans, tachykinins, kinins, antibiotic peptides, and a group of long peptides with one or two disulfide bonds and with biological activities that are not yet clearly defined. The partial overlap between the mastoparan group and the chemotactic peptides, tachykinins, kinins and antibiotic peptides in the PCA score plot may explain the frequent reports in the literature about the multifunctionality of some of these peptides. The mathematical model used in the present investigation can be used to predict the biological activities of novel peptides in this system, and it may also be easily applied to other biological systems. © 2011 Elsevier Inc.
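As a rough illustration of the kind of analysis this abstract describes, NIPALS extracts principal components one at a time by alternating least-squares, deflating the data matrix after each component. A minimal sketch follows; the random descriptor matrix stands in for the peptide data, which is not reproduced here.

```python
import numpy as np

def nipals_pca(X, n_components=2, tol=1e-8, max_iter=500):
    """PCA one component at a time via NIPALS alternating least squares.
    Returns scores T (samples x k) and loadings P (variables x k)."""
    X = np.asarray(X, dtype=float)
    X = X - X.mean(axis=0)                        # centre each descriptor column
    scores, loadings = [], []
    for _ in range(n_components):
        t = X[:, np.argmax(X.var(axis=0))].copy() # start from most variable column
        for _ in range(max_iter):
            p = X.T @ t / (t @ t)                 # regress columns of X on score t
            p /= np.linalg.norm(p)                # unit-length loading vector
            t_new = X @ p                         # project samples onto the loading
            if np.linalg.norm(t_new - t) <= tol * np.linalg.norm(t_new):
                t = t_new
                break
            t = t_new
        scores.append(t)
        loadings.append(p)
        X = X - np.outer(t, p)                    # deflate before next component
    return np.column_stack(scores), np.column_stack(loadings)
```

On a 166 x 9 descriptor matrix like the one in the paper, the resulting score plot (first two columns of T) is what reveals the peptide groupings; the components agree with SVD-based PCA up to sign.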
Abstract:
Background: The sequencing and publication of the cattle genome and the identification of single nucleotide polymorphism (SNP) molecular markers have provided new tools for animal genetic evaluation and genomic-enhanced selection. These new tools aim to increase the accuracy and scope of selection while decreasing the generation interval. The objective of this study was to evaluate the gain in accuracy afforded by the use of genomic information (Clarifide®, Pfizer) in the genetic evaluation of Brazilian Nellore cattle. Review: The application of genome-wide association studies (GWAS) is recognized as one of the most practical approaches to modern genetic improvement. Genomic selection is perhaps most suited to the improvement of traits with low heritability in zebu cattle. The primary interest in livestock genomics has been to estimate the effects of all the markers on the chip, conduct cross-validation to determine accuracy, and apply the resulting information in GWAS either alone [9] or in combination with bull test and pedigree-based genetic evaluation data. The cost of SNP50K genotyping, however, limits the commercial application of GWAS based on all the SNPs on the chip. Nevertheless, reasonable predictability and accuracy can be achieved in GWAS by using an assay that contains an optimally selected predictive subset of markers, as opposed to all the SNPs on the chip. The best way to integrate genomic information into genetic improvement programs is to include it in traditional genetic evaluations. This approach combines traditional expected progeny differences, based on phenotype and pedigree, with genomic breeding values based on the markers. Including the different sources of information in a multiple-trait genetic evaluation model for within-breed dairy cattle selection has yielded excellent results.
However, given the wide genetic diversity of zebu breeds, the high-density panel used for genomic selection in dairy cattle (Illumina BovineSNP50 array) appears insufficient for across-breed genomic predictions and selection in beef cattle. Today there is only one breed-specific targeted SNP panel with genomic predictions developed using animals from across the entire population of the Nellore breed (www.pfizersaudeanimal.com), which enables genomically enhanced selection. Genomic profiles are a way to enhance our current selection tools and achieve more accurate predictions for younger animals. Material and Methods: We analyzed age at first calving (AFC), accumulated productivity (ACP), stayability (STAY) and heifer pregnancy at 30 months (HP30) in Nellore cattle, fitting two different animal models: (1) a traditional single-trait model, and (2) a two-trait model in which the genomic breeding value, or molecular value prediction (MVP), was included as a correlated trait. All mixed-model analyses were performed using the statistical software ASREML 3.0. Results: Genetic correlation estimates between AFC, ACP, STAY, HP30 and their respective MVPs ranged from 0.29 to 0.46. Results also showed increases of 56%, 36%, 62% and 19% in the estimated accuracy of AFC, ACP, STAY and HP30, respectively, when MVP information was included in the animal model. Conclusion: Depending on the trait, integration of MVP information into the genetic evaluation increased accuracy by 19% to 62% as compared to traditional genetic evaluation. GE-EPD will be an effective tool to enable faster genetic improvement through more dependable selection of young animals.
Abstract:
According to the literature, electronic systems design largely employs a top-down methodology, which is vital for success in the synthesis and implementation of electronic systems. In this context, this paper presents a new computational tool, named BD2XML, to support electronic systems design. From a block diagram of a mixed-signal system, it generates object code in the XML markup language. XML is attractive because of its great flexibility and readability. BD2XML was developed under the object-oriented paradigm. The AD7528 converter, modeled in MATLAB/Simulink, was used as a case study; MATLAB/Simulink was chosen as a target due to its wide adoption in academia and industry. This case study demonstrates the functionality of BD2XML and prompts a reflection on the design challenges. An automatic tool for electronic systems design thus reduces design time and costs.
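As an illustration of the general idea of serializing a block diagram into XML, the following sketch uses Python's standard library. The element and attribute names are hypothetical and do not reflect the actual BD2XML schema or code generator.

```python
import xml.etree.ElementTree as ET

# Hypothetical description of a tiny mixed-signal diagram; the block
# types and field names are illustrative only.
blocks = [
    {"id": "dac1", "type": "DAC", "model": "AD7528"},
    {"id": "sum1", "type": "Sum"},
]
connections = [("dac1", "sum1")]

def diagram_to_xml(blocks, connections):
    """Serialize a block list and connection list into an XML string."""
    root = ET.Element("diagram")
    for b in blocks:
        ET.SubElement(root, "block", attrib={k: str(v) for k, v in b.items()})
    for src, dst in connections:
        ET.SubElement(root, "connection", attrib={"from": src, "to": dst})
    return ET.tostring(root, encoding="unicode")

print(diagram_to_xml(blocks, connections))
```

The flexibility the abstract mentions comes from exactly this kind of structure: downstream tools can parse the generated XML back into blocks and connections without knowing anything about the original diagram editor.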
Abstract:
Gesture-based applications have particularities, since users interact in a natural way, much as they do in the non-digital world. Hence, the software design process must meet new requirements. This paper presents a software development process model for these applications, covering requirement specification, design, implementation, and testing procedures. The steps and activities of the proposed model were tested through a case study of a puzzle game, in which the puzzle is completed when all pieces of a painting are correctly positioned by drag-and-drop actions performed with the user's hand gestures. The paper also presents the results obtained from applying a heuristic evaluation to this game. © 2012 IEEE.
Abstract:
Currently, many museums, botanic gardens and herbariums keep data of biological collections, and researchers use computational tools to digitize these data and provide access to them through data portals. The replication of databases in portals can be accomplished through the use of protocols and data schemas. However, the implementation of this solution demands a large amount of time, concerning both the transfer of fragments of data and the processing of data within the portal. With the growth of data digitization in institutions, this scenario tends to become increasingly exacerbated, making it hard to keep the records on the portals up to date. As an original contribution, this research proposes analysing the data replication process to evaluate the performance of portals. The Inter-American Biodiversity Information Network (IABIN) biodiversity data portal of pollinators was used as a case study, which supports both situations: conventional data replication of records of specimen occurrences and of interactions between them. With the results of this research, it is possible to simulate a situation before its implementation, thus predicting the performance of replication operations. Additionally, these results may contribute to future improvements to this process, in order to decrease the time required to make the data available in portals. © Rinton Press.
Abstract:
This article presents a model for fostering learner autonomy, shows some results achieved in applying this model, and discusses challenges still to be faced. The model comprises the investigation of problem areas in each subject's individual learning process, the identification of their preferred learning styles, the use of technological tools to improve autonomy in learning, the development of a wider range of language-learning strategies, and the implementation of self-monitoring and self-assessment routines. This model has been applied over the last three years with Language Arts (Letras) students pursuing teaching degrees in German, French or English. Three kinds of results emerge from the research data: first, the model proved effective in scaffolding the students' autonomous language learning; second, the autonomous learning experiences lived by these future language teachers may be mirrored in their future professional lives with their own students; finally, the data from the research participants may shed light on the variety of ways in which people learn in the local context.
Abstract:
Valency is an inherent property of nominalizations representing higher-order entities, and as such it should be included in their underlying representation. On the basis of this assumption, I postulate that cases of non-overt arguments, which are very common in Brazilian Portuguese and in many other languages of the world, should be considered a special type of valency realization. This paper aims to give empirical support to this postulate by showing that non-overt arguments are both semantically and pragmatically motivated. The semantic and pragmatic motivations for non-overt arguments may be accounted for by the dynamic implementation of the FDG model. I argue that the way valency is realized by means of non-overt arguments suggests a strong parallelism between nominalizations and other types of non-finite embedded constructions – like infinitival and participial ones. By providing empirical evidence for this parallelism I arrive at the conclusion that there are at least three kinds of non-finite embedded constructions, rather than only two, as suggested by Dik (1997).