991 results for Error-resilient Applications
Abstract:
This paper studies semistability of the recursive Kalman filter in the context of linear time-varying (LTV), possibly nondetectable systems with incorrect noise information. Semistability is a key property, as it ensures that the actual estimation error does not diverge exponentially. We explore structural properties of the filter to obtain a necessary and sufficient condition for the filter to be semistable. The condition involves neither limiting gains nor the solution of Riccati equations, as these can be difficult to obtain numerically and may not exist. We also compare semistability with the notions of stability and stability w.r.t. the initial error covariance, and we show that semistability, in a sense, makes no distinction between persistent and nonpersistent incorrect noise models, as opposed to stability. In the linear time-invariant scenario we obtain algebraic, easy-to-test conditions for semistability and stability, which complement results available in the context of detectable systems. Illustrative examples are included.
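The divergence question above can be illustrated with a toy simulation, a minimal sketch assuming hypothetical system matrices (not the paper's construction): a recursive Kalman filter runs with a deliberately incorrect measurement-noise covariance while the actual estimation error is tracked.

```python
import numpy as np

# Minimal sketch (NOT the paper's construction): a Kalman filter whose
# assumed measurement-noise variance differs from the true one; we track
# the actual estimation error. All matrices below are hypothetical.
rng = np.random.default_rng(0)

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # measurement matrix
Q_true, R_true = 0.01 * np.eye(2), np.array([[0.1]])
R_assumed = np.array([[1.0]])            # deliberately wrong noise model

x = np.zeros(2)                          # true state
xhat = np.array([5.0, -5.0])             # filter estimate (wrong init)
P = np.eye(2)                            # filter's error covariance

errors = []
for _ in range(200):
    # simulate the true system
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q_true)
    y = H @ x + rng.normal(0.0, np.sqrt(R_true[0, 0]), 1)
    # filter prediction and update, using the assumed (incorrect) R
    xhat = A @ xhat
    P = A @ P @ A.T + Q_true
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R_assumed)
    xhat = xhat + K @ (y - H @ xhat)
    P = (np.eye(2) - K @ H) @ P
    errors.append(np.linalg.norm(x - xhat))

# semistability would mean this error does not diverge exponentially
print(errors[-1])
```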
Abstract:
This paper studies a nonlinear, discrete-time matrix system arising in the stability analysis of Kalman filters. These systems present an internal coupling between the state components that gives rise to complex dynamic behavior. The problem of partial stability, which requires that a specific component of the state of the system converges exponentially, is studied and solved. The convergent state component is strongly linked with the behavior of Kalman filters, since it can be used to provide bounds for the error covariance matrix under uncertainties in the measurement noise. We exploit the special features of the system, mainly its connections with linear systems, to obtain an algebraic test for partial stability. Finally, motivated by applications in which polynomial divergence of the estimates is acceptable, we study and solve a partial semistability problem.
Abstract:
Background: Feature selection is a pattern recognition approach to choose important variables according to some criterion in order to distinguish or explain certain phenomena (i.e., for dimensionality reduction). There are many genomic and proteomic applications that rely on feature selection to answer questions such as selecting signature genes which are informative about some biological state, e.g., normal tissues and several types of cancer, or inferring a prediction network among elements such as genes, proteins and external stimuli. In these applications, a recurrent problem is the lack of samples with which to adequately estimate the joint probabilities between element states. A myriad of feature selection algorithms and criterion functions have been proposed, although it is difficult to point to the best solution for each application. Results: The intent of this work is to provide an open-source multiplatform graphical environment for bioinformatics problems, which supports many feature selection algorithms, criterion functions and graphic visualization tools such as scatterplots, parallel coordinates and graphs. A feature selection approach for growing genetic networks from seed genes (targets or predictors) is also implemented in the system. Conclusion: The proposed feature selection environment allows data analysis using several algorithms, criterion functions and graphic visualization tools. Our experiments have shown the software's effectiveness in two distinct types of biological problems. Moreover, the environment can be used in different pattern recognition applications, although its main focus is on bioinformatics tasks.
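As an illustration of the kind of algorithm such an environment supports, here is a generic sketch of sequential forward selection with a hypothetical correlation-based criterion function; this is not the tool's own code, and the data are synthetic.

```python
import numpy as np

def sfs(X, y, k, criterion):
    """Sequential forward selection: greedily add the feature that most
    improves the criterion. A generic sketch, not the tool's implementation."""
    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < k and remaining:
        best_f, best_score = None, -np.inf
        for f in remaining:
            score = criterion(X[:, selected + [f]], y)
            if score > best_score:
                best_f, best_score = f, score
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Hypothetical criterion: |correlation| of the mean of the chosen features
# with the class label.
def corr_criterion(Xs, y):
    return abs(np.corrcoef(Xs.mean(axis=1), y)[0, 1])

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 100).astype(float)
X = rng.normal(size=(100, 5))
X[:, 2] += y                 # make feature 2 informative about the label
print(sfs(X, y, 2, corr_criterion))
```

In practice the criterion function would be one of the many the environment offers; the greedy loop itself is unchanged.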
Abstract:
An (n, d)-expander is a graph G = (V, E) such that for every X ⊆ V with |X| ≤ 2n − 2 we have |Γ_G(X)| ≥ (d + 1)|X|. A tree T is small if it has at most n vertices and maximum degree at most d. Friedman and Pippenger (1987) proved that any (n, d)-expander contains every small tree. However, their elegant proof does not seem to yield an efficient algorithm for obtaining the tree. In this paper, we give an alternative result that does admit a polynomial-time algorithm for finding the immersion of any small tree in subgraphs G of (N, D, λ)-graphs Λ, as long as G contains a positive fraction of the edges of Λ and λ/D is small enough. In several applications of the Friedman-Pippenger theorem, including the ones in the original paper of those authors, the (n, d)-expander G is a subgraph of an (N, D, λ)-graph as above. Therefore, our result suffices to provide efficient algorithms for such previously non-constructive applications. As an example, we discuss a recent result of Alon, Krivelevich, and Sudakov (2007) concerning the embedding of nearly spanning bounded-degree trees, the proof of which makes use of the Friedman-Pippenger theorem. We shall also show a construction, inspired by Wigderson-Zuckerman expander graphs, for which any sufficiently dense subgraph contains all trees of sizes and maximum degrees achieving essentially optimal parameters. Our algorithmic approach is based on a reduction of the tree-embedding problem to a certain on-line matching problem for bipartite graphs, solved by Aggarwal et al. (1996).
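The expansion condition in the opening sentence can be checked directly on tiny graphs by brute force. A sketch, assuming a plain adjacency-dict representation; it is exponential in the number of vertices, so it is an illustration of the definition only, not part of the paper's algorithm.

```python
from itertools import combinations

def neighborhood(adj, X):
    """Union of the neighbours of the vertices in X."""
    return {v for u in X for v in adj[u]}

def is_expander(adj, n, d):
    """Brute-force check of the (n, d)-expander condition
    |Gamma(X)| >= (d + 1)|X| for every X with |X| <= 2n - 2.
    Exponential in |V|; for tiny illustration graphs only."""
    V = list(adj)
    for size in range(1, min(2 * n - 2, len(V)) + 1):
        for X in combinations(V, size):
            if len(neighborhood(adj, X)) < (d + 1) * len(X):
                return False
    return True

# Hypothetical toy graph: the complete graph K4 as an adjacency dict.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(is_expander(adj, 2, 1))  # → True
```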
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test whether two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum over the space of trees of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using the Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high-quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson related processes.
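The max-flow reduction mentioned above can be solved by any augmenting-path method. A generic Edmonds-Karp (BFS-based Ford-Fulkerson) sketch on a hypothetical toy network follows; the paper's specific graph built from the two tree samples is not reproduced here.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: BFS-based Ford-Fulkerson. `cap` maps node -> {node: capacity}.
    A generic sketch, not the paper's tree-test graph construction."""
    flow = 0
    # residual capacities, with reverse edges initialised to zero
    res = {u: dict(vs) for u, vs in cap.items()}
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    while True:
        # BFS for a shortest augmenting path
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # bottleneck along the path, then augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Hypothetical network: two disjoint unit-capacity s-t paths.
cap = {'s': {'a': 1, 'b': 1}, 'a': {'t': 1}, 'b': {'t': 1}, 't': {}}
print(max_flow(cap, 's', 't'))  # → 2
```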
Abstract:
Nitrogen is the nutrient most absorbed by the corn crop, has the most complex management, and accounts for the highest share of the cost of corn production. The objective of this work was to evaluate the economic viability of different rates and split applications of nitrogen fertilization, as urea, in the corn crop on a eutrophic Red Latosol (Oxisol). The study was carried out at the Experimental Station of the Regional Pole of the Sao Paulo Northwest Agribusiness Development (APTA), in Votuporanga, State of Sao Paulo, Brazil. The experimental design was randomized complete blocks with nine treatments and four replications, consisting of five N rates: 0, 55, 95, 135 and 175 kg ha(-1), with 15 kg ha(-1) applied at seeding and the remainder in top dressing: 40 and 80 kg ha(-1) N at forty days after seeding (DAS), or 1/2 + 1/2 at 20 and 40 DAS; 120 kg ha(-1) N split in 1/2 + 1/2 or 1/3 + 1/3 + 1/3 at 20, 40 or 60 DAS; 160 kg ha(-1) N split in 1/4 + 3/8 + 3/8 or 1/4 + 1/4 + 1/4 + 1/4 at 20, 40, 60 and 80 DAS. The application of 135 kg ha(-1) of N split into three applications provided the best benefit/cost ratio. The non-application of N provided the lowest economic return, proving to be unviable.
Abstract:
A long-term field experiment was carried out at the experimental farm of the Sao Paulo State University, Brazil, to evaluate the phytoavailability of Zn, Cd and Pb in a Typic Eutrorthox soil treated with sewage sludge for nine consecutive years, using sequential extraction and organic matter fractionation methods. During 2005-2006, maize (Zea mays L.) was used as the test plant, and the experimental design was randomized complete blocks with four treatments and five replicates. The treatments consisted of four sewage sludge rates (on a dry basis): 0.0 (control, with mineral fertilization), 45.0, 90.0 and 127.5 t ha(-1), applied annually for nine years. Before maize sowing, the sewage sludge was manually applied to the soil and incorporated to a depth of 10 cm. Soil samples (0-20 cm layer) for Zn, Cd and Pb analysis were collected 60 days after sowing. The successive applications of sewage sludge to the soil did not affect the heavy metal (Cd and Pb) fractions in the soil, with the exception of the Zn fractions. The Zn, Cd and Pb distributions in the soil were strongly associated with the humin and residual fractions, which are characterized by stable chemical bonds. Zinc, Cd and Pb in the soil showed low phytoavailability after nine years of successive applications of sewage sludge.
Abstract:
Science is a fundamental human activity and we trust its results because it has several error-correcting mechanisms. It is subject to experimental tests that are replicated by independent parties. Given the huge amount of information available and the information asymmetry between producers and users of knowledge, scientists have to rely on the reports of others. This makes it possible for social effects to influence the scientific community. Here, an Opinion Dynamics agent model is proposed to describe this situation. The influence of Nature through experiments is described as an external field that acts on the experimental agents. We will see that the retirement of old scientists can be fundamental to the acceptance of a new theory. We will also investigate the interplay between social influence and observations. This will allow us to gain insight into the problem of when social effects have a negligible influence on the conclusions of a scientific community and when we should worry about them.
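A toy sketch of the kind of model described, combining social averaging with an external field that acts only on a subset of "experimental" agents; the update rule and every parameter below are assumptions made for illustration, not the paper's actual model.

```python
import numpy as np

# Toy opinion-dynamics sketch (NOT the paper's exact model): agents hold a
# continuous opinion in [-1, 1] and average pairwise; "experimental" agents
# are additionally pulled toward the external field value set by Nature.
rng = np.random.default_rng(2)

n_agents, field, mu, steps = 50, 1.0, 0.1, 5000
opinions = rng.uniform(-1, 1, n_agents)
experimenters = rng.random(n_agents) < 0.2   # ~20% observe Nature directly

for _ in range(steps):
    i, j = rng.integers(0, n_agents, 2)
    # social influence: the pair moves to its average opinion
    mean = 0.5 * (opinions[i] + opinions[j])
    opinions[i] = opinions[j] = mean
    # external field acts only on experimental agents
    if experimenters[i]:
        opinions[i] += mu * (field - opinions[i])

print(opinions.mean())  # the community drifts toward the field value
```

Even with few experimental agents, mixing spreads the field's influence through the whole population, which is the qualitative interplay the abstract refers to.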
Abstract:
This study was designed to identify perseverative reaching tendencies in children with intellectual disabilities (ID), over a period of 1 year, by using a version of the Piagetian "A not B" task modified by Smith, Thelen, Titzer, and McLin (1999). Nine children with ID (eight with mild ID; one with moderate ID), aged 4.8 years at the beginning of the study, were assessed every 3 months for approximately 1 year, totaling four assessments. The results indicate that in the majority of cases perseveration was resilient, and that the visual system decoupled from the reaching, especially towards the later assessment periods at the end of the year. Across assessment periods, variability in the reached target seemed to increase in each trial (A1 through B2). These individuals, vulnerable to distraction and to attention and short-term memory deficits, are easily locked into rigid modes of motor habits. They are susceptible to perseveration while performing simple task contexts that are typically designed for 10- to 12-month-old, normally developing infants, therefore creating strong confinements to stable, rigid modes of elementary forms of behavior. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
In recent years, magnetic nanoparticles have been studied due to their potential applications as magnetic carriers in the biomedical area. These materials have been increasingly exploited as efficient delivery vectors, leading to opportunities for use as magnetic resonance imaging (MRI) agents, mediators of hyperthermia cancer treatment and in targeted therapies. Much attention has also been focused on "smart" polymers, which are able to respond to environmental changes, such as changes in temperature and pH. In this context, this article reviews the state of the art in stimuli-responsive magnetic systems for biomedical applications. The paper describes different types of stimuli-sensitive systems, mainly temperature- and pH-sensitive polymers, the combination of these characteristics with magnetic properties and, finally, it gives an account of their preparation methods. The article also discusses the main in vivo biomedical applications of such materials. A survey of the recent literature on various stimuli-responsive magnetic gels in biomedical applications is also included. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
The objective of this study is to graft the surface of carbon black by chemically introducing polymeric chains (Nafion(R)-like) with proton-conducting properties. This procedure aims at a better interaction of the proton-conducting phase with the metallic catalyst particles, as well as hindering subsequent support-particle agglomeration; loss of active surface can also be prevented. The proton conduction between the active electrocatalyst site and the Nafion(R) ionomer membrane should be enhanced, thus diminishing the ohmic drop in the polymer electrolyte membrane fuel cell (PEMFC). PtRu nanoparticles were supported on different carbon materials by the impregnation method and direct reduction with ethylene glycol, and characterized using, among other techniques, FTIR, XRD and TEM. The screen printing technique was used to produce membrane electrode assemblies (MEA) for single cell tests in H(2)/air (PEMFC) and methanol operation (DMFC). In the PEMFC experiments, PtRu supported on grafted carbon shows 550 mW cm(-2) g(metal)(-1) power density, which represents at least a 78% improvement in performance compared to the power density of commercial PtRu/C E-TEK. The DMFC results of the grafted electrocatalyst achieve around a 100% improvement. The polarization curve results clearly show that the main cause of the observed effect is the reduction in ohmic drop brought about by the grafted polymer. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are highly correlated with each other and, as a consequence, part of the measurement errors is masked. For that purpose, an innovation index (II), which quantifies the new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II and its error is totally masked. In other words, such a measurement does not bring any innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered; the total gross error of that measurement is then composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
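For context, the classical normalised-residual test that the abstract builds on can be sketched for a linear WLS estimator; the measurement model below is hypothetical, and this is the baseline test, not the paper's composed-residual method.

```python
import numpy as np

# Generic sketch of the classical normalised-residual gross error test in
# linear WLS state estimation (the baseline the abstract extends). The
# 4-measurement, 2-state model below is hypothetical.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])            # measurement Jacobian
R = 0.01 * np.eye(4)                   # measurement error covariance

x_true = np.array([1.0, 2.0])
z = H @ x_true
z[2] += 0.5                            # inject a gross error in measurement 3

# WLS estimate, residuals and residual covariance
W = np.linalg.inv(R)
G = H.T @ W @ H                        # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)
r = z - H @ x_hat
S = R - H @ np.linalg.solve(G, H.T)    # residual covariance
r_norm = np.abs(r) / np.sqrt(np.diag(S))

print(np.argmax(r_norm))  # → 2: the corrupted measurement is flagged
```

A critical measurement in the abstract's sense would have a zero diagonal entry in S, so its normalised residual is undefined and its error cannot be detected this way.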
Abstract:
Confined flows in tubes with permeable surfaces are associated with tangential filtration processes (microfiltration or ultrafiltration). The complexity of the phenomena does not allow for the development of exact analytical solutions; however, approximate solutions are of great interest for calculating the transmembrane outflow and estimating the concentration polarization phenomenon. In the present work, the generalized integral transform technique (GITT) was employed to solve the steady laminar flow of a Newtonian, incompressible fluid in permeable tubes. The mathematical formulation employed the parabolic differential equation of chemical species conservation (the convective-diffusive equation). The velocity profiles for the entrance-region flow, which appear in the convective terms of the equation, were assessed by solutions obtained from the literature. The velocity at the permeable wall was considered uniform, with the concentration at the tube wall regarded as varying with axial position. A computational methodology using global error control was applied to determine the wall concentration and the concentration boundary layer thickness. The results obtained for the local transmembrane flux and the concentration boundary layer thickness were compared against others in the literature. (C) 2007 Elsevier B.V. All rights reserved.
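For reference, the species-conservation equation referred to above, for steady axisymmetric flow in a tube, typically takes the form below; the notation is assumed here for illustration, not quoted from the paper.

```latex
u(r,z)\,\frac{\partial C}{\partial z} + v(r,z)\,\frac{\partial C}{\partial r}
  = \frac{D}{r}\,\frac{\partial}{\partial r}\!\left( r\,\frac{\partial C}{\partial r} \right)
```

where C is the solute concentration, u and v are the axial and radial velocity components, and D is the diffusion coefficient; the uniform permeation velocity at the wall enters through the boundary condition at r = R.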
Abstract:
This work presents an automated system for the measurement of form errors of mechanical components using an industrial robot. A three-probe error separation technique was employed to allow decoupling between the measured form error and the errors introduced by the robotic system. A mathematical model of the measuring system was developed to provide inspection results by means of the solution of a system of linear equations. A new self-calibration procedure, which employs redundant data from several runs, minimizes the influence of the probes' zero adjustment on the final result. Experimental tests applied to the measurement of straightness errors of mechanical components were carried out and demonstrated the effectiveness of the employed methodology. (C) 2007 Elsevier Ltd. All rights reserved.
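The error-separation idea can be illustrated with the classical sequential three-point method for straightness on synthetic data; the profile, motion errors and probe spacing below are all hypothetical, not the paper's robot setup or its self-calibration procedure.

```python
import numpy as np

# Sketch of the classical sequential three-point method (hypothetical data):
# three probes at spacing d read
#   m_k(x) = f(x + k*d) + z(x) + k*d*t(x),   k = 0, 1, 2,
# where f is the part's straightness profile and z, t are the carriage's
# translation and tilt errors. The combination m0 - 2*m1 + m2 cancels z and
# t exactly, leaving the second difference of f, which is then summed back.
n, d = 50, 1.0
xs = np.arange(n + 2)
f = 0.002 * (xs - n / 2) ** 2           # "true" profile (hypothetical)
z = np.sin(0.3 * np.arange(n))          # carriage translation error
t = 0.05 * np.cos(0.2 * np.arange(n))   # carriage tilt error

m0 = f[0:n] + z
m1 = f[1:n + 1] + z + d * t
m2 = f[2:n + 2] + z + 2 * d * t
s = m0 - 2 * m1 + m2                    # second difference of f, motion-free

# double summation recovers f up to a straight line
f_rec = np.zeros(n + 2)
for i in range(n):
    f_rec[i + 2] = s[i] + 2 * f_rec[i + 1] - f_rec[i]

def detrend(v):
    """Remove the least-squares straight line (the method's unknown datum)."""
    x = np.arange(len(v))
    a, b = np.polyfit(x, v, 1)
    return v - (a * x + b)

err = np.max(np.abs(detrend(f_rec) - detrend(f)))
print(err)  # tiny: the carriage motion errors have been separated out
```

The paper instead solves a full linear system built from the robot model, but the cancellation of carriage motion by probe combinations is the same underlying principle.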
Abstract:
With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option for improving the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are, in general for newer machines, the most important. The present work demonstrates the evaluation and modelling of the behaviour of the thermal errors of a CNC cylindrical grinding machine during its warm-up period.
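Warm-up thermal errors are commonly modelled by a first-order exponential response. A sketch fitting such a model to synthetic data with a coarse grid search over the time constant; the model choice and every number here are assumptions for illustration, not the paper's results for the grinding machine.

```python
import numpy as np

# Hypothetical sketch: model the warm-up thermal error as
#   e(t) = a * (1 - exp(-t / tau))
# and fit a, tau to noisy synthetic data by grid search over tau, with a
# closed-form least-squares amplitude for each candidate tau.
rng = np.random.default_rng(3)
t = np.linspace(0, 120, 60)                              # minutes of warm-up
e_meas = 12.0 * (1 - np.exp(-t / 35.0)) + rng.normal(0, 0.2, t.size)  # um

best = None
for tau in np.arange(5.0, 90.0, 0.5):
    basis = 1 - np.exp(-t / tau)
    a = (basis @ e_meas) / (basis @ basis)               # LSQ amplitude
    sse = np.sum((e_meas - a * basis) ** 2)
    if best is None or sse < best[0]:
        best = (sse, a, tau)

sse, a, tau = best
print(round(a, 1), round(tau, 1))  # close to the true 12.0 and 35.0
```

A compensation scheme would then subtract the fitted e(t) from the commanded position during warm-up.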