924 results for error model
Abstract:
In the above paper (ibid., vol. 71, no. 2, pp. 275-276, Feb. 1983), the authors present an adaptive scheme for continuous-time model reference adaptive systems (MRAS) in which relays replace the usual multipliers of existing MRAS. The commenter points out an error in the analysis of the hyperstability of the scheme, such that the validity of this configuration becomes an open question.
Abstract:
This paper addresses the problem of model reduction for uncertain discrete-time systems with convex bounded (polytope type) uncertainty. A precisely known reduced-order model is obtained in such a way that the guaranteed H2 and/or H∞ norm of the error between the original (uncertain) system and the reduced one is minimized. The optimization problems are formulated in terms of coupled (non-convex) linear matrix inequalities (LMIs) and solved through iterative algorithms. Examples illustrate the results.
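The coupled-LMI formulation of the paper is not reproduced here. As a rough point of reference only, the sketch below implements classical balanced truncation for a precisely known stable system, a standard baseline for norm-based model reduction; the system matrices and the target order are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Classical balanced truncation of a stable LTI system to order r.

    A standard baseline, NOT the coupled-LMI method of the paper.
    """
    # Controllability/observability Gramians: A P + P A' + B B' = 0, etc.
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Square-root balancing via Cholesky factors and an SVD.
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)            # singular values = Hankel singular values
    S_r = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt.T[:, :r] @ S_r           # right projection
    L = S_r @ U[:, :r].T @ Lq.T          # left projection (L @ T = I)
    return L @ A @ T, L @ B, C @ T, s    # reduced (Ar, Br, Cr) and HSVs

# Illustrative 4th-order stable system reduced to order 2 (assumed data).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) - 5 * np.eye(4)   # shift to ensure stability
B = rng.standard_normal((4, 1))
C = rng.standard_normal((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print("Hankel singular values:", hsv)
```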
Abstract:
In this work a new method of separated estimation for the ARMA spectral model is proposed, based on the modified Yule-Walker equations and on the least squares method. The new method consists of performing AR filtering on the generated random process to obtain a new random estimate, from which the ARMA model parameters are re-estimated, yielding a better spectrum estimate. Some numerical examples are presented to illustrate the performance of the proposed method, which is evaluated by the relative error and the average variation coefficient.
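As a rough illustration of the modified Yule-Walker ingredient named above (not the paper's full separated-estimation method), the sketch below estimates the AR part of an ARMA model by least squares over autocorrelation lags beyond the MA order; the orders and the synthetic ARMA(2,1) test signal are assumptions.

```python
import numpy as np

def modified_yule_walker(x, p, q, n_eqs=None):
    """Least-squares estimate of the AR(p) part of an ARMA(p, q) model
    from sample autocorrelations at lags > q (modified Yule-Walker)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    N = len(x)
    m = n_eqs or 3 * p                       # extra equations for least squares
    # Biased sample autocorrelation r[k] for k = 0 .. q + m + p
    r = np.array([x[:N - k] @ x[k:] for k in range(q + m + p + 1)]) / N
    # Stack r[k] = sum_j a_j r[k - j] for k = q+1 .. q+m (valid beyond MA order)
    R = np.array([[r[abs(k - j)] for j in range(1, p + 1)]
                  for k in range(q + 1, q + m + 1)])
    a, *_ = np.linalg.lstsq(R, r[q + 1: q + m + 1], rcond=None)
    return a                                  # AR coefficients a_1 .. a_p

# Synthetic ARMA(2,1) process with assumed parameters.
rng = np.random.default_rng(1)
e = rng.standard_normal(5000)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + e[t] + 0.4 * e[t - 1]
print(modified_yule_walker(x, p=2, q=1))      # roughly [1.2, -0.5]
```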
Abstract:
A branch and bound algorithm is proposed to solve the [image omitted]-norm model reduction problem for continuous and discrete-time linear systems, with convergence to the global optimum in finite time. The lower and upper bounds in the optimization procedure are described by linear matrix inequalities (LMIs). Two methods are also proposed to reduce the convergence time of the branch and bound algorithm: the first uses the Hankel singular values as a sufficient condition to stop the algorithm, providing fast convergence to the global optimum; the second assumes that the reduced model is in the controllable or observable canonical form. The [image omitted]-norm of the error between the original model and the reduced model is considered. Examples illustrate the application of the proposed method.
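The LMI machinery is not reconstructed here; the sketch below only shows the generic best-first branch-and-bound loop that such a method plugs its bounds into. The `lower`, `upper`, and `branch` callables are placeholders standing in for the paper's LMI-based bound computations, and the toy usage minimizes a scalar function by interval bisection.

```python
import heapq

def branch_and_bound(root, lower, upper, branch, tol=1e-3):
    """Generic best-first branch-and-bound skeleton.

    `lower(region)`/`upper(region)` stand in for LMI-based bound solves;
    `upper` returns (bound, certificate); `branch` splits a region in two.
    """
    best_val, best_cert = upper(root)
    heap = [(lower(root), 0, root)]           # (lower bound, tiebreak, region)
    counter = 1
    while heap:
        lb, _, region = heapq.heappop(heap)
        if lb >= best_val - tol:              # global gap closed: done
            break
        for child in branch(region):
            clb = lower(child)
            if clb >= best_val - tol:         # prune: cannot improve
                continue
            cub, cert = upper(child)
            if cub < best_val:                # tighter incumbent found
                best_val, best_cert = cub, cert
            heapq.heappush(heap, (clb, counter, child))
            counter += 1
    return best_val, best_cert

# Toy usage: minimize f(x) = (x - 0.3)**2 over [0, 1] by interval bisection.
f = lambda x: (x - 0.3) ** 2
lower = lambda iv: 0.0 if iv[0] <= 0.3 <= iv[1] else min(f(iv[0]), f(iv[1]))
upper = lambda iv: (f(m := (iv[0] + iv[1]) / 2), m)
branch = lambda iv: [(iv[0], (iv[0] + iv[1]) / 2), ((iv[0] + iv[1]) / 2, iv[1])]
print(branch_and_bound((0.0, 1.0), lower, upper, branch))
```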
Abstract:
The GPS observables are subject to several errors. Among them, the systematic ones have great impact, because they degrade the accuracy of the resulting positioning. These errors are mainly related to the GPS satellite orbits, multipath, and atmospheric effects. Lately, a method has been suggested to mitigate these errors: the semiparametric model with the penalised least squares technique (PLS). In this method, the errors are modeled as functions varying smoothly in time. This amounts to changing the stochastic model, into which the error functions are incorporated; the results obtained are similar to those obtained by changing the functional model. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method (CLS). In general, the solution requires a shorter data interval, minimizing costs. The method's performance was analyzed in two experiments using data from single-frequency receivers. The first was carried out on a short baseline, where the main error was multipath. In the second experiment, a baseline of 102 km was used; in this case, the predominant errors were due to ionospheric and tropospheric refraction. In the first experiment, using 5 minutes of data collection, the largest coordinate discrepancies relative to the ground truth reached 1.6 cm and 3.3 cm in the h coordinate for PLS and CLS, respectively. In the second, also using 5 minutes of data, the discrepancies were 27 cm in h for PLS and 175 cm in h for CLS. In these tests, it was also possible to verify a considerable improvement in ambiguity resolution using PLS relative to CLS, with a reduced data collection time interval. © Springer-Verlag Berlin Heidelberg 2007.
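A minimal sketch of the penalised least squares idea, assuming a toy linear model with a smooth additive systematic error and a second-difference roughness penalty; the GPS functional model, observation weighting, and ambiguity fixing of the paper are not included, and all data below are synthetic assumptions.

```python
import numpy as np

def penalised_least_squares(A, y, lam):
    """Jointly estimate parameters x and a smooth error function g from
    y = A x + g + noise, penalising the roughness of g (second differences)."""
    n, p = A.shape
    D = np.diff(np.eye(n), n=2, axis=0)       # second-difference operator
    Z = np.hstack([A, np.eye(n)])             # stacked unknown [x; g]
    H = Z.T @ Z
    H[p:, p:] += lam * (D.T @ D)              # roughness penalty acts on g only
    sol = np.linalg.solve(H, Z.T @ y)
    return sol[:p], sol[p:]                   # parametric part, smooth error

# Synthetic example (assumed data): fast sinusoidal regressors plus a slow
# systematic error standing in for multipath/atmospheric effects.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
A = np.column_stack([np.cos(20 * np.pi * t), np.sin(20 * np.pi * t)])
g_true = 0.5 * (t - 0.5) ** 2 + 0.3 * np.sin(2 * np.pi * t)
y = A @ np.array([1.0, 2.0]) + g_true + 0.02 * rng.standard_normal(200)
x_hat, g_hat = penalised_least_squares(A, y, lam=50.0)
print(x_hat)                                   # roughly recovers [1.0, 2.0]
```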
Abstract:
When searching for prospective novel peptides, it is difficult to determine the biological activity of a peptide based only on its sequence. The trial and error approach is generally laborious, expensive and time consuming due to the large number of different experimental setups required to cover a reasonable number of biological assays. To simulate a virtual model for Hymenoptera insects, 166 peptides were selected from the venoms and hemolymphs of wasps, bees and ants and applied to a mathematical model of multivariate analysis, with nine different chemometric components: GRAVY, aliphaticity index, number of disulfide bonds, total residues, net charge, pI value, Boman index, percentage of alpha helix, and flexibility prediction. Principal component analysis (PCA) with non-linear iterative projections by alternating least-squares (NIPALS) algorithm was performed, without including any information about the biological activity of the peptides. This analysis permitted the grouping of peptides in a way that strongly correlated to the biological function of the peptides. Six different groupings were observed, which seemed to correspond to the following groups: chemotactic peptides, mastoparans, tachykinins, kinins, antibiotic peptides, and a group of long peptides with one or two disulfide bonds and with biological activities that are not yet clearly defined. The partial overlap between the mastoparans group and the chemotactic peptides, tachykinins, kinins and antibiotic peptides in the PCA score plot may be used to explain the frequent reports in the literature about the multifunctionality of some of these peptides. The mathematical model used in the present investigation can be used to predict the biological activities of novel peptides in this system, and it may also be easily applied to other biological systems. © 2011 Elsevier Inc.
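The peptide descriptor matrix itself is not reproduced, but NIPALS is a standard algorithm; a minimal sketch on autoscaled stand-in data of the same shape (166 samples, 9 descriptors) might look like this. The random matrix is an assumption in place of the real descriptors.

```python
import numpy as np

def nipals_pca(X, n_components, tol=1e-10, max_iter=500):
    """PCA by the NIPALS algorithm: extract one component at a time,
    deflating X after each. Returns scores T and loadings P."""
    X = X.copy()
    T = np.zeros((X.shape[0], n_components))
    P = np.zeros((X.shape[1], n_components))
    for k in range(n_components):
        t = X[:, 0].copy()                     # any nonzero column as a start
        for _ in range(max_iter):
            p = X.T @ t / (t @ t)              # regress columns of X on t
            p /= np.linalg.norm(p)
            t_new = X @ p                      # regress rows of X on p
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        X -= np.outer(t, p)                    # deflate: remove component k
        T[:, k], P[:, k] = t, p
    return T, P

# Stand-in data shaped like the study (166 peptides x 9 descriptors),
# autoscaled as is usual in chemometrics before PCA.
rng = np.random.default_rng(3)
X = rng.standard_normal((166, 9))
X = (X - X.mean(axis=0)) / X.std(axis=0)
scores, loadings = nipals_pca(X, n_components=2)   # PC1/PC2 for a score plot
```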
Abstract:
Semi-supervised learning is applied to classification problems where only a small portion of the data items is labeled. In these cases, the reliability of the labels is a crucial factor, because mislabeled items may propagate wrong labels to a large portion or even the entire data set. This paper aims to address this problem by presenting a graph-based (network-based) semi-supervised learning method, specifically designed to handle data sets with mislabeled samples. The method uses teams of walking particles, with competitive and cooperative behavior, for label propagation in the network constructed from the input data set. The proposed model is nature-inspired and incorporates some features to make it robust to a considerable amount of mislabeled data items. Computer simulations show the performance of the method in the presence of different percentages of mislabeled data, in networks of different sizes and average node degrees. Importantly, these simulations reveal the existence of critical points of the mislabeled subset size, below which the network is free of wrong-label contamination, but above which the mislabeled samples start to propagate their labels to the rest of the network. Moreover, numerical comparisons have been made between the proposed method and other representative graph-based semi-supervised learning methods using both artificial and real-world data sets. Interestingly, the proposed method performs increasingly better than the others as the percentage of mislabeled samples grows. © 2012 IEEE.
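The particle competition-cooperation model is not reconstructed here. For contrast, the sketch below implements plain iterative label propagation on a k-nearest-neighbour graph (in the style of Zhou et al.), the kind of baseline that lacks a robustness mechanism against mislabeled nodes, which is exactly the failure mode the paper addresses; all parameters and the toy data are assumptions.

```python
import numpy as np

def knn_graph(X, k=5):
    """Symmetric adjacency matrix of a k-nearest-neighbour graph."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    A = np.zeros_like(d)
    rows = np.arange(len(X))[:, None]
    A[rows, np.argsort(d, axis=1)[:, :k]] = 1
    return np.maximum(A, A.T)

def propagate_labels(A, labels, n_classes, alpha=0.99, n_iter=200):
    """Plain iterative label propagation (NOT the particle model).

    Mislabeled seeds spread their labels unchecked, illustrating the
    contamination problem the paper's method is designed to resist.
    """
    n = A.shape[0]
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(1).clip(min=1)))
    S = D_inv_sqrt @ A @ D_inv_sqrt           # normalized graph smoother
    Y = np.zeros((n, n_classes))
    for i, y in enumerate(labels):
        if y >= 0:                            # -1 marks unlabeled items
            Y[i, y] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y   # smooth, anchored at seeds
    return F.argmax(1)

# Toy usage: two Gaussian blobs, one correct seed each, one mislabeled seed.
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
labels = -np.ones(60, dtype=int)
labels[0], labels[30] = 0, 1
labels[1] = 1                                 # a deliberately wrong label
print(propagate_labels(knn_graph(X), labels, n_classes=2))
```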
Abstract:
Cognitive radio is a growing area in wireless communication which offers an opportunity for fuller utilization of inefficiently used frequency spectrum: without creating interference for the primary (licensed) user, the secondary user is permitted to use the frequency band. However, designing a model that minimizes the interference produced by the secondary user for the primary user is a challenging task. In this study we propose a transmission model based on error-correcting codes dealing with a countable number of pairs of primary and secondary users. We obtain effective utilization of the spectrum by transmitting the primary and secondary users' data through linear codes of different given lengths. Using techniques from error-correcting codes, we develop a number of schemes for appropriate bandwidth distribution in cognitive radio.
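The paper's bandwidth-distribution schemes are not reconstructed here; the sketch below only illustrates the kind of linear block code such schemes build on, using the classical Hamming(7,4) code with syndrome decoding. The message and the flipped bit are arbitrary examples.

```python
import numpy as np

# Hamming(7,4): systematic generator G = [I | P] and parity check H = [P' | I].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg):
    return (msg @ G) % 2

def decode(word):
    s = (H @ word) % 2                        # syndrome
    if s.any():                               # nonzero: matches a column of H
        err = np.argmax((H.T == s).all(axis=1))
        word = word.copy()
        word[err] ^= 1                        # correct the single-bit error
    return word[:4]                           # data bits are systematic

msg = np.array([1, 0, 1, 1])
tx = encode(msg)
rx = tx.copy()
rx[2] ^= 1                                    # channel flips one bit
assert (decode(rx) == msg).all()
```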
Abstract:
Preservation of rivers and water resources is crucial in most environmental policies, and many efforts are made to assess water quality. Environmental monitoring of large river networks is based on measurement stations. Compared to the total length of the river networks, their number is often limited, and there is a need to extend environmental variables that are measured locally to the whole river network. The objective of this paper is to propose several relevant geostatistical models for river modeling. These models use river distance and are based on two contrasting assumptions about dependency along a river network. Inference using maximum likelihood, model selection criteria, and prediction by kriging are then developed. We illustrate our approach on two variables that differ in their distributional and spatial characteristics: summer water temperature and nitrate concentration. The data come from 141 to 187 monitoring stations in a river network located in the northeast of France that is more than 5000 km long and includes the Meuse and Moselle basins. We first evaluated different spatial models and then produced prediction maps and error variance maps for the whole stream network.
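A minimal sketch of ordinary kriging with an exponential covariance on a supplied distance matrix; with a river-distance matrix in place of the 1-D stand-in distances used below, the same equations apply. The covariance parameters and data are assumptions, not the paper's fitted models.

```python
import numpy as np

def ordinary_kriging(D_obs, D_pred, z, sill=1.0, rng_par=500.0, nugget=0.1):
    """Ordinary kriging with covariance C(h) = sill * exp(-h / rng_par).

    D_obs: (n x n) distances between stations; D_pred: (m x n) distances
    from prediction points to stations. Returns predictions and variances.
    """
    n = len(z)
    C = sill * np.exp(-D_obs / rng_par) + nugget * np.eye(n)
    c0 = sill * np.exp(-D_pred / rng_par)
    # Ordinary kriging system with a Lagrange multiplier for the mean constraint.
    K = np.block([[C, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    rhs = np.hstack([c0, np.ones((len(D_pred), 1))])
    w = np.linalg.solve(K, rhs.T)               # weights and multiplier per point
    pred = w[:n].T @ z
    var = sill + nugget - np.sum(rhs * w.T, axis=1)
    return pred, var

# Illustrative use with 1-D stand-in distances (assumed data).
s = np.sort(np.random.default_rng(4).uniform(0, 5000, 30))   # station positions
z = np.sin(s / 500) + 0.1 * np.random.default_rng(5).standard_normal(30)
g = np.linspace(0, 5000, 200)                                 # prediction grid
D_obs = np.abs(s[:, None] - s[None, :])
D_pred = np.abs(g[:, None] - s[None, :])
pred, var = ordinary_kriging(D_obs, D_pred, z)                # maps along the line
```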
Abstract:
Evaluations of measurement invariance provide essential construct validity evidence. However, the quality of such evidence is partly dependent upon the validity of the resulting statistical conclusions. The presence of Type I or Type II errors can render measurement invariance conclusions meaningless. The purpose of this study was to determine the effects of categorization and censoring on the behavior of the chi-square/likelihood ratio test statistic and two alternative fit indices (CFI and RMSEA) under the context of evaluating measurement invariance. Monte Carlo simulation was used to examine Type I error and power rates for the (a) overall test statistic/fit indices, and (b) change in test statistic/fit indices. Data were generated according to a multiple-group single-factor CFA model across 40 conditions that varied by sample size, strength of item factor loadings, and categorization thresholds. Seven different combinations of model estimators (ML, Yuan-Bentler scaled ML, and WLSMV) and specified measurement scales (continuous, censored, and categorical) were used to analyze each of the simulation conditions. As hypothesized, non-normality increased Type I error rates for the continuous scale of measurement and did not affect error rates for the categorical scale of measurement. Maximum likelihood estimation combined with a categorical scale of measurement resulted in more correct statistical conclusions than the other analysis combinations. For the continuous and censored scales of measurement, the Yuan-Bentler scaled ML resulted in more correct conclusions than normal-theory ML. The censored measurement scale did not offer any advantages over the continuous measurement scale. Comparing across fit statistics and indices, the chi-square-based test statistics were preferred over the alternative fit indices, and ΔRMSEA was preferred over ΔCFI. Results from this study should be used to inform the modeling decisions of applied researchers. However, no single analysis combination can be recommended for all situations. Therefore, it is essential that researchers consider the context and purpose of their analyses.
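Reproducing the multiple-group CFA design requires SEM software, so the sketch below only shows the Monte Carlo logic the study relies on: generate data under a true null, categorize at fixed thresholds, analyze, and count rejections. A two-sample t test stands in for the likelihood ratio test, and all settings are assumptions.

```python
import numpy as np
from scipy import stats

def type1_error_rate(n=200, thresholds=(-0.5, 0.5), n_reps=2000,
                     alpha=0.05, seed=6):
    """Monte Carlo Type I error rate of a two-group comparison after
    categorizing a continuous normal variable at fixed thresholds.

    A stripped-down stand-in for the paper's design: the CFA models,
    estimators, and fit indices are replaced by a simple t test.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_reps):
        # Both groups drawn from the SAME population: the null is true.
        g1 = rng.standard_normal(n)
        g2 = rng.standard_normal(n)
        # Categorization: continuous scores collapsed to ordered categories.
        c1 = np.digitize(g1, thresholds)
        c2 = np.digitize(g2, thresholds)
        rejections += stats.ttest_ind(c1, c2).pvalue < alpha
    return rejections / n_reps

print(type1_error_rate())   # close to 0.05 if the analysis is calibrated
```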
Abstract:
The enzymatically catalyzed template-directed extension of an ssDNA/primer complex is an important reaction of extraordinary complexity. The DNA polymerase does not merely facilitate the insertion of dNMP; it also performs rapid screening of substrates to ensure a high degree of fidelity. Several kinetic studies have determined rate constants and equilibrium constants for the elementary steps that make up the overall pathway. This information is used to develop a macroscopic kinetic model, using an approach described by Ninio [Ninio J., 1987. Alternative to the steady-state method: derivation of reaction rates from first-passage times and pathway probabilities. Proc. Natl. Acad. Sci. U.S.A. 84, 663–667]. The principal idea of the Ninio approach is to track a single template/primer complex over time and to identify the expected behavior. The average time to insert a single nucleotide is a weighted sum of several terms, including the actual time to insert a nucleotide plus delays due to polymerase detachment from either the ternary (template-primer-polymerase) or quaternary (+nucleotide) complexes, and time delays associated with the identification and ultimate rejection of an incorrect nucleotide from the binding site. The passage times of all events and their probabilities of occurrence are expressed in terms of the rate constants of the elementary steps of the reaction pathway. The model accounts for variations in the average insertion time with different nucleotides as well as the influence of the G+C content of the sequence in the vicinity of the insertion site. Furthermore, the model provides estimates of error frequencies. If nucleotide extension is recognized as a competition between successful insertions and time-delaying events, it can be described as a binomial process with a probability distribution. The distribution gives the probability of extending a primer/template complex by a certain number of base pairs, and in general it maps annealed complexes into extension products.
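A numeric sketch of the weighted-sum bookkeeping described above, under the simplifying assumption that each failed attempt (detachment, or binding and then rejecting a wrong nucleotide) restarts the insertion attempt; all rate values are hypothetical placeholders, not the measured constants.

```python
def mean_insertion_time(t_insert, delays):
    """Expected time to insert one nucleotide.

    `delays` is a list of (probability, mean_delay) pairs for the competing
    side events of one attempt; the attempt succeeds with the remaining
    probability at cost `t_insert`. Solving the renewal equation
    T = p_s * t_insert + sum_i p_i * (d_i + T) for T gives the closed form.
    """
    p_fail = sum(p for p, _ in delays)
    p_succ = 1.0 - p_fail
    expected_delay = sum(p * d for p, d in delays)
    return t_insert + expected_delay / p_succ

# HYPOTHETICAL numbers: 90% of attempts insert directly in 20 ms; 7% end in
# detachment costing 0.5 s; 3% bind a wrong nucleotide, rejected after 0.1 s.
print(mean_insertion_time(0.020, [(0.07, 0.5), (0.03, 0.1)]))  # ~0.062 s
```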
Abstract:
In the clinical setting, the early detection of myocardial injury induced by doxorubicin (DXR) is still considered a challenge. To assess whether ultrasonic tissue characterization (UTC) can identify early DXR-related myocardial lesions and their correlation with myocardial collagen percentages, we studied 60 rats at baseline and prospectively after 2 mg/kg/week DXR intravenous infusion. Echocardiographic examinations were conducted at baseline and at 8, 10, 12, 14, and 16 mg/kg DXR cumulative doses. The left ventricular ejection fraction (LVEF), shortening fraction (SF), and the UTC indices were measured: the corrected coefficient of integrated backscatter (CC-IBS; tissue IBS intensity/phantom IBS intensity) and the cyclic variation magnitude of this intensity curve (MCV). The variation of each study parameter with DXR dose was expressed as the mean and standard error at specific DXR dosages and at baseline. The collagen percentage was calculated in six control-group animals and 24 DXR-group animals. From 8 mg/kg to 16 mg/kg DXR, CC-IBS increased (1.29 ± 0.27 vs. 1.1 ± 0.26 at baseline; p = 0.005) and MCV decreased (9.1 ± 2.8 vs. 11.02 ± 2.6 at baseline; p = 0.006). LVEF presented only a slight but significant decrease from 8 mg/kg to 16 mg/kg DXR (80.4 ± 6.9% vs. 85.3 ± 6.9% at baseline; p = 0.005). CC-IBS was 72.2% sensitive and 83.3% specific in detecting collagen deposition of 4.24% (AUC = 0.76). LVEF was not accurate in detecting initial collagen deposition (AUC = 0.54). In conclusion, UTC identified DXR-related myocardial lesions earlier than LVEF, showing good accuracy in detecting the initial collagen deposition in this experimental animal model.