28 results for Error estimator
in Repositório Institucional UNESP - Universidade Estadual Paulista "Julio de Mesquita Filho"
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The problem of signal tracking, in the presence of a disturbance signal in the plant, is solved using a zero-variation methodology. A state feedback controller is designed in order to minimise the H-2-norm of the closed-loop system, such that the effect of the disturbance is attenuated. Then, a state estimator is designed and the modification of the zeros is used to minimise the H-infinity-norm from the reference input signal to the error signal. The error is taken to be the difference between the reference and the output signals, thereby making it a tracking problem. The design is formulated in a linear matrix inequality framework, such that the optimal solution of the stated control problem is obtained. Practical examples illustrate the effectiveness of the proposed method.
Abstract:
Workplace accidents involving machines are relevant for their magnitude and their impacts on worker health. Despite consolidated critical statements, explanations centered on operator error remain predominant among industry professionals, hampering preventive measures and the improvement of production-system reliability. Several initiatives were adopted by enforcement agencies in partnership with universities to stimulate the production and diffusion of analysis methodologies with a systemic approach. Starting from one accident case involving a worker who operated a brake-clutch mechanical press, the article explores cognitive aspects and the existence of traps in the operation of this machine. It deals with a large press that, despite being equipped with a light curtain in the areas of access to the pressing zone, did not meet legal requirements. The safety devices gave rise to an illusion of safety, permitting activation of the machine while a worker was still inside the operational zone. Preventive interventions must encourage the tailoring of systems to the characteristics of workers, minimizing the creation of traps and fostering safety policies and practices that replace judgments of the behaviors involved in accidents with analyses of the reasons that lead workers to act as they do.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
We search for planar deviations of statistical isotropy in the Wilkinson Microwave Anisotropy Probe (WMAP) data by applying a recently introduced angular-planar statistic both to full-sky and to masked temperature maps, including in our analysis the effect of residual foreground contamination and systematics in the foreground-removal process as sources of error. We confirm earlier findings that full-sky maps exhibit anomalies at the planar and angular scales (l; l) = (2; 5), (4; 7), and (6; 8), which seem to be due to unremoved foregrounds, since these features are present in the full-sky map but not in the masked maps. On the other hand, our test detects slightly anomalous results at the scales (l; l) = (10; 8) and (2; 9) in the masked maps but not in the full-sky one, indicating that the foreground-cleaning procedure (used to generate the full-sky map) could not only be creating false anomalies but also hiding existing ones. We also find a significant trace of an anomaly in the full-sky map at the scale (l; l) = (10; 5), which is still present when we consider galactic cuts of 18.3% and 28.4%. As regards the quadrupole (l = 2), we find a coherent over-modulation over the whole celestial sphere, for all full-sky and cut-sky maps. Overall, our results seem to indicate that current CMB maps derived from WMAP data do not show significant signs of anisotropies, as measured by our angular-planar estimator. However, we have detected a curious coherence of planar modulations at angular scales of the order of the galaxy's plane, which may be an indication of residual contaminations in the full- and cut-sky maps.
Abstract:
A comparative study of aggregation error bounds for the generalized transportation problem is presented. A priori and a posteriori error bounds were derived and a computational study was performed to (a) test the correlation between the a priori, the a posteriori, and the actual error and (b) quantify the difference of the error bounds from the actual error. Based on the results we conclude that calculating the a priori error bound can be considered as a useful strategy to select the appropriate aggregation level. The a posteriori error bound provides a good quantitative measure of the actual error.
Abstract:
The error function appears in several equations describing electrode processes, but in practice only approximations of this function are used. In this work, these and other approximations are studied and evaluated according to their precision.
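As a concrete illustration of the kind of approximation evaluated here, the classic Abramowitz and Stegun rational approximation 7.1.26 can be compared against a library implementation of erf. This is a generic sketch of one well-known approximation, not necessarily one of those studied in the work.

```python
import math

def erf_as(x: float) -> float:
    """Abramowitz & Stegun 7.1.26 rational approximation to erf(x).

    Maximum absolute error about 1.5e-7 for x >= 0; odd symmetry
    extends it to negative arguments.
    """
    sign = 1.0 if x >= 0 else -1.0
    x = abs(x)
    t = 1.0 / (1.0 + 0.3275911 * x)
    poly = t * (0.254829592
           + t * (-0.284496736
           + t * (1.421413741
           + t * (-1.453152027
           + t * 1.061405429))))
    return sign * (1.0 - poly * math.exp(-x * x))

# Compare against the library erf at a few points.
for x in (0.0, 0.5, 1.0, 2.0, -1.5):
    assert abs(erf_as(x) - math.erf(x)) < 2e-7
```

Evaluating precision then reduces to measuring the maximum deviation of each candidate approximation from a high-accuracy reference, as the loop above does pointwise.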
Abstract:
A number of studies have analyzed various indices of the final position variability in order to provide insight into different levels of neuromotor processing during reaching movements. Yet the possible effects of movement kinematics on variability have often been neglected. The present study was designed to test the effects of movement direction and curvature on the pattern of movement variable errors. Subjects performed series of reaching movements over the same distance and into the same target. However, due either to changes in starting position or to applied obstacles, the movements were performed in different directions or along the trajectories of different curvatures. The pattern of movement variable errors was assessed by means of the principal component analysis applied on the 2-D scatter of movement final positions. The orientation of these ellipses demonstrated changes associated with changes in both movement direction and curvature. However, neither movement direction nor movement curvature affected movement variable errors assessed by area of the ellipses. Therefore it was concluded that the end-point variability depends partly, but not exclusively, on movement kinematics.
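The ellipse-based analysis described above can be sketched in a few lines: for a 2-D scatter of movement endpoints, the orientation and area of the principal-component ellipse follow in closed form from the 2x2 covariance matrix. The function below is a minimal illustration of that computation, not the authors' analysis pipeline.

```python
import math

def endpoint_ellipse(points):
    """Principal-component ellipse of a 2-D endpoint scatter.

    Returns (orientation_deg, area) of the one-standard-deviation
    ellipse, using the closed-form eigendecomposition of the 2x2
    sample covariance matrix.
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    # Eigenvalues of [[sxx, sxy], [sxy, syy]].
    mean_v = (sxx + syy) / 2.0
    delta = math.hypot((sxx - syy) / 2.0, sxy)
    lam1, lam2 = mean_v + delta, mean_v - delta
    # Orientation of the major axis (first principal component).
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    area = math.pi * math.sqrt(lam1 * lam2)  # 1-SD ellipse area
    return math.degrees(theta), area
```

A scatter elongated along the 45-degree diagonal, for instance, yields an orientation near 45 degrees, while the area stays tied to the overall spread rather than the direction.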
Abstract:
The serological detection of antibodies against human papillomavirus (HPV) antigens is a useful tool to determine exposure to genital HPV infection and in predicting the risk of infection persistence and associated lesions. Enzyme-linked immunosorbent assays (ELISAs) are commonly used for seroepidemiological studies of HPV infection but are not standardized. Intra- and interassay performance variation is difficult to control, especially in cohort studies that require the testing of specimens over extended periods. We propose the use of normalized absorbance ratios (NARs) as a standardization procedure to control for such variations and minimize measurement error. We compared NAR and ELISA optical density (OD) values for the strength of the correlation between serological results for paired visits 4 months apart and HPV-16 DNA positivity in cervical specimens from a cohort investigation of 2,048 women tested with an ELISA using HPV-16 virus-like particles. NARs were calculated by dividing the mean blank-subtracted (net) ODs by the equivalent values of a control serum pool included in the same plate in triplicate, using different dilutions. Stronger correlations were observed with NAR values than with net ODs at every dilution, with an overall reduction in nonexplained regression variability of 39%. Using logistic regression, the ranges of odds ratios of HPV-16 DNA positivity contrasting upper and lower quintiles at different dilutions and their averages were 4.73 to 5.47 for NARs and 2.78 to 3.28 for net ODs, with corresponding significant improvements in seroreactivity-risk trends across quintiles when NARs were used. The NAR standardization is a simple procedure to reduce measurement error in seroepidemiological studies of HPV infection.
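The NAR calculation described above is a simple per-plate normalization. The sketch below illustrates it under the stated assumptions (net ODs already blank-subtracted, control pool run in triplicate on the same plate); the function name is ours, not from the paper.

```python
def normalized_absorbance_ratio(net_ods, control_net_ods):
    """Normalized absorbance ratios (NARs): each specimen's mean
    blank-subtracted (net) OD divided by the mean net OD of the
    control serum pool included on the same plate in triplicate.

    net_ods: list of net ODs for the specimens on one plate.
    control_net_ods: the triplicate net ODs of the control pool.
    """
    control_mean = sum(control_net_ods) / len(control_net_ods)
    return [od / control_mean for od in net_ods]

# A specimen reading twice the control pool gets a NAR of 2.0,
# regardless of plate-to-plate drift in absolute OD values.
nars = normalized_absorbance_ratio([0.5, 1.0], [0.5, 0.5, 0.5])
```

Because both numerator and denominator come from the same plate, systematic shifts in assay intensity cancel out, which is what makes the ratio comparable across plates and visits.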
Abstract:
A systematic procedure of zero placement to design control systems is proposed. A state feedback controller with vector gain K is used to perform the pole placement. An estimator with vector gain L is also designed for output feedback control. A new systematic method of zero assignment to reduce the effect of the undesirable poles of the plant and also to increase the velocity error constant is presented. The methodology places the zeros in a specific region and it is based on Linear Matrix Inequalities (LMIs) framework, which is a new approach to solve this problem. Three examples illustrate the effectiveness of the proposed method.
Abstract:
In recent decades there has been great development in the study of control systems to attenuate the harmful effects of natural events on large structures, such as buildings and bridges. Magnetorheological (MR) fluid, a smart material, has been considered in many controller design proposals. This work presents a state-feedback controller design based on the LMI (Linear Matrix Inequalities) approach. Experimental tests were carried out on a structure with two degrees of freedom fitted with an MR shock absorber, in order to specify the features of this semi-active controller. In this case, some states are not measurable, so state feedback requires the design of an estimator. The coupling of the MR damper causes a variation in the dynamic properties, so an identification method based on experimental input/output signals was used for comparison with the numerical application. The Prediction Error Method (PEM) was used to find the physical characteristics of the system through a modal state-space realization. This proposal allows the design of a semi-active controller whose main characteristic is the possibility of varying the damping coefficient.
Abstract:
The term human factors is used by professionals in various fields to understand the behavior of human beings at work. While carrying out a cooperative activity with a computer system, a human being may cause an undesirable situation in his or her task. This paper starts from the principle that human errors may be considered a cause of, or a contributing factor to, a series of accidents and incidents in the many diverse fields in which human beings interact with automated systems. We propose a simulator of performance in error with the potential to assist the Human-Computer Interaction (HCI) project manager in the construction of critical systems. © 2011 Springer-Verlag.
Abstract:
Semi-supervised learning is applied to classification problems where only a small portion of the data items is labeled. In these cases, the reliability of the labels is a crucial factor, because mislabeled items may propagate wrong labels to a large portion of, or even the entire, data set. This paper aims to address this problem by presenting a graph-based (network-based) semi-supervised learning method, specifically designed to handle data sets with mislabeled samples. The method uses teams of walking particles, with competitive and cooperative behavior, for label propagation in the network constructed from the input data set. The proposed model is nature-inspired and incorporates features that make it robust to a considerable amount of mislabeled data items. Computer simulations show the performance of the method in the presence of different percentages of mislabeled data, in networks of different sizes and average node degrees. Importantly, these simulations reveal the existence of critical points in the mislabeled subset size, below which the network is free of wrong-label contamination, but above which the mislabeled samples start to propagate their labels to the rest of the network. Moreover, numerical comparisons have been made between the proposed method and other representative graph-based semi-supervised learning methods using both artificial and real-world data sets. Interestingly, the proposed method increasingly outperforms the others as the percentage of mislabeled samples grows. © 2012 IEEE.
Abstract:
In this work we analyze the convergence of solutions of the Poisson equation with Neumann boundary conditions in a two-dimensional thin domain with highly oscillatory behavior. We consider the case where the height of the domain and the amplitude and period of the oscillations are all of the same order, given by a small parameter ε > 0. Using an appropriate corrector approach, we show strong convergence and give error estimates when we replace the original solutions by the first-order expansion obtained through the Multiple-Scale Method.
Abstract:
In most studies on beef cattle longevity, only the cows reaching a given number of calvings by a specific age are considered in the analyses. With the aim of evaluating all cows with productive life in herds, taking into consideration the different forms of management on each farm, it was proposed to measure cow longevity from age at last calving (ALC), that is, the most recent calving registered in the files. The objective was to characterize this trait in order to study the longevity of Nellore cattle, using the Kaplan-Meier estimators and the Cox model. The covariables and class effects considered in the models were age at first calving (AFC), year and season of birth of the cow and farm. The variable studied (ALC) was classified as presenting complete information (uncensored = 1) or incomplete information (censored = 0), using the criterion of the difference between the date of each cow's last calving and the date of the latest calving at each farm. If this difference was >36 months, the cow was considered to have failed. If not, this cow was censored, thus indicating that future calving remained possible for this cow. The records of 11 791 animals from 22 farms within the Nellore Breed Genetic Improvement Program ('Nellore Brazil') were used. In the estimation process using the Kaplan-Meier model, the variable of AFC was classified into three age groups. In individual analyses, the log-rank test and the Wilcoxon test in the Kaplan-Meier model showed that all covariables and class effects had significant effects (P < 0.05) on ALC. In the analysis considering all covariables and class effects, using the Wald test in the Cox model, only the season of birth of the cow was not significant for ALC (P > 0.05). This analysis indicated that each month added to AFC diminished the risk of the cow's failure in the herd by 2%. Nonetheless, this does not imply that animals with younger AFC had less profitability. Cows with greater numbers of calvings were more precocious than those with fewer calvings. Copyright © The Animal Consortium 2012.
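The censoring rule described in the abstract reduces to a single comparison. The sketch below assumes calving dates are already expressed in months; the function name is illustrative and not from the paper.

```python
def alc_censoring(cow_last_calving_m, farm_latest_calving_m, threshold=36):
    """Censoring indicator for age at last calving (ALC).

    Returns 1 (uncensored: the cow is considered to have failed) when
    the farm's latest recorded calving is more than `threshold` months
    after the cow's last calving; otherwise returns 0 (censored:
    future calvings remain possible for this cow).
    Dates are given in months on a common scale.
    """
    return 1 if (farm_latest_calving_m - cow_last_calving_m) > threshold else 0

# Gap of 40 months: the cow is treated as failed (uncensored).
status = alc_censoring(100, 140)
```

Note that a gap of exactly 36 months is still censored, since the criterion in the abstract is strictly greater than 36.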