987 results for Error detection
Abstract:
The Gram-Schmidt (GS) orthogonalisation procedure has been used to improve the convergence speed of least mean square (LMS) adaptive code-division multiple-access (CDMA) detectors. However, this algorithm updates two sets of parameters, namely the GS transform coefficients and the tap weights, simultaneously. Because of the additional adaptation noise introduced by the former, it is impossible to achieve the same performance as the ideal orthogonalised LMS filter, unlike the result implied in an earlier paper. The authors provide a lower bound on the minimum achievable mean squared error (MSE) as a function of the forgetting factor λ used in finding the GS transform coefficients, and propose a variable-λ algorithm to balance the conflicting requirements of good tracking and low misadjustment.
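As a rough illustration of the two parameter sets this abstract refers to, below is a minimal Python sketch of an adaptive Gram-Schmidt LMS filter. The specific update recursions, the step size mu, and the use of a single forgetting factor lam are illustrative assumptions, not the authors' exact algorithm.

    import numpy as np

    def gs_lms(X, d, mu=0.05, lam=0.99):
        """X: (T, N) input vectors; d: (T,) desired response."""
        T, N = X.shape
        L = np.eye(N)        # adaptive Gram-Schmidt transform coefficients
        P = np.ones(N)       # tracked powers of the orthogonalised outputs
        w = np.zeros(N)      # tap weights acting on the orthogonalised inputs
        e = np.empty(T)
        for t in range(T):
            z = L @ X[t]                         # orthogonalised input
            P = lam * P + (1.0 - lam) * z ** 2   # forgetting-factor power estimate
            for i in range(1, N):                # stochastically decorrelate output i
                for j in range(i):               # from the earlier outputs j < i
                    L[i] -= (1.0 - lam) * (z[i] * z[j] / P[j]) * L[j]
            e[t] = d[t] - w @ z                  # a-priori estimation error
            w += mu * e[t] * z / P               # power-normalised LMS update
        return w, e

A lam close to 1 averages the transform coefficients over many samples (low misadjustment, slow tracking), while a smaller lam tracks faster at the cost of extra adaptation noise; this is the tension the proposed variable-λ algorithm balances.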
Abstract:
In high-speed manufacturing systems, continuous operation with minimal disruption for repairs and service is desirable. An intelligent diagnostic monitoring system, designed to detect developing faults before catastrophic failure or before an undesirable reduction in output quality, is a good means of achieving this. Artificial neural networks have already been found to be of value in fault diagnosis of machinery. The aim here is to provide a system capable of detecting a number of faults, so that maintenance can be scheduled in advance of sudden failure and the need to replace parts at intervals based on mean time between failures is reduced; instead, parts need be replaced only when necessary. Analysis of control information in the form of position-error data from two servomotors is described.
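A hedged sketch of the kind of classifier such a system might use, written with scikit-learn on entirely synthetic position-error data (the feature statistics and the periodic fault signature are invented for illustration; this is not the authors' system):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def features(err):
        # simple statistics of a window of position-error samples
        return [err.mean(), err.std(), np.abs(err).max(),
                np.abs(np.diff(err)).mean()]

    # synthetic stand-in data: a developing fault is modelled as a periodic
    # component superimposed on the normal position-error noise
    healthy = [rng.normal(0, 1, 200) for _ in range(100)]
    faulty = [rng.normal(0, 1, 200) + 0.7 * np.sin(np.linspace(0, 40, 200))
              for _ in range(100)]
    X = np.array([features(w) for w in healthy + faulty])
    y = np.array([0] * 100 + [1] * 100)    # 0 = healthy, 1 = developing fault

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X, y)
    print(clf.score(X, y))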
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that the by-participant analysis, regardless of the accuracy measurement used, would produce a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of mixed-effects model analysis as compared to the conventional by-participant analysis. We also present real-data applications to illustrate further strengths of mixed-effects model analysis. Our findings imply that caution is needed when using the by-participant analysis, and we recommend the mixed-effects model analysis.
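A minimal sketch of a mixed-effects analysis with a random item effect, using statsmodels on synthetic data. The column names, the crossed-random-effects encoding via variance components, and the linear (rather than logistic) link are illustrative assumptions, not the models evaluated in the paper.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    # synthetic long-format data: one row per (subject, item) trial with a
    # metacognitive judgment 'jol' and a (continuous, for simplicity) memory
    # outcome 'recall' that carries a random item effect
    n_sub, n_item = 30, 40
    sub = np.repeat(np.arange(n_sub), n_item)
    item = np.tile(np.arange(n_item), n_sub)
    item_eff = rng.normal(0, 0.5, n_item)
    jol = rng.normal(0, 1, n_sub * n_item)
    recall = jol + item_eff[item] + rng.normal(0, 1, len(jol))
    df = pd.DataFrame({"subject": sub, "item": item, "jol": jol,
                       "recall": recall, "one": 1})

    # crossed random intercepts for subjects and items, fitted as variance
    # components within a single dummy group
    m = smf.mixedlm("recall ~ jol", df, groups="one",
                    vc_formula={"subject": "0 + C(subject)",
                                "item": "0 + C(item)"}).fit()
    print(m.summary())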
Abstract:
AIRES, Kelson R. T. ; ARAÚJO, Hélder J. ; MEDEIROS, Adelardo A. D. . Plane Detection from Monocular Image Sequences. In: VISUALIZATION, IMAGING AND IMAGE PROCESSING, 2008, Palma de Mallorca, Spain. Proceedings..., Palma de Mallorca: VIIP, 2008
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The serological detection of antibodies against human papillomavirus (HPV) antigens is a useful tool to determine exposure to genital HPV infection and in predicting the risk of infection persistence and associated lesions. Enzyme-linked immunosorbent assays (ELISAs) are commonly used for seroepidemiological studies of HPV infection but are not standardized. Intra- and interassay performance variation is difficult to control, especially in cohort studies that require the testing of specimens over extended periods. We propose the use of normalized absorbance ratios (NARs) as a standardization procedure to control for such variations and minimize measurement error. We compared NAR and ELISA optical density (OD) values for the strength of the correlation between serological results for paired visits 4 months apart and HPV-16 DNA positivity in cervical specimens from a cohort investigation of 2,048 women tested with an ELISA using HPV-16 virus-like particles. NARs were calculated by dividing the mean blank-subtracted (net) ODs by the equivalent values of a control serum pool included in the same plate in triplicate, using different dilutions. Stronger correlations were observed with NAR values than with net ODs at every dilution, with an overall reduction in nonexplained regression variability of 39%. Using logistic regression, the ranges of odds ratios of HPV-16 DNA positivity contrasting upper and lower quintiles at different dilutions and their averages were 4.73 to 5.47 for NARs and 2.78 to 3.28 for net ODs, with corresponding significant improvements in seroreactivity-risk trends across quintiles when NARs were used. The NAR standardization is a simple procedure to reduce measurement error in seroepidemiological studies of HPV infection.
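The NAR computation itself can be written down directly from the description above; a small Python sketch (argument names and values are illustrative):

    import numpy as np

    def nar(sample_od, blank_od, control_ods, control_blank_od):
        """Net sample OD divided by the mean net OD of the control serum
        pool run in triplicate on the same plate at the same dilution."""
        net_sample = sample_od - blank_od
        net_control = np.mean(np.asarray(control_ods) - control_blank_od)
        return net_sample / net_control

    print(nar(1.20, 0.05, [0.90, 0.95, 0.88], 0.05))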
Abstract:
Structural health monitoring (SHM) refers to the ability to monitor the state of aerospace, civil and mechanical systems and to decide the level of damage or deterioration within them. In this sense, this paper deals with the application of a two-step auto-regressive and auto-regressive with exogenous inputs (AR-ARX) model for linear prediction in damage diagnosis of structural systems. This damage detection algorithm is based on the monitoring of residual errors as damage-sensitive indexes, obtained through vibration response measurements. In complex structures there are many positions under observation and a large amount of data to be handled, making visualization of the signals difficult. This paper therefore also investigates data compression by using principal component analysis. In order to establish a threshold value, fuzzy c-means clustering is used to quantify the damage-sensitive index in an unsupervised learning mode. Tests are made on a benchmark problem proposed by the IASC-ASCE with different damage patterns. The diagnosis obtained showed high correlation with the actual integrity state of the structure. Copyright © 2007 by ABCM.
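A minimal Python sketch of the residual-error idea behind such damage-sensitive indexes, using a plain AR model fitted by least squares on the healthy reference; the paper's two-step AR-ARX scheme, PCA compression, and fuzzy c-means thresholding are omitted, and the toy signals are invented.

    import numpy as np

    rng = np.random.default_rng(0)

    def ar_fit(x, p):
        # least-squares AR(p): x[t] ~ a1*x[t-1] + ... + ap*x[t-p]
        A = np.array([x[t - p:t][::-1] for t in range(p, len(x))])
        a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
        return a

    def residuals(x, a):
        p = len(a)
        pred = np.array([a @ x[t - p:t][::-1] for t in range(p, len(x))])
        return x[p:] - pred

    def damage_index(x_ref, x_test, p=10):
        # ratio of residual spread in the test state to the healthy state;
        # values well above 1 flag a change in the underlying dynamics
        a = ar_fit(x_ref, p)
        return np.std(residuals(x_test, a)) / np.std(residuals(x_ref, a))

    # toy usage: the "damaged" signal has altered dynamics
    x_ref = np.sin(0.20 * np.arange(2000)) + 0.1 * rng.normal(size=2000)
    x_dam = np.sin(0.35 * np.arange(2000)) + 0.1 * rng.normal(size=2000)
    print(damage_index(x_ref, x_dam))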
Abstract:
Identification and classification of overlapping nodes in networks are important topics in data mining. In this paper, a network-based (graph-based) semi-supervised learning method is proposed. It is based on competition and cooperation among particles walking in a network, and it uncovers overlapping nodes by generating continuous-valued outputs (soft labels) corresponding to the level of membership of each node in each of the communities. Moreover, the proposed method can be applied to detect overlapping data items in a data set of general form, such as a vector-based data set, once it is transformed into a network. Label propagation usually involves a risk of error amplification. In order to avoid this problem, the proposed method offers a mechanism to identify outliers among the labeled data items, and consequently prevents error propagation from such outliers. Computer simulations carried out for synthetic and real-world data sets provide a numeric quantification of the performance of the method. © 2012 Springer-Verlag.
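A hedged sketch of soft-label propagation on a graph, which shares this method's outputs (continuous membership levels and flagged outliers among the labeled items) but is not the particle competition-cooperation algorithm itself; the toy graph and parameters are invented.

    import numpy as np

    def soft_label_propagation(W, Y0, labeled, alpha=0.9, n_iter=200):
        """W: (n, n) symmetric adjacency; Y0: (n, k) initial memberships
        (one-hot rows for labeled nodes); labeled: boolean mask."""
        d = W.sum(axis=1)
        S = W / np.sqrt(np.outer(d, d))      # symmetric normalisation
        Y = Y0.copy()
        for _ in range(n_iter):
            Y = alpha * S @ Y + (1 - alpha) * Y0
        Y /= Y.sum(axis=1, keepdims=True)    # rows = soft membership levels
        # labeled nodes whose propagated community disagrees with the given
        # label are candidate outliers
        outliers = labeled & (Y.argmax(axis=1) != Y0.argmax(axis=1))
        return Y, outliers

    # toy graph: two triangles sharing node 2, which overlaps both communities
    W = np.zeros((5, 5))
    for a, b in [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]:
        W[a, b] = W[b, a] = 1.0
    Y0 = np.zeros((5, 2)); Y0[0, 0] = Y0[4, 1] = 1.0
    labeled = np.array([True, False, False, False, True])
    Y, out = soft_label_propagation(W, Y0, labeled)
    print(np.round(Y, 2))    # node 2 gets substantial membership in both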
Abstract:
Structural damage identification is basically a nonlinear phenomenon; however, nonlinear procedures are not currently used in practical applications due to the complexity and difficulty of implementing such techniques. The development of techniques that consider the nonlinear behavior of structures for damage detection is therefore of major importance, since nonlinear dynamical effects can be erroneously treated as damage by classical metrics. This paper proposes the discrete-time Volterra series for modeling the nonlinear convolution between the input and output signals in a benchmark nonlinear system. The prediction error of the model in an unknown structural condition is compared with the values for the reference structure in healthy condition to evaluate the damage detection method. Since the Volterra series separates the response of the system into linear and nonlinear contributions, these indexes are used to show the importance of considering the nonlinear behavior of the structure. The paper concludes by pointing out the main advantages and drawbacks of this damage detection methodology. © (2013) Trans Tech Publications.
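A minimal Python sketch of a discrete-time Volterra model with linear and quadratic kernels, identified by least squares on the healthy condition and used to form a prediction-error damage index; the memory length, model structure, and toy signals are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def volterra_features(u, m):
        # regressor with a bias, linear terms, and quadratic cross-terms
        rows = []
        for t in range(m, len(u)):
            lin = u[t - m:t][::-1]
            quad = np.outer(lin, lin)[np.triu_indices(m)]
            rows.append(np.concatenate(([1.0], lin, quad)))
        return np.array(rows)

    def volterra_index(u_ref, y_ref, u_test, y_test, m=5):
        Phi = volterra_features(u_ref, m)
        h, *_ = np.linalg.lstsq(Phi, y_ref[m:], rcond=None)   # kernels
        e_ref = y_ref[m:] - Phi @ h
        e_test = y_test[m:] - volterra_features(u_test, m) @ h
        return np.sum(e_test ** 2) / np.sum(e_ref ** 2)

    # toy usage: "damage" strengthens the nonlinear part of the response
    u = rng.normal(size=1000)
    y_lin = np.convolve(u, [0.8, 0.4, 0.2], mode="full")[:1000]
    y_ref = y_lin + 0.05 * y_lin ** 2 + 0.01 * rng.normal(size=1000)
    y_dam = y_lin + 0.30 * y_lin ** 2 + 0.01 * rng.normal(size=1000)
    print(volterra_index(u, y_ref, u, y_dam))

Because the fitted coefficient vector h splits into a linear and a quadratic part, separate indexes could also be formed from each contribution, which is one way the separation mentioned in the abstract can be exploited.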
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Bovine tuberculosis (BTB) was introduced into Swedish farmed deer herds in 1987. Epidemiological investigations showed that 10 deer herds had become infected (July 1994), and a common source of infection, a consignment of 168 imported farmed fallow deer, was identified (I). As trace-back of all imported and in-contact deer was not possible, a control program based on tuberculin testing was implemented in July 1994. As Sweden has been free from BTB since 1958, few practicing veterinarians had experience in tuberculin testing, a test whose result relies on the skill, experience and conscientiousness of the testing veterinarian. Deficiencies in performing the test may adversely affect the test results and thereby compromise a control program, and quality indicators may identify possible deficiencies in testing procedures. For that purpose, reference values for the measured skin fold thickness (prior to injection of the tuberculin) were established (II), intended mainly for use by less experienced veterinarians to identify unexpected measurements. Furthermore, the within-veterinarian variation of the measured skin fold thickness was estimated by fitting general linear models to the skin fold measurements (III), with the mean square error used as an estimator of the within-veterinarian variation. Using this method, four veterinarians (6%) were considered to have unexpectedly large variation in their measurements. On certain large extensive deer farms, where mustering of all animals was difficult, meat inspection was suggested as an alternative to tuberculin testing. The efficiency of such a control was estimated in papers IV and V. A Reed-Frost model was fitted to data from seven BTB-infected deer herds and the spread of infection was estimated (< 0.6 effective contacts per deer and year) (IV). These results were used to model the efficiency of meat inspection in an average extensive Swedish deer herd. Given a 20% annual slaughter with meat inspection, the model predicted that BTB would be either detected or eliminated in most herds (90%) within 15 years after the introduction of one infected deer. In 2003, an alternative control for BTB in extensive Swedish deer herds, based on the results of paper V, was implemented.
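A chain-binomial Reed-Frost simulation in the spirit of paper IV; the parameter values, the yearly time step, and the assumption that infected deer remain infectious are illustrative, not the fitted model.

    import numpy as np

    rng = np.random.default_rng(0)

    def reed_frost(n_deer, i0, p_contact, n_years):
        """Each year every susceptible escapes infection with probability
        (1 - p_contact)**I, where I is the number of infectious animals."""
        S, I = n_deer - i0, i0
        history = [(S, I)]
        for _ in range(n_years):
            new = rng.binomial(S, 1 - (1 - p_contact) ** I) if I else 0
            S -= new
            I += new      # assumes chronically infected deer stay infectious
            history.append((S, I))
        return history

    print(reed_frost(100, 1, 0.005, 15))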
Abstract:
In the clinical setting, the early detection of myocardial injury induced by doxorubicin (DXR) is still considered a challenge. To assess whether ultrasonic tissue characterization (UTC) can identify early DXR-related myocardial lesions and their correlation with myocardial collagen percentages, we studied 60 rats at baseline and prospectively after intravenous DXR infusion at 2 mg/kg/week. Echocardiographic examinations were conducted at baseline and at cumulative DXR doses of 8, 10, 12, 14 and 16 mg/kg. The left ventricular ejection fraction (LVEF), shortening fraction (SF), and the UTC indices were measured: the corrected coefficient of integrated backscatter (CC-IBS; tissue IBS intensity / phantom IBS intensity) and the magnitude of cyclic variation (MCV) of this intensity curve. The variation of each study parameter with DXR dose was expressed as the mean and standard error at specific DXR dosages and at baseline. The collagen percentage was calculated in six control-group animals and 24 DXR-group animals. From 8 to 16 mg/kg DXR, CC-IBS increased (1.29 ± 0.27 vs. 1.1 ± 0.26 at baseline; p = 0.005) and MCV decreased (9.1 ± 2.8 vs. 11.02 ± 2.6 at baseline; p = 0.006). LVEF presented only a slight, though significant, decrease (80.4 ± 6.9% vs. 85.3 ± 6.9% at baseline; p = 0.005) from 8 to 16 mg/kg DXR. CC-IBS was 72.2% sensitive and 83.3% specific in detecting collagen deposition of 4.24% (AUC = 0.76), whereas LVEF was not accurate in detecting initial collagen deposition (AUC = 0.54). In conclusion, UTC identified DXR-related myocardial lesions earlier than LVEF, showing good accuracy in detecting initial collagen deposition in this experimental animal model.
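The two UTC indices follow directly from their definitions in the abstract; a small sketch (the peak-to-trough estimator for MCV is an assumption, since the abstract does not spell the estimator out):

    import numpy as np

    def cc_ibs(tissue_ibs, phantom_ibs):
        # corrected coefficient: tissue IBS intensity normalised by a
        # reference phantom acquired with the same settings
        return np.mean(tissue_ibs) / np.mean(phantom_ibs)

    def mcv(ibs_curve):
        # magnitude of cyclic variation: peak-to-trough excursion of the
        # IBS intensity curve over one cardiac cycle
        return np.max(ibs_curve) - np.min(ibs_curve)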
Abstract:
Aortic dissection is a disease that can be deadly even with correct treatment. It consists of a rupture of a layer of the aortic artery wall that allows blood to flow inside the rupture, called the dissection. The aim of this paper is to contribute to its diagnosis by detecting the dissection edges inside the aorta. A subpixel-accuracy edge detector based on the hypothesis of the partial volume effect is used, where the intensity of an edge pixel is the sum of the contributions of each color weighted by its relative area inside the pixel. The method uses a floating window centred on the edge pixel and computes the edge features. The accuracy of the method is evaluated on synthetic images of different thicknesses and noise levels, obtaining edge detection with a maximal mean error lower than 16 percent of a pixel.
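A one-dimensional illustration of the partial volume hypothesis the detector rests on; the actual method works in 2-D with a floating window and recovers richer edge features, so this is only a sketch.

    import numpy as np

    def subpixel_edge_1d(window):
        """For a step edge between intensities A and B, the centre pixel's
        value I is the area-weighted mix I = a*A + (1 - a)*B, so the
        covered fraction a locates the edge inside the pixel."""
        A, I, B = (float(v) for v in window)
        a = (I - B) / (A - B)    # relative area of intensity A in the pixel
        return a - 0.5           # edge offset from the pixel centre

    print(subpixel_edge_1d([100, 70, 20]))   # edge lies 0.125 px off centre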
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, giving rise to a new type of network: the wireless sensor network (WSN). The main features of such networks are that the nodes can be positioned randomly over a given field with high density, that each node operates both as a sensor (collecting environmental data) and as a transceiver (transmitting information toward the data-retrieval point), and that the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications: for example, sensor nodes can monitor a high-risk region, such as the surroundings of a volcano, or monitor the physical condition of patients in a hospital. For each of these application scenarios, a trade-off between energy consumption and communication reliability must be guaranteed. The thesis investigates the use of WSNs in two such scenarios and, for each of them, proposes a solution to the associated problems that respects this trade-off.

The first scenario considers a network with a large number of nodes, deployed over a given geographical area without detailed planning, that must transmit data toward a coordinator node, named the sink, assumed here to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently toward a distant receiver. Each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle). The communication channels between the local nodes and the receiver are assumed to be subject to fading and noise, so the receiver onboard the UAV must fuse the weak and noisy signals coherently to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes: the idea consists of simultaneous transmission of the common message by the nodes and Rake reception at the fusion center. The proposed solution is motivated by two goals: the necessity of having simple nodes (to this end, the computational complexity is moved to the receiver onboard the UAV) and the importance of guaranteeing high energy efficiency, thus increasing the network lifetime. The scheme is analyzed with respect to two performance metrics, the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme; since the system is a WSN, both are evaluated taking the energy efficiency of the network into consideration.

The second scenario considers the use of a chain network for the detection of fires, using nodes that serve both as sensors and as routers. The first function concerns the monitoring of a temperature parameter, which allows a local binary decision of target (fire) absent/present to be taken. The second function is that each node receives the decision made by the previous node of the chain, compares it with the decision derived from its own observation of the phenomenon, and transmits the result to the next node. The chain ends at the sink node, which forwards the received decision to the user. In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical distributed detection scenario: to obtain good performance, fusion rules must be defined for each node that summarize the local observations and the decisions of the previous nodes into a final decision transmitted to the next node (a toy simulation of such a chain is sketched after this abstract).

WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was developed and tested in a six-month on-field experiment.
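As a toy illustration of the chain scenario (not the fusion rules developed in the thesis), the following simulation uses a simple OR rule at each node: detection becomes near-certain, but false alarms accumulate along the chain, which is precisely why better per-node fusion rules are needed.

    import numpy as np

    rng = np.random.default_rng(1)

    def chain_or_rule(fire, n_nodes=10, p_err=0.05):
        """Chain of n_nodes sensors; each local binary reading is wrong
        with probability p_err. A node forwards 'fire' if either its own
        reading or the incoming decision says 'fire'."""
        u = 0
        for _ in range(n_nodes):
            y = int(rng.random() < (1 - p_err if fire else p_err))
            u = u or y
        return u

    # Monte-Carlo estimate of the decision statistics at the sink
    for fire in (1, 0):
        dec = [chain_or_rule(fire) for _ in range(10000)]
        print("P(decide fire | fire=%d) = %.3f" % (fire, np.mean(dec)))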