955 results for Noise detection


Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To prospectively evaluate, for the depiction of simulated hypervascular liver lesions in a phantom, the effect of a low tube voltage, high tube current computed tomographic (CT) technique on image noise, contrast-to-noise ratio (CNR), lesion conspicuity, and radiation dose. MATERIALS AND METHODS: A custom liver phantom containing 16 cylindric cavities (four cavities each of 3, 5, 8, and 15 mm in diameter) filled with various iodinated solutions to simulate hypervascular liver lesions was scanned with a 64-section multi-detector row CT scanner at 140, 120, 100, and 80 kVp, with corresponding tube current-time product settings at 225, 275, 420, and 675 mAs, respectively. The CNRs for six simulated lesions filled with different iodinated solutions were calculated. A figure of merit (FOM) for each lesion was computed as the ratio of CNR² to effective dose (ED). Three radiologists independently graded the conspicuity of 16 simulated lesions. An anthropomorphic phantom was scanned to evaluate the ED. Statistical analysis included one-way analysis of variance. RESULTS: Image noise increased by 45% with the 80-kVp protocol compared with the 140-kVp protocol (P < .001). However, the lowest ED and the highest CNR were achieved with the 80-kVp protocol. The FOM results indicated that at a constant ED, a reduction of tube voltage from 140 to 120, 100, and 80 kVp increased the CNR by factors of at least 1.6, 2.4, and 3.6, respectively (P < .001). At a constant CNR, the corresponding reductions in ED were by factors of 2.5, 5.5, and 12.7, respectively (P < .001). The highest lesion conspicuity was achieved with the 80-kVp protocol. CONCLUSION: The CNR of simulated hypervascular liver lesions can be substantially increased and the radiation dose reduced by using an 80-kVp, high tube current CT technique.
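The figure of merit defined above is straightforward to compute; a minimal sketch with purely illustrative numbers (the study's actual CNR and ED values are not reproduced here):

```python
def cnr(roi_mean, bg_mean, bg_sd):
    """Contrast-to-noise ratio: lesion/background contrast over image noise."""
    return abs(roi_mean - bg_mean) / bg_sd

def figure_of_merit(cnr_value, effective_dose_msv):
    """FOM = CNR^2 / ED: dose efficiency of a protocol at fixed CNR."""
    return cnr_value ** 2 / effective_dose_msv

# Hypothetical protocol comparison (values are illustrative only):
cnr_140, ed_140 = 10.0, 20.0   # 140 kVp
cnr_80, ed_80 = 19.0, 10.0     # 80 kVp
fom_140 = figure_of_merit(cnr_140, ed_140)
fom_80 = figure_of_merit(cnr_80, ed_80)
print(fom_80 > fom_140)  # → True
```

At a fixed dose budget, the protocol with the higher FOM delivers more CNR² per unit dose, which is how dose-reduction factors at constant CNR are derived.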


These investigations discuss the operational noise caused by automotive torque converters during speed ratio operation. Two specific cases of torque converter noise are studied: cavitation, and a monotonic turbine-induced noise. Cavitation occurs at or near stall, i.e., zero turbine speed. The bubbles produced by the extreme torques of low-speed-ratio operation may, upon collapse, cause a broadband noise that becomes increasingly objectionable to vehicle occupants as other portions of the vehicle drive train improve acoustically. Turbine-induced noise, which occurs at high engine torque at around 0.5 speed ratio, is a narrow-band phenomenon that is currently audible to vehicle occupants. The solution to the turbine-induced noise is known; this study, however, aims to gain a better understanding of the mechanics behind the phenomenon. An automated torque converter dynamometer test cell was utilized in these experiments to determine the effect of torque converter design parameters on the offset of cavitation, and a microwave telemetry system was employed to directly measure pressures and structural motion on the turbine. Nearfield acoustics were used as the detection method for all phenomena, with a standardized speed ratio sweep test. Changes in filtered sound pressure levels enabled the detection of cavitation desinence. This, in turn, was used to determine the effects of various torque converter design parameters, including diameter, torus dimensions, and pump and stator blade designs, on cavitation. The on-turbine pressures and motion measured with the microwave telemetry were used to better understand the effects of a notched trailing edge turbine blade on the turbine-induced noise.
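The desinence-detection idea (a sustained drop of the filtered sound pressure level back toward the baseline) can be sketched on synthetic data; the frame length, threshold, and signals below are assumptions for illustration, not the thesis's actual processing:

```python
import math
import random

def frame_spl_db(frame, p_ref=20e-6):
    """RMS sound pressure level of one frame, in dB re 20 uPa."""
    rms = math.sqrt(sum(p * p for p in frame) / len(frame))
    return 20 * math.log10(rms / p_ref)

def cavitation_desinence(frames, baseline_db, threshold_db=6.0):
    """Index of the first frame after which the SPL stays within
    threshold_db of the baseline, i.e., cavitation noise has ceased."""
    flagged = [i for i, f in enumerate(frames)
               if frame_spl_db(f) > baseline_db + threshold_db]
    return (flagged[-1] + 1) if flagged else 0

# Synthetic sweep: loud broadband frames (cavitating), then quiet ones.
random.seed(0)
loud = [[random.gauss(0, 1.0) for _ in range(256)] for _ in range(5)]
quiet = [[random.gauss(0, 0.01) for _ in range(256)] for _ in range(5)]
frames = loud + quiet
baseline = frame_spl_db(quiet[0])
print(cavitation_desinence(frames, baseline))  # → 5
```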


We report the first in situ measurements of neutral deuterium originating in the local interstellar medium (LISM) made in Earth's orbit. These measurements were performed with the IBEX-Lo camera on NASA's Interstellar Boundary Explorer (IBEX) satellite. All data from the spring observation periods of 2009 through 2011 have been analysed. Over the three years of the IBEX mission, the observation geometry and orbit allowed a total observation time of 115.3 days for the LISM. However, because of the spinning of the spacecraft and the stepping through 8 energy channels, the interstellar wind was actually observed for only 1.44 days in total, during which 2 counts of interstellar deuterium were collected. We report a conservative number here because the possibility of systematic error or additional noise, though eliminated in our analysis to the best of our knowledge, supports detection only at the 1-sigma level. From these observations, we derive a ratio D/H = (5.8 ± 4.4) × 10⁻⁴ at 1 AU. After modelling the transport and loss of D and H from the termination shock to Earth's orbit, we find that our result of D/H_LISM = (1.6 ± 1.2) × 10⁻⁵ agrees with D/H_LIC = (1.6 ± 0.4) × 10⁻⁵ for the local interstellar cloud. This weak interstellar signal is extracted from a strong terrestrial background signal consisting of sputter products from the sensor's conversion surface. As a reference, we accurately measure the terrestrial D/H ratio in these sputtered products and then discriminate against this terrestrial background source. Because the D and H signal at Earth's orbit diminishes during rising solar activity, owing to photoionisation losses and increased photon pressure, our result demonstrates that in situ measurements of interstellar deuterium in the inner heliosphere are only possible during solar minimum conditions.
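The step from the 1 AU measurement to the LISM value involves a transport/loss correction; the sketch below illustrates only the quadrature error propagation, using a hypothetical correction factor chosen so the example lands on the quoted central value (the actual transport model is far more involved):

```python
import math

def scale_ratio(r, dr, k, dk_frac=0.0):
    """Scale a measured ratio r ± dr by a factor k whose fractional
    uncertainty is dk_frac, propagating errors in quadrature."""
    r_scaled = r * k
    dr_scaled = r_scaled * math.sqrt((dr / r) ** 2 + dk_frac ** 2)
    return r_scaled, dr_scaled

# Measured at 1 AU (from the abstract): D/H = (5.8 +/- 4.4) x 10^-4.
r_1au, dr_1au = 5.8e-4, 4.4e-4
# Hypothetical net transport/loss factor mapping 1 AU back to the LISM,
# chosen only so the example reproduces the abstract's central value.
k_transport = 1.6e-5 / 5.8e-4
r_lism, dr_lism = scale_ratio(r_1au, dr_1au, k_transport)
print(f"D/H_LISM ~ {r_lism:.1e} +/- {dr_lism:.1e}")
```

With a perfectly known correction factor, the fractional uncertainty is preserved (4.4/5.8 ≈ 76%), consistent with the quoted (1.6 ± 1.2) × 10⁻⁵.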


The accuracy of Global Positioning System (GPS) time series is degraded by the presence of offsets. To assess the effectiveness of methods that detect and remove these offsets, we designed and managed the Detection of Offsets in GPS Experiment. We simulated time series that mimicked realistic GPS data, consisting of a velocity component, offsets, and white and flicker (1/f spectrum) noise combined in an additive model. The data set was made available to the GPS analysis community without revealing the offsets, and several groups conducted blind tests with a range of detection approaches. The results show that, at present, manual methods (where offsets are hand-picked) almost always give better results than automated or semi-automated methods (two automated methods give velocity biases quite similar to the best manual solutions). For instance, the 5th to 95th percentile range in velocity bias for automated approaches is 4.2 mm/yr (most commonly ±0.4 mm/yr from the truth), whereas it is 1.8 mm/yr for the manual solutions (most commonly 0.2 mm/yr from the truth). The magnitude of offsets detectable by manual solutions is smaller than for automated solutions, with the smallest detectable offset for the best manual and automated solutions equal to 5 mm and 8 mm, respectively. Assuming the simulated time series noise levels are representative of real GPS time series, geophysical interpretation of individual site velocities lower than 0.2–0.4 mm/yr is therefore not robust, and a limit nearer 1 mm/yr would be a more conservative choice. Further work to improve offset detection in GPS coordinate time series is required before we can routinely interpret sub-mm/yr velocities for single GPS stations.
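The additive simulation model (trend + offsets + white + flicker noise) can be sketched as follows; the Kasdin-style filter recursion is one standard way to approximate flicker noise and is an assumption here, since the experiment's actual generator is not described in the abstract:

```python
import random

def flicker_noise(n, sigma=1.0, seed=1):
    """Approximate flicker (1/f) noise by filtering white noise with the
    Kasdin recursion h[k] = h[k-1] * (k - 1 + alpha/2) / k, alpha = 1."""
    rng = random.Random(seed)
    h = [1.0]
    for k in range(1, n):
        h.append(h[-1] * (k - 0.5) / k)
    w = [rng.gauss(0, sigma) for _ in range(n)]
    return [sum(h[j] * w[i - j] for j in range(i + 1)) for i in range(n)]

def simulate_gps(n_days, vel_mm_yr, offsets, wn_mm=1.0, fl_mm=0.5, seed=2):
    """Additive model: linear trend + step offsets + white + flicker noise.
    `offsets` maps day index -> step size in mm."""
    rng = random.Random(seed)
    fl = flicker_noise(n_days, fl_mm, seed)
    series, step = [], 0.0
    for t in range(n_days):
        step += offsets.get(t, 0.0)
        series.append(vel_mm_yr * t / 365.25 + step
                      + rng.gauss(0, wn_mm) + fl[t])
    return series

# Two years of daily positions with a 15 mm offset at day 400.
y = simulate_gps(730, vel_mm_yr=3.0, offsets={400: 15.0})
```

An undetected offset of this kind biases any straight-line velocity fit, which is exactly what the blind tests measured.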


Visual short-term memory (VSTM) is the storage of visual information over a brief time period (usually a few seconds or less). Over the past decade, the most popular task for studying VSTM in humans has been the change detection task. In this task, subjects must remember several visual items per trial in order to identify a change following a brief delay interval. Results from change detection tasks have shown that VSTM is limited; humans are only able to accurately hold a few visual items in mind over a brief delay. However, there has been much debate regarding the structure or cause of these limitations. The two most popular conceptualizations of VSTM limitations in recent years have been the fixed-capacity model and the continuous-resource model. The fixed-capacity model proposes a discrete limit on the total number of visual items that can be stored in VSTM. The continuous-resource model proposes a continuous resource that can be allocated among many visual items in VSTM, with noise in item memory increasing as the number of items to be remembered increases. While VSTM is far from being completely understood in humans, even less is known about VSTM in non-human animals, including the rhesus monkey (Macaca mulatta). Given that rhesus monkeys are the premier medical model for humans, it is important to understand their VSTM if they are to contribute to understanding human memory. The primary goals of this study were to train and test rhesus monkeys and humans on a change detection task in order to directly compare VSTM between the two species and to explore the possibility that direct species comparison might shed light on the fixed-capacity vs. continuous-resource models of VSTM. The comparative results suggest qualitatively similar VSTM for the two species through converging evidence supporting the continuous-resource model, and thereby establish rhesus monkeys as a good system for exploring neurophysiological correlates of VSTM.
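Performance in change detection tasks is commonly summarized with Cowan's K, the capacity estimate underlying fixed-capacity analyses; a minimal sketch with hypothetical trial counts (the study's own analysis is not specified in the abstract):

```python
def cowans_k(set_size, hits, misses, false_alarms, correct_rejections):
    """Cowan's K for single-probe change detection:
    K = N * (hit rate - false alarm rate)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return set_size * (hit_rate - fa_rate)

# Hypothetical session: set size 4, 80 change trials, 80 no-change trials.
k = cowans_k(set_size=4, hits=68, misses=12,
             false_alarms=8, correct_rejections=72)
print(round(k, 2))  # 4 * (0.85 - 0.10) = 3.0
```

Under a fixed-capacity account, K should plateau as set size grows; under a continuous-resource account, memory precision degrades gradually instead, which is the kind of signature the species comparison examined.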


Protein screening/detection is an essential tool in many laboratories. Owing to the relatively large time investments required by standard protocols, the development of methods with higher throughput while maintaining an at least comparable signal-to-noise ratio is highly beneficial in many research areas. This chapter describes how cold microwave technology can be used to enhance the rate of molecular interactions and provides protocols for dot blots, Western blots, and ELISA procedures, permitting completion of all incubation steps (blocking and antibody steps) within 24–45 min.


The increasing importance of noise pollution has led to the creation of many new noise testing laboratories in recent years. For this reason, and because of the legal implications that noise reporting may have, it is necessary to create procedures intended to guarantee the quality of the testing and its results. For instance, the ISO/IEC 17025:2005 standard specifies general requirements for the competence of testing laboratories. In this standard, interlaboratory comparisons are one of the main measures that must be applied to guarantee the quality of laboratories when applying specific testing methodologies. In the specific case of environmental noise, round robin tests are usually difficult to design, as it is hard to find scenarios that remain available and controlled while the participants carry out their measurements. Monitoring and controlling the factors that can influence the measurements (source emissions, propagation, background noise, etc.) is not usually affordable, so the most widespread solution is to create very simple scenarios from which most of the factors that can influence the results are excluded (sampling, processing of results, background noise, source detection, etc.). The new approach described in this paper only requires the organizer to make actual measurements (or prepare virtual ones). Applying and interpreting a common reference document (standard, regulation, etc.), the participants must analyze these input data independently to produce their results, which are then compared among the participants. The measurement costs are severely reduced for the participants, there is no need to monitor the scenario conditions, and almost any relevant factor can be included in this methodology.
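One standard way to score the compared participant results in such an interlaboratory exercise is with ISO 13528-style z-scores; the sketch below is an assumption about the scoring, not the paper's own procedure, and the reported levels are invented:

```python
import statistics

def z_scores(results, assigned=None, sigma_pt=None):
    """ISO 13528-style z-scores: z = (x - x_pt) / sigma_pt.
    Defaults to the median as assigned value and a MAD-based robust
    spread estimate when these are not supplied."""
    vals = list(results.values())
    x_pt = statistics.median(vals) if assigned is None else assigned
    if sigma_pt is None:
        mad = statistics.median(abs(v - x_pt) for v in vals)
        sigma_pt = 1.483 * mad   # normalized MAD, robust to outliers
    return {lab: (v - x_pt) / sigma_pt for lab, v in results.items()}

# Hypothetical noise levels (dB) reported by participating labs.
reported = {"lab_a": 63.2, "lab_b": 63.5, "lab_c": 63.0, "lab_d": 66.1}
z = z_scores(reported)
# Conventionally |z| <= 2 is satisfactory and |z| >= 3 unsatisfactory.
outliers = [lab for lab, s in z.items() if abs(s) >= 3]
print(outliers)  # → ['lab_d']
```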


In this paper we propose a new method for the automatic detection and tracking of road traffic signs using an on-board single camera. This method aims to increase the reliability of the detections so that it can boost the performance of any traffic sign recognition scheme. The proposed approach exploits a combination of different features, such as color, appearance, and tracking information. This information is introduced into a recursive Bayesian decision framework, in which prior probabilities are dynamically adapted to tracking results. This decision scheme obtains a number of candidate regions in the image according to their HS (hue-saturation) values. Finally, a Kalman filter with adaptive noise tuning provides the required temporal and spatial coherence to the estimates. Results have shown that the proposed method achieves high detection rates in challenging scenarios, including illumination changes, rapid motion, and significant perspective distortion.
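The tracking stage can be illustrated with a minimal constant-velocity Kalman filter for one image coordinate; the paper's adaptive noise tuning is not reproduced, and all numeric settings below are assumptions:

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one image coordinate.
    State: [position, velocity]; measurement: position only."""
    def __init__(self, q=1e-2, r=4.0):
        self.x = [0.0, 0.0]                   # state estimate
        self.P = [[1e3, 0.0], [0.0, 1e3]]     # state covariance
        self.q, self.r = q, r                 # process / measurement noise

    def step(self, z, dt=1.0):
        # Predict: x' = F x with F = [[1, dt], [0, 1]], P' = F P F^T + Q.
        xp = [self.x[0] + dt * self.x[1], self.x[1]]
        P = self.P
        Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q,
               P[0][1] + dt * P[1][1]],
              [P[1][0] + dt * P[1][1], P[1][1] + self.q]]
        # Update with measurement z (H = [1, 0]).
        s = Pp[0][0] + self.r
        k0, k1 = Pp[0][0] / s, Pp[1][0] / s
        y = z - xp[0]
        self.x = [xp[0] + k0 * y, xp[1] + k1 * y]
        self.P = [[(1 - k0) * Pp[0][0], (1 - k0) * Pp[0][1]],
                  [Pp[1][0] - k1 * Pp[0][0], Pp[1][1] - k1 * Pp[0][1]]]
        return self.x[0]

# Track a sign drifting right at ~2 px/frame with noisy detections.
kf = Kalman1D()
measurements = [10.4, 11.8, 14.3, 15.9, 18.1, 20.2]
estimates = [kf.step(z) for z in measurements]
```

In the paper's framework the filter additionally adapts its noise terms online; here q and r are simply fixed.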


The localization of persons in indoor environments remains an open problem. There are partial solutions based on the deployment of a network of sensors (Local Positioning Systems, or LPS). Other solutions only require the installation of an inertial sensor on the person's body (Pedestrian Dead-Reckoning, or PDR). PDR solutions integrate the signals coming from an Inertial Measurement Unit (IMU), which usually contains 3 accelerometers and 3 gyroscopes. The main problem of PDR is the accumulation of positioning errors due to the drift caused by noise in the sensors. This paper presents a PDR solution that incorporates a drift correction method based on detecting the access ramps usually found in buildings. The ramp correction method is implemented over a PDR framework that uses an Inertial Navigation System (INS) algorithm and an IMU attached to the person's foot. Unlike other approaches that use external sensors to correct drift error, we use only the one IMU on the foot. To detect a ramp, the slope of the terrain on which the user is walking and the change in height sensed when moving forward are estimated from the IMU. After detection, the ramp is checked for association with one of the ramps stored in a database. For each associated ramp, a position correction is fed into the Kalman filter in order to refine the INS-PDR solution. Drift-free localization is achieved, with positioning errors below 2 meters for 1,000-meter-long routes in a building with a few ramps.
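The ramp-detection idea (slope estimated from height gain versus horizontal displacement per stride) can be sketched as follows; the slope bounds, stride values, and run length are illustrative assumptions, not the paper's parameters:

```python
import math

def stride_slope_deg(dh, dxy):
    """Terrain slope implied by one stride: height gain dh (m)
    over horizontal displacement dxy (m)."""
    return math.degrees(math.atan2(dh, dxy))

def detect_ramp(strides, min_deg=2.0, max_deg=10.0, min_run=3):
    """Flag a ramp when min_run consecutive strides show a slope
    whose absolute value lies in [min_deg, max_deg].
    Returns the start index of the run, or None."""
    run = 0
    for i, (dh, dxy) in enumerate(strides):
        if min_deg <= abs(stride_slope_deg(dh, dxy)) <= max_deg:
            run += 1
            if run >= min_run:
                return i - min_run + 1
        else:
            run = 0
    return None

# Flat walking, then a ~5 degree ramp (hypothetical stride estimates).
flat = [(0.0, 0.7)] * 4
ramp = [(0.06, 0.7)] * 4          # atan(0.06 / 0.7) ~ 4.9 degrees
print(detect_ramp(flat + ramp))   # → 4
```

Once a run is flagged, the detected ramp would be matched against the building database by position before any correction is applied.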


In this paper, the authors provide a methodology for designing nonparametric permutation tests and, in particular, nonparametric rank tests for applications in detection. In the first part of the paper, the authors develop the optimization theory of both permutation and rank tests in the Neyman-Pearson sense; in the second part, they carry out a comparative performance analysis of the permutation and rank tests (detectors) against parametric ones in radar applications. First, a brief review of some contributions on nonparametric tests is presented. Then, the optimum permutation and rank tests are derived. Finally, a performance analysis is carried out by Monte Carlo simulations for the corresponding detectors, and the results are shown as curves of detection probability versus signal-to-noise ratio.
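A generic two-sample permutation test on the difference of means conveys the flavor of the detectors studied; this is a textbook sketch, not the paper's radar-specific (and optimized) formulation:

```python
import random

def permutation_test(x, y, n_perm=2000, seed=0):
    """Two-sample permutation test on the difference of means.
    Returns a p-value for the one-sided alternative mean(x) > mean(y)."""
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    n = len(x)
    observed = sum(x) / len(x) - sum(y) / len(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                   # relabel samples under H0
        stat = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
        if stat >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)         # add-one p-value estimate

# Synthetic "signal-plus-noise" vs "noise-only" samples.
rng = random.Random(1)
signal = [1.5 + rng.gauss(0, 1) for _ in range(30)]
noise = [rng.gauss(0, 1) for _ in range(30)]
p = permutation_test(signal, noise)
print(p < 0.05)
```

Because the null distribution is built from the data itself, the test keeps its false-alarm rate without assuming a parametric noise model, which is the property that motivates nonparametric detectors in radar.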


Due to the ever-increasing transportation of people and goods, automatic traffic surveillance is becoming a key tool both for providing safety to road users and for improving traffic control efficiently. In this paper, we propose a new system that, exploiting the capabilities that both computer vision and machine learning offer, is able to detect and track different types of real incidents on a highway. Specifically, it accurately detects not only stopped vehicles, but also drivers and passengers leaving a stopped vehicle, and other pedestrians present on the roadway. Additionally, a theoretical approach for detecting vehicles that may leave the road in an unexpected way is also presented. The system works in real time and has been optimized for outdoor operation, making it appropriate for deployment in a real-world environment such as a highway. First experimental results on a dataset created from videos provided by two Spanish highway operators demonstrate the effectiveness of the proposed system and its robustness against noise and low-quality videos.


Magnetic resonance microscopy (MRM) theoretically provides the spatial resolution and signal-to-noise ratio needed to resolve neuritic plaques, the neuropathological hallmark of Alzheimer's disease (AD). Two previously unexplored MR contrast parameters, T2* and diffusion, were tested for plaque-specific contrast to noise. Autopsy specimens from nondemented controls (n = 3) and patients with AD (n = 5) were used. Three-dimensional T2* and diffusion MR images with voxel sizes ranging from 3 × 10⁻³ mm³ to 5.9 × 10⁻⁵ mm³ were acquired. After imaging, specimens were cut and stained with a microwave King silver stain to demonstrate neuritic plaques. In controls, the alveus, fimbria, pyramidal cell layer, hippocampal sulcus, and granule cell layer were detected by either T2* or diffusion contrast. These structures were used as landmarks when correlating MRMs with histological sections. At a voxel resolution of 5.9 × 10⁻⁵ mm³, neuritic plaques could be detected by T2*. The neuritic plaques appeared as black, spherical elements on T2* MRMs and, when presented in three dimensions, could be distinguished from vessels, which appear spherical only in cross-section. Here we provide MR images of neuritic plaques in vitro. The MRM results reported provide a new direction for applying this technology in vivo. Clearly, the ability to detect and follow the early progression of amyloid-positive brain lesions would greatly aid and simplify the many possibilities for pharmacological intervention in AD.


Deterministic chaos has been implicated in numerous natural and man-made complex phenomena ranging from quantum to astronomical scales and in disciplines as diverse as meteorology, physiology, ecology, and economics. However, the lack of a definitive test of chaos vs. random noise in experimental time series has led to considerable controversy in many fields. Here we propose a numerical titration procedure as a simple “litmus test” for highly sensitive, specific, and robust detection of chaos in short noisy data without the need for intensive surrogate data testing. We show that the controlled addition of white or colored noise to a signal with a preexisting noise floor results in a titration index that: (i) faithfully tracks the onset of deterministic chaos in all standard bifurcation routes to chaos; and (ii) gives a relative measure of chaos intensity. Such reliable detection and quantification of chaos under severe conditions of relatively low signal-to-noise ratio is of great interest, as it may open potential practical ways of identifying, forecasting, and controlling complex behaviors in a wide variety of physical, biomedical, and socioeconomic systems.
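The titration idea can be illustrated with a much simpler nonlinearity indicator than the one used in the paper: here a nearest-neighbor predictor is compared against a linear AR(1) predictor, and noise is added until the nonlinear advantage disappears; the largest tolerated noise level serves as the titration index. Every modeling choice below (the indicator, the 0.8 ratio, the noise grid) is an assumption for illustration only:

```python
import math
import random

def rms(errs):
    return math.sqrt(sum(e * e for e in errs) / len(errs))

def linear_rms(x):
    """One-step RMS error of a least-squares AR(1) predictor."""
    n = len(x) - 1
    mx, my = sum(x[:-1]) / n, sum(x[1:]) / n
    num = sum((x[t] - mx) * (x[t + 1] - my) for t in range(n))
    den = sum((x[t] - mx) ** 2 for t in range(n)) or 1.0
    a = num / den
    return rms([x[t + 1] - (my + a * (x[t] - mx)) for t in range(n)])

def nn_rms(x):
    """One-step RMS error of a nearest-neighbor predictor."""
    errs = []
    for t in range(len(x) - 1):
        j = min((i for i in range(len(x) - 1) if i != t),
                key=lambda i: abs(x[i] - x[t]))
        errs.append(x[t + 1] - x[j + 1])
    return rms(errs)

def titration_index(x, sigmas, seed=0, ratio=0.8):
    """Largest added-noise sigma at which nonlinearity (the NN predictor
    beating the linear one by the given ratio) is still detected."""
    rng = random.Random(seed)
    best = 0.0
    for s in sigmas:
        y = [v + rng.gauss(0, s) for v in x]
        if nn_rms(y) < ratio * linear_rms(y):
            best = s
        else:
            break
    return best

random.seed(3)
chaos = [0.3]
for _ in range(299):
    chaos.append(4 * chaos[-1] * (1 - chaos[-1]))   # logistic map, chaotic
white = [random.gauss(0, 0.3) for _ in range(300)]
sigmas = [0.0, 0.02, 0.05, 0.1, 0.2, 0.4]
print(titration_index(chaos, sigmas) > titration_index(white, sigmas))
```

The chaotic series tolerates a strictly positive amount of added noise before its nonlinear structure is masked, while the white-noise series titrates to zero, mirroring the paper's use of the noise limit as a relative measure of chaos intensity.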


This letter presents signal processing techniques to detect a passive thermal threshold detector based on a chipless time-domain ultrawideband (UWB) radio frequency identification (RFID) tag. The tag is composed of a UWB antenna connected to a transmission line, in turn loaded with a biomorphic thermal switch. The working principle consists of detecting the impedance change of the thermal switch, which occurs when the temperature exceeds a threshold. A UWB radar is used as the reader. The difference between the current time sample and a reference signal, obtained by averaging previous samples, is used to determine the switch transition and to mitigate the interference caused by clutter reflections. A gain compensation function is applied to equalize the attenuation due to propagation loss. An improved method based on the continuous wavelet transform with the Morlet wavelet is used to overcome detection problems associated with a low signal-to-noise ratio at the receiver. The average delay profile is used to detect the tag delay. Experimental measurements at distances of up to 5 m are presented.
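The reference-subtraction step can be sketched on synthetic traces: each new trace is compared against the running average of the previous ones, and the switch transition shows up as a large residual where the tag echo appears. Window size, threshold, and the synthetic echo below are assumptions, not the letter's actual parameters:

```python
def detect_transition(traces, window=4, threshold=0.5):
    """Return the index of the first trace whose peak absolute
    difference from the running average of the previous `window`
    traces exceeds `threshold` (reference-subtraction detection)."""
    for k in range(window, len(traces)):
        ref = [sum(traces[k - j][i] for j in range(1, window + 1)) / window
               for i in range(len(traces[k]))]
        diff = max(abs(a - b) for a, b in zip(traces[k], ref))
        if diff > threshold:
            return k
    return None

# Synthetic UWB traces: static clutter only, then a tag echo appears
# at sample 30 once the thermal switch changes state (trace 10 onward).
clutter = [0.2 if 12 <= i < 16 else 0.0 for i in range(64)]
traces = [clutter[:] for _ in range(20)]
for t in range(10, 20):
    traces[t] = clutter[:]
    traces[t][30] += 1.0      # echo from the switched tag
print(detect_transition(traces))  # → 10
```

Because the static clutter cancels in the subtraction, only the change caused by the switch survives; the letter additionally applies gain compensation and a Morlet-wavelet transform before thresholding, which this sketch omits.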


3D sensors provide valuable information for mobile robotic tasks like scene classification or object recognition, but these sensors often produce noisy data that makes it impossible to apply classical keypoint detection and feature extraction techniques. Therefore, noise removal and downsampling have become essential steps in 3D data processing. In this work, we propose the use of a 3D filtering and downsampling technique based on a Growing Neural Gas (GNG) network. The GNG method is able to deal with outliers present in the input data. These features allow it to represent 3D spaces, obtaining an induced Delaunay triangulation of the input space. Experiments show how state-of-the-art keypoint detectors improve their performance when using the GNG output representation as input data. Descriptors extracted from the improved keypoints achieve better matching in robotics applications such as 3D scene registration.
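A minimal 2-D Growing Neural Gas sketch (after Fritzke's 1995 formulation, with node removal omitted for brevity) illustrates how the node set tracks the input distribution while largely ignoring sparse outliers; all parameters below are illustrative, not the paper's settings:

```python
import random

def gng_fit(data, max_nodes=20, lam=25, eps_b=0.1, eps_n=0.005,
            a_max=30, alpha=0.5, d=0.995, steps=1500, seed=0):
    """Minimal 2-D Growing Neural Gas: the returned node positions form
    a filtered, downsampled representation of the input samples."""
    rng = random.Random(seed)
    nodes = [list(rng.choice(data)), list(rng.choice(data))]  # positions
    errors = [0.0, 0.0]          # accumulated squared error per node
    edges = {}                   # (i, j) with i < j -> age

    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    for step in range(1, steps + 1):
        x = rng.choice(data)
        order = sorted(range(len(nodes)), key=lambda i: d2(nodes[i], x))
        s1, s2 = order[0], order[1]
        errors[s1] += d2(nodes[s1], x)
        for c in (0, 1):                       # move the winner toward x
            nodes[s1][c] += eps_b * (x[c] - nodes[s1][c])
        for (i, j) in list(edges):             # age edges, move neighbors
            if s1 in (i, j):
                n = j if i == s1 else i
                for c in (0, 1):
                    nodes[n][c] += eps_n * (x[c] - nodes[n][c])
                edges[(i, j)] += 1
        edges[tuple(sorted((s1, s2)))] = 0     # (re)connect the two winners
        edges = {e: a for e, a in edges.items() if a <= a_max}
        if step % lam == 0 and len(nodes) < max_nodes:
            q = max(range(len(nodes)), key=lambda i: errors[i])
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:                           # insert node between q and f
                f = max(nbrs, key=lambda i: errors[i])
                r = len(nodes)
                nodes.append([(nodes[q][c] + nodes[f][c]) / 2 for c in (0, 1)])
                edges.pop(tuple(sorted((q, f))), None)
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0
                errors[q] *= alpha
                errors[f] *= alpha
                errors.append(errors[q])
        errors = [e * d for e in errors]
    return nodes

# Noisy samples along two edges of the unit square plus a few far outliers.
rng = random.Random(42)
data = [(rng.random(), rng.gauss(0, 0.02)) for _ in range(400)]
data += [(rng.gauss(0, 0.02), rng.random()) for _ in range(400)]
data += [(5 + rng.random(), 5 + rng.random()) for _ in range(8)]
nodes = gng_fit(data)
```

Because new nodes are only inserted where accumulated error is high, dense structure attracts most of the nodes while the sparse outliers receive few or none, which is the filtering behavior the paper exploits before keypoint detection.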