961 results for Bluetooth Data Noise


Relevance: 30.00%

Abstract:

Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Outliers are handled by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis-testing procedure to estimate it automatically. We present our validations using four experiments: (1) a leave-one-out experiment; (2) an experiment evaluating the present approach for handling pathology; (3) an experiment evaluating the present approach for handling outliers; and (4) an experiment reconstructing surface models of seven dry cadaver femurs using clinically relevant data with and without added noise. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with added noise.
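The LTS idea is compact: at each iteration, re-estimate the transform from only the (1 - outlier rate) fraction of points with the smallest residuals. Below is a minimal Python/NumPy sketch for the rigid-registration stage, assuming point correspondences are already established; `kabsch` and `lts_rigid_register` are hypothetical names, not the authors' implementation.

```python
# Minimal sketch of least trimmed squares (LTS) inside a rigid registration
# loop with a fixed, roughly estimated outlier rate; illustrative only.
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation R and translation t mapping src points onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def lts_rigid_register(src, dst, outlier_rate=0.1, n_iter=20):
    """Iteratively re-estimate the transform from the h best-matching points."""
    h = int(round((1.0 - outlier_rate) * len(src)))   # points kept per iteration
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        resid = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        keep = np.argsort(resid)[:h]                  # trim the largest residuals
        R, t = kabsch(src[keep], dst[keep])
    return R, t
```

Trimming the largest residuals before each re-fit is what makes the estimate robust to the assumed outlier fraction.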

Relevance: 30.00%

Abstract:

We used the Green's functions from auto-correlations and cross-correlations of seismic ambient noise to monitor temporal velocity changes in the subsurface at Villarrica volcano in the Southern Andes of Chile. Campaigns were conducted from March to October 2010 and February to April 2011 with 8 broadband and 6 short-period stations, respectively. We prepared the data by removing the instrument response, normalizing with a root-mean-square method, whitening the spectra, and filtering from 1 to 10 Hz. This frequency band was chosen based on the relatively high background noise level in that range. Hour-long auto- and cross-correlations were computed and the Green's functions stacked by day and over the total time. To track the temporal velocity changes, we stretched a 24-hour moving window of correlation functions from 90% to 110% of the original and cross-correlated them with the total stack. All of the stations' auto-correlations detected what is interpreted as an increase in velocity in 2010, with an average increase of 0.13%. Cross-correlations from station V01, near the summit, to the other stations show comparable changes that are also interpreted as increases in velocity. We attribute this change to the closing of cracks in the subsurface due either to seasonal snow loading or to regional tectonics. In addition to the common increase in velocity across the stations, there are excursions in velocity of the same order lasting several days. Amplitude decreases as the station's distance from the vent increases, suggesting these excursions may be attributed to changes within the volcanic edifice. In at least two occurrences, the amplitudes at stations V06 and V07, the stations farthest from the vent, are smaller. Similar short temporal excursions were seen in the auto-correlations from 2011; however, there was little to no increase in the overall velocity.
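The stretching measurement itself is compact: resample each correlation function on a scaled time axis, correlate with the reference stack, and keep the scale factor with the best fit. A minimal Python/NumPy sketch, assuming positive lag times and ignoring windowing details; `stretch_dvv` is an illustrative name, and the sign convention follows dv/v = -dt/t:

```python
import numpy as np

def stretch_dvv(ref, cur, t, factors=np.linspace(0.90, 1.10, 201)):
    """Return (dv/v, correlation) for the stretch factor best matching ref.

    ref, cur : reference (total stack) and current correlation functions
    t        : common lag-time axis (assumed non-negative, increasing)
    """
    best_cc, best_f = -np.inf, 1.0
    for f in factors:
        # Resample the current function onto the stretched time axis t * f.
        stretched = np.interp(t, t * f, cur)
        cc = np.corrcoef(ref, stretched)[0, 1]
        if cc > best_cc:
            best_cc, best_f = cc, f
    # A relative time shift dt/t = f - 1 corresponds to dv/v = -(f - 1).
    return -(best_f - 1.0), best_cc
```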

Relevance: 30.00%

Abstract:

Dynamic changes in ERP topographies can be conveniently analyzed by means of microstates, the so-called "atoms of thought", which represent brief periods of quasi-stable synchronized network activation. Comparing temporal microstate features such as onset, offset, or duration between groups and conditions therefore allows a precise assessment of the timing of cognitive processes. So far, this has been achieved by assigning the individual time-varying ERP maps to spatially defined microstate templates obtained by clustering the grand-mean data into predetermined numbers of topographies (microstate prototypes). Features obtained from these individual assignments were then statistically compared. The problem with this approach is that individual noise dilutes the match between individual topographies and templates, leading to lower statistical power. We therefore propose a randomization-based procedure that works without assigning grand-mean microstate prototypes to individual data. In addition, we propose a new criterion to select the optimal number of microstate prototypes based on cross-validation across subjects. After a formal introduction, the method is applied to a sample data set from an N400 experiment and to simulated data with varying signal-to-noise ratios, and the results are compared to existing methods. In a first comparison with previously employed statistical procedures, the new method showed increased robustness to noise and higher sensitivity for more subtle effects of microstate timing. We conclude that the proposed method is well suited for the assessment of timing differences in cognitive processes. The increased statistical power allows more subtle effects to be identified, which is particularly important in small and scarce patient populations.
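For context, the conventional assignment step that the proposed method avoids can be sketched in a few lines: each time point's map is labeled with the prototype it correlates with best spatially. A minimal Python/NumPy illustration with a hypothetical name (`assign_microstates`), not the authors' code:

```python
import numpy as np

def assign_microstates(erp, prototypes):
    """Label each time point with the best-matching microstate prototype.

    erp        : array (n_channels, n_times) of ERP maps
    prototypes : array (n_states, n_channels) of microstate templates
    """
    X = erp - erp.mean(axis=0)                    # average-reference the maps
    P = prototypes - prototypes.mean(axis=1, keepdims=True)
    # Spatial correlation between every prototype and every time point.
    corr = (P @ X) / (np.linalg.norm(P, axis=1)[:, None] * np.linalg.norm(X, axis=0))
    # np.abs ignores polarity (the resting-state convention); drop it to
    # keep polarity, as is sometimes preferred for ERP data.
    return np.abs(corr).argmax(axis=0)            # winning prototype per sample
```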

Relevance: 30.00%

Abstract:

AIM: To determine the feasibility of evaluating surgically induced hepatocyte damage using gadoxetate disodium (Gd-EOB-DTPA) as a marker for viable hepatocytes at magnetic resonance imaging (MRI) after liver resection. MATERIAL AND METHODS: Fifteen patients were prospectively enrolled in this institutional review board-approved study prior to elective liver resection, after informed consent. Three-Tesla (3 T) MRI was performed 3-7 days after surgery. Three-dimensional (3D) T1-weighted (W) volumetric interpolated breath-hold gradient-echo (VIBE) sequences covering the liver were acquired before and 20 min after Gd-EOB-DTPA administration. The signal-to-noise ratio (SNR) was used to compare the uptake of Gd-EOB-DTPA in healthy liver tissue and in liver tissue adjacent to the resection border, applying a paired Student's t-test. Correlations with potential influencing factors (blood loss, duration of intervention, age, pre-existing liver diseases, postoperative change of the resection surface) were calculated using Pearson's correlation coefficient. RESULTS: Before Gd-EOB-DTPA administration, the SNR did not differ significantly (p = 0.052) between healthy liver tissue adjacent to untouched liver borders [59.55 ± 25.46 (SD)] and the liver tissue compartment close to the resection surface (63.31 ± 27.24). During the hepatocyte-specific phase, the surgical site showed a significantly (p = 0.04) lower SNR (69.44 ± 24.23) compared with the healthy site (78.45 ± 27.71). Dynamic analyses revealed a significantly lower increase (p = 0.008) in signal intensity in the resection border compartment compared with the healthy tissue. CONCLUSION: Gd-EOB-DTPA-enhanced MRI may have the potential to be an effective non-invasive tool for detecting hepatocyte damage after liver resection.
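The core comparison is a paired t-test on per-patient SNR values from the two compartments. A minimal, runnable Python/SciPy sketch with hypothetical placeholder data (the actual per-patient values are not given in the abstract):

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient SNR values for the two liver compartments during
# the hepatocyte-specific phase; placeholders drawn from the abstract's
# means/SDs, not the study data.
rng = np.random.default_rng(42)
snr_healthy = rng.normal(78.45, 27.71, size=15)
snr_resection = rng.normal(69.44, 24.23, size=15)

t_stat, p_value = stats.ttest_rel(snr_healthy, snr_resection)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```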

Relevance: 30.00%

Abstract:

This paper introduces an area- and power-efficient approach for compressive recording of cortical signals in an implantable system prior to transmission. Recent research on compressive sensing has shown promising results for sub-Nyquist sampling of sparse biological signals. Still, any large-scale implementation of this technique faces critical issues caused by the increased hardware intensity. The cost of implementing compressive sensing in a multichannel system, in terms of area usage, can be significantly higher than that of a conventional data acquisition system without compression. To tackle this issue, a new multichannel compressive sensing scheme is proposed which exploits the spatial sparsity of the signals recorded from the electrodes of the sensor array. The analysis shows that, using this method, the power efficiency is preserved to a great extent while the area overhead is significantly reduced, resulting in an improved power-area product. The proposed circuit architecture is implemented in a UMC 0.18 µm CMOS technology. Extensive performance analysis and design optimization have been carried out, resulting in a low-noise, compact, and power-efficient implementation. The results of simulations and subsequent reconstructions show the possibility of recovering fourfold-compressed intracranial EEG signals with an SNR as high as 21.8 dB, while consuming 10.5 µW of power within an effective area of 250 µm × 250 µm per channel.
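The generic principle behind such fourfold compression is easy to demonstrate offline: project a sparse signal onto a few random measurements and recover it with a sparse solver. A self-contained Python/NumPy sketch using ISTA (a simple proximal-gradient solver); the random Gaussian measurement matrix and parameters are illustrative assumptions, not the paper's on-chip architecture, which additionally exploits spatial sparsity across channels:

```python
import numpy as np

def ista(Phi, y, lam=0.05, n_iter=200):
    """Iterative shrinkage-thresholding: recover sparse x from y = Phi @ x."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x + Phi.T @ (y - Phi @ x) / L    # gradient step on the data fit
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                          # length, measurements (4x compression), sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # random sub-Nyquist measurement matrix
x_hat = ista(Phi, Phi @ x_true)               # reconstruction from 4x fewer samples
```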

Relevance: 30.00%

Abstract:

Strengthening car drivers’ intention to prevent road-traffic noise is a first step toward noise abatement through voluntary change of behavior. We analyzed predictors of this intention based on the norm activation model (i.e., personal norm, problem awareness, awareness of consequences, social norm, and value orientations). Moreover, we studied the effects of noise exposure, noise sensitivity, and noise annoyance on problem awareness. Data came from 1,002 car drivers who participated in a two-wave longitudinal survey over 4 months. Personal norm had a large prospective effect on intention, even when the previous level of intention was controlled for, and mediated the effect of all other variables on intention. Almost 60% of variance in personal norm was explained by problem awareness, social norm, and biospheric value orientation. The effects of noise sensitivity and noise exposure on problem awareness were small and mediated by noise annoyance. We propose four communication strategies for strengthening the intention to prevent road-traffic noise in car drivers.

Relevance: 30.00%

Abstract:

We present an application- and sample-independent method for the automatic discrimination of noise and signal in optical coherence tomography (OCT) B-scans. The proposed algorithm models the observed noise probabilistically and allows for a dynamic determination of image noise parameters and the choice of appropriate image rendering parameters. This overcomes observer variability and the need for a priori information about the content of sample images, both of which are challenging to estimate systematically with current systems. As such, our approach has the advantage of automatically determining crucial parameters for evaluating rendered image quality in a systematic and task-independent way. We tested our algorithm on data from four different biological and non-biological samples (index finger, lemon slices, sticky tape, and detector cards) acquired with three different experimental spectral-domain OCT measurement systems, including a swept-source OCT. The results are compared to parameters determined manually by four experienced OCT users. Overall, our algorithm works reliably regardless of which system and sample are used, and in all cases estimates noise parameters within the confidence interval of those found by the observers.
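One simple, generic way to estimate noise parameters without a priori knowledge of the sample is to take robust location/scale statistics over the B-scan, on the assumption that most pixels are noise-dominated. A Python/NumPy sketch of that idea; this is a stand-in illustration, not the paper's probabilistic model:

```python
import numpy as np

def estimate_noise_params(bscan):
    """Estimate noise floor and spread from an OCT B-scan (linear intensity).

    Assumes noise-dominated pixels form the lower mode of the intensity
    distribution; robust statistics keep signal pixels from biasing the fit.
    """
    v = bscan.ravel()
    med = np.median(v)                     # robust location of the noise floor
    mad = np.median(np.abs(v - med))       # robust spread
    sigma = 1.4826 * mad                   # MAD -> std for Gaussian noise
    # Pixels more than ~3 sigma above the floor are treated as signal.
    signal_mask = bscan > med + 3.0 * sigma
    return med, sigma, signal_mask
```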

Relevance: 30.00%

Abstract:

Efforts are ongoing to decrease the noise in the GRACE gravity field models and hence to arrive closer to the GRACE baseline. Among the most significant error sources are untreated errors in the observation data and imperfections in the background models. A recent study (Bandikova & Flury, 2014) revealed that the current release of the star camera attitude data (SCA1B RL02) contains noise systematically higher than expected, by about a factor of 3-4. This is due to an incorrect implementation of the algorithms for quaternion combination in the JPL processing routines. Generating improved SCA data requires that valid data from both star camera heads be available, which is not always the case because the Sun and Moon at times blind one camera. In gravity field modeling, the attitude data are needed for the KBR antenna offset correction and to orient the non-gravitational linear accelerations sensed by the accelerometer. Hence any improvement in the SCA data is expected to be reflected in the gravity field models. In order to quantify the effect on the gravity field, we processed one month of observation data using two different approaches: the celestial mechanics approach (AIUB) and the variational equations approach (ITSG). We show that the noise in the KBR observations and the linear accelerations has effectively decreased. However, the effect on the gravity field on a global scale is hardly evident. We conclude that, at the current level of accuracy, the errors seen in the temporal gravity fields are dominated by errors coming from sources other than the attitude data.
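Combining attitude quaternions from two star camera heads is subtle because q and -q encode the same rotation, so naive component-wise averaging can fail. A standard, robust alternative is the eigenvector method of Markley et al. (2007), sketched below in Python/NumPy; this illustrates the kind of combination step involved, not the actual JPL routine:

```python
import numpy as np

def average_quaternions(q1, q2, w1=0.5, w2=0.5):
    """Combine two attitude quaternions by the eigenvector method.

    The outer product q @ q.T is identical for q and -q, so the weighted sum
    of outer products handles the quaternion sign ambiguity correctly; its
    dominant eigenvector is the optimal average (Markley et al., 2007).
    """
    M = w1 * np.outer(q1, q1) + w2 * np.outer(q2, q2)
    vals, vecs = np.linalg.eigh(M)
    q = vecs[:, -1]                # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)
```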

Relevance: 30.00%

Abstract:

The seismic data were acquired north of the Knipovich Ridge on the western Svalbard margin during cruise MSM21/4. They were recorded using a Geometrics GeoEel streamer of either 120 channels (profiles p100-p208) or 88 channels (profiles p300-p805), with a group spacing of 1.56 m and a sampling rate of 2 kHz. A GI-Gun (2 × 1.7 l) with a main frequency of ~150 Hz was used as the source and operated at a shot interval of 6-8 s.

Processing of profiles p100-p208 and p600-p805: Positions for each channel were calculated by backtracking along the profiles from the GI-Gun GPS positions. The shot gathers were analyzed for abnormal amplitudes below the seafloor reflection by comparing neighboring traces in different frequency bands within sliding time windows. To suppress surface-generated water noise, a tau-p filter was applied in the shot-gather domain. Common mid-point (CMP) profiles were then generated through crooked-line binning with a CMP spacing of 1.5625 m. A zero-phase band-pass filter with corner frequencies of 60 Hz and 360 Hz was applied to the data. Based on regional velocity information from MCS data [Sarkar, 2012], an interpolated and extrapolated 3D interval velocity model was created below the digitized seafloor reflection of the high-resolution streamer data. This velocity model was used to apply a CMP stack and an amplitude-preserving Kirchhoff post-stack time migration.

Processing of profiles p400-p500: Data were sampled at 0.5 ms and sorted into the CMP domain with a bin spacing of 5 m. Normal move-out correction was carried out with a velocity of 1500 m/s, and an Ormsby band-pass filter with corner frequencies at 40, 80, 600, and 1000 Hz was applied. The data were time-migrated using the water velocity.
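For the p100-p208 flow, the zero-phase band-pass step can be reproduced with standard tools. A short Python/SciPy sketch using a Butterworth design and forward-backward filtering; the 60-360 Hz corners and 2 kHz sampling come from the text, while the filter type and order are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def zero_phase_bandpass(traces, fs=2000.0, f_lo=60.0, f_hi=360.0, order=4):
    """Zero-phase band-pass along the time axis of a trace array.

    filtfilt applies the filter forward and backward, which doubles the
    effective order but cancels phase distortion (zero-phase response).
    """
    b, a = butter(order, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
    return filtfilt(b, a, traces, axis=-1)
```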

Relevance: 30.00%

Abstract:

Ocean acidification (OA) and anthropogenic noise are both known to cause stress and induce physiological and behavioural changes in fish, with consequences for fitness. OA is also predicted to reduce the ocean's capacity to absorb low-frequency sounds produced by human activity. Consequently, anthropogenic noise could propagate further in an increasingly acidic ocean. For the first time, this study investigated the independent and combined impacts of elevated carbon dioxide (CO2) and anthropogenic noise on the behaviour of a marine fish, the European sea bass (Dicentrarchus labrax). In a fully factorial experiment crossing two CO2 levels (current-day and elevated) with two noise conditions (ambient and pile driving), D. labrax were exposed to four CO2/noise treatment combinations: 400 µatm/ambient, 1000 µatm/ambient, 400 µatm/pile driving, and 1000 µatm/pile driving. Pile-driving noise increased ventilation rate (indicating stress) compared with ambient-noise conditions. Elevated CO2 did not alter the ventilation-rate response to noise. Furthermore, there was no interaction effect between elevated CO2 and pile-driving noise, suggesting that OA is unlikely to influence startle or ventilatory responses of fish to anthropogenic noise. However, effective management of anthropogenic noise could reduce fish stress, which may improve resilience to future stressors.

Relevance: 30.00%

Abstract:

A method is proposed to reduce the noise power in the far-field pattern without modifying the desired signal, thereby achieving a significant signal-to-noise ratio improvement. The method applies when the antenna measurement is performed in the planar near-field range, where the recorded data are assumed to be corrupted by white Gaussian, space-stationary noise originating from the receiver's additive noise. When the measured field is back-propagated from the scan plane to the antenna under test (AUT) plane, the noise remains white Gaussian and space-stationary, whereas the desired field is theoretically concentrated in the antenna aperture. A spatial filter can therefore be applied, cancelling the field located outside the AUT dimensions, which consists only of noise. A planar near-field to far-field transformation is then carried out, yielding a clear improvement over the pattern obtained directly from the measurement. To verify the effectiveness of the method, two examples are presented using both simulated and measured near-field data.
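The spatial filtering step amounts to zeroing the back-propagated field outside the antenna's physical extent before transforming to the far field. A minimal Python/NumPy sketch under the usual plane-wave-spectrum assumptions; the function name and the simple rectangular mask are illustrative, not the paper's implementation:

```python
import numpy as np

def aperture_filter_far_field(e_aut_plane, x, y, aut_size):
    """Zero the back-propagated field outside the AUT aperture, then FFT.

    e_aut_plane : complex field on the AUT plane, shape (len(y), len(x))
    x, y        : sample coordinates (m) on the plane
    aut_size    : (width, height) of the antenna under test (m)
    The 2-D FFT of the filtered aperture field gives the (unnormalized)
    plane-wave spectrum, i.e. the far-field pattern over visible angles.
    """
    X, Y = np.meshgrid(x, y)
    mask = (np.abs(X) <= aut_size[0] / 2) & (np.abs(Y) <= aut_size[1] / 2)
    filtered = np.where(mask, e_aut_plane, 0.0)   # cancel noise-only samples
    return np.fft.fftshift(np.fft.fft2(filtered))
```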

Relevance: 30.00%

Abstract:

The increasing importance of noise pollution has led to the creation of many new noise-testing laboratories in recent years. For this reason, and because of the legal implications that noise reporting may have, it is necessary to create procedures intended to guarantee the quality of the testing and its results. For instance, the ISO/IEC standard 17025:2005 specifies general requirements for the competence of testing laboratories. In this standard, interlaboratory comparisons are one of the main measures that must be applied to guarantee the quality of laboratories when applying specific testing methodologies. In the specific case of environmental noise, round-robin tests are usually difficult to design, as it is hard to find scenarios that remain available and controlled while the participants carry out their measurements. Monitoring and controlling the factors that can influence the measurements (source emissions, propagation, background noise…) is not usually affordable, so the most common solution is to create highly simplified scenarios in which most of the factors that can influence the results are excluded (sampling, processing of results, background noise, source detection…). The new approach described in this paper only requires the organizer to make actual measurements (or prepare virtual ones). Applying and interpreting a common reference document (standard, regulation…), the participants must analyze these input data independently and provide their results, which are then compared across participants. The measurement costs are greatly reduced for the participants, there is no need to monitor the scenario conditions, and almost any relevant factor can be included in this methodology.