948 results for Frequency Modulated Signals, Parameter Estimation, Signal-to-Noise Ratio, Simulations


Relevance: 100.00%

Abstract:

Dynamic changes in ERP topographies can be conveniently analyzed by means of microstates, the so-called "atoms of thought", that represent brief periods of quasi-stable synchronized network activation. Comparing temporal microstate features such as on- and offset or duration between groups and conditions therefore allows a precise assessment of the timing of cognitive processes. So far, this has been achieved by assigning the individual time-varying ERP maps to spatially defined microstate templates obtained from clustering the grand-mean data into predetermined numbers of topographies (microstate prototypes). Features obtained from these individual assignments were then statistically compared. This approach has the problem that individual noise dilutes the match between individual topographies and templates, leading to lower statistical power. We therefore propose a randomization-based procedure that works without assigning grand-mean microstate prototypes to individual data. In addition, we propose a new criterion to select the optimal number of microstate prototypes based on cross-validation across subjects. After a formal introduction, the method is applied to a sample data set of an N400 experiment and to simulated data with varying signal-to-noise ratios, and the results are compared to existing methods. In a first comparison with previously employed statistical procedures, the new method showed increased robustness to noise and higher sensitivity to more subtle effects of microstate timing. We conclude that the proposed method is well suited for the assessment of timing differences in cognitive processes. The increased statistical power allows the identification of more subtle effects, which is particularly important in small and scarce patient populations.
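
As an illustration of the statistical machinery involved, the following is a minimal sketch of a paired randomization test on a per-subject microstate timing feature (e.g., onset latency) between two conditions; the data and parameter choices are hypothetical, not the authors' implementation:

```python
# Paired sign-flip randomization test on per-subject onset latencies.
import numpy as np

def randomization_test(onsets_a, onsets_b, n_perm=5000, rng=None):
    """Permutation test of the mean latency difference between conditions."""
    rng = np.random.default_rng(rng)
    diffs = np.asarray(onsets_a) - np.asarray(onsets_b)  # per-subject differences
    observed = diffs.mean()
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=diffs.size)  # randomly swap condition labels
        if abs((signs * diffs).mean()) >= abs(observed):
            count += 1
    return observed, count / n_perm  # mean difference and two-sided p-value

# Synthetic latencies (ms): condition B delayed by ~20 ms in 24 subjects.
rng = np.random.default_rng(0)
a = 380 + 15 * rng.standard_normal(24)
b = 400 + 15 * rng.standard_normal(24)
print(randomization_test(a, b, rng=1))
```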

Relevance: 100.00%

Abstract:

OBJECTIVE To compare speech understanding with the BAHA BP110 and BAHA Intenso sound processors. STUDY DESIGN Prospective experimental study. SETTING Tertiary referral center. PATIENTS Twenty experienced users of osseointegrated auditory implants with conductive or mixed hearing loss. INTERVENTIONS In a first session, half of the participants were fitted with an Intenso, the other half with a BP110. After 1 month of use, aided speech understanding in quiet and in noise was measured, and the other test processor was fitted. One month later, speech understanding with the second sound processor was assessed. MAIN OUTCOME MEASURES Speech understanding in quiet and in noise, with noise arriving either from the front, the rear, or the side of the user wearing the osseointegrated bone conductor. RESULTS Significant improvements were found for both processors for speech understanding in quiet (+9.6 to +34.8 percentage points; p = 0.02 to 0.001) and in noise (+6.2 to +13.8 dB, p < 0.001). No significant differences were found between the 2 devices for speech in quiet. For noise from the rear, subjects using the BP110 were able to understand speech at signal-to-noise ratios 5.1 dB lower (i.e., less favorable) than with the Intenso (p < 0.001). CONCLUSION Speech understanding is substantially improved by both devices, with no significant differences between the sound processors in quiet. In noise, speech understanding is significantly better with the BP110 than with the Intenso for noise from the rear.

Relevance: 100.00%

Abstract:

PURPOSE Computed tomography (CT) accounts for more than half of the total radiation exposure from medical procedures, which makes dose reduction in CT an effective means of reducing radiation exposure. We analysed the dose reduction that can be achieved with a new CT scanner [Somatom Edge (E)] that incorporates new developments in hardware (detector) and software (iterative reconstruction). METHODS We compared weighted volume CT dose index (CTDIvol) and dose-length product (DLP) values of 25 consecutive patients studied with non-enhanced standard brain CT on the new scanner and on two previous models, a 64-row multi-detector CT (MDCT) scanner (S64) and a 16-row MDCT scanner (S16). We analysed signal-to-noise and contrast-to-noise ratios in images from the three scanners, and three neuroradiologists rated image quality to determine whether the dose reduction techniques still yield sufficient diagnostic quality. RESULTS The CTDIvol of scanner E was 41.5 and 36.4 % lower than the values of scanners S16 and S64, respectively; the DLP values were 40 and 38.3 % lower. All differences were statistically significant (p < 0.0001). Signal-to-noise and contrast-to-noise ratios were best in S64; these differences also reached statistical significance. Image analysis, however, showed "non-inferiority" of scanner E regarding image quality. CONCLUSIONS This first experience with the new scanner shows that new dose reduction techniques allow for up to 40 % dose reduction while maintaining image quality at a diagnostically usable level.

Relevance: 100.00%

Abstract:

Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded on a finite interval, [0 ≤ x ≤ 1], as is a distribution of concentrations, parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed either normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal order statistics and regression on lognormal order statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracies of all three methods were compared. For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, the performance of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
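
For readers unfamiliar with the baseline methods, here is a hedged sketch of one of the two established comparison methods, regression on normal order statistics (ROS), in a simplified form where censored values contribute only their ranks; all names and parameters are illustrative:

```python
# Regression on normal order statistics for a left-censored sample:
# detected values are regressed on normal quantiles of their plotting
# positions; the fitted intercept/slope estimate the mean/standard deviation.
import numpy as np
from scipy import stats

def ros_normal(values, detection_limit):
    """Estimate (mean, sd) of a normal population from left-censored data."""
    x = np.sort(np.asarray(values, dtype=float))
    detected = x[x >= detection_limit]      # censored values carry rank info only
    n = x.size
    ranks = np.arange(n - detected.size + 1, n + 1)  # ranks of detected values
    pp = (ranks - 0.375) / (n + 0.25)       # Blom plotting positions
    q = stats.norm.ppf(pp)                  # normal order-statistic quantiles
    slope, intercept, *_ = stats.linregress(q, detected)
    return intercept, slope                 # mean estimate, sd estimate

# ~30% censoring of a N(10, 2) sample at a detection limit of 9.
rng = np.random.default_rng(42)
sample = rng.normal(10.0, 2.0, size=200)
print(ros_normal(sample, detection_limit=9.0))  # roughly (10, 2)
```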

Relevance: 100.00%

Abstract:

Englacial horizons deeper than 100 m are absent within 100 MHz ground-penetrating radar (GPR) surface profiles we recorded on Clark and Commonwealth Glaciers in the Antarctic Dry Valleys region. Both glaciers show continuous bottom horizons to 280 m, with bottom signal-to-noise ratios near 30 dB. Density horizons should fade below 50 m depth because impermeable ice occurred by 36 m. Folding within Commonwealth Glacier could preclude radar strata beneath about 80 m depth, but there is no significant folding within Clark Glacier. High sulfate concentrations and strong contrasts exist in our shallow ice core. However, it appears that high background concentration levels and possibly decreased concentration contrasts with depth placed their corresponding reflection coefficients at the limit of, or below, our system sensitivity by about 77 m depth. Further verification of this conclusion awaits processing of our deep-core chemistry profiles.

Relevance: 100.00%

Abstract:

The CHaracterising ExOPlanet Satellite (CHEOPS) is a joint ESA-Switzerland space mission (expected to launch in 2017) dedicated to searching for exoplanet transits by means of ultra-high precision photometry. CHEOPS will provide accurate radii for planets down to Earth size. Targets will mainly come from radial velocity surveys. The CHEOPS instrument is an optical space telescope of 30 cm clear aperture with a single focal-plane CCD detector. The tube assembly is passively cooled and thermally controlled to support high-precision, low-noise photometry. The telescope feeds a re-imaging optic, which supports the straylight suppression concept needed to achieve the required signal-to-noise ratio.

Relevance: 100.00%

Abstract:

Long-term electrocardiogram (ECG) signals may suffer from relevant baseline disturbances during physical activity. Motion artifacts in particular are more pronounced with dry surface or esophageal electrodes, which are dedicated to prolonged ECG recording. In this paper we present a method called baseline wander tracking (BWT) that tracks and rejects strong baseline disturbances and avoids concurrent saturation of the analog front-end. The proposed algorithm shifts the baseline level of the ECG signal to the middle of the dynamic input range. Because these fast offset shifts produce much steeper signal portions than the normal ECG waves, the true ECG signal can be reconstructed offline and filtered using computationally intensive algorithms. Based on Monte Carlo simulations, we observed reconstruction errors mainly caused by the non-linearity inaccuracies of the DAC. However, for a synthetic ECG signal, the signal-to-error ratio of the BWT was higher than that of an analog front-end featuring a dynamic input range above 15 mV. The BWT is additionally able to suppress (electrode) offset potentials without introducing long transients. Due to its structural simplicity, memory efficiency, and DC-coupling capability, the BWT lends itself to the high integration required in long-term and low-power ECG recording systems.
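
The offline reconstruction step can be sketched as follows, under the assumption of known, discrete front-end shift steps; the function names, step size, and threshold are illustrative, not taken from the paper:

```python
# Undo baseline-tracking shifts: the shifts are far steeper than ECG waves,
# so large sample-to-sample jumps are quantized to the known step size and
# the accumulated offset is subtracted.
import numpy as np

def reconstruct_bwt(recorded, step=1.0, jump_threshold=0.6):
    """Reconstruct the true signal from a baseline-shifted recording."""
    d = np.diff(recorded)
    shifts = np.where(np.abs(d) > jump_threshold * step,
                      np.round(d / step) * step, 0.0)  # detected offset steps
    offset = np.concatenate(([0.0], np.cumsum(shifts)))
    return recorded - offset

# Synthetic test: slow sine "baseline" plus two tracking shifts.
t = np.linspace(0, 2, 2000)
true = 0.2 * np.sin(2 * np.pi * t)
recorded = true.copy()
recorded[700:] += 1.0    # front-end shifted the baseline up by one step
recorded[1500:] -= 1.0   # and back down
print(np.max(np.abs(reconstruct_bwt(recorded) - true)))  # near zero
```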

Relevance: 100.00%

Abstract:

Long-term electrocardiogram (ECG) recordings often suffer from relevant noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are dedicated to prolonged registration. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be put in relation to the increase in signal-to-baseline ratio (SBR). Here we present a graphics processing unit (GPU) based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite impulse response, infinite impulse response, moving-mean, and moving-median filters. Individual filter parameters were optimized with respect to the SBR increase based on ECGs from the Physionet database superimposed with autoregressive-modeled, real baseline wander. A Monte Carlo simulation showed that for low input SBR the moving-median filter outperforms any other method but negatively affects ECG wave detection. In contrast, the infinite impulse response filter is preferred in case of high input SBR. However, the parallelized wavelet filter is processed 500 and 4 times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low-SBR situations. Using a signal segment of 64 megasamples that is filtered as an entire unit, wavelet filtering of a 7-day high-resolution ECG is computed within less than 3 seconds. Taking the high filtering speed into account, the GPU wavelet filter is the most efficient method to remove baseline wander present in long-term ECGs, and it strongly reduces the computational burden.
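
As an example of one of the compared algorithms, the following is a minimal (CPU-based, not GPU-parallelized) sketch of moving-median baseline removal; the window length and test signal are assumptions, not the paper's optimized parameters:

```python
# Estimate baseline wander with a moving median and subtract it; the
# median is robust to the sharp QRS-like spikes riding on the baseline.
import numpy as np
from scipy.ndimage import median_filter

def remove_baseline_median(ecg, fs, window_s=0.6):
    win = int(window_s * fs) | 1                      # odd window length
    baseline = median_filter(ecg, size=win, mode='nearest')
    return ecg - baseline, baseline

# Synthetic example: spike train ("beats") on a slow drifting baseline.
fs = 250
t = np.arange(0, 10, 1 / fs)
drift = 0.5 * np.sin(2 * np.pi * 0.2 * t)
ecg = drift.copy()
ecg[::fs] += 1.0                                      # one crude beat per second
clean, est = remove_baseline_median(ecg, fs)
print(np.abs(est - drift).mean())                     # baseline tracking error
```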

Relevance: 100.00%

Abstract:

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a noninvasive technique for quantitative assessment of the integrity of the blood-brain barrier and the blood-spinal cord barrier (BSCB) in the presence of central nervous system pathologies. However, the results of DCE-MRI show substantial variability. This high variability can be caused by a number of factors, including inaccurate T1 estimation, insufficient temporal resolution, and poor contrast-to-noise ratio. My thesis work develops improved methods to reduce the variability of DCE-MRI results. To obtain a fast and accurate T1 map, the Look-Locker acquisition technique was implemented with a novel and truly centric k-space segmentation scheme. In addition, an original multi-step curve-fitting procedure was developed to increase the accuracy of T1 estimation. A view-sharing acquisition method was implemented to increase temporal resolution, and a novel normalization method was introduced to reduce image artifacts. Finally, a new clustering algorithm was developed to reduce apparent noise in the DCE-MRI data. The performance of these proposed methods was verified by simulations and phantom studies. As part of this work, the proposed techniques were applied to an in vivo DCE-MRI study of experimental spinal cord injury (SCI). These methods have shown robust results and allow quantitative assessment of regions with very low vascular permeability. In conclusion, application of the improved DCE-MRI acquisition and analysis methods developed in this thesis can improve the accuracy of DCE-MRI results.
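
A hedged sketch of the standard Look-Locker T1 estimation that underlies such a curve fit (not the thesis code; this is the textbook three-parameter model with the usual Look-Locker correction):

```python
# Fit S(t) = A - B*exp(-t/T1*) to inversion-recovery samples, then apply
# the standard correction T1 = T1* * (B/A - 1).
import numpy as np
from scipy.optimize import curve_fit

def ll_model(t, A, B, t1_star):
    return A - B * np.exp(-t / t1_star)

def fit_t1(times, signal):
    p0 = [signal.max(), 2 * signal.max(), times.mean()]  # crude initial guess
    (A, B, t1_star), _ = curve_fit(ll_model, times, signal, p0=p0, maxfev=10000)
    return t1_star * (B / A - 1.0)  # Look-Locker corrected T1

# Synthetic recovery curve with T1 = 1.2 s, sampled every 100 ms.
t = np.arange(0.1, 3.0, 0.1)
t1_true, factor = 1.2, 0.7        # factor models readout-shortened recovery
t1_star = t1_true * factor
sig = 1.0 - (1.0 + 1.0 / factor) * np.exp(-t / t1_star)
print(fit_t1(t, sig))             # close to 1.2
```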

Relevance: 100.00%

Abstract:

The European Project for Ice Coring in Antarctica (EPICA) includes a comprehensive pre-site survey on the inland ice plateau of Dronning Maud Land. This paper focuses on the investigation of the 18O content of shallow firn and ice cores. These cores were dated by profiles derived from dielectric-profiling and continuous flow analysis measurements. The individual records were stacked in order to obtain composite chronologies of 18O contents and accumulation rates with enhanced signal-to-noise variance ratios. These chronologies document variations in the last 200 and 1000 years. The 18O contents and accumulation rates decreased in the 19th century and increased during the 20th century. Using the empirical relationships between stable isotopes, accumulation rates and the 10 m firn temperature, the variation of both parameters can be explained by the same temperature history. But other causes for these variations, such as the build-up of the snow cover, cannot be excluded. A marked feature of the 1000 year chronology occurs during the period AD 1180-1530, when the 18O content remains below the long-term mean. Cross-correlation analyses between five cores from the Weddell Sea region and Dronning Maud Land show that 18O records can in some periods be positively correlated and in others negatively correlated, indicating a complex climatic history in time and space.
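
The rationale for stacking can be illustrated with a toy computation: averaging N records of a common signal with independent noise raises the signal-to-noise variance ratio roughly N-fold. A minimal sketch with synthetic data:

```python
# Stacking five noisy "core" records of a common signal.
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1000)
signal = np.sin(2 * np.pi * years / 200.0)               # common climate signal
records = signal + rng.standard_normal((5, years.size))  # 5 noisy records

def snr_var(x, s):
    """Signal-to-noise variance ratio of record x given the true signal s."""
    return s.var() / (x - s).var()

print(snr_var(records[0], signal))            # single record, ~0.5
print(snr_var(records.mean(axis=0), signal))  # stack of 5, ~5x higher
```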

Relevance: 100.00%

Abstract:

We have developed a new projector model specifically tailored for fast list-mode tomographic reconstruction in positron emission tomography (PET) scanners with parallel planar detectors. The model provides an accurate estimation of the probability distribution of coincidence events defined by pairs of scintillating crystals. This distribution is parameterized with 2D elliptical Gaussian functions defined in planes perpendicular to the main axis of the tube of response (TOR). The parameters of these Gaussian functions have been obtained by fitting Monte Carlo simulations that include positron range, acolinearity of gamma rays, as well as detector attenuation and scatter effects. The proposed model has been applied efficiently in list-mode reconstruction algorithms. Evaluation with Monte Carlo simulations of a rotating high-resolution PET scanner indicates that this model yields a better recovery-to-noise ratio in OSEM (ordered-subsets expectation-maximization) reconstruction, compared both to list-mode reconstruction with a symmetric circular Gaussian TOR model and to histogram-based OSEM with a precalculated system matrix using Monte Carlo simulated models and symmetries.
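
The central idea of the projector can be sketched as follows: the weight of a spatial point for a given TOR is a 2D elliptical Gaussian evaluated in the plane perpendicular to the TOR axis. All parameters below are assumed for illustration; in the actual model the Gaussian widths are fitted to Monte Carlo data and may vary along the axis:

```python
# Elliptical-Gaussian TOR weight of a point, evaluated transversally.
import numpy as np

def tor_weight(point, tor_origin, tor_axis, sigma_u, sigma_v, u_dir, v_dir):
    """Weight of `point` for one TOR.

    tor_axis: unit vector along the TOR; u_dir, v_dir: unit vectors
    spanning the transverse plane; sigma_u, sigma_v: Gaussian widths.
    """
    r = np.asarray(point, dtype=float) - np.asarray(tor_origin, dtype=float)
    r_perp = r - np.dot(r, tor_axis) * np.asarray(tor_axis)  # transverse part
    u, v = np.dot(r_perp, u_dir), np.dot(r_perp, v_dir)
    return np.exp(-0.5 * ((u / sigma_u) ** 2 + (v / sigma_v) ** 2))

# A point 1 mm off-axis in u, for a TOR along z with sigmas 1.5/2.5 mm.
w = tor_weight([1.0, 0.0, 5.0], [0, 0, 0], np.array([0.0, 0.0, 1.0]),
               1.5, 2.5, np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
print(w)  # exp(-0.5 * (1/1.5)**2) ~ 0.80
```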

Relevance: 100.00%

Abstract:

In this paper we propose a novel fast random search clustering (RSC) algorithm for mixing matrix identification in multiple-input multiple-output (MIMO) linear blind inverse problems with sparse inputs. The proposed approach is based on the clustering of the observations around the directions given by the columns of the mixing matrix that occurs typically for sparse inputs. Exploiting this fact, the RSC algorithm proceeds by parameterizing the mixing matrix using hyperspherical coordinates, randomly selecting candidate basis vectors (i.e. clustering directions) from the observations, and accepting or rejecting them according to a binary hypothesis test based on the Neyman–Pearson criterion. The RSC algorithm is not tailored to any specific distribution of the sources, can deal with an arbitrary number of inputs and outputs (thus solving the difficult under-determined problem), and is applicable to both instantaneous and convolutive mixtures. Extensive simulations on synthetic and real data with different numbers of inputs and outputs, data sizes, sparsity factors of the inputs, and signal-to-noise ratios confirm the good performance of the proposed approach under moderate-to-high signal-to-noise ratios. SUMMARY: A blind source separation method for sparse signals based on identifying the mixing matrix by means of random clustering techniques.
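
A simplified sketch of the RSC idea (not the paper's exact algorithm; a plain count threshold stands in for the Neyman–Pearson test, and all parameters are illustrative):

```python
# Sparse mixtures cluster along mixing-matrix columns: randomly drawn
# observation directions are accepted as column estimates when enough
# other observations lie within an angular tolerance of them.
import numpy as np

def rsc_columns(X, n_cols, angle_tol=0.05, accept_frac=0.05, rng=None):
    rng = np.random.default_rng(rng)
    U = X / np.linalg.norm(X, axis=0)          # observations as unit directions
    U = U * np.where(U[0] < 0, -1.0, 1.0)      # fold sign ambiguity to a hemisphere
    found = []
    while len(found) < n_cols:
        cand = U[:, rng.integers(U.shape[1])]  # random candidate direction
        close = np.arccos(np.clip(np.abs(cand @ U), -1.0, 1.0)) < angle_tol
        dup = any(abs(cand @ f) > 0.99 for f in found)
        if close.mean() > accept_frac and not dup:   # enough observations cluster
            col = U[:, close].mean(axis=1)
            found.append(col / np.linalg.norm(col))
    return np.column_stack(found)

# 2x3 under-determined instantaneous mixture of sparse sources.
rng = np.random.default_rng(3)
A = np.array([[1.0, 0.0, 0.7], [0.0, 1.0, 0.7]])
A /= np.linalg.norm(A, axis=0)
S = rng.standard_normal((3, 5000)) * (rng.random((3, 5000)) < 0.1)  # sparse
X = A @ S + 1e-3 * rng.standard_normal((2, 5000))
print(rsc_columns(X, 3, rng=0))   # columns of A, up to sign and order
```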

Relevance: 100.00%

Abstract:

A novel mechanism of reciprocal behavioral agonist-antagonist activities of enantiomeric pheromones plays a pivotal role in overcoming the signal-to-noise problem derived from the use of a single-constituent pheromone system in scarab beetles. Female Anomala osakana produce (S, Z)-5-(+)-(1-decenyl)oxacyclopentan-2-one, which is highly attractive to males; the response is completely inhibited even by 5% of its antipode. These two enantiomers have reverse roles in the Popillia japonica sex pheromone system. Chiral GC-electroantennographic detector experiments suggest that A. osakana and P. japonica have both R and S receptors that are responsible for behavioral agonist and antagonist responses.

Relevance: 100.00%

Abstract:

Green fluorescent protein (GFP) is widely used as a reporter gene in both prokaryotes and eukaryotes. However, the fluorescence levels of wild-type GFP (wtGFP) are not bright enough for fluorescence-activated cell sorting or flow cytometry. Several GFP variants were generated that are brighter or have altered excitation spectra when expressed in prokaryotic cells. We engineered two GFP genes with different combinations of these mutations, GFP(S65T,V163A) termed GFP-Bex1, and GFP(S202F,T203I,V163A) termed GFP-Vex1. Both show enhanced brightness and improved signal-to-noise ratios when expressed in mammalian cells and appropriately excited, compared with wtGFP. Each mutant retains only one of the two excitation peaks of the wild-type protein. GFP-Bex1 excites at 488 nm (blue) and GFP-Vex1 excites at 406 nm (violet), both of which are available laser lines. Excitation at these wavelengths allows for the independent analyses of these mutants by fluorescence-activated cell sorting, permitting simultaneous, quantitative detection of expression from two different genes within single mammalian cells.

Relevance: 100.00%

Abstract:

Only a few binary systems with compact objects display TeV emission. The physical properties of the companion stars represent basic input for understanding the physical mechanisms behind the particle acceleration, emission, and absorption processes in these so-called gamma-ray binaries. Here we present high-resolution and high signal-to-noise optical spectra of LS 2883, the Be star forming a gamma-ray binary with the young non-accreting pulsar PSR B1259-63, showing it to rotate faster and be significantly earlier and more luminous than previously thought. Analysis of the interstellar lines suggests that the system is located at the same distance as (and thus is likely a member of) Cen OB1. Taking the distance to the association, d = 2.3 kpc, and a color excess of E(B−V) = 0.85 for LS 2883 results in M_V ≈ −4.4. Because of fast rotation, LS 2883 is oblate (R_eq ≃ 9.7 R_☉ and R_pole ≃ 8.1 R_☉) and presents a temperature gradient (T_eq ≈ 27,500 K, log g_eq = 3.7; T_pole ≈ 34,000 K, log g_pole = 4.1). If the star did not rotate, it would have parameters corresponding to a late O-type star. We estimate its luminosity at log(L_*/L_☉) ≃ 4.79 and its mass at M_* ≈ 30 M_☉. The mass function then implies an inclination of the binary system i_orb ≈ 23°, slightly smaller than previous estimates. We discuss the implications of these new astrophysical parameters of LS 2883 for the production of high-energy and very high-energy gamma rays in the PSR B1259-63/LS 2883 gamma-ray binary system. In particular, the stellar properties are very important for prediction of the line-like bulk Comptonization component from the unshocked ultrarelativistic pulsar wind.
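
For context, the inclination follows from the standard binary mass function; below is a hedged worked relation in which the assumed pulsar mass (M_p ≈ 1.4 M_☉) and the timing mass function of PSR B1259-63 (f(M) ≈ 1.53 M_☉) are inputs taken as assumptions, not quoted from this abstract:

```latex
% Standard binary mass function, solved for the inclination given the
% companion mass M_* ~ 30 M_sun estimated above.
\[
  f(M) = \frac{(M_* \sin i_{\mathrm{orb}})^{3}}{(M_p + M_*)^{2}}
  \quad\Longrightarrow\quad
  \sin i_{\mathrm{orb}} = \frac{f(M)^{1/3}\,(M_p + M_*)^{2/3}}{M_*}
  \approx \frac{1.53^{1/3}\,(1.4 + 30)^{2/3}}{30} \approx 0.38,
\]
\[
  \text{giving } i_{\mathrm{orb}} \approx 23^{\circ}.
\]
```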