948 results for "Low signal-to-noise ratio regime"


Relevance: 100.00%

Abstract:

The major drawback of the Ka band, the operating frequency of the AltiKa altimeter on board SARAL, is its sensitivity to atmospheric liquid water. Even light rain or heavy clouds can strongly attenuate and distort the signal, leading to erroneous geophysical parameter estimates. Good detection of the samples affected by atmospheric liquid water is therefore crucial. As AltiKa operates at a single frequency, a new technique, based on the detection by a Matching Pursuit algorithm of short-scale variations in the slope of the echo waveform plateau, was developed and implemented prelaunch in the ground segment. As the parameterization of the detection algorithm was defined using Jason-1 data, the parameters were re-estimated during the cal/val phase, during which the algorithm was also updated. The measured sensor signal-to-noise ratio is significantly better than planned, and the data loss due to attenuation by rain is significantly smaller than expected (<0.1%). For cycles 2 to 9, the flag marks about 9% of 1 Hz data: 5.5% as rainy and 3.5% as backscatter bloom (or sigma0 bloom). The results of the flagging process are compared to independent rain data from microwave radiometers to evaluate its performance in terms of detection and false alarms.
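As a rough illustration of matching pursuit itself (this is generic, not the AltiKa ground-segment implementation), the sketch below decomposes a toy waveform over a dictionary of ramp-shaped atoms, i.e. localized changes in the slope of a plateau; the dictionary, waveform and all parameters are invented for the example.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=3):
    """Greedy matching pursuit: decompose `signal` onto the unit-norm columns
    of `dictionary`, returning the selected atom indices and coefficients."""
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual      # inner product with every atom
        k = int(np.argmax(np.abs(correlations)))    # best-matching atom
        atoms.append(k)
        coeffs.append(float(correlations[k]))
        residual = residual - correlations[k] * dictionary[:, k]
    return atoms, coeffs, residual

# Toy dictionary of ramp atoms starting at different samples, i.e. localized
# changes in the slope of a "plateau"; all values here are invented.
n = 64
dictionary = np.zeros((n, n))
for i in range(n):
    atom = np.zeros(n)
    atom[i:] = np.arange(1, n - i + 1)
    dictionary[:, i] = atom / np.linalg.norm(atom)

rng = np.random.default_rng(0)
waveform = 0.5 * dictionary[:, 40] + 0.01 * rng.normal(size=n)
idx, amps, _ = matching_pursuit(waveform, dictionary)
print(idx, [round(a, 3) for a in amps])
```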

Relevance: 100.00%

Abstract:

The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit the cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios or non-stationary noise. However, such methods have high computational complexity, which directly raises the power consumption of devices that often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption, based on the principle that p processors working at lower frequencies consume less power than a single processor for the same execution time. We devise a strict relation between the energy savings and common parallel system metrics. Simulation results show that our strategy promises very significant savings in actual devices.
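The quoted principle follows from the textbook CMOS dynamic-power model, in which power scales roughly with the cube of the clock frequency once the supply voltage is scaled with it. Under that assumption (used here for illustration, not necessarily the exact relation devised in the paper), p cores at f/p use about 1/p² of the energy of one core at f over the same execution time:

```python
def dynamic_power(freq, c=1.0):
    """Dynamic power under the cube-law assumption P ~ C * f**3 (voltage scaled with f)."""
    return c * freq ** 3

def energy_ratio(p):
    """Energy of p cores at f/p vs. one core at f, for the same wall-clock time
    (perfect parallel speedup assumed)."""
    f, t = 1.0, 1.0
    e_single = dynamic_power(f) * t
    e_parallel = p * dynamic_power(f / p) * t
    return e_parallel / e_single        # = 1 / p**2 under these assumptions

for p in (1, 2, 4, 8):
    print(p, energy_ratio(p))           # 1.0, 0.25, 0.0625, 0.015625
```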

Relevance: 100.00%

Abstract:

The Ingold port adaptation of a free-beam NIR spectrometer is tailored for optimal bioprocess monitoring and control. The device shows an excellent signal-to-noise ratio owing to a large free aperture and therefore a large sample volume. This is seen particularly in the batch trajectories, which show high reproducibility. The robust and compact design withstands harsh process environments as well as SIP/CIP cycles. Robust free-beam NIR process analyzers are indispensable tools within the PAT/QbD framework for real-time process monitoring and control. They enable multiparametric, non-invasive measurements of analyte concentrations and process trajectories. Free-beam NIR spectrometers are an ideal tool for defining golden batches and process borders in the sense of QbD. Moreover, sophisticated data analysis, both quantitative and MSPC, leads directly to a far better process understanding. Information can be provided online in easy-to-interpret graphs that allow the operator to make fast, knowledge-based decisions. This finally leads to higher stability in process operation, better performance and fewer failed batches.


Relevance: 100.00%

Abstract:

Purpose: To evaluate and compare the performance of the Ripplet Type-1 transform and the directional discrete cosine transform (DDCT), and their combinations, for improved representation of MRI images while preserving fine features such as edges along smooth curves and textures. Methods: In a novel image representation method based on the fusion of the Ripplet Type-1 and conventional/directional DCT transforms, source images were enhanced in terms of visual quality using Ripplet, DDCT and their various combinations. The enhancement achieved was quantified on the basis of peak signal-to-noise ratio (PSNR), mean square error (MSE), structural content (SC), average difference (AD), maximum difference (MD), normalized cross correlation (NCC) and normalized absolute error (NAE). To determine the attributes of both transforms, they were also combined to represent the entire image. All possible combinations were tested to present a complete study, and the contrasts among all the combinations were evaluated. Results: Applying the DDCT first and then the Ripplet method gave a PSNR value of 32.3512, which is higher than the PSNR values of the other combinations. This newly designed technique gives a PSNR value approximately equal to the PSNRs of the parent techniques, while preserving edge information, texture information and various other directional image features. The fusion of DDCT followed by Ripplet reproduced the best images. Conclusion: The transformation of images using Ripplet followed by DDCT provides a more efficient method for the representation of images while preserving fine details such as edges and textures.
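For reference, the two main figures of merit quoted above, MSE and PSNR, can be computed as in the short sketch below (assuming 8-bit images with a peak value of 255; the synthetic image is only illustrative):

```python
import numpy as np

def mse(reference, test):
    """Mean squared error between two equal-sized images."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio in dB; max_value is the peak of the pixel range."""
    err = mse(reference, test)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / err)

# Toy usage with a synthetic 8-bit image and additive Gaussian noise
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(img + rng.normal(0.0, 5.0, img.shape), 0, 255)
print(round(psnr(img, noisy), 2), "dB")
```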

Relevance: 100.00%

Abstract:

This thesis details the design and applications of a terahertz (THz) frequency comb spectrometer. The spectrometer employs two Ti:Sapphire femtosecond oscillators with repetition rates of approximately 80 MHz, offset locked at 100 Hz to continuously sample a 12.5 ns time delay with a maximum time-delay resolution of 15.6 fs. These oscillators emit continuous pulse trains, allowing the generation of a THz pulse train by the master, or pump, oscillator and the sampling of this THz pulse train by the slave, or probe, oscillator via the electro-optic effect. Collecting a train of 16 consecutive THz pulses and taking its Fourier transform produces a decade-spanning frequency comb, from 0.25 to 2.5 THz, with a comb tooth width of 5 MHz and a comb tooth spacing of ~80 MHz. This frequency comb is suitable for Doppler-limited rotational spectroscopy of small molecules. Here, data from 68 individual scans at slightly different pump oscillator repetition rates were combined, producing an interleaved THz frequency comb spectrum with a maximum interval between comb teeth of 1.4 MHz, enabling THz frequency comb spectroscopy.
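The sampling parameters quoted above follow from the standard asynchronous-optical-sampling relations; the short check below (not taken from the thesis) reproduces them from the 80 MHz repetition rate and the 100 Hz offset:

```python
# Standard asynchronous-optical-sampling (ASOPS) relations:
#   scan window        = 1 / f_rep
#   time-delay step    = delta_f / f_rep**2
#   comb tooth spacing = f_rep
f_rep = 80e6      # pump repetition rate in Hz (approximate value from the abstract)
delta_f = 100.0   # offset between the two repetition rates in Hz

print("scan window  :", 1.0 / f_rep * 1e9, "ns")          # ~12.5 ns
print("delay step   :", delta_f / f_rep**2 * 1e15, "fs")  # ~15.6 fs
print("tooth spacing:", f_rep / 1e6, "MHz")               # ~80 MHz
```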

The accuracy of the THz frequency comb spectrometer was tested, achieving a root mean square error of 92 kHz when measuring selected absorption center frequencies of water vapor at 10 mTorr, and a root mean square error of 150 kHz in measurements of a K-stack of acetonitrile. This accuracy is sufficient for fitting measured transitions to a model Hamiltonian to generate a predicted spectrum for molecules of interest in astronomy and physical chemistry. As such, the rotational spectra of methanol and methanol-OD were acquired with the spectrometer. Absorptions from 1.3 THz to 2.0 THz were compared to JPL catalog data for methanol, and the spectrometer achieved an RMS error of 402 kHz, improving to 303 kHz when excluding low signal-to-noise absorptions. This level of accuracy compares favorably with the ~100 kHz accuracy achieved by JPL frequency-multiplier submillimeter spectrometers. Additionally, the relative intensity performance of the THz frequency comb spectrometer is linear across the entire decade-spanning bandwidth, making it the preferred instrument for recovering lineshapes and taking absolute intensity measurements in the THz region. The data acquired by the terahertz frequency comb spectrometer for methanol-OD are of comparable accuracy to the methanol data and may be used to refine the fit parameters for the predicted spectrum of methanol-OD.

Relevance: 100.00%

Abstract:

Atmospheric scattering plays a crucial role in degrading the performance of electro-optical imaging systems operating in the visible and infra-red spectral bands, and hence limits the quality of the acquired images, through either reduced contrast or increased image blur. The exact nature of light scattering by atmospheric media is highly complex and depends on the types, orientations, sizes and distributions of the particles constituting these media, as well as the wavelengths, polarization states and directions of the propagating radiation. Here we follow the common approach for solving imaging and propagation problems by treating the light propagating through atmospheric media as composed of two main components: a direct (unscattered) component and a scattered component. In this work we developed a detailed model of the effects of absorption and scattering by haze and fog atmospheric aerosols on the optical radiation propagating from the object plane to an imaging system, based on the classical theory of EM scattering. This detailed model is then used to compute the average point spread function (PSF) of an imaging system, which properly accounts for the effects of diffraction, scattering and the appropriate optical power levels of both the direct and the scattered radiation arriving at the pupil of the imaging system. Also, the calculated PSF, properly weighted for the energy contributions of the direct and scattered components, is used, in combination with a radiometric model, to estimate the average number of direct and scattered photons detected at the sensor plane, which are then used to calculate the image spectrum signal-to-noise ratio (SNR) in the visible/near infra-red (NIR) and mid infra-red (MIR) spectral wavelength bands. Reconstruction of images degraded by atmospheric scattering and measurement noise is then performed, up to the limit imposed by the noise-effective cutoff spatial frequency of the image spectrum SNR. Key results of this research are as follows: A mathematical model based on Mie scattering theory for how scattering from aerosols affects the overall point spread function (PSF) of an imaging system was developed, coded in MATLAB, and demonstrated. This model, along with radiometric theory, was used to predict the limiting resolution of an imaging system as a function of the optics, scattering environment, and measurement noise. Finally, image reconstruction algorithms were developed and demonstrated which mitigate the effects of scattering-induced blurring to within the limits imposed by noise.
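As a hedged sketch of reconstruction limited by the image-spectrum SNR (the work's code is in MATLAB and is not reproduced here; the filter below is a generic Wiener-type restoration, not necessarily the algorithm used in the thesis), restoration is naturally suppressed at spatial frequencies where the assumed SNR is low:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr):
    """Wiener-style restoration of a blurred, noisy image.

    `psf` is assumed to be stored with its origin at index (0, 0); `snr` is the
    assumed (scalar) signal-to-noise ratio of the image spectrum. Frequencies where
    the transfer function is weak relative to 1/snr are effectively suppressed,
    which is the role played by the noise-effective cutoff described above.
    """
    H = np.fft.fft2(psf, s=blurred.shape)            # optical transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)    # Wiener filter
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))
```

A frequency-dependent SNR estimate could replace the scalar `snr` to impose an explicit noise-effective cutoff.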

Relevance: 100.00%

Abstract:

This novel capillary electrophoresis microchip, also known as a μTAS (micro total analysis system), was designed to separate complex aqueous-based compounds, similarly to commercial CE and microchip capillary electrophoresis systems, but in a more compact format. The system can potentially be used in mobile/portable chemical analysis equipment. An undoped silicon wafer and ultra-thin borofloat glass (Pyrex) wafers were used to fabricate the device. A double-L injection feature, a micro-pillar column, a bypass separation channel and hybrid-referenced C4D electrodes were designed to achieve a high signal-to-noise ratio (SNR) and easy separation in a durable and reusable μTAS for CE use.

Relevance: 100.00%

Abstract:

The radiofrequency spectrum is allocated in such a way that bands are fixed to certain users, called licensed users, and cannot be used by unlicensed users even when the spectrum is not in use. This inefficient use of spectrum leads to spectral holes. To overcome the problem of spectral holes and increase spectrum efficiency, Cognitive Radio (CR) is used; all simulation work was done in MATLAB. We analyze the performance of different spectrum sensing techniques, such as matched-filter-based spectrum sensing and energy detection, which depend on various factors: the number of input samples, the signal-to-noise ratio (SNR), QPSK and BPSK systems, and different fading channels, in order to identify the best channels and systems for spectrum sensing and to improve the probability of detection. The study found that an averaging filter performs better than an IIR filter. As the number of inputs and the SNR increased, the probability of detection also improved. The Rayleigh fading channel performs better than the Rician and Nakagami fading channels.
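A minimal energy detector of the kind compared in such studies can be sketched as follows (in Python rather than MATLAB; the threshold factor, sample count and SNR are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def energy_detector(samples, noise_var, threshold_factor):
    """Declare the band occupied if the average received power exceeds the threshold."""
    statistic = np.mean(np.abs(samples) ** 2)
    return statistic > threshold_factor * noise_var

# Toy Monte Carlo estimate of the probability of detection at -10 dB SNR
rng = np.random.default_rng(1)
n, noise_var, snr_db, trials = 1000, 1.0, -10.0, 2000
signal_power = noise_var * 10 ** (snr_db / 10)
detections = 0
for _ in range(trials):
    noise = np.sqrt(noise_var / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
    signal = np.sqrt(signal_power / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
    detections += bool(energy_detector(signal + noise, noise_var, threshold_factor=1.05))
print("estimated Pd:", detections / trials)
```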

Relevance: 100.00%

Abstract:

This thesis explores the X-ray nuclear and extended properties of the radio galaxy 3C 277.3, where a recent optical observation performed with the Multi Unit Spectroscopic Explorer (MUSE) revealed star-forming regions triggered by the propagation of non-thermal plasma in the intergalactic medium. This work aims to study the nuclear engine and its environment and, possibly, to discover signatures of non-thermal plasma-gas interaction at high energies. 3C 277.3 was observed with the Chandra satellite five times from 2010 to 2014, for a total of about 200 ks. Data in the Chandra public archive were retrieved and analyzed. When necessary, the different pointings were combined to improve the signal-to-noise ratio. A detailed analysis of the Chandra image (obtained by combining all the observations) revealed several emission regions. In addition to a bright nucleus, two jet knots and the northern hot spot were clearly detected by overlaying the X-ray data on a VLA map of the source at 1.4 GHz. An X-ray spectral analysis was performed for all these structures. Finally, the X-ray image was superimposed on the MUSE data.

Relevance: 100.00%

Abstract:

Experiments evaluated both noise vulnerability and the extent of protection from noise by sub-chronic low-dose kanamycin in young F1 hybrids resulting from a cross between C57BL/6J and CBA/J inbred mice.

Relevance: 100.00%

Abstract:

In the event of a termination of the Gravity Recovery and Climate Experiment (GRACE) mission before the launch of GRACE Follow-On (due for launch in 2017), high-low satellite-to-satellite tracking (hl-SST) will be the only dedicated observing system with global coverage available to measure the time-variable gravity field (TVG) on a monthly or even shorter time scale. Until recently, hl-SST TVG observations were of poor quality and hardly improved the performance of Satellite Laser Ranging observations. To date, they have been of only very limited usefulness to geophysical or environmental investigations. In this paper, we apply a thorough reprocessing strategy and a dedicated Kalman filter to Challenging Minisatellite Payload (CHAMP) data to demonstrate that it is possible to derive the very long-wavelength TVG features down to spatial scales of approximately 2000 km at the annual frequency and for multi-year trends. The results are validated against GRACE data and surface height changes from long-term GPS ground stations in Greenland. We find that the quality of the CHAMP solutions is sufficient to derive long-term trends and annual amplitudes of mass change over Greenland. We conclude that hl-SST is a viable source of information for TVG and can serve to some extent to bridge a possible gap between the end-of-life of GRACE and the availability of GRACE Follow-On.

Relevance: 100.00%

Abstract:

Rainfall variability occurs over a wide range of temporal scales. Knowledge and understanding of such variability can lead to improved risk management practices in agricultural and other industries. Analyses of temporal patterns in 100 yr of observed monthly global sea surface temperature and sea level pressure data show that the single most important cause of explainable, terrestrial rainfall variability resides within the El Niño-Southern Oscillation (ENSO) frequency domain (2.5-8.0 yr), followed by a slightly weaker but highly significant decadal signal (9-13 yr), with some evidence of lesser but significant rainfall variability at interdecadal time scales (15-18 yr). Most of the rainfall variability significantly linked to frequencies lower than ENSO occurs in the Australasian region, with smaller effects in North and South America, central and southern Africa, and western Europe. While low-frequency (LF) signals at a decadal frequency are dominant, the variability evident was ENSO-like in all the frequency domains considered. The extent to which such LF variability is (i) predictable and (ii) either part of the overall ENSO variability or caused by independent processes remains an as yet unanswered question. Further progress can only be made through mechanistic studies using a variety of models.

Relevance: 100.00%

Abstract:

A new type of space debris was recently discovered by Schildknecht in near-geosynchronous orbit (GEO). These objects were later identified as exhibiting properties associated with high area-to-mass ratio (HAMR) objects. According to their brightness magnitudes (light curves), high rotation rates and composition properties (albedo, amount of specular and diffuse reflection, colour, etc.), it is thought that these objects are multilayer insulation (MLI). Observations have shown that this debris type is very sensitive to environmental disturbances, particularly solar radiation pressure, because their shapes are easily deformed, leading to changes in the area-to-mass ratio (AMR) over time. This thesis proposes a simple, effective flexible model of the thin, deformable membrane, using two different methods. Firstly, the debris is modelled with Finite Element Analysis (FEA) using Bernoulli-Euler beam theory (the "Bernoulli model"). The Bernoulli model is constructed with beam elements consisting of two nodes, each with six degrees of freedom (DoF). The mass of the membrane is distributed over the beam elements. Secondly, based on multibody dynamics theory, the debris is modelled as a series of lumped masses connected through flexible joints (the "Multibody model"), representing the flexibility of the membrane itself. The mass of the membrane, albeit low, is taken into account through the lumped masses at the joints. The dynamic equations for the masses, including the constraints defined by the connecting rigid rod, are derived using fundamental Newtonian mechanics. The physical properties required by both flexible models (membrane density, reflectivity, composition, etc.) are assumed to be those of multilayer insulation. Both flexible membrane models are then propagated, together with classical orbital and attitude equations of motion, near the GEO region to predict the orbital evolution under the perturbations of solar radiation pressure, the Earth's gravity field, luni-solar gravitational fields and self-shadowing effects. These results are then compared to two rigid body models (cannonball and flat rigid plate). In this investigation, compared with a rigid model, the orbital-element evolutions of the flexible models show different inclination and secular eccentricity evolutions, rapid irregular attitude motion and an unstable cross-section area due to deformation over time. Monte Carlo simulations varying the initial attitude dynamics and deformation angle are then investigated and compared with the rigid models over 100 days. The simulations show that different initial conditions produce distinct orbital motions, which differ significantly from those of both rigid models. Furthermore, this thesis presents a methodology to determine the dynamic material properties of thin membranes and validates the deformation of the multibody model with real MLI materials. Experiments are performed in a high-vacuum chamber (10^-4 mbar) replicating the space environment. A thin membrane is hinged at one end but free at the other. The free motion experiment, the first experiment, is a free-vibration test to determine the damping coefficient and natural frequency of the thin membrane. In this test, the membrane is allowed to fall freely in the chamber, with the motion tracked and captured through high-speed video frames. A Kalman filter technique is implemented in the tracking algorithm to reduce noise and increase the tracking accuracy of the oscillating motion.

The forced motion experiment, the last test, is performed to determine the deformation characteristics of the object. A high-power spotlight (500-2000 W) is used to illuminate the MLI, and the displacements are measured by means of a high-resolution laser sensor. Finite Element Analysis (FEA) and multibody dynamics models of the experimental setups are used to validate the flexible model by comparison with the experimental displacements and natural frequencies.
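For a free-vibration test like the one described above, the damping coefficient and natural frequency are conventionally extracted with the logarithmic-decrement method; the sketch below uses a synthetic decay record and is not the thesis code:

```python
import numpy as np

def log_decrement_fit(times, displacement):
    """Estimate the damped natural frequency (Hz) and damping ratio from the
    successive positive peaks of a free-decay record."""
    peaks = [i for i in range(1, len(displacement) - 1)
             if displacement[i] > displacement[i - 1]
             and displacement[i] > displacement[i + 1]
             and displacement[i] > 0]
    t_peaks, x_peaks = times[peaks], displacement[peaks]
    period = float(np.mean(np.diff(t_peaks)))                   # average peak-to-peak time
    delta = float(np.mean(np.log(x_peaks[:-1] / x_peaks[1:])))  # logarithmic decrement
    zeta = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)         # damping ratio
    return 1.0 / period, zeta

# Toy usage with a synthetic damped oscillation (2 Hz, damping ratio 0.03)
t = np.linspace(0.0, 5.0, 5000)
x = np.exp(-0.03 * 2 * np.pi * 2.0 * t) * np.cos(2 * np.pi * 2.0 * np.sqrt(1 - 0.03 ** 2) * t)
print(log_decrement_fit(t, x))
```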

Relevance: 100.00%

Abstract:

This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and, like statistical analysis in general, is still undergoing a transformation to high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long-run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or more recently developed m-approximability conditions, which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change, and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases. We derive sharp conditions on »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels, and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample, we show that for AR(1) time series close to the non-stationary case the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
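The scalar prototype of the CUSUM »change in the mean« statistic discussed above fits in a few lines; the sketch below is a simple univariate illustration (not the Hilbert-space or panel machinery of the thesis) that estimates the change location as the maximizer of the CUSUM process:

```python
import numpy as np

def cusum_change_point(x):
    """Classical CUSUM statistic for a single change in the mean of a univariate
    series; returns (max statistic, estimated change location)."""
    n = len(x)
    sigma = np.std(x, ddof=1)
    k = np.arange(1, n)                       # candidate change points 1, ..., n-1
    partial_sums = np.cumsum(x)[:-1]          # S_1, ..., S_{n-1}
    cusum = np.abs(partial_sums - k * np.mean(x)) / (sigma * np.sqrt(n))
    return float(np.max(cusum)), int(np.argmax(cusum)) + 1

# Toy usage: the mean shifts from 0 to 1 after observation 120 of 200
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(1.0, 1.0, 80)])
print(cusum_change_point(x))
```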