955 results for signal-to-noise-ratio (SNR)
Abstract:
The role of microorganisms in the cycling of sedimentary organic carbon is a crucial one. To better understand relationships between the molecular composition of a potentially bioavailable fraction of organic matter and microbial populations, bacterial and archaeal communities were characterized using pyrosequencing-based 16S rRNA gene analysis in surface (top 30 cm) and subsurface/deeper sediments (30-530 cm) of the Helgoland mud area, North Sea. Fourier Transform Ion Cyclotron Resonance Mass Spectrometry (FT-ICR MS) was used to characterize a potentially bioavailable organic matter fraction (hot-water extractable organic matter, WE-OM). Algal polymer-associated microbial populations such as members of the Gammaproteobacteria, Bacteroidetes, and Verrucomicrobia were dominant in surface sediments, while members of the Chloroflexi (Dehalococcoidales and candidate order GIF9) and Miscellaneous Crenarchaeota Groups (MCG), both of which are linked to the degradation of more recalcitrant, aromatic compounds and detrital proteins, were dominant in subsurface sediments. Microbial populations dominant in subsurface sediments (Chloroflexi, members of MCG, and Thermoplasmata) showed strong correlations to total organic carbon (TOC) content. Changes in WE-OM with sediment depth reveal molecular transformations from oxygen-rich [high oxygen-to-carbon (O/C), low hydrogen-to-carbon (H/C) ratio] aromatic compounds and highly unsaturated compounds toward compounds with lower O/C and higher H/C ratios. The observed molecular changes were most pronounced in organic compounds containing only CHO atoms. Our data thus highlight classes of sedimentary organic compounds that may serve as microbial energy sources in methanic marine subsurface environments.
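The molecular trends above are conventionally visualized on a van Krevelen diagram, which plots each assigned molecular formula by its H/C and O/C ratios. A minimal sketch of that calculation (the formula string below is an invented example, not data from the paper):

```python
import re

def element_counts(formula):
    """Count atoms in a simple molecular formula string such as 'C10H12O5'."""
    counts = {}
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] = counts.get(elem, 0) + (int(num) if num else 1)
    return counts

def van_krevelen(formula):
    """Return the (O/C, H/C) ratios used to place a compound on a van Krevelen diagram."""
    c = element_counts(formula)
    return c.get("O", 0) / c["C"], c.get("H", 0) / c["C"]

# Hypothetical CHO compound: O/C = 0.5, H/C = 1.2
oc, hc = van_krevelen("C10H12O5")
```

Compounds with high O/C and low H/C plot toward the aromatic, oxygen-rich corner; the depth trend reported above corresponds to points migrating toward lower O/C and higher H/C.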
Abstract:
Since Information and Communication Technologies began to gain importance in society, one of the main objectives has been to ensure that transmitted information reaches the receiver intact. For this reason, new digital communication systems capable of offering secure and reliable transmission must be developed. Over the years their characteristics have steadily improved, bringing important advances to everyday life. In this context, one of the most successful schemes is Trellis Coded Modulation (TCM), which offers great advantages in digital communication, especially in narrowband systems. This type of error-protection code, based on convolutional coding, is characterized by performing modulation and coding in a single operation. As a result, a higher data transmission rate is obtained without increasing the bandwidth, at the cost of moving to a larger constellation. This final-year project analyzes the behavior of TCM and the advantages it offers over similar systems. Four simulations are proposed that produce graphs relating the bit error rate (BER) to the signal-to-noise ratio (SNR); from these graphs, the coding gain with respect to the theoretical bit error probability can be determined. The simulated systems move from a QPSK modulation to 8PSK, or from 8PSK to 16QAM. Finally, a Matlab graphical user interface is developed to provide simple operation and greater interactivity for the user.
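A minimal Python sketch (the project itself uses Matlab) of the kind of BER-versus-SNR comparison described above, for uncoded Gray-mapped QPSK over an AWGN channel. The theoretical curve is the standard Q-function expression; the coding gain of a TCM scheme would be read off as the horizontal gap between its simulated curve and such an uncoded reference:

```python
import numpy as np
from math import erfc, sqrt

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def qpsk_ber_theory(ebn0_db):
    """Theoretical bit error rate of Gray-mapped QPSK: Q(sqrt(2*Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10.0)
    return qfunc(sqrt(2.0 * ebn0))

def qpsk_ber_sim(ebn0_db, n_bits=200_000, seed=0):
    """Monte Carlo BER of Gray-mapped QPSK over a complex AWGN channel."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    # One bit per quadrature rail, unit symbol energy (Es = 1, 2 bits/symbol)
    s = ((1 - 2.0 * bits[0::2]) + 1j * (1 - 2.0 * bits[1::2])) / np.sqrt(2.0)
    ebn0 = 10 ** (ebn0_db / 10.0)
    n0 = 1.0 / (2.0 * ebn0)          # Eb = 1/2, so N0 = Eb/(Eb/N0) = 1/(2*Eb/N0)
    noise = np.sqrt(n0 / 2.0) * (rng.standard_normal(s.size)
                                 + 1j * rng.standard_normal(s.size))
    r = s + noise
    bhat = np.empty(n_bits)
    bhat[0::2] = (r.real < 0)        # decide each rail by its sign
    bhat[1::2] = (r.imag < 0)
    return float(np.mean(bhat != bits))
```

Plotting `qpsk_ber_sim` against `qpsk_ber_theory` over a range of Eb/N0 values reproduces the familiar waterfall curves the project's graphs are built from.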
Abstract:
Rainfall variability occurs over a wide range of temporal scales. Knowledge and understanding of such variability can lead to improved risk management practices in agricultural and other industries. Analyses of temporal patterns in 100 yr of observed monthly global sea surface temperature and sea level pressure data show that the single most important cause of explainable, terrestrial rainfall variability resides within the El Nino-Southern Oscillation (ENSO) frequency domain (2.5-8.0 yr), followed by a slightly weaker but highly significant decadal signal (9-13 yr), with some evidence of lesser but significant rainfall variability at interdecadal time scales (15-18 yr). Most of the rainfall variability significantly linked to frequencies lower than ENSO occurs in the Australasian region, with smaller effects in North and South America, central and southern Africa, and western Europe. While low-frequency (LF) signals at a decadal frequency are dominant, the variability evident was ENSO-like in all the frequency domains considered. The extent to which such LF variability is (i) predictable and (ii) either part of the overall ENSO variability or caused by independent processes remains an as yet unanswered question. Further progress can only be made through mechanistic studies using a variety of models.
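The band splitting used above (ENSO 2.5-8.0 yr, decadal 9-13 yr, interdecadal 15-18 yr) amounts to partitioning spectral power by period. A toy periodogram band-power sketch on synthetic monthly data (not the paper's analysis; the 4-yr sinusoid and noise level are invented):

```python
import numpy as np

def band_power(x, dt_years, lo_yr, hi_yr):
    """Fraction of variance at periods between lo_yr and hi_yr (raw periodogram)."""
    x = np.asarray(x, float) - np.mean(x)
    freqs = np.fft.rfftfreq(x.size, d=dt_years)      # cycles per year
    power = np.abs(np.fft.rfft(x)) ** 2
    with np.errstate(divide="ignore"):
        periods = np.where(freqs > 0, 1.0 / freqs, np.inf)
    band = (periods >= lo_yr) & (periods <= hi_yr)
    return float(power[band].sum() / power[1:].sum())  # skip the mean term

# 100 yr of monthly data: a 4-yr (ENSO-band) oscillation plus white noise
rng = np.random.default_rng(1)
t = np.arange(1200) / 12.0
x = np.sin(2 * np.pi * t / 4.0) + 0.5 * rng.standard_normal(t.size)
enso = band_power(x, 1 / 12, 2.5, 8.0)     # dominant band
decadal = band_power(x, 1 / 12, 9.0, 13.0)  # little power here by construction
```

In the paper the same idea is applied to observed fields, with significance testing; here the ENSO band simply captures most of the variance because the synthetic signal was placed there.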
Abstract:
In the context of a hostile funding environment, universities are increasingly asked to justify their output in narrowly defined economic terms, and this can be difficult in Humanities or Arts faculties where productivity is rarely reducible to a simple financial indicator. This can lead to a number of immediate consequences that I have no need to rehearse here, but can also result in some interesting tensions within the academic community itself. First is that which has become known as the ‘Science Wars’: the increasingly acrimonious exchanges between scientists and scientific academics and cultural critics or theorists about who has the right to describe the world. Much has already been said—and much remains to be said—about this issue, but it is not my intention to discuss it here. Rather, I will look at a second area of contestation: the incorporation of scientific theory into literary or cultural criticism. Much of this work comes from a genuine commitment to interdisciplinarity, and an appreciation of insights that a fresh perspective can bring to a familiar object. However, some can be seen as cynical attempts to lend literary studies the sort of empirical legitimacy of the sciences. In particular, I want to look at a number of critics who have applied information theory to the literary work. In this paper, I will examine several instances of this sort of criticism, and then, through an analysis of a novel by American author Richard Powers, Three Farmers on Their Way to a Dance, show how this sort of criticism merely reduces the meaningful analysis of a complex literary text.
Abstract:
Sensory cells usually transmit information to afferent neurons via chemical synapses, in which the level of noise depends on the applied stimulus. Taking such dependence into account, we model a sensory system as an array of leaky integrate-and-fire (LIF) neurons with a common signal. We show that information transmission is enhanced by a nonzero level of noise. Moreover, we demonstrate a phenomenon similar to suprathreshold stochastic resonance with additive noise. We remark that many properties of information transmission found for the LIF neurons were predicted by us earlier with simple binary units [Phys. Rev. E 75, 021121 (2007)]. This confirmation of our predictions allows us to point out the identical roots of the phenomena found in simple threshold systems and in the more complex LIF neurons.
Abstract:
We have investigated information transmission in an array of threshold units that have signal-dependent noise and a common input signal. We demonstrate a phenomenon similar to stochastic resonance and suprathreshold stochastic resonance with additive noise and show that information transmission can be enhanced by a nonzero level of noise. By comparing system performance to that with additive noise, we also demonstrate that the information transmission of weak signals is significantly better with signal-dependent noise. Indeed, information rates are not compromised even for arbitrarily small input signals. Furthermore, by an appropriate selection of parameters, we observe that the information can be made (almost) independent of the level of the noise, thus providing a robust method of transmitting information in the presence of noise. These results could imply that the ability of hair cells to code and transmit sensory information in biological sensory systems is not limited by the level of signal-dependent noise. © 2007 The American Physical Society.
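A rough sketch of the suprathreshold-stochastic-resonance effect discussed in the two abstracts above, using an array of identical threshold units with a common Gaussian signal and independent additive noise (the signal-dependent case would scale the noise with the instantaneous signal). Mutual information between the binned input and the population spike count is estimated with a plug-in histogram estimator; all parameter values are illustrative, not taken from the papers:

```python
import numpy as np

def mutual_info(xs, ys):
    """Plug-in mutual information (bits) from paired discrete samples."""
    xv, xi = np.unique(xs, return_inverse=True)
    yv, yi = np.unique(ys, return_inverse=True)
    joint = np.zeros((xv.size, yv.size))
    np.add.at(joint, (xi, yi), 1.0)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

def ssr_info(noise_std, n_units=15, n_samples=100_000, seed=2):
    """I(signal; population count) for identical threshold units sharing a
    Gaussian signal, each with independent additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)
    noise = noise_std * rng.standard_normal((n_samples, n_units))
    counts = (x[:, None] + noise > 0).sum(axis=1)
    # discretize the continuous input into 32 quantile bins for the estimator
    xbins = np.digitize(x, np.quantile(x, np.linspace(0, 1, 33)[1:-1]))
    return mutual_info(xbins, counts)

i_zero = ssr_info(0.0)   # no noise: all units agree, at most 1 bit
i_opt = ssr_info(0.7)    # moderate noise: count resolves the signal finely
```

With zero noise all 15 units fire together and the count carries only the sign of the signal (about 1 bit); with moderate noise the count becomes a graded population code and the information rises well above that, which is the signature of suprathreshold stochastic resonance.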
Abstract:
Magnetic field inhomogeneity results in image artifacts including signal loss, image blurring and distortions, leading to decreased diagnostic accuracy. The conventional multi-coil (MC) shimming method employs separate RF coils and shimming coils, whose mutual interference forces a tradeoff between RF signal-to-noise ratio (SNR) and shimming performance. To address this issue, RF coils were integrated with direct-current (DC) shim coils so that field inhomogeneity can be shimmed while the RF signal is transmitted and received without being blocked by the shim coils. The currents applied to the new coils, termed iPRES (integrated parallel reception, excitation and shimming), were optimized in numerical simulation to improve the shimming performance. The objective of this work is to offer a guideline for designing optimal iPRES coil arrays to shim the abdomen.
In this thesis work, the main field (B0) inhomogeneity was evaluated by the root mean square error (RMSE). To investigate the shimming abilities of iPRES coil arrays, a set of human abdominal MRI data was collected for the numerical simulations. Thereafter, different simplified iPRES(N) coil arrays were numerically modeled, including a 1-channel iPRES coil and 8-channel iPRES coil arrays. For the 8-channel iPRES coil arrays, each RF coil was split into smaller DC loops in the x, y and z directions to provide extra shimming freedom. Additionally, the number of DC loops in an RF coil was increased from 1 to 5 to find the optimal number of divisions in the z direction. Furthermore, switches were numerically implemented in the iPRES coils to reduce the number of power supplies while still providing shimming performance similar to that of equivalent iPRES coil arrays.
The optimizations demonstrate that the shimming ability of an iPRES coil array increases with the number of DC loops per RF coil. Furthermore, divisions in the z direction tend to be more effective in reducing field inhomogeneity than those in the x and y directions. Moreover, the shimming performance of an iPRES coil array gradually reaches a saturation level once the number of DC loops per RF coil is large enough. Finally, when switches were numerically implemented in the iPRES(4) coil array, the number of power supplies could be reduced from 32 to 8 while keeping the shimming performance similar to iPRES(3) and better than iPRES(1). This thesis work offers guidance for the design of iPRES coil arrays.
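At its core, the shim-current optimization described above is a least-squares problem: find the DC currents whose superposed coil fields best cancel the measured off-resonance map, then score the residual by RMSE. A toy one-dimensional sketch (the coil field maps and numbers below are invented, not from the thesis):

```python
import numpy as np

def optimal_shim_currents(b0_map, coil_maps, max_current=None):
    """Least-squares DC currents that flatten a measured off-resonance map.

    b0_map:    (n_voxels,) measured field offset (Hz)
    coil_maps: (n_voxels, n_coils) field produced per unit current in each loop
    Returns (currents, residual RMSE)."""
    currents, *_ = np.linalg.lstsq(coil_maps, -b0_map, rcond=None)
    if max_current is not None:                      # crude hardware limit
        currents = np.clip(currents, -max_current, max_current)
    residual = b0_map + coil_maps @ currents
    return currents, float(np.sqrt(np.mean(residual ** 2)))

# Toy "abdomen": a linear + quadratic inhomogeneity along z, and two
# hypothetical shim loops producing purely linear and quadratic fields.
z = np.linspace(-1, 1, 101)
b0 = 30.0 * z + 10.0 * z ** 2
maps = np.column_stack([z, z ** 2])
i_opt, rmse = optimal_shim_currents(b0, maps)
```

Here the inhomogeneity lies exactly in the span of the coil fields, so the residual RMSE vanishes; in the thesis, adding DC loops enlarges that span, which is why shimming improves with the number of loops until it saturates.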
Abstract:
We investigate device-to-device (D2D) communication underlaying cellular networks with M-antenna base stations. We consider both beamforming (BF) and interference cancellation (IC) strategies under quantized channel state information (CSI) as well as perfect CSI. We derive tight closed-form approximations of the ergodic achievable rate which hold for arbitrary transmit power, user locations and numbers of antennas. Based on these approximations, we derive insightful asymptotic expressions for three special cases, namely high signal-to-noise ratio (SNR), weak interference, and large M. In particular, we show that in the high-SNR regime a ceiling effect exists which depends on the received signal-to-interference ratio and the number of antennas. Moreover, the achievable rate scales logarithmically with M. The ergodic achievable rate is shown to scale logarithmically with SNR and the antenna number in the weak-interference case. When the BS is equipped with a large number of antennas, we find that the ergodic achievable rate under quantized CSI reaches a saturated value, whilst it scales as log2 M under perfect CSI.
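The logarithmic scaling with M noted above is easy to reproduce for the simplest baseline case. A Monte Carlo sketch of the ergodic maximum-ratio beamforming rate over an i.i.d. Rayleigh channel, interference-free (a reference setup, not the paper's D2D scenario):

```python
import numpy as np

def ergodic_bf_rate(m_antennas, snr_db, n_trials=20_000, seed=3):
    """Monte Carlo ergodic rate E[log2(1 + SNR*||h||^2)] of maximum-ratio
    beamforming with M antennas over an i.i.d. Rayleigh fading channel."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10.0)
    h = (rng.standard_normal((n_trials, m_antennas)) +
         1j * rng.standard_normal((n_trials, m_antennas))) / np.sqrt(2.0)
    gain = np.sum(np.abs(h) ** 2, axis=1)   # array gain, ~M on average
    return float(np.mean(np.log2(1.0 + snr * gain)))

r8 = ergodic_bf_rate(8, 10.0)
r64 = ergodic_bf_rate(64, 10.0)   # 8x the antennas: rate grows by ~log2(8) = 3
```

Multiplying the antenna count by 8 adds roughly log2(8) = 3 bit/s/Hz, illustrating the log2 M behaviour; the paper's contribution is quantifying how interference and quantized CSI bend or cap this curve.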
Abstract:
Large-scale multiple-input multiple-output (MIMO) communication systems can bring substantial improvements in spectral efficiency and/or energy efficiency, due to the excess degrees of freedom and huge array gain. However, large-scale MIMO is expected to be deployed with lower-cost radio frequency (RF) components, which are particularly prone to hardware impairments. Unfortunately, compensation schemes are not able to remove the impact of hardware impairments completely, so a certain amount of residual impairments always exists. In this paper, we investigate the impact of residual transmit RF impairments (RTRI) on the spectral and energy efficiency of training-based point-to-point large-scale MIMO systems, and seek to determine the optimal training length and number of antennas that maximize the energy efficiency. We derive deterministic equivalents of the signal-to-interference-plus-noise ratio (SINR) with zero-forcing (ZF) receivers, as well as the corresponding spectral and energy efficiency, which are shown to be accurate even for a small number of antennas. Through an iterative sequential optimization, we find that the optimal training length of systems with RTRI can be smaller than that of ideal hardware systems in the moderate-SNR regime, and larger in the high-SNR regime. Moreover, it is observed that RTRI can significantly decrease the optimal number of transmit and receive antennas.
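The SINR degradation caused by residual transmit impairments can be illustrated with a crude model in which the transmitter adds independent distortion of relative power delta^2 per stream; after zero-forcing, the distortion passes through like the signal, so the SINR saturates near 1/delta^2 at high SNR. This is a simplified sketch, not the paper's deterministic-equivalent analysis, and the RTRI model and parameters are assumptions:

```python
import numpy as np

def zf_sinr_with_rtri(snr_db, delta, n_tx=4, n_rx=8, n_trials=5000, seed=4):
    """Average per-stream post-ZF SINR with residual transmit distortion of
    relative power delta**2, over i.i.d. Rayleigh channels (Monte Carlo)."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10.0)
    sinrs = []
    for _ in range(n_trials):
        h = (rng.standard_normal((n_rx, n_tx)) +
             1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2.0)
        g = np.linalg.inv(h.conj().T @ h)
        # unit signal power per stream; ZF noise enhancement = diag(g)/snr,
        # distortion survives ZF with relative power delta**2
        noise_amp = np.real(np.diag(g)) / snr
        sinrs.append(np.mean(1.0 / (delta ** 2 + noise_amp)))
    return float(np.mean(sinrs))

lo = zf_sinr_with_rtri(10.0, 0.1)    # noise-limited regime
hi = zf_sinr_with_rtri(40.0, 0.1)    # saturates just below 1/0.1**2 = 100
```

The ceiling at 1/delta^2 is what makes extra transmit power useless beyond a point, and it is one reason the optimal training length and antenna counts shift under RTRI.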
Abstract:
This master's thesis presents two algorithms intended to improve the accuracy of direction-of-arrival estimation for sound sources and their echoes. The first algorithm, called the source-elimination method, improves the direction-of-arrival estimation of echoes that are buried in noise. The second, called phase-focused Multiple Signal Classification (MUSIC), uses the phase information at each frequency to determine the direction of arrival of broadband sources. Combining the two algorithms makes it possible to localize echoes whose power is -17 dB relative to the main source, down to an echo-to-noise ratio of -15 dB. The thesis also presents experimental measurements that confirm the results obtained in simulation.
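For reference, a plain narrowband MUSIC estimator on a uniform linear array (the thesis's phase-focused variant extends this idea to broadband sources). The scene parameters below are invented purely for illustration:

```python
import numpy as np

def music_doa(snapshots, n_sources, d_over_lambda=0.5, grid=None):
    """Narrowband MUSIC direction-of-arrival estimate for a uniform linear array.

    snapshots: (n_mics, n_snapshots) complex array data.
    Returns the angle (degrees) maximizing the MUSIC pseudo-spectrum."""
    m = snapshots.shape[0]
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    _, vecs = np.linalg.eigh(r)              # eigenvalues in ascending order
    en = vecs[:, : m - n_sources]            # noise subspace
    if grid is None:
        grid = np.linspace(-90, 90, 721)
    k = np.arange(m)[:, None]
    steer = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(np.radians(grid)))
    p = 1.0 / np.sum(np.abs(en.conj().T @ steer) ** 2, axis=0)
    return float(grid[np.argmax(p)])

# Toy scene: 8-mic half-wavelength ULA, one source at 20 degrees, SNR ~10 dB
rng = np.random.default_rng(5)
m, n = 8, 200
a = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(np.radians(20.0)))
s = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
noise = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) * np.sqrt(0.05)
x = np.outer(a, s) + noise
est = music_doa(x, 1)
```

The thesis's contribution sits on top of this machinery: focusing the phase information across frequencies so that weak, noise-buried echoes still produce resolvable peaks.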
Abstract:
A new type of space debris was recently discovered by Schildknecht in near-geosynchronous orbit (GEO). These objects were later identified as exhibiting properties associated with high area-to-mass ratio (HAMR) objects. According to their brightness magnitudes (light curves), high rotation rates and composition properties (albedo, amount of specular and diffuse reflection, colour, etc.), it is thought that these objects are multilayer insulation (MLI). Observations have shown that this debris type is very sensitive to environmental disturbances, particularly solar radiation pressure, because their shapes are easily deformed, leading to changes in the area-to-mass ratio (AMR) over time. This thesis proposes a simple, effective flexible model of the thin, deformable membrane using two different methods. Firstly, the debris is modelled with Finite Element Analysis (FEA) using Bernoulli-Euler beam theory, called the "Bernoulli model". The Bernoulli model is constructed from beam elements consisting of 2 nodes, each with six degrees of freedom (DoF). The mass of the membrane is distributed over the beam elements. Secondly, the debris is modelled, based on multibody dynamics theory, as a series of lumped masses connected through flexible joints representing the flexibility of the membrane itself; this is called the "Multibody model". The mass of the membrane, albeit low, is accounted for by the lumped masses at the joints. The dynamic equations for the masses, including the constraints defined by the connecting rigid rods, are derived using fundamental Newtonian mechanics. The physical properties required by both flexible models (membrane density, reflectivity, composition, etc.) are assumed to be those of multilayer insulation.
Both flexible membrane models are then propagated, together with classical orbital and attitude equations of motion, in the near-GEO region to predict the orbital evolution under the perturbations of solar radiation pressure, the Earth's gravity field, luni-solar gravitational fields and the self-shadowing effect. The results are then compared to two rigid-body models (cannonball and flat rigid plate). In this investigation, compared with a rigid model, the evolution of the orbital elements of the flexible models shows differences in the inclination and secular eccentricity evolutions, rapid irregular attitude motion and an unstable cross-section area due to deformation over time. Monte Carlo simulations varying the initial attitude dynamics and deformation angle are then investigated and compared with the rigid models over 100 days. The simulations show that different initial conditions produce unique orbital motions, which differ significantly from the orbital motions of both rigid models. Furthermore, this thesis presents a methodology to determine the dynamic material properties of thin membranes and validates the deformation of the multibody model with real MLI materials. Experiments are performed in a high-vacuum chamber (10^-4 mbar) replicating the space environment. A thin membrane is hinged at one end but free at the other. The first experiment, the free-motion experiment, is a free-vibration test to determine the damping coefficient and natural frequency of the thin membrane. In this test, the membrane is allowed to fall freely in the chamber with the motion tracked and captured through high-speed video frames. A Kalman filter is implemented in the tracking algorithm to reduce noise and increase the tracking accuracy of the oscillating motion. The last test, the forced-motion experiment, is performed to determine the deformation characteristics of the object.
A high-power spotlight (500-2000 W) is used to illuminate the MLI, and the displacements are measured by means of a high-resolution laser sensor. Finite Element Analysis (FEA) and multibody dynamics models of the experimental setups are used to validate the flexible model against the measured displacements and natural frequencies.
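The free-vibration test above extracts a damping coefficient and natural frequency from a decaying displacement trace; with a clean trace, both follow from the logarithmic decrement of successive peaks. A sketch on synthetic data (the thesis uses Kalman-filtered video tracking rather than the simple peak picking shown here, and the 2 Hz / 0.05 values are invented):

```python
import numpy as np

def log_decrement_fit(t, x):
    """Estimate damped natural frequency (Hz) and damping ratio from a
    free-vibration displacement trace via successive positive peaks."""
    # interior local maxima (the positive crests of a decaying oscillation)
    peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    tp, xp = t[peaks], x[peaks]
    fd = 1.0 / np.mean(np.diff(tp))                  # damped frequency
    delta = np.mean(np.log(xp[:-1] / xp[1:]))        # logarithmic decrement
    zeta = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)
    return fd, zeta

# Synthetic trace: 2 Hz oscillation with damping ratio 0.05
t = np.linspace(0, 5, 5001)
wn = 2 * np.pi * 2.0
x = np.exp(-0.05 * wn * t) * np.cos(wn * np.sqrt(1 - 0.05 ** 2) * t)
fd, zeta = log_decrement_fit(t, x)
```

The recovered damped frequency and damping ratio feed directly into the flexible-joint stiffness and damping parameters of a lumped-mass membrane model.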
Abstract:
This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and, like statistical analysis in general, is still undergoing a transformation towards high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or more recently developed m-approximability conditions which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases.
We derive sharp conditions on »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. Particularly, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series, close to the non-stationary case, the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
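The classical finite-dimensional statistic underlying the theory above can be sketched in a few lines for a univariate change-in-the-mean problem (the thesis treats the Hilbert-space-valued and panel versions; 1.358 is the 5% critical value of the supremum of a Brownian bridge):

```python
import numpy as np

def cusum_change_point(x):
    """Classical CUSUM test for a single change in the mean.

    Returns (test statistic, estimated change location), where the statistic
    is max_k |S_k - (k/n) S_n| / (sigma_hat * sqrt(n))."""
    x = np.asarray(x, float)
    n = x.size
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    cusum = np.abs(s - k / n * s[-1])
    # difference-based scale estimate, robust to a single mean shift
    sigma = np.std(np.diff(x), ddof=1) / np.sqrt(2.0)
    return float(cusum.max() / (sigma * np.sqrt(n))), int(np.argmax(cusum) + 1)

# Gaussian noise with a mean shift of 1.0 after observation 150
rng = np.random.default_rng(6)
x = np.concatenate([rng.standard_normal(150), 1.0 + rng.standard_normal(100)])
stat, khat = cusum_change_point(x)   # stat well above 1.358; khat near 150
```

The Darling-Erdős refinements and the weighting functions analyzed in the thesis modify exactly this statistic to gain sensitivity near the sample boundaries and uniform accuracy across panels.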
Abstract:
This study aimed to describe the distribution of waist-to-height ratio (WHtR) percentiles and cutoffs for obesity in Brazilian adolescents. A cross-sectional study including adolescents aged 10 to 15 years was conducted in the city of São Paulo, Brazil; anthropometric measurements (weight, height, and waist circumference) were taken, and WHtRs were calculated and then divided into percentiles derived using Least Median of Squares (LMS) regression. The receiver operating characteristic (ROC) curve was used to determine cutoffs for obesity (BMI ≥ 97th percentile), and Mann-Whitney and Kruskal-Wallis tests were used for comparing variables. The study included 8,019 adolescents from 43 schools, of whom 54.5% were female and 74.8% attended public schools. Boys had a higher mean WHtR than girls (0.45 ± 0.06 vs 0.44 ± 0.05; p=0.002) and a higher WHtR at the 95th percentile (0.56 vs 0.54; p<0.05). The WHtR cutoffs according to the WHO criteria ranged from 0.467 to 0.506 among girls and 0.463 to 0.496 among boys, with high sensitivity (82.8-95%) and specificity (84-95.5%). The WHtR was significantly associated with body adiposity measured by BMI. Its age-specific percentiles and cutoffs may be used as additional surrogate markers of central obesity and its co-morbidities.
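The ROC-based cutoff determination described above is commonly done by maximizing Youden's J statistic (sensitivity + specificity - 1) over candidate thresholds. A sketch of the mechanics on synthetic data (the WHtR values and obesity labels below are invented, not the study's data):

```python
import numpy as np

def youden_cutoff(values, labels):
    """Cutoff maximizing sensitivity + specificity - 1 (Youden's J) for a
    continuous marker against a binary outcome."""
    values = np.asarray(values, float)
    labels = np.asarray(labels, bool)
    best_j, best_t = -1.0, None
    for t in np.unique(values):
        pred = values >= t
        sens = np.mean(pred[labels])       # true positive rate
        spec = np.mean(~pred[~labels])     # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, float(t)
    return best_t, best_j

# Synthetic WHtR sample: non-obese around 0.44, obese around 0.55
rng = np.random.default_rng(7)
whtr = np.concatenate([rng.normal(0.44, 0.04, 900), rng.normal(0.55, 0.04, 100)])
obese = np.concatenate([np.zeros(900, bool), np.ones(100, bool)])
cut, j = youden_cutoff(whtr, obese)
```

The sensitivity/specificity pairs reported in the abstract correspond to evaluating exactly these two rates at the chosen cutoffs, stratified by sex and age.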
Sensitivity to noise and ergodicity of an assembly line of cellular automata that classifies density
Abstract:
We investigate the sensitivity of the composite cellular automaton of H. Fuks [Phys. Rev. E 55, R2081 (1997)] to noise and assess the density classification performance of the resulting probabilistic cellular automaton (PCA) numerically. We conclude that the composite PCA performs the density classification task reliably only up to very small levels of noise. In particular, it cannot outperform the noisy Gacs-Kurdyumov-Levin automaton, an imperfect classifier, for any level of noise. While the original composite CA is nonergodic, analyses of relaxation times indicate that its noisy version is an ergodic automaton, with the relaxation times decaying algebraically over an extended range of parameters with an exponent very close (possibly equal) to the mean-field value.
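For concreteness, a minimal implementation of the Gacs-Kurdyumov-Levin (GKL) rule mentioned above, on a ring of 149 cells, with noise modelled as independent output-bit flips (one simple choice of noise model; the paper's composite CA of Fuks is a different automaton). Without noise the rule classifies a low-density configuration to all zeros; with flips the dynamics no longer settle, consistent with ergodic behaviour:

```python
import numpy as np

def gkl_step(s, flip_prob=0.0, rng=None):
    """One synchronous update of the GKL CA on a ring: a 0-cell takes the
    majority of itself and its 1st and 3rd left neighbours, a 1-cell of
    itself and its 1st and 3rd right neighbours; outputs are then flipped
    independently with probability flip_prob."""
    left1, left3 = np.roll(s, 1), np.roll(s, 3)
    right1, right3 = np.roll(s, -1), np.roll(s, -3)
    maj0 = (s + left1 + left3) >= 2
    maj1 = (s + right1 + right3) >= 2
    out = np.where(s == 0, maj0, maj1).astype(np.int8)
    if flip_prob > 0.0 and rng is not None:
        out ^= (rng.random(s.size) < flip_prob).astype(np.int8)
    return out

def classify(density, n=149, steps=300, flip_prob=0.0, seed=8):
    """Run GKL from a random configuration; return the final density of 1s."""
    rng = np.random.default_rng(seed)
    s = (rng.random(n) < density).astype(np.int8)
    for _ in range(steps):
        s = gkl_step(s, flip_prob, rng)
    return float(s.mean())

noiseless = classify(0.25)                 # minority of 1s -> relaxes to all 0s
noisy = classify(0.25, flip_prob=0.1)      # flips keep the density fluctuating
```

Sweeping `flip_prob` and measuring the fraction of correct classifications over many random initial conditions reproduces the kind of noise-sensitivity curves the paper reports for the composite PCA and the noisy GKL comparator.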
Abstract:
Imaging Spectroscopy (IS) is a promising tool for studying soil properties in large spatial domains. Going from point to image spectrometry is not only a journey from micro to macro scales, but also a long one during which problems such as low signal-to-noise levels, atmospheric contamination, large data sets, BRDF effects and more are often encountered. In this paper we provide an up-to-date overview of some of the case studies that have used IS technology for soil science applications. Besides a brief discussion of the advantages and disadvantages of IS for studying soils, the following cases are comprehensively discussed: soil degradation (salinity, erosion, and deposition), soil mapping and classification, soil genesis and formation, soil contamination, soil water content, and soil swelling. We review these case studies and suggest that the IS data be provided to end-users as real reflectance rather than as raw data, and with better signal-to-noise ratios than presently exist. This is because converting the raw data into reflectance is a complicated stage that requires experience, knowledge, and specific infrastructure not available to many users, whereas quantitative spectral models require good-quality data. These limitations serve as a barrier that impedes potential end-users, inhibiting researchers from trying this technique for their needs. The paper ends with a general call to the soil science audience to extend the utilization of the IS technique, and it provides some ideas on how to propel this technology forward to enable its widespread adoption in order to achieve a breakthrough in the field of soil science and remote sensing. (C) 2009 Elsevier Inc. All rights reserved.