945 results for "Entropy of noise"
Abstract:
Mach number and thermal effects on the mechanisms of sound generation and propagation are investigated in spatially evolving two-dimensional isothermal and non-isothermal mixing layers at Mach numbers ranging from 0.2 to 0.4 and a Reynolds number of 400. A characteristic-based formulation is used to solve the compressible Navier-Stokes equations by direct numerical simulation with high-order schemes. The radiated sound is computed directly in a domain that includes both the near-field aerodynamic source region and the far-field sound propagation. In the isothermal mixing layer, Mach number effects can be identified in the acoustic field through an increase of the directivity associated with the non-compactness of the acoustic sources. Baroclinic instability effects can be recognized in the non-isothermal mixing layer through the presence of counter-rotating vorticity layers, and the resulting acoustic sources are found to be less efficient. An analysis based on the acoustic analogy shows that the increase of directivity with the Mach number can be associated with the emergence of density fluctuations of weak amplitude that are nevertheless very efficient at generating noise at shallow angles. This influence, combined with convection and refraction effects, is found to shape the acoustic wavefront pattern depending on the Mach number.
Abstract:
Most biological systems are formed by component parts that are to some degree interrelated. Groups of parts that are more associated among themselves and are relatively autonomous from others are called modules. One consequence of modularity is that biological systems usually present an unequal distribution of the genetic variation among traits. Estimating the covariance matrix that describes these systems is a difficult problem due to factors such as small sample sizes and measurement errors. We show that this problem is exacerbated whenever matrix inversion is required, as in the reconstruction of directional selection. We explore the consequences of varying degrees of modularity and signal-to-noise ratio on selection reconstruction. We then present and test the efficiency of available methods for controlling noise in matrix estimates. In our simulations, controlling matrices for noise vastly improves the reconstruction of selection gradients. We also perform a selection-gradient reconstruction analysis on a New World monkey skull database to illustrate the impact of noise on such analyses. Noise-controlled estimates render far more plausible interpretations, in full agreement with previous results.
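The numerical difficulty described in this abstract, inverting a noisy covariance matrix to reconstruct selection gradients through the multivariate breeder's equation, can be sketched in a few lines. The modular matrix, the noise level, and the eigenvalue-shrinkage "noise control" below are all illustrative assumptions, not the methods actually tested in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical modular P matrix: two modules of three correlated traits each
P = np.kron(np.eye(2), np.full((3, 3), 0.8)) + 0.2 * np.eye(6)
beta_true = rng.normal(size=6)   # "true" selection gradient
dz = P @ beta_true               # response, via the multivariate breeder's equation

# Simulate estimation error on P (symmetric measurement noise)
E = 0.15 * rng.normal(size=(6, 6))
P_noisy = P + (E + E.T) / 2

# Naive reconstruction inverts the noisy matrix directly ...
beta_naive = np.linalg.solve(P_noisy, dz)

# ... while a noise-controlled estimate shrinks eigenvalues toward their mean
# before inversion (a simple stand-in for the noise-control methods discussed)
w, V = np.linalg.eigh(P_noisy)
w_shrunk = 0.5 * w + 0.5 * w.mean()
beta_ctrl = V @ ((V.T @ dz) / w_shrunk)

# Compare reconstruction errors of the two estimates
print(np.linalg.norm(beta_naive - beta_true), np.linalg.norm(beta_ctrl - beta_true))
```

Shrinking the eigenvalues tames the small, noise-dominated directions of the matrix that otherwise blow up under inversion.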
Abstract:
Fluctuation-dissipation theorems can be used to predict characteristics of noise from characteristics of the macroscopic response of a system. In the case of gene networks, feedback control determines the "network rigidity," defined as resistance to slow external changes. We propose an effective Fokker-Planck equation that relates gene expression noise to the topology and time scales of the gene network. We distinguish between two situations, referred to as normal and inverted time hierarchies. The noise can be buffered by network feedback in the former situation, whereas it can be topology-independent in the latter.
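As a generic illustration (not the authors' specific equation), a one-dimensional effective Fokker-Planck description of an expression level x relaxing toward x_0 with a feedback-set time scale tau under noise of strength D would read:

```latex
% Generic 1-D effective Fokker-Planck equation (symbols illustrative):
\frac{\partial P(x,t)}{\partial t}
  = \frac{\partial}{\partial x}\!\left[\frac{x - x_{0}}{\tau}\,P(x,t)\right]
  + D\,\frac{\partial^{2} P(x,t)}{\partial x^{2}}
% Stationary variance: stronger feedback (smaller tau) buffers the noise,
\left\langle (x - x_{0})^{2} \right\rangle_{\mathrm{st}} = D\,\tau
```

The stationary variance makes the "rigidity" intuition concrete: the stiffer the feedback (smaller tau), the smaller the noise it lets through.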
Abstract:
We study the von Neumann and Rényi entanglement entropy of long-range harmonic oscillators (LRHO) by both theoretical and numerical means. We show that the entanglement entropy in massless harmonic oscillators increases logarithmically with the sub-system size as S ~ (c_eff/3) log l. Although the entanglement entropy of LRHOs shares some similarities with the entanglement entropy at conformal critical points, we show that the Rényi entanglement entropy presents some deviations from the expected conformal behaviour. In the massive case we demonstrate that the behaviour of the entanglement entropy with respect to the correlation length is also logarithmic, as in the short-range case. Copyright (c) EPLA, 2012
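For reference, the quantities named above have standard definitions (rho_A is the reduced density matrix of the sub-system; c_eff and the scaling form are as quoted in the abstract):

```latex
% von Neumann and Renyi entanglement entropies (standard definitions):
S = -\mathrm{Tr}\,\rho_{A} \ln \rho_{A}, \qquad
S_{n} = \frac{1}{1-n}\,\ln \mathrm{Tr}\,\rho_{A}^{\,n},
% with the massless scaling quoted in the abstract:
S \simeq \frac{c_{\mathrm{eff}}}{3}\,\log \ell .
```

The von Neumann entropy is recovered from the Rényi family in the limit n -> 1.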
Abstract:
This work reports aspects of seed germination at different temperatures of Adenanthera pavonina L., a woody Southeast Asian Leguminosae. Germination was studied by measuring the final percentages, the rate, the rate variance, and the synchronisation of the individual seeds, the latter quantified by the minimal informational entropy of the frequency distribution of seed germination. By overlapping the germinability range with the ranges of highest germination rates and minimal informational entropy, we found that the best temperature for the germination of A. pavonina seeds is 35 °C. The slope µ of the Arrhenius plot of the germination rates is positive for T < 35 °C and negative for T > 35 °C. The activation enthalpies, estimated from closely spaced points, show that |ΔH| < 12 Cal mol⁻¹ for temperatures in the range between 25 °C and 40 °C. The ecological implication of these results is that this species may germinate very fast in tropical areas during the summer season. This may be an advantage for the establishment of this species under the climatic conditions of those areas.
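The synchronisation measure used above, the informational (Shannon) entropy of the frequency distribution of germination times, can be sketched in a few lines; the daily counts below are hypothetical, not the study's data:

```python
import numpy as np

def germination_entropy(counts):
    """Informational (Shannon) entropy, in bits, of the distribution of
    germination times; lower entropy means more synchronised germination."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return -(p * np.log2(p)).sum()

# Hypothetical daily germination counts for two temperature regimes
print(germination_entropy([0, 1, 18, 1, 0]))  # synchronised -> low entropy
print(germination_entropy([4, 4, 4, 4, 4]))   # spread out -> maximal entropy
```

A perfectly uniform spread over k days gives the maximal entropy log2(k), so minimising this entropy over temperature picks out the most synchronised germination.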
Abstract:
The use of geoid models to estimate the Mean Dynamic Topography (MDT) was stimulated by the launch of the GRACE satellite system, since its models present unprecedented precision and space-time resolution. In the present study, besides the DNSC08 mean sea level model, the following geoid models were used with the objective of computing MDTs: EGM96, EIGEN-5C and EGM2008. In the method adopted, geostrophic currents for the South Atlantic were computed from the MDTs. It was found that the degree and order of the geoid models directly affect the determination of the MDT and the currents. The presence of noise in the MDT requires the use of efficient filtering techniques, such as the filter based on Singular Spectrum Analysis, which presents significant advantages over conventional filters. Geostrophic currents derived from the geoid models were compared with the HYCOM hydrodynamic numerical model. In conclusion, the results show that the MDTs and respective geostrophic currents calculated with the EIGEN-5C and EGM2008 models are similar to the results of the numerical model, especially regarding the main large-scale features such as boundary currents and the retroflection at the Brazil-Malvinas Confluence.
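The step from an MDT grid to surface geostrophic currents follows the standard geostrophic balance, u = -(g/f) dMDT/dy and v = (g/f) dMDT/dx. A minimal sketch, with an illustrative grid and latitude rather than the study's data:

```python
import numpy as np

G = 9.81            # gravity [m s^-2]
OMEGA = 7.2921e-5   # Earth rotation rate [rad s^-1]

def geostrophic_velocity(eta, lat, dx, dy):
    """Surface geostrophic currents from an MDT grid eta(y, x) [m]:
    u = -(g/f) d(eta)/dy, v = (g/f) d(eta)/dx, with f = 2*OMEGA*sin(lat)."""
    f = 2 * OMEGA * np.sin(np.deg2rad(lat))
    deta_dy, deta_dx = np.gradient(eta, dy, dx)  # axis 0 = y, axis 1 = x
    return -(G / f) * deta_dy, (G / f) * deta_dx

# Illustrative 40 x 50 grid at 25 km spacing: a pure meridional MDT slope
dx = dy = 25e3
eta = np.linspace(0.0, 0.5, 40)[:, None] * np.ones((1, 50))
u, v = geostrophic_velocity(eta, lat=-35.0, dx=dx, dy=dy)
print(u[0, 0], v[0, 0])  # a purely zonal current in the southern hemisphere
```

Noise in eta enters the currents through the gradients, which is why the abstract stresses filtering the MDT before differentiation.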
Abstract:
The strength and durability of materials produced from aggregates (e.g., concrete bricks, concrete, and ballast) are critically affected by the weathering of the particles, which is closely related to their mineral composition. It is possible to infer the degree of weathering from visual features derived from the surface of the aggregates. Using established pattern recognition methods, this study shows that characterizing the visual texture of particles through texture-related features of gray-scale images allows the effective differentiation between weathered and non-weathered aggregates. The most discriminative features are selected by means of a feature-ranking method. The evaluation of the methodology in the presence of noise suggests that it can be used in stone quarries for the automatic detection of weathered materials.
Abstract:
We consider a general class of mathematical models for stochastic gene expression in which the transcription rate is allowed to depend on a promoter state variable that can take an arbitrary (finite) number of values. We provide the solution of the master equations in the stationary limit, based on a factorization of the stochastic transition matrix that separates timescales and relative interaction strengths, and we express its entries in terms of parameters that have a natural physical and/or biological interpretation. The solution illustrates the capacity of multiple-state promoters to generate multimodal distributions of gene products, without the need for feedback. Furthermore, using the example of a three-state promoter operating at low, intermediate, and high expression levels, we show that multiple-state operons will typically lead to a significant reduction of noise in the system. The underlying mechanism is that a three-state promoter can change its level of expression from low to high by passing through an intermediate state, with a much smaller increase of fluctuations than by means of a direct transition.
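In the stationary limit, the promoter-state distribution is the normalized null vector of the transition-rate matrix. A minimal numerical sketch for a hypothetical three-state promoter (rates and expression levels are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical rate matrix for a three-state promoter (low/intermediate/high):
# K[i, j] is the transition rate from state j to state i; columns sum to zero.
K = np.array([[-1.0,  0.5,  0.0],
              [ 1.0, -1.0,  0.4],
              [ 0.0,  0.5, -0.4]])

# Stationary distribution: the normalized null vector of K (K @ p = 0)
w, V = np.linalg.eig(K)
p = np.real(V[:, np.argmin(np.abs(w))])
p /= p.sum()

# Illustrative transcription rates in the three promoter states
rates = np.array([1.0, 20.0, 100.0])
mean_rate = p @ rates
print(p, mean_rate)
```

Note that the low and high states communicate only through the intermediate one, the topology the abstract identifies as the noise-reducing mechanism.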
Abstract:
Porous materials are widely used in many fields of industrial application to meet noise-reduction requirements that nowadays derive from increasingly strict regulations. The modeling of porous materials remains a problematic issue. Numerical simulations are often problematic for real, complex geometries, especially in terms of computational time and convergence. At the same time, analytical models, even if partly limited by restrictive simplifying hypotheses, represent a powerful instrument to capture quickly the physics of the problem and its general trends. In this context, a recently developed numerical method, the Cell Method, is described, implemented for Biot's theory of poroelasticity, and applied to representative cases. The peculiarity of the Cell Method is that it allows a direct algebraic and geometrical discretization of the field equations, without any reduction to a weak integral form. The second part of the thesis then presents the case of the interaction between two poroelastic materials in the context of double porosity. The idea of using periodically repeated inclusions of a second porous material within a layer that is itself porous is described. In particular, the problem is addressed by means of an analytical method, whose efficiency is assessed. An analytical procedure for the simulation of heterogeneous layers is described and validated, considering both absorption and transmission conditions; a comparison with the available numerical methods is performed.
Abstract:
Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes, and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the project and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited by a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately from the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared, and the attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in ambient or impact tests. In this analysis we decided to use the CWT, which allows a simultaneous investigation of a generic signal x(t) in the time and frequency domains. The CWT is first used to process free oscillations, with excellent results in terms of frequencies, dampings, and vibration modes. The application to ambient vibrations yields accurate modal parameters of the system, although some important observations should be made concerning the damping. The fourth chapter again addresses the post-processing of data acquired after a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part, the results obtained with the DWT are compared with those obtained with the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; indeed, in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from environmental vibration tests performed in 2008 by the University of Porto and the University of Sheffield on the Humber Bridge in England, an FE model of the bridge is defined, in order to establish what type of model captures most accurately the real dynamic behaviour of the bridge. The sixth chapter outlines the conclusions of the presented research. They concern the application of a frequency-domain method for evaluating the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in identification tests with unknown input, and finally the problem of 3D modeling of systems with many degrees of freedom and different types of uncertainty.
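The classical FFT-based construction of the FRF mentioned for the second chapter amounts to taking the ratio of the output and input spectra. A minimal sketch on a synthetic single-degree-of-freedom receptance (all parameters illustrative, not from the thesis):

```python
import numpy as np

fs = 256.0                        # sampling frequency [Hz]
t = np.arange(0, 8, 1 / fs)       # 2048 samples
rng = np.random.default_rng(1)
force = rng.normal(size=t.size)   # broadband random excitation

# Analytical receptance of a SDOF oscillator (illustrative parameters)
fn, zeta, k = 12.0, 0.02, 1e6     # natural frequency [Hz], damping ratio, stiffness
freq = np.fft.rfftfreq(t.size, 1 / fs)
r = freq / fn
H_true = (1 / k) / (1 - r**2 + 2j * zeta * r)

# Synthesise the response in the frequency domain, then estimate the FRF
# as the ratio of output spectrum to input spectrum (the classical approach)
F = np.fft.rfft(force)
X = F * H_true
H_est = X / F

peak = freq[np.argmax(np.abs(H_est))]
print(peak)   # the resonance shows up at the natural frequency
```

With noisy measured signals, averaged estimators (e.g. the H1 estimate built from cross- and auto-spectra) replace this bin-by-bin ratio, which is the practical difficulty the ellipse-based method addresses.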
Abstract:
In this thesis, atomistic simulations are performed to investigate hydrophobic solvation and hydrophobic interactions in cosolvent/water binary mixtures. Many cosolvent/water binary mixtures exhibit non-ideal behavior caused by aggregation at the molecular scale, although they are stable and homogeneous at the macroscopic scale. Force-field-based atomistic simulations provide routes to relate atomistic-scale structure and interactions to thermodynamic solution properties. The predicted solution properties are, however, sensitive to the parameters used to describe the molecular interactions. In this thesis, a force field for tertiary butanol (TBA) and water mixtures is parameterized by making use of the Kirkwood-Buff theory of solutions. The new force field describes the alcohol-alcohol, water-water and alcohol-water clustering in the solution, as well as the chemical potential derivatives of the solution components, in agreement with experimental data. With the new force field, the preferential solvation and the solvation thermodynamics of a hydrophobic solute in TBA/water mixtures have been studied. First, methane solvation at various TBA/water concentrations is discussed in terms of solvation free-energy, enthalpy, and entropy changes, which have been compared with experimental data. We observed that the methane solvation free energy varies smoothly with the alcohol/water composition, while the solvation enthalpies and entropies vary non-monotonically. The latter occurs due to structural solvent-reorganization contributions, which are absent from the free-energy change because of exact enthalpy-entropy compensation. It is therefore concluded that the enthalpy and entropy of solvation provide more detailed information on the reorganization of solvent molecules around the inserted solute. Hydrophobic interactions in binary urea/water mixtures are discussed next. This system is particularly relevant in biology (protein folding/unfolding); however, the changes in the hydrophobic interaction induced by urea molecules are not well understood. In this thesis, this interaction has been studied by calculating the free energy (potential of mean force), enthalpy, and entropy changes as a function of the solute-solute distance in water and in aqueous urea (6.9 M) solution. In chapter 5, the potential of mean force in both solution systems is analyzed in terms of its enthalpic and entropic contributions. In particular, the contributions of solvent reorganization to the enthalpy and entropy changes are studied separately to better understand which changes in the interactions in the system contribute to the free energy of association of the nonpolar solutes. We observe that in aqueous urea the association between nonpolar solutes remains thermodynamically favorable, as is the case in pure water. This observation contrasts with a long-standing belief that clusters of nonpolar molecules dissolve completely in the presence of urea molecules. The consequences of our observations for the stability of proteins in concentrated urea solutions are discussed in chapter 6 of the thesis.
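The Kirkwood-Buff machinery referred to above rests on standard integrals relating pair correlation functions to composition fluctuations (generic definitions, not thesis-specific):

```latex
% Kirkwood-Buff integral between species i and j, from the radial
% distribution function g_ij(r):
G_{ij} = 4\pi \int_{0}^{\infty} \left[ g_{ij}(r) - 1 \right] r^{2} \, dr
% Excess coordination number of species j around species i:
N_{ij} = \rho_{j} \, G_{ij}
```

Matching the G_ij (and hence the chemical potential derivatives they determine) against experiment is what anchors the force-field parameterization described in the abstract.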
Abstract:
Weak lensing experiments such as the future ESA-accepted mission Euclid aim to measure cosmological parameters with unprecedented accuracy. It is important to assess the precision that can be obtained in these measurements by applying analysis software to mock images that contain the many sources of noise present in real data. In this thesis, we present a method for simulating observations that produces realistic images of the sky according to the characteristics of the instrument and of the survey. We then use these images to test the performance of the Euclid mission. In particular, we concentrate on the precision of the photometric redshift measurements, which are key data for cosmic shear tomography. We calculate the fraction of the total observed sample that must be discarded to reach the required level of precision, equal to 0.05(1+z) for a galaxy with measured redshift z, for different ancillary ground-based observations. The results highlight the importance of u-band observations, especially to discriminate between low (z < 0.5) and high (z ~ 3) redshifts, and the need for good observing sites, with seeing FWHM < 1 arcsec. We then construct an optimal filter to detect galaxy clusters in photometric galaxy catalogues, and we test it on the COSMOS field, obtaining 27 lensing-confirmed detections. Applying this algorithm to mock Euclid data, we verify the possibility of detecting clusters with mass above 10^14.2 solar masses with a low rate of false detections.
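The 0.05(1+z) requirement quoted above translates into a simple per-galaxy selection criterion; a minimal sketch with made-up redshifts (not survey data):

```python
import numpy as np

def usable_fraction(z_true, z_phot, tol=0.05):
    """Fraction of galaxies whose photometric redshift meets the
    |z_phot - z_true| < tol * (1 + z_true) precision requirement;
    the complement is the fraction that must be discarded."""
    err = np.abs(z_phot - z_true) / (1.0 + z_true)
    return np.mean(err < tol)

# Hypothetical reference and photometric redshifts for four galaxies
z_true = np.array([0.3, 0.9, 1.5, 3.0])
z_phot = np.array([0.32, 0.95, 1.9, 3.1])
print(usable_fraction(z_true, z_phot))
```

The (1+z) scaling means the same absolute redshift error is far more damaging at low redshift than at high redshift, which is part of why u-band data help separate the low- and high-z populations.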
Abstract:
This thesis describes the development of new models and toolkits for orbit determination codes, to support and improve the precise radio tracking experiments of the Cassini-Huygens mission, an interplanetary mission to study the Saturn system. The core of the orbit determination process is the comparison between observed and computed observables. Disturbances in either the observed or the computed observables degrade the orbit determination process. Chapter 2 describes a detailed study of the numerical errors in the Doppler observables computed by NASA's ODP and MONTE, and by ESA's AMFIN. A mathematical model of the numerical noise was developed and successfully validated against the Doppler observables computed by the ODP and MONTE, with typical relative errors smaller than 10%. The numerical noise proved to be, in general, an important source of noise in the orbit determination process and, in some conditions, it may become the dominant noise source. Three different approaches to reduce the numerical noise are proposed. Chapter 3 describes the development of the multiarc library, which makes it possible to perform a multi-arc orbit determination with MONTE. The library was developed during the analysis of the Cassini radio science gravity experiments on Saturn's satellite Rhea. Chapter 4 presents the estimation of Rhea's gravity field obtained from a joint multi-arc analysis of the Cassini R1 and R4 fly-bys, describing in detail the spacecraft dynamical model used, the data selection and calibration procedure, and the analysis method followed. In particular, the full unconstrained quadrupole gravity field was estimated, yielding a solution statistically not compatible with the condition of hydrostatic equilibrium. The solution proved to be stable and reliable. The normalized moment of inertia is in the range 0.37-0.4, indicating that Rhea may be almost homogeneous, or at least characterized by a small degree of differentiation.
Abstract:
The field of computational neuroscience develops mathematical models to describe neuronal systems, with the aim of better understanding the nervous system. Historically, the integrate-and-fire model, developed by Lapicque in 1907, was the first model describing a neuron. In 1952 Hodgkin and Huxley [8] described the so-called Hodgkin-Huxley model in the article "A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve". The Hodgkin-Huxley model is one of the most successful and widely used biological neuron models. Based on experimental data from the squid giant axon, Hodgkin and Huxley developed their mathematical model as a four-dimensional system of first-order ordinary differential equations. One of these equations characterizes the membrane potential as a process in time, whereas the other three equations describe the opening and closing states of the sodium and potassium ion channels. The rate of change of the membrane potential is proportional to the sum of the ionic currents flowing across the membrane and an externally applied current. The membrane potential behaves differently for various types of external input. This thesis considers the following three types of input: (i) Rinzel and Miller [15] calculated an interval of amplitudes of a constant applied current for which the membrane potential spikes repetitively; (ii) Aihara, Matsumoto and Ikegaya [1] showed that, depending on the amplitude and the frequency of a periodic applied current, the membrane potential responds periodically; (iii) Izhikevich [12] stated that brief pulses of positive and negative current with different amplitudes and frequencies can lead to a periodic response of the membrane potential. In chapter 1 the Hodgkin-Huxley model is introduced following Izhikevich [12]. Besides the definition of the model, several biological and physiological notes are made, and further concepts are described through examples. Moreover, the numerical methods used to solve the equations of the Hodgkin-Huxley model for the computer simulations of chapters 2 and 3 are presented. In chapter 2 the statements for the three different inputs (i), (ii) and (iii) are verified, and the periodic behavior for inputs (ii) and (iii) is investigated. In chapter 3 the inputs are embedded in an Ornstein-Uhlenbeck process to study the influence of noise on the results of chapter 2.
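The four Hodgkin-Huxley equations described above can be integrated with a simple forward-Euler scheme. The sketch below uses the standard parameter set (rest near -65 mV) and reproduces the repetitive spiking of input (i) for a constant current inside the Rinzel-Miller interval; the step size and duration are illustrative choices, not those of the thesis:

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (mV, ms, uA/cm^2 units)
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.4

# Voltage-dependent gating rate functions
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

def simulate(I_ext, T=50.0, dt=0.01):
    """Forward-Euler integration of the HH equations for a constant
    applied current I_ext; returns the membrane-potential trace [mV]."""
    V = -65.0
    m = a_m(V) / (a_m(V) + b_m(V))   # gating variables start at rest
    h = a_h(V) / (a_h(V) + b_h(V))
    n = a_n(V) / (a_n(V) + b_n(V))
    trace = []
    for _ in range(round(T / dt)):
        INa = gNa * m**3 * h * (V - ENa)   # sodium current
        IK = gK * n**4 * (V - EK)          # potassium current
        IL = gL * (V - EL)                 # leak current
        V += dt * (I_ext - INa - IK - IL) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    return np.array(trace)

# A constant current inside the repetitive-spiking range of input (i)
print(simulate(10.0).max())   # spikes overshoot 0 mV
```

For the stochastic inputs of chapter 3, the constant I_ext would be replaced by a sampled Ornstein-Uhlenbeck process inside the loop.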
Abstract:
This article gives an overview of the methods used in the low-level analysis of gene expression data generated using DNA microarrays. This type of experiment allows relative levels of nucleic acid abundance to be determined in a set of tissues or cell populations for thousands of transcripts or loci simultaneously. Careful statistical design and analysis are essential to improve the efficiency and reliability of microarray experiments throughout the data acquisition and analysis process. This includes the design of probes, the experimental design, the analysis of scanned microarray images, the normalization of fluorescence intensities, the assessment of the quality of microarray data and the incorporation of quality information in subsequent analyses, the combination of information across arrays and across sets of experiments, the discovery and recognition of patterns in expression at the single-gene and multiple-gene levels, and the assessment of the significance of these findings, given the substantial noise, and thus random features, in the data. For all of these components, access to a flexible and efficient statistical computing environment is essential.
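As one concrete example of the normalization step mentioned above, quantile normalization forces every array (column) to share a common intensity distribution. It is one common choice among several, not necessarily the article's; ties are broken by index order in this sketch:

```python
import numpy as np

def quantile_normalize(x):
    """Quantile normalization of a genes-by-arrays intensity matrix:
    every column is mapped onto the mean of the column-wise sorted
    values, so all arrays end up with an identical distribution."""
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)  # rank within each array
    mean_quantiles = np.sort(x, axis=0).mean(axis=1)   # reference distribution
    return mean_quantiles[ranks]

# Toy matrix: 4 genes (rows) measured on 3 arrays (columns)
x = np.array([[5., 4., 3.],
              [2., 1., 4.],
              [3., 4., 6.],
              [4., 2., 8.]])
print(quantile_normalize(x))
```

After normalization, rank orderings within each array are preserved while array-to-array intensity differences (e.g. global labeling or scanning effects) are removed.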