937 results for energy auto-correlation function
Abstract:
The fundamental aim of our investigation of the interaction of a polymer film with a nanoparticle is the extraction of information on the dynamics of the liquid using a single tracking particle. In this work two theoretical methods were used: one passive, where the motion of the particle measures the dynamics of the liquid, and one active, where perturbations are introduced into the system through the particle. In the first part of this investigation a thin polymeric film on a substrate is studied using molecular dynamics simulations. The polymer is modeled via a 'bead-spring' model. The particle is spherical and unstructured and interacts with the monomers via a Lennard-Jones potential. The system is microcanonical, and simulations were performed for average temperatures between the glass transition temperature of the film and its dewetting temperature. It is shown that the stability of the nanoparticle on the polymer film in the absence of gravity depends strongly on the form of the chosen interaction potential between nanoparticle and polymer. The position of the tracking particle relative to the liquid-vapor interface of the polymer film reveals the glass transition of the latter. The velocity correlation function and the mean square displacement of the particle show that it is caged when the temperature is close to the glass transition temperature. The analysis of the dynamics at long times shows the coupling of the nanoparticle to the center of mass of the polymer chains. The Stokes-Einstein formula, which relates the diffusion coefficient to the viscosity, permits the nanoparticle to be used as a probe for determining the bulk viscosity of the melt, the so-called 'microrheology'. It is shown that at low frequencies the result obtained using microrheology coincides with the results of the Rouse model applied to the polymer dynamics. In the second part of this investigation the equations of linear hydrodynamics are solved for a nanoparticle oscillating above the film. It is shown that compressible liquids show a mechanical response to the external perturbations induced by the nanoparticle. These solutions exhibit strong velocity and pressure profiles of the liquid near the interface, as well as a mechanical response of the liquid-vapor interface. The results obtained with these calculations can be employed for the interpretation of experimental results of non-contact AFM microscopy.
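As a rough illustration of the microrheology step, the Python sketch below extracts a diffusion coefficient from a tracer's mean square displacement and converts it to a viscosity through the Stokes-Einstein relation. The trajectory, frame spacing, temperature and particle radius are invented stand-ins in reduced Lennard-Jones units (kB = 1), not data from this work.

```python
import numpy as np

def msd(traj):
    """Mean square displacement of a single tracer; traj has shape (n_frames, 3)."""
    n = len(traj)
    lags = np.arange(1, n // 4)
    return lags, np.array([np.mean(np.sum((traj[k:] - traj[:-k]) ** 2, axis=1))
                           for k in lags])

rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(scale=0.05, size=(4000, 3)), axis=0)  # stand-in trajectory

lags, m = msd(traj)
dt = 0.005                                   # time between stored frames (assumed)
D = np.polyfit(lags * dt, m, 1)[0] / 6.0     # MSD ~ 6 D t in the diffusive regime

T, R = 1.0, 2.0                              # temperature and particle radius (assumed)
eta = T / (6.0 * np.pi * D * R)              # Stokes-Einstein: D = kB T / (6 pi eta R)
print(f"D = {D:.4g}  eta = {eta:.4g}  (reduced LJ units)")
```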
Abstract:
We obtain the exact time-dependent Kohn-Sham potentials Vks for 1D Hubbard chains, driven by a d.c. external field, using the time-dependent electron density and current density obtained from exact many-body time-evolution. The exact Vxc is compared to the adiabatically-exact Vad-xc and the “instantaneous ground state” Vigs-xc. The effectiveness of these two approximations is analyzed. Approximations for the exchange-correlation potential Vxc and its gradient, based on the local density and on the local current density, are also considered and both physical quantities are observed to be far outside the reach of any possible local approximation. Insight into the respective roles of ground-state and excited-state correlation in the time-dependent system, as reflected in the potentials, is provided by the pair correlation function.
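As a schematic of the 'exact many-body time-evolution' ingredient, the sketch below propagates the smallest nontrivial case, a two-site Hubbard model with one up and one down electron in a suddenly switched-on d.c. field, and reads off the exact site density. U, the hopping, the field strength and the time step are arbitrary illustrative choices, and fermionic sign conventions are simplified.

```python
import numpy as np
from scipy.linalg import expm

U, t_hop, E, dt, steps = 4.0, 1.0, 0.5, 0.02, 500
v = np.array([-E / 2, +E / 2])          # on-site potentials from the d.c. field

# Basis: |both on site 1>, |up on 1, down on 2>, |down on 1, up on 2>, |both on site 2>
H = np.array([[U + 2 * v[0], -t_hop, -t_hop, 0.0],
              [-t_hop, v[0] + v[1], 0.0, -t_hop],
              [-t_hop, 0.0, v[0] + v[1], -t_hop],
              [0.0, -t_hop, -t_hop, U + 2 * v[1]]])

# Start from the zero-field ground state (field switched on at t = 0).
H0 = H.copy(); H0[0, 0] = H0[3, 3] = U; H0[1, 1] = H0[2, 2] = 0.0
psi = np.linalg.eigh(H0)[1][:, 0].astype(complex)

Uexp = expm(-1j * H * dt)               # one-step exact propagator
for _ in range(steps):
    psi = Uexp @ psi
n1 = 2 * abs(psi[0])**2 + abs(psi[1])**2 + abs(psi[2])**2   # exact density on site 1
print(f"n1(t={steps * dt:.1f}) = {n1:.4f}")
```

Densities of this kind, on longer chains, are the input from which the exact Kohn-Sham potential is reverse-engineered.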
Abstract:
The first chapter introduces the study carried out and describes a measurement method subsequent to the surface characterization. The second chapter describes the samples analyzed and, specifically, the growth of the silicon nanowires by MaCE. The third chapter describes the AFM instrument used and the characterization theory underlying the study. The fourth section presents the results obtained, while the conclusions draw together the values obtained for the RMS roughness and the roughness exponent.
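As a schematic of the two quantities extracted in the final chapters, the Python sketch below computes the RMS roughness and estimates the roughness exponent alpha from the height-difference correlation g(r) = <[h(x+r) - h(x)]^2> ~ r^(2*alpha) on a synthetic line profile; the profile, scan length and fit range are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
h = 0.1 * np.cumsum(rng.standard_normal(4096))   # stand-in for a measured AFM line profile
h -= h.mean()

rms = np.sqrt(np.mean(h**2))                     # RMS roughness

# Height-difference correlation g(r) = <[h(x+r) - h(x)]^2> ~ r^(2*alpha)
r = np.arange(1, 200)
g = np.array([np.mean((h[k:] - h[:-k])**2) for k in r])
alpha = 0.5 * np.polyfit(np.log(r[:50]), np.log(g[:50]), 1)[0]  # small-r scaling fit

print(f"RMS roughness = {rms:.3f}, roughness exponent alpha = {alpha:.2f}")
```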
Abstract:
Redshift Space Distortions (RSD) are an apparent anisotropy in the distribution of galaxies due to their peculiar motions. These features are imprinted in the correlation function of galaxies, which describes how these structures are distributed around each other. RSD can be represented by a distortion parameter $\beta$, which is strictly related to the growth of cosmic structures. For this reason, measurements of RSD can be exploited to constrain cosmological parameters, such as the neutrino mass. Neutrinos are neutral subatomic particles that come in three flavours: electron, muon and tau. Their mass differences can be measured in oscillation experiments, while information on the absolute scale of the neutrino mass can come from cosmology, since neutrinos leave a characteristic imprint on the large-scale structure of the universe. The aim of this thesis is to provide constraints on the accuracy with which the neutrino mass can be estimated when exploiting measurements of RSD. In particular, we want to describe how the error on the neutrino mass estimate depends on three fundamental parameters of a galaxy redshift survey: the density of the catalogue, the bias of the sample considered and the volume observed. To do this we make use of the BASICC simulation, from which we extract a series of dark matter halo catalogues characterized by different values of bias, density and volume. These mock data are analysed via a Markov Chain Monte Carlo procedure in order to estimate the neutrino mass fraction, using a suitably modified version of the software package CosmoMC. In this way we are able to extract a fitting formula describing our measurements, which can be used to forecast the precision reachable with this kind of observation in future surveys such as Euclid.
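As a toy illustration of this kind of inference, the following sketch runs a Metropolis-Hastings chain for the distortion parameter beta, fitting the linear-theory (Kaiser) quadrupole-to-monopole ratio to a mock measurement. The data value, its error, the proposal width and the prior bound are invented for illustration and are not the values or the likelihood used in the thesis.

```python
import numpy as np

def quad_mono_ratio(beta):
    """Linear-theory (Kaiser) quadrupole-to-monopole ratio of the power spectrum."""
    return (4 * beta / 3 + 4 * beta**2 / 7) / (1 + 2 * beta / 3 + beta**2 / 5)

Q_obs, sigma_Q = 0.45, 0.05                     # mock measurement and error (assumed)

def loglike(beta):
    return -0.5 * ((quad_mono_ratio(beta) - Q_obs) / sigma_Q) ** 2

rng = np.random.default_rng(2)
beta, chain = 0.5, []
for _ in range(20000):
    prop = beta + rng.normal(scale=0.05)        # random-walk proposal
    if prop > 0 and np.log(rng.uniform()) < loglike(prop) - loglike(beta):
        beta = prop                             # accept
    chain.append(beta)
chain = np.array(chain[5000:])                  # drop burn-in
print(f"beta = {chain.mean():.3f} +/- {chain.std():.3f}")
```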
Abstract:
Radial velocities measured from near-infrared (NIR) spectra are a potential tool to search for extrasolar planets around cool stars. High-resolution infrared spectrographs now reach the high precision of visible instruments, with constant improvement over time. GIANO is an infrared echelle spectrograph and a powerful tool to provide high-resolution spectra for accurate radial velocity measurements of exoplanets and for chemical and dynamical studies of stellar or extragalactic objects. No other IR instrument has GIANO's capability to cover the entire NIR wavelength range. In this work we develop an ensemble of IDL procedures to measure high-precision radial velocities on a few GIANO spectra acquired during the commissioning run, using the telluric lines as a wavelength reference. In Section 1.1 various exoplanet search methods are described; they exploit different properties of the planetary system. In Section 1.2 we describe the exoplanet population discovered through the different methods. In Section 1.3 we explain the motivations for NIR radial velocities and the main issue that has limited the pursuit of high-precision NIR radial velocities, namely the lack of a suitable calibration method. We briefly describe calibration methods in the visible and the solutions for IR calibration, for instance the use of telluric lines; the latter has advantages and problems, described in detail. In this work we use telluric lines as the wavelength reference. In Section 1.4 the Cross Correlation Function (CCF) method, widely used to measure radial velocities, is described. In Section 1.5 we describe GIANO and its main science targets. In Chapter 2 the observational data obtained with the GIANO spectrograph are presented and the selection criteria are reported. In Chapter 3 we describe the details of the analysis and examine in depth the flow chart reported in Section 3.1. In Chapter 4 we give the radial velocities measured with our IDL procedure for all available targets. We obtain an rms scatter in radial velocities of about 7 m/s. Finally, we conclude that GIANO can be used to measure radial velocities of late-type stars with an accuracy close to or better than 10 m/s, using telluric lines as a wavelength reference. In September 2014 GIANO became operative at the TNG for Science Verification, and further observational data will allow this analysis to be refined.
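A schematic illustration of the CCF idea (in Python, not the actual IDL pipeline): cross-correlate the observed line depths against a template shifted over a grid of trial velocities and locate the peak. The synthetic telluric-like line, wavelength grid and noise-free depths below are assumptions chosen so the example runs standalone.

```python
import numpy as np

c = 299792458.0                                   # speed of light, m/s

def ccf(wave, depth, t_wave, t_depth, velocities):
    """Cross-correlate line depths against a Doppler-shifted template."""
    return np.array([np.sum(depth * np.interp(wave, t_wave * (1 + v / c), t_depth))
                     for v in velocities])

wave = np.linspace(15000.0, 15010.0, 4000)        # Angstrom
gauss_line = lambda w, w0: 0.5 * np.exp(-0.5 * ((w - w0) / 0.05) ** 2)
t_depth = gauss_line(wave, 15005.0)               # template: one telluric-like line
depth = gauss_line(wave, 15005.0 * (1 + 100.0 / c))  # "observed" line, shifted 100 m/s

v_grid = np.linspace(-2000.0, 2000.0, 401)        # 10 m/s steps
cc = ccf(wave, depth, wave, t_depth, v_grid)
print(f"RV = {v_grid[np.argmax(cc)]:.0f} m/s")    # peak of the CCF
```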
Abstract:
Free space optical (FSO) communication links can experience extreme signal degradation due to atmospheric-turbulence-induced spatial and temporal irradiance fluctuations (scintillation) in the laser wavefront. In addition, turbulence can cause the laser beam centroid to wander, resulting in power fading and sometimes complete loss of the signal. Beam spreading and jitter are also artifacts of atmospheric turbulence. To accurately predict the signal fading that occurs in a laser communication system, and to get a true picture of how this affects crucial performance parameters such as the bit error rate (BER), it is important to analyze the probability density function (PDF) of the integrated irradiance fluctuations at the receiver. In addition, it is desirable to find a theoretical distribution that accurately models these fluctuations under all propagation conditions. The PDF of integrated irradiance fluctuations is calculated from numerical wave-optics simulations of a laser after propagation through atmospheric turbulence, to investigate the evolution of the distribution as the aperture diameter is increased. The simulated data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes, from weak to very strong. Our results show that the gamma-gamma PDF provides a good fit to the simulated data distribution for all aperture sizes studied, from weak through moderate scintillation. In strong scintillation, the gamma-gamma PDF is a better fit to the distribution for point-like apertures, and the lognormal PDF is a better fit for apertures the size of the atmospheric spatial coherence radius ρ0 or larger. In addition, the PDF of received power from a Gaussian laser beam that has been adaptively compensated at the transmitter before propagation to the receiver of an FSO link in the moderate scintillation regime is investigated. The complexity of the adaptive optics (AO) system is increased in order to investigate the changes in the distribution of the received power and how these affect the BER. For the 10 km link, due to the non-reciprocal nature of the propagation path, the optimal beam to transmit is unknown. These results show that for non-reciprocal paths a low-order level of AO complexity provides a better estimate of the optimal beam to transmit than a higher order. For the 20 km link distance it was found that, although minimal, all AO complexity levels provided an equivalent improvement in BER, and that no AO complexity level provided the correction needed for the optimal beam to transmit. Finally, the temporal power spectral density of received power from an FSO communication link is investigated. Simulated and experimental results for the coherence time calculated from the temporal correlation function are presented. Both simulation and experimental data show that the coherence time increases as the receiving aperture diameter increases. For finite apertures, the coherence time increases as the communication link distance is increased. We conjecture that this is due to the increasing speckle size within the pupil plane of the receiving aperture for an increasing link distance.
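For reference, a minimal sketch of the two candidate models. The gamma-gamma parameters (alpha, beta) and the lognormal log-variance below are illustrative assumptions; in practice they would be derived from the measured scintillation index of the data.

```python
import numpy as np
from scipy.special import gamma as G, kv

def gamma_gamma_pdf(I, a, b):
    """Gamma-gamma PDF of normalized irradiance; a, b are large/small-scale parameters."""
    return (2 * (a * b) ** ((a + b) / 2) / (G(a) * G(b))
            * I ** ((a + b) / 2 - 1) * kv(a - b, 2 * np.sqrt(a * b * I)))

def lognormal_pdf(I, s2):
    """Unit-mean lognormal PDF in irradiance with log-variance s2."""
    return np.exp(-(np.log(I) + s2 / 2) ** 2 / (2 * s2)) / (I * np.sqrt(2 * np.pi * s2))

I = np.linspace(0.01, 6.0, 600)                       # normalized irradiance grid
p_gg = gamma_gamma_pdf(I, a=4.0, b=2.0)               # sigma_I^2 = 1/a + 1/b + 1/(a b)
p_ln = lognormal_pdf(I, s2=np.log(1 + 0.875))         # lognormal with matched variance

dI = I[1] - I[0]                                      # both should integrate to ~1
print(f"areas: gamma-gamma {p_gg.sum() * dI:.3f}, lognormal {p_ln.sum() * dI:.3f}")
```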
Abstract:
We calculate the momentum diffusion coefficient for heavy quarks in SU(3) gluon plasma at temperatures of 1-2 times the deconfinement temperature. The momentum diffusion coefficient is extracted from a Monte Carlo calculation of the correlation function of color electric fields, at leading order in the expansion in the inverse heavy quark mass. The systematics of the calculation are examined and compared with perturbation theory and other estimates.
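For reference, a schematic form of the quantities involved in such lattice extractions (conventions vary between papers, so take this as indicative rather than as the authors' exact definitions):

```latex
G_E(\tau) = -\frac{1}{3}\,
  \frac{\big\langle \mathrm{Re}\,\mathrm{Tr}\big[\,U(1/T,\tau)\, gE_i(\tau)\, U(\tau,0)\, gE_i(0)\,\big]\big\rangle}
       {\big\langle \mathrm{Re}\,\mathrm{Tr}\, U(1/T,0)\big\rangle},
\qquad
\kappa = \lim_{\omega \to 0} \frac{2T}{\omega}\, \rho_E(\omega)
```

Here the U are temporal Wilson-line segments, ρ_E is the spectral function associated with the Euclidean correlator G_E, and the momentum diffusion coefficient κ is its zero-frequency limit; reconstructing ρ_E from the Monte Carlo data is the delicate systematic step.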
Abstract:
Ionotropic glutamate receptors are important excitatory neurotransmitter receptors in the mammalian central nervous system that have been implicated in a number of neuropathologies such as epilepsy, ischemia, and amyotrophic lateral sclerosis. Glutamate binding to an extracellular ligand binding domain initiates a series of structural changes that leads to the formation of a cation-selective transmembrane channel, which subsequently closes due to desensitization of the receptor. The crystal structures of the AMPA subtype of the glutamate receptor have been particularly useful in providing initial insight into the conformational changes in the ligand binding domain; however, these structures are limited by crystallographic constraints. To gain a clear picture of how agonist binding is coupled to channel activation and desensitization, it is essential to study changes in the ligand binding domain in a dynamic, physiological state. In this dissertation, a technique called Luminescence Resonance Energy Transfer (LRET) was used to determine the conformational changes associated with activation and desensitization in a functional AMPA receptor (ΔN*-AMPA) that contains the ligand binding domain and transmembrane segments; ΔN*-AMPA has been modified such that fluorophores can be introduced at specific sites to serve as a readout of cleft closure or to establish intersubunit distances. Previous structural studies of cleft closure of the isolated ligand binding domain, in conjunction with functional studies of the full receptor, suggest that the extent of cleft closure correlates with the extent of activation. Here, LRET has been used to show that a similar relationship between cleft closure and activation is observed in the “full length” receptor, showing that the isolated ligand binding domain is a good model of the domain in the full-length receptor for changes within a subunit. Similar LRET investigations were used to study intersubunit distances, specifically to probe conformational changes between subunits within a dimer in the tetrameric receptor. These studies show that the dimer interface is coupled in the open state and decoupled in the desensitized state, similar to the isolated ligand binding domain crystal structures. However, we show that the apo-state dimer interface is not pre-formed as in the crystal structure, hence suggesting a mechanism for functional transitions within the receptor based on the LRET distances obtained.
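As a schematic of how resonance-energy-transfer distances are typically obtained, the sketch below converts donor lifetimes into a transfer efficiency and then into a donor-acceptor distance via the Förster relation. The Förster radius and lifetimes are illustrative assumptions, not values from this work.

```python
import numpy as np

R0 = 40.0                          # Forster radius of the probe pair, Angstrom (assumed)
tau_D, tau_DA = 1.2e-3, 0.4e-3     # donor-only and donor-acceptor lifetimes, s (assumed)

E = 1.0 - tau_DA / tau_D           # energy transfer efficiency from lifetimes
r = R0 * (1.0 / E - 1.0) ** (1.0 / 6.0)   # invert E = 1 / (1 + (r/R0)^6)
print(f"E = {E:.2f}, donor-acceptor distance r = {r:.1f} Angstrom")
```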
Abstract:
Background. The purpose of this study was to describe the risk factors and demographics of persons with salmonellosis and shigellosis, and to investigate both seasonal and spatial variations in the occurrence of these infections in Texas from 2000 to 2004, utilizing time series analyses and geographic information system digital mapping methods.

Methods. Spatial analysis: MapInfo software was used to map the distribution of age-adjusted rates of reported shigellosis and salmonellosis in Texas from 2000-2004 by zip code. Census data on poverty status, household income, highest level of educational attainment, race, ethnicity, and urban/rural community status were obtained from the 2000 Decennial Census for each zip code. The zip codes in the upper 10% and lower 10% of rates were compared using t-tests and logistic regression to identify potential risk factors.

Temporal analysis. Seasonal patterns in the prevalence of infections in Texas from 2000 to 2003 were determined by performing time-series analysis on the numbers of cases of salmonellosis and shigellosis. A linear regression was also performed to assess trends in the incidence of each disease, along with auto-correlation and multi-component cosinor analyses.

Results. Spatial analysis: Analysis by general linear model showed a significant association between infection rates and age, with young children aged less than 5 years and those aged 5-9 years having an increased risk of infection for both diseases. The data demonstrated that populations with high percentages of people who attained more than a high school education were less likely to be represented in zip codes with high rates of shigellosis. However, for salmonellosis, logistic regression models indicated that, compared to populations with high percentages of non-high-school graduates, having a high school diploma or equivalent increased the odds of having a high rate of infection.

Temporal analysis. For shigellosis, multi-component cosinor analyses were used to determine the approximated cosine curve, which represented a statistically significant fit to the time series data for all age groups by sex. The shigellosis results show two peaks, with a major peak occurring in June and a secondary peak around October. Salmonellosis results showed a single peak and trough in all age groups, with the peak occurring in August and the trough in February.

Conclusion. The results from this study can be used by public health agencies to determine the timing of public health awareness programs and interventions in order to prevent salmonellosis and shigellosis. Because young children depend on adults for their meals, it is important to increase the awareness of day-care workers and new parents about modes of transmission and hygienic methods of food preparation and storage.
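A minimal single-component cosinor sketch of the kind of seasonal fit described above: y(t) = M + A*cos(w*t + phi) is fitted to monthly case counts by ordinary least squares on cosine and sine regressors. The monthly counts below are synthetic, illustrative data, not the Texas surveillance data.

```python
import numpy as np

t = np.arange(48)                                   # months
rng = np.random.default_rng(3)
y = 100 + 30 * np.cos(2 * np.pi * (t - 7) / 12) + rng.normal(0, 5, t.size)

w = 2 * np.pi / 12                                  # annual (12-month) component
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
M, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]    # mesor and cosine/sine amplitudes

A = np.hypot(b1, b2)                                # seasonal amplitude
phi = np.arctan2(-b2, b1)                           # acrophase (radians)
peak_month = (-phi / w) % 12                        # month of the seasonal peak
print(f"mesor = {M:.1f}, amplitude = {A:.1f}, peak near month {peak_month:.1f}")
```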
Abstract:
This paper examines the causalities in mean and variance between stock returns and Foreign Institutional Investment (FII) in India. The analysis applies the Cross Correlation Function approach of Cheung and Ng (1996) to daily data from January 1999 to March 2008, divided into two periods, before and after May 2003. Empirical results show that there are uni-directional causalities in mean and variance from stock returns to FII flows irrespective of the sample period, while the reverse causalities in mean and variance are found only in the second period. These results point to FII flows having exerted an impact on the movement of Indian stock prices during the more recent period.
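A minimal sketch of the Cheung-Ng idea: after each series is standardized by a fitted conditional mean/variance model, the standardized residuals are cross-correlated at various lags (causality in mean), and likewise their squares (causality in variance), against the asymptotic 1/sqrt(N) band. The residuals below are synthetic stand-ins for fitted-model output, with an artificial two-day lead built in.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1500
u = rng.standard_normal(N)                # standardized residuals, stock returns
v = np.empty(N)                           # standardized residuals, FII flows
v[:2] = rng.standard_normal(2)
v[2:] = 0.3 * u[:-2] + rng.standard_normal(N - 2)   # returns lead flows by 2 days

def ccf(x, y, lag):
    """Sample correlation of x_t with y_{t+lag}, lag >= 0."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.mean(x[:-lag] * y[lag:]) if lag > 0 else np.mean(x * y)

band = 1.96 / np.sqrt(N)                  # approximate 5% significance band
for k in range(5):
    r_mean = ccf(u, v, k)                 # causality in mean
    r_var = ccf(u**2, v**2, k)            # causality in variance
    print(f"lag {k}: mean {r_mean:+.3f}, var {r_var:+.3f}, band +/-{band:.3f}")
```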
Abstract:
The characteristics of the power-line communication (PLC) channel are difficult to model due to the heterogeneity of the networks and the lack of common wiring practices. To obtain the full variability of the PLC channel, random channel generators are of great importance for the design and testing of communication algorithms. In this respect, we propose a random channel generator that is based on the top-down approach. Basically, we describe the multipath propagation and the coupling effects with an analytical model. We introduce the variability into a restricted set of parameters and, finally, we fit the model to a set of measured channels. The proposed model enables a closed-form description of both the mean path-loss profile and the statistical correlation function of the channel frequency response. As an example of application, we apply the procedure to a set of in-home measured channels in the band 2-100 MHz whose statistics are available in the literature. The measured channels are divided into nine classes according to their channel capacity. We provide the parameters for the random generation of channels for all nine classes, and we show that the results are consistent with the experimental ones. Finally, we merge the classes to capture the entire heterogeneity of in-home PLC channels. In detail, we introduce the class occurrence probability, and we present a random channel generator that targets the ensemble of all nine classes. The statistics of the composite set of channels are also studied, and they are compared to the results of experimental measurement campaigns in the literature.
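A minimal top-down sketch in the spirit of such generators, using a generic lossy multipath frequency response rather than the paper's fitted per-class model; every numeric range below (path counts, attenuation constants, cable lengths, propagation speed) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(5)
f = np.linspace(2e6, 100e6, 1000)                # frequency grid over the 2-100 MHz band

def random_channel(n_paths=8, a0=1e-3, a1=4e-10, K=0.7, v=1.5e8, Lmax=300.0):
    """One random realization: random path gains/lengths summed with a loss term."""
    g = rng.uniform(-1.0, 1.0, n_paths)          # path gains (reflection products)
    d = rng.uniform(5.0, Lmax, n_paths)          # path lengths in metres
    H = np.zeros_like(f, dtype=complex)
    for gi, di in zip(g, d):
        H += gi * np.exp(-(a0 + a1 * f**K) * di) * np.exp(-2j * np.pi * f * di / v)
    return H

H = random_channel()
print(f"mean path loss: {20 * np.log10(np.abs(H) + 1e-12).mean():.1f} dB")
```

Drawing many such realizations, with parameter distributions fitted per class, is what yields the mean path-loss profile and frequency correlation statistics discussed above.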
Abstract:
The Kolmogorov approach to turbulence is applied to Burgers turbulence in the stochastic adhesion model of large-scale structure formation. As the perturbative approach to this model is unreliable, a new, non-perturbative approach, based on a suitable formulation of Kolmogorov's scaling laws, is proposed here. This approach suggests that the power-law exponent of the matter density two-point correlation function is in the range 1–1.33, but it also suggests that the adhesion model neglects important aspects of the gravitational dynamics.
Abstract:
This work faces the problem of modeling real dynamical systems from the study of their time series, using a standard formulation intended as a universal abstraction of dynamical systems, irrespective of their deterministic, stochastic or hybrid nature. Deterministic and stochastic models are developed separately and then merged into a hybrid model that allows the study of generic mixed systems, that is, systems exhibiting a combination of deterministic and random behavior. This model consists of two components: a deterministic one, a difference equation derived from an autocorrelation study, and a stochastic one that models the errors made by the first. The stochastic component is a universal generator of probability distributions, based on a process composed of random variables uniformly distributed on an interval that varies in time. This universal generator is derived in the thesis from a new theory of supply and demand for a generic resource. The resulting model can be viewed as an entity with three fundamental elements: an engine generating deterministic dynamics, an internal noise source generating uncertainty, and an exposure to the environment representing the interactions of the real system with the external world. In applications, these three elements are fitted to the history of the dynamical system's time series. Once its components have been adjusted, the model behaves adaptively, taking new time series values as inputs and computing predictions about the system's future behavior. Each prediction is given as an interval within which any value is equiprobable, while any value outside the interval has null probability. In this way the model computes the future behavior and its level of uncertainty from the current state of the system. The model is applied in this thesis to very different systems, proving flexible enough to address fields of quite diverse nature. The exchange of telephone traffic between telephony operators, the evolution of financial markets and the flow of information between Internet servers are studied in depth. All these systems are successfully modeled in the same language, despite being physically very different. The study of telephony networks shows that telephone traffic patterns exhibit a strong weekly pseudo-periodicity contaminated by a large amount of noise, especially in the case of international calls. The study of financial markets shows that their underlying nature is random, with a relatively bounded range of behavior. Part of the thesis is devoted to explaining some of the most important empirical observations in financial markets, such as fat tails, power laws and volatility clustering. Finally, it is shown that communication between Internet servers has, as in the case of financial markets, a fully stochastic underlying component with fairly docile behavior, this docility becoming more marked as the distance between servers increases. Two aspects of the model stand out: its adaptability and its universality. The first is due to the fact that, once the general parameters have been adjusted, the model is fed with the observable values of the system and is able to compute future behavior from them. Despite having fixed parameters, the variability of the observables that serve as inputs leads to a great richness of possible outputs. The second aspect is due to the generic formulation of the hybrid model and to the fact that its parameters are fitted from external manifestations of the system under study rather than from its physical characteristics. These factors make the model usable in a great variety of fields. Finally, the thesis proposes in its last part other fields where very promising preliminary results have been obtained, such as financial risk modeling, routing algorithms in telecommunication networks, and climate change.
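A minimal numerical sketch of the hybrid scheme just described, under simplifying assumptions: the deterministic engine is a lag-p linear difference equation fitted from the autocorrelation structure (Yule-Walker), and the stochastic component treats its residuals as uniform on an interval sized from recent error magnitudes, yielding an equiprobable prediction interval. The series, order, window and scale factor are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic series with weekly pseudo-periodicity plus noise, standing in for
# e.g. a telephone-traffic history (purely illustrative data).
x = np.sin(np.arange(600) * 2 * np.pi / 7) + 0.3 * rng.standard_normal(600)

p = 7                                            # order of the difference equation (assumed)

def autocorr(y, k):
    y = y - y.mean()
    return np.dot(y[:len(y) - k], y[k:]) / np.dot(y, y)

r = np.array([autocorr(x, k) for k in range(p + 1)])
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz system
a = np.linalg.solve(R, r[1:])                    # Yule-Walker AR(p) coefficients

pred = np.array([a @ x[t - p:t][::-1] for t in range(p, len(x))])
resid = x[p:] - pred                             # errors of the deterministic part

# Stochastic component: residuals treated as uniform on a time-varying interval,
# sized here from recent error magnitudes (window and factor are assumptions).
half_width = 1.5 * np.abs(resid[-50:]).mean()
forecast = a @ x[-p:][::-1]
print(f"next value in [{forecast - half_width:.2f}, {forecast + half_width:.2f}] (equiprobable)")
```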