909 results for Discrete-time Dynamics
Abstract:
Statistical mechanics of two coupled vector fields is studied in the tight-binding model that describes propagation of polarized light in discrete waveguides in the presence of four-wave mixing. The energy and power conservation laws enable the formulation of the equilibrium properties of the polarization state in terms of a Gibbs measure with positive temperature. The transition line T = ∞ is established, beyond which discrete vector solitons are created. Also, in the limit of large nonlinearity, an analytical expression for the distribution of the Stokes parameters is obtained; it is found to depend only on the statistical properties of the initial polarization state and not on the strength of the nonlinearity. The evolution of the system towards the final equilibrium state is shown to pass through an intermediate stage in which the energy exchange between the waveguides is still negligible. The distribution of the Stokes parameters in this regime has a complex multimodal structure that depends strongly on the nonlinear coupling coefficients and the initial conditions.
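For reference, the Stokes parameters mentioned above are built from the two polarization components of the field. The abstract does not give its conventions, so the following is a standard definition (up to sign conventions), assuming complex amplitudes a_n and b_n of the two polarization modes in waveguide n:

```latex
S_0^{(n)} = |a_n|^2 + |b_n|^2, \quad
S_1^{(n)} = |a_n|^2 - |b_n|^2, \quad
S_2^{(n)} = 2\,\mathrm{Re}\!\left(a_n^{*} b_n\right), \quad
S_3^{(n)} = 2\,\mathrm{Im}\!\left(a_n^{*} b_n\right),
```

so that for deterministic amplitudes the parameters satisfy (S_0)^2 = (S_1)^2 + (S_2)^2 + (S_3)^2, and S_0 is the power carried by waveguide n.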
Abstract:
Fibre lasers are light sources that are synonymous with stability. They can give rise to highly coherent continuous-wave radiation, or to a stable train of mode-locked pulses with well-defined characteristics. However, they can also exhibit an exceedingly diverse range of nonlinear operational regimes spanning a multi-dimensional parameter space. The complex nature of the dynamics poses significant challenges for theoretical and experimental studies of such systems. Here, we demonstrate how the real-time experimental methodology of spatio-temporal dynamics can be used to unambiguously identify and discern between such highly complex lasing regimes. This two-dimensional representation of laser intensity allows the identification and tracking of individual features embedded in the radiation as they make round-trip circulations inside the cavity. The salient features of this methodology are highlighted by its application to Raman fibre lasers and to a partially mode-locked ring fibre laser operating in the normal dispersion regime.
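The two-dimensional representation described here is essentially a reshaping of a long single-shot intensity trace into a round-trip-by-round-trip map. The sketch below illustrates the idea; the function and parameter names are illustrative assumptions, not code from the paper.

```python
import numpy as np

def spatiotemporal_map(intensity, sample_rate, round_trip_time):
    """Reshape a 1D intensity trace into a 2D map:
    rows = consecutive cavity round trips, columns = position within a round trip."""
    samples_per_rt = int(round(sample_rate * round_trip_time))
    n_round_trips = len(intensity) // samples_per_rt
    trimmed = intensity[:n_round_trips * samples_per_rt]
    return trimmed.reshape(n_round_trips, samples_per_rt)
```

Plotting the returned array as an image makes features that circulate in the cavity appear as (nearly) vertical traces, which is what allows them to be tracked over many round trips.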
Abstract:
Conventional tools for measurement of laser spectra (e.g. optical spectrum analysers) capture data averaged over a considerable time period. However, the generation spectrum of many laser types may involve spectral dynamics whose relatively fast time scale is determined by their cavity round-trip period, calling for instrumentation featuring both high temporal and spectral resolution. Such real-time spectral characterisation becomes particularly challenging if the laser pulses are long, or if they have continuous or quasi-continuous-wave radiation components. Here we combine optical heterodyning with a technique of spatio-temporal intensity measurements that allows the characterisation of such complex sources. Fast, round-trip-resolved spectral dynamics of cavity-based systems are obtained in real time, with a temporal resolution of one cavity round trip and a frequency resolution defined by its inverse (85 ns and 24 MHz, respectively, are demonstrated). We also show how, under certain conditions for quasi-continuous-wave sources, the spectral resolution can be further increased by a factor of 100 by direct extraction of phase information from the heterodyned dynamics or by using double time scales within the spectrogram approach.
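The round-trip-resolved spectral evolution amounts to taking the power spectrum of each round-trip-long segment of the recorded heterodyne beat note, which is why the frequency resolution equals the inverse round-trip time. A minimal sketch of that step, with illustrative names (not code from the paper):

```python
import numpy as np

def round_trip_spectra(beat_signal, samples_per_rt, dt):
    """Power spectrum of each round-trip-long segment of a heterodyne beat note.
    Frequency resolution is 1/(samples_per_rt*dt), i.e. the inverse round-trip time."""
    n_rt = len(beat_signal) // samples_per_rt
    segments = beat_signal[:n_rt * samples_per_rt].reshape(n_rt, samples_per_rt)
    spectra = np.abs(np.fft.rfft(segments, axis=1)) ** 2
    freqs = np.fft.rfftfreq(samples_per_rt, d=dt)
    return freqs, spectra
```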
Abstract:
This dissertation consists of three separate essays on job search and labor market dynamics. In the first essay, “The Impact of Labor Market Conditions on Job Creation: Evidence from Firm Level Data”, I study how much changes in labor market conditions reduce employment fluctuations over the business cycle. Changes in labor market conditions make hiring more expensive during expansions and cheaper during recessions, creating counter-cyclical incentives for job creation. I estimate firm-level elasticities of labor demand with respect to changes in labor market conditions, considering two margins: changes in labor market tightness and changes in wages. Using employer-employee matched data from Brazil, I find that all firms are more sensitive to changes in wages than to changes in labor market tightness, and that there is substantial heterogeneity in labor demand elasticity across regions. Based on these results, I demonstrate that changes in labor market conditions reduce the variance of employment growth over the business cycle by 20% in the median region, and that this effect is driven equally by changes along each margin. Moreover, I show that the magnitude of the effect of labor market conditions on employment growth can be significantly affected by economic policy. In particular, I document that the rapid growth of the national minimum wage in Brazil in 1997-2010 amplified the impact of changes in labor market conditions during local expansions and diminished this impact during local recessions.
In the second essay, “A Framework for Estimating Persistence of Local Labor Demand Shocks”, I propose a decomposition that allows me to study the persistence of local labor demand shocks. Persistence of labor demand shocks varies across industries, and the incidence of shocks in a region depends on the regional industrial composition. As a result, less diverse regions are more likely to experience deeper shocks, but not necessarily longer-lasting shocks. Building on this idea, I propose a decomposition of local labor demand shocks into idiosyncratic location shocks and nationwide industry shocks, and I estimate the variance and the persistence of these shocks using the Quarterly Census of Employment and Wages (QCEW) for 1990-2013.
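The abstract does not spell out the decomposition; a common shift-share-style formulation of the same idea (not necessarily the specification used in the essay) writes the labor demand shock of region r at time t as an industry-share-weighted sum of nationwide industry shocks plus an idiosyncratic location shock:

```latex
g_{rt} \;=\; \sum_{i} s_{ir}\,\gamma_{it} \;+\; \varepsilon_{rt},
```

where s_{ir} is the employment share of industry i in region r, γ_{it} is the nationwide shock to industry i, and ε_{rt} is the idiosyncratic location shock, whose variance and persistence can then be estimated separately from those of the industry components.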
In the third essay, “Conditional Choice Probability Estimation of Continuous-Time Job Search Models”, co-authored with Peter Arcidiacono and Arnaud Maurel, we propose a novel, computationally feasible method of estimating non-stationary job search models. Non-stationary job search models arise in many applications in which a policy change can be anticipated by workers. The most prominent example of such a policy is the expiration of unemployment benefits. However, estimating these models still poses a considerable computational challenge because of the need to solve a differential equation numerically at each step of the optimization routine. We overcome this challenge by adapting conditional choice probability methods, widely used in the dynamic discrete choice literature, to job search models, and we show how the hazard rate out of unemployment and the distribution of accepted wages, both of which can be estimated in many datasets, can be used to infer the value of unemployment. We demonstrate how to apply our method by analyzing the effect of unemployment benefit expiration on the duration of unemployment using data from the Survey of Income and Program Participation (SIPP) for 1996-2007.
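For orientation, the standard reservation-wage structure that links these objects in sequential job search models (not necessarily the exact estimating equations of the essay) is

```latex
h(t) \;=\; \lambda(t)\,\bigl[\,1 - F_t\!\bigl(w^{*}(t)\bigr)\bigr],
```

where λ(t) is the offer arrival rate, F_t the wage offer distribution and w*(t) the reservation wage; the hazard h(t) and the distribution of accepted wages are observable in duration data, while the reservation wage is pinned down by the value of unemployment, which is how the latter can be inferred.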
Abstract:
R-matrix with time-dependence theory is applied to electron-impact ionisation processes for He in the S-wave model. Cross sections for electron-impact excitation, ionisation and ionisation with excitation for impact energies between 25 and 225 eV are in excellent agreement with benchmark cross sections. Ultra-fast dynamics induced by a scattering event is observed through time-dependent signatures associated with autoionisation from doubly excited states. Further insight into dynamics can be obtained through examination of the spin components of the time-dependent wavefunction.
Abstract:
As the development of a viable quantum computer nears, existing widely used public-key cryptosystems, such as RSA, will no longer be secure. Thus, significant effort is being invested into post-quantum cryptography (PQC). Lattice-based cryptography (LBC) is one such promising area of PQC, which offers versatile, efficient, and high-performance security services. However, the vulnerabilities of these implementations to side-channel attacks (SCA) remain significantly understudied. Most, if not all, lattice-based cryptosystems require noise samples generated from a discrete Gaussian distribution, and a successful timing-analysis attack can break the whole cryptosystem, making the discrete Gaussian sampler the module most vulnerable to SCA. This research proposes countermeasures against timing information leakage with FPGA-based designs of CDT-based discrete Gaussian samplers with constant response time, targeting encryption and signature scheme parameters. The proposed designs are compared against the state of the art and are shown to significantly outperform existing implementations. For encryption, the proposed sampler is 9x faster than the only other existing time-independent CDT sampler design. For signatures, the first time-independent CDT sampler in hardware is proposed.
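The constant-response-time idea behind CDT sampling can be illustrated in software: instead of a binary search that exits early (and thereby leaks the sampled value through timing), the whole cumulative table is visited on every call. This is only a sketch of the principle in Python (which is not itself a constant-time language); the thesis targets FPGA hardware, and the table contents here are a placeholder.

```python
import secrets

def cdt_sample_constant_scan(cdt, k=64):
    """Sample from a discrete distribution given its cumulative distribution table
    `cdt` (integers scaled to [0, 2**k)), visiting every entry regardless of where
    the match occurs so that running time does not depend on the sampled value.
    Sign handling for a full discrete Gaussian is omitted for brevity."""
    u = secrets.randbelow(1 << k)        # uniform k-bit integer
    sample = 0
    for threshold in cdt:
        sample += int(u >= threshold)    # no early exit, no data-dependent branch
    return sample
```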
Abstract:
Population dynamics has been modeled with differential equations ever since Malthus began his studies more than two hundred years ago. Conventional models always treat relationships between species as static, capturing only their dependence over a fixed period of time, even though it is known that interspecific relationships can change with time. Here we propose a model for population dynamics that incorporates the temporal evolution of the interactions between species. This model covers a wide range of interactions, from predator-prey to mutualistic relationships, whether obligate or facultative. The mechanism we describe allows the transition from one class of interspecific relationship to another, depending on external parameters set by the context. These transitions could prevent the extinction of one of the species if it comes to depend too strongly on the environment or on its relationship with the other species.
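A minimal way to see how time-varying interactions can shift a pair of species between competition, predator-prey and mutualism is to let the interaction coefficients of a Lotka-Volterra-type system depend on time. The sketch below is a simplified illustration under that assumption, not the specific model of the abstract; all names and parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def a12(t, amp=0.6, omega=0.1):
    """Illustrative time-varying effect of species 2 on species 1."""
    return amp * np.sin(omega * t)

def a21(t, amp=0.6, omega=0.1, phase=np.pi / 2):
    """Effect of species 1 on species 2; the phase shift lets the signs of the two
    coefficients differ, so the pair drifts between competition, predator-prey
    and mutualism over time."""
    return amp * np.sin(omega * t + phase)

def dynamics(t, n, r=(1.0, 0.8), k=(10.0, 8.0)):
    n1, n2 = n
    dn1 = n1 * (r[0] * (1 - n1 / k[0]) + a12(t) * n2 / k[0])
    dn2 = n2 * (r[1] * (1 - n2 / k[1]) + a21(t) * n1 / k[1])
    return [dn1, dn2]

sol = solve_ivp(dynamics, (0.0, 400.0), [1.0, 1.0], max_step=0.5)
```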
Abstract:
In this talk, we propose an all-regime Lagrange-Projection-like numerical scheme for the gas dynamics equations. By all regime, we mean that the numerical scheme is able to compute accurate approximate solutions with an under-resolved discretization with respect to the Mach number M, i.e. such that the ratio between the Mach number M and the mesh size or the time step is small with respect to 1. The key idea is to decouple acoustic and transport phenomena and then alter the numerical flux in the acoustic approximation to obtain a uniform truncation error in terms of M. This modified scheme is conservative and endowed with good stability properties with respect to the positivity of the density and the internal energy. A discrete entropy inequality under a condition on the modification is obtained thanks to a reinterpretation of the modified scheme in the Harten, Lax and van Leer formalism. A natural extension to multi-dimensional problems discretized over unstructured meshes is proposed. A simple and efficient semi-implicit scheme is then also proposed. The resulting scheme is stable under a CFL condition driven by the (slow) material waves and not by the (fast) acoustic waves, and so satisfies the all-regime property. Numerical evidence is presented showing the ability of the scheme to deal with tests where the flow regime may vary from low to high Mach values.
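To make the gain from the material-wave CFL condition concrete: in the low-Mach limit |u| << c, a time step restricted only by the material speed can be orders of magnitude larger than one restricted by the acoustic speed. The snippet below just compares the two bounds; it is illustrative and not part of the scheme itself.

```python
import numpy as np

def time_steps(u, c, dx, cfl=0.5):
    """Compare the acoustic CFL step (explicit schemes, fast waves) with the
    material-wave CFL step that the semi-implicit treatment allows (slow waves)."""
    dt_acoustic = cfl * dx / np.max(np.abs(u) + c)
    dt_material = cfl * dx / np.max(np.abs(u))
    return dt_acoustic, dt_material
```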
Abstract:
Background: Partially clonal organisms are very common in nature, yet the influence of partial asexuality on the temporal dynamics of genetic diversity remains poorly understood. Mathematical models accounting for clonality predict deviations only for extremely rare sex, and only towards mean inbreeding coefficients F_IS < 0. Yet in partially clonal species both F_IS < 0 and F_IS > 0 are frequently observed, also in populations where there is evidence for a significant amount of sexual reproduction. Here, we studied the joint effects of partial clonality, mutation and genetic drift with a state- and time-discrete Markov chain model describing the dynamics of F_IS over time under increasing rates of clonality. Results: Results of the mathematical model and simulations show that partial clonality slows down the asymptotic convergence to F_IS = 0. Thus, although clonality alone does not lead to departures from Hardy-Weinberg expectations once the final equilibrium state is reached, both negative and positive F_IS values can arise transiently, even at intermediate rates of clonality. More importantly, such "transient" departures from Hardy-Weinberg proportions may last long, as clonality increases the temporal variation of F_IS and reduces its rate of change over time, leading to a hyperbolic increase of the maximal time needed to reach the final mean value F_IS,∞ expected at equilibrium. Conclusion: Our results argue for a dynamical interpretation of F_IS in clonal populations. Negative values cannot be interpreted as unequivocal evidence for extremely scarce sex: they may also reflect intermediate rates of clonality in finite populations. Complementary observations (e.g. frequency distribution of multilocus genotypes, population history) or time series data may help to discriminate between the possible conclusions on the extent of clonality when mean F_IS values deviate from zero and/or a large variation of F_IS over loci is observed.
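The transient behaviour of F_IS under partial clonality can also be explored with a simple individual-based simulation. The sketch below tracks F_IS = 1 − H_obs/H_exp at a single biallelic locus; it is a generic illustration under stated assumptions, not the state- and time-discrete Markov chain model of the study, and all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_fis(n_ind=500, clonal_rate=0.8, mu=1e-3, n_gen=2000):
    """Single-locus F_IS trajectory in a partially clonal population of n_ind diploids:
    each generation a fraction `clonal_rate` of offspring copy a parent's genotype,
    the rest are produced by random mating; symmetric mutation at rate `mu`."""
    genotypes = rng.integers(0, 2, size=(n_ind, 2))   # rows = individuals, cols = alleles
    fis = np.full(n_gen, np.nan)
    for t in range(n_gen):
        clonal = rng.random(n_ind) < clonal_rate
        parents = rng.integers(0, n_ind, size=n_ind)
        offspring = np.empty_like(genotypes)
        offspring[clonal] = genotypes[parents[clonal]]          # clonal: copy genotype
        n_sex = int((~clonal).sum())
        moms = rng.integers(0, n_ind, size=n_sex)
        dads = rng.integers(0, n_ind, size=n_sex)
        offspring[~clonal, 0] = genotypes[moms, rng.integers(0, 2, size=n_sex)]
        offspring[~clonal, 1] = genotypes[dads, rng.integers(0, 2, size=n_sex)]
        mutate = rng.random(offspring.shape) < mu               # symmetric mutation
        offspring[mutate] = 1 - offspring[mutate]
        genotypes = offspring
        p = genotypes.mean()
        h_exp = 2 * p * (1 - p)
        h_obs = (genotypes[:, 0] != genotypes[:, 1]).mean()
        if h_exp > 0:
            fis[t] = 1 - h_obs / h_exp
    return fis
```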
Abstract:
When it comes to information sets in real life, pieces of the whole set are often not available. This problem can have various origins and can therefore follow different patterns. In the literature, it is known as Missing Data. The issue can be handled in various ways: by not taking incomplete observations into consideration, by estimating what the missing values originally were, or simply by ignoring the fact that some values are missing. The methods used to estimate missing data are called Imputation Methods. The work presented in this thesis has two main goals. The first is to determine whether any kind of interaction exists between Missing Data, Imputation Methods and Supervised Classification algorithms when they are applied together. For this first problem we consider a scenario in which the databases used are discrete, where "discrete" means that no relation between observations is assumed. These datasets underwent processes involving different combinations of the three components mentioned. The outcome showed that the missing data pattern strongly influences the result produced by a classifier. Also, in some of the cases, the complex imputation techniques investigated in the thesis were able to obtain better results than simple ones. The second goal of this work is to propose a new imputation strategy, this time constraining the specifications of the previous problem to a special kind of dataset, the multivariate Time Series. We designed new imputation techniques for this particular domain and combined them with some of the contrasted strategies tested in the previous chapter of this thesis. The time series were also subjected to processes involving missing data and imputation in order to finally propose an overall better imputation method. In the final chapter of this work, a real-world example is presented, describing a water quality prediction problem. The databases that characterize this problem have their own original missing values, which provides a real-world benchmark to test the algorithms developed in this thesis.
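As a point of reference for the kind of comparison described above (simple versus more complex imputation), the snippet below contrasts a mean imputer with a k-nearest-neighbours imputer on a toy array; it uses off-the-shelf scikit-learn estimators and is not one of the techniques proposed in the thesis.

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

# Toy data with missing entries marked as NaN
X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan],
              [4.0, 5.0]])

mean_filled = SimpleImputer(strategy="mean").fit_transform(X)   # simple baseline
knn_filled = KNNImputer(n_neighbors=2).fit_transform(X)         # more complex method
```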
Abstract:
Turbulent plasmas inside tokamaks are modeled and studied using guiding-center theory, applied to charged test particles, in a Hamiltonian framework. The equations of motion for the guiding-center dynamics, under the conditions of a constant and uniform magnetic field and a turbulent electrostatic field, are derived by averaging over the fast gyroangle, to first and second order in the guiding-center potential, using invertible changes of coordinates such as Lie transforms. The equations of motion are then made dimensionless, exploiting the temporal and spatial periodicities of the model chosen for the electrostatic potential. They are implemented numerically in Python; the Fast Fourier Transform and its inverse are used. Improvements to the original Python scripts are made, notably the introduction of a power-law curve fit to account for anomalous diffusion, the possibility to integrate the equations in two steps to save computational time by removing trapped trajectories, and the implementation of multicolored stroboscopic plots to distinguish between trapped and untrapped guiding centers. Post-processing of the results is done in MATLAB. The values and ranges of the parameters chosen for the simulations are selected based on numerous simulations used as feedback tools. In particular, a recurring value for the threshold used to detect trapped trajectories is evidenced. The effects of the Larmor radius, the amplitude of the guiding-center potential and the intensity of its second-order term are studied by analyzing the diffusive regimes, the stroboscopic plots and the shape of the guiding-center potentials. The main result is the identification of cases of anomalous diffusion depending on the values of the parameters (mostly the Larmor radius). The transitions between diffusive regimes are identified. The presence of highways for the super-diffusive trajectories is unveiled. The influence of the charge on these transitions from diffusive to ballistic behavior is analyzed.
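The power-law fit mentioned above typically amounts to fitting the mean-squared displacement to MSD(t) ≈ D t^α, with α ≈ 1 for normal diffusion, α < 1 for sub-diffusion, α > 1 for super-diffusion and α ≈ 2 for ballistic transport. A minimal sketch of such a fit is given below; the function names are illustrative and not taken from the thesis scripts.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, d, alpha):
    """MSD model: D * t**alpha."""
    return d * t**alpha

def fit_diffusion_exponent(t, msd):
    """Fit the mean-squared displacement to a power law and return (D, alpha);
    alpha characterises the diffusive regime (normal, sub-, super-diffusive, ballistic)."""
    (d, alpha), _ = curve_fit(power_law, t, msd, p0=(1.0, 1.0))
    return d, alpha
```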
Abstract:
Films of poly(2,5-dicyano-p-phenylene vinylene), DCNPPV, were obtained by electrochemical synthesis on a transparent electrode consisting of a thin gold layer (20 nm) deposited on a glass plate. The DCNPPV films, of 4 µm thickness, were produced by electropolymerization of α,α,α',α'-tetrabromo-2,5-dicyano-p-xylene at different applied potentials (-0.15, -0.25, -0.40, -0.60, -0.80, and -1.0 V) using 0.1 mol L-1 tetraethylammonium bromide in acetonitrile as the supporting electrolyte. The emission decays have three exponential components: a fast component in the picosecond range (200-400 ps) and two others of about one and five nanoseconds at 293 K. The fluorescence quenching process seems to occur by exciton trapping in a low-energy site and by quenching by residual bromine monomer attached at the end of the polymer chain. However, the electrochemical synthesis generates entrapped bromide or ion pairs during the growth step of the film, which also contributes to the deactivation. Changing the electrolyte from bromide to perchlorate significantly reduces this additional quenching effect by allowing ion exchange of the formed bromide with the nonquenching perchlorate anion.
Abstract:
A susceptible-infective-recovered (SIR) epidemiological model based on a probabilistic cellular automaton (PCA) is employed for simulating the temporal evolution of the registered cases of chickenpox in Arizona, USA, between 1994 and 2004. At each time step, every individual is in one of the states S, I, or R. The parameters of this model are the probabilities of each individual (each cell forming the PCA lattice) passing from one state to another. Here, the values of these probabilities are identified using a genetic algorithm. If the parameters are allowed to take nonrealistic values, the predictions agree better with the historical series than if they are forced to take realistic values. A discussion of how the size of the PCA lattice affects the quality of the model predictions is presented. Copyright (C) 2009 L. H. A. Monteiro et al.
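To illustrate the kind of update rule such a PCA uses, the sketch below performs one synchronous S→I→R→S step on a lattice with a nearest-neighbour infection rule. It is a generic illustration: the neighbourhood/connection scheme and the probability names are assumptions, not necessarily those of the cited model, and the probabilities themselves are the quantities the genetic algorithm would identify.

```python
import numpy as np

S, I, R = 0, 1, 2
rng = np.random.default_rng(1)

def pca_sir_step(lattice, p_inf, p_rec, p_loss):
    """One synchronous update of a probabilistic cellular automaton SIR model.
    p_inf: per-infected-neighbour infection probability, p_rec: I->R, p_loss: R->S."""
    new = lattice.copy()
    infected = (lattice == I).astype(int)
    # number of infected von Neumann neighbours, periodic boundaries
    n_inf = (np.roll(infected, 1, 0) + np.roll(infected, -1, 0) +
             np.roll(infected, 1, 1) + np.roll(infected, -1, 1))
    p_catch = 1 - (1 - p_inf) ** n_inf
    r = rng.random(lattice.shape)
    new[(lattice == S) & (r < p_catch)] = I
    new[(lattice == I) & (r < p_rec)] = R
    new[(lattice == R) & (r < p_loss)] = S
    return new
```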
Abstract:
We report on a method to study the dynamics of triplet formation based on the fluorescence signal produced by a pulse train. Essentially, the pulse train acts as a sequence of pump-probe pulses that precisely map the excited-state dynamics on the long time scale. This allows the characterization of processes that affect the population evolution of the first excited singlet state, whose decay gives rise to the fluorescence. The technique proved valuable for measuring parameters of triplet formation in organic molecules. Additionally, this single-beam technique has the advantages of simplicity, low noise and background-free signal detection. (C) 2011 Optical Society of America
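The singlet-depletion picture behind such measurements is usually written as a pair of rate equations; the form below is a standard simplification (not reproduced from the paper), with σ the absorption cross section, I(t) the pulse-train intensity, τ_f the fluorescence lifetime, k_isc the intersystem-crossing rate and τ_T the triplet lifetime:

```latex
\frac{dn_{S_1}}{dt} = \sigma I(t)\, n_{S_0} - \frac{n_{S_1}}{\tau_f} - k_{\mathrm{isc}}\, n_{S_1},
\qquad
\frac{dn_{T}}{dt} = k_{\mathrm{isc}}\, n_{S_1} - \frac{n_{T}}{\tau_T},
```

with the fluorescence generated by each pulse proportional to the S_1 population it creates, so that the pulse-to-pulse decay of the fluorescence along the train tracks the build-up of the triplet reservoir.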
Abstract:
This paper studies semistability of the recursive Kalman filter in the context of linear time-varying (LTV), possibly nondetectable systems with incorrect noise information. Semistability is a key property, as it ensures that the actual estimation error does not diverge exponentially. We explore structural properties of the filter to obtain a necessary and sufficient condition for the filter to be semistable. The condition involves neither limiting gains nor the solution of Riccati equations, as these can be difficult to obtain numerically and may not exist. We also compare semistability with the notions of stability and stability with respect to the initial error covariance, and we show that semistability, in a sense, makes no distinction between persistent and nonpersistent incorrect noise models, as opposed to stability. In the linear time-invariant scenario we obtain algebraic, easy-to-test conditions for semistability and stability, which complement results available in the context of detectable systems. Illustrative examples are included.
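To make the setting concrete: with incorrect noise information, the filter gain is computed from assumed covariances while the actual error covariance evolves with the true ones. The sketch below illustrates this distinction for a time-invariant case; it is only an illustration of the mismatched-noise setup (the paper's conditions themselves involve neither of these recursions), and all names are assumptions.

```python
import numpy as np

def actual_error_covariance(A, C, Q_true, R_true, Q_model, R_model, P0, n_steps):
    """One-step-predictor Kalman recursion: the gain uses the assumed (possibly
    incorrect) noise covariances, while the actual estimation error covariance
    is propagated with the true ones."""
    P_model = P0.copy()    # covariance the filter believes
    P_true = P0.copy()     # actual estimation error covariance
    trace_history = []
    for _ in range(n_steps):
        K = A @ P_model @ C.T @ np.linalg.inv(C @ P_model @ C.T + R_model)
        F = A - K @ C                                   # closed-loop matrix
        P_model = F @ P_model @ A.T + Q_model           # compact Riccati step
        P_true = F @ P_true @ F.T + Q_true + K @ R_true @ K.T
        trace_history.append(np.trace(P_true))          # diverges if filter is not semistable
    return P_model, P_true, trace_history
```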