911 results for Deterministic walkers
Abstract:
In this thesis, I present the realization of a fiber-optical interface using optically trapped cesium atoms, which is an efficient tool for coupling light and atoms. The basic principle of the presented scheme relies on the trapping of neutral cesium atoms in a two-color evanescent field surrounding a nanofiber. The strong confinement of the fiber-guided light, which also protrudes outside the nanofiber, provides strong confinement of the atoms as well as efficient coupling to near-resonant light propagating through the fiber. In chapter 1, the necessary physical and mathematical background describing the propagation of light in an optical fiber is presented. The exact solution of Maxwell's equations allows us to model fiber-guided light fields which give rise to the trapping potentials and the atom-light coupling in the close vicinity of a nanofiber. Chapter 2 gives the theoretical background of light-atom interaction. A quantum mechanical model of the light-induced shifts of the relevant atomic levels is reviewed, which allows us to quantify the perturbation of the atomic states due to the presence of the trapping light fields. The experimental realization of the fiber-based atom trap is the focus of chapter 3. Here, I analyze the properties of the fiber-based trap in terms of the confinement of the atoms and the impact of several heating mechanisms. Furthermore, I demonstrate the transportation of the trapped atoms, as a first step towards a deterministic delivery of individual atoms. In chapter 4, I present the successful interfacing of the trapped atomic ensemble and fiber-guided light. Three different approaches are discussed, involving the measurement either of near-resonant scattering in absorption or of the emission into the guided mode of the nanofiber. In the analysis of the spectroscopic properties of the trapped ensemble we find good agreement with the predictions of the theoretical model discussed in chapter 2. In addition, I introduce a non-destructive scheme for the interrogation of the atomic states, which is sensitive to phase shifts of far-detuned fiber-guided light interacting with the trapped atoms. The inherent birefringence in our system, induced by the atoms, changes the state of polarization of the probe light and can thus be detected via a Stokes vector measurement.
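As an illustration of this detection principle, the following minimal sketch (with a hypothetical differential phase; function names and numbers are not from the thesis) shows how an atom-induced birefringence changes the Stokes vector of a probe field:

```python
import numpy as np

def stokes_from_jones(E):
    """Stokes parameters (S0..S3) of a transverse field E = (Ex, Ey).
    Sign conventions for S3 vary in the literature."""
    Ex, Ey = E
    S0 = abs(Ex)**2 + abs(Ey)**2
    S1 = abs(Ex)**2 - abs(Ey)**2
    S2 = 2 * np.real(np.conj(Ex) * Ey)
    S3 = 2 * np.imag(np.conj(Ex) * Ey)
    return np.array([S0, S1, S2, S3])

# Probe light initially linearly polarized at 45 degrees.
E_in = np.array([1.0, 1.0]) / np.sqrt(2)

# Atom-induced birefringence modelled as a small differential phase
# between the two polarization components (hypothetical value).
phi = 0.05  # rad
E_out = np.array([E_in[0], E_in[1] * np.exp(1j * phi)])

print(stokes_from_jones(E_in))   # [1, 0, 1, 0]
print(stokes_from_jones(E_out))  # S2 -> cos(phi), S3 -> sin(phi)
```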
Abstract:
Decomposition-based approaches are recalled from the primal and dual points of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the idea of aggregated-versus-disaggregated formulations to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required to solve the master problem. This trade-off is explored for several sets of instances, and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is then addressed: an undirected graph with edge costs is given together with a discrete set of balance matrices representing different supply/demand scenarios, and the goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible in every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented. The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
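A minimal sketch of the Frank-Wolfe iteration on a toy route assignment instance (two parallel links with linear latencies; all coefficients are invented, and in a real network the all-or-nothing step is a shortest-path computation):

```python
import numpy as np

# Two parallel links between one origin-destination pair,
# latency t_i(x) = a_i + b_i * x (illustrative coefficients).
a = np.array([1.0, 2.0])
b = np.array([0.2, 0.1])
demand = 10.0

def latency(x):
    return a + b * x

def potential(x):
    """Beckmann potential: sum_i integral_0^{x_i} t_i(s) ds."""
    return np.sum(a * x + 0.5 * b * x**2)

x = np.array([demand, 0.0])           # any feasible start
for k in range(200):
    # All-or-nothing assignment: send all demand on the cheapest link.
    y = np.zeros(2)
    y[np.argmin(latency(x))] = demand
    gamma = 2.0 / (k + 2.0)           # classic diminishing step size
    x = x + gamma * (y - x)

print("equilibrium flows:", x)        # ~ [6.67, 3.33], equal latencies
print("potential:", potential(x))
```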
Abstract:
In this thesis we provide a characterization of probabilistic computation in itself, from a recursion-theoretical perspective, without reducing it to deterministic computation. More specifically, we show that probabilistic computable functions, i.e., those functions which are computed by Probabilistic Turing Machines (PTMs), can be characterized by a natural generalization of Kleene's partial recursive functions which includes, among the initial functions, one that returns the identity or the successor, each with probability 1/2. We then prove the equi-expressivity of the obtained algebra and the class of functions computed by PTMs. In the second part of the thesis we investigate the relations between our recursion-theoretical framework and sub-recursive classes, in the spirit of Implicit Computational Complexity. More precisely, endowing predicative recurrence with a random base function is proved to lead to a characterization of the polynomial-time computable probabilistic functions.
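The probabilistic base function described above is easy to state concretely. The sketch below (function names are mine) shows it together with the kind of random variable its iterates define:

```python
import random

def rand_base(x):
    """The probabilistic initial function: returns the identity or the
    successor of x, each with probability 1/2."""
    return x + random.getrandbits(1)

def iterate(f, n, x):
    """Compose f with itself n times; with f = rand_base this yields
    x plus a Binomial(n, 1/2) increment."""
    for _ in range(n):
        x = f(x)
    return x

samples = [iterate(rand_base, 10, 0) for _ in range(5)]
print(samples)  # e.g. [4, 6, 5, 7, 3]: draws from Binomial(10, 1/2)
```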
Abstract:
We consider a time-inhomogeneous diffusion process given by a stochastic differential equation whose drift term contains a deterministic T-periodic signal with known periodicity. This signal is assumed to belong to a Besov space. We estimate it using a nonparametric wavelet estimator. Our estimator is inspired by a wavelet density estimator with thresholding constructed in 1996 by Donoho, Johnstone, Kerkyacharian, and Picard in a classical i.i.d. model. Under certain ergodicity assumptions on the process, we obtain nonparametric convergence rates that match the rates of the classical i.i.d. case up to a logarithmic term. These rates are established via oracle inequalities that rely on results on discrete-time Markov chains by Clémençon (2001). In addition, we consider a technically simpler special case and present some computer simulations of the estimator.
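As a rough illustration of the wavelet thresholding idea underlying such estimators (a plain regression stand-in with the universal threshold of Donoho et al., not the diffusion setting of the thesis):

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)

# Noisy observations of a T-periodic signal on a uniform grid
# (a crude stand-in for the drift estimation problem; illustrative only).
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
signal = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
y = signal + 0.3 * rng.standard_normal(t.size)

# Wavelet decomposition, soft thresholding, reconstruction.
coeffs = pywt.wavedec(y, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise level estimate
thr = sigma * np.sqrt(2 * np.log(y.size))           # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
estimate = pywt.waverec(coeffs, "db4")[: y.size]

print("RMSE:", np.sqrt(np.mean((estimate - signal) ** 2)))
```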
Abstract:
Correct modelling of the reflector region of GEN III+ systems is a fundamental step for an accurate prediction of the cell parameters, whose values directly influence the power distribution over the whole core. This need has become even more pressing after the observation that the "power tilt" phenomenon is amplified in nuclear cores equipped with a heavy reflector. For these reasons, this thesis devotes particular attention to the modelling methods and to the generation of homogenized effective cross sections and assembly discontinuity factors (ADF) in the reflector region. The deterministic code used for the calculations is SCALE 6.1.3. The considerable difference in neutronic properties, combined with the high geometric heterogeneity between core and reflector, suggested a preliminary analysis of the GEN II reflector system proposed in the NEA-NSC-DOC (2013) benchmark, to test the ability of SCALE 6.1.3 to perform a correct cell calculation with a one-dimensional assembly/reflector model. The results obtained are compared with those presented in the benchmark and with those evaluated with the Monte Carlo code SERPENT 2.0, confirming the computational capability of SCALE 6.1.3. The analysis of the modelling of GEN III+ reflector systems was carried out by deriving the cell parameters for homogeneous configurations and for a series of exact geometric configurations covering all the models of the reflector system along the angular direction of the reflector. A sensitivity analysis on operating parameters and on code parameters was also performed. Finally, a color-set calculation was carried out to investigate the influence of 2-D effects on the cell parameters. The results represent an improvement in the knowledge of reflector cell parameters and can be used for a more precise evaluation of the tilt phenomenon in GEN III+ systems.
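For reference, a minimal sketch of the standard definition of an assembly discontinuity factor in generalized equivalence theory (the surface flux values are hypothetical, not results of the thesis):

```python
def assembly_discontinuity_factor(phi_het_surface, phi_hom_surface):
    """ADF in the sense of generalized equivalence theory: ratio of the
    surface-averaged heterogeneous flux (from the transport/cell code)
    to the surface flux of the homogenized diffusion solution."""
    return phi_het_surface / phi_hom_surface

# Hypothetical two-group surface fluxes at a core/reflector interface.
adf_fast = assembly_discontinuity_factor(0.95, 1.00)
adf_thermal = assembly_discontinuity_factor(1.30, 1.00)
print(adf_fast, adf_thermal)
```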
Abstract:
Logistics involves planning, managing, and organizing the flows of goods from the point of origin to the point of destination in order to meet some requirements. Logistics and transportation aspects are very important and represent a relevant cost for producing and shipping companies, but also for public administrations and private citizens. The optimization of resources and the improvement of the organization of operations are crucial for all branches of logistics, from operations management to transportation. As we will see in this work, optimization techniques, models, and algorithms are important methods for solving the ever new and more complex problems arising in different segments of logistics. Many operations management and transportation problems are related to the class of optimization problems called Vehicle Routing Problems (VRPs). In this work, we consider several real-world deterministic and stochastic problems included in the wide class of VRPs, and we solve them by means of exact and heuristic methods. We treat three classes of real-world routing and logistics problems. First, we deal with one of the most important tactical problems arising in the management of bike sharing systems, the Bike sharing Rebalancing Problem (BRP). Second, we propose models and algorithms for real-world earthwork optimization problems. Third, we describe the 3D printing (3DP) process and highlight several optimization issues in 3DP. Among those, we define the problem related to tool path definition in the 3DP process, the 3D Routing Problem (3DRP), which is a generalization of the arc routing problem. We present an ILP model and several heuristic algorithms to solve the 3DRP.
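To give a flavour of the heuristic side, the sketch below builds a single-vehicle rebalancing tour with a greedy nearest-feasible-neighbour rule on a toy instance; it illustrates the BRP problem structure, not one of the algorithms developed in the thesis:

```python
import math

# Stations with net demand: positive = surplus bikes to pick up,
# negative = deficit to deliver (toy instance, illustrative only).
stations = {
    "A": ((0, 0), +3), "B": ((4, 1), -2),
    "C": ((1, 5), +2), "D": ((5, 4), -3),
}
depot = (2, 2)
capacity = 4

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Greedy construction: always visit the closest station whose
# pickup/delivery keeps the vehicle load within [0, capacity].
# A real algorithm must also handle the case of no feasible station
# (backtracking, split visits); this toy instance never hits it.
load, pos, route = 0, depot, []
todo = dict(stations)
while todo:
    feasible = [s for s, (_, d) in todo.items() if 0 <= load + d <= capacity]
    nxt = min(feasible, key=lambda s: dist(pos, todo[s][0]))
    pos, d = todo.pop(nxt)
    load += d
    route.append(nxt)
print("route:", route, "final load:", load)   # e.g. A, B, C, D with load 0
```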
Abstract:
How to evaluate the cost-effectiveness of repair/retrofit interventions vs. demolition/replacement, and what level of shaking intensity the chosen repair/retrofit technique can sustain, are open questions affecting the pre-earthquake prevention, post-earthquake emergency, and reconstruction phases. The (mis)conception that the cost of retrofit interventions increases linearly with the achieved seismic performance (%NBS) often discourages stakeholders from considering repair/retrofit options in a post-earthquake damage situation. Similarly, in a pre-earthquake phase, only the minimum (by-law) level of %NBS might be targeted, leading in some cases to no action. Furthermore, the performance measure compelling owners to take action, the %NBS, is generally evaluated deterministically. By not directly reflecting epistemic and aleatory uncertainties, the assessment can result in misleading confidence in the expected performance. The present study aims at contributing to the delicate decision-making process of repair/retrofit vs. demolition/replacement by developing a framework to assist stakeholders with the evaluation of the effects, in terms of long-term losses and benefits, of an increment in their initial investment (targeted retrofit level), and by highlighting the uncertainties hidden behind a deterministic approach. For a pre-1970 case-study building, different retrofit solutions are considered, targeting different levels of %NBS, and the actual probability of reaching Collapse under a suite of ground motions is evaluated, providing a correlation between %NBS and risk. Both a simplified and a probabilistic loss modelling are then undertaken to study the relationship between %NBS and expected direct and indirect losses.
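A minimal sketch of the probabilistic ingredients mentioned above, assuming a standard lognormal collapse fragility and a toy hazard curve (all parameter values are hypothetical, not those of the case study):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def p_collapse(im, theta=1.2, beta=0.45):
    """Lognormal collapse fragility: P(Collapse | IM = im).
    theta = median collapse intensity, beta = dispersion
    (hypothetical values, e.g. fitted to ground-motion analyses)."""
    return norm.cdf(np.log(im / theta) / beta)

# Combining fragility with a hazard curve gives the mean annual
# frequency of collapse: lambda_c = integral P(C | im) |d lambda(im)|.
im = np.linspace(0.05, 3.0, 300)
hazard = 1e-3 * (im / 0.4) ** -2.5          # toy power-law hazard curve
lam_c = trapezoid(p_collapse(im) * np.abs(np.gradient(hazard, im)), im)
print("mean annual frequency of collapse ~", lam_c)
```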
Abstract:
Efficient coupling of light to quantum emitters, such as atoms, molecules, or quantum dots, is one of the great challenges in current research. The interaction can be strongly enhanced by coupling the emitter to the evanescent field of subwavelength dielectric waveguides that offer strong lateral confinement of the guided light. In this context, subwavelength-diameter optical nanofibers as part of a tapered optical fiber (TOF) have proven to be a powerful tool which also provides an efficient transfer of the light from the interaction region to an optical bus, that is to say, from the nanofiber to an optical fiber.

Another approach towards enhancing light–matter interaction is to employ an optical resonator in which the light circulates and thus passes the emitters many times. Here, both approaches are combined by experimentally realizing a microresonator with an integrated nanofiber waist. This is achieved by building a fiber-integrated Fabry-Pérot type resonator from two fiber Bragg grating mirrors with a stop-band near the cesium D2-line wavelength. The characteristics of this resonator fulfill the requirements of nonlinear optics, optical sensing, and cavity quantum electrodynamics in the strong-coupling regime. Together with its advantageous features, such as a constant high coupling strength over a large volume, tunability, high transmission outside the mirror stop-band, and a monolithic design, this resonator is a promising tool for experiments with nanofiber-coupled atomic ensembles in the strong-coupling regime.

The resonator's high sensitivity to the optical properties of the nanofiber provides a probe for changes of physical parameters that affect the guided optical mode, e.g., the temperature via the thermo-optic effect of silica. Utilizing this detection scheme, the thermalization dynamics due to far-field heat radiation of a nanofiber is studied over a large temperature range. This investigation provides, for the first time, a measurement of the total radiated power of an object with a diameter smaller than all absorption lengths in the thermal spectrum, at the level of a single object of deterministic shape and material. The results show excellent agreement with an ab initio thermodynamic model that treats heat radiation as a volumetric effect and takes the emitter's shape and size relative to the emission wavelength into account. Modeling and investigating the thermalization of microscopic objects of arbitrary shape from first principles is of fundamental interest and has important applications, such as heat management in nano-devices or radiative forcing of aerosols in Earth's climate system.

Using a similar method, the effect of the TOF's mechanical modes on the polarization and phase of the fiber-guided light is studied. The measurement results show that in typical TOFs these quantities exhibit high-frequency thermal fluctuations. They originate from high-Q torsional oscillations that couple to the nanofiber-guided light via the strain-optic effect. An ab initio opto-mechanical model of the TOF is developed that provides an accurate quantitative prediction for the mode spectrum and the mechanically induced polarization and phase fluctuations. These high-frequency fluctuations may limit the ultimate ideality of fiber coupling into photonic structures. Furthermore, first estimations show that they may currently limit the storage time of nanofiber-based atom traps. The model, on the other hand, provides a method to design TOFs with tailored mechanical properties in order to meet experimental requirements.
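For reference, the transmission of an ideal lossless Fabry-Pérot resonator as a function of the round-trip phase, whose resonances shift when the nanofiber's optical path length changes (e.g., via the thermo-optic effect); this is a generic textbook sketch, not the measured response of the device described above:

```python
import numpy as np

def fp_transmission(phase, R=0.99):
    """Airy transmission of a lossless Fabry-Perot resonator with two
    identical mirrors of reflectivity R, vs. round-trip phase."""
    F = 4 * R / (1 - R) ** 2                  # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(phase / 2) ** 2)

finesse = np.pi * np.sqrt(0.99) / (1 - 0.99)  # ~313 for R = 0.99
phase = np.linspace(-0.2, 0.2, 5)
print("finesse ~", round(finesse), fp_transmission(phase))
```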
Abstract:
One of the most important challenges in chemistry and materials science is the connection between the composition of a compound and its chemical and physical properties. In solids, these are greatly influenced by the crystal structure.

The prediction of hitherto unknown crystal structures with regard to external conditions like pressure and temperature is therefore one of the most important goals of theoretical chemistry. The stable structure of a compound is the global minimum of the potential energy surface, which is the high-dimensional representation of the enthalpy of the investigated system with respect to its structural parameters. Because the complexity of the problem grows exponentially with the system size, it can only be solved via heuristic strategies.

Improvements to the artificial bee colony method, where the local exploration of the potential energy surface is done by a high number of independent walkers, are developed and implemented. This results in an improved communication scheme between these walkers, which directs the search towards the most promising areas of the potential energy surface.

The minima hopping method uses short molecular dynamics simulations at elevated temperatures to direct the structure search from one local minimum of the potential energy surface to the next. A modification is developed and implemented in which the local information around each minimum is extracted and used to optimize the search direction. Our method uses this local information to increase the probability of finding new, lower local minima, which leads to an enhanced performance of the global optimization algorithm.

Hydrogen is a highly relevant system, due to the possibility of finding a metallic phase and even a superconductor with a high critical temperature. An application of a structure prediction method to SiH12 finds stable crystal structures in this material; additionally, it becomes metallic at relatively low pressures.
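A schematic sketch of the minima hopping feedback loop on a toy potential energy surface: the escape step stands in for the short high-temperature MD run, the feedback on temperature and acceptance threshold follows the general minima hopping idea, and all parameters are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def pes(x):
    """Toy multi-minimum potential energy surface (a stand-in for the
    enthalpy landscape of a crystal structure)."""
    return np.sum(0.1 * x**2 + np.sin(3.0 * x))

def escape(x, T):
    """Stand-in for the short high-temperature MD run: perturb the
    configuration with an amplitude growing with temperature T."""
    return x + rng.normal(scale=0.5 * np.sqrt(T), size=x.shape)

x = minimize(pes, rng.normal(size=2)).x       # first local minimum
history, T, Ediff = [pes(x)], 1.0, 0.5
for _ in range(50):
    x_new = minimize(pes, escape(x, T)).x     # hop and relax locally
    if np.allclose(x_new, x, atol=1e-3):
        T *= 1.1                  # same minimum: escape harder next time
    elif pes(x_new) < pes(x) + Ediff:
        x, T = x_new, T / 1.1     # accept the move; relax temperature
        Ediff /= 1.02             # tighten the acceptance threshold
        history.append(pes(x))
print("lowest energy found:", min(history))
```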
Abstract:
In most real-life environments, mechanical or electronic components are subjected to vibrations. Some of these components may have to pass qualification tests to verify that they can withstand the fatigue damage they will encounter during their operational life. In order to conduct a reliable test, the environmental excitations can be taken as a reference to synthesize the test profile: this procedure is referred to as "test tailoring". For cost and feasibility reasons, accelerated qualification tests are usually performed: the duration of the original excitation, which acts on the component for its entire life-cycle (typically hundreds or thousands of hours), is reduced. In particular, the "Mission Synthesis" procedure quantifies the damage induced by the environmental vibration through two functions: the Fatigue Damage Spectrum (FDS) quantifies the fatigue damage, while the Maximum Response Spectrum (MRS) quantifies the maximum stress. A new random Power Spectral Density (PSD) can then be synthesized, with the same amount of induced damage but a specified, shorter duration, in order to conduct accelerated tests. In this work, the Mission Synthesis procedure is applied to so-called Sine-on-Random vibrations, i.e., excitations composed of random vibrations superimposed on deterministic contributions in the form of sine tones, typically due to rotating parts of the system (e.g., helicopters, engine-mounted components, …). In fact, a proper test tailoring should preserve not only the accumulated fatigue damage but also the "nature" of the excitation (in this case, the sinusoidal components superimposed on the random process) in order to obtain reliable results. The classic time-domain approach is taken as a reference for the comparison of different methods for the FDS calculation in the presence of Sine-on-Random vibrations. Then, a methodology to compute a Sine-on-Random specification based on a mission FDS is presented.
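A minimal sketch of a narrow-band FDS computation for a purely random (Gaussian) PSD, using Miles' equation for the SDOF response and Rayleigh-distributed peaks; the Sine-on-Random case addressed in the work requires the dedicated formulations it compares, and the constants K and C of the damage model are set to 1, as is common when the FDS is used only to compare environments:

```python
import numpy as np
from scipy.special import gamma as Gamma

def fds(f0, psd, T, Q=10.0, b=8.0):
    """Narrow-band FDS at natural frequencies f0 [Hz] for an acceleration
    PSD [g^2/Hz], exposure duration T [s], SDOF quality factor Q, and
    Basquin exponent b."""
    a_rms = np.sqrt(0.5 * np.pi * f0 * Q * psd(f0))      # Miles, in g
    z_rms = a_rms * 9.81 / (2 * np.pi * f0) ** 2         # rel. displ., m
    # Rayleigh-distributed peaks at mean rate f0 over duration T;
    # K (stress/displacement) and C (Basquin constant) set to 1.
    return f0 * T * (np.sqrt(2.0) * z_rms) ** b * Gamma(1 + b / 2)

psd = lambda f: 0.04 * np.ones_like(np.asarray(f, dtype=float))  # flat PSD
f0 = np.array([20.0, 50.0, 100.0, 200.0])
print(fds(f0, psd, T=3600.0))   # one hour of exposure
```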
Abstract:
The present work studies a km-scale data assimilation scheme based on a LETKF developed for the COSMO model. The aim is to evaluate the impact of assimilating two different types of data: temperature, humidity, pressure, and wind data from conventional networks (SYNOP, TEMP, AIREP reports), and 3D reflectivity from radar volumes. A 3-hourly continuous assimilation cycle has been implemented over an Italian domain, based on a 20-member ensemble with boundary conditions provided by ECMWF ENS. Three different experiments have been run to evaluate the performance of the assimilation over one week in October 2014, during which the Genoa and Parma floods took place: a control run of the data assimilation cycle with assimilation of data from conventional networks only; a second run in which the SPPT scheme is activated in the COSMO model; and a third run in which reflectivity volumes from meteorological radar are also assimilated. Objective evaluation of the experiments has been carried out both on case studies and on the entire week: checks of the analysis increments; computation of the Desroziers statistics for SYNOP, TEMP, AIREP, and RADAR data over the Italian domain; verification of the analyses against data not assimilated (temperature at the lowest model level objectively verified against SYNOP data); and objective verification of the deterministic forecasts initialised with the KENDA analyses for each of the three experiments.
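For reference, a compact sketch of one ETKF/LETKF analysis step in ensemble space, following the standard Hunt et al. (2007) formulation rather than the specific KENDA implementation; dimensions and data are toy values:

```python
import numpy as np

def etkf_analysis(Xb, yo, H, R, rho=1.1):
    """One (L)ETKF analysis step in ensemble space.
    Xb: n x k background ensemble; yo: p observations; H: p x n linear
    observation operator; R: p x p obs error covariance; rho: inflation."""
    n, k = Xb.shape
    xb = Xb.mean(axis=1)
    Xp = Xb - xb[:, None]                       # background perturbations
    Yp = H @ Xp                                 # perturbations in obs space
    C = Yp.T @ np.linalg.inv(R)
    Pa = np.linalg.inv((k - 1) / rho * np.eye(k) + C @ Yp)
    wa = Pa @ C @ (yo - H @ xb)                 # mean update weights
    # Symmetric square root for the analysis perturbation weights.
    evals, evecs = np.linalg.eigh((k - 1) * Pa)
    Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return xb[:, None] + Xp @ (Wa + wa[:, None])

# Toy example: 3 state variables, 20 members, 2 observations.
rng = np.random.default_rng(2)
Xb = rng.normal(size=(3, 20))
H = np.array([[1.0, 0, 0], [0, 1.0, 0]])
Xa = etkf_analysis(Xb, yo=np.array([0.5, -0.3]), H=H, R=0.1 * np.eye(2))
print(Xa.mean(axis=1))
```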
Abstract:
Generalised epileptic seizures are frequently accompanied by sudden, reversible transitions from low-amplitude, irregular background activity to high-amplitude, regular spike-wave discharges (SWD) in the EEG. The underlying mechanisms responsible for SWD generation, and for the apparently spontaneous transitions to SWD and back again, are still not fully understood. Specifically, the role of spatial cortico-cortical interactions in ictogenesis is not well studied. We present a macroscopic neural mass model of a cortical column which includes two distinct time scales of inhibition. This model can produce both an oscillatory background and a pathological SWD rhythm. We demonstrate that coupling two of these cortical columns can lead to a bistability between out-of-phase, low-amplitude background dynamics and in-phase, high-amplitude SWD activity. Stimuli can cause state-dependent transitions from background into SWD. In an extended local area of cortex, spatial heterogeneities in a model parameter can lead to spontaneous reversible transitions from a desynchronised background to synchronous SWD due to intermittency. The deterministic model is therefore capable of producing absence seizure-like events without any time-dependent adjustment of model parameters. The emergence of such mechanisms due to spatial coupling demonstrates the importance of spatial interactions in modelling ictal dynamics and in the study of ictogenesis.
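The notion of stimulus-induced, state-dependent transitions between coexisting attractors can be illustrated with a much simpler bistable toy system (explicitly not the neural mass model of the study):

```python
import numpy as np

def simulate(pulse_time=5.0, pulse_amp=1.5, dt=1e-3, T=10.0):
    """Double-well toy model dx/dt = x - x^3 + I(t): the stable fixed
    points x = -1 and x = +1 play the roles of background and SWD-like
    states; a brief pulse I(t) can switch between them."""
    x, xs = -1.0, []
    for i in range(int(T / dt)):
        t = i * dt
        I = pulse_amp if pulse_time <= t < pulse_time + 0.5 else 0.0
        x += dt * (x - x**3 + I)         # forward Euler step
        xs.append(x)
    return np.array(xs)

traj = simulate()
print("state before pulse:", traj[int(4.0 / 1e-3)])   # ~ -1 (background)
print("state after pulse:", traj[-1])                 # ~ +1 (SWD-like)
```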
Abstract:
Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change over time due to the co-adaptation of the other agents' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for the blackjack and the inspector game. It performs optimally according to a pure (deterministic) and a mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.
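As a toy illustration of learning a mixed Nash equilibrium by following a reward gradient (a matching-pennies-style game rather than the inspector game, and plain REINFORCE rather than the spiking population model; naive simultaneous gradients cycle around the equilibrium, so the time-averaged strategy is reported):

```python
import numpy as np

rng = np.random.default_rng(3)

# Zero-sum matching-pennies payoff for the row player; the unique Nash
# equilibrium is the mixed strategy (1/2, 1/2) for both players.
payoff = np.array([[1.0, -1.0], [-1.0, 1.0]])

softmax = lambda z: np.exp(z - z.max()) / np.exp(z - z.max()).sum()
theta1, theta2 = np.zeros(2), np.zeros(2)   # policy logits
avg1, lr = np.zeros(2), 0.01

for t in range(20000):
    p1, p2 = softmax(theta1), softmax(theta2)
    a1, a2 = rng.choice(2, p=p1), rng.choice(2, p=p2)
    r = payoff[a1, a2]
    # REINFORCE: grad log pi(a) = one_hot(a) - p; player 2 receives -r.
    theta1 += lr * r * (np.eye(2)[a1] - p1)
    theta2 -= lr * r * (np.eye(2)[a2] - p2)
    avg1 += (p1 - avg1) / (t + 1)           # running-average strategy

print("averaged strategy of player 1:", avg1)   # ~ [0.5, 0.5]
```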
Abstract:
Ecomorphology and functional morphology are two distinct disciplines within biology that are often conflated and erroneously used interchangeably. By investigating the morphological distinctiveness of bottom-walking turtles relative to aquatic swimmers and terrestrial walkers, we can disentangle the effects of ecology and performance. Shell morphology, tail length, digit length, webbing length, and integumentary differences were examined using dry and wet preserved specimens. Bottom-walkers were hypothesized to be distinct in all measurements. Instead, bottom-walkers were typically distinct from terrestrial taxa but not from aquatic taxa, although they were the only group with significantly more integumentary structures than terrestrial turtles. This demonstrates that, despite sometimes highly divergent locomotor modes, ecology, defined here as habitat type, can show a stronger morphological signal than function.
Abstract:
Clinicians and researchers have characterized early life experiences as permanent and stable influences on the personality and subsequent life experiences of an individual. Recent conceptualizations have suggested that personal and environmental factors influencing development are not deterministic. Multiple pathways into adulthood are possible. Adoption is one potential early life stressor that may illustrate the usefulness of such conceptualizations for assessing long-term effects in adulthood. Previous studies of adoption have characterized the effects of adoption into adolescence and young adulthood. The purpose of this study was to provide an initial assessment of the long-term impact of adoption. The participants were taken from the Swedish Adoption/Twin Study of Aging. From the original sample, we identified a subsample of 60 pairs of twins who were separated and reared apart, with one member being raised by a biological parent or parents and the other by an adoptive parent or parents with no biological relationship. A series of univariate and multivariate analyses were undertaken to assess the elements associated with being reared in either an adoptive home or the home of biological parent(s). The results suggest few significant effects of adoption on the adult adjustment of adoptees. In particular, the results reflect the important mediating role of childhood socioeconomic status, suggesting that the stress of adoption itself is mediated by the type of rearing environment provided by the adoption process.