909 results for stochastic adding machines
Abstract:
The research concerns the development and application of a meta-design approach aimed at defining criteria of architectural and landscape quality in the design of small and medium-sized wineries that process raw material mostly of their own production. The analysis of the wine-making supply chain, of the scientific literature, of sector regulations, and of examples of "excellent wine architecture" has shown that investigations mainly concern industrial wineries and aspects connected with the technological innovation of the equipment. Construction and technological solutions aimed at architectural and environmental quality, current dynamics in food-and-wine tourism, new company functions, and issues related to the sustainability of the intervention are still little explored, especially with reference to small and medium-sized wineries. Taking as a reference the territory and the built environment of the Nuovo Circondario Imolese (an area representative, in terms of vocation and production, of the Emilia-Romagna wine-making sector), a sample of wineries with annual production not exceeding 5000 hl was identified. The analyses carried out on the sample made it possible to determine: the functional aggregation of the built spaces, the existing relationships with the landscape, distributive and material-construction aspects, and the approximate dimensions of the rooms devoted to production. The case study of the requalification of a winery representative of the sector was used to develop and test design criteria guided by assessments of energy performance, architectural quality, and environmental, economic and landscape sustainability. The cost-benefit analysis (even without considering the positive effects on the well-being of the occupants and the benefit to the community from the pollution-related damage avoided in buildings designed to guarantee indoor environmental quality and energy efficiency) showed that the proposed investment pays back within a few years, despite the still high costs of quality materials and of the components required for the proper climate control of the buildings.
Abstract:
Sea-level variability is characterized by multiple interacting factors, described in the Fourth Assessment Report (Bindoff et al., 2007) of the Intergovernmental Panel on Climate Change (IPCC), that act over wide spectra of temporal and spatial scales. In Church et al. (2010) sea-level variability and changes are defined as manifestations of climate variability and change. The European Environment Agency (EEA) defines sea level as one of the most important indicators for monitoring climate change, as it integrates the response of different components of the Earth system and is also affected by anthropogenic contributions (EEA, 2011). The balance between the different sea-level contributions represents an important source of uncertainty, involving stochastic processes that are very difficult to describe and understand in detail, to the point that they are defined as an enigma in Munk (2002). Sea-level rate estimates are affected by all these uncertainties, in particular when looking at the possible responses of the sea-level contributions to future climate. At the regional scale, lateral fluxes also contribute to sea-level variability, adding complexity to sea-level dynamics. The research strategy adopted in this work to approach such an interesting and challenging topic has been to develop an objective methodology to study sea-level variability at different temporal and spatial scales, applicable to each part of the Mediterranean basin in particular and to the global ocean in general, using all the best calibrated sources of data (for the Mediterranean): in-situ, remote-sensing and numerical model data. The overall objective of this work was to achieve a deep understanding of all the components of the sea-level signal contributing to sea-level variability, tendency and trend, and to quantify them.
Abstract:
DNA block copolymers, a new class of hybrid materials composed of a synthetic polymer and an oligodeoxynucleotide segment, possess unique properties that cannot be achieved by either of the two polymers alone. Among amphiphilic DNA block copolymers, DNA-b-polypropylene oxide (PPO) was chosen as a model system, because PPO is biocompatible and has a Tg < 0 °C. Both properties might be essential for future applications in living systems. During my PhD study, I focused on the properties and structures of DNA-b-PPO molecules. First, DNA-b-PPO micelles were studied by scanning force microscopy (SFM) and fluorescence correlation spectroscopy (FCS). In order to control the size of the micelles without re-synthesis, micelles were incubated with the template-independent DNA polymerase TdT and deoxynucleotide triphosphates in reaction buffer solution. By carrying out ex-situ experiments, the growth of the micelles was visualized by imaging in liquid with AFM. Complementary measurements with FCS and polyacrylamide gel electrophoresis (PAGE) confirmed the increase in size. Furthermore, the growth process was studied with AFM in-situ at 37 °C; hereby the growth of individual micelles could be observed. In contrast to the ex-situ reactions, the growth of micelles adsorbed on the mica surface for the in-situ experiments terminated about one hour after the reaction was initiated. Two reasons were identified for the termination: (i) blocking of catalytic sites by interaction with the substrate and (ii) reduced exchange of molecules between the micelles and the liquid environment. In addition, a geometrical model for AFM imaging was developed which allowed the average number of mononucleotides added to the DNA-b-PPO molecules to be derived as a function of the enzymatic reaction time (chapter 3). Second, a prototype of a macroscopic DNA machine made of DNA-b-PPO was investigated. As DNA-b-PPO molecules are amphiphilic, they can form a monolayer at the air-water interface. Using a Langmuir film balance, the energy released by DNA hybridization was converted into macroscopic movements of the barriers in the Langmuir trough. A specially adapted Langmuir trough was built to exchange the subphase without significantly changing the water level. Upon exchanging the subphase with a buffer solution containing complementary DNA, an increase in lateral pressure was observed, which could be attributed to the hybridization of single-stranded DNA-b-PPO. The pressure versus area-per-molecule isotherms were recorded before and after hybridization. I also carried out a series of control experiments in order to identify the best conditions for realizing a DNA machine with DNA-b-PPO. To relate the lateral pressure to molecular structures, Langmuir-Blodgett (LB) films were transferred onto highly ordered pyrolytic graphite (HOPG) and mica substrates at different pressures. These films were then investigated with AFM (chapter 4). Finally, this thesis includes studies of DNA and DNA block copolymer assemblies with AFM, which were performed in cooperation with different groups of the Sonderforschungsbereich 625 “From Single Molecules to Nanoscopically Structured Materials”. AFM proved to be an important method to confirm the formation of multiblock copolymers and DNA networks (chapter 5).
Abstract:
Network Theory is a prolific and lively field, especially where it meets Biology. New concepts from this theory find application in areas where extensive datasets are already available for analysis, without the need to invest money to collect them. The only tools necessary to carry out an analysis are easily accessible: a computing machine and a good algorithm. As these two tools progress, thanks to technological advancement and human effort, ever larger datasets can be analysed. The aim of this paper is twofold. Firstly, to provide an overview of one of these concepts, which originates at the meeting point between Network Theory and Statistical Mechanics: the entropy of a network ensemble. This quantity has been described from different angles in the literature, and our approach tries to be a synthesis of the different points of view. The second part of the work is devoted to presenting a parallel algorithm that can evaluate this quantity over an extensive dataset. Finally, the algorithm is also used to analyse high-throughput data coming from biology.
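For reference, one common way to define the quantity discussed above, the entropy of a network ensemble, is the Gibbs/Shannon form (the notation here is generic and not taken from the paper):
\[ S = -\sum_{G \in \mathcal{G}} P(G)\,\ln P(G), \]
where \(\mathcal{G}\) is the set of graphs compatible with the imposed constraints (for instance a fixed degree sequence) and \(P(G)\) is the probability the ensemble assigns to graph \(G\); for a microcanonical ensemble of \(\Omega\) equiprobable graphs this reduces to \(S = \ln \Omega\).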
Abstract:
This work presents a comprehensive methodology for the reduction of analytical or numerical stochastic models characterized by uncertain input parameters or boundary conditions. The technique, based on Polynomial Chaos Expansion (PCE) theory, represents a versatile solution for direct or inverse problems related to the propagation of uncertainty. The potential of the methodology is assessed by investigating different application contexts related to groundwater flow and transport scenarios, such as global sensitivity analysis, risk analysis and model calibration. This is achieved by implementing a numerical code, developed in the MATLAB environment, presented here in its main features and tested on literature examples. The procedure has been conceived under flexibility and efficiency criteria in order to ensure its adaptability to different fields of engineering; it has been applied to different case studies related to flow and transport in porous media. Each application is associated with innovative elements such as (i) new analytical formulations describing the motion and displacement of non-Newtonian fluids in porous media, (ii) the application of global sensitivity analysis to a high-complexity numerical model inspired by a real case of risk of radionuclide migration in the subsurface environment, and (iii) the development of a novel sensitivity-based strategy for parameter calibration and experiment design in laboratory-scale tracer transport.
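For reference, the Polynomial Chaos Expansion underlying the methodology represents a model output \(Y\) as a series in polynomials that are orthonormal with respect to the distribution of the random inputs \(\boldsymbol{\xi}\) (standard notation, not taken from the thesis code):
\[ Y(\boldsymbol{\xi}) \approx \sum_{k=0}^{P} c_k\,\Psi_k(\boldsymbol{\xi}), \qquad \mathbb{E}[\Psi_j\Psi_k] = \delta_{jk}, \]
so that, once the coefficients \(c_k\) are known, the output variance is \(\operatorname{Var}[Y] \approx \sum_{k\ge 1} c_k^2\) and global (Sobol') sensitivity indices are obtained by summing the squared coefficients of the polynomials involving each input.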
Abstract:
A two-dimensional model to analyze the distribution of magnetic fields in the airgap of PM electrical machines is studied. A numerical algorithm for non-linear magnetic analysis of multiphase surface-mounted PM machines with semi-closed slots is developed, based on the equivalent magnetic circuit method. By using a modular geometry, whose basic element can be duplicated, the algorithm allows any type of winding distribution to be designed. In comparison with a FEA, it permits a reduction in computing time and allows the parameter values to be changed directly in a user interface, without re-designing the model. The output torque and the radial forces acting on the moving part of the machine can be calculated. In addition, an analytical model for the calculation of radial forces in multiphase bearingless Surface-Mounted Permanent Magnet Synchronous Motors (SPMSM) is presented. It predicts the amplitude and direction of the force as a function of the torque current, the levitation current and the rotor position. It is based on the space-vector method, which also allows the analysis of the machine during transients. The calculations are carried out by developing the analytical functions in Fourier series, taking into account all the possible interactions between stator and rotor mmf harmonic components; since the model is parametrized, it allows the effects of the electrical and geometrical quantities of the machine to be analysed. The model is implemented in the design of a control system for bearingless machines, as an accurate electromagnetic model integrated in a three-dimensional mechanical model, where one end of the motor shaft is constrained to simulate the presence of a mechanical bearing, while the other end is free and supported only by the radial forces developed by the interaction between the magnetic fields, so as to realize a bearingless system with three degrees of freedom. The complete model represents the design of the experimental system to be realized in the laboratory.
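As background for the radial-force computation mentioned above, a standard route (not necessarily the exact formulation adopted in the thesis) is to integrate the Maxwell stress tensor over the airgap: with \(B_n\) and \(B_t\) the normal and tangential components of the airgap flux density,
\[ \sigma_r = \frac{B_n^2 - B_t^2}{2\mu_0}, \qquad F_x = \int_0^{2\pi} \sigma_r(\theta)\cos\theta\; r\,l\,d\theta, \qquad F_y = \int_0^{2\pi} \sigma_r(\theta)\sin\theta\; r\,l\,d\theta, \]
where \(r\) is the mean airgap radius, \(l\) the stack length and \(\theta\) the angular coordinate along the airgap.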
Abstract:
In the large-maturity limit, we compute explicitly the Local Volatility surface for the Heston model through Dupire’s formula, using Fourier pricing for the required derivatives of the call price. Then we verify that the prices of European call options produced by the Heston model coincide with those given by the local volatility model in which the Local Volatility is computed as described above.
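For reference, Dupire's formula mentioned above expresses the local volatility directly in terms of call prices; in the simplest case of zero interest rate and dividends (the thesis may use the more general form) it reads
\[ \sigma_{\mathrm{loc}}^2(T,K) \;=\; \frac{\partial_T C(T,K)}{\tfrac{1}{2}\,K^2\,\partial^2_{KK} C(T,K)}, \]
where \(C(T,K)\) is the call price as a function of maturity \(T\) and strike \(K\); the derivatives \(\partial_T C\) and \(\partial^2_{KK} C\) are exactly the quantities obtained by Fourier pricing under the Heston model.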
Abstract:
In the present thesis, a new diagnosis methodology based on an advanced use of time-frequency analysis techniques is presented. More precisely, a new fault index is defined that allows individual fault components to be tracked in a single frequency band. In more detail, a frequency sliding is applied to the signals being analyzed (currents, voltages, vibration signals), so that each fault frequency component is shifted into a prefixed frequency band. Then the discrete Wavelet Transform is applied to the resulting signal to extract the fault signature in the chosen frequency band. Once the state of the machine has been qualitatively diagnosed, a quantitative evaluation of the fault degree is necessary. For this purpose, a fault index based on the energy of the approximation and/or detail signals resulting from the wavelet decomposition has been introduced to quantify the fault extent. The main advantages of the new method over existing diagnosis techniques are the following:
- capability of monitoring the fault evolution continuously over time under any transient operating condition;
- no requirement for speed/slip measurement or estimation;
- higher accuracy in filtering frequency components around the fundamental in case of rotor faults;
- reduced likelihood of false indications, since confusion with other fault harmonics is avoided (the contributions of the most relevant fault frequency components under speed-varying conditions are confined to a single frequency band);
- low memory requirement thanks to the low sampling frequency;
- reduced processing latency (no repeated sampling operations are required).
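A minimal sketch of the two ingredients described above, frequency sliding followed by a wavelet-energy fault index; the parameter choices (wavelet family, decomposition level, use of the approximation band) are illustrative assumptions and not taken from the thesis:

import numpy as np
import pywt

def band_energy_index(signal, fs, f_fault, wavelet="db8", level=5):
    # 1) "Frequency sliding": shift the chosen fault frequency towards 0 Hz
    t = np.arange(len(signal)) / fs
    shifted = np.real(signal * np.exp(-2j * np.pi * f_fault * t))
    # 2) Discrete Wavelet Transform of the shifted signal
    coeffs = pywt.wavedec(shifted, wavelet, level=level)
    # 3) Energy-based fault index on the approximation band,
    #    which now contains the slid fault component
    approx = coeffs[0]
    return float(np.sum(approx ** 2) / len(approx))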
Abstract:
The Curry-Howard isomorphism is the idea that proofs in natural deduction can be put in correspondence with lambda terms in such a way that this correspondence is preserved by normalization. The concept can be extended from Intuitionistic Logic to other systems, such as Linear Logic. One of the nice consequences of this isomorphism is that we can reason about functional programs with formal tools that are typical of proof systems; such analyses can also cover quantitative properties of programs, such as the number of steps a program takes to terminate. Another is the possibility of describing the execution of these programs in terms of abstract machines. In 1990 Griffin proved that the correspondence can be extended to Classical Logic and control operators; that is, Classical Logic adds the possibility of manipulating continuations. In this thesis we examine how the ideas described above work in this larger context.
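A standard textbook instance of the correspondence, added here purely as an illustration: the intuitionistic proof of \(A \to A\) (assume \(A\), conclude \(A\)) corresponds to the identity term \(\lambda x{:}A.\,x\), the proof of \(A \to (B \to A)\) corresponds to \(\lambda x{:}A.\,\lambda y{:}B.\,x\), and normalizing a proof (removing detours) corresponds to \(\beta\)-reducing the associated term, e.g. \((\lambda x{:}A.\,x)\,t \to_\beta t\).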
Abstract:
The topic of this work concerns nonparametric permutation-based methods aimed at finding a ranking (stochastic ordering) of a given set of groups (populations), gathering together information from multiple variables under more than one experimental design. The problem of ranking populations arises in several fields of science from the need to compare G > 2 given groups or treatments when the main goal is to find an order while taking several aspects into account. As can be imagined, this problem is not only of theoretical interest but also has a recognised relevance in several fields, such as industrial experiments or behavioural sciences, and this is reflected by the vast literature on the topic, although the problem is sometimes associated with different keywords such as "stochastic ordering", "ranking", "construction of composite indices", or even "ranking probabilities" outside of the strictly statistical literature. The properties of the proposed method are empirically evaluated by means of an extensive simulation study, in which several aspects of interest are allowed to vary within a reasonable practical range: sample size, number of variables, number of groups, and distribution of the noise/error. The flexibility of the approach lies mainly in the several available choices for the test statistic and in the different types of experimental design that can be analysed. This renders the method able to be tailored to the specific problem and to the nature of the data at hand. To perform the analyses, an R package called SOUP (Stochastic Ordering Using Permutations) has been written and is available on CRAN.
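As a generic illustration of the permutation machinery involved (this is not the API of the SOUP package, and the difference of means is only one of the possible test statistics mentioned above):

import numpy as np

def perm_pvalue_greater(x, y, n_perm=10000, seed=0):
    # Two-sample permutation test of H1: "x tends to be larger than y",
    # using the difference of means as the test statistic.
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    obs = x.mean() - y.mean()
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        exceed += (perm[:len(x)].mean() - perm[len(x):].mean()) >= obs
    return (exceed + 1) / (n_perm + 1)

Pairwise p-values of this kind can then be combined across variables and pairs of groups to build a ranking.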
Abstract:
The field of computational neuroscience develops mathematical models to describe neuronal systems, with the aim of better understanding the nervous system. Historically, the integrate-and-fire model, developed by Lapicque in 1907, was the first model describing a neuron. In 1952 Hodgkin and Huxley [8] described the so-called Hodgkin-Huxley model in the article “A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve”. The Hodgkin-Huxley model is one of the most successful and widely used biological neuron models. Based on experimental data from the squid giant axon, Hodgkin and Huxley developed their mathematical model as a four-dimensional system of first-order ordinary differential equations. One of these equations characterizes the membrane potential as a process in time, whereas the other three equations describe the opening and closing states of the sodium and potassium ion channels. The rate of change of the membrane potential is proportional to the sum of the ionic currents flowing across the membrane and an externally applied current. For various types of external input the membrane potential behaves differently. This thesis considers the following three types of input: (i) Rinzel and Miller [15] calculated an interval of amplitudes of a constant applied current within which the membrane potential spikes repetitively; (ii) Aihara, Matsumoto and Ikegaya [1] showed that, depending on the amplitude and frequency of a periodic applied current, the membrane potential responds periodically; (iii) Izhikevich [12] stated that brief pulses of positive and negative current with different amplitudes and frequencies can lead to a periodic response of the membrane potential. In chapter 1 the Hodgkin-Huxley model is introduced following Izhikevich [12]. Besides the definition of the model, several biological and physiological notes are made, and further concepts are described through examples. Moreover, the numerical methods used to solve the equations of the Hodgkin-Huxley model for the computer simulations in chapters 2 and 3 are presented. In chapter 2 the statements for the three different inputs (i), (ii) and (iii) are verified, and the periodic behavior for inputs (ii) and (iii) is investigated. In chapter 3 the inputs are embedded in an Ornstein-Uhlenbeck process to assess the influence of noise on the results of chapter 2.
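For reference, the four-dimensional system mentioned above has the standard form (standard notation; the original squid-axon parameter values are not repeated here):
\[ C_m\,\frac{dV}{dt} = I_{\mathrm{ext}}(t) - \bar g_{\mathrm{Na}}\,m^3 h\,(V - E_{\mathrm{Na}}) - \bar g_{\mathrm{K}}\,n^4\,(V - E_{\mathrm{K}}) - g_L\,(V - E_L), \]
\[ \frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\}, \]
where \(V\) is the membrane potential, \(m\) and \(h\) gate the sodium channels, \(n\) gates the potassium channels, and \(\alpha_x\), \(\beta_x\) are voltage-dependent rate functions.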
Abstract:
In recent years it has become increasingly important to handle credit risk. Credit risk is the risk associated with the possibility of bankruptcy. More precisely, if a derivative provides for a payment at a certain time T but the counterparty defaults before that time, the payment cannot actually be made at maturity, so the owner of the contract loses it entirely or in part. This means that the payoff of the derivative, and consequently its price, depends on the underlying of the basic derivative and on the risk of bankruptcy of the counterparty. To value and to hedge credit risk in a consistent way, one needs to develop a quantitative model. We have studied analytical approximation formulas and numerical methods, such as the Monte Carlo method, in order to calculate the price of a bond. We have illustrated how to obtain fast and accurate pricing approximations by expanding the drift and diffusion as a Taylor series, and we have compared the second- and third-order approximations of the bond and call prices with an accurate Monte Carlo simulation. We have analysed the JDCEV model with constant or stochastic interest rate. We have provided numerical examples that illustrate the effectiveness and versatility of our methods. We have used Wolfram Mathematica and Matlab.
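A minimal Monte Carlo sketch of the bond-pricing step, using a Vasicek short rate purely as an illustrative stand-in (the thesis works with the JDCEV model, whose dynamics are not reproduced here):

import numpy as np

def mc_zero_coupon_bond(r0, kappa, theta, sigma, T, n_steps=250, n_paths=50000, seed=0):
    # Price P(0, T) = E[exp(-integral_0^T r_t dt)] by Euler simulation of
    # dr_t = kappa * (theta - r_t) dt + sigma dW_t (Vasicek, illustrative only).
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0, dtype=float)
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += r * dt
        r = r + kappa * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return float(np.exp(-integral).mean())

# Example with illustrative parameters: a one-year bond with r0 = 2%
print(mc_zero_coupon_bond(r0=0.02, kappa=0.5, theta=0.03, sigma=0.01, T=1.0))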
Abstract:
We consider stochastic individual-based models for the social behaviour of groups of animals. In these models the trajectory of each animal is given by a stochastic differential equation with interaction; the social interaction is contained in the drift term of the SDE. We consider a global aggregation force and a short-range repulsion force, where the repulsion range and strength are rescaled with the number of animals N. We show that, as N tends to infinity, the stochastic fluctuations disappear and a smoothed version of the empirical process converges uniformly to the solution of a nonlinear, nonlocal partial differential equation of advection-reaction-diffusion type. The rescaling of the repulsion in the individual-based model implies that the corresponding term in the limit equation is local, while the aggregation term is non-local. Moreover, we discuss the effect of a predator on the system and derive an analogous convergence result. The predator acts as a repulsive force, and different laws of motion for the predator are considered.
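A generic form of the interacting particle system described above (the precise aggregation and repulsion kernels are those specified in the thesis and are not reproduced here) is
\[ dX^i_t = \frac{1}{N}\sum_{j=1}^{N}\Big( F_a\big(X^j_t - X^i_t\big) + F^N_r\big(X^j_t - X^i_t\big) \Big)\,dt + \sigma\,dW^i_t, \qquad i = 1,\dots,N, \]
where \(X^i_t\) is the position of animal \(i\), \(F_a\) is the global aggregation force, \(F^N_r\) is the short-range repulsion force whose range and strength are rescaled with \(N\), and \(W^1,\dots,W^N\) are independent Brownian motions.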
Abstract:
This thesis deals with the study of a stochastic-local volatility model used to price exotic options in foreign-exchange markets. The difficulty in implementing a model of this kind lies in the calibration of the leverage surface, and one of the main goals of this work is to illustrate the corresponding procedure.
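For context, a common formulation of a stochastic-local volatility model for an FX rate \(S_t\) (written here in a generic form that may differ in its details from the one adopted in the thesis) is
\[ dS_t = (r_d - r_f)\,S_t\,dt + L(S_t, t)\,\sqrt{v_t}\,S_t\,dW^S_t, \]
with \(v_t\) a stochastic variance process (e.g. of Heston type) and \(r_d\), \(r_f\) the domestic and foreign rates. The leverage surface \(L\) is calibrated so that the model reproduces vanilla prices, via the condition
\[ L^2(K, T) = \frac{\sigma^2_{\mathrm{Dup}}(K, T)}{\mathbb{E}\big[\,v_T \mid S_T = K\,\big]}, \]
where \(\sigma_{\mathrm{Dup}}\) is the Dupire local volatility implied by the market quotes.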