969 results for Stochastic particle dynamics (theory)
Abstract:
For decades researchers have been trying to build models that would help understand price performance in financial markets and, therefore, to forecast future prices. However, many econometric approaches have notoriously failed to predict extreme events in markets. At the end of the 20th century, market specialists started to admit that the reasons for economic meltdowns may originate as much in the rational actions of traders as in human psychology. The latter forces have been described as trading biases, also known as animal spirits. This study aims at expressing some of the basic trading biases, as well as the idea of market momentum, in mathematical form and thereby reconstructing the dynamics of prices in financial markets. This is done through a novel family of models originating in population and fluid dynamics, applied to an electricity spot price time series. The main goal of this work is to investigate via numerical solutions how well the equations succeed in reproducing the properties of the real market time series, especially those that seemingly contradict standard assumptions of neoclassical economic theory, in particular the Efficient Market Hypothesis. The results show that the proposed model is able to generate price realizations that closely reproduce the behaviour and statistics of the original electricity spot price. This is achieved at all price levels, from small and medium-range variations to price spikes. The latter were generated by the price dynamics and market momentum themselves, without superimposing jump processes on the model. In the light of the presented results, it seems that the latest assumptions about human psychology and market momentum ruling market dynamics may be true. Therefore, other commodity markets should be analyzed with this model as well.
Abstract:
The atomic structures of ZrO2 and B2O3 were investigated in this work. New data under extreme conditions (T = 3100 K) were obtained for the liquid ZrO2 structure. The fractional coordination number of boron was investigated for the glassy structure of B2O3. It was shown that agreement on this fraction between NMR and DFT techniques can be obtained by using a suitable initial configuration.
Abstract:
Particle Image Velocimetry (PIV) is an optical measuring technique for obtaining velocity information about a flow of interest. With PIV it is possible to obtain two- or three-dimensional velocity vector fields from a measurement area instead of a single point in the flow. The measured flow can be either liquid or gas. PIV is nowadays widely applied to flow field studies. Here, PIV is needed to obtain validation data for the Computational Fluid Dynamics programs that have been used to model blowdown experiments in the PPOOLEX test facility at Lappeenranta University of Technology. In this thesis PIV and its theoretical background are presented. All the subsystems that can be considered part of a PIV system are also presented in detail. Emphasis is also put on the mathematics behind the image evaluation. The work included the selection and successful testing of a PIV system, as well as the planning of its installation in the PPOOLEX facility. Already in the preliminary testing, PIV was found to be a good addition to the measuring equipment of the Nuclear Safety Research Unit of LUT. The installation in the PPOOLEX facility was successful even though it was subject to many restrictions. All parts of the PIV system worked and were found to be appropriate for the planned use. The results and observations presented in this thesis provide a good background for further PIV use.
Abstract:
The main objective of this work is to analyze the importance of the gas-solid interface transfer of the kinetic energy of the turbulent motion for the accuracy of prediction of the fluid dynamics of Circulating Fluidized Bed (CFB) reactors. CFB reactors are used in a variety of industrial applications related to combustion, incineration and catalytic cracking. In this work a two-dimensional fluid dynamic model for gas-particle flow has been used to compute the porosity, pressure, and velocity fields of both phases in 2-D axisymmetric cylindrical coordinates. The fluid dynamic model is based on the two-fluid model approach, in which both phases are considered continuous and fully interpenetrating. CFB processes are essentially turbulent. The effective stress on each phase is modelled as that of a Newtonian fluid, where the effective gas viscosity was calculated from the standard k-epsilon turbulence model and the transport coefficients of the particulate phase were calculated from the kinetic theory of granular flow (KTGF). This work shows that the turbulence transfer between the phases is very important for a better representation of the fluid dynamics of CFB reactors, especially for systems with internal recirculation and high gradients of particle concentration. Two systems with different characteristics were analyzed. The results were compared with experimental data available in the literature. The results were obtained using a computer code developed by the authors. The finite volume method with a collocated grid, the hybrid interpolation scheme, the false time step strategy and the SIMPLEC (Semi-Implicit Method for Pressure Linked Equations - Consistent) algorithm were used to obtain the numerical solution.
Abstract:
This paper analyzes the local dynamical behavior of a slewing flexible structure, considering nonlinear curvature. The dynamics of the original (nonlinear) governing equations of motion are reduced to the center manifold in the neighborhood of an equilibrium solution, with the purpose of locally studying the stability of the system. At this critical point, a Hopf bifurcation occurs. In this region, one can find values of the control parameter (the structural damping coefficient) for which the system is unstable and values for which the stability of the system is assured (periodic motion). This local analysis of the system reduced to the center manifold establishes the stable/unstable behavior of the original system around a known solution.
Abstract:
A stochastic differential equation (SDE) is a differential equation in which some of the terms, and hence its solution, are stochastic processes. SDEs play a central role in modeling systems in fields such as finance, biology and engineering. In the modeling process, the computation of the trajectories (sample paths) of solutions to SDEs is very important. However, the exact solution to an SDE is generally difficult to obtain due to the non-differentiability of realizations of the Brownian motion. There exist approximation methods for solutions of SDEs. The solutions are continuous stochastic processes that represent diffusive dynamics, a common modeling assumption for financial, biological, physical and environmental systems. This Master's thesis is an introduction to and survey of numerical solution methods for stochastic differential equations. Standard numerical methods, local linearization methods and filtering methods are described in detail. We compute the root mean square error for each method, and from this comparison we propose a better numerical scheme. Stochastic differential equations can also be formulated from given ordinary differential equations. In this thesis, we describe two kinds of formulation: parametric and non-parametric techniques. The formulation is based on the epidemiological SEIR model. These methods tend to increase the number of parameters in the constructed SDEs and hence require more data. We compare the two techniques numerically.
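The standard numerical methods surveyed above can be illustrated by the simplest of them, the Euler-Maruyama scheme, together with the root-mean-square-error comparison the thesis mentions. The sketch below is illustrative, not the thesis code: it applies the scheme to geometric Brownian motion, one of the few SDEs whose exact solution is known, so the RMSE at the final time can be computed directly.

```python
import numpy as np

# Illustrative sketch (assumed test problem, not from the thesis):
# Euler-Maruyama on geometric Brownian motion dX = mu*X dt + sigma*X dW,
# whose exact solution X_T = X0*exp((mu - sigma^2/2)T + sigma*W_T) is known.
rng = np.random.default_rng(0)
mu, sigma, X0, T = 0.5, 0.3, 1.0, 1.0

def euler_maruyama_rmse(n_steps, n_paths=2000):
    """Root mean square error at time T between the Euler-Maruyama
    approximation and the exact solution driven by the same Brownian path."""
    dt = T / n_steps
    X = np.full(n_paths, X0)
    W = np.zeros(n_paths)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + mu * X * dt + sigma * X * dW   # Euler-Maruyama update
        W += dW                                # accumulate the Brownian path
    exact = X0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W)
    return np.sqrt(np.mean((X - exact) ** 2))

# Refining the time step should shrink the RMSE (strong order 1/2).
coarse, fine = euler_maruyama_rmse(50), euler_maruyama_rmse(400)
print(coarse > fine)
```

Repeating this for several step sizes gives the RMSE-versus-cost comparison on which a preferred scheme can be based.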
Abstract:
The objective of the thesis was to develop methods to manufacture calcium carbonate and to control crystal nucleation and growth in the precipitation process. The work consists of an experimental part and a literature part that addresses the theory of nucleation, crystallization and precipitation. In the experimental part, calcium carbonate was precipitated using the carbonization reaction. Precipitation was carried out in the presence of known morphology-controlling agents (anionic polymers and sodium silicate) and under different operating conditions. The formed material was characterized using SEM images, and its thermal stability was assessed. This work demonstrates that the carbon dioxide feeding rate and the concentrations of calcium hydroxide and additives can be used to control the size, shape and amount of the precipitating calcium carbonate.
Abstract:
In 1859, Charles Darwin published his theory of evolution by natural selection, a process driven by fitness benefits and fitness costs at the individual level. Traditionally, evolution has been investigated by biologists, but it has inspired mathematical approaches, too. For example, adaptive dynamics has proven to be a very applicable framework for this purpose. Its core concept is the invasion fitness, the sign of which tells whether a mutant phenotype can invade the prevalent phenotype. In this thesis, four real-world applications to evolutionary questions are provided. Inspiration for the first two studies arose from a cold-adapted species, the American pika. First, it is studied how global climate change may affect the evolution of dispersal and the viability of pika metapopulations. Based on the results gained here, it is shown that the evolution of dispersal can result in extinction, and indeed, the evolution of dispersal should be incorporated into the viability analysis of species living in fragmented habitats. The second study focuses on the evolution of density-dependent dispersal in metapopulations with small habitat patches. It revealed a very surprising, counterintuitive evolutionary phenomenon: a non-monotone density-dependent dispersal strategy may evolve. Cooperation is surprisingly common at many levels of life, despite its obvious vulnerability to selfish cheating. This motivated two applications. First, it is shown that density-dependent cooperative investment can evolve to have a qualitatively different, monotone or non-monotone, form depending on modelling details. The last study investigates the evolution of investment into two public-goods resources. The results suggest one general path by which labour division can arise via evolutionary branching. In addition to the applications, two novel methodological derivations of fitness measures in structured metapopulations are given.
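The sign criterion behind invasion fitness can be made concrete with a textbook example from adaptive dynamics (not one of the thesis models): Lotka-Volterra competition with a Gaussian competition kernel and Gaussian carrying capacity. The resident with trait x sits at its ecological equilibrium N* = K(x); a rare mutant with trait y grows at per-capita rate s(y, x) = r(1 - alpha(y - x) K(x) / K(y)) and can invade exactly when s(y, x) > 0. All parameter values below are assumptions for illustration.

```python
import numpy as np

r = 1.0
sigma_a, sigma_k = 0.5, 1.0   # widths of competition kernel and resource peak

def alpha(d):
    """Gaussian competition kernel: similar traits compete most strongly."""
    return np.exp(-d**2 / (2 * sigma_a**2))

def K(x):
    """Gaussian carrying capacity, peaked at the trait value x = 0."""
    return np.exp(-x**2 / (2 * sigma_k**2))

def invasion_fitness(y, x):
    """Per-capita growth rate of a rare mutant y against resident x at
    its equilibrium density N* = K(x); invasion is possible iff > 0."""
    return r * (1.0 - alpha(y - x) * K(x) / K(y))

# A mutant closer to the resource peak invades (positive fitness),
# while the resident is always neutral against itself: s(x, x) = 0.
print(invasion_fitness(0.4, 0.5) > 0)
print(abs(invasion_fitness(0.5, 0.5)) < 1e-12)
```

In this model the choice sigma_a < sigma_k is the classic condition under which the singular strategy at the resource peak becomes an evolutionary branching point, the same mechanism the thesis invokes for the origin of labour division.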
Abstract:
The succession dynamics of a macroalgal community in a tropical stream (20º58' S, 49º25' W) were investigated after disturbance by a sequence of intensive rains. High precipitation levels caused almost complete loss of the macroalgal community attached to the substratum and exerted strong pressure against its immediate re-establishment. After this disturbance, a weekly sampling program from May 1999 to January 2000 was established to investigate macroalgal recolonization. The community changed greatly throughout the succession process. The number of species varied from one to seven per sampling. The global abundance of the macroalgal community did not reveal a consistent temporal pattern of variation. In the early succession stages, the morphological form of tufts dominated, followed by unbranched filaments. Later succession stages showed the almost exclusive occurrence of gelatinous forms, including filaments and colonies. The succession trajectory was mediated by phosphorus availability, with community composition following a scheme of changes in growth forms. We believe, however, that both deterministic and stochastic processes occur in lotic ecosystems, depending on the length of time considered in the succession analyses.
Abstract:
The main objective of this research is to estimate and characterize heterogeneous mass transfer coefficients in bench- and pilot-scale fluidized bed processes by means of computational fluid dynamics (CFD). A further objective is to benchmark the heterogeneous mass transfer coefficients predicted by fine-grid Eulerian CFD simulations against empirical data presented in the scientific literature. First, a fine-grid two-dimensional Eulerian CFD model with a solid and a gas phase was designed. The model is applied in transient two-dimensional simulations of char combustion in small-scale bubbling and turbulent fluidized beds. The same approach is used to simulate a novel fluidized bed energy conversion process developed for carbon capture, namely chemical looping combustion operated with a gaseous fuel. In order to analyze the results of the CFD simulations, two one-dimensional fluidized bed models were formulated. The single-phase and bubble-emulsion models were applied to derive the average gas-bed and interphase mass transfer coefficients, respectively. In the analysis, the effects of various fluidized bed operation parameters, such as fluidization velocity, particle and bubble diameter, reactor size, and chemical kinetics, on the heterogeneous mass transfer coefficients in the lower fluidized bed are evaluated extensively. The analysis shows that the fine-grid Eulerian CFD model can predict the heterogeneous mass transfer coefficients quantitatively with acceptable accuracy. Qualitatively, the CFD-based research of fluidized bed processes revealed several new scientific results, such as parametric relationships. The huge variance of seven orders of magnitude within the bed Sherwood numbers presented in the literature could be explained by the change of controlling mechanisms in the overall heterogeneous mass transfer process under varied process conditions.
The research opens new process-specific insights into the reactive fluidized bed processes, such as a strong mass transfer control over heterogeneous reaction rate, a dominance of interphase mass transfer in the fine-particle fluidized beds and a strong chemical kinetic dependence of the average gas-bed mass transfer. The obtained mass transfer coefficients can be applied in fluidized bed models used for various engineering design, reactor scale-up and process research tasks, and they consequently provide an enhanced prediction accuracy of the performance of fluidized bed processes.
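The bed Sherwood number discussed above is the dimensionless form of the gas-solid mass transfer coefficient, Sh = k_m d_p / D_g. A minimal sketch of the conversion (the numbers are assumed, physically plausible values, not results from this research):

```python
# Hedged illustration: converting a mass transfer coefficient into a
# bed Sherwood number. The seven-orders-of-magnitude spread reported in
# the literature corresponds directly to a spread in k_m at comparable
# particle size and gas diffusivity.
def sherwood(k_m, d_p, D_g):
    """k_m: mass transfer coefficient [m/s], d_p: particle diameter [m],
    D_g: gas-phase diffusivity [m^2/s]; returns Sh = k_m * d_p / D_g."""
    return k_m * d_p / D_g

# Example with assumed values: a 300-micron particle and a typical
# gas diffusivity at elevated temperature.
Sh = sherwood(k_m=0.2, d_p=300e-6, D_g=2e-5)
print(round(Sh, 2))
```

Evaluating Sh under different controlling mechanisms (interphase versus gas-bed transfer) is one way to see how the varied process conditions shift the number by orders of magnitude.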
Abstract:
Ecological specialization in resource utilization has various facets, ranging from nutritional resources via host use by parasites or phytophagous insects to local adaptation in different habitats. Therefore, the evolution of specialization affects the evolution of most other traits, which makes it one of the core issues in the theory of evolution. Hence, the evolution of specialization has gained enormous amounts of research interest, starting already from Darwin's Origin of Species in 1859. The vast majority of theoretical studies have, however, focused on the mathematically simplest case with well-mixed populations and equilibrium dynamics. This thesis explores the possibilities of extending the evolutionary analysis of resource usage to spatially heterogeneous metapopulation models and to models with non-equilibrium dynamics. These extensions are enabled by recent advances in the field of adaptive dynamics, which allow for a mechanistic derivation of the invasion-fitness function based on the ecological dynamics. In the evolutionary analyses, special focus is set on the case with two substitutable renewable resources. In this case, the most striking questions are whether a generalist species is able to coexist with the two specialist species, and whether such trimorphic coexistence can be attained through natural selection starting from a monomorphic population. This is shown to be possible both due to spatial heterogeneity and due to non-equilibrium dynamics. In addition, it is shown that chaotic dynamics may sometimes lead to evolutionary suicide or cyclic evolutionary dynamics. Moreover, the relations between various ecological parameters and the evolutionary dynamics are investigated. In particular, the relation between specialization and dispersal propensity turns out to be counter-intuitively non-monotonic.
This observation served as inspiration for the analysis of the joint evolution of dispersal and specialization, which may provide the most natural explanation for the observed coexistence of specialist and generalist species.
Abstract:
The present study compares the performance of stochastic and fuzzy models for the analysis of the relationship between clinical signs and diagnosis. Data obtained for 153 children concerning diagnosis (pneumonia, other non-pneumonia diseases, absence of disease) and seven clinical signs were divided into two samples, one for analysis and the other for validation. The former was used to derive relations by multi-discriminant analysis (MDA) and by fuzzy max-min compositions (fuzzy), and the latter was used to assess the predictions drawn from each type of relation. MDA and fuzzy were closely similar in terms of prediction, correctly allocating 75.7 to 78.3% of patients in the validation sample and displaying only a single instance of disagreement: a patient with a low level of toxemia was misclassified as not diseased by MDA and correctly classified as somehow ill by fuzzy. Concerning the relations, each method provided different information, revealing different aspects of the relations between clinical signs and diagnoses. Both methods agreed in pointing to X-ray, dyspnea, and auscultation as the signs best related to pneumonia, but only fuzzy was able to detect relations of heart rate, body temperature, toxemia and respiratory rate with pneumonia. Moreover, only fuzzy was able to detect a relationship between heart rate and absence of disease, which allowed the detection of six malnourished children whose diagnoses as healthy are, indeed, disputable. The conclusion is that even though fuzzy set theory might not improve prediction, it certainly does enhance clinical knowledge, since it detects relationships not visible to stochastic models.
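The fuzzy max-min composition used above has a simple mechanical form: given a patient's membership vector A over the clinical signs and a fuzzy relation R between signs and diagnoses, the composite diagnosis membership is B_j = max_i min(A_i, R_ij). The sketch below illustrates this with invented membership values (three signs, three diagnoses), not the relation fitted in the study.

```python
import numpy as np

# Hypothetical fuzzy relation R: rows are clinical signs, columns are
# diagnoses (pneumonia, other disease, no disease). Values are invented.
R = np.array([
    [0.9, 0.3, 0.0],   # abnormal X-ray
    [0.8, 0.4, 0.1],   # dyspnea
    [0.7, 0.5, 0.1],   # abnormal auscultation
])

def max_min_compose(A, R):
    """Fuzzy max-min composition: B_j = max_i min(A_i, R_ij),
    the composite membership of the patient in each diagnosis."""
    return np.max(np.minimum(A[:, None], R), axis=0)

# A patient with a strongly abnormal X-ray and mild dyspnea:
A = np.array([0.9, 0.4, 0.2])
B = max_min_compose(A, R)
print(B)                  # membership in each diagnosis: [0.9 0.4 0.1]
print(int(np.argmax(B)))  # 0 -> pneumonia is the best-supported diagnosis
```

Unlike a discriminant score, the intermediate min(A_i, R_ij) terms show exactly which sign supports which diagnosis, which is the kind of clinical insight the study attributes to the fuzzy approach.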
Abstract:
The Kalman filter is a powerful recursive mathematical tool that plays an increasingly vital role in innumerable fields of study. The filter has been put to service in a multitude of studies involving both general and financial time series modelling. Modelling time series data in Computational Market Dynamics (CMD) can be accomplished using the Jablonska-Capasso-Morale (JCM) model. The maximum likelihood approach has traditionally been utilised to estimate the parameters of the JCM model. The purpose of this study is to discover whether the Kalman filter can be effectively utilized in CMD. An ensemble Kalman filter (EnKF) with 50 ensemble members, applied to US sugar prices spanning the period from January 1960 to February 2012, was employed for this work. The real data and the Kalman filter trajectories showed no significant discrepancies, indicating satisfactory performance of the technique. Since only US sugar prices were utilized, it would be interesting to discover the nature of the results if other data sets were employed.
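The EnKF mechanics can be sketched in a few lines: each of the 50 ensemble members is propagated through the state model (forecast), and then nudged toward a perturbed observation with a gain computed from the ensemble variance (analysis). The example below is a generic illustration on synthetic data with a random-walk state model, not the JCM model or the sugar price series; all noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_steps = 50, 200   # 50 ensemble members, as in the study
q, r = 0.05, 0.5           # assumed process and observation noise std devs

# Synthetic "truth" (a random walk) and noisy observations of it.
truth = np.cumsum(rng.normal(0, q, n_steps))
obs = truth + rng.normal(0, r, n_steps)

ens = rng.normal(0, 1, n_ens)   # initial ensemble
estimates = []
for y in obs:
    # Forecast step: propagate each member through the random-walk model.
    ens = ens + rng.normal(0, q, n_ens)
    # Analysis step: gain from the ensemble variance, perturbed observations.
    P = np.var(ens, ddof=1)
    gain = P / (P + r**2)
    y_pert = y + rng.normal(0, r, n_ens)
    ens = ens + gain * (y_pert - ens)
    estimates.append(ens.mean())

estimates = np.array(estimates)
# The filtered trajectory should track the truth better than the raw data.
print(np.mean((estimates - truth)**2) < np.mean((obs - truth)**2))
```

Overlaying `estimates` on the data is the same kind of trajectory comparison by which the study judged the discrepancy between the real prices and the filter output.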
Abstract:
Quantum computation and quantum communication are two of the most promising future applications of quantum mechanics. Since the information carriers used in both are essentially open quantum systems, it is necessary to understand both quantum information theory and the theory of open quantum systems in order to investigate realistic implementations of such quantum technologies. In this thesis we consider the theory of open quantum systems from a quantum information theory perspective. The thesis is divided into two parts: a review of the literature and original research. In the review of the literature we present some important definitions and known results from open quantum systems and quantum information theory. We present the definitions of the trace distance, two channel capacities and the superdense coding capacity, and give reasoning for why they can be used to represent the transmission efficiency of a communication channel. We also show derivations of some properties that are useful for linking completely positive and trace-preserving maps to the trace distance and the channel capacities. With the help of these properties we construct three measures of non-Markovianity and explain why they detect non-Markovianity. In the original research part of the thesis we study the non-Markovian dynamics in an experimentally realized quantum optical set-up. For general one-qubit dephasing channels we calculate the explicit forms of the two channel capacities and the superdense coding capacity. For the general two-qubit dephasing channel with uncorrelated local noises we calculate the explicit forms of the quantum capacity and the mutual information of a four-letter encoding.
By using the dynamics in the experimental implementation as a set of specific dephasing channels, we also calculate and compare the measures in one- and two-qubit dephasing channels, and study the options of manipulating the environment to achieve revivals and higher transmission rates in the superdense coding protocol with dephasing noise.
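The trace distance mentioned above, D(rho1, rho2) = (1/2)||rho1 - rho2||_1, underlies one standard class of non-Markovianity measures: under a completely positive trace-preserving map the distance between any two states is non-increasing, so a revival of D over time signals information flowing back from the environment. The sketch below illustrates this for a one-qubit pure dephasing channel; the decoherence-factor values are invented, not taken from the experimental set-up.

```python
import numpy as np

def trace_distance(rho1, rho2):
    """D = 0.5 * sum of |eigenvalues| of the (Hermitian) difference."""
    eigvals = np.linalg.eigvalsh(rho1 - rho2)
    return 0.5 * np.sum(np.abs(eigvals))

def dephase(rho, gamma):
    """Pure dephasing: off-diagonal elements shrink by the factor gamma."""
    out = rho.copy()
    out[0, 1] *= gamma
    out[1, 0] *= gamma
    return out

plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)     # |+><+|
minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)  # |-><-|

# For this pair of states, D(t) equals |gamma(t)|; an assumed non-monotone
# gamma sequence (decay followed by a partial revival) is then detected
# as non-Markovian by the growth of the trace distance.
gammas = [1.0, 0.4, 0.7]
D = [trace_distance(dephase(plus, g), dephase(minus, g)) for g in gammas]
print(np.allclose(D, gammas))   # D(t) = |gamma(t)| for this state pair
print(D[2] > D[1])              # revival -> information backflow
```

The same kind of revival in the decoherence factor is what environment manipulation exploits to recover transmission efficiency in the superdense coding protocol.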
Abstract:
Different axioms underlie efficient market theory and Keynes's liquidity preference theory. Efficient market theory assumes the ergodic axiom. Consequently, today's decision makers can calculate with actuarial precision the future value of all possible outcomes resulting from today's decisions. Since in an efficient-market world decision makers "know" their intertemporal budget constraints, they never default on a loan; i.e., systemic defaults, insolvencies, and bankruptcies are impossible. Keynes's liquidity preference theory rejects the ergodic axiom: the future is ontologically uncertain. Accordingly, systemic defaults and insolvencies can occur but can never be predicted in advance.