27 results for "Non-commutative particles dynamics"


Relevance:

30.00%

Publisher:

Abstract:

In this Thesis various aspects of memory effects in the dynamics of open quantum systems are studied. We develop a general theoretical framework for open quantum systems beyond the Markov approximation, which allows us to investigate different sources of memory effects and to develop methods for harnessing them in order to realise controllable open quantum systems. In the first part of the Thesis a characterisation of non-Markovian dynamics in terms of information flow is developed and applied to study different sources of memory effects. Namely, we study nonlocal memory effects, which arise due to initial correlations between two local environments, and further the memory effects induced by initial correlations between the open system and the environment. The last part focuses on describing two all-optical experiments in which, through selective preparation of the initial environment states, the information flow between the system and the environment can be controlled. In the first experiment the system is driven from the Markovian to the non-Markovian regime and the degree of non-Markovianity is determined. In the second experiment we observe the nonlocal nature of the memory effects and provide a novel method to experimentally quantify frequency correlations in photonic environments via polarisation measurements.
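The information-flow characterisation of non-Markovianity is commonly quantified through the trace distance between a pair of evolving states: any revival of the trace distance signals information flowing back from the environment. A minimal sketch of that idea, assuming a single qubit under a toy dephasing channel with an invented oscillating decoherence function (illustrative only, not the model used in the thesis):

```python
import numpy as np

def trace_distance(rho1, rho2):
    """D(rho1, rho2) = 0.5 * ||rho1 - rho2||_1 (half the sum of singular values)."""
    diff = rho1 - rho2
    return 0.5 * np.sum(np.linalg.svd(diff, compute_uv=False))

def dephase(rho, k):
    """Toy dephasing channel: off-diagonal elements scaled by a factor k(t)."""
    out = rho.copy()
    out[0, 1] *= k
    out[1, 0] *= k
    return out

# Pair of initially orthogonal states |+> and |->
plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)

# Hypothetical non-monotonic decoherence function -> memory effects
ts = np.linspace(0.0, 2.0, 200)
kappa = np.exp(-ts) * np.abs(np.cos(2 * np.pi * ts))
D = [trace_distance(dephase(plus, k), dephase(minus, k)) for k in kappa]

# Summing the increases of D over time gives a non-Markovianity measure
backflow = np.sum(np.maximum(np.diff(D), 0.0))
print(f"total backflow N = {backflow:.3f}")
```

A monotonically decaying decoherence function would give `backflow == 0` (Markovian regime); the oscillating one above produces revivals and hence a positive measure.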

Relevance:

30.00%

Publisher:

Abstract:

Frontier and emerging economies have implemented policies aimed at liberalizing their equity markets. Equity market liberalization opens the domestic equity market to foreign investors and also paves the way for domestic investors to invest in foreign equity securities. Among other things, equity market liberalization yields diversification benefits. Moreover, it lowers the cost of equity capital, since investors require a lower rate of return, and foreign and local investors share any potential risks. Liberalized equity markets also become more liquid, given that more investors are available to trade. Equity market liberalization results in financial integration, which explains the co-movement of markets. In crisis periods, increased volatility and co-movement between two markets may result in what is termed contagion effects. In Africa, major moves toward financial liberalization generally started in the late 1980s, with South Africa as the pioneer. Over the years, researchers have studied the impact of financial liberalization on Africa's economic development with diverse results: some positive, others negative, and still others mixed. The objective of this study is to establish whether African stock markets are integrated with the United States (US) and world markets. Furthermore, the study examines whether there are international linkages between the African, US and world markets. A bivariate VAR-GARCH-BEKK model is employed. The effect of thin trading is removed through a series of econometric data-purification steps, because thin trading, also known as non-trading or inconsistency of trading, is a main feature of African markets and may produce inconsistent and biased results. The study confirmed the widely established result that the South African and Egyptian stock markets are highly integrated with the US and world markets. Interestingly, the study adds to knowledge in this research area by establishing that Kenya is also highly integrated with the US and world markets, and that it receives and exports past innovations as well as shocks to and from them.
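The BEKK parameterisation used in such studies guarantees a positive-definite conditional covariance matrix by construction: H_t = C'C + A' e_{t-1} e'_{t-1} A + B' H_{t-1} B. A minimal sketch of that recursion, with made-up parameter values and a placeholder shock series (not the estimates from the study):

```python
import numpy as np

def bekk_covariances(eps, C, A, B):
    """Bivariate BEKK(1,1): H_t = C'C + A' e_{t-1} e_{t-1}' A + B' H_{t-1} B."""
    T = eps.shape[0]
    H = np.empty((T, 2, 2))
    H[0] = np.cov(eps.T)                     # initialise with the sample covariance
    for t in range(1, T):
        e = eps[t - 1].reshape(2, 1)
        H[t] = C.T @ C + A.T @ (e @ e.T) @ A + B.T @ H[t - 1] @ B
    return H

rng = np.random.default_rng(0)
eps = rng.standard_normal((500, 2))          # placeholder return shocks
C = np.array([[0.3, 0.1], [0.0, 0.3]])       # hypothetical parameter matrices
A = np.diag([0.30, 0.25])
B = np.diag([0.90, 0.90])

H = bekk_covariances(eps, C, A, B)
# Every H_t is symmetric positive definite by construction
eigmin = min(np.linalg.eigvalsh(H[t]).min() for t in range(len(H)))
print(f"minimum eigenvalue across all H_t: {eigmin:.4f}")
```

The off-diagonal elements of A and B are what carry volatility spillovers between the two markets; they are set to zero here purely for brevity.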

Relevance:

30.00%

Publisher:

Abstract:

In 1859, Charles Darwin published his theory of evolution by natural selection, a process driven by fitness benefits and fitness costs at the individual level. Traditionally, evolution has been investigated by biologists, but it has also inspired mathematical approaches. For example, adaptive dynamics has proven to be a highly applicable framework for this purpose. Its core concept is the invasion fitness, the sign of which tells whether a mutant phenotype can invade the prevalent phenotype. In this thesis, four real-world applications to evolutionary questions are provided. Inspiration for the first two studies arose from a cold-adapted species, the American pika. First, it is studied how global climate change may affect the evolution of dispersal and the viability of pika metapopulations. Based on the results, it is shown that the evolution of dispersal can result in extinction and, indeed, the evolution of dispersal should be incorporated into the viability analysis of species living in fragmented habitats. The second study focuses on the evolution of density-dependent dispersal in metapopulations with small habitat patches. It revealed a surprising, counter-intuitive evolutionary phenomenon: non-monotone density-dependent dispersal may evolve. Cooperation is surprisingly common at many levels of life, despite its obvious vulnerability to selfish cheating. This motivated two further applications. First, it is shown that density-dependent cooperative investment can evolve to have a qualitatively different, monotone or non-monotone, form depending on modelling details. The last study investigates the evolution of investment into two public-goods resources. The results suggest one general path by which labour division can arise via evolutionary branching. In addition to the applications, two novel methodological derivations of fitness measures in structured metapopulations are given.
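The invasion fitness at the heart of adaptive dynamics can be illustrated with a textbook Lotka-Volterra competition model: a rare mutant with trait y invades a resident x when its long-term per-capita growth rate in the resident's environment is positive. A minimal sketch, assuming Gaussian competition and carrying-capacity functions (standard classroom choices, not the models of the thesis):

```python
import numpy as np

def K(x, sigma_k=1.0):
    """Carrying capacity as a function of trait x."""
    return np.exp(-x**2 / (2 * sigma_k**2))

def alpha(x, y, sigma_a=0.5):
    """Competition kernel between resident trait x and mutant trait y."""
    return np.exp(-(x - y)**2 / (2 * sigma_a**2))

def invasion_fitness(y, x, r=1.0):
    """Growth rate of a rare mutant y in a resident x sitting at density K(x)."""
    return r * (1.0 - alpha(x, y) * K(x) / K(y))

# At the singular strategy x* = 0 the selection gradient vanishes ...
x_star, h = 0.0, 1e-5
grad = (invasion_fitness(h, x_star) - invasion_fitness(-h, x_star)) / (2 * h)
# ... and because sigma_a < sigma_k, nearby mutants have positive fitness:
# x* is an evolutionary branching point.
print(f"gradient at x*: {grad:.2e}, fitness of mutant 0.1: "
      f"{invasion_fitness(0.1, x_star):.4f}")
```

With `sigma_a > sigma_k` the same code gives negative mutant fitness at x*, i.e. a continuously stable strategy instead of branching.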

Relevance:

30.00%

Publisher:

Abstract:

The future of privacy in the information age is a highly debated topic. In particular, new and emerging technologies such as ICTs and cognitive technologies are seen as threats to privacy. This thesis explores images of the future of privacy among non-experts within the time frame from the present until the year 2050. The aims of the study are to conceptualise privacy as a social and dynamic phenomenon, to understand how privacy is conceptualised among citizens and to analyse ideal-typical images of the future of privacy using the causal layered analysis method. The theoretical background of the thesis combines critical futures studies and critical realism, and the empirical material is drawn from three focus group sessions held in spring 2012 as part of the PRACTIS project. From a critical realist perspective, privacy is conceptualised as a social institution which creates and maintains boundaries between normative circles and preserves the social freedom of individuals. Privacy changes when actors with particular interests engage in technology-enabled practices which challenge current privacy norms. The thesis adopts a position of technological realism as opposed to determinism or neutralism. In the empirical part, the focus group participants are divided into four clusters based on differences in privacy conceptions and perceived threats and solutions. The clusters are fundamentalists, pragmatists, individualists and collectivists. Correspondingly, four ideal-typical images of the future are composed: ‘drift to low privacy’, ‘continuity and benign evolution’, ‘privatised privacy and an uncertain future’, and ‘responsible future or moral decline’. The images are analysed using the four layers of causal layered analysis: litany, system, worldview and myth. Each image has its strengths and weaknesses. The individualistic images tend to be fatalistic in character while the collectivistic images are somewhat utopian. 
In addition, the images have two common weaknesses: lack of recognition of ongoing developments and simplistic conceptions of privacy based on a dichotomy between the individual and society. The thesis argues for a dialectical understanding of futures as present images of the future and as outcomes of real processes and mechanisms. The first steps in promoting desirable futures are the awareness of privacy as a social institution, the awareness of current images of the future, including their assumptions and weaknesses, and an attitude of responsibility where futures are seen as the consequences of present choices.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this work is to obtain a better understanding of the effect of applying ultrasound to the mixing of fluid media. The research considers both Newtonian and non-Newtonian fluids. The application of ultrasound to liquids is modelled in the COMSOL Multiphysics software, with the influence of the ultrasound introduced through a waveform equation. Turbulence in the Newtonian fluid is modelled with the k-ε model, while the modelling of ultrasound-assisted mixing in non-Newtonian fluids is based on the power law. To verify the modelling results, two experimental methods are used: Particle Image Velocimetry and measurements of mixing time. Particle Image Velocimetry allows continuous capture of the velocity flow field and provides a detailed depiction of the liquid dynamics. The second verification method compares mixing times to homogeneity; experimentally, the mixing time is obtained from conductivity measurements, while in the modelling part it is obtained with the Transport of Diluted Species module of COMSOL Multiphysics. Both the experimental and modelling parts show a similar radial mechanism of fluid flow under ultrasound: from the horn tip the fluid moves to the bottom and returns back along the walls. The velocity profiles from modelling and experiment are similar in the case of the Newtonian fluid; in the case of the non-Newtonian fluid the velocity profiles do not agree. The development track of ultrasound-assisted mixing modelling is presented in the thesis.
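The power-law (Ostwald-de Waele) model mentioned above gives the apparent viscosity as mu = K * gamma_dot**(n - 1), so a shear-thinning fluid (n < 1) becomes less viscous as the ultrasound-driven flow increases the local shear rate. A minimal sketch with illustrative parameter values (not fitted to the fluids of the thesis):

```python
def apparent_viscosity(shear_rate, K, n):
    """Power-law (Ostwald-de Waele) fluid: mu = K * gamma_dot**(n - 1).
    n < 1: shear-thinning, n = 1: Newtonian, n > 1: shear-thickening."""
    return K * shear_rate ** (n - 1.0)

# Illustrative consistency index K and flow behaviour index n
K, n = 2.0, 0.5       # a shear-thinning fluid
for gd in (0.1, 1.0, 10.0, 100.0):
    print(f"shear rate {gd:7.1f} 1/s -> apparent viscosity "
          f"{apparent_viscosity(gd, K, n):.3f} Pa*s")
```

For n = 1 the expression collapses to a constant viscosity mu = K, recovering the Newtonian case used in the k-ε part of the study.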

Relevance:

30.00%

Publisher:

Abstract:

Ecological specialization in resource utilization has many facets, ranging from nutritional resources via the host use of parasites or phytophagous insects to local adaptation in different habitats. The evolution of specialization therefore affects the evolution of most other traits, which makes it one of the core issues in the theory of evolution. Hence, the evolution of specialization has attracted enormous research interest, starting already from Darwin's Origin of Species in 1859. The vast majority of theoretical studies have, however, focused on the mathematically simplest case, with well-mixed populations and equilibrium dynamics. This thesis explores the possibilities of extending the evolutionary analysis of resource usage to spatially heterogeneous metapopulation models and to models with non-equilibrium dynamics. These extensions are enabled by recent advances in the field of adaptive dynamics, which allow a mechanistic derivation of the invasion-fitness function based on the ecological dynamics. In the evolutionary analyses, special focus is placed on the case of two substitutable renewable resources. In this case, the most striking questions are whether a generalist species is able to coexist with the two specialist species, and whether such trimorphic coexistence can be attained through natural selection starting from a monomorphic population. This is shown to be possible both due to spatial heterogeneity and due to non-equilibrium dynamics. In addition, it is shown that chaotic dynamics may sometimes induce evolutionary suicide or cyclic evolutionary dynamics. Moreover, the relations between various ecological parameters and evolutionary dynamics are investigated. In particular, the relation between specialization and dispersal propensity turns out to be counter-intuitively non-monotonic. This observation served as inspiration for the analysis of the joint evolution of dispersal and specialization, which may provide the most natural explanation for the observed coexistence of specialist and generalist species.

Relevance:

30.00%

Publisher:

Abstract:

The non-idealities in a rotor-bearing system may cause undesirable subcritical superharmonic resonances, which occur when the rotating speed of the rotor is a fraction of the natural frequency of the system. These resonances arise partly from the non-idealities of the bearings. This study introduces a novel simulation approach that can be used to study the superharmonic vibrations of rotor-bearing systems. The superharmonic vibrations of complex rotor-bearing systems can be studied in an accurate manner by combining a detailed rotor and bearing model in a multibody simulation approach. The research looks at the theoretical background of multibody formulations that can be used in the dynamic analysis of flexible rotors. The multibody formulations currently in use are suitable for linear deformation analysis only. However, nonlinear deformations may arise in high-speed rotor dynamics applications due to the centrifugal stiffening effect. For this reason, finite element formulations that can describe nonlinear deformation are also introduced in this work. The description of the elastic forces in the absolute nodal coordinate formulation is studied and improved. A ball bearing model that includes localized and distributed defects is developed in this study. This bearing model can be used in rotor dynamics or multibody codes as an interface element between the rotor and the supporting structure. The model includes descriptions of the nonlinear Hertzian contact deformation and the elastohydrodynamic fluid film. The simulation approaches and models developed here are applied in the analysis of two example rotor-bearing systems. The first example is an electric motor supported by two ball bearings, and the second is a roller test rig that consists of the tube roll of a paper machine supported by a hard-bearing-type balancing machine. The simulation results are compared to the results available in the literature as well as to those obtained by measuring the existing structure.
In both practical examples, the comparison shows that the simulation model is capable of predicting the realistic responses of a rotor system. The simulation approaches developed in this work can be used in the analysis of the superharmonic vibrations of general rotor-bearing systems.
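The nonlinear Hertzian contact mentioned above relates the ball-race contact force to the deflection as F = k * delta**1.5; summing the contributions of the loaded balls gives the net restoring force on the shaft. A minimal sketch for an idealised deep-groove ball bearing, with invented stiffness and clearance values (not the parameters of the thesis model):

```python
import math

def hertz_force(deflection, k=1.0e10):
    """Hertzian point contact: F = k * delta**1.5 for delta > 0, else no contact."""
    return k * deflection ** 1.5 if deflection > 0 else 0.0

def bearing_force(ex, ey, n_balls=8, clearance=1e-5, k=1.0e10):
    """Net restoring force on a shaft displaced by (ex, ey) inside the bearing.
    Only balls whose compression exceeds the radial clearance carry load."""
    fx = fy = 0.0
    for j in range(n_balls):
        phi = 2 * math.pi * j / n_balls                 # angular position of ball j
        delta = ex * math.cos(phi) + ey * math.sin(phi) - clearance
        f = hertz_force(delta, k)
        fx -= f * math.cos(phi)
        fy -= f * math.sin(phi)
    return fx, fy

fx, fy = bearing_force(ex=3e-5, ey=0.0)
print(f"restoring force: Fx = {fx:.1f} N, Fy = {fy:.1f} N")
```

Because only a varying subset of balls is loaded as the shaft whirls, the force is a strongly nonlinear function of displacement, which is one source of the superharmonic response discussed above.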

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study is to propose a stochastic model for commodity markets linked with the Burgers equation from fluid dynamics. We construct a stochastic particle method for commodity markets, in which particles represent market participants. A discontinuity is included in the model through an interacting kernel equal to the Heaviside function, and its link with the Burgers equation is given. The Burgers equation and the connection of this model with stochastic differential equations are also studied. Further, based on the law of large numbers, we prove the convergence, for large N, of a system of stochastic differential equations describing the evolution of the prices of N traders to a deterministic partial differential equation of Burgers type. Numerical experiments highlight the success of the new proposal in modelling some commodity markets, and this is confirmed by the ability of the model to reproduce price spikes when they occur over a sufficiently long period of time.
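An interacting particle system with a Heaviside kernel can be sketched as N coupled diffusions, dX_i = (1/N) sum_j H(X_i - X_j) dt + sigma dW_i, so each particle's drift equals the fraction of particles below it (the empirical CDF); as N grows, the empirical distribution approaches a Burgers-type PDE. A minimal Euler-Maruyama sketch with illustrative step size and parameters (not the scheme of the study):

```python
import numpy as np

def simulate(n_particles=200, n_steps=100, dt=0.01, sigma=0.2, seed=1):
    """Euler-Maruyama for dX_i = (1/N) sum_j H(X_i - X_j) dt + sigma dW_i,
    where H is the Heaviside interaction kernel."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)          # initial particle positions
    for _ in range(n_steps):
        # drift of particle i = fraction of particles below it (its rank / N)
        ranks = np.argsort(np.argsort(x))
        drift = ranks / n_particles
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
    return x

x = simulate()
print(f"mean position after transport: {x.mean():.3f}")
```

Over a unit time horizon the average drift is roughly 1/2, so the whole cloud of "prices" is transported to the right while the rank-dependent drift steepens the upper tail, the discrete analogue of shock formation in the Burgers equation.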

Relevance:

30.00%

Publisher:

Abstract:

The underwater light field is an important environmental variable as it, among other things, enables aquatic primary production. Although the portion of solar radiation referred to as visible light penetrates water, it is restricted to a limited surface layer because of efficient absorption and scattering processes. Depending on the varying content of optical constituents in the water, the efficiency of light attenuation changes in many dimensions and over various spatial and temporal scales. This thesis discusses the underwater light dynamics of a transitional coastal archipelago in south-western Finland, in the Baltic Sea. While the area has long been known to have a highly variable underwater light field, quantified knowledge of the phenomenon has been scarce, patchy, or non-existent. This thesis focuses on the variability of the underwater light field through euphotic depths (the depth at which 1% of surface irradiance remains), which were derived from in situ measurements of vertical profiles of photosynthetically active radiation (PAR). Spot sampling was conducted in the archipelago of south-western Finland, mainly during the ice-free growing seasons of 2010 and 2011. In addition to quantifying both the seasonal and geographical patterns of euphotic depth development, the need for and usability of underwater light information are also discussed. Light availability was found to fluctuate in multiple dimensions and scales. The euphotic depth was shown to have combined spatio-temporal dynamics rather than separate changes in the spatial and temporal dimensions. Such complexity in the underwater light field creates challenges for data collection as well as for its utilisation. Although local information is needed, in highly variable conditions spot-sampled information may represent its surroundings only poorly. Moreover, either temporally or spatially limited sampling may bias the understanding of underwater light dynamics.
Consequently, the application of light availability data, for example in ecological modelling, should be made with great caution.
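The euphotic depth used above follows from the exponential attenuation of PAR with depth, E(z) = E(0) * exp(-Kd * z); the 1% light level is reached at z_eu = ln(100) / Kd, roughly 4.6 / Kd. A minimal sketch that fits Kd from a synthetic vertical PAR profile (the Kd value is an invented, plausible coastal number, not a measurement from the thesis):

```python
import math

def euphotic_depth(depths, par):
    """Fit E(z) = E0 * exp(-Kd * z) by linear regression on log(PAR),
    then return z_eu = ln(100) / Kd, the depth of the 1% irradiance level."""
    n = len(depths)
    logs = [math.log(p) for p in par]
    zbar = sum(depths) / n
    lbar = sum(logs) / n
    slope = sum((z - zbar) * (l - lbar) for z, l in zip(depths, logs)) \
        / sum((z - zbar) ** 2 for z in depths)
    kd = -slope                      # attenuation coefficient, 1/m
    return math.log(100.0) / kd

# Synthetic profile with Kd = 0.92 1/m
kd_true = 0.92
depths = [0.5 * i for i in range(1, 11)]                 # 0.5 m ... 5.0 m
par = [1500.0 * math.exp(-kd_true * z) for z in depths]  # umol photons m-2 s-1
z_eu = euphotic_depth(depths, par)
print(f"euphotic depth: {z_eu:.2f} m")
```

With real profiles the regression absorbs measurement noise; the same ln(100)/Kd relation is what lets a single fitted attenuation coefficient summarise a whole vertical cast.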

Relevance:

30.00%

Publisher:

Abstract:

Steam turbines play a significant role in global power generation. In particular, research on low-pressure (LP) steam turbine stages is of special importance for steam turbine manufacturers, vendors, power plant owners and the scientific community, because their efficiency is lower than that of the high-pressure stages. Because of condensation, the last stages of an LP turbine experience irreversible thermodynamic losses, aerodynamic losses and erosion of the turbine blades. Additionally, an LP steam turbine requires maintenance due to moisture generation, which also affects turbine reliability. Therefore, the design of energy-efficient LP steam turbines requires a comprehensive analysis of the condensation phenomena and the corresponding losses occurring in the steam turbine, either by experiments or with numerical simulations. The aim of the present work is to apply computational fluid dynamics (CFD) to enhance the existing knowledge and understanding of condensing steam flows and of the loss mechanisms that arise from the irreversible heat and mass transfer during the condensation process in an LP steam turbine. Throughout this work, two commercial CFD codes were used to model non-equilibrium condensing steam flows. The Eulerian-Eulerian approach was utilised, in which the mixture of vapour and liquid phases was solved by the Reynolds-averaged Navier-Stokes equations. The nucleation process was modelled with the classical nucleation theory, and two different droplet growth models were used to predict the droplet growth rate. The flow turbulence was solved by employing the standard k-ε and the shear stress transport k-ω turbulence models. Further, both models were modified and implemented in the CFD codes. The thermodynamic properties of the vapour and liquid phases were evaluated with real gas models.
In this thesis, various topics, namely the influence of real gas properties, turbulence modelling, unsteadiness and the blade trailing edge shape on wet-steam flows, are studied with different convergent-divergent nozzles, a turbine stator cascade and a 3D turbine stator-rotor stage. The simulated results of this study were evaluated and discussed together with the experimental data available in the literature. The grid independence study revealed that an adequate grid size is required to capture the correct trends of the condensation phenomena in LP turbine flows. The study shows that accurate real gas properties are important for the precise modelling of non-equilibrium condensing steam flows. The turbulence modelling study revealed that the flow expansion, and subsequently the rate of formation of liquid droplet nuclei and their growth process, were affected by the choice of turbulence model; the losses were rather sensitive to the turbulence modelling as well. Based on the presented results, the correct computational prediction of wet-steam flows in the LP turbine requires the turbulence to be modelled accurately. The trailing edge shape of the LP turbine blades influenced the liquid droplet formation, distribution and sizes, as well as the loss generation. The study shows that the semicircular trailing edge shape predicted the smallest droplet sizes, while the square trailing edge shape yielded greater losses. The analysis of steady and unsteady calculations of wet-steam flow showed that in unsteady simulations the interaction of wakes in the rotor blade row affected the flow field. The flow unsteadiness influenced the nucleation and droplet growth processes due to fluctuation of the Wilson point.
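Classical nucleation theory, referred to above, gives the homogeneous nucleation rate as J = J0 * exp(-dG*/(kB*T)), where the energy barrier dG* = (16*pi/3) * sigma**3 / (rho_l * Rv * T * ln S)**2 shrinks rapidly as the supersaturation S grows. A minimal sketch of the critical droplet radius and barrier, using rough illustrative water-property values (not the property models or codes of the thesis):

```python
import math

KB = 1.380649e-23          # Boltzmann constant, J/K

def nucleation_barrier(sigma, rho_l, Rv, T, S):
    """Classical nucleation theory: critical radius and Gibbs energy barrier.
    r* = 2*sigma / (rho_l*Rv*T*ln S),  dG* = (16*pi/3)*sigma**3 / (rho_l*Rv*T*ln S)**2
    """
    denom = rho_l * Rv * T * math.log(S)     # units of pressure, Pa
    r_crit = 2.0 * sigma / denom
    dg = 16.0 * math.pi * sigma ** 3 / (3.0 * denom ** 2)
    return r_crit, dg

# Rough illustrative property values for water at 300 K
sigma = 0.06     # surface tension, N/m
rho_l = 998.0    # liquid density, kg/m^3
Rv = 461.5       # specific gas constant of water vapour, J/(kg K)
T = 300.0        # K
for S in (2.0, 3.0, 5.0):
    r_crit, dg = nucleation_barrier(sigma, rho_l, Rv, T, S)
    print(f"S = {S}: r* = {r_crit * 1e9:.2f} nm, dG*/kT = {dg / (KB * T):.1f}")
```

The strong sensitivity of the exponential barrier to S (and, through sigma and rho_l, to the real gas properties) is why accurate property models matter so much for predicting the Wilson point.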

Relevance:

30.00%

Publisher:

Abstract:

One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied, which solve the problems of memory storage and huge matrix inversion faced by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids filter inbreeding problems, which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate against the model state vector of dimension 30 171, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by the VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we have presented a non-intrusive approach to coupling the model and a DA scheme: an external program is used to send and receive information between the model and the DA procedure using files.
The advantage of this method is that the changes needed in the model code are minimal, only a few lines to facilitate input and output. Apart from being simple, the approach can be employed even if the two codes are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to a multi-purpose hydrodynamic model, COHERENS, to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² with an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009, and the effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake. However, due to the sparsity of the TSM data in both time and space, the two could not be well matched. The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem; with DA, this would help, for instance, in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF and the ensemble size limit on performance lead to the emerging area of Reduced Order Modeling (ROM). In ROM, running the full-blown model is avoided in order to save computational resources. When ROM is combined with the non-intrusive DA approach, it may result in a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
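The update performed at each measurement time can be illustrated with the standard stochastic EnKF analysis step, a simplified stand-in for the VEnKF optimisation (this is not the thesis code, and the state dimension, observation operator and noise levels below are invented for illustration):

```python
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """Stochastic EnKF analysis: x_a = x_f + K (y + eta - H x_f),
    with the Kalman gain K built from the ensemble forecast covariance."""
    n, N = ensemble.shape
    xbar = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - xbar                                   # anomalies
    Pf = A @ A.T / (N - 1)                                # ensemble covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)        # Kalman gain
    perturbed = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=N).T                    # perturbed observations
    return ensemble + K @ (perturbed - H @ ensemble)

rng = np.random.default_rng(42)
truth = np.array([1.0, -2.0, 0.5])                        # toy "true" state
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])          # observe components 1-2
R = 0.01 * np.eye(2)                                      # observation error cov
prior = truth[:, None] + rng.standard_normal((3, 200))    # forecast ensemble
post = enkf_update(prior, H @ truth, H, R, rng)
err_prior = np.linalg.norm(prior.mean(axis=1) - truth)
err_post = np.linalg.norm(post.mean(axis=1) - truth)
print(f"prior error {err_prior:.3f} -> posterior error {err_post:.3f}")
```

In the non-intrusive coupling described above, the forecast ensemble would be read from files written by the model processes, and the analysis ensemble written back for the next model run; the linear algebra of the update itself is unchanged.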

Relevance:

30.00%

Publisher:

Abstract:

The Solar Intensity X-ray and particle Spectrometer (SIXS) on board BepiColombo's Mercury Planetary Orbiter (MPO) will study solar energetic particles moving towards Mercury and solar X-rays on the dayside of Mercury. The SIXS instrument consists of two detector subsystems: the X-ray detector SIXS-X and the particle detector SIXS-P. The SIXS-P subdetector will detect solar energetic electrons and protons over a broad energy range using a particle telescope approach, with five outer Si detectors around a central CsI(Tl) scintillator. The measurements made by the SIXS instrument are necessary for other instruments on board the spacecraft. SIXS data will be used to study the solar X-ray corona, solar flares, solar energetic particles, the Hermean magnetosphere, and solar eruptions. The SIXS-P detector was calibrated by comparing experimental measurement data from the instrument with Geant4 simulation data. Calibration curves were produced for the side detectors and the core scintillator for electrons and protons, respectively. The side detector energy response was found to be linear for both electrons and protons, while the core scintillator energy response to protons was found to be non-linear. The core scintillator calibration for electrons was omitted due to insufficient experimental data. The electron and proton acceptance of the SIXS-P detector was determined with Geant4 simulations. The electron and proton energy channels are clean in the main energy range of the instrument; at higher energies, protons and electrons produce a non-ideal response in the energy channels. Due to the limited bandwidth of the spacecraft's telemetry, the particle measurements made by SIXS-P have to be pre-processed in the data processing unit of the SIXS instrument. A lookup table was created for this pre-processing with Geant4 simulations, and the ability of the lookup table to provide spectral information from a simulated electron event was analysed.
The lookup table produces clean electron and proton channels and is able to separate protons and electrons. Based on a simulated solar energetic electron event, the incident electron spectrum cannot be determined from channel particle counts with a standard analysis method.
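Si-CsI particle telescopes of this kind typically separate species with a dE-E style lookup: for a given total energy, protons deposit far more energy in the thin Si detector than electrons, so each (Si energy loss, CsI energy) pair can be mapped to a species-plus-energy channel. A purely schematic sketch of such a classification; the separation line, bin edges and channel naming below are invented for illustration and are not the SIXS lookup table:

```python
def classify(dE_si, E_csi):
    """Toy dE-E lookup: map (Si energy loss, CsI energy) in MeV to a channel name.
    All thresholds are hypothetical, for illustration only."""
    total = dE_si + E_csi
    if dE_si <= 0 or total < 0.1:
        return "noise"
    # Protons lose much more energy in the thin Si layer than electrons do
    if dE_si > 0.05 * total + 0.2:          # hypothetical separation line, MeV
        species = "p"
    else:
        species = "e"
    # Bin total energy into coarse channels (hypothetical bin edges, MeV)
    edges = [0.1, 0.3, 1.0, 3.0, 10.0, 30.0]
    channel = sum(1 for edge in edges if total >= edge) - 1
    return f"{species}{channel}"

print(classify(0.8, 4.0))    # a proton-like hit
print(classify(0.05, 4.0))   # an electron-like hit
```

An on-board table would precompute this mapping for every digitised (dE, E) bin so that the data processing unit only performs an array lookup per event, which is what makes the pre-processing cheap enough for the telemetry budget.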