970 results for turbulence modelling theory
Abstract:
Stochastic models based on Markov birth processes are constructed to describe the process of invasion of a fly larva by entomopathogenic nematodes. Various forms for the birth (invasion) rates are proposed. These models are then fitted to data sets describing the observed numbers of nematodes that have invaded a fly larva after a fixed period of time. Non-linear birth rates are required to achieve good fits to these data, with their precise form leading to different patterns of invasion being identified for the three populations of nematodes considered. One of these (Nemasys) showed the greatest propensity for invasion. This form of modelling may be useful more generally for analysing data that show variation different from that expected under a binomial distribution.
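As a concrete illustration of the kind of model described above, the following is a minimal sketch of a Markov birth (invasion) process simulated with the Gillespie algorithm; the non-linear rate function `rate` is a hypothetical form for illustration, not one of the paper's fitted rates.

```python
import numpy as np

def simulate_invasion(n_nematodes, t_max, rate_fn, rng):
    """Gillespie simulation of a Markov birth (invasion) process.

    n_nematodes : nematodes initially outside the larva
    rate_fn(k)  : per-nematode invasion rate when k have already invaded
    Returns the number invaded by time t_max.
    """
    t, k = 0.0, 0
    while k < n_nematodes:
        total_rate = (n_nematodes - k) * rate_fn(k)
        if total_rate <= 0:
            break
        t += rng.exponential(1.0 / total_rate)  # waiting time to next invasion
        if t > t_max:
            break
        k += 1
    return k

# Hypothetical non-linear birth rate: invasion accelerates with the
# number already inside (one of several plausible functional forms).
rate = lambda k: 0.05 * (1.0 + 0.3 * k)

rng = np.random.default_rng(0)
counts = [simulate_invasion(30, 24.0, rate, rng) for _ in range(1000)]
# Over-dispersion relative to a binomial model shows up as excess variance.
print(np.mean(counts), np.var(counts))
```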
Abstract:
Previous work on formally modelling and analysing program compilation has shown the need for a simple and expressive semantics for assembler-level programs. Assembler programs contain unstructured jumps, and previous formalisms have modelled these by using continuations or by embedding the program in an explicit emulator. We propose a simpler approach, which uses techniques from compiler theory in a formal setting. This approach is based on an interpretation of programs as collections of program paths, each of which has a weakest liberal precondition semantics. We then demonstrate, by example, how this formalism can be used to justify the compilation of block-structured high-level language programs into assembler.
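To make the path-based semantics concrete, here is a minimal sketch of a weakest liberal precondition computed backwards along a single program path; the path representation and the sympy encoding are illustrative assumptions, not the paper's formalism.

```python
import sympy as sp

x, y = sp.symbols("x y")

# A program path: assignments ('asgn', var, expr) and branch guards ('assume', cond).
path = [
    ("assume", sp.Gt(x, 0)),   # taken branch of a conditional jump
    ("asgn", y, x + 1),
    ("asgn", x, 2 * y),
]

def wlp(path, post):
    """Weakest liberal precondition of a straight-line path, computed backwards:
    wlp(v := e, Q) = Q[e/v];  wlp(assume b, Q) = b -> Q."""
    q = post
    for step in reversed(path):
        if step[0] == "asgn":
            _, var, expr = step
            q = q.subs(var, expr)
        else:
            _, cond = step
            q = sp.Implies(cond, q)
    return sp.simplify(q)

print(wlp(path, sp.Gt(x, 2)))  # precondition guaranteeing x > 2 at path exit
```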
Abstract:
In this thesis we develop a new generative model of social networks belonging to the family of Time Varying Networks. Correctly modelling the mechanisms shaping the growth of a network and the dynamics of edge activation and inactivation is of central importance in network science. Indeed, by means of generative models that mimic the real-world dynamics of contacts in social networks it is possible to forecast the outcome of an epidemic process, optimize immunization campaigns, or optimally spread information among individuals. This task can now be tackled by taking advantage of the recent availability of large-scale, high-quality and time-resolved datasets. This wealth of digital data has allowed us to deepen our understanding of the structure and properties of many real-world networks. Moreover, the empirical evidence of a temporal dimension in networks prompted the switch of paradigm from a static representation of graphs to a time-varying one. In this work we exploit the Activity-Driven paradigm (a modelling tool belonging to the family of Time-Varying Networks) to develop a general dynamical model that encodes two fundamental mechanisms shaping the topology and temporal structure of social networks: social capital allocation and burstiness. The former accounts for the fact that individuals do not invest their time and social interactions at random, but rather allocate them toward already known nodes of the network. The latter accounts for the heavy-tailed distributions of inter-event times in social networks. We then empirically measure the properties of these two mechanisms in seven real-world datasets and develop a data-driven model, which we solve analytically. We check the results against numerical simulations and test our predictions on real-world datasets, finding good agreement between the two. Moreover, we find and characterize a non-trivial interplay between burstiness and social capital allocation in the parameter phase space. Finally, we present a novel approach to the development of a complete generative model of Time-Varying Networks. This model is inspired by Kauffman's adjacent possible theory and is based on a generalized version of Pólya's urn. Remarkably, most of the complex and heterogeneous features of real-world social networks are naturally reproduced by this dynamical model, together with many higher-order topological properties (clustering coefficient, community structure, etc.).
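The following is a minimal sketch of an activity-driven network with a memory kernel for social capital allocation (burstiness is omitted for brevity); the reinforcement probability n/(n+c) and all parameter values are illustrative assumptions, not the thesis's measured quantities.

```python
import numpy as np

rng = np.random.default_rng(42)
N, steps = 500, 200
# Heterogeneous activity rates, clipped to valid probabilities.
activity = np.clip(0.1 * rng.pareto(2.5, N) + 0.01, 0.0, 1.0)
memory = [dict() for _ in range(N)]   # per-node counts of past contacts

def choose_partner(i, c=1.0):
    """Social-capital allocation: with probability n/(n+c) return to a
    known contact (proportional to past interactions), else explore."""
    known = memory[i]
    n = len(known)
    if known and rng.random() < n / (n + c):
        nodes = list(known)
        weights = np.array([known[j] for j in nodes], dtype=float)
        return nodes[rng.choice(len(nodes), p=weights / weights.sum())]
    j = rng.integers(N)
    while j == i:
        j = rng.integers(N)
    return j

for _ in range(steps):
    active = np.where(rng.random(N) < activity)[0]   # nodes firing this step
    for i in active:
        j = choose_partner(i)
        memory[i][j] = memory[i].get(j, 0) + 1
        memory[j][i] = memory[j].get(i, 0) + 1

degrees = [len(m) for m in memory]    # cumulative (integrated) degree
print("mean cumulative degree:", np.mean(degrees))
```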
Abstract:
A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified. In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
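As a rough illustration of the macroscopics involved, the sketch below tracks the first three cumulants of fitness and the mean correlation (expected pairwise Hamming distance) for a simple additive GA; the selection scheme and all parameters are illustrative assumptions, not the formalism's equations of motion.

```python
import numpy as np
from scipy.stats import kstat  # unbiased cumulant (k-statistic) estimators

rng = np.random.default_rng(1)
pop_size, genome_len, gens = 100, 64, 50
pop = rng.integers(0, 2, (pop_size, genome_len))

def fitness(pop):
    return pop.sum(axis=1)            # additive genotype-to-phenotype map

def mean_hamming(pop):
    p = pop.mean(axis=0)              # per-site allele frequencies
    return 2.0 * np.sum(p * (1 - p))  # expected pairwise Hamming distance

for g in range(gens):
    f = fitness(pop).astype(float)
    # Macroscopics of the kind tracked by the statistical-mechanics formalism:
    k1, k2, k3 = kstat(f, 1), kstat(f, 2), kstat(f, 3)
    if g % 10 == 0:
        print(f"gen {g}: cumulants=({k1:.2f}, {k2:.2f}, {k3:.2f}), "
              f"correlation={mean_hamming(pop):.2f}")
    # Boltzmann (exponential) selection followed by bitwise mutation.
    w = np.exp(0.1 * (f - f.max()))
    idx = rng.choice(pop_size, pop_size, p=w / w.sum())
    pop = pop[idx]
    flip = rng.random(pop.shape) < 0.002
    pop = np.where(flip, 1 - pop, pop)
```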
Abstract:
This thesis consists of three empirical studies and one theoretical study. While China has received an increasing amount of foreign direct investment (FDI) and has become the second largest host country for FDI in recent years, the absence of comprehensive studies on FDI inflows into this country motivates this research. In the first study, an econometric model is developed to analyse the economic, political, cultural and geographic determinants of both pledged and realised FDI in China. The results of this study suggest that China's relatively cheap labour force, high degree of international integration with the outside world (represented by its exports and imports) and bilateral exchange rates are the important economic determinants of both pledged and realised FDI in China. The second study analyses the regional distribution of both pledged and realised FDI within China. The econometric properties of the panel data set are examined using a standardised 't-bar' test. The empirical results indicate that provinces with higher levels of international trade, lower wage rates, more R&D manpower, more preferential policies and closer ethnic links with overseas Chinese attract relatively more FDI. The third study constructs a dynamic equilibrium model to study the interactions among FDI, knowledge spillovers and long-run economic growth in a developing country. The ideas of endogenous product cycles and trade-related international knowledge spillovers are modified and extended to FDI. The major conclusion is that, in the presence of FDI, economic growth is determined by the stock of human capital, the subjective discount rate and the knowledge gap, while unskilled labour cannot sustain growth. In the fourth study, the role of FDI in the growth process of the Chinese economy is investigated using a panel of data for 27 provinces across China between 1986 and 1995. In addition to FDI, domestic R&D expenditure, international trade and human capital are added to the standard convergence regressions to control for the different structural characteristics of each province. The empirical results support endogenous innovation growth theory, in which regional per capita income can converge given technological diffusion, transfer and imitation.
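A hedged sketch of the kind of conditional convergence regression used in the fourth study is given below; the data are synthetic stand-ins and the variable names (`fdi`, `trade`, `human_k`) are hypothetical, since the thesis's actual panel is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic panel standing in for a 27-province, ten-year data set.
rng = np.random.default_rng(7)
provinces, years = 27, 10
df = pd.DataFrame({
    "province": np.repeat(np.arange(provinces), years),
    "log_y0":   rng.normal(8.0, 0.5, provinces * years),  # initial per-capita income
    "fdi":      rng.gamma(2.0, 1.0, provinces * years),
    "trade":    rng.gamma(3.0, 1.0, provinces * years),
    "human_k":  rng.normal(5.0, 1.0, provinces * years),
})
# A negative coefficient on log_y0 would indicate conditional convergence.
df["growth"] = (-0.02 * df.log_y0 + 0.01 * df.fdi + 0.005 * df.trade
                + 0.008 * df.human_k + rng.normal(0, 0.01, len(df)))

# Convergence regression with province fixed effects to absorb
# time-invariant structural characteristics.
model = smf.ols("growth ~ log_y0 + fdi + trade + human_k + C(province)", data=df)
print(model.fit().summary().tables[1])
```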
Abstract:
This article considers the role of accounting in organisational decision making. It challenges the rational nature of decisions made in organisations through the use of accounting models, and highlights the problems of predicting the future through such models. The use of accounting in this manner is evaluated from an epochal postmodern stance. Issues raised by chaos theory and the uncertainty principle are used to demonstrate problems with the predictive ability of accounting models. The authors argue that any consideration of the predictive value of accounting needs to change to incorporate a recognition of the turbulent external environment if it is to be of use for organisational decision making. Thus it is argued that the role of accounting as a mechanism for knowledge creation regarding the future is fundamentally flawed. We take this as a starting point to examine the real purpose of accounting's predictive techniques, using their ritualistic role in myth creation to argue for the cultural benefits of such flawed techniques.
Abstract:
In the last two decades there have been substantial developments in the mathematical theory of inverse optimization problems, and their applications have expanded greatly. In parallel, time series analysis and forecasting have become increasingly important in various fields of research such as data mining, economics, business, engineering, medicine, politics, and many others. Despite the widespread use of linear programming in forecasting models, not a single application of inverse optimization has been reported in the forecasting literature for cases where time series data are available. Thus the goal of this paper is to introduce inverse optimization into the forecasting field, and to provide a streamlined approach to time series analysis and forecasting using inverse linear programming. An application is used to demonstrate the inverse forecasting approach developed in this study. © 2007 Elsevier Ltd. All rights reserved.
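The sketch below illustrates the inverse-LP idea in its simplest form: given an observed decision x0, find the minimally perturbed cost vector (in the L1 norm) under which x0 is optimal, imposed via dual feasibility and complementary slackness. The data and formulation are illustrative assumptions, not the paper's forecasting model.

```python
import numpy as np
from scipy.optimize import linprog

# Forward problem:  min c^T x  subject to  A x >= b,  x >= 0.
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([2.0, 3.0])
c = np.array([1.0, 2.0])
x0 = np.array([3.0, 0.0])          # observed (historical) decision, feasible

n, m = len(c), len(b)
active = np.isclose(A @ x0, b)     # binding primal constraints
basic = x0 > 1e-9                  # strictly positive components of x0

# Decision vector z = [c' (n), y (m), u (n)], where u bounds |c' - c|;
# minimise sum(u) subject to x0 being optimal under c'.
obj = np.concatenate([np.zeros(n + m), np.ones(n)])
A_ub, b_ub, A_eq, b_eq = [], [], [], []

for j in range(n):
    row = np.zeros(2 * n + m); row[j] = 1.0; row[n + m + j] = -1.0
    A_ub.append(row); b_ub.append(c[j])       # c'_j - u_j <= c_j
    row = np.zeros(2 * n + m); row[j] = -1.0; row[n + m + j] = -1.0
    A_ub.append(row); b_ub.append(-c[j])      # -c'_j - u_j <= -c_j
    row = np.zeros(2 * n + m); row[j] = -1.0; row[n:n + m] = A[:, j]
    if basic[j]:
        A_eq.append(row); b_eq.append(0.0)    # (A^T y)_j = c'_j where x0_j > 0
    else:
        A_ub.append(row); b_ub.append(0.0)    # dual feasibility (A^T y)_j <= c'_j

bounds = ([(None, None)] * n                                    # c' free
          + [(0.0, None) if a else (0.0, 0.0) for a in active]  # y >= 0, zero if slack
          + [(0.0, None)] * n)                                  # u >= 0

res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
print("minimally adjusted cost vector c':", res.x[:n])
```

For these data the observed x0 is not optimal under the original c, and the program returns c' ≈ (2/3, 2), the cheapest L1 adjustment that rationalises the observation.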
Abstract:
This work reports the development of a mathematical model and distributed, multivariable computer control for a pilot-plant double-effect climbing-film evaporator. A distributed-parameter model of the plant has been developed and the time-domain model transformed into the Laplace domain. The model has been further transformed into an integral domain conforming to an algebraic ring of polynomials, to eliminate the transcendental terms which arise in the Laplace domain due to the distributed nature of the plant model. This has made possible the application of linear control theories to a set of linear partial differential equations. The models obtained have tracked the experimental results of the plant well. A distributed computer network has been interfaced with the plant to implement digital controllers in a hierarchical structure. A modern multivariable Wiener-Hopf controller has been applied to the plant model. The application has revealed a limiting condition: the plant matrix should be positive-definite along the infinite frequency axis. A new multivariable control theory has emerged from this study which avoids the above limitation. The controller has the structure of the modern Wiener-Hopf controller, but with a unique feature enabling a designer to specify the closed-loop poles in advance and to shape the sensitivity matrix as required. In this way, the method treats directly the interaction problems found in chemical processes, with good tracking and regulation performance. The ability of analytical design methods to determine once and for all whether a given set of specifications can be met is one of their chief advantages over conventional trial-and-error design procedures; one disadvantage that offsets these advantages to some degree, however, is the relatively complicated algebra that must be employed in working out all but the simplest problems. Mathematical algorithms and computer software have been developed to treat some of the mathematical operations defined over the integral domain, such as matrix fraction description, spectral factorization, the Bezout identity, and the general manipulation of polynomial matrices. Hence, the design problems of Wiener-Hopf type controllers and other similar algebraic design methods can be easily solved.
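As a small taste of the polynomial machinery mentioned above, the following sketch computes a scalar Bezout identity a·x + b·y = gcd(a, b) by the extended Euclidean algorithm; the thesis works over polynomial matrices, which is substantially more involved, so this is an illustrative simplification.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def poly_ext_euclid(a, b, tol=1e-9):
    """Extended Euclidean algorithm for polynomials (coefficient arrays,
    lowest degree first): returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    r0, r1 = np.atleast_1d(a).astype(float), np.atleast_1d(b).astype(float)
    x0, x1 = np.array([1.0]), np.array([0.0])
    y0, y1 = np.array([0.0]), np.array([1.0])
    while np.max(np.abs(r1)) > tol:
        q, r = P.polydiv(r0, r1)
        # Zero out numerical noise and trim trailing zero coefficients.
        r = np.trim_zeros(np.where(np.abs(r) < tol, 0.0, r), "b")
        r0, r1 = r1, (r if r.size else np.array([0.0]))
        x0, x1 = x1, P.polysub(x0, P.polymul(q, x1))
        y0, y1 = y1, P.polysub(y0, P.polymul(q, y1))
    return r0, x0, y0

# a(s) = s^2 - 1 and b(s) = s + 1 share the factor (s + 1):
g, x, y = poly_ext_euclid([-1.0, 0.0, 1.0], [1.0, 1.0])
print("gcd coefficients:", g)   # a scalar multiple of (s + 1)
print("check a*x + b*y:",
      P.polyadd(P.polymul([-1.0, 0.0, 1.0], x), P.polymul([1.0, 1.0], y)))
```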
Abstract:
This thesis presents the results of numerical modelling of the propagation of dispersion managed solitons. The theory of optical pulse propagation in single-mode optical fibre is introduced, looking specifically at the use of optical solitons for fibre communications. The numerical technique used to solve the nonlinear Schrödinger equation is also introduced. The recent developments in the use of dispersion managed solitons are reviewed before the numerical results are presented. The work in this thesis covers two main areas: (i) the use of a saturable absorber to control the propagation of dispersion managed solitons, and (ii) the upgrade of the installed standard fibre network to higher data rates through the use of solitons and dispersion management. Saturable absorbers can be used to suppress the build-up of noise and dispersive radiation in soliton transmission lines. The use of saturable absorbers in conjunction with dispersion management has been investigated both for a single pulse and for the transmission of a 10Gbit/s data pattern. It is found that this system supports a new regime of stable soliton pulses with significantly increased powers. The upgrade of the installed standard fibre network to higher data rates through the use of fibre amplifiers and dispersion management is of increasing interest. In this thesis the propagation of data at both 10Gbit/s and 40Gbit/s is studied. Propagation over transoceanic distances is shown to be possible for 10Gbit/s transmission, and over more than 2000km at 40Gbit/s. The contribution of dispersion managed solitons to the future of optical communications is discussed in the thesis conclusions.
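For readers unfamiliar with the numerical technique, below is a minimal split-step Fourier sketch of pulse propagation under the nonlinear Schrödinger equation with a two-segment dispersion map, in normalised units; all parameters are illustrative, not those used in the thesis.

```python
import numpy as np

# Normalised NLSE:  i u_z = (beta2/2) u_tt - |u|^2 u, solved by the
# symmetric split-step Fourier method over an alternating dispersion map.
nt, t_max = 1024, 40.0
t = np.linspace(-t_max, t_max, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])   # angular frequency grid

u = 1.0 / np.cosh(t)                   # initial sech pulse
dz, n_steps = 0.01, 4000
map_period = 2.0
beta2 = (-1.2, 1.0)                    # anomalous/normal segments, net anomalous

z = 0.0
for _ in range(n_steps):
    b2 = beta2[0] if (z % map_period) < map_period / 2 else beta2[1]
    # Half-step linear (dispersion) in Fourier space.
    u = np.fft.ifft(np.fft.fft(u) * np.exp(0.5j * b2 * w**2 * dz / 2))
    # Full nonlinear step (self-phase modulation).
    u = u * np.exp(1j * np.abs(u)**2 * dz)
    # Second half-step linear.
    u = np.fft.ifft(np.fft.fft(u) * np.exp(0.5j * b2 * w**2 * dz / 2))
    z += dz

# For a dispersion-managed soliton the pulse breathes within each map
# period but remains stable on average over many periods.
print("peak power after propagation:", np.max(np.abs(u))**2)
```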