88 results for Data Modelling
Abstract:
Obtaining wind vectors over the ocean is important for weather forecasting and ocean modelling. Several satellite systems used operationally by meteorological agencies utilise scatterometers to infer wind vectors over the oceans. In this paper we present the results of using novel neural-network-based techniques to estimate wind vectors from such data. The problem is partitioned into estimating wind speed and wind direction. Wind speed is modelled using a multi-layer perceptron (MLP) and a sum-of-squares error function. Wind direction is a periodic variable and a multi-valued function for a given set of inputs; a conventional MLP fails at this task, so we model the full periodic probability density of direction conditioned on the satellite-derived inputs using a Mixture Density Network (MDN) with periodic kernel functions. A committee of the resulting MDNs is shown to improve the results.
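The periodic kernels mentioned in the abstract can be illustrated with a small sketch: a mixture of von Mises kernels gives a valid probability density on the circle, here with the characteristic two modes about 180 degrees apart that arise in scatterometer direction retrieval. The specific weights, centres and concentrations below are hypothetical stand-ins for what, in an MDN, would be produced by the network outputs.

```python
import numpy as np
from scipy.special import i0  # modified Bessel function I0, the von Mises normaliser

def von_mises_mixture_pdf(theta, weights, centres, kappas):
    """Mixture of von Mises kernels: a periodic density on [-pi, pi)."""
    theta = np.asarray(theta)[..., None]
    comps = np.exp(kappas * np.cos(theta - centres)) / (2 * np.pi * i0(kappas))
    return comps @ weights

# Hypothetical parameters (in an MDN these would be the network outputs):
weights = np.array([0.6, 0.4])           # mixing coefficients, sum to 1
centres = np.array([0.5, 0.5 + np.pi])   # two modes ~180 deg apart (wind ambiguity)
kappas  = np.array([4.0, 4.0])           # concentrations (inverse "width")

grid = np.linspace(-np.pi, np.pi, 2001)
pdf = von_mises_mixture_pdf(grid, weights, centres, kappas)
# The density is everywhere positive and integrates to 1 over one period:
print(np.trapz(pdf, grid))  # ≈ 1.0
```

Unlike Gaussian kernels, the von Mises kernel wraps correctly around the period, so directions near -180 and +180 degrees are treated as neighbours.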
Abstract:
The ERS-1 satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of radar backscatter generated by small ripples on the ocean surface induced by instantaneous local winds. Operational methods that extract wind vectors from satellite scatterometer data are based on the local inversion of a forward model, mapping scatterometer observations to wind vectors, by the minimisation of a cost function in the scatterometer measurement space.

This report uses mixture density networks, a principled method for modelling conditional probability density functions, to model the joint probability distribution of the wind vectors given the satellite scatterometer measurements in a single cell (the 'inverse' problem). The complexity of the mapping and the structure of the conditional probability density function are investigated by varying the number of units in the hidden layer of the multi-layer perceptron and the number of kernels in the Gaussian mixture model of the mixture density network respectively. The optimal model for networks trained per trace has twenty hidden units and four kernels. Further investigation shows that models trained with incidence angle as an input achieve results comparable to those of models trained per trace. A hybrid mixture density network that incorporates geophysical knowledge of the problem confirms other results that the conditional probability distribution is predominantly bimodal.

The wind retrieval results improve on previous work at Aston, but do not match other neural network techniques that use spatial information in the inputs, which is to be expected given the ambiguity of the inverse problem. Current work uses the local inverse model for autonomous ambiguity removal in a principled Bayesian framework. Future directions in which these models may be improved are given.
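The "local inversion of a forward model by minimisation of a cost function" used by the operational methods can be sketched as follows. The forward model here is an invented toy with a cos/cos(2x) harmonic structure, not the operational C-band model; the grid-search inversion recovers the wind vector that best reproduces three simulated beam measurements.

```python
import numpy as np

def forward_model(speed, direction, incidence=np.deg2rad(40.0)):
    """Toy forward model mapping a wind vector to backscatter (sigma0) for
    three antenna azimuths. NOT the operational C-band model -- an
    illustrative stand-in with a similar harmonic structure."""
    azimuths = np.deg2rad([45.0, 90.0, 135.0])  # fore/mid/aft beams
    chi = direction - azimuths
    return speed**0.6 * (1.0 + 0.3*np.cos(chi) + 0.4*np.cos(2*chi)) * np.cos(incidence)

# "Observed" sigma0 from a known wind vector, then invert by grid search
true_speed, true_dir = 8.0, np.deg2rad(60.0)
obs = forward_model(true_speed, true_dir)

speeds = np.linspace(2.0, 20.0, 181)
dirs = np.linspace(0.0, 2*np.pi, 361)
S, D = np.meshgrid(speeds, dirs, indexing="ij")
sigma = forward_model(S[..., None], D[..., None])  # broadcast over the 3 beams
cost = ((sigma - obs)**2).sum(axis=-1)             # sum-of-squares cost in measurement space
i, j = np.unravel_index(cost.argmin(), cost.shape)
print(speeds[i], np.rad2deg(dirs[j]))              # recovers ~8.0 m/s, ~60 deg
```

With realistic noise the cost surface typically has several comparable local minima in direction, which is exactly the ambiguity that motivates modelling the full conditional density instead of a single inverse.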
Abstract:
The deficiencies of stationary models applied to financial time series are well documented. A special form of non-stationarity, where the underlying generator switches between (approximately) stationary regimes, seems particularly appropriate for financial markets. We use dynamic switching (modelled by a hidden Markov model) combined with a linear dynamical system in a hybrid switching state space model (SSSM), and discuss the practical details of training such models with a variational EM algorithm due to Ghahramani and Hinton (1998). The performance of the SSSM is evaluated on several financial data sets and is shown to improve on a number of existing benchmark methods.
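The regime-switching idea behind the SSSM can be illustrated with the forward (filtering) recursion of a two-regime hidden Markov model. All parameter values below are hypothetical, and the full SSSM additionally couples a linear dynamical system to the switching variable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-regime model: a calm regime and a volatile regime.
A = np.array([[0.98, 0.02],
              [0.05, 0.95]])          # regime transition matrix
sigmas = np.array([0.5, 2.0])         # per-regime observation noise

# Simulate returns that switch regime halfway through the sample
states = np.r_[np.zeros(200, int), np.ones(200, int)]
y = rng.normal(0.0, sigmas[states])

def filter_regimes(y, A, sigmas, pi0=np.array([0.5, 0.5])):
    """Forward (filtering) recursion: P(state_t | y_1..t)."""
    alpha = pi0
    probs = np.empty((len(y), 2))
    for t, obs in enumerate(y):
        lik = np.exp(-0.5 * (obs / sigmas)**2) / (np.sqrt(2*np.pi) * sigmas)
        alpha = (alpha @ A) * lik     # predict, then weight by the likelihood
        alpha /= alpha.sum()          # normalise to a probability
        probs[t] = alpha
    return probs

probs = filter_regimes(y, A, sigmas)
# The filter should mostly assign regime 0 early and regime 1 late
print(probs[:200, 0].mean(), probs[200:, 1].mean())
```

In the variational EM setting, a recursion of this kind supplies the posterior over the switch variable, while a Kalman smoother handles the continuous state.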
Abstract:
A conventional neural network approach to regression problems approximates the conditional mean of the output vector. For mappings which are multi-valued this approach breaks down, since the average of two solutions is not necessarily a valid solution. In this article mixture density networks, a principled method to model conditional probability density functions, are applied to retrieving Cartesian wind vector components from satellite scatterometer data. A hybrid mixture density network is implemented to incorporate prior knowledge of the predominantly bimodal function branches. An advantage of a fully probabilistic model is that more sophisticated and principled methods can be used to resolve ambiguities.
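The failure mode described above, where the conditional mean of a multi-valued mapping is not itself a valid solution, is easy to demonstrate. In this sketch the inverse of x = t^2 has two branches, and a least-squares fit lands near zero, between them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Multi-valued inverse problem: x = t**2, so for each x there are two
# valid targets, t = +sqrt(x) and t = -sqrt(x), occurring equally often.
x = rng.uniform(0.0, 1.0, 2000)
sign = rng.choice([-1.0, 1.0], size=x.shape)
t = sign * np.sqrt(x)

# A least-squares polynomial fit approximates the conditional mean E[t | x]
coeffs = np.polyfit(x, t, deg=3)
pred = np.polyval(coeffs, 0.81)

# The conditional mean is ~0, but t = 0 is NOT a valid solution of t^2 = 0.81;
# the valid answers are +/- 0.9.
print(pred)
```

A mixture density network avoids this by outputting a full conditional density, whose two modes sit on the valid branches instead of averaging across them.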
Abstract:
This thesis consists of three empirical studies and one theoretical study. While China has received an increasing amount of foreign direct investment (FDI) and has become the second largest host country for FDI in recent years, the absence of comprehensive studies on FDI inflows into the country motivates this research. In the first study, an econometric model is developed to analyse the economic, political, cultural and geographic determinants of both pledged and realised FDI in China. The results of this study suggest that China's relatively cheaper labour force, high degree of international integration with the outside world (represented by its exports and imports) and bilateral exchange rates are the important economic determinants of both pledged and realised FDI in China. The second study analyses the regional distribution of both pledged and realised FDI within China. The econometric properties of the panel data set are examined using a standardised 't-bar' test. The empirical results indicate that provinces with higher levels of international trade, lower wage rates, more R&D manpower, more preferential policies and closer ethnic links with overseas Chinese attract relatively more FDI. The third study constructs a dynamic equilibrium model to study the interactions among FDI, knowledge spillovers and long-run economic growth in a developing country. The ideas of endogenous product cycles and trade-related international knowledge spillovers are modified and extended to FDI. The major conclusion is that, in the presence of FDI, economic growth is determined by the stock of human capital, the subjective discount rate and the knowledge gap, while unskilled labour cannot sustain growth. In the fourth study, the role of FDI in the growth process of the Chinese economy is investigated using a panel of data for 27 provinces across China between 1986 and 1995. In addition to FDI, domestic R&D expenditure, international trade and human capital are added to the standard convergence regressions to control for different structural characteristics in each province. The empirical results support endogenous innovation growth theory, in which regional per capita income can converge given technological diffusion, transfer and imitation.
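A standard convergence regression of the kind used in the fourth study can be sketched with synthetic data (the numbers below are invented, not the thesis data): a negative coefficient on log initial income, after controlling for FDI, indicates conditional convergence.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic province-level data, for illustration only:
n = 27
log_y0 = rng.normal(8.0, 0.6, n)     # log initial per-capita income
fdi    = rng.uniform(0.0, 0.1, n)    # FDI/GDP ratio (a control variable)
# Generate growth with built-in conditional convergence
growth = 0.40 - 0.04*log_y0 + 0.30*fdi + rng.normal(0, 0.005, n)

# Convergence regression: growth ~ const + log(initial income) + controls
X = np.column_stack([np.ones(n), log_y0, fdi])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
print(beta[1])   # negative coefficient => conditional convergence
```

In the actual study the control set is larger (domestic R&D, trade, human capital) and the data form a panel over 1986-1995, but the sign test on the initial-income coefficient is the same.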
Abstract:
This paper argues for the use of reusable simulation templates as a tool that can help to predict the effect of e-business introduction on business processes. First, a set of requirements for e-business modelling is introduced and modelling options are described. Traditional business process mapping techniques are examined as a way of identifying potential changes. Whilst paper-based process mapping may not highlight significant differences between traditional and e-business processes, simulation does allow the real effects of e-business to be identified. Simulation has the advantage of capturing the dynamic characteristics of the process, thus reflecting more accurately the changes in behaviour. This paper shows the value of using generic process maps as a starting point for collecting the data needed to build the simulation and proposes the use of reusable templates/components for the speedier building of e-business simulation models.
Abstract:
Data envelopment analysis (DEA) is defined with respect to observed units, by finding the distance of each unit to the boundary of the estimated production possibility set (PPS). Convexity is one of the underlying assumptions of the PPS. This paper shows some difficulties of using standard DEA models in the presence of input ratios and/or output ratios. The paper defines a new convexity assumption for the case where the data include a ratio variable, and then proposes a series of modified DEA models which are capable of rectifying this problem.
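A minimal input-oriented, constant-returns DEA model (the standard CCR formulation that the paper's modified models build on) can be written as a linear programme; the toy data below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR DEA score for unit o.
    min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0.
    Decision variables: [theta, lam_1 .. lam_n]."""
    m, n = X.shape            # m inputs, n units; Y is s outputs x n units
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                     # minimise theta
    A_ub = np.block([[-X[:, [o]], X],               # X lam - theta x_o <= 0
                     [np.zeros((s, 1)), -Y]])       # -Y lam <= -y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n       # lam >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun

# Toy data: 3 units, 1 input, 1 output
X = np.array([[2.0, 4.0, 4.0]])   # inputs
Y = np.array([[2.0, 4.0, 2.0]])   # outputs
scores = [ccr_efficiency(X, Y, o) for o in range(3)]
print(scores)   # units 1 and 2 are efficient (1.0); unit 3 scores 0.5
```

The paper's point is that when one of the columns of X or Y is itself a ratio, the convex combinations X @ lam implicitly formed by this programme are no longer meaningful, which is what the modified models correct.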
Abstract:
The major aim of this research is to benchmark top Arab banks using the Data Envelopment Analysis (DEA) technique and to compare the results with those recently published in Mostafa (2007a,b) [Mostafa, M. M. (2007a). Modeling the efficiency of top Arab banks: A DEA–neural network approach. Expert Systems with Applications, doi:10.1016/j.eswa.2007.09.001; Mostafa, M. M. (2007b). Benchmarking top Arab banks' efficiency through efficient frontier analysis. Industrial Management & Data Systems, 107(6), 802–823]. Data for 85 Arab banks were used to conduct the analysis of relative efficiency. Our findings indicate that (1) the efficiency scores of Arab banks reported in Mostafa (2007a,b) are incorrect, and hence readers should exercise extra caution in using such results; and (2) the corrected efficiency scores suggest that there is potential for significant improvements in Arab banks. In summary, this study addresses some data and methodology issues in measuring the efficiency of Arab banks and, in the light of the new results, highlights the importance of encouraging increased efficiency throughout the banking industry in the Arab world.
Abstract:
In the last two decades there have been substantial developments in the mathematical theory of inverse optimization problems, and their applications have expanded greatly. In parallel, time series analysis and forecasting have become increasingly important in various fields of research such as data mining, economics, business, engineering, medicine and politics. Despite the widespread use of linear programming in forecasting models, not a single application of inverse optimization has been reported in the forecasting literature for the case where time series data are available. The goal of this paper is therefore to introduce inverse optimization into the forecasting field, and to provide a streamlined approach to time series analysis and forecasting using inverse linear programming. An application is used to demonstrate the inverse forecasting approach developed in this study. © 2007 Elsevier Ltd. All rights reserved.
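One simplified form of the inverse problem can be sketched directly: given a polytope described by its vertices and an observed solution x0, find the smallest (max-norm) cost perturbation that makes x0 optimal. This vertex-based formulation is an illustrative simplification of general inverse linear programming, not the paper's method.

```python
import numpy as np
from scipy.optimize import linprog

def inverse_lp_cost(c, vertices, x0):
    """Minimally perturb cost c (in the max-norm) so that the observed vertex
    x0 becomes optimal for  min (c+d)'x  over the polytope given by its
    vertex list. Variables: [d_1..d_k, t]; minimise t subject to
    |d_i| <= t and (c+d)'(x0 - v) <= 0 for every vertex v."""
    k = len(c)
    V = np.asarray(vertices, float)
    diffs = x0 - V                        # one row per vertex
    # (c+d)'(x0 - v) <= 0   ->   diffs @ d <= -diffs @ c
    A1 = np.hstack([diffs, np.zeros((len(V), 1))])
    b1 = -diffs @ c
    I = np.eye(k)
    A2 = np.hstack([I, -np.ones((k, 1))])   #  d_i - t <= 0
    A3 = np.hstack([-I, -np.ones((k, 1))])  # -d_i - t <= 0
    A_ub = np.vstack([A1, A2, A3])
    b_ub = np.r_[b1, np.zeros(2 * k)]
    obj = np.r_[np.zeros(k), 1.0]           # minimise t
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (k + 1))
    return res.x[:k]

# Toy example: unit-square polytope; the observed solution is vertex (1, 0),
# but under c = (1, 1) the optimum is (0, 0), so c must be adjusted.
vertices = [(0, 0), (1, 0), (0, 1), (1, 1)]
d = inverse_lp_cost(np.array([1.0, 1.0]), vertices, np.array([1.0, 0.0]))
print(d)   # a minimal correction making (1, 0) optimal
```

In the inverse-forecasting setting, the observed "solution" is the realised time series, and the recovered parameters become the forecasting model.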
Abstract:
Signal integration determines cell fate at the cellular level, affects cognitive processes and affective responses at the behavioural level, and is likely to be involved in the psychoneurobiological processes underlying mood disorders. Interactions between stimuli may be subject to time effects. Time-dependencies of interactions between stimuli typically lead to complex cell responses and complex responses at the behavioural level. We show that both three-factor models and time series models can be used to uncover such time-dependencies. However, we argue that for short longitudinal data the three-factor modelling approach is more suitable. To illustrate both approaches, we re-analysed previously published short longitudinal data sets. We found that in human embryonic kidney 293 (HEK293) cells the interaction effect in the regulation of extracellular signal-regulated kinase (ERK) 1 signalling activation by insulin and epidermal growth factor is subject to a time effect and decays dramatically at peak values of ERK activation. In contrast, we found that the interaction effect induced by hypoxia and tumour necrosis factor-alpha on the transcriptional activity of the human cyclo-oxygenase-2 promoter in HEK293 cells is time-invariant, at least in the first 12-hour window after stimulation. Furthermore, we applied the three-factor model to previously reported animal studies, in which memory storage was found to be subject to an interaction effect of the beta-adrenoceptor agonist clenbuterol and certain antagonists acting on the alpha-1-adrenoceptor/glucocorticoid-receptor system. Our model-based analysis suggests that the interaction effect is relevant only if the antagonist drug is administered within a critical time window.
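The three-factor (stimulus A x stimulus B x time) logic can be sketched with simulated cell responses in a 2x2x2 design: the A:B interaction contrast is computed at each time point, and the difference between the two is the three-way term that captures a decaying interaction. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2x2x2 design: factors A (stimulus 1), B (stimulus 2), T (time).
# The A:B interaction is built to be present early (T=0) and absent late (T=1),
# i.e. the interaction itself is subject to a time effect.
def cell_mean(a, b, t):
    synergy = 2.0 if t == 0 else 0.0      # A:B synergy decays over time
    return a + b + synergy * a * b

reps = 50
data = {}
for a in (0, 1):
    for b in (0, 1):
        for t in (0, 1):
            data[a, b, t] = cell_mean(a, b, t) + rng.normal(0, 0.3, reps)

means = {key: vals.mean() for key, vals in data.items()}
# A:B interaction contrast at each time point
ab = lambda t: (means[1, 1, t] - means[1, 0, t]) - (means[0, 1, t] - means[0, 0, t])
# Three-way (A:B:T) contrast: the change of the A:B interaction over time
abt = ab(1) - ab(0)
print(ab(0), ab(1), abt)   # interaction ~2 early, ~0 late, so A:B:T ~ -2
```

A non-zero three-way contrast is precisely the signature the paper looks for when deciding whether an interaction is time-dependent.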
Abstract:
The modelling of mechanical structures using finite element analysis has become an indispensable stage in the design of new components and products. Once the theoretical design has been optimised a prototype may be constructed and tested. What can the engineer do if the measured and theoretically predicted vibration characteristics of the structure are significantly different? This thesis considers the problems of changing the parameters of the finite element model to improve the correlation between a physical structure and its mathematical model. Two new methods are introduced to perform the systematic parameter updating. The first uses the measured modal model to derive the parameter values with the minimum variance. The user must provide estimates for the variance of the theoretical parameter values and the measured data. Previous authors using similar methods have assumed that the estimated parameters and measured modal properties are statistically independent. This will generally be the case during the first iteration but will not be the case subsequently. The second method updates the parameters directly from the frequency response functions. The order of the finite element model of the structure is reduced as a function of the unknown parameters. A method related to a weighted equation error algorithm is used to update the parameters. After each iteration the weighting changes so that on convergence the output error is minimised. The suggested methods are extensively tested using simulated data. An H frame is then used to demonstrate the algorithms on a physical structure.
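The spirit of iterative parameter updating can be sketched on a toy two-degree-of-freedom model: a Gauss-Newton loop adjusts one stiffness parameter until the model's natural frequencies match "measured" ones. This omits the variance weighting and the frequency-response-function formulation that the thesis actually develops.

```python
import numpy as np

def frequencies(k):
    """Natural frequencies (rad/s) of a 2-mass chain with springs k and 1.0
    and unit masses (M = I, so the eigenvalues of K are omega^2)."""
    K = np.array([[k + 1.0, -1.0],
                  [-1.0,     1.0]])
    w2 = np.linalg.eigvalsh(K)
    return np.sqrt(w2)

k_true = 2.5
measured = frequencies(k_true)           # stand-in for measured modal data

k = 1.0                                  # initial theoretical estimate
for _ in range(20):                      # Gauss-Newton on the frequency residual
    r = frequencies(k) - measured
    h = 1e-6
    J = (frequencies(k + h) - frequencies(k)) / h   # finite-difference Jacobian
    k -= np.linalg.lstsq(J[:, None], r, rcond=None)[0][0]
print(k)    # converges to ~2.5
```

With more parameters than measured modes the update becomes ill-posed, which is why the thesis methods bring in parameter-variance estimates to regularise the step.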
Abstract:
Benchmarking techniques have evolved over the years since Xerox's pioneering visits to Japan in the late 1970s, and the focus of benchmarking has also shifted during this period. Tracing in detail the evolution of benchmarking in one specific area of business activity, supply and distribution management, as seen by the participants in that evolution, creates a picture of a movement from single-function, cost-focused, competitive benchmarking, through cross-functional, cross-sectoral, value-oriented benchmarking, to process benchmarking. As process efficiency and effectiveness become the primary foci of benchmarking activities, the measurement parameters used to benchmark performance converge with the factors used in business process modelling. The possibility is therefore emerging of modelling business processes and then feeding the models with actual data from benchmarking exercises. This would overcome the most common criticism of benchmarking, namely that it intrinsically lacks the ability to move beyond current best practice. In fact, the combined power of modelling and benchmarking may prove to be the basic building block of informed business process re-engineering.
Abstract:
Financial prediction has attracted a lot of interest due to the financial implications that the accurate prediction of financial markets can have. A variety of data-driven modelling approaches have been applied, but their performance has produced mixed results. In this study we apply both parametric (neural networks with active neurons) and nonparametric (analog complexing) self-organising modelling methods for the daily prediction of the exchange rate market. We also propose a combined approach in which the parametric and nonparametric self-organising methods are combined sequentially, exploiting the advantages of the individual methods with the aim of improving their performance. The combined method is found to produce promising results and to outperform the individual methods when tested on two exchange rates: the American Dollar and the Deutsche Mark against the British Pound.
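The nonparametric "analog" idea can be sketched as nearest-pattern matching: predict the next value of a series by locating the most similar past pattern and reusing its continuation. This is a minimal stand-in for analog complexing; the study combines such a nonparametric step sequentially with a parametric self-organising network.

```python
import numpy as np

def analog_forecast(series, window=10):
    """Find the past window most similar to the latest one and return the
    value that followed it (the continuation of the best analog)."""
    pattern = series[-window:]
    best, best_d = None, np.inf
    for i in range(len(series) - 2 * window):
        cand = series[i:i + window]
        d = np.sum((cand - pattern)**2)
        if d < best_d:
            best, best_d = i, d
    return series[best + window]

t = np.arange(400)
series = np.sin(2 * np.pi * t / 50)      # toy periodic "exchange rate"
pred = analog_forecast(series[:-1])      # forecast the held-out last point
print(pred, series[-1])                  # the analog continuation matches
```

Real exchange rates are far less regular than this sine wave, which is why the study evaluates analogs alongside, and combined with, the parametric networks.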
Abstract:
This thesis presents the results of numerical modelling of the propagation of dispersion managed solitons. The theory of optical pulse propagation in single-mode optical fibre is introduced, looking specifically at the use of optical solitons for fibre communications. The numerical technique used to solve the nonlinear Schrödinger equation is also introduced. Recent developments in the use of dispersion managed solitons are reviewed before the numerical results are presented. The work in this thesis covers two main areas: (i) the use of a saturable absorber to control the propagation of dispersion managed solitons, and (ii) the upgrade of the installed standard fibre network to higher data rates through the use of solitons and dispersion management. Saturable absorbers can be used to suppress the build-up of noise and dispersive radiation in soliton transmission lines. The use of saturable absorbers in conjunction with dispersion management has been investigated both for a single pulse and for the transmission of a 10 Gbit/s data pattern. It is found that this system supports a new regime of stable soliton pulses with significantly increased powers. The upgrade of the installed standard fibre network to higher data rates through the use of fibre amplifiers and dispersion management is of increasing interest. In this thesis the propagation of data at both 10 Gbit/s and 40 Gbit/s is studied. Propagation over transoceanic distances is shown to be possible for 10 Gbit/s transmission, and over more than 2000 km at 40 Gbit/s. The contribution of dispersion managed solitons to the future of optical communications is discussed in the thesis conclusions.
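The numerical technique for the nonlinear Schrödinger equation is typically the split-step Fourier method; a minimal sketch in soliton units (without the dispersion map, loss, amplifiers or saturable absorbers studied in the thesis) shows the fundamental soliton propagating with its shape preserved.

```python
import numpy as np

# Split-step Fourier integration of the NLSE  i u_z + (1/2) u_tt + |u|^2 u = 0
# in soliton units; the fundamental soliton u = sech(t) should propagate with
# only a phase rotation.
nt, tmax = 1024, 20.0
t = np.linspace(-tmax, tmax, nt, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])   # angular frequencies

u = 1.0 / np.cosh(t)                     # fundamental soliton
dz, steps = 0.01, 1000                   # propagate to z = 10
half_linear = np.exp(-0.5j * k**2 * dz / 2)   # dispersion over half a step
for _ in range(steps):                   # symmetric (Strang) splitting
    u = np.fft.ifft(half_linear * np.fft.fft(u))   # half linear step
    u = u * np.exp(1j * np.abs(u)**2 * dz)         # full nonlinear step
    u = np.fft.ifft(half_linear * np.fft.fft(u))   # half linear step

error = np.max(np.abs(np.abs(u) - 1.0 / np.cosh(t)))
print(error)    # the pulse envelope is preserved to high accuracy
```

Dispersion management enters this scheme simply by making the coefficient of the k^2 term a function of z, following the fibre's dispersion map.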