29 results for Autoregressive-Moving Average model

in Aston University Research Archive


Relevance: 100.00%

Abstract:

Technology changes rapidly over the years, continuously providing more computing options and making economic and other transactions easier. However, the introduction of new technology "pushes" old Information and Communication Technology (ICT) products out of use. E-waste is defined as the quantity of ICT products no longer in use, and is a bivariate function of the quantities sold and the probability that a given quantity of computers will be regarded as obsolete. In this paper, an e-waste generation model is presented and applied to the following regions: Western and Eastern Europe, Asia/Pacific, Japan/Australia/New Zealand, and North and South America. Furthermore, cumulative computer sales were retrieved for selected countries of these regions in order to compute obsolete computer quantities. To provide robust forecasts, a selection of forecasting models, namely (i) Bass, (ii) Gompertz, (iii) Logistic, (iv) Trend, (v) Level, (vi) AutoRegressive Moving Average (ARMA), and (vii) Exponential Smoothing, was applied, selecting for each country the model that yields the lowest error indices (Mean Absolute Error and Mean Squared Error) in the in-sample estimation. As new technology does not diffuse through all regions of the world at the same speed, owing to different socio-economic factors, the lifespan distribution, which gives the probability of a certain quantity of computers being considered obsolete, is not adequately modeled in the literature. The time horizon for the forecasts is 2014-2030, and the results show a very sharp increase in the USA and the United Kingdom, due to decreasing computer lifespans and increasing sales.
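The per-country model-selection step described above can be sketched as follows. This is an illustrative example only: the candidate fitted series and the sales figures are hypothetical, not the paper's data, and only two of the seven candidate models are shown.

```python
# Sketch: pick, per country, the candidate model with the lowest in-sample
# error (MAE, ties broken by MSE), as the abstract describes.

def mae(actual, fitted):
    return sum(abs(a - f) for a, f in zip(actual, fitted)) / len(actual)

def mse(actual, fitted):
    return sum((a - f) ** 2 for a, f in zip(actual, fitted)) / len(actual)

def best_model(actual, fitted_by_model):
    """Return the model name with the smallest MAE (ties broken by MSE)."""
    return min(fitted_by_model,
               key=lambda m: (mae(actual, fitted_by_model[m]),
                              mse(actual, fitted_by_model[m])))

sales = [10, 12, 15, 19, 24]          # hypothetical cumulative sales series
fits = {
    "Trend": [9, 12, 15, 18, 21],     # hypothetical in-sample fits
    "Level": [16, 16, 16, 16, 16],
}
print(best_model(sales, fits))        # -> Trend
```

The same comparison would be run once per country, over all seven candidate models.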

Relevance: 100.00%

Abstract:

In this paper, the exchange rate forecasting performance of neural network models is evaluated against random walk, autoregressive moving average and generalised autoregressive conditional heteroskedasticity models. No guidelines are available for choosing the parameters of neural network models, so the parameters are chosen according to what the researcher considers best. Such an approach, however, implies that the risk of making bad decisions is extremely high, which could explain why, in many studies, neural network models do not consistently outperform their time series counterparts. In this paper, through extensive experimentation, the level of subjectivity in building neural network models is considerably reduced, giving them a better chance of performing well. The results show that, in general, neural network models outperform the traditionally used time series models in forecasting exchange rates.
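The random walk benchmark mentioned above is simple to state: the forecast for tomorrow's rate is today's rate, and a competing model is only worthwhile if it beats the benchmark's error. A minimal sketch, using a hypothetical rate series rather than the paper's data:

```python
# Random-walk benchmark for exchange rate forecasting: the one-step-ahead
# forecast at time t is simply the observed rate at time t-1.

def random_walk_forecasts(rates):
    """Forecasts for rates[1:], each equal to the previous observation."""
    return rates[:-1]

def mae(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

rates = [1.30, 1.31, 1.29, 1.32, 1.33]   # hypothetical daily exchange rates
rw = random_walk_forecasts(rates)        # forecasts for rates[1:]
print(round(mae(rates[1:], rw), 4))      # -> 0.0175
```

Any neural network or time series model evaluated in the paper would be compared against a baseline error of this kind.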

Relevance: 100.00%

Abstract:

National meteorological offices are largely concerned with synoptic-scale forecasting, where weather predictions are produced for a whole country for 24 hours ahead. In practice, many local organisations (such as emergency services, construction industries, forestry, farming, and sports) require only local, short-term, bespoke weather predictions and warnings. This thesis shows that these less demanding requirements do not call for exceptional computing power and can be met by a modern desk-top system which monitors site-specific ground conditions (such as temperature, pressure, and wind speed and direction) augmented with above-ground information from satellite images to produce `nowcasts'. The emphasis in this thesis is on the design of such a real-time system for nowcasting. Local site-specific conditions are monitored using a custom-built, stand-alone, Motorola 6809 based sub-system. Above-ground information is received from the METEOSAT 4 geostationary satellite using a sub-system based on commercially available equipment. The information is ephemeral and must be captured in real time. The real-time nowcasting system for localised weather handles the data as a transparent task using the limited capabilities of the PC system. Ground data produce a time series of measurements at a specific location, representing the past-to-present atmospheric conditions of the particular site, from which much information can be extracted. The novel approach adopted in this thesis is one of constructing stochastic models based on the AutoRegressive Integrated Moving Average (ARIMA) technique. The satellite images contain features (such as cloud formations) which evolve dynamically and may be subject to movement, growth, distortion, bifurcation, superposition, or elimination between images.
The process of extracting a weather feature, following its motion and predicting its future evolution involves algorithms for normalisation, partitioning, filtering, image enhancement, and correlation of multi-dimensional signals in different domains. To limit the processing requirements, the analysis in this thesis concentrates on an `area of interest'. By this rationale, only a small fraction of the total image needs to be processed, leading to a major saving in time. The thesis also proposes an extension to an existing manual cloud classification technique for its implementation in automatically classifying a cloud feature over the `area of interest' for nowcasting using the multi-dimensional signals.
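The idea of modelling a site's ground-data time series stochastically can be illustrated with the simplest member of the ARIMA family, an AR(1) fitted by least squares. This is a hedged sketch only: the thesis uses full ARIMA models, and the temperature readings below are invented.

```python
# Minimal AR(1) sketch of the ground-data modelling idea: fit
# x[t] ~ c + phi * x[t-1] to a site's readings, then nowcast the next value.

def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1] + noise."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    c = my - phi * mx
    return c, phi

temps = [12.0, 12.5, 13.1, 13.4, 13.9, 14.1]  # hypothetical hourly readings
c, phi = fit_ar1(temps)
nowcast = c + phi * temps[-1]                 # one-step-ahead prediction
print(nowcast > temps[-1])                    # rising series -> True
```

Differencing (the "I" in ARIMA) and higher-order AR/MA terms would be layered on top of this basic recursion.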

Relevance: 100.00%

Abstract:

To carry out stability and voltage regulation studies on more-electric aircraft systems, in which there is a preponderance of multi-pulse, rectifier-fed motor-drive equipment, average dynamic models of the rectifier converters are required. Existing methods are difficult to apply to anything other than single converters with a low pulse number. An efficient, compact method is therefore presented for deriving the approximate, linear, average model of 6- and 12-pulse rectifiers, based on the assumption of a small overlap angle. The models are validated against detailed simulations and laboratory prototypes.

Relevance: 30.00%

Abstract:

This paper presents a technique for forecasting forward electricity/gas prices one day ahead. The technique combines a Kalman filter (KF) with a generalised autoregressive conditional heteroskedasticity (GARCH) model (often used in financial forecasting). The GARCH model is used to compute the next value of a time series, while the KF updates the parameters of the GARCH model whenever a new observation becomes available. The technique is applied to real data from the UK energy markets to evaluate its performance, and the results show that forecasting accuracy is improved significantly by the hybrid model. The methodology can also be applied to forecasting market clearing prices and electricity/gas loads.
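The GARCH component of the hybrid model follows the standard GARCH(1,1) variance recursion. The sketch below shows that recursion only; the Kalman-filter updating of (omega, alpha, beta) is omitted, and the parameter values are hypothetical, not estimates from UK energy data.

```python
# GARCH(1,1) one-step-ahead conditional variance:
#   sigma^2[t+1] = omega + alpha * r[t]^2 + beta * sigma^2[t]
# where r[t] is the latest return (price innovation).

def garch_next_variance(omega, alpha, beta, last_return, last_variance):
    """Return the GARCH(1,1) forecast of the next conditional variance."""
    return omega + alpha * last_return ** 2 + beta * last_variance

v = garch_next_variance(omega=0.1, alpha=0.1, beta=0.8,
                        last_return=2.0, last_variance=1.5)
print(round(v, 2))  # -> 1.7
```

In the hybrid scheme, each new observation would first be used by the KF to revise omega, alpha and beta before this recursion produces the next forecast.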

Relevance: 30.00%

Abstract:

It is well known that even slight changes in nonuniform illumination lead to large image variability and are crucial for many visual tasks. This paper presents a new ICA-related probabilistic model, in which the number of sources exceeds the number of sensors, that performs image segmentation and illumination removal simultaneously. We model illumination and reflectance in log space by a generalized autoregressive process and a hidden Gaussian Markov random field, respectively. The model's ability to segment illuminated images is compared with a Canny edge detector and homomorphic filtering. We apply the model to two problems: synthetic image segmentation and sea-surface pollution detection from intensity images.

Relevance: 30.00%

Abstract:

Multi-agent algorithms inspired by the division of labour in social insects are applied to a problem of distributed mail retrieval in which agents must visit mail-producing cities and choose between mail types under certain constraints. The efficiency (i.e. the average amount of mail retrieved per time step) and the flexibility (i.e. the capability of the agents to react to changes in the environment) are investigated in both static and dynamic environments. New rules for mail selection and specialisation are introduced and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ a genetic algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation. From a more theoretical point of view, most results are obtained for large population sizes in order to avoid finite-size effects; however, we do analyse the influence of population size on performance. Furthermore, we critically analyse the causes of efficiency loss, derive the exact dynamics of the model in the large-system limit under certain conditions, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.

Relevance: 30.00%

Abstract:

This paper assesses the extent to which the equity markets of Hungary, Poland, the Czech Republic and Russia have become less segmented. Using a variety of tests, it is shown that there has been a consistent increase in the co-movement of some Eastern European markets and developed markets. Using the variance decompositions from a vector autoregressive representation of returns, it is shown that for Poland and Hungary global factors have an increasing influence on equity returns, suggestive of increased equity market integration. We model a system of bivariate equity market correlations as a smooth transition logistic trend model in order to establish how rapidly the countries of Eastern Europe are moving away from market segmentation. We find that Hungary is the country becoming integrated most quickly. © 2005 Elsevier Ltd. All rights reserved.
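The smooth transition logistic trend idea can be sketched directly: a correlation drifts between a low (segmented) and a high (integrated) regime, with the speed and timing of the transition governed by a logistic function of time. All parameter values below are illustrative, not the paper's estimates.

```python
# Smooth transition logistic trend for a bivariate market correlation:
# rho(t) = low + (high - low) * S(t), with S(t) a logistic function of time.
import math

def logistic_transition(t, gamma, midpoint):
    """S(t) in [0, 1]: transition speed gamma, centred at `midpoint`."""
    return 1.0 / (1.0 + math.exp(-gamma * (t - midpoint)))

def correlation(t, low, high, gamma, midpoint):
    """Correlation drifting from `low` (segmented) to `high` (integrated)."""
    return low + (high - low) * logistic_transition(t, gamma, midpoint)

# Early in the sample the correlation sits near `low`; late, near `high`.
print(round(correlation(0, 0.1, 0.7, gamma=0.5, midpoint=50), 3))    # -> 0.1
print(round(correlation(100, 0.1, 0.7, gamma=0.5, midpoint=50), 3))  # -> 0.7
```

A larger estimated gamma would indicate a faster move away from segmentation, which is how the paper ranks Hungary as integrating most quickly.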

Relevance: 30.00%

Abstract:

This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven-variable system comprising, in turn, daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non-stationary, stationary and error-correction forms and are estimated using OLS. Imposing Bayesian priors in the OLS estimations also allowed us to obtain another set of results. We find some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method, whether or not the data sets contain outliers. Also, the best forecasts under the non-stationary specification outperformed those of the stationary and error-correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error-correction specifications are generally similar. The findings for the OLS forecasts are consistent with recent simulation results. Overall, the predictive ability of the VARs is very weak.

Relevance: 30.00%

Abstract:

Combined bioreaction-separation studies have been carried out for the first time on a moving-port semi-continuous counter-current chromatographic reactor-separator (SCCR-S1) consisting of twelve 5.4 cm i.d. x 75 cm long columns packed with calcium-charged cross-linked polystyrene resin (KORELA V07C). The inversion of sucrose to glucose and fructose in the presence of the enzyme invertase, and the biochemical synthesis of dextran and fructose from sucrose in the presence of the enzyme dextransucrase, were investigated. A dilute stream of the appropriate enzyme in deionised water was used as the eluent. The effects of switch time, feed concentration, enzyme activity, eluent rate and enzyme-to-feed concentration ratio on the combined bioreaction-separation were investigated. For the invertase reaction, complete conversion was achieved at a 20.77% w/v sucrose feed concentration. The enzyme usage was 34% of the theoretical amount needed to convert an equivalent amount of sucrose over the same time period in a conventional fermenter. The fructose-rich (FRP) and glucose-rich (GRP) product purities obtained were over 90%. By operating at a 35% w/v sucrose feed concentration and employing product splitting and recycling techniques, the total concentration and purity of the GRP increased from 3.2% w/v to 4.6% w/v and from 92.3% to 95%, respectively. The FRP concentration also increased from 1.82% w/v to 2.88% w/v. A mathematical model was developed for the combined reaction-separation and used to simulate the continuous inversion of sucrose and product separation in the SCCR-S1. In the dextran biosynthesis studies, 52% conversion of a 2% w/v sucrose feed was achieved, and an average dextran molecular weight of 4 million was obtained in the dextran-rich (DRP) product stream. The enzyme dextransucrase was purified successfully using centrifugation and ultrafiltration techniques.

Relevance: 30.00%

Abstract:

Practitioners and academics are in broad agreement that, above all, organizations need to be able to learn, to innovate and to question existing ways of working. This thesis develops a model to take into account, firstly, what determines whether or not organizations endorse practices designed to facilitate learning. Secondly, the model evaluates the impact of such practices upon organizational outcomes, measured in terms of product and technological innovation. Researchers have noted that organizations committed to producing innovation show great resilience in dealing with adverse business conditions (e.g. Pavitt, 1991; Leonard-Barton, 1998). In effect, such organizations bear many of the characteristics associated with the achievement of ‘learning organization’ status (Garvin, 1993; Pedler, Burgoyne & Boydell, 1999; Senge, 1990). Seven studies are presented to support this theoretical framework. The first empirical study explores the antecedents to effective learning. The three following studies present data to suggest that people-management practices are highly significant in determining whether or not organizations are able to produce sustained innovation. The thesis goes on to explore the relationship between organizational-level job satisfaction, learning and innovation, and provides evidence of a strong, positive relationship between these variables. The final two chapters analyze learning and innovation within two similar manufacturing organizations: one manifests relatively low levels of innovation, whilst the other is generally considered to be outstandingly innovative. I present a comparative framework for exploring the different approaches to learning manifested by the two organizations. The thesis concludes by assessing the extent to which the theoretical model presented in the second chapter is borne out by the findings of the study.
Whilst this is a relatively new field of inquiry, the findings reveal that organizations have a much stronger chance of producing sustained innovation where they manage people proactively and where people perceive themselves to be satisfied at work. Few studies to date have presented empirical evidence to substantiate theoretical endorsements of higher-order learning, so this research makes an important contribution to the existing literature in this field.

Relevance: 30.00%

Abstract:

Despite the difficulties that we have regarding the use of English in tertiary education in Turkey, we argue that it is necessary for those involved to study in the medium of English. Furthermore, significant advances have been made on this front. These efforts have been for the most part language-oriented, but also include research into needs analysis and the pedagogy of team-teaching. Considering the current situation at this level of education, however, there still seems to be more to do. And the question is, what more can we do? What further contribution can we make? Or, how can we take this process further? The purpose of the study reported here is to respond to this last question. We test the proposition that it is possible to take this process further by investigating the efficient management of transition from Turkish-medium to English-medium at the tertiary level of education in Turkey. Beyond what is achieved by only the language orientation of the EAP approach, and moving conceptually deeper than what has been achieved by the team-teaching approach, the research undertaken for the purpose of this study focuses on the idea of the discourse community that people want to belong to. It then pursues an adaptation of the essentially psycho-social approach of apprenticeship, as people become aspirants and apprentices to that discourse community. In this thesis, the researcher recognises that she cannot follow all the way through to the full implementation of her ideas in a fully-taught course. She is not in a position to change the education system. What she does here is to introduce a concept and sample its effects in terms of motivation, and thereby of integration and of success, for individuals and groups of learners. 
Evaluation is provided by acquiring both qualitative and quantitative data concerning mature members' perceptions of apprenticed-neophytes functioning as members in the new community, apprenticed-neophytes' perceptions of their own membership and of the preparation process undertaken, and the comparison of these neophytes' performance with that of other neophytes in the community. The data obtained provide strong evidence in support of the potential usefulness of this apprenticeship model towards the declared purpose of improving the English-medium tertiary education of Turkish students in their chosen fields of study.

Relevance: 30.00%

Abstract:

Molecular dynamics (MD) has been used to identify the relative distribution of dysprosium in the phosphate glass DyAl0.30P3.05O9.62. The MD model has been compared directly with experimental data obtained from neutron diffraction, enabling a detailed comparison beyond the total structure factor level. The MD simulation gives Dy-Dy correlations at 3.80(5) and 6.40(5) angstrom with relative coordination numbers of 0.8(1) and 7.3(5), thus providing evidence of minority rare-earth clustering within these glasses. The nearest-neighbour Dy-O peak occurs at 2.30 angstrom, with each Dy atom having on average 5.8 nearest-neighbour oxygen atoms. The MD simulation is consistent with the phosphate network model based on interlinked PO4 tetrahedra, in which the addition of the network modifier Dy3+ depolymerizes the phosphate network through the breakage of P-O-P bonds whilst leaving the tetrahedral units intact. The role of aluminium within the network has been taken into explicit account, and Al is found to be predominantly (78%) tetrahedrally coordinated. In fact, all four Al bonds are found to be to P (via an oxygen atom), with negligible amounts of Al-O-Dy bonds present. This provides an important insight into the role of Al additives in improving the mechanical properties of these glasses.

Relevance: 30.00%

Abstract:

A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task allocation in which cities produce and store batches of different mail types. Agents must collect and process the mail batches without global knowledge of their environment or communication between agents. The problem is constrained so that agents are penalised for switching mail types: when an agent processes a mail batch of a different type from the previous one, it must undergo a change-over, with repeated change-overs rendering the agent inactive. The efficiency (average amount of mail retrieved) and the flexibility (ability of the agents to react to changes in the environment) are investigated in both static and dynamic environments, including sudden changes. New rules for mail selection and specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ an evolutionary algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation.
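A mail-selection rule in the spirit of these division-of-labour algorithms can be sketched with a response-threshold model: an agent responds to a mail type with probability that grows with the stimulus (waiting mail) and shrinks with the agent's threshold for that type, so specialists rarely incur change-overs. This is a generic illustration under assumed names and numbers, not the specific rules proposed in the paper.

```python
# Response-threshold mail selection: respond to type with probability
# s^2 / (s^2 + theta^2), where s is the stimulus and theta the threshold.
import random

def choose_mail_type(stimuli, thresholds, rng=random):
    """Repeatedly sample a type until its threshold response fires."""
    types = list(stimuli)
    while True:
        t = rng.choice(types)
        s, theta = stimuli[t], thresholds[t]
        if rng.random() < s * s / (s * s + theta * theta):
            return t

rng = random.Random(0)
stimuli = {"letters": 5.0, "parcels": 1.0}     # waiting mail per type
thresholds = {"letters": 1.0, "parcels": 5.0}  # low threshold = specialist
picks = [choose_mail_type(stimuli, thresholds, rng) for _ in range(1000)]
print(picks.count("letters") > picks.count("parcels"))  # specialist bias -> True
```

Lowering an agent's threshold for the type it last processed is one simple way to encode the specialisation that avoids repeated change-overs.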

Relevance: 30.00%

Abstract:

The IEEE 802.11 standard has achieved huge success in the past decade and is still under development to provide higher physical data rates and better quality of service (QoS). An important problem for the development and optimization of IEEE 802.11 networks is the modeling of the MAC-layer channel access protocol. Although there are already many theoretical analyses of the 802.11 MAC protocol in the literature, most models focus on saturated traffic and assume an infinite buffer at the MAC layer. In this paper we develop a unified analytical model for the IEEE 802.11 MAC protocol in ad hoc networks. The impacts of the channel access parameters, traffic rate and buffer size at the MAC layer are modeled with the assistance of a generalized Markov chain and an M/G/1/K queue model, from which the throughput, packet delivery delay and dropping probability can be obtained. Extensive simulations show the analytical model to be highly accurate. The model shows that, for practical buffer configurations (e.g. buffer size larger than one), we can maximize the total throughput and reduce the packet blocking probability (due to limited buffer size) and the average queuing delay to zero by effectively controlling the offered load. The average MAC-layer service delay, as well as its standard deviation, is also much lower than in saturated conditions and has an upper bound. It is also observed that the optimal load is very close to the maximum achievable throughput regardless of the number of stations or the buffer size. Moreover, the model is scalable to performance analysis of 802.11e in unsaturated conditions and of 802.11 ad hoc networks with heterogeneous traffic flows. © 2012 KSI.
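The finite-buffer effect at the heart of the model can be illustrated with a simpler relative of the paper's M/G/1/K queue: the M/M/1/K blocking probability, which shows how buffer size K and offered load rho jointly determine the packet-dropping probability. This is a hedged stand-in with exponential service, not the paper's actual MAC service-time distribution.

```python
# M/M/1/K blocking probability: the chance an arriving packet finds all
# K slots of the finite MAC buffer occupied and is dropped.

def mm1k_blocking(rho, K):
    """P(arrival is blocked) for an M/M/1/K queue with offered load rho."""
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1 - rho) * rho ** K / (1 - rho ** (K + 1))

# At a fixed offered load (rho = 0.8), drops fall sharply as K grows.
for K in (1, 5, 20):
    print(K, round(mm1k_blocking(0.8, K), 4))
```

The same qualitative behaviour — blocking driven to negligible levels once the offered load is controlled below capacity — is what the abstract reports for the full M/G/1/K-based model.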