854 results for Autoregressive moving average (ARMA)
Abstract:
This paper presents a method for predicting moving/moving and moving/static collisions between objects in a virtual environment. Feasible real-time prediction is obtained by enclosing moving objects within bounding spheres and static objects within convex polygons. Fast solutions are then attainable by describing each object's motion parametrically in time as a polynomial.
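For the common case of bounding spheres moving with constant velocity, the parametric-polynomial formulation reduces to solving a quadratic in time. A minimal sketch of that special case (the function name and the constant-velocity assumption are illustrative, not taken from the paper):

```python
import math

def sphere_collision_time(p1, v1, r1, p2, v2, r2):
    """Earliest t >= 0 at which two spheres with constant velocities
    touch, or None if they never collide.

    Relative motion gives |dp + t*dv|^2 = (r1 + r2)^2, a quadratic in t.
    """
    dp = [a - b for a, b in zip(p1, p2)]   # relative position
    dv = [a - b for a, b in zip(v1, v2)]   # relative velocity
    R = r1 + r2
    a = sum(c * c for c in dv)
    b = 2.0 * sum(p * v for p, v in zip(dp, dv))
    c = sum(p * p for p in dp) - R * R
    if c <= 0:            # already overlapping
        return 0.0
    if a == 0:            # no relative motion
        return None
    disc = b * b - 4 * a * c
    if disc < 0:          # paths never come within R of each other
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)  # earlier root = first contact
    return t if t >= 0 else None
```

Higher-degree polynomial motion (acceleration, splines) leads the same derivation to a higher-degree polynomial root-finding problem, which is where the paper's parametric formulation pays off.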
Abstract:
Nanoscience and technology (NST) are widely cited to be the defining technology for the 21st century. In recent years, the debate surrounding NST has become increasingly public, with much of this interest stemming from two radically opposing long-term visions of a NST-enabled future: ‘nano-optimism’ and ‘nano-pessimism’. This paper demonstrates that NST is a complex and wide-ranging discipline, the future of which is characterised by uncertainty. It argues that consideration of the present-day issues surrounding NST is essential if the public debate is to move forwards. In particular, the social constitution of an emerging technology is crucial if any meaningful discussion surrounding costs and benefits is to be realised. An exploration of the social constitution of NST raises a number of issues, of which unintended consequences and the interests of those who own and control new technologies are highlighted.
Abstract:
A prediction mechanism is necessary in human visual motion processing to compensate for delays in the sensory-motor system. A previous study discussed "proactive control" as one example of the human predictive function, in which hand motion preceded a virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently visible tracking experiment in which a circular orbit is segmented into target-visible and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli is shortened by more than 10%. This shortening of the rhythm's period accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.
Abstract:
This chapter explores some of the implications of adopting a research approach that focuses on people and their livelihoods in the rice-wheat system of the Indo-Gangetic Plains. We draw on information from a study undertaken by the authors in Bangladesh and then consider the transferability of our findings to other situations. We conclude that if our research is to bridge the researcher-farmer interface, ongoing technical research must be supported by research that explores how institutional, policy, and communication strategies determine livelihood outcomes. The challenge that now faces researchers is to move beyond their involvement in participatory research to understand how to facilitate a process in which they provide information and products for others to test. Building capacity at various levels for openness in sharing information and products (seeing research as a public good for all) seems to be a prerequisite for more effective dissemination of the available information and technologies.
Abstract:
A series of coupled atmosphere–ocean–ice aquaplanet experiments is described in which topological constraints on ocean circulation are introduced to study the role of ocean circulation on the mean climate of the coupled system. It is imagined that the earth is completely covered by an ocean of uniform depth except for the presence or absence of narrow barriers that extend from the bottom of the ocean to the sea surface. The following four configurations are described: Aqua (no land), Ridge (one barrier extends from pole to pole), Drake (one barrier extends from the North Pole to 35°S), and DDrake (two such barriers are set 90° apart and join at the North Pole, separating the ocean into a large basin and a small basin, connected to the south). On moving from Aqua to Ridge to Drake to DDrake, the energy transports in the equilibrium solutions become increasingly “realistic,” culminating in DDrake, which has an uncanny resemblance to the present climate. Remarkably, the zonal-average climates of Drake and DDrake are strikingly similar, exhibiting almost identical heat and freshwater transports, and meridional overturning circulations. However, Drake and DDrake differ dramatically in their regional climates. The small and large basins of DDrake exhibit distinctive Atlantic-like and Pacific-like characteristics, respectively: the small basin is warmer, saltier, and denser at the surface than the large basin, and is the main site of deep water formation with a deep overturning circulation and strong northward ocean heat transport. A sensitivity experiment with DDrake demonstrates that the salinity contrast between the two basins, and hence the localization of deep convection, results from a deficit of precipitation, rather than an excess of evaporation, over the small basin. It is argued that the width of the small basin relative to the zonal fetch of atmospheric precipitation is the key to understanding this salinity contrast. 
Finally, it is argued that many gross features of the present climate are consequences of two topological asymmetries that have profound effects on ocean circulation: a meridional asymmetry (circumpolar flow in the Southern Hemisphere; blocked flow in the Northern Hemisphere) and a zonal asymmetry (a small basin and a large basin).
Abstract:
This paper presents a critical history of the concept of ‘structured deposition’. It examines the long-term development of this idea in archaeology, from its origins in the early 1980s through to the present day, looking at how it has been moulded and transformed. On the basis of this historical account, a number of problems are identified with the way that ‘structured deposition’ has generally been conceptualized and applied. It is suggested that the range of deposits described under a single banner as being ‘structured’ is unhelpfully broad, and that archaeologists have been too willing to view material culture patterning as intentionally produced – the result of symbolic or ritual action. It is also argued that the material signatures of ‘everyday’ practice have been undertheorized and all too often ignored. Ultimately, it is suggested that if we are ever to understand fully the archaeological signatures of past practice, it is vital to consider the ‘everyday’ as well as the ‘ritual’ processes which lie behind the patterns we uncover in the ground.
Abstract:
Neutron diffraction at 11.4 and 295 K and solid-state 67Zn NMR are used to determine both the local and average structures in the disordered, negative thermal expansion (NTE) material, Zn(CN)2. Solid-state NMR not only confirms that there is head-to-tail disorder of the C≡N groups present in the solid, but yields information about the relative abundances of the different Zn(CN)4-n(NC)n tetrahedral species, which do not follow a simple binomial distribution. The Zn(CN)4 and Zn(NC)4 species occur with much lower probabilities than are predicted by binomial theory, supporting the conclusion that they are of higher energy than the other local arrangements. The lowest energy arrangement is Zn(CN)2(NC)2. The use of total neutron diffraction at 11.4 K, with analysis of both the Bragg diffraction and the derived total correlation function, yields the first experimental determination of the individual Zn−N and Zn−C bond lengths as 1.969(2) and 2.030(2) Å, respectively. The very small difference in bond lengths, of ~0.06 Å, means that it is impossible to obtain these bond lengths using Bragg diffraction in isolation. Total neutron diffraction also provides information on both the average and local atomic displacements responsible for NTE in Zn(CN)2. The principal motions giving rise to NTE are shown to be those in which the carbon and nitrogen atoms within individual Zn−C≡N−Zn linkages are displaced to the same side of the Zn···Zn axis. Displacements of the carbon and nitrogen atoms to opposite sides of the Zn···Zn axis, suggested previously in X-ray studies as being responsible for NTE behavior, in fact make negligible contribution at temperatures up to 295 K.
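For reference, the "simple binomial distribution" against which the NMR abundances are compared follows directly from assuming each C≡N group orients head-to-tail at random with probability 1/2, independently of its neighbours, so that each of the four ligands around a Zn centre is N-bound with probability 1/2. A short sketch of that baseline calculation (the 50/50 independence assumption is the binomial null model, not the paper's measured result):

```python
from math import comb

# Under independent 50/50 head-to-tail disorder, the species
# Zn(CN)_{4-n}(NC)_n occurs with binomial probability C(4, n) / 2^4.
for n in range(5):
    p = comb(4, n) / 2**4
    print(f"Zn(CN){4 - n}(NC){n}: {p:.4f}")
```

Binomial theory thus predicts the all-C-bound Zn(CN)4 and all-N-bound Zn(NC)4 species each at 6.25%, and the mixed Zn(CN)2(NC)2 species at 37.5%; the NMR abundances reported above fall below the binomial prediction for the two symmetric species, consistent with their higher energy.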
Abstract:
Methods of improving the coverage of Box–Jenkins prediction intervals for linear autoregressive models are explored. These methods use bootstrap techniques to allow for parameter estimation uncertainty and to reduce the small-sample bias in the estimator of the model parameters. We also consider a method of bias-correcting the non-linear functions of the parameter estimates that are used to generate conditional multi-step predictions.
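A minimal sketch of the kind of residual-bootstrap prediction interval described here, for a zero-mean AR(1) fitted by OLS (the function name and the simplifications are illustrative; the paper's actual scheme is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_bootstrap_interval(y, h=1, B=999, alpha=0.05):
    """Bootstrap (1 - alpha) prediction interval for a zero-mean AR(1).

    Each replication refits the model on a bootstrap series, so the
    interval reflects both innovation and parameter-estimation
    uncertainty, unlike the plug-in Box-Jenkins interval."""
    y = np.asarray(y, float)
    x, z = y[:-1], y[1:]
    phi = np.dot(x, z) / np.dot(x, x)        # OLS slope
    resid = z - phi * x
    resid -= resid.mean()                    # centre the residuals
    preds = np.empty(B)
    for b in range(B):
        # regenerate a series from the fitted model ...
        e = rng.choice(resid, size=len(y))
        yb = np.empty(len(y)); yb[0] = y[0]
        for t in range(1, len(y)):
            yb[t] = phi * yb[t - 1] + e[t]
        # ... refit to capture estimation uncertainty ...
        phib = np.dot(yb[:-1], yb[1:]) / np.dot(yb[:-1], yb[:-1])
        # ... and simulate an h-step path from the observed endpoint.
        f = y[-1]
        for _ in range(h):
            f = phib * f + rng.choice(resid)
        preds[b] = f
    return np.quantile(preds, [alpha / 2, 1 - alpha / 2])
```

Bias correction of the OLS slope, the other ingredient the abstract mentions, would be applied to `phi` before the replication loop.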
Abstract:
The calculation of interval forecasts for highly persistent autoregressive (AR) time series based on the bootstrap is considered. Three methods are considered for countering the small-sample bias of least-squares estimation for processes which have roots close to the unit circle: a bootstrap bias-corrected OLS estimator; the use of the Roy–Fuller estimator in place of OLS; and the use of the Andrews–Chen estimator in place of OLS. All three methods of bias correction yield superior results to the bootstrap in the absence of bias correction. Of the three correction methods, the bootstrap prediction intervals based on the Roy–Fuller estimator are generally superior to the other two. The small-sample performance of bootstrap prediction intervals based on the Roy–Fuller estimator is investigated when the order of the AR model is unknown and has to be determined using an information criterion.
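The first of the three correction methods, bootstrap bias correction of the OLS estimator, can be sketched for an AR(1) as follows; names and simplifications are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def bias_corrected_phi(y, B=499):
    """Bootstrap bias correction of the OLS AR(1) coefficient.

    OLS is downward-biased in small samples when phi is near 1.
    The first-order correction is phi_bc = 2*phi_hat - mean(phi*),
    where phi* are re-estimates from series simulated under phi_hat."""
    y = np.asarray(y, float)

    def ols(s):
        return np.dot(s[:-1], s[1:]) / np.dot(s[:-1], s[:-1])

    phi_hat = ols(y)
    resid = y[1:] - phi_hat * y[:-1]
    resid -= resid.mean()
    boot = np.empty(B)
    for b in range(B):
        e = rng.choice(resid, size=len(y))
        s = np.empty(len(y)); s[0] = y[0]
        for t in range(1, len(y)):
            s[t] = phi_hat * s[t - 1] + e[t]
        boot[b] = ols(s)
    # The bootstrap bias estimate is mean(phi*) - phi_hat; subtract it.
    return 2.0 * phi_hat - boot.mean()
```

The Roy–Fuller and Andrews–Chen alternatives replace this resampling step with analytic or median-unbiased corrections, which is why they can be dropped in "in place of OLS".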
Abstract:
Vintage-based vector autoregressive models of a single macroeconomic variable are shown to be a useful vehicle for obtaining forecasts of different maturities of future and past observations, including estimates of post-revision values. The forecasting performance of models which include information on annual revisions is superior to that of models which only include the first two data releases. However, the empirical results indicate that a model which reflects the seasonal nature of data releases more closely does not offer much improvement over an unrestricted vintage-based model which includes three rounds of annual revisions.
Abstract:
We examine how the accuracy of real-time forecasts from models that include autoregressive terms can be improved by estimating the models on ‘lightly revised’ data instead of using data from the latest-available vintage. The benefits of estimating autoregressive models on lightly revised data are related to the nature of the data revision process and the underlying process for the true values. Empirically, we find improvements in root mean square forecasting error of 2–4% when forecasting output growth and inflation with univariate models, and of 8% with multivariate models. We show that multiple-vintage models, which explicitly model data revisions, require large estimation samples to deliver competitive forecasts.
Abstract:
Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both provide superior descriptions of the data to models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
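To make the model class concrete: a two-regime Markov switching AR(1), in which mean, persistence and variance all depend on a hidden state, can be simulated as below. Parameter values and names are illustrative, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ms_ar1(n, phi=(0.2, 0.9), mu=(0.0, 1.0),
                    sigma=(0.5, 1.5), p_stay=(0.95, 0.9)):
    """Simulate a two-regime Markov-switching AR(1).

    The hidden state s follows a Markov chain that remains in state s
    with probability p_stay[s]; conditional on s, the observation
    follows y_t = mu[s] + phi[s] * (y_{t-1} - mu[s]) + sigma[s] * eps_t."""
    y = np.zeros(n)
    states = np.zeros(n, dtype=int)
    s = 0
    for t in range(1, n):
        if rng.random() > p_stay[s]:   # leave the current regime
            s = 1 - s
        states[t] = s
        y[t] = mu[s] + phi[s] * (y[t - 1] - mu[s]) + sigma[s] * rng.normal()
    return y, states
```

A threshold autoregressive model replaces the hidden Markov state with an observable rule (e.g. the regime switches when `y[t-d]` crosses a threshold), which is the key specification difference the tests in the paper discriminate between.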
Abstract:
A key step in many numerical schemes for time-dependent partial differential equations with moving boundaries is to rescale the problem to a fixed numerical mesh. An alternative approach is to use a moving mesh that can be adapted to focus on specific features of the model. In this paper we present and discuss two different velocity-based moving mesh methods applied to a two-phase model of avascular tumour growth formulated by Breward et al. (2002) J. Math. Biol. 45(2), 125-152. Each method has one moving node which tracks the moving boundary. The first moving mesh method uses a mesh velocity proportional to the boundary velocity. The second moving mesh method uses local conservation of volume fraction of cells (masses). Our results demonstrate that these moving mesh methods produce accurate results, offering higher resolution where desired whilst preserving the balance of fluxes and sources in the governing equations.
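The first of the two schemes, a mesh velocity proportional to the boundary velocity, has a one-line structure: if the nodes sit at x_i(t) = ξ_i s(t) for fixed reference coordinates ξ_i in [0, 1] and boundary position s(t), then each node moves with velocity ξ_i ds/dt, and the node at ξ = 1 tracks the boundary exactly. A minimal illustration of one time step (names are mine, and this omits the PDE solve and the mass-conserving variant):

```python
import numpy as np

def move_mesh(xi, boundary_pos, boundary_vel, dt):
    """Advance a 1D moving mesh one explicit Euler step.

    Nodes are placed at x_i = xi_i * s(t), so each node's velocity is
    xi_i * ds/dt: proportional to the boundary velocity, with the
    xi = 1 node coinciding with the moving boundary."""
    new_boundary = boundary_pos + dt * boundary_vel
    nodes = xi * new_boundary
    return nodes, new_boundary
```

The second scheme in the paper instead chooses node velocities so that the volume fraction of cells between neighbouring nodes is conserved, which requires the solution values rather than just the boundary position.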
Abstract:
Enterprise Architecture (EA) has been recognised as an important tool in modern business management for closing the gap between strategy and its execution. The current literature implies that for EA to be successful, it should have clearly defined goals. However, the goals of different stakeholders are found to be different, even contradictory. In our explorative research, we seek answers to the following questions: What kinds of goals are set for an EA implementation? How do the goals evolve over time? Do the goals differ among stakeholders? How do they affect the success of EA? We analysed an EA pilot conducted among eleven Finnish Higher Education Institutions (HEIs) in 2011. The goals of the pilot were gathered at three different stages: before the pilot (from the project plan), during the pilot (from interviews), and after the pilot (from a questionnaire). The data were analysed using qualitative and quantitative methods. Eight distinct goals were identified during coding: Adopt EA Method, Build Information Systems, Business Development, Improve Reporting, Process Improvement, Quality Assurance, Reduce Complexity, and Understand the Big Picture. The success of the pilot was analysed statistically on a scale of 1–5. Results revealed that the goals set before the pilot were very different from those mentioned during or after the pilot. Goals before the pilot were mostly related to expected benefits, whereas the most important reported outcome was adoption of the EA method. These results may be explained by the differing roles of respondents, which in turn were most likely caused by poor communication. Interestingly, the goals mentioned by different stakeholders were not limited to their traditional areas of responsibility. For example, in some cases Chief Information Officers' goals were Quality Assurance and Process Improvement, whereas managers' goals were Build Information Systems and Adopt EA Method.
This could reflect either a good understanding of the meaning of EA, or that stakeholders do not regard EA as their concern at all. It is also interesting to note that, regardless of the different perceptions of goals among stakeholders, all HEIs considered the pilot successful. Thus the research does not provide support for a link between clear goals and success.