10 results for Residual-Based Panel Cointegration Test

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

This article investigates the behaviour of exchange rates across different regimes for a post-Bretton Woods period. The exchange rate regime classification is based on that of Frankel et al. (2004), who condensed the 10 categories of exchange rate regimes reported by the International Monetary Fund (IMF) into three categories. Panel unit-root tests and panel cointegration tests are used to examine the Purchasing Power Parity (PPP) hypothesis; the latter are used to check for both the weak and strong forms of PPP. The panel unit-root tests show no evidence of PPP and suggest there is no difference in the behaviour of exchange rates across different regimes. However, failure to detect PPP across any of the regimes could be due to structural breaks. This assumption is reinforced by the results of the cointegration tests, which suggest that at least a weak form of PPP exists for the different regimes. The evidence for strong PPP decreases as the exchange rate regime moves away from a flexible exchange rate regime.
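The panel unit-root side of the analysis can be illustrated with a small numerical sketch (not the paper's exact procedure): under PPP, real exchange rates should be stationary, so per-country Dickey-Fuller t-statistics, averaged across the panel in the spirit of the Im-Pesaran-Shin test, should be strongly negative. All data below are simulated.

```python
import numpy as np

def adf_t_stat(y):
    """t-statistic on rho in the Dickey-Fuller regression
    dy_t = a + rho * y_{t-1} + e_t (no augmentation lags, for illustration)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

def panel_unit_root_stat(panel):
    """Average of per-country t-statistics, the idea behind the
    Im-Pesaran-Shin panel unit-root test."""
    return float(np.mean([adf_t_stat(y) for y in panel]))

rng = np.random.default_rng(42)
# Stationary 'real exchange rates' (PPP holds) vs. random walks (PPP fails).
stationary = [rng.normal(size=200) for _ in range(6)]
random_walks = [np.cumsum(rng.normal(size=200)) for _ in range(6)]

print(panel_unit_root_stat(stationary))    # strongly negative
print(panel_unit_root_stat(random_walks))  # near the DF mean, insignificant
```

A strongly negative average statistic rejects a unit root across the panel; the proper critical values come from the Im-Pesaran-Shin tables, which the sketch omits.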

Relevance:

100.00%

Publisher:

Abstract:

This study examines the relationship between executive directors’ remuneration and the financial performance and corporate governance arrangements of UK and Spanish listed firms. These countries’ corporate governance frameworks have been shaped by differences in legal origin, culture and background: UK legal arrangements are rooted in common law, whereas for Spanish firms the legal arrangement is based on civil law. We estimate both static and dynamic regression models to test our hypotheses, using Ordinary Least Squares (OLS) and the Generalised Method of Moments (GMM). Estimated results for both countries show that directors’ remuneration levels are positively related to measures of firm value and financial performance, meaning that remuneration has not reached a point at which firm value is reduced by excessive pay. These results hold for our long-run estimates, that is, those based on panel cointegration and panel error correction. Measures of corporate governance also affect the level of executive pay. Our results have important implications for existing corporate governance arrangements and for how the interests of stakeholders are protected. For example, the long-run results suggest that directors’ remuneration adjusts to capture variation in financial performance.

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a semiparametric smooth-coefficient (SPSC) stochastic production frontier model in which the regression coefficients are unknown smooth functions of environmental factors (Z). Technical inefficiency is specified in the form of a parametric scaling function which also depends on the Z variables. Thus, in our SPSC model the Z variables affect productivity directly via the technology parameters as well as through inefficiency. A residual-based bootstrap test of the relevance of the environmental factors in the SPSC model is suggested, and an empirical application is used to illustrate the technique.
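The smooth-coefficient idea can be sketched with a kernel-weighted least-squares estimate of b(z0) in y = b(z)x + e. This is an illustrative local-constant version on simulated data, not the authors' full SPSC frontier estimator:

```python
import numpy as np

def smooth_coef_estimate(y, x, z, z0, h=0.1):
    """Local (kernel-weighted) least-squares estimate of b(z0) in the
    smooth-coefficient model y = b(z) * x + e."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)  # Gaussian kernel weights
    return float(np.sum(w * x * y) / np.sum(w * x * x))

rng = np.random.default_rng(7)
n = 5000
z = rng.uniform(0, 1, n)                    # environmental factor
x = rng.normal(1.0, 1.0, n)                 # input
y = (1.0 + z) * x + rng.normal(0, 0.2, n)   # coefficient varies smoothly in z

print(smooth_coef_estimate(y, x, z, 0.2))   # near b(0.2) = 1.2
print(smooth_coef_estimate(y, x, z, 0.8))   # near b(0.8) = 1.8
```

Repeating the estimate over a grid of z0 values traces out the whole coefficient function b(z).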

Relevance:

100.00%

Publisher:

Abstract:

Background/aims: Network 1000 is a UK-based panel survey of a representative sample of adults with registered visual impairment, with the aim of gathering information about people’s opinions and circumstances. Method: Participants were interviewed (Survey 1, n = 1007: 2005; Survey 2, n = 922: 2006/07) on a range of topics including the nature of their eye condition, details of other health issues, use of low vision aids (LVAs) and their experiences in eye clinics. Results: Eleven percent of individuals did not know the name of their eye condition. Seventy percent of participants reported having long-term health problems or disabilities in addition to visual impairment, and 43% reported having hearing difficulties. Seventy-one percent reported using LVAs for reading tasks. Participants who had become registered as visually impaired in the previous 8 years (n = 395) were asked questions about non-medical information received in the eye clinic around that time. Reported information received included advice about ‘registration’ (48%), low vision aids (45%) and social care routes (43%); 17% reported receiving no information. While 70% of people were satisfied with the information received, this was lower for those of working age (56%) compared with retirement age (72%). Those who recalled receiving additional non-medical information and advice at the time of registration also recalled their experiences more positively. Conclusions: Whilst caution should be applied to the accuracy of recall of past events, the data provide a valuable insight into the types of information and support that visually impaired people feel they would benefit from in the eye clinic.

Relevance:

100.00%

Publisher:

Abstract:

Many attempts have been made to overcome the problems involved in character recognition, and these have resulted in the manufacture of character-reading machines. An investigation into a new approach to character recognition is described. The features used for recognition are Fourier coefficients, which are generated optically by convolving characters with periodic gratings. The development of hardware to enable automatic measurement of the contrast and position of the periodic shadows produced by the convolution is described. Fourier coefficients of character sets were measured, many of which are tabulated. Their analysis revealed that a few low-frequency sampling points could be selected to recognise sets of numerals. Limited treatment is given to the effect of typeface variations on the values of the coefficients, which culminated in the location of six sampling frequencies used as features to recognise numerals in two type fonts. Finally, the construction of two character recognition machines is compared and contrasted. The first is a pilot plant based on a test-bed optical Fourier analyser, while the second is a more streamlined machine designed for high-speed reading. Reasons to indicate that the latter machine would be the most suitable to adapt for industrial and commercial applications are discussed.
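A modern numerical analogue of the optical scheme (assumed for illustration; the machines described here computed the coefficients optically, not digitally) is to take the magnitudes of a few low-frequency 2-D Fourier coefficients of a character image as recognition features:

```python
import numpy as np

def low_freq_features(img, k=3):
    """Magnitudes of the k x k lowest-frequency 2-D Fourier coefficients,
    playing the role of the optically measured coefficients."""
    return np.abs(np.fft.fft2(img)[:k, :k]).ravel()

# Two crude 8x8 binary 'characters': a vertical bar ('1') and a ring ('0').
one = np.zeros((8, 8)); one[:, 3:5] = 1
zero = np.zeros((8, 8)); zero[1:7, 1:7] = 1; zero[2:6, 2:6] = 0

f_one, f_zero = low_freq_features(one), low_freq_features(zero)
print(np.linalg.norm(f_one - f_zero))  # distinct characters, distinct features
```

A classifier then only needs to compare a measured feature vector against stored templates, which is why a handful of sampling frequencies sufficed to recognise numeral sets.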

Relevance:

100.00%

Publisher:

Abstract:

Previous work has indicated the presence of collapsing and structured soils in the surface layers underlying Sana'a, the capital of the Republic of Yemen. This study set out initially to define and, ultimately, to alleviate the problem by investigating the deformation behaviour of these soils through both field and laboratory programmes. The field programme was carried out in Sana'a, while the laboratory work consisted of two parts: an initial phase at Sana'a University carried out in parallel with the field programme on natural and treated soils, and the major phase at Aston University carried out on natural, destructured and selected treated soils. The initial phase of the laboratory programme included classification, permeability, and single (collapsing) and double oedometer tests, while the major phase, at Aston, was extended to also include extensive single and double oedometer tests, Scanning Electron Microscopy and Energy Dispersive Spectrum analysis. The mechanical tests were carried out on natural and destructured samples at both the in situ and soaked moisture conditions. The engineering characteristics of the natural intact, field-treated and laboratory-destructured soils are reported, including their collapse potentials, which show them to be weakly bonded with nil to severe collapse susceptibility. Flooding had no beneficial effect, with limited to moderate improvement being achieved by preloading and roller compaction, while major benefits were achieved from deep compaction. From these results, a comparison of the soil response to the different treatments is presented, together with general field remarks. Laboratory destructuring reduced the stiffness of the soils while increasing their compressibility. The collapse and destructuring mechanisms have been examined by studying the changes in structure accompanying these phenomena.
Based on the test results for the intact and the laboratory-destructured soils, a simplified framework has been developed to represent the collapse and deformation behaviour at both the partially saturated and soaked states, and comments are given on its general applicability and limitations. It has been used to evaluate all the locations subjected to field treatment, and it provided satisfactory results for the deformation behaviour of the soils destructured by field treatment. Finally, attention is drawn to design considerations, together with recommendations for the selection of potential improvement techniques for foundation construction on the particular soils of the Sana'a region.

Relevance:

40.00%

Publisher:

Abstract:

In many models of edge analysis in biological vision, the initial stage is a linear 2nd derivative operation. Such models predict that adding a linear luminance ramp to an edge will have no effect on the edge's appearance, since the ramp has no effect on the 2nd derivative. Our experiments did not support this prediction: adding a negative-going ramp to a positive-going edge (or vice versa) greatly reduced the perceived blur and contrast of the edge. The effects on a fairly sharp edge were accurately predicted by a nonlinear multi-scale model of edge processing [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision], in which a half-wave rectifier comes after the 1st derivative filter. But we also found that the ramp affected perceived blur more profoundly when the edge blur was large, and this greater effect was not predicted by the existing model. The model's fit to these data was much improved when the simple half-wave rectifier was replaced by a threshold-like transducer [May, K. A. & Georgeson, M. A. (2007). Blurred edges look faint, and faint edges look sharp: The effect of a gradient threshold in a multi-scale edge coding model. Vision Research, 47, 1705-1720]. This modified model correctly predicted that the interaction between ramp gradient and edge scale would be much larger for blur perception than for contrast perception. In our model, the ramp narrows an internal representation of the gradient profile, leading to a reduction in perceived blur. This in turn reduces perceived contrast because estimated blur plays a role in the model's estimation of contrast. Interestingly, the model predicts that analogous effects should occur when the width of the window containing the edge is made narrower.
This has already been confirmed for blur perception; here, we further support the model by showing a similar effect for contrast perception. © 2007 Elsevier Ltd. All rights reserved.
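The linear prediction that the experiments contradict is easy to verify numerically: differentiation is a linear operator and a ramp is a linear function, so adding a ramp leaves the 2nd derivative of the luminance profile unchanged. A small sketch, with a tanh profile standing in for a blurred edge:

```python
import numpy as np

x = np.linspace(-4, 4, 801)
edge = np.tanh(x / 0.5)   # a blurred, positive-going edge
ramp = -0.3 * x           # negative-going linear luminance ramp

d2_edge = np.gradient(np.gradient(edge, x), x)
d2_sum = np.gradient(np.gradient(edge + ramp, x), x)

# The ramp contributes nothing to the 2nd derivative, so a purely linear
# 2nd-derivative model predicts no change in the edge's appearance:
print(np.max(np.abs(d2_sum - d2_edge)))  # ~0 (numerical noise only)
```

The perceptual changes reported above therefore require a nonlinearity, such as the half-wave rectifier or threshold-like transducer in the multi-scale model.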

Relevance:

40.00%

Publisher:

Abstract:

The purpose of this study is to develop econometric models to better understand the economic factors affecting inbound tourist flows from each of six origin countries that contribute to Hong Kong’s international tourism demand. To this end, we test alternative cointegration and error correction approaches to examine the economic determinants of tourist flows to Hong Kong, and to produce accurate econometric forecasts of inbound tourism demand. Our empirical findings show that permanent income is the most significant determinant of tourism demand in all models. The variables of own price, weighted substitute prices, trade volume, the share price index (as an indicator of changes in wealth in origin countries), and a dummy variable representing the Beijing incident (1989) are also found to be important determinants for some origin countries. The average long-run income and own-price elasticities were measured at 2.66 and -1.02, respectively. It was hypothesised that permanent income is a better explanatory variable of long-haul tourism demand than current income. A novel approach (a grid search process) has been used to empirically derive the weights to be attached to the lagged income variable for estimating permanent income. The results indicate that permanent income, estimated with empirically determined, relatively small weighting factors, was capable of producing better results than the current income variable in explaining long-haul tourism demand. This finding suggests that the use of current income in previous empirical tourism demand studies may have produced inaccurate results. The share price index, as a measure of wealth, was also found to be significant in two models. Studies of tourism demand rarely include wealth as an explanatory variable when forecasting long-haul tourism demand; however, finding a satisfactory proxy for wealth common to different countries is problematic.
This study indicates that error correction models (ECMs) based on the Engle-Granger (1987) approach produce more accurate forecasts than ECMs based on the Pesaran and Shin (1998) and Johansen (1988, 1991, 1995) approaches for all of the long-haul markets and Japan. Overall, ECMs produce better forecasts than the OLS, ARIMA and naïve models, indicating the superiority of a cointegration approach for tourism demand forecasting. The results show that permanent income is the most important explanatory variable for tourism demand from all countries, but there are substantial variations between countries, with the long-run elasticity ranging between 1.1 for the US and 5.3 for the UK. Price is the next most important variable, with long-run elasticities ranging between -0.8 for Japan and -1.3 for Germany, and short-run elasticities ranging between -0.14 for Germany and -0.7 for Taiwan. The fastest growing market is Mainland China. The findings have implications for policies and strategies on investment, marketing promotion and pricing.
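The Engle-Granger two-step procedure behind the preferred ECMs can be sketched on simulated data (the variable names are illustrative, not the study's actual series):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 300
# Simulated cointegrated pair: log tourist arrivals driven by log income.
income = np.cumsum(rng.normal(0, 1, T))         # I(1) driver
arrivals = 2.0 * income + rng.normal(0, 1, T)   # long-run relation + noise

# Step 1 (Engle-Granger): OLS on the long-run (cointegrating) regression.
X = np.column_stack([np.ones(T), income])
beta, *_ = np.linalg.lstsq(X, arrivals, rcond=None)
ect = arrivals - X @ beta                       # error-correction term

# Step 2: short-run ECM in differences; the lagged residual's coefficient
# should be negative (deviations from the long-run path are corrected).
d_arr, d_inc = np.diff(arrivals), np.diff(income)
Z = np.column_stack([np.ones(T - 1), d_inc, ect[:-1]])
gamma, *_ = np.linalg.lstsq(Z, d_arr, rcond=None)
print(gamma[2])  # error-correction coefficient, expected < 0
```

In practice Step 1 is followed by a unit-root test on the residuals to confirm cointegration before the ECM is estimated; the sketch omits that check.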

Relevance:

40.00%

Publisher:

Abstract:

Evaluation and benchmarking in content-based image retrieval has always been a somewhat neglected research area, making it difficult to judge the efficacy of many presented approaches. In this paper we investigate the issue of benchmarking for colour-based image retrieval systems, which enable users to retrieve images from a database based on low-level colour content alone. We argue that current image retrieval evaluation methods are not suited to benchmarking colour-based image retrieval systems, due mainly to their not allowing users to reflect upon the suitability of retrieved images within the context of a creative project, and to their reliance on highly subjective ground truths. As a solution to these issues, the research presented here introduces the Mosaic Test for evaluating colour-based image retrieval systems, in which test users are asked to create an image mosaic of a predetermined target image using the colour-based image retrieval system that is being evaluated. We report on our findings from a user study, which suggest that the Mosaic Test overcomes the major drawbacks associated with existing image retrieval evaluation methods, by enabling users to reflect upon image selections and by automatically measuring image relevance in a way that correlates with the perception of many human assessors. We therefore propose that the Mosaic Test be adopted as a standardised benchmark for evaluating and comparing colour-based image retrieval systems.
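As a sketch of how image relevance might be measured automatically from colour content alone, histogram intersection between RGB histograms is one plausible measure (an assumption for illustration; the paper's exact relevance measure is not specified here):

```python
import numpy as np

def colour_histogram(img, bins=8):
    """Normalised joint RGB histogram of an H x W x 3 image (values 0-255)."""
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins,) * 3, range=[(0, 256)] * 3)
    return hist / hist.sum()

def relevance(query_img, candidate_img):
    """Histogram intersection in [0, 1]; 1 means identical colour content."""
    return float(np.minimum(colour_histogram(query_img),
                            colour_histogram(candidate_img)).sum())

red = np.zeros((16, 16, 3)); red[..., 0] = 200.0
blue = np.zeros((16, 16, 3)); blue[..., 2] = 200.0
print(relevance(red, red), relevance(red, blue))
```

Scoring each mosaic tile this way against the corresponding region of the target image yields an automatic relevance measure that requires no subjective ground truth.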

Relevance:

40.00%

Publisher:

Abstract:

A variety of content-based image retrieval systems exist which enable users to perform image retrieval based on colour content - i.e., colour-based image retrieval. For the production of media for use in television and film, colour-based image retrieval is useful for retrieving specifically coloured animations, graphics or videos from large databases (by comparing user queries to the colour content of extracted key frames). It is also useful to graphic artists creating realistic computer-generated imagery (CGI). Unfortunately, current methods for evaluating colour-based image retrieval systems have two major drawbacks. Firstly, the relevance of images retrieved during the task cannot be measured reliably. Secondly, existing methods do not account for the creative design activity known as reflection-in-action. Consequently, the development and application of novel and potentially more effective colour-based image retrieval approaches, better supporting the large number of users creating media for use in television and film productions, is not possible, as their efficacy cannot be reliably measured and compared to existing technologies. As a solution to the problem, this paper introduces the Mosaic Test. The Mosaic Test is a user-based evaluation approach in which participants complete an image mosaic of a predetermined target image, using the colour-based image retrieval system that is being evaluated. In this paper, we introduce the Mosaic Test and report on a user evaluation. The findings of the study reveal that the Mosaic Test overcomes the two major drawbacks associated with existing evaluation methods and does not require expert participants. © 2012 Springer Science+Business Media, LLC.