95 results for conditional autoregression
Abstract:
In 2007, futures contracts were introduced based upon the listed real estate market in Europe. Following their launch they have received increasing attention from property investors; however, few studies have considered the impact their introduction has had. This study considers two key elements. Firstly, a traditional Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model, the approach of Bessembinder & Seguin (1992), and Gray's (1996) Markov-switching GARCH model are used to examine the impact of futures trading on the European real estate securities market. The results show that futures trading did not destabilize the underlying listed market. Importantly, the results also reveal that the introduction of a futures market has improved the speed and quality of information flowing to the spot market. Secondly, we assess the hedging effectiveness of the contracts using two alternative strategies (naïve and Ordinary Least Squares models). The empirical results also show that the contracts are effective hedging instruments, leading to a reduction in risk of 64%.
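The Ordinary Least Squares hedging strategy this abstract compares against the naïve (1:1) hedge can be sketched in a few lines: the minimum-variance hedge ratio is the slope of a regression of spot returns on futures returns, and effectiveness is the proportional variance reduction of the hedged position. The data below are synthetic and purely illustrative; the paper's 64% figure comes from its own dataset.

```python
import numpy as np

def ols_hedge_ratio(spot_returns, futures_returns):
    """Minimum-variance hedge ratio: OLS slope of spot on futures returns."""
    cov = np.cov(spot_returns, futures_returns)
    return cov[0, 1] / cov[1, 1]

def hedging_effectiveness(spot_returns, futures_returns, ratio):
    """Proportional variance reduction of the hedged vs. unhedged position."""
    hedged = spot_returns - ratio * futures_returns
    return 1.0 - np.var(hedged) / np.var(spot_returns)

# Illustrative synthetic returns (hypothetical, not the paper's data)
rng = np.random.default_rng(0)
f = rng.normal(0, 0.01, 1000)             # futures returns
s = 0.8 * f + rng.normal(0, 0.005, 1000)  # correlated spot returns
h = ols_hedge_ratio(s, f)
eff = hedging_effectiveness(s, f, h)            # OLS hedge
naive_eff = hedging_effectiveness(s, f, 1.0)    # naïve 1:1 hedge
```

In-sample, the OLS ratio minimizes the variance of the hedged position by construction, so its effectiveness is never below that of the naïve hedge.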
Abstract:
The authors model retail rents in the United Kingdom using vector-autoregressive and time-series models. Two retail rent series are used, compiled by LaSalle Investment Management and CB Hillier Parker, and the emphasis is on forecasting. The results suggest that the vector-autoregression and time-series models used in this paper can pick up important features of the data that are useful for forecasting purposes. The relative forecasting performance of the models appears to be subject to the length of the forecast time-horizon. The results also show that the variables which were appropriate for inclusion in the vector-autoregression systems differ between the two rent series, suggesting that the structure of optimal models for predicting retail rents could be specific to the rent index used. Ex ante forecasts from our time-series models suggest that both LaSalle Investment Management and CB Hillier Parker real retail rents will exhibit an annual growth rate above their long-term mean.
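A vector autoregression of the kind used for such forecasts can be fitted by equation-by-equation least squares and iterated forward for ex ante forecasts. A minimal VAR(1) sketch on simulated data (not the authors' specification, which involves the rent series and additional variables):

```python
import numpy as np

def fit_var1(Y):
    """Fit a VAR(1), Y_t = c + A @ Y_{t-1} + e_t, by least squares.
    Y has shape (T, k); returns intercept c (k,) and coefficients A (k, k)."""
    X = np.hstack([np.ones((len(Y) - 1, 1)), Y[:-1]])  # constant + lagged values
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return B[0], B[1:].T

def forecast_var1(c, A, y_last, steps):
    """Iterate the fitted system forward for multi-step forecasts."""
    path, y = [], y_last
    for _ in range(steps):
        y = c + A @ y
        path.append(y)
    return np.array(path)

# Simulate a small two-variable system (illustrative only)
rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.2], [0.1, 0.4]])
Y = np.zeros((2000, 2))
for t in range(1, 2000):
    Y[t] = A_true @ Y[t - 1] + rng.normal(0, 0.1, 2)
c_hat, A_hat = fit_var1(Y)
preds = forecast_var1(c_hat, A_hat, Y[-1], steps=4)  # four-step-ahead forecasts
```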
Abstract:
This paper contributes to the debate on the effects of the financialization of commodity futures markets by studying the conditional volatility of long–short commodity portfolios and their conditional correlations with traditional assets (stocks and bonds). Using several groups of trading strategies that hedge fund managers are known to implement, we show that long–short speculators do not cause changes in the volatilities of the portfolios they hold or changes in the conditional correlations between these portfolios and traditional assets. Thus calls for increased regulation of commodity money managers are, at this stage, premature. Additionally, long–short speculators can take comfort in knowing that their trades do not alter the risk and diversification properties of their portfolios.
Abstract:
Strong winds equatorwards and rearwards of a cyclone core have often been associated with two phenomena, the cold conveyor belt (CCB) jet and sting jets. Here, detailed observations of the mesoscale structure in this region of an intense cyclone are analysed. The in situ and dropsonde observations were obtained during two research flights through the cyclone during the DIAMET (DIAbatic influences on Mesoscale structures in ExTratropical storms) field campaign. A numerical weather prediction model is used to link the strong wind regions with three types of "air streams", or coherent ensembles of trajectories: two types are identified with the CCB, hooking around the cyclone center, while the third is identified with a sting jet, descending from the cloud head to the west of the cyclone. Chemical tracer observations show for the first time that the CCB and sting jet air streams are distinct air masses even when the associated low-level wind maxima are not spatially distinct. In the model, the CCB experiences slow latent heating through weak resolved ascent and convection, while the sting jet experiences weak cooling associated with microphysics during its subsaturated descent. Diagnosis of mesoscale instabilities in the model shows that the CCB passes through largely stable regions, while the sting jet spends relatively long periods in locations characterized by conditional symmetric instability (CSI). The relation of CSI to the observed mesoscale structure of the bent-back front and its possible role in the cloud banding is discussed.
Abstract:
The Bollène-2002 Experiment was aimed at developing the use of a radar volume-scanning strategy for conducting radar rainfall estimations in the mountainous regions of France. A developmental radar processing system, called Traitements Régionalisés et Adaptatifs de Données Radar pour l’Hydrologie (Regionalized and Adaptive Radar Data Processing for Hydrological Applications), has been built and several algorithms were specifically produced as part of this project. These algorithms include 1) a clutter identification technique based on the pulse-to-pulse variability of reflectivity Z for noncoherent radar, 2) a coupled procedure for determining a rain partition between convective and widespread rainfall R and the associated normalized vertical profiles of reflectivity, and 3) a method for calculating reflectivity at ground level from reflectivities measured aloft. Several radar processing strategies, including nonadaptive, time-adaptive, and space–time-adaptive variants, have been implemented to assess the performance of these new algorithms. Reference rainfall data were derived from a careful analysis of rain gauge datasets furnished by the Cévennes–Vivarais Mediterranean Hydrometeorological Observatory. The assessment criteria for five intense and long-lasting Mediterranean rain events have proven that good quantitative precipitation estimates can be obtained from radar data alone within 100-km range by using well-sited, well-maintained radar systems and sophisticated, physically based data-processing systems. The basic requirements entail performing accurate electronic calibration and stability verification, determining the radar detection domain, achieving efficient clutter elimination, and capturing the vertical structure(s) of reflectivity for the target event. 
Radar performance was shown to depend on type of rainfall, with better results obtained with deep convective rain systems (Nash coefficients of roughly 0.90 for point radar–rain gauge comparisons at the event time step), as opposed to shallow convective and frontal rain systems (Nash coefficients in the 0.6–0.8 range). In comparison with time-adaptive strategies, the space–time-adaptive strategy yields a very significant reduction in the radar–rain gauge bias while the level of scatter remains basically unchanged. Because the Z–R relationships have not been optimized in this study, results are attributed to an improved processing of spatial variations in the vertical profile of reflectivity. The two main recommendations for future work consist of adapting the rain separation method for radar network operations and documenting Z–R relationships conditional on rainfall type.
Abstract:
In this paper we address two topical questions: How do the quality of governance and agricultural intensification impact on spatial expansion of agriculture? Which aspects of governance are more likely to ensure that agricultural intensification allows sparing land for nature? Using data from the Food and Agriculture Organization, the World Bank, the World Database on Protected Areas, and the Yale Center for Environmental Law and Policy, we estimate a panel data model for six South American countries and quantify the effects of major determinants of agricultural land expansion, including various dimensions of governance, over the period 1970–2006. The results indicate that the effect of agricultural intensification on agricultural expansion is conditional on the quality and type of governance. When considering conventional aspects of governance, agricultural intensification leads to an expansion of agricultural area when governance scores are high. When looking specifically at environmental aspects of governance, intensification leads to a spatial contraction of agriculture when governance scores are high, signaling a sustainable intensification process.
Abstract:
BACKGROUND: Integrin-linked kinase (ILK) and its associated complex of proteins are involved in many cellular activation processes, including cell adhesion and integrin signaling. We have previously demonstrated that mice with induced platelet ILK deficiency show reduced platelet activation and aggregation, but only a minor bleeding defect. Here, we explore this apparent disparity between the cellular and hemostatic phenotypes. METHODS: The impact of ILK inhibition on integrin αIIbβ3 activation and degranulation was assessed with the ILK-specific inhibitor QLT0267, and a conditional ILK-deficient mouse model was used to assess the impact of ILK deficiency on in vivo platelet aggregation and thrombus formation. RESULTS: Inhibition of ILK reduced the rate of both fibrinogen binding and α-granule secretion, but was accompanied by only a moderate reduction in the maximum extent of platelet activation or aggregation in vitro. The reduction in the rate of fibrinogen binding occurred prior to degranulation or translocation of αIIbβ3 to the platelet surface. The change in the rate of platelet activation in the absence of functional ILK led to a reduction in platelet aggregation in vivo, but did not change the size of thrombi formed following laser injury of the cremaster arteriole wall in ILK-deficient mice. It did, however, result in a marked decrease in the stability of thrombi formed in ILK-deficient mice. CONCLUSION: Taken together, the findings of this study indicate that, although ILK is not essential for platelet activation, it plays a critical role in facilitating rapid platelet activation, which is essential for stable thrombus formation.
Abstract:
Windstorm Kyrill affected large parts of Europe in January 2007 and caused widespread havoc and loss of life. In this study the formation of a secondary cyclone, Kyrill II, along the occluded front of the mature cyclone Kyrill and the occurrence of severe wind gusts as Kyrill II passed over Germany are investigated with the help of high-resolution regional climate model simulations. Kyrill underwent an explosive cyclogenesis south of Greenland as the storm crossed polewards of an intense upper-level jet stream. Later in its life cycle secondary cyclogenesis occurred just west of the British Isles. The formation of Kyrill II along the occluded front was associated (a) with frontolytic strain and (b) with strong diabatic heating in combination with a developing upper-level shortwave trough. Sensitivity studies with reduced latent heat release feature a similar development but a weaker secondary cyclone, revealing the importance of diabatic processes during the formation of Kyrill II. Kyrill II moved further towards Europe and its development was favored by a split jet structure aloft, which maintained the cyclone's exceptionally deep core pressure (below 965 hPa) for at least 36 hours. The occurrence of hurricane-force winds related to the strong cold front over North and Central Germany is analyzed using convection-permitting simulations. The lower troposphere exhibits conditional instability, a turbulent flow and evaporative cooling. Simulation at high spatio-temporal resolution suggests that the downward mixing of high-momentum air (the wind speed at 875 hPa widely exceeded 45 m s⁻¹) accounts for widespread severe surface wind gusts, which is in agreement with observed widespread losses.
Abstract:
The performance of rank dependent preference functionals under risk is comprehensively evaluated using Bayesian model averaging. Model comparisons are made at three levels of heterogeneity plus three ways of linking deterministic and stochastic models: the differences in utilities, the differences in certainty equivalents, and contextual utility. Overall, the "best model", which is conditional on the form of heterogeneity, is a form of Rank Dependent Utility or Prospect Theory that captures the majority of behaviour at both the representative agent and individual level. However, the curvature of the probability weighting function for many individuals is S-shaped, or ostensibly concave or convex, rather than the inverse S-shape commonly employed. Also, contextual utility is broadly supported across all levels of heterogeneity. Finally, the Priority Heuristic model, previously examined within a deterministic setting, is estimated within a stochastic framework; allowing for endogenous thresholds does improve model performance, although it does not compete well with the other specifications considered.
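The inverse S-shaped weighting this abstract refers to is commonly parameterised with the one-parameter Tversky-Kahneman (1992) form, w(p) = p^γ / (p^γ + (1−p)^γ)^(1/γ): γ < 1 yields the inverse S (small probabilities overweighted), γ > 1 the S shape reported for many individuals. This is a sketch of the standard functional family, not the paper's own estimates.

```python
def prob_weight(p, gamma):
    """Tversky-Kahneman (1992) probability weighting function.
    gamma < 1: inverse S (small probabilities overweighted);
    gamma > 1: S-shaped (small probabilities underweighted)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)
```

For example, with the often-cited γ = 0.61 the function lies above the diagonal at p = 0.1 and below it at p = 0.9.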
Abstract:
We investigated the plume structure of a piezo-electric sprayer system, set up to release ethanol in a wind tunnel, using a fast response mini-photoionization detector. We recorded the plume structure of four different piezo-sprayer configurations: the sprayer alone; with a 1.6-mm steel mesh shield; with a 3.2-mm steel mesh shield; and with a 5 cm circular upwind baffle. We measured a 12 × 12-mm core at the center of the plume, and both a horizontal and vertical cross-section of the plume, all at 100-, 200-, and 400-mm downwind of the odor source. Significant differences in plume structure were found among all configurations in terms of conditional relative mean concentration, intermittency, ratio of peak concentration to conditional mean concentration, and cross-sectional area of the plume. We then measured the flight responses of the almond moth, Cadra cautella, to odor plumes generated with the sprayer alone, and with the upwind baffle piezo-sprayer configuration, releasing a 13:1 ratio of (9Z,12E)-tetradecadienyl acetate and (Z)-9-tetradecenyl acetate diluted in ethanol at release rates of 1, 10, 100, and 1,000 pg/min. For each configuration, differences in pheromone release rate resulted in significant differences in the proportions of moths performing oriented flight and landing behaviors. Additionally, there were apparent differences in the moths' behaviors between the two sprayer configurations, although this requires confirmation with further experiments. This study provides evidence that both pheromone concentration and plume structure affect moth orientation behavior and demonstrates that care is needed when setting up experiments that use a piezo-electric release system to ensure the optimal conditions for behavioral observations.
Abstract:
The Monte Carlo Independent Column Approximation (McICA) is a flexible method for representing subgrid-scale cloud inhomogeneity in radiative transfer schemes. It does, however, introduce conditional random errors but these have been shown to have little effect on climate simulations, where spatial and temporal scales of interest are large enough for effects of noise to be averaged out. This article considers the effect of McICA noise on a numerical weather prediction (NWP) model, where the time and spatial scales of interest are much closer to those at which the errors manifest themselves; this, as we show, means that noise is more significant. We suggest methods for efficiently reducing the magnitude of McICA noise and test these methods in a global NWP version of the UK Met Office Unified Model (MetUM). The resultant errors are put into context by comparison with errors due to the widely used assumption of maximum-random-overlap of plane-parallel homogeneous cloud. For a simple implementation of the McICA scheme, forecasts of near-surface temperature are found to be worse than those obtained using the plane-parallel, maximum-random-overlap representation of clouds. However, by applying the methods suggested in this article, we can reduce noise enough to give forecasts of near-surface temperature that are an improvement on the plane-parallel maximum-random-overlap forecasts. We conclude that the McICA scheme can be used to improve the representation of clouds in NWP models, with the provision that the associated noise is sufficiently small.
Abstract:
This paper examines the effects of internationalization (international diversification) and diversification across industries (product diversification) through mergers and acquisitions (M&As) on the firm's risk-return profile. Drawing on the theoretical work of Vachani (1991) and Rugman and Verbeke's (2004) metrics, we classify firms according to their degree of product diversification and global reach. These two dimensions at the firm level are moderators for the performance-expansion relationship. To account for the endogeneity of market entry decisions, we develop a panel vector autoregression. We show that global and host-triad multinational enterprises (MNEs) benefit from cross-border M&As, which reinforces their geographic footprint. In contrast to all other types of firms, home-triad firms exhibit higher firm value without a change in risk when conducting cross-industry M&As. This effect, however, depends on the degree of product diversification. For home-triad firms with a small product range, engaging in cross-industry transactions is a value-enhancing growth strategy.
Abstract:
The disadvantage of the majority of data assimilation schemes is the assumption that the conditional probability density function of the state of the system given the observations [posterior probability density function (PDF)] is distributed either locally or globally as a Gaussian. The advantage, however, is that through various different mechanisms they ensure initial conditions that are predominantly in linear balance and therefore spurious gravity wave generation is suppressed. The equivalent-weights particle filter is a data assimilation scheme that allows for a representation of a potentially multimodal posterior PDF. It does this via proposal densities that lead to extra terms being added to the model equations and means the advantage of the traditional data assimilation schemes, in generating predominantly balanced initial conditions, is no longer guaranteed. This paper looks in detail at the impact the equivalent-weights particle filter has on dynamical balance and gravity wave generation in a primitive equation model. The primary conclusions are that (i) provided the model error covariance matrix imposes geostrophic balance, then each additional term required by the equivalent-weights particle filter is also geostrophically balanced; (ii) the relaxation term required to ensure the particles are in the locality of the observations has little effect on gravity waves and actually induces a reduction in gravity wave energy if sufficiently large; and (iii) the equivalent-weights term, which leads to the particles having equivalent significance in the posterior PDF, produces a change in gravity wave energy comparable to the stochastic model error. Thus, the scheme does not produce significant spurious gravity wave energy and so has potential for application in real high-dimensional geophysical applications.
Abstract:
Real estate securities have a number of distinct characteristics that differentiate them from stocks generally. Key amongst them is that underpinning the firms are both real and investment assets. The connections between the underlying macro-economy and listed real estate firms are therefore clearly demonstrated and of heightened importance. To consider the linkages with the underlying macro-economic fundamentals we extract the 'low-frequency' volatility component from aggregate volatility shocks in 11 international markets over the 1990-2014 period. This is achieved using Engle and Rangel's (2008) Spline-Generalized Autoregressive Conditional Heteroskedasticity (Spline-GARCH) model. The estimated low-frequency volatility is then examined together with low-frequency macro data in a fixed-effect pooled regression framework. The analysis reveals that the low-frequency volatility of real estate securities has a strong and positive association with most of the macroeconomic risk proxies examined. These include interest rates, inflation, GDP and foreign exchange rates.
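The Spline-GARCH model decomposes volatility into a slow-moving (low-frequency) trend multiplied by a standard GARCH(1,1) short-run component. As orientation, only the familiar short-run recursion is sketched below; the spline trend and maximum-likelihood estimation of the full model are omitted.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a standard GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # initialise at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# With alpha + beta < 1 the recursion is stationary; for all-zero returns it
# converges to the fixed point omega / (1 - beta) = 0.5 in this example.
sigma2 = garch11_variance(np.zeros(500), omega=0.1, alpha=0.1, beta=0.8)
```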
Abstract:
Seamless phase II/III clinical trials in which an experimental treatment is selected at an interim analysis have been the focus of much recent research interest. Many of the methods proposed are based on the group sequential approach. This paper considers designs of this type in which the treatment selection can be based on short-term endpoint information for more patients than have primary endpoint data available. We show that in such a case, the familywise type I error rate may be inflated if previously proposed group sequential methods are used and the treatment selection rule is not specified in advance. A method is proposed to avoid this inflation by considering the treatment selection that maximises the conditional error given the data available at the interim analysis. A simulation study is reported that illustrates the type I error rate inflation and compares the power of the new approach with two other methods: a combination testing approach and a group sequential method that does not use the short-term endpoint data, both of which also strongly control the type I error rate. The new method is also illustrated through application to a study in Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.