808 results for Empirical Predictions
Abstract:
It is widely accepted that some of the most accurate Value-at-Risk (VaR) estimates are based on an appropriately specified GARCH process. But when the forecast horizon is greater than the frequency of the GARCH model, such predictions have typically required time-consuming simulations of the aggregated returns distributions. This paper shows that fast, quasi-analytic GARCH VaR calculations can be based on new formulae for the first four moments of aggregated GARCH returns. Our extensive empirical study compares the Cornish–Fisher expansion with the Johnson SU distribution for fitting distributions to analytic moments of normal and Student t, symmetric and asymmetric (GJR) GARCH processes to returns data on different financial assets, for the purpose of deriving accurate GARCH VaR forecasts over multiple horizons and significance levels.
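The quasi-analytic step this abstract describes, mapping the first four moments of aggregated returns to a VaR estimate via the Cornish–Fisher expansion, can be sketched as follows. This is a minimal illustration, not the paper's code, and the example moments are hypothetical:

```python
from statistics import NormalDist

def cornish_fisher_var(mean, std, skew, exkurt, alpha=0.01):
    """Approximate VaR at level alpha from the first four moments of the
    aggregated return distribution (mean, standard deviation, skewness,
    excess kurtosis), using the Cornish-Fisher quantile adjustment."""
    z = NormalDist().inv_cdf(alpha)  # Gaussian quantile at level alpha
    # Cornish-Fisher expansion: adjust z for skewness and excess kurtosis
    z_cf = (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * exkurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)
    return -(mean + std * z_cf)  # VaR reported as a positive loss

# Hypothetical moments: zero mean, 2% vol, negative skew, fat tails
var_99 = cornish_fisher_var(0.0, 0.02, -0.5, 3.0, alpha=0.01)
```

With negative skew and positive excess kurtosis the adjusted quantile lies further in the left tail than the Gaussian one, so the 99% VaR exceeds the normal-based figure, which is the effect the moment formulae are meant to capture without simulation.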
Abstract:
Coupling a review of previous studies on the acquisition of grammatical aspect, undertaken from contrasting paradigmatic views of second language acquisition (SLA), with new experimental data from L2 Portuguese, the present study contributes to this specific literature as well as to general debates in L2 epistemology. We tested 31 adult English learners of L2 Portuguese across three experiments, examining the extent to which they had acquired the syntax and (subtle) semantics of grammatical aspect. Many individuals acquired target knowledge of what we contend is a poverty-of-the-stimulus semantic entailment related to the checking of aspectual features encoded in Portuguese preterit and imperfect morphology, namely a [±accidental] distinction that obtains only in a restricted subset of contexts. We therefore conclude that UG-based approaches to SLA are better positioned to tap and gauge underlying morphosyntactic competence: because they rest on independent theoretical linguistic descriptions, they make falsifiable predictions amenable to empirical scrutiny, seek to describe and explain beyond performance, and can account for L2 convergence on poverty-of-the-stimulus knowledge as well as for L2 variability/optionality.
Abstract:
A statistical model is derived relating the diurnal variation of sea surface temperature (SST) to the net surface heat flux and surface wind speed from a numerical weather prediction (NWP) model. The model is derived using fluxes and winds from the European Centre for Medium-Range Weather Forecasts (ECMWF) NWP model and SSTs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). In the model, diurnal warming has a linear dependence on the net surface heat flux integrated since (approximately) dawn and an inverse quadratic dependence on the maximum of the surface wind speed in the same period. The model coefficients are found by matching, for a given integrated heat flux, the frequency distributions of the maximum wind speed and the observed warming. Diurnal cooling, where it occurs, is modelled as proportional to the integrated heat flux divided by the heat capacity of the seasonal mixed layer. The model reproduces the statistics (mean, standard deviation, and 95th percentile) of the diurnal variation of SST seen by SEVIRI and reproduces the geographical pattern of mean warming seen by the Advanced Microwave Scanning Radiometer (AMSR-E). We use the functional dependencies in the statistical model to test the behaviour of two physical models of diurnal warming that display contrasting systematic errors.
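The functional form described above can be sketched as follows. The coefficient `a`, the wind-speed floor, and the seawater constants are hypothetical placeholders, not the values fitted to ECMWF/SEVIRI data:

```python
def diurnal_sst_signal(q_int, u_max, mld, a=1.0e-7, u_floor=2.0,
                       rho=1025.0, cp=3985.0):
    """Sketch of the statistical model's functional form.

    q_int : net surface heat flux integrated since (approximately) dawn (J m^-2)
    u_max : maximum surface wind speed over the same period (m s^-1)
    mld   : seasonal mixed-layer depth (m)
    Returns an SST change in kelvin (coefficients are illustrative).
    """
    if q_int > 0:
        # Warming: linear in integrated flux, inverse quadratic in max wind
        return a * q_int / max(u_max, u_floor) ** 2
    # Cooling: integrated (negative) flux spread over the heat capacity
    # of the seasonal mixed layer
    return q_int / (rho * cp * mld)
```

The inverse quadratic wind dependence means that doubling the maximum wind speed (above the floor) cuts the predicted warming by a factor of four, while cooling scales with mixed-layer heat capacity, matching the two regimes the abstract distinguishes.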
Abstract:
An updated empirical approach is proposed for specifying coexistence requirements for genetically modified (GM) maize (Zea mays L.) production to ensure compliance with the 0.9% labeling threshold for food and feed in the European Union. The model improves on a previously published (Gustafson et al., 2006) empirical model by adding recent data sources to supplement the original database and including the following additional cases: (i) more than one GM maize source field adjacent to the conventional or organic field, (ii) the possibility of so-called “stacked” varieties with more than one GM trait, and (iii) lower pollen shed in the non-GM receptor field. These additional factors allow somewhat wider combinations of isolation distance and border rows than required in the original version of the empirical model. For instance, in the very conservative case of a 1-ha square non-GM maize field surrounded on all four sides by homozygous GM maize with 12 m isolation (the effective isolation distance for a single GM field), non-GM border rows of 12 m are required to be 95% confident of gene flow less than 0.9% in the non-GM field (with adventitious presence of 0.3%). Stacked traits of higher GM mass fraction and receptor fields of lower pollen shed would require a greater number of border rows to comply with the 0.9% threshold, and an updated extension to the model is provided to quantify these effects.
Abstract:
Decadal hindcast simulations of Arctic Ocean sea ice thickness made by a modern dynamic-thermodynamic sea ice model and forced independently by both the ERA-40 and NCEP/NCAR reanalysis data sets are compared for the first time. Using comprehensive data sets of observations made between 1979 and 2001 of sea ice thickness, draft, extent, and speeds, we find that it is possible to tune model parameters to give satisfactory agreement with observed data, thereby highlighting the skill of modern sea ice models, though the parameter values chosen differ according to the model forcing used. We find a consistent decreasing trend in Arctic Ocean sea ice thickness since 1979, and a steady decline in the Eastern Arctic Ocean over the full 40-year period of comparison that accelerated after 1980, but the predictions of Western Arctic Ocean sea ice thickness between 1962 and 1980 differ substantially. The origins of differing thickness trends and variability were traced not to parameter differences but to differences in the forcing fields applied, and in how they are applied. It is argued that uncertainties, differences, and errors in sea ice model forcing sets complicate the use of models to determine the exact causes of the recently reported decline in Arctic sea ice thickness, but help in the determination of robust features if the models are tuned appropriately against observations.
Abstract:
The 1991 decision of the European Commission on the Tetra Pak case was based on information which seemed to prove the firm's anti-competitive behavior. The Tetra Pak case is investigated here with a focus on the meaning of multimarket dominance, using empirical techniques. We find that a more rigorous analysis of the available data would not confirm the Commission's assertions. That is, it cannot be concluded with certainty that the Commission was right to relate Tetra Pak's dominance in the aseptic sector to its market power in the non-aseptic sector. Our results suggest a general framework for the analysis of abusive transfer of market power across vertically and/or horizontally related markets.
Abstract:
It has been reported that the ability to solve syllogisms is highly g-loaded. In the present study, using a self-administered shortened version of a syllogism-solving test, the BAROCO Short, we examined whether robust findings generated by previous research regarding IQ scores were also applicable to BAROCO Short scores. Five syllogism-solving problems were included in a questionnaire as part of a postal survey conducted by the Keio Twin Research Center. Data were collected from 487 pairs of twins (1021 individuals) who were Japanese junior high or high school students (ages 13–18) and from 536 mothers and 431 fathers. Four findings related to IQ were replicated: 1) The mean level increased gradually during adolescence, stayed unchanged from the 30s to the early 50s, and subsequently declined after the late 50s. 2) The scores for both children and parents were predicted by the socioeconomic status of the family. 3) The genetic effect increased, although the shared environmental effect decreased during progression from adolescence to adulthood. 4) Children's scores were genetically correlated with school achievement. These findings further substantiate the close association between syllogistic reasoning ability and g.
Abstract:
The incorporation of numerical weather predictions (NWP) into a flood forecasting system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient, as it involves considerable non-predictable uncertainties and leads to a high number of false alarms. The availability of global ensemble numerical weather prediction systems through the THORPEX Interactive Grand Global Ensemble (TIGGE) offers a new opportunity for flood forecasting. The Grid-Xinanjiang distributed hydrological model, which is based on the Xinanjiang model theory and the topographical information of each grid cell extracted from the Digital Elevation Model (DEM), is coupled with ensemble weather predictions based on the TIGGE database (CMC, CMA, ECMWF, UKMO, NCEP) for flood forecasting. This paper presents a case study using the coupled flood forecasting model on the Xixian catchment (a drainage area of 8826 km2) located in Henan province, China. A probabilistic discharge forecast is provided as the end product. Results show that the combination of the Grid-Xinanjiang model and the TIGGE database provides a promising tool for early warning of flood events several days ahead.
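The final step described here, turning pooled ensemble members into a probabilistic discharge product, can be illustrated as follows. The member values and warning threshold are hypothetical, and this is not the Grid-Xinanjiang code itself:

```python
def exceedance_probability(ensemble_discharges, threshold):
    """Probability that forecast discharge exceeds a flood warning
    threshold, estimated as the fraction of ensemble members above it."""
    n = len(ensemble_discharges)
    return sum(q > threshold for q in ensemble_discharges) / n

# Hypothetical pooled TIGGE-driven members (m^3/s) at one lead time
members = [1200, 1450, 980, 1600, 1720, 1100, 1350, 1500, 900, 1650]
p_flood = exceedance_probability(members, threshold=1400)  # 5 of 10 exceed
```

Issuing an exceedance probability rather than a single deterministic discharge is what lets civil protection authorities trade off lead time against false-alarm rate across the five contributing centres.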
Abstract:
This paper highlights some communicative and institutional challenges to using ensemble prediction systems (EPS) in operational flood forecasting, warning, and civil protection. Focusing in particular on the Swedish experience, as part of the PREVIEW FP6 project, of applying EPS to operational flood forecasting, the paper draws on a wider set of site visits, interviews, and participant observation with flood forecasting centres and civil protection authorities (CPAs) in Sweden and 15 other European states to reflect on the comparative success of Sweden in enabling CPAs to make operational use of EPS for flood risk management. From that experience, the paper identifies four broader lessons for other countries interested in developing the operational capacity to make, communicate, and use EPS for flood forecasting and civil protection. We conclude that effective training and clear communication of EPS, while clearly necessary, are by no means sufficient to ensure effective use of EPS. Attention must also be given to overcoming the institutional obstacles to their use and to identifying operational choices for which EPS is seen to add value rather than uncertainty to operational decision making by CPAs.
Abstract:
The purpose of this paper is to explore how companies that hold carbon trading accounts under the European Union Emissions Trading Scheme (EU ETS) respond to climate change by using disclosures on carbon emissions as a means to generate legitimacy, compared to other companies. The study is based on disclosures made in annual reports and stand-alone sustainability reports of UK listed companies from 2001 to 2012. The study uses content analysis to capture both the quality and the volume of the carbon disclosures. The results show a significant increase in both the quality and the volume of the carbon disclosures after the launch of the EU ETS. Companies with carbon trading accounts provide more detailed disclosures than companies without an account. We also find that company size is positively correlated with the disclosures, while the association with industry is inconclusive.
Abstract:
Drawing upon a national database of unimplemented planning permissions and 18 in-depth case studies, this paper provides both a quantitative and a qualitative analysis of the phenomenon of stalled sites in England. The practical and conceptual difficulties of classifying sites as stalled are critically reviewed. The literature suggests that planning permission may not be implemented because of a lack of financial viability, strategic behaviour by landowners and house-builders, and other problems associated with the development process. Consistent with poor viability, the analysis of the national database indicates that a substantial proportion of the stalled sites consists of high-density apartment developments and/or is located in low house-value areas. The case studies suggest that a combination of interlinked issues may need to be resolved before a planning permission can be implemented. These include: the sale of the land to house-builders, re-negotiation of the planning permission and, most importantly, improvement in housing market conditions.
Abstract:
• This is a study of the relationship between institutional settings and managerial compensation systems, based on extensive cross-national survey evidence. • We compare differences in practices between Multinational Corporations (MNCs) and domestic firms across a range of capitalist archetypes. • We find that MNCs are more likely to promote compensation systems that incentivise managers in line with organisational performance compared to domestic firms. Our findings also reveal persistent diversity reflecting firm type and institutional setting. We find that the gap between MNCs and domestic firms in terms of the usage of incentive-related compensation is less pronounced in Liberal Market Economies than in other settings. This suggests that it is a combination of being an MNC and the specific home locale that moulds approaches to managerial compensation. This reflects considerable hybridisation of practices within and between settings.
Abstract:
We examine whether and under what circumstances World Bank and International Monetary Fund (IMF) programs affect the likelihood of major government crises. We find that crises are, on average, more likely as a consequence of World Bank programs. We also find that governments face an increasing risk of entering a crisis when they remain under an IMF or World Bank arrangement once the economy's performance improves. The international financial institution's (IFI) scapegoat function thus seems to lose its value when the need for financial support is less urgent. While the probability of a crisis increases when a government turns to the IFIs, programs inherited by preceding governments do not affect the probability of a crisis. This is in line with two interpretations. First, the conclusion of IFI programs can signal the government's incompetence, and second, governments that inherit programs might be less likely to implement program conditions agreed to by their predecessors.
Abstract:
I consider the possibility that respondents to the Survey of Professional Forecasters round their probability forecasts of the event that real output will decline in the future, as well as their reported output growth probability distributions. I make various plausible assumptions about respondents’ rounding practices, and show how these impinge upon the apparent mismatch between probability forecasts of a decline in output and the probabilities of this event implied by the annual output growth histograms. I find that rounding accounts for about a quarter of the inconsistent pairs of forecasts.
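One plausible rounding rule of the kind considered here, rounding both the directly reported probability and the histogram-implied probability to the nearest 5% before comparing them, can be sketched as follows. The step size and comparison rule are illustrative assumptions, not the paper's exact specification:

```python
def round_to(p, step):
    """Round a probability to the nearest multiple of `step`."""
    return round(p / step) * step

def consistent_under_rounding(reported, implied, step=0.05):
    """True if the reported probability could match the histogram-implied
    one once both are rounded to the nearest `step` (one plausible rule)."""
    return abs(round_to(reported, step) - round_to(implied, step)) < 1e-9

# Hypothetical pair: a forecaster reports 0.10 for an output decline while
# the annual growth histogram implies 0.08 -- consistent after rounding
```

Counting how many apparently mismatched forecast pairs become consistent under such a rule is the mechanism by which rounding can account for a share (here, about a quarter) of the inconsistencies.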
Abstract:
We consider whether survey respondents’ probability distributions, reported as histograms, provide reliable and coherent point predictions, when viewed through the lens of a Bayesian learning model. We argue that a role remains for eliciting directly-reported point predictions in surveys of professional forecasters.