899 results for "Models performance"


Relevance: 30.00%

Abstract:

A significant challenge in predicting climate change impacts on ecosystems and biodiversity is quantifying the sources of uncertainty that emerge within and between different models. Statistical species niche models have grown in popularity, yet no single best technique has been identified, reflecting their differing performance in different situations. Our aim was to quantify the uncertainties associated with the application of two complementary modelling techniques. Generalised linear mixed models (GLMM) and generalised additive mixed models (GAMM) were used to model the realised niche of ombrotrophic Sphagnum species in British peatlands. These models were then used to predict changes in Sphagnum cover between 2020 and 2050 based on projections of climate change and atmospheric deposition of nitrogen and sulphur. Over 90% of the variation in the GLMM predictions was due to niche model parameter uncertainty, dropping to 14% for the GAMM. After covarying out other factors, average variation in predicted values of Sphagnum cover across UK peatlands was the next largest source of variation (8% for the GLMM and 86% for the GAMM). The better performance of the GAMM needs to be weighed against its tendency to overfit the training data. While our niche models are only a first approximation, we used them to undertake a preliminary evaluation of the relative importance of climate change and nitrogen and sulphur deposition, and of the geographic locations of the largest expected changes in Sphagnum cover. Predicted changes in cover were all small (generally <1% in an average 4 m2 unit area) but also highly uncertain. The peatlands expected to be most affected by climate change in combination with atmospheric pollution were Dartmoor, the Brecon Beacons and the western Lake District.
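The variance partitioning described above can be illustrated with a toy two-way decomposition, separating prediction variance into a parameter-uncertainty component and a between-site component (all distributions and numbers here are invented for the sketch):

```python
import numpy as np

# Rows = draws from the niche-model parameter distribution, columns = peatland
# sites. The fraction of total prediction variance attributable to parameter
# uncertainty vs. between-site variation is estimated from the two marginal
# variances of the prediction matrix.
rng = np.random.default_rng(42)
n_draws, n_sites = 200, 50
site_effect = rng.normal(0.0, 0.5, size=n_sites)     # variation across sites
param_effect = rng.normal(0.0, 1.0, size=n_draws)    # parameter uncertainty
pred = param_effect[:, None] + site_effect[None, :]  # predicted cover (arbitrary units)

var_param = pred.mean(axis=1).var()  # variance due to parameter draws
var_site = pred.mean(axis=0).var()   # variance due to site differences
frac_param = var_param / (var_param + var_site)
```

With these invented scales, parameter uncertainty dominates, mirroring the GLMM result quoted above.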

Relevance: 30.00%

Abstract:

The present study investigates the initiation of precipitating deep convection in an ensemble of convection-resolving mesoscale models. Results of eight different model runs from five non-hydrostatic models are compared for a case from the Convective and Orographically-induced Precipitation Study (COPS). An isolated convective cell initiated east of the Black Forest crest in southwest Germany, although convective available potential energy was only moderate and convective inhibition was high. Measurements revealed that, in the absence of synoptic forcing, convection was initiated by local processes related to the orography. In particular, lifting by low-level convergence in the planetary boundary layer is assumed to be the dominant process on that day. The models used different configurations as well as different initial and boundary conditions. By comparing the performance of the different models with each other and with measurements, the processes that need to be well represented to initiate convection at the right place and time are discussed. Besides an accurate specification of the thermodynamic and kinematic fields, the results highlight the role of boundary-layer convergence features for quantitative precipitation forecasts in mountainous terrain.

Relevance: 30.00%

Abstract:

This study analyzes the issue of American option valuation when the underlying exhibits a GARCH-type volatility process. We propose the use of Rubinstein's Edgeworth binomial tree (EBT), in contrast to the simulation-based methods considered in previous studies. The EBT-based valuation approach makes an implied calibration of the pricing model feasible. By empirically analyzing the pricing performance of American index and equity options, we illustrate the superiority of the proposed approach.
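For orientation, the sketch below prices an American put on a plain Cox-Ross-Rubinstein binomial tree via backward induction with an early-exercise check. Rubinstein's EBT additionally reshapes the terminal-node probabilities with an Edgeworth expansion to match a target skewness and kurtosis, which is not reproduced here; the function name and parameters are ours:

```python
import numpy as np

def crr_american_put(S0, K, r, sigma, T, n):
    """American put on a plain CRR binomial tree (illustrative sketch only)."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))   # up factor
    d = 1.0 / u                       # down factor
    p = (np.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = np.exp(-r * dt)
    # terminal stock prices (i down-moves) and payoffs
    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(K - S, 0.0)
    # backward induction with early-exercise check at every node
    for _ in range(n):
        S = S[:-1] / u  # stock prices one time step earlier
        V = np.maximum(disc * (p * V[:-1] + (1 - p) * V[1:]), K - S)
    return V[0]
```

An implied calibration, as described above, would tune the tree's distributional inputs so model prices match observed option quotes.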

Relevance: 30.00%

Abstract:

This study examines the relation between corporate social performance and stock returns in the UK. We closely evaluate the interactions between social and financial performance with a set of disaggregated social performance indicators for environment, employment, and community activities instead of using an aggregate measure. While scores on a composite social performance indicator are negatively related to stock returns, we find that the poor financial reward offered by such firms is attributable to their good social performance on the environmental and, to a lesser extent, the community aspects. Considerable abnormal returns are available from holding a portfolio of the socially least desirable stocks. These relationships between social and financial performance can be rationalized by multi-factor models for explaining the cross-sectional variation in returns, but not by industry effects.

Relevance: 30.00%

Abstract:

The performance of various statistical models and commonly used financial indicators for forecasting securitised real estate returns is examined for five European countries: the UK, Belgium, the Netherlands, France and Italy. Within a VAR framework, it is demonstrated that the gilt-equity yield ratio is in most cases a better predictor of securitised returns than the term structure or the dividend yield. In particular, investors should consider in their real estate return models the predictability of the gilt-equity yield ratio in Belgium, the Netherlands and France, and the term structure of interest rates in France. Predictions obtained from the VAR and univariate time-series models are compared with the predictions of an artificial neural network model. It is found that, whilst no single model is universally superior across all series, accuracy measures and horizons considered, the neural network model is generally able to offer the most accurate predictions for 1-month horizons. For quarterly and half-yearly forecasts, the random walk with drift is the most successful for the UK, Belgian and Dutch returns and the neural network for French and Italian returns. Although this study underscores market context and forecast horizon as parameters relevant to the choice of the forecast model, it strongly indicates that analysts should exploit the potential of neural networks and assess more fully their forecast performance against more traditional models.
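As a sketch of the simplest benchmark in the comparison above, a random walk with drift forecasts every holdout return as the in-sample mean return. The data below are synthetic; the VAR and neural-network models are not reproduced:

```python
import numpy as np

def rmse(forecast, actual):
    """Root-mean-square forecast error."""
    return np.sqrt(np.mean((forecast - actual) ** 2))

rng = np.random.default_rng(0)
returns = 0.4 + rng.normal(0.0, 1.0, 120)   # synthetic monthly return series
train, holdout = returns[:96], returns[96:]  # 8 years in-sample, 2 years out

drift = train.mean()                         # random-walk-with-drift forecast
rw_forecasts = np.full(holdout.shape, drift)
naive_forecasts = np.zeros_like(holdout)     # zero-return benchmark

rw_rmse = rmse(rw_forecasts, holdout)
naive_rmse = rmse(naive_forecasts, holdout)
```

Competing models (VAR, neural networks) would be scored with the same accuracy measure over the same holdout period.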

Relevance: 30.00%

Abstract:

Models of normal word production are well specified about the effects of the frequency of linguistic stimuli on lexical access, but are less clear regarding the same effects on later stages of word production, particularly word articulation. In aphasia, this lack of specificity about downstream frequency effects is even more noticeable because there is a relatively limited amount of data on the time course of frequency effects for this population. This study begins to fill this gap by comparing the effects of variation in word frequency (lexical, whole word) and bigram frequency (sub-lexical, within word) on the word production abilities of ten normal speakers and eight individuals with mild-to-moderate aphasia. In an immediate repetition paradigm, participants repeated single monosyllabic words in which word frequency (high or low) was crossed with bigram frequency (high or low). Indices for mapping the time course of these effects included reaction time (RT) for linguistic processing and motor preparation, and word duration (WD) for speech motor performance (word articulation time). The results indicated that individuals with aphasia had significantly longer RT and WD than normal speakers. RT showed a significant main effect only for word frequency (i.e., high-frequency words had shorter RT). WD showed significant main effects of word and bigram frequency; however, contrary to our expectations, high-frequency items had longer WD. Further investigation of WD revealed that, independent of the influence of word and bigram frequency, vowel type (tense or lax) had the expected effect on WD. Moreover, individuals with aphasia differed from control speakers in their ability to implement tense vowel duration, even though they could produce an appropriate distinction between tense and lax vowels. The results highlight the importance of using temporal measures to identify subtle deficits in linguistic and speech motor processing in aphasia, the crucial role of the phonetic characteristics of the stimulus set in studying speech production, and the need for language production models to account more explicitly for word articulation.

Relevance: 30.00%

Abstract:

Visual observation of human actions provokes more motor activation than observation of robotic actions. We investigated the extent to which this visuomotor priming effect is mediated by bottom-up or top-down processing. The bottom-up hypothesis suggests that robotic movements are less effective in activating the ‘mirror system’ via pathways from visual areas via the superior temporal sulcus to parietal and premotor cortices. The top-down hypothesis postulates that beliefs about the animacy of a movement stimulus modulate mirror system activity via descending pathways from areas such as the temporal pole and prefrontal cortex. In an automatic imitation task, subjects performed a prespecified movement (e.g. hand opening) on presentation of a human or robotic hand making a compatible (opening) or incompatible (closing) movement. The speed of responding on compatible trials, compared with incompatible trials, indexed visuomotor priming. In the first experiment, robotic stimuli were constructed by adding a metal and wire ‘wrist’ to a human hand. Questionnaire data indicated that subjects believed these movements to be less animate than those of the human stimuli but the visuomotor priming effects of the human and robotic stimuli did not differ. In the second experiment, when the robotic stimuli were more angular and symmetrical than the human stimuli, human movements elicited more visuomotor priming than the robotic movements. However, the subjects’ beliefs about the animacy of the stimuli did not affect their performance. These results suggest that bottom-up processing is primarily responsible for the visuomotor priming advantage of human stimuli.

Relevance: 30.00%

Abstract:

The performance of flood inundation models is often assessed using satellite-observed data; however, these data have inherent uncertainty. In this study we assess the impact of this uncertainty when calibrating a flood inundation model (LISFLOOD-FP) for a flood event in December 2006 on the River Dee, North Wales, UK. The flood extent is delineated from an ERS-2 SAR image of the event using an active contour model (snake), and water levels at the flood margin are calculated through intersection of the shoreline vector with LiDAR topographic data. Gauged water levels are used to create a reference water surface slope for comparison with the satellite-derived water levels. Residuals between the satellite-observed data points and those from the reference line are spatially clustered into groups of similar values. We show that model calibration achieved using pattern matching of observed and predicted flood extent is negatively influenced by this spatial dependency in the data. By contrast, model calibration using water elevations produces realistic calibrated optimum friction parameters even when spatial dependency is present. To test the impact of removing spatial dependency, a new method of evaluating flood inundation model performance is developed by using multiple random subsamples of the water surface elevation data points. By testing for spatial dependency using Moran’s I, multiple subsamples of water elevations that have no significant spatial dependency are selected. The model is then calibrated against these data and the results averaged. This gives a near-identical result to calibration using spatially dependent data, but has the advantage of being a statistically robust assessment of model performance in which we can have more confidence. Moreover, by using the variations found in the subsamples of the observed data it is possible to assess the effects of observational uncertainty on the assessment of flooding risk.
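A sketch of Moran's I, the statistic used above to test subsamples for spatial dependency, assuming simple binary distance-based weights (the function and the example grid are ours, not the study's weighting scheme):

```python
import numpy as np

def morans_i(values, coords, bandwidth):
    """Moran's I spatial autocorrelation for point observations with binary
    contiguity weights: w_ij = 1 when two distinct points lie within
    `bandwidth` of each other, else 0."""
    z = np.asarray(values, dtype=float)
    z = z - z.mean()                      # deviations from the mean
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = ((d > 0) & (d <= bandwidth)).astype(float)
    n, s0 = len(z), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Usage: on a 5x5 grid whose values rise with x, neighbouring points are
# alike, so Moran's I comes out strongly positive.
grid = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
I = morans_i(grid[:, 0], grid, bandwidth=1.0)
```

Subsamples for which I is not significantly different from its expectation would be retained for calibration.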

Relevance: 30.00%

Abstract:

The requirement to forecast volcanic ash concentrations was amplified in response to the 2010 Eyjafjallajökull eruption, when ash safety limits for aviation were introduced in the European area. The ability to provide accurate quantitative forecasts relies to a large extent on the source term, i.e. the emission of ash as a function of time and height. This study presents source term estimations of the ash emissions from the Eyjafjallajökull eruption derived with an inversion algorithm which constrains modeled ash emissions with satellite observations of volcanic ash. The algorithm is tested with input from two different dispersion models, run on three different meteorological input data sets. The results are robust to which dispersion model and meteorological data are used. Modeled ash concentrations are compared quantitatively to independent measurements from three different research aircraft and one surface measurement station. These comparisons show that the models perform reasonably well in simulating the ash concentrations, and simulations using the source term obtained from the inversion are in overall better agreement with the observations (rank correlation = 0.55, Figure of Merit in Time (FMT) = 25–46%) than simulations using simplified source terms (rank correlation = 0.21, FMT = 20–35%). The vertical structures of the modeled ash clouds mostly agree with lidar observations, and the modeled ash particle size distributions agree reasonably well with observed size distributions. There are occasionally large differences between simulations, but the model mean usually outperforms any individual model. The results emphasize the benefits of using an ensemble-based forecast for improved quantification of uncertainties in future ash crises.

Relevance: 30.00%

Abstract:

This paper provides evidence regarding the risk-adjusted performance of 19 UK real estate funds over the period 1991-2001. Using Jensen’s alpha, the results are generally favourable towards the hypothesis that real estate fund managers showed superior risk-adjusted performance over this period. However, using three widely known parametric statistical procedures to jointly test for timing and selection ability, the results are less conclusive. The paper then utilises the meta-analysis technique to further examine the regression results in an attempt to estimate the proportion of variation in results attributable to sampling error. The meta-analysis results reveal strong evidence, across all models, that the variation in findings is real and may not be attributed to sampling error. Thus, the meta-analysis results provide strong evidence that on average the sample of real estate funds analysed in this study delivered significant risk-adjusted performance over this period. The meta-analysis for the three timing and selection models strongly indicates that this outperformance of the benchmark resulted from superior selection ability, while the evidence for the ability of real estate fund managers to time the market is at best weak. Thus, we can say that although real estate fund managers are unable to outperform a passive buy-and-hold strategy through timing, they are able to improve their risk-adjusted performance through selection ability.
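Jensen's alpha, as used above, is the intercept of an OLS regression of the fund's excess returns on the benchmark's excess returns: R_fund - R_f = alpha + beta (R_mkt - R_f) + eps. A minimal sketch on synthetic monthly data (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(7)
mkt_excess = rng.normal(0.5, 2.0, 120)   # benchmark excess returns, % per month
true_alpha, true_beta = 0.3, 0.8         # "true" values to be recovered
fund_excess = true_alpha + true_beta * mkt_excess + rng.normal(0.0, 0.5, 120)

# OLS: regress fund excess returns on a constant and the market excess return;
# the fitted intercept is Jensen's alpha, the slope is the fund's beta.
X = np.column_stack([np.ones_like(mkt_excess), mkt_excess])
alpha, beta = np.linalg.lstsq(X, fund_excess, rcond=None)[0]
```

A significantly positive alpha is the evidence of superior risk-adjusted performance referred to in the abstract.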

Relevance: 30.00%

Abstract:

We present an intercomparison and verification analysis of 20 GCMs (Global Circulation Models) included in the 4th IPCC assessment report regarding their representation of the hydrological cycle on the Danube river basin for 1961–2000 and for the 2161–2200 SRESA1B scenario runs. The basin-scale properties of the hydrological cycle are computed by spatially integrating the precipitation, evaporation, and runoff fields using the Voronoi-Thiessen tessellation formalism. The span of the model-simulated mean annual water balances is of the same order of magnitude as the observed Danube discharge at the Delta; the true value is within the range simulated by the models. Some land components seem to have deficiencies, since there are cases where water conservation is violated when annual means are considered. The overall performance and the degree of agreement of the GCMs are comparable to those of the RCMs (Regional Climate Models) analyzed in a previous work, in spite of the much higher resolution and common nesting of the RCMs. The reanalyses are shown to feature several inconsistencies and cannot be used as a verification benchmark for the hydrological cycle in the Danubian region. In the scenario runs, for basically all models the water balance decreases, whereas its interannual variability increases. Changes in the strength of the hydrological cycle are not consistent among models: it is confirmed that capturing the impact of climate change on the hydrological cycle is not an easy task over land areas. Moreover, in several cases we find that qualitatively different behaviors emerge among the models: the ensemble mean does not represent any sort of average model, and often it falls between the models’ clusters.
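The basin-integrated water balance described above reduces to an area-weighted sum of precipitation minus evaporation over the catchment cells; a minimal sketch, with the fractional areas standing in for the Voronoi-Thiessen weights (all numbers invented):

```python
import numpy as np

precip = np.array([2.1, 1.8, 1.5, 1.2])  # precipitation, mm/day per grid cell
evap = np.array([1.0, 0.9, 0.8, 0.7])    # evaporation, mm/day per grid cell
area = np.array([0.4, 0.3, 0.2, 0.1])    # fractional cell areas (sum to 1)

# Basin-mean P - E; at long time scales water conservation requires this
# to match the basin's runoff/discharge.
balance = np.sum(area * (precip - evap))
```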

Relevance: 30.00%

Abstract:

Linear models of property market performance may be misspecified if there exist distinct states where the market drivers behave in different ways. This paper examines the applicability of non-linear regime-based models. A Self Exciting Threshold Autoregressive (SETAR) model is applied to property company share data, using the real rate of interest to define regimes. Distinct regimes appear exhibiting markedly different market behaviour. The model both casts doubt on the specification of conventional linear models and offers the possibility of developing effective trading rules for real estate equities.
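A minimal sketch of the SETAR idea described above, assuming a two-regime AR(1) in which the regime is set by a threshold on an exogenous variable such as the real interest rate (the function, data, and threshold are invented for the illustration):

```python
import numpy as np

def fit_setar(y, threshold_var, threshold):
    """Two-regime SETAR-style fit: a separate AR(1) is estimated by OLS in
    each regime, the regime being chosen by whether the threshold variable
    exceeds `threshold`."""
    y_lag, y_cur = y[:-1], y[1:]
    high = threshold_var[:-1] > threshold
    coefs = {}
    for name, mask in (("low", ~high), ("high", high)):
        X = np.column_stack([np.ones(mask.sum()), y_lag[mask]])
        coefs[name] = np.linalg.lstsq(X, y_cur[mask], rcond=None)[0]
    return coefs

# Synthetic series with a known regime structure:
rng = np.random.default_rng(1)
n = 2000
rate = rng.normal(0.0, 1.0, n)             # stand-in for the real interest rate
eps = rng.normal(0.0, 1.0, n)
y = np.zeros(n)
for t in range(1, n):
    phi = 0.8 if rate[t - 1] > 0 else 0.2  # different persistence per regime
    y[t] = phi * y[t - 1] + eps[t]
coefs = fit_setar(y, rate, 0.0)            # slopes recover phi per regime
```

The markedly different fitted coefficients across regimes are what cast doubt on a single linear specification.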

Relevance: 30.00%

Abstract:

The idea of incorporating multiple models of linear rheology into a superensemble, to forge a consensus forecast from the individual model predictions, is investigated. The relative importance of the individual models in the so-called multimodel superensemble (MMSE) was inferred by evaluating their performance on a set of experimental training data, via nonlinear regression. The predictive ability of the MMSE model was tested by comparing its predictions on test data that were similar (in-sample) and dissimilar (out-of-sample) to the training data used in the calibration. For the in-sample forecasts, we found that the MMSE model easily outperformed the best constituent model. The presence of good individual models greatly enhanced the MMSE forecast, while the presence of some bad models in the superensemble also improved the MMSE forecast modestly. While the performance of the MMSE model on the out-of-sample test data was not as spectacular, it demonstrated the robustness of this approach.
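A minimal sketch of the superensemble calibration, assuming linear least-squares weights (the study above uses nonlinear regression); all models and data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
truth = np.linspace(0.0, 10.0, 50)            # training observations
models = np.column_stack([
    truth + rng.normal(0.0, 0.3, 50),         # a good model
    0.5 * truth + rng.normal(0.0, 0.3, 50),   # a biased model
    rng.normal(5.0, 2.0, 50),                 # a bad model
])

# Weight each model by regressing the observations on the model predictions;
# the consensus forecast is the weighted combination of all members.
weights, *_ = np.linalg.lstsq(models, truth, rcond=None)
mmse_forecast = models @ weights

mmse_rmse = np.sqrt(np.mean((mmse_forecast - truth) ** 2))
best_single_rmse = min(np.sqrt(np.mean((models[:, k] - truth) ** 2))
                       for k in range(3))
```

In-sample, the weighted combination cannot do worse than its best member, matching the finding above; the real test is on out-of-sample data.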

Relevance: 30.00%

Abstract:

The formulation and implementation of LEAF-2, the Land Ecosystem–Atmosphere Feedback model, which comprises the representation of land–surface processes in the Regional Atmospheric Modeling System (RAMS), is described. LEAF-2 is a prognostic model for the temperature and water content of soil, snow cover, vegetation, and canopy air, and includes turbulent and radiative exchanges between these components and with the atmosphere. Subdivision of a RAMS surface grid cell into multiple areas of distinct land-use types is allowed, with each subgrid area, or patch, containing its own LEAF-2 model, and each patch interacts with the overlying atmospheric column with a weight proportional to its fractional area in the grid cell. A description is also given of TOPMODEL, a land hydrology model that represents surface and subsurface downslope lateral transport of groundwater. Details of the incorporation of a modified form of TOPMODEL into LEAF-2 are presented. Sensitivity tests of the coupled system are presented that demonstrate the potential importance of the patch representation and of lateral water transport in idealized model simulations. Independent studies that have applied LEAF-2 and verified its performance against observational data are cited. Linkage of RAMS and TOPMODEL through LEAF-2 creates a modeling system that can be used to explore the coupled atmosphere–biophysical–hydrologic response to altered climate forcing at local watershed and regional basin scales.
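The patch scheme described above can be sketched as an area-weighted aggregation: each land-use patch computes its own surface flux, and the grid cell hands the overlying atmospheric column the weighted sum (fluxes and fractions below are invented for the illustration):

```python
import numpy as np

patch_fraction = np.array([0.5, 0.3, 0.2])      # e.g. forest, crop, urban shares
sensible_heat = np.array([120.0, 80.0, 200.0])  # W/m^2 computed by each patch

# Flux seen by the atmospheric column: each patch contributes in proportion
# to its fractional area within the grid cell.
cell_flux = np.sum(patch_fraction * sensible_heat)
```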

Relevance: 30.00%

Abstract:

The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented where three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), have been evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided, and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of the models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
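The variance ratio method, one of the two published reference models mentioned above, can be sketched as a linear map of the reference speed whose predictions reproduce the concurrent site mean and standard deviation (function name and synthetic data are ours):

```python
import numpy as np

def variance_ratio_mcp(site, ref):
    """Variance-ratio MCP sketch: fit slope = sd(site)/sd(ref) and an offset
    so that predictions from the concurrent reference record match the site
    mean and standard deviation; returns a predictor for reference speeds."""
    slope = site.std() / ref.std()
    offset = site.mean() - slope * ref.mean()
    return lambda ref_speeds: offset + slope * ref_speeds

# Synthetic concurrent records for the two sites:
rng = np.random.default_rng(5)
ref = 8.0 * rng.weibull(2.0, 1000)             # reference-site wind speeds, m/s
site = 1.2 * ref + rng.normal(0.0, 0.8, 1000)  # concurrent prospective-site speeds
predict = variance_ratio_mcp(site, ref)
pred = predict(ref)  # long-term use would feed in the historic reference series
```

By construction the predictions match the first two moments of the site record, which is the defining property of this reference model.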