198 results for Forecasting accuracy
Abstract:
This paper proposes and tests a new framework for weighting recursive out-of-sample prediction errors according to their corresponding levels of in-sample estimation uncertainty. In essence, we show how to use the maximum possible amount of information from the sample in the evaluation of the prediction accuracy, by commencing the forecasts at the earliest opportunity and weighting the prediction errors. Via a Monte Carlo study, we demonstrate that the proposed framework selects the correct model from a set of candidate models considerably more often than the existing standard approach when only a small sample is available. We also show that the proposed weighting approaches result in tests of equal predictive accuracy that have much better sizes than the standard approach. An application to an exchange rate dataset highlights relevant differences in the results of tests of predictive accuracy based on the standard approach versus the framework proposed in this paper.
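The paper's exact weighting scheme is not reproduced in the abstract, but the core idea, down-weighting early recursive forecasts whose underlying models were estimated on fewer observations, can be sketched as a weighted mean squared forecast error. A hypothetical illustration; the linear-in-sample-size weights below are an assumption, not the paper's scheme:

```python
import numpy as np

def weighted_msfe(errors, weights):
    """Weighted mean squared forecast error: prediction errors are
    weighted rather than treated equally, so forecasts made from more
    uncertain (smaller-sample) estimates count for less."""
    e = np.asarray(errors, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (w * e**2).sum() / w.sum()

# Hypothetical recursive one-step-ahead errors; earlier forecasts come
# from models estimated on fewer observations, hence lower weights
errors = np.array([1.2, -0.8, 0.5, 0.3, -0.2])
n_obs = np.array([20, 40, 60, 80, 100])  # estimation sample sizes
weights = n_obs / n_obs.max()            # an assumed weighting rule
print(f"weighted MSFE = {weighted_msfe(errors, weights):.3f}")
```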
Abstract:
Met Office station data from 1980 to 2012 have been used to characterise the interannual variability of incident solar irradiance across the UK. The same data are used to evaluate four popular historical irradiance products to determine which are most suitable for use by the UK PV industry for site selection and system design. The study confirmed previous findings that interannual variability is typically 3–6%, and the weighted-average probability of a particular percentage deviation from the mean at an average site in the UK was calculated. This weighted average showed that fewer than 2% of site-years could be expected to fall below 90% of the long-term site mean. The historical irradiance products were compared against Met Office station data from the input years of each product. This investigation found that all products perform well, and none shows a strong spatial trend. Meteonorm 7 is the most conservative (MBE = −2.5%), CMSAF is the most optimistic (MBE = +3.4%), and an average of all four products performs better than any one individual product (MBE = 0.3%).
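For reference, the mean bias error (MBE) reported above is, in essence, the average model-minus-measurement difference expressed as a percentage of the measured mean. A minimal sketch; the variable names and example figures are illustrative, not the study's data:

```python
import numpy as np

def mean_bias_error_pct(modelled, measured):
    """Mean bias error of an irradiance product relative to station
    measurements, expressed as a percentage of the measured mean."""
    modelled = np.asarray(modelled, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return 100.0 * (modelled - measured).mean() / measured.mean()

# Hypothetical example: a product that runs about 2.5% low
measured = np.array([1050.0, 980.0, 1010.0, 995.0])  # kWh/m^2 per year
modelled = measured * 0.975
print(f"MBE = {mean_bias_error_pct(modelled, measured):+.1f}%")  # -2.5%
```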
Abstract:
OBJECTIVE: Assimilating the diagnosis of complete spinal cord injury (SCI) takes time and is not easy, as patients know that there is no 'cure' at the present time. Brain-computer interfaces (BCIs) can facilitate daily living. However, inter-subject variability demands measurements with potential user groups and an understanding of how they differ from the healthy users with whom BCIs are more commonly tested. Thus, a three-class motor imagery (MI) screening (left hand, right hand, feet) was performed with a group of 10 able-bodied and 16 complete spinal-cord-injured people (paraplegics, tetraplegics), with the objective of determining what differences were present between the user groups and how these would affect each group's ability to interact with a BCI. APPROACH: Electrophysiological differences between the patient groups and healthy users were measured in terms of sensorimotor rhythm deflections from baseline during MI, electroencephalogram microstate scalp maps, and strengths of inter-channel phase synchronization. Additionally, using a common spatial pattern algorithm and a linear discriminant analysis classifier, classification accuracy was calculated and compared between groups. MAIN RESULTS: Both patient groups (tetraplegic and paraplegic) show some significant differences in event-related desynchronization strengths, exhibit significant increases in synchronization, and reach significantly lower accuracies (mean (M) = 66.1%) than the group of healthy subjects (M = 85.1%). SIGNIFICANCE: The results demonstrate significant differences in electrophysiological correlates of motor control between healthy individuals and those who stand to benefit most from BCI technology (individuals with SCI). They highlight the difficulty of directly translating results from healthy subjects to participants with SCI, and the challenges that therefore arise in providing BCIs to such individuals.
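As a rough illustration of the classification pipeline mentioned in the APPROACH (common spatial patterns followed by linear discriminant analysis), here is a minimal two-class sketch; the paper used a three-class MI setup, and all data below are simulated:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(X_a, X_b, n_pairs=3):
    """Common spatial patterns for two classes of EEG trials.
    X_a, X_b: arrays of shape (trials, channels, samples)."""
    cov = lambda X: np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
    Ca, Cb = cov(X_a), cov(X_b)
    # Generalised eigenproblem: Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # both extremes
    return vecs[:, picks].T  # (2 * n_pairs, channels)

def log_var_features(W, X):
    """Project trials through CSP filters and take log-variance."""
    Z = np.einsum('fc,ncs->nfs', W, X)
    return np.log(Z.var(axis=2))

# Simulated trials: 20 per class, 22 channels, 500 samples each
rng = np.random.default_rng(0)
X_a = rng.standard_normal((20, 22, 500))
X_b = rng.standard_normal((20, 22, 500))
W = csp_filters(X_a, X_b)
X = np.vstack([log_var_features(W, X_a), log_var_features(W, X_b)])
y = np.r_[np.zeros(20), np.ones(20)]
clf = LinearDiscriminantAnalysis().fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```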
Video stimuli reduce object-directed imitation accuracy: a novel two-person motion-tracking approach
Abstract:
Imitation is an important form of social behavior, and research has aimed to discover and explain the neural and kinematic aspects of imitation. However, much of this research has featured single participants imitating in response to pre-recorded video stimuli. This is in spite of findings showing reduced neural activation to video versus real-life movement stimuli, particularly in the motor cortex. We investigated the degree to which video stimuli may affect the imitation process using a novel motion-tracking paradigm with high spatial and temporal resolution. We recorded 14 positions on the hands, arms, and heads of two individuals in an imitation experiment. One individual moved freely within given parameters (moving balls across a series of pegs) and a second participant imitated. The task was performed at either simple (one ball) or complex (three balls) movement difficulty, and either face-to-face or via a live video projection. After an exploratory analysis, three dependent variables were chosen for examination: 3D grip position, joint angles in the arm, and grip aperture. A cross-correlation and multivariate analysis revealed that object-directed imitation accuracy (as represented by grip position) was reduced with video compared to face-to-face feedback, and at complex compared to simple difficulty. This was most prevalent in the left-right and forward-back motions, defined relative to the imitator sitting face-to-face with the actor or facing a live projected video of the same actor. The results suggest that for tasks requiring object-directed imitation, video stimuli may not be an ecologically valid way to present task materials. However, no similar effects were found in the joint angle and grip aperture variables, suggesting that there are limits to the influence of video stimuli on imitation. The implications of these results are discussed with regard to previous findings, and with suggestions for future experimentation.
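The cross-correlation analysis mentioned above can be illustrated with a minimal sketch: normalised cross-correlation between an actor's and an imitator's position traces, returning the peak correlation and the lag at which it occurs. All signals below are simulated, not the study's recordings:

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def max_cross_correlation(actor, imitator):
    """Peak normalised cross-correlation between two 1-D position
    traces, and the lag (in samples) at which it occurs."""
    a = (actor - actor.mean()) / actor.std()
    b = (imitator - imitator.mean()) / imitator.std()
    xc = correlate(b, a, mode='full') / len(a)
    lags = correlation_lags(len(b), len(a), mode='full')
    k = np.argmax(xc)
    return xc[k], lags[k]

# Hypothetical traces: the imitator trails the actor by 30 samples
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 1000)
actor = np.sin(2 * np.pi * 0.5 * t)
imitator = np.roll(actor, 30) + 0.1 * rng.standard_normal(t.size)
r, lag = max_cross_correlation(actor, imitator)
print(f"peak r = {r:.2f} at lag = {lag} samples")  # lag ~ +30
```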
Abstract:
Empirical studies have frequently found that professional forecasters are conservative and display herding behaviour. Whilst a large number of papers have considered equities as well as macroeconomic series, few have considered the accuracy of forecasts for alternative asset classes such as real estate. We consider the accuracy of forecasts for the UK commercial real estate market over the period 1999–2011. The results illustrate that forecasters tend to under-estimate growth rates during strong market conditions and over-estimate them when the market is performing poorly. This conservatism not only results in smoothed estimates but also implies that forecasters display herding behaviour. There is also a marked difference in the relative accuracy of capital and total returns versus rental figures. Whilst rental growth forecasts are relatively accurate, considerable inaccuracy is observed with respect to capital values and total returns.
Abstract:
Objective: Psychological problems should be identified in breast cancer patients proactively if doctors and nurses are to help them cope with the challenges imposed by their illness. Screening is one possible way to identify emotional problems proactively. Self-report questionnaires can be useful alternatives to carrying out psychiatric interviews during screening, because interviewing a large number of patients can be impractical due to limited resources. Two such measures are the Hospital Anxiety and Depression Scale (HADS) and the General Health Questionnaire-12 (GHQ-12). Method: The present study aimed to compare the performance of the GHQ-12, and the HADS Unitary Scale and its subscales to that of the Schedule for Affective Disorders and Schizophrenia (SADS) in identifying patients with affective disorders, including DSM major depression and generalized anxiety disorder. The sample consisted of 296 female breast cancer patients who underwent surgery for breast cancer a year previously. Results: A small number of patients (11%) were identified as having DSM major depression or generalized anxiety disorder based on SADS score. The findings indicate that the optimal thresholds in detecting generalized anxiety disorder and DSM major depression with the HADS anxiety and depression subscales were ≥ 8 and ≥ 7, with 93.3% and 77.3% sensitivity, respectively, and 77.9% and 87.1% specificity, respectively. They also had a 21% and 36% positive predictive value, respectively. Using the HADS Unitary Scale the optimal threshold for detecting affective disorders was ≥ 12, with 88.9% sensitivity, 80.7% specificity, and a 35% positive predictive value. In detecting affective disorders, the optimal threshold on the GHQ-12 was ≥ 2, with 77.8% sensitivity and 70.2% specificity. This scale also had a 24% positive predictive value. In detecting generalized anxiety disorder and DSM major depression, the optimal thresholds on the GHQ-12 were ≥ 2 and ≥ 4 with 73.3% and 77.3% sensitivity, respectively, and 67.5% and 82% specificity, respectively. The scale also had 12% and 29% positive predictive values, respectively. Conclusion: The HADS Unitary Scale and its subscales were effective in identifying affective disorders. They can be used as screening measures in breast cancer patients. The GHQ-12 was less accurate in detecting affective disorders than the HADS, but it can also be used as a screening instrument to detect affective disorders, generalized anxiety disorder, and DSM major depression.
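The sensitivity, specificity, and positive predictive value figures reported above follow from a standard 2x2 screening table. A minimal sketch with illustrative counts (the abstract reports rates rather than raw counts, so the numbers below are hypothetical):

```python
def screening_stats(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value
    from a 2x2 screening table (counts of true/false positives
    and negatives against the diagnostic interview)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# Illustrative counts for one threshold, not the study's actual table
sens, spec, ppv = screening_stats(tp=14, fp=52, fn=1, tn=229)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, PPV={ppv:.1%}")
```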
Abstract:
Objectives: In this study a prototype of a new health forecasting alert system is developed, aligned to the approach used in the Met Office's (MO) National Severe Weather Warning Service (NSWWS). The aim is to improve the information available to responders in the health and social care system by linking temperatures more directly to risks of mortality, and to develop a system more coherent with other weather alerts. The prototype is compared to the current system in the Cold Weather and Heatwave plans via a case-study approach to verify its potential advantages and shortcomings. Method: The prototype health forecasting alert system introduces an “impact vs likelihood matrix” for the health impacts of hot and cold temperatures, similar to the matrices used operationally for other weather hazards as part of the NSWWS. The impact axis of this matrix is based on existing epidemiological evidence, which shows an increasing relative risk of death at extremes of outdoor temperature beyond a threshold that can be identified epidemiologically. The likelihood axis is based on a probability measure associated with the temperature forecast. The new method is tested on two case studies (one during summer 2013, one during winter 2013) and compared to the performance of the current alert system. Conclusions: The prototype shows some clear improvements over the current alert system. It allows a much greater degree of flexibility, provides more detailed regional information about the health risks associated with periods of extreme temperatures, and is more coherent with other weather alerts, which may make it easier for front-line responders to use. It will require validation and engagement with stakeholders before it can be considered for use.
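An impact-vs-likelihood matrix of the kind described can be sketched as a simple lookup table; the levels and colour bands below are illustrative assumptions, not those used by the Met Office:

```python
# Illustrative alert matrix: rows are impact, columns are likelihood
ALERT = [
    # likelihood ->  low      medium    high
    ["green",  "green",  "yellow"],   # low impact
    ["green",  "yellow", "amber"],    # medium impact
    ["yellow", "amber",  "red"],      # high impact
]

def alert_level(impact, likelihood):
    """Map qualitative impact and likelihood ratings to an alert colour."""
    scale = {"low": 0, "medium": 1, "high": 2}
    return ALERT[scale[impact]][scale[likelihood]]

print(alert_level("high", "medium"))  # amber
```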
Abstract:
We consider the extent to which long-horizon survey forecasts of consumption, investment and output growth are consistent with theory-based steady-state values, and whether imposing these restrictions on long-horizon forecasts will enhance their accuracy. The restrictions we impose are consistent with a two-sector model in which the variables grow at different rates in steady state. The restrictions are imposed by exponential-tilting of simple auxiliary forecast densities. We show that imposing the consumption-output restriction yields modest improvements in the long-horizon output growth forecasts, and larger improvements in the forecasts of the cointegrating combination of consumption and output: the transformation of the data on which accuracy is assessed plays an important role.
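Exponential tilting of an auxiliary forecast density can be illustrated in the simplest one-moment case: reweight equal-weighted draws so that the tilted mean satisfies a steady-state restriction. A minimal sketch; the target value and draws are hypothetical:

```python
import numpy as np
from scipy.optimize import brentq

def tilt_weights(lam, g):
    """Exponential-tilting weights, normalised; stabilised by
    subtracting the max exponent before exponentiating."""
    a = lam * g
    w = np.exp(a - a.max())
    return w / w.sum()

def exponential_tilt(draws, target):
    """Find weights proportional to exp(lam * x) such that the
    weighted mean of `draws` equals `target` (one moment restriction)."""
    g = draws - target
    moment = lambda lam: tilt_weights(lam, g) @ g
    lam = brentq(moment, -10.0, 10.0)  # moment(lam) is increasing in lam
    return tilt_weights(lam, g)

# Hypothetical: auxiliary forecast draws of long-run growth with mean
# ~2.4, tilted toward a theory-implied steady state of 2.0
rng = np.random.default_rng(2)
draws = rng.normal(2.4, 0.5, size=5000)
w = exponential_tilt(draws, target=2.0)
print(f"tilted mean = {w @ draws:.3f}")  # ~2.000
```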
Abstract:
Annual losses of cocoa in Ghana to mirids are significant, so accurate timing of insecticide application is critical to enhancing yields. However, cocoa farmers often lack information on the expected mirid population in each season that would enable them to optimise pesticide use. This study assessed farmers' knowledge and perceptions of mirid control and their willingness to use forecasting systems informing them of expected mirid peaks and the timing of pesticide application. A total of 280 farmers were interviewed in the Eastern and Ashanti regions of Ghana using a structured questionnaire with open- and closed-ended questions. Most farmers (87%) considered mirids the most important insect pest on cocoa, with 47% of them attributing 30–40% of annual crop loss to mirid damage. There was wide variation in the timing of insecticide application, as farmers used different sources of information to decide when to start spraying. The majority of farmers (56%) did not have access to information on the type, frequency, and timing of insecticides to use; however, respondents who were members of farmer groups had better access to such information. Extension officers were the preferred channel for information transfer, with 72% of farmers preferring them to other available methods of communication. Almost all respondents (99%) saw the need for a comprehensive forecasting system to help farmers manage cocoa mirids. The importance of accurately timed mirid control based on forecast information supplied to farmer groups and extension officers is discussed.
Abstract:
Ecological forecasting is difficult but essential, because reactive management results in corrective actions that are often too late to avert significant environmental damage. Here, we appraise different forecasting methods with a particular focus on the modelling of species populations. We show how simple extrapolation of current trends in state is often inadequate because environmental drivers change in intensity over time and new drivers emerge. However, statistical models, incorporating relationships with drivers, simply offset the prediction problem, requiring us to forecast how the drivers will themselves change over time. Some authors approach this problem by focusing in detail on a single driver, whilst others use ‘storyline’ scenarios, which consider projected changes in a wide range of different drivers. We explain why both approaches are problematic and identify a compromise to model key drivers and interactions along with possible response options to help inform environmental management. We also highlight the crucial role of validation of forecasts using independent data. Although these issues are relevant for all types of ecological forecasting, we provide examples based on forecasts for populations of UK butterflies. We show how a high goodness-of-fit for models used to calibrate data is not sufficient for good forecasting. Long-term biological recording schemes rather than experiments will often provide data for ecological forecasting and validation because these schemes allow capture of landscape-scale land-use effects and their interactions with other drivers.
Abstract:
Ocean prediction systems are now able to analyse and predict temperature, salinity and velocity structures within the ocean by assimilating measurements of the ocean's temperature and salinity into physically based ocean models. Data assimilation combines current estimates of state variables, such as temperature and salinity, from a computational model with measurements of the ocean and atmosphere in order to improve forecasts and reduce uncertainty in the forecast accuracy. Data assimilation generally works well with ocean models away from the equator but has been found to induce vigorous and unrealistic overturning circulations near the equator. A pressure correction method was developed at the University of Reading and the Met Office to control these circulations using ideas from control theory and an understanding of equatorial dynamics. The method has been used for the last 10 years in seasonal forecasting and ocean prediction systems at the Met Office and the European Centre for Medium-Range Weather Forecasts (ECMWF). It has been an important element in recent reanalyses of ocean heat uptake, which mitigates climate change.
Abstract:
Genome-wide association studies (GWAS) have been widely used in the genetic dissection of complex traits. However, common methods are all based on a fixed-SNP-effect mixed linear model (MLM) and single-marker analysis, such as efficient mixed-model association (EMMA). These methods require Bonferroni correction for multiple tests, which is often too conservative when the number of markers is extremely large. To address this concern, we propose a random-SNP-effect MLM (RMLM) and a multi-locus RMLM (MRMLM) for GWAS. The RMLM simply treats the SNP effect as random, but it allows a modified Bonferroni correction to be used to calculate the threshold p-value for significance tests. The MRMLM is a multi-locus model including markers selected from the RMLM method with a less stringent selection criterion. Owing to its multi-locus nature, no multiple-test correction is needed. Simulation studies show that the MRMLM is more powerful in QTN detection and more accurate in QTN effect estimation than the RMLM, which in turn is more powerful and accurate than the EMMA. To demonstrate the new methods, we analyzed six flowering-time-related traits in Arabidopsis thaliana and detected more genes than previously reported using the EMMA. The MRMLM therefore provides an alternative for multi-locus GWAS.
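For contrast with the proposed methods, the standard single-marker scan with Bonferroni correction that the paper improves upon can be sketched as follows; this naive version uses simple regression and omits the kinship random effect of a true mixed-model analysis, and all data are simulated:

```python
import numpy as np
from scipy import stats

def single_marker_scan(genotypes, phenotype):
    """Naive single-marker scan: a simple-regression p-value per SNP.
    A stand-in for the fixed-SNP-effect mixed-model scan; the kinship
    random effect used by EMMA is omitted for brevity."""
    return np.array([stats.linregress(g, phenotype).pvalue
                     for g in genotypes.T])

# Simulated data: 200 individuals, 5000 SNPs, one true QTN at index 42
rng = np.random.default_rng(3)
G = rng.integers(0, 3, size=(200, 5000)).astype(float)
y = 0.8 * G[:, 42] + rng.standard_normal(200)
p = single_marker_scan(G, y)
alpha, m = 0.05, G.shape[1]
print("Bonferroni threshold:", alpha / m)                  # 1e-05
print("significant SNPs:", np.flatnonzero(p < alpha / m))  # expect [42]
```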
Abstract:
Human body thermoregulation models have been widely used in the fields of human physiology and thermal comfort research. However, there are few studies on methods for evaluating these models. This paper summarises the existing evaluation methods and critically analyses their flaws. On this basis, a method for evaluating the accuracy of human body thermoregulation models is proposed. The new evaluation method contributes to the development of such models and validates their accuracy both statistically and empirically, allowing the accuracy of different models to be compared. Furthermore, the new method is not only suitable for evaluating human body thermoregulation models, but can in principle also be applied to evaluating the accuracy of population-based models in other research fields.
Abstract:
Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Exploiting the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, using a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in solver (B&B), and it is what we recommend for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
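The yes-no Bloom filter described above is straightforward to sketch. A minimal Python implementation under the stated design; the hash construction and sizes are illustrative, and choosing which false positives to store in the no-filter is exactly the optimization problem the paper addresses:

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter with k hash functions over m bits."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

class YesNoBloomFilter:
    """Yes-no Bloom filter: the no-filter stores known false positives
    of the yes-filter, so querying both can reject some of them."""
    def __init__(self, m_yes, m_no, k):
        self.yes = BloomFilter(m_yes, k)
        self.no = BloomFilter(m_no, k)

    def add(self, item):
        self.yes.add(item)

    def add_false_positive(self, item):
        # Only safe if `item` is not a true member: the paper's ILP
        # enforces exactly this "no true positives" constraint.
        self.no.add(item)

    def __contains__(self, item):
        return item in self.yes and item not in self.no

# Hypothetical usage
f = YesNoBloomFilter(m_yes=256, m_no=64, k=3)
for word in ["alpha", "beta", "gamma"]:
    f.add(word)
# Any non-member the yes-filter happens to accept is a false positive;
# storing it in the no-filter lets the combined filter reject it later.
if "delta" in f.yes:
    f.add_false_positive("delta")
print("alpha" in f)  # True  (true member)
print("delta" in f)  # False (rejected, or never matched)
```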
Abstract:
Floods are the most frequent of natural disasters, affecting millions of people across the globe every year. The anticipation and forecasting of floods at the global scale is crucial to preparing for severe events and providing early awareness where local flood models and warning services may not exist. As numerical weather prediction models continue to improve, operational centres are increasingly using the meteorological output from these to drive hydrological models, creating hydrometeorological systems capable of forecasting river flow and flood events at much longer lead times than has previously been possible. Furthermore, developments in, for example, modelling capabilities, data and resources in recent years have made it possible to produce global scale flood forecasting systems. In this paper, the current state of operational large scale flood forecasting is discussed, including probabilistic forecasting of floods using ensemble prediction systems. Six state-of-the-art operational large scale flood forecasting systems are reviewed, describing similarities and differences in their approaches to forecasting floods at the global and continental scale. Currently, operational systems have the capability to produce coarse-scale discharge forecasts in the medium-range and disseminate forecasts and, in some cases, early warning products, in real time across the globe, in support of national forecasting capabilities. With improvements in seasonal weather forecasting, future advances may include more seamless hydrological forecasting at the global scale, alongside a move towards multi-model forecasts and grand ensemble techniques, responding to the requirement of developing multi-hazard early warning systems for disaster risk reduction.