979 results for Forecast error variance


Relevance: 80.00%

Abstract:

Commencing 13 March 2000, the Corporate Law Economic Reform Program Act 1999 (Cth) introduced changes to the regulation of corporate fundraising in Australia. In particular, it effected a reduction in the litigation risk associated with initial public offering prospectus disclosure. We find that the change is associated with a reduction in forecast frequency and an increase in forecast value relevance, but not with forecast error or bias. These results confirm previous findings that changes in litigation risk affect the level but not the quality of disclosure. They also suggest that the reforms' objectives of reducing fundraising costs while improving investor protection have been achieved.

Relevance: 80.00%

Abstract:

Cystic echinococcosis, caused by Echinococcus granulosus, is highly endemic in North Africa and the Middle East. This paper examines the abundance and prevalence of E. granulosus infection in camels in Tunisia. No cysts were found in 103 camels from Kebili, whilst 19 of 188 camels from Benguerden (10.1%) were infected. Of the cysts found, 95% were considered fertile, with protoscolices present, and 80% of protoscolices were considered viable by their ability to exclude aqueous eosin. Molecular techniques applied to cyst material from camels demonstrated that the study animals were infected with the G1 sheep strain of E. granulosus. Observed data were fitted to a mathematical model by maximum likelihood techniques to define the parameters and their confidence limits, and the negative binomial distribution was used to define the error variance in the observed data. The infection pressure on camels was somewhat lower than that reported for sheep in an earlier study. However, because camels are much longer-lived animals, the fitted model suggested that older camels have a relatively high prevalence, reaching a most likely value of 32% at age 15 years. This could represent an important source of transmission of this zoonotic strain to dogs, and hence indirectly to man. In common with similar studies on other species, there was no evidence of parasite-induced immunity in camels.
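As an illustration of the fitting approach mentioned above (maximum likelihood with a negative binomial error structure), the following is a minimal sketch in Python using scipy. The data are simulated, and this simple one-sample fit only stands in for, and is not, the paper's age-structured transmission model.

```python
# Minimal sketch: maximum likelihood fit of a negative binomial distribution
# (mean m, dispersion k) to hypothetical cyst counts. The variance implied by
# the fit is m + m**2 / k, which is how the NB "defines" the error variance.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

rng = np.random.default_rng(0)
# Hypothetical cyst counts for 188 animals (aggregated over ages for simplicity).
counts = rng.negative_binomial(n=0.5, p=0.2, size=188)

def neg_log_lik(params):
    """Negative log-likelihood of a negative binomial with mean m and dispersion k."""
    m, k = params
    if m <= 0 or k <= 0:
        return np.inf
    p = k / (k + m)                      # scipy's (n, p) parameterisation
    return -nbinom.logpmf(counts, k, p).sum()

fit = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
m_hat, k_hat = fit.x
print(f"mean = {m_hat:.2f}, dispersion k = {k_hat:.2f}, "
      f"implied variance = {m_hat + m_hat**2 / k_hat:.2f}")
```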

Relevance: 80.00%

Abstract:

In order to determine the age of adult wild dogs, we compared two methods (that of Thomson and Rose (TR method) and that of Knowlton and Whittemore (KW method)) of measuring and calculating pulp cavity:tooth width ratios on upper and lower canine teeth from 68 mixed-sex, known-age wild dogs of 9 months to 13 years of age reared at two localities. Although significant relationships (P = 0.0001) were found between age and pulp cavity ratios by both methods, the TR calculation and measurement showed heteroscedasticity in the error variance, whereas the KW ratios had a more stable error variance and were normally distributed. The KW method also found significant differences in pulp cavity ratios between teeth of the upper and lower jaws (P < 0.0001) and between sexes (P = 0.01), but not by geographic origin (P = 0.1). Regressions and formulae for fitted curves are presented separately for male and female wild dogs. Males show greater variability in pulp cavity decrements with age than do females, suggesting a physiological difference between the sexes. We conclude that the KW method of using pulp cavity as a proportion of tooth width, measured 15 mm from the root tip and averaged over both upper canines, is the more accurate method of estimating the age of adult wild dogs.

Relevance: 80.00%

Abstract:

QTL detection experiments in livestock species commonly use the half-sib design. Each male is mated to a number of females, each female producing a limited number of progeny. Analysis consists of attempting to detect associations between phenotype and genotype measured on the progeny. When family sizes are limiting, experimenters may wish to incorporate as much information as possible into a single analysis. However, combining information across sires is problematic because of incomplete linkage disequilibrium between the markers and the QTL in the population. This study describes formulae for obtaining maximum likelihood estimates (MLEs) via the expectation-maximization (EM) algorithm for use in a multiple-trait, multiple-family analysis. A model specifying a QTL with only two alleles and a common within-sire error variance is assumed. Compared with single-family analyses, multi-family analyses can improve power up to fourfold. The accuracy and precision of QTL location estimates are also substantially improved. With small family sizes, the multi-family, multi-trait analyses substantially reduce, but do not totally remove, biases in QTL effect estimates. In situations where multiple QTL alleles are segregating, the multi-family analysis will average out the effects of the different QTL alleles.

Relevance: 80.00%

Abstract:

Analysis of covariance (ANCOVA) is a useful method of ‘error control’, i.e., it can reduce the size of the error variance in an experimental or observational study. An initial measure obtained before the experiment, which is closely related to the final measurement, is used to adjust the final measurements, thus reducing the error variance. When this method is used to reduce the error term, the X variable must not itself be affected by the experimental treatments, because part of the treatment effect would then also be removed. Hence, the method can only be safely used when X is measured before an experiment. A further limitation of the analysis is that only the linear effect of X on Y is removed, and it is possible that Y is a curvilinear function of X. A question often raised is whether ANCOVA should be used routinely in experiments instead of a randomized blocks or split-plot design, which may also reduce the error variance. The answer depends on the relative precision of the different methods in each scenario. Considerable judgment is often required to select the best experimental design, and statistical help should be sought at an early stage of an investigation.
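A minimal worked sketch of ANCOVA as error control, using statsmodels on simulated data (variable names and effect sizes are hypothetical): the pre-treatment baseline is entered as a covariate, and the residual error variance is compared with and without it.

```python
# ANCOVA sketch: a baseline measured before treatment is used as a covariate
# to reduce the error variance when testing the treatment effect on the final
# measurement. Data below are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 30
baseline = rng.normal(50, 10, size=2 * n)                      # measured before the experiment
treatment = np.repeat(["control", "treated"], n)
effect = np.where(treatment == "treated", 5.0, 0.0)
final = 10 + 0.8 * baseline + effect + rng.normal(0, 4, size=2 * n)

df = pd.DataFrame({"final": final, "baseline": baseline, "treatment": treatment})

# ANCOVA: treatment effect adjusted for the linear effect of the baseline covariate.
ancova = smf.ols("final ~ C(treatment) + baseline", data=df).fit()
# For comparison, a one-way ANOVA ignoring the covariate has a larger error variance.
anova = smf.ols("final ~ C(treatment)", data=df).fit()

print(ancova.summary().tables[1])
print("Residual variance with covariate:   ", round(ancova.mse_resid, 2))
print("Residual variance without covariate:", round(anova.mse_resid, 2))
```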

Relevance: 80.00%

Abstract:

In this paper we investigate whether consideration of store-level heterogeneity in marketing mix effects improves the accuracy of the marketing mix elasticities, fit, and forecasting accuracy of the widely applied SCAN*PRO model of store sales. Models with continuous and discrete representations of heterogeneity, estimated using hierarchical Bayes (HB) and finite mixture (FM) techniques, respectively, are empirically compared to the original model, which does not account for store-level heterogeneity in marketing mix effects and is estimated using ordinary least squares (OLS). The empirical comparisons are conducted in two contexts: Dutch store-level scanner data for the shampoo product category, and an extensive simulation experiment. The simulation investigates how between- and within-segment variance in marketing mix effects, error variance, the number of weeks of data, and the number of stores affect the accuracy of marketing mix elasticities, model fit, and forecasting accuracy. Contrary to expectations, accommodating store-level heterogeneity does not improve the accuracy of marketing mix elasticities relative to the homogeneous SCAN*PRO model, suggesting that little may be lost by employing the original homogeneous SCAN*PRO model estimated using ordinary least squares. Improvements in fit and forecasting accuracy are also fairly modest. We pursue an explanation for this result, since research in other contexts has shown clear advantages from assuming some type of heterogeneity in market response models. In an Afterthought section, we comment on the controversial nature of our result, distinguishing between factors inherent to household-level data and models, to store-level data and models in general, and to the unique SCAN*PRO model specification.
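For orientation, a rough sketch of the homogeneous benchmark described above: a SCAN*PRO-style multiplicative sales model estimated by pooled OLS on logs. The data, regressor names, and coefficients are simulated and illustrative; this is not the authors' dataset or their full model specification.

```python
# Homogeneous pooled-OLS benchmark: one set of marketing-mix effects for all
# stores, estimated on a log-linearised multiplicative sales model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_stores, n_weeks = 20, 104
df = pd.DataFrame({
    "store": np.repeat(np.arange(n_stores), n_weeks),
    "price_index": rng.uniform(0.7, 1.1, n_stores * n_weeks),   # actual / regular price
    "feature": rng.integers(0, 2, n_stores * n_weeks),          # feature-ad dummy
    "display": rng.integers(0, 2, n_stores * n_weeks),          # in-store display dummy
})
# Simulated multiplicative sales: price elasticity -3, promotion multipliers, lognormal error.
df["sales"] = (100 * df["price_index"] ** -3.0
               * 1.4 ** df["feature"] * 1.2 ** df["display"]
               * np.exp(rng.normal(0, 0.3, len(df))))

model = smf.ols("np.log(sales) ~ np.log(price_index) + feature + display", data=df).fit()
print(model.params)   # coefficient on np.log(price_index) is the pooled price elasticity
```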

Relevance: 80.00%

Abstract:

Given the significance of econometric models in the foreign exchange market, this research examines several important issues in the area: exchange rate pass-through into import prices, liquidity risk and expected returns in the currency market, and common risk factors in currency markets. First, given the importance of exchange rate pass-through in financial economics, the opening empirical chapter studies the degree of exchange rate pass-through into import prices in emerging economies and developed countries, using panel evidence for comparison over the period 1970-2009. The pooled mean group estimation (PMGE) approach is used to investigate the short-run coefficients and the error variance. In general, the results show that import prices are affected positively, though incompletely, by the exchange rate. Secondly, the following study addresses whether there is a relationship between cross-sectional differences in foreign exchange returns and the sensitivities of those returns to fluctuations in liquidity, known as liquidity betas, using a unique dataset of weekly order flow. Finally, the last study follows Lustig, Roussanov and Verdelhan (2011), who show that the large co-movement among exchange rates of different currencies supports a risk-based view of exchange rate determination, and explores the identification of a slope factor in exchange rate changes. The study first constructs monthly portfolios of currencies sorted on the basis of their forward discounts: the lowest interest rate currencies are placed in the first portfolio and the highest interest rate currencies in the last. The results show that, comparing the first and last portfolios, portfolios with higher forward discounts tend overall to contain currencies with higher real interest rates, although there is some fluctuation.
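A brief illustrative sketch of the portfolio construction step described in the final study: currencies are sorted into portfolios each month by their forward discount, and returns are averaged within portfolios. The panel below is simulated and the column names are hypothetical.

```python
# Forward-discount-sorted currency portfolios on a simulated panel.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
months = pd.period_range("2000-01", periods=120, freq="M")
currencies = [f"CUR{i}" for i in range(20)]
panel = pd.DataFrame(
    [(m, c, rng.normal(0.002, 0.01), rng.normal(0.003, 0.03))
     for m in months for c in currencies],
    columns=["month", "currency", "fwd_discount", "excess_return"],
)

# Sort currencies into 5 portfolios by forward discount within each month.
panel["portfolio"] = (panel.groupby("month")["fwd_discount"]
                           .transform(lambda x: pd.qcut(x, 5, labels=False)) + 1)
# Portfolio 1 holds the lowest forward-discount (lowest interest rate) currencies,
# portfolio 5 the highest.
print(panel.groupby("portfolio")["excess_return"].mean())
```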

Relevance: 80.00%

Abstract:

Doctorate in Economics

Relevance: 80.00%

Abstract:

Doctor of Philosophy in Mathematics

Relevance: 80.00%

Abstract:

OBJECTIVE: Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data.

METHODS: We compared 5 popular regression algorithms to model total next-day discharges: (1) autoregressive integrated moving average (ARIMA), (2) autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. The ARIMA model relied on the past 3 months of discharges, whereas nearest neighbor forecasting used the median of similar past discharges to estimate the next-day discharge. In addition, the ARMAX model used the day of the week and the number of patients currently in the ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient features and 88 ward-level features.

RESULTS: Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance, with the random forest achieving a 22.7% improvement in mean absolute error over all days in the year 2014.

CONCLUSIONS: In the absence of clinical information, our study recommends using patient-level and ward-level data in predicting next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments.
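A hedged sketch of the kind of comparison reported above, on simulated daily discharge counts: random forest and support vector regression with simple calendar, lag, and ward-occupancy features versus a 7-day moving-average baseline, scored by mean absolute error. The feature set and data are illustrative stand-ins, not the study's.

```python
# Compare RF and SVR forecasts of next-day discharges against a moving-average
# baseline on simulated data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(4)
days = pd.date_range("2012-01-01", periods=1826, freq="D")
weekday_effect = np.array([12, 11, 11, 10, 13, 6, 5])            # fewer weekend discharges
df = pd.DataFrame({"date": days,
                   "discharges": rng.poisson(weekday_effect[days.dayofweek])})
df["dow"] = df["date"].dt.dayofweek
df["occupancy"] = rng.integers(20, 40, len(df))                  # stand-in ward-level feature
df["lag1"] = df["discharges"].shift(1)
df["lag7"] = df["discharges"].shift(7)
df["ma7"] = df["discharges"].rolling(7).mean().shift(1)          # moving-average baseline forecast
df = df.dropna()

features = ["dow", "occupancy", "lag1", "lag7"]
train, test = df.iloc[:-365], df.iloc[-365:]                     # hold out the final year

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(train[features], train["discharges"])
svr = make_pipeline(StandardScaler(), SVR(C=10.0)).fit(train[features], train["discharges"])

for name, pred in [("moving average", test["ma7"]),
                   ("random forest", rf.predict(test[features])),
                   ("SVR", svr.predict(test[features]))]:
    print(f"{name:>15}: MAE = {mean_absolute_error(test['discharges'], pred):.2f}")
```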

Relevance: 80.00%

Abstract:

This study aimed to investigate the effects of sex and deprivation on participation in a population-based faecal immunochemical test (FIT) colorectal cancer screening programme. The study population included 9785 individuals invited to participate in two rounds of a population-based biennial FIT-based screening programme, in a relatively deprived area of Dublin, Ireland. Explanatory variables included in the analysis were sex, deprivation category of area of residence and age (at end of screening). The primary outcome variable modelled was participation status in both rounds combined (with “participation” defined as having taken part in either or both rounds of screening). Poisson regression with a log link and robust error variance was used to estimate relative risks (RR) for participation. As a sensitivity analysis, data were stratified by screening round. In both the univariable and multivariable models deprivation was strongly associated with participation. Increasing affluence was associated with higher participation; participation was 26% higher in people resident in the most affluent compared to the most deprived areas (multivariable RR = 1.26: 95% CI 1.21–1.30). Participation was significantly lower in males (multivariable RR = 0.96: 95%CI 0.95–0.97) and generally increased with increasing age (trend per age group, multivariable RR = 1.02: 95%CI, 1.01–1.02). No significant interactions between the explanatory variables were found. The effects of deprivation and sex were similar by screening round. Deprivation and male gender are independently associated with lower uptake of population-based FIT colorectal cancer screening, even in a relatively deprived setting. Development of evidence-based interventions to increase uptake in these disadvantaged groups is urgently required.
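A minimal sketch of the modelling approach described above: Poisson regression with a log link and robust (sandwich) standard errors for a binary participation outcome, yielding relative risks rather than odds ratios. The data and covariate codings below are simulated and hypothetical.

```python
# Modified Poisson regression for relative risks, estimated with statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 9785
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "deprivation": rng.integers(1, 6, n),       # 1 = most affluent, 5 = most deprived
    "age_group": rng.integers(1, 6, n),
})
# Simulated participation probabilities with multiplicative covariate effects.
p = 0.45 * 0.96 ** df["male"] * 0.95 ** (df["deprivation"] - 1) * 1.02 ** df["age_group"]
df["participated"] = rng.binomial(1, p.clip(0, 1))

# Poisson GLM with log link; robust (HC0) covariance gives valid standard errors
# for the binary outcome.
model = smf.glm("participated ~ male + deprivation + age_group",
                data=df, family=sm.families.Poisson()).fit(cov_type="HC0")
rr = np.exp(model.params)                        # exponentiated coefficients = relative risks
ci = np.exp(model.conf_int())
print(pd.concat([rr.rename("RR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```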

Relevance: 40.00%

Abstract:

Bounds on the expectation and variance of errors at the output of a multilayer feedforward neural network with perturbed weights and inputs are derived. It is assumed that errors in weights and inputs to the network are statistically independent and small. The bounds obtained are applicable to both digital and analogue network implementations and are shown to be of practical value.
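The bounds themselves are analytic, but the quantity they bound can be illustrated numerically. The Monte Carlo sketch below estimates the mean and variance of the output error of a small feedforward network under small, independent perturbations of its weights and inputs; the network size, activation, and perturbation scale are arbitrary choices for illustration, not taken from the paper.

```python
# Monte Carlo estimate of output-error statistics under small perturbations of
# weights and inputs of a two-layer feedforward network.
import numpy as np

rng = np.random.default_rng(6)

def mlp(x, W1, b1, W2, b2):
    """Two-layer feedforward network with tanh hidden units."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Nominal network and input.
W1, b1 = rng.normal(size=(8, 16)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 1)), rng.normal(size=1)
x = rng.normal(size=8)
y_nominal = mlp(x, W1, b1, W2, b2)

sigma = 1e-3                      # small, independent perturbations of weights and inputs

def perturb(a):
    return a + rng.normal(scale=sigma, size=a.shape)

errors = np.array([mlp(perturb(x), perturb(W1), perturb(b1), perturb(W2), perturb(b2)) - y_nominal
                   for _ in range(10_000)])
print("mean output error:    ", errors.mean())
print("output error variance:", errors.var())
```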

Relevance: 40.00%

Abstract:

Molecular communication is set to play an important role in the design of complex biological and chemical systems. An important class of molecular communication systems is based on the timing channel, where information is encoded in the delay of the transmitted molecule - a synchronous approach. At present, a widely used modeling assumption is perfect synchronization between the transmitter and the receiver. Unfortunately, this assumption is unlikely to hold in most practical molecular systems. To remedy this, we introduce a clock into the model, leading to the molecular timing channel with synchronization error. To quantify the behavior of this new system, we derive upper and lower bounds on the variance-constrained capacity, which we view as a step between the mean-delay and the peak-delay constrained capacities. By numerically evaluating our bounds, we obtain a key practical insight: the drift velocity of the clock links does not need to be significantly larger than the drift velocity of the information link in order to achieve the variance-constrained capacity with perfect synchronization.

Relevance: 40.00%

Abstract:

For a targeted observations case, the dependence of the size of the forecast impact on the targeted dropsonde observation error in the data assimilation is assessed. The targeted observations were made in the lee of Greenland; the dependence of the impact on the proximity of the observations to the Greenland coast is also investigated. Experiments were conducted using the Met Office Unified Model (MetUM), over a limited-area domain at 24-km grid spacing, with a four-dimensional variational data assimilation (4D-Var) scheme. Reducing the operational dropsonde observation errors by one-half increases the maximum forecast improvement from 5% to 7%–10%, measured in terms of total energy. However, the largest impact is seen by replacing two dropsondes on the Greenland coast with two farther from the steep orography; this increases the maximum forecast improvement from 5% to 18% for an 18-h forecast (using operational observation errors). Forecast degradation caused by two dropsonde observations on the Greenland coast is shown to arise from spreading of data by the background errors up the steep slope of Greenland. Removing boundary layer data from these dropsondes reduces the forecast degradation, but it is only a partial solution to this problem. Although only from one case study, these results suggest that observations positioned within a correlation length scale of steep orography may degrade the forecast through the anomalous upslope spreading of analysis increments along terrain-following model levels.

Relevance: 40.00%

Abstract:

This paper uses appropriately modified information criteria to select models from the GARCH family, which are subsequently used for predicting US dollar exchange rate return volatility. The out-of-sample forecast accuracy of models chosen in this manner compares favourably on mean absolute error grounds, although less favourably on mean squared error grounds, with that of forecasts generated by the commonly used GARCH(1, 1) model. An examination of the orders of the models selected by the criteria reveals that (1, 1) models are typically selected less than 20% of the time.
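A sketch of the model-selection idea using the third-party arch package (an assumption; the paper does not specify software, and it uses modified criteria rather than the plain BIC shown here): GARCH(p, q) models are fitted over a small grid on simulated returns, the order is chosen by an information criterion, and one-step-ahead variance forecasts are compared with those of GARCH(1, 1).

```python
# Order selection over a GARCH(p, q) grid by BIC, compared with GARCH(1, 1).
import numpy as np
from arch import arch_model

rng = np.random.default_rng(7)
returns = rng.standard_t(df=6, size=2000) * 0.6      # stand-in for daily FX returns (%)

fits = {}
for p in range(1, 4):
    for q in range(1, 4):
        fits[(p, q)] = arch_model(returns, vol="GARCH", p=p, q=q).fit(disp="off")

best_order = min(fits, key=lambda k: fits[k].bic)
print("order selected by BIC:", best_order)

# One-step-ahead conditional variance forecasts from the selected model and GARCH(1, 1).
print("selected model forecast:", fits[best_order].forecast(horizon=1).variance.iloc[-1, 0])
print("GARCH(1, 1) forecast:   ", fits[(1, 1)].forecast(horizon=1).variance.iloc[-1, 0])
```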