58 results for Probability densities
Abstract:
Field experiments were conducted in northern Greece in 2003 and 2004 to evaluate the effects of tillage regimes (moldboard plowing, chisel plowing, and rotary tilling), cropping sequences (continuous cotton, cotton-sugar beet rotation, and continuous tobacco) and herbicide treatments with inter-row hand hoeing on weed population densities. Total weed densities were not affected by tillage treatment except that of barnyardgrass (Echinochloa crus-galli), which increased only in moldboard-plowed plots during 2003. Redroot pigweed (Amaranthus retroflexus) and black nightshade (Solanum nigrum) densities were reduced in continuous cotton, while purple nutsedge (Cyperus rotundus), E. crus-galli, S. nigrum, and johnsongrass (Sorghum halepense) densities were reduced in tobacco. A. retroflexus and S. nigrum were effectively controlled by all herbicide treatments with inter-row hand hoeing, whereas E. crus-galli was effectively reduced by herbicides applied to cotton and tobacco. S. halepense density reduction resulted from herbicide applied to tobacco with inter-row hand hoeing. Yields of all crops were higher under moldboard plowing and herbicide treatments. Pre-sowing and pre-emergence herbicide treatments in cotton and pre-transplant treatments in tobacco, integrated with inter-row cultivation, resulted in efficient control of annual weed species and good crop yields. These observations are of practical relevance to crop selection by farmers seeking to maintain weed populations at economically acceptable densities through the integration of various planting dates, sustainable herbicide use and inter-row cultivation; tools of great importance in integrated weed management systems. Keywords: cropping sequence, herbicide, integrated weed management, inter-row cultivation, tillage.
Abstract:
Proper scoring rules provide a useful means to evaluate probabilistic forecasts. Independently of scoring rules, it has been argued that reliability and resolution are desirable forecast attributes. The mathematical expectation of the score allows for a decomposition into reliability- and resolution-related terms, demonstrating a relationship between scoring rules and reliability/resolution. A similar decomposition holds for the empirical (i.e. sample-average) score over an archive of forecast–observation pairs. This empirical decomposition, though, provides an overly optimistic estimate of the potential score (i.e. the optimum score which could be obtained through recalibration), showing that a forecast assessment based solely on the empirical resolution and reliability terms will be misleading. The differences between the theoretical and empirical decompositions are investigated, and specific recommendations are given on how to obtain better estimators of reliability and resolution in the case of the Brier and Ignorance scoring rules.
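The empirical reliability–resolution decomposition discussed above can be sketched for the Brier score as follows. This is a generic binned (Murphy-style) decomposition, not the paper's improved estimators; the function name and the choice of ten equal-width bins are assumptions of ours, and the identity BS = REL − RES + UNC holds exactly only when all forecasts falling in a bin are identical.

```python
import numpy as np

def brier_decomposition(p, y, bins=10):
    """Binned decomposition of the empirical Brier score.

    p: forecast probabilities in [0, 1]; y: binary outcomes (0/1).
    Returns (reliability, resolution, uncertainty). When forecasts
    within each bin are identical, the empirical Brier score equals
    reliability - resolution + uncertainty; otherwise a within-bin
    variance term is absorbed into the estimate.
    """
    p, y = np.asarray(p, float), np.asarray(y, float)
    ybar = y.mean()
    uncertainty = ybar * (1.0 - ybar)
    # Assign each forecast to an equal-width bin on [0, 1].
    idx = np.minimum((p * bins).astype(int), bins - 1)
    rel = res = 0.0
    for k in range(bins):
        mask = idx == k
        if mask.any():
            n_k = mask.sum()
            p_k, o_k = p[mask].mean(), y[mask].mean()
            rel += n_k * (p_k - o_k) ** 2   # forecast vs observed frequency
            res += n_k * (o_k - ybar) ** 2  # bin frequency vs climatology
    return rel / len(p), res / len(p), uncertainty
```

With discrete forecast values the three terms recover the empirical Brier score exactly, illustrating the decomposition the abstract warns against reading naively.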
Abstract:
The continuous ranked probability score (CRPS) is a frequently used scoring rule. In contrast with many other scoring rules, the CRPS evaluates cumulative distribution functions. An ensemble of forecasts can easily be converted into a piecewise constant cumulative distribution function with steps at the ensemble members. This renders the CRPS a convenient scoring rule for the evaluation of ‘raw’ ensembles, obviating the need for sophisticated ensemble model output statistics or dressing methods prior to evaluation. In this article, a relation between the CRPS score and the quantile score is established. The evaluation of ‘raw’ ensembles using the CRPS is discussed in this light. It is shown that latent in this evaluation is an interpretation of the ensemble as quantiles but with non-uniform levels. This needs to be taken into account if the ensemble is evaluated further, for example with rank histograms.
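The evaluation of a 'raw' ensemble via its piecewise-constant empirical CDF can be made concrete through the standard identity CRPS = E|X − y| − ½ E|X − X′|, with X, X′ drawn independently from the ensemble; this identity is exact for the empirical CDF with steps at the ensemble members. A minimal sketch (the function name is ours):

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of the piecewise-constant empirical CDF of an ensemble.

    members: ensemble values; obs: the verifying observation.
    Implements CRPS = E|X - obs| - 0.5 * E|X - X'|.
    """
    x = np.asarray(members, float)
    term1 = np.abs(x - obs).mean()                      # E|X - obs|
    term2 = np.abs(x[:, None] - x[None, :]).mean()      # E|X - X'|
    return term1 - 0.5 * term2
```

For a one-member ensemble this reduces to the absolute error, consistent with the CRPS generalizing the mean absolute error to distributions.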
Abstract:
In this paper I analyze the general equilibrium of a random Walrasian economy. Dependence among agents is introduced in the form of dependency neighborhoods. Under uncertainty, an agent may fail to survive due to a meager endowment in a particular state (direct effect), as well as due to an unfavorable equilibrium price system at which the value of the endowment falls short of the minimum needed for survival (indirect terms-of-trade effect). To illustrate the main result I compute the stochastic limit of the equilibrium price and the probability of survival of an agent in a large Cobb-Douglas economy.
Abstract:
From 2001, the construction of flats and high-density developments increased in England while the building of houses declined. Does this indicate a change in taste, or is it a result of government planning policies? In this paper, an analysis is made of the long-term effects of the policy of constraint which has existed for the past 50 years, but the increase in density is identified as occurring primarily after new, revised planning guidance was issued in England in 2000 which discouraged low-density development. To substantiate this, it is pointed out that the change which occurred in England did not occur in Scotland, where guidance was not changed to encourage high-density residential development. The conclusion that the change is the result of planning policies and not of a change in taste is confirmed by surveys of the occupants of new high-rise developments in Leeds. The new flat-dwellers were predominantly young and childless and expressed the intention of moving out of the city centre and into houses in the near future, when they could. Following recent changes in guidance by the new coalition government, the construction of flats in England is expected to fall back to earlier levels over the next few years.
Abstract:
We develop a new sparse kernel density estimator using a forward constrained regression framework, within which the nonnegativity and summing-to-unity constraints on the mixing weights can easily be satisfied. Our main contribution is to derive a recursive algorithm that selects significant kernels one at a time based on the minimum integrated square error (MISE) criterion, used for both the selection of kernels and the estimation of mixing weights. The proposed approach is simple to implement and the associated computational cost is very low. Specifically, the complexity of our algorithm is of the order of the number of training data N, which is much lower than the order-N² cost of the best existing sparse kernel density estimators. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with comparable accuracy to those of the classical Parzen window estimate and other existing sparse kernel density estimators.
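As a rough illustration of selecting kernels one at a time, the sketch below greedily picks centres for a uniform Gaussian mixture so as to minimize a grid-based squared-error surrogate against the full Parzen estimate. This is emphatically not the paper's recursive O(N) MISE algorithm with estimated mixing weights; the bandwidth h, the evaluation grid, and the uniform weights are simplifying assumptions of ours.

```python
import numpy as np

def parzen(centres, h):
    """Gaussian kernel density estimate (1-D) with bandwidth h."""
    centres = np.asarray(centres, float)
    c = 1.0 / (len(centres) * h * np.sqrt(2.0 * np.pi))
    def pdf(x):
        x = np.atleast_1d(np.asarray(x, float))
        return c * np.exp(-0.5 * ((x[:, None] - centres[None, :]) / h) ** 2).sum(axis=1)
    return pdf

def sparse_kde(x_train, h, n_kernels):
    """Greedy sketch: at each step add the data point whose inclusion
    as a kernel centre (uniform mixture) best matches the full Parzen
    estimate in a grid-based squared-error sense."""
    x_train = np.asarray(x_train, float)
    grid = np.linspace(x_train.min() - 3 * h, x_train.max() + 3 * h, 400)
    target = parzen(x_train, h)(grid)
    chosen = []
    for _ in range(n_kernels):
        errs = [np.mean((parzen(chosen + [c], h)(grid) - target) ** 2)
                for c in x_train]
        chosen.append(x_train[int(np.argmin(errs))])
    return parzen(chosen, h)
```

The resulting mixture is a valid density (nonnegative, unit mass) built from far fewer kernels than the full Parzen estimate, which is the point of sparsity.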
Abstract:
This paper examines the impact of the auction process on residential properties that, whilst unsuccessful at auction, sold subsequently. The empirical analysis considers both the probability of sale and the premium of the subsequent sale price over the guide price, reserve and opening bid. The findings highlight that the final achieved sale price is influenced by key price variables revealed both prior to and during the auction itself. Factors such as auction participation, the number of individual bidders and the number of bids are significant in a number of the alternative specifications.
Abstract:
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
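For the quadratic-score case, the flavour of a forecast-encompassing regression can be sketched as a least-squares fit of λ in the combination (1 − λ)p₁ + λp₂: λ near zero suggests p₁ encompasses p₂. This omits the test statistics and limiting distributions the paper actually develops; the function name is ours.

```python
import numpy as np

def encompassing_lambda(p1, p2, y):
    """Least-squares estimate of lambda in (1 - lambda)*p1 + lambda*p2
    fitted to binary outcomes y under quadratic (Brier) loss.

    Minimizing sum((y - p1 - lambda*(p2 - p1))**2) gives the closed
    form lambda = d'(y - p1) / d'd with d = p2 - p1.
    """
    p1, p2, y = (np.asarray(a, float) for a in (p1, p2, y))
    d = p2 - p1
    return float(np.dot(d, y - p1) / np.dot(d, d))
```

In practice one would compare the estimate with its (appropriately derived) sampling distribution rather than reading the point estimate alone.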
Abstract:
A new sparse kernel density estimator is introduced. Our main contribution is to develop a recursive algorithm for the selection of significant kernels one at a time, using the minimum integrated square error (MISE) criterion for both kernel selection and the estimation of the mixing weights. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with competitive accuracy to existing kernel density estimators.
Abstract:
We consider methods of evaluating multivariate density forecasts. A recently proposed method is found to lack power when the correlation structure is mis-specified. Tests that have good power to detect mis-specifications of this sort are described. We also consider the properties of the tests in the presence of more general mis-specifications.
Abstract:
Techniques are proposed for evaluating forecast probabilities of events. The tools are especially useful when, as in the case of the Survey of Professional Forecasters (SPF) expected probability distributions of inflation, recourse cannot be made to the method of construction in the evaluation of the forecasts. The tests of efficiency and conditional efficiency are applied to the forecast probabilities of events of interest derived from the SPF distributions, and supplement a whole-density evaluation of the SPF distributions based on the probability integral transform approach.
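The probability integral transform (PIT) evaluation mentioned above rests on the fact that u = F(y) is uniform on [0, 1] when the forecast CDF F is correct. A minimal sketch under an assumed Gaussian forecast family (a simplification of ours; the SPF distributions are reported as histograms):

```python
import math
import numpy as np

def pit_gaussian(obs, mu, sigma):
    """PIT values u = F(obs) under Gaussian forecasts N(mu, sigma^2).

    If the forecast distributions are correct, the u's are i.i.d.
    uniform on [0, 1].
    """
    z = (np.asarray(obs, float) - mu) / sigma
    erf = np.vectorize(math.erf)
    return 0.5 * (1.0 + erf(z / math.sqrt(2.0)))

# Simulated well-calibrated forecasts: outcomes truly ~ N(mu, 1).
rng = np.random.default_rng(1)
mu = rng.normal(size=5000)
y = mu + rng.normal(size=5000)
u = pit_gaussian(y, mu, 1.0)
```

A flat histogram of u then indicates calibration; U-shapes or central humps indicate under- or over-dispersed forecast distributions.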
Abstract:
We consider different methods for combining probability forecasts. In empirical exercises, the data generating process of the forecasts and of the event being forecast is not known, and therefore the optimal form of combination will also be unknown. We consider the properties of various combination schemes for a number of plausible data generating processes, and indicate which types of combination are likely to be useful. We also show that whether forecast encompassing is found to hold between two rival sets of forecasts may depend on the type of combination adopted. The relative performances of the different combination methods are illustrated with an application to predicting recession probabilities using leading indicators.
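Two standard combination schemes of the kind such comparisons consider are the linear opinion pool and averaging in log-odds; a minimal sketch with equal weights as the default (function names and defaults are ours):

```python
import numpy as np

def linear_pool(probs, weights=None):
    """Linear opinion pool: convex combination of probability forecasts.

    probs: one probability per forecaster (first axis indexes forecasters).
    """
    probs = np.asarray(probs, float)
    w = (np.full(len(probs), 1.0 / len(probs)) if weights is None
         else np.asarray(weights, float))
    return np.tensordot(w, probs, axes=1)

def log_odds_pool(probs, weights=None):
    """Average forecasts in log-odds space and map back through the
    logistic function; a common alternative to the linear pool."""
    p = np.clip(np.asarray(probs, float), 1e-12, 1 - 1e-12)
    lo = np.log(p / (1.0 - p))
    w = (np.full(len(p), 1.0 / len(p)) if weights is None
         else np.asarray(weights, float))
    return 1.0 / (1.0 + np.exp(-np.tensordot(w, lo, axes=1)))
```

The two pools generally disagree away from p = 0.5, which is one reason conclusions about forecast encompassing can depend on the combination type adopted.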