77 results for Discrete Regression and Qualitative Choice Models
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
With increasing pressure on crop production from the evolution of herbicide resistance, farmers are increasingly adopting Integrated Weed Management (IWM) strategies to augment their weed control. These include measures to increase the competitiveness of the crop canopy, such as increased sowing rate and the use of more competitive cultivars. While there are data on the relative impact of these non-chemical weed control methods assessed in isolation, there is uncertainty about their combined contribution, which may be hindering their adoption. In this article, the INTERCOM simulation model of crop/weed competition was used to examine the combined impact of crop density, sowing date and cultivar choice on the outcome of competition between wheat (Triticum aestivum) and Alopecurus myosuroides. Alopecurus myosuroides is a problematic weed of cereal crops in North-Western Europe and the primary target for IWM in the UK because it has evolved resistance to a range of herbicides. The model was parameterised for two cultivars with contrasting competitive ability, and simulations were run over 10 years at different crop densities and two sowing dates. The results suggest that sowing date, sowing density and cultivar choice work largely in a complementary fashion, enhancing competitive ability against weeds when used in combination. However, the relative benefit of choosing a more competitive cultivar decreases at later sowing dates and higher crop densities. Modelling approaches could be further employed to examine the effectiveness of IWM, reducing the need for more expensive and cumbersome long-term in situ experimentation.
Abstract:
We consider a fully complex-valued radial basis function (RBF) network for regression and classification applications. For regression problems, the locally regularised orthogonal least squares (LROLS) algorithm aided by the D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF models, is extended to the fully complex-valued RBF (CVRBF) network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximum model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance, while the D-optimality design criterion further enhances model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selection criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully CVRBF network. The proposed network is also applied to four-class classification problems that are typically encountered in communication systems. A complex-valued orthogonal forward selection algorithm based on the multi-class Fisher ratio of class separability measure is derived for constructing sparse CVRBF classifiers that generalise well. The effectiveness of the proposed algorithm is demonstrated using the example of nonlinear beamforming for multiple-antenna aided communication systems that employ the complex-valued quadrature phase shift keying modulation scheme.
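As a rough, hypothetical illustration of the kind of model this abstract describes, the sketch below fits a complex-valued RBF network to a toy nonlinear channel by complex least squares. The paper's LROLS and D-optimality centre-selection steps are omitted, and all names and parameter values are assumptions.

```python
import numpy as np

# Minimal sketch of a fully complex-valued RBF (CVRBF) regression model.
# The paper's LROLS + D-optimality centre selection is omitted: centres are
# simply the first training inputs. All names and values here are illustrative.

def cvrbf_design(x, centres, width):
    """Gaussian basis on complex inputs: phi_ij = exp(-|x_i - c_j|^2 / width)."""
    d2 = np.abs(x[:, None] - centres[None, :]) ** 2
    return np.exp(-d2 / width).astype(complex)

rng = np.random.default_rng(0)

# Toy complex-valued nonlinear channel: y = x + 0.2 x^2 + complex noise
x = (rng.standard_normal(200) + 1j * rng.standard_normal(200)) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
y = x + 0.2 * x**2 + noise

centres = x[:20]                              # naive centre choice, no OLS selection
Phi = cvrbf_design(x, centres, width=1.0)

w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # complex least-squares weights
print("training MSE:", np.mean(np.abs(Phi @ w - y) ** 2))
```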
Abstract:
A fundamental principle in practical nonlinear data modelling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross-validation is often used to estimate generalisation error when choosing among different network architectures (M. Stone, "Cross-validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 111-147, 1974). Based on the minimisation of LOO criteria, either the mean square of the LOO errors or the LOO misclassification rate, we present two backward elimination algorithms as model post-processing procedures for regression and classification problems. The proposed backward elimination procedures exploit an orthogonalisation procedure to maintain orthogonality between the subspace spanned by the pruned model and the deleted regressor. It is then shown that the LOO criteria used in both algorithms can be calculated via analytic recursive formulae, derived in this contribution, without actually splitting the estimation data set, thereby reducing computational expense. Compared with most other model construction methods, the proposed algorithms are advantageous in several respects: (i) there are no tuning parameters to be optimised via an extra validation data set; (ii) the procedure is fully automatic, with no additional stopping criterion; and (iii) the model structure selection is based directly on model generalisation performance. Illustrative examples on regression and classification demonstrate that the proposed algorithms are viable post-processing methods for pruning a model to gain extra sparsity and improved generalisation.
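The analytic LOO idea mentioned above can be illustrated for a generic linear-in-the-parameters model: the i-th leave-one-out residual equals the ordinary residual divided by (1 - h_ii), where h_ii is the i-th diagonal of the hat matrix. A minimal sketch of that standard identity follows; it is not the paper's backward-elimination algorithm, and all names are illustrative.

```python
import numpy as np

# Sketch: leave-one-out (PRESS) errors for a linear-in-the-parameters model,
# computed analytically via the hat-matrix diagonal rather than refitting
# the model n times.

def loo_errors(Phi, y):
    """e_loo_i = e_i / (1 - h_ii), with h = Phi (Phi'Phi)^-1 Phi'."""
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    resid = y - Phi @ theta
    # Diagonal of the hat matrix via a solve, avoiding the full n x n matrix
    G = np.linalg.solve(Phi.T @ Phi, Phi.T)
    h = np.einsum("ij,ji->i", Phi, G)
    return resid / (1.0 - h)

rng = np.random.default_rng(1)
Phi = rng.standard_normal((100, 5))
y = Phi @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(100)
print("mean squared LOO error:", np.mean(loo_errors(Phi, y) ** 2))
```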
Abstract:
A new parameter-estimation algorithm, which minimises the cross-validated prediction error for linear-in-the-parameter models, is proposed, based on stacked regression and an evolutionary algorithm. It is first shown, using a criterion called the mean dispersion error (MDE), that cross-validation is very important for prediction with linear-in-the-parameter models. Stacked regression, which can be regarded as a sophisticated form of cross-validation, is then combined with an evolutionary algorithm to produce the new parameter-estimation algorithm, which preserves the parsimony of a concise model structure determined using the forward orthogonal least squares (OLS) algorithm. PRESS prediction errors are used for cross-validation, and the sunspot and Canadian lynx time series are used to demonstrate the new algorithm.
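The forward OLS structure-selection step referred to above can be sketched generically via the error-reduction ratio (ERR): at each step the candidate regressor is orthogonalised against those already chosen, and the one explaining the largest share of the target variance is selected. This is a textbook-style implementation on toy data, not the paper's stacked-regression or evolutionary components.

```python
import numpy as np

# Sketch of forward orthogonal least squares (OLS) subset selection via the
# error-reduction ratio (ERR). Data and names are illustrative.

def forward_ols(Phi, y, n_terms):
    n, m = Phi.shape
    selected, W = [], []
    for _ in range(n_terms):
        best_j, best_err, best_w = None, -np.inf, None
        for j in range(m):
            if j in selected:
                continue
            w = Phi[:, j].astype(float)
            for wq in W:                        # Gram-Schmidt against chosen terms
                w = w - (wq @ w) / (wq @ wq) * wq
            if w @ w < 1e-12:
                continue
            err = (w @ y) ** 2 / ((w @ w) * (y @ y))   # error-reduction ratio
            if err > best_err:
                best_j, best_err, best_w = j, err, w
        selected.append(best_j)
        W.append(best_w)
    return selected

rng = np.random.default_rng(2)
Phi = rng.standard_normal((200, 20))
y = 2 * Phi[:, 3] - Phi[:, 7] + 0.1 * rng.standard_normal(200)
print("selected regressors:", forward_ols(Phi, y, 2))   # expect columns 3 and 7
```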
Abstract:
Although difference-stationary (DS) and trend-stationary (TS) processes have been subject to considerable analysis, there are no direct comparisons in which each in turn is the data-generation process (DGP). We examine the consequences of incorrectly choosing between these models for forecasting, for both known and estimated parameters. Three sets of Monte Carlo simulations illustrate the analysis: they evaluate the biases in conventional standard errors when each model is mis-specified, compute the relative mean-square forecast errors of the two models under both DGPs, and investigate autocorrelated errors, so that each model can better approximate the converse DGP. The outcomes are surprisingly different from established results.
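A minimal Monte Carlo sketch in the spirit of this comparison follows, assuming a random walk with drift as the DS data-generation process and simple closed-form forecasts for each fitted model. Parameter values and forecasting rules are illustrative assumptions, not the paper's designs.

```python
import numpy as np

# Generate data from a DS DGP (random walk with drift), fit both a DS and a
# TS model, and compare h-step mean-square forecast errors (MSFE).

rng = np.random.default_rng(3)
T, h, reps, drift = 100, 10, 2000, 0.2
mse_ds, mse_ts = 0.0, 0.0

for _ in range(reps):
    e = rng.standard_normal(T + h)
    y = drift * np.arange(1, T + h + 1) + np.cumsum(e)   # DS DGP
    train, future = y[:T], y[T:]

    # DS model: estimate drift from first differences, extrapolate from y_T
    mu = np.mean(np.diff(train))
    f_ds = train[-1] + mu * np.arange(1, h + 1)

    # TS model: OLS regression of y on a linear time trend, then extrapolate
    t = np.arange(1, T + 1)
    b, a = np.polyfit(t, train, 1)
    f_ts = a + b * np.arange(T + 1, T + h + 1)

    mse_ds += np.mean((f_ds - future) ** 2) / reps
    mse_ts += np.mean((f_ts - future) ** 2) / reps

print(f"DS-model MSFE: {mse_ds:.2f}   TS-model MSFE: {mse_ts:.2f}")
```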
Abstract:
In this paper we employ a hypothetical discrete choice experiment (DCE) to examine how much consumers are willing to pay to use technology to customise their food shopping. We conjecture that customised information provision can aid in the composition of a healthier shop. Our results reveal that consumers are prepared to pay relatively more for individual-specific information than for the generic nutritional information typically provided on food labels. In arriving at these results we have examined various model specifications, including those that make use of ex-post debriefing questions on attribute non-attendance and attribute ranking information, and those that consider the time taken to complete the survey. Our main results are robust to the various model specifications we examine.
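A hypothetical sketch of the workhorse behind such a DCE analysis: a conditional logit fitted by maximum likelihood on simulated choices, with willingness to pay recovered as the negative ratio of an attribute coefficient to the price coefficient. Data, attribute names, and coefficient values are invented for illustration and do not reflect the study's design.

```python
import numpy as np
from scipy.optimize import minimize

# Conditional (multinomial) logit on simulated discrete-choice data, with
# the implied willingness to pay: WTP = -beta_attribute / beta_price.

rng = np.random.default_rng(4)
n, J = 500, 3                       # choice situations, alternatives each
X = rng.standard_normal((n, J, 2))  # attributes: [info quality, price]
beta_true = np.array([1.0, -2.0])

u = X @ beta_true + rng.gumbel(size=(n, J))   # random-utility model
choice = u.argmax(axis=1)

def neg_loglik(beta):
    v = X @ beta
    return -np.sum(v[np.arange(n), choice] - np.log(np.exp(v).sum(axis=1)))

res = minimize(neg_loglik, np.zeros(2), method="BFGS")
b_info, b_price = res.x
print("WTP for the information attribute:", -b_info / b_price)
```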
Abstract:
The purpose of this study is to investigate the effects of self-congruence and functional congruence on tourists' destination choice. The research addresses a gap in the consumer behaviour literature by examining the relationships among self-congruence, functional congruence, and destination choice. Based on a sample of 367 British residents, three research hypotheses are tested using multinomial logistic regression analysis. The results suggest that a tourist's destination choice is influenced strongly by functional congruence but not by self-congruence. The article closes with theoretical and managerial implications as well as future research directions.
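A brief sketch of the multinomial logistic regression step, using simulated stand-in data for the two congruence predictors; the data-generating coefficients below are illustrative, not the study's estimates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Multinomial logistic regression relating two congruence scores to a
# three-way destination choice. With scikit-learn's default 'lbfgs' solver,
# a three-class target is fitted as a multinomial (softmax) model.

rng = np.random.default_rng(5)
n = 367                                   # matching the study's sample size
X = rng.standard_normal((n, 2))           # [self_congruence, functional_congruence]

# Simulated choice driven mainly by the second (functional congruence) column
logits = np.column_stack([0.1 * X[:, 0], 1.5 * X[:, 1], -1.5 * X[:, 1]])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=row) for row in p])

model = LogisticRegression(max_iter=1000).fit(X, y)
print("per-destination coefficients:\n", model.coef_)
```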
Abstract:
1. We compared the baseline phosphorus (P) concentrations inferred by diatom-P transfer functions and export coefficient models at 62 lakes in Great Britain to assess whether the techniques produce similar estimates of historical nutrient status.
2. There was a strong linear relationship between the two sets of values over the whole total P (TP) gradient (2-200 μg TP L-1). However, a systematic bias was observed, with the diatom model producing the higher values in 46 lakes (of which values differed by more than 10 μg TP L-1 in 21). The export coefficient model gave the higher values in 10 lakes (of which the values differed by more than 10 μg TP L-1 in only 4).
3. The difference between baseline and present-day TP concentrations was calculated to compare the extent of eutrophication inferred by the two sets of model output. There was generally poor agreement between the amounts of change estimated by the two approaches. The discrepancy in both the baseline values and the degree of change inferred by the models was greatest in the shallow and more productive sites.
4. Both approaches were applied to two lakes in the English Lake District where long-term P data exist, to assess how well the models track measured P concentrations since approximately 1850. There was good agreement between the pre-enrichment TP concentrations generated by the models. The diatom model paralleled the steeper rise in maximum soluble reactive P (SRP) more closely than the gradual increase in annual mean TP in both lakes. The export coefficient model produced a closer fit to observed annual mean TP concentrations for both sites, tracking the changes in total external nutrient loading.
5. A combined approach is recommended, with the diatom model employed to reflect the nature and timing of the in-lake response to changes in nutrient loading, and the export coefficient model used to establish the origins and extent of changes in the external load and to assess potential reductions in loading under different management scenarios.
6. However, caution must be exercised when applying these models to shallow lakes, where the export coefficient model TP estimate will not include internal P loading from lake sediments and where the diatom TP inferences may over-estimate TP concentrations because of the high abundance of benthic taxa, many of which are poor indicators of trophic state.
Abstract:
Net solar radiation at the ground surface is the energy that drives physical and chemical processes at the surface. In this paper, multi-spectral data from Landsat-5 TM, topographic data from a gridded digital elevation model, field measurements, and the atmospheric model LOWTRAN 7 are used to estimate surface net solar radiation over the FIFE site. First, an improved method is presented and used to calculate the total surface incoming radiation. Surface albedo is then obtained by integrating surface reflectance factors derived from the Landsat-5 TM data. Finally, surface net solar radiation is calculated by subtracting the surface upwelling radiation from the total surface incoming radiation.
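The final subtraction step reduces to net = (1 - albedo) x incoming. A tiny sketch with made-up per-pixel values:

```python
import numpy as np

# Net surface solar radiation as total incoming shortwave minus the reflected
# (upwelling) part, where upwelling = albedo * incoming. Values are
# illustrative, not FIFE measurements.

incoming = np.array([650.0, 720.0, 480.0])   # total incoming radiation, W m^-2
albedo = np.array([0.18, 0.22, 0.15])        # per-pixel albedo from TM reflectances

upwelling = albedo * incoming
net = incoming - upwelling                   # equivalently (1 - albedo) * incoming
print("net solar radiation (W m^-2):", net)
```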
Abstract:
Two experiments investigated transfer effects in implicit memory and consumer choice, using a preference judgement task. Experiment 1 examined whether it is possible to obtain priming for unfamiliar food labels. Additionally, it investigated whether the experience of seeing a brand name with a particular product type would benefit subsequent processing of the brand name when linked with a different product type. Experiment 2 examined whether changes in modality between study and test would affect priming for unfamiliar brand names. Both questions are theoretically important, as well as pertaining to practical concerns in the consumer choice literature. Experiment 1 demonstrated significant priming for unfamiliar food labels, and established that priming was unaffected by changing the product type with which the brand name was associated. In Experiment 2, priming on both auditory and visual versions of the preference judgement task was reduced by changes in modality. The results and implications are discussed in relation to consumer choice and current theories of implicit memory.
Abstract:
The gradualist approach to trade liberalisation views the uniform tariffs implied by MFN status as an important step on the path to free trade. We investigate whether a regime of uniform tariffs is preferable to discriminatory tariffs when countries engage in non-cooperative interaction in multilateral trade. The analysis allows for product differentiation and asymmetric costs. We show that, with cost asymmetry, the countries will disagree on the choice of tariff regime. When the choice of import tariffs and export subsidies is made sequentially, the uniform tariff regime may not be sustainable because of an incentive to deviate to a discriminatory regime. Hence, an international body is needed to ensure compliance with tariff agreements.
Abstract:
Background and Aims: The aims of this investigation were to highlight the qualitative and quantitative diversity apparent between nine diploid Fragaria species and produce interspecific populations segregating for a large number of morphological characters suitable for quantitative trait loci analysis.
Methods: A qualitative comparison of eight described diploid Fragaria species was performed and measurements were taken of 23 morphological traits from 19 accessions, including eight described species and one previously undescribed species. A principal components analysis was performed on 14 mathematically unrelated traits from these accessions, which partitioned the species accessions into distinct morphological groups. Interspecific crosses were performed with accessions of species that displayed significant quantitative divergence and, from these, populations that should segregate for a range of quantitative traits were raised.
Key Results: Significant differences between species were observed for all 23 morphological traits quantified, and three distinct groups of species accessions were observed after the principal components analysis. Interspecific crosses were performed between these groups, and F2 and backcross populations were raised that should segregate for a range of morphological characters. In addition, the study highlighted a number of distinctive morphological characters in many of the species studied.
Conclusions: Diploid Fragaria species are morphologically diverse, yet remain highly interfertile, making the group an ideal model for the study of the genetic basis of phenotypic differences between species through map-based investigation using quantitative trait loci. The segregating interspecific populations raised will be ideal for such investigations and could also provide insights into the nature and extent of genome evolution within this group.
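The principal components step can be sketched generically as follows, with a random stand-in matrix in place of the measured traits (19 accessions by 14 mathematically unrelated traits, as in the study's design):

```python
import numpy as np

# Standardise the trait matrix and project accessions onto the leading
# principal components. The data here are random stand-ins, not the
# Fragaria measurements.

rng = np.random.default_rng(6)
traits = rng.standard_normal((19, 14))            # accessions x traits

Z = (traits - traits.mean(axis=0)) / traits.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:2].T                             # scores on PC1 and PC2
explained = s**2 / np.sum(s**2)

print("variance explained by PC1, PC2:", np.round(explained[:2], 3))
print("accession score matrix shape:", scores.shape)
```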
Abstract:
This is the first of two articles presenting a detailed review of the historical evolution of mathematical models applied in the development of building technology, covering both conventional and intelligent buildings. After presenting the technical differences between conventional and intelligent buildings, the article reviews the existing mathematical models, their levels of abstraction, and their links to the intelligent-buildings literature. The advantages and limitations of the applied mathematical models are identified, and the models are classified in terms of their application range and goal. We then describe how the early mathematical models, mainly physical models applied to conventional buildings, have faced new challenges in the design and management of intelligent buildings, leading to the use of models that offer more flexibility to cope with various uncertainties. In contrast to the early modelling techniques, approaches based on neural networks, expert systems, fuzzy logic and genetic models provide a promising means of accommodating these complications, as intelligent buildings now need integrated technologies that involve solving complex, multi-objective and integrated decision problems.