965 results for nested multinomial logit
Abstract:
It has long been supposed that preference judgments between sets of to-be-considered possibilities are made by first winnowing the options down to the most promising-looking alternatives, which form smaller “consideration sets” (Howard, 1963; Wright & Barbour, 1977). In preference choices with >2 options, it is standard to assume that a “consideration set”, based upon some simple criterion, is established to reduce the options available. Inferential judgments, in contrast, have more frequently been investigated in situations in which only two possibilities need to be considered (e.g., which of these two cities is the larger?). Proponents of the “fast and frugal” approach to decision-making suggest that such judgments are also made on the basis of limited, simple criteria. For example, if only one of two cities is recognized and the task is to judge which city has the larger population, the recognition heuristic states that the recognized city should be selected. A multinomial processing tree model is outlined that provides the basis for estimating the extent to which recognition is used as a criterion in establishing a consideration set for inferential judgments between three possible options.
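As an illustration of how such a model can be estimated, here is a minimal sketch in Python, assuming a deliberately simplified one-parameter tree (our construction, not the paper's): with probability r the recognized option forms a singleton consideration set and is chosen outright; otherwise the chooser guesses uniformly among the three options.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# One-parameter processing tree (illustrative): with probability r the
# recognized option enters a singleton consideration set and is chosen;
# with probability 1 - r the chooser guesses among all three options.
def choice_probs(r):
    p_recognized = r + (1 - r) / 3      # recognized option
    p_other = (1 - r) / 3               # each unrecognized option
    return np.array([p_recognized, p_other, p_other])

def neg_log_lik(r, counts):
    return -np.sum(counts * np.log(choice_probs(r)))

counts = np.array([70, 18, 12])         # made-up choice frequencies
fit = minimize_scalar(neg_log_lik, args=(counts,),
                      bounds=(1e-6, 1 - 1e-6), method="bounded")
print(f"estimated reliance on recognition: r = {fit.x:.3f}")
```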
Abstract:
Attribute non-attendance in choice experiments affects WTP estimates and therefore the validity of the method. A recent strand of literature uses attenuated estimates of marginal utilities of ignored attributes. Following this approach, we propose a generalisation of the mixed logit model whereby the distribution of marginal utility coefficients of a stated non-attender has a potentially lower mean and lower variance than those of a stated attender. Model comparison shows that our shrinkage approach fits the data better and produces more reliable WTP estimates. We further find that while reliability of stated attribute non-attendance increases in successive choice experiments, it does not increase when respondents report having ignored the same attribute twice.
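The shrinkage idea translates directly into code. A minimal sketch in Python, with parameter names and values of our own choosing rather than the paper's estimates: a stated non-attender's coefficient distribution has its mean and standard deviation scaled towards zero, and choice probabilities are simulated by averaging over coefficient draws.

```python
import numpy as np

rng = np.random.default_rng(0)

# Attenders' marginal-utility distribution and shrinkage factors for
# stated non-attenders (all values illustrative, not estimates).
mu, sigma = -1.2, 0.6
phi_mu, phi_sigma = 0.3, 0.5   # scale mean and s.d. towards zero

def draw_coeffs(stated_na, n_draws=1000):
    """Coefficient draws for one respondent under the shrinkage model."""
    m = mu * (phi_mu if stated_na else 1.0)
    s = sigma * (phi_sigma if stated_na else 1.0)
    return rng.normal(m, s, n_draws)

def choice_prob(dx, stated_na):
    """Simulated probability of choosing the alternative that is higher
    by dx on the attribute, averaging the logit over coefficient draws."""
    b = draw_coeffs(stated_na)
    return np.mean(1.0 / (1.0 + np.exp(-b * dx)))

print(choice_prob(1.0, stated_na=False))   # attender: strong response
print(choice_prob(1.0, stated_na=True))    # non-attender: attenuated
```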
Abstract:
Acrylamide forms during cooking and processing predominantly from the reaction of free asparagine and reducing sugars in the Maillard reaction. The identification of low free asparagine and reducing sugar varieties of crops is therefore an important target. In this study, nine varieties of potato (French fry varieties Maris Piper (from two suppliers), Pentland Dell, King Edward, Daisy, and Markies; and chipping varieties Lady Claire, Lady Rosetta, Saturna, and Hermes) grown in the United Kingdom in 2009 were analyzed at monthly intervals through storage from November 2009 to July 2010. Acrylamide formation was measured in heated flour and chips fried in oil. Analysis of variance revealed significant interactions between varieties nested within type (French fry and chipping) and storage time for most free amino acids, glucose, fructose, and acrylamide formation. Acrylamide formed in chips correlated significantly with acrylamide formed in flour and with chip color. There were significant correlations between glucose or total reducing sugar concentration and acrylamide formation in both variety types, but with fructose the correlation was much stronger for chipping than for French fry varieties. Conversely, there were significant correlations with acrylamide formation for both total free amino acid and free asparagine concentration in the French fry but not chipping varieties. The study showed the potential of variety selection for preventing unacceptable levels of acrylamide formation in potato products and the variety-dependent effect of long-term storage on acrylamide risk. It also highlighted the complex relationship between precursor concentration and acrylamide risk in potatoes.
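For readers wanting to reproduce this kind of nested analysis of variance, a hedged sketch in Python with statsmodels; the file name and column names (vtype, variety, month, acrylamide) are assumptions about the data layout, not the study's actual files.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed layout: one row per analysed sample, with variety type
# (French fry vs. chipping), variety, storage month, and acrylamide level.
df = pd.read_csv("potato_storage.csv")

# Variety is nested within type, so it only appears inside C(vtype):...
model = smf.ols(
    "acrylamide ~ C(vtype) + C(vtype):C(variety) + C(month) "
    "+ C(vtype):C(variety):C(month)",
    data=df,
).fit()
print(sm.stats.anova_lm(model, typ=2))
```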
Abstract:
This paper investigates a puzzling feature of social conventions: the fact that they are both arbitrary and normative. We examine how this tension is addressed in sociological accounts of conventional phenomena. Traditional approaches tend to generate either synchronic accounts that fail to consider the arbitrariness of conventions, or diachronic accounts that miss central aspects of their normativity. As a remedy, we propose a processual conception that considers conventions as both the outcome and material cause of much human activity. This conceptualization, which borrows from the économie des conventions as well as critical realism, provides a novel perspective on how conventions are nested and defined, and on how they are established, maintained and challenged.
Abstract:
Wine production is largely governed by atmospheric conditions, such as air temperature and precipitation, together with soil management and viticultural/enological practices. Therefore, anthropogenic climate change is likely to have important impacts on the winemaking sector worldwide. An important winemaking region is the Portuguese Douro Valley, which is known for its world-famous Port Wine. The identification of robust relationships between atmospheric factors and wine parameters is of great relevance for the region. A multivariate linear regression analysis of a long wine production series (1932–2010) reveals that high rainfall and cool temperatures during budburst, shoot and inflorescence development (February–March) and warm temperatures during flowering and berry development (May) are generally favourable to high production. The probabilities of occurrence of three production categories (low, normal and high) are also modelled using multinomial logistic regression. Results show that both statistical models are valuable tools for predicting the production in a given year with a lead time of 3–4 months prior to harvest. These statistical models are applied to an ensemble of 16 regional climate model experiments following the SRES A1B scenario to estimate possible future changes. Wine production is projected to increase by about 10 % by the end of the 21st century, while the occurrence of high production years is expected to increase from 25 % to over 60 %. Nevertheless, further model development will be needed to include other aspects that may shape production in the future. In particular, the rising heat stress and/or changes in ripening conditions could limit the projected production increase in future decades.
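A minimal sketch of the multinomial logistic step in Python with statsmodels, assuming a data file and predictor names of our own invention (February–March rainfall and temperature, May temperature) standing in for the paper's covariates.

```python
import pandas as pd
import statsmodels.api as sm

# Assumed layout: one row per vintage year with our invented predictor
# names; category is coded 0 = low, 1 = normal, 2 = high production.
df = pd.read_csv("douro_production.csv")

X = sm.add_constant(df[["feb_mar_rain", "feb_mar_temp", "may_temp"]])
fit = sm.MNLogit(df["category"], X).fit()
print(fit.summary())
print(fit.predict(X).head())   # per-year probabilities of the 3 categories
```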
Abstract:
With the exceptions of the bifidobacteria, propionibacteria and coriobacteria, the Actinobacteria associated with the human gastrointestinal tract have received little attention. This has been due to the seeming absence of these bacteria from most clone libraries. In addition, many of these bacteria have fastidious growth and atmospheric requirements. A recent cultivation-based study has shown that the Actinobacteria of the human gut may be more diverse than previously thought. The aim of this study was to develop a denaturing gradient gel electrophoresis (DGGE) approach for characterizing Actinobacteria present in faecal samples. The amount of DNA that had to be added to the Actinobacteria-specific PCR to generate strong PCR products of equal intensity from faecal samples of five infants, nine adults and eight elderly adults was anti-correlated with counts of bacteria obtained using the fluorescence in situ hybridization probe HGC69A. A nested PCR using Actinobacteria-specific and universal PCR-DGGE primers was used to generate profiles for the Actinobacteria. Cloning of sequences from the DGGE bands confirmed the specificity of the Actinobacteria-specific primers. In addition to members of the genus Bifidobacterium, species belonging to the genera Propionibacterium, Microbacterium, Brevibacterium, Actinomyces and Corynebacterium were found to be part of the faecal microbiota of healthy humans.
Abstract:
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
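One standard regression form of an encompassing test under the quadratic (Brier) scoring rule can be sketched as follows; this is a generic construction, not necessarily the paper's statistic. Writing the combined forecast as p_c = (1 - lambda) p1 + lambda p2, forecast 1 encompasses forecast 2 when lambda = 0, and minimizing the Brier score of the combination reduces to OLS of (y - p1) on (p2 - p1).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Stand-in probability forecasts and outcomes (generated so that p1 is
# the true probability, i.e. p1 should encompass p2 here).
n = 200
p1 = rng.uniform(0.05, 0.95, n)
p2 = np.clip(p1 + rng.normal(0, 0.1, n), 0.01, 0.99)
y = rng.binomial(1, p1)

# Brier-score encompassing regression: (y - p1) on (p2 - p1); the slope
# is the weight lambda on the rival forecast, and HAC standard errors
# give a t-test of lambda = 0 (forecast 1 encompasses forecast 2).
res = sm.OLS(y - p1, p2 - p1).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(f"lambda = {res.params[0]:.3f}, t = {res.tvalues[0]:.2f}")
```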
Abstract:
Future climate change projections are often derived from ensembles of simulations from multiple global circulation models using heuristic weighting schemes. This study provides a more rigorous justification for this by introducing a nested family of three simple analysis of variance frameworks. Statistical frameworks are essential in order to quantify the uncertainty associated with the estimate of the mean climate change response. The most general framework yields the “one model, one vote” weighting scheme often used in climate projection. However, a simpler additive framework is found to be preferable when the climate change response is not strongly model dependent. In such situations, the weighted multimodel mean may be interpreted as an estimate of the actual climate response, even in the presence of shared model biases. Statistical significance tests are derived to choose the most appropriate framework for specific multimodel ensemble data. The framework assumptions are explicit and can be checked using simple tests and graphical techniques. The frameworks can be used to test for evidence of nonzero climate response and to construct confidence intervals for the size of the response. The methodology is illustrated by application to North Atlantic storm track data from the Coupled Model Intercomparison Project phase 5 (CMIP5) multimodel ensemble. Despite large variations in the historical storm tracks, the cyclone frequency climate change response is not found to be model dependent over most of the region. This gives high confidence in the response estimates. Statistically significant decreases in cyclone frequency are found on the flanks of the North Atlantic storm track and in the Mediterranean basin.
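The key diagnostic, testing whether the climate response is model dependent, amounts to a two-way analysis of variance with a model:period interaction. A hedged Python sketch follows, with an assumed data layout (columns model, period, run, cyclones) rather than the actual CMIP5 processing.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed layout: one row per model run, with columns model, period
# (historical/future), run, and cyclones (track-point count in a region).
df = pd.read_csv("storm_track_counts.csv")

# Full framework: a significant model:period interaction means the
# climate change response is model dependent.
full = smf.ols("cyclones ~ C(model) * C(period)", data=df).fit()
print(sm.stats.anova_lm(full, typ=2))

# If the interaction is negligible, the simpler additive framework holds:
# the period contrast is the pooled (equally weighted) response estimate,
# valid even in the presence of shared model biases.
additive = smf.ols("cyclones ~ C(model) + C(period)", data=df).fit()
print(additive.summary())
```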
Abstract:
BACKGROUND: Genetic polymorphisms of transcription factor 7-like 2 (TCF7L2) have been associated with type 2 diabetes and BMI. OBJECTIVE: The objective was to investigate whether TCF7L2 HapA is associated with weight development and whether such an association is modulated by protein intake or by the glycemic index (GI). DESIGN: The investigation was based on prospective data from 5 cohort studies nested within the European Prospective Investigation into Cancer and Nutrition. Weight change was followed up for a mean (±SD) of 6.8 ± 2.5 y. TCF7L2 rs7903146 and rs10885406 were successfully genotyped in 11,069 individuals and used to derive HapA. Multiple logistic and linear regression analysis was applied to test for the main effect of HapA and its interaction with dietary protein or GI. Analyses from the cohorts were combined by random-effects meta-analysis. RESULTS: HapA was associated neither with baseline BMI (0.03 ± 0.07 BMI units per allele; P = 0.6) nor with annual weight change (8.8 ± 11.7 g/y per allele; P = 0.5). However, a previously shown positive association between intake of protein, particularly of animal origin, and subsequent weight change in this population proved to be attenuated by TCF7L2 HapA (P-interaction = 0.01). We showed that weight gain becomes independent of protein intake with an increasing number of HapA alleles. Substitution of protein with either fat or carbohydrates showed the same effects. No interaction with GI was observed. CONCLUSION: TCF7L2 HapA attenuates the positive association between animal protein intake and long-term body weight change in middle-aged Europeans but does not interact with the GI of the diet.
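The pooling step can be illustrated with a minimal DerSimonian-Laird random-effects calculation in Python; the per-cohort estimates and standard errors below are placeholders, not the EPIC results.

```python
import numpy as np

# Placeholder per-cohort estimates (e.g., the HapA x protein interaction
# coefficient) and their standard errors, one entry per cohort.
beta = np.array([0.04, 0.01, 0.06, 0.03, 0.02])
se = np.array([0.02, 0.03, 0.025, 0.02, 0.04])

w = 1 / se**2                                # fixed-effect weights
mu_fe = np.sum(w * beta) / np.sum(w)
q = np.sum(w * (beta - mu_fe) ** 2)          # Cochran's Q heterogeneity
k = len(beta)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (se**2 + tau2)                    # random-effects weights
mu_re = np.sum(w_re * beta) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"pooled beta = {mu_re:.3f} +/- {1.96 * se_re:.3f} (tau^2 = {tau2:.4f})")
```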
Abstract:
The Tropical Rainfall Measuring Mission 3B42 precipitation estimates are widely used in tropical regions for hydrometeorological research. Recently, version 7 of the product was released. Major revisions to the algorithm involve the radar reflectivity–rainfall rate relationship, surface clutter detection over high terrain, a new reference database for the passive microwave algorithm, and a higher quality gauge analysis product for monthly bias correction. To assess the impacts of the improved algorithm, we compare the version 7 and the older version 6 products with data from 263 rain gauges in and around the northern Peruvian Andes. The region covers humid tropical rainforest, tropical mountains, and arid to humid coastal plains. We find that the version 7 product has a significantly lower bias and an improved representation of the rainfall distribution. We further evaluated the performance of the version 6 and 7 products as forcing data for hydrological modelling, by comparing the simulated and observed daily streamflow in 9 nested Amazon river basins. We find that the improvement in the precipitation estimation algorithm translates to an increase in the model Nash-Sutcliffe efficiency, and a reduction in the percent bias between the observed and simulated flows by 30 to 95%.
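The two verification measures used here have standard closed forms. A short Python sketch, with `obs` and `sim` standing for observed and simulated daily streamflow in one basin:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE: 1 is a perfect fit; 0 means no better than the mean of obs."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """PBIAS: positive values indicate overestimation of total flow."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

obs = np.array([10.0, 12.0, 30.0, 22.0, 15.0])   # toy values
sim = np.array([11.0, 13.0, 26.0, 20.0, 16.0])
print(nash_sutcliffe(obs, sim), percent_bias(obs, sim))
```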
Abstract:
A set of high-resolution radar observations of convective storms has been collected to evaluate such storms in the UK Met Office Unified Model during the DYMECS project (Dynamical and Microphysical Evolution of Convective Storms). The 3-GHz Chilbolton Advanced Meteorological Radar was set up with a scan-scheduling algorithm to automatically track convective storms identified in real time from the operational rainfall radar network. More than 1,000 storm observations gathered over fifteen days in 2011 and 2012 are used to evaluate the model under various synoptic conditions supporting convection. In terms of the detailed three-dimensional morphology, storms in the 1500-m grid-length simulations are shown to produce horizontal structures a factor of 1.5–2 wider than the radar observations. A set of nested model runs at grid lengths down to 100 m shows that the models converge in terms of storm width, but the storm structures in the simulations with the smallest grid lengths are too narrow and too intense compared with the radar observations. The modelled storms were surrounded by a region of drizzle without ice reflectivities above 0 dBZ aloft, which was related to the dominance of ice crystals and was improved by allowing only aggregates as an ice particle habit. Simulations with graupel outperformed the standard configuration for heavy-rain profiles, but the storm structures were a factor of 2 too wide and the convective cores 2 km too deep.
Abstract:
Dynamical downscaling is frequently used to investigate the dynamical variables of extra-tropical cyclones, for example precipitation, using very high-resolution models nested within coarser-resolution models to understand the processes that lead to intense precipitation. It is also used in climate change studies, using long time series to investigate trends in precipitation, or to look at the small-scale dynamical processes for specific case studies. This study investigates some of the problems associated with dynamical downscaling and looks at the optimum configuration to obtain a distribution and intensity of the precipitation field that match observations. This study uses the Met Office Unified Model run in limited-area mode with grid spacings of 12, 4 and 1.5 km, driven by boundary conditions provided by the ECMWF Operational Analysis, to produce high-resolution simulations of the summer 2007 UK flooding events. The numerical weather prediction model is initiated at varying times before the peak precipitation is observed to test the importance of the initialisation and boundary conditions, and how long the simulation can be run for. The results are compared with rain gauge data as verification and show that the model intensities are most similar to observations when the model is initialised 12 hours before the peak precipitation is observed. It was also shown that using non-gridded datasets makes verification more difficult, with the density of observations also affecting the intensities observed. It is concluded that the simulations are able to produce realistic precipitation intensities when driven by the coarser-resolution data.
Abstract:
Background: Stable-isotope ratios of carbon (13C/12C, expressed as δ13C) and nitrogen (15N/14N, or δ15N) have been proposed as potential nutritional biomarkers to distinguish between meat, fish, and plant-based foods. Objective: The objective was to investigate dietary correlates of δ13C and δ15N and examine the association of these biomarkers with incident type 2 diabetes in a prospective study. Design: Serum δ13C and δ15N (‰) were measured by using isotope ratio mass spectrometry in a case-cohort study (n = 476 diabetes cases; n = 718 subcohort) nested within the European Prospective Investigation into Cancer and Nutrition (EPIC)–Norfolk population-based cohort. We examined dietary (food-frequency questionnaire) correlates of δ13C and δ15N in the subcohort. HRs and 95% CIs were estimated by using Prentice-weighted Cox regression. Results: Mean (±SD) δ13C and δ15N were −22.8 ± 0.4‰ and 10.2 ± 0.4‰, respectively, and δ13C (r = 0.22) and δ15N (r = 0.20) were positively correlated (P < 0.001) with fish protein intake. Animal protein was not correlated with δ13C but was significantly correlated with δ15N (dairy protein: r = 0.11; meat protein: r = 0.09; terrestrial animal protein: r = 0.12, P ≤ 0.013). δ13C was inversely associated with diabetes in adjusted analyses (HR per tertile: 0.74; 95% CI: 0.65, 0.83; P-trend < 0.001), whereas δ15N was positively associated (HR: 1.23; 95% CI: 1.09, 1.38; P-trend = 0.001). Conclusions: The isotope ratios δ13C and δ15N may both serve as potential biomarkers of fish protein intake, whereas only δ15N may reflect broader animal-source protein intake in a European population. The inverse association of δ13C but a positive association of δ15N with incident diabetes should be interpreted in the light of knowledge of dietary intake and may assist in identifying dietary components that are associated with health risks and benefits.
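A rough sketch of a weighted Cox fit in Python with lifelines follows; note that the exact Prentice case-cohort weighting (how cases outside the subcohort enter the risk sets) is subtler than a single weight column, so this is schematic, and the file and column names are assumptions.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Assumed layout: follow-up time, diabetes indicator, tertile-coded
# biomarkers, and a case-cohort weight column prepared beforehand.
df = pd.read_csv("epic_casecohort.csv")

cols = ["time", "diabetes", "d13C_tertile", "d15N_tertile", "weight"]
cph = CoxPHFitter()
cph.fit(df[cols], duration_col="time", event_col="diabetes",
        weights_col="weight", robust=True)  # robust SEs with weights
cph.print_summary()
```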
Abstract:
The ability of the HiGEM climate model to represent high-impact, regional, precipitation events is investigated in two ways. The first focusses on a case study of extreme regional accumulation of precipitation during the passage of a summer extra-tropical cyclone across southern England on 20 July 2007 that resulted in a national flooding emergency. The climate model is compared with a global Numerical Weather Prediction (NWP) model and higher resolution, nested limited area models. While the climate model does not simulate the timing and location of the cyclone and associated precipitation as accurately as the NWP simulations, the total accumulated precipitation in all models is similar to the rain gauge estimate across England and Wales. The regional accumulation over the event is insensitive to horizontal resolution for grid spacings ranging from 90 km to 4 km. Secondly, the free-running climate model reproduces the statistical distribution of daily precipitation accumulations observed in the England-Wales precipitation record. The model distribution diverges increasingly from the record for longer accumulation periods with a consistent under-representation of more intense multi-day accumulations. This may indicate a lack of low-frequency variability associated with weather regime persistence. Despite this, the overall seasonal and annual precipitation totals from the model are still comparable to those from ERA-Interim.
Abstract:
Flash floods pose a significant danger for life and property. Unfortunately, in arid and semiarid environments runoff generation shows a complex non-linear behavior with strong spatial and temporal non-uniformity. As a result, the predictions made by physically-based simulations in semiarid areas are subject to great uncertainty, and a failure in the predictive behavior of existing models is common. Thus better descriptions of physical processes at the watershed scale need to be incorporated into the hydrological model structures. For example, terrain relief has been systematically considered static in flood modelling at the watershed scale. Here, we show that small distributed relief variations arising from concurrent hydrological processes within a storm event had a significant integrated effect on the watershed-scale hydrograph. We model these observations by introducing dynamic formulations of two relief-related parameters at diverse scales: maximum depression storage, and roughness coefficient in channels. In the final (a posteriori) model structure these parameters are allowed to be either time-constant or time-varying. The case under study is a convective storm in a semiarid Mediterranean watershed with ephemeral channels and high agricultural pressures (the Rambla del Albujón watershed; 556 km²), which showed a complex multi-peak response. First, to obtain quasi-sensible simulations in the (a priori) model with time-constant relief-related parameters, a spatially distributed parameterization was strictly required. Second, a generalized likelihood uncertainty estimation (GLUE) inference applied to the improved model structure, and conditioned to observed nested hydrographs, showed that accounting for dynamic relief-related parameters led to improved simulations. The discussion is finally broadened by considering the use of the calibrated model both to analyze the sensitivity of the watershed to storm motion and to attempt the flood forecasting of a stratiform event with highly different behavior.
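The GLUE step can be illustrated generically: sample parameter sets, score each simulation with a likelihood measure (here Nash-Sutcliffe efficiency), retain the "behavioural" sets above a threshold, and derive prediction bounds. A self-contained Python sketch with a toy placeholder model, not the paper's distributed model:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(params):
    """Toy placeholder for the rainfall-runoff model."""
    storage, roughness = params
    t = np.arange(100)
    return np.exp(-t / (10 * roughness)) * 50 / storage

# Synthetic "observations" from known parameters plus noise.
observed = simulate((1.2, 0.8)) + rng.normal(0, 0.5, 100)

def nse(obs, sim):
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Monte Carlo sampling of the two relief-related parameters.
samples = rng.uniform([0.5, 0.2], [3.0, 2.0], size=(5000, 2))
likelihoods = np.array([nse(observed, simulate(p)) for p in samples])

behavioural = likelihoods > 0.7          # GLUE behavioural threshold
sims = np.array([simulate(p) for p in samples[behavioural]])
lower, upper = np.percentile(sims, [5, 95], axis=0)  # prediction bounds
print(f"{behavioural.sum()} behavioural parameter sets retained")
```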