842 results for Discrete Choice Model
Abstract:
Taste acuity for the bitter taste of 6-n-propylthiouracil (PROP) is a heritable trait. Some individuals perceive concentrated levels of PROP to taste extremely bitter (supertasters) or moderately bitter (medium tasters), whereas others detect only a mild taste or none at all (non-tasters). Heightened PROP acuity has been reported to be associated with greater acuity for a variety of compounds found in ordinary foods, although there are some inconsistent findings. The extent to which these compounds are perceived may affect food likes/dislikes and dietary intake. The majority of studies have tended to measure food likes and intake using questionnaires or laboratory preparations of a single taste quality. The present study used food diaries and sensory responses to real foods so as to generalise better to real eating situations. There was no substantial evidence that genetically mediated taste acuity for PROP had a direct influence on food likes/dislikes or intake, although there was evidence that dietary restraint could have influenced these findings among the female samples. However, investigation of PROP tasting among individuals with coronary heart disease (CHD) and a control group suggested that PROP acuity could function as a genetic taste marker for heart disease and potentially other diet-related conditions. CHD was associated with decreased PROP acuity among men. This is consistent with the findings that decreased PROP acuity tended to be associated with an increased likelihood of being a smoker and a higher body mass index. It is concluded that there is not a simple and direct relationship between PROP tasting ability and food choice. An interaction between PROP acuity and other mediating factors may be involved in a more complex model of food choice. The evidence that PROP taste acuity may function as a genetic taste marker for coronary heart disease could have wide implications for understanding the aetiology, and ultimately the prevention, of diet-related disease.
Abstract:
This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation was the lack of viable approaches for analysing High Throughput Screening datasets, which may comprise thousands of data points of high dimensionality. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the ability to test the activities of compounds has increased considerably in recent years. Traditional methods, which examine tables and graphical plots to analyse relationships between measured activities and the structure of compounds, are not feasible for a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially those with high dimensions. So far, a few visualisation techniques for drug design have been developed, but most of them cope with only a few properties of compounds at a time. We believe that a latent variable model with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model (LTM) can deal with either continuous or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can infer the distribution of the data from magnification factor and curvature plots. Rather than obtaining useful information from just a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive, top-down fashion: the user identifies interesting regions on a visualisation plot that they would like to model in greater detail, and at each stage of hierarchical LTM construction the EM algorithm alternates between the E- and M-steps. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters, making it very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which helps the user decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
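Since the abstract turns on the E-step/M-step alternation used to fit each model in the hierarchy, a minimal sketch may help. The snippet below fits a single GTM-style latent trait model for the continuous (Gaussian-noise) case; the grid sizes, basis functions, and update details are illustrative assumptions, not the thesis's implementation, which also covers discrete noise models and the hierarchical, user-guided construction.

```python
import numpy as np

def rbf_design(latent, centres, width):
    """Phi[k, m] = exp(-(x_k - c_m)^2 / (2 * width^2)) for latent grid point k."""
    d2 = (latent[:, None] - centres[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_ltm(T, n_latent=20, n_basis=5, n_iter=50, beta=1.0, seed=0):
    """EM for one latent trait model (continuous case) on data T of shape (N, D)."""
    rng = np.random.default_rng(seed)
    N, D = T.shape
    latent = np.linspace(-1, 1, n_latent)            # latent grid points x_k
    Phi = rbf_design(latent, np.linspace(-1, 1, n_basis), 0.3)
    W = rng.normal(scale=0.1, size=(n_basis, D))     # mapping y_k = Phi_k W
    for _ in range(n_iter):
        # E-step: responsibility R[k, n] of latent point k for datum n.
        Y = Phi @ W                                  # grid images in data space
        d2 = ((Y[:, None, :] - T[None, :, :]) ** 2).sum(-1)
        logR = -0.5 * beta * d2
        R = np.exp(logR - logR.max(axis=0))
        R /= R.sum(axis=0)
        # M-step: weighted least squares for W, closed form for noise precision.
        G = np.diag(R.sum(axis=1))
        W = np.linalg.solve(Phi.T @ G @ Phi + 1e-6 * np.eye(n_basis),
                            Phi.T @ (R @ T))
        d2 = (((Phi @ W)[:, None, :] - T[None, :, :]) ** 2).sum(-1)
        beta = N * D / (R * d2).sum()
    return W, Phi, latent
```

Both steps have closed forms, which is what makes EM attractive here; in the hierarchical scheme each child model is fitted the same way on the responsibilities inherited from its parent.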
Abstract:
This thesis reports the results of DEM (Discrete Element Method) simulations of rotating drums operated in a number of different flow regimes; DEM simulations of drum granulation have also been conducted. The aim was to demonstrate that a realistic simulation is possible and to further the understanding of particle motion and granulation processes in a rotating drum. The simulation model has shown good qualitative and quantitative agreement with published experimental results. A two-dimensional bed of 5000 disc particles, with properties similar to glass, has been simulated in the rolling mode (Froude number 0.0076) with a fractional drum fill of approximately 30%. Particle velocity fields in the cascading layer, in the bed cross-section, and at the drum wall show good agreement with experimental PEPT data. Particle avalanches in the cascading layer have been shown to be consistent with single layers of particles cascading down the free surface towards the drum wall. Particle slip at the drum wall depends on angular position, ranging from 20% at the toe and shoulder to less than 1% at the mid-point. Three-dimensional DEM simulations of a moderately cascading bed of 50,000 spherical elastic particles (Froude number 0.83) with a fractional fill of approximately 30% have also been performed. The drum axis was inclined at 5° to the horizontal, with periodic boundaries at the ends of the drum. The mean period of bed circulation was found to be 0.28 s. A liquid binder was added to the system using a spray model based on the concept of a wet surface energy. Granule formation and breakage processes have been demonstrated in the system.
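For orientation, drum flow regimes such as rolling and cascading are conventionally indexed by the rotational Froude number, Fr = ω²R/g. A minimal sketch follows; the drum radius and speed are hypothetical values chosen only to reproduce the rolling-mode figure quoted above.

```python
# Illustrative check of the drum flow regimes quoted above, using the usual
# rotating-drum Froude number Fr = omega^2 * R / g; the radius and speed
# below are hypothetical, chosen only to reproduce the rolling-mode value.

def froude(omega: float, radius: float, g: float = 9.81) -> float:
    """Froude number for a drum of radius `radius` (m) at `omega` (rad/s)."""
    return omega ** 2 * radius / g

print(f"Fr = {froude(omega=0.61, radius=0.2):.4f}")  # ~0.0076 -> rolling mode
```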
Abstract:
Manufacturing planning and control systems are fundamental to the successful operation of a manufacturing organisation. In order to improve their business performance, companies invest significantly in planning and control systems; however, not all companies realise the benefits sought. Many continue to suffer from high levels of inventory, shortages, obsolete parts, poor resource utilisation, and poor delivery performance. This thesis argues that the fit between the planning and control system and the manufacturing organisation is a crucial element of success; the design of appropriate control systems is, therefore, important. The different approaches to the design of manufacturing planning and control systems are investigated. It is concluded that there is no provision within these design methodologies to properly assess the impact of a proposed design on the manufacturing facility. Consequently, an understanding of how a new (or modified) planning and control system will perform in the context of the complete manufacturing system is unlikely to be gained until after the system has been implemented and is running. Many modelling techniques are available, but discrete-event simulation is unique in its ability to model the complex dynamics inherent in manufacturing systems, of which the planning and control system is an integral component. The existing application of simulation to manufacturing control system issues is limited: although operational issues are addressed, application to the more fundamental design of control systems is rarely, if ever, considered. The lack of a suitable simulation-based modelling tool does not help matters. The requirements of a simulation tool capable of modelling a host of different planning and control systems are presented, and it is argued that only through the application of object-oriented principles can these extensive requirements be met. This thesis reports on the development of an extensible class library called WBS/Control, which is based on object-oriented principles and discrete-event simulation. The functionality, both current and future, offered by WBS/Control means that different planning and control systems can be modelled: not only the more standard implementations but also hybrid systems and new designs. The flexibility implicit in the development of WBS/Control supports its application to both design and operational issues. WBS/Control integrates fully with an existing manufacturing simulator to provide a more complete modelling environment.
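Since WBS/Control rests on discrete-event simulation, a minimal event-loop sketch may clarify the underlying technique. The class and method names below are hypothetical and are not taken from the WBS/Control library.

```python
import heapq

# A minimal discrete-event simulation kernel of the kind a tool like
# WBS/Control builds on; illustrative only, names are hypothetical.

class Simulator:
    def __init__(self):
        self.clock = 0.0
        self._queue = []   # (time, seq, action) kept in time order
        self._seq = 0      # tie-breaker so equal-time events stay ordered

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.clock + delay, self._seq, action))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.clock, _, action = heapq.heappop(self._queue)
            action()

# Example: a control policy that releases a works order every 8 time units.
sim = Simulator()
def release_order():
    print(f"t={sim.clock:4.1f}: release works order")
    sim.schedule(8.0, release_order)

sim.schedule(0.0, release_order)
sim.run(until=24.0)
```

In an object-oriented library, planning and control policies become subclasses that schedule and react to events like the one above, which is what allows standard, hybrid, and novel control systems to coexist in one framework.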
Abstract:
In Statnote 9, we described a one-way analysis of variance (ANOVA) 'random effects' model in which the objective was to estimate the degree of variation of a particular measurement and to compare different sources of variation in space and time. The illustrative scenario involved the role of computer keyboards in a university communal computer laboratory as a possible source of microbial contamination of the hands. The study estimated the aerobic colony count of ten selected keyboards, with samples taken from two keys per keyboard at 9am and 5pm. This type of design is often referred to as a 'nested' or 'hierarchical' design, and the ANOVA estimated the degree of variation: (1) between keyboards, (2) between keys within a keyboard, and (3) between sample times within a key. An alternative to this design is a 'fixed effects' model, in which the objective is not to measure sources of variation per se but to estimate differences between specific groups or treatments, which are regarded as 'fixed' or discrete effects. This Statnote describes two scenarios utilising this type of analysis: (1) measuring the degree of bacterial contamination on 2p coins collected from three types of business property, viz. a butcher's shop, a sandwich shop, and a newsagent, and (2) the effectiveness of drugs in the treatment of a fungal eye infection.
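A minimal sketch of the first scenario's fixed-effects analysis, with invented colony counts (the Statnote's actual data are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical illustration of the 'fixed effects' one-way ANOVA described
# above: aerobic colony counts from 2p coins collected at three types of
# premises. The numbers are invented for demonstration only.

butcher   = np.array([120, 98, 143, 110, 135])
sandwich  = np.array([85, 92, 77, 101, 88])
newsagent = np.array([60, 72, 55, 69, 64])

f_stat, p_value = stats.f_oneway(butcher, sandwich, newsagent)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates that mean contamination differs between the
# three fixed groups (premises types), the question this design asks.
```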
Abstract:
The traditional method of classifying neurodegenerative diseases is based on the original clinico-pathological concept, supported by 'consensus' criteria and data from molecular pathological studies. This review discusses, first, current problems in classification resulting from the coexistence of different classificatory schemes, the presence of disease heterogeneity and multiple pathologies, the use of 'signature' brain lesions in diagnosis, and the existence of pathological processes common to different diseases. Second, three models of neurodegenerative disease are proposed: (1) that distinct diseases exist (the 'discrete' model); (2) that relatively distinct diseases exist but exhibit overlapping features (the 'overlap' model); and (3) that distinct diseases do not exist and neurodegenerative disease is a 'continuum' in which clinical/pathological features vary continuously from one case to another (the 'continuum' model). Third, to distinguish between these models, the distribution of the most important molecular 'signature' lesions across the different diseases is reviewed. Such lesions often have poor 'fidelity', i.e., they are not unique to individual disorders but are distributed across many diseases, consistent with the overlap or continuum models. Fourth, the question of whether the current classificatory system should be rejected is considered, and three alternatives are proposed, viz., objective classification, classification for convenience (a 'dissection'), or analysis as a continuum.
Abstract:
Contrast sensitivity improves with the area of a sine-wave grating, but why? Here we assess this phenomenon against contemporary models involving spatial summation, probability summation, uncertainty, and stochastic noise. Using a two-interval forced-choice procedure we measured contrast sensitivity for circular patches of sine-wave gratings with various diameters that were blocked or interleaved across trials to produce low and high extrinsic uncertainty, respectively. Summation curves were steep initially, becoming shallower thereafter. For the smaller stimuli, sensitivity was slightly worse for the interleaved design than for the blocked design. Neither area nor blocking affected the slope of the psychometric function. We derived model predictions for noisy mechanisms and extrinsic uncertainty that was either low or high. The contrast transducer was either linear (c^1.0) or nonlinear (c^2.0), and pooling was either linear or a MAX operation. There was either no intrinsic uncertainty, or it was fixed or proportional to stimulus size. Of these 10 canonical models, only the nonlinear transducer with linear pooling (the noisy energy model) described the main forms of the data for both experimental designs. We also show how a cross-correlator can be modified to fit our results and provide a contemporary presentation of the relation between summation and the slope of the psychometric function.
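A toy simulation of the winning model may make the summation argument concrete. The sketch below implements a noisy energy observer (a c^2.0 transducer with linear pooling and late additive noise) in a 2IFC task; all parameter values are illustrative assumptions, not the paper's fits.

```python
import numpy as np

# Noisy energy model in 2IFC: each mechanism squares its contrast input,
# responses are pooled linearly over the stimulated area, and independent
# Gaussian noise is added per mechanism. Values are illustrative only.

rng = np.random.default_rng(1)

def p_correct_2ifc(contrast, n_mech, sigma=1.0, n_trials=20_000):
    """Proportion correct when `n_mech` mechanisms cover the grating patch."""
    # Pooled signal grows like N * c^2; pooled noise grows like sqrt(N).
    signal = n_mech * contrast ** 2 + rng.normal(0, sigma, n_trials) * np.sqrt(n_mech)
    blank  = rng.normal(0, sigma, n_trials) * np.sqrt(n_mech)
    return np.mean(signal > blank)

# Doubling the stimulus area (n_mech) improves performance at fixed contrast,
# the signature of spatial summation this model is meant to capture.
for n in (1, 2, 4, 8):
    print(n, p_correct_2ifc(contrast=0.8, n_mech=n))
```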
Abstract:
In this paper, we discuss how discriminative training can be applied to the hidden vector state (HVS) model in different task domains. The HVS model is a discrete hidden Markov model (HMM) in which each HMM state represents the state of a push-down automaton with a finite stack size. In previous applications, maximum-likelihood estimation (MLE) has been used to derive the parameters of the HVS model. However, MLE makes a number of assumptions, some of which unfortunately do not hold. Discriminative training, which does not make such assumptions, can improve the performance of the HVS model by discriminating the correct hypothesis from the competing hypotheses. Experiments have been conducted in two domains: the travel domain, for the semantic parsing task using the DARPA Communicator data and the Air Travel Information Services (ATIS) data, and the bioinformatics domain, for the information extraction task using the GENIA corpus. The results demonstrate modest improvements in the performance of the HVS model using discriminative training. In the travel domain, discriminative training of the HVS model gives a relative error reduction of 31 percent in F-measure compared with MLE on the DARPA Communicator data, and of 9 percent on the ATIS data. In the bioinformatics domain, a relative error reduction of 4 percent in F-measure is achieved on the GENIA corpus.
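For clarity, the quoted figures are relative reductions in error (1 − F), not absolute F-measure gains. A small sketch, with hypothetical F-measure values chosen only so the arithmetic lands near the 31 percent figure:

```python
# Relative error reduction as used above; the F-measure values here are
# hypothetical, chosen only to land near the reported 31% figure.

def relative_error_reduction(f_baseline: float, f_improved: float) -> float:
    """Reduction in error (1 - F) relative to the baseline error."""
    e_base, e_new = 1.0 - f_baseline, 1.0 - f_improved
    return (e_base - e_new) / e_base

print(f"{relative_error_reduction(0.871, 0.911):.1%}")  # ~31% on these values
```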
Abstract:
We discuss the aggregation of data from neuropsychological patients and the process of evaluating models using data from a series of patients. We argue that aggregation can be misleading, but that not aggregating can also result in information loss. The basis for combining data needs to be theoretically defined, and the particular method of aggregation depends on the theoretical question and the characteristics of the data. We present examples, often drawn from our own research, to illustrate these points. We also argue that statistical models and formal methods of model selection are a useful way to test theoretical accounts using data from several patients in multiple-case studies or case series. Statistical models can often measure fit in a way that explicitly captures what a theory allows; the parameter values that result from model fitting often measure theoretically important dimensions and can lead to more constrained theories or new predictions; and model selection allows the strength of evidence for models to be quantified without forcing it into the artificial binary choice that characterizes hypothesis-testing methods. Methods that aggregate and then formally model patient data, however, are not automatically preferred to other methods. Which method is preferred depends on the question to be addressed, the characteristics of the data, and practical issues such as the availability of suitable patients, but case series, multiple-case studies, single-case studies, statistical models, and process models should be complementary methods when guided by theory development.
Abstract:
In this paper we investigate whether accounting for store-level heterogeneity in marketing mix effects improves the accuracy of the marketing mix elasticities, the fit, and the forecasting accuracy of the widely applied SCAN*PRO model of store sales. Models with continuous and discrete representations of heterogeneity, estimated using hierarchical Bayes (HB) and finite mixture (FM) techniques respectively, are empirically compared with the original model, which does not account for store-level heterogeneity in marketing mix effects and is estimated using ordinary least squares (OLS). The empirical comparisons are conducted in two contexts: Dutch store-level scanner data for the shampoo product category, and an extensive simulation experiment. The simulation investigates how between- and within-segment variance in marketing mix effects, error variance, the number of weeks of data, and the number of stores affect the accuracy of marketing mix elasticities, model fit, and forecasting accuracy. Contrary to expectations, accommodating store-level heterogeneity does not improve the accuracy of marketing mix elasticities relative to the homogeneous SCAN*PRO model, suggesting that little may be lost by employing the original homogeneous model estimated using OLS. Improvements in fit and forecasting accuracy are also fairly modest. We pursue an explanation for this result, since research in other contexts has shown clear advantages to assuming some type of heterogeneity in market response models. In an Afterthought section, we comment on the controversial nature of our result, distinguishing factors inherent to household-level data and their associated models, to store-level data and models generally, and to the unique SCAN*PRO model specification.
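For readers unfamiliar with SCAN*PRO, the homogeneous benchmark is a multiplicative store-sales model that becomes linear after taking logs and can then be estimated by OLS. The sketch below simulates data and recovers a price elasticity; variable names and coefficient values are assumptions for illustration, not the paper's specification.

```python
import numpy as np
import statsmodels.api as sm

# Log-linear estimation of a homogeneous SCAN*PRO-style store-sales model:
# log sales on log price index plus promotion dummies. Simulated data; the
# "true" elasticity of -2.5 is an arbitrary choice for the demonstration.

rng = np.random.default_rng(7)
n = 200
log_price = rng.normal(0.0, 0.1, n)     # log relative price index
feature = rng.integers(0, 2, n)         # feature-ad dummy
display = rng.integers(0, 2, n)         # in-store display dummy
log_sales = 3.0 - 2.5 * log_price + 0.4 * feature + 0.3 * display \
            + rng.normal(0, 0.2, n)

X = sm.add_constant(np.column_stack([log_price, feature, display]))
fit = sm.OLS(log_sales, X).fit()
print(fit.params)  # slope on log_price estimates the price elasticity (~-2.5)
```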
Abstract:
Purpose: Our study explores the mediating role of discrete emotions in the relationships between employee perceptions of distributive and procedural injustice, regarding an annual salary raise, and counterproductive work behaviors (CWBs). Design/Methodology/Approach: Survey data were provided by 508 individuals from telecom and IT companies in Pakistan. Confirmatory factor analysis, structural equation modeling, and bootstrapping were used to test our hypothesized model. Findings: We found a good fit between the data and our tested model. As predicted, anger (and not sadness) was positively related to aggressive CWBs (abuse against others and production deviance) and fully mediated the relationship between perceived distributive injustice and these CWBs. Against predictions, however, neither sadness nor anger was significantly related to employee withdrawal. Implications: Our findings provide organizations with an insight into the emotional consequences of unfair HR policies, and the potential implications for CWBs. Such knowledge may help employers to develop training and counseling interventions that support the effective management of emotions at work. Our findings are particularly salient for national and multinational organizations in Pakistan. Originality/Value: This is one of the first studies to provide empirical support for the relationships between in/justice, discrete emotions and CWBs in a non-Western (Pakistani) context. Our study also provides new evidence for the differential effects of outward/inward emotions on aggressive/passive CWBs.
Abstract:
Data envelopment analysis (DEA), as introduced by Charnes, Cooper, and Rhodes (1978), is a linear programming technique that has been widely used to evaluate the relative efficiency of a set of homogeneous decision making units (DMUs). In many real applications, the input-output variables cannot be precisely measured. This is particularly important when assessing the efficiency of DMUs using DEA, since the efficiency scores of inefficient DMUs are very sensitive to possible data errors. Hence, several approaches have been proposed to deal with imprecise data. Perhaps the most popular fuzzy DEA model is based on the α-cut. One drawback of the α-cut approach is that it cannot include all information about uncertainty. This paper aims to introduce an alternative linear programming model that can include some uncertainty information from the intervals within the α-cut approach. We introduce the concept of a "local α-level" to develop a multi-objective linear programming model to measure the efficiency of DMUs under uncertainty. An example is given to illustrate the use of this method.
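As a point of reference, the crisp CCR model of Charnes, Cooper, and Rhodes (1978), which the fuzzy α-cut extensions generalise, can be solved as a small linear program per DMU. A minimal input-oriented envelopment-form sketch with invented data:

```python
import numpy as np
from scipy.optimize import linprog

# Crisp CCR model, input-oriented envelopment form:
#   min theta  s.t.  sum_j l_j x_j <= theta * x_0,  sum_j l_j y_j >= y_0,
#   l_j >= 0.  Data below are invented for illustration.

X = np.array([[2.0, 3.0, 6.0, 4.0],    # inputs: rows = inputs, cols = DMUs
              [5.0, 4.0, 8.0, 6.0]])
Y = np.array([[6.0, 7.0, 9.0, 5.0]])   # outputs: rows = outputs, cols = DMUs

def ccr_efficiency(X, Y, j0):
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]              # decision vars: [theta, lambda]
    A_in = np.c_[-X[:, [j0]], X]             # lambda X - theta * x0 <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]      # -lambda Y <= -y0
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # all vars >= 0 by default
    return res.x[0]

for j in range(X.shape[1]):
    print(f"DMU {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```

The fuzzy variants replace the crisp X and Y with interval data at each α-level, which is where the paper's "local α-level" multi-objective formulation departs from this baseline.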
Estimation of productivity in Korean electric power plants: a semiparametric smooth coefficient model
Abstract:
This paper analyzes the impact of load factor and of facility and generator types on the productivity of Korean electric power plants. In order to capture important differences in the effect of load policy on power output, we use a semiparametric smooth coefficient (SPSC) model that allows us to model heterogeneous performance across power plants and over time by allowing the underlying technologies to be heterogeneous. The SPSC model accommodates both continuous and discrete covariates. Various specification tests are conducted to assess the performance of the SPSC model. Using a unique generator-level panel dataset spanning the period 1995-2006, we find that the impact of load factor, generator type, and facility type on power generation varies substantially in magnitude and significance across different plant characteristics. The results have strong implications for generation policy in Korea, as outlined in this study.
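The core idea of a smooth coefficient model, namely regression coefficients β(z) that vary with a covariate z and are recovered by kernel-weighted local least squares, can be sketched in a few lines. The data-generating process and bandwidth below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

# Smooth coefficient regression y = beta(z) * x + e, with beta(z) recovered
# at each point z0 by Gaussian-kernel-weighted least squares. Illustrative
# data: z might play the role of load factor, x a (log) input.

rng = np.random.default_rng(3)
n = 500
z = rng.uniform(0, 1, n)
x = rng.normal(size=n)
y = (1.0 + np.sin(2 * np.pi * z)) * x + rng.normal(0, 0.3, n)

def beta_hat(z0, h=0.08):
    """Local least-squares estimate of beta(z0) with bandwidth h."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)
    return np.sum(w * x * y) / np.sum(w * x * x)

for z0 in (0.1, 0.25, 0.5, 0.75):
    true = 1.0 + np.sin(2 * np.pi * z0)
    print(f"z={z0:.2f}: beta_hat={beta_hat(z0):.2f} (true {true:.2f})")
```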
Abstract:
We study the strong coupling (SC) limit of the anisotropic Kardar-Parisi-Zhang (KPZ) model. A systematic mapping of the continuum model to its lattice equivalent shows that in the SC limit, anisotropic perturbations destroy all spatial correlations but retain a temporal scaling which shows a remarkable crossover along one of the two spatial directions, the choice of direction depending on the relative strength of the anisotropy. The results agree with exact numerics and are expected to settle the long-standing SC problem of the KPZ model in the infinite-range limit.
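For reference, the continuum starting point is presumably the anisotropic KPZ equation in 2+1 dimensions (Wolf's generalisation), in which the anisotropy enters through direction-dependent coefficients:

```latex
% Anisotropic KPZ equation in 2+1 dimensions; eta is Gaussian white noise.
\partial_t h = \nu_x \partial_x^2 h + \nu_y \partial_y^2 h
             + \tfrac{\lambda_x}{2}\,(\partial_x h)^2
             + \tfrac{\lambda_y}{2}\,(\partial_y h)^2 + \eta(x, y, t)
```

In Wolf's analysis, the sign of the product λ_x λ_y determines whether the nonlinearities are relevant, which is why the two spatial directions can behave so differently in the strong coupling regime discussed above.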
Abstract:
Building on a previous conceptual article, we present an empirically derived model of network learning, that is, learning by a group of organizations as a group. Based on a qualitative, longitudinal, multiple-method empirical investigation, five episodes of network learning were identified. Treating each episode as a discrete analytic case and using cross-case comparison, a model of network learning is developed which reflects the common, critical features of the episodes. The model comprises three conceptual themes relating to learning outcomes and three conceptual themes relating to learning process. Although closely related to conceptualizations that emphasize the social and political character of organizational learning, the model of network learning is derived from, and specifically for, more extensive networks in which relations among numerous actors may be arm's length or collaborative, and may be expected to change over time.