10 results for Model choice
in Aston University Research Archive
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable or independent variables in cost expressions which are minimised. The four systems considered are referred to as (Q,R), (nQ,R,T), (M,T) and (M,R,T). With (Q,R) a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems reviews are made only at intervals of T. With (nQ,R,T) an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, which is an integer, is chosen at the time so that the new order cover just exceeds R. In (M,T) each order increases the order cover to M. Finally, in (M,R,T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q,R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q,R) system, only exact models were derived for the other three systems. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three items, one of which, the backorder cost, may be either a linear, quadratic or an exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is allowed to follow a distribution. All the sets of equations were programmed for a KDF 9 computer and the computed performances of the four inventory control procedures are compared under each assumption.
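The four ordering rules can be stated compactly. Below is a minimal sketch, not taken from the thesis; the names (order_cover, Q, R, M) and the function structure are illustrative only.

```python
# Illustrative sketch (not from the thesis) of the four ordering rules described above.
import math

def order_qr(order_cover, Q, R):
    """(Q,R): order a fixed quantity Q whenever order cover is at or below R."""
    return Q if order_cover <= R else 0

def order_nqrt(order_cover, Q, R):
    """(nQ,R,T): at a review, order the smallest integer multiple nQ
    that lifts the order cover just above R."""
    if order_cover > R:
        return 0
    n = math.floor((R - order_cover) / Q) + 1
    return n * Q

def order_mt(order_cover, M):
    """(M,T): at every review, order up to M."""
    return M - order_cover

def order_mrt(order_cover, M, R):
    """(M,R,T): at a review, order up to M only if order cover does not exceed R."""
    return (M - order_cover) if order_cover <= R else 0
```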
Abstract:
This paper extends previous analyses of the choice between internal and external R&D to consider the costs of internal R&D. The Heckman two-stage estimator is used to estimate the determinants of internal R&D unit cost (i.e. cost per product innovation) allowing for sample selection effects. Theory indicates that R&D unit cost will be influenced by scale issues and by the technological opportunities faced by the firm. Transaction costs encountered in research activities are allowed for and, in addition, consideration is given to issues of market structure which influence the choice of R&D mode without affecting the unit cost of internal or external R&D. The model is tested on data from a sample of over 500 UK manufacturing plants which have engaged in product innovation. The key determinants of R&D mode are the scale of plant and R&D input, and market structure conditions. In terms of the R&D cost equation, scale factors are again important and have a non-linear relationship with R&D unit cost. Specificities in physical and human capital also affect unit cost, but have no clear impact on the choice of R&D mode. There is no evidence of technological opportunity affecting either R&D cost or the internal/external decision.
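As a rough illustration of the Heckman two-stage procedure mentioned above, the sketch below fits a probit selection equation, forms the inverse Mills ratio and adds it to the unit-cost regression. The variable names and data layout are hypothetical; this is not the paper's code or data.

```python
# Illustrative sketch of a Heckman two-step estimator (hypothetical variables).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(X_sel, selected, X_cost, y_cost):
    """Step 1: probit for the selection decision (e.g. doing internal R&D).
    Step 2: OLS of the outcome (e.g. R&D unit cost) on the selected sample,
    augmented with the inverse Mills ratio to correct for selection."""
    # Step 1: probit on the full sample
    probit = sm.Probit(selected, sm.add_constant(X_sel)).fit(disp=0)
    xb = probit.fittedvalues                     # linear index X'beta
    mills = norm.pdf(xb) / norm.cdf(xb)          # inverse Mills ratio

    # Step 2: outcome regression on the selected observations only
    mask = selected.astype(bool)
    X2 = sm.add_constant(np.column_stack([X_cost[mask], mills[mask]]))
    ols = sm.OLS(y_cost[mask], X2).fit()
    return probit, ols
```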
Abstract:
The topic of my research is consumer brand equity (CBE). My thesis is that the success or otherwise of a brand is better viewed from the consumers’ perspective. I specifically focus on consumers as a unique group of stakeholders whose involvement with brands is crucial to the overall success of branding strategy. To this end, this research examines the constellation of ideas on brand equity that have hitherto been offered by various scholars. Through a systematic integration of the concepts and practices identified by these scholars (concepts and practices such as competitiveness, consumer searching, consumer behaviour, brand image, brand relevance, consumer perceived value, etc.), this research identifies CBE as a construct that is shaped, directed and made valuable by the beliefs, attitudes and the subjective preferences of consumers. This is done by examining the criteria on the basis of which consumers evaluate brands and make brand purchase decisions. Understanding the criteria by which consumers evaluate brands is crucial for several reasons. First, as the basis upon which consumers select brands changes with consumption norms and technology, understanding the consumer choice process will help in formulating branding strategy. Secondly, an understanding of these criteria will help in formulating a creative and innovative agenda for ‘new brand’ propositions. Thirdly, it will also influence firms’ ability to simulate and mould the plasticity of demand for existing brands. In examining these three issues, this thesis presents a comprehensive account of CBE. This is because the first of these issues deals with the content of CBE, the second addresses the problem of how to develop a reliable and valid measuring instrument for CBE, and the third examines the structural and statistical relationships between the factors of CBE and the consequences of CBE on consumer perceived value (CPV). Using LISREL-SIMPLIS 8.30, the study finds direct and significant influential links between consumer brand equity and consumer value perception.
Abstract:
Taste acuity for the bitter taste of 6-n-propylthiouracil (PROP) is a heritable trait. Some individuals perceive concentrated levels of PROP to taste extremely bitter (supertasters) or moderately bitter (medium tasters), whereas others detect only a mild taste or none at all (non-tasters). Heightened PROP acuity has been reported to be associated with greater acuity for a variety of compounds found in ordinary foods, although there are some inconsistent findings. The extent to which these compounds are perceived may affect food likes/dislikes and dietary intake. The majority of studies have tended to measure food likes and intake using questionnaires or laboratory preparations of a single taste quality. The present study used food diaries and sensory responses to real foods to be better able to generalise to real eating situations. There was no substantial evidence that genetically mediated taste acuity for PROP had a direct influence on food likes/dislikes or intake, although there was evidence that dietary restraint could have influenced these findings among the female samples. However, investigation of PROP tasting among individuals with coronary heart disease (CHD) and a control group suggested that PROP acuity could function as a genetic taste marker for heart disease and potentially other diet-related conditions. CHD was associated with decreased PROP acuity among men. This is consistent with the findings that decreased PROP acuity tended to be associated with an increased likelihood of being a smoker and a higher body mass index. It is concluded that there is not a simple and direct relationship between PROP tasting ability and food choice. An interaction between PROP acuity and other mediating factors may be involved in a more complex model of food choice. The evidence that PROP taste acuity may function as a genetic taste marker for coronary heart disease could have wide implications for understanding the aetiology, and ultimately the prevention, of diet-related disease.
Abstract:
Contrast sensitivity improves with the area of a sine-wave grating, but why? Here we assess this phenomenon against contemporary models involving spatial summation, probability summation, uncertainty, and stochastic noise. Using a two-interval forced-choice procedure we measured contrast sensitivity for circular patches of sine-wave gratings with various diameters that were blocked or interleaved across trials to produce low and high extrinsic uncertainty, respectively. Summation curves were steep initially, becoming shallower thereafter. For the smaller stimuli, sensitivity was slightly worse for the interleaved design than for the blocked design. Neither area nor blocking affected the slope of the psychometric function. We derived model predictions for noisy mechanisms and extrinsic uncertainty that was either low or high. The contrast transducer was either linear (c^1.0) or nonlinear (c^2.0), and pooling was either linear or a MAX operation. There was either no intrinsic uncertainty, or it was fixed or proportional to stimulus size. Of these 10 canonical models, only the nonlinear transducer with linear pooling (the noisy energy model) described the main forms of the data for both experimental designs. We also show how a cross-correlator can be modified to fit our results and provide a contemporary presentation of the relation between summation and the slope of the psychometric function.
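A minimal sketch of the noisy energy observer named above, assuming a c^2 transducer, linear pooling over the stimulated area, independent pixel noise and a 2IFC decision rule; the parameter values are illustrative, not the authors'.

```python
# Illustrative sketch (not the authors' code) of the noisy energy observer:
# squaring transducer, linear pooling over area, independent pixel noise, 2IFC.
import numpy as np

rng = np.random.default_rng(0)

def percent_correct_2ifc(contrast, n_pixels, noise_sd=1.0, n_trials=20000):
    """Proportion correct in 2IFC for a grating of the given contrast
    covering n_pixels, under a c^2 transducer with linear pooling."""
    # Each pixel's response is squared contrast plus independent Gaussian noise;
    # pooling is a linear sum over the stimulated pixels.
    signal = (contrast**2 + rng.normal(0, noise_sd, (n_trials, n_pixels))).sum(axis=1)
    blank = rng.normal(0, noise_sd, (n_trials, n_pixels)).sum(axis=1)
    return np.mean(signal > blank)

# Larger areas raise performance at a fixed contrast (area summation), but noise
# is pooled along with signal, so thresholds fall only gradually with area.
for area in (4, 16, 64):
    print(area, percent_correct_2ifc(contrast=0.5, n_pixels=area))
```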
Abstract:
We discuss aggregation of data from neuropsychological patients and the process of evaluating models using data from a series of patients. We argue that aggregation can be misleading but not aggregating can also result in information loss. The basis for combining data needs to be theoretically defined, and the particular method of aggregation depends on the theoretical question and characteristics of the data. We present examples, often drawn from our own research, to illustrate these points. We also argue that statistical models and formal methods of model selection are a useful way to test theoretical accounts using data from several patients in multiple-case studies or case series. Statistical models can often measure fit in a way that explicitly captures what a theory allows; the parameter values that result from model fitting often measure theoretically important dimensions and can lead to more constrained theories or new predictions; and model selection allows the strength of evidence for models to be quantified without forcing this into the artificial binary choice that characterizes hypothesis testing methods. Methods that aggregate and then formally model patient data, however, are not automatically preferred to other methods. Which method is preferred depends on the question to be addressed, characteristics of the data, and practical issues like availability of suitable patients, but case series, multiple-case studies, single-case studies, statistical models, and process models should be complementary methods when guided by theory development.
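As a toy illustration of the formal model selection advocated here, the sketch below fits two candidate models to each patient in a case series and compares summed AIC rather than making a binary accept/reject decision; the fitting functions and data are hypothetical placeholders, not the authors' models.

```python
# Illustrative sketch (not the authors' code): compare two candidate models
# across a case series using AIC instead of a binary hypothesis test.

def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

def compare_models(patients, fit_model_a, fit_model_b):
    """Fit both models to each patient and accumulate AIC; the summed
    difference quantifies the strength of evidence across the series."""
    total = {"A": 0.0, "B": 0.0}
    for data in patients:
        ll_a, k_a = fit_model_a(data)   # each fit returns (log-likelihood, #parameters)
        ll_b, k_b = fit_model_b(data)
        total["A"] += aic(ll_a, k_a)
        total["B"] += aic(ll_b, k_b)
    return total   # lower summed AIC indicates the better-supported model
```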
Abstract:
We study the strong coupling (SC) limit of the anisotropic Kardar-Parisi-Zhang (KPZ) model. A systematic mapping of the continuum model to its lattice equivalent shows that in the SC limit, anisotropic perturbations destroy all spatial correlations but retain a temporal scaling which shows a remarkable crossover along one of the two spatial directions, the choice of direction depending on the relative strength of the anisotropy. The results agree with exact numerics and are expected to settle the long-standing SC problem of a KPZ model in the infinite range limit. © 2007 The American Physical Society.
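For reference, a commonly used form of the anisotropic KPZ equation, with separate nonlinear coefficients along the two spatial directions (the paper's exact notation and conventions may differ):

```latex
\partial_t h(\mathbf{x},t) = \nu_x \,\partial_x^2 h + \nu_y \,\partial_y^2 h
  + \tfrac{\lambda_x}{2}\,(\partial_x h)^2 + \tfrac{\lambda_y}{2}\,(\partial_y h)^2
  + \eta(\mathbf{x},t),
\qquad
\langle \eta(\mathbf{x},t)\,\eta(\mathbf{x}',t') \rangle
  = 2D\,\delta^{2}(\mathbf{x}-\mathbf{x}')\,\delta(t-t')
```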
Abstract:
Background: Coronary heart disease (CHD) is a public health priority in the UK. The National Service Framework (NSF) has set standards for the prevention, diagnosis and treatment of CHD, which include the use of cholesterol-lowering agents aimed at achieving targets of blood total cholesterol (TC) < 5.0 mmol/L and low density lipoprotein-cholesterol (LDL-C) < 3.0 mmol/L. In order to achieve these targets cost effectively, prescribers need to make an informed choice from the range of statins available. Aim: To estimate the average and relative cost effectiveness of atorvastatin, fluvastatin, pravastatin and simvastatin in achieving the NSF LDL-C and TC targets. Design: Model-based economic evaluation. Methods: An economic model was constructed to estimate the number of patients achieving the NSF targets for LDL-C and TC at each dose of statin, and to calculate the average drug cost and incremental drug cost per patient achieving the target levels. The population baseline LDL-C and TC, and drug efficacy and drug costs were taken from previously published data. Estimates of the distribution of patients receiving each dose of statin were derived from the UK national DIN-LINK database. Results: The estimated annual drug cost per 1000 patients treated was £289 000 with atorvastatin, £315 000 with simvastatin, £333 000 with pravastatin and £167 000 with fluvastatin. The percentages of patients achieving target were 74.4%, 46.4%, 28.4% and 13.2% for atorvastatin, simvastatin, pravastatin and fluvastatin, respectively. Incremental drug costs per extra patient treated to LDL-C and TC targets compared with fluvastatin were £198 and £226 for atorvastatin, £443 and £567 for simvastatin, and £1089 and £2298 for pravastatin, using 2002 drug costs. Conclusions: As a result of its superior efficacy, atorvastatin generates a favourable cost-effectiveness profile as measured by drug cost per patient treated to LDL-C and TC targets. For a given drug budget, more patients would achieve NSF LDL-C and TC targets with atorvastatin than with any of the other statins examined.
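The incremental figures quoted above can be reproduced approximately from the annual costs per 1000 patients and the percentages reaching target. The sketch below shows that arithmetic for the LDL-C target; it is not the paper's model, and small discrepancies from the published values reflect rounding in the abstract.

```python
# Illustrative arithmetic: incremental drug cost per extra patient treated to
# target, relative to the cheapest comparator, using the figures quoted above.
costs_per_1000 = {"atorvastatin": 289_000, "simvastatin": 315_000,
                  "pravastatin": 333_000, "fluvastatin": 167_000}
pct_to_target = {"atorvastatin": 74.4, "simvastatin": 46.4,
                 "pravastatin": 28.4, "fluvastatin": 13.2}

baseline = "fluvastatin"   # cheapest comparator, as in the abstract
for drug in ("atorvastatin", "simvastatin", "pravastatin"):
    extra_cost = costs_per_1000[drug] - costs_per_1000[baseline]
    extra_patients = (pct_to_target[drug] - pct_to_target[baseline]) * 10  # per 1000 treated
    print(f"{drug}: ~£{extra_cost / extra_patients:.0f} per extra patient to target")
```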
Abstract:
We compare spot patterns generated by Turing mechanisms with those generated by replication cascades, in a model one-dimensional reaction-diffusion system. We determine the stability region of spot solutions in parameter space as a function of a natural control parameter (feed-rate) where degenerate patterns with different numbers of spots coexist for a fixed feed-rate. While it is possible to generate identical patterns via both mechanisms, we show that replication cascades lead to a wider choice of pattern profiles that can be selected through a tuning of the feed-rate, exploiting hysteresis and directionality effects of the different pattern pathways.
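As an illustration of the kind of one-dimensional two-species reaction-diffusion system described, the sketch below integrates a Gray-Scott-like model in which the feed rate is the control parameter; the paper's actual model, parameter values and numerics are not specified here, so everything below is an assumption for illustration only.

```python
# Illustrative sketch: a 1-D Gray-Scott-like reaction-diffusion system in which
# the feed rate F is the control parameter. Not the paper's model; all values
# here are hypothetical.
import numpy as np

def simulate(F=0.04, k=0.06, Du=0.16, Dv=0.08, N=256, dt=1.0, steps=20_000):
    u = np.ones(N)
    v = np.zeros(N)
    mid = slice(N // 2 - 10, N // 2 + 10)
    u[mid], v[mid] = 0.5, 0.25           # local perturbation to seed spots

    def lap(a):                           # periodic 1-D Laplacian (grid spacing 1)
        return np.roll(a, 1) - 2 * a + np.roll(a, -1)

    for _ in range(steps):
        uvv = u * v * v
        u += dt * (Du * lap(u) - uvv + F * (1 - u))
        v += dt * (Dv * lap(v) + uvv - (F + k) * v)
    return u, v                           # the final v profile shows the spot pattern

u, v = simulate()
thresh = 0.5 * v.max()
spots = int(np.sum((v[1:] > thresh) & (v[:-1] <= thresh)))
print("approximate number of spots:", spots)
```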
Abstract:
Our goal here is a more complete understanding of how information about luminance contrast is encoded and used by the binocular visual system. In two-interval forced-choice experiments we assessed observers' ability to discriminate changes in contrast that could be an increase or decrease of contrast in one or both eyes, or an increase in one eye coupled with a decrease in the other (termed IncDec). The base or pedestal contrasts were either in-phase or out-of-phase in the two eyes. The opposed changes in the IncDec condition did not cancel each other out, implying that along with binocular summation, information is also available from mechanisms that do not sum the two eyes' inputs. These might be monocular mechanisms. With a binocular pedestal, monocular increments of contrast were much easier to see than monocular decrements. These findings suggest that there are separate binocular (B) and monocular (L,R) channels, but only the largest of the three responses, max(L,B,R), is available to perception and decision. Results from contrast discrimination and contrast matching tasks were described very accurately by this model. Stimuli, data, and model responses can all be visualized in a common binocular contrast space, allowing a more direct comparison between models and data. Some results with out-of-phase pedestals were not accounted for by the max model of contrast coding, but were well explained by an extended model in which gratings of opposite polarity create the sensation of lustre. Observers can discriminate changes in lustre alongside changes in contrast.
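A minimal sketch of the three-channel max rule described above, using a hypothetical power-law transducer in place of the authors' full contrast-coding model (which includes nonlinearities and gain control not shown here):

```python
# Illustrative sketch (not the authors' full model) of the three-channel scheme:
# left-eye (L), right-eye (R) and binocular-sum (B) responses, with only the
# largest of the three passed on to decision. The power-law transducer is a
# hypothetical stand-in.

def channel_responses(c_left, c_right, p=2.0):
    L = c_left ** p                    # monocular left channel
    R = c_right ** p                   # monocular right channel
    B = (c_left + c_right) ** p        # binocular summing channel
    return L, B, R

def perceived_response(c_left, c_right):
    """Only max(L, B, R) is available to perception and decision."""
    return max(channel_responses(c_left, c_right))

# On a binocular pedestal, a monocular increment changes max(L,B,R) more than a
# monocular decrement does, consistent with increments being easier to see.
pedestal = 0.2
print(perceived_response(pedestal + 0.05, pedestal) - perceived_response(pedestal, pedestal))
print(perceived_response(pedestal, pedestal) - perceived_response(pedestal - 0.05, pedestal))
```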