971 results for relative utility models
Abstract:
We investigated the validity of food intake estimates obtained by a self-administered FFQ relative to weighed food records (WFR) and the extent to which demographic, anthropometric, and social characteristics explain differences between these methods. A community-based sample of 96 Australian adults completed an FFQ and 12 d of WFR over 12 mo. The FFQ was adapted to the Australian setting from the questionnaire used in the US Nurses' Health Study. Spearman rank correlation coefficients ranged from 0.08 for other vegetables to 0.88 for tea. Exact agreement by quartiles of intake ranged from 27% (eggs) to 63% (tea). Differences between FFQ and WFR, regressed on personal characteristics, were significantly associated with at least 1 characteristic for 20 of the 37 foods. Sex was significantly associated with differences for 17 food groups, including 5 specific vegetable groups and 2 total fruit and vegetable groups. Use of dietary supplements and the presence of a medical condition were associated with differences for 5 foods; age, school leaving age, and occupation were associated with differences for 1-3 foods. BMI was not associated with differences for any foods. Regression models explained from 3% (wholemeal bread) to 37% (all cereals and products) of the variation in differences between methods. We conclude that the relative validity of intake estimates obtained by FFQ differs between men and women for a large number of foods. These results highlight the need for appropriate adjustment of diet-disease relations for factors affecting the validity of food intake estimates.
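The Spearman rank correlations reported in the abstract above can be computed as in the following minimal sketch (pure Python; the intake values are hypothetical, not the study's data):

```python
# Minimal sketch: Spearman rank correlation between two dietary
# intake estimates. The FFQ/WFR values below are invented.

def rank(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

ffq = [120, 80, 200, 150, 60]   # hypothetical FFQ intakes (g/d)
wfr = [110, 95, 180, 90, 70]    # hypothetical WFR intakes (g/d)
print(round(spearman(ffq, wfr), 3))  # → 0.7
```

Tied values receive average ranks, matching the usual definition of the coefficient.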
Abstract:
This article presents a three-dimensional definition space of the group development literature that differentiates group development models on three dimensions: content, population, and path dependency. The multidimensional conceptualization structures and integrates the vast group development literature, enabling direct comparison of competing theories. The utility of this definition space is demonstrated by using the relative positioning of two seemingly competing group development models, the punctuated equilibrium model and the integrative model, to reveal their complementarity. The authors also show how organizational researchers and practitioners can use the three-dimensional definition space to select an appropriate theoretical model for the group or group process with which they are working.
Abstract:
Objective: To explore the use of epidemiological modelling for the estimation of health effects of behaviour change interventions, using the example of computer-tailored nutrition education aimed at fruit and vegetable consumption in The Netherlands. Design: The effects of the intervention on changes in consumption were obtained from an earlier evaluation study. The effect on health outcomes was estimated using an epidemiological multi-state life table model. Input data for the model consisted of relative risk estimates for cardiovascular disease and cancers, data on disease occurrence and mortality, and survey data on the consumption of fruits and vegetables. Results: If the computer-tailored nutrition education reached the entire adult population and the effects were sustained, it could result in a mortality decrease of 0.4 to 0.7% and save 72 to 115 life-years per 100,000 persons aged 25 years or older. Healthy life expectancy is estimated to increase by 32.7 days for men and 25.3 days for women. The true effect is likely to lie between this theoretical maximum and zero effect, depending mostly on durability of behaviour change and reach of the intervention. Conclusion: Epidemiological models can be used to estimate the health impact of health promotion interventions.
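A full multi-state life table is beyond a short example, but the core life table mechanic behind such estimates, converting age-specific death probabilities into life expectancy and comparing a baseline with a mortality schedule reduced by a relative risk, can be sketched as follows (all rates and the 3% reduction are illustrative assumptions, not the model's inputs):

```python
# Minimal sketch: period life table comparing life expectancy under
# baseline mortality and mortality scaled by a relative risk.
# All death probabilities are invented for illustration.

def life_expectancy(qx):
    """Life expectancy (in intervals) from interval death probabilities qx."""
    lx, person_years = 1.0, 0.0
    for q in qx:
        deaths = lx * q
        person_years += lx - deaths / 2   # deaths assumed mid-interval
        lx -= deaths
    return person_years

# crude 5-year-interval death probabilities from age 25 upward
baseline = [0.005, 0.007, 0.010, 0.016, 0.025, 0.040, 0.065,
            0.105, 0.17, 0.27, 0.42, 0.62, 0.85, 1.0]
rr = 0.97  # assumed 3% mortality reduction from higher fruit/veg intake
reduced = [min(q * rr, 1.0) for q in baseline]

e0 = 5 * life_expectancy(baseline)   # convert 5-year intervals to years
e1 = 5 * life_expectancy(reduced)
print(f"baseline {e0:.2f} y, reduced {e1:.2f} y, gain {(e1 - e0) * 365:.0f} days")
```

The same bookkeeping, extended with disease states and transition probabilities, underlies the multi-state model used in the study.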
Abstract:
We describe methods for estimating the parameters of Markovian population processes in continuous time, thus increasing their utility in modelling real biological systems. A general approach, applicable to any finite-state continuous-time Markovian model, is presented, and this is specialised to a computationally more efficient method applicable to a class of models called density-dependent Markov population processes. We illustrate the versatility of both approaches by estimating the parameters of the stochastic SIS logistic model from simulated data. This model is also fitted to data from a population of Bay checkerspot butterfly (Euphydryas editha bayensis), allowing us to assess the viability of this population. (c) 2006 Elsevier Inc. All rights reserved.
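A minimal sketch of the stochastic SIS logistic model mentioned above, simulated with the Gillespie algorithm (the parameter values are illustrative, not those fitted in the paper):

```python
import random

# Minimal sketch: Gillespie simulation of the stochastic SIS logistic
# model, a density-dependent Markov population process.
# Parameters are illustrative, not the fitted butterfly values.

def simulate_sis(n0, N, beta, mu, t_end, rng):
    """Simulate population size n on [0, t_end]; absorbing at n = 0."""
    t, n = 0.0, n0
    path = [(t, n)]
    while t < t_end and n > 0:
        up = beta * n * (N - n) / N   # "birth" (infection/colonisation) rate
        down = mu * n                 # "death" (recovery/extinction) rate
        total = up + down
        t += rng.expovariate(total)   # exponential waiting time
        n += 1 if rng.random() < up / total else -1
        path.append((t, n))
    return path

rng = random.Random(1)
path = simulate_sis(n0=10, N=100, beta=1.5, mu=0.5, t_end=50.0, rng=rng)
print(path[-1])
```

Parameter estimation as in the paper would then maximise the likelihood of an observed path over (beta, mu); the simulation above supplies the synthetic data for such an exercise.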
Abstract:
Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population, because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency.
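The classical cost-negligent sensitivities and elasticities that the authors extend can be sketched for a small projection matrix as follows (the 2x2 vital rates are illustrative, not the Helmeted Honeyeater or koala parameterisations):

```python
import numpy as np

# Minimal sketch: sensitivity and elasticity of the population growth
# rate (dominant eigenvalue) for an illustrative 2x2 projection matrix.

A = np.array([[0.5, 1.2],    # fecundity terms (top row)
              [0.4, 0.8]])   # survival/transition terms

vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
lam = vals.real[i]                      # asymptotic growth rate
w = np.abs(vecs[:, i].real)             # right eigenvector: stable stage structure

vals_l, vecs_l = np.linalg.eig(A.T)
j = np.argmax(vals_l.real)
v = np.abs(vecs_l[:, j].real)           # left eigenvector: reproductive values

S = np.outer(v, w) / (v @ w)            # sensitivities d(lambda)/d(a_ij)
E = A * S / lam                         # elasticities (proportional sensitivities)
print(f"lambda = {lam:.4f}")
print(E)
```

Elasticities of a primitive matrix sum to 1, which is why they are read as proportional contributions; the paper's point is that these numbers say nothing about what each change costs.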
Abstract:
Calculating the potentials on the heart's epicardial surface from the body surface potentials constitutes one form of inverse problem in electrocardiography (ECG). Since these problems are ill-posed, one approach is to use zero-order Tikhonov regularization, where the squared norms of both the residual and the solution are minimized, with a relative weight determined by the regularization parameter. In this paper, we used three different methods to choose the regularization parameter in the inverse solutions of ECG: the L-curve, generalized cross validation (GCV) and the discrepancy principle (DP). Among them, the GCV method has received less attention in solutions to ECG inverse problems than the other methods. Since the DP approach requires knowledge of the noise norm, we used a model function to estimate it. The performance of the various methods was compared using a concentric sphere model and a real-geometry heart-torso model, with a distribution of current dipoles placed inside the heart model as the source. Gaussian measurement noise was added to the body surface potentials. The results show that all three methods produce good inverse solutions when the noise level is low; however, as the noise increases, the DP approach produces better results than the L-curve and GCV methods, particularly in the real-geometry model. Both the GCV and L-curve methods perform well in low- to medium-noise situations.
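Zero-order Tikhonov regularization with a GCV-style choice of the regularization parameter can be sketched on a toy ill-conditioned forward problem (the Vandermonde matrix stands in for a torso transfer matrix; this is not the paper's geometry):

```python
import numpy as np

# Minimal sketch: zero-order Tikhonov regularization with the
# regularization parameter chosen by generalized cross validation.
# The forward matrix and "true" source are invented for illustration.

rng = np.random.default_rng(0)
n = 20
A = np.vander(np.linspace(0, 1, n), n, increasing=True)  # ill-conditioned
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-6 * rng.standard_normal(n)           # noisy "measurements"

def tikhonov(A, b, lam):
    """Minimise ||Ax - b||^2 + lam ||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def gcv(A, b, lam):
    """GCV score G(lam) = n ||(I - H)b||^2 / trace(I - H)^2."""
    m = A.shape[0]
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
    resid = b - H @ b
    return m * (resid @ resid) / np.trace(np.eye(m) - H) ** 2

lams = np.logspace(-12, 0, 25)
best = min(lams, key=lambda l: gcv(A, b, l))
x = tikhonov(A, b, best)
print(best, np.linalg.norm(x - x_true))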
Abstract:
Linear models reach their limitations in applications with nonlinearities in the data. In this paper new empirical evidence is provided on the relative Euro inflation forecasting performance of linear and non-linear models. The well-established and widely used univariate ARIMA and multivariate VAR models are used as linear forecasting models, whereas neural networks (NN) are used as non-linear forecasting models. The level of subjectivity in the NN building process is kept to a minimum in an attempt to exploit the full potential of the NN. It is also investigated whether the historically poor performance of the theoretically superior measure of the monetary services flow, Divisia, relative to the traditional Simple Sum measure could be attributed to a certain extent to the evaluation of these indices within a linear framework. The results obtained suggest that non-linear models provide better within-sample and out-of-sample forecasts, and that linear models are simply a subset of them. The Divisia index also outperforms the Simple Sum index when evaluated in a non-linear framework. © 2005 Taylor & Francis Group Ltd.
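The linear-versus-nonlinear comparison can be illustrated on synthetic data: an OLS benchmark against a tiny one-hidden-layer network trained by gradient descent (the data, network size and training schedule are assumptions for illustration, not the paper's ARIMA/VAR/NN setup):

```python
import numpy as np

# Minimal sketch: linear OLS fit vs a small tanh network on data with
# a genuine nonlinearity. All data are synthetic.

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, (200, 1))
y = x**2 + 0.05 * rng.normal(size=(200, 1))   # nonlinear target

# Linear benchmark: OLS fit of y on [1, x]
A = np.column_stack([np.ones(200), x[:, 0]])
coef, *_ = np.linalg.lstsq(A, y[:, 0], rcond=None)
lin_mse = float(np.mean((A @ coef - y[:, 0]) ** 2))

# One-hidden-layer network, full-batch gradient descent on MSE
W1 = rng.normal(0.0, 1.0, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.1, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(x @ W1 + b1)           # hidden activations (200, 8)
    pred = h @ W2 + b2                 # network output (200, 1)
    g = 2.0 * (pred - y) / len(x)      # d(MSE)/d(pred)
    gW2 = h.T @ g;  gb2 = g.sum(0)
    gh = (g @ W2.T) * (1.0 - h**2)     # backprop through tanh
    gW1 = x.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

nn_mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(lin_mse, nn_mse)
```

On data with a genuine nonlinearity such as this, the network typically attains a lower mean squared error than the best linear fit, mirroring the qualitative finding of the abstract.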
Abstract:
The research described in this study replicates and extends the Brady et al. [Brady, M. K., Knight, G. A., Cronin Jr., J. J., Hult, G. T. M. and Keillor, B. D. (2005), Removing the Contextual Lens: A Multinational, Multi-setting Comparison of Service Evaluation Models, Journal of Retailing, 81(3), pp. 215-230] study's suggestion that future research in service evaluations should focus on emerging service economies such as China. The intent of the research was to examine the suitability of the models suggested by Brady and colleagues in the Chinese market. The replication largely duplicated their finding of the superiority of the comprehensive service evaluation model. Additionally, we also examined whether the service evaluation model is gender invariant. Our findings indicate that there are significant differences between genders. These findings are discussed relative to the limitations associated with the study.
Abstract:
This thesis describes the procedure and results from four years research undertaken through the IHD (Interdisciplinary Higher Degrees) Scheme at Aston University in Birmingham, sponsored by the SERC (Science and Engineering Research Council) and Monk Dunstone Associates, Chartered Quantity Surveyors. A stochastic networking technique VERT (Venture Evaluation and Review Technique) was used to model the pre-tender costs of public health, heating ventilating, air-conditioning, fire protection, lifts and electrical installations within office developments. The model enabled the quantity surveyor to analyse, manipulate and explore complex scenarios which previously had defied ready mathematical analysis. The process involved the examination of historical material costs, labour factors and design performance data. Components and installation types were defined and formatted. Data was updated and adjusted using mechanical and electrical pre-tender cost indices and location, selection of contractor, contract sum, height and site condition factors. Ranges of cost, time and performance data were represented by probability density functions and defined by constant, uniform, normal and beta distributions. These variables and a network of the interrelationships between services components provided the framework for analysis. The VERT program, in this particular study, relied upon Monte Carlo simulation to model the uncertainties associated with pre-tender estimates of all possible installations. The computer generated output in the form of relative and cumulative frequency distributions of current element and total services costs, critical path analyses and details of statistical parameters. From this data alternative design solutions were compared, the degree of risk associated with estimates was determined, heuristics were tested and redeveloped, and cost significant items were isolated for closer examination. 
The resultant models successfully combined cost, time and performance factors and provided the quantity surveyor with an appreciation of the cost ranges associated with the various engineering services design options.
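The Monte Carlo mechanics described above, sampling element costs from uniform, normal and beta distributions and reading percentiles off the cumulative frequency distribution, can be sketched as follows (the cost figures and distribution parameters are invented for illustration):

```python
import random

# Minimal sketch: Monte Carlo simulation of a pre-tender services cost
# built from uniform-, normal- and beta-distributed elements.
# All figures are invented, not the thesis's calibrated data.

def simulate_total(rng):
    heating = rng.uniform(90_000, 120_000)            # uniform element
    electrical = rng.normalvariate(150_000, 10_000)   # normal element
    # scaled beta: lifts cost between 40k and 70k, skewed low
    lifts = 40_000 + 30_000 * rng.betavariate(2, 5)
    return heating + electrical + lifts

rng = random.Random(42)
totals = sorted(simulate_total(rng) for _ in range(10_000))
# percentiles read off the empirical cumulative frequency distribution
p10, p50, p90 = (totals[int(q * len(totals))] for q in (0.1, 0.5, 0.9))
print(f"P10={p10:,.0f}  P50={p50:,.0f}  P90={p90:,.0f}")
```

The P10-P90 spread is exactly the "cost range" a quantity surveyor would quote from such a run; correlations between elements, which VERT's network structure encodes, are omitted here.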
Abstract:
Damage to insulation materials located near a primary circuit coolant leak may compromise the operation of the emergency core cooling system (ECCS). Insulation material in the form of mineral wool fiber agglomerates (MWFA) may be transported to the containment sump strainers, where they may block or penetrate the strainers. Though the impact of MWFA on the pressure drop across the strainers is minimal, corrosion products formed over time may also accumulate in the fiber cakes on the strainers, which can lead to a significant increase in the strainer pressure drop and result in cavitation in the ECCS. An experimental and theoretical study performed by the Helmholtz-Zentrum Dresden-Rossendorf and the Hochschule Zittau/Görlitz is investigating the phenomena that may be observed in the containment vessel during a primary circuit coolant leak. The study entails the generation of fiber agglomerates, the determination of their transport properties in single and multi-effect experiments and the long-term effect that corrosion and erosion of the containment internals by the coolant has on the strainer pressure drop. The focus of this paper is on the verification and validation of numerical models that can predict the transport of MWFA. A number of pseudo-continuous dispersed phases of spherical wetted agglomerates represent the MWFA. The size, the density, the relative viscosity of the fluid-fiber agglomerate mixture and the turbulent dispersion all affect how the fiber agglomerates are transported. In the cases described here, the size is kept constant while the density is modified. This definition affects both the terminal velocity and volume fraction of the dispersed phases. Note that the relative viscosity is only significant at high concentrations.
Three single-effect experiments were used to provide validation data on the transport of the fiber agglomerates under conditions of sedimentation in a quiescent fluid, sedimentation in a horizontal flow and suspension in a horizontal flow. The experiments were performed in a rectangular column for the quiescent fluid and in a racetrack-type channel that provided a near-uniform horizontal flow. The numerical models of sedimentation in the column and the racetrack channel produced sedimentation characteristics consistent with the experiments. For channel suspension, the heavier fibers tend to accumulate at the channel base even at high velocities, while lighter phases are more likely to be transported around the channel.
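The role of agglomerate density at fixed size can be illustrated with the Stokes terminal settling velocity of a spherical particle (a strong simplification of the CFD treatment; the diameter and densities below are illustrative):

```python
import math

# Minimal sketch: Stokes terminal settling velocity of a spherical
# wetted fiber agglomerate, showing how changing agglomerate density
# at fixed size changes sedimentation. Values are illustrative.

def terminal_velocity(d, rho_p, rho_f=998.0, mu=1.0e-3, g=9.81):
    """Stokes law v = g d^2 (rho_p - rho_f) / (18 mu); valid only
    at low particle Reynolds number."""
    return g * d**2 * (rho_p - rho_f) / (18 * mu)

d = 0.5e-3  # 0.5 mm agglomerate diameter, held constant
for rho_p in (1005.0, 1020.0, 1050.0):
    v = terminal_velocity(d, rho_p)
    print(f"rho={rho_p:.0f} kg/m^3 -> v={v * 1000:.2f} mm/s")
```

Because wetted agglomerates are only slightly denser than the coolant, small density changes shift the settling velocity substantially, which is why the density of the dispersed phases is the varied parameter in the simulations.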
Abstract:
Mineral wool insulation material applied to the primary cooling circuit of a nuclear reactor may be damaged in the course of a loss of coolant accident (LOCA). The insulation material released by the leak may compromise the operation of the emergency core cooling system (ECCS), as it may be transported together with the coolant, in the form of mineral wool fiber agglomerate (MWFA) suspensions, to the containment sump strainers, which are mounted at the inlet of the ECCS to keep any debris away from the emergency cooling pumps. In the further course of the LOCA, the MWFA may block or penetrate the strainers. In addition to the impact of MWFA on the pressure drop across the strainers, corrosion products formed over time may also accumulate in the fiber cakes on the strainers, which can lead to a significant increase in the strainer pressure drop and result in cavitation in the ECCS. Therefore, it is essential to understand the transport characteristics of the insulation materials in order to determine the long-term operability of nuclear reactors that undergo a LOCA. An experimental and theoretical study performed by the Helmholtz-Zentrum Dresden-Rossendorf and the Hochschule Zittau/Görlitz is investigating the phenomena that may be observed in the containment vessel during a primary circuit coolant leak. The study entails the generation of fiber agglomerates, the determination of their transport properties in single and multi-effect experiments and the long-term effects that particles formed due to corrosion of metallic containment internals by the coolant medium have on the strainer pressure drop. The focus of this presentation is on the numerical models that are used to predict the transport of MWFA by CFD simulations. A number of pseudo-continuous dispersed phases of spherical wetted agglomerates can represent the MWFA.
The size, the density, the relative viscosity of the fluid-fiber agglomerate mixture and the turbulent dispersion all affect how the fiber agglomerates are transported. In the cases described here, the size is kept constant while the density is modified. This definition affects both the terminal velocity and volume fraction of the dispersed phases. Only one of the single-effect experimental scenarios used to validate the numerical models is described here. The scenario examines the suspension and horizontal transport of the fiber agglomerates in a racetrack-type channel. The corresponding experiments will be described in an accompanying presentation (see abstract of Seeliger et al.).
Abstract:
This empirical study employs a different methodology to examine the change in wealth associated with mergers and acquisitions (M&As) for US firms. Specifically, we employ the standard CAPM, the Fama-French three-factor model and the Carhart four-factor model within the OLS and GJR-GARCH estimation methods to test the behaviour of the cumulative abnormal returns (CARs). Whilst the standard CAPM captures the variability of stock returns with the overall market, the Fama-French factors capture the risk factors that are important to investors. Additionally, augmenting the Fama-French three-factor model with the Carhart momentum factor to generate the four-factor model captures additional pricing elements that may affect stock returns. Traditionally, estimates of abnormal returns (ARs) in M&A situations rely on the standard OLS estimation method. However, standard OLS will provide inefficient estimates of the ARs if the data contain ARCH and asymmetric effects. To minimise this problem of estimation efficiency we re-estimated the ARs using the GJR-GARCH estimation method. We find that there is variation in the results as regards both the choice of models and the estimation methods. Besides these variations in the estimated models and the choice of estimation methods, we also tested whether the ARs are affected by the degree of liquidity of the stocks and the size of the firm. We document significant positive post-announcement cumulative ARs (CARs) for target firm shareholders under both the OLS and GJR-GARCH methods across all three methodologies. However, post-event CARs for acquiring firm shareholders were insignificant for both sets of estimation methods under the three methodologies. The GJR-GARCH method seems to generate larger CARs than those of the OLS method.
Using both market capitalization and trading volume as measures of liquidity and firm size, we observed strong return continuations in medium firms relative to small and large firms for target shareholders. We consistently observed market efficiency in small and large firms. This implies that the prices of small and large target firms adjust rapidly to new information, resulting in a more efficient market. For acquirer firms, our measure of liquidity captures strong return continuations for small firms under the OLS estimates for both the CAPM and Fama-French three-factor models, whilst under the GJR-GARCH estimates only for the Carhart model. Post-announcement bootstrapped simulated CARs confirmed our earlier results.
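The basic event-study mechanics, estimating a market model by OLS over an estimation window and cumulating abnormal returns over the event window, can be sketched as follows (synthetic returns; the paper additionally uses Fama-French/Carhart factors and GJR-GARCH estimation):

```python
import numpy as np

# Minimal sketch: OLS market-model abnormal returns and CARs around an
# announcement. Returns are synthetic; the 3% announcement-day jump is
# planted for illustration.

rng = np.random.default_rng(7)
T_est, T_evt = 200, 11                                # estimation / event windows
rm = rng.normal(0.0004, 0.01, T_est + T_evt)          # market returns
ri = 0.0002 + 1.1 * rm + rng.normal(0, 0.004, T_est + T_evt)
ri[T_est + 5] += 0.03                                 # announcement-day jump

# Fit alpha, beta on the estimation window only
X = np.column_stack([np.ones(T_est), rm[:T_est]])
alpha, beta = np.linalg.lstsq(X, ri[:T_est], rcond=None)[0]

# Abnormal returns and CAR over the event window
ar = ri[T_est:] - (alpha + beta * rm[T_est:])
car = np.cumsum(ar)
print(f"alpha={alpha:.5f} beta={beta:.3f} CAR={car[-1]:.4f}")
```

Swapping the single market factor for the Fama-French or Carhart factor sets only widens `X`; the AR/CAR arithmetic is identical.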
Abstract:
This paper compares the UK/US exchange rate forecasting performance of linear and nonlinear models based on monetary fundamentals with that of a random walk (RW) model. Structural breaks are identified and taken into account. The exchange rate forecasting framework is also used to assess the relative merits of the official Simple Sum and the weighted Divisia measures of money. Overall, there are four main findings. First, the majority of the models with fundamentals are able to beat the RW model in forecasting the UK/US exchange rate. Second, the most accurate forecasts of the UK/US exchange rate are obtained with a nonlinear model. Third, taking structural breaks into account reveals that the Divisia aggregate performs better than its Simple Sum counterpart. Finally, Divisia-based models provide more accurate forecasts than Simple Sum-based models, provided they are constructed within a nonlinear framework.
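The out-of-sample comparison with a random walk can be sketched as follows (synthetic series standing in for the UK/US rate; following the usual practice in this literature, the fundamentals-based forecast conditions on realized fundamentals):

```python
import numpy as np

# Minimal sketch: random-walk forecast vs a fundamentals-based forecast,
# compared by out-of-sample RMSE. All series are synthetic.

rng = np.random.default_rng(3)
T = 300
f = np.cumsum(rng.normal(0, 0.01, T))        # "fundamental" (random walk)
s = 0.8 * f + rng.normal(0, 0.005, T)        # exchange rate tied to it

split = 250
rw_fc = s[split - 1:T - 1]                   # random walk: s_hat_t = s_{t-1}

# fundamentals model: regress s on f in-sample, predict out-of-sample
# (conditional forecast: realized out-of-sample f is taken as known)
A = np.column_stack([np.ones(split), f[:split]])
a, b = np.linalg.lstsq(A, s[:split], rcond=None)[0]
fund_fc = a + b * f[split:]

def rmse(fc, actual):
    return float(np.sqrt(np.mean((fc - actual) ** 2)))

print(rmse(rw_fc, s[split:]), rmse(fund_fc, s[split:]))
```

Here the fundamentals model wins by construction because the synthetic rate really is driven by `f`; the paper's empirical question is whether the same holds for actual monetary fundamentals.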
Abstract:
Context/Motivation - Different modeling techniques have been used to model requirements and decision-making of self-adaptive systems (SASs). Specifically, goal models have been prolific in supporting decision-making depending on partial and total fulfilment of functional (goals) and non-functional requirements (softgoals). Different goal-realization strategies can have different effects on softgoals, which are specified with weighted contribution-links. The final decision about which strategy to use is based, among other reasons, on a utility function that takes into account the weighted sum of the different effects on softgoals. Questions/Problems - One of the main challenges of decision-making in self-adaptive systems is dealing with uncertainty during runtime. New techniques are needed to systematically revise the current model when empirical evidence becomes available from the deployment. Principal ideas/results - In this paper we enrich the decision-making supported by goal models by using Dynamic Decision Networks (DDNs). Goal-realization strategies and their impact on softgoals correspond to decision alternatives and to conditional probabilities and expected utilities in the DDNs, respectively. Our novel approach allows the specification of preferences over the softgoals and supports reasoning about partial satisfaction of softgoals using probabilities. We report results of the application of the approach to two different cases. Our early results suggest that the decision-making process of SASs can be improved by using DDNs. © 2013 Springer-Verlag.
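A one-step expected-utility choice among goal-realization strategies, the core of the DDN-based reasoning described above, can be sketched as follows (strategy names, probabilities and softgoal weights are invented for illustration):

```python
# Minimal sketch: choosing a goal-realization strategy by expected
# utility over probabilistic softgoal satisfaction, in the spirit of a
# one-step dynamic decision network. All numbers are invented.

# P(softgoal satisfied | strategy), per softgoal
p_satisfy = {
    "cache_results": {"performance": 0.9, "accuracy": 0.6},
    "recompute":     {"performance": 0.4, "accuracy": 0.95},
}
weights = {"performance": 0.7, "accuracy": 0.3}   # stakeholder preferences

def expected_utility(strategy):
    probs = p_satisfy[strategy]
    return sum(weights[sg] * probs[sg] for sg in weights)

best = max(p_satisfy, key=expected_utility)
for strat in p_satisfy:
    print(strat, round(expected_utility(strat), 3))
print("chosen:", best)
```

In a full DDN the conditional probabilities would be revised as runtime evidence arrives, so the chosen strategy can change over the system's lifetime.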