18 results for equation-error models
at University of Queensland eSpace - Australia
Abstract:
One of the most significant challenges facing the development of linear optics quantum computing (LOQC) is mode mismatch, whereby photon distinguishability is introduced within circuits, undermining quantum interference effects. We examine the effects of mode mismatch on the parity (or fusion) gate, the fundamental building block in several recent LOQC schemes. We derive simple error models for the effects of mode mismatch on its operation, and relate these error models to current fault-tolerant-threshold estimates.
Abstract:
A framework for developing marketing category management decision support systems (DSS) based upon the Bayesian Vector Autoregressive (BVAR) model is extended. Since the BVAR model is vulnerable to permanent and temporary shifts in purchasing patterns over time, a form that can correct for the shifts and still provide the other advantages of the BVAR is a Bayesian Vector Error-Correction Model (BVECM). We present the mechanics of extending the DSS to move from a BVAR model to the BVECM model for the category management problem. Several additional iterative steps are required in the DSS to allow the decision maker to arrive at the best forecast possible. The revised marketing DSS framework and model fitting procedures are described. Validation is conducted on a sample problem.
Abstract:
The Perk-Schultz model may be expressed in terms of the solution of the Yang-Baxter equation associated with the fundamental representation of the untwisted affine extension of the general linear quantum superalgebra U-q[gl(m/n)], with a multiparametric coproduct action as given by Reshetikhin. Here, we present analogous explicit expressions for solutions of the Yang-Baxter equation associated with the fundamental representations of the twisted and untwisted affine extensions of the orthosymplectic quantum superalgebras U-q[osp(m/n)]. In this manner, we obtain generalizations of the Perk-Schultz model.
Abstract:
There has been an abundance of literature on the modelling of hydrocyclones over the past 30 years. However, in the comminution area at least, the more popular commercially available packages (e.g. JKSimMet, Limn, MODSIM) use the models developed by Nageswararao and Plitt in the 1970s, either as published at that time, or with minor modification. With the benefit of 30 years of hindsight, this paper discusses the assumptions and approximations used in developing these models. Differences in model structure and the choice of dependent and independent variables are also considered. Redundancies are highlighted and an assessment made of the general applicability of each of the models, their limitations and the sources of error in their model predictions. This paper provides the latest version of the Nageswararao model based on the above analysis, in a form that can readily be implemented in any suitable programming language, or within a spreadsheet. The Plitt model is also presented in similar form. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
We obtain a diagonal solution of the dual reflection equation for the elliptic A(n-1)((1)) solid-on-solid model. The isomorphism between the solutions of the reflection equation and its dual is studied. (C) 2004 American Institute of Physics.
Abstract:
The use of presence/absence data in wildlife management and biological surveys is widespread. There is a growing interest in quantifying the sources of error associated with these data. We show that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models using simulated data. Then we introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, that permits the estimation of the rate of false-negative errors and the correction of estimates of the probability of occurrence for false-negative errors by using repeated visits to the same site. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. The method with three repeated visits eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve precision of estimates to levels comparable to those achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are less than or equal to 50%, greater efficiency is gained by adding more sites, whereas when error rates are greater than 50% it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
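As an illustrative sketch only (not the authors' implementation; parameter values are assumed), the zero-inflated binomial likelihood for y detections in T repeated visits to a site with occupancy probability psi and per-visit detection probability p can be written as:

```python
from math import comb

def zib_pmf(y, T, psi, p):
    """P(y detections in T visits) under a zero-inflated binomial:
    the site is occupied with probability psi; if occupied, each
    visit independently detects the species with probability p."""
    binom = comb(T, y) * p**y * (1 - p)**(T - y)
    return psi * binom + (1 - psi) * (1.0 if y == 0 else 0.0)

T, psi, p = 3, 0.6, 0.4          # illustrative values
probs = [zib_pmf(y, T, psi, p) for y in range(T + 1)]
print(probs)                      # distribution over 0..3 detections

# False-negative probability: site occupied but never detected in T visits
print(psi * (1 - p)**T)
```

The last line shows why repeated visits matter: the probability of wrongly recording "absent" at an occupied site falls geometrically with the number of visits.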
Abstract:
Previous research suggests that hurt feelings can have powerful effects on individual and relational outcomes. This study examined a typology of hurtful events in couple relationships, together with integrative models predicting ongoing effects on victims and relationships. Participants were 224 students from introductory and third-year psychology classes, who completed open-ended and structured measures concerning an event in which a partner had hurt their feelings. By tailoring Leary et al.'s (1998) typology to the context of romantic relationships, five categories of hurtful events were proposed: active disassociation, passive disassociation, criticism, infidelity, and deception. Analyses assessing similarities and differences among the categories confirmed the utility of the typology. Structural equation modeling showed that longer-term effects on the victim were predicted by relationship anxiety and by the victim's immediate reactions to the event (negative emotions and self-perceptions; feelings of rejection and powerlessness). In contrast, ongoing effects on the relationship were predicted by avoidance, the victim's attributions and perceptions of offender remorse, and the victim's own behavior. The results highlight the utility of an integrated approach to hurt, incorporating emotional, cognitive, and behavioral responses, and dimensions of attachment security.
Abstract:
An investigation was conducted to evaluate the impact of experimental designs and spatial analyses (single-trial models) of the response to selection for grain yield in the northern grains region of Australia (Queensland and northern New South Wales). Two sets of multi-environment experiments were considered. One set, based on 33 trials conducted from 1994 to 1996, was used to represent the testing system of the wheat breeding program and is referred to as the multi-environment trial (MET). The second set, based on 47 trials conducted from 1986 to 1993, sampled a more diverse set of years and management regimes and was used to represent the target population of environments (TPE). There were 18 genotypes in common between the MET and TPE sets of trials. From indirect selection theory, the phenotypic correlation coefficient between the MET and TPE single-trial adjusted genotype means [r(p(MT))] was used to determine the effect of the single-trial model on the expected indirect response to selection for grain yield in the TPE based on selection in the MET. Five single-trial models were considered: randomised complete block (RCB), incomplete block (IB), spatial analysis (SS), spatial analysis with a measurement error (SSM) and a combination of spatial analysis and experimental design information to identify the preferred (PF) model. Bootstrap-resampling methodology was used to construct multiple MET data sets, ranging in size from 2 to 20 environments per MET sample. The size and environmental composition of the MET and the single-trial model influenced the r(p(MT)). On average, the PF model resulted in a higher r(p(MT)) than the IB, SS and SSM models, which were in turn superior to the RCB model for MET sizes based on fewer than ten environments. For METs based on ten or more environments, the r(p(MT)) was similar for all single-trial models.
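Purely as an illustration of the indirect-selection idea behind r(p(MT)) (simulated numbers, not the paper's bootstrap analysis; all values are assumed), the phenotypic correlation between MET and TPE adjusted genotype means can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical adjusted means for the 18 genotypes common to both trial sets
g_true = rng.normal(0.0, 1.0, 18)            # underlying genotypic values
met = g_true + rng.normal(0.0, 0.5, 18)      # MET single-trial adjusted means
tpe = g_true + rng.normal(0.0, 0.8, 18)      # TPE single-trial adjusted means

# r(p(MT)): phenotypic correlation between MET and TPE adjusted means
r_p = np.corrcoef(met, tpe)[0, 1]

# Indirect selection: choose the top 20% on MET means and compare their
# TPE mean with the overall TPE mean (the realised indirect response)
k = max(1, int(0.2 * len(met)))
sel = np.argsort(met)[-k:]
print(r_p, tpe[sel].mean(), tpe.mean())
```

A single-trial model that reduces the noise in the MET adjusted means raises r(p(MT)) and so raises the expected response in the TPE, which is the quantity the paper uses to compare the RCB, IB, SS, SSM and PF models.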
Abstract:
NPT and NVT Monte Carlo simulations are applied to models for methane and water to predict the PVT behaviour of these fluids over a wide range of temperatures and pressures. The potential models examined in this paper have previously been presented in the literature with their specific parameters optimised to fit phase coexistence data. The exponential-6 potential for methane gives generally good prediction of PVT behaviour over the full range of temperatures and pressures studied, with the only significant deviation from experimental data seen at high temperatures and pressures. The NSPCE water model shows very poor prediction of PVT behaviour, particularly at dense conditions. To improve this, the charge separation in the NSPCE model is varied with density. Improvements for vapour and liquid phase PVT predictions are achieved with this variation. No improvement was found in the prediction of the oxygen-oxygen radial distribution by varying charge separation under dense phase conditions. (C) 2004 Elsevier B.V. All rights reserved.
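For reference, a minimal sketch of one common parameterisation of the exponential-6 (Buckingham) potential; the parameter values below are illustrative assumptions, not the fitted values used in the paper:

```python
import numpy as np

def exp6(r, eps, rm, alpha):
    """Exponential-6 (Buckingham) pair potential, one common form:
    U(r) = eps/(1 - 6/alpha) * [(6/alpha) * exp(alpha*(1 - r/rm)) - (rm/r)**6]
    eps: well depth, rm: position of the minimum, alpha: repulsion softness."""
    pref = eps / (1.0 - 6.0 / alpha)
    return pref * ((6.0 / alpha) * np.exp(alpha * (1.0 - r / rm)) - (rm / r) ** 6)

eps, rm, alpha = 1.0, 3.8, 14.0    # assumed, methane-like magnitudes
r = np.linspace(3.0, 8.0, 6)
print(exp6(r, eps, rm, alpha))     # sampled values along the curve
print(exp6(rm, eps, rm, alpha))    # -> -eps at the minimum r = rm
```

By construction U(rm) = -eps and U'(rm) = 0, which is what makes (eps, rm) convenient parameters to optimise against phase coexistence data.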
Abstract:
Understanding the contribution of marketing to economic and social outcomes is fundamental to broadening the focus of marketing. The authors develop a comprehensive model that integrates the impact of service quality and service satisfaction on both economic and societal outcomes. The model is validated using two random samples involving intensive health services. The results indicate that service quality and service satisfaction significantly enhance quality of life and behavioral intentions, highlighting that customer service has social as well as economic outcomes. This is an important finding given the movement toward recognizing social and environmental outcomes, such as emphasized through triple bottom-line reporting. The findings have important implications for managing service processes, for improving the quality of life of customers, and for enhancing customers' behavioral intentions toward the organization.
Abstract:
Vector error-correction models (VECMs) have become increasingly important in their application to financial markets. Standard full-order VECM models assume non-zero entries in all their coefficient matrices. However, applications of VECM models to financial market data have revealed that zero entries are often a necessary part of efficient modelling. In such cases, the use of full-order VECM models may lead to incorrect inferences. Specifically, if indirect causality or Granger non-causality exists among the variables, the use of over-parameterised full-order VECM models may weaken the power of statistical inference. In this paper, it is argued that the zero–non-zero (ZNZ) patterned VECM is a more straightforward and effective means of testing for both indirect causality and Granger non-causality. For a ZNZ patterned VECM framework for time series of integrated order two, we provide a new algorithm to select cointegrating and loading vectors that can contain zero entries. Two case studies are used to demonstrate the usefulness of the algorithm in tests of purchasing power parity and a three-variable system involving the stock market.
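As a toy numpy sketch (assumed values, not the paper's selection algorithm), zero entries in the loading vector of a ZNZ patterned VECM directly encode Granger non-causality:

```python
import numpy as np

rng = np.random.default_rng(1)

# Bivariate VECM: dy_t = Pi @ y_{t-1} + eps_t, with Pi = alpha @ beta.T
alpha = np.array([[-0.5], [0.0]])   # loading vector with a zero entry
beta = np.array([[1.0], [-1.0]])    # cointegrating vector: y1 - y2
Pi = alpha @ beta.T                 # second row of Pi is identically zero

# Simulate: the zero row means the error-correction term never enters
# equation 2, so y2 is a pure random walk while y1 corrects toward y2.
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = y[t - 1] + Pi @ y[t - 1] + rng.normal(0.0, 0.1, 2)

print(Pi)
```

A full-order VECM would estimate all four entries of Pi and could obscure this structure; the ZNZ pattern makes the non-causality restriction explicit and testable.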
Abstract:
Two stochastic production frontier models are formulated within the generalized production function framework popularized by Zellner and Revankar (Rev. Econ. Stud. 36 (1969) 241) and Zellner and Ryu (J. Appl. Econometrics 13 (1998) 101). This framework is convenient for parsimonious modeling of a production function with returns to scale specified as a function of output. Two alternatives for introducing the stochastic inefficiency term and the stochastic error are considered. In the first, the errors are added to an equation of the form h(log y, theta) = log f(x, beta), where y denotes output, x is a vector of inputs and (theta, beta) are parameters. In the second, the equation h(log y, theta) = log f(x, beta) is solved for log y to yield a solution of the form log y = g[theta, log f(x, beta)] and the errors are added to this equation. The latter alternative is novel, but it is needed to preserve the usual definition of firm efficiency. The two alternative stochastic assumptions are considered in conjunction with two returns to scale functions, making a total of four models that are considered. A Bayesian framework for estimating all four models is described. The techniques are applied to USDA state-level data on agricultural output and four inputs. Posterior distributions for all parameters, for firm efficiencies and for the efficiency rankings of firms are obtained. The sensitivity of the results to the returns to scale specification and to the stochastic specification is examined. (c) 2004 Elsevier B.V. All rights reserved.
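A minimal numerical sketch of the two alternatives, assuming for illustration the Zellner-Revankar form h(log y, theta) = log y + theta*y (the functional form and all parameter values here are assumptions, not the paper's fitted specification):

```python
import numpy as np

rng = np.random.default_rng(3)

def h(log_y, theta):
    # Zellner-Revankar transform: h(log y, theta) = log y + theta * y
    return log_y + theta * np.exp(log_y)

def g(theta, target, lo=-20.0, hi=20.0):
    # g = h^{-1}: solve h(log y, theta) = target by bisection
    # (h is strictly increasing in log y for theta >= 0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if h(mid, theta) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta, a, beta = 0.1, 0.5, np.array([0.4, 0.5])
x = rng.lognormal(size=2)
log_f = a + beta @ np.log(x)      # log f(x, beta), Cobb-Douglas style
v = rng.normal(0.0, 0.1)          # symmetric noise
u = abs(rng.normal(0.0, 0.2))     # one-sided inefficiency

# Alternative 1: errors enter the transformed equation, then invert
log_y1 = g(theta, log_f + v - u)
# Alternative 2 (the novel form): invert first, then add errors, so that
# exp(-u) keeps its usual interpretation as firm efficiency
log_y2 = g(theta, log_f) + v - u
print(log_y1, log_y2)
```

The contrast makes the paper's point concrete: in the first alternative the inefficiency u is filtered through the nonlinear inverse g, so exp(-u) is no longer the ratio of actual to frontier output; in the second it is.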