76 results for Assumptions
in University of Queensland eSpace - Australia
Abstract:
Recent research has begun to provide support for the assumptions that memories are stored as a composite and are accessed in parallel (Tehan & Humphreys, 1998). New predictions derived from these assumptions and from the Chappell and Humphreys (1994) implementation of these assumptions were tested. In three experiments, subjects studied relatively short lists of words. Some of the lists contained two similar targets (thief and theft) or two dissimilar targets (thief and steal) associated with the same cue (ROBBERY). As predicted, target similarity affected performance in cued recall but not free association. Contrary to predictions, two spaced presentations of a target did not improve performance in free association. Two additional experiments confirmed and extended this finding. Several alternative explanations for the target similarity effect, which incorporate assumptions about separate representations and sequential search, are rejected. The importance of the finding that, in at least one implicit memory paradigm, repetition does not improve performance is also discussed.
Abstract:
In order to analyse the effect of modelling assumptions in a formal, rigorous way, a syntax of modelling assumptions has been defined. The syntax of modelling assumptions enables us to represent modelling assumptions as transformations acting on the set of model equations. The notion of syntactical correctness and semantical consistency of sets of modelling assumptions is defined and methods for checking them are described. A simple example shows how different modelling assumptions act on the model equations, and their effect on the differential index of the resulting model is also indicated.
Abstract:
In this work, we present a systematic approach to the representation of modelling assumptions. Modelling assumptions form the fundamental basis for the mathematical description of a process system. These assumptions can be translated into either additional mathematical relationships or constraints between model variables, equations, balance volumes or parameters. In order to analyse the effect of modelling assumptions in a formal, rigorous way, a syntax of modelling assumptions has been defined. The smallest indivisible syntactical element, the so-called assumption atom, has been identified as a triplet. With this syntax a modelling assumption can be described as an elementary assumption, i.e. an assumption consisting of only an assumption atom, or a composite assumption consisting of a conjunction of elementary assumptions. The above syntax of modelling assumptions enables us to represent modelling assumptions as transformations acting on the set of model equations. The notion of syntactical correctness and semantical consistency of sets of modelling assumptions is defined and necessary conditions for checking them are given. These transformations can be used in several ways and their implications can be analysed by formal methods. The modelling assumptions define model hierarchies, that is, series of model families, each belonging to a particular equivalence class. These model equivalence classes can be related to primal assumptions regarding the definition of mass, energy and momentum balance volumes and to secondary and tertiary assumptions regarding the presence or absence and the form of mechanisms within the system. Within equivalence classes there are many model members, these being related by algebraic model transformations for the particular model. We show how these model hierarchies are driven by the underlying assumption structure and indicate some implications on system dynamics and complexity issues. (C) 2001 Elsevier Science Ltd. All rights reserved.
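The triplet-based syntax described in this abstract lends itself to a small illustration. The sketch below is one possible reading of the idea, not the authors' formalism: the field names, the example equations, and the "drop negligible term" transformation are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssumptionAtom:
    # Illustrative triplet, e.g. ("heat_of_mixing", "is", "negligible").
    # The actual triplet structure in the paper may differ.
    subject: str
    relation: str
    value: str

# A composite assumption is a conjunction of elementary atoms.
composite = [
    AssumptionAtom("heat_of_mixing", "is", "negligible"),
    AssumptionAtom("pressure", "is", "constant"),
]

def apply_assumptions(equations, atoms):
    """Sketch: an assumption acts as a transformation on the set of
    model equations; here, terms declared negligible cause the
    equations naming them to be simplified away."""
    transformed = set(equations)
    for atom in atoms:
        if atom.value == "negligible":
            transformed = {eq for eq in transformed if atom.subject not in eq}
    return transformed

eqs = {"dT/dt = q + heat_of_mixing", "dm/dt = F_in - F_out"}
print(apply_assumptions(eqs, composite))  # -> {'dm/dt = F_in - F_out'}
```

Representing assumptions as data, rather than prose, is what makes the syntactical-correctness and consistency checks mentioned above mechanically decidable.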
Abstract:
Background and Objective: To examine if commonly recommended assumptions for multivariable logistic regression are addressed in two major epidemiological journals. Methods: Ninety-nine articles from the Journal of Clinical Epidemiology and the American Journal of Epidemiology were surveyed for 10 criteria: six dealing with computation and four with reporting multivariable logistic regression results. Results: Three of the 10 criteria were addressed in 50% or more of the articles. Statistical significance testing or confidence intervals were reported in all articles. Methods for selecting independent variables were described in 82%, and specific procedures used to generate the models were discussed in 65%. Fewer than 50% of the articles indicated if interactions were tested or met the recommended events per independent variable ratio of 10:1. Fewer than 20% of the articles described conformity to a linear gradient, examined collinearity, reported information on validation procedures, goodness-of-fit, discrimination statistics, or provided complete information on variable coding. There was no significant difference (P > .05) in the proportion of articles meeting the criteria across the two journals. Conclusion: Articles reviewed frequently did not report commonly recommended assumptions for using multivariable logistic regression. (C) 2004 Elsevier Inc. All rights reserved.
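The 10:1 events-per-variable criterion mentioned above is simple arithmetic and easy to check mechanically. A minimal sketch (the function names and example counts are illustrative; only the 10:1 rule of thumb comes from the abstract):

```python
def events_per_variable(n_events: int, n_predictors: int) -> float:
    """Events-per-variable (EPV) for a logistic regression model:
    number of outcome events divided by the number of candidate
    independent variables."""
    return n_events / n_predictors

def meets_epv_rule(n_events: int, n_predictors: int,
                   threshold: float = 10.0) -> bool:
    # Commonly recommended rule of thumb: EPV >= 10.
    return events_per_variable(n_events, n_predictors) >= threshold

# e.g. 120 events with 8 predictors -> EPV = 15.0, rule satisfied;
#      45 events with 6 predictors -> EPV = 7.5, rule violated.
print(meets_epv_rule(120, 8))  # True
print(meets_epv_rule(45, 6))   # False
```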
Abstract:
Two stochastic production frontier models are formulated within the generalized production function framework popularized by Zellner and Revankar (Rev. Econ. Stud. 36 (1969) 241) and Zellner and Ryu (J. Appl. Econometrics 13 (1998) 101). This framework is convenient for parsimonious modeling of a production function with returns to scale specified as a function of output. Two alternatives for introducing the stochastic inefficiency term and the stochastic error are considered. In the first, the errors are added to an equation of the form h(log y, theta) = log f(x, beta), where y denotes output, x is a vector of inputs and (theta, beta) are parameters. In the second, the equation h(log y, theta) = log f(x, beta) is solved for log y to yield a solution of the form log y = g[theta, log f(x, beta)] and the errors are added to this equation. The latter alternative is novel, but it is needed to preserve the usual definition of firm efficiency. The two alternative stochastic assumptions are considered in conjunction with two returns to scale functions, making a total of four models that are considered. A Bayesian framework for estimating all four models is described. The techniques are applied to USDA state-level data on agricultural output and four inputs. Posterior distributions for all parameters, for firm efficiencies and for the efficiency rankings of firms are obtained. The sensitivity of the results to the returns to scale specification and to the stochastic specification is examined. (c) 2004 Elsevier B.V. All rights reserved.
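The difference between the two error placements can be made concrete with a specific choice of h. The sketch below assumes the Zellner-Revankar-type transformation h(log y, theta) = log y + theta*y, one member of the generalized production function family; the parameter values, error variances, and Newton solver are illustrative only and are not taken from the paper.

```python
import math
import random

def h(z: float, theta: float) -> float:
    # Zellner-Revankar-type transformation with z = log y:
    # h(log y, theta) = log y + theta * y.
    return z + theta * math.exp(z)

def h_inverse(c: float, theta: float) -> float:
    """Solve h(z, theta) = c for z by Newton's method; this plays the
    role of g(theta, c) in the abstract. h is strictly increasing in z,
    so the root is unique."""
    z = c
    for _ in range(50):
        z -= (h(z, theta) - c) / (1.0 + theta * math.exp(z))
    return z

random.seed(0)
theta, log_f = 0.1, 1.5        # log f(x, beta) for one firm (made up)
v = random.gauss(0, 0.1)       # symmetric noise
u = abs(random.gauss(0, 0.2))  # one-sided inefficiency term

# Specification 1: errors added to the h-equation,
#   h(log y, theta) = log f + v - u, then invert h to recover log y.
log_y_spec1 = h_inverse(log_f + v - u, theta)

# Specification 2: errors added after solving for log y,
#   log y = g(theta, log f) + v - u.
log_y_spec2 = h_inverse(log_f, theta) + v - u

print(log_y_spec1, log_y_spec2)
```

Because g is nonlinear in its second argument, the two specifications generally produce different values of log y for the same draws of v and u, which is why the choice matters for the definition of firm efficiency.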
Abstract:
The discussion about relations between research and design has a number of strands, and presumably motivations. Putting aside the question whether or not design or “creative endeavour” should be counted as research, for reasons to do with institutional recognition or reward, the question remains: how, if at all, is design research? This question is unlikely to have attracted much interest but for matters external to Architecture within the modern university. But Architecture as a discipline now needs to understand research much better than in the past, when ‘research’ was whatever went on in building science, history or people/environment studies. In this paper, I begin with some common assumptions about design, considered in relation to research, and suggest how the former can constitute or be a mode of the latter. Central to this consideration is an understanding of research as the production of publicly available knowledge. The method is that of conceptual analysis, which is much more fruitful than is usually appreciated. This work is part of a larger project in philosophy of design, in roughly the analytical tradition.
Abstract:
Traditional waste stabilisation pond (WSP) models encounter problems predicting pond performance because they cannot account for the influence of pond features, such as inlet structure or pond geometry, on fluid hydrodynamics. In this study, two-dimensional (2-D) computational fluid dynamics (CFD) models were compared to experimental residence time distributions (RTD) from the literature. In one of the three geometries simulated, the 2-D CFD model successfully predicted the experimental RTD. However, flow patterns in the other two geometries were not well described, due to the difficulty of representing the three-dimensional (3-D) experimental inlet in the 2-D CFD model and the sensitivity of the model results to the assumptions used to characterise the inlet. Neither a velocity similarity nor a geometric similarity approach to inlet representation in 2-D gave results correlating with experimental data. However, it was shown that 2-D CFD models were not affected by changes in values of model parameters which are difficult to predict, particularly the turbulent inlet conditions. This work suggests that 2-D CFD models cannot be used a priori to give an adequate description of the hydrodynamic patterns in WSP. (C) 1998 Elsevier Science Ltd. All rights reserved.
Abstract:
Potential errors in the application of mixture theory to the analysis of multiple-frequency bioelectrical impedance data for the determination of body fluid volumes are assessed. Potential sources of error include: conductive length; tissue fluid resistivity; body density; weight; and technical errors of measurement. Inclusion of inaccurate estimates of body density and weight introduces errors of typically < +/-3%, but incorrect assumptions regarding conductive length or fluid resistivities may each incur errors of up to 20%.
Abstract:
Phonemic codes are accorded a privileged role in most current models of immediate serial recall, although their effects are apparent in short-term proactive interference (PI) effects as well. The present research looks at how assumptions concerning distributed representation and distributed storage involving both semantic and phonemic codes might be operationalized to produce PI in a short-term cued recall task. The four experiments reported here attempted to generate the phonemic characteristics of a nonrhyming, interfering foil from unrelated filler items in the same list. PI was observed when a rhyme of the foil was studied or when the three phonemes of the foil were distributed across three studied filler items. The results suggest that items in short-term memory are stored in terms of feature bundles and that all items are simultaneously available at retrieval.
Abstract:
Background/Aims: Liver clearance models are based on information (or assumptions) on solute distribution kinetics within the microvasculatory system. The aim was to study albumin distribution kinetics in regenerated livers and in livers of normal adult rats. Methods: A novel mathematical model was used to evaluate the distribution space and the transit time dispersion of albumin in livers following regeneration after a two-thirds hepatectomy, compared to livers of normal adult rats. Outflow curves of albumin measured after bolus injection in single-pass perfused rat livers were analyzed by correcting for the influence of catheters and fitting a long-tailed function to the data. Results: The curves were well described by the proposed model. The distribution volume and the transit time dispersion of albumin observed in the partial hepatectomy group were not significantly different from those in livers of normal adult rats. Conclusions: These findings suggest that the distribution space and the transit time dispersion of albumin (CV2) are relatively constant irrespective of the presence of rapid and extensive repair. This invariance of CV2 implies, as a first approximation, a similar degree of intrasinusoidal mixing. The finding that a sum of two (instead of one) inverse Gaussian densities is an appropriate empirical function to describe the outflow curve of vascular indicators has consequences for an improved prediction of hepatic solute extraction.
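The "sum of two inverse Gaussian densities" mentioned in the conclusion can be written down directly. A minimal sketch using the standard Wald (inverse Gaussian) density and the standard relation CV^2 = mu/lambda for a single inverse Gaussian transit-time density; the parameter values below are illustrative, since the abstract reports no fitted values.

```python
import math

def inverse_gaussian_pdf(t: float, mu: float, lam: float) -> float:
    """Inverse Gaussian (Wald) density with mean mu and shape lam."""
    if t <= 0:
        return 0.0
    return math.sqrt(lam / (2 * math.pi * t ** 3)) * \
        math.exp(-lam * (t - mu) ** 2 / (2 * mu ** 2 * t))

def two_component_outflow(t: float, w: float,
                          mu1: float, lam1: float,
                          mu2: float, lam2: float) -> float:
    """Weighted sum of two inverse Gaussian densities, the long-tailed
    empirical form the abstract reports as describing hepatic outflow
    curves. All parameter values used below are illustrative."""
    return w * inverse_gaussian_pdf(t, mu1, lam1) + \
        (1 - w) * inverse_gaussian_pdf(t, mu2, lam2)

# For a single inverse Gaussian transit-time density:
# variance = mu^3 / lam, so the relative dispersion CV^2 = mu / lam.
mu, lam = 20.0, 60.0
cv2 = mu / lam
print(round(cv2, 3))  # 0.333
```

The second, slower component is what produces the long tail that a single inverse Gaussian cannot capture, which is the empirical point the abstract makes about predicting hepatic solute extraction.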