50 results for computational estimation
at University of Queensland eSpace - Australia
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44(2): 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. A naive implementation of the procedure can be computationally inefficient. To reduce the computational cost a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
Abstract:
Spatial characterization of non-Gaussian attributes in earth sciences and engineering commonly requires the estimation of their conditional distribution. The indicator and probability kriging approaches of current nonparametric geostatistics provide approximations for estimating conditional distributions. They do not, however, provide results similar to those of the cumbersome implementation of simultaneous cokriging of indicators. This paper presents a new formulation termed successive cokriging of indicators that avoids the classic simultaneous solution and related computational problems, while obtaining results equivalent to those of the impractical simultaneous solution of cokriging of indicators. A successive minimization of the estimation variance of probability estimates is performed, as additional data are successively included into the estimation process. In addition, the approach leads to an efficient nonparametric simulation algorithm for non-Gaussian random functions based on residual probabilities.
Abstract:
In various signal-channel-estimation problems, the channel being estimated may be well approximated by a discrete finite impulse response (FIR) model with sparsely separated active or nonzero taps. A common approach to estimating such channels involves a discrete normalized least-mean-square (NLMS) adaptive FIR filter, every tap of which is adapted at each sample interval. Such an approach suffers from slow convergence rates and poor tracking when the required FIR filter is "long." Recently, NLMS-based algorithms have been proposed that employ least-squares-based structural detection techniques to exploit possible sparse channel structure and subsequently provide improved estimation performance. However, these algorithms perform poorly when there is a large dynamic range amongst the active taps. In this paper, we propose two modifications to the previous algorithms, which essentially remove this limitation. The modifications also significantly improve the applicability of the detection technique to structurally time varying channels. Importantly, for sparse channels, the computational cost of the newly proposed detection-guided NLMS estimator is only marginally greater than that of the standard NLMS estimator. Simulations demonstrate the favourable performance of the newly proposed algorithm. © 2006 IEEE.
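For reference, the baseline the paper improves on, a standard NLMS adaptive FIR filter that adapts every tap at each sample interval, can be sketched as follows. This is an illustrative sketch of the textbook algorithm, not the authors' detection-guided variant; names and step-size defaults are my own.

```python
import numpy as np

def nlms(x, d, n_taps, mu=0.5, eps=1e-8):
    """Standard normalised least-mean-square (NLMS) adaptive FIR filter:
    estimates the channel mapping input x to desired output d, adapting
    every tap at every sample."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-n_taps+1]]
        e = d[n] - w @ u                    # a-priori estimation error
        w += mu * e * u / (eps + u @ u)     # normalised gradient step
    return w
```

When the true channel is sparse, most of these per-sample updates are spent adapting taps that are actually zero, which is the inefficiency the detection-guided algorithms target by updating only the taps a structural detector flags as active.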
Abstract:
This paper provides a computational framework, based on Defeasible Logic, to capture some aspects of institutional agency. Our background is the Kanger-Lindahl-Pörn account of organised interaction, which describes this interaction within a multi-modal logical setting. This work focuses in particular on the notion of the counts-as link and on those of attempt and of personal and direct action to realise states of affairs. We show how standard Defeasible Logic can be extended to represent these concepts: the resulting system preserves some basic properties commonly attributed to them. In addition, the framework enjoys nice computational properties, as it turns out that the extension of any theory can be computed in time linear in the size of the theory itself.
Abstract:
Student attitudes towards a subject affect their learning. For students in physics service courses, relevance is emphasised by vocational applications. A similar strategy is being used for students who aspire to continued study of physics, in an introduction to fundamental skills in experimental physics – the concepts, computational tools and practical skills involved in appropriately obtaining and interpreting measurement data. An educational module is being developed that aims to enhance the student experience by embedding learning of these skills in the practising physicist's activity of doing an experiment (gravity estimation using a rolling pendulum). The group concentrates on particular skills prompted by challenges such as:
• How can we get an answer to our question?
• How good is our answer?
• How can it be improved?
This explicitly provides students with the opportunity to consider and construct their own ideas. It gives them time to discuss, digest and practise without undue stress, thereby assisting them to internalise core skills. Design of the learning activity is approached in an iterative manner, via theoretical and practical considerations, with input from a range of teaching staff, and subject to trials of prototypes.
Abstract:
Despite its environmental (and financial) importance, there is no agreement in the literature as to which extractant most accurately estimates the phytoavailability of trace metals in soils. A large dataset was taken from the literature, and the effectiveness of various extractants in predicting the phytoavailability of Cd, Zn, Ni, Cu, and Pb was examined across a range of soil types and contamination levels. The data suggest that generally, the total soil trace metal content, and trace metal concentrations determined by complexing agents (such as the widely used DTPA and EDTA extractants) or acid extractants (such as 0.1 M HCl and the Mehlich 1 extractant), are only poorly correlated with plant phytoavailability. Whilst there is no consensus, it would appear that neutral salt extractants (such as 0.01 M CaCl2 and 0.1 M NaNO3) provide the most useful indication of metal phytoavailability across a range of metals of interest, although further research is required.
Abstract:
Bioelectrical impedance analysis (BIA) was used to assess body composition in rats fed on either standard laboratory diet or on a high-fat diet designed to induce obesity. BIA predictions of total body water and thus fat-free mass (FFM) for the group mean values were generally within 5% of the values measured by tritiated water (³H₂O) dilution. The limits of agreement for the procedure were, however, large, approximately ±25%, limiting the applicability of the technique for measurement of body composition in individual animals.
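The "limits of agreement" quoted above are, by convention, the Bland-Altman limits: the mean difference between the two methods (the bias) plus or minus 1.96 standard deviations of the differences. A minimal sketch of that calculation (illustrative only, not the study's code):

```python
import numpy as np

def limits_of_agreement(method_a, method_b):
    """Bland-Altman 95% limits of agreement between two measurement
    methods applied to the same subjects."""
    d = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = d.mean()                 # systematic difference between methods
    sd = d.std(ddof=1)              # spread of the individual differences
    return bias - 1.96 * sd, bias + 1.96 * sd
```

Wide limits, as reported here, mean the two methods can agree on group means yet still disagree badly for any individual animal.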
Abstract:
Traditional waste stabilisation pond (WSP) models encounter problems predicting pond performance because they cannot account for the influence of pond features, such as inlet structure or pond geometry, on fluid hydrodynamics. In this study, two-dimensional (2-D) computational fluid dynamics (CFD) models were compared to experimental residence time distributions (RTD) from the literature. In one of the three geometries simulated, the 2-D CFD model successfully predicted the experimental RTD. However, flow patterns in the other two geometries were not well described, due to the difficulty of representing the three-dimensional (3-D) experimental inlet in the 2-D CFD model and the sensitivity of the model results to the assumptions used to characterise the inlet. Neither a velocity similarity nor a geometric similarity approach to inlet representation in 2-D gave results correlating with experimental data. However, it was shown that 2-D CFD models were not affected by changes in values of model parameters which are difficult to predict, particularly the turbulent inlet conditions. This work suggests that 2-D CFD models cannot be used a priori to give an adequate description of the hydrodynamic patterns in WSP. (C) 1998 Elsevier Science Ltd. All rights reserved.
Abstract:
This paper describes U2DE, a finite-volume code that numerically solves the Euler equations. The code was used to perform multi-dimensional simulations of the gradual opening of a primary diaphragm in a shock tube. From the simulations, the speed of the developing shock wave was recorded and compared with other estimates. The ability of U2DE to compute shock speed was confirmed by comparing numerical results with the analytic solution for an ideal shock tube. For high initial pressure ratios across the diaphragm, previous experiments have shown that the measured shock speed can exceed the shock speed predicted by one-dimensional models. The shock speeds computed with the present multi-dimensional simulation were higher than those estimated by previous one-dimensional models and, thus, were closer to the experimental measurements. This indicates that multi-dimensional flow effects were partly responsible for the relatively high shock speeds measured in the experiments.
Abstract:
A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is observed. The performance of two likelihood-based estimators is investigated: a Bayesian estimator computed through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
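To make the setting concrete, here is a simple method-of-moments estimator for the multiplicative-scrambling case: if only z = y·s is observed and E[s] is known, then E[z | x] = E[s]·(b0 + b1·x), so ordinary least squares of z on x recovers the regression coefficients up to the known factor E[s]. This is an illustrative sketch only; it is neither the Bayesian MCMC estimator nor the maximum-likelihood estimator compared in the paper, and the names are mine.

```python
import numpy as np

def scrambled_ls(x, z, s_mean):
    """Fit y = b0 + b1*x when only the multiplicatively scrambled
    response z = y*s is observed, with E[s] = s_mean known.
    OLS of z on x, rescaled by 1/s_mean, gives consistent estimates."""
    X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    return coef / s_mean
```

The scrambling protects individual respondents (the interviewer never sees y), while aggregate quantities remain estimable; the paper's likelihood-based estimators additionally exploit the full known distribution of s rather than just its mean.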
Abstract:
Computer models can be combined with laboratory experiments for the efficient determination of (i) peptides that bind MHC molecules and (ii) T-cell epitopes. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures. This requires the definition of standards and experimental protocols for model application. We describe the requirements for validation and assessment of computer models. The utility of combining accurate predictions with a limited number of laboratory experiments is illustrated by practical examples. These include the identification of T-cell epitopes from IDDM-, melanoma- and malaria-related antigens by combining computational and conventional laboratory assays. The success rate in determining antigenic peptides, each in the context of a specific HLA molecule, ranged from 27 to 71%, while the natural prevalence of MHC-binding peptides is 0.1-5%.
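One common computational approach to ranking candidate MHC-binding peptides is to scan a protein sequence with a position-specific scoring matrix and test only the top-scoring k-mers in the laboratory. The sketch below illustrates that generic scan; it is not the predictive model used in the paper, and the alphabet ordering, matrix contents and function names are assumptions of this example.

```python
import numpy as np

AAS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids, one-letter codes

def score_peptides(protein, pssm, k=9):
    """Score every k-mer of a protein with a position-specific scoring
    matrix pssm (shape k x 20) and return (peptide, score) pairs,
    best-scoring first, as a shortlist for laboratory binding assays."""
    idx = {a: i for i, a in enumerate(AAS)}
    out = []
    for s in range(len(protein) - k + 1):
        pep = protein[s:s + k]
        out.append((pep, sum(pssm[p][idx[a]] for p, a in enumerate(pep))))
    return sorted(out, key=lambda t: -t[1])
```

Combining such a ranked shortlist with a limited number of binding assays is exactly the economy the abstract describes: the natural prevalence of binders is 0.1-5%, so pre-ranking raises the laboratory hit rate dramatically.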
Abstract:
CXTANNEAL is a program for analysing contaminant transport in soils. The code, written in Fortran 77, is a modified version of CXTFIT, a commonly used package for estimating solute transport parameters in soils. The present code improves on the original by including simulated annealing as the optimization technique for curve fitting. Tests with hypothetical data show that CXTANNEAL performs better than the original code in searching for optimal parameter estimates. To reduce the computational time, a parallel version of CXTANNEAL (CXTANNEAL_P) was also developed. (C) 1999 Elsevier Science Ltd. All rights reserved.
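The core idea, simulated annealing as the optimizer inside a least-squares curve fit, can be sketched generically. This is an illustrative Python sketch with an arbitrary model function, not the CXTANNEAL Fortran code; the cooling schedule and step size are assumptions of the example.

```python
import numpy as np

def anneal_fit(model, x, y, p0, step=0.1, t0=1.0, cooling=0.995, n_iter=4000, seed=0):
    """Least-squares curve fit by simulated annealing: random parameter
    perturbations are accepted with the Metropolis criterion under a
    geometric cooling schedule, so the search can escape local minima."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p0, float)
    cost = np.sum((y - model(x, p))**2)
    best_p, best_cost, t = p.copy(), cost, t0
    for _ in range(n_iter):
        cand = p + rng.normal(0, step, size=p.size)   # random perturbation
        c = np.sum((y - model(x, cand))**2)
        # accept downhill moves always, uphill moves with prob exp(-dc/t)
        if c < cost or rng.random() < np.exp((cost - c) / t):
            p, cost = cand, c
            if c < best_cost:
                best_p, best_cost = cand.copy(), c
        t *= cooling                                  # geometric cooling
    return best_p
```

Unlike the gradient-style local search in the original CXTFIT, the occasional uphill acceptances let the fit escape poor local minima, at the cost of many more model evaluations, which is why a parallel version was worthwhile.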
Abstract:
Background From the mid-1980s to mid-1990s, the WHO MONICA Project monitored coronary events and classic risk factors for coronary heart disease (CHD) in 38 populations from 21 countries. We assessed the extent to which changes in these risk factors explain the variation in the trends in coronary-event rates across the populations. Methods In men and women aged 35-64 years, non-fatal myocardial infarction and coronary deaths were registered continuously to assess trends in rates of coronary events. We carried out population surveys to estimate trends in risk factors. Trends in event rates were regressed on trends in risk score and in individual risk factors. Findings Smoking rates decreased in most male populations but trends were mixed in women; mean blood pressures and cholesterol concentrations decreased, body-mass index increased, and overall risk scores and coronary-event rates decreased. The model of trends in 10-year coronary-event rates against risk scores and single risk factors showed a poor fit, but this was improved with a 4-year time lag for coronary events. The explanatory power of the analyses was limited by imprecision of the estimates and homogeneity of trends in the study populations. Interpretation Changes in the classic risk factors seem to partly explain the variation in population trends in CHD. Residual variance is attributable to difficulties in measurement and analysis, including time lag, and to factors that were not included, such as medical interventions. The results support prevention policies based on the classic risk factors but suggest potential for prevention beyond these.
Abstract:
In this and a preceding paper, we provide an introduction to the Fujitsu VPP range of vector-parallel supercomputers and to some of the computational chemistry software available for the VPP. Here, we consider the implementation and performance of seven popular chemistry application packages. The codes discussed range from classical molecular dynamics to semiempirical and ab initio quantum chemistry. All have evolved from sequential codes, and have typically been parallelised using a replicated data approach. As such they are well suited to the large-memory/fast-processor architecture of the VPP. For one code, CASTEP, a distributed-memory data-driven parallelisation scheme is presented. (C) 2000 Published by Elsevier Science B.V. All rights reserved.
Abstract:
Dendritic cells (DC) are considered to be the major cell type responsible for induction of primary immune responses. While they have been shown to play a critical role in eliciting allosensitization via the direct pathway, there is evidence that maturational and/or activational heterogeneity between DC in different donor organs may be crucial to allograft outcome. Despite such an important perceived role for DC, no accurate estimates of their number in commonly transplanted organs have been reported. Therefore, leukocytes and DC were visualized and enumerated in cryostat sections of normal mouse (C57BL/10, B10.BR, C3H) liver, heart, kidney and pancreas by immunohistochemistry (CD45 and MHC class II staining, respectively). Total immunopositive cell number and MHC class II+ cell density (C57BL/10 mice only) were estimated using established morphometric techniques - the fractionator and disector principles, respectively. Liver contained considerably more leukocytes (~5-20 × 10^6) and DC (~1-3 × 10^6) than the other organs examined (pancreas: ~0.6 × 10^6 and ~0.35 × 10^6; heart: ~0.8 × 10^6 and ~0.4 × 10^6; kidney: ~1.2 × 10^6 and ~0.65 × 10^6, respectively). In liver, DC comprised a lower proportion of all leukocytes (~15-25%) than in the other parenchymal organs examined (~40-60%). Comparatively, DC density in C57BL/10 mice was heart > kidney > pancreas ≫ liver (~6.6 × 10^6, 5 × 10^6, 4.5 × 10^6 and 1.1 × 10^6 cells/cm^3, respectively). When compared to previously published data on allograft survival, the results indicate that the absolute number of MHC class II+ DC present in a donor organ is a poor predictor of graft outcome. Survival of solid organ allografts is more closely related to the density of the donor DC network within the graft. (C) 2000 Elsevier Science B.V. All rights reserved.