211 results for Extended Karplus equations
Abstract:
Background Demand for essential plasma-derived products is increasing. Purpose This prospective study aims to identify predictors of voluntary non-remunerated whole blood (WB) donors becoming plasmapheresis donors. Methods Surveys were sent to WB donors who had donated recently (recent; n = 1,957) or not recently (distant; n = 1,012). Theory of Planned Behavior (TPB) constructs (attitude, subjective norm, self-efficacy) were extended with moral norm, anticipatory regret, and donor identity. Intentions and objective plasmapheresis donation were assessed for 527 recent and 166 distant participants. Results Multi-group analysis revealed that the model was a good fit. Moral norm and self-efficacy were positively associated, while role identity (suppressed by moral norm) was negatively associated, with plasmapheresis intentions. Conclusions The extended TPB was useful in identifying factors that facilitate conversion from WB to plasmapheresis donation. A superordinate donor identity may be synonymous with WB donation and, for donors with a strong moral norm for plasmapheresis, may inhibit conversion.
Abstract:
Since ethnic differences exist in body composition, assessment methods need to be validated prior to use in different populations. This study attempts to validate body composition assessment tools developed on Sri Lankan children for use in a group of 5- to 15-year-old Australian children of Sri Lankan origin. The study was conducted at the Body Composition Laboratory of the Children's Nutrition Research Centre at the Royal Children's Hospital, Brisbane, Australia. Height (Ht), weight (Wt), segmental lengths (L_segment) and skinfold thickness (SFT) were measured. Whole-body and segmental bioimpedance analysis (BIA) measurements were also taken. Body composition determined by the deuterium dilution technique (the criterion method) was compared with assessments made using prediction equations developed on Sri Lankan children. Twenty-seven boys and 15 girls were studied. All predictions of body composition parameters, except percentage fat mass (FM) assessed by the SFT-FM equation in girls, correlated significantly with the criterion method. They had low mean bias and most were not influenced by the magnitude of the measured parameter. Although the children live in a different socioeconomic setting, equations developed on children of the same ethnic background give better predictive value for body composition. This highlights the ethnic influence on body composition.
Abstract:
Objective There are many prediction equations available in the literature for the assessment of body composition from skinfold thickness (SFT). This study aims to cross-validate some of those prediction equations to determine their suitability for use in Sri Lankan children. Methods Height, weight and SFT at five different sites were measured. Total body water was assessed using the isotope (D2O) dilution method. Percentage fat mass (%FM) was estimated from SFT using prediction equations described by five authors in the literature. Results A total of 282 healthy Sri Lankan children aged 5 to 15 years were studied. The equation of Brook gave the lowest bias, but its limits of agreement were wide. The equations described by Deurenberg et al. gave slightly higher bias, but their limits of agreement were the narrowest and the bias was not influenced by extremes of body fat. Although the prediction equations did not estimate %FM adequately, the association between %FM and SFT measures was quite satisfactory. Conclusion We conclude that SFT can be used effectively in the assessment of body composition in children. However, for the assessment of body composition using SFT, prediction equations should either be derived to suit the local population or existing equations should be cross-validated to determine their suitability before application.
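The criterion method used here converts total body water (TBW) from D2O dilution into fat mass via an assumed hydration fraction of fat-free mass. The short sketch below illustrates that conversion; the 0.73 hydration constant and the example values are illustrative assumptions (the hydration of fat-free mass is age-dependent in children), not figures from the study.

```python
def percent_fat_mass_from_tbw(weight_kg: float, tbw_kg: float,
                              ffm_hydration: float = 0.73) -> float:
    """Estimate %FM from total body water (deuterium dilution).

    Assumes fat-free mass (FFM) has a fixed water fraction
    (~0.73 in adults; higher and age-dependent in children),
    so FFM = TBW / hydration and FM = weight - FFM.
    """
    ffm_kg = tbw_kg / ffm_hydration
    fm_kg = weight_kg - ffm_kg
    return 100.0 * fm_kg / weight_kg


# Hypothetical example: a 30 kg child with 17.5 kg of body water.
print(round(percent_fat_mass_from_tbw(30.0, 17.5), 1))  # ~20.1 %FM
```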
Abstract:
A highly extended dithienothiophene comonomer building block was used in combination with a highly fused, aromatic furan-substituted diketopyrrolopyrrole for the synthesis of the novel donor–acceptor alternating copolymer PDPPF-DTT. When PDPPF-DTT was tested as the channel semiconductor in top-contact, bottom-gate organic field-effect transistors (OFETs), it exhibited p-channel behaviour. The highest hole mobility recorded for PDPPF-DTT was 3.56 cm² V⁻¹ s⁻¹. To our knowledge, this is the highest mobility reported so far for the furan-flanked diketopyrrolopyrrole class of copolymers using conventional device geometry and straightforward processing.
Abstract:
The numerical solution of fractional partial differential equations poses significant computational challenges in regard to efficiency as a result of the spatial nonlocality of the fractional differential operators. The dense coefficient matrices that arise from spatial discretisation of these operators mean that even one-dimensional problems can be difficult to solve using standard methods on grids comprising thousands of nodes or more. In this work we address this issue of efficiency for one-dimensional, nonlinear space-fractional reaction–diffusion equations with fractional Laplacian operators. We apply variable-order, variable-stepsize backward differentiation formulas in a Jacobian-free Newton–Krylov framework to advance the solution in time. A key advantage of this approach is the elimination of any requirement to form the dense matrix representation of the fractional Laplacian operator. We show how a banded approximation to this matrix, which can be formed and factorised efficiently, can be used as part of an effective preconditioner that accelerates convergence of the Krylov subspace iterative solver. Our approach also captures the full contribution from the nonlinear reaction term in the preconditioner, which is crucial for problems that exhibit stiff reactions. Numerical examples are presented to illustrate the overall effectiveness of the solver.
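The solver described above advances the solution implicitly and preconditions the Krylov iterations with a banded approximation, so the dense fractional operator never has to be factorised. The sketch below is a minimal single-step illustration of that idea using scipy's Jacobian-free `newton_krylov` with an `inner_M` preconditioner; the backward Euler step (rather than the authors' variable-order BDF), the matrix-transfer discretisation of the fractional Laplacian, the Fisher-type reaction term, and all parameter values are assumptions of the example, not taken from the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.optimize import newton_krylov

# One backward Euler step of u_t = -(-Delta)^(alpha/2) u + u(1-u) on (0, 1)
# with homogeneous Dirichlet conditions; sizes and parameters are illustrative.
n, dt, alpha, band = 200, 1e-3, 1.5, 2
x = np.linspace(0.0, 1.0, n + 2)[1:-1]       # interior nodes
h = x[1] - x[0]
u_old = np.exp(-100.0 * (x - 0.5) ** 2)      # assumed initial condition

# Discrete fractional Laplacian via the matrix transfer technique: take the
# standard second-difference matrix L and form A = -(-L)^(alpha/2) through its
# eigendecomposition (dense, but only its matvec is needed by the Krylov solver).
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
evals, evecs = np.linalg.eigh(L)
A = -(evecs * (-evals) ** (alpha / 2.0)) @ evecs.T

def residual(u):
    """Backward Euler residual F(u) = u - u_old - dt*(A u + u(1-u))."""
    return u - u_old - dt * (A @ u + u * (1.0 - u))

# A banded approximation of A plus the reaction Jacobian gives a cheap,
# factorisable preconditioner for the inner Krylov iterations.
offsets = list(range(-band, band + 1))
A_band = sp.diags([np.diag(A, k) for k in offsets], offsets)
P = sp.eye(n) - dt * (A_band + sp.diags(1.0 - 2.0 * u_old))
lu = spla.splu(P.tocsc())
M = spla.LinearOperator((n, n), matvec=lu.solve)

u_new = newton_krylov(residual, u_old, method="lgmres", inner_M=M, f_tol=1e-8)
print(np.max(np.abs(u_new - u_old)))
```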
Abstract:
Fractional differential equations are becoming increasingly used as a powerful modelling approach for understanding the many aspects of nonlocality and spatial heterogeneity. However, the numerical approximation of these models is demanding and imposes a number of computational constraints. In this paper, we introduce Fourier spectral methods as an attractive and easy-to-code alternative for the integration of fractional-in-space reaction-diffusion equations described by the fractional Laplacian in bounded rectangular domains of R^n. The main advantages of the proposed schemes are that they yield a fully diagonal representation of the fractional operator, with increased accuracy and efficiency when compared to low-order counterparts, and a completely straightforward extension to two and three spatial dimensions. Our approach is illustrated by solving several problems of practical interest, including the fractional Allen–Cahn, FitzHugh–Nagumo and Gray–Scott models, together with an analysis of the properties of these systems in terms of the fractional power of the underlying Laplacian operator.
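Because the fractional Laplacian is diagonal in Fourier space, a semi-implicit spectral step reduces to a pointwise division by its symbol. The sketch below illustrates this for a 1D fractional Allen–Cahn equation; it assumes periodic boundary conditions and illustrative parameter values, whereas the paper treats bounded rectangular domains in up to three dimensions, so this is a minimal demonstration of the diagonal representation rather than the authors' scheme.

```python
import numpy as np

# Fractional Allen-Cahn: u_t = -eps * (-Delta)^(alpha/2) u + u - u^3,
# advanced with a semi-implicit (IMEX) Fourier spectral step on a periodic
# 1D domain. All parameters below are illustrative assumptions.
n, L_dom, eps, alpha, dt, steps = 256, 2 * np.pi, 1e-2, 1.5, 1e-2, 500

x = np.linspace(0.0, L_dom, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L_dom / n)   # angular wavenumbers
symbol = np.abs(k) ** alpha                      # diagonal symbol of (-Delta)^(alpha/2)

rng = np.random.default_rng(0)
u = 0.1 * rng.standard_normal(n)                 # small random initial data

for _ in range(steps):
    nonlinear = u - u ** 3                       # reaction treated explicitly
    u_hat = (np.fft.fft(u) + dt * np.fft.fft(nonlinear)) / (1.0 + dt * eps * symbol)
    u = np.real(np.fft.ifft(u_hat))

print(u.min(), u.max())   # u drifts toward the stable states near -1 and +1
```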
Abstract:
In this work, we consider subordinated processes controlled by a family of subordinators which consist of a power function of a time variable and a negative power function of an α-stable random variable. The effect of the subordinators' parameters on the subordinated process is discussed. By suitable variable substitutions and the Laplace transform technique, the corresponding fractional Fokker–Planck-type equations are derived. We also compute their mean square displacements in a force-free field. By choosing suitable ranges of parameters, the resulting subordinated processes may be subdiffusive, normally diffusive or superdiffusive.
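As a generic illustration of how subordination changes the mean square displacement (MSD), the sketch below simulates the standard subdiffusive case, Brownian motion time-changed by an inverse α-stable subordinator, and checks that the MSD grows roughly like t^α. This is not the specific subordinator family analysed in the paper; the construction, the parameter values, and the use of scipy.stats.levy_stable (assuming its default S1 parameterization, where beta=1 and alpha<1 gives a one-sided positive stable law) are assumptions of the example.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
alpha = 0.7                       # stable index, 0 < alpha < 1 (assumed)
n_paths, n_tau, d_tau = 1000, 2000, 0.01
t_grid = np.linspace(0.1, 10.0, 25)

msd = np.zeros_like(t_grid)
for _ in range(n_paths):
    # Totally skewed positive alpha-stable increments build the subordinator U(tau).
    dU = d_tau ** (1.0 / alpha) * levy_stable.rvs(alpha, 1.0, size=n_tau, random_state=rng)
    U = np.concatenate(([0.0], np.cumsum(dU)))
    # Brownian motion sampled on the same operational-time grid.
    B = np.concatenate(([0.0], np.cumsum(np.sqrt(d_tau) * rng.standard_normal(n_tau))))
    # Inverse subordinator: first operational time at which U exceeds each physical time t.
    idx = np.searchsorted(U, t_grid)
    msd += B[np.minimum(idx, n_tau)] ** 2
msd /= n_paths

# For subdiffusion the MSD should grow roughly like t**alpha.
print(np.polyfit(np.log(t_grid), np.log(msd), 1)[0])  # slope close to alpha
```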
Abstract:
Background: Paediatric-onset inflammatory bowel disease (IBD) may cause alterations in energy requirements and invalidate the use of standard prediction equations. Our aim was to evaluate four commonly used prediction equations for resting energy expenditure (REE) in children with IBD. Methods: Sixty-three children had repeated measurements of REE as part of a longitudinal research study, yielding a total of 243 measurements. These were compared with REE predicted from the Schofield, Oxford, FAO/WHO/UNU, and Harris-Benedict equations using the Bland-Altman method. Results: Mean (±SD) age of the patients was 14.2 (2.4) years. Mean measured REE was 1566 (336) kcal per day, compared with 1491 (236), 1441 (255), 1481 (232), and 1435 (212) kcal per day calculated from the Schofield, Oxford, FAO/WHO/UNU, and Harris-Benedict equations, respectively. While the Schofield equation showed the smallest difference between measured and predicted REE, it, along with the other equations tested, did not perform uniformly across all subjects, with greater errors at either end of the spectrum of energy expenditure. Smaller differences were found for all prediction equations in Crohn's disease compared with ulcerative colitis. Conclusions: Of the commonly used equations, the Schofield equation should be used in paediatric patients with IBD when measured values cannot be obtained. (Inflamm Bowel Dis 2010;)
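The comparison in this study amounts to computing predicted REE from standard equations and assessing agreement with measured REE via Bland-Altman bias and limits of agreement. A minimal sketch of that workflow is given below using the commonly cited Harris-Benedict coefficients (verify against the original publication before any real use); the example values are hypothetical and are not study data.

```python
import numpy as np

def harris_benedict_kcal(weight_kg, height_cm, age_y, male: bool) -> float:
    """Commonly cited Harris-Benedict (1919) REE equations, kcal/day (approximate)."""
    if male:
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_y
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_y

def bland_altman(measured, predicted):
    """Bland-Altman bias and 95% limits of agreement."""
    diff = np.asarray(measured) - np.asarray(predicted)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical example values (not study data).
measured = np.array([1620.0, 1480.0, 1710.0, 1390.0])
predicted = np.array([harris_benedict_kcal(52, 162, 14, True),
                      harris_benedict_kcal(45, 155, 13, False),
                      harris_benedict_kcal(60, 170, 16, True),
                      harris_benedict_kcal(41, 150, 12, False)])
print(bland_altman(measured, predicted))
```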
Abstract:
The process of spray drying is applied in a number of contexts. One such application is the production of a synthetic rock used for storage of nuclear waste. To establish a framework for a model of the spray drying process for this application, we here develop a model describing evaporation from droplets of pure water, such that the model may be extended to account for the presence of colloid within the droplet. We develop a spherically symmetric model and formulate continuum equations describing mass, momentum, and energy balance in both the liquid and gas phases from first principles. We establish appropriate boundary conditions at the surface of the droplet, including a generalised Clapeyron equation that accurately describes the temperature at the surface of the droplet. To account for the experimental design, we introduce a simplified platinum ball-and-wire model into the system by way of a thin-wire problem. The resulting system of equations is transformed in order to simplify a finite volume solution scheme. The results of numerical simulation are compared with data collected for validation, and the sensitivity of the model to variations in key parameters, and to the use of the Clausius–Clapeyron and generalised Clapeyron equations, is investigated. Good agreement is found between the model and experimental data, despite the simplicity of the platinum-phase model.
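For reference, the classical Clausius–Clapeyron relation against which the generalised Clapeyron treatment is compared can be written, under the usual ideal-gas and constant-latent-heat assumptions, as:

```latex
\frac{dP_{\mathrm{sat}}}{dT} = \frac{L\,P_{\mathrm{sat}}}{R_v T^2}
\quad\Longrightarrow\quad
P_{\mathrm{sat}}(T) = P_{\mathrm{ref}}
  \exp\!\left[\frac{L}{R_v}\left(\frac{1}{T_{\mathrm{ref}}} - \frac{1}{T}\right)\right]
```

where L is the specific latent heat of vaporisation and R_v the specific gas constant of water vapour; the paper's generalised Clapeyron equation refines this description of conditions at the droplet surface.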
Abstract:
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice.
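Of the two criteria studied, the Gaussian pseudolikelihood scores a fitted working covariance by evaluating the GEE residuals as if they were multivariate normal with that covariance. The sketch below shows the per-cluster form of the criterion; the residuals and working covariance matrices are hypothetical placeholders that would come from a fitted GEE, so this illustrates the criterion's structure rather than the authors' implementation.

```python
import numpy as np

def gaussian_pseudolikelihood(residuals_by_cluster, cov_by_cluster):
    """-2 * Gaussian log pseudolikelihood (up to a constant) for a working covariance.

    residuals_by_cluster: list of arrays r_i = y_i - mu_i (one per cluster)
    cov_by_cluster: list of fitted working covariance matrices V_i
    Smaller values indicate a better-fitting covariance model.
    """
    total = 0.0
    for r, V in zip(residuals_by_cluster, cov_by_cluster):
        _, logdet = np.linalg.slogdet(V)
        total += logdet + r @ np.linalg.solve(V, r)
    return total

# Hypothetical two-cluster example with an exchangeable working covariance (rho = 0.3).
r1, r2 = np.array([0.4, -0.2, 0.1]), np.array([-0.3, 0.5])
V1 = 0.7 * np.eye(3) + 0.3 * np.ones((3, 3))
V2 = 0.7 * np.eye(2) + 0.3 * np.ones((2, 2))
print(gaussian_pseudolikelihood([r1, r2], [V1, V2]))
```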
Abstract:
A modeling paradigm is proposed for covariate, variance and working correlation structure selection for longitudinal data analysis. Appropriate selection of covariates is pertinent to correct variance modeling, and selecting the appropriate covariates and variance function is in turn vital to correlation structure selection. This leads to a stepwise model selection procedure that deploys a combination of different model selection criteria. Although these criteria share a common theoretical root based on approximating the Kullback-Leibler distance, they are designed to address different aspects of model selection and have different merits and limitations. For example, the extended quasi-likelihood information criterion (EQIC) with a covariance penalty performs well for covariate selection even when the working variance function is misspecified, but EQIC contains little information on correlation structures. The proposed model selection strategies are outlined and a Monte Carlo assessment of their finite sample properties is reported. Two longitudinal studies are used for illustration.
Abstract:
Objective To discuss generalized estimating equations as an extension of generalized linear models by commenting on the paper of Ziegler and Vens "Generalized Estimating Equations. Notes on the Choice of the Working Correlation Matrix". Methods Inviting an international group of experts to comment on this paper. Results Several perspectives have been taken by the discussants. Econometricians have established parallels to the generalized method of moments (GMM). Statisticians discussed model assumptions and the aspect of missing data. Applied statisticians commented on practical aspects in data analysis. Conclusions In general, careful modelling of correlation is encouraged when considering estimation efficiency and other implications, and a comparison of choosing instruments in GMM and generalized estimating equations (GEE) would be worthwhile. Some theoretical drawbacks of GEE need to be further addressed and require careful analysis of data. This particularly applies to the situation when data are missing at random.
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and the estimation of trends or determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates were not collected;
- (iii) establish a predictive model for the concentration data that incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) take the sum of the products of the predicted flow and the predicted concentration over the regular time intervals as the estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features of the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rising or falling limb) and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in the model errors, which results from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors, and the method can further accommodate measurement error incurred through the sampling of flow. We illustrate the approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx), together with gauged flow data, from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2- to 10-fold, indicating severe bias. As expected, the traditional averaging and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
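In their simplest form, steps (iii) and (iv) reduce to fitting a rating-curve regression of concentration on flow and summing the predicted flux over the regular time grid. The sketch below shows only that core, with a plain log-log rating curve and synthetic data; the additional predictors named above (first flush, hydrograph limb, cumulative discounted flow), the autocorrelated error structure, and the standard-error calculations are not included.

```python
import numpy as np

rng = np.random.default_rng(42)

# Regular 10-minute flow series (step i); synthetic data for illustration.
dt_s = 600.0
flow = np.exp(rng.normal(2.0, 0.8, size=6 * 24 * 30))        # m^3/s over ~30 days

# Sparse concentration samples paired with flow at the sampling times (step ii).
sample_idx = rng.choice(flow.size, size=60, replace=False)
conc_samples = np.exp(-1.0 + 0.6 * np.log(flow[sample_idx])
                      + rng.normal(0.0, 0.2, size=60))        # mg/L

# Step (iii): log-log rating curve fitted by least squares.
X = np.column_stack([np.ones(sample_idx.size), np.log(flow[sample_idx])])
beta, *_ = np.linalg.lstsq(X, np.log(conc_samples), rcond=None)
conc_pred = np.exp(beta[0] + beta[1] * np.log(flow))          # mg/L at every interval

# Step (iv): load = sum of flow * concentration * interval length.
# mg/L * m^3/s = g/s, so multiplying by seconds gives grams; convert to kg.
load_kg = np.sum(flow * conc_pred * dt_s) * 1e-3
print(f"estimated load: {load_kg:.1f} kg")
```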
Abstract:
Selecting an appropriate working correlation structure is pertinent to clustered data analysis using generalized estimating equations (GEE) because an inappropriate choice will lead to inefficient parameter estimation. We investigate the well-known QIC criterion for selecting a working correlation structure, and find that the performance of the QIC is deteriorated by a term that is theoretically independent of the correlation structures but has to be estimated with an error. This leads us to propose a correlation information criterion (CIC) that substantially improves on the QIC. Extensive simulation studies indicate that the CIC achieves remarkable improvement in selecting the correct correlation structure. We also illustrate our findings using a data set from the Madras Longitudinal Schizophrenia Study.
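In practice this kind of comparison can be reproduced with standard GEE software. The sketch below fits the same model under independence and exchangeable working correlation structures with statsmodels and computes a CIC-style score as the trace of the independence-model naive precision times each candidate's robust covariance; the data are synthetic and the formula reflects one reading of the proposed criterion, so treat it as an assumption to check against the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic clustered data: 100 clusters of size 4 with an exchangeable-type correlation.
n_clusters, m = 100, 4
groups = np.repeat(np.arange(n_clusters), m)
x = rng.normal(size=n_clusters * m)
cluster_effect = np.repeat(rng.normal(scale=0.7, size=n_clusters), m)
y = 1.0 + 0.5 * x + cluster_effect + rng.normal(scale=1.0, size=n_clusters * m)
X = sm.add_constant(x)

def fit(cov_struct):
    return sm.GEE(y, X, groups=groups, family=sm.families.Gaussian(),
                  cov_struct=cov_struct).fit()

res_ind = fit(sm.cov_struct.Independence())
res_exc = fit(sm.cov_struct.Exchangeable())

# CIC-style score: independence model-based precision times the robust covariance
# under each working structure (smaller is better; assumed form of the criterion).
omega_ind = np.linalg.inv(res_ind.cov_naive)
for name, res in [("independence", res_ind), ("exchangeable", res_exc)]:
    cic = np.trace(omega_ind @ res.cov_robust)
    print(f"{name:13s} CIC = {cic:.3f}")
```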