83 results for Random coefficient multinomial logit
Abstract:
A parallel hardware random number generator for use with a VLSI genetic algorithm processing device is proposed. The design uses a systolic array of mixed congruential random number generators. The generators are constantly reseeded with the outputs of the preceding generators to avoid significant biasing of the randomness of the array, which would result in longer times for the algorithm to converge to a solution.
1 Introduction
In recent years there has been a growing interest in developing hardware genetic algorithm devices [1, 2, 3]. A genetic algorithm (GA) is a stochastic search and optimization technique which attempts to capture the power of natural selection by evolving a population of candidate solutions through a process of selection and reproduction [4]. In keeping with the evolutionary analogy, the solutions are called chromosomes, with each chromosome containing a number of genes. Chromosomes are commonly simple binary strings, the bits being the genes.
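As a rough software analogue of the scheme sketched above, the following Python snippet runs a ring of mixed (linear) congruential generators and periodically reseeds each one from the latest output of the preceding generator; the modulus, multiplier, array size and reseeding interval are illustrative choices, not parameters taken from the paper.

# Sketch: a ring of mixed congruential generators, each periodically
# reseeded from the latest output of the preceding generator
# (illustrative constants; the paper's VLSI parameters are not given here).

M = 2 ** 16          # modulus (small, for illustration only)
A = 25173            # multiplier
C = 13849            # increment
N_GENERATORS = 8     # size of the array
RESEED_EVERY = 32    # steps between reseeding events

class MixedCongruential:
    def __init__(self, seed):
        self.state = seed % M

    def next(self):
        self.state = (A * self.state + C) % M
        return self.state

def run_array(steps, seeds):
    gens = [MixedCongruential(s) for s in seeds]
    outputs = []
    for t in range(steps):
        row = [g.next() for g in gens]
        outputs.append(row)
        if (t + 1) % RESEED_EVERY == 0:
            # reseed each generator from its predecessor in the ring
            # to break up correlations between the parallel streams
            for i, g in enumerate(gens):
                g.state = row[i - 1]
    return outputs

if __name__ == "__main__":
    rows = run_array(steps=128, seeds=range(1, N_GENERATORS + 1))
    print(rows[-1])   # one row of parallel pseudo-random numbers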
Abstract:
We propose a novel method for scoring the accuracy of protein binding site predictions – the Binding-site Distance Test (BDT) score. Recently, the Matthews Correlation Coefficient (MCC) has been used to evaluate binding site predictions, both by developers of new methods and by the assessors for the community-wide prediction experiment – CASP8. Whilst being a rigorous scoring method, the MCC does not take into account the actual 3D distance of the predicted residues from the observed binding site. Thus, an incorrectly predicted site that is nevertheless close to the observed binding site will obtain an identical score to the same number of non-binding residues predicted at random. The MCC is also somewhat affected by the subjectivity of determining observed binding residues and the ambiguity of choosing distance cutoffs. By contrast, the BDT method produces continuous scores ranging between 0 and 1, relating to the distance between the predicted and observed residues. Residues predicted close to the binding site score higher than those more distant, providing a better reflection of the true accuracy of predictions. The CASP8 function predictions were evaluated using both the MCC and BDT methods and the scores were compared. The BDT scores were found to correlate strongly with the MCC scores whilst also being less susceptible to the subjectivity of defining binding residues. We therefore suggest that this new simple score is a potentially more robust method for future evaluations of protein-ligand binding site predictions.
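For reference, the MCC mentioned above has a standard closed form over the binary confusion matrix. The short Python sketch below computes it, together with a toy distance-weighted score that merely illustrates the kind of spatial credit a BDT-style measure gives; the actual BDT formula is not reproduced here, and the cutoff parameter d0 is an arbitrary illustrative value.

import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from a binary confusion matrix."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def distance_weighted_score(pred_coords, obs_coords, d0=3.0):
    """Toy score in [0, 1]: predicted residues closer to any observed
    binding residue contribute more (illustration only, not the BDT)."""
    if not pred_coords:
        return 0.0
    total = 0.0
    for p in pred_coords:
        dmin = min(math.dist(p, o) for o in obs_coords)
        total += 1.0 / (1.0 + (dmin / d0) ** 2)
    return total / len(pred_coords)

if __name__ == "__main__":
    print(mcc(tp=8, tn=150, fp=4, fn=3))
    observed = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
    predicted = [(0.5, 0.0, 0.0), (6.0, 0.0, 0.0)]
    print(distance_weighted_score(predicted, observed))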
Abstract:
The purpose of this study was to improve the prediction of the quantity and type of volatile fatty acids (VFA) produced from fermented substrate in the rumen of lactating cows. A model was formulated that describes the conversion of substrate (soluble carbohydrates, starch, hemi-cellulose, cellulose, and protein) into VFA (acetate, propionate, butyrate, and other VFA). Inputs to the model were observed rates of true rumen digestion of substrates, whereas outputs were observed molar proportions of VFA in rumen fluid. A literature survey generated data on 182 diets (96 roughage and 86 concentrate diets). Coefficient values that define the conversion of a specific substrate into VFA were estimated meta-analytically by regressing the model against observed VFA molar proportions using non-linear regression techniques. Coefficient estimates differed significantly, for acetate and propionate production in particular, between different types of substrate and between roughage and concentrate diets. Deviations of fitted from observed VFA molar proportions could be attributed entirely to random error. In addition to regression against observed data, simulation studies were performed to investigate the potential of the estimation method. Fitted coefficient estimates from simulated data sets appeared accurate, as did fitted rates of VFA production, although the model accounted for only a small fraction (at most 45%) of the variation in VFA molar proportions. The simulation results showed that the latter result was merely a consequence of the statistical analysis chosen and should not be interpreted as an indication of inaccuracy of the coefficient estimates. Deviations between fitted and observed values corresponded to those obtained in simulations. (c) 2005 Elsevier Ltd. All rights reserved.
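A rough sketch of the kind of estimation described above, assuming VFA molar proportions are modelled as observed substrate digestion rates times substrate-specific conversion coefficients (each substrate's coefficients constrained to positive fractions summing to one via a softmax) and fitted by non-linear least squares on synthetic data; the substrate list matches the abstract, but the data, constraint parameterisation and solver settings are illustrative assumptions rather than the paper's actual model.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
substrates = ["soluble CHO", "starch", "hemicellulose", "cellulose", "protein"]
vfas = ["acetate", "propionate", "butyrate", "other"]
n_s, n_v = len(substrates), len(vfas)

def conversion_coefficients(theta):
    """Map an unconstrained (n_s * n_v) vector to rows of positive
    fractions summing to one (the conversion coefficients)."""
    z = theta.reshape(n_s, n_v)
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def predicted_proportions(theta, rates):
    coefs = conversion_coefficients(theta)
    vfa_production = rates @ coefs                        # mol per unit time
    return vfa_production / vfa_production.sum(axis=1, keepdims=True)

# Synthetic "observed" data for the sketch: digestion rates for 60 diets
true_theta = rng.normal(size=n_s * n_v)
rates = rng.uniform(0.5, 3.0, size=(60, n_s))
observed = predicted_proportions(true_theta, rates)
observed = observed + rng.normal(scale=0.01, size=observed.shape)  # random error

def residuals(theta):
    return (predicted_proportions(theta, rates) - observed).ravel()

fit = least_squares(residuals, x0=np.zeros(n_s * n_v))
print(np.round(conversion_coefficients(fit.x), 3))        # fitted coefficients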
Hydrolyzable tannin structures influence relative globular and random coil protein binding strengths
Abstract:
Binding parameters for the interactions of pentagalloyl glucose (PGG) and four hydrolyzable tannins (representing gallotannins and ellagitannins) with gelatin and bovine serum albumin (BSA) have been determined from isothermal titration calorimetry data. Equilibrium binding constants determined for the interaction of PGG and isolated mixtures of tara gallotannins and of sumac gallotannins with gelatin and BSA were of the same order of magnitude for each tannin (in the range of 10^4-10^5 M^-1 for stronger binding sites when using a binding model consisting of two sets of multiple binding sites). In contrast, isolated mixtures of chestnut ellagitannins and of myrabolan ellagitannins exhibited 3-4 orders of magnitude greater equilibrium binding constants for the interaction with gelatin (~2 x 10^6 M^-1) than for that with BSA (~8 x 10^2 M^-1). Binding stoichiometries revealed that the stronger binding sites on gelatin outnumbered those on BSA by a ratio of at least ~2:1 for all of the hydrolyzable tannins studied. Overall, the data revealed that relative binding constants for the interactions with gelatin and BSA are dependent on the structural flexibility of the tannin molecule.
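For scale, an equilibrium association constant can be converted to a standard free energy of binding via delta-G = -RT ln K; the small Python sketch below does this for the order-of-magnitude constants quoted above, assuming 298 K.

import math

R_GAS = 8.314        # J mol^-1 K^-1
TEMP = 298.0         # K (assumed)

def binding_free_energy_kj(k_assoc):
    """Standard binding free energy (kJ/mol) from an association constant (M^-1)."""
    return -R_GAS * TEMP * math.log(k_assoc) / 1000.0

# Order-of-magnitude constants quoted in the abstract
for label, k in [("gallotannin-protein, ~1e5 M^-1", 1e5),
                 ("ellagitannin-gelatin, ~2e6 M^-1", 2e6),
                 ("ellagitannin-BSA,     ~8e2 M^-1", 8e2)]:
    print(f"{label}: {binding_free_energy_kj(k):.1f} kJ/mol")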
Abstract:
Grass-based diets are of increasing socio-economic importance in dairy cattle farming, but their low supply of glucogenic nutrients may limit the production of milk. Current evaluation systems that assess the energy supply and requirements are based on metabolisable energy (ME) or net energy (NE). These systems do not consider the characteristics of the energy delivering nutrients. In contrast, mechanistic models take into account the site of digestion, the type of nutrient absorbed and the type of nutrient required for production of milk constituents, and may therefore give a better prediction of supply and requirement of nutrients. The objective of the present study is to compare the ability of three energy evaluation systems, viz. the Dutch NE system, the Agricultural and Food Research Council (AFRC) ME system, and the feed into milk (FIM) ME system, and of a mechanistic model based on Dijkstra et al. [Simulation of digestion in cattle fed sugar cane: prediction of nutrient supply for milk production with locally available supplements. J. Agric. Sci., Cambridge 127, 247-60] and Mills et al. [A mechanistic model of whole-tract digestion and methanogenesis in the lactating dairy cow: model development, evaluation and application. J. Anim. Sci. 79, 1584-97] to predict the feed value of grass-based diets for milk production. The dataset for evaluation consists of 41 treatments of grass-based diets (at least 0.75 g ryegrass/g diet on DM basis). For each model, the predicted energy or nutrient supply, based on observed intake, was compared with predicted requirement based on observed performance. Assessment of the error of energy or nutrient supply relative to requirement is made by calculation of mean square prediction error (MSPE) and by concordance correlation coefficient (CCC). All energy evaluation systems predicted energy requirement to be lower (6-11%) than energy supply. The root MSPE (expressed as a proportion of the supply) was lowest for the mechanistic model (0.061), followed by the Dutch NE system (0.082), the FIM ME system (0.097) and the AFRC ME system (0.118). For the energy evaluation systems, the error due to overall bias of prediction dominated the MSPE, whereas for the mechanistic model, proportionally 0.76 of MSPE was due to random variation. CCC analysis confirmed the higher accuracy and precision of the mechanistic model compared with the energy evaluation systems. The error of prediction was positively related to grass protein content for the Dutch NE system, and was also positively related to grass DMI level for all models. In conclusion, current energy evaluation systems overestimate energy supply relative to energy requirement on grass-based diets for dairy cattle. The mechanistic model predicted glucogenic nutrients to limit performance of dairy cattle on grass-based diets, and proved to be more accurate and precise than the energy systems. The mechanistic model could be improved by allowing glucose maintenance and utilization requirement parameters to be variable. (C) 2007 Elsevier B.V. All rights reserved.
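The two accuracy measures named above have standard definitions. Below is a small Python sketch computing the root MSPE with its decomposition into overall bias, slope deviation and random variation, plus Lin's concordance correlation coefficient, for paired supply/requirement predictions; the example numbers are invented.

import numpy as np

def mspe_decomposition(pred, obs):
    """Mean square prediction error and its split into overall bias,
    deviation of the regression slope from unity, and random variation."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mspe = np.mean((pred - obs) ** 2)
    sp, so = pred.std(), obs.std()
    r = np.corrcoef(pred, obs)[0, 1]
    bias = (pred.mean() - obs.mean()) ** 2
    slope = (sp - r * so) ** 2
    random_part = (1.0 - r ** 2) * so ** 2
    return mspe, bias, slope, random_part

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

if __name__ == "__main__":
    supply = [105.0, 98.0, 112.0, 120.0, 101.0]        # predicted energy supply (invented)
    requirement = [99.0, 95.0, 104.0, 110.0, 97.0]     # predicted energy requirement
    mspe, bias, slope, rand = mspe_decomposition(supply, requirement)
    print("root MSPE:", round(mspe ** 0.5, 3), "bias/slope/random:", bias, slope, rand)
    print("CCC:", round(concordance_cc(supply, requirement), 3))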
Abstract:
Cedrus atlantica (Pinaceae) is a large and exceptionally long-lived conifer native to the Rif and Atlas Mountains of North Africa. To assess levels and patterns of genetic diversity of this species, samples were obtained throughout the natural range in Morocco and from a forest plantation in Arbucies, Girona (Spain) and analyzed using RAPD markers. Within-population genetic diversity was high and comparable to that revealed by isozymes. Managed populations harbored levels of genetic variation similar to those found in their natural counterparts. Genotypic analyses of molecular variance (AMOVA) found that most variation was within populations, but significant differentiation was also found between populations, particularly in Morocco. Bayesian estimates of F_ST corroborated the AMOVA partitioning and provided evidence for population differentiation in C. atlantica. Both distance- and Bayesian-based clustering methods revealed that Moroccan populations comprise two genetically distinct groups. Within each group, estimates of population differentiation were close to those previously reported in other gymnosperms. These results are interpreted in the context of the postglacial history of the species and human impact. The high degree of among-group differentiation recorded here highlights the need for additional conservation measures for some Moroccan populations of C. atlantica.
Abstract:
Using mixed logit models to analyse choice data is common but requires ex ante specification of the functional forms of preference distributions. We make the case for greater use of bounded functional forms and propose the use of the Marginal Likelihood, calculated using Bayesian techniques, as a single measure of model performance across non-nested mixed logit specifications. Using this measure leads to very different rankings of model specifications compared to alternative rule-of-thumb measures. The approach is illustrated using data from a choice experiment on GM food types, which provides insights regarding the recent WTO dispute between the EU and the US, Canada and Argentina, and whether labelling and trade regimes should be based on the production process or product composition.
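As background to the specification issue discussed above, here is a minimal sketch of a random-coefficient (mixed) logit fitted by maximum simulated likelihood with normally distributed coefficients on synthetic data; the data, the number of draws and the unbounded normal mixing distribution are illustrative assumptions (the abstract's point is precisely that bounded alternatives, compared via a Bayesian marginal likelihood, deserve more use).

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, T, J, K, R = 200, 4, 3, 2, 200     # people, tasks, alternatives, attributes, draws

# Synthetic choices generated from a true random-coefficient logit
X = rng.normal(size=(N, T, J, K))
true_mean, true_sd = np.array([1.0, -0.5]), np.array([0.8, 0.4])
beta_i = true_mean + true_sd * rng.normal(size=(N, K))
utility = np.einsum("ntjk,nk->ntj", X, beta_i) + rng.gumbel(size=(N, T, J))
choice = utility.argmax(axis=2)

draws_std = rng.normal(size=(R, K))   # common standard-normal draws reused across evaluations

def simulated_loglik(params):
    """Average, over R draws of beta ~ N(mean, sd), of each person's
    product of logit choice probabilities; sum of the log of that average."""
    mean, log_sd = params[:K], params[K:]
    betas = mean + np.exp(log_sd) * draws_std            # (R, K)
    v = np.einsum("ntjk,rk->rntj", X, betas)             # utilities per draw
    p = np.exp(v - v.max(axis=3, keepdims=True))
    p /= p.sum(axis=3, keepdims=True)                    # logit probabilities
    chosen = p[:, np.arange(N)[:, None], np.arange(T), choice]   # (R, N, T)
    person_lik = chosen.prod(axis=2).mean(axis=0)
    return np.log(person_lik + 1e-300).sum()

res = minimize(lambda q: -simulated_loglik(q), x0=np.zeros(2 * K), method="BFGS")
print("estimated means:", res.x[:K], "estimated sds:", np.exp(res.x[K:]))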
Abstract:
In the 'rice-wheat' and the 'cotton-wheat' farming systems of Pakistan's Punjab, late planting of wheat is a perennial problem due to the often delayed harvesting of the previously planted and late-maturing rice and cotton crops. This leaves very limited time for land preparation for 'on-time' planting of wheat. 'No-tillage' technologies that reduce the turn-round time for wheat cultivation after rice and cotton have been developed, but their uptake has not been as expected. This paper attempts to determine the farm and farmer characteristics and other socio-economic factors that influence the adoption of 'no-tillage' technologies. Logit models were developed for the analysis undertaken. In the 'cotton-wheat' system, personal characteristics such as education, tenancy status, attitude towards the risk implied in the use of new technologies, and contact with extension agents are the main factors that affect adoption. As regards the 'rice-wheat' system, resource endowments such as farm size, access to a 'no-tillage' drill, clayey soils and the area sown to the rice-wheat sequence, along with tenancy and contact with extension agents, were dominant in explaining adoption. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Objectives: To assess the potential source of variation that the surgeon may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials, involving 43 surgeons, were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, that is, a sparse binary outcome variable. A mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted using the method of maximum likelihood in SAS®. Results: There were many convergence problems. These were resolved using a variety of approaches including: treating all effects as fixed for the initial model building; modelling the variance of a parameter on a logarithmic scale; and centring of continuous covariates. The initial model building process indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was not a difference between surgeons. The statistical test may have lacked sufficient power; the variance estimates were small with large standard errors, indicating that the precision of the variance estimates may be questionable.
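As an illustration of the modelling approach described above (a binary complication outcome with surgeon fitted as a random effect, and the variance parameter handled on a log scale), the Python sketch below writes down a random-intercept logistic likelihood with the random effect integrated out by Gauss-Hermite quadrature and maximises it on simulated data; the data, quadrature order and optimiser are made-up choices, not the SAS analysis used in the trials.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_surgeons, patients_each = 40, 30
true_b0, true_b1, true_sd = -2.2, 0.5, 0.4      # gives a sparse (~10%) outcome

# Simulated data: surgeon random intercepts, one centred covariate
surgeon = np.repeat(np.arange(n_surgeons), patients_each)
x = rng.normal(size=surgeon.size)
u = true_sd * rng.normal(size=n_surgeons)
prob = 1.0 / (1.0 + np.exp(-(true_b0 + true_b1 * x + u[surgeon])))
y = rng.binomial(1, prob)

nodes, weights = np.polynomial.hermite.hermgauss(25)   # Gauss-Hermite rule

def neg_loglik(params):
    """Marginal log-likelihood of a random-intercept logistic model: the
    surgeon effect is integrated out with Gauss-Hermite quadrature and the
    random-effect SD is parameterised on the log scale."""
    b0, b1, log_sd = params
    sd = np.exp(log_sd)
    total = 0.0
    for s in range(n_surgeons):
        xs, ys = x[surgeon == s], y[surgeon == s]
        us = np.sqrt(2.0) * sd * nodes                  # nodes mapped to N(0, sd^2)
        eta = b0 + b1 * xs[:, None] + us[None, :]
        ll = ys[:, None] * eta - np.logaddexp(0.0, eta) # Bernoulli log-likelihood
        total += np.log((weights * np.exp(ll.sum(axis=0))).sum() / np.sqrt(np.pi))
    return -total

fit = minimize(neg_loglik, x0=np.array([-2.0, 0.0, np.log(0.3)]), method="Nelder-Mead")
print("intercept, slope, random-effect SD:", fit.x[0], fit.x[1], np.exp(fit.x[2]))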
Abstract:
Accelerated failure time models with a shared random component are described, and are used to evaluate the effect of explanatory factors and different transplant centres on survival times following kidney transplantation. Different combinations of the distribution of the random effects and baseline hazard function are considered and the fit of such models to the transplant data is critically assessed. A mixture model that combines short- and long-term components of a hazard function is then developed, which provides a more flexible model for the hazard function. The model can incorporate different explanatory variables and random effects in each component. The model is straightforward to fit using standard statistical software, and is shown to be a good fit to the transplant data. Copyright (C) 2004 John Wiley & Sons, Ltd.
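A compact sketch in the spirit of the model above: a log-normal accelerated failure time model with a shared centre-level random effect, simulated without censoring, where the variance components can be checked with simple one-way ANOVA-type moment estimators; the covariate, distributions and parameter values are illustrative, not the transplant analysis itself.

import numpy as np

rng = np.random.default_rng(3)
n_centres, n_per = 15, 40
b0, b1, sd_centre, sd_resid = 8.0, -0.6, 0.30, 0.75

centre = np.repeat(np.arange(n_centres), n_per)
x = rng.binomial(1, 0.4, size=centre.size)          # e.g. a donor-type indicator
u = sd_centre * rng.normal(size=n_centres)          # shared centre-level random effect
log_t = b0 + b1 * x + u[centre] + sd_resid * rng.normal(size=centre.size)
survival_time = np.exp(log_t)                       # AFT: covariates scale survival time

print("median survival, x=0 vs x=1:",
      round(np.median(survival_time[x == 0]), 1),
      round(np.median(survival_time[x == 1]), 1))

# With no censoring, this log-normal AFT with a shared random effect is a
# linear mixed model for log(t); a quick moment-based check of the variance
# components (true fixed effects used, to keep the sketch short):
resid = log_t - (b0 + b1 * x)
centre_means = np.array([resid[centre == c].mean() for c in range(n_centres)])
within_var = np.mean([resid[centre == c].var(ddof=1) for c in range(n_centres)])
between_var = centre_means.var(ddof=1) - within_var / n_per
print("within (residual) variance:", round(within_var, 3))
print("between (centre) variance :", round(between_var, 3))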
Abstract:
The absorption cross-sections of Cl2O6 and Cl2O4 have been obtained using a fast flow reactor with a diode array spectrometer (DAS) detection system. The absorption cross-sections at the wavelengths of maximum absorption (λ_max) determined in this study are: Cl2O6, (1.47 ± 0.15) x 10^-17 cm^2 molecule^-1 at λ_max = 276 nm and T = 298 K; and Cl2O4, (9.0 ± 2.0) x 10^-19 cm^2 molecule^-1 at λ_max = 234 nm and T = 298 K. Errors quoted are two standard deviations together with estimates of the systematic error. The shapes of the absorption spectra were obtained over the wavelength range 200-450 nm for Cl2O6 and 200-350 nm for Cl2O4, were normalized to the absolute cross-sections obtained at λ_max for each oxide, and are presented at 1 nm intervals. These data are discussed in relation to previous measurements. The reaction of O with OClO has been investigated with the objective of observing transient spectroscopic absorptions. A transient absorption was seen, and the possibility is explored of identifying the species with the elusive sym-ClO3 or ClO4, both of which have been characterized in matrices, but not in the gas phase. The photolysis of OClO was also re-examined, with emphasis being placed on the products of reaction. UV absorptions attributable to one of the isomers of the ClO dimer, chloryl chloride (ClClO2), were observed; some Cl2O4 was also found at long photolysis times, when much of the ClClO2 had itself been photolysed. We suggest that reports of Cl2O6 formation in previous studies could be a consequence of a mistaken identification. At low temperatures, the photolysis of OClO leads to the formation of Cl2O3 as a result of the addition of the ClO primary product to OClO. ClClO2 also appears to be one product of the reaction between O3 and OClO, especially when the reaction occurs under explosive conditions. We studied the kinetics of the non-explosive process using a stopped-flow technique, and suggest a value for the room-temperature rate coefficient of (4.6 ± 0.9) x 10^-19 cm^3 molecule^-1 s^-1 (limit quoted is 2σ random errors). The photochemical and thermal decomposition of Cl2O6 is described in this paper. For photolysis at λ = 254 nm, the removal of Cl2O6 is not accompanied by the build-up of any other strong absorber. The implications of the results are either that the photolysis of Cl2O6 produces Cl2 directly, or that the initial photofragments are converted rapidly to Cl2. In the thermal decomposition of Cl2O6, Cl2O4 was shown to be a product of reaction, although not necessarily the major one. The kinetics of decomposition were investigated using the stopped-flow technique. At the relatively high [OClO] present in the system, the decay kinetics obeyed a first-order law, with a limiting first-order rate coefficient of 0.002 s^-1. (C) 2004 Elsevier B.V. All rights reserved.
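To connect the cross-sections reported above to a measurable quantity, the short sketch below applies the Beer-Lambert law to estimate the optical depth a column of Cl2O6 would produce at its λ_max; the number density and path length are arbitrary illustrative values, and only the cross-section is taken from the abstract.

import math

sigma = 1.47e-17         # cm^2 molecule^-1, Cl2O6 at 276 nm (from the abstract)
number_density = 5e14    # molecule cm^-3 (illustrative value)
path_length = 50.0       # cm (illustrative absorption-cell length)

optical_depth = sigma * number_density * path_length   # Beer-Lambert, base e
decadic_absorbance = optical_depth / math.log(10)
transmittance = math.exp(-optical_depth)

print(f"optical depth      : {optical_depth:.3f}")
print(f"decadic absorbance : {decadic_absorbance:.3f}")
print(f"transmittance      : {transmittance:.1%}")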
Abstract:
In this paper, we report a new method based on supercritical carbon dioxide (scCO2) to fill and distribute porous magnetic nanoparticles with n-octanol in a homogeneous manner. The high solubility of n-octanol in scCO2 and the high diffusivity and permeability of the fluid allow efficient delivery of n-octanol into the porous magnetic nanoparticles. Thus, the n-octanol-loaded magnetic nanoparticles can be readily dispersed into aqueous buffer (pH 7.40) to form a homogeneous suspension consisting of nano-sized n-octanol droplets. We refer to this suspension as the n-octanol stock solution. The n-octanol stock solution is then mixed with a bulk aqueous phase (pH 7.40) containing an organic compound prior to magnetic separation. The small size of the particles and the efficient mixing enable a rapid establishment of the partition equilibrium of the organic compound between the solid-supported n-octanol nano-droplets and the bulk aqueous phase. UV-vis spectrophotometry is then applied to determine the concentration of the organic compound in the aqueous phase both before and after partitioning (after magnetic separation). As a result, log D values of organic compounds of pharmaceutical interest determined by this modified method are found to be in excellent agreement with the literature data. (c) 2006 Elsevier B.V. All rights reserved.
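The partition calculation implied above follows from a simple mass balance: if the aqueous-phase absorbance is proportional to concentration, the amount transferred into the supported n-octanol is given by the absorbance drop. The sketch below assumes illustrative volumes and absorbances, and that all compound lost from the buffer resides in the octanol phase; it is not the paper's exact protocol.

import math

def log_d(a_before, a_after, v_aqueous_ml, v_octanol_ml):
    """log D from aqueous-phase absorbances measured before and after
    partitioning, assuming absorbance is proportional to concentration and
    that all compound lost from the buffer sits in the supported n-octanol."""
    conc_octanol = (a_before - a_after) * v_aqueous_ml / v_octanol_ml
    return math.log10(conc_octanol / a_after)

# Illustrative numbers only (not from the paper): 2.0 mL of buffer against
# 0.020 mL of particle-supported n-octanol, absorbance falling 0.80 -> 0.32.
print(round(log_d(a_before=0.80, a_after=0.32, v_aqueous_ml=2.0, v_octanol_ml=0.020), 2))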
Abstract:
In real-world environments it is usually difficult to specify the quality of a preventive maintenance (PM) action precisely. This uncertainty makes it problematic to optimise maintenance policy. This problem is tackled in this paper by assuming that the quality of a PM action is a random variable following a probability distribution. Two frequently studied PM models, a failure rate PM model and an age reduction PM model, are investigated. The optimal PM policies are presented and optimised. Numerical examples are also given.
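A small Monte Carlo sketch of the second model type mentioned above (age reduction with uncertain PM quality), assuming Weibull minimal-repair failures, a Beta-distributed age-reduction factor and a fixed number of PM intervals; every parameter value, the cost structure and the Beta assumption are illustrative guesses rather than the paper's formulation.

import numpy as np

rng = np.random.default_rng(4)

beta_shape, eta_scale = 2.5, 1000.0          # Weibull failure intensity (hours, illustrative)
c_pm, c_failure = 200.0, 1500.0              # PM cost and failure (minimal repair) cost

def expected_failures(age_from, age_to):
    """Expected failures between two effective ages under minimal repair."""
    return (age_to / eta_scale) ** beta_shape - (age_from / eta_scale) ** beta_shape

def cost_rate(pm_interval, n_intervals=10, n_sims=20000):
    """Monte Carlo cost per hour when each PM resets the effective age to
    delta * (age before PM), with random PM quality delta ~ Beta(2, 5)."""
    total_cost = np.zeros(n_sims)
    age = np.zeros(n_sims)
    for _ in range(n_intervals):
        total_cost += c_failure * expected_failures(age, age + pm_interval) + c_pm
        delta = rng.beta(2.0, 5.0, size=n_sims)   # random PM quality (0 = as good as new)
        age = delta * (age + pm_interval)
    return total_cost.mean() / (n_intervals * pm_interval)

for T in (100, 200, 300, 400, 600):
    print(f"PM every {T:>3} h -> cost rate {cost_rate(T):.3f} per hour")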
Abstract:
Random number generation (RNG) is a functionally complex process that is highly controlled and therefore dependent on Baddeley's central executive. This study addresses this issue by investigating whether key predictions from this framework are compatible with empirical data. In Experiment 1, the effect of increasing task demands by increasing the rate of the paced generation was comprehensively examined. As expected, faster rates affected performance negatively because central resources were increasingly depleted. Next, the effects of participants' exposure were manipulated in Experiment 2 by providing increasing amounts of practice on the task. There was no improvement over 10 practice trials, suggesting that the high level of strategic control required by the task was constant and not amenable to any automatization gain with repeated exposure. Together, the results demonstrate that RNG performance is a highly controlled and demanding process sensitive to additional demands on central resources (Experiment 1) and is unaffected by repeated performance or practice (Experiment 2). These features render the easily administered RNG task an ideal and robust index of executive function that is highly suitable for repeated clinical use.
Abstract:
The human electroencephalogram (EEG) is globally characterized by a 1/f power spectrum superimposed with certain peaks, whereby the "alpha peak" in a frequency range of 8-14 Hz is the most prominent one for relaxed states of wakefulness. We present simulations of a minimal dynamical network model of leaky integrator neurons attached to the nodes of an evolving directed and weighted random graph (an Erdos-Renyi graph). We derive a model of the dendritic field potential (DFP) for the neurons leading to a simulated EEG that describes the global activity of the network. Depending on the network size, we find an oscillatory transition of the simulated EEG when the network reaches a critical connectivity. This transition, indicated by a suitably defined order parameter, is reflected by a sudden change of the network's topology when super-cycles are formed from merging isolated loops. After the oscillatory transition, the power spectra of simulated EEG time series exhibit a 1/f continuum superimposed with certain peaks. (c) 2007 Elsevier B.V. All rights reserved.
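A toy sketch in the spirit of the model above: leaky integrator units coupled through a directed, weighted Erdos-Renyi random graph, with a crude "EEG" taken as the summed activity and its power spectrum inspected for a dominant peak; the unit equations, weight scaling and connectivity are illustrative guesses, not the paper's exact dendritic field potential model.

import numpy as np

rng = np.random.default_rng(5)
n_units, p_connect = 200, 0.03          # network size and edge probability
dt, n_steps, tau = 1e-3, 20000, 0.02    # time step (s), duration, membrane time constant (s)

# Directed, weighted Erdos-Renyi coupling matrix
adjacency = (rng.random((n_units, n_units)) < p_connect).astype(float)
np.fill_diagonal(adjacency, 0.0)
weights = adjacency * rng.normal(size=adjacency.shape) / np.sqrt(p_connect * n_units)

x = rng.normal(0.0, 0.1, size=n_units)
eeg = np.empty(n_steps)
for t in range(n_steps):
    drive = weights @ np.tanh(x) + rng.normal(0.0, 0.5, size=n_units)  # synaptic input + noise
    x = x + dt * (-x + drive) / tau                                    # leaky integration
    eeg[t] = x.sum()                                                   # crude "EEG": summed activity

# Power spectrum of the simulated EEG and its dominant frequency
spectrum = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2
freqs = np.fft.rfftfreq(n_steps, d=dt)
print(f"dominant frequency: {freqs[1:][spectrum[1:].argmax()]:.1f} Hz")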