38 results for Prediction method
in Aston University Research Archive
Abstract:
Predictive models of peptide-Major Histocompatibility Complex (MHC) binding affinity are important components of modern computational immunovaccinology. Here, we describe the development and deployment of a reliable peptide-binding prediction method for a previously poorly-characterized human MHC class I allele, HLA-Cw*0102.
Abstract:
Peptides are of great therapeutic potential as vaccines and drugs. Knowledge of physicochemical descriptors, including the partition coefficient P (commonly expressed in logarithmic form: logP), is useful for screening out unsuitable molecules and also for the development of predictive Quantitative Structure-Activity Relationships (QSARs). In this paper we develop a new approach to the prediction of logP values for peptides, based on an empirical relationship between global molecular properties and measured physical properties. Our method was successful in terms of peptide prediction (total r2 = 0.641). The final model consisted of 5 physicochemical descriptors (molecular weight, number of single bonds, 2D-VDW volume, 2D-VSA hydrophobic and 2D-VSA polar). The approach is peptide-specific and its predictive accuracy was high: overall, 67% of the peptides were predicted to within +/-0.5 log units of the experimental values. Our method thus represents a novel prediction method with proven predictive ability.
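The empirical modelling idea in this abstract — a linear relationship between five global molecular descriptors and measured logP — can be sketched as an ordinary least-squares fit. All descriptor and logP values below are invented for illustration; only the five descriptor names come from the abstract.

```python
import numpy as np

# Hypothetical descriptor matrix for 6 peptides: columns are
# molecular weight, number of single bonds, 2D-VDW volume,
# 2D-VSA hydrophobic, 2D-VSA polar (all values illustrative only).
X = np.array([
    [361.4, 12, 310.2,  95.1, 140.3],
    [489.5, 17, 420.8, 150.6, 160.9],
    [532.6, 19, 455.0, 180.2, 155.4],
    [246.3,  8, 210.7,  60.3, 110.8],
    [618.7, 23, 530.1, 210.9, 190.2],
    [404.4, 14, 350.5, 120.4, 150.7],
])
y = np.array([-1.2, 0.3, 0.8, -2.1, 1.4, -0.5])  # "measured" logP (made up)

# Fit the 5-descriptor linear model with an intercept by least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_logp(descriptors):
    """Predict logP from the five global molecular descriptors."""
    return float(np.dot(descriptors, coef[:-1]) + coef[-1])

pred = predict_logp(X[0])
```

A real calibration would of course use many more peptides than parameters; with six points and six coefficients the fit here is exact by construction.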
Abstract:
The ability to define and manipulate the interaction of peptides with MHC molecules has immense immunological utility, with applications in epitope identification, vaccine design, and immunomodulation. However, the methods currently available for prediction of peptide-MHC binding are far from ideal. We recently described the application of a bioinformatic prediction method based on quantitative structure-affinity relationship methods to peptide-MHC binding. In this study we demonstrate the predictivity and utility of this approach. We determined the binding affinities of a set of 90 nonamer peptides for the MHC class I allele HLA-A*0201 using an in-house, FACS-based, MHC stabilization assay, and from these data we derived an additive quantitative structure-affinity relationship model for peptide interaction with the HLA-A*0201 molecule. Using this model we then designed a series of high affinity HLA-A2-binding peptides. Experimental analysis revealed that all these peptides showed high binding affinities to the HLA-A*0201 molecule, significantly higher than the highest previously recorded. In addition, by the use of systematic substitution at principal anchor positions 2 and 9, we showed that high binding peptides are tolerant to a wide range of nonpreferred amino acids. Our results support a model in which the affinity of peptide binding to MHC is determined by the interactions of amino acids at multiple positions with the MHC molecule and may be enhanced by enthalpic cooperativity between these component interactions.
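The additive quantitative structure-affinity model described above treats predicted binding affinity as a constant plus position-specific amino-acid contributions. The sketch below illustrates that structure with invented coefficients; the actual model was fitted to the 90 measured nonamer affinities.

```python
# Sketch of an additive QSAR scoring scheme for nonamer peptides:
# predicted affinity = constant + sum of position-specific
# amino-acid contributions. All coefficient values are invented
# for illustration; a real model is fitted to measured affinities.

CONST = 5.0  # hypothetical intercept
# contributions[(position, amino_acid)] -> additive term
contributions = {
    (1, "L"): 0.10, (2, "L"): 0.60, (2, "M"): 0.55,
    (5, "V"): 0.20, (9, "V"): 0.70, (9, "L"): 0.65,
}

def predict_affinity(peptide):
    """Additive prediction for a 9-mer: sum position contributions."""
    assert len(peptide) == 9, "model is defined for nonamers only"
    return CONST + sum(
        contributions.get((pos, aa), 0.0)
        for pos, aa in enumerate(peptide, start=1)
    )

score = predict_affinity("LLDVFPVLV")  # hypothetical 9-mer
```

The additive form makes the paper's conclusion concrete: every position contributes to the total, so high affinity does not hinge on the anchor positions alone.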
Abstract:
A study of information available on the settlement characteristics of backfill in restored opencast coal mining sites and other similar earthworks projects has been undertaken. In addition, the methods of opencast mining, compaction controls, monitoring and test methods have been reviewed. To consider and develop the methods of predicting the settlement of fill, three sites in the West Midlands have been examined; at each, the backfill had been placed in a controlled manner. In addition, use has been made of a finite element computer program to compare a simple two-dimensional linear elastic analysis with field observations of surface settlements in the vicinity of buried highwalls. On controlled backfill sites, settlement predictions have been accurately made, based on a linear relationship between settlement (expressed as a percentage of fill height) and the logarithm of time. This 'creep' settlement was found to be effectively complete within 18 months of restoration. A decrease of this percentage settlement was observed with increasing fill thickness; this is believed to be related to the speed with which the backfill is placed. A rising water table within the backfill is indicated to cause additional gradual settlement. A prediction method, based on settlement monitoring, has been developed and used to determine the pattern of settlement across highwalls and buried highwalls. The zone of appreciable differential settlement was found to be mainly limited to the highwall area, with its magnitude dictated by the highwall inclination. With a backfill cover of about 15 metres over a buried highwall, the magnitude of differential settlement was negligible. Use has been made of the proposed settlement prediction method and monitoring to control the re-development of restored opencast sites. The specifications, tests and monitoring techniques developed in recent years have been used to aid this.
Such techniques have been valuable in restoring land previously derelict due to past underground mining.
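The linear settlement-versus-log-time relationship described above can be written as a one-line function. The parameter values below are illustrative, not taken from the three West Midlands sites:

```python
import math

def creep_settlement_pct(t_months, alpha, s_ref_pct=0.0, t_ref_months=1.0):
    """Creep settlement (% of fill height) assuming a linear
    relationship against log10(time), per the monitoring model.
    alpha is the settlement gained per log cycle of time
    (a site-specific, back-figured parameter; value assumed here)."""
    return s_ref_pct + alpha * math.log10(t_months / t_ref_months)

# Illustrative: with alpha = 0.5 % per log cycle, settlement accrued
# between month 1 and month 18 (when creep is effectively complete):
s18 = creep_settlement_pct(18, alpha=0.5)
```

The log-time form captures the observation that most creep settlement occurs early: each successive tenfold increase in elapsed time adds only the same fixed increment.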
Abstract:
We present a novel method for predicting the onset of a spontaneous (paroxysmal) atrial fibrillation episode by representing the electrocardiograph (ECG) output as two time series, corresponding to the interbeat intervals and the lengths of the atrial component of the ECG. We then show how different entropy measures can be calculated from both of these series and combined in a neural network, trained using the Bayesian evidence procedure, to form an effective predictive classifier.
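As a minimal illustration of the entropy-measure idea, the sketch below computes the Shannon entropy of a binned interbeat-interval series. The paper's own entropy measures and its Bayesian neural-network classifier are not reproduced here, and all interval values are invented.

```python
import math
from collections import Counter

def shannon_entropy(series, n_bins=8):
    """Shannon entropy (bits) of a series after equal-width binning.
    One of many possible entropy measures; the study combines
    several such measures as neural-network inputs."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins or 1.0  # guard against constant series
    bins = [min(int((x - lo) / width), n_bins - 1) for x in series]
    counts = Counter(bins)
    n = len(series)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Interbeat intervals in seconds (made up): an irregular rhythm
# yields higher entropy than a steady one.
steady = [0.80, 0.81, 0.80, 0.79, 0.80, 0.81, 0.80, 0.79]
irregular = [0.62, 0.95, 0.71, 1.10, 0.55, 0.88, 0.67, 1.02]
```

Feeding such entropy values from both the interbeat and atrial-component series into a classifier is the combination step the abstract describes.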
Abstract:
The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. This leads in to a more general discussion of Gaussian processes in section 4. Section 5 deals with further issues, including hierarchical modelling and the setting of the parameters that control the Gaussian process, the covariance functions for neural network models and the use of Gaussian processes in classification problems.
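The change of viewpoint described in the tutorial leads to the standard Gaussian process predictive equations: given noisy training data (X, y), the predictive mean at x* is k(x*, X)[K + sigma^2 I]^-1 y. A minimal numpy sketch with a squared-exponential covariance (standard equations, not code from the tutorial itself):

```python
import numpy as np

def rbf(a, b, length=1.0, signal=1.0):
    """Squared-exponential covariance k(x, x') = s^2 exp(-(x-x')^2 / 2l^2)."""
    return signal**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """GP regression predictive mean and variance under a zero-mean
    prior over functions (illustrative hyperparameter values)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)                       # toy targets
mean, var = gp_predict(x, y, np.array([1.5]))
```

Note how the prediction is expressed entirely through the covariance function — the "prior over functions" viewpoint — with no explicit weight vector anywhere.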
Abstract:
A periodic density functional theory method using the B3LYP hybrid exchange-correlation potential is applied to the Prussian blue analogue RbMn[Fe(CN)6] to evaluate the suitability of the method for studying, and predicting, the photomagnetic behavior of Prussian blue analogues and related materials. The method allows correct description of the equilibrium structures of the different electronic configurations with regard to the cell parameters and bond distances. In agreement with the experimental data, the calculations have shown that the low-temperature phase (LT; Fe2+ (t2g^6, S = 0)-CN-Mn3+ (t2g^3 eg^1, S = 2)) is the stable phase at low temperature instead of the high-temperature phase (HT; Fe3+ (t2g^5, S = 1/2)-CN-Mn2+ (t2g^3 eg^2, S = 5/2)). Additionally, the method gives an estimation for the enthalpy difference (HT - LT) with a value of 143 J mol^-1 K^-1. The comparison of our calculations with experimental data from the literature and from our calorimetric and X-ray photoelectron spectroscopy measurements on the Rb0.97Mn[Fe(CN)6]0.98·1.03H2O compound is analyzed, and in general, a satisfactory agreement is obtained. The method also predicts the metastable nature of the electronic configuration of the high-temperature phase, a necessary condition for photoinducing that phase at low temperatures. It gives a photoactivation energy of 2.36 eV, which is in agreement with photoinduced demagnetization produced by a green laser.
Abstract:
A rapid method for the analysis of biomass feedstocks was established to identify the quality of the pyrolysis products likely to impact on bio-oil production. A total of 15 Lolium and Festuca grasses known to exhibit a range of Klason lignin contents were analysed by pyroprobe-GC/MS (Py-GC/MS) to determine the composition of the thermal degradation products of lignin. The identification of key marker compounds which are the derivatives of the three major lignin subunits (G, H, and S) allowed pyroprobe-GC/MS to be statistically correlated to the Klason lignin content of the biomass using the partial least-square method to produce a calibration model. Data from this multivariate modelling procedure were then applied to identify likely "key marker" ions representative of the lignin subunits from the mass spectral data. The combined total abundance of the identified key markers for the lignin subunits exhibited a linear relationship with the Klason lignin content. In addition, the effect of alkali metal concentration on optimum pyrolysis characteristics was also examined. Washing of the grass samples removed approximately 70% of the metals and changed the characteristics of the thermal degradation process and products. Overall the data indicate that both the organic and inorganic specification of the biofuel impacts on the pyrolysis process and that pyroprobe-GC/MS is a suitable analytical technique to assess lignin composition. © 2007 Elsevier B.V. All rights reserved.
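The reported linear relationship between the combined abundance of lignin-marker ions and Klason lignin content amounts to a calibration line. The sketch below fits a one-variable least-squares line on invented numbers; the study itself fitted a partial least-squares model to the full Py-GC/MS data.

```python
import numpy as np

# Invented calibration data: combined abundance of lignin-marker ions
# (arbitrary GC/MS units) vs. measured Klason lignin (% dry weight).
marker_abundance = np.array([1.2, 1.8, 2.1, 2.9, 3.4, 4.0])
klason_lignin = np.array([3.1, 4.2, 4.9, 6.3, 7.2, 8.4])

# One-variable linear calibration (a simplified stand-in for the
# multivariate PLS model used in the study).
slope, intercept = np.polyfit(marker_abundance, klason_lignin, 1)

def predict_lignin(abundance):
    """Estimate Klason lignin content from combined marker abundance."""
    return slope * abundance + intercept

est = predict_lignin(2.5)
```

Once calibrated, such a line lets a rapid Py-GC/MS run stand in for the much slower wet-chemical Klason determination.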
Abstract:
The sudden loss of plasma magnetic confinement, known as disruption, is one of the major issues in a nuclear fusion machine such as JET (Joint European Torus). Disruptions pose very serious problems for the safety of the machine. The energy stored in the plasma is released to the machine structure in a few milliseconds, resulting in forces that at JET reach several meganewtons. The problem is even more severe in a nuclear fusion power station, where the forces are in the order of one hundred meganewtons. The events that occur during a disruption are still not well understood, even though some mechanisms that can lead to a disruption have been identified and can be used to predict them. Unfortunately, it is always a combination of these events that generates a disruption, and it is therefore not possible to use simple algorithms to predict one. This thesis analyses the possibility of using neural network algorithms to predict plasma disruptions in real time. This involves the determination of plasma parameters every few milliseconds. A plasma boundary reconstruction algorithm, XLOC, has been developed in collaboration with Dr. D. Ollrien and Dr. J. Ellis, capable of determining the plasma-wall distance every 2 milliseconds. The XLOC output has been used to develop a multilayer perceptron network to determine plasma parameters such as ?i and q?, with which a machine operational space has been experimentally defined. If the limits of this operational space are breached, the disruption probability increases considerably. Another approach to predicting disruptions is to use neural network classification methods to define the JET operational space. Two methods have been studied. The first uses a multilayer perceptron network with a softmax activation function for the output layer. This method can be used to classify the input patterns into various classes.
In this case the plasma input patterns have been divided between disrupting and safe patterns, making it possible to assign a disruption probability to every plasma input pattern. The second method determines the novelty of an input pattern by calculating the probability density distribution of successful plasma patterns that have been run at JET. The density distribution is represented as a mixture distribution, and its parameters are determined using the Expectation-Maximisation method. If the dataset used to determine the distribution parameters covers the machine operational space sufficiently well, then the patterns flagged as novel can be regarded as patterns belonging to a disrupting plasma. Together with these methods, a network has been designed to predict the vertical forces that a disruption can cause, in order to avoid running excessively dangerous plasma configurations. This network can be run before the pulse, using the pre-programmed plasma configuration, or on-line, becoming a tool for stopping dangerous plasma configurations. All these methods have been implemented in real time on a dual Pentium Pro based machine. The Disruption Prediction and Prevention System has shown that internal plasma parameters can be determined on-line with good accuracy. The disruption detection algorithms also showed promising results, considering that JET is an experimental machine where new plasma configurations are constantly tested to improve its performance.
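The novelty-detection approach — flag plasma patterns that have low probability density under a model fitted to successful pulses — can be illustrated with a single Gaussian in place of the thesis's EM-fitted mixture distribution. All numbers are invented:

```python
import math

def fit_gaussian(samples):
    """Fit a 1-D Gaussian to 'safe' operating data. (The thesis used a
    mixture distribution fitted by Expectation-Maximisation; a single
    Gaussian shows the same novelty-detection idea.)"""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var

def is_novel(x, mu, var, threshold=1e-3):
    """Flag x as novel when its density under the safe model is low."""
    density = math.exp(-((x - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)
    return density < threshold

# Made-up scalar plasma parameter values from "successful" pulses:
mu, var = fit_gaussian([1.0, 1.1, 0.9, 1.05, 0.95, 1.0])
```

A pattern far from anything seen in successful operation receives negligible density and is flagged, exactly the criterion the abstract describes for identifying potentially disrupting plasmas.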
Abstract:
In vitro studies of drug absorption processes are undertaken to assess drug candidate or formulation suitability, to investigate mechanisms, and ultimately to develop predictive models. This study included each of these approaches, with the aim of developing novel in vitro methods for inclusion in a drug absorption model. Two model analgesic drugs, ibuprofen and paracetamol, were selected. The study focused on three main areas: the interaction of the model drugs with co-administered antacids, the elucidation of the mechanisms responsible for the increased absorption rate observed in a novel paracetamol formulation, and the development of novel ibuprofen tablet formulations containing alkalising excipients as dissolution promoters. Several novel dissolution methods were developed. A method to study the interaction of drug/excipient mixtures in powder form was successfully used to select suitable dissolution-enhancing excipients. A method to study intrinsic dissolution rate using paddle apparatus was developed and used to study dissolution mechanisms. Methods to simulate stomach and intestine environments in terms of media composition and volume and drug/antacid doses were developed. Antacid addition greatly increased the dissolution of ibuprofen in the stomach model. Novel methods to measure drug permeability through rat stomach and intestine were developed, using sac methodology. The methods allowed direct comparison of the apparent permeability values obtained. Tissue stability, reproducibility and integrity were observed, with selectivity between paracellular and transcellular markers and between hydrophilic and lipophilic compounds within a homologous series of beta-blockers.
Abstract:
This thesis reports the development of a reliable method for the prediction of response to electromagnetically induced vibration in large electric machines. The machines of primary interest are DC ship-propulsion motors, but much of the work reported has broader significance. The investigation has involved work in five principal areas: (1) the development and use of dynamic substructuring methods; (2) the development of special elements to represent individual machine components; (3) laboratory-scale investigations to establish empirical values for properties which affect machine vibration levels; (4) experiments on machines on the factory test-bed to provide data for correlation with prediction; and (5) reasoning with regard to the effect of various design features. The limiting factor in producing good models for machines in vibration is the time required for an analysis to take place. Dynamic substructuring methods were adopted early in the project to maximise the efficiency of the analysis. A review of existing substructure-representation and composite-structure assembly methods includes comments on which are most suitable for this application. In three appendices to the main volume, methods are presented which were developed by the author to accelerate analyses. Despite significant advances in this area, the limiting factor in machine analyses is still time. The representation of individual machine components was addressed as another means by which the time required for an analysis could be reduced. This has resulted in the development of special elements which are more efficient than their finite-element counterparts. The laboratory-scale experiments reported were undertaken to establish empirical values for the properties of three distinct features: lamination stacks, bolted-flange joints in rings and cylinders, and the shimmed pole-yoke joint. These are central to the preparation of an accurate machine model.
The theoretical methods are tested numerically and correlated with tests on two machines (running and static). A system has been devised with which the general electromagnetic forcing may be split into its most fundamental components. This is used to draw some conclusions about the probable effects of various design features.
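Dynamic substructuring reduces each component model before assembly. As one classical example of such a reduction (illustrative only; the thesis develops its own accelerated variants), Guyan static condensation eliminates "slave" degrees of freedom from a stiffness matrix:

```python
import numpy as np

def guyan_reduce(K, master_idx):
    """Guyan (static) condensation of a stiffness matrix onto 'master'
    DOFs: K_red = Kmm - Kms Kss^-1 Ksm. A textbook substructure-
    reduction step, not the thesis's specific formulation."""
    n = K.shape[0]
    slave_idx = [i for i in range(n) if i not in master_idx]
    Kmm = K[np.ix_(master_idx, master_idx)]
    Kms = K[np.ix_(master_idx, slave_idx)]
    Ksm = K[np.ix_(slave_idx, master_idx)]
    Kss = K[np.ix_(slave_idx, slave_idx)]
    return Kmm - Kms @ np.linalg.solve(Kss, Ksm)

# Three unit springs in series between four nodes, node 0 fixed:
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
K_red = guyan_reduce(K, [2])  # condense onto the free end
```

For three unit springs in series the condensed stiffness at the free end is 1/3, matching the series-spring formula — the reduced model reproduces the static behaviour of the full one with far fewer degrees of freedom.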
Abstract:
This thesis describes the development of a simple and accurate method for estimating the quantity and composition of household waste arisings. The method is based on the fundamental tenet that waste arisings can be predicted from information on the demographic and socio-economic characteristics of households, thus reducing the need for the direct measurement of waste arisings to that necessary for the calibration of a prediction model. The aim of the research is twofold: firstly to investigate the generation of waste arisings at the household level, and secondly to devise a method for supplying information on waste arisings to meet the needs of waste collection and disposal authorities, policy makers at both national and European level and the manufacturers of plant and equipment for waste sorting and treatment. The research was carried out in three phases: theoretical, empirical and analytical. In the theoretical phase specific testable hypotheses were formulated concerning the process of waste generation at the household level. The empirical phase of the research involved an initial questionnaire survey of 1277 households to obtain data on their socio-economic characteristics, and the subsequent sorting of waste arisings from each of the households surveyed. The analytical phase was divided between (a) the testing of the research hypotheses by matching each household's waste against its demographic/socioeconomic characteristics (b) the development of statistical models capable of predicting the waste arisings from an individual household and (c) the development of a practical method for obtaining area-based estimates of waste arisings using readily available data from the national census. The latter method was found to represent a substantial improvement over conventional methods of waste estimation in terms of both accuracy and spatial flexibility. 
The research therefore represents a substantial contribution both to scientific knowledge of the process of household waste generation, and to the practical management of waste arisings.
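The area-based estimation method described above combines model-predicted per-household waste rates with census counts of households in each demographic/socio-economic category. A minimal sketch with invented figures:

```python
# Sketch of the area-based estimation idea: per-household waste rates,
# predicted from demographic/socio-economic categories, are scaled by
# census counts of households in each category. All figures invented.

waste_rate_kg_per_week = {  # model-predicted rate per household type
    "1_person": 7.5,
    "2_person": 11.0,
    "family_with_children": 16.5,
}

census_counts = {  # households of each type in the area (from census)
    "1_person": 1200,
    "2_person": 2100,
    "family_with_children": 1800,
}

# Area total = sum over categories of rate x household count.
area_total = sum(
    waste_rate_kg_per_week[t] * census_counts[t] for t in census_counts
)
```

Because the census counts are readily available for any area, the same calibrated rates give spatially flexible estimates without new waste-sorting surveys.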
Abstract:
This thesis describes an investigation of methods by which both repetitive and non-repetitive electrical transients in an HVDC converter station may be controlled for minimum overall cost. Several methods of inrush control are proposed and studied. The preferred method, whose development is reported in this thesis, would utilize two magnetic materials, one of which is assumed to be lossless and the other has controlled eddy-current losses. Mathematical studies are performed to assess the optimum characteristics of these materials, such that inrush current is suitably controlled for a minimum saturation flux requirement. Subsequent evaluation of the cost of hardware and capitalized losses of the proposed inrush control, indicate that a cost reduction of approximately 50% is achieved, in comparison with the inrush control hardware for the Sellindge converter station. Further mathematical studies are carried out to prove the adequacy of the proposed inrush control characteristics for controlling voltage and current transients during both repetitive and non-repetitive operating conditions. The results of these proving studies indicate that no change in the proposed characteristics is required to ensure that integrity of the thyristors is maintained.
Abstract:
Purpose: Both phonological (speech) and auditory (non-speech) stimuli have been shown to predict early reading skills. However, previous studies have failed to control for the level of processing required by tasks administered across the two levels of stimuli. For example, phonological tasks typically tap explicit awareness e.g., phoneme deletion, while auditory tasks usually measure implicit awareness e.g., frequency discrimination. Therefore, the stronger predictive power of speech tasks may be due to their higher processing demands, rather than the nature of the stimuli. Method: The present study uses novel tasks that control for level of processing (isolation, repetition and deletion) across speech (phonemes and nonwords) and non-speech (tones) stimuli. 800 beginning readers at the onset of literacy tuition (mean age 4 years and 7 months) were assessed on the above tasks as well as word reading and letter-knowledge in the first part of a three time-point longitudinal study. Results: Time 1 results reveal a significantly higher association between letter-sound knowledge and all of the speech compared to non-speech tasks. Performance was better for phoneme than tone stimuli, and worse for deletion than isolation and repetition across all stimuli. Conclusions: Results are consistent with phonological accounts of reading and suggest that level of processing required by the task is less important than stimuli type in predicting the earliest stage of reading.
Abstract:
Purpose: Phonological accounts of reading implicate three aspects of phonological awareness tasks that underlie the relationship with reading; a) the language-based nature of the stimuli (words or nonwords), b) the verbal nature of the response, and c) the complexity of the stimuli (words can be segmented into units of speech). Yet, it is uncertain which task characteristics are most important as they are typically confounded. By systematically varying response-type and stimulus complexity across speech and non-speech stimuli, the current study seeks to isolate the characteristics of phonological awareness tasks that drive the prediction of early reading. Method: Four sets of tasks were created; tone stimuli (simple non-speech) requiring a non-verbal response, phonemes (simple speech) requiring a non-verbal response, phonemes requiring a verbal response, and nonwords (complex speech) requiring a verbal response. Tasks were administered to 570 2nd grade children along with standardized tests of reading and non-verbal IQ. Results: Three structural equation models comparing matched sets of tasks were built. Each model consisted of two 'task' factors with a direct link to a reading factor. The following factors predicted unique variance in reading: a) simple speech and non-speech stimuli, b) simple speech requiring a verbal response but not simple speech requiring a non-verbal-response, and c) complex and simple speech stimuli. Conclusions: Results suggest that the prediction of reading by phonological tasks is driven by the verbal nature of the response and not the complexity or 'speechness' of the stimuli. Findings highlight the importance of phonological output processes to early reading.