27 results for Secondary Structure Prediction


Relevance:

30.00%

Publisher:

Abstract:

Retrospective clinical data presents many challenges for data mining and machine learning. The transcription of patient records from paper charts and the subsequent manipulation of data often result in high volumes of noise as well as a loss of other important information. In addition, such datasets often fail to represent expert medical knowledge and reasoning in any explicit manner. In this research we describe applying data mining methods to retrospective clinical data to build a prediction model for asthma exacerbation severity for pediatric patients in the emergency department. Difficulties in building such a model forced us to investigate alternative strategies for analyzing and processing retrospective data. This paper describes this process together with an approach to mining retrospective clinical data by incorporating formalized external expert knowledge (secondary knowledge sources) into the classification task. This knowledge is used to partition the data into a number of coherent sets, where each set is explicitly described in terms of the secondary knowledge source. Instances from each set are then classified in a manner appropriate to the characteristics of the particular set. We present our methodology and outline a set of experimental results that demonstrate some advantages and some limitations of our approach. © 2008 Springer-Verlag Berlin Heidelberg.
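A minimal sketch of the partition-then-classify strategy described above, assuming scikit-learn and pandas; the column names, score bands, and the choice of decision-tree classifiers are illustrative, not taken from the paper:

```python
# Sketch: partition records by a score derived from an external knowledge
# source, then fit one classifier per coherent set. Names are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def partition_by_guideline(df, score_col="guideline_score"):
    """Split records into coherent sets using bands of a clinical
    scoring rule standing in for the secondary knowledge source."""
    return {"mild": df[df[score_col] < 4],
            "moderate": df[(df[score_col] >= 4) & (df[score_col] < 8)],
            "severe": df[df[score_col] >= 8]}

def fit_per_partition(bands, feature_cols, label_col="exacerbation_severity"):
    """Fit a separate classifier per set, so each model can suit the
    characteristics of its partition."""
    models = {}
    for name, subset in bands.items():
        clf = DecisionTreeClassifier(max_depth=4)
        clf.fit(subset[feature_cols], subset[label_col])
        models[name] = clf
    return models
```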

Relevance:

30.00%

Publisher:

Abstract:

The literature relating to haze formation, methods of separation, coalescence mechanisms, and models by which droplets <100 μm are collected, coalesced and transferred has been reviewed, with particular reference to particulate bed coalescers. The separation of secondary oil-water dispersions was studied experimentally using packed beds of monosized glass ballotini particles. The variables investigated were superficial velocity, bed depth, particle size, and the phase ratio and drop size distribution of the inlet secondary dispersion. A modified pump loop was used to generate secondary dispersions of toluene or Clairsol 350 in water with phase ratios between 0.5 and 6.0 v/v%. Inlet drop size distributions were determined using a Malvern Particle Size Analyser; effluent, coalesced droplets were sized by photography. Single phase flow pressure drop data were correlated by means of a Carman-Kozeny type equation. Correlations were obtained relating single and two phase pressure drops as (ΔP₂/μ_c)/(ΔP₁/μ_d) = k_p U^a L^b d_c^c d_p^d C_in^e. A flow equation was derived to correlate the two phase pressure drop data as ΔP₂/(ρ_c U²) = 8.64×10⁷ (d_c/D)^−0.27 (L/D)^0.71 (d_p/D)^−0.17 (N_Re)^1.5 (e₁)^−0.14 (C_in)^0.26. In a comparison between functions to characterise the inlet drop size distributions, a modification of the Weibull function provided the best fit of the experimental data. The general mean drop diameter was correlated by d_qp^(q−p) = d_fr^(q−p) · α^((p−q)/β) · Γ((q−3)/β + 1) / Γ((p−3)/β + 1). The measured and predicted mean inlet drop diameters agreed within ±15%. Secondary dispersion separation depends largely upon drop capture within a bed. A theoretical analysis of drop capture mechanisms in this work indicated that indirect interception and London-van der Waals mechanisms predominate. Mathematical models of dispersed phase concentration in the bed were developed by considering drop motion to be analogous to molecular diffusion. The number of possible channels in a bed was predicted from a model in which the pores comprised randomly-interconnected passageways between adjacent packing elements and axial flow occurred in cylinders on an equilateral triangular pitch. An expression was derived for the length of service of channels in a queuing system, leading to the prediction of filter coefficients. The insight provided into the mechanisms of drop collection and travel, and the correlations of operating parameters, should assist the design of industrial particulate bed coalescers.
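The flow equation can be evaluated directly. A minimal sketch in Python, assuming the dimensionless groupings as reconstructed above; symbol names are assumptions and should be checked against the thesis:

```python
# Sketch: evaluate the reconstructed two-phase pressure-drop correlation
# DeltaP2/(rho_c*U^2) = 8.64e7 (dc/D)^-0.27 (L/D)^0.71 (dp/D)^-0.17
#                       * N_Re^1.5 * e1^-0.14 * C_in^0.26
def two_phase_pressure_drop(rho_c, U, d_c, d_p, L, D, N_Re, e1, C_in):
    """Return DeltaP2 given continuous-phase density rho_c, superficial
    velocity U, collector/drop sizes, bed depth L, bed diameter D,
    Reynolds number, voidage e1 and inlet concentration C_in."""
    group = (8.64e7 * (d_c / D) ** -0.27 * (L / D) ** 0.71
             * (d_p / D) ** -0.17 * N_Re ** 1.5
             * e1 ** -0.14 * C_in ** 0.26)
    return group * rho_c * U ** 2
```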

Relevance:

30.00%

Publisher:

Abstract:

The sudden loss of plasma magnetic confinement, known as a disruption, is one of the major issues in a nuclear fusion machine such as JET (Joint European Torus). Disruptions pose very serious problems for the safety of the machine. The energy stored in the plasma is released to the machine structure in a few milliseconds, resulting in forces that at JET reach several meganewtons. The problem is even more severe in a nuclear fusion power station, where the forces are in the order of one hundred meganewtons. The events that occur during a disruption are still not well understood, even if some mechanisms that can lead to a disruption have been identified and can be used to predict them. Unfortunately, it is always a combination of these events that generates a disruption, and therefore it is not possible to use simple algorithms to predict it. This thesis analyses the possibility of using neural network algorithms to predict plasma disruptions in real time. This involves the determination of plasma parameters every few milliseconds. A plasma boundary reconstruction algorithm, XLOC, has been developed in collaboration with Dr. D. O'Brien and Dr. J. Ellis, capable of determining the plasma-wall distance every 2 milliseconds. The XLOC output has been used to develop a multilayer perceptron network to determine plasma parameters such as ℓi and qψ, with which a machine operational space has been experimentally defined. If the limits of this operational space are breached, the disruption probability increases considerably. Another approach to predicting disruptions is to use neural network classification methods to define the JET operational space. Two methods have been studied. The first method uses a multilayer perceptron network with a softmax activation function for the output layer. This method can be used for classifying the input patterns into various classes. In this case the plasma input patterns have been divided between disrupting and safe patterns, giving the possibility of assigning a disruption probability to every plasma input pattern. The second method determines the novelty of an input pattern by calculating the probability density distribution of successful plasma patterns that have been run at JET. The density distribution is represented as a mixture distribution, and its parameters are determined using the Expectation-Maximisation method. If the dataset used to determine the distribution parameters covers the machine operational space sufficiently well, then the patterns flagged as novel can be regarded as patterns belonging to a disrupting plasma. Together with these methods, a network has been designed to predict the vertical forces that a disruption can cause, in order to avoid running plasma configurations that are too dangerous. This network can be run before the pulse using the pre-programmed plasma configuration, or on-line, becoming a tool for stopping dangerous plasma configurations. All these methods have been implemented in real time on a dual Pentium Pro based machine. The Disruption Prediction and Prevention System has shown that internal plasma parameters can be determined on-line with good accuracy. The disruption detection algorithms also showed promising results, considering that JET is an experimental machine where new plasma configurations are constantly tested in an attempt to improve its performance.
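A rough sketch of the two classification ideas, using scikit-learn stand-ins (the thesis implements its own networks and EM code; the hyperparameters and threshold rule here are illustrative assumptions):

```python
# Sketch: (1) an MLP whose predicted probability acts as a disruption risk,
# and (2) a Gaussian mixture fitted by EM whose low-density region flags
# novel (potentially disrupting) plasma patterns.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.mixture import GaussianMixture

def fit_disruption_classifier(X, y):
    """X: plasma parameter patterns; y: 1 = disrupting, 0 = safe."""
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000)
    clf.fit(X, y)
    return clf  # clf.predict_proba(X)[:, 1] gives a disruption probability

def fit_novelty_model(X_safe, n_components=5, quantile=0.01):
    """Fit a mixture density to successful (safe) patterns; patterns whose
    log-density falls below a low quantile are flagged as novel."""
    gmm = GaussianMixture(n_components=n_components).fit(X_safe)
    threshold = np.quantile(gmm.score_samples(X_safe), quantile)
    return gmm, threshold  # novel if gmm.score_samples(x) < threshold
```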

Relevance:

30.00%

Publisher:

Abstract:

Hospital employees, who work in an environment with zero tolerance for error, face several stressors that may result in psychological, physiological, and behavioural strains and, subsequently, in suboptimal performance. This thesis includes two studies which investigate the stressor-to-strain-to-performance relationships in hospitals. The first study is a cross-sectional, multi-group investigation based on secondary data from 65,142 respondents in 172 acute/specialist UK NHS trusts. This model proposes that senior management leadership predicts social support and job design which, in turn, moderate stressors-to-strains across team structure. The results confirm the model's robustness. Regression analysis provides support for main effects and minimal support for the moderation hypotheses. Based on its conclusions and inherent limitations, study one therefore lays the framework for study two. The second study is a cross-sectional, multilevel investigation of the strain-reducing effects of the social environment on externally-rated unit-level performance, based on primary data from 1,137 employees in 136 units in a hospital in Malta. The term "social environment" refers to the prediction of the moderator variables, namely social support and decision latitude/control, by transformational leadership and team climate across hospital units. This study demonstrates that transformational leadership is positively associated with social support, whereas team climate is positively associated with both moderators. At the same time, it identifies a number of moderating effects which social support and decision latitude/control, both separately and together, have on specific stressor-to-strain relationships. The results show significant mediated stressor-to-strain-to-performance relationships. Furthermore, at the higher level, unit-level performance is positively associated with shared unit-level team climate and with unit-level vision, the latter being one of the five sub-dimensions of transformational leadership. Performance is also positively related to both transformational leadership and team climate when the two constructs are tested together. Few studies have linked the buffering effects of the social environment in occupational stress with performance; this research therefore strives to make a significant contribution to the occupational stress and performance literature with a focus on hospital practice. Indeed, the study highlights the wide-ranging and far-reaching implications of these findings for theory, management, and practice.
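A minimal sketch of the kind of moderated-regression test underlying a buffering hypothesis, assuming statsmodels; the column names are illustrative only, not the thesis variables:

```python
# Sketch: test moderation (buffering) via an interaction term. Strain is
# regressed on a stressor, a moderator, and their product; a negative
# interaction coefficient indicates the moderator buffers the stressor.
import statsmodels.formula.api as smf

def test_buffering(df):
    """df holds hypothetical columns: strain, work_demands, social_support.
    The '*' expands to main effects plus their interaction."""
    model = smf.ols("strain ~ work_demands * social_support", data=df).fit()
    interaction = model.params["work_demands:social_support"]
    return interaction, model.pvalues["work_demands:social_support"]
```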

Relevance:

30.00%

Publisher:

Abstract:

This thesis reports the development of a reliable method for predicting the response of large electric machines to electromagnetically induced vibration. The machines of primary interest are DC ship-propulsion motors, but much of the work reported has broader significance. The investigation has involved work in five principal areas. (1) The development and use of dynamic substructuring methods. (2) The development of special elements to represent individual machine components. (3) Laboratory-scale investigations to establish empirical values for properties which affect machine vibration levels. (4) Experiments on machines on the factory test-bed to provide data for correlation with prediction. (5) Reasoning with regard to the effect of various design features. The limiting factor in producing good models for machines in vibration is the time required for an analysis. Dynamic substructuring methods were adopted early in the project to maximise the efficiency of the analysis. A review of existing substructure-representation and composite-structure assembly methods includes comments on which are most suitable for this application. Methods developed by the author to accelerate analyses are presented in three appendices to the main volume. Despite significant advances in this area, the limiting factor in machine analyses is still time. The representation of individual machine components was addressed as another means by which the time required for an analysis could be reduced. This has resulted in the development of special elements which are more efficient than their finite-element counterparts. The laboratory-scale experiments reported were undertaken to establish empirical values for the properties of three distinct features: lamination stacks, bolted-flange joints in rings and cylinders, and the shimmed pole-yoke joint. These are central to the preparation of an accurate machine model. The theoretical methods are tested numerically and correlated with tests on two machines (running and static). A system has been devised with which the general electromagnetic forcing may be split into its most fundamental components. This is used to draw some conclusions about the probable effects of various design features.
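A toy sketch of the bookkeeping behind substructure assembly (scatter-adding per-substructure matrices into a global matrix); it illustrates the general technique only, not the author's special elements or accelerated methods:

```python
# Sketch: assemble two substructure stiffness matrices into a global
# matrix via local-to-global degree-of-freedom (DOF) maps.
import numpy as np

def assemble(K_a, dofs_a, K_b, dofs_b, n_global):
    """K_a, K_b: substructure stiffness matrices; dofs_a, dofs_b: lists
    mapping local DOF indices to global ones. Shared entries (interface
    DOFs) are summed, which couples the substructures."""
    K = np.zeros((n_global, n_global))
    for K_s, dofs in ((K_a, dofs_a), (K_b, dofs_b)):
        idx = np.asarray(dofs)
        K[np.ix_(idx, idx)] += K_s
    return K
```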

Relevance:

30.00%

Publisher:

Abstract:

Recent and potential changes in technology have resulted in the anticipation of increases in the frequency of job changes. This has led manpower policy makers to investigate the feasibility of incorporating the employment skills of job groups in the general prediction of future job learning and performance, with a view to establishing "job families" within which transfer might be considered reciprocally high. A structured job analysis instrument (the Position Analysis Questionnaire) is evaluated in terms of two distinct sets of scores: job dimensions and synthetically established attribute/trait profiles. Studies demonstrate that estimates of a job's structure/dimensions and requisite human attributes can be reliably established. Three alternative techniques of statistically assembling profiles of the requisite human attributes for jobs are found to have differential levels of reliability and differential degrees of validity in their estimation of the "actual" ability requirements of jobs. The utility of these two sets of job descriptors as representations of the cognitive structure similarity of job groups is investigated in a study which simulates a job transfer situation. The central role of the index of similarity used to assess the relationship between the "target" and "present" job is demonstrated. The relative extents to which job structure similarity and job attribute similarity are associated with positive transfer are investigated. The studies demonstrate that the dimensions of jobs, and more fruitfully their requisite human attributes, can serve as bases for predicting job transfer learning and performance. The nature of the index of similarity that optimally formulates predictions of transfer is such that networks of jobs might be established to which current job incumbents could be expected to transfer positively. The derivation of "job families" with anticipated reciprocal transfer consequences is considered to be less appropriate.
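Since the abstract stresses that the choice of similarity index is central, here is a small NumPy sketch of two candidate indices between requisite-attribute profiles; it is purely illustrative, not the thesis's actual index:

```python
# Sketch: two ways to compare the attribute profiles of a "present" and a
# "target" job. Correlation captures profile shape; Euclidean distance
# also penalises differences in overall level.
import numpy as np

def profile_similarity(present, target):
    present = np.asarray(present, dtype=float)
    target = np.asarray(target, dtype=float)
    r = np.corrcoef(present, target)[0, 1]   # shape similarity
    d = np.linalg.norm(present - target)     # level + shape difference
    return {"pearson_r": r, "euclidean_distance": d}

# Example: profiles over, say, five requisite attributes.
print(profile_similarity([3, 5, 2, 4, 1], [2, 5, 2, 5, 1]))
```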

Relevance:

30.00%

Publisher:

Abstract:

Membrane proteins, which constitute approximately 20% of most genomes, are poorly tractable targets for experimental structure determination; analysis by prediction and modelling therefore makes an important contribution to their ongoing study. Membrane proteins form two main classes: alpha-helical and beta-barrel trans-membrane proteins. Using a method based on Bayesian networks, which provide a flexible and powerful framework for statistical inference, we addressed alpha-helical topology prediction. This method has accuracies of 77.4% for prokaryotic proteins and 61.4% for eukaryotic proteins. The method described here represents an important advance in the computational determination of membrane protein topology and offers a useful, complementary tool for the analysis of membrane proteins in a range of applications.
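A toy illustration of the Bayesian reasoning involved, reduced to a single-residue posterior computed with Bayes' rule; the prior and emission probabilities are invented for illustration and are not the paper's trained network:

```python
# Toy sketch: posterior probability that a residue lies in a
# trans-membrane (TM) helix, given only whether it is hydrophobic.
HYDROPHOBIC = set("AILMFWVC")

P_TM = 0.25                 # assumed prior: residue in a TM helix
P_HYD_GIVEN_TM = 0.80       # assumed emission probabilities
P_HYD_GIVEN_LOOP = 0.35

def posterior_tm(residue):
    hyd = residue.upper() in HYDROPHOBIC
    like_tm = P_HYD_GIVEN_TM if hyd else 1 - P_HYD_GIVEN_TM
    like_loop = P_HYD_GIVEN_LOOP if hyd else 1 - P_HYD_GIVEN_LOOP
    evidence = like_tm * P_TM + like_loop * (1 - P_TM)
    return like_tm * P_TM / evidence  # Bayes' rule

print(posterior_tm("L"))  # hydrophobic leucine -> elevated TM posterior
```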

Relevance:

30.00%

Publisher:

Abstract:

Peptides are of great therapeutic potential as vaccines and drugs. Knowledge of physicochemical descriptors, including the partition coefficient P (commonly expressed in logarithmic form as logP), is useful for screening out unsuitable molecules and for the development of predictive Quantitative Structure-Activity Relationships (QSARs). In this paper we develop a new approach to the prediction of logP values for peptides based on an empirical relationship between global molecular properties and measured physical properties. Our method was successful in terms of peptide prediction (total r² = 0.641). The final model consisted of five physicochemical descriptors: molecular weight, number of single bonds, 2D-VDW volume, 2D-VSA hydrophobic, and 2D-VSA polar. The approach is peptide-specific and its predictive accuracy was high: overall, 67% of the peptides were predicted within ±0.5 log units of the experimental values. Our method thus represents a novel prediction method with proven predictive ability.
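A minimal sketch of fitting a five-descriptor linear model of this kind and scoring the ±0.5 log unit criterion, assuming scikit-learn; the descriptor column names are illustrative stand-ins for those listed above:

```python
# Sketch: linear QSAR on five global descriptors, reporting r^2 and the
# fraction of peptides predicted within +/-0.5 log units.
import numpy as np
from sklearn.linear_model import LinearRegression

DESCRIPTORS = ["mol_weight", "n_single_bonds", "vdw_volume_2d",
               "vsa_hydrophobic_2d", "vsa_polar_2d"]  # illustrative names

def fit_and_score(X, y):
    """X: (n_peptides, 5) descriptor matrix; y: measured logP values."""
    model = LinearRegression().fit(X, y)
    pred = model.predict(X)
    within_half_log = np.mean(np.abs(pred - y) <= 0.5)
    return model, model.score(X, y), within_half_log  # model, r^2, coverage
```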

Relevance:

30.00%

Publisher:

Abstract:

Learning user interests from online social networks helps to better understand user behaviors and provides useful guidance for designing user-centric applications. Apart from analyzing users' online content, it is also important to consider users' social connections in the social Web. Graph regularization methods have been widely used in various text mining tasks, since they can leverage the graph structure information extracted from data. Previously, graph regularization methods operated under the cluster assumption: nearby nodes are more similar, and nodes on the same structure (typically referred to as a cluster or a manifold) are likely to be similar. We argue that learning user interests from complex, sparse, and dynamic social networks should instead be based on the link structure assumption, under which node similarities are evaluated from local link structures rather than from explicit links between two nodes. We propose a regularization framework based on the relation bipartite graph, which can be constructed from any type of relation. Using Twitter as our case study, we evaluate the proposed framework on social networks built from retweet relations. Both quantitative and qualitative experiments show that our method outperforms several competitive baselines in learning user interests over a set of predefined topics. It also gives superior results compared to the baselines on retweet prediction and topical authority identification. © 2014 ACM.
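A compact sketch of the standard closed form behind graph regularization (a fit term plus a smoothness term over the graph Laplacian); the paper's relation bipartite graph would supply the affinity matrix W, which is assumed given here:

```python
# Sketch: graph-regularized interest scores. Minimises
#   ||f - y||^2 + lam * f^T L f
# whose optimum solves (I + lam*L) f = y, with L the graph Laplacian.
import numpy as np

def propagate_interests(W, y, lam=1.0):
    """W: symmetric nonnegative affinity matrix (n x n); y: observed
    interest scores, 0 where unknown. Returns smoothed scores f."""
    D = np.diag(W.sum(axis=1))
    L = D - W                                  # combinatorial Laplacian
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) + lam * L, y)
```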

Relevance:

30.00%

Publisher:

Abstract:

The ability to define and manipulate the interaction of peptides with MHC molecules has immense immunological utility, with applications in epitope identification, vaccine design, and immunomodulation. However, the methods currently available for prediction of peptide-MHC binding are far from ideal. We recently described the application of a bioinformatic prediction method based on quantitative structure-affinity relationship methods to peptide-MHC binding. In this study we demonstrate the predictivity and utility of this approach. We determined the binding affinities of a set of 90 nonamer peptides for the MHC class I allele HLA-A*0201 using an in-house, FACS-based MHC stabilization assay, and from these data we derived an additive quantitative structure-affinity relationship model for peptide interaction with the HLA-A*0201 molecule. Using this model we then designed a series of high-affinity HLA-A2-binding peptides. Experimental analysis revealed that all these peptides showed high binding affinities to the HLA-A*0201 molecule, significantly higher than the highest previously recorded. In addition, by systematic substitution at principal anchor positions 2 and 9, we showed that high-binding peptides are tolerant to a wide range of nonpreferred amino acids. Our results support a model in which the affinity of peptide binding to MHC is determined by the interactions of amino acids at multiple positions with the MHC molecule and may be enhanced by enthalpic cooperativity between these component interactions.
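A toy sketch of the additive idea, with binding affinity modelled as a sum of position-specific amino-acid contributions fitted by least squares over one-hot encoded nonamers; it omits the cooperativity terms the study discusses and is not the authors' actual model:

```python
# Sketch: additive peptide-MHC model. Each (position, amino acid) pair
# gets one coefficient; predicted affinity is the sum over 9 positions.
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def one_hot(peptides):
    """Encode 9-mer peptides as (n, 9*20) indicator vectors."""
    X = np.zeros((len(peptides), 9 * len(AA)))
    for i, pep in enumerate(peptides):
        for pos, aa in enumerate(pep):
            X[i, pos * len(AA) + AA.index(aa)] = 1.0
    return X

def fit_additive(peptides, affinities):
    """Least-squares contributions; returns a (9, 20) coefficient table."""
    X = one_hot(peptides)
    coef, *_ = np.linalg.lstsq(X, np.asarray(affinities), rcond=None)
    return coef.reshape(9, len(AA))
```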

Relevance:

30.00%

Publisher:

Abstract:

Cellular peptide vaccines contain T-cell epitopes. The main prerequisite for a peptide to act as a T-cell epitope is that it binds to a major histocompatibility complex (MHC) protein. Identifying peptide MHC binders is an extremely costly experimental challenge since human MHCs, known as human leukocyte antigens (HLAs), are highly polymorphic and polygenic. Here we present EpiDOCK, the first structure-based server for MHC class II binding prediction. EpiDOCK predicts binding to the 23 most frequent human MHC class II proteins. It identifies 90% of true binders and 76% of true non-binders, with an overall accuracy of 83%. EpiDOCK is freely accessible at http://epidock.ddg-pharmfac.net. © The Author 2013. Published by Oxford University Press. All rights reserved.
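A quick arithmetic check that the reported figures are mutually consistent: 90% sensitivity and 76% specificity average to the stated 83% accuracy when the evaluation set is balanced (an assumption about the test set, not a statement from the paper):

```python
# Overall accuracy from sensitivity and specificity, weighted by the
# fraction of positives (binders) in the evaluation set.
def overall_accuracy(sensitivity, specificity, pos_fraction=0.5):
    return sensitivity * pos_fraction + specificity * (1 - pos_fraction)

print(overall_accuracy(0.90, 0.76))  # 0.83 for a balanced set
```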

Relevance:

30.00%

Publisher:

Abstract:

Fenton-chemistry-based detemplation combined with secondary treatments offers options for tuning the hierarchical porosity of SBA-15. This approach has been studied on a series of SBA-15 mesophases and compared to conventional calcination. The as-synthesized and detemplated materials were studied with regard to their template content (TGA, CHN), structure (SAXS, TEM), surface hydroxylation (Blin-Carteret approach), and texture (high-resolution argon physisorption). Fenton detemplation achieves 99% template removal, leading to highly hydroxylated materials. The structure is better preserved when a secondary treatment is applied after the Fenton oxidation, owing to the intense capillary forces during drying in water. Two successful approaches are presented: drying in a low-surface-tension solvent (such as n-BuOH) and a hydrothermal stabilization that further condenses the structure and makes it structurally more robust. Both approaches give rise to remarkably low structural shrinkage, lower than calcination and the direct water-dried Fenton route. Interestingly, the resulting textural features are remarkably different. The n-BuOH exchange route gives rise to highly hierarchical structures with enhanced interconnecting pores and the highest surface areas. The hydrothermal stabilization produces large-pore SBA-15 structures with high pore volume, intermediate interconnectivity, and minimal micropores. The hierarchical texture can therefore be fine-tuned in these two ways while the template is removed under mild conditions.