928 results for Secondary Structure Prediction
Abstract:
The literature relating to haze formation, methods of separation, coalescence mechanisms, and models by which droplets <100 μm are collected, coalesced, and transferred has been reviewed, with particular reference to particulate bed coalescers. The separation of secondary oil-water dispersions was studied experimentally using packed beds of monosized glass ballotini particles. The variables investigated were superficial velocity, bed depth, particle size, and the phase ratio and drop size distribution of the inlet secondary dispersion. A modified pump loop was used to generate secondary dispersions of toluene or Clairsol 350 in water with phase ratios between 0.5 and 6.0 v/v%. Inlet drop size distributions were determined using a Malvern Particle Size Analyser; effluent, coalesced droplets were sized by photography. Single-phase flow pressure drop data were correlated by means of a Carman-Kozeny type equation. Correlations were obtained relating single- and two-phase pressure drops as (ΔP_2/μ_c)/(ΔP_1/μ_d) = k_p U^a L^b d_c^c d_p^d C_in^e. A flow equation was derived to correlate the two-phase pressure drop data as ΔP_2/(ρ_c U^2) = 8.64×10^7 (d_c/D)^-0.27 (L/D)^0.71 (d_p/D)^-0.17 N_Re^1.5 e_1^-0.14 C_in^0.26. In a comparison between functions to characterise the inlet drop size distributions, a modification of the Weibull function provided the best fit to the experimental data. The general mean drop diameter was correlated by d_qp^(q-p) = d_fr^(q-p) · α^((q-p)/β) · Γ((q-3)/β + 1) / Γ((p-3)/β + 1). The measured and predicted mean inlet drop diameters agreed within ±15%. Secondary dispersion separation depends largely upon drop capture within a bed. A theoretical analysis of drop capture mechanisms in this work indicated that indirect interception and London-van der Waals mechanisms predominate. Mathematical models of dispersed phase concentration in the bed were developed by considering drop motion to be analogous to molecular diffusion. The number of possible channels in a bed was predicted from a model in which the pores comprised randomly interconnected passageways between adjacent packing elements and axial flow occurred in cylinders on an equilateral triangular pitch. An expression was derived for the length of service channels in a queuing system, leading to the prediction of filter coefficients. The insight provided into the mechanisms of drop collection and travel, and the correlations of operating parameters, should assist the design of industrial particulate bed coalescers.
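To make the general-mean-diameter correlation concrete, the sketch below evaluates d_qp numerically from fitted Weibull parameters. This is a minimal illustration of the reconstructed formula only; the values of d_fr, α, and β are invented, not taken from the thesis.

```python
# Minimal sketch: evaluate the general mean drop diameter d_qp from a
# Weibull drop-size distribution, per the correlation reconstructed above.
# The parameter values (d_fr, alpha, beta) are hypothetical.
from math import gamma

def d_qp(q: int, p: int, d_fr: float, alpha: float, beta: float) -> float:
    """General mean diameter d_qp for a Weibull distribution:

    d_qp^(q-p) = d_fr^(q-p) * alpha^((q-p)/beta)
                 * Gamma((q-3)/beta + 1) / Gamma((p-3)/beta + 1)
    """
    ratio = gamma((q - 3) / beta + 1) / gamma((p - 3) / beta + 1)
    return d_fr * alpha ** (1 / beta) * ratio ** (1 / (q - p))

# Example: Sauter mean diameter d_32 for an assumed distribution (metres).
print(d_qp(q=3, p=2, d_fr=50e-6, alpha=1.0, beta=2.5))
```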
Abstract:
The sudden loss of plasma magnetic confinement, known as a disruption, is one of the major issues in a nuclear fusion machine such as JET (Joint European Torus). Disruptions pose very serious problems for the safety of the machine. The energy stored in the plasma is released to the machine structure in a few milliseconds, resulting in forces that at JET reach several meganewtons. The problem is even more severe in a nuclear fusion power station, where the forces are of the order of one hundred meganewtons. The events that occur during a disruption are still not well understood, even though some mechanisms that can lead to a disruption have been identified and can be used to predict them. Unfortunately, it is always a combination of these events that generates a disruption, and therefore it is not possible to use simple algorithms to predict it. This thesis analyses the possibility of using neural network algorithms to predict plasma disruptions in real time. This involves the determination of plasma parameters every few milliseconds. A plasma boundary reconstruction algorithm, XLOC, has been developed in collaboration with Dr. D. O'Brien and Dr. J. Ellis, capable of determining the plasma/wall distance every 2 milliseconds. The XLOC output has been used to develop a multilayer perceptron network to determine plasma parameters such as ℓ_i and q_ψ, with which a machine operational space has been experimentally defined. If the limits of this operational space are breached, the disruption probability increases considerably. Another approach to predicting disruptions is to use neural network classification methods to define the JET operational space. Two methods have been studied. The first method uses a multilayer perceptron network with a softmax activation function for the output layer. This method can be used for classifying the input patterns into various classes. In this case, the plasma input patterns have been divided between disrupting and safe patterns, giving the possibility of assigning a disruption probability to every plasma input pattern. The second method determines the novelty of an input pattern by calculating the probability density distribution of successful plasma patterns that have been run at JET. The density distribution is represented as a mixture distribution, and its parameters are determined using the Expectation-Maximisation method. If the dataset used to determine the distribution parameters covers the machine operational space sufficiently well, then the patterns flagged as novel can be regarded as patterns belonging to a disrupting plasma. Together with these methods, a network has been designed to predict the vertical forces that a disruption can cause, in order to avoid running plasma configurations that are too dangerous. This network can be run before the pulse using the pre-programmed plasma configuration, or online, becoming a tool that allows dangerous plasma configurations to be stopped. All these methods have been implemented in real time on a dual Pentium Pro based machine. The Disruption Prediction and Prevention System has shown that internal plasma parameters can be determined on-line with good accuracy. The disruption detection algorithms also showed promising results, considering that JET is an experimental machine where new plasma configurations are constantly being tested in an attempt to improve its performance.
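The second classification method, density estimation with a mixture model fitted by Expectation-Maximisation, can be sketched compactly. Below is a minimal illustration using scikit-learn's GaussianMixture; the feature dimensions, data, and novelty threshold are hypothetical stand-ins, not the thesis's actual JET signals.

```python
# Minimal sketch: novelty detection over "successful pulse" patterns via a
# Gaussian mixture fitted with EM, as in the second method described above.
# Training data, feature count, and the novelty threshold are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
safe_patterns = rng.normal(size=(1000, 4))        # stand-in for safe plasma patterns
new_patterns = rng.normal(3.0, 1.0, size=(5, 4))  # stand-in for unseen patterns

# Fit the mixture density p(x) on safe pulses only (EM under the hood).
gmm = GaussianMixture(n_components=3, random_state=0).fit(safe_patterns)

# Patterns with very low density under p(x) are flagged as novel,
# i.e. candidates for belonging to a disrupting plasma.
threshold = np.quantile(gmm.score_samples(safe_patterns), 0.01)
is_novel = gmm.score_samples(new_patterns) < threshold
print(is_novel)
```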
Abstract:
Hospital employees who work in an environment with zero tolerance for error face several stressors that may result in psychological, physiological, and behavioural strains and, subsequently, in suboptimal performance. This thesis includes two studies which investigate the stressor-to-strain-to-performance relationships in hospitals. The first study is a cross-sectional, multi-group investigation based on secondary data from 65,142 respondents in 172 acute/specialist UK NHS trusts. This model proposes that senior management leadership predicts social support and job design which, in turn, moderate stressor-to-strain relationships across team structures. The results confirm the model's robustness. Regression analysis provides support for main effects and minimal support for the moderation hypotheses. Based on its conclusions and inherent limitations, study one therefore lays the framework for study two. The second study is a cross-sectional, multilevel investigation of the strain-reducing effects of the social environment on externally rated unit-level performance, based on primary data from 1,137 employees in 136 units in a hospital in Malta. The term "social environment" refers to the prediction of the moderator variables, namely social support and decision latitude/control, by transformational leadership and team climate across hospital units. This study demonstrates that transformational leadership is positively associated with social support, whereas team climate is positively associated with both moderators. At the same time, it identifies a number of moderating effects which social support and decision latitude/control, both separately and together, had on specific stressor-to-strain relationships. The results show significant mediated stressor-to-strain-to-performance relationships. Furthermore, at the higher level, unit-level performance is positively associated with shared unit-level team climate and with unit-level vision, the latter being one of the five sub-dimensions of transformational leadership. Performance is also positively related to both transformational leadership and team climate when the two constructs are tested together. Few studies have linked the buffering effects of the social environment in occupational stress with performance, so this research strives to make a significant contribution to the occupational stress and performance literature with a focus on hospital practice. Indeed, the study highlights the wide-ranging and far-reaching implications of these findings for theory, management, and practice.
Abstract:
This thesis reports the development of a reliable method for predicting the response to electromagnetically induced vibration in large electric machines. The machines of primary interest are DC ship-propulsion motors, but much of the work reported has broader significance. The investigation has involved work in five principal areas: (1) the development and use of dynamic substructuring methods; (2) the development of special elements to represent individual machine components; (3) laboratory-scale investigations to establish empirical values for properties which affect machine vibration levels; (4) experiments on machines on the factory test-bed to provide data for correlation with prediction; and (5) reasoning with regard to the effect of various design features. The limiting factor in producing good vibration models of machines is the time required for an analysis. Dynamic substructuring methods were adopted early in the project to maximise the efficiency of the analysis. A review of existing substructure-representation and composite-structure assembly methods includes comments on which are most suitable for this application. Methods developed by the author to accelerate analyses are presented in three appendices to the main volume. Despite significant advances in this area, the limiting factor in machine analyses is still time. The representation of individual machine components was addressed as another means by which the time required for an analysis could be reduced. This has resulted in the development of special elements which are more efficient than their finite-element counterparts. The laboratory-scale experiments reported were undertaken to establish empirical values for the properties of three distinct features: lamination stacks, bolted-flange joints in rings and cylinders, and the shimmed pole-yoke joint. These are central to the preparation of an accurate machine model. The theoretical methods are tested numerically and correlated with tests on two machines (running and static). A system has been devised with which the general electromagnetic forcing may be split into its most fundamental components. This is used to draw some conclusions about the probable effects of various design features.
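As background on substructure reduction, the sketch below shows classical Guyan (static) condensation on a toy spring chain. This is a generic textbook scheme of the kind dynamic substructuring builds on, not the author's accelerated methods; the 4-DOF model and stiffness value are illustrative.

```python
# Minimal sketch: Guyan (static) condensation, a classical substructure-
# reduction step. Internal DOFs are eliminated, leaving a reduced stiffness
# matrix on the retained (interface) DOFs. Model values are illustrative.
import numpy as np

k = 1.0e6  # spring stiffness, N/m (arbitrary)
# Stiffness matrix of a 4-DOF spring chain with free ends.
K = k * np.array([[ 1, -1,  0,  0],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [ 0,  0, -1,  1]], dtype=float)

master = [0, 3]  # retained (interface) DOFs
slave = [1, 2]   # condensed-out internal DOFs

Kmm = K[np.ix_(master, master)]
Kms = K[np.ix_(master, slave)]
Ksm = K[np.ix_(slave, master)]
Kss = K[np.ix_(slave, slave)]

# Reduced stiffness: K_red = Kmm - Kms Kss^{-1} Ksm
K_red = Kmm - Kms @ np.linalg.solve(Kss, Ksm)
print(K_red)  # three springs in series between the retained ends: k/3
```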
Abstract:
Recent and potential changes in technology have resulted in the anticipation of increases in the frequency of job changes. This has led manpower policy makers to investigate the feasibility of incorporating the employment skills of job groups in the general prediction of future job learning and performance, with a view to establishing "job families" within which transfer might be considered reciprocally high. A structured job analysis instrument (the Position Analysis Questionnaire) is evaluated in terms of two distinct sets of scores: job dimensions and synthetically established attribute/trait profiles. Studies demonstrate that estimates of a job's structure/dimensions and requisite human attributes can be reliably established. Three alternative techniques of statistically assembling profiles of the requisite human attributes for jobs are found to have differential levels of reliability and differential degrees of validity in their estimation of the "actual" ability requirements of jobs. The utility of these two sets of job descriptors as representations of the cognitive structure similarity of job groups is investigated in a study which simulates a job transfer situation. The central role of the index of similarity used to assess the relationship between "target" and "present" job is demonstrated. The relative extents to which job structure similarity and job attribute similarity are associated with positive transfer are investigated. The studies demonstrate that the dimensions of jobs and, more fruitfully, their requisite human attributes can serve as bases for predicting job transfer learning and performance. The nature of the index of similarity used to optimally formulate predictions of transfer is such that networks of jobs might be established to which current job incumbents could be expected to transfer positively. The derivation of "job families" with anticipated reciprocal transfer consequences is considered to be less appropriate.
Abstract:
Membrane proteins, which constitute approximately 20% of most genomes, are poorly tractable targets for experimental structure determination, so analysis by prediction and modelling makes an important contribution to their ongoing study. Membrane proteins form two main classes: alpha-helical and beta-barrel trans-membrane proteins. Using a method based on Bayesian networks, which provide a flexible and powerful framework for statistical inference, we addressed alpha-helical topology prediction. The method achieves accuracies of 77.4% for prokaryotic proteins and 61.4% for eukaryotic proteins. The method described here represents an important advance in the computational determination of membrane protein topology and offers a useful, complementary tool for the analysis of membrane proteins across a range of applications.
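To give a flavour of probabilistic topology prediction, the sketch below decodes a membrane/loop labelling with a two-state hidden Markov model, a simple special case of the Bayesian-network framing used in this work. The hydrophobicity scale subset, transition/emission probabilities, and test sequence are all toy values.

```python
# Minimal sketch: two-state (membrane / loop) Viterbi decoding over a
# hydrophobicity signal. All probabilities here are toy values, not the
# published model's parameters.
import numpy as np

KD = {'A': 1.8, 'I': 4.5, 'L': 3.8, 'F': 2.8, 'V': 4.2,      # hydrophobic
      'G': -0.4, 'S': -0.8, 'R': -4.5, 'K': -3.9, 'D': -3.5}  # subset only

def predict_topology(seq: str) -> str:
    """Label each residue M (membrane) or L (loop) with a 2-state Viterbi."""
    log_trans = np.log(np.array([[0.9, 0.1],    # M->M, M->L
                                 [0.1, 0.9]]))  # L->M, L->L
    def log_emit(state, aa):
        # Toy emission model: logistic squashing of the Kyte-Doolittle value.
        p_m = 1.0 / (1.0 + np.exp(-KD.get(aa, 0.0)))  # P(membrane-like | aa)
        return np.log(p_m if state == 0 else 1.0 - p_m)

    n = len(seq)
    score = np.full((n, 2), -np.inf)
    back = np.zeros((n, 2), dtype=int)
    score[0] = [log_emit(0, seq[0]), log_emit(1, seq[0])]
    for i in range(1, n):
        for s in (0, 1):
            cand = score[i - 1] + log_trans[:, s]
            back[i, s] = int(np.argmax(cand))
            score[i, s] = cand[back[i, s]] + log_emit(s, seq[i])
    # Backtrace the most probable state path.
    path = [int(np.argmax(score[-1]))]
    for i in range(n - 1, 0, -1):
        path.append(back[i, path[-1]])
    return ''.join('ML'[s] for s in reversed(path))

print(predict_topology("RKDSGAILLVVFFAILGSRKD"))
```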
Abstract:
Peptides are of great therapeutic potential as vaccines and drugs. Knowledge of physicochemical descriptors, including the partition coefficient P (commonly expressed in logarithmic form as logP), is useful for screening out unsuitable molecules and also for the development of predictive Quantitative Structure-Activity Relationships (QSARs). In this paper we develop a new approach to the prediction of logP values for peptides based on an empirical relationship between global molecular properties and measured physical properties. Our method was successful in terms of peptide prediction (total r² = 0.641). The final model consisted of five physicochemical descriptors (molecular weight, number of single bonds, 2D-VDW volume, 2D-VSA hydrophobic, and 2D-VSA polar). The approach is peptide specific and its predictive accuracy was high: overall, 67% of the peptides were predicted to within ±0.5 log units of the experimental values. Our method thus represents a novel prediction method with proven predictive ability.
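The model described is an empirical linear relationship between five global descriptors and measured logP. Below is a minimal sketch of fitting such a model by least squares; the descriptor matrix and "measured" logP values are randomly generated stand-ins, not the paper's peptide set.

```python
# Minimal sketch: least-squares fit of logP on five global molecular
# descriptors, mirroring the form of the model described above.
import numpy as np

rng = np.random.default_rng(1)
# Columns stand in for: MW, n_single_bonds, 2D-VDW volume,
# 2D-VSA hydrophobic, 2D-VSA polar (synthetic, standardised values).
X = rng.normal(size=(60, 5))
true_w = np.array([0.4, 0.1, 0.3, 0.5, -0.6])
y = X @ true_w + rng.normal(scale=0.5, size=60)  # synthetic "measured" logP

# Fit: logP ~ w.x + b
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("r^2 =", 1 - ss_res / ss_tot)

# Accuracy metric analogous to the paper's: fraction within +/-0.5 log units.
print("within 0.5 log units:", np.mean(np.abs(y - pred) <= 0.5))
```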
Abstract:
Learning user interests from online social networks helps to better understand user behaviours and provides useful guidance for designing user-centric applications. Apart from analysing users' online content, it is also important to consider users' social connections in the social Web. Graph regularization methods have been widely used in various text mining tasks and can leverage the graph structure information extracted from data. Previously, graph regularization methods have operated under the cluster assumption: that nearby nodes are more similar, and that nodes on the same structure (typically referred to as a cluster or a manifold) are likely to be similar. We argue that learning user interests from complex, sparse, and dynamic social networks should instead be based on the link structure assumption, under which node similarities are evaluated from local link structures rather than from explicit links between two nodes. We propose a regularization framework based on the relation bipartite graph, which can be constructed from any type of relation. Using Twitter as our case study, we evaluate the proposed framework on social networks built from retweet relations. Both quantitative and qualitative experiments show that our proposed method outperforms several competitive baselines in learning user interests over a set of predefined topics. It also gives superior results compared with the baselines on retweet prediction and topical authority identification.
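The core of quadratic graph regularization can be sketched in a few lines: interest scores f are chosen to stay close to the observed signal y while being smooth over the graph Laplacian L. The toy graph below links users through shared relation nodes as a rough stand-in for the relation bipartite construction; the incidence matrix, scores, and regularization weight are illustrative only.

```python
# Minimal sketch: quadratic graph regularization of topical interest scores.
# Users are connected via shared relation nodes (e.g. co-retweeted tweets);
# the graph, observed scores, and lambda are toy values.
import numpy as np

# User-by-relation incidence (rows = users, cols = e.g. retweeted tweets).
B = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

W = B @ B.T                  # users weighted by shared relation nodes
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W  # combinatorial graph Laplacian

y = np.array([1.0, 0.0, 0.0, 0.0])  # observed interest signal for one topic
lam = 0.5

# Closed-form minimiser of ||f - y||^2 + lam * f' L f
f = np.linalg.solve(np.eye(4) + lam * L, y)
print(f)  # scores propagate via local link structure, not just direct edges
```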
Abstract:
The ability to define and manipulate the interaction of peptides with MHC molecules has immense immunological utility, with applications in epitope identification, vaccine design, and immunomodulation. However, the methods currently available for prediction of peptide-MHC binding are far from ideal. We recently described the application of a bioinformatic prediction method based on quantitative structure-affinity relationship methods to peptide-MHC binding. In this study we demonstrate the predictivity and utility of this approach. We determined the binding affinities of a set of 90 nonamer peptides for the MHC class I allele HLA-A*0201 using an in-house, FACS-based, MHC stabilization assay, and from these data we derived an additive quantitative structure-affinity relationship model for peptide interaction with the HLA-A*0201 molecule. Using this model we then designed a series of high affinity HLA-A2-binding peptides. Experimental analysis revealed that all these peptides showed high binding affinities to the HLA-A*0201 molecule, significantly higher than the highest previously recorded. In addition, by the use of systematic substitution at principal anchor positions 2 and 9, we showed that high binding peptides are tolerant to a wide range of nonpreferred amino acids. Our results support a model in which the affinity of peptide binding to MHC is determined by the interactions of amino acids at multiple positions with the MHC molecule and may be enhanced by enthalpic cooperativity between these component interactions.
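A minimal sketch of the additive idea follows: log binding affinity is modelled as a sum of position-specific amino-acid contributions, fitted here by ridge-regularised least squares. The nonamer sequences and affinities are random stand-ins, not the 90-peptide HLA-A*0201 dataset, and the regularisation is an assumption of this sketch rather than the paper's procedure.

```python
# Minimal sketch: an additive QSAR model in which log-affinity is a sum of
# position-specific amino-acid contributions. Data are random stand-ins.
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(2)
peptides = ["".join(rng.choice(list(AA), 9)) for _ in range(90)]

def one_hot(pep: str) -> np.ndarray:
    """Encode a nonamer as a 9x20 indicator vector (position x residue)."""
    x = np.zeros(9 * 20)
    for pos, aa in enumerate(pep):
        x[pos * 20 + AA.index(aa)] = 1.0
    return x

X = np.array([one_hot(p) for p in peptides])
y = rng.normal(size=90)  # stand-in for measured log binding affinities

# 180 parameters from 90 samples is underdetermined, so ridge-regularised
# least squares keeps the fit stable.
lam = 1.0
coef = np.linalg.solve(X.T @ X + lam * np.eye(180), X.T @ y)

# Predicted affinity of a new peptide = sum of its positional contributions.
print(one_hot("ALAKAAAAV") @ coef)
```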
Abstract:
Cellular peptide vaccines contain T-cell epitopes. The main prerequisite for a peptide to act as a T-cell epitope is that it binds to a major histocompatibility complex (MHC) protein. Identifying peptide-MHC binders experimentally is extremely costly, since human MHCs, known as human leukocyte antigens (HLAs), are highly polymorphic and polygenic. Here we present EpiDOCK, the first structure-based server for MHC class II binding prediction. EpiDOCK predicts binding to the 23 most frequent human MHC class II proteins. It identifies 90% of true binders and 76% of true non-binders, for an overall accuracy of 83%. EpiDOCK is freely accessible at http://epidock.ddg-pharmfac.net.
Modifying the hierarchical porosity of SBA-15 via mild detemplation followed by secondary treatments
Abstract:
Fenton-chemistry-based detemplation combined with secondary treatments offers options for tuning the hierarchical porosity of SBA-15. This approach has been studied on a series of SBA-15 mesophases and compared with conventional calcination. The as-synthesized and detemplated materials were characterised with regard to their template content (TGA, CHN), structure (SAXS, TEM), surface hydroxylation (Blin-Carteret approach), and texture (high-resolution argon physisorption). Fenton detemplation achieves 99% template removal, leading to highly hydroxylated materials. The structure is better preserved when a secondary treatment is applied after the Fenton oxidation, owing to the intense capillary forces during drying in water. Two successful approaches are presented: drying in a low-surface-tension solvent (such as n-BuOH) and a hydrothermal stabilization that further condenses the structure and makes it structurally more robust. Both approaches give rise to remarkably low structural shrinkage, lower than either calcination or the directly water-dried Fenton route. Interestingly, the resulting textural features are markedly different. The n-BuOH exchange route gives rise to highly hierarchical structures with enhanced interconnecting pores and the highest surface areas, whereas the hydrothermal stabilization produces large-pore SBA-15 structures with high pore volume, intermediate interconnectivity, and minimal micropores. The hierarchical texture can therefore be fine-tuned in these two ways while the template is removed under mild conditions.
Abstract:
The most fundamental and challenging function of government is the effective and efficient delivery of services to local taxpayers and businesses. Counties, once known as the “dark continent” of American government, have recently become a major player in the provision of services. Population growth and suburbanization have increased service demands, while the counties' role as service provider to incorporated residents has also expanded due to additional federal and state mandates. County governments are under unprecedented pressure and scrutiny to meet citizens' and elected officials' demands for high-quality, equitable delivery of services at the lowest possible cost, while contending with anti-tax sentiments, greatly decreased state and federal support, and exceptionally costly and complex health and public safety problems. This study tested the reform government theory proposition that reformed structures of county government positively correlate with efficient service delivery. A county government reform index was developed for this dissertation, comprising form of government, home-rule status, method of election, number of government jurisdictions, and number of elected officials. The county government reform index and a measure of relative structural fragmentation were used to assess their impact on two measures of service output: mean county road pavement condition and county road maintenance expenditures. The study's multi-level design triangulated results from different data sources and methods of analysis. Data were collected from semi-structured interviews of county officials, secondary archival sources, and a survey of 544 elected and appointed officials from Florida's 67 counties. The results of the three sources of data converged in finding that reformed Florida counties are more likely than unreformed counties to provide better road service and to spend less on road expenditures. The same results were found for unfragmented Florida counties. Because both the county government reform index and the fragmentation variables were specified acknowledging the reform theory as well as elements of the public-choice model, the results help explain contradictory findings in the urban service research. Therefore, as suggested by the corroborated findings of this dissertation, reformed as well as unfragmented counties are better providers of road maintenance service and deliver it in a less costly manner. These findings hold even though the variables were specified to capture theoretical arguments from the consolidated as well as the public-choice theories, suggesting a way to advance the debate beyond the consolidated-fragmented dichotomy of urban governance.
Abstract:
Moving objects database systems are the most challenging sub-category of spatio-temporal database systems. A database system that updates the location information of GPS-equipped moving vehicles in real time has to meet even stricter requirements. Currently existing data storage models and indexing mechanisms work well only when the number of moving objects in the system is relatively small. This dissertation research aimed at the real-time tracking and history retrieval of massive numbers of vehicles moving on road networks. A total solution has been provided for the real-time update of vehicles' location and motion information, range queries on current and historical data, and prediction of vehicles' movement in the near future. To achieve these goals, a new approach called Segmented Time Associated to Partitioned Space (STAPS) was first proposed in this dissertation for building and manipulating the indexing structures for moving objects databases. Applying the STAPS approach, an indexing structure that associates a time interval tree to each road segment was developed for real-time database systems of vehicles moving on road networks. The indexing structure uses affordable storage to support real-time data updates and efficient query processing. The data update and query processing performance it provides is consistent, without restrictions such as a time window or an assumption of linear moving trajectories. An application system design based on a distributed system architecture with centralized organization was developed to maximally support the proposed data and indexing structures. The suggested system architecture is highly scalable and flexible. Finally, based on a real-world application model of vehicles moving region-wide, the main implementation issues for such a system were addressed.
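The per-segment indexing idea can be sketched simply: each road segment keeps its own time-interval index of vehicle presence. In the minimal sketch below, a sorted list stands in for a real interval tree, and the class name, segment IDs, and data are illustrative rather than the dissertation's actual STAPS structures.

```python
# Minimal sketch of the per-segment indexing idea described above: each road
# segment keeps its own time-interval index of vehicle presence. A sorted
# list stands in for a real interval tree; a tree would prune the scan.
import bisect
from collections import defaultdict

class SegmentIndex:
    """Per-road-segment index of (t_enter, t_leave, vehicle_id) intervals."""

    def __init__(self):
        self._intervals = defaultdict(list)  # segment_id -> sorted by t_enter

    def insert(self, segment_id: str, t_enter: float, t_leave: float, vid: str):
        bisect.insort(self._intervals[segment_id], (t_enter, t_leave, vid))

    def query(self, segment_id: str, t0: float, t1: float) -> list:
        """Vehicles on `segment_id` at any time overlapping [t0, t1]."""
        return [vid for (a, b, vid) in self._intervals[segment_id]
                if a <= t1 and b >= t0]

idx = SegmentIndex()
idx.insert("I-95:seg42", 10.0, 25.0, "car-7")
idx.insert("I-95:seg42", 30.0, 40.0, "car-9")
print(idx.query("I-95:seg42", 20.0, 35.0))  # -> ['car-7', 'car-9']
```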
Abstract:
The research presented in this dissertation investigated selected processes involving baryons and nuclei in hard scattering reactions. These processes are characterized by the production of particles with large energies and transverse momenta. Through these processes, this work explored both the constituent (quark) structure of baryons (specifically nucleons and Δ-isobars) and the mechanisms through which the interactions between these constituents ultimately control the selected reactions. The first such reaction is hard nucleon-nucleon elastic scattering, which was studied here considering quark exchange between the nucleons to be the dominant mechanism of interaction in the constituent picture. In particular, it was found that an angular asymmetry exhibited by proton-neutron elastic scattering data is explained within this framework if a quark-diquark picture dominates the nucleon's structure instead of the more traditional SU(6) three-quark picture. The latter yields an asymmetry around 90° center-of-mass scattering with a sign opposite to what is experimentally observed. The second process is the hard breakup by a photon of a nucleon-nucleon system in light nuclei. Proton-proton (pp) and proton-neutron (pn) breakup in ³He, and ΔΔ-isobar production in deuteron breakup, were analyzed in the hard rescattering model (HRM), which in conjunction with the quark interchange mechanism provides a Quantum Chromodynamics (QCD) description of the reaction. Through the HRM, cross sections for both channels in ³He photodisintegration were computed without the need for a fitting parameter. The results presented here for pp breakup show excellent agreement with recent experimental data. For ΔΔ-isobar production in deuteron breakup, HRM angular distributions for the two ΔΔ channels were compared to the pn channel and to each other. An important prediction from this study is that the Δ⁺⁺Δ⁻ channel consistently dominates Δ⁺Δ⁰, in contrast with models that, unlike the HRM, consider a ΔΔ system in the initial state of the interaction; for such models both channels should have the same strength. These results are important in developing a QCD description of the atomic nucleus.
Abstract:
Anthropogenic alterations of natural hydrology are common in wetlands and often increase water permanence, converting ephemeral habitats into permanent ones. Since aquatic organisms segregate strongly along hydroperiod gradients, added water permanence caused by canals can dramatically change the structure of aquatic communities. We examined the impact of canals on the abundance and structure of wetland communities in South Florida, USA. We sampled fishes and macroinvertebrates from marsh transects originating at canals in the central and southern Everglades. The density of all aquatic organisms sampled increased in the immediate proximity of canals, but this was accompanied by few compositional changes based on analysis of relative abundance. Large fish (>8 cm), small fish (<8 cm), and macroinvertebrates (>5 mm) increased in density within 5 m of canals. This pattern was most pronounced in the dry season, suggesting that canals may serve as dry-down refugia. Increases in aquatic animal density closely matched gradients of phosphorus enrichment that decreased with distance from canals. Thus, the most apparent impact of canals on adjacent marsh communities was as conduits for nutrients that stimulated local productivity; any impact of their role as sources of increased numbers of predators was not apparent. The effect of predation close to canals was overcompensated by increased secondary productivity and/or immigration toward areas adjacent to canals in the dry season. Alternatively, the consumptive effect of predatory fishes using canals as dry-season refuges is very small or is spread over the expanse of marshes with open access to canals.