56 results for logical structure method
Abstract:
We present a new methodology that couples neutron diffraction experiments over a wide Q range with single chain modelling in order to explore, in a quantitative manner, the intrachain organization of non-crystalline polymers. The technique is based on the assignment of parameters describing the chemical, geometric and conformational characteristics of the polymeric chain, and on the variation of these parameters to minimize the difference between the predicted and experimental diffraction patterns. The method is successfully applied to the study of molten poly(tetrafluoroethylene) at two different temperatures, and provides unambiguous information on the configuration of the chain and its degree of flexibility. From analysis of the experimental data a model is derived with CC and CF bond lengths of 1.58 and 1.36 Å, respectively, a backbone valence angle of 110° and a torsional angle distribution which is characterized by four isomeric states, namely a split trans state at ±18°, giving rise to a helical chain conformation, and two gauche states at ±112°. The probability of trans conformers is 0.86 at T = 350°C, which decreases slightly to 0.84 at T = 400°C. Correspondingly, the chain segments are characterized by long all-trans sequences with random changes in sign, rather anisotropic in nature, which give rise to a rather stiff chain. We compare the results of this quantitative analysis of the experimental scattering data with the theoretical predictions of both force fields and molecular orbital conformation energy calculations.
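The four-state torsional model fitted in this abstract lends itself to a short Monte Carlo illustration. The sketch below is not the authors' code: it simply samples backbone torsion angles from the stated states (split trans at ±18° with total probability 0.86 at 350°C, gauche at ±112°), and equal weighting within each ± pair is an assumption.

```python
# Sample backbone torsion angles from the four-state model described
# above: split trans at +/-18 deg (total probability p_trans) and two
# gauche states at +/-112 deg. Equal +/- weighting is assumed.
import random

def sample_torsions(n, p_trans=0.86, seed=1):
    rng = random.Random(seed)
    angles = []
    for _ in range(n):
        if rng.random() < p_trans:
            angles.append(rng.choice([-18.0, 18.0]))    # split trans
        else:
            angles.append(rng.choice([-112.0, 112.0]))  # gauche
    return angles

torsions = sample_torsions(10000)
frac_trans = sum(abs(a) == 18.0 for a in torsions) / len(torsions)
print(round(frac_trans, 2))  # close to the nominal 0.86
```

Long runs of trans conformers with occasional sign changes, as the abstract describes, correspond to long stretches of ±18° angles in such a sample.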
Abstract:
Motivation: A new method that uses support vector machines (SVMs) to predict protein secondary structure is described and evaluated. The study is designed to develop a reliable prediction method using an alternative technique and to investigate the applicability of SVMs to this type of bioinformatics problem. Methods: Binary SVMs are trained to discriminate between two structural classes. The binary classifiers are combined in several ways to predict multi-class secondary structure. Results: The average three-state prediction accuracy per protein (Q3) is estimated by cross-validation to be 77.07 ± 0.26% with a segment overlap (Sov) score of 73.32 ± 0.39%. The SVM performs similarly to the 'state-of-the-art' PSIPRED prediction method on a non-homologous test set of 121 proteins despite being trained on substantially fewer examples. A simple consensus of the SVM, PSIPRED and PROFsec achieves significantly higher prediction accuracy than the individual methods. Availability: The SVM classifier is available from the authors. Work is in progress to make the method available on-line and to integrate the SVM predictions into the PSIPRED server.
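The combination step described here, turning binary classifiers into a three-state (helix/strand/coil) prediction, can be sketched as follows. The decision functions below are hypothetical stand-ins for trained SVMs over sequence-profile windows; the one-vs-rest max-score rule is one of the several combination schemes the abstract alludes to.

```python
# Combine binary decision functions into a three-state (H/E/C)
# secondary-structure prediction via a one-vs-rest max-score rule,
# and compute the per-chain Q3 accuracy the abstract reports.

def predict_three_state(x, decision_fns):
    """Assign the class whose binary decision value is largest."""
    scores = {label: f(x) for label, f in decision_fns.items()}
    return max(scores, key=scores.get)

def q3(predicted, observed):
    """Three-state accuracy: percentage of residues predicted correctly."""
    correct = sum(p == o for p, o in zip(predicted, observed))
    return 100.0 * correct / len(observed)

# Toy decision functions over a 1-D feature standing in for a profile
# window (a real SVM would use a kernel over the full profile).
decision_fns = {
    "H": lambda x: 1.0 - abs(x - 0.2),
    "E": lambda x: 1.0 - abs(x - 0.5),
    "C": lambda x: 1.0 - abs(x - 0.8),
}

features = [0.1, 0.25, 0.55, 0.75, 0.9]
predicted = [predict_three_state(x, decision_fns) for x in features]
observed = ["H", "H", "E", "C", "C"]
print(predicted)                 # ['H', 'H', 'E', 'C', 'C']
print(q3(predicted, observed))   # 100.0
```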
Abstract:
Motivation: In order to enhance genome annotation, the fully automatic fold recognition method GenTHREADER has been improved and benchmarked. The previous version of GenTHREADER consisted of a simple neural network which was trained to combine sequence alignment score, length information and energy potentials derived from threading into a single score representing the relationship between two proteins, as designated by CATH. The improved version incorporates PSI-BLAST searches, which have been jumpstarted with structural alignment profiles from FSSP, and now also makes use of PSIPRED predicted secondary structure and bi-directional scoring in order to calculate the final alignment score. Pairwise potentials and solvation potentials are calculated from the given sequence alignment which are then used as inputs to a multi-layer, feed-forward neural network, along with the alignment score, alignment length and sequence length. The neural network has also been expanded to accommodate the secondary structure element alignment (SSEA) score as an extra input and it is now trained to learn the FSSP Z-score as a measurement of similarity between two proteins. Results: The improvements made to GenTHREADER increase the number of remote homologues that can be detected with a low error rate, implying higher reliability of score, whilst also increasing the quality of the models produced. We find that up to five times as many true positives can be detected with low error rate per query. Total MaxSub score is doubled at low false positive rates using the improved method.
Abstract:
If secondary structure predictions are to be incorporated into fold recognition methods, an assessment of the effect of specific types of errors in predicted secondary structures on the sensitivity of fold recognition should be carried out. Here, we present a systematic comparison of different secondary structure prediction methods by measuring frequencies of specific types of error. We carry out an evaluation of the effect of specific types of error on secondary structure element alignment (SSEA), a baseline fold recognition method. The results of this evaluation indicate that missing out whole helix or strand elements, or predicting the wrong type of element, is more detrimental than predicting the wrong lengths of elements or overpredicting helix or strand. We also suggest that SSEA scoring is an effective method for assessing accuracy of secondary structure prediction and may also provide a more appropriate assessment of the “usefulness” and quality of predicted secondary structure, if secondary structure alignments are to be used in fold recognition.
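The SSEA idea, reducing each chain to a string of secondary structure elements and aligning the two strings, can be sketched with a small dynamic program. The scoring values below are illustrative only, not those of the paper; matching same-type elements is rewarded by the shorter of the two element lengths.

```python
# Hypothetical sketch of secondary structure element alignment (SSEA):
# each chain is a list of (type, length) elements (H = helix, E = strand,
# C = coil), aligned by global dynamic programming.

def ssea_score(a, b, gap=-1, mismatch=-2):
    """Global alignment score of two (type, length) element lists."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            ta, la = a[i - 1]
            tb, lb = b[j - 1]
            # Same element type: reward the overlap; else penalise.
            pair = min(la, lb) if ta == tb else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + pair,
                           dp[i - 1][j] + gap,
                           dp[i][j - 1] + gap)
    return dp[n][m]

predicted = [("H", 10), ("C", 4), ("E", 6)]
observed = [("H", 12), ("C", 3), ("E", 6)]
print(ssea_score(predicted, observed))  # 10 + 3 + 6 = 19
```

Under this scoring, a missed or wrong-type element forfeits its whole match reward, whereas a wrong length only trims it, which mirrors the error ranking the abstract reports.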
Abstract:
The elucidation of the domain content of a given protein sequence in the absence of determined structure or significant sequence homology to known domains is an important problem in structural biology. Here we address how successfully the delineation of continuous domains can be accomplished in the absence of sequence homology using simple baseline methods, an existing prediction algorithm (Domain Guess by Size), and a newly developed method (DomSSEA). The study was undertaken with a view to measuring the usefulness of these prediction methods in terms of their application to fully automatic domain assignment. Thus, the sensitivity of each domain assignment method was measured by calculating the number of correctly assigned top scoring predictions. We have implemented a new continuous domain identification method using the alignment of predicted secondary structures of target sequences against observed secondary structures of chains with known domain boundaries as assigned by Class, Architecture, Topology, Homology (CATH). Taking top predictions only, the success rate of the method in correctly assigning domain number to the representative chain set is 73.3%. The top prediction for domain number and location of domain boundaries was correct for 24% of the multidomain set (±20 residues). These results have been put into context in relation to the results obtained from the other prediction methods assessed.
Abstract:
The PSIPRED protein structure prediction server allows users to submit a protein sequence, perform a prediction of their choice and receive the results of the prediction both textually via e-mail and graphically via the web. The user may select one of three prediction methods to apply to their sequence: PSIPRED, a highly accurate secondary structure prediction method; MEMSAT 2, a new version of a widely used transmembrane topology prediction method; or GenTHREADER, a sequence profile based fold recognition method.
Abstract:
We present a new method to determine mesospheric electron densities from partially reflected medium frequency radar pulses. The technique uses an optimal estimation inverse method and retrieves both an electron density profile and a gradient electron density profile. As well as accounting for the absorption of the two magnetoionic modes formed by ionospheric birefringence of each radar pulse, the forward model of the retrieval parameterises possible Fresnel scatter of each mode by fine electronic structure, phase changes of each mode due to Faraday rotation and the dependence of the amplitudes of the backscattered modes upon pulse width. Validation results indicate that known profiles can be retrieved and that χ² tests upon retrieval parameters satisfy validity criteria. Application to measurements shows that retrieved electron density profiles are consistent with accepted ideas about seasonal variability of electron densities and their dependence upon nitric oxide production and transport.
Abstract:
Liquid clouds play a profound role in the global radiation budget but it is difficult to remotely retrieve their vertical profile. Ordinary narrow field-of-view (FOV) lidars receive a strong return from such clouds but the information is limited to the first few optical depths. Wide-angle multiple-FOV lidars can isolate radiation scattered multiple times before returning to the instrument, often penetrating much deeper into the cloud than the singly-scattered signal. These returns potentially contain information on the vertical profile of extinction coefficient, but are challenging to interpret due to the lack of a fast radiative transfer model for simulating them. This paper describes a variational algorithm that incorporates a fast forward model based on the time-dependent two-stream approximation, and its adjoint. Application of the algorithm to simulated data from a hypothetical airborne three-FOV lidar with a maximum footprint width of 600 m suggests that this approach should be able to retrieve the extinction structure down to an optical depth of around 6, and total optical depth up to at least 35, depending on the maximum lidar FOV. The convergence behavior of Gauss-Newton and quasi-Newton optimization schemes is compared. We then present results from an application of the algorithm to observations of stratocumulus by the 8-FOV airborne “THOR” lidar. It is demonstrated how the averaging kernel can be used to diagnose the effective vertical resolution of the retrieved profile, and therefore the depth to which information on the vertical structure can be recovered. This work enables exploitation of returns from spaceborne lidar and radar subject to multiple scattering more rigorously than previously possible.
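The Gauss-Newton iteration at the heart of variational retrievals like this one (and the optimal estimation retrieval in the preceding abstract) can be illustrated on a scalar problem. Everything below is invented for illustration: the forward model F(x) = x², the prior and the error variances; real retrievals operate on full profiles with matrix covariances and an adjoint model.

```python
# Toy scalar Gauss-Newton iteration for a variational retrieval:
# minimise (y - F(x))^2 / s_y + (x - xa)^2 / s_a for a single unknown x.

def gauss_newton(y, forward, jacobian, xa, s_y, s_a, iters=20):
    x = xa
    for _ in range(iters):
        k = jacobian(x)  # linearise the forward model about x
        # Scalar form of the standard optimal-estimation update.
        gain = (k / s_y) / (k * k / s_y + 1.0 / s_a)
        x = xa + gain * (y - forward(x) + k * (x - xa))
    return x

x_hat = gauss_newton(y=4.0,
                     forward=lambda x: x * x,
                     jacobian=lambda x: 2.0 * x,
                     xa=1.0,    # prior guess
                     s_y=1e-4,  # measurement-error variance
                     s_a=1e2)   # prior variance (weak constraint)
print(round(x_hat, 3))  # 2.0 (the measurement y = 4 implies x = 2)
```

A quasi-Newton scheme, mentioned in the abstract as an alternative, would replace the explicitly linearised update with an approximation built from successive gradients.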
Abstract:
Drought characterisation is an intrinsically spatio-temporal problem. A limitation of previous approaches to characterisation is that they discard much of the spatio-temporal information by reducing events to a lower-order subspace. To address this, an explicit 3-dimensional (longitude, latitude, time) structure-based method is described in which drought events are defined by a spatially and temporally coherent set of points displaying standardised precipitation below a given threshold. Geometric methods can then be used to measure similarity between individual drought structures. Groupings of these similarities provide an alternative to traditional methods for extracting recurrent space-time signals from geophysical data. The explicit consideration of structure encourages the construction of summary statistics which relate to the event geometry. Example measures considered are the event volume, centroid, and aspect ratio. The utility of a 3-dimensional approach is demonstrated by application to the analysis of European droughts (15°W to 35°E and 35°N to 70°N) for the period 1901–2006. Large-scale structure is found to be abundant with 75 events identified lasting for more than 3 months and spanning at least 0.5 × 10⁶ km². Near-complete dissimilarity is seen between the individual drought structures, and little or no regularity is found in the time evolution of even the most spatially similar drought events. The spatial distribution of the event centroids and the time evolution of the geographic cross-sectional areas strongly suggest that large area, sustained droughts result from the combination of multiple small area (∼10⁶ km²) short duration (∼3 months) events. The small events are not found to occur independently in space. This leads to the hypothesis that local water feedbacks play an important role in the aggregation process.
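The event definition above, coherent sets of below-threshold points in a (longitude, latitude, time) grid, amounts to connected-component labelling in three dimensions. The sketch below is a minimal illustration of that idea (grid, threshold and 6-connectivity are assumptions), computing two of the summary statistics mentioned: event volume and centroid.

```python
# Extract 3-D (time, lat, lon) drought events as 6-connected components
# of cells whose standardised precipitation falls below a threshold,
# then summarise each event by its volume and centroid.
from collections import deque

def extract_events(spi, threshold=-1.0):
    nt, ny, nx = len(spi), len(spi[0]), len(spi[0][0])
    seen, events = set(), []
    for t in range(nt):
        for y in range(ny):
            for x in range(nx):
                if (t, y, x) in seen or spi[t][y][x] >= threshold:
                    continue
                # Breadth-first flood fill over the space-time grid.
                cells, queue = [], deque([(t, y, x)])
                seen.add((t, y, x))
                while queue:
                    ct, cy, cx = queue.popleft()
                    cells.append((ct, cy, cx))
                    for dt, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        n = (ct + dt, cy + dy, cx + dx)
                        if (0 <= n[0] < nt and 0 <= n[1] < ny
                                and 0 <= n[2] < nx and n not in seen
                                and spi[n[0]][n[1]][n[2]] < threshold):
                            seen.add(n)
                            queue.append(n)
                volume = len(cells)  # cell count; scale by cell size
                centroid = tuple(sum(c[i] for c in cells) / volume
                                 for i in range(3))
                events.append({"volume": volume, "centroid": centroid})
    return events

# Two disconnected dry patches in a 3x3x3 toy field (0 = wet, -2 = dry).
spi = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
spi[0][0][0] = spi[1][0][0] = -2.0  # one event spanning two time steps
spi[2][2][2] = -2.0                 # a second, separate event
print([e["volume"] for e in extract_events(spi)])  # [2, 1]
```

On a real lat-lon grid the volume would be weighted by cell area, and the aspect ratio mentioned in the abstract follows from the event's spatial and temporal extents.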
Abstract:
Purpose: This paper aims to design an evaluation method that enables an organization to assess its current IT landscape and provide readiness assessment prior to Software as a Service (SaaS) adoption. Design/methodology/approach: The research employs a mix of quantitative and qualitative approaches for conducting an IT application assessment. Quantitative data such as end users' feedback on the IT applications contribute to the technical impact on efficiency and productivity. Qualitative data such as business domain, business services and IT application cost drivers are used to determine the business value of the IT applications in an organization. Findings: The assessment of IT applications leads to decisions on the suitability of each IT application for migration to a cloud environment. Research limitations/implications: The evaluation of how a particular IT application impacts a business service is based on logical interpretation. A data mining method is suggested in order to derive the patterns of the IT application capabilities. Practical implications: This method has been applied in a local council in the UK. This helped the council to decide the future status of its IT applications for cost-saving purposes.
Abstract:
A fingerprint method for detecting anthropogenic climate change is applied to new simulations with a coupled ocean-atmosphere general circulation model (CGCM) forced by increasing concentrations of greenhouse gases and aerosols covering the years 1880 to 2050. In addition to the anthropogenic climate change signal, the space-time structure of the natural climate variability for near-surface temperatures is estimated from instrumental data over the last 134 years and two 1000 year simulations with CGCMs. The estimates are compared with paleoclimate data over 570 years. The space-time information on both the signal and the noise is used to maximize the signal-to-noise ratio of a detection variable obtained by applying an optimal filter (fingerprint) to the observed data. The inclusion of aerosols slows the predicted future warming. The probability that the observed increase in near-surface temperatures in recent decades is of natural origin is estimated to be less than 5%. However, this number is dependent on the estimated natural variability level, which is still subject to some uncertainty.
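The optimal filtering step described here, projecting the observations onto the signal pattern weighted by the inverse noise covariance, can be shown in miniature. The two-variable example below is purely illustrative; the signal pattern, covariance and observations are invented numbers, whereas the real study uses full space-time fields.

```python
# Sketch of an optimal-fingerprint detection variable: d = s^T C^-1 y,
# i.e. the observations y projected onto the signal pattern s after
# weighting by the inverse of the natural-variability covariance C.
# This weighting maximises the signal-to-noise ratio of d.

def invert_2x2(c):
    (a, b), (d, e) = c
    det = a * e - b * d
    return [[e / det, -b / det], [-d / det, a / det]]

def detection_variable(signal, noise_cov, obs):
    cinv = invert_2x2(noise_cov)
    # Optimal filter (fingerprint): C^-1 applied to the signal pattern.
    filt = [sum(cinv[i][j] * signal[j] for j in range(2))
            for i in range(2)]
    return sum(filt[i] * obs[i] for i in range(2))

signal = [1.0, 0.5]       # assumed anthropogenic warming pattern
noise_cov = [[1.0, 0.2],  # natural-variability covariance estimate
             [0.2, 0.5]]
obs = [0.8, 0.6]          # observed temperature anomalies
print(round(detection_variable(signal, noise_cov, obs), 3))  # 1.087
```

Detection then amounts to asking whether d exceeds the values generated by the natural-variability simulations alone, which is how the abstract's "less than 5%" probability is framed.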
Abstract:
Particulate antigen assemblies in the nanometer range and DNA plasmids are particularly interesting for designing vaccines. We hypothesised that a combination of these approaches could result in a new delivery method of gp160 envelope HIV-1 vaccine which could combine the potency of virus-like particles (VLPs) and the simplicity of use of DNA vaccines. Characterisation of lentivirus-like particles (lentiVLPs) by western blot, dynamic light scattering and electron microscopy revealed that their protein pattern, size and structure make them promising candidates for HIV-1 vaccines. Although all particles were similar with regard to size and distribution, they clearly differed in p24 capsid protein content, suggesting that Rev may be required for particle maturation and Gag processing. In vivo, lentiVLP pseudotyping with the gp160 envelope or with a combination of gp160 and VSV-G envelopes did not influence the magnitude of the immune response, but the combination of lentiVLPs with Alum adjuvant resulted in a more potent response. Interestingly, the strongest immune response was obtained when plasmids encoding lentiVLPs were co-delivered to mouse muscle by electrotransfer, suggesting that lentiVLPs were efficiently produced in vivo or that the packaging genes mediate an adjuvant effect. DNA electrotransfer of plasmids encoding lentivirus-like particles offers many advantages and therefore appears to be a promising delivery method for HIV-1 vaccines. Keywords: VLP, Electroporation, Electrotransfer, HIV vaccine, DNA vaccine
Abstract:
Purpose – This study aims to examine the moderating effects of external environment and organisational structure in the relationship between business-level strategy and organisational performance. Design/methodology/approach – The focus of the study is on manufacturing firms in the UK belonging to the electrical and mechanical engineering sectors, and respondents were CEOs. Both objective and subjective measures were used to assess performance. Non-response bias was assessed statistically and appropriate measures taken to minimise the impact of common method variance (CMV). Findings – The results indicate that environmental dynamism and hostility act as moderators in the relationship between business-level strategy and relative competitive performance. In low-hostility environments a cost-leadership strategy and in high-hostility environments a differentiation strategy lead to better performance compared with competitors. In highly dynamic environments a cost-leadership strategy and in low dynamism environments a differentiation strategy are more helpful in improving financial performance. Organisational structure moderates the relationship of both the strategic types with ROS. However, in the case of ROA, the moderating effect of structure was found only in its relationship with cost-leadership strategy. A mechanistic structure is helpful in improving the financial performance of organisations adopting either a cost-leadership or a differentiation strategy. Originality/value – Unlike many other empirical studies, the study makes an important contribution to the literature by examining the moderating effects of both environment and structure on the relationship between business-level strategy and performance in a detailed manner, using moderated regression analysis.
Abstract:
The affinity of anthocyanins for human serum albumin (HSA) was determined by a fluorescence quenching method. The effects of pH and structure of anthocyanins on the binding constants were studied. The constants for binding of anthocyanins to HSA ranged from 1.08 × 10⁵ M⁻¹ to 13.16 × 10⁵ M⁻¹. A hydrophobic effect at acidic pH was shown by the relatively high positive entropy values under the conditions studied. Electrostatic interactions including hydrogen bonding contributed to the binding at pH 7.4. The effect of structure of anthocyanins on the affinity was pH dependent, particularly the effect of additional hydroxyl substituents. Hydroxyl substituents and glycosylation of anthocyanins decreased the affinity for binding to HSA at lower pH (especially pH 4), but increased the strength of binding at pH 7.4. In contrast, methylation of a hydroxyl group enhanced the binding at acidic pH, while this substitution reduced the strength of binding at pH 7.4. This paper has shown that changes in anthocyanin structure or reductions in pH, which may occur in the region of inflammatory sites, have an effect on the binding of anthocyanins to HSA.
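One common way binding constants of the kind reported here are extracted from fluorescence quenching data is the double-logarithm relation log₁₀((F₀ − F)/F) = log₁₀(Kₐ) + n·log₁₀([Q]), fitted by least squares. Whether the authors used exactly this form is an assumption, and the data below are synthetic.

```python
# Fit a binding constant Ka and binding-site number n from fluorescence
# quenching data via the double-logarithm relation
#   log10((F0 - F)/F) = log10(Ka) + n * log10([Q]).
import math

def binding_fit(f0, fluorescence, quencher):
    """Return (Ka, n) from a linear fit of log((F0-F)/F) vs log[Q]."""
    xs = [math.log10(q) for q in quencher]
    ys = [math.log10((f0 - f) / f) for f in fluorescence]
    m = len(xs)
    mean_x, mean_y = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return 10 ** intercept, slope  # Ka in M^-1, n binding sites

# Synthetic quenching data generated with Ka = 2.0e5 M^-1 and n = 1,
# in the same order of magnitude as the constants reported above.
ka_true, f0 = 2.0e5, 100.0
quencher = [1e-6, 2e-6, 5e-6, 1e-5]
fluorescence = [f0 / (1 + ka_true * q) for q in quencher]
ka, n = binding_fit(f0, fluorescence, quencher)
print(round(ka / 1e5, 2), round(n, 2))  # 2.0 1.0
```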
Abstract:
Background: Psychotic phenomena appear to form a continuum with normal experience and beliefs, and may build on common emotional interpersonal concerns. Aims: We tested predictions that paranoid ideation is exponentially distributed and hierarchically arranged in the general population, and that persecutory ideas build on more common cognitions of mistrust, interpersonal sensitivity and ideas of reference. Method: Items were chosen from the Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II) questionnaire and the Psychosis Screening Questionnaire in the second British National Survey of Psychiatric Morbidity (n = 8580), to test a putative hierarchy of paranoid development using confirmatory factor analysis, latent class analysis and factor mixture modelling analysis. Results: Different types of paranoid ideation ranged in frequency from less than 2% to nearly 30%. Total scores on these items followed an almost perfect exponential distribution (r = 0.99). Our four a priori first-order factors were corroborated (interpersonal sensitivity; mistrust; ideas of reference; ideas of persecution). These mapped onto four classes of individual respondents: a rare, severe, persecutory class with high endorsement of all item factors, including persecutory ideation; a quasi-normal class with infrequent endorsement of interpersonal sensitivity, mistrust and ideas of reference, and no ideas of persecution; and two intermediate classes, characterised respectively by relatively high endorsement of items relating to mistrust and to ideas of reference. Conclusions: The paranoia continuum has implications for the aetiology, mechanisms and treatment of psychotic disorders, while confirming the lack of a clear distinction from normal experiences and processes.