797 results for interval prediction
Abstract:
This paper presents a method for predicting moving/moving and moving/static collisions between objects in a virtual environment. Feasible real-time prediction in virtual worlds is obtained by bounding each moving object with a sphere and each static object with a convex polygon. Fast solutions are then attainable by describing the movement of objects parametrically in time as a polynomial.
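In the simplest case of constant velocities, the parametric formulation reduces to finding the roots of a quadratic in time. The sketch below illustrates that idea for a pair of bounding spheres; the function name and the constant-velocity assumption are illustrative, not taken from the paper.

```python
import numpy as np

def earliest_sphere_collision(p1, v1, r1, p2, v2, r2):
    """Earliest time t >= 0 at which two spheres with centres moving
    linearly in time (p + v*t) come into contact, or None.

    The squared centre distance |dp + dv*t|^2 is a polynomial in t;
    contact occurs when it equals (r1 + r2)^2, so candidate collision
    times are the real, non-negative roots of a quadratic."""
    dp = np.asarray(p1, float) - np.asarray(p2, float)
    dv = np.asarray(v1, float) - np.asarray(v2, float)
    R = r1 + r2
    # |dp + dv*t|^2 - R^2 = (dv.dv) t^2 + 2 (dp.dv) t + (dp.dp - R^2)
    coeffs = [dv @ dv, 2.0 * (dp @ dv), dp @ dp - R * R]
    roots = np.roots(coeffs)
    hits = [t.real for t in roots if abs(t.imag) < 1e-9 and t.real >= 0.0]
    return min(hits) if hits else None

# Two unit-diameter spheres approaching head-on along x:
# gap of 9 closes at rate 2, so contact occurs at t = 4.5.
print(earliest_sphere_collision([0, 0, 0], [1, 0, 0], 0.5,
                                [10, 0, 0], [-1, 0, 0], 0.5))
```

Higher-degree polynomial motion works the same way: only the coefficient list passed to the root finder grows.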
Abstract:
A number of new and newly improved methods for predicting protein structure developed by the Jones–University College London group were used to make predictions for the CASP6 experiment. Structures were predicted with a combination of fold recognition methods (mGenTHREADER, nFOLD, and THREADER) and a substantially enhanced version of FRAGFOLD, our fragment assembly method. Attempts at automatic domain parsing were made using DomPred and DomSSEA, which are based on a secondary structure parsing algorithm and additionally for DomPred, a simple local sequence alignment scoring function. Disorder prediction was carried out using a new SVM-based version of DISOPRED. Attempts were also made at domain docking and “microdomain” folding in order to build complete chain models for some targets.
Abstract:
A number of state-of-the-art protein structure prediction servers have been developed by researchers working in the Bioinformatics Unit at University College London. The popular PSIPRED server allows users to perform secondary structure prediction, transmembrane topology prediction and protein fold recognition. More recent servers include DISOPRED for the prediction of protein dynamic disorder and DomPred for domain boundary prediction.
Abstract:
Dynamically disordered regions appear to be relatively abundant in eukaryotic proteomes. The DISOPRED server allows users to submit a protein sequence, and returns a probability estimate of each residue in the sequence being disordered. The results are sent in both plain text and graphical formats, and the server can also supply predictions of secondary structure to provide further structural information.
Abstract:
An automatic method for recognizing natively disordered regions from amino acid sequence is described and benchmarked against predictors that were assessed at the latest critical assessment of techniques for protein structure prediction (CASP) experiment. The method attains a Wilcoxon score of 90.0, which represents a statistically significant improvement on the methods evaluated on the same targets at CASP. The classifier, DISOPRED2, was used to estimate the frequency of native disorder in several representative genomes from the three kingdoms of life. Putative, long (>30 residue) disordered segments are found to occur in 2.0% of archaean, 4.2% of eubacterial and 33.0% of eukaryotic proteins. The function of proteins with long predicted regions of disorder was investigated using the gene ontology annotations supplied with the Saccharomyces Genome Database. The analysis of the yeast proteome suggests that proteins containing disorder are often located in the cell nucleus and are involved in the regulation of transcription and cell signalling. The results also indicate that native disorder is associated with the molecular functions of kinase activity and nucleic acid binding.
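As an illustration of the >30-residue criterion, the sketch below scans per-residue disorder probabilities for long contiguous segments. The 0.5 threshold and the function name are assumptions made for illustration; DISOPRED2 sets its own operating point via its false-positive rate.

```python
def long_disordered_segments(probs, threshold=0.5, min_len=31):
    """Contiguous runs of residues whose disorder probability meets the
    threshold and whose length exceeds 30 residues (the paper's cutoff).
    Returns (start, end) index pairs, 0-based and inclusive."""
    segments, start = [], None
    for i, p in enumerate(probs):
        if p >= threshold and start is None:
            start = i                      # run begins
        elif p < threshold and start is not None:
            if i - start >= min_len:       # run ends; keep if long enough
                segments.append((start, i - 1))
            start = None
    if start is not None and len(probs) - start >= min_len:
        segments.append((start, len(probs) - 1))
    return segments

# Example: a 40-residue high-probability run inside a 100-residue protein.
probs = [0.1] * 30 + [0.9] * 40 + [0.1] * 30
print(long_disordered_segments(probs))  # [(30, 69)]
```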
Abstract:
World-wide structural genomics initiatives are rapidly accumulating structures for which limited functional information is available. Additionally, state-of-the-art structure prediction programs are now capable of generating at least low-resolution structural models of target proteins. Accurate detection and classification of functional sites within both solved and modelled protein structures therefore represents an important challenge. We present a fully automatic site detection method, FuncSite, that uses neural network classifiers to predict the location and type of functionally important sites in protein structures. The method is designed primarily to require only backbone residue positions, without the need for specific side-chain atoms to be present. To highlight effective site detection in low-resolution structural models, FuncSite was used to screen model proteins generated using mGenTHREADER on a set of newly released structures. We found effective metal site detection even for moderate-quality protein models, illustrating the robustness of the method.
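As a rough illustration of backbone-only site classification (not the FuncSite architecture itself), the sketch below builds per-residue features from C-alpha positions alone and trains a small neural network classifier on toy labels; the feature choice and labels are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def backbone_features(ca_coords, k=8):
    """Per-residue feature vectors built from backbone geometry alone:
    the sorted distances from each C-alpha to its k nearest C-alpha
    neighbours. No side-chain atoms are needed, echoing the
    backbone-only design described in the abstract."""
    ca = np.asarray(ca_coords, float)
    d = np.linalg.norm(ca[:, None, :] - ca[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, 1:k + 1]  # drop the zero self-distance

# Toy training run: random coordinates and placeholder site/non-site labels.
rng = np.random.default_rng(0)
coords = rng.normal(size=(50, 3)) * 10.0
labels = rng.integers(0, 2, size=50)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(backbone_features(coords), labels)
print(clf.predict(backbone_features(coords))[:10])
```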
Abstract:
Motivation: A new method that uses support vector machines (SVMs) to predict protein secondary structure is described and evaluated. The study is designed to develop a reliable prediction method using an alternative technique and to investigate the applicability of SVMs to this type of bioinformatics problem. Methods: Binary SVMs are trained to discriminate between two structural classes. The binary classifiers are combined in several ways to predict multi-class secondary structure. Results: The average three-state prediction accuracy per protein (Q3) is estimated by cross-validation to be 77.07 ± 0.26% with a segment overlap (Sov) score of 73.32 ± 0.39%. The SVM performs similarly to the 'state-of-the-art' PSIPRED prediction method on a non-homologous test set of 121 proteins despite being trained on substantially fewer examples. A simple consensus of the SVM, PSIPRED and PROFsec achieves significantly higher prediction accuracy than the individual methods. Availability: The SVM classifier is available from the authors. Work is in progress to make the method available on-line and to integrate the SVM predictions into the PSIPRED server.
Abstract:
If secondary structure predictions are to be incorporated into fold recognition methods, the effect of specific types of error in predicted secondary structures on the sensitivity of fold recognition should be assessed. Here, we present a systematic comparison of different secondary structure prediction methods by measuring the frequencies of specific types of error. We evaluate the effect of specific error types on secondary structure element alignment (SSEA), a baseline fold recognition method. The results of this evaluation indicate that missing whole helix or strand elements, or predicting the wrong type of element, is more detrimental than predicting the wrong element lengths or overpredicting helix or strand. We also suggest that SSEA scoring is an effective method for assessing the accuracy of secondary structure prediction, and may also provide a more appropriate assessment of the "usefulness" and quality of predicted secondary structure if secondary structure alignments are to be used in fold recognition.
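As an illustration of the idea behind element-level alignment, the sketch below collapses per-residue secondary structure strings into (type, length) elements and aligns the element sequences with dynamic programming. The scoring scheme is a simplified stand-in for the SSEA scoring used in the paper.

```python
def elements(ss):
    """Collapse a per-residue string (H/E/C) into (type, length) elements,
    e.g. 'HHHCCEE' -> [('H', 3), ('C', 2), ('E', 2)]."""
    out = []
    for ch in ss:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def ssea_score(ss1, ss2, gap=0):
    """Global alignment (Needleman-Wunsch) over element sequences.
    Elements of the same type score the smaller of their two lengths;
    mismatched types score 0; gaps score `gap` (0 here)."""
    a, b = elements(ss1), elements(ss2)
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = min(a[i-1][1], b[j-1][1]) if a[i-1][0] == b[j-1][0] else 0
            dp[i][j] = max(dp[i-1][j-1] + match,
                           dp[i-1][j] + gap,
                           dp[i][j-1] + gap)
    return dp[n][m]

# Elements H, C, E align pairwise: min(4,3) + min(2,3) + min(4,3) = 8.
print(ssea_score("HHHHCCEEEE", "HHHCCCEEE"))
```

Under this scheme a missing or wrongly typed element forfeits its entire match score, whereas a length error only loses the length difference, which mirrors the paper's finding about which error types hurt most.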
Abstract:
The PSIPRED protein structure prediction server allows users to submit a protein sequence, perform a prediction of their choice and receive the results of the prediction both textually via e-mail and graphically via the web. The user may select one of three prediction methods to apply to their sequence: PSIPRED, a highly accurate secondary structure prediction method; MEMSAT 2, a new version of a widely used transmembrane topology prediction method; or GenTHREADER, a sequence profile based fold recognition method.
Abstract:
Although mites were used at the dawn of forensic entomology to elucidate the postmortem interval, their use in current cases remains quite low for procedural reasons such as inadequate taxonomic knowledge. Special interest is focused on the phoretic stages of some mite species, because phoront-host specificity often allows the presence of the carrier (usually Diptera or Coleoptera) to be deduced even when it was not observed during sampling in situ or in the autopsy room. In this article, we describe two cases in which Poecilochirus austroasiaticus Vitzthum (Acari: Parasitidae) was sampled in the autopsy room. In the first case, we were able to sample the host, Thanatophilus ruficornis (Küster) (Coleoptera: Silphidae), which was still carrying phoretic stages of the mite on its body. That association allowed us, by observing starvation/feeding periods as a function of digestive tract filling, to establish chronological cycles of phoretic behavior, with phoront numbers peaking during arrival at and departure from the corpse and reaching their lowest values while the host was feeding. From the sarcosaprophagous fauna, we determined in this case a minimum postmortem interval of 10 days. In the second case, no Silphidae were found at the place where the corpse was discovered or at autopsy, but a postmortem interval of 13 days could still be established from the high specificity of this interspecific relationship and the timing of this coleopteran family's departure from the corpse.
Abstract:
Some of the techniques used to model nitrogen (N) and phosphorus (P) discharges from a terrestrial catchment to an estuary are discussed and applied to the River Tamar and Tamar Estuary system in Southwest England, U.K. Data are presented for dissolved inorganic nutrient concentrations in the Tamar Estuary and compared with those from the contrasting, low turbidity and rapidly flushed Tweed Estuary in Northeast England. In the Tamar catchment, simulations showed that effluent nitrate loads for typical freshwater flows contributed less than 1% of the total N load. The effect of effluent inputs on ammonium loads was more significant (∼10%). Cattle, sheep and permanent grassland dominated the N catchment export, with diffuse-source N export greatly dominating that due to point sources. Cattle, sheep, permanent grassland and cereal crops generated the greatest rates of diffuse-source P export. This reflected the higher rates of P fertiliser applications to arable land and the susceptibility of bare, arable land to P export in wetter winter months. N and P export to the Tamar Estuary from human sewage was insignificant. Non-conservative behaviour of phosphate was particularly marked in the Tamar Estuary. Silicate concentrations were slightly less than conservative levels, whereas nitrate was essentially conservative. The coastal sea acted as a sink for these terrestrially derived nutrients. A pronounced sag in dissolved oxygen that was associated with strong nitrite and ammonium peaks occurred in the turbidity maximum region of the Tamar Estuary. Nutrient behaviour within the Tweed was very different. The low turbidity and rapid flushing ensured that nutrients there were essentially conservative, so that flushing of nutrients to the coastal zone from the river occurred with little estuarine modification.
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with a computing capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have sufficient scientific workforce to develop and maintain the software and data analysis infrastructure. Such facilities will make it possible to address the question of what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level. Current limits on computing power have severely constrained such investigation, which is now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and the scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions based on our best scientific knowledge and the most advanced technology.