992 results for Wide Prediction


Relevance: 30.00%

Abstract:

We introduce a conceptually novel structured prediction model, GPstruct, which is kernelized, non-parametric and Bayesian by design. We motivate the model by comparison with existing approaches, among them conditional random fields (CRFs), maximum margin Markov networks (M3N), and structured support vector machines (SVMstruct), each of which embodies only a subset of its properties. We present an inference procedure based on Markov chain Monte Carlo. The framework can be instantiated for a wide range of structured objects such as linear chains, trees, grids, and other general graphs. As a proof of concept, the model is benchmarked on several natural language processing tasks and on a video gesture segmentation task involving a linear-chain structure. We show prediction accuracies for GPstruct that are comparable to or exceed those of CRFs and SVMstruct.
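As a toy illustration of the ingredients above (a hypothetical sketch, not the GPstruct implementation: the two-parameter binary chain model and all names are invented), the following computes an exact linear-chain likelihood with the forward algorithm and draws posterior samples over the weights by random-walk Metropolis under a Gaussian prior.

```python
import numpy as np

def chain_log_likelihood(w_emit, w_trans, x, y):
    """Exact log p(y|x) for a toy binary linear-chain model.

    Node score: w_emit * x[t] when y[t] == 1; edge score: w_trans when
    neighbouring labels agree. The partition function is computed exactly
    with the forward algorithm."""
    T = len(x)
    score = sum(w_emit * x[t] * y[t] for t in range(T))
    score += sum(w_trans * int(y[t] == y[t + 1]) for t in range(T - 1))
    alpha = np.array([0.0, w_emit * x[0]])        # log-potentials for labels 0/1
    for t in range(1, T):
        node = np.array([0.0, w_emit * x[t]])
        # edge[i][j] = w_trans if i == j else 0
        alpha = node + np.array([
            np.logaddexp(alpha[0] + w_trans, alpha[1]),
            np.logaddexp(alpha[0], alpha[1] + w_trans),
        ])
    return score - np.logaddexp(alpha[0], alpha[1])

def metropolis_posterior(x, y, n_samples=200, step=0.3, seed=0):
    """Random-walk Metropolis over (w_emit, w_trans) with a N(0, I) prior:
    a minimal stand-in for the MCMC inference procedure."""
    rng = np.random.default_rng(seed)
    w = np.zeros(2)
    log_post = lambda w: chain_log_likelihood(w[0], w[1], x, y) - 0.5 * w @ w
    lp = log_post(w)
    samples = []
    for _ in range(n_samples):
        prop = w + step * rng.standard_normal(2)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:     # accept/reject step
            w, lp = prop, lp_prop
        samples.append(w.copy())
    return np.array(samples)
```

Bayesian prediction then averages the predictive distribution over the retained samples rather than using a single point estimate.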


This study develops a single-stream jet noise prediction model for a family of chevron nozzles. An original equation is proposed for the fourth-order space-time cross-correlations, expressed in terms of flow parameters such as streamwise circulation and turbulent kinetic energy. Cross-correlations computed from a Reynolds-Averaged Navier-Stokes (RANS) flowfield agreed well with those computed from a Large Eddy Simulation (LES) flowfield, indicating that the proposed equation describes the cross-correlations accurately. With this novel source description, the model's far-field noise predictions agree closely with measurements over a wide range of frequencies and radiation angles, capturing the spectral shape, amplitude and peak frequency. This establishes that the model holds for a family of chevron nozzles. As the model provides quick and accurate predictions, a parametric study was performed to understand the effects of chevron nozzle geometry on jet noise and thrust loss. Chevron penetration is the underpinning factor for jet noise reduction; the reduction of jet noise per unit thrust loss decreases linearly with chevron penetration. The number of chevrons also has a considerable effect on jet noise.


A novel approach for real-time skin segmentation in video sequences is described. The approach enables reliable skin segmentation despite wide variations in illumination during tracking. An explicit second-order Markov model is used to predict the evolution of the skin-color (HSV) histogram over time. Histograms are dynamically updated based on feedback from the current segmentation and on predictions of the Markov model. The evolution of the skin-color distribution at each frame is parameterized by translation, scaling and rotation in color space. Consequent changes in the geometric parameterization of the distribution are propagated by warping and re-sampling the histogram. The parameters of the discrete-time dynamic Markov model are estimated using maximum likelihood estimation and also evolve over time. Quantitative evaluation of the method was conducted on labeled ground-truth video sequences taken from popular movies.
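A minimal sketch of the histogram-propagation step (our illustration, with invented names; the paper's model additionally includes rotation in HSV space and maximum-likelihood estimation of the dynamics): a 1-D histogram is warped under a predicted translation and scaling of the colour axis by resampling, and a second-order (constant-velocity) extrapolation predicts the next parameter value.

```python
import numpy as np

def warp_histogram(hist, bin_centers, shift=0.0, scale=1.0):
    """Propagate a colour histogram under translation+scaling of the colour
    axis: resample each new bin at its inversely-mapped source position."""
    src = (np.asarray(bin_centers) - shift) / scale
    warped = np.interp(src, bin_centers, hist, left=0.0, right=0.0)
    total = warped.sum()
    return warped / total if total > 0 else warped

def predict_param(p_prev, p_prev2):
    """Second-order (constant-velocity) prediction of a dynamic parameter,
    e.g. the histogram shift: p[t] ~ 2*p[t-1] - p[t-2]."""
    return 2.0 * p_prev - p_prev2
```

In a tracker, the predicted shift/scale would warp the previous histogram, and the result would then be blended with the histogram observed from the current segmentation.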


This note presents a simple model for prediction of liquid hold-up in two-phase horizontal pipe flow in the stratified roll wave (St+RW) flow regime. Liquid hold-up data for horizontal two-phase pipe flow [1-6] exhibit a steady increase with liquid velocity and a more dramatic fall with increasing gas rate, as shown by Hand et al. [7, 8], for example. In addition, the liquid hold-up is reported to vary with pipe diameter. Generally, if the initial liquid rate for the no-gas flow condition gives a liquid height below the pipe centre line, the flow patterns pass successively through the stratified (St), stratified ripple (St+R), stratified roll wave, film plus droplet (F+D) and finally the annular (A+D, A+RW, A+BTS) regimes as the gas rate is increased. Hand et al. [7, 8] give a detailed description of this progression in flow regime development and definitions of the patterns involved. Although over one hundred models have been developed to predict liquid hold-up, none has been shown to be universally useful, and only a handful have proven applicable to specific flow regimes [9-12]. One of the most intractable regimes to predict is the stratified roll wave pattern, where the liquid hold-up shows the most dramatic change with gas flow rate. It has been suggested that momentum balance-type models, which predict both hold-up and pressure drop, can predict universally for all flow regimes, and particularly for the difficult stratified roll wave pattern; however, Donnelly [1] recently demonstrated that momentum balance models have difficulty with this regime. In essence, these models differ in the friction factor or shear stress assumed on the surfaces within the pipe, particularly at the liquid–gas interface. The Baker–Jardine model [13], when tested against the 0.0454 m i.d. data of Nguyen [2], exhibited wide scatter for both liquid hold-up and pressure drop, as shown in Fig. 1. The Andritsos–Hanratty model [14] gave better prediction of pressure drop but wide scatter in liquid hold-up estimation (cf. Fig. 2) when tested against the 0.0935 m i.d. data of Hand [5]. The Spedding–Hand model [15], shown in Fig. 3 against the data of Hand [5], gave improved performance but remained unsatisfactory in predicting hold-up for stratified-type flows. The MARS model of Grolman [6] gave better prediction of hold-up (cf. Fig. 4) but poorer estimation of pressure drop when tested against the data of Nguyen [2]. Thus no available method accurately predicts liquid hold-up across the whole range of flow patterns, and particularly for the stratified plus roll wave regime. This is particularly unfortunate, since the stratified-type regimes are perhaps the most predominant patterns found in multiphase lines.
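To make the momentum-balance idea concrete, here is a deliberately simplified sketch (our own toy, not any of the cited models): an equal-pressure-gradient balance between the two phases in a flat channel, with a Blasius-type friction factor on each surface and the gas friction factor reused at the interface, solved for the liquid level by bisection. The channel geometry, property values and friction-factor choices are all simplifying assumptions.

```python
def blasius_f(re):
    """Blasius-type smooth-surface Fanning friction factor."""
    return 0.046 * max(re, 1.0) ** -0.2

def holdup_flat_channel(usl, usg, H=0.05, rho_l=1000.0, rho_g=1.2,
                        mu_l=1e-3, mu_g=1.8e-5):
    """Liquid hold-up from an equal-pressure-gradient momentum balance
    between the phases in a flat channel of height H (per unit width).
    usl/usg are superficial liquid/gas velocities in m/s."""
    def imbalance(h):
        ul = usl * H / h                      # actual phase velocities
        ug = usg * H / (H - h)
        dl, dg = 4.0 * h, 4.0 * (H - h)       # crude hydraulic diameters
        tw_l = blasius_f(rho_l * ul * dl / mu_l) * rho_l * ul ** 2 / 2
        tw_g = blasius_f(rho_g * ug * dg / mu_g) * rho_g * ug ** 2 / 2
        # gas friction factor reused at the interface (simplifying assumption)
        ti = blasius_f(rho_g * ug * dg / mu_g) * rho_g * (ug - ul) * abs(ug - ul) / 2
        # difference between gas-side and liquid-side pressure gradients
        return tw_g / (H - h) - tw_l / h + ti * (1.0 / h + 1.0 / (H - h))
    lo, hi = 1e-6 * H, (1.0 - 1e-6) * H       # bracket: balance is -inf/+inf here
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if imbalance(lo) * imbalance(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi) / H                # hold-up = h / H in a flat channel
```

Even this toy reproduces the qualitative trends in the data: hold-up rises with liquid rate and falls sharply with gas rate.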


Conditional branches frequently exhibit similar behavior (bias, time-varying behavior, etc.), a property that can be exploited to improve branch prediction accuracy. Branch clustering constructs groups, or clusters, of branches with similar behavior and applies a different branch prediction technique to each cluster. We revisit the topic of branch clustering with the aim of generalizing it. We investigate several methods of representing and storing cluster information, of which the most effective stores the information in the branch target buffer. We also investigate alternative ways of using the branch cluster identification in the branch predictor. With these improvements we arrive at a branch clustering technique that achieves higher accuracy than previous approaches in the literature for the gshare predictor. Furthermore, we evaluate our branch clustering technique on a wide range of predictors to show the general applicability of the method: branch clustering improves the accuracy of the local history (PAg) predictor, the path-based perceptron and the PPM-like predictor, one of the 2004 CBP finalists.
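For readers unfamiliar with the gshare baseline, here is a minimal sketch (our own illustration, not the paper's design): a textbook gshare with 2-bit saturating counters, extended with a per-cluster counter table as one hypothetical way of using a branch's cluster identification in the predictor.

```python
class GsharePredictor:
    """Minimal gshare: the XOR of the branch PC and the global history
    indexes a table of 2-bit saturating counters. Cluster extension
    (illustrative): each cluster gets its own counter table."""
    def __init__(self, bits=10, n_clusters=2):
        self.mask = (1 << bits) - 1
        self.tables = [[2] * (1 << bits) for _ in range(n_clusters)]  # weakly taken
        self.history = 0

    def predict(self, pc, cluster=0):
        idx = (pc ^ self.history) & self.mask
        return self.tables[cluster][idx] >= 2

    def update(self, pc, taken, cluster=0):
        idx = (pc ^ self.history) & self.mask
        c = self.tables[cluster][idx]
        self.tables[cluster][idx] = min(3, c + 1) if taken else max(0, c - 1)
        self.history = ((self.history << 1) | int(taken)) & self.mask
```

A real clustered predictor would also have to store or recover the cluster id, e.g. alongside the branch target buffer entry as the abstract describes.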


The prediction and management of ecosystem responses to global environmental change would profit from a clearer understanding of the mechanisms determining the structure and dynamics of ecological communities. The analytic theory presented here develops a causally closed picture for the mechanisms controlling community and population size structure, in particular community size spectra, and their dynamic responses to perturbations, with emphasis on marine ecosystems. Important implications are summarised in non-technical form. These include the identification of three different responses of community size spectra to size-specific pressures (of which one is the classical trophic cascade), an explanation for the observed slow recovery of fish communities from exploitation, and clarification of the mechanism controlling predation mortality rates. The theory builds on a community model that describes trophic interactions among size-structured populations and explicitly represents the full life cycles of species. An approximate time-dependent analytic solution of the model is obtained by coarse graining over maturation body sizes to obtain a simple description of the model steady state, linearising near the steady state, and then eliminating intraspecific size structure by means of the quasi-neutral approximation. The result is a convolution equation for trophic interactions among species of different maturation body sizes, which is solved analytically using a novel technique based on a multiscale expansion.


Background: More accurate coronary heart disease (CHD) prediction, specifically in middle-aged men, is needed to reduce the burden of disease more effectively. We hypothesised that a multilocus genetic risk score could refine CHD prediction beyond classic risk scores and obtain more precise risk estimates using a prospective cohort design.

Methods: Using data from nine prospective European cohorts, including 26,221 men, we selected in a case-cohort setting 4,818 healthy men at baseline, and used Cox proportional hazards models to examine associations between CHD and risk scores based on genetic variants representing 13 genomic regions. Over follow-up (range: 5-18 years), 1,736 incident CHD events occurred. Genetic risk scores were validated in men with at least 10 years of follow-up (632 cases, 1,361 non-cases). Genetic risk score 1 (GRS1) combined 11 SNPs and two haplotypes, with effect estimates from previous genome-wide association studies. GRS2 combined 11 SNPs plus 4 SNPs from the haplotypes with coefficients estimated from these prospective cohorts using 10-fold cross-validation. Scores were added to a model adjusted for classic risk factors comprising the Framingham risk score and 10-year risks were derived.

Results: Both scores improved net reclassification (NRI) over the Framingham score (7.5%, p = 0.017 for GRS1; 6.5%, p = 0.044 for GRS2), but only GRS2 also improved discrimination (c-index improvement 1.11%, p = 0.048). In a subgroup analysis of men aged 50-59 (436 cases, 603 non-cases), net reclassification improved further for GRS1 (13.8%) and GRS2 (12.5%). Net reclassification improvement remained significant for both scores when family history of CHD was added to the baseline model for this male subgroup, improving prediction of early-onset CHD events.

Conclusions: Genetic risk scores add precision to risk estimates for CHD and improve prediction beyond classic risk factors, particularly for middle-aged men.
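A sketch of how a weighted multilocus score enters a risk model (illustrative only: the per-variant weights, the logistic link used here in place of the cohort-specific Cox baseline, and all names are our assumptions, not the study's estimates):

```python
import math

def weighted_grs(genotypes, log_odds):
    """Weighted multilocus genetic risk score: risk-allele dosage (0/1/2)
    at each variant times its per-variant log-odds weight."""
    return sum(g * b for g, b in zip(genotypes, log_odds))

def ten_year_risk(baseline_lp, grs, grs_beta=1.0):
    """Add the score to a baseline (e.g. Framingham-adjusted) linear
    predictor; a logistic link is used here for simplicity, whereas the
    study derived 10-year risks from Cox models with a baseline survival."""
    lp = baseline_lp + grs_beta * grs
    return 1.0 / (1.0 + math.exp(-lp))
```

GRS1 corresponds to using published GWAS effect sizes as the `log_odds` weights; GRS2 to re-estimating them in the cohorts by cross-validation.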


Predicting a user's next location from their previous visiting pattern is one of the primary tasks over data from location-based social networks (LBSNs) such as Foursquare. Many aspects of a user's so-called “check-in” profile have been exploited for this task, including the spatial and temporal information of check-ins as well as the user's social network. Building more sophisticated prediction models by enriching check-in data with information from other sources is challenging, because LBSNs expose only limited data owing to privacy concerns. In this paper, we propose a framework that combines location data from LBSNs with map data in order to associate a set of venue categories with each location. For example, if a user is found to be checking in at a mall that, according to the map, contains cafes, cinemas and restaurants, all of this information is associated with the check-in. The category information is then leveraged to predict the user's next check-in location. Our experiments with a publicly available check-in dataset show that this approach improves on state-of-the-art methods for location prediction.
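A minimal sketch of the category-association idea (our toy formulation; the function names and the first-order category-transition model are assumptions, not the paper's exact method): transitions between the category sets of consecutive check-ins are counted, and candidate venues are ranked by the accumulated counts.

```python
from collections import defaultdict

def train_category_transitions(checkins, venue_categories):
    """Count transitions between the category sets of consecutive check-ins."""
    trans = defaultdict(int)
    for prev, nxt in zip(checkins, checkins[1:]):
        for c1 in venue_categories[prev]:
            for c2 in venue_categories[nxt]:
                trans[(c1, c2)] += 1
    return trans

def predict_next(current, candidates, venue_categories, trans):
    """Rank candidate venues by accumulated category-transition counts."""
    def score(v):
        return sum(trans[(c1, c2)]
                   for c1 in venue_categories[current]
                   for c2 in venue_categories[v])
    return max(candidates, key=score)
```

Because categories generalize across venues, this kind of model can score venues the user has never visited, which raw venue-to-venue transition counts cannot.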


BACKGROUND: Pre-eclampsia is a leading cause of maternal and perinatal morbidity and mortality. Women with type 1 diabetes are considered a high-risk group for developing pre-eclampsia. Much research has focused on biomarkers as a means of screening for pre-eclampsia in the general maternal population; however, there is a lack of evidence for women with type 1 diabetes.
OBJECTIVES: To undertake a systematic review to identify potential biomarkers for the prediction of pre-eclampsia in women with type 1 diabetes.
SEARCH STRATEGY: We searched Medline, EMBASE, Maternity and Infant Care, Scopus, Web of Science and CINAHL.
SELECTION CRITERIA: Studies were included if they measured biomarkers in blood or urine of women who developed pre-eclampsia and had pre-gestational type 1 diabetes mellitus.
DATA COLLECTION AND ANALYSIS: A narrative synthesis was adopted, as a meta-analysis could not be performed due to high study heterogeneity.
MAIN RESULTS: A total of 72 records were screened, with 21 eligible studies being included in the review. A wide range of biomarkers was investigated and study size varied from 34 to 1258 participants. No single biomarker appeared to be effective in predicting pre-eclampsia; however, glycaemic control was associated with an increased risk while a combination of angiogenic and anti-angiogenic factors seemed to be potentially useful.
CONCLUSIONS: Limited evidence suggests that combinations of biomarkers may be more effective in predicting pre-eclampsia than single biomarkers. Further research is needed to verify the predictive potential of biomarkers that have been measured in the general maternal population, as many studies exclude women with diabetes preceding pregnancy.


Atrial fibrillation (AF) is an arrhythmia affecting the atria. In AF, atrial contraction is rapid and irregular, ventricular filling becomes incomplete, and cardiac output is reduced. AF can cause palpitations, fainting, chest pain or heart failure, and it also increases the risk of stroke. Coronary artery bypass grafting (CABG) is a surgical intervention performed to restore blood flow in cases of severe coronary artery disease. Between 10% and 65% of patients who have never experienced AF develop it, most often on the second or third postoperative day. AF is particularly frequent after mitral valve surgery, occurring in roughly 64% of patients. The onset of postoperative AF is associated with increased morbidity and with longer, more costly hospital stays. The mechanisms responsible for postoperative AF are not well understood. Identifying patients at high risk of AF after CABG would therefore be useful for its prevention. The present project is based on the analysis of cardiac electrograms recorded in patients after aortocoronary bypass surgery. The first objective of the research is to investigate whether the recordings display characteristic changes before the onset of AF. The second objective is to identify predictive factors that single out the patients who will develop AF. The recordings were made by the team of Dr Pierre Pagé on 137 patients undergoing CABG. Three unipolar electrodes were sutured onto the epicardium of the atria to record continuously during the first 4 postoperative days.

The first task was to develop an algorithm to detect and distinguish atrial and ventricular activations on each channel, and to combine activations from the three channels belonging to the same cardiac event. The algorithm was developed and optimized on a first set of markers, and its performance evaluated on a second set. Validation software was developed to prepare these two sets and to correct the detections on all recordings used later in the analyses. It was complemented by tools to form, label and validate normal sinus beats, premature atrial and ventricular activations (PAA, PVA), and arrhythmia episodes. Preoperative clinical data were then analysed to establish preoperative AF risk. Age, serum creatinine level and a diagnosis of myocardial infarction proved to be the most important predictive factors. Although the preoperative risk level can to some extent predict who will develop AF, it was not correlated with the time of onset of postoperative AF. For all patients who had at least one AF episode lasting 10 minutes or more, the two hours preceding the first sustained AF were analysed. This first sustained AF was always triggered by a PAA, most often originating in the left atrium. However, during the two pre-AF hours, the distribution of PAAs, and of the fraction of them originating in the left atrium, was broad and inhomogeneous across patients. The number of PAAs, the duration of transient arrhythmias, the sinus heart rate, and the low-frequency portion of heart rate variability (LF portion) showed significant changes in the last hour before the onset of AF.

The final step was to compare patients with and without sustained AF in order to find factors discriminating the two groups. Five types of logistic regression models were compared. They had similar sensitivity, specificity and receiver operating characteristic curves, and all predicted the non-AF patients very poorly. A moving-average method was proposed to improve discrimination, especially for the non-AF patients. Two models were retained, selected on criteria of robustness, accuracy and applicability. Around 70% of non-AF patients and 75% of AF patients were correctly identified in the last hour before AF. The PAA rate, the fraction of PAAs initiated in the left atrium, the pNN50, the atrioventricular conduction time, and the correlation between the latter and the heart rate were the predictive variables common to both models.
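A schematic of this last step (a hypothetical sketch: the actual covariates and coefficients of the five candidate models are not reproduced here): a logistic model turns per-window features such as the PAA rate and pNN50 into probabilities, and a trailing moving average then smooths them, which is the device proposed to improve discrimination of the non-AF patients.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def risk_series(feature_windows, weights, bias=0.0):
    """Window-by-window AF probability from a logistic model over features
    such as PAA rate, pNN50 and AV conduction time (names illustrative)."""
    return [logistic(bias + sum(w * x for w, x in zip(weights, f)))
            for f in feature_windows]

def moving_average(probs, k=3):
    """Trailing moving average of the window probabilities: the smoothing
    step proposed to reduce spurious alarms in non-AF patients."""
    out = []
    for i in range(len(probs)):
        window = probs[max(0, i - k + 1): i + 1]
        out.append(sum(window) / len(window))
    return out
```

Smoothing trades a short detection delay for fewer isolated false-positive windows, which is why it mainly helps the non-AF group.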


Understanding complex biological processes requires sophisticated experimental and computational approaches. Recent progress in functional genomic strategies has put at our disposal powerful tools for collecting data on the interconnectivity of genes, proteins and small molecules, with the aim of studying the organisational principles of their cellular networks. Integrating this knowledge within a systems biology framework would enable the prediction of new functions for genes that remain uncharacterised to date. To make such predictions at the genomic scale in the yeast Saccharomyces cerevisiae, we developed an innovative strategy that combines high-throughput interactome screening of protein-protein interactions, in silico prediction of gene function, and validation of these predictions by high-throughput lipidomics. First, we carried out a large-scale screen of protein-protein interactions using protein-fragment complementation. This method detects in vivo interactions between proteins expressed from their natural promoters and, unlike existing techniques for detecting protein-protein interactions, showed no bias against membrane interactions. We consequently discovered several new interactions and increased the coverage of a lipid homeostasis interactome whose understanding remains incomplete to date. We then applied a learning algorithm to identify eight uncharacterised genes with a potential role in lipid metabolism.

Finally, we investigated whether these genes, and a distinct group of transcriptional regulators not previously linked to lipids, play a role in lipid homeostasis. To this end, we analysed the lipidomes of deletion mutants of the selected genes. To examine a large number of strains, we developed a high-throughput platform for high-content lipidomic screening of yeast mutant libraries. The platform combines high-resolution Orbitrap mass spectrometry with a dedicated data-processing framework supporting lipid phenotyping of hundreds of Saccharomyces cerevisiae mutants. The lipidomics experiments confirmed the functional predictions by revealing differences in the lipid metabolic phenotypes of deletion mutants lacking the genes YBR141C and YJR015W, implicated in lipid metabolism. An altered lipid phenotype was also observed for a deletion mutant of the transcription factor KAR4, which had not previously been linked to lipid metabolism. These results demonstrate that a process integrating the acquisition of new molecular interactions, computational prediction of gene function, and an innovative high-throughput lipidomics platform is an important addition to existing methodologies in systems biology. Developments in functional genomic methodologies and lipidomic technologies thus provide new means to study the biological networks of higher eukaryotes, including mammals. The strategy presented here therefore has the potential to be applied to more complex organisms.
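The in silico prediction step can be caricatured as guilt-by-association over the interaction network (a deliberately minimal sketch with invented names; the study used a trained learning algorithm rather than this simple neighbour vote): an unannotated protein is scored by the fraction of its interaction partners already annotated with the function of interest.

```python
from collections import defaultdict

def neighbour_vote(edges, labelled):
    """Guilt-by-association on a protein-protein interaction network:
    score each unlabelled protein by the mean label (0/1 membership in
    e.g. 'lipid metabolism') of its annotated interaction partners."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    scores = {}
    for node, nbrs in adj.items():
        if node in labelled:
            continue
        known = [n for n in nbrs if n in labelled]
        if known:
            scores[node] = sum(labelled[n] for n in known) / len(known)
    return scores
```

High-scoring uncharacterised genes would then be the candidates carried forward to lipidomic validation.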


Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNAs (siRNAs). This silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, notably in AIDS, neurodegenerative diseases, cholesterol-related disease and cancer in mice, with the hope of extending these approaches to humans. Over the recent past, a significant amount of work has been undertaken to understand the gene silencing mediated by exogenous siRNA. Designing efficient exogenous siRNA sequences is challenging: target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts through sequence similarity. Before performing gene silencing with siRNAs, it is therefore essential to analyse their off-target effects in addition to their inhibition efficiency against the intended target. Designing exogenous siRNA with good knock-down efficiency and target specificity thus remains an open problem. Some methods that consider both the inhibition efficiency and the off-target potential of an siRNA against a gene have already been developed, but only a few achieve good inhibition efficiency, specificity and sensitivity. The main focus of this thesis is to develop computational methods that optimize siRNA efficacy, in terms of both inhibition capacity and off-target possibility, against target mRNAs, which may be useful in gene silencing and drug design for tumor development. This study investigates the currently available siRNA prediction approaches and devises a better computational approach to the problem of siRNA efficacy.

The strengths and limitations of the available approaches are investigated and taken into account in designing an improved solution. The approaches proposed in this study extend some of the best-scoring previous state-of-the-art techniques by incorporating machine learning and statistical approaches, together with thermodynamic features such as whole stacking energy, to improve prediction accuracy, inhibition efficiency, sensitivity and specificity. We propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. The SVM model classifies an siRNA as efficient or inefficient in silencing a target gene. The first ANN model, siRNA Designer, optimizes the inhibition efficiency of siRNA against target genes. The second ANN model, Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency against target genes, with improved sensitivity and specificity, and identifies the off-target knockdown possibility of an siRNA against non-target genes. The models are trained and tested on a large data set of siRNA sequences, and validated using the Pearson correlation coefficient, Matthews correlation coefficient, receiver operating characteristic analysis, prediction accuracy, sensitivity and specificity. OpsiD predicts the inhibition capacity of siRNA against a target mRNA with improved results over the state-of-the-art techniques, and the study also clarifies the influence of whole stacking energy on siRNA efficiency. The model is further improved by the ability to identify the off-target possibility of a predicted siRNA on non-target genes.

The proposed model, OpsiD, can thus predict optimized siRNAs by considering both inhibition efficiency on target genes and off-target possibility on non-target genes, with improved inhibition efficiency, specificity and sensitivity. Since siRNA efficacy is optimized for both inhibition efficiency and off-target possibility, the risk of off-target effects during gene silencing can be overcome to a great extent. These findings may provide new insights into cancer diagnosis, prognosis and therapy by gene silencing, and the approach may prove useful for designing exogenous siRNAs for therapeutic applications and gene silencing techniques in different areas of bioinformatics.
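As a stand-in for the trained SVM/ANN scoring (illustrative only: these are two widely cited empirical siRNA design heuristics, not the OpsiD models), a candidate sequence can be screened by simple sequence features:

```python
def gc_content(seq):
    """Fraction of G/C bases in an RNA sequence."""
    seq = seq.upper()
    return sum(b in "GC" for b in seq) / len(seq)

def rule_score(seq):
    """Toy efficiency score from two widely cited empirical design
    heuristics: moderate GC content, and A/U at the 5' end (a proxy for
    low duplex stability there). A stand-in for the trained models."""
    s = 0
    if 0.30 <= gc_content(seq) <= 0.52:
        s += 1
    if seq[0].upper() in "AU":
        s += 1
    return s

def is_efficient(seq, threshold=2):
    """Classify a candidate siRNA sequence as efficient or inefficient."""
    return rule_score(seq) >= threshold
```

A full pipeline would additionally score thermodynamic features such as whole stacking energy and screen the candidate against the transcriptome for off-target matches.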


This work covers the electromechanical design and design optimization of widely tunable optical multi-membrane devices with vertically oriented cavities, based on the finite element method (FEM). A multi-membrane InP/air Fabry-Pérot optical filter is presented and comprehensively analysed. A systematic structural design procedure is presented, and accurate analytical electromechanical models of the devices are derived; these can be invaluable tools for quickly providing clear insight at the beginning of the design phase. Using the FEM program, the stiffening effect caused by non-linear strain was investigated and its effect on extending the mechanical tuning range of the devices demonstrated. Interestingly, the normalized deflection-voltage relation was observed to have an invariant profile. Deformation of the membrane surfaces of the device geometries presented in this work proved to be an undesired, yet sometimes unavoidable, effect; however, the choice of the structural dimensions influences the degree of membrane deformation under actuation. This work also presents a quasi-3D electromechanical model implemented in FEMLAB that can be applied generally to the modelling of thin structures, by treating them as 2D objects and taking the third dimension to be either a constant (e.g. the layer thickness) or a quantity given by a mathematical function. This assumption drastically reduces the computation time and the required memory. The model was further used to investigate the effect of scaling the tunable devices, for which a novel scaling technique was derived and applied.

The results show that the resulting scaled device exhibits almost exactly the same mechanical tuning behaviour as the unscaled one. Including the influence of axial strain and gradient strain in the calculations required modifying the standard implementation of the 3D mechanics mode supplied with the FEM software used. The results of this study show a strong influence of strain on the tuning properties of the investigated devices, and the theoretical model calculations agreed very well with the experimental results.
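The classic lumped parallel-plate model behind such electrostatic tuning calculations can serve as a sanity check on the FEM results (a standard textbook approximation, not the FEM model of this work; the spring constant, gap and area are assumed effective values): pull-in occurs at one third of the gap, and the stable deflection below pull-in follows from balancing spring and electrostatic forces.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """Pull-in voltage of a parallel-plate electrostatic actuator with
    effective spring constant k: V_pi = sqrt(8*k*gap^3 / (27*eps0*area))."""
    return math.sqrt(8.0 * k * gap ** 3 / (27.0 * EPS0 * area))

def deflection(v, k, gap, area):
    """Static deflection below pull-in: solve
    k*x = eps0*area*v^2 / (2*(gap - x)^2)
    by bisection on the stable branch x in [0, gap/3]."""
    f = lambda x: k * x - EPS0 * area * v ** 2 / (2.0 * (gap - x) ** 2)
    lo, hi = 0.0, gap / 3.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:       # spring force exceeds electrostatic force
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The one-third-of-the-gap travel limit is exactly the restriction that the non-linear strain stiffening discussed above helps to relax in the real membranes.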


World-wide structural genomics initiatives are rapidly accumulating structures for which limited functional information is available. Additionally, state-of-the-art structure prediction programs are now capable of generating at least low-resolution structural models of target proteins. Accurate detection and classification of functional sites within both solved and modelled protein structures therefore represents an important challenge. We present a fully automatic site detection method, FuncSite, that uses neural network classifiers to predict the location and type of functionally important sites in protein structures. The method is designed primarily to require only backbone residue positions, without the need for specific side-chain atoms to be present. To highlight effective site detection in low-resolution structural models, FuncSite was used to screen model proteins generated with mGenTHREADER on a set of newly released structures. We found effective metal site detection even for moderate-quality protein models, illustrating the robustness of the method.
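A minimal caricature of backbone-only site scoring (our own sketch with invented names and weights; FuncSite's actual descriptors and trained networks are not reproduced here): a probe point is described by the residue types whose backbone (CA) positions fall within a radius, and a single-layer classifier scores the descriptor.

```python
import math

def backbone_features(residues, center, radius=8.0):
    """Toy site descriptor using only backbone positions: counts of each
    residue type whose CA atom lies within `radius` angstroms of a probe
    point. `residues` is a list of (name, (x, y, z)) tuples."""
    counts = {}
    for name, coords in residues:
        if math.dist(coords, center) <= radius:
            counts[name] = counts.get(name, 0) + 1
    return counts

def site_score(counts, weights, bias=0.0):
    """Single-layer logistic scoring of the descriptor: a stand-in for
    FuncSite's neural network classifiers."""
    z = bias + sum(weights.get(k, 0.0) * v for k, v in counts.items())
    return 1.0 / (1.0 + math.exp(-z))
```

Because the descriptor never touches side-chain atoms, it degrades gracefully on low-resolution models, which is the design point the abstract emphasizes.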


Improved nutrient utilization efficiency is strongly related to enhanced economic performance and reduced environmental footprint of dairy farms. Pasture-based systems are widely used for dairy production in certain areas of the world, but prediction equations of fresh grass nutritive value (nutrient digestibility and energy concentrations) are limited. Equations to predict digestible energy (DE) and metabolizable energy (ME) used for grazing cattle have been either developed with cattle fed conserved forage and concentrate diets or sheep fed previously frozen grass, and the majority of them require measurements less commonly available to producers, such as nutrient digestibility. The aim of the present study was therefore to develop prediction equations more suitable to grazing cattle for nutrient digestibility and energy concentrations, which are routinely available at farm level by using grass nutrient contents as predictors. A study with 33 nonpregnant, nonlactating cows fed solely fresh-cut grass at maintenance energy level for 50 wk was carried out over 3 consecutive grazing seasons. Freshly harvested grass of 3 cuts (primary growth and first and second regrowth), 9 fertilizer input levels, and contrasting stage of maturity (3 to 9 wk after harvest) was used, thus ensuring a wide representation of nutritional quality. As a result, a large variation existed in digestibility of dry matter (0.642-0.900) and digestible organic matter in dry matter (0.636-0.851) and in concentrations of DE (11.8-16.7 MJ/kg of dry matter) and ME (9.0-14.1 MJ/kg of dry matter). Nutrient digestibilities and DE and ME concentrations were negatively related to grass neutral detergent fiber (NDF) and acid detergent fiber (ADF) contents but positively related to nitrogen (N), gross energy, and ether extract (EE) contents. 
For each predicted variable (nutrient digestibility or energy concentration), different combinations of predictors (grass chemical composition) were found to be significant and to increase the explained variation. For example, relatively higher R² values were found for prediction of N digestibility using N and EE as predictors; gross-energy digestibility using EE, NDF, ADF, and ash; NDF, ADF, and organic matter digestibilities using N, water-soluble carbohydrates, EE, and NDF; digestible organic matter in dry matter using water-soluble carbohydrates, EE, NDF, and ADF; DE concentration using gross energy, EE, NDF, ADF, and ash; and ME concentration using N, EE, ADF, and ash. The equations presented may allow relatively quick and easy prediction of grass quality and, hence, better grazing utilization on commercial and research farms where nutrient composition falls within the range assessed in the current study.
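Prediction equations of this kind are ordinary least-squares regressions of the measured variable on the chemical-composition predictors; a sketch (with made-up data, not the study's coefficients) of deriving and applying such an equation:

```python
import numpy as np

def fit_prediction_equation(X, y):
    """Least-squares coefficients for e.g. DE (MJ/kg DM) from grass
    chemistry (columns such as GE, EE, NDF, ADF, ash); an intercept
    column is prepended, so coef[0] is the intercept."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, x):
    """Apply a fitted equation to one sample's nutrient contents."""
    return coef[0] + np.dot(coef[1:], x)
```

On a farm, `predict` would be fed routine feed-analysis values, which is precisely the appeal of equations requiring only grass nutrient contents as inputs.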