45 results for Prune


Relevance: 20.00%

Abstract:

Protein-protein docking programs typically perform four major tasks: (i) generation of docking poses, (ii) selection of a subset of poses, (iii) their structural refinement and (iv) scoring and ranking for the final assessment of the true quaternary structure. Although the tasks can be integrated or performed in serial order, they are by nature modular, allowing an opportunity to substitute one algorithm with another. We have implemented two modular web services: (i) PRUNE, to select a subset of docking poses generated during the sampling search (http://pallab.serc.iisc.ernet.in/prune), and (ii) PROBE, to refine, score and rank them (http://pallab.serc.iisc.ernet.in/probe). The former uses a new interface-area-based edge-scoring function to eliminate > 95% of the poses generated during the docking search. In contrast to other multi-parameter-based screening functions, this single-parameter elimination reduces the computational time significantly, in addition to increasing the chances of selecting native-like models in the top-rank list. The PROBE server ranks the pruned poses after structure refinement, scoring them with a regression model for geometric compatibility and normalized interaction energy. While web services similar to PROBE are infrequent, no web service akin to PRUNE has been described before. Both servers are publicly accessible and free to use.
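The core idea behind PRUNE, ranking poses by a single interface-derived score and discarding all but a small top fraction, can be sketched as follows. This is an illustrative toy: the scoring function and cutoff below are placeholders, not PRUNE's actual interface-area edge-scoring function.

```python
def prune_poses(poses, score_fn, keep_fraction=0.05):
    """Rank poses by a single score and keep only the top fraction,
    mirroring the elimination of > 95% of sampled docking poses."""
    ranked = sorted(poses, key=score_fn, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

# Toy poses: (pose_id, interface_area) pairs; larger area scores higher.
poses = [(i, float(i % 100)) for i in range(1000)]
kept = prune_poses(poses, score_fn=lambda p: p[1], keep_fraction=0.05)
print(len(kept))
```

Because only one parameter is evaluated per pose, elimination reduces to a single sort, which is what makes a single-parameter screen fast relative to multi-parameter screening functions.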

Relevance: 20.00%

Abstract:

This document contains a report on the work done under the ESA/Ariadna study 06/4101 on the global optimization of space trajectories with multiple gravity assist (GA) and deep space manoeuvres (DSM). The study was performed by a joint team of scientists from the University of Reading and the University of Glasgow.

Relevance: 20.00%

Abstract:

Background: Prune belly syndrome is a rare condition produced by an early mesodermal defect that causes abdominal abnormalities. However, the literature indicates that disturbances related to ectodermal development may also be present. This is the first case report in the literature to suggest that dental abnormalities are part of the broad spectrum of clinical features of prune belly syndrome. Because the syndrome causes many serious medical problems, early diagnosis of abnormalities involving the primary and permanent dentitions is encouraged. Case presentation: The authors report the clinical case of a 4-year-old Caucasian boy with prune belly syndrome. In addition to the triad of abdominal muscle deficiency, abnormalities of the gastrointestinal and urinary tracts, and cryptorchidism, a geminated mandibular right central incisor, agenesis of a mandibular permanent left incisor, and congenitally missing primary teeth (namely, the mandibular right and left lateral incisors) were noted. Conclusion: This original case report on prune belly syndrome highlights the possibility that dental abnormalities are part of the broad spectrum of clinical features of the syndrome. Therefore, an accurate intra-oral clinical examination and radiographic evaluation are required for patients with this syndrome in order to provide an early diagnosis of abnormalities involving the primary and permanent dentitions.

Relevance: 10.00%

Abstract:

We examine the security of the 64-bit lightweight block cipher PRESENT-80 against related-key differential attacks. With a computer search we are able to prove that for any related-key differential characteristic on full-round PRESENT-80, the probability of the characteristic restricted to the 64-bit state is no higher than 2^-64. To overcome the computational complexity of the search, which is exponential in the state and key sizes, we use truncated differences; however, as the key schedule is not nibble-oriented, we switch to actual differences and apply early-abort techniques to prune the tree-based search. With a new method called the extended split approach, we are able to make the whole search feasible, and we implement and run it in real time. Our approach targets the PRESENT-80 cipher; however, with small modifications it can be reused for other lightweight ciphers as well.
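A rough sketch of the tree-based search with early abort described above: characteristics are extended round by round, and any branch whose accumulated log2 probability can no longer beat the bound (or the best complete characteristic found so far) is abandoned. The transition probabilities here are invented toy values, not PRESENT's actual difference-distribution data.

```python
def best_characteristic(transitions_log2, rounds, bound_log2=-64.0):
    """Depth-first search over round-by-round difference transitions.
    Each transition carries a log2 probability; a branch is abandoned
    ('early abort') once its accumulated probability falls to or below
    the bound, or to or below the best complete characteristic so far."""
    best = [float('-inf')]

    def dfs(r, acc):
        if acc <= bound_log2 or acc <= best[0]:
            return  # early abort: this branch cannot win
        if r == rounds:
            best[0] = acc
            return
        for p in transitions_log2:
            dfs(r + 1, acc + p)

    dfs(0, 0.0)
    return best[0]

# Toy cipher: every round offers transitions of probability 2^-2 or 2^-3.
best_log2 = best_characteristic([-2.0, -3.0], rounds=10)
print(best_log2)
```

The pruning condition is what keeps the tree tractable: once a good characteristic is found, most of the exponentially many branches are cut off near the root.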

Relevance: 10.00%

Abstract:

Information on the foods patients like and dislike is the essential basis for planning menus that are acceptable to patients and promote adequate consumption. The aim of this study was to obtain quantitative data on the food preferences of inpatients at a large metropolitan public hospital for use in menu planning. The methodology was based on a study by Williams et al (1988), and included additional questions about appetite and taste changes. The survey used a 9-point hedonic scale to rate foods listed in random order, and was modified to incorporate more contemporary foods than those used in the original Williams study. Surveys were conducted by final-year University of Queensland dietetics students on Food Service Practicum at the Royal Brisbane and Women's Hospital (929 beds) in 2012. The first survey (220 questions, n = 157) had a response rate of 61%. The second included more sandwich fillings and salads (231 questions, n = 219, response rate 67%). The total number surveyed was 376. Results showed the most preferred foods were roast potato, grilled steak, ice cream, fresh strawberries, roast lamb, roast beef, grapes and banana. The least preferred foods were grapefruit, soybeans, lentils, sardines, prune juice and grapefruit juice. Patients who reported taste changes (10%) had similar food preferences to those who did not. Patients who reported a poor/very poor appetite (10%) generally scored foods lower than those who reported an OK (22%) or good/very good (65%) appetite. The results of this study informed planning for a new patient menu at the RBWH in December 2012.

Relevance: 10.00%

Abstract:

The anti-thrombotic properties of an anthocyanin-rich Queen Garnet plum juice (QGPJ) and an anthocyanin-free prune juice (PJ) were studied in this randomised, double-blind, crossover trial. Twenty-one healthy subjects (M = 10, F = 11) consumed QGPJ, PJ or placebo, 200 mL/day for 28 days, followed by a 2-week wash-out period. Only QGPJ supplementation inhibited platelet aggregation induced by ADP (<5%, P = 0.02), collagen (<2.7%, P < 0.001) and arachidonic acid (<4%, P < 0.001); reduced platelet activation-dependent surface-marker P-selectin expression of activated de-granulated platelets (<17.2%, P = 0.04); prolonged activated partial thromboplastin clotting time (>2.1 s, P = 0.03); and reduced plasma fibrinogen (<7.5%, P = 0.02) and malondialdehyde levels, a plasma biomarker of oxidative stress (P = 0.016). PJ supplementation increased plasma hippuric acid content (P = 0.018). Neither QGPJ nor PJ supplementation affected blood cell counts, lipid profile, or inflammation markers. Our findings suggest that QGPJ, but not PJ, has the potential to significantly attenuate thrombosis by reducing platelet activation/hyper-coagulability and oxidative stress.

Relevance: 10.00%

Abstract:

Modern database systems incorporate a query optimizer to identify the most efficient "query execution plan" for executing the declarative SQL queries submitted by users. A dynamic-programming-based approach is used to exhaustively enumerate the combinatorially large search space of plan alternatives and, using a cost model, to identify the optimal choice. While dynamic programming (DP) works very well for moderately complex queries with up to around a dozen base relations, it usually fails to scale beyond this stage due to its inherent exponential space and time complexity. Therefore, DP becomes practically infeasible for complex queries with a large number of base relations, such as those found in current decision-support and enterprise management applications. To address the above problem, a variety of approaches have been proposed in the literature. Some completely jettison the DP approach and resort to alternative techniques such as randomized algorithms, whereas others have retained DP by using heuristics to prune the search space to computationally manageable levels. In the latter class, a well-known strategy is "iterative dynamic programming" (IDP), wherein DP is employed bottom-up until it hits its feasibility limit, and then iteratively restarted with a significantly reduced subset of the execution plans currently under consideration. The experimental evaluation of IDP indicated that, by appropriate choice of algorithmic parameters, it was possible to almost always obtain "good" (within a factor of two of the optimal) plans, in most of the remaining cases "acceptable" (within an order of magnitude of the optimal) plans, and only rarely a "bad" plan. While IDP is certainly an innovative and powerful approach, we have found that there are a variety of common query frameworks wherein it can fail to consistently produce good plans, let alone the optimal choice. This is especially so when star or clique components are present, increasing the complexity of the join graphs. Worse, this shortcoming is exacerbated when the number of relations participating in the query is scaled upwards.
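The IDP strategy itself can be caricatured in a few lines: exhaustively enumerate combinations of at most k relations, collapse the cheapest combination into a single intermediate result, and restart. The cost model below (cost of joining a set = product of cardinalities) is a deliberate oversimplification standing in for a real optimizer's cost model.

```python
from itertools import combinations
from math import prod

def idp_join_order(cards, k=3):
    """Sketch of iterative dynamic programming (IDP) for join ordering:
    consider joins among up to k relations, keep the cheapest combination,
    collapse it into one composite relation, and iterate until a single
    plan remains. Real IDP runs Selinger-style DP inside each iteration;
    here 'cost' is just the product of cardinalities (a toy model)."""
    rels = dict(cards)  # relation name -> cardinality
    while len(rels) > 1:
        step = min(len(rels), k)
        best = min(combinations(rels, step),
                   key=lambda s: prod(rels[r] for r in s))
        joined = prod(rels[r] for r in best)
        for r in best:
            del rels[r]
        rels['(' + '*'.join(sorted(best)) + ')'] = joined
    name = next(iter(rels))
    return name, rels[name]

cards = {'A': 10, 'B': 1000, 'C': 20, 'D': 500, 'E': 5}
plan, cost = idp_join_order(cards, k=3)
print(plan, cost)
```

The abstract's criticism applies exactly here: once the cheapest k-way combination is materialized, plans that would have interleaved its relations with the remaining ones are lost forever, which hurts most on star and clique join graphs.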

Relevance: 10.00%

Abstract:

Background: Signal transduction events often involve transient, yet specific, interactions between structurally conserved protein domains and polypeptide sequences in target proteins. The identification and validation of these associating domains is crucial to understanding the signal transduction pathways that modulate different cellular or developmental processes. Bioinformatics strategies to extract and integrate information from diverse sources have been shown to facilitate experimental design for understanding complex biological events. These methods, primarily based on information from high-throughput experiments, have also led to the identification of new connections, thus providing hypothetical models for cellular events. Such models, in turn, provide a framework for directing experimental efforts to validate the predicted molecular rationale for complex cellular processes. In this context, it is envisaged that the rational design of peptides for protein-peptide binding studies could substantially facilitate the experimental strategies to evaluate a predicted interaction. This rational design procedure involves the integration of protein-protein interaction data, gene ontology, physico-chemical calculations, domain-domain interaction data and information on functional sites or critical residues. Results: Here we describe an integrated approach called "PeptideMine" for the identification of peptides based on specific functional patterns present in the sequence of an interacting protein. This approach, based on sequence searches in the interacting sequence space, has been developed into a web server which can be used for the identification and analysis of peptides, peptide homologues or functional patterns from the interacting sequence space of a protein. To further facilitate experimental validation, the PeptideMine web server also provides a list of physico-chemical parameters corresponding to the peptide, to determine the feasibility of using the peptide for in vitro biochemical or biophysical studies. Conclusions: The strategy described here involves the integration of data and tools to identify potential interacting partners for a protein and to design criteria for peptides based on desired biochemical properties. Alongside the search for interacting protein sequences using three different search programs, the server also provides the biochemical characteristics of candidate peptides, to prune peptide sequences based on the features most suited to a given experiment. The PeptideMine server is available at http://caps.ncbs.res.in/peptidemine
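As an illustration of pruning candidate peptides on physico-chemical grounds, the sketch below filters peptides by length and by grand average of hydropathy (GRAVY) on the standard Kyte-Doolittle scale. The thresholds are illustrative assumptions; a server like PeptideMine reports such parameters and leaves the cutoffs to the experimenter.

```python
# Kyte-Doolittle hydropathy values (standard scale, per amino acid).
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def gravy(peptide):
    """Grand average of hydropathy: mean Kyte-Doolittle value."""
    return sum(KD[a] for a in peptide) / len(peptide)

def prune_peptides(peptides, max_gravy=0.0, max_len=15):
    """Keep peptides more likely to behave in vitro: short and, on
    average, hydrophilic. Thresholds are illustrative, not prescriptive."""
    return [p for p in peptides if len(p) <= max_len and gravy(p) <= max_gravy]

kept_peps = prune_peptides(['DEDEDK', 'ILVFLL', 'RKQENS'])
print(kept_peps)
```

Here the strongly hydrophobic 'ILVFLL' is pruned, while the two hydrophilic peptides survive the filter.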

Relevance: 10.00%

Abstract:

Null dereferences are a bane of programming in languages such as Java. In this paper we propose a sound, demand-driven, inter-procedurally context-sensitive dataflow analysis technique to verify a given dereference as safe or potentially unsafe. Our analysis uses an abstract lattice of formulas to find a pre-condition at the entry of the program such that a null dereference can occur only if the initial state of the program satisfies this pre-condition. We use a simplified domain of formulas, abstracting out integer arithmetic, as well as unbounded access paths due to recursive data structures. For the sake of precision we model aliasing relationships explicitly in our abstract lattice, enable strong updates, and use a limited notion of path sensitivity. For the sake of scalability we prune formulas continually as they get propagated, reducing to true those conjuncts that are less likely to be useful in validating or invalidating the formula. We have implemented our approach, and present an evaluation of it on a set of ten real Java programs. Our results show that the set of design features we have incorporated enables the analysis to (a) explore long, inter-procedural paths to verify each dereference, with (b) reasonable accuracy, and (c) very quick response time per dereference, making it suitable for use in desktop development environments.
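The "prune formulas as they propagate" step can be sketched as follows; the usefulness heuristic and size cap below are hypothetical stand-ins for the paper's actual criteria. Note that dropping a conjunct only weakens (over-approximates) the pre-condition, so the analysis stays sound.

```python
def prune_formula(conjuncts, is_useful, limit=4):
    """Continually weaken a propagated pre-condition by replacing
    conjuncts unlikely to help validate or invalidate it with 'true'
    (i.e., dropping them), and capping formula size. Dropping conjuncts
    only over-approximates the pre-condition, preserving soundness."""
    kept = [c for c in conjuncts if is_useful(c)]
    return kept[:limit]  # discarded conjuncts are treated as 'true'

# Toy heuristic: keep only conjuncts mentioning the tracked pointer 'p'.
conjs = ['p != null', 'x > 0', 'p.f == q', 'y < 10', 'q != null']
pruned = prune_formula(conjs, is_useful=lambda c: 'p' in c)
print(pruned)
```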

Relevance: 10.00%

Abstract:

Visual tracking is an important task in various computer vision applications, including visual surveillance, human-computer interaction, event detection, and video indexing and retrieval. Recent state-of-the-art sparse representation (SR) based trackers show better robustness than many other existing trackers. One of the issues with these SR trackers is low execution speed. The particle filter framework is one of the major aspects responsible for slow execution, and is common to most existing SR trackers. In this paper, we propose a robust interest-point-based tracker in an l1-minimization framework that runs in real time with performance comparable to the state-of-the-art trackers. In the proposed tracker, the target dictionary is obtained from the patches around target interest points. Next, the interest points from the candidate window of the current frame are obtained. The correspondence between target and candidate points is obtained by solving the proposed l1-minimization problem. In order to prune the noisy matches, a robust matching criterion is proposed, where only the reliable candidate points that mutually match with target and candidate dictionary elements are considered for tracking. The object is localized by measuring the displacement of these interest points. The reliable candidate patches are used for updating the target dictionary. The performance and accuracy of the proposed tracker are benchmarked with several complex video sequences. The tracker is found to be considerably fast compared to the reported state-of-the-art trackers. The proposed tracker is further evaluated for various local patch sizes, numbers of interest points and regularization parameters. The performance of the tracker under various challenges, including illumination change, occlusion, and background clutter, has been quantified with a benchmark dataset containing 50 videos.
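The mutual-matching criterion for pruning noisy correspondences can be sketched with plain nearest-neighbour distances standing in for the paper's l1-minimization; the point coordinates below are invented. A candidate point is kept only if its best target point also picks that candidate as its best match.

```python
def mutual_matches(target_pts, cand_pts):
    """Keep only mutually consistent correspondences: target point i
    matches candidate j only if j is i's nearest candidate AND i is
    j's nearest target. One-sided (likely noisy) matches are pruned."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    t2c = [min(range(len(cand_pts)), key=lambda j: dist(t, cand_pts[j]))
           for t in target_pts]
    c2t = [min(range(len(target_pts)), key=lambda i: dist(c, target_pts[i]))
           for c in cand_pts]
    return [(i, t2c[i]) for i in range(len(target_pts)) if c2t[t2c[i]] == i]

# Toy 2-D interest points: two genuine matches plus one ambiguous point.
target = [(0.0, 0.0), (5.0, 5.0), (9.0, 0.0)]
cand = [(0.1, 0.1), (5.2, 4.9), (2.0, 2.0)]
matches = mutual_matches(target, cand)
print(matches)
```

The third target point's best candidate is already claimed by another target, so that one-sided match is pruned.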

Relevance: 10.00%

Abstract:

Facial expressions are the most expressive way to display emotions. Many algorithms have been proposed which employ a particular set of people (usually a database) to both train and test their model. This paper focuses on the challenging task of database-independent emotion recognition, which is a generalized case of subject-independent emotion recognition. The emotion recognition system employed in this work is a Meta-Cognitive Neuro-Fuzzy Inference System (McFIS). McFIS has two components: a neuro-fuzzy inference system, which is the cognitive component, and a self-regulatory learning mechanism, which is the meta-cognitive component. The meta-cognitive component monitors the knowledge in the neuro-fuzzy inference system and decides what to learn, when to learn and how to learn the training samples efficiently. For each sample, McFIS decides whether to delete the sample without learning it, use it to add/prune or update the network parameters, or reserve it for future use. This helps the network avoid over-training and, as a result, improves its generalization performance on untrained databases. In this study, we extract pixel-based emotion features from the well-known Japanese Female Facial Expression (JAFFE) and Taiwanese Facial Expression Image (TFEID) databases. Two sets of experiments are conducted. First, we study the individual performance of McFIS on both databases based on 5-fold cross-validation. Next, in order to study the generalization performance, McFIS trained on the JAFFE database is tested on TFEID and vice versa. The performance comparison in both experiments against an SVM classifier gives promising results.

Relevance: 10.00%

Abstract:

The learning of probability distributions from data is a ubiquitous problem in the fields of Statistics and Artificial Intelligence. During the last decades, several learning algorithms have been proposed to learn probability distributions based on decomposable models, due to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model given a maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms which approximates this problem with a computational complexity of O(k · n^2 log n) in the worst case, where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structure. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one, increasing its likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree, which can be considered a natural extension of Chow and Liu's algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains, and have shown competitive behavior in dealing with the maximum likelihood problem. Due to their low computational complexity, they are especially recommended for high-dimensional domains.
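For the base case k = 2, the maximum likelihood decomposable model is exactly Chow and Liu's tree: the spanning tree maximizing total pairwise mutual information. A minimal sketch over binary data (empirical mutual information plus Kruskal's maximum spanning tree) follows; the data are invented.

```python
import math
from itertools import combinations

def chow_liu_tree(samples, n_vars):
    """Chow-Liu (the k = 2 case of maximum likelihood decomposable
    models): build the spanning tree that maximizes total pairwise
    empirical mutual information, via Kruskal's algorithm."""
    N = len(samples)

    def mi(i, j):
        # Empirical mutual information (nats) between binary variables i, j.
        total = 0.0
        for a in (0, 1):
            for b in (0, 1):
                pij = sum(1 for r in samples if r[i] == a and r[j] == b) / N
                pi = sum(1 for r in samples if r[i] == a) / N
                pj = sum(1 for r in samples if r[j] == b) / N
                if pij > 0:
                    total += pij * math.log(pij / (pi * pj))
        return total

    edges = sorted(combinations(range(n_vars), 2), key=lambda e: -mi(*e))
    parent = list(range(n_vars))  # union-find for Kruskal's algorithm

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    tree = []
    for u, v in edges:  # add highest-MI edges that join two components
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Toy binary data: X1 copies X0; X2 is independent noise.
data = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1), (0, 0, 0), (1, 1, 1)]
tree = sorted(chow_liu_tree(data, 3))
print(tree)
```

The strongly dependent pair (X0, X1) is always picked first; the fractal tree algorithms generalize this construction to larger clique sizes.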

Relevance: 10.00%

Abstract:

In this paper we describe MARIE, an N-gram-based statistical machine translation decoder. It is implemented using a beam search strategy, with distortion (or reordering) capabilities. The underlying translation model is based on an N-gram approach, extended to introduce reordering at the phrase level. The search graph structure is designed to perform very accurate comparisons, which allows for a high level of pruning, improving the decoder's efficiency. We report several techniques for efficiently pruning the search space. The combinatorial explosion of the search space derived from the search graph structure is reduced by limiting the number of reorderings a given translation is allowed to perform, and also the maximum distance a word (or a phrase) is allowed to be reordered. We finally report translation accuracy results on three different translation tasks.
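Two standard pruning schemes used by beam-search decoders, histogram pruning (keep at most B hypotheses per stack) and threshold pruning (drop hypotheses too far below the stack's best score), can be sketched as follows. The hypotheses and scores are invented; this illustrates the general technique, not MARIE's specific pruning rules.

```python
import heapq

def beam_prune(hypotheses, beam_size, threshold):
    """Prune a stack of partial translation hypotheses.
    Threshold pruning drops hypotheses scoring worse than the best
    by more than `threshold`; histogram pruning then keeps at most
    `beam_size` survivors. Scores are log-probabilities."""
    if not hypotheses:
        return []
    best = max(score for _, score in hypotheses)
    survivors = [(h, s) for h, s in hypotheses if s >= best - threshold]
    return heapq.nlargest(beam_size, survivors, key=lambda hs: hs[1])

# Toy stack of partial hypotheses with log-probability scores.
hyps = [('la casa', -1.2), ('el casa', -5.0), ('la house', -2.0),
        ('casa la', -9.5), ('the house', -1.5)]
survivors = beam_prune(hyps, beam_size=3, threshold=2.0)
print(survivors)
```

Tightening either the beam size or the threshold trades search accuracy for decoding speed, which is the trade-off the decoder's accurate hypothesis comparisons are designed to make safer.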

Relevance: 10.00%

Abstract:

Site selection, a suitable combination of superior cultivars, slope-land improvement, rational fertilization with drought-resistant cultivation, and proper pruning are the guarantees of high-quality, high-yield apple production in the hilly and gully region of northern Shaanxi. The operational basis and methods of each measure are also discussed.