29 results for matching score
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
In all biological processes, proteins and other small molecules interact to carry out their functions, forming transient macromolecular complexes. The interaction of two or more molecules can be described as a docking event. Docking is an important phase in structure-based drug design strategies, as it can be used to simulate protein-ligand interactions. Various docking programs allow automated docking, but most of them offer limited visualization and user interaction. It would be advantageous if scientists could, in an immersive environment, visualize the molecules participating in the docking process, manipulate their structures and manually dock them before submitting the new conformations to an automated docking process; this would help stimulate the design/docking process and could greatly reduce docking time and resource requirements. To achieve this, we propose a new virtual modelling/docking program that merges the advantages of virtual modelling programs with the efficiency of the algorithms in existing docking programs.
Abstract:
The frequency responses of two 50 Hz and one 400 Hz induction machines have been measured experimentally over a frequency range of 1 kHz to 400 kHz. This study has shown that the stator impedances of the machines behave in a similar manner to a parallel resonant circuit, and hence have a resonant point at which the input impedance of the machine is at a maximum. This maximum impedance point was found experimentally to be as low as 33 kHz, which is well within the switching frequency range of modern inverter drives. This paper investigates the possibility of exploiting the maximum impedance point of the machine, by taking it into consideration when designing an inverter, in order to minimize ripple currents due to the switching frequency. Minimizing the ripple currents would reduce torque pulsation and losses, increasing overall performance. A modified machine model was developed to take the resonant point into account, and this model was then simulated with an inverter to demonstrate the possible advantages of matching the inverter switching frequency to the resonant point. Finally, in order to verify the simulated results experimentally, a real inverter with a variable switching frequency was used to drive an induction machine. Experimental results are presented.
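As a rough illustration of the resonance behaviour described above, the sketch below models the stator input impedance as a simple parallel resonant circuit and locates its impedance maximum numerically. All component values are hypothetical, chosen only so that the peak falls near the tens-of-kHz region reported; the paper's modified machine model is more detailed.

```python
import numpy as np

# Illustrative sketch: the stator impedance is modelled as a parallel
# resonant circuit (loss resistance in series with the inductive branch).
# Component values are assumed, not taken from the paper.
L = 5e-3      # inductance (H), assumed
C = 4.7e-9    # parasitic winding capacitance (F), assumed
R = 50.0      # loss resistance in the inductive branch (ohms), assumed

f = np.linspace(1e3, 400e3, 4000)     # the measured range: 1 kHz to 400 kHz
w = 2 * np.pi * f
Z_L = R + 1j * w * L                  # inductive branch
Z_C = 1 / (1j * w * C)                # capacitive branch
Z = Z_L * Z_C / (Z_L + Z_C)           # parallel combination

f_res = f[np.argmax(np.abs(Z))]
# With these values the peak lands near 33 kHz, matching the order of
# magnitude reported; an inverter switching at f_res sees maximum impedance,
# minimizing switching-frequency ripple current.
print(f"Impedance peaks near {f_res / 1e3:.1f} kHz")
```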
Abstract:
We propose a novel method for scoring the accuracy of protein binding site predictions – the Binding-site Distance Test (BDT) score. Recently, the Matthews Correlation Coefficient (MCC) has been used to evaluate binding site predictions, both by developers of new methods and by the assessors for the community-wide prediction experiment CASP8. Whilst being a rigorous scoring method, the MCC does not take into account the actual 3D distance of the predicted residues from the observed binding site. Thus, an incorrectly predicted site that is nevertheless close to the observed binding site will obtain an identical score to the same number of non-binding residues predicted at random. The MCC is also affected by the subjectivity of determining observed binding residues and the ambiguity of choosing distance cutoffs. By contrast, the BDT method produces continuous scores ranging between 0 and 1, relating to the distance between the predicted and observed residues. Residues predicted close to the binding site score higher than those more distant, providing a better reflection of the true accuracy of predictions. The CASP8 function predictions were evaluated using both the MCC and BDT methods and the scores were compared. The BDT scores were found to correlate strongly with the MCC scores whilst being less susceptible to the subjectivity of defining binding residues. We therefore suggest that this new simple score is a potentially more robust method for future evaluations of protein-ligand binding site predictions.
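To make the contrast concrete, here is a minimal sketch of the two scoring styles: the standard MCC computed from a confusion matrix, and a hypothetical distance-weighted score in the spirit of BDT. The exact BDT formula is defined in the paper and is not reproduced here; the exponential weighting and the d0 parameter below are illustrative assumptions only.

```python
import numpy as np

def mcc(tp, tn, fp, fn):
    """Standard Matthews Correlation Coefficient from confusion counts."""
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def distance_score(pred_coords, obs_coords, d0=3.0):
    """Hypothetical distance-weighted score (BDT-like in spirit only):
    each predicted residue contributes exp(-d/d0), where d is its distance
    to the nearest observed binding residue, so near-misses score between
    0 and 1 instead of 0."""
    dists = [min(np.linalg.norm(p - o) for o in obs_coords) for p in pred_coords]
    return float(np.mean(np.exp(-np.array(dists) / d0)))

# A near-miss prediction gets partial credit under the distance score,
# whereas per-residue classification metrics would count it as wrong.
site = [np.array([0.0, 0.0, 0.0]), np.array([3.0, 0.0, 0.0])]
pred = [np.array([1.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])]
print(distance_score(pred, site))
```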
The sequential analysis of repeated binary responses: a score test for the case of three time points
Abstract:
In this paper a robust method is developed for the analysis of data consisting of repeated binary observations taken at up to three fixed time points on each subject. The primary objective is to compare outcomes at the last time point, using earlier observations to predict this for subjects with incomplete records. A score test is derived. The method is developed for application to sequential clinical trials, as at interim analyses there will be many incomplete records occurring in non-informative patterns. Motivation for the methodology comes from experience with clinical trials in stroke and head injury, and data from one such trial is used to illustrate the approach. Extensions to more than three time points and to allow for stratification are discussed. Copyright © 2005 John Wiley & Sons, Ltd.
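For orientation, the generic form of a score (Rao) test statistic is sketched below; the paper's contribution is the specific derivation of such a test for repeated binary responses at up to three time points with incomplete records, which this general form does not capture.

```latex
% Generic score test: U is the score vector (gradient of the log-likelihood)
% and I the Fisher information, both evaluated at the restricted (null)
% estimate \hat{\theta}_0; under H_0, S is asymptotically chi-squared.
\[
  S \;=\; U(\hat{\theta}_0)^{\mathsf T}\, I(\hat{\theta}_0)^{-1}\, U(\hat{\theta}_0)
\]
```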
Abstract:
A score test is developed for binary clinical trial data, which incorporates patient non-compliance while respecting randomization. It is assumed in this paper that compliance is all-or-nothing, in the sense that a patient either accepts all of the treatment assigned as specified in the protocol, or none of it. Direct analytic comparisons of the adjusted test statistics for both the score test and the likelihood ratio test are made with the corresponding test statistics that adhere to the intention-to-treat principle. It is shown that no gain in power over the intention-to-treat analysis is possible by adjusting for patient non-compliance. Sample size formulae are derived, and simulation studies are used to demonstrate that the sample size approximation holds. Copyright © 2003 John Wiley & Sons, Ltd.
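As a reference point for the kind of sample size formula involved, the sketch below implements the textbook normal-approximation formula for comparing two binomial proportions; the paper's formulae additionally account for the assumed all-or-nothing non-compliance, which this simple version omits.

```python
from math import ceil
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.9):
    """Standard per-arm sample size for comparing two binomial proportions
    (textbook approximation, shown only as a reference point; it does not
    include the paper's non-compliance adjustment)."""
    z_a = norm.ppf(1 - alpha / 2)          # two-sided significance level
    z_b = norm.ppf(power)                  # target power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

print(sample_size_two_proportions(0.30, 0.40))  # 473 per arm with these inputs
```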
Abstract:
An approach to the automatic generation of efficient Field Programmable Gate Array (FPGA) circuits for regular-expression-based (RegEx) pattern-matching problems is presented. Using the proposed novel design strategy, highly area- and time-efficient circuits can be generated automatically for arbitrary sets of regular expressions. This makes the technique suitable for applications that must handle very large sets of patterns at high speed, such as the network security and intrusion detection application domains. We have combined several existing techniques to optimise our solution for such domains and propose how the whole process of dynamically generating FPGA circuits for RegEx pattern matching could be automated efficiently.
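A common way to realise such circuits in hardware is the one-hot NFA scheme: one flip-flop per NFA state, with all states updated in parallel on each input character. The Python sketch below simulates that idea in software for the toy pattern ab*c; it is an illustrative analogue only, and the paper's generator targets arbitrary regular expression sets on real FPGAs.

```python
# Software analogue of a one-hot NFA circuit for the pattern "ab*c"
# (unanchored substring match). Each boolean mimics a flip-flop; the
# loop body mimics one clock cycle consuming one input character.
def match_ab_star_c(text):
    s0, s1 = True, False          # s0: start state, s1: after 'a' with b* self-loop
    accept = False
    for ch in text:               # one "clock cycle" per character
        n1 = (s0 and ch == 'a') or (s1 and ch == 'b')   # enter or hold the b* state
        accept = accept or (s1 and ch == 'c')           # transition to accepting state
        s1 = n1
        # s0 stays True: a new match attempt may begin at every position
    return accept

print(match_ab_star_c("xxabbbcyy"))  # True
print(match_ab_star_c("ac"))         # True ("b*" matches the empty string)
print(match_ab_star_c("abx"))        # False
```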
Abstract:
This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic and the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify some critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with comparable accuracy to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
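For context, the sketch below implements the full-sample Gaussian Parzen window estimator that the sparse method is benchmarked against; the paper's orthogonal forward regression construction, which selects a small subset of these kernels, is not reproduced here.

```python
import numpy as np

def parzen_density(x, samples, h):
    """Full-sample Gaussian Parzen window estimate at points x from 1-D
    samples: the average of N Gaussian kernels of width h, one centred
    on each sample. The sparse method would keep only a few of these."""
    x = np.atleast_1d(x)[:, None]            # shape (M, 1)
    z = (x - samples[None, :]) / h           # (M, N) standardized distances
    k = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return k.mean(axis=1) / h                # average kernel contribution

rng = np.random.default_rng(0)
data = rng.normal(size=500)
print(parzen_density([0.0, 1.0], data, h=0.3))  # close to the N(0,1) density
```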
Abstract:
In this paper, we introduce two kinds of graphs: the generalized matching networks (GMNs) and the recursive generalized matching networks (RGMNs). The former generalize the hypercube-like networks (HLNs), while the latter include the generalized cubes and the star graphs. We prove that a GMN on a family of k-connected building graphs is (k+1)-connected. We then prove that a GMN on a family of Hamiltonian-connected building graphs having at least three vertices each is Hamiltonian-connected. Our conclusions generalize some previously known results.
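To illustrate the flavour of the construction, the sketch below builds the simplest matching composition: two copies of a building graph joined by a perfect matching, which is the mechanism underlying hypercube-like networks. A GMN generalizes this to a whole family of building graphs; the function name and edge-list representation here are illustrative choices.

```python
def matched_sum(edges, n, perm):
    """Join two copies of a graph on vertices 0..n-1 (the second copy
    relabelled to n..2n-1) by the perfect matching {i -- n + perm[i]}."""
    g = list(edges)                                   # first copy
    g += [(u + n, v + n) for (u, v) in edges]         # second copy
    g += [(i, n + perm[i]) for i in range(n)]         # perfect matching
    return g

# The 3-cube from two copies of the 2-cube (a 4-cycle), identity matching:
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
q3 = matched_sum(c4, 4, perm=list(range(4)))
print(len(q3))   # 12 edges, as expected for Q3
```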
Abstract:
There is growing interest, especially for trials in stroke, in combining multiple endpoints in a single clinical evaluation of an experimental treatment. The endpoints might be repeated evaluations of the same characteristic or alternative measures of progress on different scales. Often they will be binary or ordinal, and those are the cases studied here. In this paper we take a direct approach to combining the univariate score statistics for comparing treatments with respect to each endpoint. The correlations between the score statistics are derived and used to allow a valid combined score test to be applied. A sample size formula is deduced and application in sequential designs is discussed. The method is compared with an alternative approach based on generalized estimating equations in an illustrative analysis and replicated simulations, and the advantages and disadvantages of the two approaches are discussed.
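One natural combination rule of the kind described is sketched below: standardized per-endpoint score statistics are summed and rescaled using their correlation matrix, so that the combined statistic is again standard normal under the null. This is a sketch of the general idea, not necessarily the exact weighting derived in the paper.

```latex
% Z_1, ..., Z_K: standardized score statistics, one per endpoint, with
% correlation matrix R under the null. Their sum has variance 1^T R 1, so
\[
  T \;=\; \frac{\sum_{k=1}^{K} Z_k}{\sqrt{\mathbf{1}^{\mathsf T} R\, \mathbf{1}}}
  \;\sim\; N(0,1) \quad \text{under } H_0 .
\]
```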
Abstract:
In the forecasting of binary events, verification measures that are “equitable” were defined by Gandin and Murphy to satisfy two requirements: 1) they award all random forecasting systems, including those that always issue the same forecast, the same expected score (typically zero), and 2) they are expressible as the linear weighted sum of the elements of the contingency table, where the weights are independent of the entries in the table, apart from the base rate. The authors demonstrate that the widely used “equitable threat score” (ETS), as well as numerous others, satisfies neither of these requirements and only satisfies the first requirement in the limit of an infinite sample size. Such measures are referred to as “asymptotically equitable.” In the case of ETS, the expected score of a random forecasting system is always positive and only falls below 0.01 when the number of samples is greater than around 30. Two other asymptotically equitable measures are the odds ratio skill score and the symmetric extreme dependency score, which are more strongly inequitable than ETS, particularly for rare events; for example, when the base rate is 2% and the sample size is 1000, random but unbiased forecasting systems yield an expected score of around −0.5, reducing in magnitude to −0.01 or smaller only for sample sizes exceeding 25 000. This presents a problem since these nonlinear measures have other desirable properties, in particular being reliable indicators of skill for rare events (provided that the sample size is large enough). A potential way to reconcile these properties with equitability is to recognize that Gandin and Murphy’s two requirements are independent, and the second can be safely discarded without losing the key advantages of equitability that are embodied in the first. This enables inequitable and asymptotically equitable measures to be scaled to make them equitable, while retaining their nonlinearity and other properties such as being reliable indicators of skill for rare events. It also opens up the possibility of designing new equitable verification measures.
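The ETS discussed above has a standard closed form, and the asymptotic-equitability point is easy to reproduce numerically: the sketch below computes ETS from a 2x2 contingency table and estimates its expectation for random, unbiased forecasts at a small sample size. The base rate and sample size used are illustrative choices.

```python
import numpy as np

def ets(hits, false_alarms, misses, n):
    """Standard equitable threat score (Gilbert skill score): hits are
    discounted by the number expected from random forecasts, hits_random."""
    hits_random = (hits + misses) * (hits + false_alarms) / n
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom else 0.0

# Monte Carlo estimate of E[ETS] for a random, unbiased forecasting system.
rng = np.random.default_rng(1)
base_rate, n, trials = 0.1, 30, 20000
scores = []
for _ in range(trials):
    obs = rng.random(n) < base_rate
    fc = rng.random(n) < base_rate           # random forecast, same base rate
    h = np.sum(fc & obs)
    f = np.sum(fc & ~obs)
    m = np.sum(~fc & obs)
    scores.append(ets(h, f, m, n))
# Small but positive: per the paper, the expectation only falls below 0.01
# for sample sizes greater than around 30.
print(np.mean(scores))
```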