61 results for computational analysis
Abstract:
This paper discusses the calculation of electron-impact collision strengths and effective collision strengths for iron-peak elements of importance in the analysis of many astronomical and laboratory spectra. It begins with a brief overview of R-matrix theory, which forms the basis of the computer programs that have been widely used to calculate the relevant atomic data used in this analysis. A summary is then given of calculations carried out over the last 20 years for electron collisions with Fe II. The grand challenge represented by the calculation of accurate collision strengths and effective collision strengths for this ion is then discussed. A new parallel R-matrix program, PRMAT, which is being developed to meet this challenge, is then described, and results of recent calculations using this program to determine optically forbidden transitions in e- – Ni IV on a Cray T3E-1200 parallel supercomputer are presented. The implications of this e- – Ni IV calculation for the determination of accurate data from an isoelectronic e- – Fe II calculation are discussed, and finally some future directions of research are reviewed.
Abstract:
Parallelizing compilers have difficulty analysing and optimising complex code. To address this, some analysis may be delayed until run time, and techniques such as speculative execution may be used. Furthermore, to enhance performance, a feedback loop may be set up between the compile-time and run-time analysis systems, as in iterative compilation. To extend this, it is proposed that the run-time analysis collect information about the values of variables not already determined, and estimate a probability measure over the sampled values. These measures may then be used to guide optimisations in further analyses of the program. To address the problem of variables whose values are probability measures, this paper also presents an outline of a novel combination of previous probabilistic denotational semantics models, applied to a simple imperative language.
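As a rough illustration of the kind of run-time value profiling this abstract describes, a runtime might sample a variable's observed values, estimate an empirical probability measure from the samples, and let a later optimisation pass query that measure before speculating. This is only a minimal sketch; all names, thresholds, and data here are hypothetical and not taken from the paper.

```python
from collections import Counter

class ValueProfile:
    """Empirical probability measure over the values observed for one variable."""

    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def sample(self, value):
        # Record one value observed at run time.
        self.counts[value] += 1
        self.total += 1

    def most_likely(self):
        # Value to speculate on, with its estimated probability.
        if not self.total:
            return None, 0.0
        value, count = self.counts.most_common(1)[0]
        return value, count / self.total


# Hypothetical use: speculate that a loop bound is effectively constant
# if its sampled distribution is sufficiently concentrated.
profile = ValueProfile()
for observed in [128, 128, 128, 64, 128]:   # values seen during profiling runs
    profile.sample(observed)

value, p = profile.most_likely()
if p > 0.9:
    print(f"speculatively specialise for bound = {value} (p = {p:.2f})")
else:
    print("distribution too flat; keep the general version")
```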
Abstract:
Query processing over the Internet involving autonomous data sources is a major task in data integration. It requires estimates of the costs of possible queries in order to select the one with the minimum cost. In this context, the cost of a query is affected by three factors: network congestion, server contention state, and the complexity of the query. In this paper, we study the effects of both network congestion and server contention state on the cost of a query. We refer to these two factors together as system contention states. We present a new approach to determining the system contention states by clustering the costs of a sample query. For each system contention state, we construct two cost formulas, for unary and join queries respectively, using multiple regression. When a new query is submitted, its system contention state is first estimated using either the time-slides method or the statistical method. The cost of the query is then calculated using the corresponding cost formulas. The estimated cost is further adjusted to improve its accuracy. Our experiments show that these methods produce quite accurate cost estimates for queries submitted to remote data sources over the Internet.
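A minimal sketch of the overall flow the abstract describes: cluster observed costs of a repeated probe query into contention states, fit one regression cost model per state, then classify the current state and predict a new query's cost. The features, synthetic data, and model choices below are invented for illustration and are not the paper's actual cost formulas.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Observed elapsed times (seconds) of the same sample query, sent repeatedly
# to a remote source; clustering them separates the system contention states.
sample_costs = np.array([0.8, 0.9, 0.85, 2.1, 2.3, 2.2, 5.0, 5.4]).reshape(-1, 1)
states = KMeans(n_clusters=3, n_init=10, random_state=0).fit(sample_costs)

# For each contention state, fit a cost formula from query features to cost.
# Features here (result cardinality, number of operands) are placeholders.
rng = np.random.default_rng(0)
models = {}
for state in range(3):
    X = rng.random((20, 2)) * 1000                        # hypothetical training queries
    y = X @ np.array([0.002, 0.01]) * (state + 1) + 0.5   # synthetic observed costs
    models[state] = LinearRegression().fit(X, y)

# At run time: estimate the current contention state from a fresh probe,
# then predict the new query's cost with that state's formula.
probe_cost = np.array([[2.0]])
current_state = int(states.predict(probe_cost)[0])
new_query_features = np.array([[500.0, 2.0]])
estimated_cost = models[current_state].predict(new_query_features)[0]
print(f"state {current_state}, estimated cost {estimated_cost:.2f}s")
```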
Abstract:
In this paper, we introduce a method to detect pathological pathways of a disease. We aim to identify biological processes, rather than single genes, affected by chronic fatigue syndrome (CFS). So far, CFS has neither diagnostic clinical signs nor abnormalities that can be detected by laboratory examinations. It is also unclear whether CFS represents one disease or can be subdivided into different categories. We use information from clinical trials, the Gene Ontology (GO) database, and gene expression data to identify undirected dependency graphs (UDGs) representing biological processes according to the GO database. The structural comparison of UDGs of sick versus non-sick patients allows us to make predictions about the modification of pathways due to pathogenesis.
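To make the structural-comparison step concrete, the following sketch contrasts two undirected dependency graphs, one per patient group, and reports dependencies present in only one of them. The gene names and edges are invented placeholders; the paper's actual UDG estimation from expression data is not reproduced here.

```python
import networkx as nx

# Hypothetical UDGs for one GO biological process, estimated separately from
# the expression data of sick and non-sick patients (gene names are invented).
udg_sick = nx.Graph([("GENE_A", "GENE_B"), ("GENE_B", "GENE_C"), ("GENE_C", "GENE_D")])
udg_control = nx.Graph([("GENE_A", "GENE_B"), ("GENE_B", "GENE_C")])

# Structural comparison: dependencies gained or lost in the sick group point
# to a possible modification of the pathway due to pathogenesis.
sick_edges = set(map(frozenset, udg_sick.edges()))
control_edges = set(map(frozenset, udg_control.edges()))

gained = sick_edges - control_edges
lost = control_edges - sick_edges

print("dependencies only in sick patients:", [tuple(e) for e in gained])
print("dependencies only in controls:", [tuple(e) for e in lost])
```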
Abstract:
The provision of security in mobile ad hoc networks is of paramount importance due to their wireless nature. However, when conducting research into security protocols for ad hoc networks, it is necessary to consider these in the context of the overall system. For example, the communication delay associated with the underlying MAC layer needs to be taken into account. Nodes in mobile ad hoc networks must strictly obey the rules of the underlying MAC when transmitting security-related messages while still maintaining a certain quality of service. In this paper a novel authentication protocol, RASCAAL, is described, and its performance is analysed by investigating both the communication-related effects of the underlying IEEE 802.11 MAC and the computation-related effects of the cryptographic algorithms employed. To the best of the authors' knowledge, RASCAAL is the first authentication protocol to propose the concept of dynamically formed, short-lived random clusters with no prior knowledge of the cluster head. The performance analysis demonstrates that the communication losses outweigh the computation losses with respect to energy and delay. MAC-related communication effects account for 99% of the total delay and total energy consumption incurred by the RASCAAL protocol. The results also show that a saving in communication energy of up to 12.5% can be achieved by changing the status of the wireless nodes during the course of operation. Copyright (C) 2009 G. A. Safdar and M. P. O'Neill (née McLoone).
Abstract:
Over the past ten years, a variety of microRNA target prediction methods has been developed, and many of these methods are constantly improved and adapted to recent insights into miRNA-mRNA interactions. In a typical scenario, different methods return different rankings of putative targets, even if the ranking is restricted to selected mRNAs that are related to a specific disease or cell type. For experimental validation it is then difficult to decide in which order to process the predicted miRNA-mRNA bindings, since each validation is a laborious task and therefore only a limited number of mRNAs can be analysed. We propose a new ranking scheme that combines ranked predictions from several methods and, unlike standard thresholding methods, utilises the concept of Pareto fronts as defined in multi-objective optimisation. In the present study, we attempt a proof of concept by applying the new ranking scheme to hsa-miR-21, hsa-miR-125b, and hsa-miR-373 and to prediction scores supplied by PITA and RNAhybrid. The scores are interpreted as a two-objective optimisation problem, and the elements of the Pareto front are ranked by the STarMir score, with a subsequent re-calculation of the Pareto front after removal of the top-ranked mRNA from the basic set of prediction scores. The method is evaluated on validated targets of the three miRNAs, and the ranking is compared to scores from DIANA-microT and TargetScan. We observed that the new ranking method performs well and consistently, and the first validated targets are elements of Pareto fronts at a relatively early stage of the recurrent procedure, which encourages further research towards a higher-dimensional analysis of Pareto fronts. (C) 2010 Elsevier Ltd. All rights reserved.
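The recurrent Pareto-front ranking can be sketched generically as follows: compute the Pareto front of the two prediction scores, order the front by a tie-breaking score, remove the top-ranked mRNA, and recompute. The mRNA names and scores below are invented, and the sketch assumes higher values are better for both objectives; it stands in for, rather than reproduces, the PITA/RNAhybrid/STarMir scores used in the paper.

```python
# Illustrative sketch of the recurrent Pareto-front ranking described above.
candidates = {
    # mRNA: (score_method_1, score_method_2, tiebreak_score) -- all hypothetical
    "MRNA_1": (0.9, 0.4, 0.7),
    "MRNA_2": (0.6, 0.8, 0.9),
    "MRNA_3": (0.5, 0.5, 0.2),
    "MRNA_4": (0.3, 0.9, 0.5),
}

def pareto_front(items):
    """Return the mRNAs not dominated in both objectives by any other mRNA."""
    front = []
    for name, (a, b, _) in items.items():
        dominated = any(
            (oa >= a and ob >= b) and (oa > a or ob > b)
            for other, (oa, ob, _) in items.items() if other != name
        )
        if not dominated:
            front.append(name)
    return front

remaining = dict(candidates)
ranking = []
while remaining:
    front = pareto_front(remaining)
    # Rank the current front by the tie-breaking score, remove the top-ranked
    # mRNA, and recompute the front, as in the recurrent scheme.
    front.sort(key=lambda name: remaining[name][2], reverse=True)
    top = front[0]
    ranking.append(top)
    del remaining[top]

print("suggested validation order:", ranking)
```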