93 results for Markov Clustering, GPI Computing, PPI Networks, CUDA, ELLPACK-R Sparse Format, Parallel Computing
Abstract:
Promiscuous human leukocyte antigen (HLA) binding peptides are ideal targets for vaccine development. Existing computational models for prediction of promiscuous peptides used hidden Markov models and artificial neural networks as prediction algorithms. We report a system based on support vector machines that outperforms previously published methods. Preliminary testing showed that it can predict peptides binding to HLA-A2 and -A3 super-type molecules with excellent accuracy, even for molecules where no binding data are currently available.
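For readers unfamiliar with the approach, a minimal sketch of an SVM-based peptide-binding classifier is shown below. The one-hot encoding over 9-mer positions, the RBF kernel and the use of scikit-learn are illustrative assumptions; the abstract does not specify the features, kernel or training data actually used.

```python
# Minimal sketch of an SVM peptide-binding classifier (illustrative only).
# Assumptions not taken from the abstract: 9-mer peptides, one-hot amino-acid
# encoding, RBF kernel, scikit-learn as the toolkit, and invented training data.
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode_peptide(peptide: str) -> np.ndarray:
    """One-hot encode a 9-mer peptide into a 9*20 feature vector."""
    x = np.zeros((len(peptide), len(AMINO_ACIDS)))
    for pos, aa in enumerate(peptide):
        x[pos, AA_INDEX[aa]] = 1.0
    return x.ravel()

# Hypothetical training data: peptides labelled 1 (binder) or 0 (non-binder).
peptides = ["KLNEPVLLL", "GILGFVFTL", "AAAKAAAAA", "QQQQQQQQQ"]
labels = [1, 1, 0, 0]

X = np.array([encode_peptide(p) for p in peptides])
y = np.array(labels)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)

# Predict binding for an unseen peptide.
print(clf.predict([encode_peptide("ILKEPVHGV")]))
```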
Abstract:
A variety of current and future wired and wireless networking technologies can be transformed into a seamless communication environment through the application of context-based vertical handovers. Such seamless communication environments are needed for future pervasive/ubiquitous systems. Pervasive systems are context aware and need to adapt to context changes, including network disconnections and changes in network Quality of Service (QoS). Vertical handover is one of many possible adaptation methods. It allows users to roam freely between heterogeneous networks while maintaining the continuity of their applications. This paper proposes a vertical handover mechanism suitable for multimedia applications in pervasive systems. The paper focuses on the handover decision-making process, which uses context information regarding user devices, user location, the network environment and the requested QoS.
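The abstract does not spell out the decision algorithm, but the general idea of a context-based handover decision can be sketched as a weighted scoring of candidate networks against the requested QoS. The attributes, weights and threshold below are purely illustrative and are not taken from the paper.

```python
# Illustrative sketch of a context-based handover decision: score each candidate
# network against the application's requested QoS and pick the best one.
# The scoring weights and attributes are assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class NetworkContext:
    name: str
    bandwidth_mbps: float   # currently available bandwidth
    latency_ms: float       # measured round-trip latency
    cost_per_mb: float      # monetary cost
    signal_strength: float  # normalised 0..1

def handover_score(net: NetworkContext, required_bandwidth: float) -> float:
    """Weighted utility; higher is better. Reject networks below the requested QoS."""
    if net.bandwidth_mbps < required_bandwidth:
        return float("-inf")  # cannot satisfy the requested QoS
    return (0.4 * net.bandwidth_mbps / 100.0
            - 0.3 * net.latency_ms / 100.0
            - 0.2 * net.cost_per_mb
            + 0.1 * net.signal_strength)

candidates = [
    NetworkContext("WLAN", bandwidth_mbps=54.0, latency_ms=20.0, cost_per_mb=0.0, signal_strength=0.6),
    NetworkContext("UMTS", bandwidth_mbps=2.0, latency_ms=120.0, cost_per_mb=0.05, signal_strength=0.9),
]

best = max(candidates, key=lambda n: handover_score(n, required_bandwidth=1.0))
print("Hand over to:", best.name)
```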
Abstract:
The restructuring of power industries has brought fundamental changes to both power system operation and planning. This paper presents a new planning method that uses a multi-objective optimization (MOOP) technique, together with human knowledge, to expand the transmission network in open-access schemes. The method starts with a candidate pool of feasible expansion plans. Subsequent selection of the best candidates is carried out through a MOOP approach in which multiple objectives are tackled simultaneously, aiming to integrate market operation and planning as one unified process in the context of a deregulated system. Human knowledge is applied at both stages to ensure that the selection reflects practical engineering and management concerns. The expansion plan from MOOP is assessed against reliability criteria before it is finalized. The proposed method has been tested on the IEEE 14-bus system, and the relevant analyses and discussions are presented.
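As a rough illustration of the MOOP selection stage, the sketch below filters a hypothetical candidate pool down to its non-dominated (Pareto-optimal) expansion plans. The objectives and their values are invented, and the paper's actual MOOP algorithm is not reproduced here.

```python
# Sketch of selecting non-dominated (Pareto-optimal) expansion plans from a
# candidate pool. The objective values are hypothetical; the paper's actual
# objectives and MOOP algorithm are not specified here.
from typing import Dict, List, Tuple

# Each candidate plan maps to objectives to be minimised:
# (investment cost, expected congestion cost, reliability penalty)
candidates: Dict[str, Tuple[float, float, float]] = {
    "plan_A": (120.0, 35.0, 0.8),
    "plan_B": (150.0, 20.0, 0.5),
    "plan_C": (110.0, 40.0, 0.9),
    "plan_D": (160.0, 22.0, 0.6),
}

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """True if a is at least as good as b on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pool: Dict[str, Tuple[float, ...]]) -> List[str]:
    return [name for name, obj in pool.items()
            if not any(dominates(other, obj)
                       for other_name, other in pool.items() if other_name != name)]

print(pareto_front(candidates))  # plans surviving to the reliability assessment
```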
Abstract:
Expokit provides a set of routines aimed at computing matrix exponentials. More precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear ODEs with constant inhomogeneity. The backbone of the sparse routines consists of matrix-free Krylov subspace projection methods (the Arnoldi and Lanczos processes), which is why the toolkit is capable of coping with sparse matrices of large dimension. The software handles real and complex matrices and provides specific routines for symmetric and Hermitian matrices. The computation of matrix exponentials is a numerical issue of critical importance in the area of Markov chains, where, furthermore, the computed solution is subject to probabilistic constraints. In addition to addressing general matrix exponentials, particular attention is given to the computation of transient states of Markov chains.
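The core Krylov idea behind the sparse routines, approximating w = exp(tA)v from a small projected problem instead of forming exp(tA), can be sketched in a few lines of NumPy/SciPy. This is not Expokit's code; in particular, it omits the adaptive time-stepping and error estimation of the real routines, and the test matrix is random.

```python
# Minimal sketch of the Krylov (Arnoldi) projection idea behind Expokit's sparse
# routines: approximate w = exp(t*A) @ v without forming exp(t*A) explicitly.
import numpy as np
from scipy.linalg import expm
from scipy.sparse import random as sparse_random

def krylov_expm_multiply(A, v, t=1.0, m=30):
    """Approximate exp(t*A) @ v using an m-dimensional Krylov subspace."""
    n = len(v)
    m = min(m, n)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):                      # Arnoldi process
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: exact in the subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # Projection: exp(t*A) v  ~=  beta * V_m * expm(t*H_m) * e_1
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)

A = sparse_random(200, 200, density=0.05, format="csr", random_state=0)
v = np.ones(200)
print(krylov_expm_multiply(A, v, t=0.1)[:5])
```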
Abstract:
Krylov subspace techniques have been shown to yield robust methods for the numerical computation of large sparse matrix exponentials and especially the transient solutions of Markov chains. The attractiveness of these methods stems from the fact that they allow us to compute the action of a matrix exponential operator on an operand vector without having to compute the matrix exponential itself explicitly. In this paper we compare a Krylov-based method with some of the current approaches used for computing transient solutions of Markov chains. After a brief synthesis of the features of the methods used, wide-ranging numerical comparisons are performed on a Power Challenge Array supercomputer on three different models. AMS classification: 65F99; 65L05; 65U05.
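A toy version of the transient-solution computation being compared is shown below: the distribution pi(t) = pi(0) exp(Qt) of a small continuous-time Markov chain, obtained as the action of a matrix exponential on a vector. SciPy's expm_multiply is used here as a stand-in; it is not one of the specific codes benchmarked in the paper, and the generator is an invented 3-state example.

```python
# Toy example of a Markov-chain transient solution: pi(t) = pi(0) * exp(Q t),
# computed as the action of a matrix exponential on a vector.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply

# Infinitesimal generator Q of a small 3-state CTMC (rows sum to zero).
Q = csr_matrix(np.array([
    [-0.5,  0.3,  0.2],
    [ 0.1, -0.4,  0.3],
    [ 0.0,  0.2, -0.2],
]))

pi0 = np.array([1.0, 0.0, 0.0])   # start in state 0
t = 5.0

# Row-vector convention: pi(t) = pi(0) exp(Q t)  <=>  pi(t)^T = exp(Q^T t) pi(0)^T
pi_t = expm_multiply(Q.T * t, pi0)
print(pi_t, pi_t.sum())            # a probability distribution summing to 1
```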
Abstract:
Continuous-valued recurrent neural networks can learn mechanisms for processing context-free languages. The dynamics of such networks are usually based on damped oscillation around fixed points in state space and require that the dynamical components be arranged in certain ways. It is shown that qualitatively similar dynamics, with similar constraints, hold for a^n b^n c^n, a context-sensitive language. The additional difficulty with a^n b^n c^n, compared with the context-free language a^n b^n, consists of 'counting up' and 'counting down' letters simultaneously. The network solution is to oscillate in two principal dimensions, one for counting up and one for counting down. This study focuses on the dynamics employed by the sequential cascaded network, in contrast to the simple recurrent network, and on the use of backpropagation through time. The solutions found generalize well beyond the training data; however, learning is not reliable. The contribution of this study lies in demonstrating how the dynamics in recurrent neural networks that process context-free languages can also be employed in processing some context-sensitive languages (traditionally thought of as requiring additional computational resources). This continuity of mechanism between language classes contributes to our understanding of neural networks in modelling language learning and processing.
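The 'count up in one dimension, count down in another' mechanism can be made concrete with a hand-crafted recogniser, sketched below. It is only an illustration of the two-dimensional counting dynamics, not the sequential cascaded network studied in the paper, and nothing in it is learned.

```python
# Hand-crafted illustration (not a trained network) of the two-dimensional
# counting mechanism: one state dimension counts the a's against the b's,
# the other counts the b's against the c's. The recogniser accepts a^n b^n c^n.
def accepts_anbncn(s: str, k: float = 0.5) -> bool:
    x, y = 1.0, 1.0          # two counting dimensions
    phase = "a"
    for ch in s:
        if ch == "a" and phase == "a":
            x *= k                       # count up on the first dimension
        elif ch == "b" and phase in ("a", "b"):
            phase = "b"
            x /= k                       # count down the a's ...
            y *= k                       # ... while counting up the b's
        elif ch == "c" and phase in ("b", "c"):
            phase = "c"
            y /= k                       # count down the b's
        else:
            return False                 # letters out of order
    return abs(x - 1.0) < 1e-9 and abs(y - 1.0) < 1e-9

for s in ["aabbcc", "aaabbbccc", "aabbc", "abc", "acb"]:
    print(s, accepts_anbncn(s))
```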
Abstract:
Geospatial clustering must be designed in such a way that it takes into account the special features of geoinformation and the peculiar nature of geographical environments, in order to successfully derive geospatially interesting global concentrations and localized excesses. This paper examines families of geospatial clustering methods recently proposed in the data mining community and identifies several features and issues especially important to geospatial clustering in data-rich environments.
Abstract:
In this paper, we studied the fate of endocytosed glycosylphosphatidylinositol-anchored proteins (GPI-APs) in mammalian cells, using aerolysin, a bacterial toxin that binds to the GPI anchor, as a probe. Using biochemical, morphological and functional approaches, we find that GPI-APs are transported down the endocytic pathway to reducing late endosomes in BHK cells. We also find that this transport correlates with association with raft-like membranes, and thus that lipid rafts are present in late endosomes (in addition to the Golgi and the plasma membrane). In marked contrast, endocytosed GPI-APs reach the recycling endosome in CHO cells, and this transport correlates with decreased raft association. GPI-APs are, however, diverted from the recycling endosome and routed to late endosomes in CHO cells when their raft association is increased by clustering seven or fewer GPI-APs with an aerolysin mutant. We conclude that the different endocytic routes followed by GPI-APs in different cell types depend on the residence time of GPI-APs in lipid rafts, and hence that raft partitioning regulates GPI-AP sorting in the endocytic pathway.
Abstract:
Using benthic habitat data from the Florida Keys (USA), we demonstrate how siting algorithms can help identify potential networks of marine reserves that comprehensively represent target habitat types. We applied a flexible optimization tool, simulated annealing, to represent a fixed proportion of different marine habitat types within a geographic area. We investigated the relative influence of spatial information, planning-unit size, detail of habitat classification, and magnitude of the overall conservation goal on the resulting network scenarios. With this method, we were able to identify many adequate reserve systems that met the conservation goals, e.g., representing at least 20% of each conservation target (i.e., habitat type) while fulfilling the overall aim of minimizing the system's area and perimeter. One of the most useful types of information provided by this siting algorithm comes from an irreplaceability analysis, which is a count of the number of times unique planning units were included in reserve-system scenarios. This analysis indicated that many different combinations of sites produced networks that met the conservation goals. While individual 1-km² areas were fairly interchangeable, the irreplaceability analysis highlighted larger areas within the planning region that were chosen consistently to meet the goals incorporated into the algorithm. Additionally, we found that reserve systems designed with a high degree of spatial clustering tended to have considerably less perimeter and larger overall areas in reserve, a configuration that may be preferable particularly for sociopolitical reasons. This exercise illustrates the value of using the simulated annealing algorithm to help site marine reserves: the approach makes efficient use of available resources, can be used interactively by conservation decision makers, and offers biologically suitable alternative networks from which an effective system of marine reserves can be crafted.
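A toy sketch of the simulated-annealing siting idea follows: select planning units so that at least 20% of each habitat's total area is represented while keeping the reserved area small. The grid, habitat areas and penalty weights are invented for illustration; they do not reproduce the actual siting tool, objective function or Florida Keys data used in the study.

```python
# Toy simulated annealing for reserve siting: choose planning units so that at
# least 20% of each habitat's area is represented while the total reserved area
# stays small. All data and weights below are invented for illustration.
import math
import random

random.seed(0)

N_UNITS = 100
HABITATS = ["seagrass", "patch_reef", "hardbottom"]
TARGET_FRACTION = 0.20

# Hypothetical habitat area contained in each planning unit.
unit_habitat = [{h: random.uniform(0.0, 1.0) for h in HABITATS} for _ in range(N_UNITS)]
total_habitat = {h: sum(u[h] for u in unit_habitat) for h in HABITATS}

def cost(selected: set) -> float:
    """Total reserved area plus a penalty for each unmet representation target."""
    area = len(selected)
    penalty = 0.0
    for h in HABITATS:
        captured = sum(unit_habitat[i][h] for i in selected)
        shortfall = max(0.0, TARGET_FRACTION * total_habitat[h] - captured)
        penalty += 100.0 * shortfall
    return area + penalty

def anneal(n_iter=20000, t0=10.0):
    selected = set(random.sample(range(N_UNITS), 10))
    current = cost(selected)
    for step in range(n_iter):
        temp = t0 * (1.0 - step / n_iter) + 1e-6
        candidate = set(selected)
        unit = random.randrange(N_UNITS)
        candidate.symmetric_difference_update({unit})   # flip one unit in/out
        new = cost(candidate)
        if new < current or random.random() < math.exp((current - new) / temp):
            selected, current = candidate, new
    return selected, current

solution, value = anneal()
print(len(solution), "units selected, objective =", round(value, 2))
```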
Abstract:
Traditional methods of R&D management are no longer sufficient for embracing innovations and leveraging complex new technologies to fully integrated positions in established systems. This paper presents the view that the technology integration process is a result of fundamental interactions embedded in inter-organisational activities. Emerging industries, high technology companies and knowledge intensive organisations owe a large part of their viability to complex networks of inter-organisational interactions and relationships. R&D organisations are the gatekeepers in the technology integration process with their initial sanction and motivation to develop technologies providing the first point of entry. Networks rely on the activities of stakeholders to provide the foundations of collaborative R&D activities, business-to-business marketing and strategic alliances. Such complex inter-organisational interactions and relationships influence value creation and organisational goals as stakeholders seek to gain investment opportunities. A theoretical model is developed here that contributes to our understanding of technology integration (adoption) as a dynamic process, which is simultaneously structured and enacted through the activities of stakeholders and organisations in complex inter-organisational networks of sanction and integration.
Abstract:
We show how the measurement-induced model of quantum computation proposed by Raussendorf and Briegel (2001, Phys. Rev. Lett., 86, 5188) can be adapted to a nonlinear optical interaction. This optical implementation requires a Kerr nonlinearity, a single-photon source, a single-photon detector and fast feed-forward. Although nondeterministic optical quantum information proposals such as that suggested by KLM (2001, Nature, 409, 46) do not require a Kerr nonlinearity, they do require complex reconfigurable optical networks. The proposal in this paper has the benefit of a single static optical layout with fixed device parameters, where the algorithm is defined by the final measurement procedure.
Abstract:
A parallel computing environment to support the optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). It decomposes a large engineering system into a number of smaller subsystems that are optimized in parallel on worker nodes, and coordinates the subsystem optimization results on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post-processor. An object-oriented design of these modules is presented. The environment supports all steps from the generation of optimization models to their solution and visualization on networks of computers. User-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. The environment has been verified on an example of a large space-truss optimization.
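A compact sketch of the master-worker pattern described above is given below, using Python's multiprocessing in place of PVM. Each "subsystem optimization" is a stand-in function, and the coordination step simply aggregates results; the real environment's model generation, coordination of shared variables and monitoring are not reproduced.

```python
# Sketch of the master-worker decomposition described above, with Python's
# multiprocessing standing in for PVM. The subsystem optimisation is a trivial
# placeholder used only to illustrate the workflow.
from multiprocessing import Pool

def optimise_subsystem(subsystem):
    """Worker task: 'optimise' one subsystem with a stand-in objective."""
    name, coefficients = subsystem
    # Placeholder closed-form solution: minimise sum((x_i - c_i)^2), i.e. x_i = c_i.
    solution = list(coefficients)
    objective = 0.0
    return name, solution, objective

def master(subsystems):
    """Master: farm subsystem optimisations out to workers, then coordinate."""
    with Pool(processes=4) as pool:
        results = pool.map(optimise_subsystem, subsystems)
    # Coordination step: a real environment would reconcile shared variables
    # between subsystems; here we simply collect the results.
    return {name: (sol, obj) for name, sol, obj in results}

if __name__ == "__main__":
    subsystems = [("truss_segment_%d" % i, [float(i), 2.0 * i]) for i in range(8)]
    print(master(subsystems))
```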
Abstract:
Systems biology is based on computational modelling and simulation of large networks of interacting components. Models may be intended to capture processes, mechanisms, components and interactions at different levels of fidelity. Input data are often large and geographically dispersed, and may require the computation to be moved to the data, not vice versa. In addition, complex system-level problems require collaboration across institutions and disciplines. Grid computing can offer robust, scalable solutions for distributed data, computation and expertise. We illustrate some of the range of computational and data requirements in systems biology with three case studies: one requiring large computation but small data (orthologue mapping in comparative genomics), a second involving complex terabyte-scale data (the Visible Cell project) and a third that is both computationally and data-intensive (simulations at multiple temporal and spatial scales). Authentication, authorisation and audit systems currently do not scale well and may present bottlenecks for distributed collaboration, particularly where outcomes may be commercialised. Challenges remain in providing lightweight standards to facilitate the penetration of robust, scalable grid-type computing into diverse user communities to meet the evolving demands of systems biology.