893 results for ARPANET (Computer network)


Relevance:

30.00%

Publisher:

Abstract:

The speciation and distribution of Gd(III) in human interstitial fluid were studied by computer simulation, and an artificial neural network was applied to estimate the log β values of the complexes. The results show that the precipitate species, GdPO4 and Gd2(CO3)3, are the predominant species. Among the soluble species, free Gd(III), [Gd(HSA)], [Gd(Ox)] and the ternary complexes of Gd(III) with citrate are the main species, and [Gd3(OH)4] becomes predominant at a total Gd(III) concentration of 2.2×10⁻² mol/L.

Relevance:

30.00%

Publisher:

Abstract:

Parallel shared-memory machines with hundreds or thousands of processor-memory nodes have been built; in the future we will see machines with millions or even billions of nodes. Associated with such large systems is a new set of design challenges. Many problems must be addressed by an architecture in order for it to be successful; of these, we focus on three in particular. First, a scalable memory system is required. Second, the network messaging protocol must be fault-tolerant. Third, the overheads of thread creation, thread management and synchronization must be extremely low. This thesis presents the complete system design for Hamal, a shared-memory architecture which addresses these concerns and is directly scalable to one million nodes. Virtual memory and distributed objects are implemented in a manner that requires neither inter-node synchronization nor the storage of globally coherent translations at each node. We develop a lightweight fault-tolerant messaging protocol that guarantees message delivery and idempotence across a discarding network. A number of hardware mechanisms provide efficient support for massive multithreading and fine-grained synchronization. Experiments are conducted in simulation, using a trace-driven network simulator to investigate the messaging protocol and a cycle-accurate simulator to evaluate the Hamal architecture. We determine implementation parameters for the messaging protocol which optimize performance. A discarding network is easier to design and can be clocked at a higher rate, and we find that with this protocol its performance can approach that of a non-discarding network. Our simulations of Hamal demonstrate the effectiveness of its thread management and synchronization primitives. In particular, we find register-based synchronization to be an extremely efficient mechanism which can be used to implement a software barrier with a latency of only 523 cycles on a 512 node machine.
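
The fault-tolerant messaging idea lends itself to a short illustration. The following is a minimal sketch, not Hamal's actual protocol: sequence numbers make retransmissions idempotent at the receiver, and the sender retransmits until acknowledged, which together guarantee delivery over a discarding network. The class names and drop model are illustrative assumptions.

```python
# A minimal sketch (not Hamal's actual protocol) of guaranteed delivery and
# idempotence over a network that may silently drop packets: the sender
# retransmits until acknowledged, and the receiver deduplicates by sequence
# number so retransmissions are applied only once.
import random

class Receiver:
    def __init__(self):
        self.delivered = {}            # seq -> payload, applied exactly once

    def on_packet(self, seq, payload):
        if seq not in self.delivered:  # idempotence: ignore duplicates
            self.delivered[seq] = payload
        return seq                     # ack (may itself be dropped)

class Sender:
    def __init__(self, receiver, drop_prob=0.3):
        self.receiver, self.drop_prob = receiver, drop_prob
        self.next_seq = 0

    def send(self, payload):
        seq = self.next_seq
        self.next_seq += 1
        while True:                                    # retransmit until acked
            if random.random() >= self.drop_prob:      # did the packet survive?
                self.receiver.on_packet(seq, payload)
                if random.random() >= self.drop_prob:  # did the ack survive?
                    return seq

rx = Receiver()
tx = Sender(rx)
for msg in ["a", "b", "c"]:
    tx.send(msg)
assert list(rx.delivered.values()) == ["a", "b", "c"]
```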

Relevance:

30.00%

Publisher:

Abstract:

The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction. The connections to a device are not labelled as inputs and outputs; a device will compute with whatever values are available, and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: it is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular for relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented. A progression of ever more powerful languages is described, complete implementations are presented and design difficulties and alternatives are discussed. The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
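
The local-propagation behavior described above is easy to make concrete. Below is a toy sketch (not code from the thesis) of a constraint network: wires carry values, and an adder device enforcing a + b = c computes in whichever direction data becomes available, with no wire designated as input or output.

```python
# A toy illustration of local propagation in a constraint network: wires
# hold values, and an adder device enforcing a + b = c fires whenever
# enough of its three wires become known.
class Wire:
    def __init__(self, name):
        self.name, self.value, self.devices = name, None, []

    def set(self, value):
        if self.value is None:
            self.value = value
            for d in self.devices:      # propagate to attached devices
                d.fire()

class Adder:
    """Enforces a + b = c; computes in whichever direction data allows."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for w in (a, b, c):
            w.devices.append(self)

    def fire(self):
        a, b, c = self.a.value, self.b.value, self.c.value
        if a is not None and b is not None and c is None:
            self.c.set(a + b)
        elif a is not None and c is not None and b is None:
            self.b.set(c - a)
        elif b is not None and c is not None and a is None:
            self.a.set(c - b)

x, y, z = Wire("x"), Wire("y"), Wire("z")
Adder(x, y, z)
z.set(10); x.set(3)       # no designated inputs: z and x happen to arrive first
print(y.value)            # -> 7, deduced locally by the adder
```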

Relevance:

30.00%

Publisher:

Abstract:

P-glycoprotein (P-gp), an ATP-binding cassette (ABC) transporter, functions as a biological barrier by extruding cytotoxic agents out of cells, presenting an obstacle to the chemotherapeutic treatment of cancer. To aid in the development of potential P-gp inhibitors, we constructed a quantitative structure-activity relationship (QSAR) model of flavonoids as P-gp inhibitors based on a Bayesian-regularized neural network (BRNN). A dataset of 57 flavonoids that bind to the C-terminal nucleotide-binding domain of mouse P-gp was compiled from the literature. The predictive ability of the model was assessed using a test set independent of the training set, which showed a standard error of prediction of 0.146 ± 0.006 (data scaled from 0 to 1). Two other mathematical tools, a back-propagation neural network (BPNN) and partial least squares (PLS), were also used to build QSAR models. The BRNN provided slightly better results on the test set than the BPNN, but the difference was not significant by F-test at p = 0.05. PLS failed to build a reliable model in the present study. Our study indicates that the BRNN-based in silico model has good potential for predicting P-gp flavonoid inhibitors and might be applied in further drug design.
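
As a rough illustration of this modeling workflow, the sketch below substitutes a fixed L2 weight penalty for true Bayesian regularization (which tunes the penalty from the data) and uses random numbers in place of flavonoid descriptors; the layer size, penalty, and train/test split are assumptions, not the paper's settings.

```python
# A rough sketch of the QSAR workflow (not the authors' BRNN): descriptors
# scaled to [0, 1], an L2-regularized feed-forward net as a crude stand-in
# for Bayesian regularization, and error measured on a held-out test set.
# The data here is random; real inputs would be flavonoid descriptors.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((57, 6))                    # 57 compounds, 6 descriptors
y = X @ rng.random(6) + 0.1 * rng.standard_normal(57)

X = MinMaxScaler().fit_transform(X)        # scale as in the paper (0 to 1)
y = MinMaxScaler().fit_transform(y.reshape(-1, 1)).ravel()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(4,), alpha=1e-2,  # alpha = L2 penalty
                     max_iter=5000, random_state=0).fit(X_tr, y_tr)
sep = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"standard error of prediction on test set: {sep:.3f}")
```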

Relevance:

30.00%

Publisher:

Abstract:

Neal, M. Meta-stable memory in an artificial immune network. In Proceedings of the 2nd International Conference on Artificial Immune Systems (ICARIS), LNCS 2787, pages 168-180. Springer, 2003.

Relevance:

30.00%

Publisher:

Abstract:

Timmis, J., Neal, M. J., and Hunt, J. Augmenting an artificial immune network using ordering, self-recognition and histo-compatibility operators. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pages 3821-3826, San Diego, 1998. IEEE.

Relevance:

30.00%

Publisher:

Abstract:

R. Daly and Q. Shen. Methods to accelerate the learning of Bayesian network structures. In Proceedings of the 2007 UK Workshop on Computational Intelligence.

Relevance:

30.00%

Publisher:

Abstract:

R. Daly, Q. Shen, and S. Aitken. Speeding up the learning of equivalence classes of Bayesian network structures. In Proceedings of the 10th International Conference on Artificial Intelligence and Soft Computing, pages 34-39.

Relevance:

30.00%

Publisher:

Abstract:

R. Daly, Q. Shen, and S. Aitken. Using ant colony optimisation in learning Bayesian network equivalence classes. In Proceedings of the 2006 UK Workshop on Computational Intelligence, pages 111-118.

Relevance:

30.00%

Publisher:

Abstract:

R. Daly and Q. Shen. A framework for the scoring of operators on the search space of equivalence classes of Bayesian network structures. In Proceedings of the 2005 UK Workshop on Computational Intelligence, pages 67-74.

Relevance:

30.00%

Publisher:

Abstract:

Numerous problems can be modeled as traffic through a network in which constraints regulate flow. Vehicular road travel, computer networks, and cloud-based resource distribution, among others, all have natural representations in this manner. As these networks grow in size and/or complexity, analysis and certification of their safety invariants become increasingly costly. The NetSketch formalism introduces a lightweight verification framework that scales better than traditional analysis methods. The NetSketch tool provides the power of this formalism in an easy-to-use, intuitive user interface.
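
The abstract does not spell out NetSketch's typing rules, so the following toy sketch only conveys the flavor of lightweight compositional checking: each component advertises an interval of flows it safely supports, and composing components intersects those intervals, flagging unsafe combinations without exploring the whole network.

```python
# A toy illustration in the spirit of NetSketch (the real formalism's rules
# are not reproduced here): each component advertises the interval of flow
# it safely supports, and composition is safe only where intervals overlap.
from typing import NamedTuple

class Component(NamedTuple):
    name: str
    lo: float   # minimum flow the component requires
    hi: float   # maximum flow the component tolerates

def compose(a: Component, b: Component) -> Component:
    """Chain a into b; the result is constrained by both intervals."""
    lo, hi = max(a.lo, b.lo), min(a.hi, b.hi)
    if lo > hi:
        raise ValueError(f"unsafe composition: {a.name} -> {b.name}")
    return Component(f"{a.name}->{b.name}", lo, hi)

road = Component("road", 0, 100)
ramp = Component("ramp", 20, 60)
print(compose(road, ramp))   # safe: flow must stay within [20, 60]
```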

Relevance:

30.00%

Publisher:

Abstract:

Parallel computing on a network of workstations can saturate the communication network, leading to excessive message delays and consequently poor application performance. We examine empirically the consequences of integrating a flow-control protocol, called Warp control [Par93], into Mermera, a software shared-memory system that supports parallel computing on distributed systems [HS93]. For an asynchronous iterative program that solves a system of linear equations, our measurements show that Warp succeeds in stabilizing the network's behavior even under high levels of contention. As a result, the application achieves a higher effective communication throughput and a reduced completion time. In some cases, however, Warp control does not achieve the performance attainable with fixed-size buffering using a statically optimal buffer size. Our use of Warp to regulate the allocation of network bandwidth suggests the possibility of integrating it with the allocation of other resources, such as CPU cycles and disk bandwidth, so as to optimize overall system throughput and enable fully shared execution of parallel programs.
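
Warp's internals are not described above, so the sketch below shows only the generic idea behind such flow control, in a credit-based form that is an assumption rather than Warp's mechanism: bounding the number of in-flight messages keeps the network away from saturation.

```python
# A generic credit-based flow-control sketch (not Warp itself): a sender may
# only inject messages when the receiver has granted it credit, which bounds
# in-flight traffic and keeps the network out of saturation.
from collections import deque

class CreditLink:
    def __init__(self, credits=4):
        self.credits = credits       # receiver-granted send budget
        self.queue = deque()         # messages waiting for credit
        self.in_flight = []

    def send(self, msg):
        self.queue.append(msg)
        self._pump()

    def _pump(self):
        while self.credits > 0 and self.queue:
            self.credits -= 1
            self.in_flight.append(self.queue.popleft())

    def deliver_one(self):
        """Receiver consumes a message and returns one credit."""
        msg = self.in_flight.pop(0)
        self.credits += 1
        self._pump()
        return msg

link = CreditLink(credits=2)
for i in range(5):
    link.send(i)
print(len(link.in_flight), len(link.queue))  # 2 in flight, 3 held back
```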

Relevance:

30.00%

Publisher:

Abstract:

Accurate knowledge of traffic demands in a communication network enables or enhances a variety of traffic engineering and network management tasks of paramount importance for operational networks. Directly measuring a complete set of these demands is prohibitively expensive because of the huge amounts of data that must be collected and the performance impact such measurements would impose on the regular behavior of the network. As a consequence, we must rely on statistical techniques to produce estimates of actual traffic demands from partial information. The performance of such techniques is limited, however, both by their reliance on partial information and by the heavy computation they incur, which restricts their convergence behavior. In this paper we study strategies to improve the convergence of a powerful statistical technique based on an Expectation-Maximization (EM) iterative algorithm. First, we analyze modeling approaches to generating starting points. We call these starting points informed priors, since they are obtained using actual network information such as packet traces and SNMP link counts. Second, we provide a very fast variant of the EM algorithm which extends its computation range, increasing its accuracy and decreasing its dependence on the quality of the starting point. Finally, we study the convergence characteristics of our EM algorithm and compare it against a recently proposed Weighted Least Squares approach.
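
The EM iteration at the heart of this class of estimators can be sketched compactly. The code below is a generic Poisson-model EM for inferring origin-destination demands from link counts (the paper's fast variant and informed priors are not reproduced); the routing matrix and demands are toy values.

```python
# A compact sketch of EM-based traffic-demand estimation: origin-destination
# demands x are inferred from link counts y = A @ x, where A is the 0/1
# routing matrix, using the multiplicative EM update for Poisson counts.
import numpy as np

def em_traffic_matrix(A, y, x0, iters=200):
    """A: links x OD-pairs routing matrix; y: observed link counts;
    x0: starting point (e.g. an informed prior)."""
    x = x0.astype(float).copy()
    col = A.sum(axis=0)                       # links traversed per OD pair
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, 1e-12)  # observed / expected per link
        x *= (A.T @ ratio) / np.maximum(col, 1e-12)
    return x

A = np.array([[1, 1, 0],      # link 1 carries OD pairs 1 and 2
              [0, 1, 1],      # link 2 carries OD pairs 2 and 3
              [1, 0, 1]])     # link 3 carries OD pairs 1 and 3
x_true = np.array([10.0, 5.0, 20.0])
y = A @ x_true
print(em_traffic_matrix(A, y, x0=np.ones(3)))  # converges toward x_true
```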

Relevance:

30.00%

Publisher:

Abstract:

The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. We ask a fundamental question: how much basic power does TCP have to predict network state, including wireless error conditions? The goal is to improve or readily exploit this predictive power to enable TCP (or variants) to perform well in generalized network settings. To that end, we use maximum likelihood ratio tests to evaluate TCP as a detector/estimator. We quantify how well network state can be estimated, given network responses such as distributions of packet delays or TCP throughput conditioned on the type of packet loss. Using our model-based approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient detector can be built; that distributions of network loads can provide effective means for estimating packet-loss type; and that packet delay is a better signal of network state than short-term throughput. We also demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses, and by penalties on incorrect estimation.
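
The detector idea reduces to a likelihood-ratio test. In the sketch below, the Gaussian delay models and their parameters are illustrative assumptions, not the paper's fitted distributions: delays observed near a loss are scored under each hypothesis, and the log-likelihood ratio decides the loss type.

```python
# A minimal sketch of the detection idea: given delay distributions
# conditioned on the cause of loss, a likelihood-ratio test classifies
# each loss as congestion-induced or wireless.
from scipy.stats import norm

# Assumed models: congestion losses coincide with high queueing delay;
# wireless losses occur at lower, less variable delay.
congestion = norm(loc=120.0, scale=30.0)   # delay in ms around a loss
wireless   = norm(loc=60.0,  scale=15.0)

def classify_loss(delay_ms, threshold=0.0):
    """Log-likelihood ratio test on the delay observed near a loss."""
    llr = congestion.logpdf(delay_ms) - wireless.logpdf(delay_ms)
    return "congestion" if llr > threshold else "wireless"

for d in (55.0, 90.0, 140.0):
    print(d, "->", classify_loss(d))
```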

Relevance:

30.00%

Publisher:

Abstract:

Formal correctness of complex multi-party network protocols can be difficult to verify. While models of specific fixed compositions of agents can be checked against design constraints, protocols that lend themselves to arbitrarily many compositions of agents, such as the chaining of proxies or the peering of routers, are more difficult to verify because they represent potentially infinite state spaces and may exhibit emergent behaviors that do not materialize under particular fixed compositions. We address this challenge by developing an algebraic approach that enables us to reduce arbitrary compositions of network agents into a behaviorally equivalent (with respect to some correctness property) compact, canonical representation that is amenable to mechanical verification. Our approach consists of an algebra and a set of property-preserving rewrite rules for the Canonical Homomorphic Abstraction of Infinite Network protocol compositions (CHAIN). Using CHAIN, an expression over our algebra (i.e., a set of configurations of network protocol agents) can be reduced to another behaviorally equivalent expression (i.e., a smaller set of configurations). Repeated application of such rewrite rules produces a canonical expression that can be checked mechanically. We demonstrate our approach by characterizing deadlock-prone configurations of HTTP agents, as well as by establishing useful properties of an overlay protocol for scheduling MPEG frames and of a protocol for Web intra-cache consistency.
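
The rewriting idea can be miniaturized. In the sketch below the single rule, "two adjacent proxies behave like one", is invented for illustration and is not one of CHAIN's property-preserving rules; it only shows how repeated rewriting collapses an arbitrarily long composition into a canonical form small enough for a model checker.

```python
# A toy sketch of the rewriting idea (the rule here is hypothetical;
# CHAIN's actual algebra is property-preserving and far richer):
# compositions of agents are expressions, and rewrite rules collapse
# them until a small canonical form remains for mechanical checking.
def rewrite(chain):
    """Apply 'two adjacent proxies behave like one' until a fixpoint."""
    changed = True
    while changed:
        changed = False
        for i in range(len(chain) - 1):
            if chain[i] == chain[i + 1] == "proxy":   # hypothetical rule
                chain = chain[:i] + chain[i + 1:]
                changed = True
                break
    return chain

# An arbitrarily long chain of HTTP proxies reduces to a canonical form
# with a single proxy, which can then be checked exhaustively.
print(rewrite(["client", "proxy", "proxy", "proxy", "server"]))
# -> ['client', 'proxy', 'server']
```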