985 results for parallel computation
Abstract:
It is an exciting era for molecular computation because molecular logic gates are being pushed in new directions. The use of sulfur rather than the commonplace nitrogen as the key receptor atom in metal ion sensors is one of these directions; plant cells coming within the jurisdiction of fluorescent molecular thermometers is another; combining photochromism with voltammetry for molecular electronics is yet another. Two-input logic gates benefit from old ideas such as rectifying bilayer electrodes, cyclodextrin-enhanced room-temperature phosphorescence, steric hindrance, the polymerase chain reaction, charge transfer absorption of donor–acceptor complexes and lectin–glycocluster interactions. Furthermore, the concept of photo-uncaging enables rational ways of concatenating logic gates. Computational concepts are also applied to potential cancer theranostics and to the selective monitoring of neurotransmitters in situ. Higher numbers of inputs are also accommodated with the concept of functional integration of gates, where complex input–output patterns are sought out and analysed. Molecular emulation of computational components such as demultiplexers and parity generators/checkers is achieved in related ways. Complexity of another order is tackled with molecular edge detection routines.
Abstract:
Motivated by the need for designing efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded-degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of which nodes join and leave and at what time, and that has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees, with high probability, the maintenance of a constant-degree graph with high expansion even under continuous high adversarial churn. Our protocol can tolerate a churn rate of up to $O(n/\mathrm{polylog}(n))$ per round (where $n$ is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only $O(\mathrm{polylog}(n))$ overhead for topology maintenance: only polylogarithmic (in $n$) bits need to be processed and sent by each node per round, and any node's computation cost per round is also polylogarithmic. The protocol is a fundamental ingredient for designing efficient fully-distributed algorithms for problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and it enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
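To make the notions of bounded degree, churn, and expansion concrete, the following is a minimal illustrative sketch only: a toy simulation of node churn on a random bounded-degree graph, with expansion tracked via the spectral gap of the normalized Laplacian. The repair rule, the parameters n, d, and the churn fraction are assumptions for illustration; this is not the paper's protocol and gives none of its guarantees.

import numpy as np
import networkx as nx

def spectral_gap(G):
    # Second-smallest eigenvalue of the normalized Laplacian: a standard
    # proxy for expansion (a larger gap indicates a better expander).
    lams = nx.normalized_laplacian_spectrum(G)
    return sorted(lams)[1]

n, d = 256, 8                      # stable network size and target degree (assumed)
G = nx.random_regular_graph(d, n)  # start from a d-regular random graph

rng = np.random.default_rng(0)
for round_ in range(20):
    # Churn stand-in: a fraction of nodes leaves and rejoins each round.
    churn = rng.choice(list(G.nodes), size=n // 16, replace=False)
    G.remove_nodes_from(churn)
    # Rejoining nodes attach to d random existing nodes: a naive repair rule,
    # shown only to make the expansion metric concrete.
    for v in churn:
        targets = rng.choice(list(G.nodes), size=d, replace=False)
        G.add_edges_from((v, int(t)) for t in targets)
    print(round_, round(spectral_gap(G), 3))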
Abstract:
Molecular logic-based computation continues to throw up new applications in sensing and switching, the newest of which is the edge detection of objects. The scope of this phenomenon is mapped out by the use of structure–activity relationships, where several structures of the molecules and of the objects are examined. The different angles and curvatures of the objects are followed with good fidelity in the visualized edges, even when the objects are in reverse video.
Abstract:
As a newly invented parallel kinematic machine (PKM), Exechon has attracted intensive attention from both academic and industrial fields due to its conceptually high performance. Nevertheless, the dynamic behaviors of the Exechon PKM have not been thoroughly investigated because of its structural and kinematic complexities. To identify the dynamic characteristics of the Exechon PKM, an elastodynamic model is proposed with the substructure synthesis technique in this paper. The Exechon PKM is divided into a moving platform subsystem, a fixed base subsystem, and three limb subsystems according to its structural features. Differential equations of motion for the limb subsystem are derived through finite element (FE) formulations by modeling the complex limb structure as a spatial beam with corresponding geometric cross sections. Meanwhile, revolute, universal, and spherical joints are simplified into virtual lumped springs with equivalent stiffnesses and masses at their geometric centers. Differential equations of motion for the moving platform are derived with Newton's second law after treating the platform as a rigid body due to its comparatively high rigidity. After introducing the deformation compatibility conditions between the platform and the limbs, the governing differential equations of motion for the Exechon PKM are derived. The solution of the characteristic equations yields the natural frequencies and corresponding modal shapes of the PKM at any typical configuration. To predict the dynamic behaviors quickly, an algorithm is proposed to numerically compute the distributions of natural frequencies throughout the workspace. Simulation results reveal that the lower natural frequencies are strongly position-dependent and distributed axisymmetrically due to the structural symmetry of the limbs. Finally, a parametric analysis is carried out to identify the effects of structural, dimensional, and stiffness parameters on the system's dynamic characteristics, with the purpose of providing useful information for the optimal design and performance improvement of the Exechon PKM. The elastodynamic modeling methodology and dynamic analysis procedure can be readily extended to other overconstrained PKMs with minor modifications.
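For readers unfamiliar with the final step, the generic form of the undamped free-vibration eigenproblem that such an assembly leads to is sketched below. This is the standard elastodynamic formulation under the usual assumptions (assembled configuration-dependent mass and stiffness matrices), not the paper's exact matrices:
\[
M\,\ddot{q}(t) + K\,q(t) = 0,
\qquad
\det\!\left(K - \omega_i^{2}\,M\right) = 0,
\qquad
f_i = \frac{\omega_i}{2\pi},
\]
where $M$ and $K$ are the assembled mass and stiffness matrices at a given configuration, the roots $\omega_i$ of the characteristic equation are the natural angular frequencies ($f_i$ in Hz), and the associated eigenvectors give the modal shapes.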
Abstract:
In Boolean games, agents try to reach a goal formulated as a Boolean formula. These games are attractive because of their compact representations. However, few methods are available to compute the solutions and they are either limited or do not take privacy or communication concerns into account. In this paper we propose the use of an algorithm related to reinforcement learning to address this problem. Our method is decentralized in the sense that agents try to achieve their goals without knowledge of the other agents’ goals. We prove that this is a sound method to compute a Pareto optimal pure Nash equilibrium for an interesting class of Boolean games. Experimental results are used to investigate the performance of the algorithm.
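The following toy sketch illustrates the general flavor of decentralized learning in a Boolean game; the game, the epsilon-greedy stateless Q-update, and all parameters are assumptions for illustration, not the algorithm studied in the paper. Agent 0 controls x and agent 1 controls y; both goals are satisfied exactly when x == y, so the pure Nash equilibria are (0,0) and (1,1), and each agent learns from its private payoff only, without seeing the other agent's goal.

import random

def goal(agent, x, y):
    # Each agent's goal formula; here both want agreement (x <-> y).
    return int(x == y)

q = [[0.0, 0.0], [0.0, 0.0]]   # q[agent][action], action in {0, 1}
eps, alpha = 0.1, 0.2          # exploration rate and learning rate (assumed)

random.seed(1)
for _ in range(2000):
    acts = []
    for a in (0, 1):
        if random.random() < eps:
            acts.append(random.randint(0, 1))        # explore
        else:
            acts.append(int(q[a][1] > q[a][0]))      # exploit current estimate
    x, y = acts
    for a in (0, 1):
        r = goal(a, x, y)                            # private payoff only
        q[a][acts[a]] += alpha * (r - q[a][acts[a]]) # stateless Q-update

profile = [int(q[a][1] > q[a][0]) for a in (0, 1)]
print("learned joint action:", profile)              # typically [0, 0] or [1, 1]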
Abstract:
Routine molecular diagnostic modalities are unable to confidently detect low-frequency mutations (<5–15%) that may indicate response to targeted therapies. We confirm the presence of a low-frequency NRAS mutation in a rectal cancer patient using massively parallel sequencing, where previous Sanger sequencing results proved negative and Q-PCR testing was inconclusive. There is increasing evidence that these low-frequency mutations may confer resistance to anti-EGFR therapy. In view of negative/inconclusive Sanger sequencing and Q-PCR results for NRAS mutations in a KRAS wild-type rectal case, the diagnostic biopsy and four distinct subpopulations of cells in the resection specimen after conventional chemo/radiotherapy were sequenced by massively parallel sequencing on the Ion Torrent PGM. DNA was derived from FFPE rectal cancer tissue, and amplicons were produced using the Cancer Hotspot Panel V2 and sequenced using semiconductor technology. NRAS mutations were observed at varying frequencies in the patient biopsy (12.2%) and in all four subpopulations of cells in the resection, with an average frequency of 7.3% (lowest 2.6%). The results of the NGS also provided the mutational status of 49 other genes that may have prognostic or predictive value, including KRAS and PIK3CA. NGS technology has been proposed for diagnostics because of its capability to generate results for large panels of clinically meaningful genes in a cost-effective manner. This case illustrates another potential advantage of this technology: its use for detecting low-frequency mutations that may influence therapeutic decisions in cancer treatment.
Abstract:
Approximate execution is a viable technique for environments with energy constraints, provided that applications are given the mechanisms to produce outputs of the highest possible quality within the available energy budget. This paper introduces a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows developers to structure the computation in different tasks, and to express the relative importance of these tasks for the quality of the end result. For non-significant tasks, the developer can also supply less costly, approximate versions. The target energy consumption for a given execution is specified when the application is launched. A significance-aware runtime system employs an application-specific analytical energy model to decide how many cores to use for the execution, the operating frequency for these cores, as well as the degree of task approximation, so as to maximize the quality of the output while meeting the user-specified energy constraints. Evaluation on a dual-socket 16-core Intel platform using 9 benchmark kernels shows that the proposed framework picks the optimal configuration with high accuracy. Also, a comparison with loop perforation (a well-known compile-time approximation technique) shows that the proposed framework results in significantly higher quality for the same energy budget.
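As a minimal sketch of what a significance-aware task model might look like, consider the toy Python stand-in below. The task names, the fixed per-task energy costs, and the greedy selection rule are assumptions invented for illustration; they are not the framework's actual API or runtime policy (which also chooses core counts and frequencies from an analytical energy model).

def accurate_blur(img):
    # "Accurate" (significant) version: a simple moving average standing in
    # for a costly kernel.
    out = []
    for i in range(len(img)):
        window = img[max(i - 1, 0):i + 2]
        out.append(sum(window) / len(window))
    return out

def approx_blur(img):
    # Cheaper approximate version supplied by the developer: no smoothing.
    return img[:]

TASKS = [
    # (significance, accurate_fn, approximate_fn, assumed energy cost of accurate run)
    (1.0, accurate_blur, None,        5.0),   # significant: must run accurately
    (0.3, accurate_blur, approx_blur, 5.0),   # non-significant: may be approximated
]

def run(tasks, energy_budget, approx_cost=1.0):
    # Greedy runtime stand-in: run accurate versions of the most significant
    # tasks first; fall back to the approximate version when the budget runs out.
    out = []
    for sig, acc, apx, cost in sorted(tasks, key=lambda t: -t[0]):
        data = list(range(8))                          # placeholder input
        if cost <= energy_budget or apx is None:
            out.append(acc(data)); energy_budget -= cost
        else:
            out.append(apx(data)); energy_budget -= approx_cost
    return out

results = run(TASKS, energy_budget=7.0)   # second task falls back to approx_blur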
Abstract:
This case study deals with the role of time series analysis in sociology, and its relationship with the wider literature and methodology of comparative case study research. Time series analysis is now well-represented in top-ranked sociology journals, often in the form of ‘pooled time series’ research designs. These studies typically pool multiple countries together into a pooled time series cross-section panel, in order to provide a larger sample for more robust and comprehensive analysis. This approach is well suited to exploring trans-national phenomena, and for elaborating useful macro-level theories specific to social structures, national policies, and long-term historical processes. It is less suited, however, to understanding how these global social processes work in different countries. As such, the complexities of individual countries, which often display dynamics very different from or even contradictory to those suggested in pooled studies, are subsumed. Meanwhile, a robust literature on comparative case-based methods exists in the social sciences, where researchers focus on differences between cases, and the complex ways in which they co-evolve or diverge over time. A good example of this is the inequality literature, where, although panel studies suggest a general trend of rising inequality driven by the weakening power of labour, marketisation of welfare, and the rising power of capital, some countries have still managed to remain resilient. This case study takes a closer look at what can be learned by applying the insights of case-based comparative research to the method of time series analysis. Taking international income inequality as its point of departure, it argues that we have much to learn about the viability of different combinations of policy options by examining how they work in different countries over time. By taking representative cases from different welfare systems (liberal, social democratic, corporatist, or antipodean), we can better sharpen our theories of how policies can be more specifically engineered to offset rising inequality. This involves a fundamental realignment of the strategy of time series analysis, grounding it instead in a qualitative appreciation of the historical context of cases, as a basis for comparing effects between different countries.
Abstract:
In this work we report both the calculation of atomic collision data for the electron-impact excitation of Ni II using parallel R-matrix codes and the computation of atomic transition data using the general atomic structure package CIV3.
Abstract:
The goal of this contribution is to discuss local computation in credal networks — graphical models that can represent imprecise and indeterminate probability values. We analyze the inference problem in credal networks, discuss how inference algorithms can benefit from local computation, and suggest that local computation can be particularly important in approximate inference algorithms.
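As a deliberately tiny illustration of inference with imprecise probabilities (not the paper's algorithms or networks): with an interval-valued prior on a binary root node A and precise conditionals for a child B, the lower and upper probabilities of B are obtained by optimizing over the credal set, which for a single interval and a linear objective means checking its endpoints. The numbers below are assumptions chosen for the example.

p_a_interval = (0.3, 0.5)            # assumed credal set for P(A=1)
p_b_given_a = {1: 0.9, 0: 0.2}       # assumed precise conditionals P(B=1 | A)

def p_b(p_a):
    # Total probability: P(B=1) = P(B=1|A=1)*P(A=1) + P(B=1|A=0)*(1 - P(A=1))
    return p_b_given_a[1] * p_a + p_b_given_a[0] * (1 - p_a)

# P(B=1) is linear in P(A=1), so its extrema over the credal set lie at the
# interval endpoints.
candidates = [p_b(p) for p in p_a_interval]
lower, upper = min(candidates), max(candidates)
print(f"P(B=1) in [{lower:.2f}, {upper:.2f}]")   # -> [0.41, 0.55]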