990 results for Significance driven computation
Abstract:
Routing is a very important step in VLSI physical design. In multi-net global routing, a set of nets is routed under delay and resource constraints. In this paper a delay-driven, congestion-aware global routing algorithm is developed: a heuristic method for a multi-objective NP-hard optimization problem. The proposed delay-driven Steiner tree construction method has O(n² log n) complexity, where n is the number of terminal points, and it provides an n-approximation solution of the critical-time minimization problem for a certain class of grid graphs. The existing timing-driven method (Hu and Sapatnekar, 2002) has complexity O(n⁴) and is implemented only on nets with a small number of sinks. Next we propose an FPTAS gradient algorithm for minimizing the total overflow. This is a concurrent approach that considers all the nets simultaneously, in contrast to the existing sequential rip-up-and-reroute approaches. The algorithms are implemented on ISPD98-derived benchmarks and a drastic reduction of overflow is observed. (C) 2014 Elsevier Inc. All rights reserved.
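The abstract does not reproduce the delay-driven Steiner construction itself. As a rough illustration of the underlying problem, the sketch below builds a rectilinear minimum spanning tree over a net's terminal points with Prim's algorithm, a classical starting point for Steiner-tree heuristics (the rectilinear MST is within 3/2 of the optimal rectilinear Steiner tree by Hwang's bound); the point set is hypothetical and this is not the paper's algorithm.

```python
def rectilinear_mst(points):
    """Prim's algorithm under Manhattan distance; O(n^2) for n terminals."""
    dist = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
    # best[j] = (cheapest known cost to connect terminal j to the tree,
    #            index of the tree terminal realizing that cost)
    best = {j: (dist(points[0], points[j]), 0) for j in range(1, len(points))}
    edges = []
    while best:
        j = min(best, key=lambda j: best[j][0])   # cheapest fringe terminal
        d, parent = best.pop(j)
        edges.append((parent, j, d))              # attach terminal j
        for k in best:                            # relax remaining terminals
            dk = dist(points[j], points[k])
            if dk < best[k][0]:
                best[k] = (dk, j)
    return edges

# four hypothetical sink/terminal locations on the routing grid
tree = rectilinear_mst([(0, 0), (2, 0), (0, 2), (2, 2)])
wirelength = sum(d for _, _, d in tree)
```

A Steiner heuristic would then refine this tree by introducing extra (Steiner) points where edges can share wire; delay-driven variants additionally bias edge choices toward short source-to-critical-sink paths.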
Abstract:
We investigated the nature of the cohesive energy between graphane sheets via multiple CH···HC interactions, using density functional theory (DFT) computations including dispersion correction (Grimme's D3 approach) of [n]graphane σ dimers (n = 6–73). For comparison, we also evaluated the binding between graphene sheets, which display prototypical π/π interactions. The results were analyzed using the block-localized wave function (BLW) method, a variant of ab initio valence bond (VB) theory. BLW interprets the intermolecular interactions in terms of a frozen interaction energy (ΔE_F) composed of electrostatic and Pauli repulsion interactions, polarization (ΔE_pol), charge-transfer interaction (ΔE_CT), and dispersion effects (ΔE_disp). The BLW analysis reveals that the cohesive energy between graphane sheets is dominated by two stabilizing effects, namely intermolecular London dispersion and two-way charge-transfer energy due to the σ(CH) → σ*(HC) interactions. The shift of the electron density around the nonpolar covalent C–H bonds involved in the intermolecular interaction decreases the C–H bond lengths uniformly by 0.001 Å. The ΔE_CT term, which accounts for ~15% of the total binding energy, results in the accumulation of electron density in the interface area between the two layers. This accumulated electron density thus acts as an electronic glue for the graphane layers and constitutes an important driving force in the self-association and stability of graphane under ambient conditions. Similarly, this "double-faced adhesive tape" style of charge-transfer interaction was also observed among graphene sheets, where it accounts for ~18% of the total binding energy. The binding energy between graphane sheets is additive and can be expressed as a sum of CH···HC interactions, or as a function of the number of C–H bonds.
Abstract:
Coordination-driven self-assembly of the dinuclear half-sandwich p-cymene ruthenium(II) complexes [Ru2(μ-η4-C2O4)(CH3OH)2(η6-p-cymene)2](O3SCF3)2 (1a) and [Ru2(μ-η4-C6H2O4)(CH3OH)2(η6-p-cymene)2](O3SCF3)2 (1b) separately with the imidazole-based tritopic donors L1–L2 in methanol yielded a series of hexanuclear [3+2] trigonal prismatic cages (2–5) [L1 = 1,3,5-tris(imidazol-1-yl)benzene; L2 = 4,4′,4″-tris(imidazol-1-yl)triphenylamine]. All the self-assembled cages 2–5 were characterized by various spectroscopic techniques (multinuclear NMR, infrared and ESI-MS), and their sizes and shapes were obtained through geometry optimization using molecular mechanics universal force field (MMUFF) computation. Although free rotation of the donor sites of the imidazole ligands could in principle give two different atropisomeric prismatic cages (C3h or Cs) as well as polymeric products, the self-selection of the single C3h conformational isomer as the only product is a noteworthy observation. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
In this paper, a pressure-correction algorithm for computing incompressible flows is modified and implemented on unstructured Chimera grids. The Schwarz method is used to couple the solutions of different sub-domains. A new interpolation is proposed to ensure consistency between primary variables and auxiliary variables. Other important issues, such as global mass conservation and the order of accuracy of the interpolations, are also discussed. Two numerical simulations are performed successfully: one steady case, the lid-driven cavity, and one unsteady case, the flow around a circular cylinder. The results demonstrate very good performance of the proposed scheme on unstructured Chimera grids. It prevents decoupling of the pressure field in the overlapping region and requires only little modification to an existing unstructured Navier–Stokes (NS) solver. The numerical experiments show the reliability and potential of this method for practical problems.
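The sub-domain coupling named above can be illustrated on a toy problem. The sketch below applies alternating Schwarz iteration to a 1D Poisson equation u'' = -2 on [0, 1] with two overlapping sub-domains, each solved directly with the Thomas algorithm; the grid, overlap width and iteration count are illustrative choices, not the paper's Navier–Stokes setting.

```python
import numpy as np

def solve_dirichlet(f, left, right, h):
    """Solve u'' = f on a sub-interval with Dirichlet BCs (Thomas algorithm)."""
    m = len(f)                          # number of interior points
    # tridiagonal system: u[i-1] - 2 u[i] + u[i+1] = h^2 f[i]
    b = -2.0 * np.ones(m)               # diagonal (modified in-place)
    c = np.ones(m - 1)                  # super-diagonal
    d = h * h * f.copy()
    d[0] -= left                        # fold boundary values into the RHS
    d[-1] -= right
    for i in range(1, m):               # forward elimination (sub-diag = 1)
        w = 1.0 / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = np.empty(m)
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):      # back substitution
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

n = 41
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = -2.0 * np.ones(n)                   # u'' = -2  ->  exact u = x (1 - x)
u = np.zeros(n)                         # zero Dirichlet BCs, zero initial guess

lo1, hi1 = 0, 24                        # sub-domain 1: x in [0, 0.6]
lo2, hi2 = 16, 40                       # sub-domain 2: x in [0.4, 1]

for _ in range(100):                    # alternating Schwarz sweeps
    # solve sub-domain 1 using the current iterate as interface value
    u[lo1 + 1:hi1] = solve_dirichlet(f[lo1 + 1:hi1], u[lo1], u[hi1], h)
    # solve sub-domain 2 using the freshly updated interface value
    u[lo2 + 1:hi2] = solve_dirichlet(f[lo2 + 1:hi2], u[lo2], u[hi2], h)

err = np.max(np.abs(u - x * (1.0 - x)))
```

The overlap region ([0.4, 0.6] here) is what drives convergence: the wider the overlap, the faster boundary information propagates between sub-domains, which is the same mechanism the paper relies on for its Chimera-grid interpolation.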
Abstract:
An electrically driven single-photon source based on a single InAs quantum dot (QD) is demonstrated. The device contains InAs QDs within a planar cavity formed between a bottom AlGaAs/GaAs distributed Bragg reflector (DBR) and a surface GaAs–air interface. The device is characterized by its I–V curve and electroluminescence, and a single sharp exciton emission line at 966 nm is observed. Hanbury Brown and Twiss (HBT) correlation measurements demonstrate single-photon emission, with multiphoton emission suppressed to below 45% at 80 K.
Abstract:
The Message-Driven Processor (MDP) is a node of a large-scale multiprocessor being developed by the Concurrent VLSI Architecture Group. It is intended to support fine-grained, message-passing parallel computation. It contains several novel architectural features, such as a low-latency network interface, extensive type-checking hardware, and on-chip memory that can be used as an associative lookup table. This document is a programmer's guide to the MDP. It describes the processor's register architecture, instruction set, and the data types supported by the processor. It also details the MDP's message-sending and exception-handling facilities.
Abstract:
In the last decade, data mining has emerged as one of the most dynamic and lively areas in information technology. Although many algorithms and techniques for data mining have been proposed, they focus either on domain-independent techniques or on very specific domain problems. A general requirement in bridging the gap between academia and business is to cater to the general domain-related issues surrounding real-life applications, such as constraints, organizational factors, domain expert knowledge, domain adaptation, and operational knowledge. Unfortunately, these either have not been addressed, or have not been addressed sufficiently, in current data mining research and development.

Domain-Driven Data Mining (D3M) aims to develop general principles, methodologies, and techniques for modeling and merging comprehensive domain-related factors and synthesized ubiquitous intelligence surrounding problem domains with the data mining process, and for discovering knowledge to support business decision-making. This paper aims to report original, cutting-edge, state-of-the-art progress in D3M. It covers theoretical and applied contributions aiming to: 1) propose next-generation data mining frameworks and processes for actionable knowledge discovery; 2) investigate effective (automated, human- and machine-centered, and/or human-machine-cooperated) principles and approaches for acquiring, representing, modeling, and engaging ubiquitous intelligence in real-world data mining; and 3) develop workable, operational systems that balance technical significance and application concerns, and that convert and deliver actionable knowledge into operational application rules to seamlessly engage application processes and systems.
Abstract:
We study the dissipative dynamics of two independent arrays of many-body systems, locally driven by a common entangled field. We show that in the steady state the entanglement of the driving field is reproduced in an arbitrarily large series of inter-array entangled pairs over all distances. Local nonclassical driving thus realizes a scale-free entanglement replication and long-distance entanglement distribution mechanism that has immediate bearing on the implementation of quantum communication networks.
Abstract:
Approximate execution is a viable technique for environments with energy constraints, provided that applications are given the mechanisms to produce outputs of the highest possible quality within the available energy budget. This paper introduces a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows developers to structure the computation in different tasks, and to express the relative importance of these tasks for the quality of the end result. For non-significant tasks, the developer can also supply less costly, approximate versions. The target energy consumption for a given execution is specified when the application is launched. A significance-aware runtime system employs an application-specific analytical energy model to decide how many cores to use for the execution, the operating frequency for these cores, as well as the degree of task approximation, so as to maximize the quality of the output while meeting the user-specified energy constraints. Evaluation on a dual-socket 16-core Intel platform using 9 benchmark kernels shows that the proposed framework picks the optimal configuration with high accuracy. Also, a comparison with loop perforation (a well-known compile-time approximation technique) shows that the proposed framework delivers significantly higher quality for the same energy budget.
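The task model described above can be sketched in a few lines. The code below is a hypothetical, greatly simplified stand-in for the paper's runtime: the Task fields, the greedy budget policy, and the example task set are all illustrative assumptions (the real system also chooses core counts and frequencies, which are omitted here).

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Task:
    run: Callable[[], float]               # accurate version
    approx: Optional[Callable[[], float]]  # cheaper, lower-quality version
    significance: float                    # contribution to output quality
    cost: float                            # modeled energy of the accurate run
    approx_cost: float                     # modeled energy of the approximation

def execute(tasks, budget):
    """Greedy sketch: spend the energy budget on accurate versions of the
    most significant tasks first, approximate the next ones, and drop
    whatever the remaining budget cannot cover (graceful quality loss)."""
    results, remaining = [], budget
    for t in sorted(tasks, key=lambda t: t.significance, reverse=True):
        if remaining >= t.cost:
            results.append(t.run())
            remaining -= t.cost
        elif t.approx is not None and remaining >= t.approx_cost:
            results.append(t.approx())
            remaining -= t.approx_cost
        # else: task elided entirely
    return results

# Partial sums of a series: later terms matter less for the result, so they
# are marked less significant and may be approximated or dropped.
tasks = [Task(run=lambda i=i: 1.0 / i,
              approx=lambda i=i: round(1.0 / i, 1),
              significance=1.0 / i, cost=2.0, approx_cost=1.0)
         for i in range(1, 5)]
tight = execute(tasks, budget=5.0)    # two accurate terms, one approximate
full = execute(tasks, budget=100.0)   # budget covers all accurate versions
```

Under the tight budget the least significant task is dropped and the next one runs in its rounded approximate form, which mirrors the framework's trade of quality for energy.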
Abstract:
In order to protect user privacy on mobile devices, an event-driven implicit authentication scheme is proposed in this paper. Several methods of utilizing the scheme for recognizing legitimate user behavior are investigated. The investigated methods compute an aggregate score and a threshold in real time to determine the trust level of the current user, using real data derived from user interaction with the device. The proposed scheme is designed to operate completely in the background, require a minimal training period, enable a high user recognition rate for implicit authentication, and promptly detect abnormal activity that can be used to trigger explicitly authenticated access control. In this paper, we investigate threshold computation through standard-deviation and EWMA (exponentially weighted moving average) based algorithms. The results of extensive experiments on user data collected over a period of several weeks from an Android phone indicate that our proposed approach is feasible and effective for lightweight real-time implicit authentication on mobile smartphones.
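As a concrete illustration of EWMA-based threshold computation, the sketch below maintains an exponentially weighted mean and variance of a behaviour score and flags observations outside a k-sigma band; the smoothing factor, band width, warm-up length and score values are illustrative choices, not the parameters of the paper's scheme.

```python
class EwmaThreshold:
    def __init__(self, alpha=0.2, k=3.0, warmup=5):
        self.alpha = alpha    # weight of the newest observation
        self.k = k            # width of the tolerance band in std-devs
        self.warmup = warmup  # initial observations accepted for training
        self.n = 0
        self.mean = 0.0       # EWMA of the score
        self.var = 0.0        # EWMA of the squared deviation

    def update(self, score):
        """Feed one behaviour score; return True if it is within the
        current trust threshold (i.e. looks like the legitimate user)."""
        self.n += 1
        if self.n == 1:       # bootstrap from the first observation
            self.mean = score
            return True
        dev = score - self.mean
        ok = self.n <= self.warmup or abs(dev) <= self.k * (self.var ** 0.5)
        # standard incremental EWMA mean/variance recurrences
        self.mean += self.alpha * dev
        self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return ok

detector = EwmaThreshold()
history = [detector.update(s) for s in [10, 11, 10, 9, 10, 11, 10]]
anomaly = detector.update(100)   # a score far outside the learned band
```

Scores consistent with the learned behaviour keep the trust level high, while the outlier is rejected, which is the kind of event that would trigger explicit re-authentication in the proposed scheme.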
Abstract:
Without human beings and human activities, hazards can strike but disasters cannot occur; disasters are not just natural phenomena but social events (Van Der Zon, 2005). The rapid demand for reconstruction after disastrous events can mean that the impacts of projects are not carefully considered from the outset and that the opportunity to improve long-term physical and social community structures is neglected. The events that struck Banda Aceh in 2004 have been described as a story of 'two tsunamis', the first being the natural hazard that struck and the second being the destruction of social structures that occurred as a result of an unplanned, unregulated and uncoordinated response (Syukrizal et al., 2009). Measures must be in place to ensure that, while reconstruction needs are met as rapidly as possible, the risk of recurring disaster impacts is reduced through both the physical structures and the capacity of the community who inhabit them. The paper explores issues facing reconstruction in a post-disaster scenario, drawing on the connections between physical and social reconstruction in order to address long-term recovery solutions. It draws on a study of relevant literature and a six-week pilot study in Haiti exploring the progress of recovery in the Haitian capital and the limitations still restricting reconstruction efforts. The study highlights the need for recovery-management strategies that recognise the link between social and physical reconstruction, and the significance of community-based initiatives that see local residents driving recovery in terms of debris handling and rebuilding. It demonstrates how a community-driven approach to physical reconstruction could also address the social impacts of events that, in places such as Haiti, are still dramatically restricting recovery efforts.
Abstract:
Performance modelling is a useful tool in the lifecycle of high-performance scientific software, such as weather and climate models, especially as a means of ensuring efficient use of available computing resources. In particular, sufficiently accurate performance prediction could reduce the effort and experimental computer time required when porting and optimising a climate model to a new machine. In this paper, traditional techniques are used to predict the computation time of a simple shallow-water model that is illustrative of the computation (and communication) involved in climate models. These models are compared with real execution data gathered on AMD Opteron-based systems, including several phases of the U.K. academic community HPC resource, HECToR. Some success is achieved in relating source code to achieved performance for the K10 series of Opterons, but the method is found to be inadequate for the next-generation Interlagos processor. This experience leads to the investigation of a data-driven application-benchmarking approach to performance modelling. Results for an early version of the approach are presented, using the shallow-water model as an example.
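The core idea of data-driven performance modelling can be sketched simply: fit a cost model to timing samples from benchmark runs, then extrapolate to unmeasured problem sizes. The model form t(n) = a + b·n and the sample data below are illustrative assumptions, not measurements from HECToR or the shallow-water model.

```python
import numpy as np

# (problem size, measured seconds) pairs from hypothetical benchmark runs
samples = [(100, 0.21), (200, 0.39), (400, 0.82), (800, 1.58)]
n = np.array([s[0] for s in samples], dtype=float)
t = np.array([s[1] for s in samples], dtype=float)

# least-squares fit of t = a + b*n: column of ones for the intercept,
# column of sizes for the per-element cost
A = np.vstack([np.ones_like(n), n]).T
(a, b), *_ = np.linalg.lstsq(A, t, rcond=None)

def predict(size):
    """Predicted runtime in seconds for an unmeasured problem size."""
    return a + b * size
```

A real application-benchmarking model would use richer features (cache behaviour, communication volume, per-kernel counters) rather than a single size parameter, but the fit-then-extrapolate workflow is the same.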