821 results for Reservoir computing


Relevance: 20.00%

Abstract:

The scheduling problem in distributed data-intensive computing environments has become an active research topic due to the tremendous growth in grid and cloud computing environments. As an innovative distributed intelligent paradigm, swarm intelligence provides a novel approach to solving these potentially intractable problems. In this paper, we formulate the scheduling problem for work-flow applications with security constraints in distributed data-intensive computing environments and present a novel security constraint model. Several meta-heuristic adaptations to the particle swarm optimization algorithm are introduced to deal with the formulation of efficient schedules. A variable neighborhood particle swarm optimization algorithm is compared with a multi-start particle swarm optimization and a multi-start genetic algorithm. Experimental results illustrate that population-based meta-heuristic approaches usually provide a good balance between global exploration and local exploitation, and demonstrate their feasibility and effectiveness for scheduling work-flow applications. © 2010 Elsevier Inc. All rights reserved.
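The core technique in the abstract above, particle swarm optimization applied to task-to-node assignment, can be sketched in a few lines. This is a minimal illustration with an assumed cost model (a `costs[t][n]` matrix and makespan objective); the paper's actual security-constraint formulation and variable-neighborhood variant are not reproduced here.

```python
import random

def pso_schedule(costs, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO for assigning tasks to nodes (illustrative sketch).
    costs[t][n] = cost of running task t on node n; a particle holds one
    continuous coordinate per task, decoded into a node index by rounding."""
    n_tasks, n_nodes = len(costs), len(costs[0])

    def decode(pos):
        return [int(round(x)) % n_nodes for x in pos]

    def makespan(assign):
        load = [0.0] * n_nodes
        for t, n in enumerate(assign):
            load[n] += costs[t][n]
        return max(load)  # schedule length = load of the busiest node

    swarm = [[random.uniform(0, n_nodes - 1) for _ in range(n_tasks)]
             for _ in range(n_particles)]
    vel = [[0.0] * n_tasks for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    pbest_val = [makespan(decode(p)) for p in swarm]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(n_tasks):
                r1, r2 = random.random(), random.random()
                # standard velocity update: inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - p[d])
                             + c2 * r2 * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            val = makespan(decode(p))
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = p[:], val
                if val < gbest_val:
                    gbest, gbest_val = p[:], val
    return decode(gbest), gbest_val
```

For example, `pso_schedule([[3, 5], [2, 4], [6, 1]])` searches assignments of three tasks to two nodes; the global exploration (social term) and local exploitation (cognitive term) trade-off mentioned in the abstract corresponds to the `c2` and `c1` weights.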

Relevance: 20.00%

Abstract:

We study the entanglement of two impurity qubits immersed in a Bose-Einstein condensate (BEC) reservoir. This open-quantum-system model allows for interpolation between a common dephasing scenario and an independent dephasing scenario by modifying the wavelength of the superlattice superposed on the BEC, and we examine how this influences the dynamical properties of the impurities. We demonstrate the existence of rich dynamics corresponding to different values of the reservoir parameters, including phenomena such as entanglement trapping, revivals of entanglement, and entanglement generation. In the spirit of reservoir engineering, we present the optimal BEC parameters for entanglement generation and trapping, showing the key role of the ultracold-gas interactions. Copyright (C) EPLA, 2013

Relevance: 20.00%

Abstract:

The Ziegler Reservoir fossil site near Snowmass Village, Colorado, provides a unique opportunity to reconstruct high-altitude paleoenvironmental conditions in the Rocky Mountains during the last interglacial period. We used four different techniques to establish a chronological framework for the site. Radiocarbon dating of lake organics, bone collagen, and shell carbonate, and in situ cosmogenic 10Be and 26Al ages on a boulder on the crest of a moraine that impounded the lake suggest that the ages of the sediments that hosted the fossils are between ~140 ka and >45 ka. Uranium-series ages of vertebrate remains generally fall within these bounds, but extremely low uranium concentrations and evidence of open-system behavior limit their utility. Optically stimulated luminescence (OSL) ages (n = 18) obtained from fine-grained quartz maintain stratigraphic order, were replicable, and provide reliable ages for the lake sediments. Analysis of the equivalent dose (De) dispersion of the OSL samples showed that the sediments were fully bleached prior to deposition, and low scatter suggests that eolian processes were likely the dominant transport mechanism for fine-grained sediments into the lake. The resulting ages show that the fossil-bearing sediments span the latest part of marine isotope stage (MIS) 6, all of MIS 5 and MIS 4, and the earliest part of MIS 3.

Relevance: 20.00%

Abstract:

We introduce a general scheme for sequential one-way quantum computation where static systems with long-living quantum coherence (memories) interact with moving systems that may possess very short coherence times. Both the generation of the cluster state needed for the computation and its consumption by measurements are carried out simultaneously. As a consequence, effective clusters of one spatial dimension fewer than in the standard approach are sufficient for computation. In particular, universal computation requires only a one-dimensional array of memories. The scheme applies to discrete-variable systems of any dimension as well as to continuous-variable ones, and both are treated equivalently under the light of local complementation of graphs. In this way our formalism introduces a general framework that encompasses and generalizes in a unified manner some previous system-dependent proposals. The procedure is intrinsically well suited for implementations with atom-photon interfaces.
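The graph operation the abstract above uses to relate discrete- and continuous-variable cluster states, local complementation, is concrete enough to sketch: at a chosen vertex, every edge among that vertex's neighbours is toggled, while edges incident to the vertex itself are unchanged. The edge-set encoding below is an illustrative helper, not the paper's formalism.

```python
def local_complement(edges, v):
    """Local complementation of a graph at vertex v.
    edges: a set of frozenset({a, b}) pairs. Toggles every edge among
    the neighbours of v; edges touching v itself are left alone."""
    nbrs = sorted({u for e in edges if v in e for u in e if u != v})
    out = set(edges)
    for i, a in enumerate(nbrs):
        for b in nbrs[i + 1:]:
            out ^= {frozenset((a, b))}  # symmetric difference toggles the edge
    return out
```

For example, local complementation at the middle vertex of the path 1-2-3 adds the edge {1, 3}, turning the path into a triangle; applying the operation twice at the same vertex restores the original graph, since each toggle is its own inverse.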

Relevance: 20.00%

Abstract:

Cooling and sinking of dense saline water in the Norwegian–Greenland Sea is essential for the formation of North Atlantic Deep Water. The convection in the Norwegian–Greenland Sea allows for a northward flow of warm surface water and southward transport of cold saline water. This circulation system is highly sensitive to climate change and has been shown to operate in different modes. In ice cores the last glacial period is characterized by millennial-scale Dansgaard–Oeschger (D–O) events of warm interstadials and cold stadials. Similar millennial-scale variability (linked to D–O events) is evident from oceanic cores, suggesting a strong coupling of the atmospheric and oceanic circulation systems. Particularly long-lasting cold stadials correlate with North Atlantic Heinrich events, where icebergs released from the continents caused a spread of meltwater over the northern North Atlantic and Nordic seas. The meltwater layer is believed to have caused a stop or near-stop in the deep convection, leading to cold climate. The spreading of meltwater and changes in oceanic circulation have a large influence on the carbon exchange between the atmosphere and the deep ocean and lead to profound changes in the 14C activity of the surface ocean. Here we demonstrate marine 14C reservoir ages (R) of up to c. 2000 years for Heinrich event H4. Our R estimates are based on a new method for age model construction using identified tephra layers and tie-points based on abrupt interstadial warmings.
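The reservoir age R reported above is, by standard radiocarbon convention, the offset between the conventional 14C age of surface-ocean carbon and that of the contemporaneous atmosphere. A minimal sketch of that arithmetic, using the conventional Libby mean life of 8033 years (the function names are illustrative, not from the paper):

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; conventional 14C ages use the Libby half-life

def c14_age(fraction_modern):
    """Conventional radiocarbon age (14C years) from fraction modern (F14C)."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

def reservoir_age(marine_f14c, atmospheric_f14c):
    """R = marine 14C age minus the contemporaneous atmospheric 14C age."""
    return c14_age(marine_f14c) - c14_age(atmospheric_f14c)
```

An R of c. 2000 years, as found here for Heinrich event H4, thus corresponds to surface-ocean carbon whose 14C activity is depressed by a factor of about exp(-2000/8033) ≈ 0.78 relative to the atmosphere.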

Relevance: 20.00%

Abstract:

An extension of approximate computing, significance-based computing exploits applications' inherent error resiliency and offers a new structural paradigm that strategically relaxes full computational precision to provide significant energy savings with minimal performance degradation.

Relevance: 20.00%

Abstract:

In this paper, we present a novel discrete cosine transform (DCT) architecture that allows aggressive voltage scaling for low-power dissipation, even under process parameter variations with minimal overhead as opposed to existing techniques. Under a scaled supply voltage and/or variations in process parameters, any possible delay errors appear only from the long paths that are designed to be less contributive to output quality. The proposed architecture allows a graceful degradation in the peak SNR (PSNR) under aggressive voltage scaling as well as extreme process variations. Results show that even under large process variations (±3σ around mean threshold voltage) and aggressive supply voltage scaling (at 0.88 V, while the nominal voltage is 1.2 V for a 90-nm technology), there is a gradual degradation of image quality with considerable power savings (71% at PSNR of 23.4 dB) for the proposed architecture, when compared to existing implementations in a 90-nm process technology. © 2006 IEEE.
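The transform underlying the architecture above is the standard DCT-II; a naive reference implementation makes the quality argument concrete, since the low-frequency coefficients carry most of an image's energy while the high-frequency ones matter least for PSNR. This is generic reference code, not the paper's hardware datapath.

```python
import math

def dct_1d(x):
    """Naive orthonormal 1-D DCT-II (the transform used in JPEG-style
    image coding); illustrative reference code only."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out
```

In the architecture described, the computations feeding the perceptually less significant (high-frequency) coefficients are the natural candidates for the long paths, so that delay errors under aggressive voltage scaling degrade PSNR gradually rather than corrupting the dominant low-frequency terms.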

Relevance: 20.00%

Abstract:

No Abstract available
