899 results for Multi-way cluster
Abstract:
In this article, the representation of the merging process at the floor-stair interface is examined within a comprehensive evacuation model and trends found in experimental data are compared with model predictions. The analysis suggests that the representation of floor-stair merging within the comprehensive model appears to be consistent with trends observed within several published experiments of the merging process. In particular: (a) The floor flow rate onto the stairs decreases as the stair population density increases. (b) For a given stair population density, the floor population's flow rate onto the stairs can be maximized by connecting the floor to the landing adjacent to the incoming stair. (c) In situations where the floor is connected adjacent to the incoming stair, the merging process appears to be biased in favor of the floor population. It is further conjectured that when the floor is connected opposite the incoming stair, the merging process between the stair and floor streams is almost in balance for high stair population densities, with a slight bias in favor of the floor stream at low population densities. A key practical finding of this analysis is that the speed at which a floor can be emptied onto a stair can be enhanced simply by connecting the floor to the landing at a location adjacent to the incoming stair rather than opposite the stair. Configuring the stair in this way, while reducing the floor emptying time, results in a corresponding decrease in the descent flow rate of those already on the stairs. While this is expected to have a negligible impact on the overall time to evacuate the building, the evacuation time for those higher up in the building is extended while that for those on the lower floors is reduced. It is thus suggested that in high-rise buildings, floors should be connected to the landing on the opposite side to the incoming stair.
Information of this type will allow engineers to better design stair-floor interfaces to meet specific design objectives.
Abstract:
In the near future, the oceans will be subjected to a massive development of marine infrastructures, including offshore wind, tidal and wave energy farms and constructions for marine aquaculture. The development of these facilities will unavoidably exert environmental pressures on marine ecosystems. It is therefore crucial that the economic costs, the use of marine space and the environmental impacts of these activities remain within acceptable limits. Moreover, the installation of arrays of wave energy devices is still far from being economically feasible due to many combined aspects, such as immature technologies for energy conversion, local energy storage and moorings. Therefore, multi-purpose solutions combining renewable energy from the sea (wind, wave, tide), aquaculture and transportation facilities can be considered as a challenging, yet advantageous, way to boost blue growth. These advantages stem from sharing installation costs, using the energy produced locally to power the different functionalities, and optimizing marine spatial planning. This paper focuses on the synergies that may be produced by a multi-purpose offshore installation in a relatively calm sea, i.e., the Northern Adriatic Sea, Italy, and specifically offshore Venice. It analyzes the combination of aquaculture, energy production from wind and waves, and energy storage or transfer. Alternative solutions are evaluated based on specific criteria, including the maturity of the technology, the environmental impact, the induced risks and the costs. Based on expert judgment, the alternatives are ranked and a preliminary layout of the selected multi-purpose installation for the case study is proposed, to further allow the exploitation of the synergies among different functionalities.
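The expert-judgment ranking described in this abstract can be illustrated with a simple weighted-sum multi-criteria score. The criteria names come from the abstract; all weights, alternative names, and scores below are invented for illustration and do not come from the paper.

```python
# Hypothetical criteria weights (illustrative only, not from the paper)
criteria = {"technology maturity": 0.3, "environmental impact": 0.25,
            "induced risks": 0.2, "costs": 0.25}

# Invented expert scores per alternative on a 1-5 scale (higher is better)
alternatives = {
    "wind + aquaculture": {"technology maturity": 4, "environmental impact": 3,
                           "induced risks": 4, "costs": 3},
    "wave + storage":     {"technology maturity": 2, "environmental impact": 4,
                           "induced risks": 3, "costs": 2},
}

# Weighted sum of scores for each alternative, then rank descending
scores = {name: sum(criteria[c] * s[c] for c in criteria)
          for name, s in alternatives.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking[0])
```

In practice such rankings are often refined with sensitivity analysis on the weights, since expert judgments of the criteria are themselves uncertain.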
Abstract:
Distributed quantum information processing (QIP) is a promising way to bypass problems due to unwanted interactions between elements. However, this strategy presupposes the engineering of protocols for remote processors. In many of them, pairwise entanglement is a key resource. We study a model which distributes entanglement among elements of a delocalized network without local control. The model is efficient both in finite- and infinite-dimensional Hilbert spaces. We suggest a setup of electromechanical systems to implement our proposal.
Abstract:
This paper presents a multi-language framework for FPGA hardware development which aims to satisfy the dual requirement of high-level hardware design and efficient hardware implementation. The central idea of this framework is the integration of different hardware languages in a way that harnesses the best features of each language. This is illustrated in this paper by the integration of two hardware languages in the form of HIDE: a structured hardware language which provides more abstract and elegant hardware descriptions and compositions than are possible in traditional hardware description languages such as VHDL or Verilog, and Handel-C: an ANSI C-like hardware language which allows software and hardware engineers alike to target FPGAs from high-level algorithmic descriptions. On the one hand, HIDE has proven to be very successful in the description and generation of highly optimised parameterisable FPGA circuits from geometric descriptions. On the other hand, Handel-C has also proven to be very successful in the rapid design and prototyping of FPGA circuits from algorithmic application descriptions. The proposed integrated framework hence harnesses HIDE for the generation of highly optimised circuits for regular parts of algorithms, while Handel-C is used as a top-level design language from which HIDE functionality is dynamically invoked. The overall message of this paper is that there need not be an exclusive choice between different hardware design flows. Rather, an integrated framework where different design flows can seamlessly interoperate should be adopted. Although the idea might seem simple prima facie, it could have serious implications for the design of future generations of hardware languages.
Abstract:
We provide an analysis of basic quantum-information processing protocols under the effect of intrinsic nonidealities in cluster states. These nonidealities are based on the introduction of randomness in the entangling steps that create the cluster state and are motivated by the unavoidable imperfections faced in creating entanglement using condensed-matter systems. Aided by the use of an alternative and very efficient method to construct cluster-state configurations, which relies on the concatenation of fundamental cluster structures, we address quantum-state transfer and various fundamental gate simulations through noisy cluster states. We find that a winning strategy to limit the effects of noise is the management of small clusters processed via just a few measurements. Our study also reinforces recent ideas related to the optical implementation of a one-way quantum computer.
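The quantum-state transfer addressed in this abstract builds on the ideal one-dimensional cluster "wire" primitive: entangle the input qubit with a |+> qubit via a controlled-Z, then measure the first qubit in the X basis. The sketch below simulates only this ideal, noise-free primitive (for the X-measurement outcome s = 0), not the paper's randomized entangling model; the state vectors and helper names are illustrative.

```python
import numpy as np

# Single-qubit states and gates
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Controlled-Z on two qubits
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def transfer(psi):
    """Ideal one-way transfer of |psi> through one cluster link:
    entangle with CZ, project qubit 1 onto |+> (X outcome s = 0)."""
    state = CZ @ np.kron(psi, ket_plus)       # entangle input with |+>
    proj = np.kron(np.outer(ket_plus, ket_plus.conj()), np.eye(2))
    state = proj @ state                      # X-basis measurement, outcome 0
    out = state.reshape(2, 2).sum(axis=0)     # trace out the measured qubit
    return out / np.linalg.norm(out)          # equals H|psi> up to phase

psi = np.array([0.6, 0.8], dtype=complex)
out = transfer(psi)
expected = H @ psi
overlap = abs(np.vdot(expected, out))
print(round(overlap, 6))  # 1.0
```

Each link of a linear cluster applies one Hadamard in this way, which is why an even number of links transfers the state faithfully (up to Pauli corrections for other measurement outcomes).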
Abstract:
We assess the effects of a realistic intrinsic model for imperfections in cluster states by introducing noisy cluster states and characterizing their role in the one-way computational model. A suitable strategy to counteract these non-idealities is represented by the use of small clusters, stripped of any redundancy, which leads to the search for compact schemes for one-way quantum computation. In light of this, we quantitatively address the behavior of a simple four-qubit cluster which simulates a controlled-NOT under the influences of our model for decoherence. Our scheme can be particularly useful in an all-optical setup and the strategy we address can be directly applied in those experimental situations where small cluster states can be constructed.
Abstract:
We report the experimental demonstration of a one-way quantum protocol reliably operating in the presence of decoherence. Information is protected by designing an appropriate decoherence-free subspace for a cluster state resource. We demonstrate our scheme in an all-optical setup, encoding the information into the polarization states of four photons. A measurement-based one-way information-transfer protocol is performed with the photons exposed to severe symmetric phase-damping noise. Remarkable protection of information is accomplished, delivering nearly ideal outcomes.
Abstract:
We introduce a novel scheme for one-way quantum computing (QC) based on the use of information-encoded qubits in an effective cluster state resource. With the correct encoding structure, we show that it is possible to protect the entangled resource from phase damping decoherence, where the effective cluster state can be described as residing in a decoherence-free subspace (DFS) of its supporting quantum system. One-way QC then requires either single or two-qubit adaptive measurements. As an example where this proposal can be realized, we describe an optical lattice set-up where the scheme provides robust quantum information processing. We also outline how one can adapt the model to provide protection from other types of decoherence.
Abstract:
We study the effects of amplitude and phase damping decoherence in d-dimensional one-way quantum computation. We focus our attention on low dimensions and elementary unidimensional cluster state resources. Our investigation shows how information transfer and entangling gate simulations are affected for d >= 2. To understand motivations for extending the one-way model to higher dimensions, we describe how basic qudit cluster states deteriorate under environmental noise of experimental interest. In order to protect quantum information from the environment, we consider encoding logical qubits into qudits and compare entangled pairs of linear qubit-cluster states to single qudit clusters of equal length and total dimension. A significant reduction in the performance of cluster state resources for d > 2 is found when Markovian-type decoherence models are present.
Acoustic solitary waves in dusty and/or multi-ion plasmas with cold, adiabatic, and hot constituents
Abstract:
Large nonlinear acoustic waves are discussed in a four-component plasma, made up of two superhot isothermal species, and two species with lower thermal velocities, being, respectively, adiabatic and cold. First a model is considered in which the isothermal species are electrons and ions, while the cooler species are positive and/or negative dust. Using a Sagdeev pseudopotential formalism, large dust-acoustic structures have been studied in a systematic way, to delimit the compositional parameter space in which they can be found, without restrictions on the charges and masses of the dust species and their charge signs. Solitary waves can only occur for nonlinear structure velocities smaller than the adiabatic dust thermal velocity, leading to a novel dust-acoustic-like mode based on the interplay between the two dust species. If the cold and adiabatic dust are oppositely charged, only solitary waves exist, having the polarity of the cold dust, their parameter range being limited by infinite compression of the cold dust. However, when the charges of the cold and adiabatic species have the same sign, solitary structures are limited for increasing Mach numbers successively by infinite cold dust compression, by encountering the adiabatic dust sonic point, and by the occurrence of double layers. The latter have, for smaller Mach numbers, the same polarity as the charged dust, but switch at the high Mach number end to the opposite polarity. Typical Sagdeev pseudopotentials and solitary wave profiles have been presented. Finally, the analysis has nowhere used the assumption that the dust would be much more massive than the ions and hence, one or both dust species can easily be replaced by positive and/or negative ions and the conclusions will apply to that plasma model equally well. 
This would cover a number of different scenarios, such as, for example, very hot electrons and ions, together with a mix of adiabatic ions and dust (of either polarity) or a very hot electron-positron mix, together with a two-ion mix or together with adiabatic ions and cold dust (both of either charge sign), to name but some of the possible plasma compositions.
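The Sagdeev pseudopotential formalism referred to above can be summarized by its standard pseudo-energy integral; the notation below is generic (phi for the electrostatic potential, M for the Mach number, xi for the co-moving coordinate) rather than the paper's specific symbols:

```latex
\frac{1}{2}\left(\frac{d\phi}{d\xi}\right)^{2} + S(\phi, M) = 0 .
```

Solitary waves require $S(0,M)=S'(0,M)=0$ and $S''(0,M)<0$, together with $S(\phi,M)<0$ for $0<|\phi|<|\phi_m|$ and $S(\phi_m,M)=0$ at the wave amplitude $\phi_m$; double layers additionally satisfy $S'(\phi_m,M)=0$, which is the condition behind the polarity switching of the double layers discussed above.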
Abstract:
A novel image segmentation method based on a constraint satisfaction neural network (CSNN) is presented. The new method uses CSNN-based relaxation but with a modified scanning scheme of the image. The pixels are visited with more distant intervals and wider neighborhoods in the first level of the algorithm. The intervals between pixels and their neighborhoods are reduced in the following stages of the algorithm. This method contributes to the formation of more regular segments rapidly and consistently. A cluster validity index to determine the number of segments is also added to make the proposed method a fully automatic unsupervised segmentation scheme. The results are compared quantitatively by means of a novel segmentation evaluation criterion. The results are promising.
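The coarse-to-fine scanning idea in this abstract can be sketched with a much simpler stand-in for the CSNN update: relax a label image at decreasing pixel strides with shrinking neighborhoods, here using plain majority voting instead of the constraint-satisfaction relaxation. The schedule values and toy image are invented for illustration.

```python
import numpy as np

def coarse_to_fine_relaxation(labels, schedule=((8, 9), (4, 5), (1, 3))):
    """Relax a label image at decreasing strides with shrinking windows.
    `schedule` is a sequence of (stride, window) pairs; majority voting
    stands in for the CSNN relaxation rule of the paper."""
    labels = labels.copy()
    h, w = labels.shape
    for stride, win in schedule:
        half = win // 2
        for y in range(0, h, stride):
            for x in range(0, w, stride):
                y0, y1 = max(0, y - half), min(h, y + half + 1)
                x0, x1 = max(0, x - half), min(w, x + half + 1)
                # Assign the majority label of the neighborhood
                vals, counts = np.unique(labels[y0:y1, x0:x1],
                                         return_counts=True)
                labels[y, x] = vals[np.argmax(counts)]
    return labels

# Toy example: a noisy two-segment image becomes more regular
rng = np.random.default_rng(0)
img = np.zeros((32, 32), dtype=int)
img[:, 16:] = 1                              # two vertical segments
noisy = img.copy()
flip = rng.random(img.shape) < 0.1           # ~10% label noise
noisy[flip] = 1 - noisy[flip]
cleaned = coarse_to_fine_relaxation(noisy)
print(round((noisy != img).mean(), 3), round((cleaned != img).mean(), 3))
```

The early coarse passes fix the large-scale segment layout cheaply, and the final stride-1 pass smooths residual isolated labels, mirroring the "rapid and consistent" segment formation claimed above.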
Abstract:
We report on our discovery and observations of the Pan-STARRS1 supernova (SN) PS1-12sk, a transient with properties that indicate atypical star formation in its host galaxy cluster or pose a challenge to popular progenitor system models for this class of explosion. The optical spectra of PS1-12sk classify it as a Type Ibn SN (cf. SN 2006jc), dominated by intermediate-width (3x10^3 km/s) and time variable He I emission. Our multi-wavelength monitoring establishes the rise time dt = 9-23 days and shows an NUV-NIR SED with temperature > 17x10^3 K and a peak rise magnitude of Mz = -18.9 mag. SN Ibn spectroscopic properties are commonly interpreted as the signature of a massive star (17 - 100 M_sun) explosion within a He-enriched circumstellar medium. However, unlike previous Type Ibn supernovae, PS1-12sk is associated with an elliptical brightest cluster galaxy, CGCG 208-042 (z = 0.054) in cluster RXC J0844.9+4258. The expected probability of an event like PS1-12sk in such environments is low given the measured infrequency of core-collapse SNe in red sequence galaxies compounded by the low volumetric rate of SN Ibn. Furthermore, we find no evidence of star formation at the explosion site to sensitive limits (Sigma Halpha
Abstract:
This study attempts to establish a link between the reasonably well known nature of the progenitor of SN2011fe and its surrounding environment. This is done with the aim of enabling the identification of similar systems in the vast majority of the cases, when distance and epoch of discovery do not allow a direct approach. To study the circumstellar environment of SN2011fe we have obtained high-resolution spectroscopy of SN2011fe on 12 epochs, from 8 to 86 days after the estimated date of explosion, targeting in particular the time evolution of CaII and NaI. Three main absorption systems are identified from CaII and NaI, one associated with the Milky Way, one probably arising within a high-velocity cloud, and one most likely associated with the halo of M101. The Galactic and host galaxy reddening, deduced from the integrated equivalent widths (EW) of the NaI lines, are E(B-V)=0.011+/-0.002 and E(B-V)=0.014+/-0.002 mag, respectively. The host galaxy absorption is dominated by a component detected at the same velocity measured from the 21-cm HI line at the projected SN position (~180 km/s). During the ~3 months covered by our observations, its EW changed by 15.6+/-6.5 mA. This small variation is shown to be compatible with the geometric effects produced by the rapid SN photosphere expansion coupled to the patchy fractal structure of the ISM. The observed behavior is fully consistent with ISM properties similar to those derived for our own Galaxy, with evidence for structures on scales
Abstract:
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients; a Gaussian, a Gamma distribution, and an analytic supernova (SN) model, and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics, to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL, and 2262 SV, with a purity of 95.00% for AGNs, and 90.97% for SNe based on our verification sets.
We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets, to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
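The K-means step of the pipeline above can be sketched on synthetic fit statistics. The two features and all numbers below are invented toy stand-ins for the paper's per-band statistics (cross-validation likelihoods and corrected AIC values), and the k-means implementation is a deliberately minimal one.

```python
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    """Minimal k-means: random point init, then alternate assignment
    and center updates. Enough to illustrate clustering fit statistics."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        lab = d.argmin(axis=1)
        # Recompute centers, keeping the old one if a cluster empties
        centers = np.array([X[lab == j].mean(axis=0) if np.any(lab == j)
                            else centers[j] for j in range(k)])
    return lab, centers

# Synthetic per-source fit statistics (toy numbers, not PS1 data):
# feature 0 ~ relative information-criterion preference for the BL models,
# feature 1 ~ relative cross-validation likelihood of the stochastic model.
rng = np.random.default_rng(1)
burst_like = rng.normal([-3.0, -2.0], 0.5, size=(100, 2))
stochastic = rng.normal([+2.0, +3.0], 0.5, size=(100, 2))
X = np.vstack([burst_like, stochastic])
lab, centers = kmeans(X)
print(np.bincount(lab))
```

As in the paper, each band would be clustered separately and the per-band labels then combined into the final SV/BL classification.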
Abstract:
The UK’s transport infrastructure is one of the most heavily used in the world. The performance of these networks is critically dependent on the performance of cutting and embankment slopes which make up £20B of the £60B asset value of major highway infrastructure alone. The rail network in particular is also one of the oldest in the world: many of these slopes are suffering a high incidence of instability (increasing with time). This paper describes the development of a fundamental understanding of earthwork material and system behaviour, through the systematic integration of research across a range of spatial and temporal scales. Spatially these range from microscopic studies of soil fabric, through elemental materials behaviour to whole slope modelling and monitoring and scaling up to transport networks. Temporally, historical and current weather event sequences are being used to understand and model soil deterioration processes, and climate change scenarios to examine their potential effects on slope performance for future periods up to and including the 2080s. The outputs of this research are being mapped onto the different spatial and temporal scales of infrastructure slope asset management to inform the design of new slopes through to changing the way in which investment is made into aging assets. The aim ultimately is to help create a more reliable, cost-effective, safer and more resilient transport system.