32 results for One-way Quantum Computer


Relevance: 100.00%

Publisher:

Abstract:

We have analysed the diurnal cycle of rainfall over the Indian region (10S-35N, 60E-100E) using both satellite and in-situ data, and found many interesting features associated with this fundamental, yet under-explored, mode of variability. Since there is a distinct and strong diurnal mode of variability associated with Indian summer monsoon rainfall, we evaluate the ability of the Weather Research and Forecasting (WRF) model to simulate the observed diurnal rainfall characteristics. The model (at 54 km grid spacing) is integrated for July 2006, since this period was particularly favourable for the study of the diurnal cycle. We first evaluate the sensitivity of the model to the prescribed sea surface temperature (SST) by using two different SST datasets, namely Final Analyses (FNL) and Real-time Global (RTG). We found that with RTG SST the rainfall simulation over central India (CI) was significantly better than with FNL. On the other hand, over the Bay of Bengal (BoB), rainfall simulated with FNL was marginally better than with RTG. Overall, however, RTG SST performed better than FNL, and hence it was used for further model simulations. Next, we investigated the role of the convective parameterization scheme in the simulation of the diurnal cycle of rainfall. We found that the Kain-Fritsch (KF) scheme performs significantly better than the Betts-Miller-Janjić (BMJ) and Grell-Devenyi schemes. We also studied the impact of other physical parameterizations, namely microphysics, boundary layer, land surface, and radiation, on the simulation of the diurnal cycle of rainfall, and identified the “best” model configuration. We used this “best” configuration to perform a sensitivity study on the role of the various convective components of the KF scheme; in particular, we studied the roles of convective downdrafts, the convective timescale, and the feedback fraction in the simulated diurnal cycle of rainfall.
The “best” model simulations, in general, show good agreement with observations. Specifically: (i) over CI, the simulated diurnal rainfall peak is at 1430 IST, compared to the observed 1430-1730 IST peak; (ii) over the Western Ghats and the Burmese mountains, the model simulates a diurnal rainfall peak at 1430 IST, as opposed to the observed 1430-1730 IST peak; (iii) over Sumatra, both model and observations show a diurnal peak at 1730 IST; (iv) the observed southward-propagating diurnal rainfall bands over the BoB are only weakly simulated by WRF. Besides the diurnal cycle of rainfall, the mean spatial pattern of total rainfall, and its partitioning between convective and stratiform components, are also well simulated. The “best” model configuration was used to conduct two nested simulations with one-way, three-level nesting (54-18-6 km) over CI and the BoB. While the 54 km and 18 km simulations were conducted for the whole of July 2006, the 6 km simulation was carried out for 18-24 July 2006. The results of our coarse- and fine-scale numerical simulations of the diurnal cycle of monsoon rainfall will be discussed.
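As a toy illustration of the basic diagnostic, the sketch below picks the peak hour of a synthetic mean diurnal cycle; the 24 hourly values and the afternoon peak are invented for illustration and are not the observed or simulated rainfall of the study.

```python
import math

def diurnal_peak(hourly_mean):
    """Return the hour (0-23) at which the mean diurnal cycle peaks."""
    return max(range(24), key=lambda h: hourly_mean[h])

# Synthetic afternoon-peaked cycle (mm/hr vs. hour of day), loosely
# mimicking land convection; NOT data from the paper.
cycle = [1.0 + 0.8 * math.cos(2 * math.pi * (h - 14) / 24) for h in range(24)]
peak_hour = diurnal_peak(cycle)
```

Applied to hourly-binned observations or model output, the same one-liner locates the kind of 1430 IST peak discussed above.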

Abstract:

A distributed storage setting is considered where a file of size B is to be stored across n storage nodes. A data collector should be able to reconstruct the entire file by downloading the symbols stored in any k nodes. When a node fails, it is replaced by a new node that downloads data from some of the existing nodes; the amount of data downloaded is termed the repair bandwidth. One way to implement such a system is to store one fragment of an (n, k) MDS code in each node, in which case the repair bandwidth is B. Since repair of a failed node consumes network bandwidth, codes that reduce the repair bandwidth are of great interest. Most recent work in this area focuses on reducing the repair bandwidth of a set of k nodes that store the data in uncoded form, while the reduction in the repair bandwidth of the remaining nodes is only marginal. In this paper, we present an explicit code which reduces the repair bandwidth for all nodes to approximately B/2. To the best of our knowledge, this is the first explicit code that reduces the repair bandwidth of all nodes for all feasible values of the system parameters.
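The baseline figure quoted above can be made concrete with a little arithmetic; the sketch below is illustrative accounting only, not the paper's code construction.

```python
# With an (n, k) MDS code, a file of size B is split into k fragments
# of size B/k, one per node. Naive repair of one failed node downloads
# k whole fragments (enough to rebuild the file and re-encode), i.e.
# repair bandwidth B. The explicit code in the paper brings this down
# to roughly B/2 for every node.
def naive_mds_repair_bandwidth(B, k):
    fragment = B / k      # size of the fragment each node stores
    return k * fragment   # download k fragments to repair one node

B, k = 1.0, 4
naive_bw = naive_mds_repair_bandwidth(B, k)
```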

Abstract:

We describe a SystemC-based framework we are developing to explore the impact of architectural- and microarchitectural-level parameters of on-chip interconnection network elements on power and performance. The framework enables one to choose from a variety of architectural options, such as topology and routing policy, and allows experimentation with various microarchitectural options for the individual links, such as length, wire width, pitch, pipelining, supply voltage and frequency. The framework also supports a flexible traffic generation and communication model. We provide preliminary results of using this framework to study the power, latency and throughput of a 4x4 multi-core processing array using mesh, torus and folded-torus topologies, for two different communication patterns drawn from dense and sparse linear algebra. The traffic consists of both request-response messages (mimicking cache accesses) and one-way messages. We find that the average latency can be reduced by increasing the pipeline depth, as it enables higher link frequencies. We also find that there exists an optimum degree of pipelining which minimizes the energy-delay product.
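As a hedged illustration of why an optimum pipelining degree can exist, the toy model below trades per-stage register energy against the higher link frequency that deeper pipelining allows; all constants and functional forms here are hypothetical, not outputs of the SystemC framework.

```python
# Toy model: a deeper link pipeline permits a higher clock (shorter
# cycle) but adds pipeline-fill latency and register energy per stage.
# All constants are invented for illustration.
def link_delay(depth, base_cycle=1.0, hops=4):
    cycle = base_cycle / depth                 # deeper pipe -> faster clock
    return hops * (depth * cycle + cycle)      # per-hop fill + transfer

def link_energy(depth, e_wire=1.0, e_reg=0.3, hops=4):
    return hops * (e_wire + e_reg * depth)     # registers cost energy per stage

def energy_delay_product(depth):
    return link_delay(depth) * link_energy(depth)

best_depth = min(range(1, 9), key=energy_delay_product)
```

Under these invented constants the delay falls monotonically with depth while energy rises, so the energy-delay product has an interior minimum, mirroring the qualitative finding above.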

Abstract:

Communication applications are usually delay-restricted, especially in the case of musicians playing together over the Internet. This requires a one-way delay of at most 25 ms, while high audio quality is desired at feasible bit rates. The ultra-low-delay (ULD) audio coding structure is well suited to this application, and we further investigate the application of multistage vector quantization (MSVQ) to reach bit rates below 64 kb/s in a scalable manner. Results at 32 kb/s and 64 kb/s show that the trained-codebook MSVQ performs best, better than KLT normalization followed by a simulated Gaussian MSVQ, or a simulated Gaussian MSVQ alone. The results also show that there is only a weak dependence on the training data, and that we indeed converge to the perceptual quality of our previous ULD coder at 64 kb/s.
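A minimal sketch of the multistage idea, assuming tiny hand-picked codebooks rather than the trained MSVQ of the coder: each stage quantizes the previous stage's residual, so bits add across stages while distortion shrinks.

```python
# Two-stage vector quantizer sketch. Codebooks here are illustrative
# toy values, not trained codebooks from the ULD coder.
def nearest(codebook, x):
    return min(codebook,
               key=lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, c)))

def msvq_encode(x, stages):
    recon = [0.0] * len(x)
    for cb in stages:                          # each stage codes the residual
        residual = [xi - ri for xi, ri in zip(x, recon)]
        q = nearest(cb, residual)
        recon = [ri + qi for ri, qi in zip(recon, q)]
    return recon

stage1 = [[0.0, 0.0], [1.0, 1.0]]
stage2 = [[0.0, 0.0], [0.25, -0.25]]
out = msvq_encode([1.2, 0.8], [stage1, stage2])
```

With a k-entry codebook per stage, an S-stage MSVQ spends S*log2(k) bits per vector, which is what makes the scheme naturally scalable in rate.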

Abstract:

We consider the one-way relay-aided MIMO X fading channel, where there are two transmitters and two receivers along with a relay, with M antennas at every node. Every transmitter wants to transmit messages to every receiver. The relay broadcasts to the receivers along a noisy link which is independent of the transmitters' channels; in the literature, this is referred to as a relay with orthogonal components. We derive an upper bound on the degrees of freedom of such a network. Next we show that the upper bound is tight by proposing an achievability scheme based on signal-space alignment for M = 2 antennas at every node.

Abstract:

Biomechanical signals due to human movements during exercise are represented in the time-frequency domain using the Wigner Distribution Function (WDF). Analysis based on the WDF reveals instantaneous spectral and power changes during a rhythmic exercise. Investigations were carried out on 11 healthy subjects who performed 5 cycles of sun salutation, with a body-mounted Inertial Measurement Unit (IMU) as a motion sensor. The variance of the instantaneous frequency (IF) and instantaneous power (IP), used for performance analysis of the subjects, is estimated using a one-way ANOVA model. Results reveal that joint time-frequency analysis of biomechanical signals during motion facilitates a better understanding of grace and consistency during rhythmic exercise.
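For reference, a one-way ANOVA F-statistic can be computed from first principles as below; the two "groups" are synthetic samples, not the IMU-derived IF/IP data of the study.

```python
# One-way ANOVA: F = (between-group mean square) / (within-group mean square).
def one_way_anova_F(groups):
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total sample count
    grand = sum(sum(g) for g in groups) / n       # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two synthetic "subjects" with clearly different means: large F expected.
F = one_way_anova_F([[1.0, 1.1, 0.9], [2.0, 2.1, 1.9]])
```

A large F relative to the appropriate F-distribution quantile indicates that between-subject variance dominates within-subject variance.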

Abstract:

Edge-preserving smoothing is widely used in image processing, and bilateral filtering is one way to achieve it. The bilateral filter is a nonlinear combination of domain and range filters. Implementing the classical bilateral filter is computationally intensive, owing to the nonlinearity of the range filter. In the standard form, the domain and range filters are Gaussian functions, and performance depends on the choice of the filter parameters. Recently, a constant-time implementation of the bilateral filter has been proposed, based on a raised-cosine approximation to the Gaussian, to facilitate fast implementation. We address the problem of determining the optimal parameters for this raised-cosine-based constant-time implementation. To determine the optimal parameters, we propose the use of Stein's unbiased risk estimator (SURE). The fast bilateral filter accelerates the search for optimal parameters by enabling faster optimization of the SURE cost. Experimental results show that the SURE-optimal raised-cosine-based bilateral filter has nearly the same performance as the SURE-optimal standard Gaussian bilateral filter and the oracle mean-squared-error (MSE)-based optimal bilateral filter.
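The constant-time implementation mentioned above rests on the fact that a suitably scaled raised cosine converges to a Gaussian: [cos(x / (sigma * sqrt(N)))]^N -> exp(-x^2 / (2 sigma^2)) as N grows. The sketch below only checks that limit numerically; it is not the filter implementation from the paper.

```python
import math

def raised_cosine(x, sigma, N):
    # cos^N approximation to the Gaussian kernel; zero outside the main lobe.
    u = x / (sigma * math.sqrt(N))
    return math.cos(u) ** N if abs(u) < math.pi / 2 else 0.0

def gaussian(x, sigma):
    return math.exp(-x * x / (2 * sigma * sigma))

# Maximum pointwise error over x in [-2, 2] for sigma = 1, N = 50.
err = max(abs(raised_cosine(x / 10, 1.0, 50) - gaussian(x / 10, 1.0))
          for x in range(-20, 21))
```

Because cos^N expands into a short sum of cosines, the range kernel becomes shift-invariant in the intensity variable, which is what yields the constant-time filter.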

Abstract:

To achieve food security and meet the demands of ever-growing human populations, farming systems have adopted unsustainable practices to produce more from a finite land area. This has been a cause for concern, mainly due to the often-irreversible damage done to otherwise productive agricultural landscapes. Agro-ecological health is said to be deteriorating owing to the eroding integrity of connected ecological mosaics and vulnerability to climate change. This has contributed to declining species diversity, loss of buffer vegetation, fragmentation of habitats, and loss of natural pollinators or predators, which eventually leads to a decline in ecosystem services. Currently, a hierarchy of conservation initiatives is being considered to restore the ecological integrity of agricultural landscapes. However, identifying a suitable conservation strategy is a daunting task in view of the socio-ecological factors that may constrain the choice of available strategies. One way to mitigate this situation and integrate biodiversity with agricultural landscapes is to implement offset mechanisms: compensatory and balancing approaches to restoring the ecological health and function of an ecosystem. These need to be tailored to the history of location-specific agricultural practices and to the social, ecological and environmental conditions. Offset mechanisms can complement other initiatives through which farmers are insured against landscape-level risks such as droughts, fires and floods. For countries in the developing world with significant biodiversity and extensive agriculture, we should promote a comprehensive model of sustainable agricultural landscapes and ecosystem services, replicable at landscape to regional scales. Arguably, this model is a potential option for sustaining the integrity of the biodiversity mosaic in agricultural landscapes.

Abstract:

We have developed a one-way nested Indian Ocean regional model. The model combines the National Oceanic and Atmospheric Administration (NOAA) Geophysical Fluid Dynamics Laboratory's (GFDL) Modular Ocean Model (MOM4p1) at global climate model resolution (nominally one degree), and a regional Indian Ocean MOM4p1 configuration with 25 km horizontal resolution and 1 m vertical resolution near the surface. Inter-annual global simulations with Coordinated Ocean-Ice Reference Experiments (CORE-II) surface forcing over years 1992-2005 provide surface boundary conditions. We show that relative to the global simulation, (i) biases in upper ocean temperature, salinity and mixed layer depth are reduced, (ii) sea surface height and upper ocean circulation are closer to observations, and (iii) improvements in model simulation can be attributed to refined resolution, more realistic topography and inclusion of seasonal river runoff. Notably, the surface salinity bias is reduced to less than 0.1 psu over the Bay of Bengal using relatively weak restoring to observations, and the model simulates the strong, shallow halocline often observed in the North Bay of Bengal. There is marked improvement in subsurface salinity and temperature, as well as mixed layer depth in the Bay of Bengal. Major seasonal signatures in observed sea surface height anomaly in the tropical Indian Ocean, including the coastal waveguide around the Indian peninsula, are simulated with great fidelity. The use of realistic topography and seasonal river runoff brings the three dimensional structure of the East India Coastal Current and West India Coastal Current much closer to observations. As a result, the incursion of low salinity Bay of Bengal water into the southeastern Arabian Sea is more realistic. (C) 2013 Elsevier Ltd. All rights reserved.

Abstract:

A droplet introduced into an external convective flow field exhibits significant multimodal shape oscillations depending upon the intensity of the aerodynamic forcing. In this paper, a theoretical model describing the temporal evolution of the normal modes of the droplet shape is developed. The fluid is assumed to be weakly viscous and Newtonian. The convective flow velocity, which is assumed to be incompressible and inviscid, is incorporated in the model through the normal stress condition at the droplet surface, and the equation of motion governing the dynamics of each mode is derived. The coupling between the external flow and the droplet is approximated as a one-way process, i.e., the external flow perturbations affect the droplet shape oscillations, but the droplet oscillation itself does not influence the external flow characteristics. The shape oscillations of droplets with different fluid properties under different unsteady flow fields were simulated. For a pulsatile external flow, the frequency spectra of the normal modes of the droplet revealed a dominant response at the resonant frequency, in addition to the driving frequency and the corresponding harmonics. At driving frequencies sufficiently different from the resonant frequency of the prolate-oblate oscillation mode of the droplet, the oscillations are stable, but at resonance the oscillation amplitude grows in time, leading to breakup depending upon the fluid viscosity. A line vortex advecting past the droplet, simulated as an isotropic jump in the far-field velocity, leads to resonant excitation of the droplet shape modes if and only if the time taken by the vortex to cross the droplet is less than the resonant period of the P-2 mode of the droplet. A train of two vortices interacting with the droplet is also analysed; it shows clearly that the time instant of introduction of the second vortex, relative to the droplet shape oscillation cycle, is crucial in determining the amplitude of oscillation. (C) 2014 AIP Publishing LLC.
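The resonance behaviour described above can be caricatured with a single damped, driven oscillator standing in for the P-2 mode; the parameters below are illustrative, not from the paper's model.

```python
import math

def steady_state_amplitude(w, w0=1.0, zeta=0.05):
    # Amplitude of x'' + 2*zeta*w0*x' + w0**2 * x = cos(w*t) at unit forcing.
    return 1.0 / math.sqrt((w0**2 - w**2) ** 2 + (2 * zeta * w0 * w) ** 2)

resonant = steady_state_amplitude(1.0)   # driven at the natural frequency
off_res = steady_state_amplitude(2.0)    # driven well off resonance
```

At resonance the amplitude is limited only by the damping term (here 1 / (2*zeta) = 10), which is the single-mode analogue of the viscosity-dependent growth-to-breakup behaviour noted above.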

Abstract:

Phonon interaction with electrons, with other phonons, or with structural defects results in phonon mode conversion. The mode conversion is governed by the frequency-wave-vector dispersion relation. Control over phonon modes, or the screening of phonons, in graphene is studied using the propagation of an amplitude-modulated phonon wave-packet. Control over phonon properties such as frequency and velocity opens up several waveguiding, energy transport and thermoelectric applications of graphene. One way to achieve this control is to introduce nano-structured scattering in the phonon path. An atomistic model of thermal energy transport is developed which is applicable to devices consisting of source, channel and drain parts. A longitudinal acoustic phonon mode is excited from one end of the device. Molecular-dynamics-based time integration is adopted to propagate the excited phonon to the other end of the device. The amount of energy transfer is estimated from the relative change in kinetic energy. Increasing the phonon frequency decreases the kinetic energy transmission linearly in the frequency band of interest. Further reduction in transmission is observed when the channel height of the device is tuned to increase boundary scattering. Phonon-mode-selective transmission control has potential applications in thermal insulation, thermoelectrics, and photo-thermal amplification.
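As a minimal stand-in for the frequency dependence described above, the 1D monatomic-chain dispersion below shows the group velocity falling as frequency rises toward the band edge; the constants are illustrative and this is not the atomistic device model of the paper.

```python
import math

# 1D monatomic chain: omega(q) = 2*sqrt(k/m)*|sin(q*a/2)|, with spring
# constant k, atomic mass m, lattice spacing a (all illustrative units).
def omega(q, k=1.0, m=1.0, a=1.0):
    return 2.0 * math.sqrt(k / m) * abs(math.sin(q * a / 2))

def group_velocity(q, k=1.0, m=1.0, a=1.0):
    # d(omega)/dq for q > 0: energy transport speed of the wave-packet.
    return math.sqrt(k / m) * a * math.cos(q * a / 2)

low_q, high_q = 0.2, 2.8   # near the zone centre vs. near the band edge
```

A higher-frequency (larger-q) packet thus carries kinetic energy more slowly, one simple dispersive ingredient in the frequency-dependent transmission reported above.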

Abstract:

Frequent episode discovery is one of the methods used for temporal pattern discovery in sequential data. An episode is a partially ordered set of nodes, with each node associated with an event type. For more than a decade, algorithms existed for episode discovery only when the associated partial order is total (serial episodes) or trivial (parallel episodes). Recently, the literature has seen algorithms for discovering episodes with general partial orders. In frequent pattern mining, the threshold beyond which a pattern is inferred to be interesting is typically user-defined and arbitrary. One way of addressing this issue in the pattern mining literature has been based on the framework of statistical hypothesis testing. This paper presents a method of assessing the statistical significance of episode patterns with general partial orders. A method is proposed to calculate thresholds on the non-overlapped frequency beyond which an episode pattern would be inferred to be statistically significant. The method is first explained for the case of injective episodes with general partial orders; an injective episode is one in which event types are not allowed to repeat. Later, it is pointed out how the method can be extended to the class of all episodes. The significance threshold calculations for general partial-order episodes proposed here also generalize the existing significance results for serial episodes. Through simulation studies, the usefulness of these statistical thresholds in pruning uninteresting patterns is illustrated. (C) 2014 Elsevier Inc. All rights reserved.
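As a generic illustration of a frequency-based significance threshold (a Gaussian approximation to a binomial null, not the paper's exact non-overlapped-frequency analysis), a pattern is flagged only if its observed count exceeds the null mean by a set number of standard deviations:

```python
import math

# Null model (hypothetical): the pattern's count over n_windows windows
# is Binomial(n_windows, p_null). Flag significance when the observed
# frequency exceeds mean + z_alpha standard deviations.
def frequency_threshold(n_windows, p_null, z_alpha=1.645):  # alpha ~ 0.05
    mean = n_windows * p_null
    sd = math.sqrt(n_windows * p_null * (1.0 - p_null))
    return mean + z_alpha * sd

t = frequency_threshold(10000, 0.01)   # e.g. 10,000 windows, 1% null rate
```

The paper's contribution is precisely in deriving the appropriate null frequency for general partial-order episodes, where the enumeration of occurrences is far subtler than this binomial caricature.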

Abstract:

Although ultrathin Au nanowires (~2 nm diameter) are expected to exhibit several interesting properties, their extreme fragility has hampered their use in potential applications. One way to improve their stability is to grow them on substrates; however, there is no general method to grow these wires over large areas, and the existing methods suffer from poor coverage and the associated formation of larger nanoparticles on the substrate. Herein, we demonstrate a room-temperature method for growing these nanowires with high coverage over large areas by in situ functionalization of the substrate. Using control experiments, we demonstrate that in situ functionalization of the substrate is the key step in controlling the areal density of the wires. We show that this strategy works for a variety of substrates, such as graphene, borosil glass, Kapton, and oxide supports. We present initial results on catalysis using wires grown on alumina and silica beads, and also extend the method to lithography-free device fabrication. This method is general and may be extended to grow ultrathin Au nanowires on a variety of substrates for other applications.

Abstract:

The two-step particle synthesis mechanism, also known as the Finke-Watzky (1997) mechanism, has emerged as a significant development in the field of nanoparticle synthesis. It explains a characteristic feature of the synthesis of transition metal nanoparticles: an induction period in precursor concentration followed by its rapid sigmoidal decrease. The classical LaMer theory (1950) of particle formation fails to capture this behavior. The two-step mechanism considers slow continuous nucleation and autocatalytic growth of particles directly from precursor as its two kinetic steps. In the present work, we test the two-step mechanism rigorously using population balance models. We find that it explains precursor consumption very well, but fails to explain particle synthesis: the effect of continued nucleation is not suppressed sufficiently by the rapid autocatalytic growth of particles, and the continued nucleation broadens the size distributions to unexpectedly large values compared with those observed experimentally. A number of variations of the original mechanism with additional reaction steps are investigated next. The simulations show that continued nucleation from the beginning of the synthesis leads to the formation of highly polydisperse particles in all of the tested cases. A short nucleation window, realized in one of the variations through a delayed onset of nucleation and its suppression soon after, appears to be one way to explain all of the known experimental observations. The present investigation clearly establishes the need to revisit the two-step particle synthesis mechanism.
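The two kinetic steps can be sketched as a pair of ODEs and integrated directly; the rate constants below are illustrative, not fitted values, but they reproduce the induction period followed by the sigmoidal drop in precursor concentration.

```python
# Finke-Watzky two-step kinetics, forward-Euler integration:
#   nucleation     A -> B        rate k1*A     (slow)
#   autocatalysis  A + B -> 2B   rate k2*A*B   (fast)
# A is precursor concentration, B is "particle" (surface) concentration.
def finke_watzky(k1=1e-3, k2=2.0, A0=1.0, dt=0.01, steps=2000):
    A, B = A0, 0.0
    trace = [A]
    for _ in range(steps):
        rate = (k1 * A + k2 * A * B) * dt   # total precursor consumption
        A -= rate
        B += rate
        trace.append(A)
    return trace

trace = finke_watzky()
```

Early on only the slow k1 term acts, so A barely moves (the induction period); once B builds up, the k2*A*B term takes over and A collapses sigmoidally, exactly the precursor behaviour the mechanism was proposed to explain.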

Abstract:

Atomization is the process of disintegration of a liquid jet into ligaments and subsequently into smaller droplets. A liquid jet injected from a circular orifice into a cross-flow of air undergoes atomization primarily due to the interaction of the two phases rather than intrinsic breakup. Direct numerical simulation of this process, resolving the finest droplets, is computationally very expensive and impractical. In the present study, we resort to multiscale modelling to reduce the computational cost. The primary breakup of the liquid jet is simulated using Gerris, an open-source code which employs a Volume-of-Fluid (VOF) algorithm. The smallest droplets formed during primary atomization are modelled as Lagrangian particles. This one-way coupling approach is validated with the simple test case of tracking a particle in a Taylor-Green vortex. The temporal evolution of the liquid jet forming the spray is captured, and the flattening of the cylindrical liquid column prior to breakup is observed. The size distribution of the resultant droplets is presented at different distances downstream of the injection location, and their spatial evolution is analyzed.
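The validation case mentioned above can be sketched as follows: a tracer advected one-way through a steady 2-D Taylor-Green vortex with classical RK4, where the streamfunction value along the path serves as an accuracy check. The time step and initial position are illustrative, not those of the study.

```python
import math

# Steady 2-D Taylor-Green vortex: u = sin(x)cos(y), v = -cos(x)sin(y).
# The flow is divergence-free, and a passive tracer stays on a contour
# of the streamfunction psi = sin(x)sin(y).
def velocity(p):
    x, y = p
    return (math.sin(x) * math.cos(y), -math.cos(x) * math.sin(y))

def rk4_step(p, dt):
    def shift(p, k, s):
        return (p[0] + s * k[0], p[1] + s * k[1])
    k1 = velocity(p)
    k2 = velocity(shift(p, k1, dt / 2))
    k3 = velocity(shift(p, k2, dt / 2))
    k4 = velocity(shift(p, k3, dt))
    return (p[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            p[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def streamfunction(p):
    return math.sin(p[0]) * math.sin(p[1])

p = (1.0, 0.5)                      # illustrative initial tracer position
psi0 = streamfunction(p)
for _ in range(1000):               # integrate to t = 10 with dt = 0.01
    p = rk4_step(p, 0.01)
```

Conservation of psi along the computed trajectory is the quantitative check that the one-way-coupled particle integrator is accurate.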