Abstract:
Thermoacoustics is the interaction between heat and sound, which is useful in designing heat engines and heat pumps. Research in thermoacoustics focuses on improving performance, which is achieved by altering operational, geometrical and fluid parameters. The present study deals with improving the performance of a twin thermoacoustic prime mover, which has gained significant importance in recent years for the production of high-amplitude sound waves. The performance of the twin thermoacoustic prime mover is evaluated in terms of the onset temperature difference, resonance frequency and pressure amplitude of the acoustic waves by varying the resonator length and the charge pressure of the nitrogen working fluid. DeltaEC, the free simulation software developed by LANL, USA, is employed in the present study to simulate the performance of the twin thermoacoustic prime mover. Experimental and simulated results are compared and the deviation is found to be within 10%.
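As a rough check on how the resonator length and working fluid set the resonance frequency, the sketch below treats the resonator as a simple half-wavelength tube filled with an ideal gas; the temperature, lengths, and the half-wave assumption are illustrative, not taken from the study.

```python
import math

def sound_speed(gamma, R, M, T):
    """Ideal-gas speed of sound: a = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * T / M)

def halfwave_resonance_frequency(L, a):
    """Fundamental frequency of a half-wavelength resonator: f = a / (2 L)."""
    return a / (2.0 * L)

# Nitrogen near room temperature (gamma ~ 1.4, M = 28.0134e-3 kg/mol)
a = sound_speed(1.4, 8.314, 28.0134e-3, 300.0)
for L in (0.5, 0.75, 1.0):  # resonator lengths in metres (illustrative)
    print(f"L = {L} m -> f ~ {halfwave_resonance_frequency(L, a):.0f} Hz")
```

Longer resonators lower the frequency as 1/L, which is the qualitative trend probed by varying the resonator length in the study.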
Abstract:
Recent data from high-statistics experiments that have measured the modulus of the pion electromagnetic form factor from threshold to relatively high energies are used as input in a suitable mathematical framework of analytic continuation to find stringent constraints on the shape parameters of the form factor at t = 0. The method also uses as input a precise description of the phase of the form factor in the elastic region, based on the Fermi-Watson theorem and the analysis of the ππ scattering amplitude with dispersive Roy equations, together with some information on the spacelike region coming from recent high-precision experiments. Our analysis confirms the inconsistency of several data sets on the modulus, especially from low energies, with analyticity and the input phase, as noted in our earlier work. Using the data on the modulus from energies above 0.65 GeV, we obtain, with no specific parametrisation, the prediction ⟨r_π²⟩ ∈ (0.42, 0.44) fm² for the charge radius. The same formalism also leads to very narrow allowed ranges for the higher-order shape parameters at t = 0, with a strong correlation among them.
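The shape parameters constrained at t = 0 are, in the standard convention (assumed here, with c and d as generic names for the curvature and cubic coefficients), the Taylor coefficients of the form factor, the first of which defines the charge radius:

```latex
F_\pi(t) = 1 + \frac{1}{6}\,\langle r_\pi^2\rangle\, t + c\, t^2 + d\, t^3 + \cdots,
\qquad
\langle r_\pi^2\rangle = 6\left.\frac{dF_\pi}{dt}\right|_{t=0}
```

The analysis thus constrains the slope, curvature, and higher coefficients simultaneously, which is why a correlation among them emerges.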
Abstract:
Electrical switching studies on amorphous Si15Te74Ge11 thin-film devices show interesting changes in the switching behavior with changes in the input energy supplied; the input energy determines the extent of crystallization in the active volume, which is reflected in the value of the SET resistance. This, in turn, determines the trend exhibited by the switching voltage (V_t) for different input conditions. The results obtained are analyzed on the basis of the amount of Joule heat generated, which determines the temperature of the active volume. Depending on the final temperature, devices are rendered either in the intermediate state, with a resistance of 5 × 10² Ω, or in the ON state, with a resistance of 5 × 10¹ Ω. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
We address the problem of temporal envelope modeling for transient audio signals. We propose the Gamma distribution function (GDF) as a suitable candidate for modeling the envelope keeping in view some of its interesting properties such as asymmetry, causality, near-optimal time-bandwidth product, controllability of rise and decay, etc. The problem of finding the parameters of the GDF becomes a nonlinear regression problem. We overcome the hurdle by using a logarithmic envelope fit, which reduces the problem to one of linear regression. The logarithmic transformation also has the feature of dynamic range compression. Since temporal envelopes of audio signals are not uniformly distributed, in order to compute the amplitude, we investigate the importance of various loss functions for regression. Based on synthesized data experiments, wherein we have a ground truth, and real-world signals, we observe that the least-squares technique gives reasonably accurate amplitude estimates compared with other loss functions.
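The reduction to linear regression described above can be sketched as follows, assuming a Gamma-type envelope of the form e(t) = A t^α exp(-β t) (the paper's exact parametrisation may differ); taking logarithms makes the model linear in (log A, α, β):

```python
import numpy as np

# Gamma-shaped envelope (assumed form): e(t) = A * t**alpha * exp(-beta * t)
t = np.linspace(0.01, 2.0, 400)
A, alpha, beta = 2.0, 1.5, 3.0
env = A * t**alpha * np.exp(-beta * t)

# Log transform: log e(t) = log A + alpha*log t - beta*t,
# i.e. linear in the unknowns (log A, alpha, beta).
y = np.log(env)
X = np.column_stack([np.ones_like(t), np.log(t), -t])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
logA_hat, alpha_hat, beta_hat = coef
print(np.exp(logA_hat), alpha_hat, beta_hat)  # recovers (2.0, 1.5, 3.0) on noiseless data
```

On noisy envelopes the least-squares fit in the log domain corresponds to a particular loss on the amplitudes, which is why the choice of loss function matters, as the abstract notes.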
Abstract:
Regenerating codes are a class of codes proposed for providing reliability of data and efficient repair of failed nodes in distributed storage systems. In this paper, we address the fundamental problem of handling errors and erasures at the nodes or links, during the data-reconstruction and node-repair operations. We provide explicit regenerating codes that are resilient to errors and erasures, and show that these codes are optimal with respect to storage and bandwidth requirements. As a special case, we also establish the capacity of a class of distributed storage systems in the presence of malicious adversaries. While our code constructions are based on previously constructed Product-Matrix codes, we also provide necessary and sufficient conditions for introducing resilience in any regenerating code.
Abstract:
In wireless sensor networks (WSNs) the communication traffic is often time- and space-correlated, with multiple nodes in close proximity starting to transmit at the same time. Such a situation is known as spatially correlated contention. Random access methods to resolve such contention suffer from a high collision rate, whereas traditional distributed TDMA scheduling techniques primarily try to improve the network capacity by reducing the schedule length. Usually, the situation of spatially correlated contention persists only for a short duration, so generating an optimal or sub-optimal schedule is not very useful. On the other hand, if the algorithm takes a very long time to schedule, it will not only introduce additional delay in the data transfer but also consume more energy. To efficiently handle spatially correlated contention in WSNs, we present a distributed TDMA slot scheduling algorithm, called the DTSS algorithm. The DTSS algorithm is designed with the primary objective of reducing the time required to perform scheduling, while restricting the schedule length to the maximum degree of the interference graph. The algorithm uses randomized TDMA channel access as the mechanism to transmit protocol messages, which bounds the message delay and therefore reduces the time required to obtain a feasible schedule. The DTSS algorithm supports unicast, multicast and broadcast scheduling simultaneously, without any modification of the protocol. The protocol has been simulated using the Castalia simulator to evaluate its run-time performance. Simulation results show that our protocol is able to considerably reduce the time required to schedule.
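The schedule-length bound mentioned above is the classical greedy-coloring bound on the interference graph. The sketch below is a centralized toy version of that invariant (each node takes the smallest slot unused by its interfering neighbours, so at most max-degree + 1 slots are needed); it is not the DTSS protocol itself, which computes the schedule distributively via randomized channel access:

```python
def greedy_slot_schedule(interference):
    """Assign each node the smallest TDMA slot not used by any interfering
    neighbour; greedy coloring uses at most (max degree + 1) slots."""
    slots = {}
    for node in sorted(interference):
        taken = {slots[n] for n in interference[node] if n in slots}
        slot = 0
        while slot in taken:
            slot += 1
        slots[node] = slot
    return slots

# Toy interference graph (adjacency as sets of interfering nodes)
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
sched = greedy_slot_schedule(g)
max_deg = max(len(v) for v in g.values())
assert max(sched.values()) <= max_deg  # schedule fits in max_deg + 1 slots
print(sched)
```

Note that node 3 can reuse slot 0 because it does not interfere with node 0, which is exactly the spatial reuse a TDMA schedule exploits.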
Abstract:
We consider the problem of wireless channel allocation (whenever the channels are free) to multiple cognitive radio users in a Cognitive Radio Network (CRN) so as to satisfy their Quality of Service (QoS) requirements efficiently. The CRN base station may not know the channel states of all the users, and the multiple channels become available at random times. In this setup, opportunistic splitting can be an attractive solution. A disadvantage of this algorithm is that it requires the metrics of all users to form an independent, identically distributed sequence. However, we use a recently generalized version of this algorithm in which the optimal parameters are learnt online through stochastic approximation and the metrics can be Markov. We provide scheduling algorithms that maximize the weighted-sum system throughput or are throughput- or delay-optimal. We also consider the scenario in which some traffic streams are delay-sensitive.
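The basic splitting idea can be illustrated with a centralized toy simulation: an interval known to contain the best user's metric is narrowed using only idle/success/collision feedback, so the best user is found in a few mini-slots without the base station knowing any metric. This is a deliberate simplification; the paper's generalized version learns its thresholds online via stochastic approximation:

```python
import random

def splitting_select(metrics, max_slots=32):
    """Splitting sketch: (a, b] is known to contain the best metric (in [0, 1]).
    Each mini-slot, users with metric in the upper half transmit; feedback of
    idle / success / collision halves the interval."""
    a, b = 0.0, 1.0
    for _ in range(max_slots):
        m = (a + b) / 2.0
        contenders = [i for i, x in enumerate(metrics) if m < x <= b]
        if len(contenders) == 1:   # success: the best user is isolated
            return contenders[0]
        if contenders:             # collision: best metric lies above m
            a = m
        else:                      # idle: best metric lies in (a, m]
            b = m
    return None

random.seed(1)
metrics = [random.random() for _ in range(8)]
best = splitting_select(metrics)
assert best == max(range(len(metrics)), key=lambda i: metrics[i])
```

Each feedback bit halves the search interval, so the expected number of mini-slots grows only logarithmically with the required resolution, which is what makes splitting attractive when channels are free only briefly.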
Abstract:
Multi-component nanomaterials combine the properties of their individual constituents and give rise to emergent phenomena. Optical excitations in such hybrid nanomaterials (for example, excitons in semiconductor quantum dots and plasmons in metal nanomaterials) undergo weak to strong electromagnetic coupling. Such exciton-plasmon interactions allow the design of absorption and emission properties, control of nanoscale energy-transfer processes, and creation of new excitations in the strong-coupling regime. The exciton-plasmon interaction in a hybrid nanomaterial can lead to both enhancement and quenching of the emission. In this work we prepared close-packed hybrid monolayers of thiol-capped CdSe and gold nanoparticles, which exhibit both quenching and enhancement of the PL emission. The systematic variation of the PL from such hybrid nanomaterial monolayers is studied by tuning the number ratio of gold nanoparticles to quantum dots, the surface density of QDs, and the spectral overlap between the emission spectrum of the QDs and the absorption spectrum of the gold nanoparticles. The role of the localized surface plasmon, which leads not only to quenching but to strong enhancement as well, is explored.
Abstract:
We study the performance of a hybrid graphene-boron nitride armchair nanoribbon (a-GNR-BN) n-MOSFET at its ballistic transport limit. We consider three geometric configurations, 3p, 3p + 1, and 3p + 2, of a-GNR-BN with BN atoms embedded on either side (2, 4, and 6 BN) of the GNR. Material properties such as the band gap, effective mass, and density of states of these H-passivated structures are evaluated using Density Functional Theory. Using these material parameters, self-consistent Poisson-Schrodinger simulations are carried out under the Non-Equilibrium Green's Function formalism to calculate the ballistic n-MOSFET device characteristics. For a hybrid nanoribbon of width ~5 nm, the simulated ON current is found to be in the range of 265 μA to 280 μA, with an ON/OFF ratio of 7.1 × 10⁶ to 7.4 × 10⁶ for V_DD = 0.68 V, corresponding to the 10 nm technology node. We further study the impact of randomly distributed Stone-Wales (SW) defects in these hybrid structures and observe only 2.5% degradation of the ON current for an SW defect density of 3.18%. (C) 2014 AIP Publishing LLC.
Abstract:
Opportunistic selection selects the node that improves the overall system performance the most. Selecting the best node is challenging, as the nodes are geographically distributed and have only local knowledge. Yet, selection must be fast, to allow more time to be spent on data transmission, which exploits the selected node's services. We analyze the impact of imperfect power control on a fast, distributed, splitting-based selection scheme that exploits the capture effect by allowing the transmitting nodes to have different target receive powers, and uses information about the total received power to speed up selection. Imperfect power control makes the received power deviate from the target and, hence, affects performance. Our analysis quantifies how it changes the selection probability, reduces the selection speed, and leads to the selection of no node or a wrong node. We show that the effect of imperfect power control is primarily driven by the ratio of target receive powers. Furthermore, we quantify its effect on the net system throughput.
Abstract:
In the Himalaya, large areas are covered by glaciers and seasonal snow, which are an important source of water for the Himalayan rivers. In this article, observed changes in glacial extent and mass balance are discussed. Various studies suggest that most of the Himalayan glaciers are retreating, though the rate of retreat varies from glacier to glacier, ranging from a few meters to almost 61 m/year, depending upon the terrain and meteorological parameters. In addition, mapping of almost 11,000 out of 40,000 sq. km of glaciated area, distributed across all major climatic zones of the Himalaya, suggests an almost 13% loss in area in the last 4-5 decades. The glacier mass balance observations and estimates, made using methods such as field measurements, AAR, ELA and geodetic measurements, suggest a significant increase in the mass wastage of Himalayan glaciers in the last 3-4 decades. In the last four decades the loss in glacial ice thickness has been estimated at 19 +/- 7 m. This suggests a loss of 443 +/- 136 Gt of glacial mass out of a total of 3600-4400 Gt of glacial stored water in the Indian Himalaya. This study has also shown that the mean loss in glacier mass in the Indian Himalaya accelerated from -9 +/- 4 to -20 +/- 4 Gt/year between the periods 1975-85 and 2000-2010. The estimate of glacial stored water in the Indian Himalaya is based on a glacier inventory at 1:250,000 scale and scaling methods; therefore, we assume the uncertainties to be large.
Abstract:
This study presents the response of a vertically loaded pile in undrained clay, considering spatially distributed undrained shear strength. The probabilistic study treats the undrained shear strength as a random variable, and the analysis is conducted using random field theory. Inherent soil variability is considered as the source of variability, and the field is modeled as a two-dimensional non-Gaussian homogeneous random field. The random field is simulated using the Cholesky decomposition technique within a finite difference program, and a Monte Carlo simulation approach is adopted for the probabilistic analysis. The influence of the variance and spatial correlation of the undrained shear strength on the ultimate capacity of the pile, taken as the sum of the ultimate skin friction and the end-bearing resistance, is examined. It is observed that the coefficient of variation and the spatial correlation distance are the most important parameters affecting the pile's ultimate capacity.
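A minimal sketch of the Cholesky-based field simulation, assuming an exponential autocorrelation model and a lognormal marginal for the undrained shear strength (both common choices; the paper's exact model and parameters may differ, and all numerical values below are illustrative):

```python
import numpy as np

# 2D grid of points; exponential autocorrelation with correlation distance theta
rng = np.random.default_rng(42)
nx, ny, dx = 10, 10, 0.5
xy = np.array([(i * dx, j * dx) for i in range(nx) for j in range(ny)])
theta = 2.0
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
C = np.exp(-2.0 * dist / theta)                 # correlation matrix
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(C)))  # jitter for stability
z = L @ rng.standard_normal(len(C))             # correlated standard normal field

mu, cov = 50.0, 0.3                             # mean su (kPa) and COV (assumed)
sigma_ln = np.sqrt(np.log(1 + cov**2))
mu_ln = np.log(mu) - 0.5 * sigma_ln**2
su = np.exp(mu_ln + sigma_ln * z)               # one lognormal realization of su
print(su.shape, su.mean())
```

In a Monte Carlo analysis, this generation step is repeated for each realization and the pile capacity is computed per realization to build its distribution.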
Abstract:
An important question in kernel regression is one of estimating the order and bandwidth parameters from available noisy data. We propose to solve the problem within a risk estimation framework. Considering an independent and identically distributed (i.i.d.) Gaussian observations model, we use Stein's unbiased risk estimator (SURE) to estimate a weighted mean-square error (MSE) risk, and optimize it with respect to the order and bandwidth parameters. The two parameters are thus spatially adapted in such a manner that noise smoothing and fine structure preservation are simultaneously achieved. On the application side, we consider the problem of image restoration from uniform/non-uniform data, and show that the SURE approach to spatially adaptive kernel regression results in better quality estimation compared with its spatially non-adaptive counterparts. The denoising results obtained are comparable to those obtained using other state-of-the-art techniques, and in some scenarios, superior.
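For a linear smoother S_h, SURE takes the form ||y - S_h y||² - nσ² + 2σ² tr(S_h). The sketch below uses it to pick a single global bandwidth for a Nadaraya-Watson (order-0) smoother; this is a simplification of the paper's spatially adaptive, joint order-and-bandwidth selection:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200, 0.3
x = np.linspace(0, 1, n)
f = np.sin(2 * np.pi * x)
y = f + sigma * rng.standard_normal(n)   # i.i.d. Gaussian observation model

def smoother_matrix(h):
    """Nadaraya-Watson smoother with a Gaussian kernel of bandwidth h."""
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return W / W.sum(axis=1, keepdims=True)

def sure(h):
    """Unbiased estimate of the MSE risk for the linear smoother S_h."""
    S = smoother_matrix(h)
    r = y - S @ y
    return r @ r - n * sigma**2 + 2 * sigma**2 * np.trace(S)

hs = np.linspace(0.005, 0.2, 40)
h_sure = hs[np.argmin([sure(h) for h in hs])]
print(h_sure)
```

Because SURE is unbiased for the true MSE, minimizing it over h tracks the (unobservable) MSE-optimal bandwidth without access to the clean signal f.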
Abstract:
An attempt has been made to quantify the variability in the seismic activity rate across the whole of India and adjoining areas (0–45°N and 60–105°E) using an earthquake database compiled from various sources. Both historical and instrumental data were compiled, and a complete catalog of Indian earthquakes up to 2010 has been prepared. Region-specific earthquake magnitude scaling relations correlating different magnitude scales were developed to produce a homogeneous earthquake catalog for the region in a unified moment magnitude scale. The dependent events (75.3%) in the raw catalog have been removed and the effect of aftershocks on the variation of the b value has been quantified. The study area was divided into 2,025 grid points (1° × 1°) and the spatial variation of seismicity across the region has been analyzed considering all the events within a 300 km radius of each grid point. A significant decrease in the seismic b value was seen when the declustered catalog was used, which illustrates that a larger proportion of the dependent events in the earthquake catalog are related to lower-magnitude events. A list of 203,448 earthquakes (including aftershocks and foreshocks) that occurred in the region, covering the period from 250 B.C. to 2010 A.D., with all available details, is available at http://www.civil.iisc.ernet.in/~sreevals/resource.htm.
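The b value discussed above is conventionally estimated with Aki's maximum-likelihood formula (a standard tool; the study's exact procedure may differ). On synthetic Gutenberg-Richter magnitudes the estimator recovers the true value:

```python
import numpy as np

# Aki (1965) maximum-likelihood b value: b = log10(e) / (mean(M) - Mc),
# where Mc is the magnitude of completeness of the catalog.
rng = np.random.default_rng(0)
b_true, Mc = 1.0, 3.0
beta = b_true * np.log(10.0)
# Gutenberg-Richter magnitudes above Mc are exponentially distributed
mags = Mc + rng.exponential(1.0 / beta, size=100_000)

b_hat = np.log10(np.e) / (mags.mean() - Mc)
print(round(b_hat, 2))
```

Because aftershocks are concentrated at lower magnitudes, removing them raises the mean magnitude above Mc and hence lowers the estimated b value, consistent with the decrease reported for the declustered catalog.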
Abstract:
Programming for parallel architectures that do not have a shared address space is extremely difficult due to the need for explicit communication between the memories of different compute devices. A heterogeneous system with CPUs and multiple GPUs, or a distributed-memory cluster, are examples of such systems. Past works that try to automate data movement for distributed-memory architectures can lead to excessive redundant communication. In this paper, we propose an automatic data movement scheme that minimizes the volume of communication between compute devices in heterogeneous and distributed-memory systems. We show that by partitioning data dependences in a particular non-trivial way, one can generate data movement code that results in the minimum volume for a vast majority of cases. The techniques are applicable to any sequence of affine loop nests and work on top of any choice of loop transformations, parallelization, and computation placement; the data movement code generated minimizes the volume of communication for a particular configuration of these. We use a combination of powerful static analyses relying on the polyhedral compiler framework and lightweight runtime routines they generate, to build a source-to-source transformation tool that automatically generates communication code. We demonstrate that the tool is scalable and leads to substantial gains in efficiency. On a heterogeneous system, the communication volume is reduced by a factor of 11X to 83X over the state-of-the-art, translating into a mean execution time speedup of 1.53X. On a distributed-memory cluster, our scheme reduces the communication volume by a factor of 1.4X to 63.5X over the state-of-the-art, resulting in a mean speedup of 1.55X. In addition, our scheme yields a mean speedup of 2.19X over hand-optimized UPC codes.