147 results for harm minimization


Relevance: 10.00%

Abstract:

We propose a novel technique for reducing the power consumed by the on-chip cache in an SNUCA chip multicore platform. This is achieved by what we call a "remap table", which maps accesses to cache banks that are as close as possible to the cores on which the processes are scheduled. With this technique, instead of using all the available cache, we use only a portion of it and allocate less cache to the application. We formulate the problem as an energy-delay (ED) minimization problem and solve it offline using a scalable genetic-algorithm approach. Our experiments show up to 40% savings in memory sub-system power consumption and 47% savings in the energy-delay product (ED).
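
As a concrete illustration of the kind of offline search described above, here is a minimal genetic-algorithm sketch that minimizes an energy-delay product over per-core cache-bank allocations. The cost model `evaluate_ed()`, the encoding, and all parameters are hypothetical placeholders, not the authors' implementation, which relies on architectural simulation of the SNUCA platform.

```python
import random

N_CORES, N_BANKS = 8, 16        # illustrative platform size
POP, GENS, MUT = 40, 200, 0.05  # illustrative GA parameters

def evaluate_ed(alloc):
    """Hypothetical cost model: energy-delay product for an allocation that
    gives each core some number of nearby banks.  A real flow would obtain
    these numbers from simulation of the memory sub-system."""
    energy = sum(1.0 + 0.1 * banks for banks in alloc)  # more banks -> more power
    delay = sum(1.0 + 4.0 / banks for banks in alloc)   # fewer banks -> more misses
    return energy * delay

def random_alloc():
    return [random.randint(1, 2 * N_BANKS // N_CORES) for _ in range(N_CORES)]

def crossover(a, b):
    cut = random.randrange(1, N_CORES)
    return a[:cut] + b[cut:]

def mutate(a):
    return [random.randint(1, 2 * N_BANKS // N_CORES) if random.random() < MUT else x
            for x in a]

population = [random_alloc() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=evaluate_ed)
    elite = population[:POP // 2]                      # keep the better half
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    population = elite + children

best = min(population, key=evaluate_ed)
print("best allocation:", best, "ED:", round(evaluate_ed(best), 2))
```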

Relevance: 10.00%

Abstract:

Lepton masses and mixing angles via localization of 5-dimensional fields in the bulk are revisited in the context of Randall-Sundrum models. The Higgs is assumed to be localized on the IR brane. Three cases for neutrino masses are considered: (a) the higher-dimensional neutrino mass operator (LH.LH), (b) Dirac masses, and (c) type I seesaw with bulk Majorana mass terms. Neutrino masses and mixing as well as charged-lepton masses are fit in the first two cases using chi-squared minimization for the bulk mass parameters, while varying the O(1) Yukawa couplings between 0.1 and 4. Lepton flavor violation is studied for all three cases. It is shown that large negative bulk mass parameters are required for the right-handed fields to fit the data in the LH.LH case. This case is characterized by a very large Kaluza-Klein (KK) spectrum and relatively weak flavor-violating constraints at leading order. The zero modes for the charged singlets are composite in this case, and their corresponding effective 4-dimensional Yukawa couplings to the KK modes could be large. For the Dirac case, good fits can be obtained for the bulk mass parameters, c_i, lying between 0 and 1. However, most of the "best-fit regions" are ruled out by flavor-violating constraints. In the bulk Majorana case, we have solved the profile equations numerically. We give example points for inverted hierarchy and normal hierarchy of neutrino masses. Lepton-flavor-violating rates are large for these points. We then discuss various minimal flavor violation schemes for the Dirac and bulk Majorana cases. In the Dirac case with the minimal-flavor-violation hypothesis, it is possible to simultaneously fit the leptonic masses and mixing angles and alleviate lepton-flavor-violating constraints for KK modes with masses of around 3 TeV. Similar examples are also provided in the Majorana case.
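
A schematic of the chi-squared fit mentioned above, assuming a placeholder predict() that maps bulk mass parameters c_i to a few observables; the real computation involves the five-dimensional zero-mode profiles and the O(1) Yukawa couplings, which are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative targets: charged-lepton masses (GeV) and one mixing angle (rad).
observed = np.array([0.000511, 0.1057, 1.777, 0.72])
sigma = 0.1 * observed                       # assumed 10% uncertainties

def predict(c):
    """Placeholder for the model prediction as a function of the bulk mass
    parameters c_i; in the actual fit this comes from the warped zero-mode
    profiles and the anarchic O(1) Yukawa couplings."""
    return np.array([np.exp(-12.0 * c[0]), np.exp(-7.0 * c[1]),
                     np.exp(-4.0 * c[2]), 0.4 + 0.5 * c[3]])

def chi2(c):
    return np.sum(((predict(c) - observed) / sigma) ** 2)

result = minimize(chi2, x0=np.array([0.7, 0.6, 0.5, 0.5]), method="Nelder-Mead")
print("best-fit c parameters:", np.round(result.x, 3), " chi^2:", round(result.fun, 3))
```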

Relevance: 10.00%

Abstract:

A novel approach that can more effectively use the structural information provided by traditional imaging modalities in multimodal diffuse optical tomographic imaging is introduced. This approach is based on a prior-image-constrained l1-minimization scheme and has been motivated by recent progress in sparse image reconstruction techniques. It is shown that the proposed framework is more effective in terms of localizing the tumor region and recovering the optical property values, in both numerical and gelatin phantom cases, compared to the traditional methods that use structural information. (C) 2012 Optical Society of America
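
An illustrative solver for a generic prior-constrained l1 problem of the form min_x (1/2)||Ax - b||^2 + lam*||x - x_prior||_1, using ISTA-style iterations on synthetic data; the actual diffuse-optical forward model and the specific constraint used in the paper are not reproduced here.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prior_constrained_l1(A, b, x_prior, lam=0.1, n_iter=500):
    """ISTA iterations for min_x 0.5*||A x - b||^2 + lam*||x - x_prior||_1.
    The l1 term pulls the reconstruction toward the prior image, mimicking
    the role of structural information from another imaging modality."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = largest eigenvalue of A^T A
    x = x_prior.copy()
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))     # gradient step on the data term
        x = x_prior + soft_threshold(z - x_prior, step * lam)  # prox of the prior term
    return x

# Tiny synthetic example (not a tomography model).
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 50))
x_true = np.zeros(50); x_true[[5, 17, 33]] = [1.0, -0.5, 0.8]
x_prior = np.zeros(50); x_prior[5] = 1.0       # the prior already knows one feature
b = A @ x_true
print(np.round(prior_constrained_l1(A, b, x_prior)[[5, 17, 33]], 2))
```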

Relevance: 10.00%

Abstract:

Wave propagation in a graphene sheet embedded in an elastic medium (polymer matrix) has been a topic of great interest in the nanomechanics of graphene sheets, where equivalent continuum models are widely used. In this manuscript, we examine this issue by incorporating nonlocal theory into the classical plate model. The influence of the nonlocal scale effects is investigated in detail. The results are qualitatively different from those obtained with the local/classical plate theory and are thus important for the development of monolayer graphene-based nanodevices. In the present work, the graphene sheet is modeled as a one-atom-thick isotropic plate. Chemical bonds are assumed to be formed between the graphene sheet and the elastic medium. The polymer matrix is described by a Pasternak foundation model, which accounts for both the normal pressure and the transverse shear deformation of the surrounding elastic medium. When the shear effects are neglected, the model reduces to the Winkler foundation model. The normal pressure, or Winkler elastic foundation parameter, is approximated as a series of closely spaced, mutually independent, vertical linear elastic springs, where the foundation modulus is taken as equivalent to the stiffness of the springs. For this model, the nonlocal governing differential equations of motion are derived from the minimization of the total potential energy of the entire system. An ultrasonic type of flexural wave propagation model is also derived, and the results of the wave dispersion analysis are shown for both local and nonlocal elasticity calculations. From this analysis we show that the elastic matrix strongly affects the flexural wave mode and rapidly increases the frequency band gap of the flexural mode. The flexural wavenumbers obtained from nonlocal elasticity calculations are higher than those from local elasticity calculations, and the corresponding wave group speeds are smaller in the nonlocal calculation than in the local one. The effect of the y-directional wavenumber (eta_q) on the spectrum and dispersion relations of the graphene embedded in the polymer matrix is also examined. We also show that the cut-off frequencies of the flexural wave mode depend not only on the y-directional wavenumber but also on the nonlocal scaling parameter (e0a). The effect of eta_q and e0a on the cut-off frequency variation is captured for the cases with and without the elastic matrix. For a given nanostructure, the nonlocal small-scale coefficient can be obtained by matching the results from molecular dynamics (MD) simulations with the nonlocal elasticity calculations; at that value of the nonlocal scale coefficient, the waves will propagate in the nanostructure at the corresponding cut-off frequency. In the present paper, different values of e0a are used. One can obtain the exact e0a for a given graphene sheet by matching MD simulation results for graphene with the results presented in this article. (c) 2012 Elsevier Ltd. All rights reserved.
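
For reference, a commonly used form of the nonlocal Kirchhoff plate equation on a Pasternak foundation, from which a flexural dispersion relation of the kind analyzed above follows by substituting a harmonic wave; this is the generic form found in the nonlocal-elasticity literature, and the notation and exact equations of the paper may differ.

```latex
% Nonlocal plate (bending rigidity D, areal mass rho*h) on a Pasternak
% foundation (Winkler modulus k_w, shear modulus k_g); e_0 a is the
% nonlocal scaling parameter.
D\,\nabla^4 w
  + \bigl(1-(e_0a)^2\nabla^2\bigr)
    \Bigl(\rho h\,\frac{\partial^2 w}{\partial t^2} + k_w w - k_g\nabla^2 w\Bigr) = 0
% Substituting w = W\,e^{i(k_x x + k_y y - \omega t)}, with k^2 = k_x^2 + k_y^2:
\omega^2
  = \frac{D k^4 + \bigl(1+(e_0a)^2 k^2\bigr)\bigl(k_w + k_g k^2\bigr)}
         {\rho h\,\bigl(1+(e_0a)^2 k^2\bigr)}
```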

Relevance: 10.00%

Abstract:

A simple, rapid, and surfactant-free synthesis of crystalline copper nanostructures has been carried out through microwave irradiation of a solution of copper acetylacetonate in benzyl alcohol. The structures are found to be stable against oxidation in ambient air for several months. High-resolution electron microscopy (SEM and TEM) reveals that the copper samples comprise nanospheres measuring about 150 nm in diameter, each made of copper nanocrystals of roughly 7 nm in extension. The nanocrystals are densely packed into spherical aggregates, the driving force being minimization of surface area and surface energy, and are thus immune to oxidation in ambient air. Such aggregates can also be adherently supported on SiO2 and Al2O3 when these substrates are immersed in the irradiated solution. The air-stable copper nanostructures exhibit surface-enhanced Raman scattering, as evidenced by the detection of 4-mercaptobenzoic acid at 10^-6 M concentrations.

Relevance: 10.00%

Abstract:

Lepton mass hierarchies and lepton flavour violation are revisited in the framework of Randall-Sundrum models. Models with Dirac-type as well as Majorana-type neutrinos are considered. The five-dimensional c-parameters are fit to the charged-lepton and neutrino masses and mixings using chi-squared minimization. Leptonic flavour violation is shown to be large in these cases. Schemes of minimal flavour violation are considered for the cases of an effective LLHH operator and Dirac neutrinos, and are shown to significantly alleviate the constraints from lepton flavour violation.

Relevance: 10.00%

Abstract:

Ultrasonic wave propagation in a graphene sheet, which is embedded in an elastic medium, is studied using nonlocal elasticity theory incorporating small-scale effects. The graphene sheet is modeled as a one-atom-thick isotropic plate, and the elastic medium/substrate is modeled as distributed springs. For this model, the nonlocal governing differential equations of motion are derived from the minimization of the total potential energy of the entire system. An ultrasonic type of wave propagation model is then derived. Explicit expressions for the cut-off frequencies are also obtained as functions of the nonlocal scaling parameter and the y-directional wavenumber. Local elasticity shows that the wave will propagate even at higher frequencies, but nonlocal elasticity predicts that waves can propagate only up to certain frequencies (called escape frequencies), after which the wave velocity becomes zero. The results also show that the escape frequencies are purely a function of the nonlocal scaling parameter. The effect of the elastic medium is captured in the wave dispersion analysis, and this analysis is explained with respect to both local and nonlocal elasticity. The simulations show that the elastic medium affects only the flexural wave mode in the graphene sheet, and the presence of the elastic matrix increases the band gap of the flexural mode. The present results can provide useful guidance for the design of next-generation nanodevices in which graphene-based composites act as a major element.

Relevance: 10.00%

Abstract:

We consider the wireless two-way relay channel, in which two-way data transfer takes place between the end nodes with the help of a relay. For the Denoise-And-Forward (DNF) protocol, it was shown by Koike-Akino et al. that adaptively changing the network coding map used at the relay greatly reduces the impact of multiple-access interference at the relay. The harmful effect of deep channel fade conditions can be effectively mitigated by a proper choice of these network coding maps at the relay. Alternatively, in this paper we propose a Distributed Space Time Coding (DSTC) scheme, which effectively removes most of the deep-fade channel conditions at the transmitting nodes themselves, without any CSIT and without any need to adaptively change the network coding map used at the relay. It is shown that the deep fades occur when the channel fade coefficient vector falls in one of a finite number of vector subspaces, which are referred to as the singular fade subspaces. A DSTC design criterion, referred to as the singularity minimization criterion, under which the number of such vector subspaces is minimized, is obtained. A criterion to maximize the coding gain of the DSTC is also obtained. Explicit low-decoding-complexity DSTC designs that satisfy the singularity minimization criterion and maximize the coding gain for QAM and PSK signal sets are provided. Simulation results show that at high signal-to-noise ratio, the DSTC scheme provides large gains compared to the conventional exclusive-OR network code and performs better than the adaptive network coding scheme.
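
The idea behind the singular fade subspaces can be sketched as follows, in the generic notation of this line of work (end nodes A and B transmit symbols x_A, x_B from a signal set S and reach the relay through fade coefficients h_A, h_B); this is meant only as an illustration of the condition, not a restatement of the paper's derivation.

```latex
% The relay observes the superposition h_A x_A + h_B x_B (plus noise).
% Distance shortening (a deep fade) occurs when two distinct symbol pairs
% (x_A, x_B) \neq (x_A', x_B') become indistinguishable:
h_A x_A + h_B x_B = h_A x_A' + h_B x_B'
\;\Longleftrightarrow\;
h_A\,(x_A - x_A') + h_B\,(x_B - x_B') = 0 .
% Each nonzero difference pair (x_A - x_A', x_B - x_B') therefore defines a
% one-dimensional subspace of the fade-coefficient space (h_A, h_B) -- a
% singular fade subspace.  Since S is finite, there are only finitely many
% such subspaces, and the singularity minimization criterion seeks a DSTC
% that minimizes their number.
```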

Relevance: 10.00%

Abstract:

Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. The codes perform poorly, though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used, while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple yet powerful framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
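
To make the repair-bandwidth problem concrete, here is a toy illustration under simple assumptions: a (3,2) single-parity erasure code over byte strings, for which repairing any one failed node requires downloading the other two nodes' blocks, i.e. an amount of data equal to the whole file. This sketches only the baseline shortcoming of classical erasure codes; it does not implement the two-code framework proposed in the paper.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Encode: split the file into two data blocks and add one parity block.
file_data = b"0123456789abcdef"
half = len(file_data) // 2
d1, d2 = file_data[:half], file_data[half:]
nodes = {"n1": d1, "n2": d2, "n3": xor_bytes(d1, d2)}  # n3 stores the parity

# Repair node n1 after a failure: with a classical erasure code the
# replacement node must download all surviving blocks (a whole file's
# worth of data) and re-derive the lost block by decoding.
downloaded = [nodes["n2"], nodes["n3"]]
repaired = xor_bytes(*downloaded)
assert repaired == d1
print("downloaded", sum(len(blk) for blk in downloaded),
      "bytes to repair a", half, "byte block")
```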

Relevance: 10.00%

Abstract:

Wind power, as an alternative to fossil fuels, is plentiful, renewable, widely distributed, clean, produces no greenhouse gas emissions during operation, and uses little land. In operation, the overall cost per unit of energy produced is similar to that of new coal and natural gas installations. However, the stochastic behaviour of wind speeds leads to a significant mismatch between wind energy production and electricity demand. Wind generation is intermittent, owing to the diurnal and seasonal patterns of wind behaviour. Both reactive power and voltage control are important under the varying operating conditions of a wind farm. To optimize reactive power flow and to keep voltages within limits, an optimization method is proposed in this paper. The proposed objective is minimization of the deviations of the load-bus voltages from their desired values (V_desired). The approach considers the reactive power limits of the wind generators and coordinates the transformer taps. The algorithm has been tested under practically varying conditions simulated on a test system; the results are obtained on a 50-bus real-life equivalent power network and show the effectiveness of the proposed method.
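
A simplified sketch of the stated objective under a linearized model: load-bus voltages are approximated as V ≈ V0 + S·q, where q collects the reactive-power injections of the wind generators (bounded by their limits), and the squared deviation from the desired voltages is minimized with a bounded least-squares solver. The bus data and sensitivity matrix below are placeholders, not the 50-bus network of the paper, and tap coordination is omitted.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Illustrative data: 5 monitored load buses, 3 wind-farm reactive sources.
V0 = np.array([0.96, 0.97, 1.03, 0.95, 1.02])   # base-case voltages (p.u.)
V_desired = np.ones(5)                          # flat 1.0 p.u. target
S = np.array([[0.04, 0.01, 0.00],               # assumed dV/dQ sensitivities
              [0.02, 0.03, 0.01],
              [0.00, 0.02, 0.04],
              [0.03, 0.00, 0.02],
              [0.01, 0.01, 0.03]])
q_min = np.array([-0.5, -0.5, -0.5])            # reactive-power limits (p.u.)
q_max = np.array([ 1.0,  1.0,  1.0])

# Minimize ||(V0 + S q) - V_desired||^2  subject to  q_min <= q <= q_max.
res = lsq_linear(S, V_desired - V0, bounds=(q_min, q_max))
print("reactive-power set points:", np.round(res.x, 3))
print("remaining voltage deviations:", np.round(V0 + S @ res.x - V_desired, 4))
```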

Relevance: 10.00%

Abstract:

The goal of speech enhancement algorithms is to provide an estimate of clean speech starting from noisy observations. The often-employed cost function is the mean-square error (MSE). However, the MSE can never be computed in practice, because it requires knowledge of the clean speech. Therefore, it becomes necessary to find practical alternatives to the MSE. In image denoising problems, the cost function (also referred to as the risk) is often replaced by an unbiased estimator. Motivated by this approach, we reformulate the problem of speech enhancement from the perspective of risk minimization. Some recent contributions in risk estimation have employed Stein's unbiased risk estimator (SURE) together with a parametric denoising function that is a linear expansion of thresholds (LET). We show that the first-order case of SURE-LET results in a Wiener-filter-type solution if the denoising function is made frequency-dependent. We also provide enhancement results obtained with both techniques and characterize the improvement by means of local as well as global SNR calculations.
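
For reference, the standard form of Stein's unbiased risk estimate for an additive white Gaussian noise model and a denoiser f; the LET idea is to take f as a linear combination of fixed elementary denoisers, so that minimizing SURE over the weights reduces to a small linear system. Symbols here are generic and are not taken verbatim from the paper.

```latex
% Observation model y = x + n with n ~ N(0, \sigma^2 I_N) and estimate \hat{x} = f(y):
\mathrm{SURE}(f)
  = \frac{1}{N}\Bigl(\|f(y)-y\|^2 - N\sigma^2
    + 2\sigma^2 \sum_{i=1}^{N}\frac{\partial f_i(y)}{\partial y_i}\Bigr),
\qquad
\mathbb{E}\bigl[\mathrm{SURE}(f)\bigr] = \frac{1}{N}\,\mathbb{E}\bigl[\|f(y)-x\|^2\bigr].
% LET: f(y) = \sum_{k=1}^{K} a_k f_k(y).  SURE is quadratic in the weights a_k,
% so the SURE-optimal weights are obtained by solving a K x K linear system.
```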

Relevance: 10.00%

Abstract:

We consider the MIMO X channel (XC), a system consisting of two transmit-receive pairs in which each transmitter communicates with both receivers. Both the transmitters and the receivers are equipped with multiple antennas. First, we derive an upper bound on the sum-rate capacity of the MIMO XC under an individual power constraint at each transmitter. The sum-rate capacity of the two-user multiple-access channel (MAC) that results when receiver cooperation is assumed forms an upper bound on the sum-rate capacity of the MIMO XC. We tighten this bound by considering noise correlation between the receivers and deriving the worst noise covariance matrix. It is shown that the worst noise covariance matrix is a saddle point of a zero-sum, two-player, convex-concave game, which is solved through a primal-dual interior point method that handles the maximization and the minimization parts of the problem simultaneously. Next, we propose an achievable scheme that employs dirty paper coding at the transmitters and successive decoding at the receivers. We show that the derived upper bound is close to the achievable region of the proposed scheme at low to medium SNRs.
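
A schematic form of the cooperative-receivers bound described above, written for channels H_1, H_2 from the two transmitters to the pooled receive antennas, input covariances Q_1, Q_2 under per-transmitter power constraints, and a stacked noise covariance K_z whose diagonal blocks are fixed so that only the cross-receiver correlation is optimized; the exact formulation in the paper may differ.

```latex
C_{\mathrm{sum}}
  \;\le\;
  \min_{K_z}\;
  \max_{\substack{Q_1 \succeq 0,\; Q_2 \succeq 0 \\ \operatorname{tr}(Q_i) \le P_i}}
  \log\det\!\Bigl(I + K_z^{-1}\bigl(H_1 Q_1 H_1^{H} + H_2 Q_2 H_2^{H}\bigr)\Bigr)
% The inner problem is concave in (Q_1, Q_2) and the outer problem is convex
% in K_z, so the worst noise covariance is the saddle point of this
% convex-concave game.
```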

Relevance: 10.00%

Abstract:

A new family of supramolecular organogelators, based on chiral amino acid derivatives of 2,4,6-trichloro-pyrimidine-5-carbaldehyde, has been synthesized. L-alanine was incorporated as a spacer between the pyrimidine core and long hydrocarbon tails to compare the effect of chirality and hydrogen bonding with that of the achiral analogue. The role of an aromatic moiety on the chiral spacer was also investigated by introducing L-phenylalanine moieties. The presence of intermolecular hydrogen bonding leading to chiral self-assembly was probed by concentration-dependent FTIR and UV/Vis spectroscopies, in addition to circular dichroism (CD) studies. Temperature- and concentration-dependent CD spectroscopy pointed to the formation of β-sheet-type H-bonded networks. The morphology and the arrangement of the molecules in the freeze-dried gels were examined by scanning electron microscopy (SEM), transmission electron microscopy (TEM), atomic force microscopy (AFM), and X-ray diffraction (XRD) techniques. Calculation of the length of each molecular system by energy minimization in its extended conformation, and comparison with the small-angle XRD pattern, reveals that this class of gelator molecules adopts a lamellar organization. Polarized optical microscopy (POM) and differential scanning calorimetry (DSC) indicate that the solid-state phase behavior of these molecules depends entirely on the choice of the amino acid spacers. Structure-induced aggregation properties, based on the H-bonding motifs and the packing of the molecules in three dimensions leading to gelation, were elucidated by rheological studies. However, viscoelasticity was shown to depend only marginally on the H-bonding interactions; rather, it depends to a greater extent on the packing of the gelators.