99 results for heterogeneous sources
Abstract:
Microwave-based methods are widely employed to synthesize metal nanoparticles on various substrates. However, the detailed mechanism of formation of such hybrids has not been addressed. In this paper, we describe the thermodynamic and kinetic aspects of the reduction of metal salts by ethylene glycol under microwave heating conditions. On the basis of this analysis, we identify the temperatures above which the reduction of the metal salt is thermodynamically favorable and the temperatures above which the rates of homogeneous nucleation of the metal and heterogeneous nucleation of the metal on supports are favored. We delineate the conditions that favor heterogeneous nucleation of the metal on the supports over homogeneous nucleation in the solvent medium, based on the dielectric loss parameters of the solvent and the support and on the metal/solvent and metal/support interfacial energies. Contrary to current understanding, we show that metal particles can be selectively formed on the substrate even when the temperature of the substrate is lower than that of the surrounding medium. The catalytic activity of the Pt/CeO2 and Pt/TiO2 hybrids synthesized by this method for the H2 combustion reaction shows that complete conversion is achieved at temperatures as low as 100 °C with the Pt/CeO2 catalyst and at 50 °C with the Pt/TiO2 catalyst. Our method thus opens up possibilities for the rational synthesis of high-activity supported catalysts using a fast microwave-based reduction method.
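As a rough illustration of the competition involved, classical nucleation theory (notation assumed here, not taken from the paper) scales the heterogeneous barrier by a contact-angle factor, so favorable metal/support interfacial energies lower the barrier for nucleation on the support relative to homogeneous nucleation in the solvent:

```latex
\Delta G^{*}_{\mathrm{hom}} = \frac{16\pi\,\gamma_{\mathrm{m/l}}^{3}}{3\,(\Delta G_V)^{2}},
\qquad
\Delta G^{*}_{\mathrm{het}} = f(\theta)\,\Delta G^{*}_{\mathrm{hom}},
\qquad
f(\theta) = \frac{(2+\cos\theta)(1-\cos\theta)^{2}}{4},
```

where γ_m/l is the metal/solvent interfacial energy, ΔG_V the volumetric free-energy change of the reduction, and θ the contact angle set by the metal/support and metal/solvent interfacial energies; since f(θ) < 1 for θ < 180°, wetting of the support favors heterogeneous nucleation.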
Abstract:
This paper analyses the efficiency and productivity growth of the Electronic Sector of India in the liberalization era since 1991. The study gives an insight into the growth process of one of the most rapidly emerging sectors of this decade. The sector has experienced a vast structural change along with the changing economic structure in India after liberalization. With the opening up of this sector to foreign markets and the entry of multinational companies, the environment has become highly competitive. The law that operates is Darwin's 'survival of the fittest'. Existing firms face a continuous threat of exit due to the entry of new potential entrants. It thus becomes essential for the existing industries in this sector to improve productivity growth for their survival. It is therefore important to analyze how the industries in this sector have performed over the years and which factors have contributed to the overall output growth.
Abstract:
The use of electroacoustic analogies suggests that a source of acoustical energy (such as an engine, compressor, blower, turbine, loudspeaker, etc.) can be characterized by an acoustic source pressure ps and internal source impedance Zs, analogous to the open-circuit voltage and internal impedance of an electrical source. The present paper shows analytically that the source characteristics evaluated by means of the indirect methods are independent of the loads selected; that is, the evaluated values of ps and Zs are unique, and that the results of the different methods (including the direct method) are identical. In addition, general relations have been derived here for the transfer of source characteristics from one station to another station across one or more acoustical elements, and also for combining several sources into a single equivalent source. Finally, all the conclusions are extended to the case of a uniformly moving medium, incorporating the convective as well as dissipative effects of the mean flow.
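A hedged sketch of the indirect (two-load) evaluation implied by this analogy (symbols assumed): with a load of impedance Z_L connected to the source, the interface pressure follows the potential-divider relation, and measurements with two known loads give two equations from which p_s and Z_s are recovered:

```latex
p_{L} = p_{s}\,\frac{Z_{L}}{Z_{s}+Z_{L}}
\;\Rightarrow\;
\begin{cases}
p_{1}\,(Z_{s}+Z_{1}) = p_{s}\,Z_{1}\\[2pt]
p_{2}\,(Z_{s}+Z_{2}) = p_{s}\,Z_{2}
\end{cases}
\;\Rightarrow\;
Z_{s} = \frac{Z_{1}Z_{2}\,(p_{2}-p_{1})}{p_{1}Z_{2}-p_{2}Z_{1}},
\qquad
p_{s} = p_{1}\,\frac{Z_{s}+Z_{1}}{Z_{1}} .
```

The paper's result that p_s and Z_s come out the same for any pair of loads is exactly the statement that this system has a unique, load-independent solution.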
Abstract:
In this paper, we explore the use of LDPC codes for nonuniform sources under the distributed source coding paradigm. Our analysis reveals that several capacity-approaching LDPC codes indeed approach the Slepian-Wolf bound for nonuniform sources as well. Monte Carlo simulation results show that highly biased sources can be compressed to within 0.049 bits/sample of the Slepian-Wolf bound for moderate block lengths.
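For context, the relevant limit (a standard statement, not a result specific to this paper) is the Slepian-Wolf bound for lossless coding of X with side information Y available at the decoder:

```latex
R_X \;\ge\; H(X \mid Y) \;=\; H(X) - I(X;Y),
```

so the reported 0.049 bits/sample is the gap between the rate achieved by the LDPC-based scheme and H(X|Y) for the biased (nonuniform) source.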
Abstract:
The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information that needs to be transmitted by the N sources while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each of the sources to compress all the information it possesses and transmit this to the receiver. The Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is only interested in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which then uses linearity to directly recover the needed linear combination. The article is in part a review and in part presents new results: the portion that deals with finite fields is previously known material, while that dealing with rings is mostly new. Attempting to find the best linear map that will enable function computation forces us to consider the linear compression of a single source. While in the finite field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold in the case of rings. An explanation for this curious interplay between algebra and information theory is also provided in this paper.
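A minimal, self-contained sketch of the finite-field idea (the code, matrix, and sources below are illustrative, not from the paper): two binary sources that differ in at most one position each apply the same linear map, here the parity-check matrix of the (7,4) Hamming code, and the destination recovers their GF(2) sum from the XOR of the two syndromes without reconstructing either source.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code over GF(2); column j is the
# binary representation of j, so a weight-1 difference vector is located
# directly by its syndrome.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def compress(x):
    """Common linear map: each source transmits only its 3-bit syndrome H @ x."""
    return (H @ x) % 2

def decode_sum(s1, s2):
    """Recover z = x1 XOR x2 (assumed Hamming weight <= 1) from the two syndromes."""
    syndrome = (s1 + s2) % 2                    # linearity: H(x1 ^ x2) = Hx1 ^ Hx2
    pos = int("".join(map(str, syndrome)), 2)   # column index named by the syndrome
    z = np.zeros(7, dtype=np.uint8)
    if pos:                                     # all-zero syndrome means x1 == x2
        z[pos - 1] = 1
    return z

rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, 7, dtype=np.uint8)
z_true = np.zeros(7, dtype=np.uint8)
z_true[4] = 1                                   # the sources differ in one bit
x2 = (x1 + z_true) % 2

z_hat = decode_sum(compress(x1), compress(x2))
assert np.array_equal(z_hat, z_true)
print("recovered x1 XOR x2:", z_hat)            # 6 bits sent instead of 14
```

Over rings, as the abstract notes, linear compression down to the entropy is not always possible, which is where the new material in the paper lies.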
Abstract:
The spectral index-luminosity relationship for steep-spectrum cores in galaxies and quasars has been investigated, and it is found that the sample of galaxies supports earlier suggestions of a strong correlation, while there is weak evidence for a similar relationship for the quasars. It is shown that a strong spectral index-luminosity correlation can be used to set an upper limit to the velocities of the radio-emitting material which is expelled from the nucleus in the form of collimated beams or jets having relativistic bulk velocities. The data on cores in galaxies indicate that the Lorentz factors of the radiating material are less than about 2.
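A hedged sketch of the standard beaming relation behind such limits (generic notation, not the paper's derivation): material moving with bulk Lorentz factor Γ = (1 − β²)^(−1/2) at angle θ to the line of sight has its emission Doppler-boosted, so the observed core luminosity can greatly exceed the emitted one unless Γ is small:

```latex
\delta = \frac{1}{\Gamma\,(1-\beta\cos\theta)},
\qquad
L_{\mathrm{obs}} = \delta^{\,2+\alpha}\,L_{\mathrm{emit}}
\quad\text{(continuous jet; } \delta^{\,3+\alpha}\text{ for a discrete component)},
```

with α the spectral index. Strong boosting over a range of viewing angles would scatter any intrinsic spectral index-luminosity correlation, which is why a tight observed correlation for the galaxy cores caps Γ near 2.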
Abstract:
In this paper, studies were carried out on two compact electric discharge plasma sources for controlling nitrogen oxides (NOx) emissions in diesel engine exhaust. The plasma sources consist of an old television flyback transformer to generate high-frequency, high-voltage AC (HVAC) and an automobile ignition coil to generate high-voltage pulses (HV pulse). The compact plasma sources are aimed at retrofitting existing catalytic converters with an electric-discharge-assisted cleaning technique. To enhance NOx removal efficiency, a cascaded plasma-adsorbent technique has been used. Results are reported for different flow rates and load conditions of the diesel engine.
Abstract:
This paper is concerned with the optimal flow control of an ATM switching element in a broadband integrated services digital network. We model the switching element as a stochastic fluid flow system with a finite buffer, a constant-output-rate server, and a Gaussian process characterizing the input, which is a heterogeneous set of traffic sources. The fluid level is to be maintained between two thresholds, b1 and b2, with b1 < b2.
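A minimal sketch of the fluid dynamics implied by this model (notation assumed, not taken from the paper): with aggregate Gaussian input rate Λ(t), constant service rate c, and buffer size B, the buffer content X(t) evolves as

```latex
\frac{dX(t)}{dt} =
\begin{cases}
\Lambda(t) - c, & 0 < X(t) < B,\\
\max\{\Lambda(t)-c,\,0\}, & X(t) = 0,\\
\min\{\Lambda(t)-c,\,0\}, & X(t) = B,
\end{cases}
\qquad 0 \le b_1 < b_2 \le B,
```

and the flow-control problem is to keep X(t) within the band [b1, b2].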
Abstract:
The study presents an analysis aimed at choosing between off-grid solar photovoltaic power generation, biomass gasifier based power generation, and conventional grid extension for remote village electrification. The model provides a relation between renewable energy systems and the economical distance limit (EDL) from the existing grid point, based on life cycle cost (LCC) analysis, at which the LCC of energy for the renewable energy systems and for grid extension match. The LCC of the energy fed to the village is arrived at by considering grid availability and the operating hours of the renewable energy systems. The EDL for a biomass gasifier system of 25 kW capacity is 10.5 km with 6 h of daily operation and grid availability. However, the EDL for a photovoltaic system of the same 25 kW capacity is 35 km for the same number of hours of operation and grid availability. The analysis shows that for villages with low load demand situated far away from the existing grid line, biomass gasification based systems are more cost-competitive than photovoltaic systems or even grid extension.
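A toy numerical sketch of the break-even logic described above (all cost figures below are placeholders, not the paper's data): the economical distance limit is the grid-extension distance at which the life cycle cost of delivered energy via the grid equals that of the stand-alone renewable system.

```python
def lcc_grid_per_kwh(distance_km, line_cost_per_km, lifetime_energy_kwh,
                     fixed_grid_cost, energy_charge_per_kwh):
    """Life cycle cost per kWh via grid extension, as a function of line length."""
    capital = fixed_grid_cost + line_cost_per_km * distance_km
    return capital / lifetime_energy_kwh + energy_charge_per_kwh

def economical_distance_limit(lcc_renewable_per_kwh, line_cost_per_km,
                              lifetime_energy_kwh, fixed_grid_cost,
                              energy_charge_per_kwh):
    """Distance at which the grid-extension LCC equals the renewable system's LCC."""
    # Solve lcc_grid_per_kwh(d) == lcc_renewable_per_kwh for d.
    return ((lcc_renewable_per_kwh - energy_charge_per_kwh) * lifetime_energy_kwh
            - fixed_grid_cost) / line_cost_per_km

# Placeholder inputs purely for illustration (currency and energy values made up).
edl = economical_distance_limit(lcc_renewable_per_kwh=12.0,
                                line_cost_per_km=500_000.0,
                                lifetime_energy_kwh=1_000_000.0,
                                fixed_grid_cost=2_000_000.0,
                                energy_charge_per_kwh=4.0)
print(f"EDL = {edl:.1f} km")   # beyond this distance the renewable option is cheaper
```

With these made-up inputs the break-even distance works out to 12 km; the paper's 10.5 km and 35 km figures come from its own LCC inputs for the gasifier and photovoltaic systems.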
Abstract:
A novel method is proposed for the fracture toughness determination of graded, microstructurally complex (Pt,Ni)Al bond coats using edge-notched doubly clamped beams subjected to bending. Micron-scale beams are machined using a focused ion beam and loaded in bending under a nanoindenter. Failure loads gathered from the pop-ins in the load-displacement curves, combined with XFEM analysis, are used to calculate Kc at individual zones, free from substrate effects. The testing technique and sources of error in the measurement are described, and possible micromechanisms of fracture in such heterogeneous coatings are discussed.
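For orientation only, the textbook form of the relation used to turn a failure load into a toughness for an edge-notched beam in bending (the paper itself extracts the toughness via XFEM rather than a handbook formula, and the geometry factor below is generic):

```latex
K_{c} = \frac{6\,M_{f}}{B\,W^{2}}\,\sqrt{\pi a}\; f\!\left(\tfrac{a}{W}\right),
```

where M_f is the bending moment at the pop-in load, B and W are the beam thickness and depth, a is the notch depth, and f(a/W) is a geometry correction.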
Abstract:
Practical usage of machine learning is gaining strategic importance in enterprises looking for business intelligence. However, most enterprise data is distributed across multiple relational databases with expert-designed schemas. Using traditional single-table machine learning techniques over such data not only incurs a computational penalty for converting the data to a flat form (mega-join); the human-specified semantic information present in the relations is also lost. In this paper, we present a practical, two-phase hierarchical meta-classification algorithm for relational databases with a semantic divide-and-conquer approach. We propose a recursive prediction-aggregation technique over heterogeneous classifiers applied to individual database tables. The proposed algorithm was evaluated on three diverse datasets, namely the TPCH, PKDD, and UCI benchmarks, and showed a considerable reduction in classification time without any loss of prediction accuracy.
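A minimal sketch of the general per-table-classifier-plus-meta-classifier idea (a generic stacking setup on synthetic data using scikit-learn; the paper's two-phase hierarchical algorithm and its recursion over the schema are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, n)                       # shared label keyed by the same entity id

# Features that would live in two different relational tables.
X_orders   = y[:, None] * 0.8 + rng.normal(size=(n, 3))
X_payments = y[:, None] * 0.5 + rng.normal(size=(n, 2))

train, test = slice(0, 400), slice(400, n)

# Phase 1: one (possibly heterogeneous) base classifier per table.
clf_orders   = DecisionTreeClassifier(max_depth=3).fit(X_orders[train], y[train])
clf_payments = LogisticRegression().fit(X_payments[train], y[train])

def table_predictions(idx):
    """Per-table class probabilities, used as features for the meta level."""
    return np.column_stack([clf_orders.predict_proba(X_orders[idx])[:, 1],
                            clf_payments.predict_proba(X_payments[idx])[:, 1]])

# Phase 2: aggregate the per-table predictions with a meta-classifier.
# (A real setup would train the meta level on held-out predictions to avoid leakage.)
meta = LogisticRegression().fit(table_predictions(train), y[train])
accuracy = (meta.predict(table_predictions(test)) == y[test]).mean()
print(f"meta-classifier accuracy: {accuracy:.2f}")
```

No mega-join is formed: each table is modeled in place and only low-dimensional predictions cross table boundaries.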
Abstract:
We address the problem of identifying the constituent sources in a single-sensor mixture signal consisting of contributions from multiple simultaneously active sources. We propose a generic framework for mixture signal analysis based on a latent variable approach. The basic idea of the approach is to detect known sources represented as stochastic models, in a single-channel mixture signal without performing signal separation. A given mixture signal is modeled as a convex combination of known source models and the weights of the models are estimated using the mixture signal. We show experimentally that these weights indicate the presence/absence of the respective sources. The performance of the proposed approach is illustrated through mixture speech data in a reverberant enclosure. For the task of identifying the constituent speakers using data from a single microphone, the proposed approach is able to identify the dominant source with up to 8 simultaneously active background sources in a room with RT60 = 250 ms, using models obtained from clean speech data for a Source to Interference Ratio (SIR) greater than 2 dB.
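A bare-bones sketch of the latent variable idea (toy 1-D Gaussian source models and a plain EM update for the convex-combination weights; the paper's actual source models and estimator are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

# Known source models (fixed 1-D Gaussians), e.g. trained on each speaker's clean data.
means, stds = np.array([-2.0, 0.5, 3.0]), np.array([1.0, 0.7, 1.2])

# Mixture observation: frames from sources 0 and 1 only; source 2 is inactive.
frames = np.concatenate([rng.normal(means[0], stds[0], 300),
                         rng.normal(means[1], stds[1], 700)])

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Per-frame likelihoods under each fixed source model: shape (n_frames, K).
lik = np.column_stack([gauss_pdf(frames, m, s) for m, s in zip(means, stds)])

# EM updates for the convex-combination weights only; the source models stay fixed.
w = np.full(len(means), 1.0 / len(means))
for _ in range(100):
    resp = w * lik
    resp /= resp.sum(axis=1, keepdims=True)   # E-step: per-frame responsibilities
    w = resp.mean(axis=0)                     # M-step: re-estimate the weights

print(np.round(w, 3))   # the weight of the inactive third source should be near zero
```

The estimated weights play the role of the presence/absence indicators described above: active sources receive weights close to their mixing proportions, while inactive ones receive weights near zero.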
Abstract:
In view of the major advances made in understanding the seismicity and seismotectonics of the Indian region in recent times, an updated probabilistic seismic hazard map of India covering 6-38° N and 68-98° E is prepared. This paper presents the results of a probabilistic seismic hazard analysis of India carried out using regional seismic source zones and four well-recognized attenuation relations that account for the varied tectonic provinces in the region. The study area was divided into small grid cells of size 0.1° x 0.1°. Peak Horizontal Acceleration (PHA) and spectral accelerations for periods of 0.1 s and 1 s have been estimated, and contour maps showing their spatial variation are presented in the paper. The present study shows that the seismic hazard is moderate in the peninsular shield, but the hazard in most parts of North and Northeast India is high.
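For reference, the standard hazard integral that such maps are built on (generic PSHA notation, not the paper's specific parameterization): the annual rate of exceeding a ground-motion level im sums, over source zones, the activity-rate-weighted probability of exceedance given magnitude and distance, with the exceedance probability supplied by the chosen attenuation relations:

```latex
\lambda(IM > im) \;=\; \sum_{i=1}^{N_{\mathrm{src}}} \nu_i
\int_{m}\int_{r} P\!\left(IM > im \mid m, r\right)\, f_{M_i}(m)\, f_{R_i}(r)\, dm\, dr,
```

where ν_i is the activity rate of source zone i and f_M, f_R are the magnitude and distance densities; the mapped PHA and spectral-acceleration values are the im levels corresponding to a chosen exceedance probability at each grid cell.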
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map the identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. In order to ensure the required data movement for dependencies across basic blocks, we propose a data flow analysis and edge splitting strategy. Thus our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of required data transfers. The proposed compiler was implemented, and an experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
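A toy greedy device-assignment pass of the kind such a compiler might use (MEGHA's actual heuristics, cost model, and IR are not reproduced here): each kernel, visited in topological order, is placed on the device that minimizes its estimated execution time plus the transfer cost incurred whenever a producer sits on the other device.

```python
def assign_devices(kernels, deps, cpu_time, gpu_time, transfer_cost):
    """kernels: kernel ids in topological order.
    deps[k]: ids of kernels whose outputs k consumes.
    cpu_time/gpu_time[k]: estimated execution times on each device.
    transfer_cost[k]: cost of moving one of k's inputs across the CPU/GPU boundary."""
    placement = {}
    for k in kernels:
        def cost(device, exec_time):
            # Pay a transfer for every producer placed on the other device.
            moves = sum(1 for d in deps[k] if placement[d] != device)
            return exec_time[k] + moves * transfer_cost[k]
        placement[k] = min(("cpu", "gpu"),
                           key=lambda dev: cost(dev, cpu_time if dev == "cpu" else gpu_time))
    return placement

# Tiny example: k2 is data parallel (fast on the GPU) and consumes k1's output.
print(assign_devices(kernels=["k1", "k2"],
                     deps={"k1": [], "k2": ["k1"]},
                     cpu_time={"k1": 1.0, "k2": 9.0},
                     gpu_time={"k1": 5.0, "k2": 1.0},
                     transfer_cost={"k1": 0.5, "k2": 0.5}))
# -> {'k1': 'cpu', 'k2': 'gpu'}: the transfer is cheaper than running k2 on the CPU.
```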
Abstract:
GPUs have been used for the parallel execution of DOALL loops. However, loops with indirect array references can potentially cause cross-iteration dependences, which are hard to detect using existing compilation techniques. Applications with such loops cannot easily use the GPU and hence do not benefit from its tremendous compute capabilities. In this paper, we present an algorithm to compute the cross-iteration dependences in such loops at runtime. The algorithm uses both the CPU and the GPU to compute the dependences. Specifically, it uses the compute capabilities of the GPU to quickly collect the memory accesses performed by the iterations, by executing the slice functions generated for the indirect array accesses. Using the dependence information, the loop iterations are levelized such that each level contains independent iterations that can be executed in parallel. Another interesting aspect of the proposed solution is that it pipelines the dependence computation of a future level with the actual computation of the current level to effectively utilize the resources available on the GPU. We use an NVIDIA Tesla C2070 to evaluate our implementation using benchmarks from the Polybench suite and some synthetic benchmarks. Our experiments show that the proposed technique can achieve an average speedup of 6.4x on loops with a reasonable number of cross-iteration dependences.
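A simplified sequential sketch of the levelization step for a loop such as `a[x[i]] += a[y[i]]` (this is a plain CPU version of the dependence analysis only; the GPU-side slice execution and the pipelining of future levels described above are not shown):

```python
def levelize(n_iters, writes, reads):
    """Assign each loop iteration a level such that iterations within a level
    have no cross-iteration dependences and can execute in parallel.
    writes[i] / reads[i]: indirect array locations written / read by iteration i."""
    last_write, last_read = {}, {}    # location -> level of the latest write / read
    levels = [0] * n_iters
    for i in range(n_iters):
        lvl = 0
        for loc in writes[i] + reads[i]:            # RAW and WAW: follow prior writes
            lvl = max(lvl, last_write.get(loc, -1) + 1)
        for loc in writes[i]:                       # WAR: a write must follow prior reads
            lvl = max(lvl, last_read.get(loc, -1) + 1)
        levels[i] = lvl
        for loc in writes[i]:
            last_write[loc] = max(last_write.get(loc, -1), lvl)
        for loc in reads[i]:
            last_read[loc] = max(last_read.get(loc, -1), lvl)
    return levels

# Toy loop "a[x[i]] += a[y[i]]" with indirect index arrays x and y.
x = [0, 1, 0, 2]    # location written by each iteration
y = [3, 0, 1, 3]    # location read by each iteration
print(levelize(4, [[w] for w in x], [[r] for r in y]))   # -> [0, 1, 2, 0]
```

Iterations assigned the same level (here iterations 0 and 3) touch no conflicting locations and can be launched together, which is what makes each level a GPU-friendly batch.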