21 results for Heterogeneous firms trade model
at Indian Institute of Science - Bangalore - India
Abstract:
A model for heterogeneous acetalisation of poly(vinyl alcohol) with limited solution volume is proposed based on the grain model of Sohn and Szekely. Instead of treating the heterogeneous acetalisation as purely a diffusion process, as in the Matuzawa and Ogasawara model, the present model also takes into account the chemical reaction and the physical state of the solid polymer, such as degree of swelling and porosity, and assumes segregation of the polymer phase at higher conversion into an outer fully reacted zone and an inner zone where the reaction still proceeds. The solution of the model for limited solution volume, moreover, offers a simple method of determining the kinetic parameters and diffusivity for the solid-liquid system using the easily measurable bulk solution concentration of the liquid reactant instead of conversion-distance data for the solid phase, which are considerably more difficult to obtain.
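The idea of fitting kinetics from the bulk solution concentration alone can be illustrated with a toy shrinking-core-style balance. This is a minimal sketch under invented parameters, not the authors' equations: a liquid reactant is consumed through a combined surface-reaction and product-layer-diffusion resistance, with a limited-volume mass balance linking bulk concentration to solid conversion.

```python
import math

# Toy sketch (not the paper's model): grain-model-style consumption of a
# liquid reactant of bulk concentration c by a solid with conversion x,
# under a *limited* solution volume. All parameter values are illustrative.
def simulate(c0=1.0, n0=0.5, k=0.5, d=2.0, dt=1e-3, t_end=10.0):
    """Euler integration of bulk concentration and solid conversion.

    c0 : initial bulk concentration of the liquid reactant
    n0 : moles of solid reactive groups per litre of solution
    k  : surface-reaction rate constant (hypothetical units)
    d  : product-layer diffusion parameter (hypothetical units)
    """
    c, x, t = c0, 0.0, 0.0
    while t < t_end and x < 0.999:
        # combined resistance: reaction at the shrinking unreacted core plus
        # diffusion through the fully reacted outer zone
        core = (1.0 - x) ** (2.0 / 3.0)         # unreacted-core area factor
        layer = 1.0 - (1.0 - x) ** (1.0 / 3.0)  # reacted-layer thickness factor
        rate = c / (1.0 / (k * core + 1e-12) + layer / d)
        x += rate * dt / n0
        c = max(c0 - n0 * x, 0.0)               # limited-volume mass balance
        t += dt
    return c, x

c_final, x_final = simulate()
```

Because the solution volume is limited, the easily measured bulk concentration `c` tracks conversion directly (here exactly `c = c0 - n0*x`), which is what makes parameter estimation from `c(t)` possible without conversion-distance data.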
Abstract:
Clustered architecture processors are preferred for embedded systems because centralized register file architectures scale poorly in terms of clock rate, chip area, and power consumption. Although clustering helps by improving clock speed, reducing energy consumption of the logic, and making the design simpler, it introduces extra overheads by way of inter-cluster communication. This communication happens over long global wires, which leads to execution delays and significantly higher energy consumption. In this paper, we propose a new instruction scheduling algorithm that exploits scheduling slacks of instructions and communication slacks of data values together to achieve better energy-performance trade-offs for clustered architectures with heterogeneous interconnect. Our instruction scheduling algorithm achieves 35% and 40% reductions in communication energy, whereas the overall energy-delay product improves by 4.5% and 6.5% respectively for 2-cluster and 4-cluster machines, with a marginal increase (1.6% and 1.1%) in execution time. Our test bed uses the Trimaran compiler infrastructure.
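The notion of scheduling slack that the abstract relies on can be made concrete. This is an illustrative sketch, not the paper's algorithm: slack is the gap between an instruction's as-late-as-possible (ALAP) and as-soon-as-possible (ASAP) start times on a dependence DAG; instructions with non-zero slack can be delayed, for instance so the data they communicate travels over a slower, lower-energy inter-cluster wire.

```python
# Toy slack computation on a dependence DAG (instr -> list of predecessors).
def scheduling_slack(deps, latency):
    order, seen = [], set()

    def visit(i):                       # topological order via DFS
        if i in seen:
            return
        seen.add(i)
        for p in deps[i]:
            visit(p)
        order.append(i)

    for i in deps:
        visit(i)

    asap = {}
    for i in order:                     # earliest start times
        asap[i] = max((asap[p] + latency[p] for p in deps[i]), default=0)
    length = max(asap[i] + latency[i] for i in order)   # critical-path length

    succs = {i: [] for i in deps}
    for i in deps:
        for p in deps[i]:
            succs[p].append(i)

    alap = {}
    for i in reversed(order):           # latest start times
        alap[i] = min((alap[s] - latency[i] for s in succs[i]),
                      default=length - latency[i])
    return {i: alap[i] - asap[i] for i in order}

# a -> c and b -> c; b is on the critical path, so a has slack
slack = scheduling_slack({"a": [], "b": [], "c": ["a", "b"]},
                         {"a": 1, "b": 3, "c": 1})
```

Here `a` can be delayed by 2 cycles without stretching the schedule, so its result could take a slow interconnect for free.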
Abstract:
Energy use in developing countries is heterogeneous across households. Present-day global energy models are mostly too aggregate to account for this heterogeneity. Here, a bottom-up model for residential energy use that starts from key dynamic concepts on energy use in developing countries is presented and applied to India. Energy use and fuel choice are determined for five end-use functions (cooking, water heating, space heating, lighting and appliances) and for five different income quintiles in rural and urban areas. The paper specifically explores the consequences of different assumptions for income distribution and rural electrification on residential sector energy use and CO2 emissions, finding that results are clearly sensitive to variations in these parameters. As a result of population and economic growth, total Indian residential energy use is expected to increase by around 65-75% in 2050 compared to 2005, but residential carbon emissions may increase by up to 9-10 times the 2005 level. While a more equal income distribution and rural electrification enhance the transition to commercial fuels and reduce poverty, there is a trade-off in terms of higher CO2 emissions via increased electricity use.
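The bottom-up structure described here (end-use functions crossed with income quintiles, with fuel choice shifting as incomes rise or electrification spreads) can be sketched in a few lines. All numbers below are invented placeholders, not the paper's calibration; the point is only the accounting structure and the direction of the trade-off.

```python
# Toy bottom-up residential demand: sum over income quintiles and end uses,
# with commercial fuels chosen above an income threshold or, for lighting and
# appliances, where electrification is widespread. Parameters are invented.
END_USES = ["cooking", "water_heating", "space_heating", "lighting", "appliances"]

def residential_demand(quintile_income, electrified, threshold=1500.0):
    """Return (total_energy, commercial_fuel_share) for five income quintiles.

    quintile_income : mean income per quintile (hypothetical units)
    electrified     : fraction of households with electricity access
    """
    total, commercial = 0.0, 0.0
    for income in quintile_income:
        for use in END_USES:
            demand = 0.2 + 0.0005 * income          # rises with income (toy form)
            total += demand
            if income > threshold or (
                use in ("lighting", "appliances") and electrified > 0.5
            ):
                commercial += demand
    return total, commercial / total

incomes = [500, 800, 1200, 2000, 4000]
low_total, low_share = residential_demand(incomes, electrified=0.4)
high_total, high_share = residential_demand(incomes, electrified=0.9)
```

Even in this caricature, raising electrification leaves total energy unchanged but raises the commercial-fuel (and hence, with a fossil-heavy grid, CO2-intensive) share, which is the paper's qualitative trade-off.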
Abstract:
A fuzzy dynamic flood routing model (FDFRM) for natural channels is presented, wherein the flood wave can be approximated to a monoclinal wave. This study is based on modification of an earlier published work by the same authors, where the nature of the wave was of gravity type. The momentum equation of the dynamic wave model is replaced by a fuzzy rule based model, while retaining the continuity equation in its complete form. Hence, the FDFRM avoids the assumptions associated with the momentum equation. Also, it overcomes the necessity of calculating the friction slope (Sf) in flood routing and hence the associated uncertainties are eliminated. The fuzzy rule based model is developed on an equation for wave velocity, which is obtained in terms of discontinuities in the gradient of flow parameters. The channel reach is divided into a number of approximately uniform sub-reaches. The training set required for development of the fuzzy rule based model for each sub-reach is obtained from the discharge-area relationship at its mean section. For highly heterogeneous sub-reaches, optimized fuzzy rule based models are obtained by means of a neuro-fuzzy algorithm. For demonstration, the FDFRM is applied to flood routing problems in a fictitious channel with a single uniform reach, in a fictitious channel with two uniform sub-reaches and also in a natural channel with a number of approximately uniform sub-reaches. It is observed that in cases of the fictitious channels, the FDFRM outputs match well with those of an implicit numerical model (INM), which solves the dynamic wave equations using an implicit numerical scheme. For the natural channel, the FDFRM outputs are comparable to those of the HEC-RAS model.
Abstract:
Using an analysis-by-synthesis (AbS) approach, we develop a soft decision based switched vector quantization (VQ) method for high quality and low complexity coding of wideband speech line spectral frequency (LSF) parameters. For each switching region, a low complexity transform domain split VQ (TrSVQ) is designed. The overall rate-distortion (R/D) performance optimality of the new switched quantizer is addressed in the Gaussian mixture model (GMM) based parametric framework. In the AbS approach, the reduction of quantization complexity is achieved through the use of nearest neighbor (NN) TrSVQs and splitting the transform domain vector into a higher number of subvectors. Compared to the current LSF quantization methods, the new method is shown to provide a competitive or better trade-off between R/D performance and complexity.
Abstract:
A hypomonotectic alloy of Al-4.5wt%Cd has been manufactured by melt spinning and the resulting microstructure examined by transmission electron microscopy. As-melt-spun hypomonotectic Al-4.5wt%Cd consists of a homogeneous distribution of faceted 5 to 120 nm diameter cadmium particles embedded in a matrix of aluminium, formed during the monotectic solidification reaction. The cadmium particles exhibit an orientation relationship with the aluminium matrix of {111}Al//{0001}Cd and ⟨110⟩Al//⟨112̄0⟩Cd, with four cadmium particle variants depending upon which of the four {111}Al planes is parallel to {0001}Cd. The cadmium particles exhibit a distorted cuboctahedral shape, bounded by six curved {100}Al//{202̄3}Cd facets, six curved {111}Al//{404̄3}Cd facets and two flat {111}Al//{0001}Cd facets. The as-melt-spun cadmium particle shape is metastable and the cadmium particles equilibrate during heat treatment below the cadmium melting point, becoming elongated to increase the surface area and decrease the separation of the {111}Al//{0001}Cd facets. The equilibrium cadmium particle shape and, therefore, the anisotropy of the solid aluminium-solid cadmium and solid aluminium-liquid cadmium surface energies have been monitored by in situ heating in the transmission electron microscope over the temperature range between room temperature and 420 °C. The anisotropy of the solid aluminium-solid cadmium surface energy is constant between room temperature and the cadmium melting point, with the {100}Al//{202̄3}Cd surface energy on average 40% greater than the {111}Al//{0001}Cd surface energy, and 10% greater than the {111}Al//{404̄3}Cd surface energy. When the cadmium particles melt at temperatures above 321 °C, the {100}Al//{202̄3}Cd facets disappear and the {111}Al//{404̄3}Cd and {111}Al//{0001}Cd surface energies become equal.
The {111}Al facets do not disappear when the cadmium particles melt, and the anisotropy of the solid aluminium-liquid cadmium surface energy decreases gradually with increasing temperature above the cadmium melting point. The kinetics of cadmium solidification have been examined by heating and cooling experiments in a differential scanning calorimeter over a range of heating and cooling rates. Cadmium particle solidification is nucleated catalytically by the surrounding aluminium matrix on the {111}Al faceted surfaces, with an undercooling of 56 K and a contact angle of 42°. The nucleation kinetics of cadmium particle solidification are in good agreement with the hemispherical cap model of heterogeneous nucleation.
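The hemispherical cap model invoked here has a standard closed form: the homogeneous nucleation barrier is scaled by the shape factor f(θ) = (2 − 3 cos θ + cos³ θ)/4, so a small contact angle means potent catalysis. A short evaluation at the 42° contact angle reported above (a sketch of the textbook formula, not the paper's full kinetic fit):

```python
import math

# Shape factor of the hemispherical-cap model of classical heterogeneous
# nucleation: f(theta) = (2 - 3*cos(theta) + cos(theta)**3) / 4.
# The heterogeneous barrier is f(theta) times the homogeneous barrier.
def shape_factor(theta_deg):
    c = math.cos(math.radians(theta_deg))
    return (2.0 - 3.0 * c + c ** 3) / 4.0

# At the reported 42 deg contact angle, only a few percent of the
# homogeneous barrier remains, consistent with potent catalysis by the
# {111}Al facets at modest undercooling.
f42 = shape_factor(42.0)
```

As sanity checks, f(90°) = 1/2 (a true hemispherical cap) and f(180°) = 1 (no catalysis, homogeneous limit).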
Abstract:
The authors present the simulation of the tropical Pacific surface wind variability by a low-resolution (R15 horizontal resolution and 18 vertical levels) version of the Center for Ocean-Land-Atmosphere Interactions, Maryland, general circulation model (GCM) when forced by observed global sea surface temperature. The authors have examined the monthly mean surface winds and precipitation simulated by the model that was integrated from January 1979 to March 1992. Analyses of the climatological annual cycle and interannual variability over the Pacific are presented. The annual means of the simulated zonal and meridional winds agree well with observations. The only appreciable difference is in the region of strong trade winds, where the simulated zonal winds are about 15%-20% weaker than observed. The amplitudes of the annual harmonics are weaker than observed over the intertropical convergence zone and the South Pacific convergence zone regions. The amplitudes of the interannual variation of the simulated zonal and meridional winds are close to those of the observed variation. The first few dominant empirical orthogonal functions (EOF) of the simulated, as well as the observed, monthly mean winds are found to contain a large amount of high-frequency intraseasonal variations. While the statistical properties of the high-frequency modes, such as their amplitude and geographical locations, agree with observations, their detailed time evolution does not. When the data are subjected to a 5-month running-mean filter, the first two dominant EOFs of the simulated winds representing the low-frequency El Niño-Southern Oscillation fluctuations compare quite well with observations. However, the location of the center of the westerly anomalies associated with the warm episodes is simulated about 15 degrees west of the observed locations. The model simulates well the progress of the westerly anomalies toward the eastern Pacific during the evolution of a warm event.
The simulated equatorial wind anomalies are comparable in magnitude to the observed anomalies. An intercomparison of the simulation of the interannual variability by a few other GCMs with comparable resolution is also presented. The success in simulation of the large-scale low-frequency part of the tropical surface winds by the atmospheric GCM seems to be related to the model's ability to simulate the large-scale low-frequency part of the precipitation. Good correspondence between the simulated precipitation and the highly reflective cloud anomalies is seen in the first two EOFs of the 5-month running means. Moreover, the strong correlation found between the simulated precipitation and the simulated winds in the first two principal components indicates the primary role of model precipitation in driving the surface winds. The surface winds simulated by a linear model forced by the GCM-simulated precipitation show good resemblance to the GCM-simulated winds in the equatorial region. This result supports the recent findings that the large-scale part of the tropical surface winds is primarily linear.
Abstract:
It is argued that the nanometric dispersion of Bi in a Zn matrix is an ideal model system for heterogeneous nucleation experiments. The classical theory of heterogeneous nucleation with a hemispherical cap model is applied to analyse the nucleation data. It is shown that, unlike the results of earlier experiments, the derived site density for catalytic nucleation and contact angle are realistic and strongly suggest the validity of the classical theory. The surface energy between the {0001} plane of Zn and the {101̄2} plane of Bi, which constitute the epitaxial nucleation interface, is estimated to be 39 mJ m⁻².
Abstract:
Aqueous phase oxidation of sulphur dioxide at low concentrations catalysed by a PVP-Cu complex in the solid phase and dissolved Cu(II) in the liquid phase is studied in a rotating catalyst basket reactor (RCBR). The equilibrium adsorption of Cu(II) and S(VI) on PVP particles is found to be of the Langmuir type. The diffusional effects of S(IV) species in the PVP-Cu resin are found to be insignificant, whereas those of the product S(VI) are significant. The intraparticle diffusivity of S(VI) is obtained from independent tracer experiments. In the oxidation reaction, HSO3- is the reactive species. Both the S(IV) species in the solution, namely SO2(aq) and HSO3-, get adsorbed onto the active PVP-Cu sites of the catalyst, but only HSO3- undergoes oxidation. A kinetic mechanism is proposed based on this feature which shows that SO2(aq) has a deactivating effect on the catalyst. A rate model is developed for the three-phase reaction system incorporating these factors along with the effect of the concentration of H2SO4 on the solubility of SO2 in the dilute aqueous solutions of Cu(II). Transient oxidation experiments are conducted at different conditions of concentration of SO2 and O2 in the gas phase and catalyst concentration, and the rate parameters are estimated from the data. The observed and calculated profiles are in very good agreement. This confirms the deactivating effect of nonreactive SO2(aq) on the heterogeneous catalysis.
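The Langmuir-type equilibrium adsorption mentioned above has the standard form q = q_max·K·C/(1 + K·C): nearly linear at low concentration and saturating at a monolayer capacity. A minimal sketch with illustrative parameter values (not fitted to this paper's data):

```python
# Langmuir isotherm: adsorbed amount q as a function of bulk concentration c.
# q_max is the monolayer capacity and K the adsorption equilibrium constant;
# the values below are illustrative placeholders.
def langmuir(c, q_max=1.0, K=2.0):
    return q_max * K * c / (1.0 + K * c)

loading_low = langmuir(0.1)     # near-linear (Henry's-law) regime, q ~ q_max*K*c
loading_high = langmuir(100.0)  # approaches the monolayer capacity q_max
```

Fitting q_max and K from measured (c, q) pairs is what "found to be of the Langmuir type" amounts to operationally.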
Abstract:
We describe a SystemC based framework we are developing, to explore the impact of various architectural and microarchitectural level parameters of the on-chip interconnection network elements on its power and performance. The framework enables one to choose from a variety of architectural options like topology, routing policy, etc., as well as allows experimentation with various microarchitectural options for the individual links like length, wire width, pitch, pipelining, supply voltage and frequency. The framework also supports a flexible traffic generation and communication model. We provide preliminary results of using this framework to study the power, latency and throughput of a 4x4 multi-core processing array using mesh, torus and folded torus topologies, for two different communication patterns of dense and sparse linear algebra. The traffic consists of both Request-Response messages (mimicking cache accesses) and One-Way messages. We find that the average latency can be reduced by increasing the pipeline depth, as it enables higher link frequencies. We also find that there exists an optimum degree of pipelining which minimizes the energy-delay product.
Abstract:
Background: Temporal analysis of gene expression data has been limited to identifying genes whose expression varies with time and/or correlation between genes that have similar temporal profiles. Often, the methods do not consider the underlying network constraints that connect the genes. It is becoming increasingly evident that interactions change substantially with time. Thus far, there is no systematic method to relate the temporal changes in gene expression to the dynamics of interactions between them. Information on interaction dynamics would open up possibilities for discovering new mechanisms of regulation by providing valuable insight into identifying time-sensitive interactions, as well as permitting studies on the effect of a genetic perturbation. Results: We present NETGEM, a tractable model rooted in Markov dynamics, for analyzing the dynamics of the interactions between proteins based on the dynamics of the expression changes of the genes that encode them. The model treats the interaction strengths as random variables which are modulated by suitable priors. This approach is necessitated by the extremely small sample size of the datasets, relative to the number of interactions. The model is amenable to a linear time algorithm for efficient inference. Using temporal gene expression data, NETGEM was successful in identifying (i) temporal interactions and determining their strength, (ii) functional categories of the actively interacting partners and (iii) dynamics of interactions in perturbed networks. Conclusions: NETGEM represents an optimal trade-off between model complexity and data requirement. It was able to deduce actively interacting genes and functional categories from temporal gene expression data. It permits inference by incorporating the information available in perturbed networks.
Given that the inputs to NETGEM are only the network and the temporal variation of the nodes, this algorithm promises to have widespread applications, beyond biological systems. The source code for NETGEM is available from https://github.com/vjethava/NETGEM
Abstract:
A modified lattice model using the finite element method has been developed to study mode-I fracture of heterogeneous materials like concrete. In this model, the truss members always join at points where aggregates are located; the aggregates are modeled as plane-stress triangular elements. The truss members are given the properties of the cement mortar matrix randomly, so as to represent the randomness of strength in concrete. It is widely accepted that the fracture of concrete structures should not be based on a strength criterion alone, but should be coupled with an energy criterion. Here, by incorporating strain softening through a parameter ‘α’, the energy concept is introduced. The softening branch of the load-displacement curves was successfully obtained. From the sensitivity study, it was observed that the maximum load of a beam is most sensitive to the tensile strength of the mortar. It is seen that by varying the values of the properties of the mortar according to a normal random distribution, better results can be obtained for the load-displacement diagram.
Abstract:
Designing and optimizing high performance microprocessors is an increasingly difficult task due to the size and complexity of the processor design space, the high cost of detailed simulation and several constraints that a processor design must satisfy. In this paper, we propose the use of empirical non-linear modeling techniques to assist processor architects in making design decisions and resolving complex trade-offs. We propose a procedure for building accurate non-linear models that consists of the following steps: (i) selection of a small set of representative design points spread across the processor design space using Latin hypercube sampling, (ii) obtaining performance measures at the selected design points using detailed simulation, (iii) building non-linear models for performance using the function approximation capabilities of radial basis function networks, and (iv) validating the models using an independently and randomly generated set of design points. We evaluate our model building procedure by constructing non-linear performance models for programs from the SPEC CPU2000 benchmark suite with a microarchitectural design space that consists of 9 key parameters. Our results show that the models, built using a relatively small number of simulations, achieve high prediction accuracy (only 2.8% error in CPI estimates on average) across a large processor design space. Our models can potentially replace detailed simulation for common tasks such as the analysis of key microarchitectural trends or searches for optimal processor design points.
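Step (i) of the procedure, Latin hypercube sampling, admits a compact sketch. This is toy code illustrating the technique, not the authors' tooling: each of the d parameters is split into n strata and every stratum is used exactly once per parameter, so n points cover the design space far more evenly than plain random sampling.

```python
import random

def latin_hypercube(n, d, rng=random.Random(0)):
    """Return n points in [0, 1)^d with one point per stratum in every dimension."""
    points = [[0.0] * d for _ in range(n)]
    for j in range(d):
        strata = list(range(n))
        rng.shuffle(strata)                  # random assignment of strata to points
        for i in range(n):
            # sample uniformly inside stratum `strata[i]`, which has width 1/n
            points[i][j] = (strata[i] + rng.random()) / n
    return points

# e.g. 8 simulation points over a 9-parameter microarchitectural design space
design_points = latin_hypercube(n=8, d=9)
```

Each point would then be mapped from the unit cube onto the actual parameter ranges (cache sizes, issue width, etc.) before being handed to the simulator.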
Abstract:
Electronic exchanges are double-sided marketplaces that allow multiple buyers to trade with multiple sellers, with aggregation of demand and supply across the bids to maximize the revenue in the market. Two important issues in the design of exchanges are (1) trade determination (determining the number of goods traded between any buyer-seller pair) and (2) pricing. In this paper we address the trade determination issue for one-shot, multi-attribute exchanges that trade multiple units of the same good. The bids are configurable with separable additive price functions over the attributes, each function being continuous and piecewise linear. We model trade determination as mixed integer programming problems for different possible bid structures and show that even in two-attribute exchanges, trade determination is NP-hard for certain bid structures. We also make some observations on the pricing issues that are closely related to the mixed integer formulations.
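What "trade determination" means can be seen in a stripped-down special case. The sketch below is a toy single-attribute, price-only version (the paper's multi-attribute, piecewise-linear setting requires the MIP formulations it describes): match buy bids and sell asks for a homogeneous good so as to maximize total surplus, which in this simple case a greedy pairing of highest bids with lowest asks achieves.

```python
# Toy trade determination: bids are (price, quantity) pairs; trade while the
# best remaining bid price is at least the best remaining ask price.
def determine_trades(buy_bids, sell_asks):
    """Return (trades, total_surplus); each trade is (bid, ask, quantity)."""
    buys = sorted(buy_bids, reverse=True)   # highest willingness-to-pay first
    sells = sorted(sell_asks)               # cheapest supply first
    trades, surplus = [], 0.0
    i = j = 0
    while i < len(buys) and j < len(sells) and buys[i][0] >= sells[j][0]:
        qty = min(buys[i][1], sells[j][1])
        trades.append((buys[i][0], sells[j][0], qty))
        surplus += (buys[i][0] - sells[j][0]) * qty
        buys[i] = (buys[i][0], buys[i][1] - qty)
        sells[j] = (sells[j][0], sells[j][1] - qty)
        if buys[i][1] == 0:
            i += 1
        if sells[j][1] == 0:
            j += 1
    return trades, surplus

trades, surplus = determine_trades([(10, 5), (8, 5)], [(6, 4), (9, 10)])
```

Once attributes enter and bids become configurable piecewise-linear functions, this greedy rule no longer applies and trade determination becomes the (in some cases NP-hard) optimization problem studied in the paper.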
Abstract:
This paper is concerned with the optimal flow control of an ATM switching element in a broadband integrated services digital network. We model the switching element as a stochastic fluid flow system with a finite buffer, a constant output rate server, and a Gaussian process to characterize the input, which is a heterogeneous set of traffic sources. The fluid level should be maintained between two levels, namely b1 and b2, with b1