8 results for Environmental valuation. Contingent valuation method. Willingness to pay. Travel cost method. Urban parks. Dunes

at Indian Institute of Science - Bangalore - India


Relevance:

100.00%

Abstract:

Bid optimization is becoming quite popular in sponsored search auctions on the Web. Given a keyword and the maximum willingness to pay of each advertiser interested in the keyword, the bid optimizer generates a profile of bids for the advertisers with the objective of maximizing customer retention without compromising the revenue of the search engine. In this paper, we present a bid optimization algorithm based on a Nash bargaining model in which the first player is the search engine and the second player is a virtual agent representing all the bidders. We make the realistic assumption that each bidder specifies a maximum willingness-to-pay value and a discrete, finite set of bid values. We show that the Nash bargaining solution for this problem always lies on a certain edge of the convex hull such that one endpoint of the edge is the vector of maximum willingness to pay of all the bidders. We show that the other endpoint of this edge can be computed as the solution of a linear programming problem. We also show how the solution can be transformed into a bid profile for the advertisers.
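A minimal sketch of the bargaining step, assuming a toy model in which the search engine's utility is total revenue and the advertisers' aggregate utility is total surplus; the willingness-to-pay figures, bid sets, and disagreement point below are illustrative, not the paper's:

```python
# Two-player Nash bargaining over a discrete set of bid profiles
# (hypothetical utilities; not the paper's exact model).
from itertools import product

# Each advertiser: (max willingness to pay, discrete set of allowed bids).
advertisers = [
    (10.0, [2.0, 5.0, 8.0]),
    (7.0,  [1.0, 4.0, 6.0]),
]

def utilities(profile):
    """Player 1: search-engine revenue (sum of bids).
    Player 2: aggregate advertiser surplus (willingness to pay minus bid)."""
    revenue = sum(profile)
    surplus = sum(w - b for (w, _), b in zip(advertisers, profile))
    return revenue, surplus

d1, d2 = 0.0, 0.0  # assumed disagreement point
best = max(
    product(*[bids for _, bids in advertisers]),
    key=lambda p: (utilities(p)[0] - d1) * (utilities(p)[1] - d2),
)
print("Nash-bargaining bid profile:", best, "utilities:", utilities(best))
```

For realistic numbers of bidders, the brute-force enumeration would be replaced by the convex-hull edge and linear-programming characterization the abstract describes.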

Relevance:

100.00%

Abstract:

The following problem is considered: given the locations of the Central Processing Unit (CPU) and the terminals which have to communicate with it, determine the number and locations of the concentrators and assign the terminals to the concentrators in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and there is an upper limit to the number of terminals which can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals. This then becomes a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in C(m, K) ways (m > K) or C(K, m) ways (K > m); all possible assignments are feasible, i.e. a region can contain 0, 1, …, m concentrators. Each possible assignment is taken to represent a state of the stochastic variable-structure automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution. At each visit the automaton selects a point inside that state with uniform probability. The cost associated with that point is calculated and the average cost of that state is updated. Then the probabilities of all the states are updated; the probabilities are taken to be inversely proportional to the average costs of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. The exact locations of the concentrators are then determined by conducting a local gradient search within that state. This algorithm was applied to a set of test problems and the results were compared with those given by Cooper's (1964, 1967) EAC algorithm; on average, the proposed algorithm was found to perform better.
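A sketch of the automaton's search loop on a toy one-dimensional multimodal cost; the regions, cost function, and constants below are assumed for illustration:

```python
# Variable-structure stochastic automaton search: visit a region (state)
# according to the current probabilities, sample a point uniformly inside it,
# update that state's average cost, then set probabilities inversely
# proportional to the average costs.
import math
import random

def cost(x):                       # illustrative multimodal test function
    return math.sin(3 * x) + 0.1 * x * x

regions = [(-4, -2), (-2, 0), (0, 2), (2, 4)]    # the K "states"
avg = [0.0] * len(regions)
visits = [0] * len(regions)
probs = [1.0 / len(regions)] * len(regions)      # equal probabilities to start

for _ in range(2000):
    s = random.choices(range(len(regions)), weights=probs)[0]
    lo, hi = regions[s]
    c = cost(random.uniform(lo, hi))             # sample a point in the state
    visits[s] += 1
    avg[s] += (c - avg[s]) / visits[s]           # running average cost
    shift = 1e-3 - min(avg)                      # keep all shifted costs > 0
    inv = [1.0 / (a + shift) for a in avg]
    total = sum(inv)
    probs = [v / total for v in inv]

best = min(range(len(regions)), key=lambda i: avg[i])
print("automaton converged toward region", regions[best])
```

Convergence here means the probability mass concentrates on one region; per the abstract, a local gradient search inside that region would then fix the exact concentrator locations.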

Relevance:

100.00%

Abstract:

Analysts have identified four related questions that need to be asked and answered before agreements to respond to global warming will be possible [1]. Which countries bear responsibility for causing the problem? What quantities and mix of greenhouse gases should each country be allowed to emit? Which countries have the resources to do something about the problem? Where are the best opportunities for undertaking projects to respond to the problem? Failure to distinguish among these four questions, or a willingness to accept superficial answers, promotes unnecessary controversy.

Relevance:

100.00%

Abstract:

Development towards the combination of miniaturization and improved functionality of RFICs has been stalled due to the lack of high-performance integrated inductors. To meet this challenge, integration of magnetic material with high permeability as well as low conductivity is a must. Ferrite films are excellent candidates for RF devices due to their low cost, high resistivity, and low eddy-current losses. Unlike its bulk counterpart, nanocrystalline zinc ferrite, because of partial inversion in the spinel structure, exhibits novel magnetic properties suitable for RF applications. However, most scalable ferrite film deposition processes require either high temperature or expensive equipment or both. We report a novel low-temperature (<200 °C) solution-based deposition process for obtaining high-quality, polycrystalline zinc ferrite thin films (ZFTF) on Si (100) and on CMOS-foundry-fabricated spiral inductor structures, rapidly, using safe solvents and precursors. An enhancement of up to 20% at 5 GHz in the inductance of a fabricated device was achieved due to the deposited ZFTF. Substantial inductance enhancement requires sufficiently thick films, and our reported process is capable of depositing smooth, uniform films as thick as ∼20 μm just by altering the solution composition. The method is capable of depositing film conformally on a surface with complex geometry. As it requires neither a vacuum system nor any post-deposition processing, the method reported here has a low thermal budget, making it compatible with modern CMOS process flow.

Relevance:

100.00%

Abstract:

We present a study of the environments of extended radio sources in the Australia Telescope Low-Brightness Survey (ATLBS). The radio sources were selected from the ATLBS Extended Source Sample, a well-defined sample containing the most extended radio sources in the ATLBS sky survey regions. The environments were analysed using observations carried out with the 4-m Cerro Tololo Inter-American Observatory Blanco telescope for ATLBS fields in the Sloan Digital Sky Survey r′ band. We have estimated the properties of the environments using smoothed density maps derived from galaxy catalogues constructed from these optical imaging data. The angular distribution of galaxy density relative to the axes of the radio sources has been quantified by defining anisotropy parameters that are estimated using a new method presented here. Examining the anisotropy parameters for a subsample of extended double radio sources that includes all sources with pronounced asymmetry in lobe extents, we find good evidence that environmental anisotropy is the dominant cause of lobe asymmetry, in that higher galaxy density occurs almost always on the side of the shorter lobe; this validates the usefulness of the method proposed and adopted here. The environmental anisotropy parameters have been used to examine and compare the environments of Fanaroff-Riley Class I (FRI) and Fanaroff-Riley Class II (FRII) radio sources in two redshift regimes (z < 0.5 and z > 0.5). Wide-angle tail sources and head-tail sources lie in the most overdense environments. The environments of the head-tail sources in our sample display dipolar anisotropy, in that higher galaxy density appears to lie in the direction of the tails. Excluding the head-tail and wide-angle tail sources, subsamples of FRI and FRII sources from the ATLBS appear to lie in similar, moderately overdense environments, with no evidence for redshift evolution in the regimes studied here.

Relevance:

100.00%

Abstract:

Significance: The bi-domain protein tyrosine phosphatases (PTPs) exemplify functional evolution in signaling proteins for optimal spatiotemporal signal transduction. Bi-domain PTPs are products of gene duplication. The catalytic activity, however, is often localized to one PTP domain. The inactive PTP domain adopts multiple functional roles. These include modulation of catalytic activity, substrate specificity, and stability of the bi-domain enzyme. In some cases, the inactive PTP domain is a receptor for redox stimuli. Since multiple bi-domain PTPs are concurrently active in related cellular pathways, a stringent regulatory mechanism and selective cross-talk are essential to ensure fidelity in signal transduction. Recent Advances: The inactive PTP domain is an activator of the catalytic PTP domain in some cases, whereas it reduces catalytic activity in other bi-domain PTPs. The relative orientation of the two domains provides a conformational rationale for this regulatory mechanism. Recent structural and biochemical data reveal that these PTP domains participate in substrate recruitment. The inactive PTP domain has also been demonstrated to undergo substantial conformational rearrangement and oligomerization under oxidative stress. Critical Issues and Future Directions: The role of the inactive PTP domain in coupling environmental stimuli with catalytic activity needs to be examined further. Another aspect that merits attention is the role of this domain in substrate recruitment. These aspects have been poorly characterized in vivo. These lacunae currently restrict our understanding of the neo-functionalization of the inactive PTP domain in the bi-domain enzyme. It appears likely that more data from these research themes will form the basis for understanding fidelity in intracellular signal transduction.

Relevance:

100.00%

Abstract:

A Field Programmable Gate Array (FPGA) based hardware accelerator for multi-conductor parasitic capacitance extraction using the Method of Moments (MoM) is presented in this paper. Due to the prohibitive cost of solving the dense algebraic system formed by MoM, linear-complexity fast solver algorithms have been developed in the past to expedite the matrix-vector product computation in a Krylov subspace based iterative solver framework. However, as the number of conductors in a system increases, leading to a corresponding increase in the number of right-hand-side (RHS) vectors, the computational cost of multiple matrix-vector products presents a time bottleneck, especially for ill-conditioned system matrices. In this work, an FPGA based hardware implementation is proposed to parallelize the iterative matrix solution for multiple RHS vectors in a low-rank compression based fast solver scheme. The method is applied to accelerate electrostatic parasitic capacitance extraction of multiple conductors in a Ball Grid Array (BGA) package. Speed-ups of up to 13x for dense matrix-vector products and 12x for QR-compressed matrix-vector products over an equivalent software implementation on an Intel Core i5 processor are achieved using a Virtex-6 XC6VLX240T FPGA on Xilinx's ML605 board.
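The gain from low-rank compression can be illustrated with a small NumPy sketch: a rank-k block is compressed once into a QR factorization, after which every matrix-vector product costs O(nk) instead of O(n²), a saving that is amortized over the whole block of RHS vectors. The sizes, rank, and synthetic matrix below are assumptions; in the paper the compressed blocks come from the MoM discretization.

```python
# Low-rank (QR) compressed matrix-vector product applied to a block of
# right-hand-side vectors at once; sizes and rank are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, k, n_rhs = 1024, 24, 64          # matrix size, numerical rank, RHS count

# Build a rank-k test block A = U @ V.T (stand-in for a far-field MoM block).
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
A = U @ V.T

# Compress once: A ~= Q @ R with Q (n x k) orthonormal and R (k x n).
Q, _ = np.linalg.qr(U)              # orthonormal column basis of A
R = Q.T @ A

# Apply to all RHS vectors in one shot: O(n*k*n_rhs) vs O(n^2*n_rhs) work.
X = rng.standard_normal((n, n_rhs))
Y_fast = Q @ (R @ X)
print("max error vs dense product:", np.max(np.abs(Y_fast - A @ X)))
```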

Relevance:

100.00%

Abstract:

Response analysis of a linear structure with uncertainties in both structural parameters and external excitation is considered here. When such an analysis is carried out using the spectral stochastic finite element method (SSFEM), the computational cost often tends to be prohibitive due to the rapid growth of the number of spectral bases with the number of random variables and the order of expansion. For instance, if the excitation contains a random frequency, or if it is a general random process, then a good approximation of these excitations using polynomial chaos expansion (PCE) involves a large number of terms, which leads to very high cost. To address this issue of high computational cost, a hybrid method is proposed in this work. In this method, first the random eigenvalue problem is solved using the weak formulation of SSFEM, which involves solving a system of deterministic nonlinear algebraic equations to estimate the PCE coefficients of the random eigenvalues and eigenvectors. Then the response is estimated using a Monte Carlo (MC) simulation, where the modal bases are sampled from the PCE of the random eigenvectors estimated in the previous step, followed by numerical time integration. Numerical studies show that the proposed method successfully reduces the computational burden compared with either a pure SSFEM or a pure MC simulation, and is more accurate than a perturbation method. The computational gain improves as the problem size, in terms of degrees of freedom, grows. It also improves as the time span of interest reduces.
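A minimal sketch of the Monte Carlo step, assuming a one-dimensional Hermite PCE for a single random eigenvalue and a damped modal (single-degree-of-freedom) oscillator as the dynamic system; all coefficients, the damping ratio, and the loading below are illustrative, not the paper's:

```python
# Sample a random natural frequency from an assumed Hermite PCE of the
# eigenvalue lambda = omega^2, then time-integrate one modal oscillator
# per Monte Carlo sample.
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(1)
c = [100.0, 8.0, 0.5]          # assumed PCE coefficients: lambda(xi) = sum c_i He_i(xi)
zeta, T, dt = 0.02, 2.0, 1e-3  # damping ratio, time span, step (illustrative)
t = np.arange(0.0, T, dt)

peaks = []
for _ in range(100):                         # Monte Carlo loop
    xi = rng.standard_normal()               # standard-normal germ
    lam = hermeval(xi, c)                    # sampled eigenvalue
    wn = np.sqrt(max(lam, 1e-6))             # sampled natural frequency
    # Semi-implicit Euler for u'' + 2*zeta*wn*u' + wn^2*u = sin(2t).
    u, v, umax = 0.0, 0.0, 0.0
    for tk in t:
        a = np.sin(2.0 * tk) - 2.0 * zeta * wn * v - wn * wn * u
        v += dt * a
        u += dt * v
        umax = max(umax, abs(u))
    peaks.append(umax)
print("mean peak modal response:", np.mean(peaks))
```

In the full method the PCE gives eigenvectors as well, so each MC sample also carries a sampled modal basis; the sketch keeps only the eigenvalue sampling to show where the PCE feeds the simulation.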