28 results for Attentional Demands
at Indian Institute of Science - Bangalore - India
Abstract:
The operation of thyristor-controlled static VAR compensators (SVCs) at various conduction angles can be used advantageously to meet the unbalanced reactive power demands in a system. However, such operation introduces harmonic currents into the AC system. This paper presents an algorithm to evaluate an optimum combination of the phase-wise reactive power generation from the SVC and balanced reactive power supply from the AC system, based on defined performance indices, namely the telephone influence factor (TIF), the total harmonic current factor (IT) and the distortion factor (D). Results of studies conducted on a typical distribution system are presented and discussed.
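As a rough illustration of how such indices are computed from a harmonic current spectrum, here is a minimal Python sketch. The weighting values below are illustrative placeholders, not the standard IEEE TIF table, and the paper's optimization over conduction angles is not reproduced.

```python
import math

def distortion_factor(harmonics):
    """Distortion factor D: RMS of the harmonic currents relative to the fundamental.
    `harmonics` maps harmonic order h -> RMS current I_h (h = 1 is the fundamental)."""
    i1 = harmonics[1]
    return math.sqrt(sum(i**2 for h, i in harmonics.items() if h > 1)) / i1

def it_product(harmonics, tif_weights):
    """Total harmonic current factor (I.T product): sqrt(sum of (W_h * I_h)^2)."""
    return math.sqrt(sum((tif_weights.get(h, 0.0) * i)**2 for h, i in harmonics.items()))

def tif(harmonics, tif_weights):
    """Telephone influence factor: the I.T product normalized by the total RMS current."""
    i_rms = math.sqrt(sum(i**2 for i in harmonics.values()))
    return it_product(harmonics, tif_weights) / i_rms

# Illustrative only: placeholder weights and currents, not IEEE table values.
weights = {1: 0.5, 5: 4000.0, 7: 7500.0}
currents = {1: 100.0, 5: 8.0, 7: 5.0}   # amps RMS per harmonic order
print(distortion_factor(currents), tif(currents, weights))
```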
Abstract:
In this paper, we use reinforcement learning (RL) as a tool to study price dynamics in an electronic retail market consisting of two competing sellers, and price-sensitive and lead-time-sensitive customers. Sellers, offering identical products, compete on price to satisfy stochastically arriving demands (customers), and follow standard inventory control and replenishment policies to manage their inventories. In such a generalized setting, RL techniques have not previously been applied. We consider two representative cases: 1) the no-information case, where none of the sellers has any information about customer queue levels, inventory levels, or prices at the competitors; and 2) the partial-information case, where every seller has information about the customer queue levels and inventory levels of the competitors. Sellers employ automated pricing agents, or pricebots, which use RL-based pricing algorithms to reset prices at random intervals based on factors such as the number of back orders, inventory levels, and replenishment lead times, with the objective of maximizing discounted cumulative profit. In the no-information case, we show that a seller who uses Q-learning outperforms a seller who uses derivative following (DF). In the partial-information case, we model the problem as a Markovian game and use actor-critic based RL to learn dynamic prices. We believe our approach to solving these problems is a new and promising way of setting dynamic prices in multiseller environments with stochastic demands, price-sensitive customers, and inventory replenishments.
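As a rough sketch of what such an RL-based pricebot looks like, the snippet below implements tabular Q-learning over a discretized state and a grid of price levels. The state variables, parameters, and price grid are illustrative assumptions, not the paper's exact formulation.

```python
import random
from collections import defaultdict

class QLearningPricebot:
    """Minimal tabular Q-learning pricing agent (a sketch, not the paper's algorithm).
    State: a discretized (backorder bucket, inventory bucket) pair. Action: a price."""

    def __init__(self, price_levels, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.prices = price_levels
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)          # (state, price) -> estimated value

    def choose_price(self, state):
        if random.random() < self.epsilon:   # explore a random price level
            return random.choice(self.prices)
        return max(self.prices, key=lambda p: self.q[(state, p)])  # exploit

    def update(self, state, price, profit, next_state):
        # Standard Q-learning backup toward discounted cumulative profit.
        best_next = max(self.q[(next_state, p)] for p in self.prices)
        td_target = profit + self.gamma * best_next
        self.q[(state, price)] += self.alpha * (td_target - self.q[(state, price)])

# Hypothetical usage for one pricing step:
bot = QLearningPricebot(price_levels=[8.0, 9.0, 10.0, 11.0])
s = (2, 5)                                   # (backorder bucket, inventory bucket)
p = bot.choose_price(s)
bot.update(s, p, profit=12.3, next_state=(1, 4))
```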
Abstract:
This paper presents an approach to model the expected impacts of climate change on irrigation water demand in a reservoir command area. A statistical downscaling model and an evapotranspiration model are used with a general circulation model (GCM) output to predict the anticipated change in the monthly irrigation water requirement of a crop. Specifically, we quantify the likely changes in irrigation water demands at a location in the command area, as a response to the projected changes in precipitation and evapotranspiration at that location. Statistical downscaling with a canonical correlation analysis is carried out to develop future scenarios of meteorological variables (rainfall, relative humidity (RH), wind speed (U-2), radiation, maximum (Tmax) and minimum (Tmin) temperatures) starting with simulations provided by a GCM for a specified emission scenario. The medium resolution Model for Interdisciplinary Research on Climate GCM is used with the A1B scenario to assess the likely changes in irrigation demands for paddy, sugarcane, permanent garden and semidry crops over the command area of Bhadra reservoir, India. Results from the downscaling model suggest that the monthly rainfall is likely to increase in the reservoir command area. RH, Tmax and Tmin are also projected to increase, with small changes in U-2. Consequently, the reference evapotranspiration, modeled by the Penman-Monteith equation, is predicted to increase. The irrigation requirements are assessed on a monthly scale at nine selected locations encompassing the Bhadra reservoir command area. The irrigation requirements are projected to increase in most cases, suggesting that the effect of the projected increase in rainfall on the irrigation demands is offset by the effect of projected changes in the other meteorological variables (viz., Tmax and Tmin, solar radiation, RH and U-2). The irrigation demand assessment carried out at the river basin scale will be useful for future irrigation management systems. Copyright (c) 2012 John Wiley & Sons, Ltd.
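The abstract names the Penman-Monteith equation for reference evapotranspiration; a widely used FAO-56 form of it is sketched below. Whether the study used this exact parameterization is not stated, so treat the constants and input choices as assumptions.

```python
import math

def fao56_reference_et(t_mean, rn, g, u2, rh_mean, gamma=0.0665):
    """FAO-56 Penman-Monteith reference evapotranspiration (mm/day).
    t_mean: mean air temperature (deg C); rn: net radiation (MJ/m2/day);
    g: soil heat flux (MJ/m2/day); u2: wind speed at 2 m (m/s);
    rh_mean: mean relative humidity (%); gamma: psychrometric constant (kPa/degC)."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # saturation vapour pressure (kPa)
    ea = es * rh_mean / 100.0                                  # actual vapour pressure (kPa)
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                # slope of the es curve (kPa/degC)
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# Hypothetical inputs: ~5.3 mm/day for a warm, moderately humid day.
print(fao56_reference_et(t_mean=25.0, rn=14.0, g=0.0, u2=2.0, rh_mean=60.0))
```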
Abstract:
The recently discovered twist phase is studied in the context of the full ten-parameter family of partially coherent general anisotropic Gaussian Schell-model beams. It is shown that the nonnegativity requirement on the cross-spectral density of the beam demands that the strength of the twist phase be bounded from above by the inverse of the transverse coherence area of the beam. The twist phase as a two-point function is shown to have the structure of the generalized Huygens kernel or Green's function of a first-order system. The ray-transfer matrix of this system is exhibited. Wolf-type coherent-mode decomposition of the twist phase is carried out. Imposition of the twist phase on an otherwise untwisted beam is shown to result in a linear transformation in the ray phase space of the Wigner distribution. Though this transformation preserves the four-dimensional phase-space volume, it is not symplectic and hence it can, when impressed on a Wigner distribution, push it out of the convex set of all bona fide Wigner distributions unless the original Wigner distribution was sufficiently deep into the interior of the set.
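In symbols (notation assumed here, not taken from the paper), the twist phase enters the cross-spectral density as the two-point factor below, and the nonnegativity requirement bounds its strength u from above by the inverse of the transverse coherence area sigma_c^2:

```latex
W_{\mathrm{twist}}(\mathbf{r}_1,\mathbf{r}_2)
  \;\propto\; \exp\!\left[-\,i\,u\,(x_1 y_2 - x_2 y_1)\right],
\qquad
|u| \;\le\; \frac{1}{\sigma_c^{2}}.
```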
Abstract:
The rectangular dielectric waveguide is the most commonly used structure in integrated optics, especially in semiconductor diode lasers. Demands from new applications such as high-speed data backplanes in integrated electronics, waveguide filters, optical multiplexers and optical switches are driving technology toward better materials and processing techniques for planar waveguide structures. The familiar infinite-slab and circular waveguides are not practical for use on a substrate, because the slab waveguide has no lateral confinement and the circular fiber is not compatible with the planar processing technology used to make planar structures. The rectangular waveguide is the natural structure. In this review, we discuss several analytical methods for analyzing the mode structure of rectangular waveguides, beginning with a wave analysis based on the pioneering work of Marcatili. We study three basic techniques, with examples, to compare their performance: the analytical approach developed by Marcatili, perturbation techniques that improve on the analytical solutions, and the effective index method.
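Of the three techniques, the effective index method lends itself most readily to a compact numerical illustration. The sketch below is a toy with made-up indices and dimensions, not an example from the review: it collapses the rectangular cross-section into two successive symmetric-slab problems and solves each slab's fundamental TE dispersion relation by bisection.

```python
import math

def slab_te0_neff(n_core, n_clad, thickness_um, wavelength_um):
    """Fundamental TE-mode effective index of a symmetric slab, found by
    bisection on the dispersion relation tan(kappa*d/2) = gamma/kappa."""
    k0 = 2 * math.pi / wavelength_um
    d = thickness_um

    def residual(neff):
        kappa = k0 * math.sqrt(n_core**2 - neff**2)   # transverse wavenumber in core
        gamma = k0 * math.sqrt(neff**2 - n_clad**2)   # decay constant in cladding
        return math.tan(kappa * d / 2) - gamma / kappa

    # For TE0, kappa*d/2 lies in (0, pi/2): bracket the root accordingly.
    lo = math.sqrt(max(n_core**2 - (math.pi / (k0 * d))**2, n_clad**2)) + 1e-9
    hi = n_core - 1e-9
    for _ in range(100):               # residual > 0 at lo, < 0 at hi
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def rect_neff_effective_index(n_core, n_clad, width_um, height_um, wavelength_um):
    """Effective index method: solve the vertical slab first, then reuse its
    effective index as the core index of a horizontal slab."""
    n_vertical = slab_te0_neff(n_core, n_clad, height_um, wavelength_um)
    return slab_te0_neff(n_vertical, n_clad, width_um, wavelength_um)

# Toy example: a 2.0 x 0.5 um guide, core 3.50 on cladding 3.17, at 1.55 um.
print(rect_neff_effective_index(3.50, 3.17, 2.0, 0.5, 1.55))
```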
Abstract:
High-end network security applications demand high-speed operation and large rule set support. Packet classification is the core functionality that demands high throughput in such applications. This paper proposes a packet classification architecture to meet such high throughput. We have implemented a Firewall with this architecture in reconfigurable hardware. We propose an extension to the Distributed Crossproducting of Field Labels (DCFL) technique to achieve a scalable and high-performance architecture. The implemented Firewall takes advantage of the inherent structure and redundancy of the rule set by using our DCFL Extended (DCFLE) algorithm. The use of the DCFLE algorithm results in both speed and area improvements when it is implemented in hardware. Although we restrict ourselves to standard 5-tuple matching, the architecture supports additional fields. High-throughput classification invariably uses Ternary Content Addressable Memory (TCAM) for prefix matching, though TCAM fares poorly in terms of area and power efficiency. Use of TCAM for port range matching is expensive, as the range-to-prefix conversion results in a large number of prefixes, leading to storage inefficiency. Extended TCAM (ETCAM) is fast and the most storage-efficient solution for range matching. We present, for the first time, a reconfigurable hardware implementation of ETCAM. We have implemented our Firewall as an embedded system on a Virtex-II Pro FPGA based platform, running Linux with the packet classification in hardware. The Firewall was tested in real time with a 1 Gbps Ethernet link and 128 sample rules. The packet classification hardware uses a quarter of the logic resources and slightly over one third of the memory resources of the XC2VP30 FPGA. It achieves a maximum classification throughput of 50 million packets/s, corresponding to a 16 Gbps link rate for the worst-case packet size. A Firewall rule update involves only memory re-initialization in software, without any hardware change.
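The storage penalty the abstract attributes to range-to-prefix conversion is easy to see concretely. The sketch below is an illustration of that conversion only, not the paper's DCFLE or ETCAM logic: it expands an inclusive port range into the minimal set of ternary prefixes a conventional TCAM would need.

```python
def range_to_prefixes(start, end, bits=16):
    """Split an inclusive integer range into the minimal set of binary prefixes,
    illustrating why naive TCAM range matching blows up (e.g. 1-65534 -> 30 prefixes)."""
    prefixes = []
    while start <= end:
        # Largest power-of-two block aligned at `start` that still fits in the range.
        size = start & -start if start > 0 else 1 << bits
        while start + size - 1 > end:
            size >>= 1
        plen = bits - size.bit_length() + 1            # prefix length for this block
        prefixes.append(f"{start:0{bits}b}"[:plen] + "*" * (bits - plen))
        start += size
    return prefixes

print(len(range_to_prefixes(1, 65534)))   # 30 prefixes for one 16-bit port range
```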
Abstract:
A large part of the rural population of developing countries uses traditional biomass stoves to meet their cooking and heating energy demands. These stoves have very low thermal efficiency, and most of them cannot handle agricultural wastes. Thus, there is a need to develop an alternative cooking contrivance that is simple, efficient and can handle a range of biomass including agricultural wastes. In the work reported here, a highly densified solid fuel block using a range of low-cost agro residues has been developed to meet cooking and heating needs. A strategy was adopted to determine the most suitable raw materials, optimized in terms of cost and performance. Several experiments were conducted using solid fuel blocks manufactured from various raw materials in different proportions; a fuel block composed of 40% biomass, 40% charcoal powder, 15% binder and 5% oxidizer was found to fulfil the requirement. Based on this finding, fuel blocks of two configurations, cylindrical with a single hole and with multiple holes (3, 6, 9 and 13), were constructed and their performance was evaluated. For instance, the 13-hole solid fuel block met the requirement of domestic cooking; the mean thermal power was 1.6 kWth with a burn time of 1.5 h. Furthermore, the maximum thermal efficiency recorded for this particular design was 58%. The power level of the single-hole solid fuel block was lower, but adequate for barbecue cooking applications.
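As context for the 58% figure, stove efficiencies of this kind are typically reported from a water boiling test; the sketch below shows the usual bookkeeping (useful sensible plus latent heat over fuel energy). The protocol and the example numbers are assumptions, not details from the paper.

```python
def water_boiling_efficiency(m_water_kg, dT_K, m_evap_kg, m_fuel_kg, fuel_cv_mj_per_kg):
    """Thermal efficiency from a water boiling test: useful heat into the pot
    (sensible heating plus evaporation) divided by the heat released by the fuel."""
    cp, h_fg = 4.186e-3, 2.257        # MJ/(kg*K) and MJ/kg for water
    useful = m_water_kg * cp * dT_K + m_evap_kg * h_fg
    return useful / (m_fuel_kg * fuel_cv_mj_per_kg)

# Hypothetical test: 5 kg water heated 75 K, 0.5 kg evaporated,
# 0.35 kg of fuel block burned at an assumed 20 MJ/kg -> ~0.39 efficiency.
print(water_boiling_efficiency(5.0, 75.0, 0.5, 0.35, 20.0))
```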
Abstract:
Polarization properties of Gaussian laser beams are analyzed in a manner consistent with the Maxwell equations, and expressions are developed for all components of the electric and magnetic field vectors in the beam. It is shown that the transverse nature of the free electromagnetic field demands a nonzero transverse cross-polarization component in addition to the well-known component of the field vectors along the beam axis. The strength of these components in relation to the strength of the principal polarization component is established. It is further shown that the integrated strengths of these components over a transverse plane are invariants of the propagation process. It is suggested that cross-polarization measurement using a null detector can serve as a new method for accurate determination of the center of Gaussian laser beams.
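A one-line way to see the constraint (a sketch with assumed notation: Ex the principal component, exp(ikz) the carrier, and a slowly varying envelope): the free-field divergence condition forces a longitudinal component at first order in the diffraction angle, and the transverse cross-polarization component then appears at the next order.

```latex
\nabla \cdot \mathbf{E} = 0
\quad\Longrightarrow\quad
E_z \;\approx\; \frac{i}{k}\,\frac{\partial E_x}{\partial x}.
```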
Abstract:
There is an endless quest for new materials to meet the demands of advancing technology. Thus, we need new magnetic and metallic/semiconducting materials for spintronics, new low-loss dielectrics for telecommunication, new multiferroic materials that combine ferroelectricity and ferromagnetism for memory devices, new piezoelectrics that do not contain lead, new lithium-containing solids for application as cathode/anode/electrolyte in lithium batteries, hydrogen storage materials for mobile/transport applications and catalyst materials that can convert, for example, methane to higher hydrocarbons, and the list is endless! Fortunately for us, chemistry - inorganic chemistry in particular - plays a crucial role in this quest. Most of the functional materials mentioned above are inorganic non-molecular solids, while much of conventional inorganic chemistry deals with isolated molecules or molecular solids. Even so, the basic concepts that we learn in inorganic chemistry, for example, acidity/basicity, oxidation/reduction (potentials), crystal field theory, low spin-high spin/inner sphere-outer sphere complexes, role of d-electrons in transition metal chemistry, electron-transfer reactions, coordination geometries around metal atoms, Jahn-Teller distortion, metal-metal bonds, cation-anion (metal-nonmetal) redox competition in the stabilization of oxidation states - all find crucial application in the design and synthesis of inorganic solids possessing technologically important properties. An attempt has been made here to illustrate the role of inorganic chemistry in this endeavour, drawing examples from the literature as well as from the research work of my group.
Abstract:
Many novel computer architectures, such as array processors and multiprocessors, which achieve high performance through the use of concurrency, exploit variations of the von Neumann model of computation. The effective utilization of such machines makes special demands on programmers and their programming languages, such as the structuring of data into vectors or the partitioning of programs into concurrent processes. In comparison, the data flow model of computation demands only that the principle of structured programming be followed. A data flow program, often represented as a data flow graph, is a program that expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data-independent operations in parallel. In this paper, we discuss the design of a high-level language (DFL: Data Flow Language) suitable for data flow computers. Some sample procedures in DFL are presented. The implementation aspects are not discussed in detail, since no new problems are encountered there. The language DFL embodies the concepts of functional programming, but in appearance closely resembles Pascal. The language is a better vehicle than the data flow graph for expressing a parallel algorithm. The compiler has been implemented on a DEC 1090 system in Pascal.
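As a sketch of the execution model only (not DFL syntax, which the abstract says resembles Pascal), the toy evaluator below fires each operator as soon as all its operand values have arrived; operators fired in the same round are mutually independent, which is exactly the concurrency a data flow machine exploits.

```python
import operator

def run_dataflow(graph, inputs):
    """Data-driven evaluation of a data flow graph.
    graph: node name -> (function, list of operand node names); inputs: name -> value."""
    values = dict(inputs)
    pending = dict(graph)
    while pending:
        # A node is ready (may "fire") once every operand value has arrived.
        fired = [n for n, (fn, args) in pending.items()
                 if all(a in values for a in args)]
        if not fired:
            raise ValueError("cycle or missing input in graph")
        for n in fired:                      # these firings are mutually independent;
            fn, args = pending.pop(n)        # a data flow machine would run them in parallel
            values[n] = fn(*(values[a] for a in args))
    return values

# (a*b) + (c-d): the mul and sub nodes have no mutual dependency and can fire together.
graph = {"p": (operator.mul, ["a", "b"]),
         "q": (operator.sub, ["c", "d"]),
         "r": (operator.add, ["p", "q"])}
print(run_dataflow(graph, {"a": 2, "b": 3, "c": 10, "d": 4})["r"])   # 12
```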
Abstract:
The scalar coupled proton NMR spectra of many organic molecules possessing more than one phenyl ring are generally complex, owing to the degeneracy of transitions arising from closely resonating protons, in addition to the several short- and long-range couplings experienced by each proton. Analogous situations are generally encountered in derivatives of halogenated benzanilides. Extraction of information from such spectra is challenging, and demands the differentiation of the spectrum pertaining to each phenyl ring and the simplification of their spectral complexity. The present study employs a blend of independent spin-system filtering and spin-state-selective detection of single quantum (SQ) transitions by the two-dimensional multiple quantum (MQ) methodology to achieve this goal. Precise values of scalar couplings of very small magnitude have been derived by double-quantum-resolved experiments. The experiments also provide the relative signs of heteronuclear couplings. Studies on four isomers of dihalogenated benzanilides are reported in this work.
Abstract:
Decentralized power is characterised by the generation of power nearer to demand centers, focusing mainly on meeting local energy needs. A decentralized power system can function either in the presence of the grid, where it can feed surplus power generated to the grid, or as an independent, stand-alone isolated system exclusively meeting the local demands of remote locations. Further, decentralized power is also classified on the basis of the type of energy resources used: non-renewable and renewable. These classifications, along with a plethora of technological alternatives, have made the whole prioritization process of decentralized power quite complicated for decision making. There is abundant literature discussing the various approaches that have been used to support decision making in such complex situations. We envisage that summarizing this literature in a review paper would greatly help policy/decision makers and researchers in arriving at effective solutions. With this need in mind, 102 articles were reviewed; the features of several technological alternatives available for decentralized power, and studies on the modeling and analysis of the economic, environmental and technological feasibility of both grid-connected (GC) and stand-alone (SA) systems as decentralized power options, are presented. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
In this paper, we first describe a framework to model the sponsored search auction on the web as a mechanism design problem. Using this framework, we describe two well-known mechanisms for sponsored search auctions: Generalized Second Price (GSP) and Vickrey-Clarke-Groves (VCG). We then derive a new mechanism for the sponsored search auction which we call the optimal (OPT) mechanism. The OPT mechanism maximizes the search engine's expected revenue, while achieving Bayesian incentive compatibility and individual rationality of the advertisers. We then undertake a detailed comparative study of the mechanisms GSP, VCG, and OPT. We compute and compare the expected revenue earned by the search engine under the three mechanisms when the advertisers are symmetric and some special conditions are satisfied. We also compare the three mechanisms in terms of incentive compatibility, individual rationality, and computational complexity. Note to Practitioners: The advertiser-supported web site is one of the successful business models in the emerging web landscape. When an Internet user enters a keyword (i.e., a search phrase) into a search engine, the user gets back a page with results, containing the links most relevant to the query and also sponsored links (also called paid advertisement links). When a sponsored link is clicked, the user is directed to the corresponding advertiser's web page. The advertiser pays the search engine in some appropriate manner for sending the user to its web page. Against every search performed by any user on any keyword, the search engine faces the problem of matching a set of advertisers to the sponsored slots. In addition, the search engine also needs to decide on a price to be charged to each advertiser. Due to increasing demand for Internet advertising space, most search engines currently use auction mechanisms for this purpose. These are called sponsored search auctions. A significant percentage of the revenue of Internet giants such as Google, Yahoo!, MSN, etc., comes from sponsored search auctions. In this paper, we study two auction mechanisms, GSP and VCG, which are quite popular in the sponsored auction context, and pursue the objective of designing a mechanism that is superior to these two. In particular, we propose a new mechanism which we call the OPT mechanism. This mechanism maximizes the search engine's expected revenue subject to achieving Bayesian incentive compatibility and individual rationality. Bayesian incentive compatibility guarantees that it is optimal for each advertiser to bid his/her true value provided that all other agents also bid their respective true values. Individual rationality ensures that the agents participate voluntarily in the auction, since they are assured of gaining a non-negative payoff by doing so.
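For the two mechanisms the abstract names, the per-slot payments are simple to compute. The sketch below uses the textbook position-auction forms of GSP and VCG with hypothetical bids and click-through rates; the paper's OPT mechanism is its own contribution and is not reproduced here.

```python
def gsp_payments(bids, ctrs):
    """Generalized Second Price: the advertiser in slot i pays, per click,
    the next-highest bid. `ctrs` are slot click-through rates, highest first."""
    order = sorted(bids, reverse=True)
    return [order[i + 1] if i + 1 < len(order) else 0.0
            for i in range(len(ctrs))]

def vcg_payments(bids, ctrs):
    """VCG: each winner pays the externality it imposes on the bidders below it.
    Returns the total (not per-click) expected payment for each slot."""
    order = sorted(bids, reverse=True)
    k = len(ctrs)
    pay = [0.0] * k
    # Recursive form: p_k = b_{k+1}*c_k;  p_i = p_{i+1} + b_{i+1}*(c_i - c_{i+1}).
    pay[k - 1] = (order[k] if k < len(order) else 0.0) * ctrs[k - 1]
    for i in range(k - 2, -1, -1):
        pay[i] = pay[i + 1] + order[i + 1] * (ctrs[i] - ctrs[i + 1])
    return pay

# Hypothetical instance: four bidders, two slots.
bids, ctrs = [4.0, 3.0, 2.0, 1.0], [0.3, 0.2]
print(gsp_payments(bids, ctrs))   # per-click payments: [3.0, 2.0]
print(vcg_payments(bids, ctrs))   # total payments: [0.7, 0.4]
```

Note how GSP charges the top slot 3.0 per click (0.9 expected), more than the VCG total of 0.7: GSP payments generally exceed VCG payments for the same bid profile, which is one reason the two mechanisms' revenue and incentive properties differ.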