43 results for Rational objective
Abstract:
This paper addresses some of the basic issues involved in determining rational strategies for players in two-target games. We show that, unlike single-target games, where role assignment and strategy selection are conceptually straightforward, in two-target games many additional factors influence the rational selection of strategies: the players' preference ordering of outcomes, the relative configuration of the target sets and secured outcome regions, uncertainty about the parameters of the game, and so on. The importance of these issues is illustrated through appropriate examples.
Abstract:
The suitability of the European Centre for Medium-Range Weather Forecasts (ECMWF) operational wind analysis for the period 1980-1991 for studying interannual variability is examined. Changes in the model and the analysis procedure are shown to give rise to a systematic and significant trend in the large-scale circulation features. A new method of removing these systematic errors at all levels is presented, using multivariate EOF analysis. The objectively detrended three-dimensional wind field agrees well with the independent Florida State University (FSU) wind analysis at the surface: the interannual variations in the detrended surface analysis match those of the FSU analysis in both amplitude and spatial pattern. The detrended analyses at the other levels are therefore also expected to be useful for studies of variability and predictability on interannual time scales. It is demonstrated that the trend in the wind field is due to a shift in the climatologies from the period 1980-1985 to the period 1986-1991.
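The detrending idea can be sketched with a synthetic example: a step change between two sub-periods shows up as the leading EOF of the field anomalies, and projecting out that mode removes most of the spurious shift. The field dimensions, shift pattern, and noise level below are invented for illustration and are not the ECMWF data.

```python
import numpy as np

# Hypothetical sketch: remove a systematic shift between two analysis
# periods by projecting out the leading EOF of the field anomalies.
rng = np.random.default_rng(0)
n_time, n_space = 144, 50            # e.g. monthly fields over 1980-1991
field = rng.normal(size=(n_time, n_space))
shift = np.linspace(2.0, -2.0, n_space)
field[72:] += shift                  # step change between the two halves

anom = field - field.mean(axis=0)    # anomalies about the full-period mean
# EOFs via SVD: rows of vt are spatial patterns, u*s are principal components
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pc1 = u[:, 0] * s[0]                 # leading principal component
detrended = anom - np.outer(pc1, vt[0])  # project out the leading mode

# the step between the two sub-periods is strongly reduced after removal
step_before = np.abs(anom[:72].mean(0) - anom[72:].mean(0)).mean()
step_after = np.abs(detrended[:72].mean(0) - detrended[72:].mean(0)).mean()
```

Because the imposed step dominates the anomaly variance, the leading EOF aligns with the shift pattern and its removal detrends the field.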
Abstract:
In the absence of a reliable method for a priori prediction of the structure and properties of inorganic solid materials, an experimental approach involving a systematic study of composition, structure, and properties, combined with chemical intuition based on previous experience, is likely to be a viable route to the rational design of inorganic materials. The approach is illustrated by taking perovskite lithium-ion conductors as an example.
Abstract:
An integrated reservoir operation model is presented for developing effective operational policies for irrigation water management. In arid and semi-arid climates, owing to dynamic changes in hydroclimatic conditions within a season, a fixed cropping pattern with conventional operating policies may have a considerable impact on the performance of the irrigation system and may affect the economics of the farming community. For optimal allocation of irrigation water in a season, effective mathematical models can guide water managers in proper decision making and consequently help reduce the adverse effects of water shortage and crop failure. This paper presents a multi-objective integrated reservoir operation model for a multi-crop irrigation system. To solve the multi-objective model, a recent swarm intelligence technique, elitist-mutated multi-objective particle swarm optimisation (EM-MOPSO), is used and applied to a case study in India. The method evolves effective strategies for irrigation crop planning and operation policies for a reservoir system, and thereby helps the farming community improve crop benefits and water-resource usage in the reservoir command area.
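The core mechanics of a multi-objective PSO with an elite archive and a mutation step can be sketched on a toy problem; this is only in the spirit of EM-MOPSO, not the authors' algorithm, and the water-split objectives and all parameters are assumptions for illustration.

```python
import numpy as np

# Toy problem: split water x in [0, 1] between two crops;
# f1 = shortfall of crop A, f2 = shortfall of crop B (both minimized).
rng = np.random.default_rng(1)

def objectives(x):
    return np.array([(1.0 - x) ** 2, x ** 2])   # conflicting objectives

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

n, iters = 20, 60
pos = rng.uniform(0, 1, n)
vel = np.zeros(n)
pbest = pos.copy()
pbest_f = np.array([objectives(x) for x in pos])
archive = []                                    # elite non-dominated (x, f) pairs

for _ in range(iters):
    for i in range(n):
        f = objectives(pos[i])
        if dominates(f, pbest_f[i]):
            pbest[i], pbest_f[i] = pos[i], f
        # maintain the elite archive of mutually non-dominated solutions
        if not any(dominates(af, f) for _, af in archive):
            archive = [(ax, af) for ax, af in archive if not dominates(f, af)]
            archive.append((pos[i], f))
    leader = archive[rng.integers(len(archive))][0]   # random elite leader
    r1, r2 = rng.random(n), rng.random(n)
    vel = 0.5 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (leader - pos)
    pos = np.clip(pos + vel, 0, 1)
    j = rng.integers(n)                               # mutation step: perturb
    pos[j] = np.clip(pos[j] + rng.normal(0, 0.1), 0, 1)   # one particle

front = sorted(x for x, _ in archive)                 # approximate Pareto set
```

The archive plays the role of the elite set from which swarm leaders are drawn, which is how multi-objective PSO variants typically steer particles toward the Pareto front.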
Abstract:
The primary objective of this paper is to use a statistical digital human model to better understand the nature of the reach probability of points in the task space. The concept of the task-dependent boundary manikin is introduced to geometrically characterize the extreme individuals in a given population who can accomplish a task. For a given point of interest and task, the map of the acceptable variation in anthropometric parameters is superimposed on the distribution of the same parameters in the given population to identify the extreme individuals. To illustrate the concept, the task-space mapping is done for the reach probability of human arms. Unlike boundary manikins, which are completely defined by the population, the dimensions of these manikins vary with the task, say, a point to be reached, as in the present case; hence they are referred to here as task-dependent boundary manikins. Simulations with these manikins help designers visualize how differently the extreme individuals would perform the task. The reach probability at the points of a 3D grid in the operational space is computed; for objects overlaid on this grid, approximate probabilities are derived from the grid so that the objects can be rendered in colors indicating reach probability. The method may also help provide a rational basis for the selection of personnel for a given task.
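The reach-probability idea can be sketched by Monte Carlo: sample anthropometric parameters from a population distribution and, for each grid point, estimate the fraction of individuals whose fully extended arm reaches it. The segment-length distributions, the fixed shoulder position, and the grid are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Sample arm-segment lengths for a hypothetical population (metres).
rng = np.random.default_rng(2)
n = 10_000
upper = rng.normal(0.33, 0.02, n)     # upper-arm length, assumed distribution
fore = rng.normal(0.26, 0.02, n)      # forearm + hand length, assumed
max_reach = upper + fore              # reach of the fully extended arm

def reach_probability(point, shoulder=np.zeros(3)):
    """Fraction of the sampled population able to reach `point`."""
    dist = np.linalg.norm(np.asarray(point, dtype=float) - shoulder)
    return float(np.mean(max_reach >= dist))

# Reach probability over a coarse 3D grid in front of the shoulder;
# objects overlaid on such a grid can be colored by these values.
xs = np.linspace(0.1, 0.8, 8)
grid_p = np.array([[[reach_probability([x, y, z])
                     for z in xs] for y in xs] for x in xs])
```

Points well inside the population's reach get probability 1, points beyond everyone's reach get 0, and the transition band identifies where the extreme (boundary) individuals matter.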
Abstract:
In this paper, we address a key problem faced by advertisers in sponsored search auctions on the web: how much to bid, given the bids of the other advertisers, so as to maximize individual payoffs? Assuming the generalized second price (GSP) auction as the auction mechanism, we formulate this problem in the framework of an infinite-horizon alternating-move game of advertiser bidding behavior. For a sponsored search auction involving two advertisers, we characterize all the pure strategy and mixed strategy Nash equilibria. We also prove that the bid prices converge to a Nash equilibrium if the advertisers follow a myopic best response bidding strategy. Following this, we investigate the bidding behavior of the advertisers when they use Q-learning. We observe empirically the interesting trend that the Q-values converge even when both advertisers learn simultaneously.
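The two-advertiser GSP setting can be made concrete with a small equilibrium checker: the higher bidder takes the top slot and pays the other's bid, the lower bidder takes the bottom slot at the reserve price. The click rates, valuations, and bid grid below are illustrative assumptions, not the paper's exact model.

```python
# Two slots with click-through rates c1 > c2; reserve price 0.
c1, c2 = 1.0, 0.5
v = (10.0, 6.0)                       # advertisers' per-click valuations

def payoff(my_bid, other_bid, my_value):
    if my_bid > other_bid:            # win the top slot, pay the rival's bid
        return c1 * (my_value - other_bid)
    return c2 * my_value              # bottom slot, pay the reserve (0)

def is_nash(bids, grid):
    """True if no advertiser can gain by unilaterally changing its bid."""
    for i in (0, 1):
        j = 1 - i
        best = max(payoff(b, bids[j], v[i]) for b in grid)
        if payoff(bids[i], bids[j], v[i]) < best - 1e-9:
            return False
    return True

grid = [x / 10 for x in range(121)]   # bid grid 0.0 .. 12.0
```

For these numbers, the profile (5.0, 3.0) is an equilibrium: the loser's bid of 3.0 is exactly high enough that neither player prefers to swap slots, while a low profile such as (0.5, 0.4) invites a profitable overbid.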
Abstract:
In a computational grid, the presence of grid resource providers who are rational and intelligent could lead to an overall degradation in the efficiency of the grid. In this paper, we design incentive compatible grid resource procurement mechanisms which ensure that the efficiency of the grid is not affected by the rational behavior of resource providers. In particular, we offer three elegant incentive compatible mechanisms for this purpose: (1) the G-DSIC (Grid-Dominant Strategy Incentive Compatible) mechanism; (2) the G-BIC (Grid-Bayesian Nash Incentive Compatible) mechanism; and (3) the G-OPT (Grid-Optimal) mechanism, which minimizes the cost to the grid user while satisfying (a) Bayesian incentive compatibility and (b) individual rationality. We evaluate the relative merits and demerits of the three mechanisms using game-theoretic analysis and numerical experiments.
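To see what dominant-strategy incentive compatibility means in a procurement setting, the textbook example is a second-price reverse auction: the buyer procures from the lowest bidder and pays the second-lowest bid, so reporting one's true cost is weakly dominant. This is a generic illustration of the DSIC property, not the G-DSIC mechanism itself, and the costs are invented.

```python
def procure(bids):
    """Reverse second-price auction: (winner index, payment to winner)."""
    order = sorted(range(len(bids)), key=lambda i: bids[i])
    return order[0], bids[order[1]]   # lowest bidder wins, second-lowest price

def utility(true_cost, my_bid, others):
    """Provider 0's profit when bidding `my_bid` against rival bids."""
    winner, pay = procure([my_bid] + list(others))
    return pay - true_cost if winner == 0 else 0.0

# Truthful bidding is (weakly) optimal against any rival bids.
true_cost, others = 4.0, [5.0, 7.0]
truthful = utility(true_cost, true_cost, others)
deviations = [utility(true_cost, b, others) for b in (1.0, 3.0, 4.5, 6.0, 9.0)]
```

Underbidding never raises the payment (it is set by the rivals) and overbidding only risks losing a profitable contract, which is exactly why rational providers have no incentive to misreport under such a rule.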
Abstract:
The memory subsystem is a major contributor to the performance, power, and area of the complex SoCs used in feature-rich multimedia products. Hence the memory architecture of an embedded DSP is complex and usually custom designed, with multiple banks of single-ported or dual-ported on-chip scratch-pad memory and multiple banks of off-chip memory. Building software for such large, complex memories, with many of the software components being individually optimized software IPs, is a big challenge. To obtain good performance and a reduction in memory stalls, the data buffers of the application need to be placed carefully in the different types of memory. In this paper we present a unified framework (MODLEX) that combines different data layout optimizations to address complex DSP memory architectures. Our method models the data layout problem as a multi-objective genetic algorithm (GA) with performance and power as the objectives, and produces a set of solution points that is attractive from a platform design viewpoint. While most of the work in the literature assumes that performance and power are non-conflicting objectives, our work demonstrates that a significant trade-off (up to 70%) is possible between power and performance.
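The shape of such a formulation can be sketched with a toy two-bank model: each buffer is assigned to one of two memories with different per-access cycle and energy costs, and a small steady-state GA keeps a population whose non-dominated members approximate the performance/power front. The memory model, cost numbers, and GA parameters are assumptions for illustration, not MODLEX itself.

```python
import random

random.seed(3)
accesses = [90, 70, 50, 30, 10]                    # accesses per data buffer
# (cycles, energy) per access for (bank 0, bank 1), per buffer; illustrative
COST = [((1, 5), (4, 1)), ((1, 6), (3, 1)), ((1, 4), (5, 2)),
        ((1, 5), (2, 3)), ((1, 7), (6, 1))]

def evaluate(layout):                               # layout: bank per buffer
    c = sum(a * COST[i][b][0] for i, (a, b) in enumerate(zip(accesses, layout)))
    e = sum(a * COST[i][b][1] for i, (a, b) in enumerate(zip(accesses, layout)))
    return c, e

def dominates(p, q):                                # both objectives minimized
    return p[0] <= q[0] and p[1] <= q[1] and p != q

pop = [[random.randint(0, 1) for _ in accesses] for _ in range(20)]
for _ in range(100):                                # steady-state GA loop
    p1, p2 = random.sample(pop, 2)
    cut = random.randrange(1, len(accesses))        # one-point crossover
    child = p1[:cut] + p2[cut:]
    child[random.randrange(len(child))] ^= 1        # point mutation
    pop.append(child)
    dominated = next((x for x in pop                # cull a dominated layout
                      if any(dominates(evaluate(y), evaluate(x)) for y in pop)),
                     None)
    pop.remove(dominated if dominated is not None else random.choice(pop))

scores = {evaluate(x) for x in pop}
front = sorted(f for f in scores if not any(dominates(g, f) for g in scores))
```

Each point on `front` is a cycles/energy pair no other surviving layout beats on both objectives, which is the set of trade-off points a platform designer would inspect.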
Abstract:
Today's feature-rich multimedia products require embedded-system solutions with complex System-on-Chip (SoC) designs to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of the embedded system strongly influences these parameters, so the embedded-system designer performs a complete memory architecture exploration. This is a multi-objective optimization problem and can be tackled as a two-level problem: the outer level explores various memory architectures, while the inner level explores the placement of data sections (the data layout problem) to minimize memory stalls. Further, the designer is interested in multiple optimal design points to address various market segments, yet tight time-to-market constraints enforce a short design cycle. In this paper we address the multi-level, multi-objective memory architecture exploration problem through a combination of a multi-objective genetic algorithm (for memory architecture exploration) and an efficient heuristic data placement algorithm. At the outer level, the memory architecture exploration is done by picking memory modules directly from an ASIC memory library. This allows the exploration to be performed in an integrated framework, where memory allocation, memory exploration, and data layout work in a tightly coupled way to yield design points that are optimal with respect to area, power, and performance. We evaluated our approach on three embedded applications; it explores several thousand memory architectures for each application, yielding a few hundred optimal design points in a few hours of computation time on a standard desktop.
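A common shape for the inner-level heuristic is a greedy packing: rank buffers by accesses per byte and fill the on-chip scratch-pad memory with the hottest ones, sending the rest off-chip. The buffer sizes, access counts, capacity, and stall costs below are invented for illustration, not the paper's data or its exact heuristic.

```python
buffers = {                       # name: (size in bytes, total accesses)
    "coeffs": (2048, 90_000),
    "frame": (8192, 40_000),
    "scratch": (1024, 60_000),
    "history": (4096, 5_000),
}
SPM_CAPACITY = 4096               # on-chip scratch-pad bytes (assumed)
ONCHIP_STALLS, OFFCHIP_STALLS = 0, 10   # extra cycles per access (assumed)

def place(buffers, capacity):
    """Greedy placement: hottest bytes (accesses per byte) go on-chip first."""
    layout, free = {}, capacity
    ranked = sorted(buffers, key=lambda n: buffers[n][1] / buffers[n][0],
                    reverse=True)
    for name in ranked:
        size, _ = buffers[name]
        if size <= free:
            layout[name] = "spm"
            free -= size
        else:
            layout[name] = "offchip"
    return layout

def total_stalls(layout):
    return sum(buffers[n][1] * (ONCHIP_STALLS if m == "spm" else OFFCHIP_STALLS)
               for n, m in layout.items())

layout = place(buffers, SPM_CAPACITY)
```

Such a heuristic is cheap enough to run thousands of times, once per candidate memory architecture generated by the outer-level search.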
Abstract:
Displacement-amplifying compliant mechanisms (DaCMs) reported in the literature are mostly used for actuator applications. This paper considers them for sensor applications that rely on displacement measurement, and evaluates them objectively. The main goal is to increase sensitivity under several secondary requirements and practical constraints. A spring-mass-lever model that effectively captures the addition of a DaCM to a sensor is used to compare eight DaCMs. We observe that they differ significantly in performance criteria such as geometric advantage, stiffness, natural frequency, mode amplification, factor of safety against failure, and cross-axis stiffness, but none excels in all. A combined figure of merit is therefore proposed, with which the most suitable DaCM can be selected for a sensor application. Case studies of a micromachined capacitive accelerometer and of a vision-based force sensor illustrate the general evaluation and selection procedure for DaCMs in specific applications. Other insights gained from the analysis include the optimum size scale for a DaCM, the effect of the DaCM on natural frequency, limits on its stiffness, and the working range of the sensor.
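One simple way to combine several competing criteria into a single figure of merit is a weighted geometric mean of the criteria normalized by the best value in the candidate set. The candidate DaCM data, the three criteria chosen, and the weights below are invented for illustration; they are not the paper's eight DaCMs or its actual figure of merit.

```python
import math

candidates = {   # name: (geometric advantage, natural frequency Hz, stiffness N/m)
    "DaCM-A": (20.0, 4000.0, 150.0),
    "DaCM-B": (35.0, 1500.0, 90.0),
    "DaCM-C": (12.0, 8000.0, 400.0),
}
weights = (0.5, 0.3, 0.2)        # assumed emphasis on sensitivity amplification

def figure_of_merit(values):
    """Weighted geometric mean of criteria normalized by the best candidate."""
    best = [max(c[i] for c in candidates.values()) for i in range(3)]
    return math.prod((v / b) ** w for v, b, w in zip(values, best, weights))

ranked = sorted(candidates, key=lambda n: figure_of_merit(candidates[n]),
                reverse=True)
```

Because every criterion is normalized to (0, 1], the score rewards balanced performance: a candidate that is mediocre on the heavily weighted criterion can still win if it dominates the others, which mirrors the observation that no single DaCM excels in all criteria.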