967 results for Optimal design
Abstract:
This paper examines the optimal design of climate change policies in a context where governments want to encourage the private sector to undertake significant immediate investment in developing cleaner technologies, but the carbon taxes and other environmental policies that could in principle stimulate such investment will be imposed over a very long future. The conventional claim by environmental economists is that environmental policies alone are sufficient to induce firms to undertake optimal investment. However, this argument requires governments to be able to commit to these future taxes, and it is far from clear that governments have this degree of commitment. We assume instead that governments cannot commit, so both they and the private sector have to contemplate the possibility of there being governments in power in the future that give different (relative) weights to the environment. We show that this lack of commitment has a significant asymmetric effect. Compared with the situation where governments can commit, it increases the incentive of the current government to have the investment undertaken, but reduces the incentive of the private sector to invest. Consequently, governments may need to use additional policy instruments – such as R&D subsidies – to stimulate the required investment.
Abstract:
The goal of this paper is to reexamine the optimal design and efficiency of loyalty rewards in markets for final consumption goods. While the literature has emphasized the role of loyalty rewards as endogenous switching costs (which distort the efficient allocation of consumers), in this paper I analyze the ability of alternative designs to foster consumer participation and increase total surplus. First, the efficiency of loyalty rewards depends on their specific design. A commitment to the price of repeat purchases can involve substantial efficiency gains by reducing price-cost margins. However, discount policies imply higher future regular prices and are likely to reduce total surplus. Second, firms may prefer to set up inefficient rewards (discounts), especially in those circumstances where a commitment to the price of repeat purchases triggers Coasian dynamics.
Abstract:
Optimum experimental designs depend on the design criterion, the model and the design region. The talk will consider the design of experiments for regression models in which there is a single response with the explanatory variables lying in a simplex. One example is experiments on various compositions of glass such as those considered by Martin, Bursnall, and Stillman (2001). Because of the highly symmetric nature of the simplex, the class of models that are of interest, typically Scheffé polynomials (Scheffé 1958), are rather different from those of standard regression analysis. The optimum designs are also rather different, inheriting a high degree of symmetry from the models. In the talk I hope to discuss a variety of models for such experiments. Then I will discuss constrained mixture experiments, when not all of the simplex is available for experimentation. Other important aspects include mixture experiments with extra non-mixture factors and the blocking of mixture experiments. Much of the material is in Chapter 16 of Atkinson, Donev, and Tobias (2007). If time and my research allow, I hope to finish with a few comments on design when the responses, rather than the explanatory variables, lie in a simplex.
References
Atkinson, A. C., A. N. Donev, and R. D. Tobias (2007). Optimum Experimental Designs, with SAS. Oxford: Oxford University Press.
Martin, R. J., M. C. Bursnall, and E. C. Stillman (2001). Further results on optimal and efficient designs for constrained mixture experiments. In A. C. Atkinson, B. Bogacka, and A. Zhigljavsky (Eds.), Optimal Design 2000, pp. 225–239. Dordrecht: Kluwer.
Scheffé, H. (1958). Experiments with mixtures. Journal of the Royal Statistical Society, Ser. B 20, 344–360.
Abstract:
We lay out a tractable model for fiscal and monetary policy analysis in a currency union, and study its implications for the optimal design of such policies. Monetary policy is conducted by a common central bank, which sets the interest rate for the union as a whole. Fiscal policy is implemented at the country level, through the choice of government spending. The model incorporates country-specific shocks and nominal rigidities. Under our assumptions, the optimal cooperative policy arrangement requires that inflation be stabilized at the union level by the common central bank, while fiscal policy is used by each country for stabilization purposes. By contrast, when the fiscal authorities act in a non-coordinated way, their joint actions lead to a suboptimal outcome, and make the common central bank face a trade-off between inflation and output gap stabilization at the union level.
Abstract:
Modeling concentration-response functions became extremely popular in ecotoxicology during the last decade. Indeed, modeling allows determining the total response pattern of a given substance. However, reliable modeling is demanding in terms of data, which is in contradiction with the current trend in ecotoxicology, which aims to reduce, for cost and ethical reasons, the number of data points produced during an experiment. It is therefore crucial to determine experimental designs in a cost-effective manner. In this paper, we propose to use the theory of locally D-optimal designs to determine the set of concentrations to be tested so that the parameters of the concentration-response function can be estimated with high precision. We illustrate this approach by determining the locally D-optimal designs to estimate the toxicity of the herbicide dinoseb on daphnids and algae. The results show that the number of concentrations to be tested is often equal to the number of parameters and often related to their meaning, i.e., the concentrations are located close to the parameter values. Furthermore, the results show that the locally D-optimal design often has the minimal number of support points and is not very sensitive to small changes in the nominal values of the parameters. In order to reduce the experimental cost and the use of test organisms, especially in the case of long-term studies, reliable nominal values may therefore be fixed based on prior knowledge and literature research instead of on preliminary experiments.
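The local D-optimality idea can be sketched in a few lines of Python. The two-parameter log-logistic curve, the nominal values, and both candidate designs below are hypothetical illustrations, not the dinoseb data of the paper: a locally D-optimal design maximizes the determinant of the Fisher information matrix evaluated at nominal parameter values.

```python
def response(c, ec50, slope):
    # hypothetical two-parameter log-logistic concentration-response curve
    return 1.0 / (1.0 + (c / ec50) ** slope)

def sensitivity(c, ec50, slope, eps=1e-6):
    # numerical derivatives of the response with respect to each parameter
    d_ec50 = (response(c, ec50 + eps, slope) - response(c, ec50 - eps, slope)) / (2 * eps)
    d_slope = (response(c, ec50, slope + eps) - response(c, ec50, slope - eps)) / (2 * eps)
    return d_ec50, d_slope

def d_criterion(design, ec50, slope):
    # determinant of the 2x2 Fisher information matrix (unit error variance)
    m11 = m12 = m22 = 0.0
    for c in design:
        g1, g2 = sensitivity(c, ec50, slope)
        m11 += g1 * g1
        m12 += g1 * g2
        m22 += g2 * g2
    return m11 * m22 - m12 * m12

# with nominal values ec50 = 1, slope = 2, a design replicating two
# well-placed points beats six points spread over a wide range
two_point = [0.6] * 3 + [1.7] * 3
spread = [0.1, 2.08, 4.06, 6.04, 8.02, 10.0]
```

With these (invented) nominal values, `d_criterion(two_point, 1.0, 2.0)` exceeds `d_criterion(spread, 1.0, 2.0)`, matching the abstract's observation that the number of support points often equals the number of parameters.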
Abstract:
The need for high performance, high precision, and energy saving in rotating machinery demands an alternative solution to traditional bearings. Because of the contactless operation principle, rotating machines employing active magnetic bearings (AMBs) provide many advantages over traditional ones. Advantages such as contamination-free operation, low maintenance costs, high rotational speeds, low parasitic losses, programmable stiffness and damping, and vibration insulation come at the expense of high cost and a complex technical solution. All these properties make the use of AMBs appropriate primarily for specific and highly demanding applications. High-performance and high-precision control requires model-based control methods and accurate models of the flexible rotor. In turn, complex models lead to high-order controllers and impose a considerable computational burden. Fortunately, in the last few years advancements in signal processing devices have provided a new perspective on the real-time control of AMBs. The design and the real-time digital implementation of high-order LQ controllers, with a focus on fast execution times, are the subjects of this work. In particular, control design and implementation in field programmable gate array (FPGA) circuits are investigated. The optimal design is guided by the physical constraints of the system for selecting the optimal weighting matrices. The plant model is complemented by augmenting appropriate disturbance models. The compensation of the force-field nonlinearities is proposed for decreasing the uncertainty of the actuator. A disturbance-observer-based unbalance compensation for canceling the magnetic force vibrations or vibrations in the measured positions is presented. The theoretical studies are verified by practical experiments utilizing a custom-built laboratory test rig. The test rig uses a prototyping control platform developed in the scope of this work.
To sum up, the work takes a step in the direction of an embedded single-chip FPGA-based controller of AMBs.
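As a toy illustration of the LQ design step (a generic sketch, not the high-order FPGA controller of the thesis), the following computes a discrete-time LQ gain for a scalar plant by iterating the Riccati difference equation; the plant and weight numbers are made up:

```python
def dlqr_scalar(a, b, q, r, iters=500):
    # Iterate the discrete-time Riccati equation
    #   P = Q + A'PA - A'PB (R + B'PB)^-1 B'PA
    # for a scalar plant x[k+1] = a*x[k] + b*u[k] with cost weights q, r.
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    k = a * b * p / (r + b * b * p)  # optimal state-feedback gain, u = -k*x
    return k, p

# hypothetical unstable plant with open-loop pole at 1.2
k, p = dlqr_scalar(a=1.2, b=1.0, q=1.0, r=1.0)
closed_loop_pole = 1.2 - 1.0 * k  # stabilized: lies inside the unit circle
```

The ratio of the weights `q` and `r` sets the trade-off between state regulation and control effort, which is the scalar analogue of selecting the weighting matrices from the physical constraints of the system.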
Abstract:
The optimal design of a heat exchanger system is based on given model parameters together with given standard ranges for the machine design variables. The goals of minimizing the Life Cycle Cost (LCC) function, which represents the price of the saved energy, and of maximizing the momentary heat recovery output with the given constraints satisfied, while taking into account the uncertainty in the models, were successfully met. The Nondominated Sorting Genetic Algorithm II (NSGA-II) for the design optimization of a system is presented and implemented in the Matlab environment. Markov Chain Monte Carlo (MCMC) methods are also used to take into account the uncertainty in the models. Results show that the price of saved energy can be optimized. A wet heat exchanger is found to be more efficient and beneficial than a dry heat exchanger even though its construction is expensive (160 EUR/m2) compared to the construction of a dry heat exchanger (50 EUR/m2). It has been found that a longer lifetime favors higher CAPEX and lower OPEX, and vice versa; the effect of the uncertainty in the models has been identified in a simplified case of minimizing the area of a dry heat exchanger.
Abstract:
Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
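A minimal random-walk Metropolis sampler illustrates the MCMC parameter estimation idea described above (a generic sketch, not the thesis's methods for computationally intensive models); the Gaussian model and the data are invented for illustration:

```python
import math
import random

def log_post(theta, data, sigma=1.0):
    # flat prior; Gaussian likelihood with known noise level sigma
    return -sum((x - theta) ** 2 for x in data) / (2 * sigma ** 2)

def metropolis(data, n_steps=20000, step=0.5, seed=1):
    # random-walk Metropolis: propose, then accept with prob min(1, ratio)
    rng = random.Random(seed)
    theta = 0.0
    lp = log_post(theta, data)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop, data)
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain
```

Discarding an initial burn-in and averaging the rest of the chain approximates the posterior mean, and the spread of the chain quantifies the parameter uncertainty, rather than relying on a point estimate with a Gaussian approximation.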
Abstract:
This paper presents the optimal design of a surface-mounted permanent magnet Brushless DC motor (PMBLDC) meant for spacecraft applications. Spacecraft applications require the choice of a torque motor with high torque density, minimum cogging torque, better positional stability and a high torque-to-inertia ratio. The performance of two types of machine configurations, viz. slotted PMBLDC and slotless PMBLDC with a Halbach array, is compared with the help of analytical and FE methods. It is found that, unlike a slotted PMBLDC motor, the slotless type with a Halbach array develops zero cogging torque without reduction in the developed torque. Moreover, the machine being coreless provides a high torque-to-inertia ratio and zero magnetic stiction.
Abstract:
In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: 1) monotonically increasing functions and 2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC-style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (Mackay, 1992).
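For the monotone case, the active strategy can be caricatured in a few lines: greedily query the midpoint of the interval with the largest worst-case uncertainty. This is a simplified sketch using width times rise as the uncertainty measure, not the paper's exact algorithm:

```python
def active_sample_monotone(f, a, b, n_queries):
    # Greedy active sampling for a monotone increasing f on [a, b]:
    # repeatedly split the interval whose worst-case uncertainty,
    # here (width) * (rise), is largest.
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n_queries):
        i = max(range(len(xs) - 1),
                key=lambda k: (xs[k + 1] - xs[k]) * (ys[k + 1] - ys[k]))
        mid = 0.5 * (xs[i] + xs[i + 1])
        xs.insert(i + 1, mid)
        ys.insert(i + 1, f(mid))
    return xs, ys
```

On a test function such as f(x) = x^3 over [0, 1], the sampled points concentrate where the function rises fastest, unlike a passive uniform grid.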
Abstract:
The HIRDLS instrument contains 21 spectral channels spanning a wavelength range from 6 to 18 µm. For each of these channels the spectral bandwidth and position are isolated by an interference bandpass filter at 301 K placed at an intermediate focal plane of the instrument. A second filter, cooled to 65 K, positioned at the same wavelength but designed with a wider bandwidth, is placed directly in front of each cooled detector element to reduce stray radiation from internally reflected in-band signals and to improve the out-of-band blocking. This paper describes the process of determining the spectral requirements for the two bandpass filters and the antireflection coatings used on the lenses and dewar window of the instrument. This process uses a system throughput performance approach taking the instrument spectral specification as a target. It takes into account the spectral characteristics of the transmissive optical materials, the relative spectral response of the detectors, thermal emission from the instrument, and the predicted atmospheric signal to determine the radiance profile for each channel. Using this design approach an optimal design for the filters can be achieved, minimising the number of layers to improve the in-band transmission and to aid manufacture. The use of this design method also permits the instrument spectral performance to be verified using the measured response from manufactured components. The spectral calculations for an example channel are discussed, together with the spreadsheet calculation method. All the contributions made by the spectrally active components to the resulting instrument channel throughput are identified and presented.
Abstract:
The purpose of this work is to provide a brief overview of the literature on the optimal design of unemployment insurance systems by analyzing some of the most influential articles published over the last three decades on the subject, and to extend the main results to a multiple-aggregate-shocks environment. The properties of optimal contracts are discussed in light of the key assumptions commonly made in theoretical publications in the area. Moreover, the implications of relaxing each of these hypotheses are considered as well. The analysis of models of only one unemployment spell starts from the seminal work of Shavell and Weiss (1979). In a simple and common setting, unemployment benefit policies, wage taxes and search effort assignments are covered. Further, the idea that the UI distortion of the relative price of leisure and consumption is the only explanation for the marginal incentives to search for a job is discussed, putting into question the reduction in labor supply caused by social insurance, usually interpreted solely as evidence of a dynamic moral hazard caused by a substitution effect. In addition, the paper presents one characterization of optimal unemployment insurance contracts in environments in which workers experience multiple unemployment spells. Finally, an extension to a multiple-aggregate-shocks environment is considered. The paper ends with a numerical analysis of the implications of i.i.d. shocks for the optimal unemployment insurance mechanism.
Abstract:
This work analyzes the optimal design of an unemployment insurance program for couples, whose joint search problem in the labor market differs significantly from the problem faced by single agents. We use a version of the sequential search model of the labor market adapted to married agents to compare optimal constant policies for single and married agents, as well as to characterize the optimal constant policy when the agency faces single and married agents simultaneously. Our main result is that an agency that gives equal weights to single and married agents will want to give equal utility promises to both types of agents and spend more on the single agent.