961 results for Selection Problems
Abstract:
This work analyses the optimal menu of contracts offered by a risk-neutral principal to a risk-averse agent under moral hazard, adverse selection and limited liability. There are two output levels, whose probabilities of occurrence are determined by the agent's privately known choice of effort. The agent's cost of effort is also private information. First, we show that without assumptions on the cost function it is not possible to guarantee that the optimal contract menu is simple when the agent is strictly risk averse. Then, we provide sufficient conditions on the cost function under which it is optimal to offer a single contract, independently of the agent's risk aversion. Our full-pooling cases are caused by non-responsiveness, which is induced by the high cost of enforcing higher effort levels. We also show that limited liability generates non-responsiveness.
Abstract:
The procedure used should guarantee that the risk of the decision asserted from the observations is at most some specified value P*.
Abstract:
In this paper we propose a simple method of characterizing countervailing incentives in adverse selection problems. The key element in our characterization consists of analyzing properties of the full-information problem. This allows solving the principal's problem without using optimal control theory. Our methodology can also be applied to different economic settings: health economics, monopoly regulation, labour contracts, limited liability and environmental regulation.
Abstract:
Almost all material selection problems require that a compromise be sought between some metric of performance and cost. Trade-off methods using utility functions allow optimal solutions to be found for two objectives, but for three it is harder. This paper develops and demonstrates a method for dealing with three objectives.
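The utility-function trade-off the abstract describes can be sketched in a few lines: each objective is mapped onto a common scale with an exchange constant, and the material minimizing the combined penalty is selected. The materials, property values and exchange constants below are invented for illustration, not taken from the paper.

```python
# Minimal sketch of a three-objective utility trade-off for material
# selection. All data and exchange constants are hypothetical.

materials = {
    # name: (mass_kg, cost_usd, co2_kg) -- three objectives, all minimized
    "Al-6061": (1.2, 8.0, 14.0),
    "CFRP":    (0.6, 45.0, 30.0),
    "Steel":   (2.5, 3.0, 6.0),
}

# Exchange constants convert mass and CO2 into the common (monetary) scale.
ALPHA_MASS = 20.0  # USD the designer would pay per kg saved
ALPHA_CO2 = 0.5    # USD per kg of CO2 avoided

def penalty(props):
    mass, cost, co2 = props
    return ALPHA_MASS * mass + cost + ALPHA_CO2 * co2

best = min(materials, key=lambda name: penalty(materials[name]))
print(best, penalty(materials[best]))
```

Changing the exchange constants traces out different points on the three-objective trade-off surface, which is the judgment the paper's method is meant to systematize.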
Abstract:
The objective of this dissertation is to re-examine classical issues in corporate finance, applying a new analytical tool. The single-crossing property, also called the Spence-Mirrlees condition, is not required in the models developed here. This property has been a standard assumption in the adverse selection and signaling models developed so far; the classical papers by Guesnerie and Laffont (1984) and Riley (1979) assume it. In the simplest case, for a consumer with a privately known taste, the single-crossing property states that the marginal utility of a good is monotone with respect to the taste. This assumption has an important consequence for the result of the model: the relationship between the private parameter and the quantity of the good assigned to the agent is monotone. While single crossing is a reasonable property for the utility of an ordinary consumer, this property is frequently absent from the objective functions of the agents in more elaborate models. The lack of a characterization for the non-single-crossing context has hindered the exploration of models that generate objective functions without this property. The first work that characterizes the optimal contract without the single-crossing property is Araújo and Moreira (2001a) and, for the competitive case, Araújo and Moreira (2001b). The main implication is that a partial separation of types may be observed: two disconnected sets of agent types may choose the same contract, in adverse selection problems, or signal with the same level of signal, in signaling models.
Abstract:
The survival of organisations, especially SMEs, depends to a great extent on those who supply them with the required material inputs. If a supplier fails to deliver the right materials at the right time and place, and at the right price, then the recipient organisation is bound to fail in its obligations to satisfy its customers' needs and to stay in business. Hence, the task of choosing a supplier(s) from a list of vendors, one that an organisation will trust with its very existence, is not an easy one. This project investigated how purchasing personnel in organisations solve the problem of vendor selection. The investigation went further, to ascertain whether an Expert Systems model could be developed and used as a plausible solution to the problem. An extensive literature review indicated that very little research has been conducted in the area of Expert Systems for Vendor Selection, although much research in expert systems and in purchasing and supply chain management, respectively, had been reported. A survey questionnaire was designed and circulated to people in industry who actually perform vendor selection tasks. Analysis of the collected data confirmed the various factors that are considered during the selection process, and established the order in which those factors are ranked. Five of the factors, namely Production Methods Used, Vendor's Financial Background, Manufacturing Capacity, Size of Vendor Organisation, and Supplier's Position in the Industry, showed similar patterns in the way organisations ranked them. These patterns suggested that the bigger the organisation, the more importance it attached to the above factors. Further investigation revealed that respondents agreed that the most important factors were Product Quality, Product Price and Delivery Date. The most apparent pattern was observed for the Vendor's Financial Background.
This generated curiosity, which led to the design and development of a prototype expert system for assessing the financial profile of a potential supplier(s). The prototype, called ESfNS, determines whether a prospective supplier(s) has a good financial background or not. ESfNS was tested by potential users, who then confirmed that expert systems have great prospects and commercial viability in this domain for solving vendor selection problems.
Abstract:
When composing stock portfolios, managers frequently choose among hundreds of stocks. The stocks' risk properties are analyzed with statistical tools, and managers try to combine these to meet the investors' risk profiles. A recently developed tool for performing such optimization is called full-scale optimization (FSO). This methodology is very flexible with respect to investor preferences, but because of computational limitations it has until now been infeasible when many stocks are considered. We apply the artificial intelligence technique of differential evolution to solve FSO-type stock selection problems of 97 assets. Differential evolution finds the optimal solutions by self-learning from randomly drawn candidate solutions. We show that this search technique makes large-scale problems computationally feasible and that the solutions retrieved are stable. The study also gives further merit to the FSO technique, as it shows that the solutions suit investor risk profiles better than portfolios retrieved from traditional methods.
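The core of differential evolution, as used in the abstract above, fits in a short stdlib sketch: each population member is perturbed by the scaled difference of two others and the trial replaces its parent only if it scores better. The toy objective, dimensionality and control parameters below are invented for illustration; the paper optimizes investor utility over 97 assets.

```python
import random

random.seed(0)

def de_minimize(f, dim, pop_size=20, gens=300, F=0.6, CR=0.9, bounds=(0.0, 1.0)):
    """Plain differential evolution (DE/rand/1/bin) on box-bounded vectors."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Three distinct members other than i drive the mutation.
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d in range(dim):
                if random.random() < CR:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                else:
                    v = pop[i][d]
                trial.append(min(max(v, lo), hi))  # clip to bounds
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection: keep the better of parent/trial
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda k: fit[k])
    return pop[best], fit[best]

# Toy "investor disutility" with a known optimum at w = (0.5, 0.3, 0.2).
target = (0.5, 0.3, 0.2)
obj = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))
w_best, f_best = de_minimize(obj, dim=3)
print(f_best)
```

Because selection is purely rank-based, the same loop works for the non-smooth, non-concave utility functions that make FSO attractive, which is what rules out gradient methods there.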
Abstract:
The present article assesses agency theory problems contributing to the fall of shopping centers. The negative effects of the financial and economic downturn that started in 2008 were accentuated in emerging markets like Romania, where several shopping centers were closed or sold through bankruptcy proceedings or forced execution. Ten of these failed shopping centers were selected in order to assess the agency theory problems contributing to their failure; qualitative multiple case studies are used as the research method. Results suggest that in all of the cases the risk-averse behavior of the External Investor (Principal) led to risk-sharing problems and subsequently to the fall of the shopping centers. In some of the cases Moral Hazard (lack of the Developer-Agent's know-how and experience) as well as Adverse Selection problems could be identified. The novelty of the topic for the shopping center industry and the empirical evidence confer significant academic and practical value on the present article.
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space: classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm: using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space: classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm: using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
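The "kernel matrix as implicit embedding" idea in the two abstracts above can be made concrete without any SDP machinery: build a kernel matrix from data and observe that it behaves like a matrix of inner products. The points, kernel choice (RBF) and bandwidth below are invented for illustration; the papers go further and learn the matrix itself.

```python
import math
import random

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) kernel: an implicit inner product between a and b."""
    return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Hypothetical 2-D inputs; K[i][j] encodes similarity of points i and j.
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (1.5, 1.5)]
n = len(points)
K = [[rbf(p, q) for q in points] for p in points]

# Symmetry: K[i][j] == K[j][i], as any inner-product matrix must satisfy.
symmetric = all(abs(K[i][j] - K[j][i]) < 1e-12 for i in range(n) for j in range(n))

# Positive semidefiniteness, checked empirically: x^T K x >= 0 for random x.
random.seed(1)
psd_ok = True
for _ in range(100):
    x = [random.uniform(-1.0, 1.0) for _ in range(n)]
    quad = sum(x[i] * K[i][j] * x[j] for i in range(n) for j in range(n))
    psd_ok = psd_ok and quad >= -1e-9

print(symmetric, psd_ok)
```

The papers' contribution is to treat the entries of K themselves as optimization variables, with the positive-semidefiniteness just verified here imposed as the SDP constraint.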
Abstract:
Web service technology is increasingly being used to build various e-Applications, in domains such as e-Business and e-Science. Characteristic benefits of web service technology are its interoperability, decoupling and just-in-time integration. Using web service technology, an e-Application can be implemented by web service composition: existing individual web services are composed in accordance with the business process of the application, and the application is provided to customers in the form of a value-added composite web service. An important and challenging issue in web service composition is how to meet Quality-of-Service (QoS) requirements. These include customer-focused elements such as response time, price, throughput and reliability, as well as how best to provide QoS results for the composites. This in turn best fulfils customers' expectations and achieves their satisfaction. Fulfilling these QoS requirements, or addressing the QoS-aware web service composition problem, is the focus of this project. From a computational point of view, QoS-aware web service composition can be transformed into diverse optimisation problems. These problems are characterised as complex, large-scale, highly constrained and multi-objective. We therefore use genetic algorithms (GAs) to address QoS-based service composition problems. More precisely, this study addresses three important subproblems of QoS-aware web service composition: QoS-based web service selection for a composite web service accommodating constraints on inter-service dependence and conflict, QoS-based resource allocation and scheduling for multiple composite services on hybrid clouds, and performance-driven composite service partitioning for decentralised execution. Based on operations research theory, we model the three problems as a constrained optimisation problem, a resource allocation and scheduling problem, and a graph partitioning problem, respectively.
Then, we present novel GAs to address these problems. We also conduct experiments to evaluate the performance of the new GAs, and perform verification experiments to show their correctness. The major outcomes from the first problem are three novel GAs: a penalty-based GA, a min-conflict hill-climbing repairing GA, and a hybrid GA. These GAs adopt different constraint-handling strategies for constraints on inter-service dependence and conflict, an important factor that has been largely ignored by existing algorithms and that can lead to the generation of infeasible composite services. Experimental results demonstrate the effectiveness of our GAs for handling the QoS-based web service selection problem with constraints on inter-service dependence and conflict, as well as their better scalability than the existing integer programming-based method for large-scale web service selection problems. The major outcomes from the second problem are two GAs: a random-key GA and a cooperative coevolutionary GA (CCGA). Experiments demonstrate the good scalability of the two algorithms. In particular, the CCGA scales well as the number of composite services involved in a problem increases, while no other algorithm demonstrates this ability. The findings from the third problem result in a novel GA for composite service partitioning for decentralised execution. Compared with existing heuristic algorithms, the new GA is more suitable for large-scale composite web service partitioning problems. In addition, the GA outperforms existing heuristic algorithms, generating a better deployment topology for a composite web service for decentralised execution. These effective and scalable GAs can be integrated into QoS-based management tools to facilitate the delivery of feasible, reliable and high-quality composite web services.
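The penalty-based constraint handling mentioned above can be illustrated on a toy service-selection instance: violated dependence or conflict constraints add a large penalty to a candidate's QoS cost, so the search is steered toward feasible compositions. The tasks, candidate services, QoS values and constraints below are invented; the instance is small enough to brute-force here, whereas the thesis searches with GAs.

```python
import itertools

# Two abstract tasks, each with candidate concrete services and a QoS cost
# (hypothetical data; lower cost is better).
candidates = {"task1": {"s1": 5.0, "s2": 3.0}, "task2": {"s3": 4.0, "s4": 2.0}}
dependencies = [("s2", "s3")]  # choosing s2 requires also choosing s3
conflicts = [("s1", "s4")]     # s1 and s4 cannot appear together
PENALTY = 100.0                # large enough to dominate any feasible cost

def fitness(selection):
    """QoS cost plus a penalty for every violated constraint (minimized)."""
    chosen = set(selection.values())
    cost = sum(candidates[task][svc] for task, svc in selection.items())
    for a, b in dependencies:
        if a in chosen and b not in chosen:
            cost += PENALTY
    for a, b in conflicts:
        if a in chosen and b in chosen:
            cost += PENALTY
    return cost

# Enumerate all assignments of one concrete service per task.
best = min(
    (dict(zip(candidates, combo))
     for combo in itertools.product(*(candidates[t] for t in candidates))),
    key=fitness,
)
print(best, fitness(best))
```

Note that the cheapest unconstrained choice (s2 with s4) is rejected because it violates the dependence constraint; a GA using this fitness would be pushed toward the feasible pair (s2, s3) in the same way.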
Abstract:
Composite resins and glass-ionomer cements were introduced to dentistry in the 1960s and 1970s, respectively. Since then, there has been a series of modifications to both materials, as well as the development of other groups of materials claiming intermediate characteristics between the two. The result is a confusing array of materials, leading to selection problems. While both materials are tooth-colored, there is a considerable difference in their properties, and it is important that each is used in the appropriate situation. Composite resin materials are esthetic and now show acceptable physical strength and wear resistance. However, they are hydrophobic, and therefore more difficult to handle in the oral environment, and cannot support ion migration. Also, the problems of gaining long-term adhesion to dentin have yet to be overcome. On the other hand, glass ionomers are water-based and therefore have the potential for ion migration, both inward and outward from the restoration, leading to a number of advantages. However, they lack the physical properties required for use in load-bearing areas. A logical classification designed to differentiate the materials was first published by McLean et al in 1994, but in the last 15 years both types of material have undergone further research and modification. This paper is designed to bring the classification up to date so that the operator can make a suitable, evidence-based choice when selecting a material for any given situation.
Abstract:
This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When using MCMC methods, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have been the subject of intensive research lately, as they open the way to an essentially easier use of the methodology; the lack of user-friendly computer programs has been a major obstacle to wider acceptance of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM for model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical of environmental sciences in mind, and the development work was pursued while working with several application projects. The applications presented in this work are: a winter-time oxygen concentration model for Lake Tuusulanjärvi and adaptive control of the aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency; and a study of the effects of aerosol model selection on the GOMOS algorithm.
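The Metropolis-Hastings core that DRAM and AARJ build on is compact enough to sketch with the standard library alone. The target density (a standard normal), step size and chain length below are chosen for illustration; the thesis' methods add proposal adaptation and delayed rejection on top of this basic loop.

```python
import math
import random

# Minimal random-walk Metropolis-Hastings sketch. The adaptive methods in
# the thesis (DRAM/AARJ) tune the proposal from the chain's own history;
# here the proposal scale is fixed for simplicity.

random.seed(42)

def log_target(x):
    """Unnormalised log density of the target, here N(0, 1)."""
    return -0.5 * x * x

def metropolis(n_steps, step=1.0, x0=0.0):
    x, samples = x0, []
    for _ in range(n_steps):
        prop = x + random.gauss(0.0, step)  # symmetric random-walk proposal
        # Accept with probability min(1, target(prop) / target(x)).
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return samples

draws = metropolis(20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(round(mean, 2), round(var, 2))
```

With enough steps the sample mean and variance approach the target's 0 and 1; a poorly scaled `step` slows this convergence, which is exactly the tuning burden that adaptive proposals are designed to remove.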