982 results for Utility-functions


Relevance: 60.00%

Abstract:

Bayesian experimental design is a fast-growing area of research with many real-world applications. As computational power has increased over the years, so has the development of simulation-based design methods, which involve a number of algorithms, such as Markov chain Monte Carlo, sequential Monte Carlo and approximate Bayes methods, enabling more complex design problems to be solved. The Bayesian framework provides a unified approach for incorporating prior information and/or uncertainties regarding the statistical model with a utility function that describes the experimental aims. In this paper, we provide a general overview of the concepts involved in Bayesian experimental design, and focus on describing some of the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms that are used to search over the design space to find the Bayesian optimal design. We also discuss other computational strategies for further research in Bayesian optimal design.
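
The central quantity in this framework is the expected utility of a design, which is usually approximated by Monte Carlo. Below is a minimal sketch for a toy one-parameter logistic model with a standard normal prior and an information-gain utility; the model, prior, grid approximation and function names are illustrative assumptions, not any specific method from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_utility(d, n_mc=500):
    """Monte Carlo estimate of U(d) = E_{theta,y}[u(d, y, theta)] for a toy
    logistic model y ~ Bernoulli(1 / (1 + exp(-theta * d))).
    The utility is the KL divergence from prior to posterior (expected
    information gain), approximated on a parameter grid."""
    theta_grid = np.linspace(-3, 3, 201)
    prior = np.exp(-0.5 * theta_grid**2)          # N(0,1) prior, unnormalised
    prior /= prior.sum()
    p_grid = 1.0 / (1.0 + np.exp(-(theta_grid * d)))

    thetas = rng.standard_normal(n_mc)            # draws from the prior
    p = 1.0 / (1.0 + np.exp(-(thetas * d)))       # success probability at design d
    ys = rng.random(n_mc) < p                     # simulated outcomes

    utilities = np.empty(n_mc)
    for k, y in enumerate(ys):
        like = p_grid if y else (1.0 - p_grid)
        post = prior * like
        post /= post.sum()
        utilities[k] = np.sum(post * np.log((post + 1e-12) / (prior + 1e-12)))
    return utilities.mean()

# crude grid search over a one-dimensional design space
designs = np.linspace(0.1, 4.0, 20)
best = max(designs, key=expected_utility)
print("approximately optimal design:", best)
```

The grid search stands in for the design-space search algorithms surveyed in the paper; other utility functions would slot into the same Monte Carlo estimator in the same way.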

Relevance: 60.00%

Abstract:

This thesis advances Bayesian experimental design by developing novel methodologies and extensions to existing algorithms. Through these advancements, this thesis provides solutions to several important and complex experimental design problems, many of which have applications in biology and medicine. This thesis consists of a series of published and submitted papers. In the first paper, we provide a comprehensive literature review on Bayesian design. In the second paper, we discuss methods which may be used to solve design problems in which one is interested in finding a large number of (near) optimal design points. The third paper presents methods for finding fully Bayesian experimental designs for nonlinear mixed-effects models, and the fourth paper investigates methods to rapidly approximate the posterior distribution for use in Bayesian utility functions.

Relevance: 60.00%

Abstract:

Big Datasets are endemic, but they are often notoriously difficult to analyse because of their size, heterogeneity, history and quality. The purpose of this paper is to open a discourse on the use of modern experimental design methods to analyse Big Data in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has wide generality and advantageous inferential and computational properties. In particular, the principled experimental design approach is shown to provide a flexible framework for analysis that, for certain classes of objectives and utility functions, delivers near equivalent answers compared with analyses of the full dataset under a controlled error rate. It can also provide a formalised method for iterative parameter estimation, model checking, identification of data gaps and evaluation of data quality. Finally, it has the potential to add value to other Big Data sampling algorithms, in particular divide-and-conquer strategies, by determining efficient sub-samples.
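
One concrete way to apply design thinking to a large dataset is to choose a sub-sample that is optimal under a classical design criterion. The sketch below uses greedy D-optimal sub-sampling for a linear model; the criterion, the greedy heuristic, the simulated data and the function name are illustrative assumptions, not the paper's specific procedure.

```python
import numpy as np

def greedy_d_optimal(X, k, ridge=1e-6):
    """Greedily pick k rows of X that (approximately) maximise
    log det(X_s^T X_s), the D-optimality criterion for a linear model."""
    n, p = X.shape
    chosen = []
    M = ridge * np.eye(p)                 # small ridge keeps M invertible
    for _ in range(k):
        Minv = np.linalg.inv(M)
        # by the matrix-determinant lemma, det(M + x x^T) = det(M) * (1 + x^T M^-1 x),
        # so the marginal gain of each candidate row is x^T M^-1 x
        gains = np.einsum('ij,jk,ik->i', X, Minv, X)
        gains[chosen] = -np.inf           # do not pick the same row twice
        best = int(np.argmax(gains))
        chosen.append(best)
        M += np.outer(X[best], X[best])
    return chosen

rng = np.random.default_rng(1)
X_big = rng.standard_normal((100_000, 5))      # stand-in for a big dataset
subset = greedy_d_optimal(X_big, k=50)
print(len(subset), "rows selected for analysis")
```

The matrix-determinant lemma makes the gain of adding a row cheap to evaluate, which is what keeps a single greedy pass feasible even when n is very large.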

Relevance: 60.00%

Abstract:

This study determined the potential allowable cut in the district of Pohjois-Savo based on non-industrial private forest (NIPF) landowners' choices of timber management strategies. Alternative timber management strategies were generated, and the landowners' choices among them, together with the factors affecting those choices, were studied. The choices of timber management strategies were obtained by maximizing the landowners' utility functions, whose parameters were estimated using the Analytic Hierarchy Process (AHP). The level of the potential allowable cut was compared to the cutting budgets based on the 7th and 8th National Forest Inventories (NFI7 and NFI8), to the combined private forestry plans, and to the realized drain from non-industrial private forests. The potential allowable cut was calculated using the same MELA system used for the national cutting budget. The data consisted of NIPF holdings (from the TASO planning system) that had been inventoried compartmentwise and had forestry plans drawn up during the years 1984-1992. The landowners' choices of timber management strategies were elicited by a two-phase mail inquiry. The most preferred strategy was "sustainability" (chosen by 62 % of landowners), followed by "finance" (17 %) and "saving" (11 %); "no cuttings" and "maximum cuttings" were the least preferred (9 % and 1 %, respectively). The factors promoting the choice of strategies with intensive cuttings were a) "farmer as forest owner" and "owning fields", b) "increase in the size of the forest holding", c) agriculture and forestry orientation in production, d) "decreasing short-term stumpage earning expectations", e) "increasing intensity of future cuttings", and f) "choice of forest taxation system based on site productivity". The potential allowable cut defined in the study was 20 % higher than the average realized drain during the years 1988-1993, which in turn was at the same level as the cutting budget based on the combined forestry plans in eastern Finland. Correspondingly, the potential allowable cut defined in the study was 12 % lower than the NFI8-based greatest sustained allowable cut for the 1990s. Using the method presented in this study, timber management strategies can be determined for non-industrial private forest landowners in different parts of Finland, and, based on these choices, regular cutting budgets can be calculated more realistically than before.
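
The AHP step in such an approach estimates the weights of a landowner's utility function from pairwise comparisons of decision criteria, and the preferred strategy is then the one with the highest weighted score. A minimal sketch of the standard eigenvector method follows; the comparison matrix, criteria names and strategy scores are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over three decision criteria,
# e.g. (net income, standing timber value, silvicultural condition).
# A[i, j] states how strongly criterion i is preferred to criterion j (Saaty's 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# The principal eigenvector of A gives the priority weights of the criteria.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Each timber management strategy is scored against the same criteria;
# the chosen strategy maximises the resulting additive utility.
strategy_scores = {
    "sustainability":   np.array([0.5, 0.8, 0.7]),
    "finance":          np.array([0.9, 0.3, 0.4]),
    "saving":           np.array([0.2, 0.9, 0.6]),
    "no cuttings":      np.array([0.0, 1.0, 0.5]),
    "maximum cuttings": np.array([1.0, 0.1, 0.2]),
}
utilities = {s: float(weights @ v) for s, v in strategy_scores.items()}
print(max(utilities, key=utilities.get), utilities)
```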

Relevance: 60.00%

Abstract:

We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type, and report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as possible to the agents as rebates. Two performance criteria are of interest: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus, within the class of linear rebate functions; the goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems, where the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We identify the number of samples needed for "near-feasibility" of the relaxed constraint set and, under some conditions on the valuation function, show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper reduce to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations in which the valuation functions are not known to the central planner are also discussed.

Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is not interested in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as possible as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system; we demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
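
To make the constraint-sampling idea concrete, here is a heavily simplified toy: one divisible good, valuations v_i(x) = theta_i * sqrt(x) (so the VCG allocation and payments have closed forms), a rebate that is linear in the sum of the other agents' reported types, and a sampled LP that minimises the worst-case ratio of leftover surplus to efficient surplus. All of these modelling choices, and the function names, are illustrative assumptions; this is not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, n_samples = 5, 2000

def vcg(theta):
    """Efficient allocation welfare and VCG payments for v_i(x) = theta_i * sqrt(x)
    with allocations summing to 1 (closed forms for this toy valuation)."""
    S = np.sum(theta**2)
    welfare = np.sqrt(S)
    S_minus = S - theta**2
    payments = np.sqrt(S_minus) - S_minus / np.sqrt(S)
    return welfare, payments

# Decision variables: rebate coefficients (c0, c1) with r_i = c0 + c1 * sum_{j != i} theta_j
# (a function of the others' reports only, so truthfulness of VCG is preserved),
# plus t = worst-case (surplus / efficient welfare) over the sampled profiles.
A_ub, b_ub = [], []
for _ in range(n_samples):
    theta = rng.uniform(0.1, 1.0, size=n)       # sampled vector of reported types
    welfare, payments = vcg(theta)
    P, T = payments.sum(), theta.sum()
    # (1) no deficit: total rebates <= total VCG payments
    A_ub.append([n, (n - 1) * T, 0.0]); b_ub.append(P)
    # (2) non-negative rebate for every agent
    for i in range(n):
        A_ub.append([-1.0, -(T - theta[i]), 0.0]); b_ub.append(0.0)
    # (3) leftover surplus P - sum(rebates) <= t * efficient welfare
    A_ub.append([-n, -(n - 1) * T, -welfare]); b_ub.append(-P)

res = linprog(c=[0.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)])
c0, c1, t = res.x
print(f"rebate r_i = {c0:.4f} + {c1:.4f} * (sum of others' types); "
      f"worst sampled surplus ratio = {t:.4f}")
```

Each sampled type vector contributes a handful of half-plane constraints, so the semi-infinite feasibility set of the exact problem is replaced by a finite LP that standard solvers handle easily.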

Relevance: 60.00%

Abstract:

We consider a setting in which several operators offer downlink wireless data access services in a certain geographical region. Each operator deploys several base stations or access points, and registers some subscribers. In such a situation, if operators pool their infrastructure, and permit the possibility of subscribers being served by any of the cooperating operators, then there can be overall better user satisfaction, and increased operator revenue. We use coalitional game theory to investigate such resource pooling and cooperation between operators. We use utility functions to model user satisfaction, and show that the resulting coalitional game has the property that if all operators cooperate (i.e., form a grand coalition) then there is an operating point that maximizes the sum utility over the operators while providing the operators revenues such that no subset of operators has an incentive to break away from the coalition. We investigate whether such operating points can result in utility unfairness between users of the various operators. We also study other revenue sharing concepts, namely, the nucleolus and the Shapley value. Such investigations throw light on criteria for operators to accept or reject subscribers, based on the service level agreements proposed by them. We also investigate the situation in which only certain subsets of operators may be willing to cooperate.
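
For intuition on the revenue-sharing concepts mentioned above, the Shapley value pays each operator its marginal contribution averaged over all orders in which the grand coalition could form. A minimal sketch for a hypothetical three-operator game follows; the coalition values are made up purely for illustration.

```python
from itertools import permutations

# Hypothetical characteristic function: total revenue each coalition of
# operators {A, B, C} could obtain by pooling base stations and subscribers.
v = {frozenset(): 0, frozenset("A"): 4, frozenset("B"): 3, frozenset("C"): 2,
     frozenset("AB"): 9, frozenset("AC"): 7, frozenset("BC"): 6,
     frozenset("ABC"): 13}

players = ["A", "B", "C"]

def shapley(v, players):
    """Average marginal contribution of each player over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

print(shapley(v, players))   # {'A': 5.5, 'B': 4.5, 'C': 3.0} for these toy values
```

Note that the shares sum to v(ABC) = 13, i.e. the grand coalition's revenue is fully distributed; the nucleolus would instead be computed by lexicographically minimising coalition excesses.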

Relevance: 60.00%

Abstract:

The literature on pricing implicitly assumes an "infinite data" model, in which sources can sustain any data rate indefinitely. We assume a more realistic "finite data" model, in which sources occasionally run out of data; this leads to variable user data rates. Further, we assume that users have contracts with the service provider, specifying the rates at which they can inject traffic into the network. Our objective is to study how prices can be set such that a single link can be shared efficiently and fairly among users in a dynamically changing scenario where a subset of users occasionally has little data to send. User preferences are modelled by concave increasing utility functions. Further, we introduce two additional elements: a convex increasing disutility function and a convex increasing multiplicative congestion-penalty function. The disutility function takes the shortfall (contracted rate minus present rate) as its argument, and essentially encourages users to send traffic at their contracted rates, while the congestion-penalty function discourages heavy users from sending excess data when the link is congested. We obtain simple necessary and sufficient conditions on prices for fair and efficient link sharing; moreover, we show that a single price for all users achieves this. We illustrate the ideas using a simple experiment.
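
A small numerical sketch of the per-user problem in such a setting: given a single link price, each user picks a rate that trades off its utility, the disutility of falling short of its contracted rate, and its payment. The functional forms below (log utility, quadratic shortfall disutility), the parameters and the function name are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def best_rate(price, contracted, kappa=1.0, beta=0.5, r_max=10.0):
    """Best response of a single user: maximise
    kappa*log(1 + r) - beta*max(contracted - r, 0)**2 - price*r
    over r in [0, r_max], by a simple grid search."""
    r = np.linspace(0.0, r_max, 2001)
    payoff = (kappa * np.log1p(r)
              - beta * np.maximum(contracted - r, 0.0)**2
              - price * r)
    return float(r[np.argmax(payoff)])

# Two users sharing a link: contracted rates of 6 and 2 units.
# Sweeping the common price shows how it steers the total offered load.
for price in (0.05, 0.15, 0.30):
    rates = [best_rate(price, 6.0), best_rate(price, 2.0)]
    print(f"price {price:.2f}: rates {rates}, total {sum(rates):.2f}")
```

The shortfall term pulls each user toward its contracted rate, while raising the single common price reduces the total load, which is the lever used to keep the shared link efficient and fair.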

Relevance: 60.00%

Abstract:

The main objective of this paper is to analyse the value of information contained in prices of options on the IBEX 35 index at the Spanish Stock Exchange. The forward-looking information is extracted using implied risk-neutral density functions estimated from a mixture of two lognormals and three alternative risk adjustments: the classic power and exponential utility functions, and a habit-based specification that allows for counter-cyclical variation in risk aversion. Our results show that we can reject the hypothesis that, between October 1996 and March 2000, the risk-neutral densities provide accurate predictions of the distributions of future realisations of the IBEX 35 index at a four-week horizon. When forecasting with risk-adjusted densities, the performance over this period is statistically improved and we no longer reject that hypothesis. All risk-adjusted densities generate similar forecasting statistics, so, at least at a four-week horizon, the particular risk adjustment does not seem to be the issue. By contrast, at the one-week horizon risk-adjusted densities do not improve the forecasting ability of their risk-neutral counterparts.
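
For concreteness, the estimated risk-neutral density here is a two-component lognormal mixture, and a power-utility risk adjustment reweights it by x^gamma before renormalising (this follows from a pricing kernel proportional to x^(-gamma)). A minimal sketch with made-up mixture parameters and an assumed risk-aversion coefficient; the calibration to IBEX 35 option prices is the hard part and is not shown.

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical mixture-of-two-lognormals risk-neutral density q(x)
w, mu1, s1, mu2, s2 = 0.6, np.log(10000), 0.08, np.log(9500), 0.15

x = np.linspace(7000, 13000, 4001)
q = (w * lognorm.pdf(x, s1, scale=np.exp(mu1))
     + (1 - w) * lognorm.pdf(x, s2, scale=np.exp(mu2)))

def risk_adjust(q, x, gamma):
    """Power-utility adjustment: real-world density proportional to q(x) * x**gamma."""
    p = q * x**gamma
    return p / np.trapz(p, x)

p = risk_adjust(q, x, gamma=4.0)   # gamma = assumed relative risk aversion
print("risk-neutral mean:", np.trapz(x * q, x) / np.trapz(q, x))
print("risk-adjusted mean:", np.trapz(x * p, x))
```

The exponential-utility and habit-based adjustments mentioned in the abstract would simply replace the x**gamma reweighting with their own pricing-kernel shapes.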

Relevance: 60.00%

Abstract:

We examine voting situations in which individuals have incomplete information about each other's true preferences. In many respects, this work is motivated by a desire to provide a more complete understanding of so-called probabilistic voting.

Chapter 2 examines the similarities and differences between the incentives faced by politicians who seek to maximize expected vote share, expected plurality, or probability of victory in single-member, single-vote, simple plurality electoral systems. We find that, in general, the candidates' optimal policies in such electoral systems vary greatly depending on their objective function. We provide several examples, as well as a genericity result stating that almost all such electoral systems (with respect to the distributions of voter behavior) exhibit different incentives for candidates who seek to maximize expected vote share and those who seek to maximize probability of victory.

In Chapter 3, we adopt a random utility maximizing framework in which individuals' preferences are subject to action-specific exogenous shocks. We show that Nash equilibria exist in voting games possessing such an information structure and in which voters and candidates are each aware that every voter's preferences are subject to such shocks. A special case of our framework is that in which voters play a Quantal Response Equilibrium (McKelvey and Palfrey, 1995, 1998). We then examine candidate competition in such games and show that, for sufficiently large electorates, regardless of the dimensionality of the policy space or the number of candidates, there exists a strict equilibrium at the social welfare optimum (i.e., the point that maximizes the sum of voters' utility functions). In two-candidate contests we find that this equilibrium is unique.
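
For reference, in the logit specification of Quantal Response Equilibrium the action-specific shocks are extreme-value distributed, so each voter's choice probabilities take the familiar logit form evaluated at equilibrium expected utilities (the notation below is generic, not the chapter's own):

```latex
% sigma_{ij}: probability that voter i takes action j;
% \bar{u}_{ij}(\sigma): expected utility of action j given the others' mixed strategies;
% lambda >= 0: response precision (lambda -> infinity recovers exact best response).
\sigma_{ij} \;=\;
  \frac{\exp\!\big(\lambda\, \bar{u}_{ij}(\sigma)\big)}
       {\sum_{k} \exp\!\big(\lambda\, \bar{u}_{ik}(\sigma)\big)}
```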

Finally, in Chapter 4, we take the first steps towards a theory of equilibrium in games possessing both continuous action spaces and action-specific preference shocks. Our notion of equilibrium, Variational Response Equilibrium, is shown to exist in all games with continuous payoff functions. We discuss the similarities and differences between this notion of equilibrium and Quantal Response Equilibrium, and offer possible extensions of our framework.

Relevance: 60.00%

Abstract:

Almost all material selection problems require that a compromise be sought between some metric of performance and cost. Trade-off methods using utility functions allow optimal solutions to be found for two objectives, but for three it is harder. This paper develops and demonstrates a method for dealing with three objectives.
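
Trade-off methods of this kind typically collapse the objectives into a single scalar value (or penalty) function with exchange constants that price each objective against the others; with three objectives, the trade surface and the exchange constants become much harder to visualise and elicit. Schematically, using a generic linear form that is only an illustration and not necessarily the one developed in the paper:

```latex
% Z is a penalty to be minimised over candidate materials;
% P_1, P_2, P_3 are the three objectives (e.g. mass, cost, a performance deficit);
% the alpha_i are exchange constants expressing the relative worth of one unit of each.
Z \;=\; \alpha_1 P_1 \;+\; \alpha_2 P_2 \;+\; \alpha_3 P_3
```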

Relevance: 60.00%

Abstract:

This paper presents innovative work in the development of policy-based autonomic computing. The core of the work is a powerful and flexible policy-expression language, AGILE, which facilitates run-time adaptable policy configuration of autonomic systems. AGILE also serves as an integrating platform for other self-management technologies, including signal processing, automated trend analysis and utility functions. Each of these technologies has specific advantages and applicability to different types of dynamic adaptation. The AGILE platform enables seamless interoperability of the different technologies, each performing various aspects of self-management within a single application. The various technologies are implemented as object components. Self-management behaviour is specified using the policy language semantics to bind the various components together as required. Since the policy semantics support run-time re-configuration, the self-management architecture is dynamically composable. Additional benefits include the standardisation of the application programmer interface, terminology and semantics, and the fact that only a single point of embedding is required.

Relevance: 60.00%

Abstract:

This paper describes an autonomics development tool which serves as both a powerful and flexible policy-expression language and a policy-based framework that supports the integration and dynamic composition of several autonomic computing techniques including signal processing, automated trend analysis and utility functions. Each of these technologies has specific advantages and applicability to different types of dynamic adaptation. The AGILE platform enables seamless interoperability of the different technologies to each perform various aspects of self-management within a single application. Self-management behaviour is specified using the policy language semantics to bind the various technologies together as required. Since the policy semantics support run-time re-configuration, the self-management architecture is dynamically composable. The policy language and implementation library have integrated support for self-stabilising behaviour, enabling oscillation and other forms of instability to be handled at the policy level with very little effort on the part of the application developer. Example applications are presented to illustrate the integration of different autonomics techniques, and the achievement of dynamic composition.

Relevance: 60.00%

Abstract:

This article introduces a resource allocation solution capable of handling mixed media applications within the constraints of a 60 GHz wireless network. The challenges of multimedia wireless transmission include high bandwidth requirements, delay intolerance and wireless channel availability. A new Channel Time Allocation Particle Swarm Optimization (CTA-PSO) is proposed to solve the network utility maximization (NUM) resource allocation problem. CTA-PSO optimizes the time allocated to each device in the network in order to maximize the Quality of Service (QoS) experienced by each user. CTA-PSO introduces a network-linked swarm size, an increased diversity function, and a learning method based on the personal best (Pbest) results of the swarm. These additional developments to PSO produce improved convergence speed with respect to Adaptive PSO while maintaining the QoS improvement of the NUM. Specifically, CTA-PSO supports applications described by both convex and non-convex utility functions. The multimedia resource allocation solution presented in this article provides a practical solution for real-time wireless networks.
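
As a point of reference for the optimisation core, here is a minimal generic particle swarm that allocates channel time across devices to maximise a sum of (possibly non-concave) utilities. It has none of CTA-PSO's specific extensions (network-linked swarm size, increased diversity, Pbest-based learning); the sigmoid utility, link rates and PSO parameters are assumptions chosen only to show the structure of the NUM problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dev, T_super = 6, 1.0                       # devices and superframe duration
rates = rng.uniform(1.0, 5.0, n_dev)          # hypothetical per-device link rates

def utility(t):
    """Sum utility of a time allocation t (t_i >= 0, sum t_i = T_super).
    A sigmoid per-device utility makes the objective non-convex."""
    throughput = rates * t
    return np.sum(1.0 / (1.0 + np.exp(-4.0 * (throughput - 1.0))))

def project(t):
    """Repair feasibility: clip to positive values and rescale onto the simplex."""
    t = np.clip(t, 1e-9, None)
    return T_super * t / t.sum()

n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = np.array([project(rng.random(n_dev)) for _ in range(n_particles)])
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([utility(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.array([project(p) for p in pos + vel])
    vals = np.array([utility(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print("best allocation:", np.round(gbest, 3), "utility:", round(float(utility(gbest)), 3))
```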

Relevance: 60.00%

Abstract:

The increased capabilities (e.g., processing, storage) of portable devices, along with users' constant need to retrieve and send information, have introduced a new form of communication. Users can seamlessly exchange data by means of opportunistic contacts among them, and this is what characterizes opportunistic networks (OppNets). OppNets allow users to communicate even when an end-to-end path may not exist between them. Since 2007, there has been a trend to improve the exchange of data by considering social similarity metrics. Social relationships, shared interests, and popularity are examples of such metrics that have been employed successfully: as users interact based on relationships and interests, this information can be used to decide on the best next forwarders of information. This thesis combines the features of today's devices found in the regular urban environment with the current social-awareness trend in the context of opportunistic routing. To achieve this goal, the work was divided into different tasks that map to a set of specific objectives, leading to the following contributions: i) an up-to-date opportunistic routing taxonomy; ii) a universal evaluation framework that aids in devising and testing new routing proposals; iii) three social-aware utility functions that consider dynamic user behavior and can easily be incorporated into other routing proposals; iv) two opportunistic routing proposals based on users' daily routines and on the content traversing the network and users' interest in such content; and v) a structural analysis of the social-based network formed by the approaches devised in this work.
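
To illustrate the role a social-aware utility function plays in opportunistic forwarding, here is a toy decision rule: a node hands a message to an encountered node only if that node's utility towards the destination is higher than its own. The particular mix of contact history and shared interests below, including the weights and class names, is invented for illustration and is not one of the thesis's proposed functions.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    interests: set = field(default_factory=set)
    contact_freq: dict = field(default_factory=dict)   # encounters per day with other nodes

def social_utility(candidate: Node, destination: Node) -> float:
    """Illustrative social-aware utility: how good 'candidate' is as a next hop
    towards 'destination', mixing contact frequency and interest similarity."""
    freq = candidate.contact_freq.get(destination.name, 0.0)
    overlap = len(candidate.interests & destination.interests)
    union = len(candidate.interests | destination.interests) or 1
    return 0.7 * freq + 0.3 * (overlap / union)        # weights are arbitrary here

def should_forward(current: Node, encountered: Node, destination: Node) -> bool:
    """Forward the message only if the encountered node looks like a better carrier."""
    return social_utility(encountered, destination) > social_utility(current, destination)

alice = Node("alice", {"music", "sports"}, {"carol": 0.2})
bob   = Node("bob",   {"music", "films"},  {"carol": 1.5})
carol = Node("carol", {"music", "films"})
print(should_forward(alice, bob, carol))   # True: bob meets carol often and shares interests
```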

Relevance: 60.00%

Abstract:

Emerging market trends head towards positioning-based services, placing a new perspective on the way we obtain and exploit positioning information. On one hand, innovations in information technology and wireless communication systems have enabled the development of numerous location-based applications such as vehicle navigation and tracking, sensor network applications, home automation, asset management, security and context-aware location services. On the other hand, wireless networks themselves may benefit from localization information to improve the performance of different network layers. Location-based routing, synchronization and interference cancellation are prime examples of applications where location information can be useful. Typical positioning solutions rely on measurement and exploitation of distance-dependent signal metrics, such as the received signal strength, time of arrival or angle of arrival. They are cheaper and easier to implement than dedicated positioning systems based on fingerprinting, but at the cost of accuracy. Therefore, intelligent localization algorithms and signal processing techniques have to be applied to mitigate the lack of accuracy in distance estimates. Cooperation between nodes is used in cases where conventional positioning techniques do not perform well due to a lack of existing infrastructure or an obstructed indoor environment. The objective is to concentrate on a hybrid architecture in which some nodes have points of attachment to an infrastructure and are simultaneously interconnected via short-range ad hoc links. The availability of more capable handsets enables more innovative scenarios that take advantage of multiple radio access networks as well as peer-to-peer links for positioning. Link selection is used to optimize the tradeoff between the power consumption of participating nodes and the quality of target localization. The Geometric Dilution of Precision and the Cramér-Rao Lower Bound can be used as criteria for choosing the appropriate set of anchor nodes and corresponding measurements before attempting location estimation itself. This work analyzes the existing solutions for node selection in order to improve localization performance, and proposes a novel method based on utility functions. The proposed method is then extended to mobile and heterogeneous environments. Simulations have been carried out, as well as evaluation with real measurement data. In addition, some specific cases have been considered, such as localization in ill-conditioned scenarios and the use of negative information. The proposed approaches have been shown to enhance estimation accuracy, while significantly reducing complexity, power consumption and signalling overhead.
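
The node-selection criterion mentioned above can be made concrete with the Geometric Dilution of Precision: for range-based positioning (ignoring any clock-bias term), GDOP is computed from the matrix of unit vectors between the target and the candidate anchors, and the subset with the smallest GDOP is preferred. A minimal 2-D, exhaustive-search sketch follows; the anchor layout is random and the proposed utility-based and mobile/heterogeneous extensions are not reproduced here.

```python
import numpy as np
from itertools import combinations

def gdop(anchors, target):
    """Dilution of precision for 2-D range-only positioning:
    sqrt(trace((H^T H)^-1)) with H the unit vectors from the target to the anchors."""
    diff = anchors - target
    H = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

def select_anchors(anchors, target, k):
    """Brute-force choice of the k-anchor subset with minimum GDOP
    (fine for small candidate sets; a utility-based selection would replace this)."""
    best_idx, best_g = None, np.inf
    for idx in combinations(range(len(anchors)), k):
        g = gdop(anchors[list(idx)], target)
        if g < best_g:
            best_idx, best_g = idx, g
    return best_idx, best_g

rng = np.random.default_rng(2)
candidate_anchors = rng.uniform(0, 100, size=(8, 2))   # hypothetical anchor positions
target_estimate = np.array([50.0, 50.0])               # rough prior estimate of the target
idx, g = select_anchors(candidate_anchors, target_estimate, k=3)
print("selected anchors:", idx, "GDOP:", round(g, 2))
```

A utility-based selection, as proposed in the work, would replace the raw GDOP score with a function that also weighs the power cost and signalling overhead of involving each candidate node.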