19 results for Maximization
Abstract:
This paper compares the Random Regret Minimization and the Random Utility Maximization models for determining recreational choice. The Random Regret approach is based on the idea that, when choosing, individuals aim to minimize their regret – regret being defined as what one experiences when a non-chosen alternative in a choice set performs better than a chosen one in relation to one or more attributes. The Random Regret paradigm, recently developed in transport economics, presents a tractable, regret-based alternative to the dominant choice paradigm based on Random Utility. Using data from a travel cost study exploring factors that influence kayakers’ site-choice decisions in the Republic of Ireland, we estimate both the traditional Random Utility multinomial logit model (RU-MNL) and the Random Regret multinomial logit model (RR-MNL) to gain more insights into site-choice decisions. We further explore whether choices are driven by a utility maximization or a regret minimization paradigm by running a binary logit model that examines the likelihood of the two decision paradigms using site visits and respondents’ characteristics as explanatory variables. In addition to being one of the first studies to apply the RR-MNL to an environmental good, this paper also represents the first application of the RR-MNL to compute the Logsum to test and strengthen conclusions on the welfare impacts of potential alternative policy scenarios.
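For readers new to the paradigm, the regret function underlying the RR-MNL and the RUM logsum welfare measure take roughly the following textbook forms (a sketch in my notation; the paper's exact specification may differ):

R_{in} = \sum_{j \neq i} \sum_{m} \ln\!\left(1 + \exp\left[\beta_m \left(x_{jm} - x_{im}\right)\right]\right), \qquad E[CS_n] = \frac{1}{|\beta_{\text{cost}}|} \ln \sum_{j} e^{V_{jn}} + C

Here x_{jm} is attribute m of alternative j, so regret accumulates over every attribute on which a non-chosen alternative beats the chosen one; the logsum expression is the familiar RUM consumer-surplus measure, and the novelty claimed above is computing its RR-MNL analogue for welfare analysis.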
Abstract:
This paper proposes a discrete mixture model which assigns individuals, up to a probability, to either a class of random utility (RU) maximizers or a class of random regret (RR) minimizers, on the basis of their sequence of observed choices. Our proposed model advances the state of the art of RU-RR mixture models by (i) adding and simultaneously estimating a membership model which predicts the probability of belonging to the RU or RR class; (ii) adding a layer of random taste heterogeneity within each behavioural class; and (iii) deriving a welfare measure associated with the RU-RR mixture model and consistent with referendum voting, which is the appropriate provision mechanism for such local public goods. The context of our empirical application is a stated choice experiment concerning traffic calming schemes. We find that the random parameter RU-RR mixture model not only outperforms its fixed-coefficient counterpart in terms of fit, as expected, but also in terms of the plausibility of the membership determinants of behavioural class. In line with psychological theories of regret, we find that, compared to respondents who are familiar with the choice context (i.e. the traffic calming scheme), unfamiliar respondents are more likely to be regret minimizers than utility maximizers.
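A minimal sketch of the mixture likelihood (my notation): person n's sequence of T observed choices i_{n1}, ..., i_{nT} enters both class likelihoods, weighted by the membership model's probabilities:

L_n = \pi_n \prod_{t=1}^{T} P^{RU}_{nt}(i_{nt}) + (1-\pi_n) \prod_{t=1}^{T} P^{RR}_{nt}(i_{nt}), \qquad \pi_n = \frac{e^{\gamma' z_n}}{1 + e^{\gamma' z_n}}

where z_n collects the membership covariates (e.g. familiarity with the choice context) and, with the random-taste layer of point (ii), each product is additionally integrated over the within-class taste distribution.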
Abstract:
Al Rawi, Anas F., Emiliano Garcia-Palacios, Sonia Aissa, Charalampos C. Tsimenidis, and Bayan S. Sharif. "Dual-Diversity Combining for Constrained Resource Allocation and Throughput Maximization in OFDMA Networks." In Vehicular Technology Conference (VTC Spring), 2013 IEEE 77th, pp. 1-5. IEEE, 2013.
Abstract:
We consider a three-node decode-and-forward (DF) half-duplex relaying system, where the source first harvests RF energy from the relay, and then uses this energy to transmit information to the destination via the relay. We assume that the information transfer and wireless power transfer phases alternate over time in the same frequency band, and that their time fraction (TF) may either vary or remain fixed from one transmission epoch (fading state) to the next. For this system, we maximize the achievable average data rate. To this end, we propose two schemes: (1) jointly optimal power and TF allocation, and (2) optimal power allocation with fixed TF. Due to the small amounts of power harvested at the source, the two schemes achieve similar information rates, but yield significant performance gains compared to a benchmark system with fixed power and fixed TF allocation.
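Stylized in my own notation (the paper's exact rate expression and energy constraint may differ), the joint design problem of scheme (1) reads:

\max_{\{\tau_i,\, p_i\}} \; \mathbb{E}_i\!\left[\frac{1-\tau_i}{2}\, \min\!\left\{\log_2(1 + p_i \gamma_{SR,i}),\; \log_2(1 + P_R \gamma_{RD,i})\right\}\right] \quad \text{s.t.} \quad p_i (1-\tau_i) \le \eta\, \tau_i P_R g_i

where τ_i is the TF devoted to power transfer in fading state i, η the harvesting efficiency, and the constraint caps the source's transmit energy by the energy it harvested; scheme (2) fixes τ_i = τ for all states and optimizes only the p_i.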
Abstract:
In this paper, we consider the secure beamforming design for an underlay cognitive radio multiple-input single-output broadcast channel in the presence of multiple passive eavesdroppers. Our goal is to design a jamming noise (JN) transmit strategy to maximize the secrecy rate of the secondary system. By utilizing the zero-forcing method to eliminate the interference caused by the JN to the secondary user, we study the joint optimization of the information and JN beamforming for secrecy rate maximization of the secondary system while satisfying all the interference power constraints at the primary users, as well as the per-antenna power constraint at the secondary transmitter. For an optimal beamforming design, the original problem is a nonconvex program, which can be reformulated as a convex program by applying the rank relaxation method. To this end, we prove that the rank relaxation is tight and propose a barrier interior-point method to solve the resulting saddle point problem based on a duality result. To find the globally optimal solution, we transform the considered problem into an unconstrained optimization problem. We then employ the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method to solve the resulting unconstrained problem, which reduces the complexity significantly compared to conventional methods. Simulation results show the fast convergence of the proposed algorithm and substantial performance improvements over existing approaches.
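In stylized form (my notation, not the paper's), the design problem is:

\max_{\mathbf{w},\,\mathbf{v}} \; \log\!\left(1 + \frac{|\mathbf{h}^H\mathbf{w}|^2}{\sigma^2}\right) - \max_{k}\, \log\!\left(1 + \frac{|\mathbf{g}_k^H\mathbf{w}|^2}{|\mathbf{g}_k^H\mathbf{v}|^2 + \sigma^2}\right) \quad \text{s.t.} \quad \mathbf{h}^H\mathbf{v} = 0, \quad |\mathbf{f}_j^H\mathbf{w}|^2 + |\mathbf{f}_j^H\mathbf{v}|^2 \le I_j \;\; \forall j, \quad \left[\mathbf{w}\mathbf{w}^H + \mathbf{v}\mathbf{v}^H\right]_{nn} \le P_n \;\; \forall n

with w and v the information and JN beamformers and h, g_k, f_j the channels to the secondary user, eavesdropper k and primary user j; replacing ww^H and vv^H by rank-relaxed covariance matrices is what turns this nonconvex program into the convex one whose relaxation is shown to be tight.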
Abstract:
This paper studies the dynamic pricing problem of selling a fixed stock of perishable items over a finite horizon, where the decision maker does not have the historic data needed to estimate the distribution of uncertain demand, but has imprecise information about the quantity demanded. We model this uncertainty using fuzzy variables. The dynamic pricing problem based on credibility theory is formulated using three fuzzy programming models, namely: the fuzzy expected revenue maximization model, the α-optimistic revenue maximization model, and the credibility maximization model. Fuzzy simulations for functions with fuzzy parameters are given and embedded into a genetic algorithm to design a hybrid intelligent algorithm to solve these three models. Finally, a real-world example is presented to highlight the effectiveness of the developed models and algorithm.
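As a concrete illustration of the hybrid scheme, the sketch below evolves a price path with a genetic algorithm whose fitness uses the credibility expected value of a triangular fuzzy demand, (a + 2b + c)/4, as a cheap stand-in for a full fuzzy simulation; the demand form and all constants are invented for the example, not taken from the paper.

```python
import random

T, STOCK = 5, 100                      # horizon and initial inventory
P_MIN, P_MAX = 1.0, 10.0               # admissible prices

def fuzzy_demand(price):
    """Triangular fuzzy demand (a, b, c), decreasing in price (assumed form)."""
    b = 40.0 - 3.0 * price             # modal demand
    return (0.7 * b, b, 1.3 * b)

def expected_value(tri):
    """Credibility expected value of a triangular fuzzy variable: (a + 2b + c) / 4."""
    a, b, c = tri
    return (a + 2.0 * b + c) / 4.0

def expected_revenue(prices):
    """Fitness: expected revenue; E[min(demand, stock)] is crudely
    approximated by min(E[demand], stock) instead of a full fuzzy simulation."""
    stock, revenue = STOCK, 0.0
    for p in prices:
        sales = min(max(expected_value(fuzzy_demand(p)), 0.0), stock)
        revenue += p * sales
        stock -= sales
    return revenue

def evolve(pop_size=50, generations=200, mut=0.3):
    pop = [[random.uniform(P_MIN, P_MAX) for _ in range(T)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=expected_revenue, reverse=True)
        elite = pop[: pop_size // 2]           # keep the fitter half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, T)       # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:          # mutate one price
                child[random.randrange(T)] = random.uniform(P_MIN, P_MAX)
            children.append(child)
        pop = elite + children
    return max(pop, key=expected_revenue)

print(evolve())
```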
Abstract:
The increasing risks and costs of new product development require firms to collaborate with their supply chain partners in product management. In this paper, a supply chain model is proposed with one risk-neutral supplier and one risk-averse manufacturer. The manufacturer has an opportunity to enhance demand by developing a new product, but both the actual demand for the new product and the supplier’s wholesale price are uncertain. The supplier has an incentive to share the risks of new product development via an advance commitment to the wholesale price, for its own profit maximization. The effects of the manufacturer’s risk sensitivity on the players’ optimal strategies are analyzed, and the trade-off between innovation incentives and pricing flexibility is investigated from the perspective of the supplier. The results highlight the significant role of risk sensitivity in collaborative new product development: the manufacturer’s innovation level and retail price are always decreasing in the risk sensitivity, and the supplier prefers commitment to the wholesale price only when the risk sensitivity is below a certain threshold.
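One common way to formalize such a risk-averse manufacturer is a mean-variance objective; this is an assumption for illustration, since the abstract does not state the exact criterion:

\max_{e,\,p} \; \mathbb{E}\!\left[\pi_M(e,p)\right] - \lambda\, \mathrm{Var}\!\left[\pi_M(e,p)\right]

where e is the innovation level, p the retail price and λ ≥ 0 the risk sensitivity; the supplier then compares its expected profit with and without an advance commitment to the wholesale price, which is where the threshold on λ arises.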
Abstract:
Background
Inferring gene regulatory networks from large-scale expression data is an important problem that has received much attention in recent years. These networks have the potential to provide insights into causal molecular interactions of biological processes. Hence, from a methodological point of view, reliable estimation methods based on observational data are needed to approach this problem practically.
Results
In this paper, we introduce a novel gene regulatory network inference (GRNI) algorithm, called C3NET. We compare C3NET with four well-known methods, ARACNE, CLR, MRNET and RN, conducting in-depth numerical ensemble simulations, and demonstrate also for biological expression data from E. coli that C3NET performs consistently better than the best known GRNI methods in the literature. In addition, it has low computational complexity. Since C3NET is based on estimates of mutual information values in conjunction with a maximization step, our numerical investigations demonstrate that our inference algorithm exploits causal structural information in the data efficiently.
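To make the maximization step concrete, here is a minimal sketch of C3NET's core rule as I read it: after non-significant mutual-information (MI) values are zeroed, each gene contributes at most one edge, namely the one to its maximum-MI neighbour (the significance test is simplified to a fixed cutoff here).

```python
import numpy as np

def c3net_core(mi, threshold):
    """mi: symmetric (n x n) mutual-information matrix; returns adjacency."""
    n = mi.shape[0]
    sig = np.where(mi >= threshold, mi, 0.0)   # significance filter (fixed cutoff)
    np.fill_diagonal(sig, 0.0)                 # no self-edges
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        j = np.argmax(sig[i])                  # maximization step: best neighbour of gene i
        if sig[i, j] > 0.0:
            adj[i, j] = adj[j, i] = True       # keep the edge, undirected
    return adj

# toy usage with a random symmetric "MI" matrix
rng = np.random.default_rng(0)
m = rng.random((5, 5))
m = (m + m.T) / 2
print(c3net_core(m, threshold=0.6))
```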
Conclusions
For systems biology to succeed in the long run, it is of crucial importance to establish methods that extract large-scale gene networks from high-throughput data that reflect the underlying causal interactions among genes or gene products. Our method can contribute to this endeavor by demonstrating that an inference algorithm with a neat design permits not only a more intuitive and possibly biological interpretation of its working mechanism but can also deliver superior results.
Abstract:
This study integrates the concepts of value creation and value claiming into a theoretical framework that emphasizes the dependence of resource value maximization on value-claiming motivations in outsourcing decisions. To test this theoretical framework, it develops refutable implications to explain the firm's outsourcing decision, and it uses data from 178 firms in the publishing and printing industry on the outsourcing of application services. The results show that in outsourcing decisions, resource value and transaction costs are considered simultaneously and that outsourcing decisions depend on the alignment between resource and transaction attributes. The findings support a resource contingency view that highlights value-claiming mechanisms as a resource contingency in interorganizational strategic decisions.
Abstract:
This paper introduces the discrete choice modelling paradigm of Random Regret Minimization (RRM) to the field of environmental and resource economics. The RRM approach has been developed very recently in the context of travel demand modelling and presents a tractable, regret-based alternative to the dominant choice-modelling paradigm based on Random Utility Maximization theory (RUM theory). We highlight how RRM-based models provide closed-form, logit-type formulations for choice probabilities that allow for capturing semi-compensatory behaviour and choice-set composition effects while being equally parsimonious as their utilitarian counterparts. Using data from a Stated Choice experiment aimed at identifying valuations of characteristics of nature parks, we compare RRM-based and RUM-based models in terms of parameter estimates, goodness of fit, elasticities and the consequent policy implications.
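The closed forms referred to here are, for alternative i in choice set C:

P^{RUM}(i) = \frac{e^{V_i}}{\sum_{j \in C} e^{V_j}}, \qquad P^{RRM}(i) = \frac{e^{-R_i}}{\sum_{j \in C} e^{-R_j}}

so an RRM model can be estimated with standard MNL software by entering the negative of the regret where the systematic utility would go, with one taste parameter per attribute in both cases.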
Abstract:
This paper introduces the discrete choice modelling paradigm of Random Regret Minimisation (RRM) to the field of health economics. RRM is a regret-based model that explores a driver of choice different from that of the traditional utility-based Random Utility Maximisation (RUM). The RRM approach is based on the idea that, when choosing, individuals aim to minimise their regret – regret being defined as what one experiences when a non-chosen alternative in a choice set performs better than a chosen one in relation to one or more attributes. Analysing data from a discrete choice experiment on diet, physical activity and the risk of a fatal heart attack in the next ten years, administered to a sample of the Northern Ireland population, we find that the combined use of RUM and RRM models offers additional information, providing useful behavioural insights for better-informed policy appraisal.
Abstract:
Mixture of Gaussians (MoG) modelling [13] is a popular approach to background subtraction in video sequences. Although the algorithm shows good empirical performance, it lacks theoretical justification. In this paper, we give a justification for it from an online stochastic expectation maximization (EM) viewpoint and extend it to a general framework of regularized online classification EM for MoG with guaranteed convergence. By choosing a special regularization function, the l1 norm, we derive a new set of updating equations for l1-regularized online MoG. It is shown empirically that l1-regularized online MoG converges faster than the original online MoG.
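A toy sketch of the idea for a single pixel follows: the usual online MoG updates plus an l1-style soft-threshold on the mixture weights. This is my illustrative approximation of the regularized scheme, not the paper's exact update equations.

```python
import numpy as np

K, ALPHA, LAM = 3, 0.05, 0.01          # components, learning rate, l1 strength
w = np.full(K, 1.0 / K)                # mixture weights
mu = np.array([60.0, 120.0, 200.0])    # component means (grey levels)
var = np.full(K, 15.0 ** 2)            # component variances

def update(x):
    global w
    d2 = (x - mu) ** 2 / var           # squared normalized distances
    match = np.argmin(d2) if d2.min() < 2.5 ** 2 else None
    m = np.zeros(K)                    # match indicators
    if match is None:                  # no component matches: replace the weakest
        k = np.argmin(w)
        mu[k], var[k], w[k] = x, 30.0 ** 2, 0.05
    else:
        m[match] = 1.0
        rho = ALPHA / max(w[match], 1e-6)
        mu[match] += rho * (x - mu[match])
        var[match] += rho * ((x - mu[match]) ** 2 - var[match])
    w = (1 - ALPHA) * w + ALPHA * m    # online weight update
    w = np.maximum(w - LAM, 0.0)       # l1 soft-threshold prunes weak components
    w /= w.sum()                       # renormalize

for x in [118, 121, 119, 240, 122]:   # toy intensity stream
    update(x)
print(w, mu)
```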
Abstract:
This article introduces a resource allocation solution capable of handling mixed media applications within the constraints of a 60 GHz wireless network. The challenges of multimedia wireless transmission include high bandwidth requirements, delay intolerance and wireless channel availability. A new Channel Time Allocation Particle Swarm Optimization (CTA-PSO) is proposed to solve the network utility maximization (NUM) resource allocation problem. CTA-PSO optimizes the time allocated to each device in the network in order to maximize the Quality of Service (QoS) experienced by each user. CTA-PSO introduces a network-linked swarm size, an increased-diversity function and a learning method based on the swarm's personal best (Pbest) results. These additions to PSO produce improved convergence speed with respect to Adaptive PSO while maintaining the QoS improvement of the NUM. Specifically, CTA-PSO supports applications described by both convex and non-convex utility functions. The multimedia resource allocation solution presented in this article provides a practical solution for real-time wireless networks.
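A bare-bones version of the underlying NUM problem and a plain PSO solver are sketched below; the utility shapes and constants are invented for the example, and CTA-PSO's specific additions (network-linked swarm size, diversity boosting, Pbest-based learning) are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 4, 1.0                                    # devices, superframe length
rate = np.array([2.0, 4.0, 1.5, 3.0])            # per-device service rates (assumed)

def utility(t):
    """Sum utility: concave log terms (elastic traffic) plus sigmoid terms
    (real-time traffic), so the objective is non-concave overall."""
    u_elastic = np.log1p(rate[:2] * t[:2]).sum()
    u_realtime = (1.0 / (1.0 + np.exp(-20.0 * (rate[2:] * t[2:] - 0.5)))).sum()
    return u_elastic + u_realtime

def project(t):
    """Enforce t >= 0 and sum(t) <= T by clipping and rescaling."""
    t = np.maximum(t, 0.0)
    s = t.sum()
    return t * (T / s) if s > T else t

def pso(swarm=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    x = np.array([project(p) for p in rng.random((swarm, N))])  # allocations
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([utility(p) for p in x])
    g = pbest[np.argmax(pval)].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, swarm, N))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.array([project(p) for p in x + v])
        val = np.array([utility(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmax(pval)].copy()
    return g, utility(g)

print(pso())
```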
Abstract:
Retrospective clinical datasets are often characterized by a relatively small sample size and many missing data. In this case, a common way of handling the missingness consists in discarding from the analysis patients with missing covariates, further reducing the sample size. Alternatively, if the mechanism that generated the missingness allows, incomplete data can be imputed on the basis of the observed data, avoiding the reduction of the sample size and allowing complete-data methods to be applied later on. Moreover, methodologies for data imputation might depend on the particular purpose and might achieve better results by considering specific characteristics of the domain. The problem of missing data treatment is studied in the context of survival tree analysis for the estimation of a prognostic patient stratification. Survival tree methods usually address this problem by using surrogate splits, that is, splitting rules that use other variables yielding results similar to the original ones. Instead, our methodology consists in modeling the dependencies among the clinical variables with a Bayesian network, which is then used to perform data imputation, thus allowing the survival tree to be applied to the completed dataset. The Bayesian network is learned directly from the incomplete data using a structural expectation–maximization (EM) procedure in which the maximization step is performed with an exact anytime method, so that the only source of approximation is due to the EM formulation itself. On both simulated and real data, our proposed methodology usually outperformed several existing methods for data imputation, and the imputation so obtained improved the stratification estimated by the survival tree (especially with respect to using surrogate splits).
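The structural EM iteration referred to, in its standard form (my notation), alternates an E step that takes expectations over the missing entries under the current network (G^t, θ^t) with an M step over both structure and parameters:

(G^{t+1}, \theta^{t+1}) = \arg\max_{G,\,\theta} \; \mathbb{E}_{X^{\text{mis}} \sim p(\,\cdot\, \mid X^{\text{obs}},\, G^{t},\, \theta^{t})}\!\left[\log p(X^{\text{obs}}, X^{\text{mis}} \mid G, \theta)\right]

Performing the M step with an exact anytime method is what leaves, as noted above, the EM formulation itself as the only source of approximation; the survival tree is then grown on a dataset completed by imputing each missing entry from its conditional distribution in the learned network.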