738 results for Maximizing


Relevance:

10.00%

Publisher:

Abstract:

The current standard of care for hepatitis C virus (HCV) infection, combination therapy with pegylated interferon and ribavirin, elicits sustained responses in only about 50% of the patients treated. No alternatives exist for patients who do not respond to combination therapy. Addition of ribavirin substantially improves response rates to interferon and lowers relapse rates following the cessation of therapy, suggesting that increasing ribavirin exposure may further improve treatment response. A key limitation, however, is the toxic side effect of ribavirin, hemolytic anemia, which often necessitates a reduction of ribavirin dosage and compromises treatment response. Maximizing treatment response thus requires striking a balance between the antiviral and hemolytic activities of ribavirin. Current models of viral kinetics describe the enhancement of treatment response due to ribavirin. Ribavirin-induced anemia, however, remains poorly understood and precludes rational optimization of combination therapy. Here, we develop a new mathematical model of the population dynamics of erythrocytes that quantitatively describes ribavirin-induced anemia in HCV patients. Based on the assumption that ribavirin accumulation decreases erythrocyte lifespan in a dose-dependent manner, model predictions capture several independent experimental observations of the accumulation of ribavirin in erythrocytes and the resulting decline of hemoglobin in HCV patients undergoing combination therapy, estimate the reduced erythrocyte lifespan during therapy, and describe inter-patient variations in the severity of ribavirin-induced anemia. Further, model predictions estimate the threshold ribavirin exposure beyond which anemia becomes intolerable and suggest guidelines for the usage of growth hormones, such as erythropoietin, that stimulate erythrocyte production and avert the reduction of ribavirin dosage, thereby improving treatment response. Our model thus facilitates, in conjunction with models of viral kinetics, the rational identification of treatment protocols that maximize treatment response while curtailing side effects.
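
The core modeling assumption, a dose-dependent reduction of erythrocyte lifespan, can be illustrated with a toy cohort simulation. The parameter values and the hyperbolic dose-response form below are our own illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def simulate_hb(days, base_lifespan=120.0, dose=0.0, k=0.002):
    """Toy cohort model: one erythrocyte cohort is produced per day and a
    cell dies once its age reaches a dose-dependent lifespan. The form
    L(d) = L0 / (1 + k*d) and all numbers are illustrative assumptions."""
    lifespan = base_lifespan / (1.0 + k * dose)
    ages = np.arange(int(base_lifespan), dtype=float)  # pre-therapy steady state
    hb = []
    for _ in range(days):
        ages = ages[ages < lifespan] + 1.0   # survivors age by one day
        ages = np.append(ages, 0.0)          # constant daily production
        hb.append(len(ages))
    return np.array(hb) / base_lifespan      # hemoglobin relative to baseline
```

With no drug the normalized hemoglobin holds near 1, while a positive dose settles at a lower plateau set by the shortened lifespan, mirroring the qualitative decline the abstract describes.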

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present robust semi-blind (SB) algorithms for the estimation of beamforming vectors for multiple-input multiple-output (MIMO) wireless communication. The transmitted symbol block is assumed to comprise a known sequence of training (pilot) symbols followed by information-bearing blind (unknown) data symbols. Analytical expressions are derived for the robust SB estimators of the MIMO receive and transmit beamforming vectors. These robust SB estimators employ a preliminary estimate obtained from the pilot symbol sequence and leverage the second-order statistical information from the blind data symbols. We employ the theory of Lagrangian duality to derive the robust estimate of the receive beamforming vector by maximizing an inner product while constraining the channel estimate to lie in a confidence sphere centered at the initial pilot estimate. Two different schemes are then proposed for computing the robust estimate of the MIMO transmit beamforming vector. Simulation results illustrate the superior performance of the robust SB estimators.
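
The sphere-constrained inner-product maximization mentioned above has a closed form: by Lagrangian duality (or Cauchy-Schwarz) the optimum sits on the sphere boundary, displaced from the pilot estimate along the blind-data direction. In the sketch below, the direction vector `g` standing in for the statistic derived from the blind data is our own assumption:

```python
import numpy as np

def robust_channel_estimate(h_pilot, g, eps):
    """Maximize Re(g^H h) subject to ||h - h_pilot|| <= eps.
    The maximizer is the pilot estimate moved distance eps along g."""
    return h_pilot + eps * g / np.linalg.norm(g)

# Hypothetical usage: g would come from the second-order statistics of the
# blind data (e.g., a principal eigenvector of the sample covariance).
rng = np.random.default_rng(0)
h_pilot = rng.standard_normal(4) + 1j * rng.standard_normal(4)
g = rng.standard_normal(4) + 1j * rng.standard_normal(4)
h_robust = robust_channel_estimate(h_pilot, g, eps=0.5)
```

The returned estimate lies exactly on the confidence sphere and strictly increases the inner product relative to the pilot-only estimate.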

Relevance:

10.00%

Publisher:

Abstract:

Employing multiple base stations is an attractive approach to enhancing the lifetime of wireless sensor networks. In this paper, we address the fundamental question concerning the limits on network lifetime when multiple base stations are deployed as data sinks. Specifically, we derive upper bounds on the network lifetime when multiple base stations are employed, and obtain optimum locations of the base stations (BSs) that maximize these lifetime bounds. For the case of two BSs, we jointly optimize the BS locations by maximizing the lifetime bound using a genetic-algorithm-based optimization. Joint optimization for a larger number of BSs is complex; hence, for the case of three BSs, we optimize the third BS location using the previously obtained optimum locations of the first two BSs. We also provide simulation results that validate the lifetime bounds and the optimum locations of the BSs.
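
The two-BS joint optimization can be sketched with a tiny elitist genetic algorithm. The squared-distance surrogate used as the lifetime bound below is our own illustrative stand-in, not the paper's derived bound:

```python
import numpy as np

rng = np.random.default_rng(42)
nodes = rng.random((40, 2))   # sensor node positions in the unit square

def lifetime_bound(chrom):
    """Surrogate bound (illustrative assumption): lifetime is limited by the
    node whose nearest BS is farthest away, with energy per bit growing as
    squared distance."""
    bs = chrom.reshape(2, 2)  # two base stations, (x, y) each
    d2 = ((nodes[:, None, :] - bs[None, :, :]) ** 2).sum(-1).min(1)
    return 1.0 / d2.max()

def ga(pop_size=30, gens=60, sigma=0.05):
    """Minimal elitist GA over the joint chromosome (x1, y1, x2, y2)."""
    pop = rng.random((pop_size, 4))
    for _ in range(gens):
        fit = np.array([lifetime_bound(p) for p in pop])
        elite = pop[np.argsort(-fit)[: pop_size // 2]]    # truncation selection
        kids = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        kids = np.clip(kids + rng.normal(0.0, sigma, kids.shape), 0.0, 1.0)
        pop = np.vstack([elite, kids])                    # elitism keeps the best
    fit = np.array([lifetime_bound(p) for p in pop])
    return pop[fit.argmax()], fit.max()

best_locs, best_bound = ga()
```

Jointly evolving both locations lets the two BSs split the field between them, which beats co-locating both at the centroid.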

Relevance:

10.00%

Publisher:

Abstract:

Given the increasing intensity of rice cultivation in coastal river basins, crop planning and groundwater management are imperative for sustainable agriculture. For effective management, two models have been developed: a groundwater balance model, and an optimal cropping and groundwater management model that determines the optimum cropping pattern and the groundwater allocation from private and government tubewells according to soil type (saline and non-saline), type of agriculture (rainfed and irrigated), and season (monsoon and winter). The groundwater balance model is based on a mass-balance approach; the components considered are recharge from rainfall, recharge from irrigated rice and non-rice fields, base flow from rivers, and seepage flow from surface drains. In the second phase, a linear programming optimization model is developed for optimal cropping and groundwater management that maximizes economic returns. The models were applied to a portion of a coastal river basin in Orissa State, India, and the optimal cropping pattern was obtained for various scenarios of river flow and groundwater availability.
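
The second-phase linear program can be sketched as follows. The crop set, net returns, and resource figures are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: net returns ($/ha) and irrigation demand (ha-m/ha)
# for a rice and a non-rice crop, plus land and total water availability.
returns = np.array([500.0, 300.0])
water_need = np.array([1.2, 0.4])
land_total = 1000.0     # ha of cultivable land
water_total = 800.0     # ha-m available (tubewells + river)

res = linprog(
    c=-returns,                          # maximize returns = minimize negative
    A_ub=[[1.0, 1.0], list(water_need)], # land constraint, water constraint
    b_ub=[land_total, water_total],
    bounds=[(0.0, None)] * 2,
)
areas = res.x                            # optimal hectares per crop
```

For these numbers both constraints bind, so the optimum splits the land 500 ha each, earning 400,000; rerunning with different `water_total` values reproduces the kind of scenario analysis described above.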

Relevance:

10.00%

Publisher:

Abstract:

We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type, and report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as rebates to the agents. Two performance criteria are of interest within the class of linear rebate functions: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus; the goal is to minimize both. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems in which the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints; the relaxed problem is a linear program (LP). We identify the number of samples needed for "near-feasibility" of the relaxed constraint set and, under some conditions on the valuation function, show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper reduce to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations in which the valuation functions are not known to the central planner are also discussed.
Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is interested not in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism and then returning as much of the collected money as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist; this relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system; however, we demonstrate via simulation that, if the mechanism is repeated several times over independent instances, past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
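
The constraint-sampling step can be sketched on a toy instance. The one-parameter constraint family below is an invented stand-in for the type-parameterized half-plane constraints, not the paper's mechanism-design constraints:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Toy semi-infinite LP: minimize x0 + x1 over x in [0, 10]^2 subject to a
# continuum of half-plane constraints parameterized by theta in [0, 1]:
#     x0 + theta * x1 >= theta**2    for all theta.
# Constraint sampling replaces "for all theta" by a finite random sample.
thetas = rng.random(200)
A_ub = np.column_stack([-np.ones_like(thetas), -thetas])  # -(x0 + th*x1) <= -th^2
b_ub = -thetas**2

res = linprog(c=[1.0, 1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 10.0)] * 2)
```

For this toy instance the continuum problem has value 1 (binding at theta = 1), while the sampled LP attains max(theta_i)^2, slightly below 1. Quantifying how many samples keep such a solution "nearly feasible" for the unsampled constraints is exactly the question the paper addresses.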

Relevance:

10.00%

Publisher:

Abstract:

Bid optimization is now becoming quite popular in sponsored search auctions on the Web. Given a keyword and the maximum willingness to pay of each advertiser interested in the keyword, the bid optimizer generates a profile of bids for the advertisers with the objective of maximizing customer retention without compromising the revenue of the search engine. In this paper, we present a bid optimization algorithm based on a Nash bargaining model in which the first player is the search engine and the second player is a virtual agent representing all the bidders. We make the realistic assumption that each bidder specifies a maximum willingness-to-pay value and a discrete, finite set of bid values. We show that the Nash bargaining solution for this problem always lies on a certain edge of the convex hull such that one endpoint of the edge is the vector of maximum willingness to pay of all the bidders. We show that the other endpoint of this edge can be computed as the solution of a linear programming problem. We also show how the solution can be transformed into a bid profile for the advertisers.
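
The Nash bargaining solution underlying this approach can be sketched on a generic two-player instance. The feasible set and disagreement point below are invented for illustration and are far simpler than the search-engine/bidders model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-player Nash bargaining problem: feasible utilities u1 + u2 <= 10
# with disagreement point (0, 0). The Nash bargaining solution maximizes
# the product (u1 - 0)(u2 - 0), i.e., the sum of logs.
res = minimize(
    lambda u: -(np.log(u[0]) + np.log(u[1])),    # negative log Nash product
    x0=[1.0, 1.0],
    constraints=[{"type": "ineq", "fun": lambda u: 10.0 - u[0] - u[1]}],
    bounds=[(1e-6, None)] * 2,
)
u_star = res.x
```

Because this toy problem is symmetric, the solution splits the frontier equally at (5, 5); in the paper's setting the analogous maximizer is located on an edge of the convex hull and pinned down via an LP rather than a general-purpose solver.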

Relevance:

10.00%

Publisher:

Abstract:

In this paper we model a scenario in which a ship uses decoys to evade a hostile torpedo. We address the problem of enhancing ship survivability against enemy torpedoes using single and multiple decoy deployments. The model incorporates deterministic ship maneuvers and realistic constraints on turn rates, field of view, etc. We formulate an objective function that quantifies ship survivability in terms of the intercept time, which we seek to maximize. We introduce the concepts of optimal deployment regions, same-side deployment, and zig-zag deployment strategies. Finally, we present simulation results.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we address the problem of forming procurement networks for items with value-adding stages that are linearly arranged. Formation of such procurement networks involves a bottom-up assembly of complex production, assembly, and exchange relationships through supplier selection and contracting decisions. Research in supply chain management has emphasized that such decisions need to take into account the fact that suppliers and buyers are intelligent and rational agents who act strategically. We view the problem of procurement network formation (PNF) for multiple units of a single item as a cooperative game in which agents cooperate to form a surplus-maximizing procurement network and then share the surplus in a fair manner. We study the implications of using the Shapley value as a solution concept for forming such procurement networks. We also present a protocol, based on the extensive-form game realization of the Shapley value, for forming these networks.
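
The Shapley value used as the solution concept above can be computed exactly for small games by averaging marginal contributions over all join orders. The three-stage chain and its surplus below are a hypothetical example, not the paper's network:

```python
import math
from itertools import permutations

def shapley(players, value):
    """Exact Shapley value: average each player's marginal contribution
    over all orderings (tractable for small illustrative games)."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    n_fact = math.factorial(len(players))
    return {p: v / n_fact for p, v in phi.items()}

# Hypothetical linear chain: a surplus of 12 is created only when the
# supplier "s", assembler "a", and retailer "r" all participate.
v = lambda S: 12.0 if S >= {"s", "a", "r"} else 0.0
phi = shapley(["s", "a", "r"], v)
```

Since every stage is essential here, the Shapley value splits the surplus symmetrically (4 each), and the shares always sum to the grand-coalition surplus.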

Relevance:

10.00%

Publisher:

Abstract:

This paper is focused on the development of a model for predicting the mean drop size in effervescent sprays. A combinatorial approach based on energy and entropy principles is followed in this modeling scheme. The model is implemented in cascade in order to account for primary breakup (due to exploding gas bubbles) and secondary breakup (due to the shearing action of the surrounding medium). The approach is to obtain the most probable drop size distribution by maximizing the entropy while satisfying the constraints of mass and energy balance. A comparison of the model predictions with past experimental data is presented for validation. A careful experimental study conducted over a wide range of gas-to-liquid ratios shows good agreement with the model predictions: the model gives accurate results in the bubbly and annular flow regimes, but discrepancies are observed in the transitional slug flow regime of the atomizer.
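
Entropy maximization under a moment constraint yields an exponential-family distribution whose Lagrange multiplier is fixed by the constraint. The sketch below uses a single invented mean-mass constraint and an invented diameter grid, far simpler than the paper's coupled mass and energy balances:

```python
import numpy as np
from scipy.optimize import brentq

# Find the discrete drop-size distribution of maximum entropy whose mean
# drop mass matches a target. The maximizer has the form p_i ~ exp(-lam*m_i);
# the multiplier lam is the root of the constraint equation.
d = np.linspace(5.0, 200.0, 100)     # candidate drop diameters (illustrative)
m = d**3                             # drop mass proportional to diameter cubed
target = 1.0e5                       # assumed mean-mass constraint

def mean_mass(lam):
    w = np.exp(-lam * (m - m.min()))  # shifted exponent for numerical stability
    p = w / w.sum()
    return (p * m).sum()

lam = brentq(lambda L: mean_mass(L) - target, 0.0, 1.0)
p = np.exp(-lam * (m - m.min()))
p /= p.sum()                          # most probable (max-entropy) distribution
```

At `lam = 0` the distribution is uniform (maximum entropy, mean mass too large); increasing `lam` shifts probability toward small drops until the mass constraint is met.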

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we consider the problem of associating wireless stations (STAs) with an access network served by a wireless local area network (WLAN) and a 3G cellular network. There is a set of WLAN access points (APs), a set of 3G base stations (BSs), and a number of STAs, each of which needs to be associated with one of the APs or one of the BSs. We concentrate on downlink bulk elastic transfers. Each association provides each STA with a certain transfer rate. We evaluate an association on the basis of the sum-log utility of the transfer rates and seek the utility-maximizing association. We also obtain the optimal time scheduling of service from a 3G BS to its associated STAs. We propose a fast iterative heuristic algorithm to compute an association. Numerical results show that our algorithm converges in a few steps, yielding an association that is within 1% (in objective value) of the optimal (obtained through exhaustive search); in most cases the algorithm yields an optimal solution.
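
A sum-log association objective with an iterative improvement heuristic can be sketched as follows. The equal time-sharing model and the rate matrix are our own simplifying assumptions, not the paper's algorithm or scheduling result:

```python
import numpy as np

rng = np.random.default_rng(3)
peak = rng.uniform(1.0, 10.0, size=(8, 3))  # peak rate of STA i on AP/BS j

def utility(assoc):
    """Sum-log utility when each AP/BS shares its capacity equally among
    its associated STAs (a simplification of optimal scheduling)."""
    total = 0.0
    for j in range(peak.shape[1]):
        members = np.where(assoc == j)[0]
        for i in members:
            total += np.log(peak[i, j] / len(members))
    return total

def greedy_association(max_rounds=50):
    """Iterative heuristic: re-associate one STA at a time whenever the
    move improves the sum-log utility, until no move helps."""
    assoc = peak.argmax(1)                 # start each STA at its best peak rate
    for _ in range(max_rounds):
        improved = False
        for i in range(peak.shape[0]):
            for j in range(peak.shape[1]):
                trial = assoc.copy()
                trial[i] = j
                if utility(trial) > utility(assoc) + 1e-12:
                    assoc, improved = trial, True
        if not improved:
            break
    return assoc
```

Because each accepted move strictly increases the objective, the heuristic terminates at a local optimum at least as good as the greedy peak-rate start.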

Relevance:

10.00%

Publisher:

Abstract:

Fuzzy multiobjective programming for the deterministic case involves maximizing the minimum goal satisfaction level among the conflicting goals of different stakeholders using the max-min approach. Uncertainty due to randomness in fuzzy multiobjective programming may be addressed by modifying the constraints using a probabilistic inequality (e.g., Chebyshev's inequality) or by adding new constraints using statistical moments (e.g., skewness). Such modifications may reduce the optimal value of the system performance measure. In the present study, a methodology is developed that allows some violation of the newly added and modified constraints and then minimizes these violations while maximizing the minimum goal satisfaction level. Fuzzy goal programming is used to solve the multiobjective model. The proposed methodology is demonstrated with an application to waste load allocation (WLA) in a river system.
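
The deterministic max-min step reduces to a linear program. The two linear membership functions below are invented for illustration (one stakeholder prefers a large decision value, the other a small one); they are not the WLA model:

```python
from scipy.optimize import linprog

# Max-min formulation over one decision variable x in [0, 10] with two
# illustrative membership functions:
#   stakeholder A: mu_A = x / 10        (prefers large x)
#   stakeholder B: mu_B = (10 - x) / 10 (prefers small x)
# Maximize the minimum satisfaction level lam subject to mu_A, mu_B >= lam.
res = linprog(
    c=[0.0, -1.0],          # variables (x, lam); maximize lam
    A_ub=[[-0.1, 1.0],      # lam <= x / 10
          [0.1, 1.0]],      # lam <= (10 - x) / 10
    b_ub=[0.0, 1.0],
    bounds=[(0.0, 10.0), (0.0, 1.0)],
)
x_opt, lam_opt = res.x
```

The conflicting goals balance at x = 5 with a common satisfaction level of 0.5; the paper's methodology adds violation variables for the probabilistic constraints on top of this structure.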

Relevance:

10.00%

Publisher:

Abstract:

Swarm intelligence algorithms are applied to the optimal control of flexible smart structures bonded with piezoelectric actuators and sensors. The optimal locations of actuators/sensors and the feedback gain are obtained by maximizing the energy dissipated by the feedback control system. We provide a mathematical proof that the system is uncontrollable if the actuators and sensors are placed at the nodal points of the mode shapes. Finding the optimal actuator/sensor locations and feedback gain is a constrained non-linear optimization problem, which is converted to an unconstrained problem using penalty functions. Two swarm intelligence algorithms, artificial bee colony (ABC) and glowworm swarm optimization (GSO), are used to obtain the optimal solution. Earlier published research considered a cantilever beam with one or two collocated actuator(s)/sensor(s) and obtained numerical results using genetic algorithms and gradient-based optimization methods. We consider the same problem and present the results obtained with the ABC and GSO algorithms. An extension of the cantilever beam problem to five collocated actuators/sensors is also considered, and the numerical results obtained with the ABC and GSO algorithms are presented. The effect of increasing the number of design variables (locations of actuators and sensors, and the gain) on the optimization process is investigated. It is shown that the ABC and GSO algorithms are robust and are good choices for the optimization of smart structures.
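
The penalty-function conversion and the swarm search can be sketched together. The test problem is invented (it is not the beam model), and the loop below is a bare-bones ABC-style scheme, not the full algorithm with employed/onlooker/scout phases:

```python
import numpy as np

rng = np.random.default_rng(7)

def penalized(x):
    """Illustrative constrained problem: minimize (x0-1)^2 + (x1-2)^2
    subject to x0 + x1 <= 2, converted to an unconstrained problem with
    a quadratic penalty, as described in the text."""
    obj = (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
    viol = max(0.0, x[0] + x[1] - 2.0)
    return obj + 100.0 * viol ** 2

def abc_minimize(f, dim=2, bees=20, iters=200, lo=-5.0, hi=5.0):
    """Bare-bones ABC-style loop: each food source is perturbed along one
    coordinate relative to a random other source and the better position
    is kept (greedy selection)."""
    x = rng.uniform(lo, hi, (bees, dim))
    fit = np.array([f(xi) for xi in x])
    for _ in range(iters):
        for i in range(bees):
            k = rng.integers(bees)
            j = rng.integers(dim)
            cand = x[i].copy()
            cand[j] += rng.uniform(-1.0, 1.0) * (x[i, j] - x[k, j])
            fc = f(cand)
            if fc < fit[i]:
                x[i], fit[i] = cand, fc
    best = fit.argmin()
    return x[best], fit[best]

best_x, best_f = abc_minimize(penalized)
```

The penalty steers the swarm toward the constrained optimum near (0.5, 1.5), where the penalized objective value is about 0.5.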

Relevance:

10.00%

Publisher:

Abstract:

Finite element modeling can be a useful tool for predicting the behavior of composite materials and for arriving at filler contents that maximize mechanical performance. In the present study, to corroborate finite element analysis results, quantitative information on the effect of reinforcing polypropylene (PP) with various proportions of nanoclay (in the range of 3-9% by weight) is obtained through experiments; in particular, attention is paid to the Young's modulus, tensile strength, and failure strain. Micromechanical finite element analysis combined with Monte Carlo simulation has been carried out to establish the validity of the modeling procedure and the accuracy of prediction by comparison against experimentally determined stiffness moduli of the nanocomposites. In the same context, predictions of Young's modulus yielded by theoretical micromechanics-based models are compared with experimental results. Macromechanical modeling was carried out to capture the non-linear stress-strain behavior, including failure, observed in experiments, as this is deemed a more viable tool for analyzing products made of nanocomposites, including applications involving dynamics. (C) 2011 Elsevier Ltd. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we propose power management algorithms for maximizing the utility of energy harvesting sensors (EHS) that operate purely on the basis of energy harvested from the environment. In particular, we consider communication (i.e., transmission and reception) power management issues for EHS under an energy neutrality constraint. We also consider the fixed power loss effects of the circuitry, the battery inefficiency and its storage capacity, in the design of the algorithms. We propose a two-stage structure that exploits the inherent difference in the timescales at which the energy harvesting and channel fading processes evolve, without loss of optimality of the resulting solution. The outer stage schedules the power that can be used by an inner stage algorithm, so as to maximize the long term average utility and at the same time maintain energy neutrality. The inner stage optimizes the communication parameters to achieve maximum utility in the short-term, subject to the power constraint imposed by the outer stage. We optimize the algorithms for different transmission schemes such as the truncated channel inversion and retransmission strategies. The performance of the algorithms is illustrated via simulations using solar irradiance data, and for the case of Rayleigh fading channels. The results demonstrate the significant performance benefits that can be obtained using the proposed power management algorithms compared to the energy efficient (optimum when there is no storage) and the uniform power consumption (optimum when the battery has infinite capacity and is perfectly efficient) approaches.
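
The inner-stage optimization for one of the transmission schemes named above, truncated channel inversion, can be sketched as follows. The unit noise power, the grid search, and all numbers are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(11)
gains = rng.exponential(1.0, 100_000)   # Rayleigh fading: exponential power gains

def tci_rate(p_avg, cutoff):
    """Truncated channel inversion sketch: transmit only when the gain
    exceeds `cutoff`, inverting the channel so the received SNR is constant
    while active; the scale is set so the long-run average transmit power
    equals the budget p_avg handed down by the outer stage."""
    inv = np.where(gains >= cutoff, 1.0 / gains, 0.0)
    snr = p_avg / inv.mean()            # constant received SNR while active
    return (gains >= cutoff).mean() * np.log2(1.0 + snr)

# Inner stage: for a given outer-stage power budget, grid-search the
# truncation cutoff that maximizes throughput.
cutoffs = np.linspace(0.05, 2.0, 50)
best_cut = max(cutoffs, key=lambda c: tci_rate(1.0, c))
```

Truncation trades outage probability against the power wasted inverting deep fades; the outer stage would then vary `p_avg` over the slower energy-harvesting timescale while maintaining energy neutrality.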

Relevance:

10.00%

Publisher:

Abstract:

A moving-magnet linear-motor compressor, or pressure wave generator (PWG), of 2 cc swept volume with a dual opposed-piston configuration has been developed to operate miniature pulse tube coolers. Preliminary experiments yielded a no-load cold-end temperature of only 180 K. Auxiliary tests and the interpretation of detailed modeling of the PWG suggest that much of the PV power is lost as blow-by at the piston seals, owing to a large, non-optimum clearance-seal gap between piston and cylinder. Simulation of the experimental parameters using Sage provides the optimum seal gap value for maximizing the delivered PV power.