738 results for Maximizing


Relevance:

10.00%

Publisher:

Abstract:

With the increasing adoption of wireless technology, it is reasonable to expect an increase in the demand for supporting both real-time multimedia and high-rate reliable data services. Next-generation wireless systems employ an Orthogonal Frequency Division Multiplexing (OFDM) physical layer owing to the high data rate transmissions that are possible without an increase in bandwidth. Towards improving the performance of these systems, we look at the design of resource allocation algorithms at the medium-access (MAC) layer and their impact on higher layers. While TCP-based elastic traffic needs reliable transport, UDP-based real-time applications have stringent delay and rate requirements. The MAC algorithms, while catering to the heterogeneous service needs of these higher layers, trade off between maximizing the system capacity and providing fairness among users. The novelty of this work is the proposal of various channel-aware resource allocation algorithms at the MAC layer, which can result in significant performance gains in an OFDM-based wireless system.
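
The abstract does not detail the proposed algorithms, so as a generic illustration of a channel-aware MAC-layer tradeoff between capacity and fairness, the following Python sketch allocates OFDM subcarriers by a proportional-fair rule; every name and parameter here is a hypothetical toy, not taken from the paper.

```python
import numpy as np

def proportional_fair_allocation(rates, avg_throughput):
    """Assign each subcarrier to the user maximizing instantaneous rate
    divided by average throughput (the proportional-fair metric)."""
    metric = rates / avg_throughput[:, None]   # PF metric per (user, subcarrier)
    return np.argmax(metric, axis=0)           # winning user for each subcarrier

rng = np.random.default_rng(0)
n_users, n_subcarriers = 4, 16
rates = rng.rayleigh(1.0, (n_users, n_subcarriers))  # toy fading channel rates
avg_throughput = np.ones(n_users)                    # running average throughputs

alloc = proportional_fair_allocation(rates, avg_throughput)
for u in range(n_users):
    print(f"user {u}: subcarriers {np.where(alloc == u)[0].tolist()}")
```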

Relevance:

10.00%

Publisher:

Abstract:

Formation of high value procurement networks involves a bottom-up assembly of complex production, assembly, and exchange relationships through supplier selection and contracting decisions, where suppliers are intelligent and rational agents who act strategically. In this paper we address the problem of forming procurement networks for items with value adding stages that are linearly arranged. We model the problem of Procurement Network Formation (PNF) for multiple units of a single item as a cooperative game where agents cooperate to form a surplus-maximizing procurement network and then share the surplus in a stable and fair manner. We first investigate the stability of such networks by examining the conditions under which the core of the game is non-empty. We then present a protocol, based on the extensive form game realization of the core, for forming such networks so that the resulting network is stable. We also mention a key result when the Shapley value is applied as a solution concept.
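
The Shapley value mentioned as a solution concept has a standard definition; the sketch below computes exact Shapley values for a toy three-supplier surplus game (the surplus numbers are invented for illustration, not from the paper).

```python
import math
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings; v maps a frozenset of players to its surplus."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = math.factorial(len(players))
    return {p: total / n_fact for p, total in phi.items()}

# Hypothetical surplus of each supplier coalition (purely illustrative numbers).
surplus = {frozenset(): 0, frozenset("A"): 0, frozenset("B"): 0, frozenset("C"): 0,
           frozenset("AB"): 6, frozenset("AC"): 5, frozenset("BC"): 4,
           frozenset("ABC"): 9}
print(shapley_values(["A", "B", "C"], lambda s: surplus[s]))
```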

Relevance:

10.00%

Publisher:

Abstract:

Relatively few studies have addressed water management and adaptation measures in the face of changing water balances due to climate change. The current work studies the impact of climate change on the performance of a multipurpose reservoir and derives adaptive policies for possible future scenarios. The method developed in this work is illustrated with a case study of the Hirakud reservoir on the Mahanadi river in Orissa, India, which is a multipurpose reservoir serving flood control, irrigation and power generation. Climate change effects on annual hydropower generation and four performance indices (reliability with respect to three reservoir functions, viz. hydropower, irrigation and flood control; resiliency; vulnerability; and deficit ratio with respect to hydropower) are studied. Outputs from three general circulation models (GCMs) for three scenarios each are downscaled to monsoon streamflow in the Mahanadi river for two future time slices, 2045-65 and 2075-95. Increased irrigation demands, rule curves dictated by the increased need for flood storage, and downscaled projections of streamflow from the ensemble of GCMs and scenarios are used for projecting future hydrologic scenarios. Hydropower generation and reliability with respect to hydropower and irrigation are likely to decrease in most future scenarios, whereas the deficit ratio and vulnerability are likely to increase as a result of climate change if the standard operating policy (SOP) using current rule curves for flood protection is employed. An optimal monthly operating policy is then derived using stochastic dynamic programming (SDP) as an adaptive policy for mitigating the impacts of climate change on reservoir operation. The objective of this policy is to maximize reliability with respect to the multiple reservoir functions of hydropower, irrigation and flood control. In variations to this adaptive policy, increasing weight is given to maximizing reliability with respect to hydropower for two extreme scenarios. It is seen that by marginally sacrificing reliability with respect to irrigation and flood control, hydropower reliability and generation can be increased for future scenarios. This suggests that reservoir rules for flood control may have to be revised in basins where climate change projects an increasing probability of droughts. However, it is also seen that power generation cannot be restored to current levels, due in part to the large projected increases in irrigation demand. This suggests that future water balance deficits may limit the success of adaptive policy options.
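
The study's SDP formulation is far richer, but a minimal backward stochastic-dynamic-programming recursion over discretized storage and inflow states, with placeholder transition probabilities, demands and rewards, can illustrate how such an operating policy is derived:

```python
import numpy as np

# Hypothetical discretization: 5 storage classes, 3 inflow classes, 12 months.
nS, nQ, nT = 5, 3, 12
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(nQ), size=(nT, nQ))   # monthly inflow transition probs
inflow = np.array([1.0, 2.0, 3.0])              # inflow volume per inflow class
demand = 2.0                                    # target release (toy value)
releases = np.arange(0, 4)                      # feasible release decisions

V = np.zeros((nS, nQ))                          # terminal value function
policy = np.zeros((nT, nS, nQ), dtype=int)
for t in reversed(range(nT)):
    Vn = np.full((nS, nQ), -np.inf)
    for s in range(nS):
        for q in range(nQ):
            for r in releases:
                s2 = int(np.clip(s + inflow[q] - r, 0, nS - 1))  # storage balance
                reward = 1.0 if r >= demand else 0.0             # reliability reward
                val = reward + P[t, q] @ V[s2]                   # expected future value
                if val > Vn[s, q]:
                    Vn[s, q], policy[t, s, q] = val, r
    V = Vn
print("month-0 release policy (storage x inflow):\n", policy[0])
```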

Relevance:

10.00%

Publisher:

Abstract:

Guo and Nixon proposed a feature selection method based on maximizing I(x; Y), the multidimensional mutual information between the feature vector x and the class variable Y. Because computing I(x; Y) can be difficult in practice, Guo and Nixon proposed an approximation of I(x; Y) as the criterion for feature selection. We show that Guo and Nixon's criterion originates from approximating the joint probability distributions in I(x; Y) by second-order product distributions. We remark on the limitations of the approximation and discuss computationally attractive alternatives for computing I(x; Y).
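
As a rough illustration of this family of criteria (not Guo and Nixon's exact formula), the sketch below scores a feature set by a second-order surrogate: the sum of per-feature mutual information terms I(x_j; Y), estimated from discrete counts.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X; Y) in nats for two discrete arrays."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def sum_of_pairwise_mi(X, y):
    """Second-order surrogate for I(x; Y): sum per-feature terms I(x_j; Y)."""
    return sum(mutual_information(X[:, j], y) for j in range(X.shape[1]))

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
flips = (rng.random(200) < 0.1).astype(int)
X = np.c_[y ^ flips, rng.integers(0, 2, 200)]  # informative feature, then pure noise
print(f"criterion value: {sum_of_pairwise_mi(X, y):.3f}")
```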

Relevance:

10.00%

Publisher:

Abstract:

This paper reports a measurement of the cross section for the pair production of top quarks in ppbar collisions at sqrt(s) = 1.96 TeV at the Fermilab Tevatron. The data were collected from the CDF II detector in a set of runs with a total integrated luminosity of 1.1 fb^{-1}. The cross section is measured in the dilepton channel, the subset of ttbar events in which both top quarks decay through t -> Wb -> l nu b, where l = e, mu, or tau. The lepton pair is reconstructed as one identified electron or muon and one isolated track. The use of an isolated track to identify the second lepton increases the ttbar acceptance, particularly for the case in which one W decays as W -> tau nu. The purity of the sample may be further improved, at the cost of a reduction in the number of signal events, by requiring an identified b-jet. We present the results of measurements performed with and without the requirement of an identified b-jet; the former is the first published CDF result for which a b-jet requirement is added to the dilepton selection. In the CDF data there are 129 pretag lepton + track candidate events, of which 69 are tagged. With the tagging information, the sample is divided into tagged and untagged sub-samples, and a combined cross section is calculated by maximizing a likelihood. The result is sigma_{ttbar} = 9.6 +/- 1.2 (stat.) -0.5 +0.6 (sys.) +/- 0.6 (lum.) pb, assuming a branching ratio of BR(W -> l nu) = 10.8% and a top mass of m_t = 175 GeV/c^2.
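
A minimal sketch of that kind of combined fit: two independent Poisson counting channels sharing one cross section parameter. Only the observed counts (129 pretag, 69 tagged) and the luminosity are from the abstract; the acceptances and background expectations below are made-up placeholders.

```python
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# Observed counts from the abstract; everything else is a hypothetical placeholder.
n_tag, n_untag = 69, 129 - 69
lum = 1100.0                       # pb^-1, from the abstract
acc_tag, acc_untag = 0.004, 0.003  # hypothetical signal acceptances
b_tag, b_untag = 8.0, 25.0         # hypothetical expected backgrounds

def nll(sigma):
    """Negative log-likelihood: independent Poisson counts per sub-sample."""
    mu_tag = sigma * acc_tag * lum + b_tag
    mu_untag = sigma * acc_untag * lum + b_untag
    return -(poisson.logpmf(n_tag, mu_tag) + poisson.logpmf(n_untag, mu_untag))

fit = minimize_scalar(nll, bounds=(0.1, 50.0), method="bounded")
print(f"fitted cross section: {fit.x:.1f} pb")
```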

Relevance:

10.00%

Publisher:

Abstract:

We present a measurement of the top quark mass in the all-hadronic channel ($t\bar{t} \to b\bar{b}\,q_{1}\bar{q}_{2}q_{3}\bar{q}_{4}$) using 943 pb$^{-1}$ of $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV collected with the CDF II detector at Fermilab. We apply the standard model production and decay matrix element (ME) to $t\bar{t}$ candidate events. We calculate per-event probability densities according to the ME calculation and construct template models of signal and background. The scale of the jet energy is calibrated using additional templates formed with the invariant mass of pairs of jets. These templates form an overall likelihood function that depends on the top quark mass and on the jet energy scale (JES). We estimate both by maximizing this function. Given 72 observed events, we measure a top quark mass of $171.1 \pm 3.7$ (stat.+JES) $\pm 2.1$ (syst.) GeV/$c^{2}$. The combined uncertainty on the top quark mass is 4.3 GeV/$c^{2}$.
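
A toy version of a two-parameter template likelihood, with Gaussian templates for the reconstructed top mass and for the dijet (W) mass that calibrates the JES, might be scanned over a grid as follows; all numbers are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
masses = rng.normal(171.0, 12.0, 72)        # toy reconstructed top-mass values
wmasses = rng.normal(80.4, 6.0, 72)         # toy dijet (W) masses constraining JES

def log_like(mt, jes):
    """Gaussian signal templates whose means scale with the JES (toy model)."""
    top = norm.logpdf(masses, loc=mt * jes, scale=12.0).sum()
    wjj = norm.logpdf(wmasses, loc=80.4 * jes, scale=6.0).sum()
    return top + wjj

mt_grid = np.linspace(160.0, 180.0, 81)
jes_grid = np.linspace(0.95, 1.05, 41)
ll = np.array([[log_like(m, j) for j in jes_grid] for m in mt_grid])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
print(f"best fit: mt = {mt_grid[i]:.2f} GeV/c^2, JES = {jes_grid[j]:.3f}")
```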

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we develop a novel auction algorithm for the procurement of wireless channels by a wireless node in a heterogeneous wireless network. We assume that the service providers of the heterogeneous wireless network are selfish and non-cooperative, in the sense that they are only interested in maximizing their own utilities. The wireless user needs to procure wireless channels to execute multiple tasks. To solve the wireless user's problem, we propose a reverse optimal (REVOPT) auction and derive an expression for the expected payment by the wireless user. The proposed REVOPT auction mechanism satisfies important game-theoretic properties such as Bayesian incentive compatibility and individual rationality.
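
The REVOPT mechanism is specific to the paper; as a baseline illustration of a truthful reverse (procurement) auction, the sketch below implements a Vickrey-style rule in which the cheapest provider wins and is paid the second-lowest bid (the bids are hypothetical).

```python
def reverse_vickrey(bids):
    """Procure one channel: the cheapest provider wins and is paid the
    second-lowest bid, making truthful bidding a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]
    payment = ranked[1][1]
    return winner, payment

bids = {"provider_A": 4.0, "provider_B": 2.5, "provider_C": 3.0}  # hypothetical costs
winner, pay = reverse_vickrey(bids)
print(f"{winner} wins and is paid {pay}")
```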

Relevance:

10.00%

Publisher:

Abstract:

Motivated by certain situations in manufacturing systems and communication networks, we look into the problem of maximizing the profit in a queueing system with a linear reward and cost structure and a choice of selecting streams of Poisson arrivals according to an independent Markov chain. We view the system as an MMPP/GI/1 queue and seek to maximize the profits by optimally choosing the stationary probabilities of the modulating Markov chain. We consider two formulations of the optimization problem. The first (which we call the PUT problem) seeks to maximize the profit per unit time, whereas the second considers the maximization of the profit per accepted customer (the PAC problem). In each of these formulations, we explore three separate problems. In the first, the constraints come from bounding the utilization of an infinite capacity server; in the second, the constraints arise from bounding the mean queue length of the same queue; and in the third, the finite capacity of the buffer is reflected as a set of constraints. For the problems bounding the utilization factor of the queue, the solutions are given by what are essentially linear programs, while the problems with mean queue length constraints are linear programs if the service is exponentially distributed. The problems modeling the finite capacity queue are non-convex programs for which global maxima can be found. There is a rich relationship between the solutions of the PUT and PAC problems. In particular, the PUT solutions always make the server work at a utilization factor that is no less than that of the PAC solutions.
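
A minimal sketch of the utilization-bounded PUT formulation as a linear program, under the assumption of freely choosable stationary probabilities pi_i: maximize sum_i pi_i * lambda_i * r_i subject to sum_i pi_i * lambda_i * E[S] <= rho_max. The rates, rewards and bound below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

lam = np.array([2.0, 5.0, 9.0])       # Poisson arrival rate in each chain state
reward = np.array([1.0, 1.2, 1.5])    # profit per accepted customer per state
ES = 0.1                              # mean service time
rho_max = 0.8                         # utilization bound

# Maximize sum_i pi_i * lam_i * reward_i  ==  minimize the negative.
c = -(lam * reward)
A_ub = [lam * ES]                     # sum_i pi_i * lam_i * E[S] <= rho_max
b_ub = [rho_max]
A_eq = [np.ones(3)]                   # pi is a probability vector
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 3)
print("optimal pi:", res.x, " profit per unit time:", -res.fun)
```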

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we use reinforcement learning (RL) as a tool to study price dynamics in an electronic retail market consisting of two competing sellers and price-sensitive, lead-time-sensitive customers. Sellers, offering identical products, compete on price to satisfy stochastically arriving demands (customers), and follow standard inventory control and replenishment policies to manage their inventories. In such a generalized setting, RL techniques have not previously been applied. We consider two representative cases: 1) the no-information case, where none of the sellers has any information about customer queue levels, inventory levels, or prices at the competitors; and 2) the partial-information case, where every seller has information about the customer queue levels and inventory levels of the competitors. Sellers employ automated pricing agents, or pricebots, which use RL-based pricing algorithms to reset prices at random intervals based on factors such as the number of back orders, inventory levels, and replenishment lead times, with the objective of maximizing discounted cumulative profit. In the no-information case, we show that a seller who uses Q-learning outperforms a seller who uses derivative following (DF). In the partial-information case, we model the problem as a Markovian game and use actor-critic based RL to learn dynamic prices. We believe our approach to solving these problems is a new and promising way of setting dynamic prices in multiseller environments with stochastic demands, price-sensitive customers, and inventory replenishments.
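
As a bare-bones illustration of RL-based pricing in the no-information case (the state space, demand model and all parameters below are hypothetical toys, not the paper's market model), a tabular Q-learning pricebot might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
prices = np.array([4.0, 5.0, 6.0])          # discrete price actions
n_inv_levels = 10                           # state = current inventory level
Q = np.zeros((n_inv_levels, len(prices)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(inv, price):
    """Toy market: demand falls with price; margin = price - unit cost (3.0)."""
    demand = rng.poisson(max(8.0 - price, 0.0))
    sold = min(demand, inv)
    profit = sold * (price - 3.0)
    next_inv = min(inv - sold + 2, n_inv_levels - 1)   # fixed replenishment of 2
    return next_inv, profit

inv = n_inv_levels - 1
for t in range(50_000):
    a = rng.integers(len(prices)) if rng.random() < eps else int(np.argmax(Q[inv]))
    next_inv, r = step(inv, prices[a])
    Q[inv, a] += alpha * (r + gamma * Q[next_inv].max() - Q[inv, a])  # Q-update
    inv = next_inv

print("greedy price per inventory level:", prices[np.argmax(Q, axis=1)])
```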

Relevance:

10.00%

Publisher:

Abstract:

Using first principles calculations, we show the high hydrogen storage capacity of metallacarboranes, where the transition metal (TM) atoms can bind up to five H2 molecules. The average binding energy of ~0.3 eV/H2 lies favorably within the reversible adsorption range. Among the first-row TM atoms, Sc and Ti are found to be optimal in maximizing the H2 storage (~8 wt %) on the metallacarborane cluster. Being an integral part of the cage, the TMs do not suffer from the aggregation problem, which has been the biggest hurdle to the success of TM-decorated graphitic materials for hydrogen storage. Furthermore, the presence of carbon atoms in the cages permits linking the metallacarboranes to form metal organic frameworks, which are thus able to adsorb hydrogen via the Kubas interaction, in addition to van der Waals physisorption.

Relevance:

10.00%

Publisher:

Abstract:

This study determined the potential allowable cut in the district of Pohjois-Savo, based on the non-industrial private forest (NIPF) landowners' choices of timber management strategies. Alternative timber management strategies were generated, and the landowners' choices among them, and the factors affecting those choices, were studied. The choices of timber management strategies were solved by maximizing the utility functions of the NIPF landowners. The parameters of the utility functions were estimated using the Analytic Hierarchy Process (AHP). The level of the potential allowable cut was compared to the cutting budgets based on the 7th and 8th National Forest Inventories (NFI7 and NFI8), to the combining of private forestry plans, and to the realized drain from non-industrial private forests. The potential allowable cut was calculated using the same MELA system as has been used in the calculation of the national cutting budget. The data consisted of the NIPF holdings (from the TASO planning system) that had been inventoried compartmentwise and had forestry plans made during the years 1984-1992. The NIPF landowners' choices of timber management strategies were elicited by a two-phase mail inquiry. The most preferred strategy was "sustainability" (chosen by 62% of landowners), followed by "finance" (17%) and "saving" (11%); "no cuttings" and "maximum cuttings" were the least preferred (9% and 1%, respectively). The factors promoting the choice of strategies with intensive cuttings were a) "farmer as forest owner" and "owning fields", b) "increase in the size of the forest holding", c) agriculture and forestry orientation in production, d) "decreasing short term stumpage earning expectations", e) "increasing intensity of future cuttings", and f) "choice of forest taxation system based on site productivity". The potential allowable cut defined in the study was 20% higher than the average realized drain during the years 1988-1993, which in turn was at the same level as the cutting budget based on the combining of forestry plans in eastern Finland. Respectively, the potential allowable cut defined in the study was 12% lower than the NFI8-based greatest sustained allowable cut for the 1990s. Using the method presented in this study, timber management strategies can be clarified for non-industrial private forest landowners in different parts of Finland. Based on the choices of timber management strategies, regular cutting budgets can be calculated more realistically than before.
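
AHP estimates priority weights as the principal eigenvector of a pairwise comparison matrix; the sketch below shows that computation for a hypothetical three-objective comparison on Saaty's 1 to 9 scale.

```python
import numpy as np

# Hypothetical pairwise comparisons of three objectives on Saaty's 1-9 scale:
# A[i, j] = how much more important objective i is than objective j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                # normalized priority weights

# Consistency index: lambda_max close to n indicates consistent judgments.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
print("weights:", np.round(w, 3), " consistency index:", round(ci, 3))
```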

Relevance:

10.00%

Publisher:

Abstract:

The aim of the thesis is to assess the fishery of Baltic cod, herring and sprat by simulation over a 50-year period. We form a bioeconomic multispecies model for the species. We include species interactions in the model because the cod and sprat stocks, in particular, have significant effects on each other. We model the development of population dynamics, catches and profits of the fishery with current fishing mortalities, as well as with the optimal profit-maximizing fishing mortalities. Thus, we see how the fishery would develop with current mortalities, and how the fishery should be developed in order to yield maximal profits. The cod stock, especially, has been quite low recently, and by optimizing the fishing mortality it could recover. In addition, we assess what would happen to the fisheries of the species if environmental conditions more favourable for cod recruitment were to dominate in the Baltic Sea. The results may yield new information for fisheries management. According to the results, the fishery of Baltic cod, herring and sprat is not at the most profitable level. The fishing mortalities of each species should be lower in order to maximize the profits. By optimizing fishing mortality, the net present value over the simulation period would be almost three times higher. The lower fishing mortality of cod would result in a recovery of the cod stock. If the environmental conditions in the Baltic Sea improved, the cod stock would recover even without a decrease in fishing mortality. The increased cod stock would then restrict the herring and sprat stocks considerably, and harvesting these species would no longer be as profitable.
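
As a single-species caricature of the bioeconomic optimization (the thesis uses a multispecies model; the logistic growth dynamics and every number below are hypothetical), one can maximize the 50-year net present value over a constant fishing mortality:

```python
from scipy.optimize import minimize_scalar

r, K = 0.5, 1000.0        # intrinsic growth rate and carrying capacity (toy)
price, cost = 1.0, 200.0  # price per tonne and cost per unit fishing mortality
delta, T = 0.05, 50       # discount rate and 50-year horizon

def npv(F):
    """Discounted profit from a constant fishing mortality F over T years."""
    B, total = K / 2, 0.0
    for t in range(T):
        catch = F * B
        total += (price * catch - cost * F) / (1 + delta) ** t
        B = max(B + r * B * (1 - B / K) - catch, 0.0)   # logistic stock dynamics
    return total

res = minimize_scalar(lambda F: -npv(F), bounds=(0.0, 1.0), method="bounded")
print(f"NPV-maximizing F = {res.x:.3f}, NPV = {npv(res.x):.0f}")
```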

Relevance:

10.00%

Publisher:

Abstract:

Increasing network lifetime is important in wireless sensor/ad-hoc networks. In this paper, we are concerned with algorithms that increase the network lifetime, and the amount of data delivered during that lifetime, by deploying multiple mobile base stations in the sensor network field. Specifically, we allow multiple mobile base stations to be deployed along the periphery of the sensor network field and develop algorithms to dynamically choose the locations of these base stations so as to improve network lifetime. We propose energy-efficient, low-complexity algorithms to determine the locations of the base stations: i) the Top-K-max algorithm, ii) the maximizing the minimum residual energy (Max-Min-RE) algorithm, and iii) the minimizing the residual energy difference (MinDiff-RE) algorithm. We show that the proposed base station placement algorithms provide increased network lifetimes and amounts of data delivered during the network lifetime compared to the single base station scenario as well as the multiple static base stations scenario, and close to those obtained by solving an integer linear program (ILP) to determine the locations of the mobile base stations. We also investigate the lifetime gain when an energy-aware routing protocol is employed along with multiple base stations.
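
A minimal sketch of the Max-Min-RE idea, under a toy energy model in which a sensor's per-round cost grows with the squared distance to the base station: each round, place the base station at the periphery candidate that maximizes the minimum residual energy. The field geometry and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
sensors = rng.uniform(0, 100, (30, 2))          # sensor positions in a 100x100 field
energy = np.full(30, 1.0)                       # residual energy per sensor
side = np.linspace(0, 100, 11)
candidates = np.array([(x, y) for x in side for y in side
                       if x in (0.0, 100.0) or y in (0.0, 100.0)])  # periphery grid

def round_cost(bs):
    """Toy energy model: cost of one reporting round grows as distance^2."""
    d2 = ((sensors - bs) ** 2).sum(axis=1)
    return 1e-4 * d2

for rnd in range(5):                            # Max-Min-RE: greedy per round
    best = max(candidates, key=lambda bs: (energy - round_cost(bs)).min())
    energy -= round_cost(best)
    print(f"round {rnd}: base station at {best}, min residual {energy.min():.3f}")
```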

Relevance:

10.00%

Publisher:

Abstract:

In uplink orthogonal frequency division multiple access (OFDMA) systems, multiuser interference (MUI) occurs at the receiver due to the different carrier frequency offsets (CFOs) of different users. In this paper, we present a multistage linear parallel interference cancellation (LPIC) approach to mitigate the effect of this MUI in uplink OFDMA. The proposed scheme first performs CFO compensation (in the time domain), followed by K DFT operations (where K is the number of users) and multistage LPIC on these DFT outputs. We scale the MUI estimates by weights before cancellation and optimize these weights by maximizing the signal-to-interference ratio (SIR) at the output of the different stages of the LPIC. We derive closed-form expressions for these optimum weights. The proposed LPIC scheme is shown to effectively cancel the MUI caused by the other users' CFOs in uplink OFDMA.
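
The paper's closed-form weights are derived analytically; the sketch below only illustrates one weighted parallel interference cancellation stage, picking a common scaling weight by a grid search over the empirical post-cancellation SIR (the mixing model and all parameters are toy assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 4, 1000                                      # users, symbols
x = rng.choice([-1.0, 1.0], (K, N))                 # BPSK symbols per user
H = np.eye(K) + 0.25 * rng.standard_normal((K, K))  # toy mixing: leakage acts as MUI
np.fill_diagonal(H, 1.0)
y = H @ x + 0.1 * rng.standard_normal((K, N))       # received outputs with MUI

def sir_after_pic(w):
    """One weighted PIC stage: subtract w times the estimated MUI."""
    x_hat = np.sign(y)                              # initial hard decisions
    mui = (H - np.eye(K)) @ x_hat                   # estimated multiuser interference
    z = y - w * mui
    amp = np.mean(z * x)                            # empirical signal amplitude
    return amp ** 2 / np.var(z - amp * x)           # empirical SIR

weights = np.linspace(0.0, 1.5, 31)
best = max(weights, key=sir_after_pic)
print(f"best cancellation weight ~ {best:.2f}")
```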