738 results for Maximizing


Relevance: 10.00%

Abstract:

As multiprocessor system size scales upward, two important aspects of multiprocessor systems will generally get worse rather than better: (1) interprocessor communication latency will increase and (2) the probability that some component in the system will fail will increase. These problems can prevent us from realizing the potential benefits of large-scale multiprocessing. In this report we consider the problem of designing networks which simultaneously minimize communication latency and maximize fault tolerance. Using a synergy of techniques including connection topologies, routing protocols, signalling techniques, and packaging technologies, we assemble integrated, system-level solutions to this network design problem.

Relevance: 10.00%

Abstract:

Presenting a complete guide to the planning, design and implementation of solar PV systems for off-grid applications, this book features analysis based on the authors' own laboratory testing as well as their experiences in the field. Incorporating the latest developments in smart digital and control technologies into the design criteria of the PV system, the book also focuses on how to integrate newer smart design approaches and techniques for improving the efficiency, reliability and flexibility of the entire system. The design and implementation of India's first-of-its-kind Smart Mini-Grid system (SMG) at the TERI premises, which involves the integration of multiple renewable energy resources (including solar PV) through smart controllers that manage the load intelligently and effectively, is presented as a key case study. Maximizing reader insights into the performance of different components of solar PV systems under different operating conditions, the book will be of interest to graduate students, researchers, PV designers, planners, and practitioners working in the area of solar PV design, implementation and assessment.

Relevance: 10.00%

Abstract:

The influence of process variables (pea starch, guar gum and glycerol) on the viscosity (V), solubility (SOL), moisture content (MC), transparency (TR), Hunter parameters (L, a, and b), total color difference (ΔE), yellowness index (YI), and whiteness index (WI) of pea starch based edible films was studied using a three-factor, three-level Box–Behnken response surface design. The individual linear effects of pea starch, guar gum and glycerol were significant (p < 0.05) for all the responses. However, the a value was significantly (p < 0.05) affected only by pea starch and guar gum, in positive and negative linear terms, respectively. The starch × glycerol interaction also had a significant effect (p < 0.05) on the TR of the edible films, and the starch × guar gum interaction had a significant impact on the b and YI values. The quadratic regression coefficient of pea starch showed a significant effect (p < 0.05) on V, MC, L, b, ΔE, YI, and WI; that of glycerol on ΔE and WI; and that of guar gum on ΔE and SOL. The results were analyzed by Pareto analysis of variance (ANOVA), and the second-order polynomial models developed from the experimental design fit the corresponding experimental data reliably and satisfactorily, with high coefficients of determination (R² > 0.93). Three-dimensional response surface plots were established to investigate the relationship between the process variables and the responses. The optimized conditions, with the goal of maximizing TR and minimizing SOL, YI and MC, were 2.5 g pea starch, 25% glycerol and 0.3 g guar gum. The results reveal that pea starch/guar gum edible films with appropriate physical and optical characteristics can be produced effectively and applied successfully in the food packaging industry.
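
For readers who want to see the fitting step concretely, the following is a minimal sketch (in Python, not the authors' code) of fitting a second-order polynomial model to a three-factor, three-level Box–Behnken design and reporting R²; the factor coding and the synthetic response values are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' code): fit a second-order polynomial
# response surface to a three-factor, three-level Box-Behnken design.
# Factor coding and the synthetic response below are illustrative assumptions.
import numpy as np

# Coded levels (-1, 0, +1) for starch, glycerol and guar gum: 12 edge points
# plus 3 centre points, the standard 15-run Box-Behnken layout.
X = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0],
], dtype=float)

rng = np.random.default_rng(0)
# Synthetic response standing in for a measured property such as transparency.
y = (5 + 2 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2]
     + 0.8 * X[:, 0] ** 2 + 0.3 * X[:, 0] * X[:, 1]
     + rng.normal(scale=0.1, size=len(X)))

def design_matrix(X):
    """Second-order model terms: intercept, linear, quadratic and interaction columns."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 ** 2, x2 ** 2, x3 ** 2,
                            x1 * x2, x1 * x3, x2 * x3])

A = design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(coef, 3))
print("R^2:", round(r2, 3))
```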

Relevance: 10.00%

Abstract:

The Google AdSense Program is a successful internet advertisement program in which Google places contextual adverts on third-party websites and shares the resulting revenue with each publisher. Advertisers have budgets and bid on ad slots, while publishers set reserve prices for the ad slots on their websites. Following previous modelling efforts, we model the program as a two-sided market with advertisers on one side and publishers on the other. We show a reduction from the Generalised Assignment Problem (GAP) to the problem of computing the revenue-maximising allocation and pricing of publisher slots under a first-price auction. GAP is APX-hard, but a (1-1/e) approximation is known. We compute truthful and revenue-maximizing prices and allocations of ad slots to advertisers under a second-price auction. The auctioneer's revenue is within a (1-1/e) factor of the second-price optimal.
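
As an illustration of the pricing rule that the second-price analysis builds on, here is a toy single-slot second-price auction with a publisher reserve price; it ignores the advertiser budgets and multiple slots that the paper's mechanism handles, and the advertiser names and bid values are hypothetical.

```python
# Illustrative sketch only: a single-slot second-price auction with a
# publisher reserve price. The paper's mechanism handles advertiser budgets
# and many slots; this toy shows just the pricing rule it builds on.
def second_price_winner(bids, reserve):
    """bids: dict advertiser -> bid. Returns (winner, price) or (None, None)."""
    eligible = {a: b for a, b in bids.items() if b >= reserve}
    if not eligible:
        return None, None
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    # The winner pays the larger of the reserve and the second-highest eligible bid.
    price = max(reserve, ranked[1][1]) if len(ranked) > 1 else reserve
    return winner, price

print(second_price_winner({"adv_a": 1.20, "adv_b": 0.90, "adv_c": 0.40}, reserve=0.50))
# -> ('adv_a', 0.9)
```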

Relevance: 10.00%

Abstract:

TCP performance degrades when end-to-end connections extend over wireless links, which are characterized by high bit error rates and intermittent connectivity. Such link characteristics can significantly degrade TCP performance, as the TCP sender assumes wireless losses to be congestion losses, resulting in unnecessary congestion control actions. Link errors can be reduced by increasing transmission power, code redundancy (FEC) or the number of retransmissions (ARQ). However, increasing power costs resources, increasing code redundancy reduces the available channel bandwidth, and increasing persistency increases end-to-end delay. The paper proposes a TCP optimization through proper tuning of power management, FEC and ARQ in wireless environments (WLAN and WWAN). In particular, we conduct analytical and numerical analyses of "wireless-aware" TCP performance under different settings. Our results show that increasing power, redundancy and/or retransmission levels always improves TCP performance by reducing link-layer losses. However, such improvements come at a cost, and arbitrary improvement cannot be realized without paying a lot in return. It is therefore important to consider some kind of net utility function that should be optimized, thus maximizing throughput at the least possible cost.
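
The "net utility" idea can be sketched as a small grid search over power, FEC redundancy and ARQ persistence, scoring each setting by throughput minus cost; the loss, throughput and cost models below are illustrative stand-ins, not the paper's analytical model.

```python
# Toy sketch of the "net utility" idea: search over power, FEC redundancy and
# ARQ persistence for the setting that maximizes throughput minus cost.
# All model functions and constants below are illustrative assumptions.
import itertools

def residual_loss(power, fec, arq, base_ber=1e-3):
    per = 1 - (1 - base_ber / power) ** 1000        # packet error rate falls with power
    per *= (1 - fec)                                # FEC recovers a fraction of errors
    return per ** (arq + 1)                         # each retransmission retries the packet

def throughput(power, fec, arq, capacity=1.0):
    goodput = capacity * (1 - fec)                  # redundancy eats channel bandwidth
    return goodput * (1 - residual_loss(power, fec, arq))

def cost(power, fec, arq):
    return 0.2 * power + 0.5 * fec + 0.05 * arq     # energy, bandwidth and delay penalties

best = max(itertools.product([1, 2, 4], [0.0, 0.1, 0.3], [0, 1, 2]),
           key=lambda s: throughput(*s) - cost(*s))
print("best (power, fec, arq):", best,
      "net utility:", round(throughput(*best) - cost(*best), 3))
```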

Relevance: 10.00%

Abstract:

This paper presents a new approach to window-constrained scheduling, suitable for multimedia and weakly-hard real-time systems. We originally developed an algorithm, called Dynamic Window-Constrained Scheduling (DWCS), that attempts to guarantee that no more than x out of y deadlines are missed for real-time jobs such as periodic CPU tasks or delay-constrained packet streams. While DWCS is capable of generating a feasible window-constrained schedule that utilizes 100% of resources, it requires all jobs to have the same request periods (or intervals between successive service requests). We describe a new algorithm, called Virtual Deadline Scheduling (VDS), that provides window-constrained service guarantees to jobs with potentially different request periods, while still maximizing resource utilization. VDS attempts to service m out of k job instances by their virtual deadlines, which may be some finite time after the corresponding real-time deadlines. Notwithstanding, VDS is capable of outperforming DWCS and similar algorithms when servicing jobs with potentially different request periods. Additionally, VDS is able to limit the extent to which a fraction of all job instances are serviced late. Results from simulations show that VDS can provide better window-constrained service guarantees than other related algorithms, while still having as good or better delay bounds for all scheduled jobs. Finally, an implementation of VDS in the Linux kernel compares favorably against DWCS for a range of scheduling loads.
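
A heavily simplified sketch of the virtual-deadline idea is shown below: each stream carries an m-out-of-k obligation per window, and the scheduler services the stream with the earliest virtual deadline, which drifts past the real deadline as the window's obligation is satisfied. The deadline formula here is an assumption made for illustration, not the exact VDS rule.

```python
# Simplified sketch of virtual-deadline selection (not the full VDS algorithm):
# pick the pending stream with the earliest virtual deadline, stretched in
# proportion to how much of the current m-out-of-k window is already met.
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    period: float       # request period
    m: int              # instances that must be serviced ...
    k: int              # ... out of every k
    served: int = 0     # instances serviced in the current window
    released: int = 0   # instances released in the current window

    def virtual_deadline(self, now: float) -> float:
        remaining_need = max(self.m - self.served, 1)
        remaining_slots = max(self.k - self.released, 1)
        # The less urgent the residual m-out-of-k obligation, the further the
        # virtual deadline may drift past the real deadline (now + period).
        return now + self.period * remaining_slots / remaining_need

def pick_next(streams, now):
    return min(streams, key=lambda s: s.virtual_deadline(now))

streams = [Stream("video", period=10, m=3, k=4), Stream("audio", period=5, m=1, k=2)]
print(pick_next(streams, now=0.0).name)   # -> audio
```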

Relevance: 10.00%

Abstract:

We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
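
To make the game-theoretic setting concrete, here is a toy best-response loop for one simplified collocation-game variant in which identical fixed-capacity, fixed-cost resources are shared and each player pays a cost share proportional to its demand; the capacities, costs and demands are hypothetical, and the iteration cap reflects the fact that collocation games in general need not converge.

```python
# Toy best-response sketch of a simplified collocation-game variant
# (illustrative only): each player places its demand on one fixed-capacity,
# fixed-cost resource and pays a share of that resource's cost proportional
# to its demand. Players move while someone can lower their own cost.
CAP, COST = 10.0, 6.0                                 # identical resources, for simplicity
demands = {"p1": 4.0, "p2": 3.0, "p3": 3.0, "p4": 5.0}
placement = {p: i for i, p in enumerate(demands)}     # start: one player per resource
n_resources = len(demands)

def load(res, placement):
    return sum(demands[p] for p, r in placement.items() if r == res)

def my_cost(p, res, placement):
    others = load(res, {q: r for q, r in placement.items() if q != p})
    if others + demands[p] > CAP:
        return float("inf")                           # infeasible move: capacity exceeded
    return COST * demands[p] / (others + demands[p])  # proportional cost share

changed, rounds = True, 0
while changed and rounds < 100:                       # cap: convergence is not guaranteed in general
    changed, rounds = False, rounds + 1
    for p in demands:
        best = min(range(n_resources), key=lambda r: my_cost(p, r, placement))
        if my_cost(p, best, placement) < my_cost(p, placement[p], placement) - 1e-9:
            placement[p], changed = best, True
print(placement)
```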

Relevance: 10.00%

Abstract:

Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART and supervised fuzzy ARTMAP synthesize fuzzy logic and ART networks by exploiting the formal similarity between the computations of fuzzy subsethood and the dynamics of ART category choice, search, and learning. Fuzzy ART self-organizes stable recognition categories in response to arbitrary sequences of analog or binary input patterns. It generalizes the binary ART 1 model, replacing the set-theoretic intersection (∩) with the fuzzy intersection (∧), or component-wise minimum. A normalization procedure called complement coding leads to a symmetric theory in which the fuzzy intersection and the fuzzy union (∨), or component-wise maximum, play complementary roles. Complement coding preserves individual feature amplitudes while normalizing the input vector, and prevents a potential category proliferation problem. Adaptive weights start equal to one and can only decrease in time. A geometric interpretation of fuzzy ART represents each category as a box that increases in size as weights decrease. A matching criterion controls search, determining how close an input and a learned representation must be for a category to accept the input as a new exemplar. A vigilance parameter (ρ) sets the matching criterion and determines how finely or coarsely an ART system will partition inputs. High vigilance creates fine categories, represented by small boxes. Learning stops when boxes cover the input space. With fast learning, fixed vigilance, and an arbitrary input set, learning stabilizes after just one presentation of each input. A fast-commit slow-recode option allows rapid learning of rare events yet buffers memories against recoding by noisy inputs. Fuzzy ARTMAP unites two fuzzy ART networks to solve supervised learning and prediction problems. A Minimax Learning Rule controls ARTMAP category structure, conjointly minimizing predictive error and maximizing code compression. Low vigilance maximizes compression but may therefore cause very different inputs to make the same prediction. When this coarse grouping strategy causes a predictive error, an internal match tracking control process increases vigilance just enough to correct the error. ARTMAP automatically constructs a minimal number of recognition categories, or "hidden units," to meet accuracy criteria. An ARTMAP voting strategy improves prediction by training the system several times using different orderings of the input set. Voting assigns confidence estimates to competing predictions given small, noisy, or incomplete training sets. ARPA benchmark simulations illustrate fuzzy ARTMAP dynamics. The chapter also compares fuzzy ARTMAP to Salzberg's Nested Generalized Exemplar (NGE) and to Simpson's Fuzzy Min-Max Classifier (FMMC), and concludes with a summary of ART and ARTMAP applications.
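
The core fuzzy ART operations described above (complement coding, the category choice function, the vigilance test, and fast learning with the component-wise minimum) can be sketched compactly as follows; the parameter values and inputs are illustrative, and supervised ARTMAP, match tracking and slow recoding are omitted.

```python
# Compact illustrative sketch of unsupervised fuzzy ART: complement coding,
# the choice function T_j = |I ^ w_j| / (alpha + |w_j|), the vigilance match
# test |I ^ w_j| / |I| >= rho, and fast learning w_j <- I ^ w_j.
import numpy as np

def complement_code(a):
    return np.concatenate([a, 1.0 - a])          # preserves amplitudes, normalizes |I|

def fuzzy_art(inputs, rho=0.75, alpha=0.001):
    weights, labels = [], []                     # one weight vector per committed category
    for a in inputs:
        I = complement_code(np.asarray(a, dtype=float))
        # Search categories in order of the choice function T_j.
        order = sorted(range(len(weights)),
                       key=lambda j: -np.minimum(I, weights[j]).sum()
                                     / (alpha + weights[j].sum()))
        for j in order:
            match = np.minimum(I, weights[j]).sum() / I.sum()
            if match >= rho:                             # vigilance passed: resonance
                weights[j] = np.minimum(I, weights[j])   # fast learning
                labels.append(j)
                break
        else:                                            # no category matched: commit a new one
            weights.append(I.copy())
            labels.append(len(weights) - 1)
    return labels, weights

labels, _ = fuzzy_art([[0.1, 0.2], [0.12, 0.22], [0.9, 0.8]], rho=0.8)
print(labels)        # [0, 0, 1]: similar inputs share a category
```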

Relevance: 10.00%

Abstract:

Background: Accommodating Interruptions is a theory that emerged in the context of young people who have asthma. A background to the prevalence and management of asthma in Ireland is given to situate the theory. Ireland has the fourth highest incidence of asthma in the world, with almost one in five Irish young people having asthma. Although national and international asthma management guidelines exist, it is accepted that symptom control of asthma among young people is poor. Aim: The aim of this research is to investigate the lives of young people who have asthma, to allow for a deeper understanding of the issues affecting them. Methods: This research was undertaken using a Classic Grounded Theory approach, a systematic approach that allows concepts to emerge from the data in generating a theory that explains the behaviour through which participants resolve their main concern. The data were collected through in-depth interviews with young people aged 11-16 years who had had asthma for over one year. Data were also collected from participant diaries. Constant comparative analysis, theoretical coding and memo writing were used to develop the theory. Results: The theory explains how young people resolve their main concern of being restricted, by maximizing their participation and inclusion in activities, events and relationships in spite of their asthma. They achieve this by accommodating interruptions in their lives, thereby minimizing the effects of asthma on their everyday lives. Conclusion: The theory of accommodating interruptions explains young people's asthma management behaviours in a new way. It allows us to understand how and why young people behave the way they do in order to minimise the effect of asthma on their lives. The theory adds to the body of knowledge on young people with asthma and challenges some viewpoints regarding their behaviours.

Relevance: 10.00%

Abstract:

In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and uses such preferences to infer other preferences. The induced preference relation is then used to eliminate the dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (built on a measure of the distance between a point and a convex cone); and the third uses matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
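
One small building block from the thesis, pruning a set of multi-objective utility vectors down to the Pareto-maximal (undominated) ones, can be sketched as below; dominance induced by imprecise trade-offs and the ϵ-covering approximations are not shown.

```python
# Small sketch of Pareto-maximal filtering over utility vectors,
# assuming higher values are better on every objective.
def dominates(u, v):
    """u dominates v if u is >= v everywhere and strictly better somewhere."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_maximal(vectors):
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors if v is not u)]

frontier = pareto_maximal([(3, 1), (2, 2), (1, 3), (2, 1), (1, 1)])
print(frontier)      # [(3, 1), (2, 2), (1, 3)]
```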

Relevance: 10.00%

Abstract:

When subjects must choose repeatedly between two or more alternatives, each of which dispenses reward on a probabilistic basis (two-armed bandit), their behavior is guided by the two possible outcomes, reward and nonreward. The simplest stochastic choice rule is that the probability of choosing an alternative increases following a reward and decreases following a nonreward (reward following). We show experimentally and theoretically that animal subjects behave as if the absolute magnitudes of the changes in choice probability caused by reward and nonreward do not depend on the response which produced the reward or nonreward (source independence), and that the effects of reward and nonreward are in constant ratio under fixed conditions (effect-ratio invariance), properties that fit the definition of satisficing. Our experimental results are either not predicted by, or are inconsistent with, other theories of free-operant choice such as Bush-Mosteller, molar maximization, momentary maximizing, and melioration (matching).
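
A toy simulation of the reward-following rule described above (choice probability nudged up after a reward and down after a nonreward) might look like the following; the step sizes and reward probabilities are arbitrary illustrations, not the fitted values from the experiments.

```python
# Toy simulation of the "reward following" choice rule: the probability of
# choosing an alternative increases after a reward and decreases after a
# nonreward. All numbers below are illustrative assumptions.
import random

def simulate(p_reward=(0.7, 0.3), up=0.05, down=0.02, trials=5000, seed=1):
    random.seed(seed)
    p_choose_A = 0.5
    for _ in range(trials):
        choice = 0 if random.random() < p_choose_A else 1
        rewarded = random.random() < p_reward[choice]
        # Nudge the probability of the chosen alternative up on reward,
        # down on nonreward; the other alternative gets the complement.
        delta = up if rewarded else -down
        if choice == 0:
            p_choose_A = min(1.0, max(0.0, p_choose_A + delta))
        else:
            p_choose_A = min(1.0, max(0.0, p_choose_A - delta))
    return p_choose_A

print(round(simulate(), 3))    # tends toward the richer alternative
```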

Relevance: 10.00%

Abstract:

It is common for a retailer to sell products from competing manufacturers. How then should the firms manage their contract negotiations? The supply chain coordination literature focuses either on a single manufacturer selling to a single retailer or one manufacturer selling to many (possibly competing) retailers. We find that some key conclusions from those market structures do not apply in our setting, where multiple manufacturers sell through a single retailer. We allow the manufacturers to compete for the retailer's business using one of three types of contracts: a wholesale-price contract, a quantity-discount contract, or a two-part tariff. It is well known that the latter two, more sophisticated contracts enable the manufacturer to coordinate the supply chain, thereby maximizing the profits available to the firms. More importantly, they allow the manufacturer to extract rents from the retailer, in theory allowing the manufacturer to leave the retailer with only her reservation profit. However, we show that in our market structure these two sophisticated contracts force the manufacturers to compete more aggressively relative to when they only offer wholesale-price contracts, and this may leave them worse off and the retailer substantially better off. In other words, although in a serial supply chain a retailer may have just cause to fear quantity discounts and two-part tariffs, a retailer may actually prefer those contracts when offered by competing manufacturers. We conclude that the properties a contractual form exhibits in a one-manufacturer supply chain may not carry over to the realistic setting in which multiple manufacturers must compete to sell their goods through the same retailer. © 2010 INFORMS.

Relevance: 10.00%

Abstract:

In this paper, we propose a framework for robust optimization that relaxes the standard notion of robustness by allowing the decision maker to vary the protection level in a smooth way across the uncertainty set. We apply our approach to the problem of maximizing the expected value of a payoff function when the underlying distribution is ambiguous and therefore robustness is relevant. Our primary objective is to develop this framework and relate it to the standard notion of robustness, which deals with only a single guarantee across one uncertainty set. First, we show that our approach connects closely to the theory of convex risk measures. We show that the complexity of this approach is equivalent to that of solving a small number of standard robust problems. We then investigate the conservatism benefits and downside probability guarantees implied by this approach and compare them to the standard robust approach. Finally, we illustrate the methodology on an asset allocation example consisting of historical market data over a 25-year investment horizon and find, in every case we explore, that relaxing standard robustness with soft robustness yields a seemingly favorable risk-return trade-off: each case results in a higher out-of-sample expected return for a relatively minor degradation of out-of-sample downside performance. © 2010 INFORMS.

Relevance: 10.00%

Abstract:

Insecticide-treated nets (ITNs) are one of the most important and cost-effective tools for malaria control. Maximizing individual and community benefit from ITNs requires high population-based coverage. Several mechanisms are used to distribute ITNs, including health facility-based targeted distribution to high-risk groups; community-based mass distribution; social marketing with or without private sector subsidies; and integrating ITN delivery with other public health interventions. The objective of this analysis is to describe bednet coverage in a district in western Kenya where the primary mechanism for distribution is to pregnant women and infants who attend antenatal and immunization clinics. We use data from a population-based census to examine the extent of, and factors correlated with, ownership of bednets. We use both multivariable logistic regression and spatial techniques to explore the relationship between household bednet ownership and sociodemographic and geographic variables. We show that only 21% of households own any bednets, far lower than the national average, and that ownership is not significantly higher amongst pregnant women attending antenatal clinic. We also show that coverage is spatially heterogeneous with less than 2% of the population residing in zones with adequate coverage to experience indirect effects of ITN protection.
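
The kind of multivariable logistic regression referred to above could be sketched as follows; the covariate names and the synthetic data are hypothetical and are not the study's census variables.

```python
# Hypothetical sketch of a multivariable logistic regression relating household
# bednet ownership to covariates; the variable names and synthetic data are
# illustrative stand-ins, not the study's census data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
distance_to_clinic_km = rng.uniform(0, 15, n)
pregnant_woman_in_house = rng.integers(0, 2, n)
wealth_index = rng.normal(0, 1, n)

# Synthetic "truth": ownership more likely near clinics and in wealthier homes.
logit = (-1.0 - 0.15 * distance_to_clinic_km
         + 0.4 * pregnant_woman_in_house + 0.6 * wealth_index)
owns_net = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([distance_to_clinic_km, pregnant_woman_in_house, wealth_index])
model = LogisticRegression().fit(X, owns_net)
print("odds ratios:", np.round(np.exp(model.coef_[0]), 2))
```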

Relevance: 10.00%

Abstract:

Recent efforts to endogenize technological change in climate policy models demonstrate the importance of accounting for the opportunity cost of climate R&D investments. Because the social returns to R&D investments are typically higher than the social returns to other types of investment, any new climate mitigation R&D that comes at the expense of other R&D investment may dampen the overall gains from induced technological change. Unfortunately, there has been little empirical work to guide modelers as to the potential magnitude of such crowding out effects. This paper considers both the private and social opportunity costs of climate R&D. Addressing private costs, we ask whether an increase in climate R&D represents new R&D spending, or whether some (or all) of the additional climate R&D comes at the expense of other R&D. Addressing social costs, we use patent citations to compare the social value of alternative energy research to other types of R&D that may be crowded out. Beginning at the industry level, we find no evidence of crowding out across sectors; that is, increases in energy R&D do not draw R&D resources away from sectors that do not perform R&D. Given this, we proceed with a detailed look at alternative energy R&D. Linking patent data and financial data by firm, we ask whether an increase in alternative energy patents leads to a decrease in other types of patenting activity. While we find that increases in alternative energy patents do result in fewer patents of other types, the evidence suggests that this is due to profit-maximizing changes in research effort, rather than financial constraints that limit the total amount of R&D possible. Finally, we use patent citation data to compare the social value of alternative energy patents to other patents by these firms. Alternative energy patents are cited more frequently, and by a wider range of other technologies, than other patents by these firms, suggesting that their social value is higher. © 2011 Elsevier B.V.