973 results for minimum cost
Abstract:
All-optical label swapping (AOLS) is a key technology for implementing all-optical packet switching (AOPS) nodes in the future optical Internet. The capital expenditure of deploying AOLS increases with the size of the label space (i.e. the number of labels used), since a special optical device is needed for each label recognized at every node. Label space sizes are affected by the way demands are routed: shortest-path routing leads to fewer labels but high link utilization, while minimum interference routing leads to the opposite. This paper studies all-optical label stacking (AOLStack), an extension of the AOLS architecture that aims to reduce label spaces while easing the compromise with link utilization. An integer linear program is proposed to analyze how AOLStack softens the aforementioned trade-off, and a heuristic for finding good solutions in polynomial time is proposed as well. Simulation results show that AOLStack either a) reduces the label spaces with a low increase in link utilization or, equivalently, b) makes better use of the residual bandwidth to reduce the number of labels even further.
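To make the routing trade-off concrete, the following toy sketch (a back-of-envelope illustration, not the paper's ILP or the AOLStack mechanism) routes a few invented demands over a small made-up topology with plain shortest paths, then counts the labels each node would have to recognize under stack-free label swapping, together with the per-link load:

```python
from collections import deque, defaultdict

# Hypothetical toy topology and demand set, invented for illustration.
edges = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}

def shortest_path(src, dst):
    """Breadth-first search returning the hop list from src to dst."""
    prev, seen, q = {}, {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in edges[u]:
            if v not in seen:
                seen.add(v)
                prev[v] = u
                q.append(v)
    return None

demands = [("A", "E"), ("B", "E"), ("C", "E"), ("A", "D")]
labels_per_node = defaultdict(set)  # labels each node must recognize
link_load = defaultdict(int)        # demands per link (utilization proxy)

for i, (s, t) in enumerate(demands):
    path = shortest_path(s, t)
    for u, v in zip(path, path[1:]):
        # Without stacking, every demand needs its own label on each hop,
        # so each transit node recognizes one label per traversing demand.
        labels_per_node[u].add(i)
        link_load[(u, v)] += 1

print({n: len(ls) for n, ls in labels_per_node.items()})
print(dict(link_load))
```

Routing all demands over shortest paths loads the shared links heavily; label stacking would let demands sharing a path segment ride one outer label, shrinking the per-node label sets at the cost of the stacking overhead.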
Abstract:
Background: Medication errors in general practice are an important source of potentially preventable morbidity and mortality. Building on previous descriptive, qualitative and pilot work, we sought to investigate the effectiveness, cost-effectiveness and likely generalisability of a complex pharmacist-led IT-based intervention aiming to improve prescribing safety in general practice. Objectives: We sought to:
• Test the hypothesis that a pharmacist-led IT-based complex intervention using educational outreach and practical support is more effective than simple feedback in reducing the proportion of patients at risk from errors in prescribing and medicines management in general practice.
• Conduct an economic evaluation of the cost per error avoided, from the perspective of the National Health Service (NHS).
• Analyse data recorded by pharmacists, summarising the proportions of patients judged to be at clinical risk, the actions recommended by pharmacists, and actions completed in the practices.
• Explore the views and experiences of healthcare professionals and NHS managers concerning the intervention; investigate potential explanations for the observed effects, and inform decisions on the future roll-out of the pharmacist-led intervention.
• Examine secular trends in the outcome measures of interest, allowing for informal comparison between trial practices and practices that did not participate in the trial but contributed to the QRESEARCH database.
Methods: Two-arm cluster randomised controlled trial of 72 English general practices with embedded economic analysis and longitudinal descriptive and qualitative analysis. Informal comparison of the trial findings with a national descriptive study investigating secular trends, undertaken using data from practices contributing to the QRESEARCH database. The main outcomes of interest were prescribing errors and medication monitoring errors at six and 12 months following the intervention. Results: Participants in the pharmacist intervention arm practices were significantly less likely to have been prescribed a non-selective NSAID without a proton pump inhibitor (PPI) if they had a history of peptic ulcer (OR 0.58, 95% CI 0.38, 0.89), to have been prescribed a beta-blocker if they had asthma (OR 0.73, 95% CI 0.58, 0.91) or (in those aged 75 years and older) to have been prescribed an ACE inhibitor or diuretic without a measurement of urea and electrolytes in the last 15 months (OR 0.51, 95% CI 0.34, 0.78). The economic analysis suggests that the PINCER pharmacist intervention has a 95% probability of being cost-effective if the decision-maker's ceiling willingness to pay reaches £75 (6 months) or £85 (12 months) per error avoided. The intervention addressed an issue that was important to professionals and their teams and was delivered in a way that was acceptable to practices, with minimum disruption of normal work processes. Comparison of the trial findings with changes seen in QRESEARCH practices indicated that any reductions achieved in the simple feedback arm were likely, in the main, to have been related to secular trends rather than the intervention. Conclusions: Compared with simple feedback, the pharmacist-led intervention resulted in reductions in the proportions of patients at risk of prescribing and monitoring errors for the primary outcome measures and the composite secondary outcome measures at six months and (with the exception of the NSAID/peptic ulcer outcome measure) 12 months post-intervention. The intervention is acceptable to pharmacists and practices, and is likely to be seen as cost-effective by decision makers.
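The headline economic result above is of the kind produced by a cost-effectiveness acceptability curve (CEAC): the probability, over the sampled uncertainty, that the intervention's net monetary benefit is positive at a given ceiling willingness to pay per error avoided. A minimal sketch of that standard computation follows, on invented bootstrap replicates rather than the trial's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented bootstrap replicates of incremental cost (GBP) and incremental
# errors avoided per patient; the real PINCER trial estimates differ.
inc_cost = rng.normal(60.0, 20.0, size=5000)
inc_effect = rng.normal(1.0, 0.4, size=5000)

def prob_cost_effective(wtp):
    """P(net monetary benefit > 0) at willingness to pay `wtp` per error avoided."""
    return float(np.mean(wtp * inc_effect - inc_cost > 0))

for wtp in (25, 50, 75, 100):
    print(f"WTP £{wtp}: P(cost-effective) = {prob_cost_effective(wtp):.2f}")
```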
Abstract:
The Mitigation Options for Phosphorus and Sediment (MOPS) project investigated the effectiveness of within-field control measures (tramline management, straw residue management, type and direction of cultivation, and vegetative buffers) in mitigating sediment and phosphorus loss from winter-sown combinable cereal crops, using three case study sites. To determine the cost of the approaches, simple financial spreadsheet models were constructed at both farm and regional levels. Taking crop areas into account, crop rotation margins per hectare were calculated to reflect the costs of crop establishment, fertiliser and agro-chemical applications, harvesting, and the associated labour and machinery costs. The variable and operating costs associated with each mitigation option were then incorporated to demonstrate the impact on the relevant crop enterprise and crop rotation margins. These costs were then compared with runoff, sediment and phosphorus loss data obtained from monitoring hillslope-length-scale field plots. Each of the mitigation options explored in this study had potential for reducing sediment and phosphorus losses from arable land under cereal crops. Sediment losses were reduced by between 9 kg ha−1 and as much as 4780 kg ha−1, with corresponding reductions in phosphorus loss of between 0.03 kg ha−1 and 2.89 kg ha−1. In percentage terms, phosphorus reductions were between 9% and 99%. Impacts on crop rotation margins also varied: minimum tillage resulted in cost savings (up to £50 ha−1), whilst other options showed increased costs (up to £19 ha−1 for straw residue incorporation). Overall, the results indicate that each of the options has potential for on-farm implementation. However, tramline management appeared to have the greatest potential for reducing runoff, sediment and phosphorus losses from arable land (reductions of between 69% and 99%) and is likely to be considered cost-effective, with only a small additional cost of £2–4 ha−1, although further work is needed to evaluate alternative tramline management methods. Tramline management is also the only option not incorporated within current policy mechanisms associated with reducing soil erosion and phosphorus loss, and in light of its potential it is an approach that should be encouraged once further evidence is available.
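At farm level, the spreadsheet logic described above reduces to adjusting a crop rotation margin by each option's extra cost and weighing that cost against the phosphorus saved. A minimal sketch, with per-hectare figures loosely echoing the ranges quoted in the abstract but otherwise hypothetical:

```python
# Hypothetical per-hectare figures; the MOPS spreadsheets used real farm data.
base_margin = 250.0      # crop rotation margin, GBP/ha
p_loss_baseline = 2.0    # phosphorus loss without mitigation, kg/ha

options = {
    # name: (extra cost GBP/ha, negative = saving; P reduction fraction)
    "minimum tillage":      (-50.0, 0.40),
    "straw incorporation":   (19.0, 0.60),
    "tramline management":    (3.0, 0.90),
}

for name, (extra_cost, p_reduction) in options.items():
    margin = base_margin - extra_cost
    p_saved = p_loss_baseline * p_reduction
    cost_per_kg = extra_cost / p_saved if extra_cost > 0 else 0.0
    print(f"{name:22s} margin £{margin:6.1f}/ha, "
          f"P saved {p_saved:.2f} kg/ha, cost £{cost_per_kg:5.2f}/kg P")
```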
Abstract:
We develop a new sparse kernel density estimator using a forward constrained regression framework, within which the nonnegativity and summing-to-unity constraints on the mixing weights can easily be satisfied. Our main contribution is to derive a recursive algorithm that selects significant kernels one at a time, based on the minimum integrated square error (MISE) criterion for both the selection of kernels and the estimation of mixing weights. The proposed approach is simple to implement and its computational cost is very low. Specifically, the complexity of our algorithm is of the order of the number of training data points N, much lower than the order-N² complexity of the best existing sparse kernel density estimators. Numerical examples demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy comparable to the classical Parzen window estimate and other existing sparse kernel density estimators.
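The flavour of MISE-driven forward selection can be sketched as follows: Gaussian kernels are added greedily, each candidate scored by the closed-form integrated squared error between the sparse mixture and the full Parzen estimate (the cross-term of two Gaussians of width h integrates to a Gaussian of variance 2h²). For simplicity the sketch uses uniform mixing weights, which trivially satisfy the nonnegativity and sum-to-unity constraints, and a naive O(N²)-per-step search, so it does not reproduce the paper's recursive order-N algorithm or its weight estimation:

```python
import numpy as np

rng = np.random.default_rng(1)
# 1-D training sample from a two-component Gaussian mixture.
x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(1, 1.0, 100)])
N, h = len(x), 0.3                    # kernel width: assumed, not tuned

# G[i, j] = integral K_i(u) K_j(u) du for Gaussian kernels of width h,
# which in 1-D is a Gaussian of variance 2 h^2 evaluated at x_i - x_j.
d2 = (x[:, None] - x[None, :]) ** 2
G = np.exp(-d2 / (4 * h * h)) / np.sqrt(4 * np.pi * h * h)

full = G.mean(axis=1)                 # <K_i, f> against the full Parzen f
selected = []
for _ in range(8):                    # greedily pick 8 kernels
    best_j, best_ise = None, np.inf
    for j in range(N):
        if j in selected:
            continue
        S = selected + [j]
        w = 1.0 / len(S)              # uniform weights: nonneg, sum to one
        ise = w * w * G[np.ix_(S, S)].sum() - 2 * w * full[S].sum()
        if ise < best_ise:            # ISE(g - f) up to the constant <f, f>
            best_j, best_ise = j, ise
    selected.append(best_j)

print("selected kernel centres:", np.sort(x[selected]).round(2))
```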
Abstract:
Economic Dispatch (ED) problems have recently been solved by artificial neural network approaches. In most of these dispatch models the cost function must be linear or quadratic, so cost functions with several local minima pose a problem, since these approaches do not accept nonlinear cost functions. Another drawback pointed out in the literature is that some of these neural approaches fail to converge efficiently towards feasible equilibrium points. This paper discusses the application of a modified Hopfield architecture for solving ED problems defined by nonlinear cost functions. The internal parameters of the neural network adopted here are computed using the valid-subspace technique, which guarantees convergence to equilibrium points that represent a solution of the ED problem. Simulation results and a comparative analysis involving a 3-bus test system are presented to illustrate the efficiency of the proposed approach.
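As a rough stand-in for the modified Hopfield dynamics (which this sketch does not reproduce), the valid-subspace idea of keeping the trajectory on the power-balance constraint can be illustrated with projected gradient descent on a small dispatch problem with a nonconvex valve-point cost; the three-unit coefficients are invented, not the paper's test system:

```python
import numpy as np

# Hypothetical 3-unit system with valve-point (nonconvex) cost terms.
a = np.array([0.008, 0.009, 0.007])   # quadratic cost coefficients
b = np.array([7.0, 6.3, 6.8])         # linear coefficients
e = np.array([50.0, 40.0, 30.0])      # valve-point amplitudes
f = np.array([0.06, 0.05, 0.07])      # valve-point frequencies
pmin = np.array([50.0, 50.0, 50.0])
pmax = np.array([250.0, 250.0, 250.0])
demand = 450.0

def marginal_cost(p):
    """Derivative of a p^2 + b p + |e sin(f (pmin - p))|."""
    s = np.sin(f * (pmin - p))
    return 2 * a * p + b - e * f * np.sign(s) * np.cos(f * (pmin - p))

p = np.full(3, demand / 3)            # start on the balance hyperplane
for _ in range(5000):
    g = marginal_cost(p)
    g -= g.mean()                     # project onto the subspace sum(dp) = 0
    p = np.clip(p - 0.05 * g, pmin, pmax)
    p += (demand - p.sum()) / 3       # restore balance after clipping
    # (a full implementation would re-project onto the box constraints)

cost = (a * p**2 + b * p + np.abs(e * np.sin(f * (pmin - p)))).sum()
print("dispatch (MW):", p.round(1), " total cost:", round(cost, 1))
```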
Abstract:
Aim. The aim of the present study was to investigate the validity of the Lactate Minimum Test (LMT) for the determination of peak VO2 on a cycle ergometer and to determine the submaximal oxygen uptake (VO2) and pulmonary ventilation (VE) responses in an incremental exercise test when it is preceded by high-intensity exercise (i.e., during a LMT). Methods. Ten trained male athletes (triathletes and cyclists) performed 2 exercise tests in random order on an electromagnetic cycle ergometer: 1) Control Test (CT): an incremental test with an initial work rate of 100 W and 25 W increments at 3-min intervals, until voluntary exhaustion; 2) LMT: an incremental test identical to the CT, except that it was preceded by 2 supramaximal 30-s bouts (~120% of VO2peak) with a 30-s rest to induce lactic acidosis. This test started 8 min after the induction of acidosis. Results. There was no significant difference in peak VO2 (65.6 ± 7.4 ml·kg−1·min−1 and 63.8 ± 7.5 ml·kg−1·min−1 for CT and LMT, respectively). However, the maximal power output (POmax) reached was significantly higher in the CT (300.6 ± 15.7 W) than in the LMT (283.2 ± 16.0 W). VO2 and VE were significantly increased at the initial power outputs in the LMT. Conclusion. Although the LMT alters the submaximal physiological responses during the incremental phase (greater initial metabolic cost), the protocol is valid for evaluating peak VO2, although the POmax reached is reduced.
Abstract:
With the deregulation of the electric sector came the need to assign responsibilities to several agents and to establish appropriate forms of remuneration for the services they render. One service of great importance within this new electric sector is the ancillary services. Among the various types of ancillary services, spinning reserve is necessary for maintaining the integrity of the transmission system in the face of generation interruptions or load variations. This paper applies economic dispatch theory to quantify the availability of spinning reserve supply in hydroelectric plants. The proposed methodology uses the generating units and their efficiencies to meet the total demand with the minimum water discharge. The methodology was tested with data provided by the Água Vermelha hydroelectric power plant; these tests permitted a valuation of the opportunity cost of supplying spinning reserve in hydroelectric plants. © 2005 IEEE.
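A minimal sketch of the dispatch logic just described: meet demand with the most efficient units first, then read off the implied water discharge (from P = η ρ g H Q) and the spinning reserve left on the committed units. Head, capacities and efficiencies below are assumed round numbers, not Água Vermelha data, and real unit efficiency varies with loading rather than being constant:

```python
head = 55.0                       # net head in metres (assumed)
units = [(250.0, 0.93), (250.0, 0.91), (250.0, 0.88)]  # (capacity MW, efficiency)
demand = 520.0                    # MW

def discharge(p_mw, eff):
    """Flow in m^3/s needed to generate p_mw, from P = eff * rho*g * H * Q."""
    return p_mw * 1e6 / (eff * 9810.0 * head)

dispatch, remaining = [], demand
for cap, eff in sorted(units, key=lambda u: -u[1]):   # most efficient first
    p = min(cap, remaining)
    dispatch.append((p, cap, eff))
    remaining -= p

total_q = sum(discharge(p, eff) for p, _, eff in dispatch)
reserve = sum(cap - p for p, cap, _ in dispatch if p > 0)  # MW on spinning units
print(f"discharge {total_q:.1f} m^3/s, spinning reserve {reserve:.1f} MW")
```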
Abstract:
In this paper we present an optimization of the Optimum-Path Forest (OPF) classifier training procedure, based on a theoretical relationship between the minimum spanning forest and the optimum-path forest for a specific path-cost function. Experiments on public datasets have shown that the proposed approach obtains accuracy similar to the traditional one, but with faster training. © 2012 ICPR Org Committee.
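The relationship being exploited can be pictured with the MST-based prototype step familiar from the OPF literature: build a minimum spanning tree over the training samples and take as prototypes the endpoints of tree edges joining different classes. The sketch below does that with Prim's algorithm on a tiny invented two-class set; it illustrates the connection, not the paper's full optimized training procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
# Tiny two-class toy set; the OPF papers use much larger public datasets.
X = np.vstack([rng.normal(0, 0.5, (10, 2)), rng.normal(2, 0.5, (10, 2))])
y = np.array([0] * 10 + [1] * 10)

# Prim's algorithm on the complete Euclidean graph.
n = len(X)
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
in_tree = np.zeros(n, dtype=bool)
best = np.full(n, np.inf)         # cheapest edge into the tree, per node
parent = np.full(n, -1)
best[0] = 0.0
mst_edges = []
for _ in range(n):
    u = int(np.argmin(np.where(in_tree, np.inf, best)))
    in_tree[u] = True
    if parent[u] >= 0:
        mst_edges.append((int(parent[u]), u))
    closer = ~in_tree & (D[u] < best)
    best[closer] = D[u][closer]
    parent[closer] = u

# Prototypes: endpoints of MST edges whose ends carry different labels.
prototypes = sorted({v for a, c in mst_edges if y[a] != y[c] for v in (a, c)})
print(f"{len(mst_edges)} MST edges, prototypes at indices {prototypes}")
```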
Abstract:
In a recent ECLAC study of inefficiency at border crossings in Mercosur countries, it was found that the cost of delays in traffic between Argentina and Brazil amounted to a minimum of US$ 170 per truck for the most problematic border crossing, equivalent to over 10% of the typical price of freight between Buenos Aires and São Paulo or Porto Alegre. It was estimated that the extra cost at this border crossing may amount to a maximum of US$ 273 per truck. These problems, which have more to do with organization than with infrastructure, cause serious losses to the sectors involved in international transport, and especially to the end users of the intermediate or consumer goods transported. This edition of the Bulletin includes a summary of a study entitled: Identificación de obstáculos al transporte terrestre internacional de cargas en el Mercosur: los casos de Argentina, Brasil y Uruguay.
Abstract:
In general, pattern recognition techniques require a high computational burden for learning the discriminating functions that separate samples from distinct classes. Consequently, several studies have sought to employ machine learning algorithms in big data classification problems. Research in this area ranges from Graphics Processing Unit-based implementations to mathematical optimizations, the main drawback of the former being their dependence on the graphics card. Here we propose an architecture-independent optimization approach for the optimum-path forest (OPF) classifier, designed using a theoretical formulation that relates the minimum spanning tree to the minimum spanning forest generated by the OPF over the training dataset. The experiments have shown that the proposed approach can be faster than the traditional one on five public datasets, while being as accurate as the original OPF. (C) 2014 Elsevier B. V. All rights reserved.
Abstract:
How to evaluate the cost-effectiveness of repair/retrofit interventions versus demolition/replacement, and what level of shaking intensity the chosen repair/retrofit technique can sustain, are open questions affecting the pre-earthquake prevention, post-earthquake emergency and reconstruction phases. The (mis)conception that the cost of retrofit interventions increases linearly with the achieved seismic performance (%NBS) often discourages stakeholders from considering repair/retrofit options in a post-earthquake damage situation. Similarly, in a pre-earthquake phase, only the minimum (by-law) level of %NBS might be targeted, leading in some cases to no action. Furthermore, the performance measure compelling owners to take action, the %NBS, is generally evaluated deterministically. Because it does not directly reflect epistemic and aleatory uncertainties, the assessment can result in misleading confidence in the expected performance. The present study aims to contribute to the delicate decision-making process of repair/retrofit versus demolition/replacement by developing a framework to assist stakeholders in evaluating the long-term losses and benefits of an increment in their initial investment (the targeted retrofit level), and by highlighting the uncertainties hidden behind a deterministic approach. For a pre-1970 case study building, different retrofit solutions are considered, targeting different levels of %NBS, and the actual probability of reaching collapse under a suite of ground motions is evaluated, providing a correlation between %NBS and risk. Both a simplified and a probabilistic loss modelling are then undertaken to study the relationship between %NBS and expected direct and indirect losses.
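In the spirit of the simplified loss modelling described, comparing options reduces to an upfront retrofit cost plus the present value of expected annual collapse losses. The sketch below shows that arithmetic for three hypothetical options; the %NBS levels, collapse probabilities and dollar values are all invented for illustration:

```python
# Hypothetical options: %NBS achieved, upfront cost, and annualized collapse
# probability inferred from a ground-motion suite. All figures are invented.
options = {
    "as-is":         (0.20, 0.0e6, 2.0e-3),
    "moderate":      (0.67, 1.0e6, 4.0e-4),
    "full retrofit": (1.00, 2.5e6, 1.0e-4),
}
loss_given_collapse = 30.0e6  # direct plus indirect losses
horizon, discount = 50, 0.04  # years, discount rate

def pv_losses(p_annual):
    eal = p_annual * loss_given_collapse          # expected annual loss
    return eal * (1 - (1 + discount) ** -horizon) / discount

for name, (nbs, cost, p_c) in options.items():
    total = cost + pv_losses(p_c)
    print(f"{name:13s} %NBS={nbs:4.0%}  upfront ${cost/1e6:.1f}M  "
          f"PV(cost+losses) ${total/1e6:.2f}M")
```

With these invented numbers the moderate retrofit narrowly beats doing nothing, which is exactly the kind of non-linear cost/benefit picture the framework is meant to expose.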
Abstract:
Moisture-induced distresses have been the prevalent distress type affecting the deterioration of both asphalt and concrete pavement sections. While various surface techniques have been employed over the years to minimize the ingress of moisture into the pavement structural sections, subsurface drainage components like open-graded base courses remain the best alternative for minimizing the time the pavement structural sections are exposed to saturated conditions. This research therefore focuses on assessing the performance and cost-effectiveness of pavement sections containing both treated and untreated open-graded aggregate base materials. Three common roadway aggregates, comprising two virgin aggregates and one recycled aggregate, were investigated using four open-ended gradations and two binder types. Laboratory tests were conducted to determine the hydraulic, mechanical and durability characteristics of treated and untreated open-graded mixes made from these three aggregate types. Results of the experimental program show that, for the same gradation and mix design types, limestone samples have the greatest drainage capacity, stability under traffic loads and resistance to degradation from environmental conditions such as freeze-thaw. However, depending on the gradation and mix design used, all three aggregate types, namely limestone, natural gravel and recycled concrete, can meet the minimum coefficient of hydraulic conductivity required for good drainage in most pavements. Test results for both asphalt- and cement-treated open-graded samples indicate that an air void content in the range of 15-25% will produce a treated open-graded base course with sufficient drainage capacity and long-term stability under both traffic and environmental loads. Using the new Mechanistic-Empirical Pavement Design Guide (MEPDG) software, computer simulations of pavement performance were conducted on pavement sections containing these open-graded aggregate base materials to determine whether the MEPDG-predicted pavement performance is sensitive to drainage. Using three truck traffic levels and four climatic regions, the results of the computer simulations indicate that the predicted performance was not sensitive to the drainage characteristics of the open-graded base course. Based on the MEPDG-predicted pavement performance, the cost-effectiveness of the pavement sections with an open-graded base was computed on the assumption that the increased service life experienced by these sections was attributable to the positive effects of subsurface drainage. The two cost analyses used gave contrasting results, with one indicating that the inclusion of open-graded base courses can lead to substantial savings.
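One standard way to run the cost-effectiveness comparison described, crediting the drained section with a longer service life, is an equivalent uniform annual cost (EUAC) calculation. The sketch below uses invented construction costs and lives; the study's two analyses used different, real inputs and reached contrasting conclusions:

```python
def euac(initial_cost, life_years, rate=0.04):
    """Equivalent uniform annual cost: spreads a first cost over a service life."""
    growth = (1 + rate) ** life_years
    return initial_cost * rate * growth / (growth - 1)

# Hypothetical costs per lane-km and service lives (drainage assumed to add
# construction cost but extend life via less time spent saturated).
undrained = euac(300_000, 20)   # dense-graded base only
drained = euac(325_000, 25)     # with open-graded base course

print(f"undrained EUAC ${undrained:,.0f}/yr vs drained EUAC ${drained:,.0f}/yr")
```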
Abstract:
A Payment Cost Minimization (PCM) auction has been proposed as an alternative to the Offer Cost Minimization (OCM) auction used in wholesale electric power markets, with the intention of lowering the procurement cost of electricity. Efficiency concerns about this proposal have relied on the assumption of true production cost revelation. Using an experimental approach, I compare the two auctions, strictly controlling for the level of unilateral market power. A specific feature of these complex-offer auctions is that sellers submit not only the quantities and the minimum prices at which they are willing to sell, but also start-up fees that are designed to reimburse the fixed start-up costs of the generation plants. I find that both auctions result in start-up fees that are significantly higher than the start-up costs. Overall, the two auctions perform similarly in terms of procurement cost and efficiency. Surprisingly, I do not find a substantial difference between the less market power and more market power designs: both result in similar inefficiencies and equally elevated procurement costs relative to the competitive prediction. The PCM auction tends to have lower price volatility than the OCM auction when market power is minimal, but this property vanishes in the designs with market power. These findings lead me to conclude that neither the PCM nor the OCM auction belongs to the class of truth-revealing mechanisms, and neither easily elicits competitive behavior.
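The wedge between the two objectives can be seen in a single-hour toy market: OCM picks the commitment minimizing as-offered cost (energy offers plus start-up fees), while PCM minimizes what is actually paid out under uniform pricing, and start-up fees plus a price-setting marginal unit can drive the two apart. Everything below is invented for illustration and brute-forced; real market-clearing software solves far larger mixed-integer programs:

```python
from itertools import combinations

# (name, offer price $/MWh, capacity MW, start-up fee $) -- hypothetical.
offers = [("g1", 10.0, 140, 0.0), ("g2", 50.0, 10, 0.0), ("g3", 12.0, 150, 4000.0)]
demand = 150

def dispatch(committed):
    """Merit-order dispatch of a committed set; None if demand cannot be met."""
    if sum(cap for _, _, cap, _ in committed) < demand:
        return None
    rest, plan = demand, []
    for name, price, cap, fee in sorted(committed, key=lambda o: o[1]):
        q = min(cap, rest)
        plan.append((name, price, q, fee))
        rest -= q
    return plan

def offered_cost(plan):    # OCM objective: pay-as-offered cost
    return sum(p * q + fee for _, p, q, fee in plan)

def payment_cost(plan):    # PCM objective: uniform-price payment plus fees
    clearing = max(p for _, p, q, _ in plan if q > 0)
    return sum(clearing * q + fee for _, _, q, fee in plan)

plans = [d for r in (1, 2, 3) for c in combinations(offers, r)
         if (d := dispatch(list(c))) is not None]
ocm, pcm = min(plans, key=offered_cost), min(plans, key=payment_cost)
print("OCM commits", [(n, q) for n, _, q, _ in ocm], "-> paid", payment_cost(ocm))
print("PCM commits", [(n, q) for n, _, q, _ in pcm], "-> paid", payment_cost(pcm))
```

In this toy case OCM selects the cheap-as-offered pair whose marginal unit sets a high uniform price, while PCM accepts a higher offered cost to cut the actual payment; neither objective, consistent with the experimental findings above, gives sellers a reason to reveal true costs.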
Abstract:
In recent years there has been continuing interest in the participation of university research groups in space technology studies by means of their own microsatellites. Involvement in such projects brings inherent challenges, such as limited budget and facilities. In addition, because the main objective of these projects is educational, there are usually uncertainties regarding the in-orbit mission and scientific payloads at the early phases of the project. On the other hand, there are predetermined limits on the mass and volume budgets, since most of these satellites are launched as auxiliary payloads, which reduces the launch cost considerably. The satellite structure subsystem is the one most affected by the launcher constraints, which bear on dimensions, strength and frequency requirements. In this paper the main focus is on developing a structural design sizing tool containing not only the primary structure properties as variables but also system-level variables such as the payload mass budget and the satellite total mass and dimensions. This approach enables the design team to obtain better insight into the design over an extended design envelope. The structural design sizing tool is based on analytical structural design formulas and appropriate assumptions, including both static and dynamic models of the satellite. Finally, a Genetic Algorithm (GA) multi-objective optimization is applied to the design space. The result is a Pareto-optimal front based on two objectives, minimum satellite total mass and maximum payload mass budget, which gives useful insight to the design team at the early phases of the design.
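A toy version of the sizing-plus-Pareto step, using a random search as a stand-in for the paper's genetic algorithm: a thin-walled cylindrical aluminium structure is sized against a launcher mass ceiling and a minimum axial-frequency requirement, and the non-dominated designs in (total mass, payload budget) space are kept. Every number and the structural idealization are invented for illustration:

```python
import math
import random

random.seed(3)

E, RHO = 70e9, 2700.0        # aluminium: Young's modulus (Pa), density (kg/m^3)
LENGTH, RADIUS = 0.8, 0.25   # cylinder length and radius, m
FIXED_MASS = 40.0            # non-structural bus mass, kg
LIMIT_MASS = 120.0           # launcher ceiling for the auxiliary payload, kg
F_MIN = 90.0                 # required first axial frequency, Hz

def feasible(t, m_payload):
    """Total mass and payload if wall thickness t (m) meets the constraints."""
    area = 2 * math.pi * RADIUS * t                 # thin-walled cross-section
    total = RHO * area * LENGTH + FIXED_MASS + m_payload
    # single lumped-mass axial mode: f = sqrt(k/m) / 2 pi with k = E A / L
    f_axial = math.sqrt(E * area / LENGTH / total) / (2 * math.pi)
    return (total, m_payload) if total <= LIMIT_MASS and f_axial >= F_MIN else None

pts = []                     # random search in place of the GA
for _ in range(1500):
    d = feasible(random.uniform(0.3e-3, 6e-3), random.uniform(0.0, 60.0))
    if d:
        pts.append(d)

# Non-dominated set: minimise total mass while maximising payload budget.
front = sorted(p for p in pts
               if not any(q[0] <= p[0] and q[1] >= p[1] and q != p for q in pts))
print(f"{len(front)} non-dominated of {len(pts)} feasible; lightest:",
      tuple(round(v, 1) for v in front[0]))
```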