958 results for Cost Optimization
Abstract:
PURPOSE: The purpose of this study was to examine the influence of three different high-intensity interval training (HIT) regimens on endurance performance in highly trained endurance athletes. METHODS: Before, and after 2 and 4 wk of training, 38 cyclists and triathletes (mean +/- SD; age = 25 +/- 6 yr; mass = 75 +/- 7 kg; VO(2peak) = 64.5 +/- 5.2 mL x kg(-1) min(-1)) performed: 1) a progressive cycle test to measure peak oxygen consumption (VO(2peak)) and peak aerobic power output (PPO), 2) a time to exhaustion test (T(max)) at their VO(2peak) power output (P(max)), as well as 3) a 40-km time-trial (TT(40)). Subjects were matched and assigned to one of four training groups (G(1), N = 8, 8 x 60% T(max) at P(max), 1:2 work:recovery ratio; G(2), N = 9, 8 x 60% T(max) at P(max), recovery at 65% HR(max); G(3), N = 10, 12 x 30 s at 175% PPO, 4.5-min recovery; G(CON), N = 11). In addition to G(1), G(2), and G(3) performing HIT twice per week, all athletes maintained their regular low-intensity training throughout the experimental period. RESULTS: All HIT groups improved TT(40) performance (+4.4 to +5.8%) and PPO (+3.0 to +6.2%) significantly more than G(CON) (-0.9 to +1.1%; P < 0.05). Furthermore, G(1) (+5.4%) and G(2) (+8.1%) improved their VO(2peak) significantly more than G(CON) (+1.0%; P < 0.05). CONCLUSION: The present study has shown that when HIT incorporates P(max) as the interval intensity and 60% of T(max) as the interval duration, already highly trained cyclists can significantly improve their 40-km time-trial performance. Moreover, the present data confirm prior research showing that repeated supramaximal HIT can significantly improve 40-km time-trial performance.
Abstract:
Dear Editor, We thank Dr Klek for his interest in our article and for giving us the opportunity to clarify our study and share our thoughts. Our study examined the prevalence of malnutrition in an acute tertiary hospital and tracked outcomes prospectively.1 There are a number of reasons why we chose Subjective Global Assessment (SGA) to determine the nutritional status of patients. Firstly, we took the view that nutrition assessment tools should be used to determine nutrition status and to diagnose the presence and severity of malnutrition, whereas the purpose of nutrition screening tools is to identify individuals who are at risk of malnutrition. Nutritional assessment rather than screening should be used as the basis for planning and evaluating nutrition interventions for those diagnosed with malnutrition. Secondly, SGA has been well accepted and validated as an assessment tool to diagnose the presence and severity of malnutrition in clinical practice.2, 3 It has been used in many studies as a valid prognostic indicator of a range of nutritional and clinical outcomes.4, 5, 6 On the other hand, the Malnutrition Universal Screening Tool (MUST)7 and Nutrition Risk Screening 2002 (NRS 2002)8 have been established as screening rather than assessment tools.
Abstract:
This study seeks insights into the economic consequences of accounting conservatism by examining the relation between conservatism and cost of equity capital. Appealing to the analytical and empirical literatures, we posit an inverse relation. Importantly, we also posit that the strength of the relation is conditional on the firm’s information environment, being the strongest for firms with high information asymmetry and the weakest (potentially negligible) for firms with low information asymmetry. Based on a sample of US-listed entities, we find, as predicted, an inverse relation between conservatism and the cost of equity capital, but further, that this relation is diminished for firms with low information asymmetry environments. This evidence indicates that there are economic benefits associated with the adoption of conservative reporting practices and leads us to conclude that conservatism has a positive role in accounting principles and practices, despite its increasing rejection by accounting standard setters.
Abstract:
A considerable amount of research has proposed optimization-based approaches employing various vibration parameters for structural damage diagnosis. Damage detection by these methods is in effect the result of updating the analytical structural model to match the current physical model. The feasibility of these approaches has been proven, but most of the verification has been done on simple structures, such as beams or plates. When applied to a complex structure, such as a steel truss bridge, a traditional optimization process consumes massive computational resources and converges slowly. This study presents a multi-layer genetic algorithm (ML-GA) to overcome the problem. Unlike the tedious convergence of a conventional damage optimization process, in each layer the proposed algorithm divides the GA's population into groups with a smaller number of damage candidates; the converged population of each group then serves as an initial population for the next layer, where the groups merge into larger groups. Because a damage detection process featuring ML-GA can be parallelized, both optimization performance and computational efficiency are enhanced. To assess the proposed algorithm, the modal strain energy correlation (MSEC) is used as the objective function. Several damage scenarios of a complex steel truss bridge's finite element model are employed to evaluate the effectiveness and performance of ML-GA against a conventional GA. In both single- and multiple-damage scenarios, the analytical and experimental study shows that the MSEC index achieves excellent damage indication and efficiency using the proposed ML-GA, whereas the conventional GA converges only to a local solution.
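The layered group-then-merge scheme described above can be sketched as follows. This is a minimal toy illustration: a hypothetical squared-error fitness stands in for the paper's MSEC objective, and the population sizes, layer counts, and genetic operators are all assumptions.

```python
import random

def toy_fitness(candidate, true_damage):
    # Toy objective: negative squared error to a known damage vector.
    # The paper uses modal strain energy correlation (MSEC) instead.
    return -sum((c - t) ** 2 for c, t in zip(candidate, true_damage))

def evolve_group(group, true_damage, generations=30, rng=None):
    """Evolve one subgroup with truncation selection, one-point crossover,
    and clamped Gaussian mutation. Parents survive, so the best never worsens."""
    rng = rng or random
    for _ in range(generations):
        group.sort(key=lambda c: toy_fitness(c, true_damage), reverse=True)
        parents = group[: len(group) // 2]
        children = []
        while len(parents) + len(children) < len(group):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]
            i = rng.randrange(len(child))
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.05)))
            children.append(child)
        group = parents + children
    return group

def ml_ga(true_damage, pop_size=32, n_groups=4, seed=1):
    """Multi-layer GA sketch: split the population into groups, evolve each
    (the groups could run in parallel), then halve the group count so the
    converged subpopulations merge into larger groups in the next layer."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in true_damage] for _ in range(pop_size)]
    while n_groups >= 1:
        groups = [pop[i::n_groups] for i in range(n_groups)]
        pop = []
        for g in groups:
            pop.extend(evolve_group(g, true_damage, rng=rng))
        n_groups //= 2  # merge groups into larger groups for the next layer
    return max(pop, key=lambda c: toy_fitness(c, true_damage))

best = ml_ga([0.0, 0.3, 0.0, 0.6])
```

Each layer narrows the search within a subgroup before merging, which is the mechanism the abstract credits for faster convergence than a single monolithic GA.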
Abstract:
Wireless networked control systems (WNCSs) have been widely used in manufacturing and industrial processing over the last few years. They provide real-time control with a unique characteristic: periodic traffic. These systems are time-critical. Under current wireless mechanisms, WNCS performance suffers from long time-varying delays, packet dropout, and inefficient channel utilization. Wirelessly networked applications such as WNCSs are currently designed on a layered architecture, whose features constrain the performance of these demanding applications. Numerous efforts have applied cross-layer design (CLD) approaches to improve the performance of various networked applications. However, existing research rarely considers large-scale networks and congested network conditions in WNCSs, and there is little discussion of how to apply CLD approaches in WNCSs. This thesis proposes a cross-layer design methodology to address the timeliness of periodic traffic and to improve the efficiency of channel utilization in WNCSs. The proposed CLD is built around measurement of the underlying network condition, classification of the network state, and adjustment of the sampling period between sensors and controllers. The period adjustment maintains the minimally allowable sampling period while maximizing control performance. Extensive simulations are conducted using the network simulator NS-2 to evaluate the performance of the proposed CLD, comparing communication with and without it. The results show that the proposed CLD fulfills the timeliness requirement under congested network conditions and also improves channel utilization efficiency and the proportion of effective data in WNCSs.
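The measure-classify-adjust loop described above might look like the following sketch. The delay and loss thresholds, state names, and period bounds are illustrative assumptions, not values from the thesis.

```python
def adjust_sampling_period(delay_ms, loss_rate, t_min_ms, t_max_ms):
    """Toy cross-layer rule: classify the network state from measured
    delay and loss, then choose a sampling period between the minimally
    allowable period and a relaxed maximum. All thresholds are assumed."""
    if delay_ms < 10 and loss_rate < 0.01:
        state = "idle"          # network lightly loaded: sample fastest
    elif delay_ms < 50 and loss_rate < 0.05:
        state = "normal"        # moderate load: intermediate period
    else:
        state = "congested"     # back off to relieve the channel
    period = {"idle": t_min_ms,
              "normal": (t_min_ms + t_max_ms) / 2,
              "congested": t_max_ms}[state]
    return state, period

state, period = adjust_sampling_period(delay_ms=60, loss_rate=0.02,
                                       t_min_ms=10, t_max_ms=100)
# → ("congested", 100)
```

Backing off the sampling period under congestion trades some control performance for timeliness of the packets that are sent, which is the trade-off the thesis targets.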
Abstract:
Electricity cost has become a major expense of running data centers, and server consolidation using virtualization technology has become an important technique for improving their energy efficiency. In this research, a genetic algorithm and a simulated-annealing algorithm are proposed for the static virtual machine placement problem, which considers the energy consumption of both the servers and the communication network, and a trading algorithm is proposed for dynamic virtual machine placement. Experimental results show that the proposed methods are more energy efficient than existing solutions.
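A simulated-annealing placement of the kind mentioned above can be sketched as follows. The energy model here (a fixed cost per active server plus a penalty for traffic crossing servers), the cooling schedule, and all figures are assumptions for illustration, not the paper's actual model.

```python
import math
import random

def energy(placement, vm_load, cap, traffic):
    """Toy energy: idle power per active server plus the traffic volume
    between VM pairs placed on different servers; infeasible if any
    server exceeds its capacity."""
    loads = {}
    for vm, s in enumerate(placement):
        loads[s] = loads.get(s, 0.0) + vm_load[vm]
    if any(load > cap for load in loads.values()):
        return float("inf")
    active = 100.0 * len(loads)
    net = sum(t for (a, b), t in traffic.items() if placement[a] != placement[b])
    return active + net

def anneal(vm_load, n_servers, cap, traffic, seed=0, steps=5000):
    """Simulated annealing: propose moving one VM, always accept
    improvements, accept worsenings with Boltzmann probability, cool
    geometrically, and track the best placement ever seen."""
    rng = random.Random(seed)
    cur = [rng.randrange(n_servers) for _ in vm_load]
    cur_e = energy(cur, vm_load, cap, traffic)
    best, best_e = cur[:], cur_e
    temp = 50.0
    for _ in range(steps):
        cand = cur[:]
        cand[rng.randrange(len(cand))] = rng.randrange(n_servers)
        cand_e = energy(cand, vm_load, cap, traffic)
        if cand_e < cur_e or rng.random() < math.exp((cur_e - cand_e) / temp):
            cur, cur_e = cand, cand_e
            if cur_e < best_e:
                best, best_e = cur[:], cur_e
        temp *= 0.999
    return best, best_e

# Hypothetical workload: chatty VM pairs (0,1) and (2,3) should co-locate.
vm_load = [30.0, 30.0, 30.0, 30.0, 20.0, 20.0]
traffic = {(0, 1): 40.0, (2, 3): 40.0, (4, 5): 10.0}
best, best_e = anneal(vm_load, n_servers=3, cap=100.0, traffic=traffic)
```

Consolidation shows up here as fewer active servers, and the network term rewards keeping chatty VM pairs together, mirroring the two energy components the abstract names.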
Abstract:
Mortality and cost outcomes of elderly intensive care unit (ICU) trauma patients were characterised in a retrospective cohort study from an Australian tertiary ICU. Trauma patients admitted between January 2000 and December 2005 were grouped into three major age categories: aged ≥65 years admitted to ICU (n=272); aged ≥65 years admitted to a general ward (n=610); and aged <65 years admitted to ICU (n=1617). Hospital mortality predictors were characterised as odds ratios (OR) using logistic regression. The impact of predictor variables on (log) total hospital-stay costs was determined using least squares regression. An alternate treatment-effects regression model estimated the mortality cost-effect as an endogenous variable. Mortality predictors (P ≤0.0001, comparator: ICU ≥65 years, ventilated) were: ICU <65 not-ventilated (OR 0.014); ICU <65 ventilated (OR 0.090); ICU ≥65 not-ventilated (OR 0.061); ward ≥65 (OR 0.086); increasing injury severity score; and a Charlson comorbidity index of 1 and 2, compared with zero (OR 2.21 [1.40 to 3.48] and OR 2.57 [1.45 to 4.55]). The raw mean daily ICU and hospital costs in A$ 2005 (US$) for age <65 and ≥65 to ICU, and ≥65 to the ward were, for year 2000: ICU, $2717 (1462) and $2777 (1494); hospital, $1837 (988) and $1590 (855); ward, $933 (502); for year 2005: ICU, $3202 (2393) and $3086 (2307); hospital, $1938 (1449) and $1914 (1431); ward, $1180 (882). Cost increments were predicted by age ≥65 with ICU admission, increasing injury severity score, mechanical ventilation, Charlson comorbidity index increments, and hospital survival. The mortality cost-effect was estimated at -63% by least squares regression and -82% by the treatment-effects regression model. Patient demographic factors, injury severity and its consequences predict both cost and survival in trauma. The cost-mortality effect was biased upwards by conventional least squares estimation.
Abstract:
The ability to steer business operations in alignment with the true origins of costs, and to be informed about this on a real-time basis, allows businesses to increase profitability. In most organisations, however, high-level cost-based managerial decisions are still being made separately from process-related operational decisions. In this paper, we describe how process-related decisions at the operational level can be guided by cost considerations and how these cost-informed decision rules can be supported by a workflow management system. The paper presents the conceptual framework together with the data requirements and technical challenges that need to be addressed to realise cost-informed workflow execution. The feasibility of our approach is demonstrated using a prototype implementation in the YAWL workflow environment.
Abstract:
The Action Lecture program is an innovative teaching method run in some nursery and primary schools in Paris and designed to improve pupils' literacy. We report the results of an evaluation of this program, describing the experimental protocol built to estimate its impact on several types of indicators. Data were processed using a Differences-in-Differences (DID) method. We then use the estimated impact on academic achievement to conduct a cost-effectiveness analysis, taking a class-size reduction program as a benchmark. The results are positive for the Action Lecture program.
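The DID method mentioned above reduces to a simple double difference: the treated group's change over time minus the control group's change, netting out a common time trend. The numbers below are hypothetical and are not the study's data.

```python
def did(treat_pre, treat_post, control_pre, control_post):
    """Differences-in-Differences estimator: the treated group's change
    minus the control group's change, removing a shared time trend."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean literacy scores (illustrative only):
effect = did(treat_pre=50.0, treat_post=58.0,
             control_pre=51.0, control_post=54.0)
# effect = (58 - 50) - (54 - 51) = 5.0
```

The estimated effect per unit cost is then what a cost-effectiveness comparison against a benchmark program (here, class-size reduction) is built on.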
Abstract:
Background Falls are one of the most frequently occurring adverse events that impact the recovery of older hospital inpatients. Falls can threaten both immediate and longer-term health and independence, and there is a need to identify cost-effective means of preventing them in hospitals. Hospital-based falls prevention interventions tested in randomized trials have not yet been subjected to economic evaluation. Methods Incremental cost-effectiveness analysis was undertaken from the health service provider perspective, over the period of hospitalization (time horizon), using the Australian Dollar (A$) at 2008 values. Analyses were based on data from a randomized trial among n = 1,206 acute and rehabilitation inpatients. Decision tree modeling with three-way sensitivity analyses was conducted using burden-of-disease estimates developed from trial data and previous research. The intervention was a multimedia patient education program, provided with trained health professional follow-up, shown to reduce falls among cognitively intact hospital patients. Results The short-term cost to a health service of one cognitively intact patient being a faller could be as high as A$14,591 (2008). The education program cost A$526 (2008) to prevent one cognitively intact patient becoming a faller and A$294 (2008) to prevent one fall, based on primary trial data. These estimates were unstable due to high variability in the hospital costs accrued by individual patients in the trial. There was a 52% probability that the complete program was both more effective and less costly (from the health service perspective) than usual care alone. Decision tree sensitivity analyses identified that, when provided in real-life contexts, the program would be both more effective in preventing falls among cognitively intact inpatients and cost saving where the proportion of these patients who would otherwise fall under usual care is at least 4.0%.
Conclusions This economic evaluation was designed to help health care providers decide in what circumstances this intervention should be provided. If the proportion of cognitively intact patients falling on a ward under usual care is 4% or greater, then providing the complete program in addition to usual care will likely both prevent falls and reduce costs for a health service.
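The break-even reasoning in the conclusion can be illustrated with toy arithmetic. Only the A$14,591 cost per faller comes from the abstract; the program cost per patient and the relative reduction below are invented for illustration and do not reproduce the study's 4% threshold.

```python
def cost_saving(baseline_faller_prop, relative_reduction,
                cost_per_faller, program_cost_per_patient):
    """Break-even check: per patient, the expected cost avoided is
    (fallers prevented per patient) * (cost per faller); the program
    is cost saving when that exceeds its per-patient cost."""
    prevented_per_patient = baseline_faller_prop * relative_reduction
    return prevented_per_patient * cost_per_faller >= program_cost_per_patient

# Hypothetical inputs: 4% baseline faller proportion, 40% relative
# reduction, A$14,591 per faller (from the abstract), A$230 program
# cost per patient (assumed).
saves = cost_saving(0.04, 0.40, 14591.0, 230.0)
```

The higher the baseline proportion of fallers, the more cost the program can avoid, which is why the decision rule hinges on that proportion.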
Abstract:
In the electricity market environment, coordinating the reliability and economics of a power system is of great significance in determining the available transfer capability (ATC). In addition, the risks associated with uncertainties should be properly addressed in the ATC determination process for risk-benefit maximization. Against this background, the ATC must be optimally allocated and utilized within the relevant security constraints. First, non-sequential Monte Carlo simulation is employed to derive the probability density distribution of the ATC of designated areas, incorporating uncertainty factors. Second, on that basis, a multi-objective optimization model is formulated to determine the multi-area ATC so as to maximize risk-benefits. The developed model is then solved with the fast non-dominated sorting genetic algorithm (NSGA-II), which decreases the risk caused by uncertainties while coordinating the ATCs of different areas. Finally, the IEEE 118-bus test system is used to demonstrate the essential features of the developed model and algorithm.
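The non-sequential Monte Carlo step described above can be sketched on a toy system. The line capacities, outage probabilities, and load below are invented, and a real study would solve a network flow on the IEEE 118-bus system rather than summing surviving capacity.

```python
import random

def atc_distribution(line_caps, outage_probs, base_load,
                     samples=20000, seed=0):
    """Non-sequential Monte Carlo sketch: each sample draws an independent
    up/down state for every line; the toy ATC is the surviving transfer
    capacity above the base load, floored at zero. Returns the sample mean
    and the empirical probability of zero ATC."""
    rng = random.Random(seed)
    draws = []
    for _ in range(samples):
        cap = sum(c for c, p in zip(line_caps, outage_probs)
                  if rng.random() >= p)     # line survives with prob 1 - p
        draws.append(max(0.0, cap - base_load))
    mean = sum(draws) / len(draws)
    p_zero = sum(d == 0.0 for d in draws) / len(draws)
    return mean, p_zero

# Hypothetical 3-line system (all figures assumed):
mean_atc, p_zero = atc_distribution(
    line_caps=[100.0, 100.0, 50.0],
    outage_probs=[0.02, 0.02, 0.05],
    base_load=120.0)
```

The empirical distribution of the draws is what the multi-objective model would then consume to trade expected ATC against the risk of shortfall.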
Abstract:
In this paper, we present a monocular-vision-based autonomous navigation system for Micro Aerial Vehicles (MAVs) in GPS-denied environments. The major drawback of monocular systems is that the depth scale of the scene cannot be determined without prior knowledge or additional sensors. To address this problem, we minimize a cost function combining a drift-free altitude measurement with the up-to-scale position estimate obtained from the visual sensor. We evaluate the scale estimator, state estimator, and controller performance by comparing against ground-truth data acquired with a motion capture system. All resources, including source code, tutorial documentation, and system models, are available online.
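The scale-estimation idea, fusing a metric altitude measurement with up-to-scale visual estimates, can be illustrated with a one-parameter least-squares fit. This closed form and the data below are a simplification for illustration, not the authors' full cost function.

```python
def estimate_scale(altitudes, visual_z):
    """Least-squares scale: minimize sum((h_i - s * z_i)^2) over s, where
    h_i is the metric altitude and z_i the up-to-scale visual estimate.
    Setting the derivative to zero gives s = sum(h*z) / sum(z*z)."""
    num = sum(h * z for h, z in zip(altitudes, visual_z))
    den = sum(z * z for z in visual_z)
    return num / den

# Hypothetical data: visual estimates are the metric altitude divided by 2.5
s = estimate_scale([1.0, 2.0, 3.0], [0.4, 0.8, 1.2])
# s = (0.4 + 1.6 + 3.6) / (0.16 + 0.64 + 1.44) = 5.6 / 2.24 = 2.5
```

Once s is known, multiplying the visual position estimate by s yields a metric trajectory, which is what the controller needs.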
Abstract:
This paper describes a risk model for estimating the likelihood of collisions at low-exposure railway level crossings, demonstrating the effect that differences in safety integrity can have on the likelihood of a collision. The model facilitates the comparison of safety benefits between level crossings with passive controls (stop or give-way signs) and level crossings that have been hypothetically upgraded with conventional or low-cost warning devices. The scenario presented illustrates how treatment of a cross-section of level crossings with low cost devices can provide a greater safety benefit compared to treatment with conventional warning devices for the same budget.
Abstract:
Genomic DNA obtained from patient whole blood samples is a key element of genomic research. The advantages and disadvantages of the available nucleic acid isolation procedures, in terms of time efficiency, cost-effectiveness and laboratory requirements, need to be considered before choosing any particular method. These characteristics have not been fully evaluated for some laboratory techniques, such as the salting-out method for DNA extraction, which has been excluded from comparison in the studies published to date. We compared three protocols (a traditional salting-out method, a modified salting-out method and a commercially available kit) to determine the most cost-effective and time-efficient method of extracting DNA. We extracted genomic DNA from whole blood samples obtained from breast cancer patient volunteers and compared the products in terms of quantity (concentration of DNA extracted and DNA obtained per ml of blood) and quality (260/280 ratio and polymerase chain reaction product amplification). The three methods showed no statistically significant differences in final yield, but when time and cost were accounted for, the differences were very significant. The modified salting-out method cost seven- and twofold less than the commercial kit and the traditional salting-out method, respectively, and reduced the time required from 3 days to 1 hour compared with the traditional salting-out method. This highlights the modified salting-out method as a suitable choice for laboratories and research centres, particularly when dealing with a large number of samples.