998 results for Randomized Optimization


Relevance: 60.00%

Abstract:

The cross-entropy (CE) method is a new generic approach to combinatorial and multi-extremal optimization and rare-event simulation. The purpose of this tutorial is to give a gentle introduction to the CE method. We present the CE methodology, the basic algorithm and its modifications, and discuss applications in combinatorial optimization and machine learning.
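
To make the algorithmic outline concrete, here is a minimal sketch of the basic CE loop for a toy binary problem. The score function, sample count, elite fraction, and smoothing constant are illustrative assumptions, not values from the tutorial:

```python
# Minimal cross-entropy (CE) sketch: maximize a score over {0,1}^n by
# iteratively sampling from a Bernoulli model and refitting it to the
# elite (best-scoring) samples. All parameter values are illustrative.
import numpy as np

def cross_entropy_maximize(score, n, n_samples=200, elite_frac=0.1,
                           smoothing=0.7, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)                          # Bernoulli sampling probabilities
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        samples = rng.random((n_samples, n)) < p     # draw candidate solutions
        scores = np.apply_along_axis(score, 1, samples)
        elite = samples[np.argsort(scores)[-n_elite:]]   # keep the best
        # Move the sampling distribution toward the elite set (smoothed update).
        p = smoothing * elite.mean(axis=0) + (1 - smoothing) * p
    best = p > 0.5
    return best, score(best)

# Toy example: recover a hidden target bit-string.
target = np.random.default_rng(1).random(30) < 0.5
best, val = cross_entropy_maximize(lambda x: -np.sum(x != target), 30)
print(best, val)
```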

Relevance: 40.00%

Abstract:

AIM: To test the hypothesis that a 'basal plus' regimen (adding once-daily main-meal fast-acting insulin to once-daily basal insulin) would be non-inferior to biphasic insulin twice daily as assessed by glycated haemoglobin (HbA1c) concentration (non-inferiority margin predefined as ≤0.4%), but would provide superior treatment satisfaction. METHODS: This open-label trial enrolled adults to an 8- or 12-week run-in period, during which oral therapies except metformin were stopped and the insulin glargine dose was titrated. Those with fasting glucose <7 mmol/l but HbA1c >7% (53 mmol/mol) were randomized to insulin glargine/glulisine once daily (n = 170) or insulin aspart/aspart protamine 30/70 twice daily (n = 165) for 24 weeks, with dose titration to glucose targets using standardized algorithms. RESULTS: For HbA1c, the basal plus regimen was non-inferior to biphasic insulin (least squares mean difference 0.21%, upper 97.5% confidence limit 0.38%), meeting the predefined non-inferiority margin of 0.4%. Treatment satisfaction (Diabetes Treatment Satisfaction Questionnaire change version and Insulin Treatment Satisfaction Questionnaire total scores) significantly favoured basal plus. No difference was observed between the basal plus and biphasic insulin groups in responders (HbA1c <7%: 20.6 vs 27.9%; p = 0.12), weight gain (2.06 vs 2.50 kg; p = 0.2), diabetes-specific quality of life (Audit of Diabetes-Dependent Quality of Life average weighted impact (AWI) score) or generic health status (five-dimension European Quality of Life questionnaire). Overall hypoglycaemia rates were similar between groups (15.3 vs 18.2 events/patient-year; p = 0.22); nocturnal hypoglycaemia was higher with the basal plus regimen (5.7 vs 3.6 events/patient-year; p = 0.02). CONCLUSION: In long-standing type 2 diabetes with suboptimal glycaemia despite oral therapies and basal insulin, the basal plus regimen was non-inferior to biphasic insulin for biomedical outcomes, with a similar overall hypoglycaemia rate but more nocturnal events.

Relevance: 40.00%

Abstract:

PURPOSE Management of ureteral stones remains controversial. To determine whether optimizing extracorporeal shock wave lithotripsy (ESWL) delivery rates improves the treatment of solitary ureteral stones, we compared outcomes at two shock wave (SW) delivery rates in a prospective, randomized trial. MATERIALS AND METHODS From July 2010 to October 2012, 254 consecutive patients were randomized to undergo ESWL at SW delivery rates of either 60 pulses (n=130) or 90 pulses (n=124) per min. The primary endpoint was the stone-free rate at 3-month follow-up. Secondary endpoints included stone disintegration, treatment time, complications, and the rate of secondary treatments. Descriptive statistics were used to compare endpoints between the two groups. Adjusted odds ratios and 95% confidence intervals were calculated to assess predictors of success. RESULTS The stone-free rate at 3 months was significantly higher in patients who underwent ESWL at a SW delivery rate of 90 pulses per min than in those receiving 60 pulses (91% vs. 80%, p=0.01). Patients with proximal and mid-ureter stones, but not those with distal ureter stones, accounted for the observed difference (100% vs. 83%, p=0.005; 96% vs. 73%, p=0.03; and 81% vs. 80%, p=0.9, respectively). Treatment time, complications, and the rate of secondary treatments were comparable between the two groups. In multivariable analysis, a SW delivery rate of 90 pulses per min, proximal stone location, stone density, stone size, and the absence of an indwelling JJ stent were independent predictors of success. CONCLUSIONS Optimization of ESWL delivery rates can achieve excellent results for ureteral stones.

Relevance: 30.00%

Abstract:

Modern database systems incorporate a query optimizer to identify the most efficient "query execution plan" for executing the declarative SQL queries submitted by users. A dynamic-programming-based approach is used to exhaustively enumerate the combinatorially large search space of plan alternatives and, using a cost model, to identify the optimal choice. While dynamic programming (DP) works very well for moderately complex queries with up to around a dozen base relations, it usually fails to scale beyond this stage due to its inherent exponential space and time complexity. Therefore, DP becomes practically infeasible for complex queries with a large number of base relations, such as those found in current decision-support and enterprise management applications. To address this problem, a variety of approaches have been proposed in the literature. Some completely jettison the DP approach and resort to alternative techniques such as randomized algorithms, whereas others retain DP by using heuristics to prune the search space to computationally manageable levels. In the latter class, a well-known strategy is "iterative dynamic programming" (IDP), wherein DP is employed bottom-up until it hits its feasibility limit, and then iteratively restarted with a significantly reduced subset of the execution plans currently under consideration. The experimental evaluation of IDP indicated that, with an appropriate choice of algorithmic parameters, it was possible to almost always obtain "good" (within a factor of two of the optimal) plans, in most of the remaining cases "acceptable" (within an order of magnitude of the optimal) plans, and only rarely a "bad" plan. While IDP is certainly an innovative and powerful approach, we have found that there are a variety of common query frameworks wherein it can fail to consistently produce good plans, let alone the optimal choice. This is especially so when star or clique components are present, increasing the complexity of the join graphs. Worse, this shortcoming is exacerbated when the number of relations participating in the query is scaled upwards.
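
For readers unfamiliar with the enumeration the abstract refers to, the sketch below shows bottom-up DP over relation subsets. The cost model and join selectivity are toy stand-ins, not those of any real optimizer, and IDP's restart logic is omitted:

```python
# Bottom-up dynamic-programming join enumeration: every subset of
# relations is assigned its cheapest plan by combining cheaper sub-plans.
# Cardinalities, selectivity, and the cost model are illustrative toys.
from itertools import combinations

def dp_join_order(cardinalities):
    """cardinalities: dict mapping relation name -> estimated row count."""
    rels = sorted(cardinalities)
    # best[subset] = (estimated cost, estimated rows, plan expression)
    best = {frozenset([r]): (0.0, float(cardinalities[r]), r) for r in rels}
    for size in range(2, len(rels) + 1):
        for subset in map(frozenset, combinations(rels, size)):
            candidates = []
            # Split the subset into two non-empty halves and combine their
            # cheapest sub-plans (this is the exponential-space step).
            for k in range(1, size):
                for left in map(frozenset, combinations(sorted(subset), k)):
                    lc, lrows, lplan = best[left]
                    rc, rrows, rplan = best[subset - left]
                    rows = lrows * rrows * 0.01    # toy join selectivity
                    cost = lc + rc + rows          # toy cost model
                    candidates.append((cost, rows, f"({lplan} JOIN {rplan})"))
            best[subset] = min(candidates)
    return best[frozenset(rels)]

# Four relations are enumerated instantly; well past a dozen, the number
# of subset splits explodes, which is the scaling wall that motivates IDP.
print(dp_join_order({"A": 1000, "B": 500, "C": 2000, "D": 100}))
```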

Relevance: 30.00%

Abstract:

We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type. The agents report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as rebates to agents. Two performance criteria are of interest: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus, within the class of linear rebate functions. The goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems, where the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We then identify the number of samples needed for "near-feasibility" of the relaxed constraint set. Under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations where the valuation functions are not known to the central planner are also discussed.

Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering of information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is not interested in maximizing revenue, but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system. We demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
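
A minimal sketch of the constraint-sampling step follows. The payment function and the linear-rebate features are hypothetical stand-ins (the paper's actual VCG expressions are not reproduced here); the point is only that sampling the continuum of type profiles leaves an ordinary LP:

```python
# Constraint sampling: replace the continuum of half-plane constraints
# (one per reported type profile) with finitely many sampled constraints,
# then solve the resulting LP. Payment and features are illustrative.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_agents, n_samples = 4, 2000

def vcg_payment_stub(theta):
    # Placeholder for the total VCG payment collected at type profile theta.
    return 0.5 * theta.sum()

A_ub, b_ub = [], []
for _ in range(n_samples):
    theta = rng.uniform(0.0, 1.0, n_agents)   # sampled reported types
    feats = np.sort(theta)                    # features of a linear rebate
    # One sampled half-plane constraint: total rebate <= total payment.
    A_ub.append(n_agents * feats)
    b_ub.append(vcg_payment_stub(theta))

# Objective: return as much payment as possible on average, i.e. maximize
# the mean total rebate over the samples (linprog minimizes, hence the sign).
c = -np.mean(A_ub, axis=0)
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * n_agents)
print("linear rebate coefficients:", res.x)
```

With more samples, the sampled feasible set approaches the true one, which is the "near-feasibility" trade-off the abstract quantifies.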

Relevance: 30.00%

Abstract:

Summary - Cooking banana is one of the most important crops in Uganda; it is a staple food and a source of household income in rural areas. The most common cooking banana is locally called matooke, a Musa sp. triploid acuminate genome group (AAA-EAHB). It is perishable and traded in fresh form, leading to very high postharvest losses (22-45%). This is attributed to non-uniform harvest maturity, poor handling, bulk transportation, and a lack of value-addition/processing technologies, which are currently the main challenges for trade, export, and diversified utilization of matooke. Drying is one of the oldest technologies employed in processing agricultural produce. Much research has been carried out on drying of fruits and vegetables, but little information is available on matooke. Drying matooke and milling it into flour extends its shelf-life and is an important means of overcoming the above challenges. Raw matooke flour is a generic flour developed to improve the shelf stability of the fruit and to find alternative uses. It is rich in starch (80-85% db) and consequently has high potential as a calorie resource base. It possesses good properties for both food and non-food industrial use. Some effort has been made to commercialize the processing of matooke, but there is still limited information on its processing into flour. An in-depth study was therefore needed to bridge the following gaps: the lack of accurate information on the maturity window within which matooke for processing into flour can be harvested, leading to non-uniform quality of matooke flour; the absence of a moisture sorption isotherm for matooke, from which the minimum equilibrium moisture content in relation to temperature and relative humidity is obtainable, below which dry matooke would be microbiologically shelf-stable; and the lack of information on the drying behavior of matooke and on standardized processing parameters in relation to the physicochemical properties of the flour. The main objective of the study was to establish the optimum harvest maturity window and optimize the processing parameters for obtaining standardized, microbiologically shelf-stable matooke flour with good starch quality attributes. This research was designed to: i) establish the optimum harvest maturity window within which matooke can be harvested to produce a consistent quality of matooke flour, ii) establish the sorption isotherms for matooke, iii) establish the effect of process parameters on the drying characteristics of matooke, iv) optimize the drying process parameters for matooke, v) validate the models of maturity and optimum process parameters, and vi) standardize process parameters for commercial processing of matooke. Samples were obtained from a banana plantation at the Presidential Initiative on Banana Industrial Development (PIBID) Technology Business Incubation Center (TBI) at Nyaruzunga, Bushenyi, in Western Uganda. A completely randomized design (CRD) was employed in selecting the banana stools from which samples for the experiments were picked. The cultivar Mbwazirume, which is soft-cooking and commonly grown in Bushenyi, was selected for the study. The static gravitation method recommended by the COST 90 Project (Wolf et al., 1985) was used for determination of moisture sorption isotherms. A research dryer was developed for this study. All experiments were carried out in laboratories at TBI. The physiological maturity of matooke cv. Mbwazirume at Bushenyi is 21 weeks.
The optimum harvest maturity window for commercial processing of matooke flour (Raw Tooke Flour - RTF) at Bushenyi is between 15 and 21 weeks. The finger-weight model is recommended for farmers to estimate harvest maturity for matooke, and the combined model of finger weight and pulp-peel ratio is recommended for commercial processors. Matooke isotherms exhibited type II curve behavior, which is characteristic of foodstuffs, and the GAB model best described all the adsorption and desorption moisture isotherms. For commercial processing of matooke, in order to obtain a microbiologically shelf-stable dry product, it is recommended to dry it to a moisture content at or below 10% (wb). The hysteresis phenomenon was exhibited by the moisture sorption isotherms for matooke. The isosteric heat of sorption for both adsorption and desorption isotherms increased with decreasing moisture content. The total isosteric heat of sorption for matooke ranged from 4,586 to 2,386 kJ/kg for the adsorption isotherm and from 18,194 to 2,391 kJ/kg for the desorption isotherm, over equilibrium moisture contents from 0.3 to 0.01 (db), respectively. The minimum energy required for drying matooke from 80% to 10% (wb) is 8,124 kJ/kg of water removed, implying that the minimum energy required for drying 1 kg of fresh matooke from 80% to 10% (wb) is 5,793 kJ. The drying of matooke takes place in three steps: a warm-up period and two falling-rate periods. The drying rate constant varied with the processing parameters, and the effective diffusivity ranged from 1.5E-10 to 8.27E-10 m²/s. The activation energy (Ea) for matooke was 16.3 kJ/mol (1,605 kJ/kg). Comparing the activation energy (Ea) with the net isosteric heat of sorption for the desorption isotherm (qst = 1,297.62 kJ/kg) at 0.1 kg water/kg dry matter indicated that Ea was higher than qst, suggesting that moisture travels in liquid form within matooke slices. The total color difference (ΔE*) between fresh and dry samples was lowest for a slice thickness of 7 mm, followed by an air velocity of 6 m/s, and then a drying air temperature of 70˚C. A drying system controlled by a set product surface temperature reduced drying time by 50% compared with one controlled by a set drying air temperature. The processing parameters did not have a significant effect on the physicochemical and quality attributes, suggesting that any drying air temperature can be used in the initial stages of drying as long as the product temperature does not exceed the gelatinization temperature of matooke (72˚C). The optimum processing parameters for single-layer drying of matooke are: thickness 3 mm, air temperature 70˚C, dew point temperature 18˚C, and air velocity 6 m/s in overflow mode. From a practical point of view, it is recommended that commercial processing of matooke employ multi-layer drying with a loading capacity of at most 7 kg/m², thickness 3 mm, air temperature 70˚C, dew point temperature 18˚C, and air velocity 6 m/s in overflow mode.
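
As a small illustration of the GAB isotherm mentioned above, the sketch below evaluates the standard GAB equation and checks the shelf-stability threshold. The parameter values X_m, C, and K are assumed for demonstration only; the study's fitted constants are not given here:

```python
# GAB sorption-isotherm sketch: equilibrium moisture content as a
# function of water activity a_w. Parameters are illustrative, not fitted.
def gab_moisture(a_w, X_m=0.08, C=10.0, K=0.75):
    """Equilibrium moisture content (kg water / kg dry matter, db) per the
    GAB equation: X = X_m*C*K*a_w / ((1-K*a_w)*(1-K*a_w+C*K*a_w))."""
    kaw = K * a_w
    return X_m * C * kaw / ((1 - kaw) * (1 - kaw + C * kaw))

# Scan water activities and flag where the dry product would exceed the
# ~10% (wb) shelf-stability threshold recommended above.
for a_w in (0.2, 0.4, 0.6, 0.8):
    X_db = gab_moisture(a_w)
    X_wb = X_db / (1 + X_db)   # convert dry basis (db) to wet basis (wb)
    flag = "(above 10% wb)" if X_wb > 0.10 else ""
    print(f"a_w={a_w:.1f}: {X_wb:.1%} wb {flag}")
```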

Relevance: 30.00%

Abstract:

Background Self-management is seen as a primary mechanism to support the optimization of care for people with chronic diseases such as symptomatic vascular disease. There are no established, evidence-based stroke-specific chronic disease self-management programs. Our aim is to evaluate whether a stroke-specific program is safe and feasible, as part of a Phase II randomized controlled clinical trial.
Methods Stroke survivors are recruited from a variety of sources, including hospital stroke services, local newspaper advertisements, the Stroke South Australia newsletter (a volunteer peer support organization), Divisions of General Practice, and community service providers across Adelaide, South Australia. Subjects are invited to participate in a multi-center, single-blind, randomized, controlled trial. Eligible participants are randomized to one of:
• standard care,
• standard care plus a six-week generic chronic condition self-management group education program, or
• standard care plus an eight-week stroke-specific self-management group education program.
Interventions are conducted after discharge from hospital. Participants are assessed at baseline, immediately post-intervention, and at six months.
Study Outcomes The primary outcome measures determine study feasibility and safety, covering recruitment, participation, compliance, and adverse events.
Secondary outcomes include:
• positive and active engagement in life measured by the Health Education Impact Questionnaire,
• improvements in quality of life measured by the Assessment of Quality of Life instrument,
• improvements in mood measured by the Irritability, Depression and Anxiety Scale,
• health resource utilization, measured by a participant-held diary, and safety.

Conclusion The results of this study will determine whether a definitive Phase III efficacy trial is justified.

Relevance: 30.00%

Abstract:

Health care expenditure (as a percentage of GDP) has been rising in all OECD countries over recent decades. Now, in the context of the economic downturn, there is an even more pressing need to better guarantee the sustainability of health care systems. This requires that policy makers be informed about the optimal allocation of budgets. We take the Dutch mental health system in the primary care setting as an example of new ways to approach optimal allocation.

Relevance: 30.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 30.00%

Abstract:

AIMS Device-based pacing-induced diaphragmatic stimulation (PIDS) may have therapeutic potential for chronic heart failure (HF) patients. We studied the effects of PIDS on cardiac function and functional outcomes. METHODS AND RESULTS In 24 chronic HF patients with CRT, an additional electrode was attached to the left diaphragm. Patients were randomized into two groups and received each of the following PIDS modes for 3 weeks, in differing sequences: (i) PIDS off (control); (ii) PIDS 0 ms mode (PIDS delivered simultaneously with the ventricular CRT pulse); or (iii) PIDS optimized mode (PIDS with an optimized delay relative to the ventricular CRT pulse). Acoustic cardiography was used for PIDS optimization. The effects of each PIDS mode on dyspnoea, power during exercise testing, and LVEF were assessed. Dyspnoea improved with the PIDS 0 ms mode (P = 0.057) and the PIDS optimized mode (P = 0.034) compared with the control condition. Maximal power increased from a median of 100.5 W in the control condition to 104.0 W in the PIDS 0 ms mode (P = 0.092) and 109.5 W in the PIDS optimized mode (P = 0.022). Median LVEF was 33.5% in the control condition, 33.0% in the PIDS 0 ms mode, and 37.0% in the PIDS optimized mode (P = 0.763 and P = 0.009 vs. control, respectively). PIDS was asymptomatic in all patients. CONCLUSION In addition to CRT, PIDS improves dyspnoea, working capacity, and LVEF in chronic HF patients over a 3-week period. This pilot study demonstrates proof of principle of an innovative technology, which should be confirmed in a larger sample. TRIAL REGISTRATION NCT00769678.

Relevance: 30.00%

Abstract:

The impact of imatinib dose on response rates and survival in older patients with chronic myeloid leukemia in chronic phase has not been well studied. We analyzed data from the German CML-Study IV, a randomized five-arm treatment optimization study in newly diagnosed BCR-ABL-positive chronic myeloid leukemia in chronic phase. Patients randomized to imatinib 400 mg/day (IM400) or imatinib 800 mg/day (IM800) and stratified by age (≥65 years vs. <65 years) were compared regarding dose, response, adverse events, rates of progression, and survival. The full 800 mg dose was given after a 6-week run-in period on imatinib 400 mg/day; the dose could then be reduced according to tolerability. A total of 828 patients were randomized to IM400 or IM800, of whom 784 were evaluable (IM400: 382; IM800: 402). One hundred ten patients (29%) on IM400 and 83 (21%) on IM800 were ≥65 years. The median daily dose was lower for patients ≥65 years on IM800, with the largest difference in the first year (466 mg/day for patients ≥65 years vs. 630 mg/day for patients <65 years). Older patients on IM800 achieved major molecular remission and deep molecular remission as fast as younger patients, in contrast to standard-dose imatinib, with which older patients achieved remissions much later than younger patients. Grade 3 and 4 adverse events were similar in both age groups. Five-year relative survival for older patients was comparable to that of younger patients. We suggest that the optimal dose for older patients is higher than 400 mg/day. ClinicalTrials.gov identifier: NCT00055874.

Relevance: 30.00%

Abstract:

A new method is suggested for solving certain hard combinatorial optimization problems that admit a suitable reformulation. Given such a problem, several different but similar problems with the same set of solutions are prepared. They are run on a computer in parallel until one of them is solved, and that solution is accepted. Notwithstanding the evident overhead, the overall run-time can be significantly reduced, because the speed of the combinatorial search disperses widely across the prepared instances. The efficiency of this approach is investigated on the concrete problem of finding short solutions of a non-deterministic system of linear logical equations.
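
A minimal sketch of the suggested scheme follows, with a placeholder randomized solver standing in for the prepared problem variants; the instance generator, target, and seeds are illustrative assumptions:

```python
# Portfolio-style parallel search: run one solver per solution-equivalent
# problem instance and accept whichever finishes first. The solver below
# is a placeholder whose running time varies strongly with the seed,
# modeling the "dispersion of velocities" the abstract relies on.
import multiprocessing as mp
import random
import time

def randomized_search(seed, target):
    rng = random.Random(seed)
    while True:
        candidate = rng.getrandbits(16)   # toy search step
        if candidate == target:
            return seed, candidate

def solve_first(seeds, target):
    """Run one solver per equivalent instance; accept the first to finish.
    Leaving the Pool context terminates the remaining workers."""
    with mp.Pool(len(seeds)) as pool:
        pending = [pool.apply_async(randomized_search, (s, target))
                   for s in seeds]
        while True:                       # poll until any worker succeeds
            for job in pending:
                if job.ready():
                    return job.get()
            time.sleep(0.01)

if __name__ == "__main__":
    # Four reformulations of the "same" problem, modeled here as four
    # seeds over an identical solution set; the fastest instance wins.
    print(solve_first([1, 2, 3, 4], target=12345))
```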