877 results for progressive hedging


Relevance: 60.00%

Abstract:

In this paper we introduce four scenario Cluster based Lagrangian Decomposition (CLD) procedures for obtaining strong lower bounds on the (optimal) solution value of two-stage stochastic mixed 0-1 problems. At each iteration of the Lagrangian-based procedures, the traditional aim is to obtain the solution value of the corresponding Lagrangian dual by solving the scenario submodels once the nonanticipativity constraints have been dualized. Instead of considering a splitting-variable representation over the set of scenarios, we propose to decompose the model into a set of scenario clusters. We compare the computational performance of four Lagrange multiplier updating procedures, namely the Subgradient Method, the Volume Algorithm, the Progressive Hedging Algorithm and the Dynamic Constrained Cutting Plane scheme, for different numbers of scenario clusters and different dimensions of the original problem. Our computational experience shows that the CLD bound and its computational effort depend on the number of scenario clusters considered. In any case, our results show that the CLD procedures outperform the traditional LD scheme for single scenarios both in the quality of the bounds and in computational effort. All the procedures have been implemented in a C++ experimental code. A broad computational experience is reported on a testbed of randomly generated instances by using the MIP solvers COIN-OR and CPLEX for the auxiliary mixed 0-1 cluster submodels, the latter run within the open source engine COIN-OR. We also give computational evidence of the model-tightening effect that preprocessing techniques, cut generation and appending, and parallel computing tools have in stochastic integer optimization. Finally, we have observed that the plain use of both solvers does not provide the optimal solution of the instances included in the testbed, except for two toy instances, in affordable elapsed time. By contrast, the proposed procedures provide strong lower bounds (or the same solution value) in considerably shorter elapsed time than is needed to obtain a quasi-optimal solution for the original stochastic problem by other means.
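The bounding idea the abstract builds on, dualizing the nonanticipativity constraint so that scenario submodels decouple, then updating the multiplier with (here) the plain subgradient method, one of the four schemes compared, can be illustrated on a toy instance. The two-scenario data below are hypothetical, not from the paper's testbed:

```python
# Hypothetical toy instance: two equally likely scenarios with cost c_s on a
# binary decision x_s, linked by the nonanticipativity constraint x_1 = x_2,
# which we dualize with multiplier lam.
p = [0.5, 0.5]
c = [-3.0, 1.0]

def lagrangian(lam):
    """Evaluate L(lam): the scenario submodels decouple once x_1 = x_2 is dualized."""
    # submodel s minimizes its coefficient times x_s over x_s in {0, 1}
    coef = [p[0] * c[0] + lam, p[1] * c[1] - lam]
    xs = [1 if k < 0 else 0 for k in coef]   # closed-form 0-1 minimizer
    return sum(k * x for k, x in zip(coef, xs)), xs

def subgradient(iters=50):
    """Maximize the dual by subgradient ascent; every L(lam) is a valid lower bound."""
    lam, best = 0.0, float("-inf")
    for k in range(1, iters + 1):
        val, (x1, x2) = lagrangian(lam)
        best = max(best, val)
        g = x1 - x2                  # subgradient of the dual function at lam
        if g == 0:
            break                    # nonanticipativity satisfied: dual optimum
        lam += g / k                 # diminishing step size
    return best

print(subgradient())  # -> -1.0, matching the primal optimum of this toy instance
```

On this instance the bound is tight; on genuine mixed 0-1 problems a duality gap generally remains, which is why the paper compares stronger multiplier-update schemes and cluster-based (rather than single-scenario) decompositions.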

Relevance: 60.00%

Abstract:

This research project was carried out in collaboration with FPInnovations. Part of the work on the Chilean harvesting problem was conducted at the Instituto Sistemas Complejos de Ingeniería (ISCI) in Santiago, Chile.

Relevance: 60.00%

Abstract:

A classical approach to two- and multi-stage optimization problems under uncertainty is scenario analysis. The uncertainty in some of the problem data is modelled by random vectors with finite, stage-specific supports; each realization represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multi-stage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is highly sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to better track the progress of the algorithm. Numerical experiments on instances of multi-stage stochastic linear problems suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution at a very slow rate. By contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review existing techniques and propose replacing the quadratic term with a linear one. Although our method remains to be tested, we expect it to alleviate some of the numerical and theoretical difficulties of the progressive hedging method.
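The progressive hedging loop the abstract discusses, scenario subproblem solves with an augmented-Lagrangian penalty, averaging to get an implementable iterate, then a multiplier update, can be sketched on a hypothetical two-scenario quadratic problem where each subproblem has a closed-form solution. The data and the fixed penalty `rho` are illustrative assumptions; choosing `rho` adaptively is precisely the question the thesis studies:

```python
# Hypothetical two-stage problem: scenario s wants x near target a_s with
# probability p_s, f_s(x) = (x - a_s)^2; nonanticipativity forces a single x.
p = [0.3, 0.7]
a = [1.0, 5.0]
rho = 1.0                      # fixed penalty parameter (illustrative choice)

def progressive_hedging(iters=200):
    w = [0.0, 0.0]             # scenario multipliers (weighted sum stays zero)
    xbar = 0.0                 # implementable (nonanticipative) iterate
    for _ in range(iters):
        # per-scenario augmented-Lagrangian subproblem, closed form here:
        #   argmin_x (x - a_s)^2 + w_s*x + (rho/2)*(x - xbar)^2
        x = [(2 * a[s] - w[s] + rho * xbar) / (2 + rho) for s in range(2)]
        xbar = sum(p[s] * x[s] for s in range(2))     # average step
        w = [w[s] + rho * (x[s] - xbar) for s in range(2)]  # multiplier step
    return xbar

print(round(progressive_hedging(), 4))  # -> 3.8, the probability-weighted mean of the targets
```

For this convex toy problem any rho > 0 converges; the sensitivity the abstract describes shows up on larger (especially mixed-integer) instances, where a poor rho stalls or oscillates, motivating the adaptive strategy.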

Relevance: 20.00%

Abstract:

The purpose of this study was to describe the teaching and leadership experiences of a science teacher who, as head of department, was preparing to introduce changes in the science department of an independent school in response to the requirements of the new junior science syllabus in Queensland, Australia. This teacher consented to classroom observations and to interviews with the researchers in which his beliefs about teaching practice and change were explored. Other science teachers at the school were also interviewed about their reactions to the planned changes. Interpretive analysis of the data provides an account of the complex interactions, negotiations, compromises, concessions, and trade-offs faced by the teacher during a period of education reform. Perceived barriers within the school that impeded the proposed change are identified.

Relevance: 20.00%

Abstract:

Introduction: Management of osteoarthritis (OA) includes the use of non-pharmacological and pharmacological therapies. Although walking is commonly recommended for reducing pain and increasing physical function in people with OA, glucosamine sulphate has also been used to alleviate pain and slow the progression of OA. This study evaluated the effects of a progressive walking program and glucosamine sulphate intake on OA symptoms and physical activity participation in people with mild to moderate hip or knee OA. Methods: Thirty-six low-active participants (aged 42 to 73 years) were provided with 1500 mg glucosamine sulphate per day for 6 weeks, after which they began a 12-week progressive walking program while continuing to take glucosamine. They were randomized to walk 3 or 5 days per week and given a pedometer to monitor step counts. For both groups, the step level of walking was gradually increased to 3000 steps/day during the first 6 weeks of walking, and to 6000 steps/day for the next 6 weeks. Primary outcomes included physical activity levels, physical function (self-paced step test), and the WOMAC Osteoarthritis Index for pain, stiffness and physical function. Assessments were conducted at baseline and at 6-, 12-, 18-, and 24-week follow-ups. The Mann-Whitney U test was used to examine differences in outcome measures between groups at each assessment, and the Wilcoxon signed-ranks test was used to examine differences in outcome measures between assessments. Results: During the first 6 weeks of the study (glucosamine supplementation only), physical activity levels, physical function, and total WOMAC scores improved (P<0.05). Between the start of the walking program (Week 6) and the final follow-up (Week 24), further improvements were seen in these outcomes (P<0.05), although most improvements occurred between Weeks 6 and 12. No significant differences were found between walking groups.
Conclusions: In people with hip or knee OA, walking a minimum of 3000 steps (~30 minutes) at least 3 days/week, in combination with glucosamine sulphate, may reduce OA symptoms. A more robust study with a larger sample is needed to support these preliminary findings. Trial Registration: Australian Clinical Trials Registry ACTRN012607000159459.

Relevance: 20.00%

Abstract:

This paper proposes a simple variation of the Allingham and Sandmo (1972) construct and integrates it into a dynamic general equilibrium framework with heterogeneous agents. We study an overlapping generations framework in which agents must initially decide whether or not to evade taxes. If they decide to evade, they then choose the extent of income or wealth they wish to under-report. We find that, in comparison with the basic approach, the 'evade or not' choice drastically reduces the extent of evasion in the economy. This outcome is the result of an anomaly intrinsic to the basic Allingham and Sandmo version of the model, which makes the evade-or-not extension a more suitable approach to modelling the issue. We also find that the basic model and the model with an 'evade-or-not' choice have strikingly different political economy implications, which suggest fruitful avenues of empirical research.
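The two-step choice the abstract describes, first "evade or not", then how much to under-report, can be sketched numerically on the static Allingham–Sandmo trade-off. Log utility and all parameter values below are illustrative assumptions, not the paper's calibration or its general-equilibrium setup:

```python
import math

# Hypothetical parameters: income, tax rate, audit probability, fine rate
# levied on undeclared income (all illustrative).
W, t, p, f = 100.0, 0.3, 0.3, 0.6

def expected_utility(d):
    """Expected log utility when declaring income d in [0, W]."""
    not_audited = W - t * d                    # pay tax only on declared income
    audited = W - t * d - f * (W - d)          # plus a fine on undeclared income
    return (1 - p) * math.log(not_audited) + p * math.log(audited)

def best_declaration(grid=10_000):
    """Grid search for the optimal declared income (the 'extent' choice)."""
    ds = [W * k / grid for k in range(grid + 1)]
    return max(ds, key=expected_utility)

# Step 1 ('evade or not'): compare full compliance with optimal under-reporting.
d_star = best_declaration()
evade = expected_utility(d_star) > expected_utility(W)
print(evade)  # -> True: with p*f < t, some evasion raises expected utility
```

With these numbers the interior optimum is heavy under-reporting (d* ≈ 6.67), illustrating the anomaly of the basic construct: whenever the expected fine rate p*f falls below the tax rate t, every agent evades, which is what makes the discrete evade-or-not margin a meaningful extension.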

Relevance: 20.00%

Abstract:

Objective: To assess the efficacy of bilateral pedunculopontine nucleus (PPN) deep brain stimulation (DBS) as a treatment for primary progressive freezing of gait (PPFG).

Methods: A patient with PPFG underwent bilateral PPN-DBS and was followed clinically for over 14 months.

Results: The patient exhibited a robust improvement in gait and posture following PPN-DBS. When PPN stimulation was deactivated, postural stability and gait skills declined to pre-DBS levels, and fluoro-2-deoxy-D-glucose positron emission tomography revealed hypoactive cerebellar and brainstem regions, which significantly normalised when PPN stimulation was reactivated.

Conclusions: This case demonstrates that the advantages of PPN-DBS may not be limited to addressing freezing of gait (FOG) in idiopathic Parkinson's disease. The PPN may also be an effective DBS target for other forms of central gait failure.

Relevance: 20.00%

Abstract:

This paper examines the interactions between knowledge and power in the adoption of technologies central to municipal water supply plans, specifically investigating decisions in Progressive Era Chicago regarding water meters. The invention and introduction into use of the reliable water meter early in the Progressive Era allowed planners and engineers to gauge water use, and enabled communities willing to invest in the new infrastructure to allocate the costs of supply to consumers relative to use. In an era when efficiency was so prized and the role of technocratic expertise was increasing, Chicago's continued failure to adopt metering (despite levels of per capita consumption nearly twice those of comparable cities and acknowledged levels of waste nearing half of system production) may indicate that the underlying characteristics of the city's political system and its elite stymied the implementation of metering technologies, as in Smith's (1977) comparative study of nineteenth-century armories. Perhaps, as in Flyvbjerg's (1998) study of the city of Aalborg, the powerful know what they want and data will not interfere with their conclusions: if the data point to a solution other than what is desired, then it must be that the data are wrong. Alternatively, perhaps the technocrats failed to communicate their findings adequately in a language that the political elite could understand, with the failure lying in assumptions of scientific or technical literacy rather than in dissatisfaction with outcomes (Benveniste 1972).
When examined through a historical institutionalist perspective, the case study of metering adoption lends itself to exploration of larger issues of knowledge and power in the planning process: what governs decisions regarding knowledge acquisition, how knowledge and power interact, whether the potential to improve knowledge leads to changes in action, and, whether the decision to overlook available knowledge has an impact on future decisions.