972 results for adaptive strategy
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach for dealing with two- and multi-stage optimization problems under uncertainty is to use scenario analysis. To do so, the uncertainty in some of the problem data is modelled by random vectors with finite, stage-specific supports. Each of these realizations represents a scenario. By using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we review some of the popular methods and propose a new adaptive strategy that aims to better follow the progress of the algorithm. Numerical experiments on instances of multistage linear stochastic problems suggest that most of the existing techniques may either exhibit premature convergence to a suboptimal solution or converge to the optimal solution, but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all of our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review the existing techniques and propose the idea of replacing the quadratic term with a linear one. Although we have yet to test our method, our intuition is that it will reduce some of the numerical and theoretical difficulties of the progressive hedging method.
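The abstract names the moving parts of progressive hedging (scenario subproblems, the penalty parameter, multiplier updates) without giving details. As an illustration only, here is a minimal sketch on a toy two-stage quadratic problem; the closed-form subproblem solution and the ADMM-style residual-balancing penalty update are assumptions made for the sketch, not the thesis's adaptive strategy:

```python
def progressive_hedging(d, p, rho=1.0, iters=500, tol=1e-10):
    """Progressive hedging on the toy problem min_x sum_s p_s (x - d_s)^2.

    Each scenario subproblem min_x (x - d_s)^2 + w_s x + rho/2 (x - xbar)^2
    has the closed-form solution used below.
    """
    n = len(d)
    w = [0.0] * n                                   # scenario multipliers
    xbar = sum(pi * di for pi, di in zip(p, d))     # implementable point
    for _ in range(iters):
        # solve the scenario subproblems (closed form for this quadratic toy)
        x = [(2 * d[s] - w[s] + rho * xbar) / (2 + rho) for s in range(n)]
        xbar_new = sum(p[s] * x[s] for s in range(n))
        primal = max(abs(x[s] - xbar_new) for s in range(n))  # non-anticipativity gap
        dual = abs(xbar_new - xbar)                           # movement of xbar
        w = [w[s] + rho * (x[s] - xbar_new) for s in range(n)]  # multiplier update
        xbar = xbar_new
        # adaptive penalty heuristic: balance primal vs dual residual
        if primal > 10 * dual:
            rho *= 2.0
        elif dual > 10 * primal:
            rho /= 2.0
        if primal < tol and dual < tol:
            break
    return xbar, rho

# three scenarios with probabilities; the optimum is the weighted mean 2.3
x_opt, rho_final = progressive_hedging([1.0, 2.0, 6.0], [0.5, 0.3, 0.2])
```

For this quadratic toy the optimal first-stage decision is the probability-weighted scenario mean, which makes convergence easy to check by hand.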
Abstract:
This work falls within one of the major fields of organizational studies: strategy. The classical perspective in this field promoted the idea that projecting toward the future implies designing a plan (a series of deliberate actions). Later advances showed that strategy could be understood in other ways. However, the evolution of the field to some extent privileged the classical view, establishing, for example, multiple models for 'formulating' a strategy while relegating to second place the way in which strategy can 'emerge'. The purpose of this research is therefore to contribute to the current level of understanding of emergent strategies in organizations. To do so, we considered a concept that is opposed to, though complementary with, 'planning', and in fact very close in nature to this type of strategy: improvisation. Since this concept has been nourished by valuable contributions from the world of music, we drew on the knowledge of that domain, using 'metaphor' as a theoretical device to understand it and achieve the proposed objective. The results show that 1) deliberate and emergent strategies coexist and complement each other, 2) improvisation is always present in the organizational context, 3) improvisation is more intense in the 'how' of strategy than in the 'what', and, contrary to the conventional view, 4) a certain amount of preparation is required in order to improvise adequately.
Abstract:
Speed control of AC motors requires a variable-frequency, variable-current, or variable-voltage supply. A variable-frequency supply can be obtained directly from a fixed-frequency supply by using a frequency converter, or from a DC source using inverters. This paper explains a control technique for adaptive current generation that tracks a reference wave by modulating the inverter voltage. The extension of this technique to three-phase induction-motor speed control is briefly explained. Oscillograms of the current waveforms obtained from the experimental setup are also shown.
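The paper's exact modulation scheme is not given in the abstract; a common way to make a load current track a reference wave by switching the inverter voltage is hysteresis (bang-bang) control. The sketch below simulates that idea for a first-order RL load; all parameter values are illustrative assumptions:

```python
import math

def hysteresis_current_control(i_ref, V=100.0, R=1.0, L=0.01,
                               band=0.2, dt=1e-5, steps=20000):
    """Bang-bang current tracking: the inverter applies +V or -V so the
    RL-load current stays within a hysteresis band around the reference."""
    i, u, trace = 0.0, V, []
    for k in range(steps):
        ref = i_ref(k * dt)
        if i < ref - band:
            u = V          # current too low: push it up
        elif i > ref + band:
            u = -V         # current too high: push it down
        i += dt * (u - R * i) / L   # Euler step of L di/dt = u - R i
        trace.append((ref, i))
    return trace

# track a 5 A, 50 Hz sinusoidal reference for 0.2 s
trace = hysteresis_current_control(lambda t: 5.0 * math.sin(2 * math.pi * 50 * t))
```

After the brief start-up transient, the current ripples within roughly the hysteresis band plus one integration step of slew around the sinusoidal reference.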
Abstract:
A posteriori error estimation and adaptive refinement techniques for fracture analysis of 2-D/3-D crack problems represent the state of the art. The objective of the present paper is to propose a new a posteriori error estimator based on the strain energy release rate (SERR) or stress intensity factor (SIF) in the crack-tip region, and to use it along with the stress-based error estimator available in the literature for the region away from the crack tip. The proposed a posteriori error estimator is called the K-S error estimator. Further, an adaptive mesh refinement (h-) strategy that can be used with the K-S error estimator is proposed for fracture analysis of 2-D crack problems. The performance of the proposed a posteriori error estimator and the h-adaptive refinement strategy is demonstrated using 4-noded, 8-noded and 9-noded plane stress finite elements. The proposed error estimator, together with the h-adaptive refinement strategy, will facilitate automation of the fracture analysis process and provide reliable solutions.
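The h-adaptive idea, refine where a local error indicator is large, can be sketched in one dimension. The indicator below is a simple midpoint interpolation error, not the proposed K-S estimator, and the square-root function stands in for the singular behaviour near a crack tip; everything here is an illustrative assumption:

```python
def adapt_mesh(f, a=0.0, b=1.0, n0=4, sweeps=8, frac=0.5):
    """h-adaptive refinement: bisect every cell whose local error
    indicator exceeds frac * (maximum indicator over all cells)."""
    nodes = [a + (b - a) * k / n0 for k in range(n0 + 1)]
    for _ in range(sweeps):
        cells = list(zip(nodes, nodes[1:]))
        # local indicator: |f(midpoint) - linear interpolant at midpoint|
        etas = [abs(f(0.5 * (x0 + x1)) - 0.5 * (f(x0) + f(x1)))
                for x0, x1 in cells]
        cutoff = frac * max(etas)
        new_nodes = []
        for (x0, x1), eta in zip(cells, etas):
            new_nodes.append(x0)
            if eta >= cutoff:
                new_nodes.append(0.5 * (x0 + x1))   # refine this cell
        new_nodes.append(b)
        nodes = new_nodes
    return nodes

# sqrt has a singular derivative at x = 0, like a crack-tip field
mesh = adapt_mesh(lambda x: x ** 0.5)
```

The refinement concentrates nodes near the singular point at x = 0, which is exactly the behaviour an adaptive strategy should produce near a crack tip.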
Abstract:
In a human-computer dialogue system, the dialogue strategy can range from very restrictive to highly flexible. Each dialogue style has its pros and cons, and a dialogue system needs to select the most appropriate style for a given user. During the course of interaction, the dialogue style can change based on a user's responses and the system's observations of the user. This allows a dialogue system to understand a user better and provide a more suitable mode of communication. Since measures of the quality of the user's interaction with the system can be incomplete and uncertain, frameworks for reasoning with uncertain and incomplete information can help the system make better decisions when choosing a dialogue strategy. In this paper, we investigate how to select a dialogue strategy by aggregating the factors detected during the interaction with the user. For this purpose, we use probabilistic logic programming (PLP) to model probabilistic knowledge about how these factors affect the degree of freedom of a dialogue. When a dialogue system needs to know which strategy is more suitable, an appropriate query can be executed against the PLP and a probabilistic solution with a degree of satisfaction is returned. The degree of satisfaction reveals how much the system can trust the probability attached to the solution.
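The abstract's PLP machinery is beyond a short sketch, but the underlying idea, aggregating uncertain factors into a strategy choice plus a degree of satisfaction, can be illustrated with a naive independent-evidence model. The factor names, likelihood ratios, and the satisfaction measure here are all invented for the example and are not the paper's formalism:

```python
import math

def choose_strategy(factors):
    """factors: name -> (likelihood_ratio, observed), where likelihood_ratio
    is the odds multiplier in favour of a restrictive dialogue style.
    Unobserved factors contribute no evidence (incomplete information)."""
    log_odds, seen = 0.0, 0
    for ratio, observed in factors.values():
        if observed:
            log_odds += math.log(ratio)   # independent-evidence assumption
            seen += 1
    p_restrictive = 1.0 / (1.0 + math.exp(-log_odds))
    satisfaction = seen / len(factors)    # how complete the evidence is
    style = "restrictive" if p_restrictive >= 0.5 else "flexible"
    return style, p_restrictive, satisfaction

style, p, sat = choose_strategy({
    "frequent_misrecognitions": (3.0, True),   # hypothetical factors
    "novice_user": (2.0, True),
    "long_pauses": (1.5, False),               # not observed: ignored
})
```

With two of three factors observed, the probability is well defined but the satisfaction measure (2/3) signals that the evidence is incomplete, echoing the abstract's distinction between the probability and the trust placed in it.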
Abstract:
As one of the key technologies in photovoltaic converter control, Maximum Power Point Tracking (MPPT) methods can keep the power conversion efficiency as high as nearly 99% under uniform solar irradiance. However, these methods may fail when shading occurs, and the power loss can reach as much as 70%, because the power-voltage curve exhibits multiple maxima under shading conditions rather than the single maximum seen under uniform irradiance. In this paper, a Real Maximum Power Point Tracking (RMPPT) strategy for Partially Shaded Conditions (PSCs) is introduced to deal with this kind of problem. An optimization problem, based on a predictive model that adapts to the environment, is developed to track the global maximum, and a corresponding adaptive control strategy is presented. No additional circuits are required to capture the environmental uncertainties. Finally, simulations show the effectiveness of the proposed method.
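The RMPPT strategy itself is not specified in the abstract, but the core failure mode, multiple maxima under partial shading defeating local hill climbing, can be illustrated with a toy power-voltage curve: a coarse global sweep locates the correct peak, and perturb-and-observe then refines it. The curve and all constants are assumptions, not the paper's model:

```python
import math

def pv_power(v):
    """Toy P-V curve with two maxima, mimicking partial shading:
    a local peak near 12 V and the global peak near 30 V."""
    return (40.0 * math.exp(-((v - 12.0) / 5.0) ** 2)
            + 65.0 * math.exp(-((v - 30.0) / 6.0) ** 2))

def global_mppt(vmin=0.0, vmax=40.0, coarse=81, step=0.05, iters=200):
    grid = [vmin + (vmax - vmin) * k / (coarse - 1) for k in range(coarse)]
    v = max(grid, key=pv_power)           # coarse global scan beats local maxima
    p, direction = pv_power(v), 1.0
    for _ in range(iters):                # perturb-and-observe refinement
        v_next = v + direction * step
        p_next = pv_power(v_next)
        if p_next < p:
            direction = -direction        # passed the peak: reverse direction
        v, p = v_next, p_next
    return v, p

v_mpp, p_mpp = global_mppt()
```

A pure perturb-and-observe tracker started near 12 V would lock onto the ~40 W local peak; the global sweep avoids that and the controller settles near the ~65 W global maximum.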
Abstract:
Electricity markets are complex environments involving a large number of different entities, playing in a dynamic scene to obtain the best advantages and profits. MASCEM is a multi-agent electricity market simulator that models market players and simulates their operation in the market. Market players are entities with specific characteristics and objectives, making their own decisions and interacting with other players. MASCEM provides several dynamic strategies for agents' behaviour. This paper presents a method that aims to provide market players with strategic bidding capabilities, allowing them to obtain the highest possible gains from the market. The method uses a reinforcement learning algorithm to learn from experience how to choose the best from a set of possible bids. These bids are defined according to the cost function that each producer presents.
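The abstract does not name the reinforcement learning algorithm; a minimal stand-in for learning the best of a discrete set of bids is an epsilon-greedy bandit with sample-average value estimates. The production cost, bid set, and market-price model below are invented for the sketch and are not MASCEM's:

```python
import random

def learn_bidding(cost=30.0, bids=(32.0, 36.0, 40.0, 44.0, 48.0),
                  episodes=20000, eps=0.1, seed=0):
    """Epsilon-greedy value learning over a discrete set of bid prices.
    Reward is the profit (bid - cost) when the bid clears the noisy
    market price, and zero otherwise."""
    rng = random.Random(seed)
    q = {b: 0.0 for b in bids}       # estimated expected profit per bid
    n = {b: 0 for b in bids}
    for _ in range(episodes):
        explore = rng.random() < eps
        b = rng.choice(bids) if explore else max(q, key=q.get)
        market = rng.gauss(42.0, 3.0)            # uncertain clearing price
        reward = (b - cost) if b <= market else 0.0
        n[b] += 1
        q[b] += (reward - q[b]) / n[b]           # incremental sample mean
    return max(q, key=q.get), q

best_bid, values = learn_bidding()
```

The agent has to trade off margin against acceptance probability: 48 rarely clears, 32 clears almost always but earns little, and the learned values converge to the bid with the best expected profit.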
Abstract:
The development of strategy remains a debate for academics and a concern for practitioners. Published research has focused on producing models for strategy development and on studying how strategy is developed in organisations. The Operational Research literature has highlighted the importance of considering complexity within strategic decision making; but little has been done to link strategy development with complexity theories, despite organisations and organisational environments becoming increasingly more complex. We review the dominant streams of strategy development and complexity theories. Our theoretical investigation results in the first conceptual framework which links an established Strategic Operational Research model, the Strategy Development Process model, with complexity via Complex Adaptive Systems theory. We present preliminary findings from the use of this conceptual framework applied to a longitudinal, in-depth case study, to demonstrate the advantages of using this integrated conceptual model. Our research shows that the conceptual model proposed provides rich data and allows for a more holistic examination of the strategy development process. © 2012 Operational Research Society Ltd. All rights reserved.
Abstract:
This paper addresses the tradeoff between energy consumption and localization performance in a mobile sensor network application. The focus is on augmenting GPS location with more energy-efficient location sensors to bound position estimate uncertainty in order to prolong node lifetime. We use empirical GPS and radio contact data from a large-scale animal tracking deployment to model node mobility and GPS and radio performance. These models are used to explore duty cycling strategies for maintaining position uncertainty within specified bounds. We then explore the benefits of using short-range radio contact logging alongside GPS as an energy-inexpensive means of lowering uncertainty while the GPS is off, and we propose a versatile contact logging strategy that relies on RSSI ranging and GPS lock back-offs to reduce node energy consumption relative to GPS duty cycling. Results show that our strategy can cut node energy consumption in half while meeting application-specific positioning criteria.
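The core duty-cycling idea, let position uncertainty grow while the GPS is off and take a fix just before a bound would be violated, can be sketched as follows; the linear growth model, bound, and energy costs are illustrative assumptions, not values from the deployment:

```python
def gps_duty_cycle(bound=50.0, speed=1.5, fix_sigma=5.0,
                   fix_cost=1.0, sleep_cost=0.01, horizon=3600, dt=1.0):
    """Uncertainty-driven duty cycling: while the GPS is off, position
    uncertainty grows at the node's maximum speed (dead reckoning);
    a fix is taken just before the uncertainty bound would be exceeded."""
    sigma, energy, fixes = fix_sigma, 0.0, 0
    for _ in range(int(horizon / dt)):
        if sigma + speed * dt > bound:
            sigma = fix_sigma            # take a GPS fix: uncertainty resets
            energy += fix_cost
            fixes += 1
        else:
            sigma += speed * dt          # GPS off: uncertainty grows with motion
            energy += sleep_cost
        # a radio contact log entry could cap sigma here at far lower cost,
        # which is the paper's motivation for contact logging alongside GPS
    return fixes, energy

fixes, energy = gps_duty_cycle()
```

With these numbers a fix is needed every 31 seconds; any cheap sensor that occasionally caps sigma (such as a radio contact with a recently fixed neighbour) stretches that interval and saves energy.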
Abstract:
Adaptive wing/aerofoil designs are considered promising techniques in aeronautics/aerospace, since they can reduce aircraft emissions and improve the aerodynamic performance of manned or unmanned aircraft. This paper investigates robust design and optimisation for one such adaptive technique: an Active Flow Control (AFC) bump at transonic flow conditions on a Natural Laminar Flow (NLF) aerofoil designed to increase aerodynamic efficiency (especially a high lift-to-drag ratio). The concept of using a Shock Control Bump (SCB) is to control the supersonic flow on the suction/pressure side of the NLF aerofoil RAE 5243, which delays shock occurrence or weakens its strength. Such an AFC technique reduces total drag at transonic speeds by reducing wave drag. The location of Boundary Layer Transition (BLT) can influence the position of the supersonic shock. The BLT position is an uncertainty in aerodynamic design due to many factors, such as surface contamination or erosion. The paper studies SCB shape design optimisation using robust Evolutionary Algorithms (EAs) with uncertainty in the BLT position. The optimisation method is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. Two test cases are conducted: the first assumes the BLT is at 45% of chord from the leading edge, and the second considers robust design optimisation of the SCB under variability in BLT position and lift coefficient. Numerical results show that the optimisation method, coupled with uncertainty design techniques, produces Pareto-optimal SCB shapes that have low sensitivity and high aerodynamic performance while achieving significant total drag reduction.
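A canonical evolution strategy made robust to an uncertain parameter can be sketched by scoring each candidate on its average fitness over sampled realizations of the uncertainty (here a BLT-like chordwise position). The toy objective and all constants are assumptions; the actual optimiser also uses hierarchical topology and asynchronous parallel evaluation, which this sketch omits:

```python
import random

def robust_es(fitness, dim=2, mu=5, lam=20, sigma0=0.5,
              gens=80, samples=8, seed=1):
    """(mu, lam) evolution strategy on a noisy objective: each candidate is
    scored by averaging fitness over sampled uncertainty, so the search
    favours designs that are insensitive to the uncertain parameter."""
    rng = random.Random(seed)
    parent, sigma = [0.0] * dim, sigma0
    for _ in range(gens):
        scored = []
        for _ in range(lam):
            child = [x + rng.gauss(0.0, sigma) for x in parent]
            # BLT-like uncertain parameter, nominally 0.45 chord
            draws = [rng.gauss(0.45, 0.05) for _ in range(samples)]
            score = sum(fitness(child, u) for u in draws) / samples
            scored.append((score, child))
        scored.sort(key=lambda t: t[0])              # minimise score
        elite = [c for _, c in scored[:mu]]
        parent = [sum(xs) / mu for xs in zip(*elite)]  # elite recombination
        sigma *= 0.95                                  # simple step-size decay
    return parent

# hypothetical drag surrogate: penalises designs mismatched to the uncertain u
def toy_drag(x, u):
    return (x[0] - u) ** 2 + (x[1] - 2.0 * u) ** 2

best = robust_es(toy_drag)
```

Because each score averages over sampled uncertainty, the search converges to the minimiser of the expected objective rather than to a design tuned to a single nominal BLT position.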
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation yields a highly efficient and stable overall multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even on systems with abundant computing resources. This thesis proposes a cache-aware adaptive closed-loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many different approaches to this problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the resource allocation problem are well considered. These approaches employ either static and dynamic optimization methods or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of the approaches that do consider the dynamic nature of multiprocessor systems apply only a basic closed-loop system and hence fail to take the time-varying and uncertain character of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed-loop cache-aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time-series statistics. For the identified cache resource dynamics, our closed-loop cache-aware adaptive scheduling framework enforces instruction fairness for the threads.
Fairness, in the context of our research project, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared-resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed-loop cache-aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache-aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which mitigates the co-runner cache impact on thread performance. The second is the development of the relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache-aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed-loop aspect to it. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR recursive least squares (RLS) algorithm is applied within our framework to estimate the highly uncertain and time-varying cache resource patterns of threads. The second minor contribution is the design of the controller design module; the algebraic controller design algorithm, pole placement, is used to design the relevant controller, which is able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache-aware scheduling system together constitute our final framework, the closed-loop cache-aware adaptive scheduling framework.
The third minor contribution is the validation of the efficiency of this cache-aware adaptive closed-loop scheduling framework in overcoming co-runner cache dependency. Time-series statistical counters were developed for the M-Sim multi-core simulator, and the theoretical findings and mathematical formulations were implemented as MATLAB m-file code. In this way, the overall framework was tested and the experimental outcomes analysed. According to our experimental outcomes, we conclude that our closed-loop cache-aware adaptive scheduling framework successfully drives the co-runner-cache-dependent thread instruction count to the co-runner-independent instruction count, with an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.
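The thesis applies a QR-factorized RLS estimator; as a simpler illustration of the same idea, here is a plain recursive least squares update with a forgetting factor, tracking a linear model from streaming samples. The model and signals are invented for the example and stand in for the cache miss count time series:

```python
import math

def rls_update(theta, P, x, y, lam=0.98):
    """One recursive least squares (RLS) step with forgetting factor lam,
    tracking a possibly time-varying linear model y ~ theta . x."""
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(xi * pi for xi, pi in zip(x, Px))
    k = [pi / denom for pi in Px]                      # gain vector
    err = y - sum(t * xi for t, xi in zip(theta, x))   # a priori error
    theta = [t + ki * err for t, ki in zip(theta, k)]
    # covariance update: P <- (P - k x' P) / lam
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# track a linear model y = 2*x0 - 3*x1 from streaming samples
theta, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
for t in range(300):
    x = [math.sin(0.1 * t), math.cos(0.3 * t)]
    theta, P = rls_update(theta, P, x, 2.0 * x[0] - 3.0 * x[1])
```

The forgetting factor discounts old samples, which is what lets the estimator follow time-varying cache resource patterns rather than averaging over the whole history; the QR-factorized variant used in the thesis computes the same estimate with better numerical conditioning.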
Abstract:
Background: Recent clinical studies have demonstrated an emerging subgroup of head and neck cancers that are virally mediated. This disease appears to be a distinct clinical entity, with patients presenting younger and with more advanced nodal disease, having lower tobacco and alcohol exposure, and having highly radiosensitive tumours. This means they are living longer, often with the debilitating functional side effects of treatment. The primary objective of this study was to determine how virally mediated nasopharyngeal and oropharyngeal cancers respond to radiation therapy. The aim was to determine risk categories and corresponding adaptive treatment management strategies to proactively manage these patients. Method/Results: 121 patients with virally mediated, node-positive nasopharyngeal or oropharyngeal cancer who received radiotherapy with curative intent between 2005 and 2010 were studied. Relevant patient demographics, including age, gender, diagnosis, TNM stage, pre-treatment nodal size and dose delivered, were recorded. Each patient's treatment plan was reviewed to determine whether another computed tomography (re-CT) scan was performed and at what time point (dose/fraction) this occurred. The justification for the re-CT was classified into four categories: tumour and/or nodal regression, weight loss, both, or other. Patients who underwent a re-CT were further investigated to determine whether a new plan was calculated. If a re-plan was performed, the dosimetric effect was quantified by comparing dose-volume histograms of planning target volumes and critical structures from the actual treatment delivered and the original treatment plan. Preliminary results demonstrated that 25/121 (20.7%) patients required a re-CT and that these re-CTs were performed between fractions 20 and 25 of treatment. The justification for these re-CTs consisted of a combination of tumour and/or nodal regression and weight loss. 16/25 (13.2% of the cohort) patients had a re-plan calculated.
9 (7.4%) of these re-plans were implemented clinically because of the resultant dosimetric effect. The data collected from this assessment were statistically analysed to identify the major factors determining whether patients undergo a re-CT and/or re-plan. Specific factors identified included nodal size and the timing of the required intervention (i.e. how and when a plan is to be adapted). These data were used to generate specific risk profiles that will form the basis of a biologically guided adaptive treatment management strategy for virally mediated head and neck cancer. Conclusion: Preliminary data indicate that virally mediated head and neck cancers respond significantly during radiation treatment (tumour and/or nodal regression and weight loss). Implications of this response are the potential underdosing or overdosing of the tumour and/or surrounding critical structures. This could lead to sub-optimal patient outcomes and compromised quality of life. Consequently, the development of adaptive treatment strategies that improve organ sparing for this patient group is important to ensure delivery of the prescribed dose to the tumour volume whilst minimizing the dose received by surrounding critical structures. This could reduce side effects and improve overall patient quality of life. The risk profiles and associated adaptive treatment approaches developed in this study will be tested prospectively in the clinical setting in Phase 2 of this investigation.