955 results for Dynamic Marginal Cost


Relevance: 30.00%

Abstract:

Even though computational power used for structural analysis is ever increasing, there is still a fundamental need for testing in structural engineering, either for validation of complex numerical models or to assess material behaviour. In addition to analysis of structures using scale models, many structural engineers are aware to some extent of cyclic and shake-table test methods, but less so of ‘hybrid testing’. The latter is a combination of physical testing (e.g. hydraulic actuators) and computational modelling (e.g. finite element modelling). Over the past 40 years, hybrid testing of engineering structures has developed from concept through to maturity to become a reliable and accurate dynamic testing technique. The hybrid test method provides users with some additional benefits that standard dynamic testing methods do not, and the method is more cost-effective in comparison to shake-table testing. This article aims to provide the reader with a basic understanding of the hybrid test method, including its contextual development and potential as a dynamic testing technique.
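
Conceptually, a hybrid test couples a time-stepping numerical integrator to a physical substructure: at every step the integrator sends a displacement command to the actuator and reads the measured restoring force back. Below is a minimal sketch of that loop for a single-degree-of-freedom system; the physical substructure is replaced by a placeholder function (`measured_restoring_force`, an assumed linear spring standing in for the actuator/specimen interface), so this is an illustration of the idea rather than a production test controller.

```python
import numpy as np

def measured_restoring_force(u):
    # Placeholder for the physical substructure: in a real hybrid test
    # this value comes from load cells after the actuator imposes the
    # commanded displacement u. An assumed linear specimen is used here.
    k_specimen = 2.0e6  # N/m (assumed stiffness)
    return k_specimen * u

def hybrid_test(mass, damping, ground_accel, dt):
    """Explicit central-difference hybrid loop for a SDOF structure."""
    n = len(ground_accel)
    u = np.zeros(n)  # displacement history (m)
    for i in range(1, n - 1):
        r = measured_restoring_force(u[i])            # physical feedback
        p = -mass * ground_accel[i]                   # numerical excitation
        v = (u[i] - u[i - 1]) / dt                    # backward-difference velocity
        a = (p - damping * v - r) / mass              # equation of motion
        u[i + 1] = 2.0 * u[i] - u[i - 1] + a * dt**2  # central-difference step
    return u

dt = 0.001
t = np.arange(0.0, 5.0, dt)
accel = 0.3 * 9.81 * np.sin(2.0 * np.pi * 2.0 * t)  # synthetic ground motion
u = hybrid_test(mass=1.0e4, damping=8.0e3, ground_accel=accel, dt=dt)
print(f"peak displacement: {np.abs(u).max():.4f} m")
```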

Relevance: 30.00%

Abstract:

Economic and environmental load dispatch aims to determine the amount of electricity generated by power plants to meet load demand while minimizing fossil fuel costs and air pollution emissions, subject to operational and licensing requirements. These two scheduling problems are commonly formulated with non-smooth cost functions that account for various effects and constraints, such as the valve-point effect, power balance and ramp-rate limits. The expected increase in plug-in electric vehicles is likely to have a significant impact on the power system due to high charging power consumption and significant uncertainty in charging times. In this paper, multiple electric vehicle charging profiles are comparatively integrated into a 24-hour load demand in an economic and environmental dispatch model. Self-learning teaching-learning based optimization (TLBO) is employed to solve the non-convex, non-linear dispatch problems. Numerical results on well-known benchmark functions, as well as on test systems with different scales of generation units, show the significance of the new scheduling method.
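
For context, the valve-point effect is usually modelled by adding a rectified sinusoid to the quadratic fuel cost, $F_i(P_i) = a_i + b_iP_i + c_iP_i^2 + |e_i \sin(f_i(P_i^{\min} - P_i))|$, which makes the objective non-convex and non-differentiable. A minimal sketch, with coefficients borrowed from a standard small test system for illustration only (not necessarily the units used in this paper):

```python
import math

def fuel_cost(p, a, b, c, e, f, p_min):
    """Generator fuel cost ($/h) with the valve-point effect.

    The rectified-sine term introduces ripples into the otherwise
    quadratic cost curve; this non-smoothness is the main reason
    population-based methods such as TLBO are preferred over
    gradient-based dispatch solvers.
    """
    return a + b * p + c * p**2 + abs(e * math.sin(f * (p_min - p)))

# Illustrative coefficients for one unit (p in MW):
for p in (150.0, 300.0, 450.0):
    cost = fuel_cost(p, a=561.0, b=7.92, c=0.001562,
                     e=300.0, f=0.0315, p_min=100.0)
    print(f"P = {p:5.1f} MW  ->  cost = {cost:8.1f} $/h")
```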

Relevance: 30.00%

Abstract:

Motivated by the need for designing efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded-degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of what nodes join and leave and at what time, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees with high probability the maintenance of a constant-degree graph with high expansion even under continuous high adversarial churn. Our protocol can tolerate a churn rate of up to $O(n/\mathrm{polylog}(n))$ per round (where $n$ is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only $O(\mathrm{polylog}(n))$ overhead for topology maintenance: only polylogarithmic (in $n$) bits need to be processed and sent by each node per round, and any node's computation cost per round is also polylogarithmic. The given protocol is a fundamental ingredient needed for the design of efficient fully-distributed algorithms for solving fundamental distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and it enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
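
As a toy illustration of the setting only (not the paper's protocol, which relies on random walks and careful local rules to provably preserve constant degree and expansion), the following sketch runs an overlay through round-by-round churn, with joining nodes attaching to a few uniformly random live nodes:

```python
import random

def toy_churn_simulation(n=200, half_degree=4, churn=10, rounds=50):
    """Round-based churn on a sparse overlay: each round the adversary
    removes `churn` random nodes and the same number of new nodes join,
    each attaching to `half_degree` random live nodes. This conveys the
    problem setting only; it carries none of the paper's guarantees."""
    adj = {v: set() for v in range(n)}
    next_id = n

    def attach(v):
        for u in random.sample([w for w in adj if w != v], half_degree):
            adj[v].add(u)
            adj[u].add(v)

    for v in range(n):
        attach(v)
    for _ in range(rounds):
        for v in random.sample(list(adj), churn):  # adversarial departures
            for u in adj.pop(v):
                adj[u].discard(v)
        for _ in range(churn):                     # matching arrivals
            adj[next_id] = set()
            attach(next_id)
            next_id += 1
    degrees = sorted(len(nbrs) for nbrs in adj.values())
    print(f"nodes: {len(adj)}, degree range: {degrees[0]}..{degrees[-1]}")

toy_churn_simulation()
```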

Relevance: 30.00%

Abstract:

Highway structures such as bridges are subject to continuous degradation, primarily due to ageing, loading and environmental factors. A rational transport policy must monitor and provide adequate maintenance to this infrastructure to guarantee the required levels of transport service and safety. Increasingly in recent years, bridges are being instrumented and monitored on an ongoing basis through the implementation of Bridge Management Systems. This is very effective, providing a high level of protection to the public and early warning if the bridge becomes unsafe, but the process can be expensive and time-consuming, requiring the installation of sensors and data acquisition electronics on the bridge. This paper investigates the use of an instrumented 2-axle vehicle fitted with accelerometers to monitor the dynamic behaviour of a bridge network in a simple and cost-effective manner. A simplified half-car beam interaction model is used to simulate the passage of a vehicle over a bridge. The investigation involves the frequency-domain analysis of the axle accelerations as the vehicle crosses the bridge. The spectrum of the acceleration record contains noise, vehicle, bridge and road frequency components; therefore, the bridge dynamic behaviour is monitored in simulations for both smooth and rough road surfaces. The vehicle mass and axle spacing are varied in the simulations, along with the bridge structural damping, in order to analyse the sensitivity of the vehicle accelerations to a change in bridge properties. These vehicle accelerations can be obtained for different periods of time and serve as a useful tool to monitor the variation of bridge frequency and damping with time.
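
To make the frequency-domain step concrete, here is a minimal sketch that picks the strongest spectral peaks from an axle acceleration record. The signal below is a synthetic stand-in (assumed 4 Hz bridge component, 1.5 Hz vehicle bounce, additive noise), not data from the paper's half-car simulations.

```python
import numpy as np
from scipy.signal import find_peaks

def dominant_frequencies(accel, fs, n_peaks=3):
    """Return the frequencies of the strongest peaks in the spectrum.

    In a drive-by record the spectrum mixes noise, vehicle, bridge and
    road-profile components, so attributing each peak is left to the
    analyst (e.g. by repeating runs with varied vehicle mass or axle
    spacing)."""
    windowed = accel * np.hanning(len(accel))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    peaks, _ = find_peaks(spectrum)
    strongest = peaks[np.argsort(spectrum[peaks])[::-1][:n_peaks]]
    return np.sort(freqs[strongest])

fs = 200.0                                      # sampling rate (Hz), assumed
t = np.arange(0.0, 10.0, 1.0 / fs)
accel = (0.08 * np.sin(2 * np.pi * 1.5 * t)     # vehicle bounce (assumed)
         + 0.05 * np.sin(2 * np.pi * 4.0 * t)   # bridge frequency (assumed)
         + 0.01 * np.random.randn(len(t)))      # measurement noise
print(dominant_frequencies(accel, fs))          # ~1.5 and 4.0 Hz, plus a noise peak
```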

Relevance: 30.00%

Abstract:

An efficient and robust case sorting algorithm based on the Extended Equal Area Criterion (EEAC) is proposed in this paper for power system transient stability assessment (TSA). The time-varying degree of an equivalent image system can be deduced by comparing the analysis results of Static EEAC (SEEAC), which neglects all time-varying factors, with those of Dynamic EEAC (DEEAC), which partially considers them. Case sorting rules reflecting transient stability severity are defined by combining the time-varying degree with fault information. A case sorting algorithm is then designed with “OR” logic among the multiple rules, by which each case is classified into one of five categories: stable, suspected stable, marginal, suspected unstable and unstable. The performance of this algorithm is verified on 1652 contingency cases from 9 real Chinese provincial power systems under various operating conditions. It is shown that desirable classification accuracy can be achieved for all the contingency cases at very little extra computational cost, and that only 9.81% of the cases require further detailed calculation under rigorous on-line TSA conditions.
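
The flavour of the “OR” logic can be sketched as a first-match rule cascade over per-case features. The predicates and thresholds below are hypothetical stand-ins, since the paper's actual rules combine the SEEAC/DEEAC time-varying degree with fault information in ways the abstract does not detail:

```python
def classify_case(rules, case, default="marginal"):
    """First rule that fires decides the category ('OR' logic among
    rules); cases matching no rule fall back to the marginal class,
    which would be sent on for detailed time-domain simulation."""
    for predicate, label in rules:
        if predicate(case):
            return label
    return default

# Hypothetical thresholds on a normalised stability margin and on the
# SEEAC-vs-DEEAC time-varying degree:
rules = [
    (lambda c: c["margin"] > 50 and c["tv_degree"] < 0.1, "stable"),
    (lambda c: c["margin"] > 50, "suspected stable"),
    (lambda c: c["margin"] < -50 and c["tv_degree"] < 0.1, "unstable"),
    (lambda c: c["margin"] < -50, "suspected unstable"),
]

print(classify_case(rules, {"margin": 72.0, "tv_degree": 0.04}))  # stable
print(classify_case(rules, {"margin": 12.0, "tv_degree": 0.30}))  # marginal
```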

Relevance: 30.00%

Abstract:

Current variation-aware design methodologies, tuned for worst-case scenarios, are becoming increasingly pessimistic from the perspective of power and performance. A good example of such pessimism is setting the refresh rate of DRAMs according to the worst-case access statistics, resulting in very frequent refresh cycles, which are responsible for the majority of the standby power consumption of these memories. However, such a high refresh rate may not be required, either due to the extremely low probability that such a worst case actually occurs, or due to the inherent error resilience of many applications that can tolerate a certain number of potential failures. In this paper, we exploit and quantify the possibilities that exist in dynamic memory design by shifting to the so-called approximate computing paradigm in order to save power and enhance yield at no cost. The statistical characteristics of the retention time in dynamic memories were revealed by studying a fabricated 2kb CMOS-compatible embedded DRAM (eDRAM) memory array based on gain-cells. Measurements show that up to 73% of the retention power can be saved by relaxing the refresh time and setting it such that a small number of failures is allowed. We show that these savings can be further increased by utilizing known circuit techniques, such as body biasing, which can help not only extend but also favourably shape the retention time distribution. Our approach is one of the first attempts to assess the data-integrity and energy tradeoffs achievable in eDRAMs for use in error-resilient applications, and it can prove helpful in the anticipated shift to approximate computing.
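
The power arithmetic behind such savings can be sketched directly: refresh power scales roughly with refresh frequency, and the tolerable refresh period is set by how far into the retention-time distribution one is willing to let cells fail. The numbers and the lognormal retention model below are assumptions for illustration, not the paper's measured silicon data:

```python
import math

def retention_power_saving(t_nominal, t_relaxed):
    """Refresh power scales roughly with refresh frequency (1/t_refresh),
    so the fractional saving from relaxing the refresh period is
    1 - t_nominal / t_relaxed. Assumed first-order model."""
    return 1.0 - t_nominal / t_relaxed

def failure_fraction(t_refresh, mu, sigma):
    """Fraction of cells whose retention time falls below the refresh
    period, for a lognormal retention-time distribution (a common
    modelling assumption for gain-cell eDRAM): Phi((ln t - mu) / sigma)."""
    z = (math.log(t_refresh) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical numbers: worst-case refresh every 40 us, relaxed to 150 us
# against a distribution with 1 ms median retention.
print(f"saving: {retention_power_saving(40e-6, 150e-6):.0%}")
print(f"failing cells: {failure_fraction(150e-6, mu=math.log(1e-3), sigma=0.8):.2e}")
```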

Relevance: 30.00%

Abstract:

At a time when the traditional major airlines have struggled to remain viable, the low-cost carriers have become the major success story of the European airline industry. This paper looks behind the headlines to show that although low-cost airlines have achieved much, they too have potential weaknesses and face a number of challenges in the years ahead. The secondary and regional airports that have benefited from low-cost carrier expansion are shown to be vulnerable to future changes in airline economics, government policy and patterns of air service. An analysis of routes from London demonstrates that the low-cost airlines have been more successful in some markets than others. On routes to attractive and historically under-served leisure destinations in Southern Europe they have stimulated dramatic growth and achieved a dominant position. To major hub cities, however, they typically remain marginal players, and to secondary points in Northern Europe their traffic has largely been diverted from existing operators. There is also evidence that the UK market is becoming saturated and that new low-cost services are poaching traffic from other low-cost routes. Passenger compensation legislation and possible environmental taxes will hit the low-cost airline industry disproportionately hard. The high price elasticities of demand in certain markets that these airlines have exploited will operate in reverse. One of the major elements of the low-cost business model involves the use of smaller uncongested airports, which offer faster turn-arounds and lower airport charges. In many cases, local and regional government has been willing to subsidise the expansion of air services to assist with economic development or tourism objectives. However, recent court cases against Ryanair now threaten these financial arrangements. The paper also examines the catchment areas of airports with low-cost service. It is shown that as well as stimulating local demand, much traffic is captured from larger markets nearby through the differential in fare levels. This has implications for surface transport, as access to these regional airports often involves long journeys by private car. Consideration is then given to the feasibility of low-cost airlines expanding into the long-haul market or into regional operations with small aircraft. Many of the cost advantages are more muted on intercontinental services.

Relevance: 30.00%

Abstract:

Coping with an ageing population is a major concern for healthcare organisations around the world. The average cost of hospital care is higher than that of social care for older and terminally ill patients, and the average cost of social care increases with the age of the patient. It is therefore important to carry out efficient and fair capacity planning that also incorporates patient-centred outcomes. Predictive models can provide predictions whose accuracy can be understood and quantified. Predictive modelling can help patients and carers obtain the appropriate support services, and allow clinical decision-makers to improve care quality and reduce the cost of inappropriate hospital and Accident and Emergency admissions. The aim of this study is to provide a review of modelling techniques and frameworks for predictive risk modelling of patients in hospital, based on routinely collected data such as the Hospital Episode Statistics database. A number of sub-problems can be considered, such as Length-of-Stay and End-of-Life predictive modelling. The methodologies in the literature are mainly focused on addressing these problems using regression methods and Markov models, and the majority lack generalisability. In some cases, the robustness, accuracy and re-usability of predictive risk models have been shown to improve when Machine Learning methods are used. Dynamic Bayesian Network techniques can represent complex correlation structures and incorporate small probabilities into the solution. The main focus of this study is to provide a review of major time-varying Dynamic Bayesian Network techniques with applications in healthcare predictive risk modelling.
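
As a concrete minimal example, the simplest Dynamic Bayesian Network is a two-slice model with one discrete hidden state (i.e. an HMM), for which exact forward filtering takes a few lines. All matrices below are hypothetical illustrations (e.g. a latent low/high “risk” state observed through admission events), not parameters from any of the reviewed studies:

```python
import numpy as np

def forward_filter(transition, emission, prior, observations):
    """Exact forward filtering in a two-slice DBN with one discrete
    hidden variable: alternate a predict step (one time slice forward)
    with a Bayesian update on each observation."""
    belief = prior.copy()
    for obs in observations:
        belief = transition.T @ belief   # predict: propagate one slice
        belief *= emission[:, obs]       # update: weight by likelihood
        belief /= belief.sum()           # normalise to a distribution
    return belief

transition = np.array([[0.9, 0.1],   # P(next risk | current risk), rows = current
                       [0.3, 0.7]])
emission = np.array([[0.8, 0.2],     # P(admission event | risk), rows = state
                     [0.4, 0.6]])
prior = np.array([0.5, 0.5])         # initial belief over (low, high) risk

print(forward_filter(transition, emission, prior, observations=[1, 1, 0, 1]))
```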

Relevance: 30.00%

Abstract:

One big challenge in deploying games-based learning is the high cost and specialised skills associated with customised development. In this paper we present a serious games platform that offers tools allowing educators without special programming or artistic skills to dynamically create three-dimensional (3D) scenes and verbal and non-verbal interaction with fully embodied conversational agents (ECAs), which can be used to simulate numerous educational scenarios. We present evaluation results based on the use of the platform to create two educational scenarios for politics and law in higher education. We conclude with a discussion of directions for further work.

Relevance: 30.00%

Abstract:

Electricity markets are complex environments, involving a large number of different entities playing in a dynamic scene to obtain the best advantages and profits. MASCEM is a multi-agent electricity market simulator that models market players and simulates their operation in the market. Market players are entities with specific characteristics and objectives, making their own decisions and interacting with other players. MASCEM provides several dynamic strategies for agents' behaviour. This paper presents a method that aims to provide market players with strategic bidding capabilities, allowing them to obtain the highest possible gains from the market. The method uses a reinforcement learning algorithm to learn from experience how to choose the best from a set of possible bids. These bids are defined according to the cost function that each producer presents.
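
A minimal sketch of the idea, using a generic bandit-style value learner over a discrete bid set rather than MASCEM's exact algorithm; the market stand-in, prices and cost figures are all hypothetical:

```python
import random

def simulate_market(bid, cost=28.0, volume=50.0):
    """Toy pay-as-bid market stand-in: the offer clears only if it is at
    or below a noisy clearing price; profit is (bid - cost) * volume."""
    clearing_price = 42.0 + random.gauss(0.0, 2.0)
    return (bid - cost) * volume if bid <= clearing_price else 0.0

def choose_bid(q_values, epsilon=0.1):
    """Epsilon-greedy selection over the candidate bids."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

# Candidate bids (EUR/MWh) derived from a hypothetical producer cost curve:
bids = [30.0, 35.0, 40.0, 45.0]
q_values, counts = [0.0] * len(bids), [0] * len(bids)

for session in range(500):              # repeated market sessions
    a = choose_bid(q_values)
    profit = simulate_market(bids[a])
    counts[a] += 1                      # incremental mean value update
    q_values[a] += (profit - q_values[a]) / counts[a]

best = max(range(len(bids)), key=q_values.__getitem__)
print(f"learned bid: {bids[best]} EUR/MWh")
```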

Relevance: 30.00%

Abstract:

This paper addresses the impact of the CO2 opportunity cost on the wholesale electricity price in the context of the Iberian electricity market (MIBEL), namely on the Portuguese system, for the period corresponding to Phase II of the European Union Emission Trading Scheme (EU ETS). In the econometric analysis, a vector error correction model (VECM) is specified to estimate both the long-run equilibrium relations and the short-run interactions between the electricity price and the fuel (natural gas and coal) and carbon prices. The model is estimated using daily spot market prices, and the four commodity prices are jointly modelled as endogenous variables. Moreover, a set of exogenous variables is incorporated to account for electricity demand conditions (temperature) and the electricity generation mix (quantity of electricity traded according to the technology used). The outcomes for the Portuguese electricity system suggest that the dynamic pass-through of carbon prices into electricity prices is strongly significant, and the estimated long-run elasticity (equilibrium relation) is aligned with studies conducted for other markets.
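
A hedged sketch of the estimation setup using the VECM implementation in `statsmodels`. The series below are synthetic placeholders (the actual study uses daily MIBEL electricity, natural gas, coal and carbon prices, plus temperature and generation-mix exogenous variables), and the lag order and cointegration rank would in practice be chosen with information criteria and rank tests:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
n = 500  # trading days; synthetic random walks, not MIBEL data
data = pd.DataFrame(
    rng.normal(size=(n, 4)).cumsum(axis=0),
    columns=["electricity", "natural_gas", "coal", "carbon"],
)
temperature = rng.normal(size=(n, 1))  # demand-side exogenous proxy

# One cointegrating relation with a constant inside it; the number of
# short-run lags (k_ar_diff) would normally come from select_order.
model = VECM(data, exog=temperature, k_ar_diff=2,
             coint_rank=1, deterministic="ci")
res = model.fit()
print(res.beta)   # long-run (equilibrium) coefficients
print(res.alpha)  # short-run adjustment speeds
```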

Relevance: 30.00%

Abstract:

Partial dynamic reconfiguration of FPGAs can be used to implement complex applications using the concept of virtual hardware. In this work we have used partial dynamic reconfiguration to implement a JPEG decoder with reduced area. The image decoding process was adapted to this technique so it could be implemented on the FPGA fabric. The architecture was tested on a low-cost ZYNQ-7020 FPGA that supports dynamic reconfiguration. The results show that the proposed solution needs only 40% of the resources utilized by a static implementation. The dynamic solution is about 9X slower than the static one, trading performance for internal FPGA resources. A throughput of 7 images per second is achievable with the proposed partial dynamic reconfiguration solution.
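
The throughput penalty has a simple structure: if the decoder's stages share one reconfigurable region, every image pays the compute time of each stage plus a partial-bitstream load between stages. The stage and reconfiguration times below are hypothetical, chosen only so the arithmetic lands near the reported 7 images/s:

```python
def effective_throughput(stage_times, reconfig_time):
    """Images/s when decoder stages are time-multiplexed on a single
    reconfigurable region: each image pays every stage plus one partial
    reconfiguration per stage swap."""
    time_per_image = sum(stage_times) + reconfig_time * len(stage_times)
    return 1.0 / time_per_image

# Hypothetical per-image stage times (s) for entropy decoding, IDCT and
# colour conversion, plus an assumed partial-bitstream load time:
print(f"{effective_throughput([0.020, 0.015, 0.010], 0.033):.1f} images/s")
```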

Relevance: 30.00%

Abstract:

We consider a dynamic price-setting duopoly model in which a dominant (leader) firm moves first and a subordinate (follower) firm moves second. We suppose that each firm has two different technologies and uses one of them according to a certain probability distribution; the use of one technology or the other affects the unit production cost. We analyse the effect of production cost uncertainty on the profits of the firms for different values of the demand intercept parameters.
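
A minimal numerical sketch of this setting, assuming linear demands $q_i = a_i - bp_i + dp_j$ and a two-point cost distribution for each firm; all parameter values are illustrative, and the leader's price is found by grid search over its expected profit given the follower's best response:

```python
import numpy as np

# Linear demands q_i = a_i - b*p_i + d*p_j; each firm's unit cost is drawn
# from {c_lo, c_hi} with the given probabilities. All values are assumed.
a1, a2, b, d = 100.0, 90.0, 2.0, 1.0
costs, probs = np.array([10.0, 20.0]), np.array([0.6, 0.4])

def follower_price(p1, c2):
    """Follower's best response: maximise (p2 - c2)*(a2 - b*p2 + d*p1),
    giving p2* = (a2 + d*p1 + b*c2) / (2b)."""
    return (a2 + d * p1 + b * c2) / (2 * b)

def leader_expected_profit(p1):
    """Expectation over both firms' technology (cost) realisations."""
    profit = 0.0
    for c1, w1 in zip(costs, probs):
        for c2, w2 in zip(costs, probs):
            p2 = follower_price(p1, c2)
            profit += w1 * w2 * (p1 - c1) * (a1 - b * p1 + d * p2)
    return profit

grid = np.linspace(0.0, 80.0, 1601)
p1_star = grid[np.argmax([leader_expected_profit(p) for p in grid])]
print(f"leader price: {p1_star:.2f}, "
      f"expected profit: {leader_expected_profit(p1_star):.1f}")
```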

Relevance: 30.00%

Abstract:

Location decisions are often subject to dynamic aspects such as changes in customer demand. The answer is to allow greater flexibility in the location and capacity of facilities. Even when demand is predictable, finding the optimal schedule for deploying and dynamically adjusting capacities remains a challenge. In this thesis, we focus on multi-period facility location problems that allow dynamic capacity adjustment, in particular those with complex cost structures. We study these problems from different operations research perspectives, presenting and comparing several mixed-integer linear programming (MILP) models, evaluating their use in practice, and developing efficient solution algorithms. The thesis is divided into four parts. First, we present the industrial context that motivated our work: a forestry company that needs to locate camps to house forest workers. We present a MILP model that allows the construction of new camps and the expansion, relocation and temporary partial closure of existing camps. The model uses particular capacity constraints together with a multi-level economy-of-scale cost structure. The usefulness of the model is assessed through two case studies. The second part introduces the dynamic facility location problem with generalized modular capacities. The model generalizes several dynamic facility location problems and provides stronger linear relaxation bounds than their specialized formulations. It can handle location problems in which the costs of capacity changes are defined for every pair of capacity levels, as is the case in the industrial problem mentioned above. It is applied to three special cases: capacity expansion and reduction, temporary facility closure, and the combination of both. We prove dominance relationships between our formulation and the existing models for these special cases. Computational experiments on a large set of randomly generated instances with up to 100 facilities and 1000 customers show that our model can obtain optimal solutions faster than the existing specialized formulations. Given the complexity of the previous models for large instances, the third part of the thesis proposes Lagrangian heuristics. Based on subgradient and bundle methods, they find good-quality solutions even for large instances with up to 250 facilities and 1000 customers. We then improve the quality of the obtained solution by solving a restricted MILP model that exploits the information collected while solving the Lagrangian dual. Computational results show that the heuristics quickly produce good-quality solutions, even for instances on which generic solvers fail to find feasible solutions. Finally, we adapt the heuristics to solve the industrial problem. Two different relaxations are proposed and compared. Extensions of the previous concepts are presented to ensure a reliable solution within reasonable time.
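
To make the modelling idea concrete, here is a hedged toy sketch (in PuLP) of a multi-period location model with discrete capacity levels and change costs defined for every pair of levels, the structural feature highlighted above. The instance data are invented and the formulation is a simplified stand-in for the thesis's MILP models:

```python
from itertools import product

from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, value

# Toy instance (all data hypothetical): 2 facilities, 3 customers,
# 3 periods, capacity levels 0 (closed), 1 and 2.
J, I, T, L = range(2), range(3), range(3), range(3)
cap = {0: 0, 1: 10, 2: 20}
demand = {(i, t): 5 for i, t in product(I, T)}
assign_cost = {(i, j): 1.0 + i + 2 * j for i, j in product(I, J)}
oper_cost = {l: 3.0 * l for l in L}
change_cost = {(l, m): 4.0 * abs(l - m) for l, m in product(L, L)}

prob = LpProblem("dynamic_modular_capacity_location", LpMinimize)
y = LpVariable.dicts("y", (J, L, T), cat=LpBinary)           # level choice
x = LpVariable.dicts("x", (I, J, T), lowBound=0, upBound=1)  # assignment
z = LpVariable.dicts("z", (J, L, L, T), lowBound=0)          # level change

prob += (
    lpSum(oper_cost[l] * y[j][l][t] for j, l, t in product(J, L, T))
    + lpSum(demand[i, t] * assign_cost[i, j] * x[i][j][t]
            for i, j, t in product(I, J, T))
    + lpSum(change_cost[l, m] * z[j][l][m][t]
            for j, l, m, t in product(J, L, L, T) if t > 0)
)

for j, t in product(J, T):               # exactly one level per period
    prob += lpSum(y[j][l][t] for l in L) == 1
for i, t in product(I, T):               # all demand must be served
    prob += lpSum(x[i][j][t] for j in J) == 1
for j, t in product(J, T):               # capacity limits assignments
    prob += (lpSum(demand[i, t] * x[i][j][t] for i in I)
             <= lpSum(cap[l] * y[j][l][t] for l in L))
for j, l, m, t in product(J, L, L, T):   # linearised change indicator:
    if t > 0:                            # z >= 1 iff level l then level m
        prob += z[j][l][m][t] >= y[j][l][t - 1] + y[j][m][t] - 1

prob.solve()
print("total cost:", value(prob.objective))
```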