955 results for Dynamic Marginal Cost
Abstract:
Background: Mitral regurgitation (MR) is a valvular disease requiring intervention in the most severe cases. Percutaneous mitral valve repair with the MitraClip device is a safe and effective treatment for patients at high surgical risk. We aimed to evaluate the clinical outcomes and economic impact of this therapy compared with medical management of heart failure patients with symptomatic mitral regurgitation. Methods: The study comprised two phases: an observational study of patients with heart failure and mitral regurgitation treated with medical therapy or the MitraClip, and an economic model. The results of the observational study were used to estimate the parameters of a decision model, which estimated the costs and benefits of a hypothetical cohort of patients with heart failure and severe mitral regurgitation treated with either standard medical therapy or the MitraClip. Results: The cohort of patients treated with the MitraClip system was propensity-score matched to a population of heart failure patients, and their outcomes were compared. At a mean follow-up of 22 months, mortality was 21% in the MitraClip cohort and 42% in the medical management cohort (p = 0.007). The decision model showed that the MitraClip increased life expectancy from 1.87 to 3.60 years and quality-adjusted life years (QALYs) from 1.13 to 2.76. The incremental cost was CAD $52,500, corresponding to an incremental cost-effectiveness ratio (ICER) of $32,300 per QALY gained. The results were sensitive to the survival benefit. Conclusion: In this cohort of patients with symptomatic heart failure and significant mitral regurgitation, MitraClip therapy was associated with superior survival and is cost-effective compared with medical treatment.
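The cost-effectiveness result above follows from a simple ratio; a minimal sketch, using only the figures reported in the abstract, of how the incremental cost-effectiveness ratio is derived:

```python
# Incremental cost-effectiveness ratio (ICER) from the abstract's figures.
# ICER = incremental cost / incremental QALYs gained.

incremental_cost_cad = 52_500   # CAD, MitraClip vs. medical therapy
qaly_medical = 1.13             # QALYs under medical management
qaly_mitraclip = 2.76           # QALYs with MitraClip

icer = incremental_cost_cad / (qaly_mitraclip - qaly_medical)
print(f"ICER = {icer:,.0f} CAD per QALY gained")  # in the region of the reported ~$32,300/QALY
```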
Abstract:
Coordination and task allocation in distributed environments have been an important focus of research in recent years, and these topics lie at the heart of multi-agent systems. Agents in such systems need to cooperate and to take the other agents into account in their actions and decisions. Moreover, agents must coordinate among themselves to accomplish complex tasks that require more than one agent to be completed. These tasks may be so complex that agents may not know the location of the tasks or the time remaining before the tasks become obsolete. Agents may need to use communication in order to discover the tasks in the environment; otherwise, they may waste a great deal of time searching for tasks within the scenario. Similarly, the distributed decision-making process can be even more complex if the environment is dynamic, uncertain and real-time. In this dissertation we consider constrained, cooperative multi-agent environments (dynamic, uncertain and real-time). Two approaches are proposed that enable agent coordination. The first is a semi-centralized mechanism based on combinatorial auction techniques, whose main idea is to minimize the cost of the tasks allocated by the central agent to the agent teams. This algorithm takes into account the agents' preferences over the tasks; these preferences are included in the bid sent by each agent. The second is a fully decentralized scheduling approach, which allows agents to allocate their tasks taking into account their temporal preferences over the tasks. In this case, system performance depends not only on the maximization or optimization criterion, but also on the agents' ability to adapt their assignments efficiently.
Additionally, in a dynamic environment, execution failures can occur in any plan owing to uncertainty and the failure of individual actions. An indispensable part of any planning system is therefore the ability to re-plan. This dissertation also provides a re-planning approach whose aim is to allow agents to re-coordinate their plans when problems in the environment prevent plan execution. All these approaches have been developed to allow agents to allocate and coordinate all the complex tasks efficiently in a cooperative, dynamic and uncertain multi-agent environment, and they have demonstrated their effectiveness in experiments carried out in the RoboCup Rescue simulation environment.
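The semi-centralized mechanism is described only at a high level; as an illustration (not the dissertation's actual algorithm), a brute-force winner-determination sketch for a cost-minimizing combinatorial auction, with hypothetical teams, tasks and bids:

```python
from itertools import combinations

# Hypothetical bids: (team, bundle of tasks, cost). The central agent selects
# disjoint bundles covering all tasks at minimum total cost.
tasks = {"extinguish_fire", "clear_road", "rescue_civilian"}
bids = [
    ("team_a", {"extinguish_fire"}, 4.0),
    ("team_b", {"clear_road", "rescue_civilian"}, 7.0),
    ("team_a", {"extinguish_fire", "clear_road"}, 9.0),
    ("team_c", {"rescue_civilian"}, 3.0),
]

def winner_determination(tasks, bids):
    """Exhaustively search subsets of bids for the cheapest exact cover."""
    best_cost, best_sel = float("inf"), None
    for r in range(1, len(bids) + 1):
        for sel in combinations(bids, r):
            bundles = [b[1] for b in sel]
            covered = set().union(*bundles)
            disjoint = sum(len(b) for b in bundles) == len(covered)
            if disjoint and covered == tasks:
                cost = sum(b[2] for b in sel)
                if cost < best_cost:
                    best_cost, best_sel = cost, sel
    return best_cost, best_sel

cost, selection = winner_determination(tasks, bids)
```

The exhaustive search is exponential in the number of bids; it only illustrates the objective (minimum-cost allocation respecting the agents' bid preferences), not a scalable solver.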
Abstract:
This paper discusses experimental and theoretical investigations and Computational Fluid Dynamics (CFD) modelling considerations to evaluate the performance of a square-section wind catcher system connected to the top of a test room for the purpose of natural ventilation. The magnitude and distribution of pressure coefficients (C-p) around the wind catcher and the air flow into the test room were analysed. The modelling results indicated that air was supplied into the test room through the wind catcher's quadrants with positive external pressure coefficients and extracted out of the test room through quadrants with negative pressure coefficients. The air flow achieved through the wind catcher depends on the speed and direction of the wind. The results obtained using the explicit and AIDA implicit calculation procedures and the CFX code correlate relatively well with the experimental results at lower wind speeds and with wind incident at an angle of 0 degrees. Variations in the C-p and air flow results were observed, particularly at a wind direction of 45 degrees. The explicit and implicit calculation procedures were found to be quick and easy to use in obtaining results, whereas the wind tunnel tests were more expensive in terms of effort, cost and time. CFD codes are developing rapidly and are widely available, especially with the decreasing prices of computer hardware. However, results obtained using CFD codes must be considered with care, particularly in the absence of empirical data.
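The pressure coefficient analysed above is a standard non-dimensional quantity; a minimal sketch of its definition, with illustrative made-up values rather than the paper's measurements:

```python
def pressure_coefficient(p, p_inf, rho, u_inf):
    """C_p = (p - p_inf) / (0.5 * rho * u_inf**2), the standard definition.

    A quadrant with positive C_p supplies air into the room;
    one with negative C_p extracts air out.
    """
    return (p - p_inf) / (0.5 * rho * u_inf**2)

# Illustrative (made-up) values: 3 m/s wind, air density 1.2 kg/m^3,
# pressures in Pa relative to an arbitrary datum.
cp_windward = pressure_coefficient(p=104.4, p_inf=101.0, rho=1.2, u_inf=3.0)
cp_leeward = pressure_coefficient(p=99.0, p_inf=101.0, rho=1.2, u_inf=3.0)
```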
Abstract:
The last few years have shown that Vertical Axis Wind Turbines (VAWTs) are more suitable for urban areas than Horizontal Axis Wind Turbines (HAWTs). To date, very little has been published in this area assessing the performance and lifetime of VAWTs in either open or urban areas. At low tip speed ratios (TSR < 5), VAWTs are subject to a phenomenon called 'dynamic stall', which can significantly affect the fatigue life of a VAWT if it is not well understood. The purpose of this paper is to investigate how well CFD can simulate dynamic stall for 2-D flow around VAWT blades. Different turbulence models were used in the numerical simulations and compared with the data available on the subject. In this numerical analysis the Shear Stress Transport (SST) turbulence model appears to predict dynamic stall better than the other turbulence models available. The limitations of the study are that the simulations are based on a 2-D case with constant wind and rotational speeds rather than a 3-D case with variable wind speeds; this approach was necessary to keep the computational cost and time low. Consequently, future work should develop a more sophisticated model providing a more realistic simulation of dynamic stall in a three-dimensional VAWT.
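The tip speed ratio delimiting the dynamic-stall regime above is the ratio of blade tip speed to free-stream wind speed; a minimal sketch with an illustrative, made-up operating point:

```python
import math

def tip_speed_ratio(omega_rpm, radius_m, wind_speed_ms):
    """TSR (lambda) = omega * R / U, with omega converted from rpm to rad/s."""
    omega = omega_rpm * 2 * math.pi / 60.0
    return omega * radius_m / wind_speed_ms

# Illustrative (made-up) operating point: 120 rpm rotor of 1.5 m radius in a 6 m/s wind.
tsr = tip_speed_ratio(omega_rpm=120, radius_m=1.5, wind_speed_ms=6.0)
in_dynamic_stall_regime = tsr < 5   # the low-TSR regime discussed in the abstract
```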
Abstract:
Air distribution systems are among the major electrical energy consumers in air-conditioned commercial buildings, maintaining a comfortable indoor thermal environment and air quality by supplying specified amounts of treated air into different zones. The sizes of the air distribution lines affect the energy efficiency of the distribution systems. Equal friction and static regain are two well-known approaches for sizing the air distribution lines. To address the life cycle cost of air distribution systems, the T-method and the IPS method were developed. Hitherto, all these methods have been based on static design conditions; the dynamic performance of the system has therefore not yet been addressed, even though air distribution systems mostly operate under dynamic rather than static conditions. Besides, none of the existing methods considers any aspect of thermal comfort or environmental impact. This study investigates the existing methods for sizing air distribution systems and proposes a dynamic approach for size optimisation of the air distribution lines, taking into account optimisation criteria such as economic aspects, environmental impacts and technical performance. These criteria are addressed through whole life costing analysis, life cycle assessment and deviation from the set-point temperature of different zones, respectively. Integration of these criteria into the TRNSYS software produces a novel dynamic optimisation approach for duct sizing. Because the criteria are integrated into a well-known performance evaluation software package, this approach can be easily adopted by designers in time-pressured design practice. Comparison of this integrated approach with the existing methods reveals that, under the defined criteria, system performance is improved by up to 15% compared with the existing methods. This approach represents a significant step towards net-zero-emission buildings.
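The equal friction method mentioned above sizes each duct so that the pressure drop per unit length is the same throughout the network; a minimal sketch using a simplified Darcy-Weisbach relation with a fixed friction factor (an assumption for illustration, not the paper's TRNSYS model):

```python
import math

def equal_friction_diameter(flow_m3s, dp_per_m, rho=1.2, f=0.02):
    """Solve Darcy-Weisbach  dP/L = f * (rho/2) * v**2 / D  for diameter D,
    with mean velocity v = Q / (pi * D**2 / 4). A fixed friction factor f
    is a simplification; in reality f varies with Reynolds number and roughness.
    Bisection works because dP/L decreases monotonically with D (as 1/D**5).
    """
    def dp(d):
        v = flow_m3s / (math.pi * d**2 / 4)
        return f * (rho / 2) * v**2 / d

    lo, hi = 0.01, 5.0          # search window for D in metres
    for _ in range(100):
        mid = (lo + hi) / 2
        if dp(mid) > dp_per_m:  # too much friction -> duct must be larger
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Size a main duct and a branch to the same 1 Pa/m friction rate (illustrative flows).
d_main = equal_friction_diameter(flow_m3s=1.0, dp_per_m=1.0)
d_branch = equal_friction_diameter(flow_m3s=0.3, dp_per_m=1.0)
```

Holding the friction rate constant while flows differ is exactly what makes the branch come out smaller than the main; this static sizing is the baseline the paper's dynamic approach improves on.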
Abstract:
Nowadays, utilising a proper HVAC system is essential both in extreme weather conditions and in dense building designs. Hydraulic loops are among the most common components of all air conditioning systems. This article investigates the performance of different hydraulic loop arrangements in variable flow systems. Technical, economic and environmental assessments are considered in this process. A dynamic system simulation is generated to evaluate system performance, and an economic evaluation is conducted through whole life cost assessment. Moreover, environmental impacts are studied by considering whole life energy consumption, CO2 emissions, and the embodied energy and embodied CO2 of the system components. Finally, a decision-making procedure for choosing the most suitable hydraulic system among five well-known alternatives is proposed.
Abstract:
The agility of an inter-organizational process represents the ability of a virtual enterprise to respond rapidly to a changing market environment. Many theories and methodologies about inter-organizational processes have been developed, but dynamic agility has seldom been addressed. A virtual enterprise whose process has high dynamic agility will be able to adapt to a changing environment in a short time and at low cost. This paper analyzes the agility of inter-organizational processes from a dynamic perspective. Two indexes are proposed to evaluate dynamic agility: time and cost. Furthermore, a method to measure dynamic agility using simulation is studied. Finally, a case study is given to illustrate the measurement method.
Abstract:
Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
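The core communication pattern in a parallel k-means is the reduction of per-process partial sums; a minimal sketch over simulated "processes" in plain Python (the loop over partitions stands in for the collective operation — it is not the paper's dynamic-group protocol):

```python
# Each "process" holds a local partition of 2-D points and contributes only
# per-cluster partial sums and counts; reducing these small arrays is the
# step whose communication cost the paper's protocol aims to cut.

def local_partial_sums(points, centroids):
    """Assign each local point to its nearest centroid; return per-cluster sums/counts."""
    k = len(centroids)
    sums = [[0.0, 0.0] for _ in range(k)]
    counts = [0] * k
    for x, y in points:
        j = min(range(k), key=lambda c: (x - centroids[c][0])**2 + (y - centroids[c][1])**2)
        sums[j][0] += x; sums[j][1] += y; counts[j] += 1
    return sums, counts

def kmeans_step(partitions, centroids):
    """One k-means iteration: reduce partial sums from all partitions, then average."""
    k = len(centroids)
    tot_sums = [[0.0, 0.0] for _ in range(k)]
    tot_counts = [0] * k
    for part in partitions:                       # stands in for the all-reduce
        s, c = local_partial_sums(part, centroids)
        for j in range(k):
            tot_sums[j][0] += s[j][0]; tot_sums[j][1] += s[j][1]
            tot_counts[j] += c[j]
    return [[tot_sums[j][0] / tot_counts[j], tot_sums[j][1] / tot_counts[j]]
            if tot_counts[j] else centroids[j] for j in range(k)]

# Two "processes", each holding one small (made-up) cluster of points.
partitions = [[(0.0, 0.0), (0.2, 0.0)], [(4.0, 4.0), (4.2, 4.0)]]
new_centroids = kmeans_step(partitions, centroids=[[0.0, 0.0], [5.0, 5.0]])
```

Because only k sums and counts cross process boundaries per iteration, the message size is independent of the local data volume — the property the dynamic-group protocol exploits further for non-uniform distributions.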
Abstract:
This study reconstructs the depositional environments that accompanied both ice advance and ice retreat of the last British–Irish Ice Sheet in NE England during the Last Glacial Maximum, and proposes three regional ice-flow phases. The Late Devensian (29–22 cal. ka BP) Tyne Gap Ice Stream initially deposited the Blackhall Till Formation during shelf-edge glaciation (Phase I). This subglacial traction till comprises several related facies, including stratified and laminated diamictons, tectonites, and sand and gravel beds deposited both in subglacial canals and in proglacial streams. Eventually, stagnation of the Tyne Gap Ice Stream led to ice-marginal sedimentation in County Durham (Phase II). During the Dimlington Stadial (21 cal. ka BP), the North Sea Lobe advanced towards the coastline of N Norfolk. This resulted initially in sandur deposition (widespread, tabular sand and gravel; the Peterlee Sand and Gravel Formation; Phase II) and ultimately in deposition of the Horden Till Formation (Phase III), a massive subglacial till. As the North Sea Lobe overrode previous formations, it thrusted and stacked sediments in County Durham, and dammed proglacial lakes between the east-coast ice, the Pennine uplands and the remaining Pennine ice. The North Sea Lobe retreated after Heinrich Event 1 (16 ka). This study highlights the complexity of ice flow during the Late Devensian glaciation of NE England, with changing environmental and oceanic conditions forcing a mobile and sensitive ice sheet.
Abstract:
This article forecasts the extent to which the potential benefits of adopting transgenic crops may be reduced by costs of compliance with coexistence regulations applicable in various member states of the EU. A dynamic economic model is described and used to calculate the potential yield and gross margin of a set of crops grown in a selection of typical rotation scenarios. The model simulates varying levels of pest, weed, and drought pressures, with associated management strategies regarding pesticide and herbicide application, and irrigation. We report on the initial use of the model to calculate the net reduction in gross margin attributable to coexistence costs for insect-resistant (IR) and herbicide-tolerant (HT) maize grown continuously or in a rotation, HT soya grown in a rotation, HT oilseed rape grown in a rotation, and HT sugarbeet grown in a rotation. Conclusions are drawn about conditions favoring inclusion of a transgenic crop in a crop rotation, having regard to farmers’ attitude toward risk.
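The net reduction in gross margin attributable to coexistence costs reduces to simple per-hectare arithmetic once the components are known; a minimal sketch with illustrative, made-up numbers, not the model's actual parameters:

```python
def gross_margin(yield_t_ha, price_per_t, variable_costs, coexistence_costs=0.0):
    """Gross margin per hectare: revenue minus variable costs,
    optionally net of coexistence-compliance costs."""
    return yield_t_ha * price_per_t - variable_costs - coexistence_costs

# Illustrative (made-up) figures: an HT maize crop yields slightly more and
# saves on herbicide, but bears coexistence costs (isolation distances,
# segregation, testing) that erode part of the benefit.
gm_conventional = gross_margin(yield_t_ha=9.0, price_per_t=160.0, variable_costs=700.0)
gm_transgenic = gross_margin(yield_t_ha=9.4, price_per_t=160.0,
                             variable_costs=650.0, coexistence_costs=60.0)
net_benefit = gm_transgenic - gm_conventional
```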
Abstract:
Unorganized traffic is a generalized form of travel wherein vehicles do not adhere to any predefined lanes and can travel in between lanes. Such travel is visible in a number of countries, e.g. India, where it enables higher traffic bandwidth, more overtaking and more efficient travel. These advantages are visible when the vehicles vary considerably in size and speed, in the absence of which predefined lanes are near-optimal. Motion planning for multiple autonomous vehicles in unorganized traffic deals with deciding the manner in which every vehicle travels, ensuring no collision either with other vehicles or with static obstacles. In this paper the notion of predefined lanes is generalized to model unorganized travel for the purpose of planning vehicles' travel. A uniform cost search is used for finding the optimal motion strategy of a vehicle amidst the known travel plans of the other vehicles. The aim is to maximize the separation between the vehicles and static obstacles. The search is responsible for defining an optimal lane distribution among vehicles in the planning scenario. Clothoid curves are used for maintaining a lane or changing lanes. Experiments are performed by simulation over a set of challenging scenarios with a complex grid of obstacles. Additionally, behaviours of overtaking, waiting for a vehicle to cross, and following another vehicle are exhibited.
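A uniform cost search expands states in order of accumulated path cost; a generic sketch on a toy 4-connected grid with a static obstacle (not the paper's lane representation or clothoid model):

```python
import heapq

def uniform_cost_search(start, goal, neighbours):
    """Expand states in order of accumulated cost g. `neighbours(s)` yields
    (next_state, step_cost) pairs. Returns (cost, path) or (inf, [])."""
    frontier = [(0.0, start, [start])]
    best = {}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if best.get(state, float("inf")) <= g:   # stale entry, already expanded cheaper
            continue
        best[state] = g
        for nxt, step in neighbours(state):
            heapq.heappush(frontier, (g + step, nxt, path + [nxt]))
    return float("inf"), []

# Toy 3x3 grid with an obstacle at (1, 1); 4-connected moves of unit cost.
obstacles = {(1, 1)}
def neighbours(s):
    x, y = s
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 3 and 0 <= ny < 3 and (nx, ny) not in obstacles:
            yield (nx, ny), 1.0

cost, path = uniform_cost_search((0, 0), (2, 2), neighbours)
```

In the paper's setting the step costs would instead penalize proximity to other vehicles and obstacles, so the cheapest path maximizes separation.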
Abstract:
Dynamic electricity pricing can produce efficiency gains in the electricity sector and help achieve energy policy goals such as increasing electric system reliability and supporting renewable energy deployment. Retail electric companies can offer dynamic pricing to residential electricity customers via smart meter-enabled tariffs that proxy the cost to procure electricity on the wholesale market. Current investments in the smart metering necessary to implement dynamic tariffs show policy makers’ resolve for enabling responsive demand and realizing its benefits. However, despite these benefits and the potential bill savings these tariffs can offer, adoption among residential customers remains at low levels. Using a choice experiment approach, this paper seeks to determine whether disclosing the environmental and system benefits of dynamic tariffs to residential customers can increase adoption. Although sampling and design issues preclude wide generalization, we found that our environmentally conscious respondents reduced their required discount to switch to dynamic tariffs by around 10% in response to higher awareness of environmental and system benefits. The perception that shifting usage is easy to do also had a significant impact, indicating the potential importance of enabling technology. Perhaps the targeted communication strategy employed by this study is one way to increase adoption and achieve policy goals.
Abstract:
Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognises no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in solver (B&B), and it is what we recommend for use in yes-no Bloom filters.
In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improve the accuracy of compressed data.
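The two-filter query logic described above is straightforward; a minimal sketch of a yes-no Bloom filter, with illustrative hashing and the no-filter populated directly rather than via the paper's ILP/ADP selection:

```python
import hashlib

class BloomFilter:
    """A basic Bloom filter: k hash positions per item over an m-bit array."""
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def query(self, item):
        return all(self.bits[p] for p in self._positions(item))

class YesNoBloomFilter:
    """Accept an item iff the yes-filter matches AND the no-filter does not.
    The no-filter stores known false positives of the yes-filter."""
    def __init__(self, m, k):
        self.yes, self.no = BloomFilter(m, k), BloomFilter(m, k)

    def add(self, item):
        self.yes.add(item)

    def add_false_positive(self, item):
        self.no.add(item)

    def query(self, item):
        return self.yes.query(item) and not self.no.query(item)

f = YesNoBloomFilter(m=256, k=3)
for word in ["alpha", "beta", "gamma"]:
    f.add(word)
# Suppose "delta" turned out to be a false positive of the yes-filter:
f.add_false_positive("delta")
```

The paper's contribution is in *which* false positives to put in the no-filter (so that it catches many false positives but no true positives); here the choice is made by hand purely to show the query path.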
Abstract:
In Information Visualization, adding and removing data elements can strongly impact the underlying visual space. We have developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout that uses pairwise similarity from grid neighbors, as defined in incBoard, to reposition elements on the visual space, free from constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together, while dissimilar neighbors are moved apart, it supports users in the identification of clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements, even for fully renewed sets. The algorithm considers relative positions for the initial placement of elements, and raw dissimilarity to fine-tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V. Thus, a data set of size N can be sequentially displayed in O(N) time, reaching O(N²) only if the complete set is simultaneously displayed.