886 results for Optimal test set
Abstract:
Continued wind power growth in Europe, driven by renewable energy and greenhouse gas emissions targets, together with an ongoing dependency on thermal generation, has produced an interesting set of challenges for power systems. The variability of wind power affects dispatch and balancing by grid operators, power plant operations by generating companies, and wholesale market costs. This paper quantifies the effects of high wind power penetration on power systems dependent on gas generation, using a realistic unit commitment and economic dispatch model. The test system is analyzed over one year under two scenarios, with and without wind. The key finding of this preliminary study is that, despite increased ramping requirements in the wind scenario, the unit cost of electricity due to sub-optimal operation of gas generators does not deviate substantially from the no-wind scenario.
Abstract:
Background: The COMET (Core Outcome Measures in Effectiveness Trials) Initiative is developing a publicly accessible online resource to collate the knowledge base for core outcome set (COS) development and the applied work from different health conditions. Ensuring that the database is as comprehensive as possible and keeping it up to date are key to its value for users. This requires the development and application of an optimal, multi-faceted search strategy to identify relevant material. This paper describes the challenges of designing and implementing such a search, outlining the development of the search strategy for studies of COS development and, in turn, the process for establishing a database of COS.
Methods: We investigated the performance characteristics of this strategy, including sensitivity, precision and number needed to read. We compared the contribution of each database towards retrieving the included studies, in order to identify the best combination of methods for retrieving all of them.
Results: Recall of the search strategies ranged from 4% to 87%, and precision from 0.77% to 1.13%. MEDLINE performed best in terms of recall, retrieving 216 (87%) of the 250 included records, followed by Scopus (44%). The Cochrane Methodology Register found just 4% of the included records. MEDLINE was also the database with the highest precision. The number needed to read varied between 89 (MEDLINE) and 130 (Scopus).
Conclusions: We found that two databases plus hand searching were required to locate all of the studies in this review. MEDLINE alone retrieved 87% of the included studies, although 97% of the included studies were actually indexed in MEDLINE. The Cochrane Methodology Register did not contribute any records that were not found in the other databases, and will not be included in our future searches to identify studies developing COS. Scopus had the lowest precision (0.77%) and the highest number needed to read (130). In future COMET searches for COS, a balance needs to be struck between the work involved in screening large numbers of records, the frequency of the searching, and the likelihood that eligible studies will be identified by means other than the database searches.
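The reported figures are internally consistent with the standard definition of the number needed to read (NNR) as the reciprocal of precision:

```latex
\mathrm{NNR} = \frac{1}{\text{precision}}:
\qquad
\frac{1}{0.0113} \approx 89 \ \text{(MEDLINE)},
\qquad
\frac{1}{0.0077} \approx 130 \ \text{(Scopus)}
```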
Abstract:
BACKGROUND: Despite vaccines and improved medical intensive care, clinicians must remain vigilant for possible Meningococcal Disease in children. The objective was to establish whether the procalcitonin test is a cost-effective adjunct for detecting prodromal Meningococcal Disease in children presenting at the emergency department with fever without source.
METHODS AND FINDINGS: Data to evaluate the procalcitonin, C-reactive protein and white cell count tests as indicators of Meningococcal Disease were collected from six independent studies identified through a systematic literature search applying PRISMA guidelines. The data included 881 children with fever without source in developed countries. The optimal cut-off value for each of the procalcitonin, C-reactive protein and white cell count tests, as an indicator of Meningococcal Disease, was determined. Summary receiver operating characteristic curve analysis determined the overall diagnostic performance of each test, with 95% confidence intervals. A decision analytic model was designed to reflect realistic clinical pathways for a child presenting with fever without source by comparing two diagnostic strategies: standard testing using combined C-reactive protein and white cell count tests, versus standard testing plus the procalcitonin test. The costs of each of the four diagnosis groups (true positive, false negative, true negative and false positive) were assessed from a National Health Service payer perspective. The procalcitonin test was more accurate (sensitivity = 0.89, 95% CI = 0.76-0.96; specificity = 0.74, 95% CI = 0.4-0.92) for early Meningococcal Disease than standard testing alone (sensitivity = 0.47, 95% CI = 0.32-0.62; specificity = 0.8, 95% CI = 0.64-0.9). Decision analytic model outcomes indicated that the incremental cost-effectiveness ratio for the base case was -£8,137.25 (-US$13,371.94) per correctly treated patient.
CONCLUSIONS: Procalcitonin plus the standard recommended tests improved the discriminatory ability for fatal Meningococcal Disease and was more cost-effective; it was also a superior biomarker in infants. Further research is recommended into point-of-care procalcitonin testing and Markov modelling to incorporate cost per QALY with a lifetime model.
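For context, the negative figure follows from the standard incremental cost-effectiveness ratio (ICER); in generic notation (not necessarily the paper's):

```latex
\mathrm{ICER} =
\frac{C_{\text{PCT+standard}} - C_{\text{standard}}}
     {E_{\text{PCT+standard}} - E_{\text{standard}}}
```

With effectiveness E measured as the proportion of correctly treated patients, a negative ICER combined with higher effectiveness means the procalcitonin strategy is both cheaper and more effective, i.e. it dominates standard testing.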
Abstract:
Boolean games are a framework for reasoning about the rational behaviour of agents whose goals are formalized using propositional formulas. They offer an attractive alternative to normal-form games because they allow for a more intuitive and more compact encoding. Unfortunately, there is currently no general, tailor-made method available to compute the equilibria of Boolean games. In this paper, we introduce a method for finding the pure Nash equilibria based on disjunctive answer set programming. Our method is furthermore capable of finding the core elements and the Pareto-optimal equilibria, and can easily be modified to support other forms of optimality, thanks to the declarative nature of disjunctive answer set programming. Experimental results clearly demonstrate the effectiveness of the proposed method.
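To make the solution concept concrete, here is a brute-force Python sketch of pure Nash equilibria in a two-agent Boolean game. The paper's method instead encodes this search declaratively in disjunctive answer set programming; the example game below is invented.

```python
# Brute-force pure Nash equilibria of a toy Boolean game: each agent controls
# some propositional variables and is satisfied iff its goal formula holds.
from itertools import product

agents = {
    "A": {"vars": ["p"], "goal": lambda v: v["p"] == v["q"]},  # A wants p <-> q
    "B": {"vars": ["q"], "goal": lambda v: v["p"] != v["q"]},  # B wants p xor q
}
all_vars = [x for a in agents.values() for x in a["vars"]]

def deviations(profile, agent):
    # All profiles reachable by the agent changing only its own variables.
    own = agents[agent]["vars"]
    for bits in product([False, True], repeat=len(own)):
        yield {**profile, **dict(zip(own, bits))}

equilibria = []
for bits in product([False, True], repeat=len(all_vars)):
    profile = dict(zip(all_vars, bits))
    # Stable iff every agent is either satisfied or has no satisfying deviation.
    stable = all(
        agents[a]["goal"](profile)
        or not any(agents[a]["goal"](d) for d in deviations(profile, a))
        for a in agents
    )
    if stable:
        equilibria.append(profile)
print(equilibria)  # this matching-pennies-style game has no pure equilibrium
```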
Abstract:
The research presented investigates the optimal set of operational codes (opcodes) that creates a robust indicator of malicious software (malware), and also determines the program execution duration required for accurate classification of benign and malicious software. The features extracted from the dataset are opcode density histograms, collected during program execution. The classifier used is a support vector machine, configured to select the features that produce the optimal classification of malware over different program run lengths. The findings demonstrate that malware can be detected using dynamic analysis with relatively few opcodes.
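A minimal Python sketch of the classification step described above, using synthetic opcode-density histograms rather than the paper's dataset (the opcode count, data distributions and all values are invented):

```python
# Classify programs as benign/malicious from opcode-density histograms
# with a support vector machine (synthetic stand-in data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_opcodes = 32  # density histogram over 32 tracked opcodes (assumed)

# Each row sums to 1: the fraction of executed instructions per opcode.
benign = rng.dirichlet(np.ones(n_opcodes), size=100)
malicious = rng.dirichlet(np.ones(n_opcodes) * 0.3, size=100)  # skewed densities
X = np.vstack([benign, malicious])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```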
Abstract:
The relationship between epidemiology, mathematical modelling and computational tools makes it possible to build and test theories about the development and control of a disease. This thesis is motivated by the study of epidemiological models applied to infectious diseases from an Optimal Control perspective, with particular emphasis on dengue. A tropical and subtropical mosquito-borne disease, dengue affects about 100 million people per year and is considered by the World Health Organization a major public health concern. The mathematical models developed and tested in this work are based on ordinary differential equations that describe the dynamics underlying the disease, namely the interaction between humans and mosquitoes. An analytical study of these models is carried out with respect to equilibrium points, their stability and the basic reproduction number. The spread of dengue can be mitigated through measures to control the transmission vector, such as the use of specific insecticides and educational campaigns. Since the development of a potential vaccine has been a recent worldwide commitment, models based on the simulation of a hypothetical vaccination process in a population are proposed. Based on Optimal Control theory, optimal strategies for the use of these controls are analysed, together with their repercussions on the reduction or eradication of the disease during an outbreak in the population, under a bioeconomic approach. The formulated problems are solved numerically using direct and indirect methods. The former discretize the problem, reformulating it as a nonlinear optimization problem. The indirect methods use Pontryagin's Maximum Principle as a necessary condition to find the optimal curve for the respective control. Several numerical software packages are used in both strategies. Throughout this work, there was always a compromise between the realism of the epidemiological models and their mathematical tractability.
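A minimal host-vector sketch in Python of the kind of ODE dynamics the thesis builds on (generic textbook SIR-SI structure with invented parameter values; the thesis's models and controls are richer):

```python
# Host-vector dengue-style dynamics. Humans: S_h, I_h, R_h; mosquitoes: S_m, I_m.
import numpy as np
from scipy.integrate import odeint

B, beta_hm, beta_mh = 0.8, 0.375, 0.375   # biting rate, transmission probabilities
gamma, mu_m = 1 / 7, 1 / 14               # human recovery, mosquito mortality
N_h, N_m = 10_000, 30_000                 # population sizes (assumed constant)

def rhs(x, t):
    S_h, I_h, R_h, S_m, I_m = x
    lam_h = B * beta_mh * I_m / N_h        # force of infection on humans
    lam_m = B * beta_hm * I_h / N_h        # force of infection on mosquitoes
    return [-lam_h * S_h,
            lam_h * S_h - gamma * I_h,
            gamma * I_h,
            mu_m * N_m - lam_m * S_m - mu_m * S_m,
            lam_m * S_m - mu_m * I_m]

t = np.linspace(0, 120, 600)
sol = odeint(rhs, [N_h - 10, 10, 0, N_m, 0], t)  # start with 10 infected humans
print("peak infected humans:", int(sol[:, 1].max()))
```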
Abstract:
The performance of real-time networks is under continuous improvement as a result of several trends in the digital world. However, these trends not only bring improvements but also exacerbate a series of non-ideal aspects of real-time networks, such as communication latency, latency jitter and packet drop rate. This Thesis focuses on the communication errors that appear in such real-time networks, from the point of view of automatic control. Specifically, it investigates the effects of packet drops in automatic control over fieldbuses, as well as architectures and optimal techniques for their compensation.

Firstly, a new approach to the problems that arise from such packet drops is proposed. This approach is based on the simultaneous transmission of several values in a single message. Such messages can be from sensor to controller, in which case they comprise several past sensor readings, or from controller to actuator, in which case they comprise estimates of several future control values. A series of tests reveals the advantages of this approach. The approach is then expanded to accommodate the techniques of contemporary optimal control. Unlike the first approach, which deliberately withholds certain messages in order to make more efficient use of network resources, in the second case the techniques are used to reduce the effects of packet losses. After these two approaches based on data aggregation, optimal control over packet-dropping fieldbuses is also studied, using generalized actuator output functions. This study culminates in the development of a new optimal controller, together with the identification of the function, among the generalized functions that dictate the actuator's behaviour in the absence of a new control message, that leads to optimal performance.

The Thesis also presents a different line of research, related to the output oscillations that occur as a consequence of using classic co-design techniques for networked control. The proposed algorithm allows such classical co-design algorithms to execute without causing output oscillations that increase the value of the cost function; such increases may, under certain circumstances, negate the advantages of applying the classical co-design techniques. Another line of research investigated algorithms, more efficient than existing ones, to generate task execution sequences that guarantee that at least a given number of activated jobs is executed out of every set composed of a predetermined number of contiguous activations. This algorithm may, in the future, be applied to the generation of message transmission patterns in the above-mentioned techniques for the efficient use of network resources. The proposed task-generation algorithm improves on its predecessors in that it can schedule systems that its predecessor algorithms cannot. The Thesis also presents a mechanism that performs multi-path routing in wireless sensor networks while ensuring that no value is counted in duplicate; this technique thereby improves the performance of wireless sensor networks, rendering them more suitable for control applications.
As mentioned before, this Thesis is centered on techniques for improving the performance of distributed control systems in which several elements are connected through a fieldbus that may be subject to packet drops. The first three approaches are directly related to this topic: the first two address the problem from an architectural standpoint, whereas the third does so on more theoretical grounds. The fourth approach ensures that approaches found in the literature that pursue goals similar to those of this Thesis can do so without causing other problems that might invalidate the solutions in question. The Thesis then presents an approach centered on the efficient generation of the transmission patterns used in the aforementioned approaches.
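A minimal Python sketch of the controller-to-actuator data-aggregation idea described above: each message carries estimates of several future control values, so the actuator can keep applying sensible outputs when subsequent messages are dropped. The horizon, drop rate and control sequence are all invented; the thesis's estimators are more sophisticated.

```python
import random

HORIZON = 4  # number of future control values packed into each message (assumed)

class Actuator:
    def __init__(self):
        self.buffer = []   # queued predictions of future control values
        self.last = 0.0    # most recently applied control output

    def receive(self, message):
        self.buffer = list(message)  # a new message replaces stale predictions

    def step(self):
        # Apply the next buffered prediction; hold the last value if none remain.
        if self.buffer:
            self.last = self.buffer.pop(0)
        return self.last

actuator = Actuator()
for k in range(20):
    controls = [0.5 * k + i for i in range(HORIZON)]  # stand-in control sequence
    if random.random() > 0.3:     # message delivered; a 30% drop rate is assumed
        actuator.receive(controls)
    print(k, actuator.step())
```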
Abstract:
Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and the avoided costs due to investment deferral in the expansion of the network. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem is formulated as a large-scale mixed-integer linear problem, suitable for solution by a widespread commercial package. Results of the proposed optimization method are compared with another recent methodology reported in the literature using two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings achieved with lower amounts of capacitive compensation. The proposed method has also been applied to compensate an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22,298 iterations demonstrates the ability of the proposed methodology to efficiently handle large-scale compensation problems.
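As a toy illustration of the kind of mixed-integer linear formulation described, a minimal Python sketch with PuLP. All coefficients are invented; the paper derives its savings terms from a linearized loss model and includes many more constraints.

```python
# Capacitor placement as a tiny MILP: pick at most one capacitor rating per
# node to maximize total savings, under a cap on total compensation.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

nodes = ["n1", "n2", "n3"]
ratings = {"300kvar": 1.0, "600kvar": 2.0}   # capacitive steps (p.u., assumed)
saving = {("n1", "300kvar"): 4.2, ("n1", "600kvar"): 6.1,
          ("n2", "300kvar"): 3.0, ("n2", "600kvar"): 2.5,
          ("n3", "300kvar"): 1.1, ("n3", "600kvar"): 0.9}

prob = LpProblem("capacitor_placement", LpMaximize)
x = {(n, k): LpVariable(f"x_{n}_{k}", cat="Binary")
     for n in nodes for k in ratings}

prob += lpSum(saving[n, k] * x[n, k] for n in nodes for k in ratings)  # objective
for n in nodes:  # at most one capacitor rating installed per node
    prob += lpSum(x[n, k] for k in ratings) <= 1
# cap on total installed capacitive compensation
prob += lpSum(ratings[k] * x[n, k] for n in nodes for k in ratings) <= 3.0

prob.solve()
print([(n, k) for (n, k) in x if x[n, k].value() == 1])
```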
Abstract:
Congestion management of transmission power systems has achieved high relevance in competitive environments, which require an adequate approach in both technical and economic terms. This paper proposes a new methodology for congestion management and transmission tariff determination in deregulated electricity markets. The congestion management methodology is based on a reformulated optimal power flow, whose main goal is to obtain a feasible solution for the re-dispatch while minimizing the changes to the transactions resulting from market operation. The proposed transmission tariffs consider the physical impact caused by each market agent on the transmission network. The final tariff considers existing system costs as well as the costs due to the initial congestion situation and losses. The paper includes a case study based on the IEEE 118-bus test case.
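In spirit, such a reformulated OPF minimizes the deviation from the market-cleared injections subject to network feasibility. A generic sketch in our own notation (not necessarily the paper's exact formulation):

```latex
\min_{P}\ \sum_{i} \left| P_i - P_i^{\mathrm{mkt}} \right|
\quad \text{s.t.} \quad
g(P, V, \theta) = 0, \qquad
\left| F_\ell(P, V, \theta) \right| \le F_\ell^{\max} \quad \forall \ell
```

where $P_i^{\mathrm{mkt}}$ are the market-cleared injections, $g$ denotes the AC power flow equations and $F_\ell$ the branch flows.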
Abstract:
To maintain a power system within its operating limits, planning a level ahead is necessary, applying competitive techniques to solve the optimal power flow (OPF). The OPF is a non-linear, large combinatorial problem. The Ant Colony Search (ACS) optimization algorithm is inspired by the organized natural movement of real ants and has been successfully applied to different large combinatorial optimization problems. This paper presents an implementation of Ant Colony optimization to solve the OPF in an economic dispatch context. The proposed methodology has been developed for maintenance and repair planning performed 48 to 24 hours ahead of operation. The main advantage of this method is its low execution time, which allows the use of OPF when a large set of scenarios has to be analyzed. The paper includes a case study using the IEEE 30-bus network. The results are compared with other well-known methodologies presented in the literature.
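For reference, the canonical ant-colony pheromone update underlying such methods (standard ACO; the paper's exact variant may differ):

```latex
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k} \Delta\tau_{ij}^{k},
\qquad
\Delta\tau_{ij}^{k} =
\begin{cases}
Q / L_k & \text{if ant } k \text{ used component } (i,j)\\
0 & \text{otherwise}
\end{cases}
```

where $\rho$ is the evaporation rate, $Q$ a constant and $L_k$ the cost of the solution built by ant $k$.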
Fuzzy Monte Carlo mathematical model for load curtailment minimization in transmission power systems
Abstract:
This paper presents a methodology based on statistical failure and repair data of transmission power system components, using fuzzy-probabilistic modeling of the component outage parameters. Using statistical records allows the fuzzy membership functions of the system component outage parameters to be developed. The proposed hybrid method of fuzzy sets and Monte Carlo simulation, based on the fuzzy-probabilistic models, captures both the randomness and the fuzziness of the component outage parameters. Once the system states have been obtained by Monte Carlo simulation, a network contingency analysis is performed to identify any overloading or voltage violation in the network. This is followed by a remedial action algorithm, based on optimal power flow, to reschedule generation and alleviate constraint violations for the states identified by the contingency analysis, avoiding any load curtailment if possible or otherwise minimizing the total load curtailment. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study of the 1996 IEEE 24-bus Reliability Test System (RTS).
Abstract:
In recent decades, all over the world, competition in the electric power sector has deeply changed the way this sector's agents play their roles. In most countries, electricity deregulation was conducted in stages, beginning with the clients at higher voltage levels and with larger electricity consumption, and later extended to all electricity consumers. The sector's liberalization and the operation of competitive electricity markets were expected to lower prices and improve quality of service, leading to greater consumer satisfaction. Transmission and distribution remain noncompetitive business areas, due to the large infrastructure investments required. However, the industry has yet to clearly establish the best business model for transmission in a competitive environment. After generation, the electricity needs to be delivered to the electrical system nodes where demand requires it, taking into consideration transmission constraints and electrical losses. If the amount of power flowing through a certain line is close to or surpasses the safety limits, then cheap but distant generation might have to be replaced by more expensive, closer generation to reduce the exceeded power flows. In a congested area, the optimal price of electricity rises to the marginal cost of the local generation, or to the level needed to ration demand to the amount of available electricity. Even without congestion, some power will be lost in the transmission system through heat dissipation, so prices reflect that it is more expensive to supply electricity at the far end of a heavily loaded line than close to a point of electric power generation. Locational marginal pricing (LMP), resulting from bidding competition, represents electrical and economical values at nodes or in areas that may provide economic indicator signals to the market agents. This article proposes a data-mining-based methodology that helps characterize zonal prices in real power transmission networks. To test our methodology, we used an LMP database from the California Independent System Operator (CAISO) for 2009 to identify economic zones. (CAISO is a nonprofit public benefit corporation charged with operating the majority of California's high-voltage wholesale power grid.) To group the buses into typical classes, each representing a set of buses with approximately the same LMP value, we used two-step and k-means clustering algorithms. By analyzing the various LMP components, our goal was to extract knowledge to support the ISO in investment and network-expansion planning.
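An illustrative Python sketch of the clustering step, in the spirit of the methodology described above but on synthetic data rather than the CAISO database (zone structure, prices and counts are invented):

```python
# Group buses into zones by clustering their hourly LMP profiles with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Each row is a bus; columns are hourly LMP values ($/MWh) over one day.
lmp = np.vstack([
    rng.normal(35, 2, size=(40, 24)),   # cheap zone
    rng.normal(50, 3, size=(40, 24)),   # congested zone
    rng.normal(42, 2, size=(40, 24)),   # intermediate zone
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(lmp)
for zone in range(3):
    members = np.where(kmeans.labels_ == zone)[0]
    print(f"zone {zone}: {len(members)} buses, "
          f"mean LMP {lmp[members].mean():.1f} $/MWh")
```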
Abstract:
This paper presents a Unit Commitment model with reactive power compensation that is solved by Genetic Algorithm (GA) optimization techniques. The GA has been developed as a computational tool coded in MATLAB. The main objective is to find the best generation schedule, minimizing active power losses and determining the reactive power to be compensated, subject to the power system's technical constraints: the full AC power flow equations and the active and reactive power generation limits. All constraints represented in the objective function are weighted with penalty factors. The IEEE 14-bus system is used as a test case to demonstrate the effectiveness of the proposed algorithm. Results and conclusions are duly drawn.
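A generic way to fold such constraints into a GA fitness function via penalty factors, in our notation (a common quadratic-penalty form, not necessarily the paper's exact weighting):

```latex
F(x) \;=\; P_{\text{loss}}(x) \;+\; \sum_{j} w_j \,\max\bigl(0,\; g_j(x)\bigr)^{2}
```

where $g_j(x) \le 0$ are the technical constraints and $w_j$ the penalty factors; infeasible candidates are thus penalized in proportion to the square of their violation.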
Abstract:
LLF (Least Laxity First) scheduling, which assigns a higher priority to a task with smaller laxity, is known to be an optimal preemptive scheduling algorithm on a single-processor platform. However, little work has been done to illuminate its characteristics on multiprocessor platforms. In this paper, we identify the dynamics of laxity from the system's viewpoint and translate these dynamics into an LLF multiprocessor schedulability analysis. More specifically, we first characterize laxity properties under LLF scheduling, focusing on the laxity dynamics associated with a deadline miss. These laxity dynamics describe a lower bound, which leads to the deadline miss, on the number of tasks of certain laxity values at certain time instants. This lower bound is significant because it represents invariants for highly dynamic system parameters (laxity values). Since the laxity of a task depends on the amount of interference from higher-priority tasks, we can then derive a set of conditions to check whether a given task system can enter the laxity dynamics leading towards a deadline miss. In this way, to the authors' best knowledge, we propose the first LLF multiprocessor schedulability test based on its own laxity properties. We also develop an improved schedulability test that exploits slack values. We mathematically prove that the proposed LLF tests dominate the state-of-the-art EDZL tests. We also present simulation results to evaluate the schedulability performance of both the original and improved LLF tests in a quantitative manner.
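As a concrete anchor for the notion of laxity used above, a minimal Python sketch (illustrative only; the task parameters are invented). Laxity is the standard quantity: absolute deadline minus current time minus remaining execution, and LLF assigns the m processors to the m ready jobs with the smallest laxity.

```python
# One scheduling decision under multiprocessor LLF.
def laxity(job, now):
    return job["deadline"] - now - job["remaining"]

def llf_pick(jobs, now, m):
    # Select the m ready jobs with the smallest laxity for execution.
    ready = [j for j in jobs if j["remaining"] > 0]
    return sorted(ready, key=lambda j: laxity(j, now))[:m]

jobs = [
    {"name": "T1", "deadline": 10, "remaining": 4},  # laxity 6 at t=0
    {"name": "T2", "deadline": 6,  "remaining": 5},  # laxity 1 at t=0
    {"name": "T3", "deadline": 12, "remaining": 3},  # laxity 9 at t=0
]
for j in llf_pick(jobs, now=0, m=2):
    print(j["name"], laxity(j, 0))  # T2 and T1 are chosen on two processors
```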