47 results for reliability algorithms
at Instituto Politécnico do Porto, Portugal
Abstract:
This paper presents a methodology supported on the knowledge discovery in databases (KDD) process to find the failure probability of electrical equipment belonging to a real high-voltage electrical network. Data Mining (DM) techniques are used to discover a set of outcome failure probabilities and, therefore, to extract knowledge concerning the unavailability of electrical equipment such as power transformers and high-voltage power lines. The framework comprises several steps: analysis of the real database, data pre-processing, application of DM algorithms, and, finally, interpretation of the discovered knowledge. To validate the proposed methodology, a case study including real databases is used. These data carry heavy uncertainty due to climate conditions; for this reason, fuzzy logic was used to determine the set of electrical component failure probabilities needed to re-establish the service. The results reflect the interesting potential of this approach and encourage further research on the topic.
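As a rough illustration of the fuzzy step described above, the sketch below scales a component's base failure probability by a fuzzy weather-severity grade. The membership breakpoints, scaling factors and wind-speed input are invented for the example; the paper's actual rule base is not reproduced here.

```python
import numpy as np

def ramp(x, lo, hi):
    """Linear membership rising from 0 at lo to 1 at hi."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def fuzzy_failure_probability(wind_speed, base_rate):
    severe = ramp(wind_speed, 20.0, 60.0)  # hypothetical breakpoints (km/h)
    mild = 1.0 - severe
    # Sugeno-style weighted average of two hypothetical scaling factors:
    # mild weather keeps the base rate, severe weather triples it.
    return base_rate * (1.0 * mild + 3.0 * severe)

print(fuzzy_failure_probability(wind_speed=35.0, base_rate=0.02))  # 0.035
```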
Abstract:
This paper proposes a new methodology to reduce the probability of occurrence of states that cause load curtailment, while minimizing the costs involved in achieving that reduction. The methodology is supported by a hybrid method based on Fuzzy Sets and Monte Carlo Simulation to capture both the randomness and fuzziness of the component outage parameters of the transmission power system. The novelty of this research work consists in proposing two fundamental approaches: 1) a global steady approach, which builds the model of a faulted transmission power system aiming at minimizing the unavailability corresponding to each faulted component; this results in the minimal global investment cost for the faulted components in a system-states sample of the transmission network; 2) a dynamic iterative approach, which checks the effect of each investment on the transmission network individually. A case study using the 1996 IEEE Reliability Test System (RTS) with 24 buses is presented to illustrate in detail the application of the proposed methodology.
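A minimal sketch of the hybrid fuzzy/Monte Carlo sampling idea follows, under invented component data: fuzziness enters through a fuzzy (triangular) outage parameter and randomness through a Bernoulli draw per component. This is a schematic stand-in, not the paper's model.

```python
import random

# Hypothetical fuzzy (triangular) unavailability per component: (low, mode, high).
COMPONENTS = {"line-1": (0.01, 0.02, 0.05), "line-2": (0.02, 0.04, 0.08)}

def sample_state(rng):
    """One system state: a random point of the fuzzy support stands in for
    alpha-cut sampling; the Bernoulli draw supplies the randomness."""
    state = {}
    for name, (lo, mode, hi) in COMPONENTS.items():
        u = rng.triangular(lo, hi, mode)  # fuzzy outage parameter realisation
        state[name] = "out" if rng.random() < u else "up"
    return state

rng = random.Random(42)
states = [sample_state(rng) for _ in range(10_000)]
p_any_out = sum("out" in s.values() for s in states) / len(states)
print(f"P(at least one component out) ~ {p_any_out:.3f}")
```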
Abstract:
Electricity markets are complex environments, involving numerous entities trying to obtain the best advantages and profits while limited by power-network characteristics and constraints. The restructuring and consequent deregulation of electricity markets introduced a new economic dimension to the power industry. Some observers have criticized the restructuring process, however, because it has failed to improve market efficiency and has complicated the assurance of reliability and fairness of operations. To study and understand this type of market, we developed the Multiagent Simulator of Competitive Electricity Markets (MASCEM) platform based on multiagent simulation. The MASCEM multiagent model includes players with strategies for bid definition, acting in forward, day-ahead, and balancing markets and considering both simple and complex bids. Our goal with MASCEM was to simulate as many market models and player types as possible. This approach makes MASCEM both a short- and medium-term simulator as well as a tool to support long-term decisions, such as those taken by regulators. This article proposes a new methodology, integrated in MASCEM, for bid definition in electricity markets. This methodology uses reinforcement learning algorithms to let players perceive changes in the environment, thus helping them react to the dynamic environment and adapt their bids accordingly.
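The following sketch shows the general shape of a reinforcement learning bid-definition loop of the kind described, as a stateless Q-learning (bandit-style) update over candidate bid prices. The pay-as-bid reward, prices and parameters are hypothetical and are not MASCEM's actual mechanism.

```python
import random

ACTIONS = [30.0, 40.0, 50.0]   # candidate bid prices (EUR/MWh), hypothetical
q = {a: 0.0 for a in ACTIONS}  # action-value estimates
alpha, epsilon = 0.1, 0.2      # learning rate, exploration rate
rng = random.Random(0)

def market_reward(bid, clearing_price=42.0, quantity=10.0):
    """Hypothetical pay-as-bid outcome: a bid at or below the clearing
    price is accepted and paid its own price; otherwise it earns nothing."""
    return bid * quantity if bid <= clearing_price else 0.0

for episode in range(1000):
    a = rng.choice(ACTIONS) if rng.random() < epsilon else max(q, key=q.get)
    q[a] += alpha * (market_reward(a) - q[a])  # stateless Q-learning update

print(max(q, key=q.get))  # learned preferred bid price: 40.0
```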
Abstract:
Background: Acute respiratory infections are common in children under three years old, occurring in the upper respiratory tract, having an impact on the child's and caregiver's quality of life and predisposing to otitis media or bronchiolitis. There are few valid and reliable measures to determine the child's respiratory condition and to guide physiotherapy intervention. Aim: To assess the intra- and inter-rater reliability of nasal auscultation and to analyze the relation between sound classification and middle-ear pressure and compliance, as well as with the Clinical Severity Score. Methods: A cross-sectional observational study included 125 nursery children aged up to three years. Tympanometry, pulmonary and nasal auscultation, and the Clinical Severity Score were applied to each child. Nasal auscultation sounds were recorded and sent to 3 blinded experts, who classified them as "obstructed" or "unobstructed", with a 48-hour interval, in order to analyze inter- and intra-rater reliability. Results: Nasal auscultation revealed substantial inter- and intra-rater reliability (inter-rater K=0.749; intra-rater: evaluator A, K=0.691; evaluator B, K=0.605; evaluator C, K=0.724). Pressure in both ears was significantly lower in children with an "unobstructed" nasal sound than in those with an "obstructed" nasal sound (t=-3.599, p<0.001 in the left ear; t=-2.258, p=0.026 in the right ear). Compliance in both ears was significantly lower in children with an "obstructed" nasal sound than in those with an "unobstructed" nasal sound (t=-2.728, p=0.007 in the left ear; t=-3.830, p<0.001 in the right ear). There was a statistically significant association between sound classification and tympanogram types in both ears (χ²=11.437, p=0.003 in the left ear; χ²=13.535, p=0.001 in the right ear). Children with an "unobstructed" nasal sound tended to have a lower Clinical Severity Score than "obstructed" children. Conclusion: Good intra-rater and substantial inter-rater reliability were observed for nasal auscultation. Nasal sound classification was related to middle-ear pressure and compliance.
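For reference, the kappa statistic reported above can be computed for a pair of raters as follows; the toy labels stand in for the study's recordings, and the study itself pooled three raters.

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters over the same items."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    p_exp = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

rater_a = ["obstructed", "obstructed", "unobstructed", "obstructed", "unobstructed"]
rater_b = ["obstructed", "unobstructed", "unobstructed", "obstructed", "unobstructed"]
print(f"kappa = {cohen_kappa(rater_a, rater_b):.3f}")  # 0.615 on these toy labels
```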
Abstract:
Introduction: Image resizing is a standard feature of Nuclear Medicine digital imaging. Upsampling is applied by manufacturers to better fit the acquired images on the display screen, whenever there is a need to increase (or decrease) the total number of pixels. This paper aims to compare the "hqnx" and "nxSaI" magnification algorithms with two interpolation algorithms, "nearest neighbor" and "bicubic interpolation", in image upsampling operations. Material and Methods: Three distinct Nuclear Medicine images were enlarged 2 and 4 times with the different digital image resizing algorithms (nearest neighbor, bicubic interpolation, nxSaI and hqnx). To evaluate the pixel changes between the different output images, 3D whole-image plot profiles and surface plots were used in addition to the visual assessment of the 4x upsampled images. Results: In the 2x enlarged images the visual differences were not noteworthy, although bicubic interpolation clearly presented the best results. In the 4x enlarged images the differences were significant, with the bicubic interpolated images again presenting the best results. Hqnx-resized images presented better quality than 4xSaI and nearest-neighbor interpolated images; however, their intense "halo effect" greatly degrades the definition and boundaries of the image contents. Conclusion: The hqnx and nxSaI algorithms were designed for images with clear edges, so their use on Nuclear Medicine images is clearly inadequate. Of the algorithms studied, bicubic interpolation appears to be the most suitable, and its ever-wider range of applications supports its standing as an efficient algorithm for multiple image types.
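A minimal reproduction of the interpolation part of the comparison can be done with Pillow's resampling filters, as sketched below; the input filename is hypothetical, and the hqnx/nxSaI pixel-art scalers have no Pillow equivalent, so only the two interpolation baselines appear.

```python
from PIL import Image

img = Image.open("mpi_slice.png")  # hypothetical Nuclear Medicine image
for factor in (2, 4):
    size = (img.width * factor, img.height * factor)
    img.resize(size, Image.NEAREST).save(f"nearest_{factor}x.png")
    img.resize(size, Image.BICUBIC).save(f"bicubic_{factor}x.png")
```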
Abstract:
Introduction: A major focus of the data mining process, especially in machine learning research, is to automatically learn to recognize complex patterns and help make adequate decisions based strictly on the acquired data. Since imaging techniques like MPI (Myocardial Perfusion Imaging in Nuclear Cardiology) can account for a large part of the daily workflow and generate gigabytes of data, computerized analysis could offer advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: The aim of this study is to evaluate the efficacy of this methodology for the evaluation of MPI stress studies and for the decision on whether or not to continue the evaluation of each patient. The objective pursued was to automatically classify each patient test into one of three groups: "Positive", "Negative" and "Indeterminate". "Positive" tests would proceed directly to the rest part of the exam, "Negative" tests would be directly exempted from continuation, and only the "Indeterminate" group would require the clinician's analysis, thus saving clinician effort, increasing workflow fluidity at the technologist's level and probably sparing patients' time. Methods: The WEKA v3.6.2 open-source software was used to perform a comparative analysis of three WEKA algorithms ("OneR", "J48" and "Naïve Bayes") in a retrospective study, using the corresponding clinical results, signed off by expert nuclear cardiologists, as reference, on the "SPECT Heart Dataset" available at the University of California, Irvine, Machine Learning Repository. For evaluation purposes, criteria such as "Precision", "Incorrectly Classified Instances" and "Receiver Operating Characteristic (ROC) Areas" were considered. Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms. Conclusions: It is believed, and apparently supported by the findings, that machine learning algorithms could significantly assist, at an intermediary level, in the analysis of scintigraphic data obtained from MPI, namely after the stress acquisition, eventually increasing the efficiency of the entire system and potentially easing the roles of both Technologists and Nuclear Cardiologists. In the continuation of this study, it is planned to use more patient information and significantly increase the population under study, in order to improve system accuracy.
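A present-day equivalent of the experiment could be run with scikit-learn in place of WEKA, as in the sketch below; it assumes a local copy of the UCI SPECT training file and uses a Bernoulli Naïve Bayes model as the analogue of WEKA's "Naïve Bayes".

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB

# SPECT.train from the UCI repository (assumed local copy); the first
# column is the binary diagnosis, the rest are binary image features.
data = np.loadtxt("SPECT.train", delimiter=",", dtype=int)
X, y = data[:, 1:], data[:, 0]

scores = cross_val_score(BernoulliNB(), X, y, cv=10, scoring="roc_auc")
print(f"10-fold ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```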
Abstract:
This thesis presents the Fuzzy Monte Carlo Model for Transmission Power System Reliability studies (FMC-TRel) methodology, which is based on statistical failure and repair data of the transmission power system components and uses fuzzy-probabilistic modeling for system component outage parameters. Using statistical records allows developing the fuzzy membership functions of the system component outage parameters. The proposed hybrid method of fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models allows capturing both the randomness and fuzziness of component outage parameters. Once the system states are obtained, a network contingency analysis is performed to identify any overloading or voltage violation in the network. This is followed by a remedial action algorithm, based on Optimal Power Flow, to reschedule generation and alleviate constraint violations while avoiding any load curtailment if possible or, otherwise, minimizing the total load curtailment for the states identified by the contingency analysis. For the system states that cause load curtailment, an optimization approach is applied to reduce the probability of occurrence of these states while minimizing the costs to achieve that reduction. This methodology is of great importance for supporting the transmission system operator's decision making, namely in the identification of critical components and in the planning of future investments in the transmission power system. A case study based on the 1996 IEEE Reliability Test System (RTS) with 24 buses is presented to illustrate in detail the application of the proposed methodology.
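As an illustration of the first step, the sketch below derives a triangular fuzzy membership function for one component's outage rate from a made-up statistical failure record, in the spirit of the thesis's use of statistical records.

```python
import numpy as np

# Hypothetical failure log (failures per year) for one component.
failures = np.array([0.8, 1.1, 0.9, 1.4, 1.0, 1.2])
lo, mode, hi = failures.min(), float(np.median(failures)), failures.max()

def membership(rate):
    """Degree to which `rate` is a plausible outage rate (triangular form)."""
    if rate <= lo or rate >= hi:
        return 0.0
    return (rate - lo) / (mode - lo) if rate <= mode else (hi - rate) / (hi - mode)

print(membership(1.05))  # 1.0 at the mode of the fitted triangle
```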
Abstract:
In the proposed model, the independent system operator (ISO) provides the opportunity for maintenance outage rescheduling of generating units before each short-term (ST) time interval. Long-term (LT) scheduling 1 or 2 years in advance is essential for the ISO and the generation companies (GENCOs) to decide their LT strategies; however, it cannot be followed exactly and requires slight adjustments. The Cournot-Nash equilibrium is used to characterize the decision-making procedure of an individual GENCO for ST intervals, considering effective coordination with LT plans. Random inputs, such as the parameters of the loads' demand function, the hourly demand during the following ST time interval and the expected generation pattern of the rivals, are included as scenarios in the stochastic mixed-integer program defined to model the payoff-maximizing objective of a GENCO. Scenario reduction algorithms are used to deal with the computational burden. Two reliability test systems were chosen to illustrate the effectiveness of the proposed model for the ST decision-making process for future planned outages from the point of view of a GENCO.
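The scenario-reduction step mentioned above can be sketched as a toy backward-reduction pass: repeatedly delete the scenario whose probability-weighted distance to its nearest neighbour is smallest, handing its probability to that neighbour. The demand scenarios and probabilities below are invented.

```python
import numpy as np

def reduce_scenarios(scen, prob, keep):
    scen, prob = scen.copy(), prob.copy()
    alive = list(range(len(scen)))
    while len(alive) > keep:
        # Pick the cheapest scenario to delete (smallest prob-weighted distance).
        cost, i = min(
            (prob[i] * min(np.linalg.norm(scen[i] - scen[j]) for j in alive if j != i), i)
            for i in alive
        )
        j = min((k for k in alive if k != i), key=lambda k: np.linalg.norm(scen[i] - scen[k]))
        prob[j] += prob[i]  # the nearest surviving neighbour inherits the mass
        alive.remove(i)
    return scen[alive], prob[alive]

scen = np.array([[100.0], [102.0], [140.0], [90.0]])  # hourly-demand scenarios
prob = np.array([0.3, 0.3, 0.2, 0.2])
print(reduce_scenarios(scen, prob, keep=2))
```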
Abstract:
The paper formulates a genetic algorithm that evolves two types of objects in a plane. The fitness function promotes a relationship between the objects that is optimal when some kind of interface between them occurs. Furthermore, the algorithm adopts a hexagonal tessellation of the two-dimensional space to provide an efficient method of neighbour modelling. The genetic algorithm produces special patterns resembling those revealed in percolation phenomena or in the symbiosis found in lichens. Besides the analysis of the spatial layout, the time evolution is modelled by adopting a distance measure and modelling in the Fourier domain from the perspective of fractional calculus. The results reveal a consistent, and easy to interpret, set of model parameters for distinct operating conditions.
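A schematic stand-in for the interface-promoting fitness on a hexagonal tessellation is sketched below, using axial coordinates for the six-neighbour structure; a mutation-only (1+1) evolutionary loop replaces the full genetic algorithm, whose encoding and operators are not specified in the abstract.

```python
import random

HEX_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]  # axial neighbours
SIZE = 8

def fitness(grid):
    """Count adjacent cell pairs holding different object types (A/B interfaces)."""
    score = sum(
        1
        for (q, r), kind in grid.items()
        for dq, dr in HEX_DIRS
        if grid.get((q + dq, r + dr)) not in (None, kind)
    )
    return score // 2  # each interface is seen from both sides

rng = random.Random(1)
grid = {(q, r): rng.choice("AB") for q in range(SIZE) for r in range(SIZE)}

for _ in range(2000):  # mutation-only loop standing in for the full GA
    cell = rng.choice(list(grid))
    before = fitness(grid)
    grid[cell] = "A" if grid[cell] == "B" else "B"
    if fitness(grid) < before:
        grid[cell] = "A" if grid[cell] == "B" else "B"  # revert a bad flip
print(fitness(grid))
```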
Abstract:
To avoid additional hardware deployment, indoor localization systems have to be designed so that they rely on existing infrastructure only. Besides processing measurements between nodes, the localization procedure can incorporate all available environment information. To enhance the performance of Wi-Fi based localization systems, the innovative solution presented in this paper also considers negative information. An indoor tracking method inspired by Kalman filtering is also proposed.
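The Kalman-inspired tracking step can be pictured with a scalar predict/update cycle as below; the noise values are invented, and the negative-information term (an access point that should be heard but is not) is only noted in a comment rather than modelled.

```python
def kalman_step(x, p, z, q=0.5, r=4.0):
    """One predict/update cycle for a single coordinate: x, p are the state
    estimate and its variance, z is a Wi-Fi position fix (metres)."""
    p += q              # predict: uncertainty grows between fixes
    k = p / (p + r)     # Kalman gain
    x += k * (z - x)    # correct towards the measurement
    p *= (1 - k)
    # Negative information (an expected AP not heard) could enter here as an
    # extra low-confidence measurement; it is omitted in this sketch.
    return x, p

x, p = 0.0, 10.0
for z in [1.2, 0.8, 1.5, 1.1]:  # invented noisy position fixes
    x, p = kalman_step(x, p, z)
print(f"estimate: {x:.2f} m (variance {p:.2f})")
```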
Abstract:
Consider the problem of assigning real-time tasks on a heterogeneous multiprocessor platform comprising two different types of processors; such a platform is referred to as a two-type platform. We present two linearithmic time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a given task set, if there exists a feasible task-to-processor-type assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type, then (i) SA is guaranteed to find such a feasible task-to-processor-type assignment, with the same restriction on task migration, given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a feasible task-to-processor assignment where tasks are not allowed to migrate between processors, given a platform in which processors are 1+α times faster, where 0<α≤1. The parameter α is a property of the task set: it is the maximum utilization of any task, which is less than or equal to 1.
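To make the guarantee concrete, the parameter α and the two speedup factors can be computed directly from a task set, as in this small sketch with invented utilizations (the SA/SA-P algorithms themselves are not reproduced):

```python
# Invented task utilisations on a two-type platform (each at most 1).
tasks = [0.6, 0.4, 0.9, 0.3, 0.2]
alpha = max(u for u in tasks if u <= 1)  # property of the task set: 0.9
print(f"SA needs processors {1 + alpha / 2:.2f}x faster; "
      f"SA-P needs {1 + alpha:.2f}x faster")
```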
Abstract:
In this paper we discuss the challenges and design principles of an implementation of slot-based task-splitting algorithms in the Linux 2.6.34 kernel. We show that this kernel version provides the features required for implementing such scheduling algorithms, and that the real behavior of the scheduling algorithm is very close to the theoretical one. We run and discuss experiments on 4-core and 24-core machines.
Abstract:
Variations of manufacturing process parameters and environmental aspects may affect the quality and performance of composite materials, which consequently affects their structural behaviour. Reliability-based design optimisation (RBDO) and robust design optimisation (RDO) search for safe structural systems with minimal variability of response when subjected to uncertainties in material design parameters. An approach that simultaneously considers reliability and robustness is proposed in this paper. Depending on a given reliability index imposed on composite structures, a trade-off is established between the performance targets and robustness. Robustness is expressed in terms of the coefficient of variation of the constrained structural response weighted by its nominal value. The normed Pareto front is built and the point nearest to the origin is selected as the best solution of the bi-objective optimisation problem.
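The final selection step described above (normalising the two objectives and taking the Pareto point nearest the origin) can be sketched as follows, with illustrative candidate values:

```python
import numpy as np

# Illustrative (weight, coefficient-of-variation) candidates on the front.
front = np.array([[0.90, 0.30], [0.70, 0.10], [0.50, 0.05]])
normed = front / np.linalg.norm(front, axis=0)  # norm each objective column
best = front[np.argmin(np.linalg.norm(normed, axis=1))]
print(f"selected design: weight={best[0]}, CoV={best[1]}")  # nearest to origin
```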