39 results for Service Failure
at Instituto Politécnico do Porto, Portugal
Abstract:
This paper presents a methodology supported by the knowledge discovery in databases (KDD) process, in order to find the failure probability of electrical equipment belonging to a real high-voltage electrical network. Data Mining (DM) techniques are used to discover a set of outcome failure probabilities and, therefore, to extract knowledge concerning the unavailability of electrical equipment such as power transformers and high-voltage power lines. The framework includes several steps: the analysis of the real database, data pre-processing, the application of DM algorithms and, finally, the interpretation of the discovered knowledge. To validate the proposed methodology, a case study based on real databases is used. These data carry heavy uncertainty due to climate conditions; for this reason, fuzzy logic was used to determine the set of electrical-component failure probabilities needed to re-establish the service. The results reflect the interesting potential of this approach and encourage further research on the topic.
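A minimal sketch of the fuzzy-logic step described above, assuming invented triangular membership functions and calibration bounds (the paper's actual rule base and equipment model are not given here):

```python
# Hedged sketch: map uncertain weather readings to a failure-probability
# estimate for one equipment class. Membership shapes, rule weights and the
# probability bounds are illustrative assumptions, not the paper's model.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def failure_probability(wind_kmh, rain_mm):
    # Fuzzify inputs (hypothetical calibration).
    wind_high = tri(wind_kmh, 40, 80, 120)
    rain_heavy = tri(rain_mm, 10, 30, 60)
    # Rule: severe weather raises the outage likelihood (OR of antecedents).
    severity = max(wind_high, rain_heavy)
    # Defuzzify to a probability between assumed base and worst-case values.
    base, worst = 0.01, 0.35
    return base + severity * (worst - base)

print(failure_probability(wind_kmh=95, rain_mm=25))  # ~0.27 for this input
```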
Abstract:
Power systems operation in a liberalized environment requires that market players have access to adequate decision support tools, allowing them to consider all the business opportunities and take strategic decisions. Ancillary services represent a good negotiation opportunity that must be considered by market players. For this, decision support tools must include ancillary services market simulation. This paper deals with ancillary services negotiation in electricity markets. The proposed concepts and methodologies are implemented in MASCEM, a multi-agent based electricity market simulator. A test case concerning the dispatch of ancillary services using two different methods (Linear Programming and Genetic Algorithm approaches) is included in the paper.
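To make the Linear Programming variant concrete, here is a hedged sketch of dispatching one ancillary service (e.g. secondary reserve) as a cost-minimising LP; the bid figures are invented and MASCEM's real formulation may add ramp or network constraints:

```python
# Minimise the total cost of accepted reserve bids while procuring the
# required amount; bounds cap each bid at its offered capacity.
from scipy.optimize import linprog

prices = [12.0, 15.5, 18.0, 22.0]      # EUR/MW per bid (hypothetical)
caps   = [30.0, 25.0, 40.0, 20.0]      # offered MW per bid
required = 70.0                        # MW of reserve to procure

res = linprog(
    c=prices,                          # minimise sum(price_i * mw_i)
    A_ub=[[-1, -1, -1, -1]],           # -sum(mw) <= -required <=> sum >= required
    b_ub=[-required],
    bounds=[(0, cap) for cap in caps],
)
print(res.x)   # accepted MW per bid, cheapest first: [30, 25, 15, 0]
```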
Abstract:
Distributed generation, unlike centralized electrical generation, aims to generate electrical energy on a small scale, as near as possible to load centers, interchanging electric power with the network. This work presents a probabilistic methodology conceived to assist electric system planning engineers in the selection of the distributed generation location, taking into account the hourly load changes, i.e. the daily load cycle. The hourly load centers, for each of the different hourly load scenarios, are calculated deterministically. These location points, properly weighted according to their load magnitude, are used to fit the best probability distribution. This distribution is used to determine the maximum-likelihood perimeter of the area where each distributed generation source point should preferably be located by the planning engineers. This takes into account, for example, the availability and the cost of the land lots, which are factors of special relevance in urban areas, as well as several obstacles important for the final selection of the candidate distributed generation points. The proposed methodology has been applied to a real case, assuming three different bivariate probability distributions: the Gaussian distribution, a bivariate version of Freund's exponential distribution and the Weibull probability distribution. The methodology's algorithm has been programmed in MATLAB. Results for the application of the methodology to a realistic case are presented and discussed, and demonstrate its ability to efficiently determine the best location of the distributed generation and the corresponding distribution networks.
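The paper's implementation is in MATLAB; the following numpy transcription sketches only the Gaussian variant of the core steps, with invented coordinates and loads: weight the hourly load centers by load magnitude, fit a bivariate Gaussian, and trace the maximum-likelihood perimeter as a chi-square confidence ellipse.

```python
import numpy as np

centers = np.array([[2.1, 3.4], [2.6, 3.1], [1.9, 3.8], [2.4, 3.5]])  # km, per hourly scenario
loads   = np.array([80.0, 120.0, 95.0, 60.0])                         # MW weights

w = loads / loads.sum()
mu = w @ centers                                   # weighted mean location
diff = centers - mu
cov = (w[:, None] * diff).T @ diff                 # weighted covariance

# Points on the 90% likelihood ellipse (chi-square, 2 dof: 4.605).
theta = np.linspace(0, 2 * np.pi, 100)
circle = np.stack([np.cos(theta), np.sin(theta)])
L = np.linalg.cholesky(cov * 4.605)
ellipse = mu[:, None] + L @ circle                 # candidate siting perimeter
print(mu, cov)
```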
Abstract:
Cloud computing is increasingly being adopted in different scenarios, such as social networking, business applications, and scientific experiments. Relying on virtualization technology, the construction of these computing environments targets improvements in the infrastructure, such as power efficiency and fulfillment of users' SLA specifications. The methodology usually applied is to pack all the virtual machines onto the proper physical servers. However, failure occurrences in these networked computing systems can induce a substantial negative impact on system performance, deviating the system from our initial objectives. In this work, we propose adapted algorithms to dynamically map virtual machines to physical hosts, in order to improve cloud infrastructure power efficiency with low impact on the users' required performance. Our decision-making algorithms leverage proactive fault-tolerance techniques to deal with system failures, allied with virtual machine technology to share node resources in an accurate and controlled manner. The results indicate that our algorithms perform better at targeting power efficiency and SLA fulfillment in the face of cloud infrastructure failures.
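An illustrative sketch of this kind of placement, not the paper's exact algorithm: first-fit-decreasing consolidation for power efficiency, skipping hosts whose predicted failure probability exceeds a threshold, in the spirit of proactive fault tolerance. All host and VM figures are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: int            # CPU units
    p_fail: float            # predicted failure probability (from monitoring)
    used: int = 0
    vms: list = field(default_factory=list)

def place(vms, hosts, max_p_fail=0.05):
    """First-fit decreasing over hosts deemed reliable enough."""
    placement = {}
    safe = sorted((h for h in hosts if h.p_fail <= max_p_fail),
                  key=lambda h: h.p_fail)
    for vm_name, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for h in safe:
            if h.used + demand <= h.capacity:
                h.used += demand
                h.vms.append(vm_name)
                placement[vm_name] = h.name
                break
        else:
            placement[vm_name] = None   # would trigger scale-out in practice
    return placement

hosts = [Host("h1", 16, 0.01), Host("h2", 16, 0.20), Host("h3", 8, 0.03)]
print(place({"vm1": 8, "vm2": 6, "vm3": 4}, hosts))
```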
Abstract:
Book subtitle: International Conference, CENTERIS 2010, Viana do Castelo, Portugal, October 20-22, 2010, Proceedings, Part II
Abstract:
Currently, given the socio-economic conditions that companies are going through, it is important to maximize both material and human resources. This awareness leads companies, more and more, to try to have their employees play an important role in the decision process. Increasingly, the difference between success and failure depends on the strategy each company chooses to pursue. Thus, each activity performed by an employee should be aligned with the company's strategic objectives. This thesis is grounded in a survey of the various existing multi-criteria methods, so that an awarded service can be executed transparently and efficiently, without ever neglecting its optimization. The decision-support method chosen was the Analytic Hierarchy Process (AHP). The need to provide decision-makers/managers with the best solution resulting from the application of a decision-support method in an energy services company was the basis for the choice of this thesis topic. From the results obtained, it is concluded that the application of the AHP method was adequate, meeting all the objectives initially proposed. It was also possible to verify the benefits arising from its application, which by themselves helped to show the need for greater mutual support and consensus in the decisions to be made.
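Since the thesis applies AHP, a minimal numerical sketch of its core may help: derive criterion weights from a pairwise-comparison matrix via the principal eigenvector and check judgement consistency. The example criteria and matrix are invented, not taken from the thesis.

```python
import numpy as np

A = np.array([[1.0, 3.0, 5.0],       # cost vs quality, cost vs lead time
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])      # reciprocal pairwise judgements

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()             # priority vector, sums to 1

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
cr = ci / 0.58                            # random index RI = 0.58 for n = 3
print(weights, cr)                        # CR < 0.10 => judgements acceptable
```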
Abstract:
Master's in Electrical Engineering – Electrical Power Systems
Abstract:
OBJECTIVE: Bacillus Calmette-Guérin (BCG) immunotherapy is the gold standard treatment for superficial bladder tumors with intermediate/high risk of recurrence or progression. However, approximately 30% of patients fail to respond to the treatment. Effective BCG therapy needs precise activation of the type 1 helper cells immune pathway. Tumor-associated macrophages (TAMs) often assume an immunoregulatory M2 phenotype and may directly interfere with the BCG-induced antitumor immune response. Thus, we aim to clarify the influence of TAMs, in particular of the M2 phenotype in stroma and tumor areas, on BCG treatment outcome. PATIENTS AND METHODS: The study included 99 patients with bladder cancer treated with BCG. Tumors resected before treatment were evaluated using immunohistochemistry for CD68 and CD163 antigens, which identify a lineage macrophage marker and an M2-polarized specific cell surface receptor, respectively. CD68+ and CD163+ macrophages were evaluated within the stroma and tumor areas, and spots with a high density of infiltrating cells were selected for counting. Hypoxia, an event known to modulate macrophage phenotype, was also assessed through hypoxia-inducible factor (HIF)-1α expression. RESULTS: Patients in whom BCG failed had high stroma-predominant CD163+ macrophage counts (high stroma but low tumor CD163+ macrophage counts) when compared with the ones with a successful treatment (71% vs. 47%, P = 0.017). Furthermore, patients presenting this phenotype showed decreased recurrence-free survival (log rank, P = 0.008) and a clear 2-fold increased risk of BCG treatment failure was observed in univariate analysis (hazard ratio = 2.343; 95% CI: 1.197-4.587; P = 0.013). Even when adjusted for potential confounders, such as age and therapeutic scheme, multivariate analysis revealed a 2.6-fold increased risk of recurrence (hazard ratio = 2.627; 95% CI: 1.340-5.150; P = 0.005). High stroma-predominant CD163+ macrophage counts were also associated with low expression of HIF-1α in tumor areas, whereas high counts of CD163+ in the tumor presented high expression of HIF-1α in tumor nests. CONCLUSIONS: TAMs evaluation using CD163 is a good indicator of BCG treatment failure. Moreover, elevated infiltration of CD163+ macrophages, predominantly in stroma areas but not in the tumor, may be a useful indicator of BCG treatment outcome, possibly owing to their immunosuppressive phenotype.
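For readers wanting to reproduce this style of survival analysis, here is a hedged sketch using the lifelines package: a log-rank test and a Cox model on recurrence-free survival, stratified by the stroma-predominant CD163+ phenotype. The data frame holds synthetic stand-ins, not the 99-patient cohort.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":     [6, 14, 30, 9, 24, 36, 5, 18],    # time to recurrence/censoring
    "recurred":   [1, 1, 0, 1, 0, 0, 1, 1],         # 1 = BCG failure observed
    "cd163_high": [1, 1, 0, 1, 0, 0, 1, 0],         # stroma-predominant CD163+
    "age":        [66, 71, 58, 74, 62, 55, 69, 63],
})

high, low = df[df.cd163_high == 1], df[df.cd163_high == 0]
print(logrank_test(high.months, low.months,
                   high.recurred, low.recurred).p_value)

cph = CoxPHFitter().fit(df, duration_col="months", event_col="recurred")
print(cph.summary["exp(coef)"])   # hazard ratios, cf. HR ~= 2.6 in the paper
```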
Abstract:
Resource constraints are becoming a problem as many wireless mobile devices become increasingly general-purpose. Our work tries to address this growing demand on resources and performance by proposing the dynamic selection of neighbor nodes for cooperative service execution. This selection is influenced by the user's quality-of-service requirements expressed in his request, tailoring the provided service to the user's specific needs. In this paper we improve our proposal's formulation algorithm with the ability to trade off time for the quality of the solution. At any given time, a complete solution for service execution exists, and the quality of that solution is expected to improve over time.
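An anytime-style sketch of that idea, under assumed scoring and a random local move (the paper's actual formulation algorithm is not reproduced here): keep a complete assignment of service tasks to neighbor nodes at all times, and keep improving it while the time budget lasts.

```python
import random, time

def quality(assignment, qos):
    """Higher is better: reward nodes that fit the user's QoS (assumed score)."""
    return sum(qos[node] for node in assignment.values())

def anytime_select(tasks, nodes, qos, budget_s=0.05):
    # Start from any complete solution so an answer always exists.
    best = {t: random.choice(nodes) for t in tasks}
    best_q = quality(best, qos)
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        cand = dict(best)
        cand[random.choice(tasks)] = random.choice(nodes)   # local move
        q = quality(cand, qos)
        if q > best_q:
            best, best_q = cand, q
    return best, best_q

qos = {"n1": 0.9, "n2": 0.6, "n3": 0.3}     # per-node QoS fitness (invented)
print(anytime_select(["decode", "render"], list(qos), qos))
```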
Abstract:
Smartphones and other internet-enabled devices are now common in our everyday life, so unsurprisingly a current trend is to adapt desktop PC applications to execute on them. However, since most of these applications have quality-of-service (QoS) requirements, their execution on resource-constrained mobile devices presents several challenges. One solution to support more stringent applications is to offload some of the applications' services to surrogate devices nearby. Therefore, in this paper, we propose an adaptable offloading mechanism which takes into account the QoS requirements of the application being executed (particularly its real-time requirements), whilst allowing the offloading of services to several surrogate nodes. We also present how the proposed computing model can be implemented in an Android environment.
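A back-of-the-envelope version of the offloading decision implied above, with all figures assumed: offload a service to a surrogate only if shipping the input plus remote execution still meets the application's real-time deadline and beats local execution.

```python
def should_offload(input_bytes, local_ms, remote_ms, bandwidth_mbps, deadline_ms):
    transfer_ms = input_bytes * 8 / (bandwidth_mbps * 1000)   # link time in ms
    remote_total = transfer_ms + remote_ms
    # Offload only when it both meets the deadline and beats local execution.
    return remote_total <= deadline_ms and remote_total < local_ms

# 200 kB frame, slow local CPU, fast surrogate, 50 Mbit/s Wi-Fi link -> True
print(should_offload(200_000, local_ms=120, remote_ms=25,
                     bandwidth_mbps=50, deadline_ms=100))
```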
Abstract:
Wireless sensor networks (WSNs) emerge as underlying infrastructures for new classes of large-scale networked embedded systems. However, WSN system designers must fulfill the quality-of-service (QoS) requirements imposed by the applications (and users). Very harsh and dynamic physical environments and extremely limited energy/computing/memory/communication node resources are major obstacles to satisfying QoS metrics such as reliability, timeliness, and system lifetime. The limited communication range of WSN nodes, link asymmetry, and the characteristics of the physical environment lead to a major source of QoS degradation in WSNs: the "hidden-node problem". In wireless contention-based medium access control (MAC) protocols, when two nodes that are not visible to each other transmit to a third node that is visible to both, there will be a collision, called a hidden-node or blind collision. This problem greatly impacts network throughput, energy efficiency and message transfer delays, and it dramatically increases with the number of nodes. This paper proposes H-NAMe, a very simple yet extremely efficient hidden-node avoidance mechanism for WSNs. H-NAMe relies on a grouping strategy that splits each cluster of a WSN into disjoint groups of non-hidden nodes, and scales to multiple clusters via a cluster-grouping strategy that guarantees no interference between overlapping clusters. Importantly, H-NAMe is instantiated in IEEE 802.15.4/ZigBee, currently the most widespread communication technologies for WSNs, with only minor add-ons and ensuring backward compatibility with the protocol standards. H-NAMe was implemented and exhaustively tested using an experimental test-bed based on off-the-shelf technology, showing that it increases network throughput and transmission success probability up to twice the values obtained without H-NAMe. H-NAMe's effectiveness was also demonstrated in a target-tracking application with mobile robots over a WSN deployment.
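To illustrate the grouping idea behind H-NAMe (this is a sketch of the concept, not the protocol itself): greedily partition a cluster into groups in which every pair of nodes hears each other, so intra-group transmissions cannot suffer hidden-node collisions. The `hears` visibility relation below is an assumed symmetric input.

```python
def group_non_hidden(nodes, hears):
    groups = []
    for n in nodes:
        for g in groups:
            if all(hears(n, m) for m in g):   # n is visible to the whole group
                g.append(n)
                break
        else:
            groups.append([n])                # start a new collision-free group
    return groups

# Toy topology: a-b-c in a line; a and c are hidden from each other.
links = {("a", "b"), ("b", "c")}
hears = lambda x, y: (x, y) in links or (y, x) in links
print(group_non_hidden(["a", "b", "c"], hears))   # [['a', 'b'], ['c']]
```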
Abstract:
Wireless sensor networks (WSNs) are one of today's most prominent instantiations of the ubiquitous computing paradigm. In order to achieve high levels of integration, WSNs need to be conceived considering requirements beyond the mere system functionality. While quality-of-service (QoS) is traditionally associated with bit/data rate, network throughput, message delay and bit/packet error rate, we believe that this concept is too strict, in the sense that these properties alone do not reflect the overall quality of service provided to the user/application. Other non-functional properties such as scalability, security or energy sustainability must also be considered in the system design. This paper identifies the most important non-functional properties that affect the overall quality of the service provided to the users, outlining their relevance, state of the art and future research directions.
Abstract:
In distributed soft real-time systems, maximizing the aggregate quality-of-service (QoS) is a typical system-wide goal, and addressing the problem through distributed optimization is challenging. Subtasks are subject to unpredictable failures in many practical environments, and this makes the problem much harder. In this paper, we present a robust optimization framework for maximizing the aggregate QoS in the presence of random failures. We introduce the notion of K-failure to bound the effect of random failures on schedulability. Using this notion we define the concept of K-robustness, which quantifies the degree of robustness of the QoS guarantee in a probabilistic sense. The parameter K helps to trade off achievable QoS against robustness. The proposed robust framework produces optimal solutions through distributed computations on the basis of Lagrangian duality, and we present some implementation techniques. Our simulation results show that the proposed framework can probabilistically guarantee sub-optimal QoS that remains feasible even in the presence of random failures.
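A toy dual-decomposition sketch in the spirit of that framework, assuming a concave utility and a single shared budget (K-robustness is not modelled here): maximise the aggregate QoS utility sum(log(1 + x_i)) subject to sum(x_i) <= C by subgradient ascent on the Lagrange multiplier. Each x_i update is purely local, which is what makes the computation distributable.

```python
C, n = 10.0, 4          # shared resource budget and number of subtasks
lam = 0.1               # Lagrange multiplier (price of the resource)
for _ in range(200):
    # Each node solves max_x log(1+x) - lam*x  =>  x = 1/lam - 1, clipped at 0.
    x = [max(0.0, 1.0 / lam - 1.0) for _ in range(n)]
    lam = max(1e-6, lam + 0.01 * (sum(x) - C))   # price update (subgradient)
print(x, sum(x))        # each x_i converges toward C/n = 2.5 at the optimum
```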
Abstract:
In this paper we propose a framework for the support of mobile applications with quality-of-service (QoS) requirements, such as voice or video; the framework is capable of running distributed, migration-capable, QoS-enabled applications on top of the Android operating system.
Abstract:
In heterogeneous environments, the diversity of resources among the devices may affect their ability to perform services with specific QoS constraints, and drive peers to group themselves in a coalition for cooperative service execution. The dynamic selection of peers should be influenced by the user's QoS requirements as well as local computation availability, tailoring the provided service to the user's specific needs. However, complex dynamic real-time scenarios may preclude computing optimal service configurations before execution. An iterative refinement approach with the ability to trade off deliberation time for the quality of the solution is proposed. We stress the importance of quickly finding a good initial solution and propose heuristic evaluation functions that optimise the rate at which the quality of the current solution improves as the algorithms are given more time to run.
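A two-phase sketch matching that argument, with an invented fitness table rather than the paper's heuristics: a greedy heuristic builds a good complete coalition configuration quickly, and hill-climbing refines it for as long as deliberation time allows.

```python
def greedy_initial(tasks, peers, fit):
    """Fast complete assignment: best-fitting peer per task."""
    return {t: max(peers, key=lambda p: fit[t][p]) for t in tasks}

def refine(assign, tasks, peers, fit, rounds=100):
    """Bounded hill-climbing over single-task reassignments."""
    score = lambda a: sum(fit[t][a[t]] for t in tasks)
    best, best_s = dict(assign), score(assign)
    for _ in range(rounds):
        improved = False
        for t in tasks:
            for p in peers:
                cand = dict(best); cand[t] = p
                if score(cand) > best_s:
                    best, best_s, improved = cand, score(cand), True
        if not improved:
            break                      # local optimum reached early
    return best, best_s

fit = {"transcode": {"p1": 0.8, "p2": 0.5}, "store": {"p1": 0.4, "p2": 0.9}}
tasks, peers = list(fit), ["p1", "p2"]
print(refine(greedy_initial(tasks, peers, fit), tasks, peers, fit))
```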