56 results for failure time data
Abstract:
The foreseen evolution of chip architectures towards a higher number of heterogeneous cores, with non-uniform memory and non-coherent caches, brings renewed attention to the use of Software Transactional Memory (STM) as an alternative to lock-based synchronisation. However, STM relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we defend the role of the transaction contention manager in reducing the number of transaction retries and in helping the real-time scheduler to ensure schedulability. For this purpose, the contention management policy should be aware of on-line scheduling information.
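A scheduling-aware policy of the kind argued for here can be illustrated with a minimal sketch; the earliest-deadline-wins rule and all names below are illustrative assumptions, not the paper's actual contention-management algorithm:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    task_id: int
    deadline: float   # absolute deadline of the task running the transaction
    retries: int = 0

def resolve_conflict(attacker: Transaction, victim: Transaction) -> Transaction:
    """Scheduling-aware contention manager (illustrative sketch only).

    Instead of a blind 'always abort the victim' policy, favour the
    transaction whose task has the earlier absolute deadline, so that
    retries are pushed onto the task with more slack. Returns the
    transaction allowed to proceed; the other is aborted and retried.
    """
    winner = attacker if attacker.deadline < victim.deadline else victim
    loser = victim if winner is attacker else attacker
    loser.retries += 1
    return winner

t_attacker = Transaction(task_id=1, deadline=0.050)
t_victim = Transaction(task_id=2, deadline=0.020)
assert resolve_conflict(t_attacker, t_victim) is t_victim  # earlier deadline wins
```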
Abstract:
Graphics processing units (GPUs) can today be used for computations that go beyond graphics, and such use can attain performance that is orders of magnitude greater than that of a normal processor. The software executing on a graphics processor is composed of a set of (often thousands of) threads which operate on different parts of the data and thereby jointly compute a result that is delivered to another thread executing on the main processor. Hence the response time of a thread executing on the main processor depends on the finishing time of the threads executing on the GPU. We therefore present a simple method for calculating an upper bound on the finishing time of threads executing on a GPU, in particular the NVIDIA Fermi. Developing such a method is nontrivial because threads executing on a GPU share hardware resources at a very fine granularity.
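The paper's analysis is not reproduced here, but the flavour of such a bound can be sketched under a strong simplifying assumption: thread blocks are dispatched in waves of at most `num_sms * blocks_per_sm` concurrent blocks, each taking at most a known worst-case time. All names and numbers below are illustrative:

```python
import math

def finish_time_upper_bound(num_blocks: int, num_sms: int,
                            blocks_per_sm: int, wcet_block: float) -> float:
    """Coarse upper bound on a GPU kernel's finishing time (sketch only,
    not the paper's actual analysis).

    Assumes at most num_sms * blocks_per_sm blocks run concurrently and
    every block takes at most wcet_block time even under worst-case
    sharing of SM resources.
    """
    concurrent = num_sms * blocks_per_sm
    waves = math.ceil(num_blocks / concurrent)
    return waves * wcet_block

# Example: 4096 blocks on a Fermi-class GPU with 14 SMs, 8 blocks per SM.
print(finish_time_upper_bound(4096, 14, 8, wcet_block=2.5e-4))
```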
Abstract:
In distributed soft real-time systems, maximizing the aggregate quality-of-service (QoS) is a typical system-wide goal, and addressing the problem through distributed optimization is challenging. Subtasks are subject to unpredictable failures in many practical environments, which makes the problem much harder. In this paper, we present a robust optimization framework for maximizing the aggregate QoS in the presence of random failures. We introduce the notion of K-failure to bound the effect of random failures on schedulability, and use it to define the concept of K-robustness, which quantifies the degree of robustness of the QoS guarantee in a probabilistic sense. The parameter K helps trade off achievable QoS against robustness. The proposed robust framework produces optimal solutions through distributed computations on the basis of Lagrangian duality, and we present some implementation techniques. Our simulation results show that the proposed framework can probabilistically guarantee sub-optimal QoS which remains feasible even in the presence of random failures.
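As a rough illustration of the Lagrangian-duality machinery (not the paper's exact formulation), a dual subgradient loop for a resource-coupled QoS maximization might look like the following; the utilities, demands and single capacity constraint are invented for the example:

```python
import numpy as np

def dual_decomposition(utilities, demands, capacity, steps=200, lr=0.05):
    """Dual subgradient sketch for maximizing sum_i u_i * x_i subject to
    sum_i d_i * x_i <= capacity, x_i in {0, 1} (illustrative of the
    Lagrangian-duality approach, not the paper's actual formulation).

    Each 'subtask' solves its own subproblem given the price lam; the
    price is updated from the aggregate constraint violation, which is
    what makes the computation distributable.
    """
    u, d = np.asarray(utilities, float), np.asarray(demands, float)
    lam = 0.0
    for _ in range(steps):
        # Local subproblem: activate iff marginal utility beats the price.
        x = (u - lam * d > 0).astype(float)
        # Price update from the coupling constraint (projected subgradient).
        lam = max(0.0, lam + lr * (d @ x - capacity))
    return x, lam

x, price = dual_decomposition([4.0, 3.0, 2.0, 1.5], [2.0, 2.0, 1.0, 1.0], capacity=3.0)
print(x, price)
```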
Abstract:
Wireless Sensor Networks (WSNs) are highly distributed systems in which resource allocation (bandwidth, memory) must be performed efficiently to provide a minimum acceptable Quality of Service (QoS) to the regions where critical events occur. In fact, if resources are statically assigned, independently of the location and instant of the events, these resources will certainly be misused. In other words, it is more efficient to dynamically grant more resources to sensor nodes affected by critical events, thus providing better network resource management and reducing end-to-end delays in event notification and tracking. In this paper, we discuss the use of a WSN management architecture based on the active network management paradigm to provide real-time tracking and reporting of dynamic events while ensuring efficient resource utilization. The active network management paradigm allows packets to transport not only data but also program scripts that are executed in the nodes to dynamically modify the operation of the network. This presumes the use of a runtime execution environment (middleware) in each node to interpret the script. We consider hierarchical WSN topologies (e.g. cluster-tree, two-tiered architectures), since they have been used to improve the timing performance of WSNs, as they support deterministic medium access control protocols.
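A minimal sketch of the paradigm, with an invented command set standing in for the middleware's real script language, might look like this:

```python
# Illustrative sketch of the active-network idea: a packet carries both
# data and a small script that the node's runtime environment interprets
# to reconfigure itself. The command set here is invented.

class SensorNode:
    def __init__(self):
        self.sampling_period_s = 10.0
        self.reserved_slots = 1

    # Whitelisted middleware commands the script may invoke.
    def set_period(self, seconds):
        self.sampling_period_s = float(seconds)

    def reserve_bandwidth(self, slots):
        self.reserved_slots = int(slots)

    def execute_script(self, script: str):
        """Interpret 'command arg' lines; unknown commands are ignored."""
        handlers = {"set_period": self.set_period,
                    "reserve_bandwidth": self.reserve_bandwidth}
        for line in script.splitlines():
            if not line.strip():
                continue
            cmd, *args = line.split()
            if cmd in handlers:
                handlers[cmd](*args)

packet = {"data": [21.4, 21.9],
          "script": "set_period 1\nreserve_bandwidth 4"}
node = SensorNode()
node.execute_script(packet["script"])  # node now samples faster near the event
```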
Abstract:
Master's in Civil Engineering - Construction Technology and Management Branch
Abstract:
The goal of this study is the analysis of the dynamical properties of financial data series from worldwide stock market indexes during the period 2000–2009. We analyze, under a regional criterion, ten main indexes at a daily time horizon. The methods and algorithms that have been explored for the description of dynamical phenomena provide an effective background for the analysis of economic data. We start by applying the classical concepts of signal analysis, the fractional Fourier transform, and methods of fractional calculus. In a second phase we adopt the multidimensional scaling approach. Stock market indexes are examples of complex interacting systems for which a huge amount of data exists. Therefore, these indexes, viewed from different perspectives, lead to new classification patterns.
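For the multidimensional scaling phase, a sketch of the kind of embedding involved (with synthetic return series standing in for the 2000–2009 data, and a common correlation-based distance as an assumption) could be:

```python
import numpy as np
from sklearn.manifold import MDS

# Embed stock indexes in 2-D from a correlation-based distance matrix.
# The return series here are synthetic stand-ins for the real daily data.
rng = np.random.default_rng(0)
names = ["IndexA", "IndexB", "IndexC", "IndexD"]
returns = rng.normal(size=(len(names), 2500))   # daily log-returns (synthetic)
corr = np.corrcoef(returns)
dist = np.sqrt(2.0 * (1.0 - corr))              # a common correlation distance

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
for name, (x, y) in zip(names, coords):
    print(f"{name}: ({x:+.3f}, {y:+.3f})")
```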
Abstract:
Most traditional software and database development approaches tend to be serial, not evolutionary, and certainly not agile, especially in data-oriented aspects. Most of the more commonly used methodologies are strict, meaning they are composed of several stages, each with very specific associated tasks. A clear example is the Rational Unified Process (RUP), divided into Business Modeling, Requirements, Analysis & Design, Implementation, Testing and Deployment. But what happens when the need for a well-designed and structured plan meets the reality of a small start-up company aiming to build an entire user-experience solution? Here resource control and time productivity are vital, requirements are in constant change, and so is the product itself. To succeed in this environment, a highly collaborative and evolutionary development approach is mandatory, and constantly changing requirements imply an iterative development process. The project focuses on Data Warehouse development and business modeling, which is usually a tricky area: business knowledge is internal to the enterprise, and how it works, its goals, and what is relevant for analysis are internal business processes. Throughout this document it is explained why Agile Modeling development was chosen, and how an iterative and evolutionary methodology allowed for reasonable planning and documentation while permitting development flexibility, from idea to product. More importantly, it shows how this was applied to the development of a retail-focused Data Warehouse: a productized Data Warehouse built on the knowledge of not one but several clients' needs, one that aims not just to store the usual business areas but to create an innovative set of business metrics by joining them with store-environment analysis, converting Business Intelligence into Actionable Business Intelligence.
Abstract:
Among the most important measures to prevent wild forest fires is the use of prescribed and controlled burning in order to reduce the availability of fuel mass. However, the impact of these activities on soil physical and chemical properties varies according to the types of soil and vegetation and is not fully understood. Therefore, soil monitoring campaigns are often used to measure these impacts. In this paper we have successfully used three statistical data treatments (the Kolmogorov-Smirnov test followed by the ANOVA and Kruskal-Wallis tests) to investigate the variability of the soil pH, soil moisture, soil organic matter and soil iron variables for different monitoring times and sampling procedures.
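A sketch of this workflow with SciPy, under the assumption that the normality check gates the choice between the parametric and non-parametric test, might look like the following (all numbers synthetic):

```python
import numpy as np
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """Sketch of the abstract's workflow: test each group for normality
    with Kolmogorov-Smirnov, then compare groups with one-way ANOVA if
    all look normal, or Kruskal-Wallis otherwise. The threshold and the
    standardization step are assumptions, not the paper's exact protocol.
    """
    normal = all(
        stats.kstest((g - np.mean(g)) / np.std(g, ddof=1), "norm").pvalue > alpha
        for g in groups
    )
    if normal:
        return "ANOVA", stats.f_oneway(*groups)
    return "Kruskal-Wallis", stats.kruskal(*groups)

# e.g. soil pH measured at three monitoring times (synthetic numbers):
t1 = np.array([5.1, 5.3, 5.0, 5.2])
t2 = np.array([4.8, 4.9, 5.0, 4.7])
t3 = np.array([5.4, 5.6, 5.5, 5.3])
test, result = compare_groups([t1, t2, t3])
print(test, result)
```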
Abstract:
The accuracy of the Navigation Satellite Timing and Ranging (NAVSTAR) Global Positioning System (GPS) measurements is insufficient for many outdoor navigation tasks. As a result, in the late nineties, a new methodology, the Differential GPS (DGPS), was developed. The differential approach is based on the calculation and dissemination of the range errors of the GPS satellites received. GPS/DGPS receivers combine the broadcast GPS data with the DGPS corrections, granting users increased accuracy. DGPS data can be disseminated using terrestrial radio beacons, satellites and, more recently, the Internet. Our goal is to provide mobile platforms within our campus with DGPS data for precise outdoor navigation. To achieve this objective, we designed and implemented a three-tier client/server distributed system that establishes Internet links with remote DGPS sources and performs campus-wide dissemination of the obtained data. The Internet links are established between data servers connected to remote DGPS sources and the client, which is the data input module of the campus-wide DGPS data provider. The campus DGPS data provider allows the establishment of both Intranet and wireless links within the campus. This distributed system is expected to provide adequate support for accurate (submetric) outdoor navigation tasks.
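A minimal sketch of the middle tier, pulling correction frames from a remote source over TCP and rebroadcasting them on the campus network over UDP, could look like this; host names, ports and framing are illustrative assumptions, not the system's actual protocol:

```python
import socket

# Illustrative relay: remote DGPS source in, campus-wide broadcast out.
REMOTE_SOURCE = ("dgps-source.example.org", 2101)   # assumed host and port
CAMPUS_BCAST = ("255.255.255.255", 5017)            # assumed broadcast port

def relay_corrections():
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    out.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    with socket.create_connection(REMOTE_SOURCE) as src:
        while True:
            frame = src.recv(1024)           # one correction frame (assumed)
            if not frame:
                break                        # remote source closed the link
            out.sendto(frame, CAMPUS_BCAST)  # fan out to campus receivers

if __name__ == "__main__":
    relay_corrections()
```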
Abstract:
This study aims to optimize the water quality monitoring of a polluted watercourse (Leça River, Portugal) through principal component analysis (PCA) and cluster analysis (CA). These statistical methodologies were applied to physicochemical, bacteriological and ecotoxicological data (with the marine bacterium Vibrio fischeri and the green alga Chlorella vulgaris) obtained from the analysis of water samples collected monthly at seven monitoring sites during five campaigns (February, May, June, August, and September 2006). The results of some variables were assigned to water quality classes according to national guidelines. Chemical and bacteriological quality data led to classifying the Leça River water quality as "bad" or "very bad". PCA and CA identified monitoring sites with a similar pollution pattern, distinguishing site 1 (located in the upstream stretch of the river) from all other sampling sites downstream. Ecotoxicity results corroborated this classification, revealing differences in space and time. The present study includes not only physical, chemical and bacteriological but also ecotoxicological parameters, which opens new perspectives in river water characterization. Moreover, the application of PCA and CA is very useful to optimize water quality monitoring networks, defining the minimum number of sites and their location. Thus, these tools can support appropriate management decisions.
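A sketch of the PCA-plus-clustering step (synthetic data standing in for the seven sites' standardized variables; Ward linkage is an assumption, as the paper's exact clustering settings are not given here) might be:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic matrix: 7 monitoring sites x 5 physicochemical variables.
rng = np.random.default_rng(1)
X = rng.normal(size=(7, 5))
Xs = StandardScaler().fit_transform(X)

scores = PCA(n_components=2).fit_transform(Xs)   # site scores on PC1/PC2
groups = fcluster(linkage(Xs, method="ward"), t=2, criterion="maxclust")
for site, (pc1, pc2), g in zip(range(1, 8), scores, groups):
    print(f"site {site}: PC1={pc1:+.2f} PC2={pc2:+.2f} cluster={g}")
```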
Abstract:
Master's in Informatics Engineering - Area of Specialization in Knowledge and Decision Technologies
Abstract:
The need for faster and more effective joining methods with better results has driven the growing use of adhesive joints over traditional joining methods such as welding, brazing, and bolted and riveted connections. The use of adhesive joints has been increasing in several industrial applications because of advantages such as weight reduction, reduction of stress concentrations and ease of manufacture. However, they also present disadvantages, such as the need for joint preparation and the eccentricity of the applied load, which causes bending effects that give rise to normal stresses in the adhesive thickness direction (peel stresses), thus affecting the joint's strength. Combining adhesive bonding with spot welding offers some advantages over traditional adhesive joints, such as higher strength, increased stiffness, better shear and peel resistance, and improved fatigue behaviour. This work presents an experimental and numerical study of adhesive and hybrid (adhesive-welded) single-lap joints. The adhesives used are Araldite AV138®, regarded as brittle, and Araldite 2015® and Sikaforce® 7752, regarded as ductile. Steel substrates (C45E) were considered in joints with different overlap lengths, which were subjected to tensile loading. The experimental values were analysed and compared with the results obtained by Finite Elements (FE) in the ABAQUS® software, which included a stress analysis in the adhesive layer and prediction of the joints' behaviour by Cohesive Zone Models (CZM). The CZM analysis made it possible to obtain the failure modes, the force-displacement curves and the joint strength with good accuracy, except for the joints bonded with the Sikaforce® 7752 adhesive. These results validated the proposed modelling technique for bonded and hybrid joints, providing a basis for the subsequent application of this technique in design, with the resulting advantages of reduced design time and easier optimization.
Abstract:
Current consumption-metering practice in electricity utilities is largely based on monthly consumption readings. The metering device continuously accumulates consumption, so the monthly consumption can be estimated as the difference between the current and the previous readings. Power systems planning, however, needs in many aspects to handle consumption data obtained for shorter periods, namely in Demand Response program planning. The work presented in this paper is based on the application of typical consumption profiles that are previously defined for a given power system area. Such profiles are then used to estimate the 15-minute consumption of a given consumer or consumer type.
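A minimal sketch of this profile-based estimation, with an invented flat-plus-evening-peak profile standing in for a real class profile, could be:

```python
import numpy as np

def disaggregate_month(monthly_kwh: float, profile: np.ndarray) -> np.ndarray:
    """Sketch of profile-based estimation: spread one monthly reading over
    15-minute intervals proportionally to a typical consumption profile
    for the consumer's class. The profile values below are illustrative.
    """
    weights = profile / profile.sum()
    return monthly_kwh * weights

# A 30-day month has 30 * 96 = 2880 fifteen-minute intervals; here a flat
# profile with a mild evening peak stands in for the class profile.
base = np.ones(2880)
base[72::96] *= 1.8          # assumed 18:00 peak in each day's 96 slots
estimate = disaggregate_month(monthly_kwh=250.0, profile=base)
print(estimate.sum())        # ~250.0: the estimate preserves the reading
```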
Abstract:
Harnessing the idle CPU cycles, storage space and other resources of networked computers for collaborative use is the main focus of all major grid computing research projects. Most university computer labs are nowadays equipped with powerful desktop PCs, and it is easy to see that most of the time these machines sit idle, their computing power wasted. However, complex problems and the analysis of huge amounts of data require large computational resources. For such problems, one may run the analysis algorithms on very powerful and expensive computers, which reduces the number of users that can afford such data analysis tasks. Instead of using single expensive machines, distributed computing systems offer the possibility of using a set of much less expensive machines to do the same task. The BOINC and Condor projects have been successfully used to support real scientific research around the world at low cost. The main goal of this work is to explore both distributed computing platforms, Condor and BOINC, and to use their potential to harness idle PC resources for academic researchers to use in their research work. In this thesis, data mining tasks were performed by implementing several machine learning algorithms on the distributed computing environment.
Abstract:
To guarantee that equipment operates safely and reliably throughout its life cycle, from installation to decommissioning, inspections and/or monitoring must be carried out which, depending on the data collected, may require Fitness-For-Service (FFS) assessments that will define the need to repair or modify the equipment or the process conditions. Combining inspection or monitoring with the best current assessment procedures and techniques exposes the shortcomings of older procedures. Using more advanced assessment methods, validated and supported by extensive field experience, it is now possible to evaluate defects in assets (equipment) and determine fitness for service with an FFS analysis. FFS analyses have become increasingly accepted across industry in recent years. The API 579-1/ASME FFS-1:2007 standard provides guidelines for assessing the types of damage that affect process equipment, and API RP 571:2011 describes the degradation mechanisms affecting equipment in petrochemical plants and refineries, including corrosion damage, misalignment, plastic deformation, laminations and cracks, among others. This work consists of the structural integrity analysis of an industrial flare, which arose from a real need to analyse the equipment within the scope of the candidate's professional activity. The study carried out at the professional level is broad in scope, including equipment inspection, failure or damage identification, data collection and recording, definition of an intervention strategy, and selection of condition-assessment techniques for subsequent modification, repair or decommissioning. This master's dissertation in Mechanical Engineering, Mechanical Constructions branch, aims at an in-depth study of one of the project stages, namely to study the cause, assess the failure and the structural or process implications of the internal degradation of a flare riser, based on an FFS analysis, assuming safe operability and guaranteeing sound operating conditions. The purpose of this FFS analysis is to validate (or not) the current integrity (a quantitative technical assessment) in order to determine whether the item in question is safe and reliable to continue operating under specific conditions for a given period of time, taking into account the conditions verified at the time of inspection and the design specifications, according to the safety verification items established in EN 1990:2002/A1:2005/AC:2010 and based on API 579-1/ASME FFS-1:2007. The following tasks were carried out within the scope of this work:
- Identify and analyse the degradation mechanisms, based on API RP 571:2011, in view of the process and geometric conditions to which the equipment under analysis is subjected, in order to understand the evolution of the structural degradation and its implications for the equipment's longevity.
- Assess and propose repair solutions and/or process or geometric modifications of the equipment under analysis, so as to allow continued operation without affecting safety conditions and thus minimize or avoid the high economic cost associated with new equipment and process downtime.
- Establish structural limits in view of the process conditions and external or adjacent actions.
- Guarantee the safety and quality of human life and the surrounding environment, of the equipment, the facilities and the surrounding infrastructure, and avoid economic collapse, whether for process reasons or through compensation claims or increases in insurance premiums.
The results will serve as the basis for the ensuing modifications, such as structural reinforcement, changes to the deflector geometry, adjustment of the cable tensioning and control of the minimum thickness, so as to safely extend the service life of the flare.
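As a generic illustration of the kind of thickness-based check involved in such an FFS assessment (not the dissertation's actual calculation; all numbers are invented), a linear corrosion-rate projection of remaining life might be:

```python
def remaining_life_years(t_initial_mm, t_current_mm, t_min_mm,
                         years_in_service):
    """Illustrative remaining-life estimate in the spirit of an FFS
    thickness check: derive a linear corrosion rate from two thickness
    measurements and project when the minimum required thickness will
    be reached. The numbers below are invented, not the flare's data.
    """
    rate = (t_initial_mm - t_current_mm) / years_in_service  # mm/year
    if rate <= 0:
        return float("inf")      # no measurable wall loss
    return (t_current_mm - t_min_mm) / rate

# e.g. riser wall: 9.5 mm nominal, 7.9 mm measured after 12 years,
# 6.0 mm minimum required thickness from the design check:
print(f"{remaining_life_years(9.5, 7.9, 6.0, 12):.1f} years")
```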