15 results for INTERNAL FAULT

at Instituto Politécnico do Porto, Portugal


Relevance:

30.00%

Publisher:

Abstract:

Dependability is a critical factor in computer systems, requiring high-quality validation and verification procedures during development. At the same time, digital devices are getting smaller and access to their internal signals and registers is increasingly complex, requiring innovative debugging methodologies. To address this issue, most recent microprocessors include an on-chip debug (OCD) infrastructure to facilitate common debugging operations. This paper proposes an enhanced OCD infrastructure with the objective of supporting the verification of fault-tolerant mechanisms through fault injection campaigns. This upgraded on-chip debug and fault injection (OCD-FI) infrastructure provides an efficient fault injection mechanism with improved capabilities and dynamic behavior. Preliminary results show that this solution provides flexibility in terms of fault triggering and allows high-speed real-time fault injection in memory elements.
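
The bit-flip faults injected by such an infrastructure can be illustrated with a minimal software sketch (hypothetical helper over a byte-addressable memory image; the real OCD-FI mechanism operates through the debug port in hardware):

```python
import random

def inject_bit_flip(memory, address=None, bit=None, rng=random):
    """Flip a single bit in a byte-addressable memory image.

    `memory` is a mutable list of byte values (0-255). When `address`
    or `bit` is omitted, a random target is chosen, mimicking a
    campaign that injects faults at arbitrary memory locations.
    """
    if address is None:
        address = rng.randrange(len(memory))
    if bit is None:
        bit = rng.randrange(8)
    memory[address] ^= 1 << bit  # XOR with a one-bit mask flips exactly that bit
    return address, bit

# Deterministic example: flip bit 5 of byte 3.
mem = [0x00] * 16
inject_bit_flip(mem, address=3, bit=5)
```

The XOR trigger is the essential idea; what the paper adds is hardware support for firing it at the right instant, without halting the target.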

Relevance:

20.00%

Publisher:

Abstract:

Small firms are a major player in development. Entrepreneurship is therefore frequently attached to these firms, and it must be present in the daily management of factors such as planning and cooperation. We intend to analyze these factors, comparing family and non-family businesses. This study was conducted in Vale do Sousa, a region in the north of Portugal. The results allow us to conclude that, even with some managerial differences, it was not possible to identify distinct patterns between the two groups. The main goal of this paper is to open research lines on important issues that distinguish family from non-family businesses.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents an architecture (Multi-μ) being implemented to study and develop software-based fault tolerance mechanisms for real-time systems, using the Ada language (Ada 95) and Commercial Off-The-Shelf (COTS) components. Several issues regarding fault tolerance are presented, and mechanisms to achieve fault tolerance by software active replication in Ada 95 are discussed. The Multi-μ architecture, based on a specifically proposed Fault Tolerance Manager (FTManager), is then described. Finally, some considerations are made about the work being done and essential future developments.

Relevance:

20.00%

Publisher:

Abstract:

On-chip debug (OCD) features are frequently available in modern microprocessors. Their contribution to shorten the time-to-market justifies the industry investment in this area, where a number of competing or complementary proposals are available or under development, e.g. NEXUS, CJTAG, IJTAG. The controllability and observability features provided by OCD infrastructures provide a valuable toolbox that can be used well beyond the debugging arena, improving the return on investment rate by diluting its cost across a wider spectrum of application areas. This paper discusses the use of OCD features for validating fault tolerant architectures, and in particular the efficiency of various fault injection methods provided by enhanced OCD infrastructures. The reference data for our comparative study was captured on a workbench comprising the 32-bit Freescale MPC-565 microprocessor, an iSYSTEM IC3000 debugger (iTracePro version) and the Winidea 2005 debugging package. All enhanced OCD infrastructures were implemented in VHDL and the results were obtained by simulation within the same fault injection environment. The focus of this paper is on the comparative analysis of the experimental results obtained for various OCD configurations and debugging scenarios.

Relevance:

20.00%

Publisher:

Abstract:

Fault injection is frequently used for the verification and validation of dependable systems. When targeting real-time microprocessor-based systems, the process becomes significantly more complex. This paper proposes two complementary solutions to improve the execution of real-time fault injection campaigns, both in terms of performance and capabilities. The methodology is based on the use of the on-chip debug mechanisms present in modern electronic devices. The main objective is the injection of faults in microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented and compared in terms of performance gain and logic overhead.

Relevance:

20.00%

Publisher:

Abstract:

As electronic devices get smaller and more complex, dependability assurance is becoming fundamental for many mission-critical computer-based systems. This paper presents a case study on the possibility of using the on-chip debug infrastructures present in most current microprocessors to execute real-time fault injection campaigns. The proposed methodology is based on a debugger customized for fault injection and designed for maximum flexibility, and consists of injecting bit-flip faults into memory elements without modifying or halting the target application. The debugger design is easily portable and applicable to different architectures, providing a flexible and efficient mechanism for verifying and validating fault-tolerant components.

Relevance:

20.00%

Publisher:

Abstract:

The rapid increase in the use of microprocessor-based systems in critical areas, where failures imply risks to human lives, to the environment or to expensive equipment, has significantly increased the need for dependable systems, able to detect, tolerate and eventually correct faults. The verification and validation of such systems is frequently performed via fault injection, using various forms and techniques. However, as electronic devices get smaller and more complex, controllability and observability issues, and sometimes real-time constraints, make it harder to apply most conventional fault injection techniques. This paper proposes a fault injection environment and a scalable methodology to assist the execution of real-time fault injection campaigns, providing enhanced performance and capabilities. The proposed solutions are based on the use of common and customized on-chip debug (OCD) mechanisms, present in many modern electronic devices, with the main objective of enabling the insertion of faults in microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented, starting from basic Commercial Off-The-Shelf (COTS) microprocessors equipped with real-time OCD infrastructures, to improved solutions based on modified interfaces and dedicated OCD circuitry that enhance fault injection capabilities and performance. All methodologies and configurations were evaluated and compared concerning performance gain and silicon overhead.

Relevance:

20.00%

Publisher:

Abstract:

To increase the amount of logic available to users in SRAM-based FPGAs, manufacturers are using nanometric technologies to boost logic density and reduce costs, making their use more attractive. However, these technological improvements also make FPGAs particularly vulnerable to configuration memory bit-flips caused by power fluctuations, strong electromagnetic fields and radiation. This issue is particularly sensitive because of the increasing number of configuration memory cells needed to define their functionality. A short survey of the most recent publications is presented to support the options assumed during the definition of a framework for implementing circuits immune to bit-flip induction mechanisms in memory cells, based on a customized redundant infrastructure and on a detection-and-fix controller.
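
A detection-and-fix controller over a triplicated resource can be sketched as follows (an illustrative Python model of the vote-and-scrub idea; the framework above targets FPGA configuration memory in hardware):

```python
def tmr_vote(a, b, c):
    """Majority vote over three redundant copies.

    Returns (voted_value, index_of_disagreeing_copy); the index is
    None when all three copies agree.
    """
    if a == b:
        return a, (None if a == c else 2)
    if a == c:
        return a, 1
    if b == c:
        return b, 0
    raise RuntimeError("no majority: more than one copy was upset")

def scrub(copies):
    """Detection-and-fix pass: rewrite any disagreeing copy with the voted value."""
    value, faulty = tmr_vote(*copies)
    if faulty is not None:
        copies[faulty] = value  # repair the bit-flipped copy
    return value
```

Voting masks a single upset immediately, while scrubbing restores full redundancy before a second upset can defeat the majority.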

Relevance:

20.00%

Publisher:

Abstract:

Fault injection is frequently used for the verification and validation of the fault-tolerant features of microprocessors. This paper proposes the modification of a common on-chip debug (OCD) infrastructure to add fault injection capabilities and improve performance. The proposed solution imposes a very low logic overhead and provides a flexible and efficient mechanism for the execution of fault injection campaigns, being applicable to different target system architectures.

Relevance:

20.00%

Publisher:

Abstract:

Given the significant impact that cultural events may have on local communities, and their inherent organizational complexity, it is important to understand their specificities. Most of the time, cultural events disregard marketing, and marketing is often distant from art. An analysis from an inside perspective might therefore bring significant returns to the organization of such an event. This paper considers the three editions (2011, 2012 and 2013) of a cultural event, Noc Noc, organized by a local association in the city of Guimarães, Portugal. Its format is based on analogous events: Noc Noc intends to convert everyday spaces (homes, commercial outlets and a number of other buildings) into cultural spaces, processed and transformed by artists, hosts and audiences. By interviewing a sample of 20 people who have hosted this cultural event, sometimes doubling as artists, and by experiencing its three editions, this paper illustrates how the internal public understands this particular cultural event, analyzing specifically their motivations, ways of acting and participating, and their relationship with the public, with the organization of the event and with art in general. Results support that artists' and hosts' motivations, as well as their views of this particular cultural event, must be identified at a timely and appropriate moment in order to keep them participating, since low-budget cultural events such as this one may play a key role in small-scale cities.

Relevance:

20.00%

Publisher:

Abstract:

Activity rhythms in animal groups arise both from external changes in the environment and from internal group dynamics. These cycles are reminiscent of physical and chemical systems with quasiperiodic and even chaotic behavior resulting from "autocatalytic" mechanisms. We use nonlinear differential equations to model how the coupling between the self-excitatory interactions of individuals and external forcing can produce four different types of activity rhythms: quasiperiodic, chaotic, phase-locked, and displaying overshooting or undershooting. At the transition between quasiperiodic and chaotic regimes, activity cycles are asymmetrical, with rapid activity increases and slower decreases, and a phase shift between external forcing and activity. We find similar activity patterns in ant colonies in response to varying temperature during the day. Thus, foraging ants operate in a region of quasiperiodicity close to a cascade of transitions leading to chaos. The model suggests that a wide range of temporal structures and irregularities seen in the activity of animal and human groups might be accounted for by the coupling between collectively generated internal clocks and external forcings.
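
The coupling between self-excitation and external forcing can be reproduced with a toy model (the equation below is an illustrative self-excitatory form with sinusoidal forcing, not the authors' exact system; all parameter names and values are assumptions):

```python
import math

def simulate(r=2.0, d=1.0, F=0.3, T=24.0, dt=0.01, steps=5000, a0=0.1):
    """Euler integration of a toy self-excitatory activity model
    with periodic external forcing:

        da/dt = r*a*(1 - a) - d*a + F*(1 + sin(2*pi*t/T))

    r: self-excitation rate, d: decay rate, F: forcing amplitude,
    T: forcing period (e.g. a daily temperature cycle).
    """
    a, t, trace = a0, 0.0, []
    for _ in range(steps):
        forcing = F * (1.0 + math.sin(2.0 * math.pi * t / T))
        a += dt * (r * a * (1.0 - a) - d * a + forcing)
        t += dt
        trace.append(a)
    return trace
```

Sweeping `F` and `T` against the internal time scales set by `r` and `d` is how one would probe the quasiperiodic, phase-locked and chaotic regimes the abstract describes.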

Relevance:

20.00%

Publisher:

Abstract:

Introduction: In handball, the shoulder is elevated beyond 90º and moves at high execution speed, which may cause anterior displacement of the humeral head and reduced medial rotation. The MWM (Mobilization With Movement) technique may be an asset in correcting the positional fault and restoring the medial rotation range of motion of the glenohumeral joint. Objective: This study aimed to verify the immediate effects of the MWM technique on the medial rotation range of motion of the glenohumeral joint in handball players. Methods: This double-blind, experimental study included 30 male handball players, randomly assigned to two groups of 15, experimental and control. In both groups, glenohumeral medial rotation range of motion was assessed at two moments, pre- and post-intervention. The experimental group was submitted to the MWM technique during medial rotation of the glenohumeral joint of the dominant limb. The control group was asked to perform active medial rotation of the dominant limb; the physiotherapist kept the same manual contacts but applied no pressure on the humeral head. The Mann-Whitney test was used for comparisons between the experimental and control groups, and the Wilcoxon test was used to analyze differences between the two moments within each group. Results: Significant differences were found in both the experimental and control groups, but the difference was greater in the experimental group. After the intervention, the experimental group showed significantly higher glenohumeral medial rotation ranges than the control group (U=0.50; p<0.001). Conclusion: The MWM technique for medial rotation produced a significant increase in the range of that movement.
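
The between-group comparison uses the Mann-Whitney U statistic, which can be computed directly from its pairwise definition (a minimal sketch; in practice a statistics package also supplies the p-value and tie corrections):

```python
def mann_whitney_u(x, y):
    """U statistic for sample x versus sample y.

    Counts, over all pairs (xi, yj), how often xi exceeds yj;
    ties count as 1/2. No p-value is computed here.
    """
    return sum(1.0 if xi > yj else (0.5 if xi == yj else 0.0)
               for xi in x for yj in y)
```

A U close to its minimum of zero, like the U=0.50 reported here, indicates near-complete separation of the two groups' measurements.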

Relevance:

20.00%

Publisher:

Abstract:

Electricity markets are not only a new reality but an evolving one, as the involved players and rules change at a relatively high rate. Multi-agent simulation combined with Artificial Intelligence techniques may result in very helpful, sophisticated tools. This paper presents a new methodology for the management of coalitions in electricity markets. This approach is tested using the multi-agent market simulator MASCEM (Multi-Agent Simulator of Competitive Electricity Markets), taking advantage of its ability to model and simulate Virtual Power Players (VPP). VPPs are represented as coalitions of agents, with the capability of negotiating both in the market and internally with their members, in order to combine and manage their individual characteristics and goals with the strategy and objectives of the VPP itself. A case study using real data from the Iberian Electricity Market is performed to validate and illustrate the proposed approach.
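
The coalition idea, members negotiating internally while the VPP acts as a single market player, can be sketched as follows (a hypothetical `VPP` class; MASCEM's real agent model and negotiation protocols are far richer):

```python
class VPP:
    """Toy virtual power player aggregating member offers into one market bid.

    Each member is a (capacity_MW, marginal_cost) pair.
    """
    def __init__(self, members):
        self.members = members

    def market_bid(self):
        """External face: one joint bid at the capacity-weighted average cost."""
        capacity = sum(c for c, _ in self.members)
        price = sum(c * p for c, p in self.members) / capacity
        return capacity, price

    def dispatch(self, awarded_MW):
        """Internal negotiation sketch: fill the award cheapest-member-first."""
        plan, remaining = [], awarded_MW
        for cap, cost in sorted(self.members, key=lambda m: m[1]):
            if remaining <= 0:
                break
            take = min(cap, remaining)
            plan.append((take, cost))
            remaining -= take
        return plan
```

The split between `market_bid` and `dispatch` mirrors the two negotiation arenas the abstract describes: the market outside, the coalition inside.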

Relevance:

20.00%

Publisher:

Abstract:

It is imperative to accept that failures can and will occur, even in meticulously designed distributed systems, and to design proper measures to counter those failures. Passive replication minimises resource consumption by only activating redundant replicas in case of failure, as providing and applying state updates is typically less resource-demanding than requesting execution. However, most existing solutions for passive fault tolerance are designed and configured at design time, explicitly and statically identifying the most critical components and their number of replicas, and lack the flexibility needed to handle the runtime dynamics of distributed component-based embedded systems. This paper proposes a cost-effective adaptive fault tolerance solution with a significantly lower overhead than a strict active redundancy-based approach, achieving high error coverage with the minimum amount of redundancy. The activation of passive replicas is coordinated through a feedback-based coordination model that reduces the complexity of the interactions among components until a new collective global service solution is determined, improving the overall maintainability and robustness of the system.
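
Passive replication in the sense used above, backups that receive only state updates and are activated on failure, can be sketched as follows (a minimal primary-backup model with hypothetical names; the paper's coordination model adds feedback-driven runtime adaptation on top of this basic scheme):

```python
class PassiveGroup:
    """Primary-backup sketch: only the primary executes requests;
    backups merely hold the last checkpointed state until promoted."""

    def __init__(self, primary, backups):
        self.primary = primary
        self.backups = list(backups)
        self.checkpoint = None

    def execute(self, request, compute):
        result = compute(request)   # execution happens on the primary only
        self.checkpoint = result    # state update propagated to the backups
        return result

    def fail_over(self):
        """Activate a passive replica after the primary fails."""
        if not self.backups:
            raise RuntimeError("no replicas left")
        self.primary = self.backups.pop(0)      # promote the first backup
        return self.primary, self.checkpoint    # resume from the checkpoint
```

This is why passive schemes are cheaper than active replication: the expensive `compute` runs once, and backups pay only for applying checkpoints.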

Relevance:

20.00%

Publisher:

Abstract:

Today, Information Technology (IT) is increasingly vital within organizations: IT is the engine that supports the business. For most organizations, the operation and development of IT rest on dedicated infrastructures (internal or external) called Data Centers (DC). These infrastructures concentrate an organization's data processing and storage equipment, and are therefore increasingly challenged on factors such as scalability, availability, fault tolerance, performance, available or provisioned resources, security, energy efficiency and, inevitably, the associated costs. The emergence of cloud computing and virtualization technologies opens a whole range of new ways to address these challenges. Under this new paradigm, new opportunities for DC consolidation arise, which may pose new challenges for DC managers. It is therefore unrealistic, to say the least, for organizations simply to eliminate their DCs or to rebuild them to the highest quality standards. Organizations must optimize their DCs; however, an efficient project of this nature, able to support the demands imposed by the market, the needs of the business and the speed of technological evolution, requires complex and costly solutions both to implement and to manage. This is the context of the present work. With the objective of studying DCs, a study of the topic is carried out, detailing the concept, its historical evolution, topology, architecture and the standards that govern them. The study then details some of the main trends shaping the future of DCs. Building on the resulting theoretical knowledge, a DC evaluation methodology based on decision criteria is developed.
The study culminates with an analysis of a new technological solution and the evaluation of three possible implementation scenarios: the first based on maintaining the current DC; the second based on implementing the new solution in another DC under an external hosting arrangement; and the third based on an IaaS implementation.
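
A decision-criteria evaluation like the one developed in the study can be sketched as a weighted-sum decision matrix (criteria names, weights and ratings below are purely illustrative assumptions, not the study's actual values):

```python
def score_scenarios(scenarios, weights):
    """Weighted-sum score for each scenario over a set of decision criteria."""
    return {name: sum(weights[c] * rating for c, rating in ratings.items())
            for name, ratings in scenarios.items()}

# Hypothetical criteria weights (must sum to 1) and 1-5 ratings.
weights = {"scalability": 0.3, "availability": 0.3, "cost": 0.4}
scenarios = {
    "keep current DC":  {"scalability": 2, "availability": 3, "cost": 4},
    "external hosting": {"scalability": 4, "availability": 4, "cost": 3},
    "IaaS":             {"scalability": 5, "availability": 4, "cost": 2},
}
scores = score_scenarios(scenarios, weights)
```

Ranking the resulting scores is the final step of such a methodology; the substance lies in choosing defensible criteria, weights and ratings for each scenario.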