948 results for sicurezza safety error detection


Relevance:

100.00%

Publisher:

Abstract:

In this paper, a novel bipolar time-spread (TS) echo hiding based watermarking method is proposed for stereo audio signals to overcome the low robustness of the traditional TS echo hiding method. During embedding, echo signals with opposite polarities are added to the two channels of the host audio signal. This improves the imperceptibility of the watermarking scheme, since the added watermarks have similar effects in both channels. A matching decoding method is then developed to improve the robustness of the scheme against common attacks. Because the embedding and decoding methods exploit the two channels of a stereo signal, they significantly reduce the interference of the host signal at watermark extraction, which is the main cause of detection errors in traditional TS echo hiding based watermarking under the closed-loop attack. The effectiveness of the proposed scheme is analyzed theoretically and verified by simulations under common attacks. The proposed echo hiding method outperforms conventional TS echo hiding based watermarking at a similar perceptual quality.
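A minimal sketch of the embedding idea described above: time-spread echoes of opposite polarity added to the two stereo channels. The delay, echo amplitude and PN spreading sequence are illustrative assumptions, not the paper's parameters, and the paper's decoder is not reproduced here.

```python
import numpy as np

def embed_bit(left, right, bit, pn, delay=150, alpha=0.05):
    """Embed one watermark bit into a stereo frame by adding time-spread
    echoes with opposite polarities to the two channels (illustrative sketch;
    delay, alpha and pn are assumed values, not the paper's)."""
    sign = 1.0 if bit == 1 else -1.0
    # Time-spread echo kernel: the +/-1 PN sequence placed at the chosen delay.
    kernel = np.zeros(delay + len(pn))
    kernel[delay:] = sign * alpha * pn
    echo_left = np.convolve(left, kernel)[: len(left)]
    echo_right = np.convolve(right, kernel)[: len(right)]
    # Bipolar embedding: +echo in one channel, -echo in the other, so the
    # perceptual effect is similar in both channels while the inter-channel
    # difference keeps the watermark and suppresses the common host component.
    return left + echo_left, right - echo_right

# Usage sketch on a random stereo frame.
rng = np.random.default_rng(0)
frame_l = rng.normal(size=4096)
frame_r = frame_l + 0.01 * rng.normal(size=4096)   # nearly identical channels
pn = rng.choice([-1.0, 1.0], size=64)
wm_l, wm_r = embed_bit(frame_l, frame_r, bit=1, pn=pn)
```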

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents the study and development of fault-tolerant techniques for programmable architectures, namely SRAM-configurable Field Programmable Gate Arrays (FPGAs). FPGAs are becoming more valuable for space applications because of their high density, high performance, reduced development cost and re-programmability. In particular, SRAM-based FPGAs are very valuable for remote missions because they can be reprogrammed by the user as many times as necessary in a very short period. SRAM-based FPGAs and micro-controllers represent a wide range of components in space applications and are therefore the focus of this work, more specifically the Virtex® family from Xilinx and the architecture of the 8051 micro-controller from Intel. Triple Modular Redundancy (TMR) with voters is a common high-level technique to protect ASICs against single event upsets (SEUs), and it can also be applied to FPGAs. The TMR technique was first tested in the Virtex® FPGA architecture using a small design based on counters. Faults were injected in all sensitive parts of the FPGA, and a detailed analysis of the effect of a fault on a TMR design synthesized for the Virtex® platform was performed. Results from fault injection and from a radiation ground test facility showed the efficiency of TMR for the case study circuit. Although TMR showed high reliability, the technique has some limitations, such as area overhead, three times more input and output pins and, consequently, a significant increase in power dissipation. Aiming to reduce the cost of TMR and improve reliability, an innovative high-level technique for designing fault-tolerant systems in SRAM-based FPGAs was developed, without modifying the FPGA architecture. This technique combines time and hardware redundancy to reduce overhead and to ensure reliability. It is based on duplication with comparison and concurrent error detection. The new technique proposed in this work was specifically developed for FPGAs to cope with transient faults in the user combinational and sequential logic, while also reducing pin count, area and power dissipation. The methodology was validated by fault injection experiments on an emulation board. The thesis presents comparative results on fault coverage, area and performance for the discussed techniques.
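For illustration, a minimal software sketch of the two redundancy schemes compared above: TMR majority voting and duplication with comparison. This is a conceptual Python model, not the HDL used on the Virtex® platform.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant module outputs.
    A single upset corrupting one copy is masked by the other two."""
    return (a & b) | (a & c) | (b & c)

def duplication_with_comparison(out1: int, out2: int):
    """Duplication with comparison: two copies plus a mismatch flag.
    Cheaper than TMR (two copies, fewer pins) but only detects errors;
    recovery must come from the time-redundancy step described above."""
    return out1, (out1 != out2)

# Example: module b suffers a bit flip; the voter still yields the correct value.
assert tmr_vote(0b1010, 0b1110, 0b1010) == 0b1010
value, mismatch = duplication_with_comparison(0b1010, 0b1110)
assert mismatch is True
```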

Relevance:

100.00%

Publisher:

Abstract:

This work focuses on the relationship between organizational culture and quality culture in the hotel sector of Natal/RN with respect to employee performance. Organizational culture and quality have been a research focus of management theorists and a constant concern of professional managers ever since the Japanese demonstrated effective alternatives to Western management. In this study, the Competing Values Model (CVM) (Quinn and Cameron, 1996; Quinn, 1998; Santos, 1998, 2000; Teixeira, 2001), which was tested on North American organizations and is considered an instrument of high academic and professional value, was applied. The model maps the organizational culture onto a profile with four elements: clan, adhocracy, market and hierarchy. The CVM, combined with the taxonomy created by Cameron (which classifies quality culture into four levels: status quo, error detection, error prevention and perpetual creative quality), has been related to organizational performance. In this study, the two models are used jointly and tested in the hotel sector. The results indicate that the strongest element of the profile is clan, characterized by internal focus, participation and people involvement, followed by adhocracy, which has an external focus, emphasizes flexibility and is characterized by dynamic entrepreneurship and creativity. Regarding the level of quality culture in the hotels, the highest level, perpetual improvement and creativity, which seeks to delight and surprise clients, was most frequently cited, followed by the error detection level, whose goal is to discover and correct mistakes and thereby reduce waste. The results suggest that employee performance, as measured by several indicators, is related to elements of the organizational culture profile and to the quality level.

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, the importance of using software processes is well established and is considered fundamental to the success of software development projects. Large and medium software projects demand the definition and continuous improvement of software processes in order to promote the productive development of high-quality software. Customizing and evolving existing software processes to address the variety of scenarios, technologies, cultures and scales is a recurrent challenge for the software industry: it involves adapting software process models to the reality of each project, and it should also promote the reuse of past experience when defining and developing software processes for new projects. Adequate management and execution of software processes can bring better quality and productivity to the software systems produced. This work explores the use and adaptation of consolidated software product line techniques to manage the variabilities of software process families. To achieve this aim: (i) a systematic literature review is conducted to identify and characterize variability management approaches for software processes; (ii) an annotative approach for the variability management of software process lines is proposed and developed; and finally (iii) empirical studies and a controlled experiment assess and compare the proposed annotative approach against a compositional one. The first study, a comparative qualitative study, analyzed the annotative and compositional approaches from different perspectives, such as modularity, traceability, error detection, granularity, uniformity, adoption, and systematic variability management. The second study, a comparative quantitative study, considered internal attributes of software process line specifications, such as modularity, size and complexity. Finally, the last study, a controlled experiment, evaluated the effort to use and the understandability of the investigated approaches when modeling and evolving software process line specifications. The studies provide evidence of several benefits of the annotative approach, and of its potential for integration with the compositional approach, to support the variability management of software process lines.
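A toy sketch of the annotative idea: process elements carry presence conditions over features, and a concrete process variant is derived by evaluating those conditions against a configuration. The element and feature names are hypothetical and do not come from the thesis.

```python
# Toy sketch of an annotative software process line: each process element is
# annotated with a presence condition over features, and a concrete process
# variant is derived by evaluating those conditions against a configuration.
from dataclasses import dataclass, field

@dataclass
class ProcessElement:
    name: str
    # Presence condition: set of features that must all be enabled.
    requires: frozenset = field(default_factory=frozenset)

PROCESS_LINE = [
    ProcessElement("Requirements workshop"),
    ProcessElement("Formal specification", frozenset({"safety_critical"})),
    ProcessElement("Code review", frozenset({"peer_review"})),
    ProcessElement("Independent V&V", frozenset({"safety_critical", "external_audit"})),
]

def derive_variant(elements, configuration: set):
    """Keep only the elements whose presence condition holds in the configuration."""
    return [e.name for e in elements if e.requires <= configuration]

print(derive_variant(PROCESS_LINE, {"peer_review"}))
# ['Requirements workshop', 'Code review']
```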

Relevance:

100.00%

Publisher:

Abstract:

Formal methods and software testing are tools to obtain and control software quality. When used together, they provide mechanisms for software specification, verification and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the verification and validation of a system. Model Based Testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an industrial B specification, which gave us the input needed to improve it. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process and to use new coverage criteria. In addition, we implemented a tool to automate the method and applied it to more complex case studies.
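A small sketch of the underlying test-selection idea, equivalence class partitioning plus boundary value analysis, applied to a numeric precondition. The precondition and value ranges are invented for illustration and are not taken from the B specifications of the case studies.

```python
def boundary_and_partition_values(low: int, high: int):
    """Given a precondition of the form low <= x <= high, return candidate
    test inputs: one representative of the valid equivalence class, the
    boundary values, and values just outside the precondition (negative tests)."""
    valid = [(low + high) // 2]            # representative of the valid class
    boundaries = [low, high]               # on-boundary values (positive tests)
    invalid = [low - 1, high + 1]          # just outside the precondition (negative tests)
    return {"positive": sorted(set(valid + boundaries)), "negative": invalid}

# Hypothetical operation precondition: 1 <= seats <= 100
print(boundary_and_partition_values(1, 100))
# {'positive': [1, 50, 100], 'negative': [0, 101]}
```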

Relevance:

100.00%

Publisher:

Abstract:

This article reviews the main codes used to detect and correct errors in data communication, specifically in computer networks. The Hamming code and the Cyclic Redundancy Check (CRC) are the focus of the article, along with the hardware implementation of the CRC. Each code is reviewed in detail in order to fill gaps in the literature and to make the material accessible to computer science and engineering students, as well as to anyone interested in learning techniques for handling errors in data communication.
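As a complement to the review, a minimal software sketch of a CRC computation, processed bit by bit in the way a hardware shift register would; the choice of CRC-8 with polynomial 0x07 is illustrative, not prescribed by the article.

```python
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Compute a CRC-8 checksum bit by bit (polynomial x^8 + x^2 + x + 1,
    written 0x07 with the implicit x^8 term). A software model of the LFSR
    that a hardware CRC circuit implements."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

message = b"error detection"
checksum = crc8(message)
# The receiver recomputes the CRC; a mismatch reveals a corrupted frame.
assert crc8(message) == checksum
corrupted = b"error detectiom"
assert crc8(corrupted) != checksum
```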

Relevance:

100.00%

Publisher:

Abstract:

We present the results of a search for the flavor-changing neutral current decay B_s^0 → μ+μ- using a data set with an integrated luminosity of 240 pb^-1 of pp̄ collisions at √s = 1.96 TeV, collected with the D0 detector in Run II of the Fermilab Tevatron collider. We find the upper limit on the branching fraction to be B(B_s^0 → μ+μ-) ≤ 5.0 × 10^-7 at the 95% C.L., assuming no contribution from the decay B_d^0 → μ+μ- in the signal region. This limit is the most stringent upper bound on the branching fraction of B_s^0 → μ+μ- to date. © 2005 The American Physical Society.

Relevance:

100.00%

Publisher:

Abstract:

[ES] We present the data quality analysis used in the construction of an ad hoc observation instrument. It is a mixed system of field formats and exhaustive, mutually exclusive (E/ME) category systems whose aim is to code the attack phase in beach handball. The criteria used are: minute, score, finishing zone and finishing player. Twelve observations of senior men's national teams were coded. The analysis was carried out using consensual agreement (a qualitative approach to data quality), building an error detection file, computing Cohen's Kappa, the Kendall Tau-B, Pearson and Spearman correlation coefficients, and a Generalizability analysis. The correlation coefficients show a minimum value of .993, Cohen's Kappa values are at .917, and the generalizability indices are optimal. These results ensure that the observation instrument, besides fitting well, allows recording with reliability and precision.
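A small sketch of one of the reliability measures mentioned above, Cohen's kappa between two observers coding the same events; the example codings are invented and do not reproduce the study's data.

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two observers coding the same events:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    chance = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - chance) / (1 - chance)

# Hypothetical codings of the finishing zone for 12 attack sequences.
obs1 = ["left", "centre", "right", "centre", "left", "left",
        "right", "centre", "centre", "left", "right", "centre"]
obs2 = ["left", "centre", "right", "centre", "left", "centre",
        "right", "centre", "centre", "left", "right", "centre"]
print(round(cohen_kappa(obs1, obs2), 3))
```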

Relevance:

100.00%

Publisher:

Abstract:

The development of the digital electronics market is founded on the continuous reduction of transistor size, to reduce area, power and cost and to increase the computational performance of integrated circuits. This trend, known as technology scaling, is approaching nanometer dimensions. The lithographic process in the manufacturing stage becomes more uncertain as transistor sizes scale down, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage is limiting the scaling of threshold and supply voltages, increasing power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects can no longer be addressed at the process level alone. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic- and system-level mechanisms able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) new analysis algorithms able to predict the thermal behavior of the system and its impact on power and speed; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with future challenges, for instance: i) thermal management strategies that increase the reliability and lifetime of devices by acting on tunable parameters such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error correcting signal encodings (ECC). The literature already features works addressing the prediction of MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network on Chip (NoC) devices. I developed a thermal analysis library that has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates integrating thermal analysis into the first design stages of embedded NoC design.
Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor. The results confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we found that low-swing links are notably robust to systematic process variation and respond well to compensation techniques such as ASV and ABB. Hence low-swing signaling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work demonstrates the advantage of integrating a statistical process variation analysis tool into the first stages of the design flow.
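A toy Monte Carlo sketch of the kind of statistical variation analysis described above, combining independent per-gate (random) variation with a shared (systematic) factor over a critical path; the distributions, delays and timing constraint are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def path_delay_samples(nominal_gate_delays, n_samples=10000,
                       sigma_random=0.05, sigma_systematic=0.03):
    """Monte Carlo sketch: total path delay under process variation.
    Each gate gets an independent random variation; a single systematic
    factor (e.g. a lithography-induced shift) scales the whole path."""
    gates = np.asarray(nominal_gate_delays)
    random_var = rng.normal(1.0, sigma_random, size=(n_samples, gates.size))
    systematic_var = rng.normal(1.0, sigma_systematic, size=(n_samples, 1))
    return (gates * random_var * systematic_var).sum(axis=1)

# Hypothetical 8-gate critical path with 100 ps nominal delay per gate.
samples = path_delay_samples([100.0] * 8)
target_period = 900.0  # ps, assumed timing constraint
yield_estimate = np.mean(samples <= target_period)
print(f"mean delay {samples.mean():.1f} ps, timing yield {yield_estimate:.1%}")
```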

Relevance:

100.00%

Publisher:

Abstract:

The objective of this work is to characterize the genome of chromosome 1 of A. thaliana, a small flowering plant used as a model organism in studies of biology and genetics, on the basis of a recent mathematical model of the genetic code. I analyze and compare different portions of the genome: genes, exons, coding sequences (CDS), introns, long introns, intergenic regions, untranslated regions (UTR) and regulatory sequences. To accomplish this task, I transformed nucleotide sequences into binary sequences based on the definition of three dichotomic classes. The descriptive analysis of the binary strings indicates the presence of regularities in each portion of the genome considered. In particular, there are remarkable differences between coding sequences (CDS and exons) and non-coding sequences, suggesting that the reading frame is important only for coding sequences and that dichotomic classes can be useful to recognize them. I then assessed the existence of short-range dependence between binary sequences computed on the basis of the different dichotomic classes, using three measures of dependence: the well-known chi-squared test and two indices derived from the concept of entropy, namely Mutual Information (MI) and Sρ, a normalized version of the Bhattacharyya-Hellinger-Matusita distance. The results show a significant short-range dependence structure only for the coding sequences, whose existence hints at an underlying error detection and correction mechanism. Further studies are needed to assess how the information carried by dichotomic classes could discriminate between coding and non-coding sequences and thereby help unveil the role of this mathematical structure in error detection and correction mechanisms. Still, I have shown the potential of the presented approach for understanding the management of genetic information.
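A small sketch of one of the dependence measures mentioned above, mutual information between two aligned binary sequences estimated from their contingency table (the chi-squared test uses the same table); the example sequences are synthetic.

```python
import numpy as np

def mutual_information(x, y):
    """Mutual information (in bits) between two aligned binary sequences,
    estimated from their joint and marginal frequencies."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((x == a) & (y == b))
            p_a, p_b = np.mean(x == a), np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

rng = np.random.default_rng(1)
independent = rng.integers(0, 2, 1000)
# A dependent sequence: a copy of the first with 10% of the symbols flipped.
dependent = independent ^ (rng.random(1000) < 0.1)
print(mutual_information(independent, rng.integers(0, 2, 1000)))  # near 0
print(mutual_information(independent, dependent))                 # clearly > 0
```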

Relevance:

100.00%

Publisher:

Abstract:

Packet loss during transmission over a wireless network has a fundamental impact on the quality of the link between two end-systems. The goal of the project is to implement a technique of early, asymmetric retransmission of lost packets, so as to minimize data recovery times and improve the quality of communication. Starting from a study of certain types of retransmission, in particular those implemented by the ABPS (Always Best Packet Switching) project, the idea emerged that a particularly useful kind of retransmission could take place at the Access Point level: when packet loss occurs between the AP and the mobile node connected to it via IEEE 802.11, instead of waiting for the TCP retransmission performed by the source end-system, the Access Point itself retransmits towards the mobile node to allow fast recovery of the lost data. This functionality was therefore conceptually divided into two parts: the first concerns the application that buffers the packets traversing the AP and keeps a copy of them in memory, in order to retransmit them when a failed reception is signalled; the second concerns the kernel modification that enables early signalling of the error. An application implementing early retransmission by the WiFi Access Point, that is, a retransmission performed before the loss notification reaches the source end-point, relying on a simulated Error Detection mechanism, has already been developed. Asynchronous, early TCP retransmission has also been implemented. This document deals with the implementation of a new application that provides a more efficient version of the packet buffer and uses an asymmetric, early TCP retransmission mechanism, that is, triggering the retransmission on TCP's request through validity notifications of the Acknowledgement field.
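A toy sketch of the first part of the mechanism, the access-point-side packet buffer: forwarded packets are copied into a bounded cache and one is replayed locally as soon as a loss is signalled, instead of waiting for the source's end-to-end TCP retransmission. Class and method names are hypothetical.

```python
from collections import OrderedDict

class APRetransmitBuffer:
    """Toy model of the access-point packet cache described above: keep a
    bounded copy of recently forwarded packets, keyed by sequence number,
    and replay one locally when a loss on the wireless hop is signalled."""

    def __init__(self, capacity=256):
        self.capacity = capacity
        self.packets = OrderedDict()   # seq -> payload, in forwarding order

    def forward(self, seq, payload, send):
        """Forward a packet to the mobile node and keep a copy."""
        send(payload)
        self.packets[seq] = payload
        if len(self.packets) > self.capacity:
            self.packets.popitem(last=False)   # drop the oldest copy

    def on_loss_report(self, seq, send):
        """Early retransmission triggered by the loss notification,
        without waiting for the sender's end-to-end TCP timeout."""
        payload = self.packets.get(seq)
        if payload is not None:
            send(payload)
        return payload is not None

# Usage sketch with a fake send function.
sent = []
buf = APRetransmitBuffer()
buf.forward(1, b"segment-1", sent.append)
buf.forward(2, b"segment-2", sent.append)
buf.on_loss_report(2, sent.append)   # replays b"segment-2" locally
```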

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, the main theories of Executive Control are presented. Methods typical of Cognitive and Computational Neuroscience are introduced, and the role of behavioural tasks involving conflict resolution during response elaboration, after a stimulus is presented to the subject, is highlighted. In particular, the Eriksen Flanker Task and its variants are discussed. Behavioural data from the scientific literature are illustrated in terms of response times and error rates. During these behavioural tasks, EEG is recorded simultaneously, so that event-related potentials related to the task can be studied. Different theories regarding the relevant event-related potentials in this field, such as the N2, the fERN (feedback Error Related Negativity) and the ERN (Error Related Negativity), are introduced. The aim of this thesis is to understand and simulate processes involved in Executive Control, including performance improvement, error detection mechanisms, post-error adjustments and the role of selective attention, with the help of an original neural network model. The network described here was built to simulate the behavioural results of a four-choice Eriksen Flanker Task. The results show that the neural network can simulate response times, error rates and event-related potentials quite well. Finally, the results are compared with behavioural data and discussed in light of the Executive Control theories mentioned above. Future perspectives for this new model are outlined.
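For illustration only, a generic evidence-accumulation toy model (not the thesis's neural network) showing how congruent and incongruent flanker trials can yield different simulated response times and error rates; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_trial(congruent, drift_target=0.12, drift_flanker=0.08,
                   threshold=10.0, noise=1.0, max_steps=2000):
    """One toy flanker trial: evidence for the correct response accumulates
    with the target drift, plus (congruent) or minus (incongruent) the
    flanker contribution, until a response threshold is crossed."""
    drift = drift_target + (drift_flanker if congruent else -drift_flanker)
    evidence, t = 0.0, 0
    while abs(evidence) < threshold and t < max_steps:
        evidence += drift + rng.normal(0.0, noise)
        t += 1
    return t, evidence >= threshold   # (response time, correct?)

for label, congruent in [("congruent", True), ("incongruent", False)]:
    trials = [simulate_trial(congruent) for _ in range(2000)]
    rts = np.array([t for t, _ in trials])
    error_rate = np.mean([not ok for _, ok in trials])
    print(f"{label}: mean RT {rts.mean():.0f} steps, error rate {error_rate:.1%}")
```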

Relevance:

100.00%

Publisher:

Abstract:

The metacognitive ability to accurately estimate one's performance in a test is assumed to be of central importance for initiating task-oriented effort, activating adequate problem-solving strategies, and engaging in efficient error detection and correction. Although schoolchildren's ability to estimate their own performance has been widely investigated, this was mostly done in highly controlled experimental set-ups with only a single test occasion. Method: The aim of this study was to investigate this metacognitive ability in the context of real achievement tests in mathematics. Developed and applied by the teacher of a 5th grade class over the course of a school year, these tests allowed the exploration of the variability of performance estimation accuracy as a function of test difficulty. Results: Mean performance estimations were generally close to actual performance, with somewhat less variability than test performance. When the children were grouped into three achievement levels, the results revealed higher accuracy of performance estimations in the high achievers compared to the low and average achievers. In order to explore the generalization of these findings, the analyses were also conducted for the same children's tests in their science classes, revealing a pattern of results very similar to that in mathematics. Discussion and Conclusion: By and large, the present study, conducted in a natural environment, confirmed previous laboratory findings but also offered additional insights into the generalization and test dependency of students' performance estimations.
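A small sketch of how performance estimation accuracy can be summarized per achievement group (signed bias and mean absolute error between estimated and actual scores); the scores below are invented and do not come from the study.

```python
from statistics import mean

# Hypothetical (estimated, actual) test scores for three achievement groups.
scores = {
    "high":    [(27, 28), (29, 29), (25, 26), (30, 29)],
    "average": [(24, 21), (20, 23), (26, 22), (19, 21)],
    "low":     [(22, 15), (18, 13), (20, 16), (17, 12)],
}

for group, pairs in scores.items():
    bias = mean(est - act for est, act in pairs)             # signed miscalibration
    abs_error = mean(abs(est - act) for est, act in pairs)   # estimation accuracy
    print(f"{group:7s} mean bias {bias:+.1f} points, mean absolute error {abs_error:.1f}")
```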

Relevance:

100.00%

Publisher:

Abstract:

The control of the apparatus through which the State acts, also called the state administration, has gained great relevance owing to the evolution of the concept of control, which is no longer restricted to the simple detection of errors and the correction of past deviations but appears as a valuable aid to decision-making, capable of reorienting actions and goals towards what is best for organizations and their members. In this sense, verifying how accurately government decisions are carried out, avoiding deviations, and redefining goals to be reached and courses of action to be followed make control an important function that needs to be studied, understood and made explicit. The work examines the systems and modalities of public control in the Nation, specifying the characteristics of the state administration and of the bodies in charge of its control.

Relevance:

100.00%

Publisher:

Abstract:

It is now widely accepted that separating programs into modules is useful in program development and maintenance. While many Prolog implementations include useful module systems, we argue that these systems can be improved in a number of ways, such as being more amenable to effective global analysis and transformation and allowing separate compilation or the sensible creation of standalone executables. We discuss a number of issues related to the design of such an improved module system for Prolog and propose some novel solutions. Based on this, we present the choices made in the Ciao module system, which has been designed to meet a number of objectives: allowing separate compilation, extensibility in features and in syntax, amenability to modular global analysis and transformation, enhanced error detection, support for meta-programming and higher-order programming, compatibility to the extent possible with official and de facto standards, etc.