970 results for Error detection


Relevance:

60.00%

Publisher:

Abstract:

Imaging studies have shown reduced frontal lobe resources following total sleep deprivation (TSD). The anterior cingulate cortex (ACC) in the frontal region plays a role in performance monitoring and cognitive control; both error detection and response inhibition are impaired following sleep loss. Event-related potentials (ERPs) are an electrophysiological tool used to index the brain's response to stimuli and information processing. In the Flanker task, the error-related negativity (ERN) and error positivity (Pe) ERPs are elicited after erroneous button presses. In a Go/NoGo task, NoGo-N2 and NoGo-P3 ERPs are elicited during high-conflict stimulus processing. Research investigating the impact of sleep loss on ERPs during performance monitoring is equivocal, possibly due to differences in tasks, sample sizes and degrees of sleep loss. Based on the effects of sleep loss on frontal function and prior research, it was expected that the sleep-deprivation group would show lower accuracy, slower reaction time and impaired remediation on performance monitoring tasks, along with attenuated and delayed stimulus- and response-locked ERPs. In the current study, 49 young adults (24 male) were screened to be healthy good sleepers and then randomly assigned to a sleep-deprived (n = 24) or rested control (n = 25) group. Participants slept in the laboratory on a baseline night, followed by a second night of sleep or wake. Flanker and Go/NoGo tasks were administered in a battery at 10:30 am (i.e., 27 hours awake for the sleep-deprivation group) to measure performance monitoring. On the Flanker task, the sleep-deprivation group was significantly slower than controls (ps < .05), but the groups did not differ on accuracy. No group differences were observed in post-error slowing, but a trend was observed for less remedial accuracy in the sleep-deprived group compared to controls (p = .09), suggesting impairment in the ability to take remedial action following TSD. Delayed P300s were observed in the sleep-deprived group on congruent and incongruent Flanker trials combined (p = .001). On the Go/NoGo task, the hit rate (i.e., Go accuracy) was significantly lower in the sleep-deprived group compared to controls (p < .001), but no differences were found in false alarm rates (i.e., NoGo accuracy). For the sleep-deprived group, the Go-P3 was significantly smaller (p = .045) and there was a trend for a smaller NoGo-N2 compared to controls (p = .08). The ERN amplitude was reduced in the TSD group compared to controls in both the Flanker and Go/NoGo tasks. Error rate was significantly correlated with the amplitude of response-locked ERNs in the control (r = -.55, p = .005) and sleep-deprived (r = -.46, p = .021) groups; error rate was also correlated with Pe amplitude in controls (r = .46, p = .022), with a trend in the sleep-deprived participants (r = .39, p = .052). An exploratory analysis showed significantly larger Pe mean amplitudes (p = .025) in the sleep-deprived group compared to controls for participants who made more than 40 errors on the Flanker task. Altered stimulus processing, as indexed by delayed P3 latency during the Flanker task and smaller-amplitude Go-P3s during the Go/NoGo task, indicates impairment in stimulus evaluation and/or context updating during frontal lobe tasks. ERN and NoGo-N2 reductions in the sleep-deprived group confirm impairments in the monitoring system. These data add to a body of evidence showing that the frontal brain region is particularly vulnerable to sleep loss.
Understanding the neural basis of these deficits in performance monitoring is particularly important for our increasingly sleep-deprived society and for safety and productivity in situations such as driving and sustained operations.
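
A minimal sketch of how one of the behavioral measures mentioned above, post-error slowing, is typically computed from trial-level data; the data structure and values are illustrative assumptions, not taken from the study.

```python
# Illustrative only: post-error slowing (PES) from trial-level Flanker data,
# assuming each trial is recorded as (reaction time in ms, correct?) in order.
# PES = mean RT on trials following an error minus mean RT on trials following
# a correct response; a positive value indicates slowing after errors.
from statistics import mean

def post_error_slowing(trials):
    post_error_rts, post_correct_rts = [], []
    for prev, curr in zip(trials, trials[1:]):
        rt, _ = curr
        (post_correct_rts if prev[1] else post_error_rts).append(rt)
    if not post_error_rts or not post_correct_rts:
        return None  # not enough trials of each type
    return mean(post_error_rts) - mean(post_correct_rts)

# hypothetical trial sequence
demo = [(420, True), (395, False), (510, True), (430, True), (400, False), (495, True)]
print(post_error_slowing(demo))
```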

Relevance:

60.00%

Publisher:

Abstract:

DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and other small genomes; however, assembling large and complex genomes, especially the human genome, from Next-Generation Sequencing (NGS) data has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage and tools that are not specifically built for human genomes. Moreover, many algorithms do not even scale to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including error detection and correction in the read data, contig creation, scaffolding and contig orientation; each can be seen as a distinct research area. This thesis focuses on creating contigs from the short reads and combining them with the outputs of other tools in order to obtain better results. Three assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], are selected for comparison in this thesis. The obtained results show that the work in this thesis produces results comparable to those of the other assemblers, and that combining our contigs with the outputs of the other tools produces the best results, outperforming all other investigated assemblers.
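
As a rough illustration of the contig-creation step this thesis addresses, the sketch below builds a tiny de Bruijn graph from reads and walks unambiguous edges to grow a contig. This is the general idea behind assemblers such as Velvet and SOAPdenovo, not the pipeline developed in the thesis; the reads and k value are toy assumptions.

```python
# Toy de Bruijn graph contig construction (illustrative only).
from collections import defaultdict

def build_graph(reads, k):
    """Map each (k-1)-mer prefix to the set of (k-1)-mer suffixes that extend it."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def extend_contig(graph, seed):
    """Greedily walk unambiguous edges from a seed node to grow a contig."""
    contig, node = seed, seed
    while len(graph[node]) == 1:          # stop at branches or dead ends
        node = next(iter(graph[node]))
        contig += node[-1]
        if len(contig) > 10_000:          # guard against cycles in this toy walk
            break
    return contig

reads = ["AGCTGA", "GCTGAT", "CTGATC"]
print(extend_contig(build_graph(reads, k=4), "AGC"))   # AGCTGATC for this toy data
```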

Relevance:

60.00%

Publisher:

Abstract:

A cryptosystem based on linear codes was developed in 1978 by McEliece. Later, in 1985, Niederreiter and others developed a modified version of the cryptosystem using concepts of linear codes. These systems were not used frequently, however, because of their large key sizes. In this study we design a cryptosystem using the concepts of algebraic geometric codes with a smaller key size. Error detection and correction can be done efficiently by simple decoding methods using the cryptosystem developed. Approach: Algebraic geometric codes are codes generated using curves. The cryptosystem uses basic concepts of elliptic curve cryptography and a generator matrix. Decrypted information takes the form of a repetition code, which reduces the complexity of the decoding procedure. Error detection and correction can be carried out efficiently by solving a simple system of linear equations, thereby combining security with error detection and correction. Results: The algorithm is implemented in MATLAB and a comparative analysis is carried out on various parameters of the system. Attacks are common to all cryptosystems, but by carefully choosing the curve, the field and the representation of field elements, the attacks can be overcome and a stable system generated. Conclusion: The algorithm defined here protects the information from an intruder and also from errors in the communication channel through efficient error correction methods.
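
The sketch below illustrates the McEliece/Niederreiter idea the abstract builds on: the message is encoded with a linear code's generator matrix, a deliberate error is added, and decryption relies on the code's error correction. A tiny Hamming(7,4) code stands in for the algebraic geometric codes of the study, and the key-scrambling matrices are omitted, so this is a conceptual toy rather than the proposed system.

```python
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],      # generator matrix of Hamming(7,4)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],      # parity-check matrix (H @ G.T = 0 mod 2)
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encrypt(msg_bits, error_pos):
    codeword = msg_bits @ G % 2
    codeword[error_pos] ^= 1               # inject one intentional error
    return codeword

def decrypt(received):
    syndrome = H @ received % 2            # the syndrome identifies the error column
    for col in range(H.shape[1]):
        if np.array_equal(H[:, col], syndrome):
            received = received.copy()
            received[col] ^= 1
            break
    return received[:4]                    # systematic code: first 4 bits are the message

msg = np.array([1, 0, 1, 1])
print(decrypt(encrypt(msg, error_pos=2)))  # recovers [1 0 1 1]
```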

Relevance:

60.00%

Publisher:

Abstract:

Code clones are portions of source code that are similar to other parts of the program code. The presence of code clones is considered a bad feature of software, as it makes maintenance more difficult. Methods for code clone detection have gained immense significance in the last few years, as they play a significant role in engineering applications such as program code analysis, program understanding, plagiarism detection, error detection, code compaction and many similar tasks. Despite this, several features of code clones, if properly utilized, can make the software development process easier. In this work we point out one such feature of code clones, which highlights the relevance of code clones in test sequence identification. Program slicing is used here for code clone detection; in addition, a classification of code clones is presented and the benefit of using program slicing in code clone detection is discussed.
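
To make the detection side concrete, here is a minimal sketch of one common clone-detection idea: normalize identifiers and literals so that fragments that differ only by renaming hash to the same fingerprint. The regexes and code fragments are illustrative assumptions; the slicing step used in the paper is not modeled here.

```python
import hashlib
import re

def normalize(fragment: str) -> str:
    """Replace identifiers and numeric literals with placeholder tokens."""
    fragment = re.sub(r"\b[A-Za-z_]\w*\b", "ID", fragment)
    fragment = re.sub(r"\b\d+(\.\d+)?\b", "NUM", fragment)
    return re.sub(r"\s+", " ", fragment).strip()

def fingerprint(fragment: str) -> str:
    return hashlib.sha1(normalize(fragment).encode()).hexdigest()

a = "total = price * quantity + tax"
b = "sum   = cost  * amount   + fee"
print(fingerprint(a) == fingerprint(b))    # True: b is a renamed clone of a
```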

Relevance:

60.00%

Publisher:

Abstract:

The general packet radio service (GPRS) has been developed to allow packet data to be transported efficiently over an existing circuit-switched radio network, such as GSM. The main applications of GPRS are in transporting Internet protocol (IP) datagrams from web servers, for telemetry or for mobile Internet browsers. Four GPRS baseband coding schemes are defined, offering a trade-off between requested data rate and propagation channel conditions. However, data rates above 100 kbit/s are only achievable if the simplest coding scheme (CS-4) is used, which offers little error detection and correction (EDC) and therefore requires excellent SNR, and if the receiver hardware is capable of full duplex, which is not currently available in the consumer market. A simple EDC scheme to improve the GPRS block error rate (BLER) performance is presented, particularly for CS-4, although gains are also seen in the other coding schemes. Every GPRS radio block that is corrected by the EDC scheme does not need to be retransmitted, releasing bandwidth in the channel and improving the user's application data rate. As GPRS requires intensive processing in the baseband, a viable field programmable gate array (FPGA) solution is presented in this paper.
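
A back-of-the-envelope sketch of the bandwidth argument made above: every block that the EDC scheme repairs avoids a retransmission, so a larger share of radio blocks carries new data. The payload size, block rate, raw BLER and corrected fraction below are illustrative assumptions, not figures from the paper.

```python
def effective_rate(payload_bits_per_block, blocks_per_second, bler, corrected_fraction):
    residual_bler = bler * (1.0 - corrected_fraction)   # blocks still needing retransmission
    goodput_share = 1.0 - residual_bler                 # blocks delivering new data (simple ARQ model)
    return payload_bits_per_block * blocks_per_second * goodput_share

baseline = effective_rate(428, 50, bler=0.10, corrected_fraction=0.0)   # CS-4-like block every 20 ms
with_edc = effective_rate(428, 50, bler=0.10, corrected_fraction=0.5)   # assume half the errored blocks repaired
print(f"{baseline / 1000:.1f} kbit/s -> {with_edc / 1000:.1f} kbit/s")
```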

Relevance:

60.00%

Publisher:

Abstract:

The General Packet Radio Service (GPRS) was developed to allow packet data to be transported efficiently over an existing circuit-switched radio network. The main applications for GPRS are in transporting IP datagrams between the user's mobile Internet browser and the Internet, or in telemetry equipment. A simple Error Detection and Correction (EDC) scheme to improve the GPRS Block Error Rate (BLER) performance is presented, particularly for coding scheme 4 (CS-4), although gains are also seen in the other coding schemes. Every GPRS radio block that is corrected by the EDC scheme does not need to be retransmitted, releasing bandwidth in the channel and improving both throughput and the user's application data rate. As GPRS requires intensive processing in the baseband, a viable hardware solution for a GPRS BLER co-processor, which has been implemented in a Field Programmable Gate Array (FPGA), is discussed in this paper.
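
For context, the sketch below models the quantity such a BLER co-processor reports: the fraction of radio blocks containing at least one bit error after the channel. The block size, bit error rate and simulation approach are assumptions for illustration, not details from the paper.

```python
import random

def measure_bler(n_blocks=10_000, bits_per_block=456, ber=1e-3):
    """Simulate a channel flipping bits and count blocks with at least one error."""
    errored = sum(
        1 for _ in range(n_blocks)
        if any(random.random() < ber for _ in range(bits_per_block))
    )
    return errored / n_blocks

print(f"BLER ~ {measure_bler():.3f}")      # roughly 1 - (1 - ber) ** bits_per_block
```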

Relevance:

60.00%

Publisher:

Abstract:

Individual differences in cognitive style can be characterized along two dimensions: ‘systemizing’ (S, the drive to analyze or build ‘rule-based’ systems) and ‘empathizing’ (E, the drive to identify another's mental state and respond to this with an appropriate emotion). Discrepancies between these two dimensions in one direction (S > E) or the other (E > S) are associated with sex differences in cognition: on average more males show an S > E cognitive style, while on average more females show an E > S profile. The neurobiological basis of these different profiles remains unknown. Since individuals may be typical or atypical for their sex, it is important to move away from the study of sex differences and towards the study of differences in cognitive style. Using structural magnetic resonance imaging we examined how neuroanatomy varies as a function of the discrepancy between E and S in 88 adult males from the general population. Selecting just males allows us to study discrepant E-S profiles in a pure way, unconfounded by other factors related to sex and gender. An increasing S > E profile was associated with increased gray matter volume in cingulate and dorsal medial prefrontal areas which have been implicated in processes related to cognitive control, monitoring, error detection, and probabilistic inference. An increasing E > S profile was associated with larger hypothalamic and ventral basal ganglia regions which have been implicated in neuroendocrine control, motivation and reward. These results suggest an underlying neuroanatomical basis linked to the discrepancy between these two important dimensions of individual differences in cognitive style.

Relevance:

60.00%

Publisher:

Abstract:

This exploratory study is concerned with the performance of Egyptian children with Down syndrome on counting and error detection tasks and investigates how these children acquire counting. Observations and interviews were carried out to collect further information about their performance in a class context. Qualitative and quantitative analyses suggested a notable deficit in counting in Egyptian children with Down syndrome, with none of the children able to recite the number string up to ten or count a set of five objects correctly. They performed less well on tasks which placed a greater load on memory. The tentative findings of this exploratory study support previous research showing that children with Down syndrome acquire counting by rote, and link this to their learning experiences.

Relevance:

60.00%

Publisher:

Abstract:

Objectives: The current study examined younger and older adults’ error detection accuracy, prediction calibration, and postdiction calibration on a proofreading task, to determine whether age-related differences would be present in this type of common error detection task. Method: Participants were given text passages and were first asked to predict the percentage of errors they would detect in the passage. They then read the passage and circled errors (which varied in complexity and locality), and made postdictions regarding their performance, before repeating this with another passage and answering a comprehension test on both passages. Results: There were no age-related differences in error detection accuracy, text comprehension, or metacognitive calibration, though participants in both age groups were overconfident overall in their metacognitive judgments. Both groups gave similar ratings of motivation to complete the task. The older adults rated the passages as more interesting than younger adults did, although this level of interest did not appear to influence error detection performance. Discussion: The age equivalence in both proofreading ability and calibration suggests that the ability to proofread text passages and the associated metacognitive monitoring used in judging one’s own performance are maintained in aging. These age-related similarities persisted when younger adults completed the proofreading tasks on a computer screen rather than with paper and pencil. The findings provide novel insights regarding the influence that cognitive aging may have on metacognitive accuracy and text processing in an everyday task.

Relevance:

60.00%

Publisher:

Abstract:

In this paper, a novel bipolar time-spread (TS) echo hiding based watermarking method is proposed for stereo audio signals, to overcome the low robustness of the traditional TS echo hiding method. During embedding, echo signals with opposite polarities are added to the two channels of the host audio signal. This improves the imperceptibility of the watermarking scheme, since the added watermarks have similar effects in both channels. A decoding method is then developed in order to improve the robustness of the watermarking scheme against common attacks. Since these novel embedding and decoding methods exploit the two channels of stereo audio signals, they significantly reduce the interference of the host signal at watermark extraction, which is the main cause of detection errors in traditional TS echo hiding based watermarking under a closed-loop attack. The effectiveness of the proposed watermarking scheme is theoretically analyzed and verified by simulations under common attacks. The proposed echo hiding method outperforms conventional TS echo hiding based watermarking when their perceptual qualities are similar.
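
A schematic sketch of the embedding step described above: a delayed, attenuated echo is added to one channel and the same echo with opposite polarity to the other, so the watermark bit is carried by the echo's sign. The delay, attenuation and host signal are assumed values for illustration, not the authors' parameters, and the novel decoding stage is not shown.

```python
import numpy as np

def embed_bit(left, right, bit, delay=150, alpha=0.05):
    """Return watermarked (left, right); bit selects the polarity of the added echoes."""
    sign = 1.0 if bit else -1.0
    echo_l = np.zeros_like(left)
    echo_l[delay:] = left[:-delay]                     # delayed copy of the host channel
    echo_r = np.zeros_like(right)
    echo_r[delay:] = right[:-delay]
    return left + sign * alpha * echo_l, right - sign * alpha * echo_r

fs = 44_100
t = np.arange(fs) / fs
host = np.sin(2 * np.pi * 1_000 * t)                   # toy 1 kHz host tone, 1 second
wm_left, wm_right = embed_bit(host.copy(), host.copy(), bit=1)
print(wm_left.shape, wm_right.shape)
```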

Relevance:

60.00%

Publisher:

Abstract:

This thesis presents the study and development of fault-tolerant techniques for programmable architectures, the well-known Field Programmable Gate Arrays (FPGAs), customizable by SRAM. FPGAs are becoming increasingly valuable for space applications because of their high density, high performance, reduced development cost and re-programmability. In particular, SRAM-based FPGAs are very valuable for remote missions because they can be reprogrammed by the user as many times as necessary in a very short period. SRAM-based FPGAs and micro-controllers represent a wide range of components used in space applications, and as a result are the focus of this work, more specifically the Virtex® family from Xilinx and the architecture of the 8051 micro-controller from Intel. Triple Modular Redundancy (TMR) with voters is a common high-level technique to protect ASICs against single event upsets (SEUs), and it can also be applied to FPGAs. The TMR technique was first tested in the Virtex® FPGA architecture using a small design based on counters. Faults were injected in all sensitive parts of the FPGA and a detailed analysis of the effect of a fault in a TMR design synthesized on the Virtex® platform was performed. Results from fault injection and from a radiation ground test facility showed the efficiency of TMR for the case study circuit. Although TMR showed high reliability, the technique presents some limitations, such as area overhead, three times more input and output pins and, consequently, a significant increase in power dissipation. Aiming to reduce the cost of TMR and improve reliability, an innovative high-level technique for designing fault-tolerant systems in SRAM-based FPGAs was developed that requires no modification of the FPGA architecture. This technique combines time and hardware redundancy to reduce overhead and ensure reliability. It is based on duplication with comparison and concurrent error detection. The new technique proposed in this work was specifically developed for FPGAs to cope with transient faults in the user combinational and sequential logic, while also reducing pin count, area and power dissipation. The methodology was validated by fault injection experiments on an emulation board. The thesis presents comparative results for fault coverage, area and performance among the discussed techniques.
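
A simple software model of the TMR-with-voters principle discussed above (not the thesis' FPGA design flow): three replicas compute the same function, a single event upset is injected into one of them, and the majority voter still produces the correct output.

```python
def voter(a, b, c):
    """Bitwise majority of three replica outputs."""
    return (a & b) | (a & c) | (b & c)

def replica(x):
    return (x * 3 + 1) & 0xFF                          # stand-in for the protected user logic

def run_tmr(x, upset_replica=None, flipped_bit=0):
    outputs = [replica(x) for _ in range(3)]
    if upset_replica is not None:
        outputs[upset_replica] ^= (1 << flipped_bit)   # model a single event upset (SEU)
    return voter(*outputs)

print(run_tmr(42, upset_replica=1, flipped_bit=5) == replica(42))   # True: one upset is masked
```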

Relevance:

60.00%

Publisher:

Abstract:

This work focuses on the relationship between organizational culture and quality culture in the hotel sector of NATAL/RN with respect to employee performance. The themes of organizational culture and quality have been a research focus for administration theorists and a constant concern of professional managers since the Japanese demonstrated effective forms of management to the West. In this study, the Competing Values Model (CVM) (Quinn and Cameron, 1996; Quinn, 1998; Santos, 1998, 2000; Teixeira, 2001), which was tested on North American organizations and is considered an instrument of high academic and professional value, was applied. The model maps the organizational culture onto a profile with four elements: clan, adhocracy, market and hierarchy. The CVM, associated with the taxonomy created by Cameron (which classifies quality culture into four levels: status quo, error detection, error prevention and perpetual creative quality), has been related to organizational performance. In this study, these two models are used jointly and tested in the hotel sector. The results indicate that the strongest element of the profile is clan, which is characterized by internal focus, participation and people involvement, followed by adhocracy, which has an external focus, emphasizes flexibility and is characterized by dynamic enterprise and creativity. Regarding the level of quality culture in the hotels, the highest level, that of perpetual improvement and creativity, which attempts to enchant and to surprise the clients, was most frequently cited, followed by the error detection level, whose goal is to discover and correct mistakes and thereby reduce waste. The results suggest that employee performance, as measured on some indicators, is related to elements of the organizational culture profile and to the quality level.

Relevance:

60.00%

Publisher:

Abstract:

Nowadays, the importance of using software processes is well consolidated and considered fundamental to the success of software development projects. Large and medium software projects demand the definition and continuous improvement of software processes in order to promote the productive development of high-quality software. Customizing and evolving existing software processes to address the variety of scenarios, technologies, cultures and scales is a recurrent challenge in the software industry. It involves adapting software process models to the reality of each project, and it must also promote the reuse of past experiences in the definition and development of software processes for new projects. The adequate management and execution of software processes can bring better quality and productivity to the produced software systems. This work explores the use and adaptation of consolidated software product line techniques to support the management of variability in software process families. In order to achieve this aim: (i) a systematic literature review is conducted to identify and characterize variability management approaches for software processes; (ii) an annotative approach for the variability management of software process lines is proposed and developed; and finally (iii) empirical studies and a controlled experiment assess and compare the proposed annotative approach against a compositional one. One study, a comparative qualitative study, analyzed the annotative and compositional approaches from different perspectives, such as modularity, traceability, error detection, granularity, uniformity, adoption, and systematic variability management. Another study, a comparative quantitative study, considered internal attributes of the specification of software process lines, such as modularity, size and complexity. Finally, a controlled experiment evaluated the effort to use and the understandability of the investigated approaches when modeling and evolving specifications of software process lines. The studies bring evidence of several benefits of the annotative approach, and of its potential for integration with the compositional approach, to assist the variability management of software process lines.
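
A minimal sketch of the annotative idea evaluated in the thesis, with invented activity and feature names: each activity of the software process line carries feature annotations, and a concrete process is derived by keeping only the activities whose annotations are satisfied by the selected features.

```python
# Hypothetical annotated software process line (names are illustrative).
process_line = [
    ("Elicit requirements",        set()),                   # mandatory: no annotation
    ("Write formal specification", {"formal_methods"}),
    ("Design architecture",        set()),
    ("Peer code review",           {"peer_review"}),
    ("Automated regression tests", {"test_automation"}),
]

def derive_process(selected_features):
    """Keep the activities whose annotations are all contained in the selection."""
    return [name for name, required in process_line if required <= selected_features]

print(derive_process({"peer_review", "test_automation"}))
```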

Relevance:

60.00%

Publisher:

Abstract:

Formal methods and software testing are tools to obtain and control software quality. When used together, they provide mechanisms for software specification, verification and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the process of verification and validation of a system. Model-based testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better-quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an example of a B specification from industry. Based on this case study we obtained insights to improve the method. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process and to use new coverage criteria. In addition, we implemented a tool to automate the method and applied it to more complex case studies.
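
As a small illustration of the kind of test inputs derived from an operation's precondition via equivalence class partitioning and boundary value analysis, consider a hypothetical precondition restricting an input to the interval 0..100 (this operation is invented, not taken from the cited B specifications):

```python
def test_values_for_range(low, high):
    """Boundary values inside the valid class plus values just outside it."""
    positive = [low, low + 1, (low + high) // 2, high - 1, high]   # should satisfy the precondition
    negative = [low - 1, high + 1]                                  # should be rejected by the operation
    return positive, negative

valid, invalid = test_values_for_range(0, 100)
print("positive test inputs:", valid)
print("negative test inputs:", invalid)
```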

Relevance:

60.00%

Publisher:

Abstract:

This article reviews the main codes used to detect and correct errors in data communication, specifically in computer networks. The Hamming code and the Cyclic Redundancy Check (CRC) are the focus of the article, along with a CRC hardware implementation. Each code is reviewed in detail in order to fill gaps in the literature and to make the material accessible to computer science and engineering students, as well as to anyone interested in learning techniques for handling errors in data communication.
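
As a brief illustration of the CRC technique the article reviews, the sketch below computes a CRC-8 checksum bit by bit: the message is divided modulo 2 by a generator polynomial and the remainder is appended, so a receiver repeating the division over the whole frame obtains zero unless an error is detected. The polynomial and messages are chosen for illustration only.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """CRC-8 with generator x^8 + x^2 + x + 1, no reflection, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

message = b"error detection"
checksum = crc8(message)
print(hex(checksum))
print(crc8(message + bytes([checksum])) == 0)               # intact frame checks out (True)
print(crc8(b"errod detection" + bytes([checksum])) == 0)    # corrupted frame is flagged (False)
```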