958 results for Error detection
Abstract:
In recent years, reversible logic has emerged as one of the most important approaches to power optimization, with applications in low-power CMOS, quantum computing, and nanotechnology. This paper proposes low-power circuits, implemented using reversible logic, that provide single error correction – double error detection (SEC-DED). The design uses a new 4 × 4 reversible gate called 'HCG' to implement Hamming error coding and detection circuits. A parity-preserving HCG (PPHCG), which preserves the input parity at the output bits, is used to achieve fault tolerance for the Hamming error coding and detection circuits.
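For context, SEC-DED combines a Hamming code with an overall parity bit: single-bit errors are corrected via the syndrome, while double-bit errors are flagged when the syndrome is nonzero but the overall parity still holds. A minimal Python sketch of this coding idea on one 4-bit nibble (the underlying Hamming(8,4) scheme only, not the paper's reversible-gate implementation):

```python
# SEC-DED on 4 data bits: Hamming(7,4) plus an overall parity bit.
# Code bit positions are 1..7; positions 1, 2, and 4 hold parity bits.

def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4                    # checks positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4                    # checks positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4                    # checks positions 5, 6, 7
    code = [p1, p2, d1, p4, d2, d3, d4]
    overall = 0
    for b in code:                       # eighth bit: overall even parity
        overall ^= b
    return code + [overall]

def decode(word):
    code = word[:7]
    syndrome = 0
    for pos in range(1, 8):              # XOR of positions holding a 1
        if code[pos - 1]:
            syndrome ^= pos
    overall = 0
    for b in word:
        overall ^= b                     # 0 if total parity is still even
    if syndrome == 0 and overall == 0:
        return "no error", code
    if overall == 1:                     # odd parity: a single-bit error
        if syndrome:
            code[syndrome - 1] ^= 1      # flip the offending bit
        return "corrected single error", code
    return "detected double error", None # syndrome set, parity even

word = encode(1, 0, 1, 1)
word[4] ^= 1                             # inject a single-bit fault
print(decode(word))                      # -> ('corrected single error', ...)
```

Flipping any one of the eight stored bits is corrected; flipping any two is detected but not corrected.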
Abstract:
The quality of conceptual business process models is highly relevant for the design of corresponding information systems. In particular, a precise measurement of model characteristics can be beneficial from a business perspective, helping to save costs through early error detection. This is just as true from a software engineering point of view, where models facilitate stakeholder communication and software system design. Research has investigated several proposals for business process model measures, from a rather correlational perspective. This is helpful for understanding, for example, size and complexity as general driving forces of error probability. Yet design decisions usually have to build on thresholds that can reliably indicate that a certain counter-action has to be taken. This cannot be achieved by providing measures alone; it requires a systematic identification of effective and meaningful thresholds. In this paper, we derive thresholds for a set of structural measures for predicting errors in conceptual process models. To this end, we use a collection of 2,000 business process models from practice to determine thresholds, applying an adaptation of the ROC curves method. Furthermore, an extensive validation of the derived thresholds was conducted using 429 EPC models from an Australian financial institution. Finally, significant thresholds were adapted to refine existing modeling guidelines in a quantitative way.
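The threshold-derivation step can be pictured with the standard ROC procedure (the paper applies an adaptation of it): sweep candidate cutoffs for a structural measure and keep the cutoff that best separates error-containing from error-free models, for instance by maximizing Youden's J. A sketch in Python using scikit-learn; the measure name and toy data are illustrative, not taken from the study:

```python
import numpy as np
from sklearn.metrics import roc_curve

# has_error[i] = 1 if process model i contains an error;
# size[i] is a structural measure of that model (toy values).
has_error = np.array([0, 0, 0, 1, 0, 1, 1, 1, 0, 1])
size      = np.array([12, 18, 25, 31, 22, 47, 52, 39, 15, 60])

fpr, tpr, thresholds = roc_curve(has_error, size)
j = tpr - fpr                       # Youden's J for every candidate cutoff
best = thresholds[np.argmax(j)]     # cutoff with the best trade-off
print(f"flag models with size >= {best}")
```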
Abstract:
Wearable devices performing advanced bio-signal analysis algorithms promise to foster a revolution in healthcare provision for chronic cardiac diseases. In this context, energy efficiency is of paramount importance, as long-term monitoring must be ensured while relying on a tiny power source. Operating at a scaled supply voltage, just above the threshold voltage, saves substantial energy, but it makes circuits, and especially memories, more prone to errors, threatening the correct execution of algorithms. The use of error detection and correction codes may help to protect the entire memory content; however, it incurs large area and energy overheads that may not be compatible with the tight energy budgets of wearable systems. To cope with this challenge, in this paper we propose to limit the overhead of traditional schemes by selectively detecting and correcting errors only in data that highly impact the end-to-end quality of service of ultra-low-power wearable electrocardiogram (ECG) devices. This partitioning protects either the significant words or the significant bits of each data element, according to the application characteristics (statistical properties of the data in the application buffers) and their impact on the output. The proposed heterogeneous error protection scheme on real ECG signals allows substantial energy savings (11% in wearable devices) compared to state-of-the-art approaches, such as ECC, in which the whole memory is protected against errors. At the same time, it results in negligible output quality degradation in the evaluated power spectrum analysis application of ECG signals.
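The "significant bits" variant can be illustrated with a toy scheme: guard only the top bits of each sample with a parity check, so an upset there is detected while low-order noise passes unprotected. A Python sketch of the general idea under assumed parameters (16-bit samples, 4 protected bits), not the paper's circuit:

```python
# Protect only the top 4 bits of each 16-bit ECG sample with even parity;
# faults in the low 12 bits are accepted as tolerable noise.
TOP_BITS = 4
SHIFT = 16 - TOP_BITS

def parity(x):
    p = 0
    while x:
        p ^= x & 1
        x >>= 1
    return p

def store(sample):
    return sample, parity(sample >> SHIFT)       # parity of top bits only

def check(sample, stored_parity):
    return parity(sample >> SHIFT) == stored_parity

word, p = store(0b1011_0000_1111_0001)
assert check(word, p)
assert not check(word ^ (1 << 15), p)   # upset in a significant bit: caught
assert check(word ^ 0b1, p)             # upset in a low bit: tolerated
```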
Abstract:
Malapropism is a semantic error that is hard to detect because it usually retains the syntactic links between words in the sentence but replaces one content word with a similar word of quite different meaning. A method for automatic detection of malapropisms is described, based on Web statistics and a specially defined Semantic Compatibility Index (SCI). For correction of the detected errors, special dictionaries and heuristic rules are proposed, which retain only a few highly SCI-ranked correction candidates for the user's selection. Experiments on Web-assisted detection and correction of Russian malapropisms are reported, demonstrating the efficacy of the described method.
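The SCI itself is defined in the paper; as a heavily simplified stand-in for the idea, a content word can be scored by a PMI-style co-occurrence statistic against its context, with a low score suggesting a malapropism. In the original the counts come from Web hit statistics; in this hypothetical Python sketch a toy frequency table plays that role:

```python
import math

# Toy stand-in for Web statistics: pair and single-word "hit counts".
pair_hits = {("strong", "tea"): 900, ("powerful", "tea"): 15,
             ("powerful", "car"): 800}
word_hits = {"strong": 50_000, "powerful": 40_000,
             "tea": 30_000, "car": 60_000}
TOTAL = 10**6

def compatibility(w, ctx):
    """PMI-like score; very low values hint that w may be a malapropism."""
    joint = pair_hits.get((w, ctx), 0.5)          # smoothed pair count
    return math.log(joint * TOTAL /
                    (word_hits.get(w, 1) * word_hits.get(ctx, 1)))

print(compatibility("powerful", "tea"))   # low  -> suspicious combination
print(compatibility("strong", "tea"))     # high -> plausible combination
```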
Abstract:
Fallibility is inherent in human cognition, so a system that monitors performance is indispensable. While behavioral evidence for such a system derives from the finding that subjects slow down after trials that are likely to produce errors, the neural and behavioral characterization that enables such control is incomplete. Here, we report a specific role for dopamine/basal ganglia in response conflict by assessing deficits in performance monitoring in patients with Parkinson's disease. To characterize such a deficit, we used a modification of the oculomotor countermanding task to show that slowing of responses that generate robust response conflict, rather than post-error slowing per se, is deficient in Parkinson's disease patients. Poor performance adjustment could be due either to an impaired ability to slow reaction time after conflicts or to impaired recognition of response conflict. If the latter hypothesis were true, then PD subjects should show evidence of impaired error detection/correction, which was found to be the case. These results make a strong case for impaired performance monitoring in Parkinson's patients.
Abstract:
Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. These codes perform poorly, though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple yet powerful framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
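The reduction of node repair to erasure decoding is visible even in the simplest erasure code. A toy single-parity example in Python (it shows repair-as-decoding only, not the repair-bandwidth savings of the paper's two-code framework):

```python
# Toy (3,2) erasure code: two data nodes plus one XOR-parity node.
# Repairing any single failed node is an erasure-decoding step
# over the two survivors.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data_a = b"first half of file "
data_b = b"second half of file"
parity = xor_bytes(data_a, data_b)   # third node stores the parity

# Suppose the node holding data_a fails:
repaired = xor_bytes(data_b, parity)
assert repaired == data_a            # the lost block is re-derived by decoding
```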
Abstract:
In this work we discuss several check digit systems used in Brazil, many of them similar to schemes used worldwide, and we analyze their ability to detect the various types of errors that are common in data entry into computer systems. The analysis shows that the chosen schemes constitute suboptimal decisions and almost never achieve the best possible error detection rate. Check digit systems are based on three algebraic theories: modular arithmetic, group theory, and quasigroups. For the systems based on modular arithmetic, we present several improvements that can be introduced. We develop a new optimal scheme based on modulo-10 arithmetic with three permutations for identifiers longer than seven digits. We also describe the Verhoeff scheme, which is old yet very rarely used and is likewise an improved alternative for identifiers of length up to seven. We further develop optimal schemes, for any prime modular base, that detect all the types of errors considered. The dissertation also draws on elements of statistics, in the study of error detection probabilities, and of algorithms, in obtaining optimal schemes.
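The Verhoeff scheme mentioned above builds on the dihedral group D5 and detects all single-digit errors and all adjacent transpositions, a combination that single-check-digit weighted modulo-10 schemes cannot achieve. A standard Python implementation using the classical published tables (the well-known scheme itself, not the dissertation's new optimal schemes):

```python
# Verhoeff check digit: multiplication in the dihedral group D5 (table d),
# a position-dependent permutation (table p), and group inverses (inv).
d = [[0,1,2,3,4,5,6,7,8,9],[1,2,3,4,0,6,7,8,9,5],[2,3,4,0,1,7,8,9,5,6],
     [3,4,0,1,2,8,9,5,6,7],[4,0,1,2,3,9,5,6,7,8],[5,9,8,7,6,0,4,3,2,1],
     [6,5,9,8,7,1,0,4,3,2],[7,6,5,9,8,2,1,0,4,3],[8,7,6,5,9,3,2,1,0,4],
     [9,8,7,6,5,4,3,2,1,0]]
p = [[0,1,2,3,4,5,6,7,8,9],[1,5,7,6,2,8,3,0,9,4],[5,8,0,3,7,9,6,1,4,2],
     [8,9,1,6,0,4,3,5,2,7],[9,4,5,3,1,2,6,8,7,0],[4,2,8,6,5,7,3,9,0,1],
     [2,7,9,3,8,0,6,4,1,5],[7,0,4,6,9,1,3,2,5,8]]
inv = [0,4,3,2,1,5,6,7,8,9]

def check_digit(number: str) -> str:
    c = 0
    for i, ch in enumerate(reversed(number)):
        c = d[c][p[(i + 1) % 8][int(ch)]]
    return str(inv[c])

def is_valid(number_with_check: str) -> bool:
    c = 0
    for i, ch in enumerate(reversed(number_with_check)):
        c = d[c][p[i % 8][int(ch)]]
    return c == 0

n = "236"
assert is_valid(n + check_digit(n))           # appended digit validates
assert not is_valid("263" + check_digit(n))   # transposed digits are caught
```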
Abstract:
This work studies the characteristics of forest vegetation resources and the environment in the Fukang area of Xinjiang and their changes over the past 30 years. Using the powerful spatial analysis capabilities of Arcinfo, it comprehensively analyzes the resources, a DEM model, landscape indices, environmental value, and the geostatistical patterns of precipitation in Xinjiang. The thesis has five parts:

1. Construction of a spatial database of forest resources and environment for the Fukang area. Such a database is the foundation for spatial analysis. Using multi-temporal remote sensing images and topographic maps of the area, an integrated GIS spatial database of forest classification maps and attributes (including forest and environmental explanatory variable sets) was built. To improve the classification accuracy of the TM images, ERDAS image processing software was used for enhancement, including principal components, noise reduction, destriping, and natural color transforms; classification combined supervised classification with manual interpretation. Integrating R2V, ERDAS, Arcview, and Arcinfo enabled seamless joining of sub-compartment polygon layers with certain line layers, yielding a workable technical path for building forest resource and environment spatial databases for GIS in western China. In addition, a preliminary analysis of forest resource dynamics in northern Fukang was carried out.

2. Digital elevation model (DEM) of the Fukang area and gross-error detection. To improve the accuracy of ecological modeling, simulating and extracting the terrain features of the area is essential. Supported by the established spatial database, a 1:50,000 DEM of the Fukang area was built with Arcinfo and ERDAS. Elevation, slope, and aspect factors were extracted to analyze the vertical distribution of forest vegetation, and gross-error detection on the DEM was used to assess its accuracy.

3. Analysis of landscape pattern change in the Fukang area. Supported by the 1977, 1987, and 1999 spatial databases, vegetation landscape type maps of the area were compiled for the three periods with landscape analysis software, and landscape dynamics and pattern changes over nearly 30 years were analyzed. The results show that: (1) the number of patches in the study area decreased, mean patch area increased, and landscape area became more unevenly distributed among landscape element types, concentrating in a few types, indicating a trend toward homogenization of landscape types during this period; (2) cropland became more fragmented, with smaller mean patch area and greater dispersion between patches, reflecting intensified human economic activity in the area; (3) natural forest area decreased considerably, water area increased, and the area of glaciers and permanent snow declined.

4. Preliminary analysis of forest ecological benefits in the Fukang area. Starting from a broad definition of forest ecological benefits, and given that the 12 dependent benefit variables are not fully independent and their explanatory variable sets differ, a seemingly unrelated generalized linear model with many-to-many, globally compatible structure was introduced. By constructing "effective area coefficients" and "market approximation coefficients" for the 12 benefits, and supported by the spatial database, the forest ecological benefits of the area were quantified for two periods. The results show that the monetary value of forest ecological benefits in Fukang was 906.738 million yuan in 1987 and 841.344 million yuan in 1999, an overall downward trend.

5. Spatial distribution of annual precipitation from Xinjiang weather station data. Using the ArcGIS geostatistics module, supported by the 2000 Xinjiang climate information spatial database and the Xinjiang DEM, a map of the spatial distribution of annual precipitation in Xinjiang was produced. Trend surface analysis models were built from the climate data to simulate the trend component of precipitation. Three algorithms (inverse distance weighting, ordinary kriging, and co-kriging) were used to compute and compare the spatiotemporal variation of multi-year mean precipitation in the study area. The most accurate simulated gridded precipitation database was used to build a multi-year mean precipitation information system that can quickly compute the total precipitation and its spatial variation for any geographic unit in the study area and generate high-accuracy spatial distribution maps of climate elements.
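Gross-error detection on a DEM (part 2 above) is commonly performed by comparing each cell against a robust local surface and flagging cells that deviate beyond a tolerance. A generic Python sketch of that idea, not the thesis's exact procedure:

```python
import numpy as np
from scipy.ndimage import median_filter

def flag_gross_errors(dem, win=3, k=3.0):
    """Flag DEM cells deviating from the local median by more than k
    robust standard deviations (derived from the median absolute deviation)."""
    local_med = median_filter(dem, size=win)
    resid = dem - local_med
    mad = np.median(np.abs(resid - np.median(resid)))
    sigma = 1.4826 * mad + 1e-9          # MAD -> sigma under normality
    return np.abs(resid) > k * sigma     # boolean mask of suspect cells

dem = np.random.default_rng(0).normal(1000, 5, (50, 50))
dem[10, 10] = 1500                       # injected blunder
assert flag_gross_errors(dem)[10, 10]
```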
Abstract:
As the complexity of microelectronic devices increases, the effect of space radiation on the correctness of computer programs is becoming ever more pronounced. In most cases these effects are not permanent but transient faults. Whether in space-borne information processing systems, embedded real-time control systems, computer clusters, or high-performance supercomputers, erroneous outputs can have catastrophic consequences.

Traditional high-reliability systems use radiation-hardened components and redundant hardware to meet reliability requirements, but these are expensive and lag behind today's commercial off-the-shelf (COTS) components in performance. To address the limited fault tolerance of COTS parts, software fault-tolerance techniques can effectively improve system reliability without changing the hardware.

At the software level, transient faults mainly manifest as control-flow errors and data-flow errors; this thesis addresses fault tolerance for control-flow errors. Software-implemented control-flow fault tolerance inserts redundant checking logic at compile time and detects and handles control-flow errors at run time.

The central problem in control-flow fault tolerance is to minimize the overhead of the redundant logic while preserving fault coverage. This thesis studies control-flow fault tolerance from several angles: the basic concepts of control-flow errors, the choice of the fault-tolerance unit, the construction of signature information, and the placement of signature and check points. The main contributions are:

1. Common control-flow checking methods are analyzed and compared, and their strengths and weaknesses are discussed.
2. Control-flow errors are classified, and on this basis a control-flow checking method based on related predecessor basic blocks (CFCLRB) is proposed.
3. A signature-flow model is proposed, along with a control-flow checking method based on it (CFCSF). The method detects inter-basic-block control-flow errors with low time and space overhead and high error coverage. It can flexibly insert and remove signature and check points according to the required fault-tolerance granularity, giving it strong extensibility, and it can handle dynamic function pointers, a control-flow construct that is hard to determine at compile time.
4. The above methods are implemented at the assembly-instruction level, together with two internationally used methods, Control Flow Checking by Software Signatures (CFCSS) and Control-flow Error Detection through Assertions (CEDA), for comparison. Redundant instruction logic is added to make the original programs fault tolerant.
5. Control-flow error injection is implemented with the PIN tool, and CFCLRB, CFCSF, CFCSS, and CEDA are compared under the same experimental conditions. The experiments show that CFCLRB has a time overhead of 26.9% and a space overhead of 27.6%, raising error coverage from 66.50% (for the unprotected program) to 97.32%. CFCSF has a time overhead of 14.7% and a space overhead of 22.1%, raising error coverage from 66.50% to 96.79%. Compared with CFCSS, it reduces time overhead from 37.2% to 14.7% and space overhead from 31.2% to 22.1% while raising error coverage from 95.16% to 96.79%. Compared with CEDA, it reduces time overhead from 26.9% to 14.7% and space overhead from 27.1% to 22.1%, with error coverage dropping only slightly from 97.39% to 96.79%.

Finally, directions for future research on control-flow fault tolerance are discussed.
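The signature-monitoring idea underlying CFCSS-style methods (and the CFCLRB/CFCSF variants proposed here) can be simulated compactly: each basic block receives a compile-time signature, a runtime register is updated on every transition with a precomputed XOR difference, and a mismatch at block entry exposes an illegal jump. A simplified Python simulation of the mechanism; the real techniques instrument assembly code:

```python
# Compile-time data: block signatures and, per legal edge, the XOR
# difference d = sig(pred) ^ sig(succ) used to update the register.
sig = {"entry": 0b0001, "loop": 0b0010, "exit": 0b0111}
diff = {("entry", "loop"): sig["entry"] ^ sig["loop"],
        ("loop", "loop"):  0,
        ("loop", "exit"):  sig["loop"] ^ sig["exit"]}

G = sig["entry"]                        # runtime signature register

def branch(src, dst):
    """Take an edge: update G with the edge's difference, then check."""
    global G
    G ^= diff.get((src, dst), 0xFFFF)   # unknown edge -> guaranteed mismatch
    if G != sig[dst]:
        raise RuntimeError(f"control-flow error entering '{dst}'")

branch("entry", "loop")                 # legal edges pass silently
branch("loop", "exit")
G = sig["entry"]                        # reset, then simulate a stray jump
try:
    branch("entry", "exit")             # illegal edge
except RuntimeError as err:
    print(err)                          # -> control-flow error entering 'exit'
```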
Abstract:
We discuss the design principles of TCP within the context of heterogeneous wired/wireless networks and mobile networking. We identify three shortcomings in TCP's behavior: (i) the protocol's error detection mechanism, which does not distinguish different types of errors and thus does not suffice for heterogeneous wired/wireless environments; (ii) the error recovery, which is not responsive to the distinctive characteristics of wireless networks, such as transient or burst errors due to handoffs and fading channels; and (iii) the protocol strategy, which does not control the tradeoff between performance measures such as goodput and energy consumption, and often entails a wasteful effort of retransmission and energy expenditure. We discuss a solution framework based on selected research proposals and the associated evaluation criteria for the suggested modifications. We highlight an important angle that has not yet attracted the attention it requires: the need for new performance metrics appropriate for evaluating the impact of protocol strategies on battery-powered devices.
Abstract:
This study evaluated the effectiveness of four assistive technology (AT) tools for literacy: (1) speech synthesis, (2) spellchecker, (3) homophone tool, and (4) dictionary. All four of these programs are featured in TextHelp's Read&Write Gold software package. A total of 93 secondary-level students with reading disabilities participated in the study. The participants completed a number of computer-based literacy tests after being assigned to a Read&Write group or a control group that used Microsoft Word. The results indicated improvements in the following areas for the Read&Write group: (1) reading comprehension, (2) homophone error detection, (3) spelling error detection, and (4) word meanings. The Microsoft Word group also improved in the areas of word meanings and error detection, though it performed worse on homophone error detection. The authors contend that these results indicate that speech synthesis, spell checkers, homophone tools, and dictionary programs have a positive effect on literacy among students with reading disabilities. The study was conducted by researchers at Queen's University Belfast, Northern Ireland.
Abstract:
Students' learning process can be negatively affected when their control of reading comprehension is not adequate. This research analyzes how a group of secondary school students evaluate their reading comprehension of manipulated scientific texts. An analysis tool was designed to determine the students' degree of comprehension control when reading a short scientific text with an added contradiction. The results reveal that students in 1st and 3rd ESO do not properly self-evaluate their reading comprehension. A different behavior was observed in 1st Bachillerato, where appropriate evaluation and regulation seem to be more frequent. Moreover, no significant differences were found regarding the type of text, year, or gender. Finally, as identified by previous research, the correlations between the students' comprehension control and their school marks are weak and inversely proportional to the students' age.
Abstract:
Task scheduling is one of the key mechanisms for ensuring timeliness in embedded real-time systems. Such systems often need to execute not only application tasks but also some urgent routines (e.g., error-detection actions, consistency checkers, interrupt handlers) with minimum latency. Although fixed-priority schedulers such as Rate-Monotonic (RM) are in line with this need, they usually make only a low processor utilization available to the system, and this availability usually decreases with the number of tasks considered. If dynamic-priority schedulers such as Earliest Deadline First (EDF) are applied instead, high system utilization can be guaranteed, but the minimum latency for executing urgent routines may not be ensured. In this paper we describe a scheduling model according to which urgent routines are executed at the highest priority level and all other system tasks are scheduled by EDF. We show that the guaranteed processor utilization for the assumed scheduling model is at least as high as the one provided by RM for two tasks, namely 2(√2 − 1). Seven polynomial-time tests for checking the system timeliness are derived and proved correct. The proposed tests are compared against each other and against an exact but exponential running time test.
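The quoted utilization is the classical Liu–Layland least upper bound for rate-monotonic scheduling of n periodic tasks, evaluated at n = 2:

```latex
U_{\mathrm{RM}}(n) = n\left(2^{1/n} - 1\right),
\qquad
U_{\mathrm{RM}}(2) = 2\left(\sqrt{2} - 1\right) \approx 0.828.
```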
Abstract:
Imaging studies have shown reduced frontal lobe resources following total sleep deprivation (TSD). The anterior cingulate cortex (ACC) in the frontal region plays a role in performance monitoring and cognitive control; both error detection and response inhibition are impaired following sleep loss. Event-related potentials (ERPs) are an electrophysiological tool used to index the brain's response to stimuli and information processing. In the Flanker task, the error-related negativity (ERN) and error positivity (Pe) ERPs are elicited after erroneous button presses. In a Go/NoGo task, NoGo-N2 and NoGo-P3 ERPs are elicited during high-conflict stimulus processing. Research investigating the impact of sleep loss on ERPs during performance monitoring is equivocal, possibly due to task differences, sample size differences, and varying degrees of sleep loss. Based on the effects of sleep loss on frontal function and prior research, it was expected that the sleep deprivation group would have lower accuracy, slower reaction time, and impaired remediation on performance monitoring tasks, along with attenuated and delayed stimulus- and response-locked ERPs. In the current study, 49 young adults (24 male) were screened to be healthy good sleepers and then randomly assigned to a sleep-deprived (n = 24) or rested control (n = 25) group. Participants slept in the laboratory on a baseline night, followed by a second night of sleep or wake. Flanker and Go/NoGo tasks were administered in a battery at 10:30 am (i.e., 27 hours awake for the sleep deprivation group) to measure performance monitoring. On the Flanker task, the sleep deprivation group was significantly slower than controls (ps < .05), but the groups did not differ on accuracy. No group differences were observed in post-error slowing, but a trend was observed for less remedial accuracy in the sleep-deprived group compared to controls (p = .09), suggesting impairment in the ability to take remedial action following TSD. Delayed P300s were observed in the sleep-deprived group on congruent and incongruent Flanker trials combined (p = .001). On the Go/NoGo task, the hit rate (i.e., Go accuracy) was significantly lower in the sleep-deprived group compared to controls (p < .001), but no differences were found on false alarm rates (i.e., NoGo accuracy). For the sleep-deprived group, the Go-P3 was significantly smaller (p = .045) and there was a trend for a smaller NoGo-N2 compared to controls (p = .08). The ERN amplitude was reduced in the TSD group compared to controls in both the Flanker and Go/NoGo tasks. Error rate was significantly correlated with the amplitude of response-locked ERNs in the control (r = -.55, p = .005) and sleep-deprived groups (r = -.46, p = .021); error rate was also correlated with Pe amplitude in controls (r = .46, p = .022), and a trend was found in the sleep-deprived participants (r = .39, p = .052). An exploratory analysis showed significantly larger Pe mean amplitudes (p = .025) in the sleep-deprived group compared to controls for participants who made more than 40 errors on the Flanker task. Altered stimulus processing, as indexed by delayed P3 latency during the Flanker task and smaller-amplitude Go-P3s during the Go/NoGo task, indicates impairment in stimulus evaluation and/or context updating during frontal lobe tasks. ERN and NoGo-N2 reductions in the sleep-deprived group confirm impairments in the monitoring system. These data add to a body of evidence showing that the frontal brain region is particularly vulnerable to sleep loss.
Understanding the neural basis of these deficits in performance monitoring abilities is particularly important for our increasingly sleep deprived society and for safety and productivity in situations like driving and sustained operations.