952 results for DEM gross error detection
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades, because of their geomorphological importance as the reference surface for gravitation-driven material flow and because of their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors in the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making based on interpretations and applications of terrain analysis, and may also indirectly influence data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented on a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of DEM error, performing analytical and simulation-based error propagation analysis, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task, owing to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, a global characterisation of DEM error is a gross generalisation of reality, because the areas within which the assumption of stationarity is not violated are small in extent. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning together with local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in DEM vertical error increases the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this view is now challenged because none of the DEM derivatives investigated in the study showed maximum variation with spatially uncorrelated random error. A significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution to generate realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
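As a hedged illustration of the kind of simulation-based error propagation described above (not the thesis' own code), the sketch below generates spatially correlated DEM error fields by process convolution (white noise smoothed with a Gaussian kernel), adds them to a synthetic DEM, and measures the resulting spread of a surface derivative (slope). The grid size, error standard deviation and correlation range are hypothetical.

```python
# Minimal Monte Carlo sketch of DEM error propagation into slope.
import numpy as np
from scipy.ndimage import gaussian_filter

def correlated_error(shape, sigma_z, corr_range_cells, rng):
    """Process convolution: smooth white noise, then rescale to std sigma_z."""
    noise = rng.standard_normal(shape)
    field = gaussian_filter(noise, sigma=corr_range_cells)
    return field * (sigma_z / field.std())

def slope_deg(dem, cell=10.0):
    """Slope in degrees from finite differences on a square grid."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

rng = np.random.default_rng(0)
dem = gaussian_filter(rng.standard_normal((200, 200)), 15) * 50   # synthetic terrain
runs = [slope_deg(dem + correlated_error(dem.shape, sigma_z=1.5,
                                         corr_range_cells=5, rng=rng))
        for _ in range(100)]
slope_std = np.std(np.stack(runs), axis=0)    # per-cell propagated uncertainty
print("mean slope std dev [deg]:", slope_std.mean())
```

Varying `corr_range_cells` in this toy model is one way to see the application-dependent effect of spatial autocorrelation that the abstract refers to.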
Abstract:
In recent years, reversible logic has emerged as one of the most important approaches to power optimization, with applications in low-power CMOS, quantum computing and nanotechnology. This paper proposes low-power circuits, implemented using reversible logic, that provide single error correction and double error detection (SEC-DED). The design uses a new 4 x 4 reversible gate called 'HCG' to implement Hamming error coding and detection circuits. A parity-preserving HCG (PPHCG), which preserves the input parity at the output bits, is used to achieve fault tolerance for the Hamming error coding and detection circuits.
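For reference, the coding scheme underlying such a design is the extended Hamming SEC-DED code. The sketch below is a plain software model of that code for a 4-bit data word, not the gate-level HCG/PPHCG implementation described in the abstract.

```python
# Extended Hamming(8,4) SEC-DED: correct any single-bit error, detect double errors.

def encode(d):                        # d = [d1, d2, d3, d4] data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    code = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # Hamming(7,4), positions 1..7
    p0 = 0
    for b in code:
        p0 ^= b
    return [p0] + code                            # overall parity bit -> SEC-DED

def decode(c):
    """Return (data, status); status is 'ok', 'corrected' or 'double error'."""
    p0, code = c[0], c[1:]
    s1 = code[0] ^ code[2] ^ code[4] ^ code[6]    # checks positions 1,3,5,7
    s2 = code[1] ^ code[2] ^ code[5] ^ code[6]    # checks positions 2,3,6,7
    s3 = code[3] ^ code[4] ^ code[5] ^ code[6]    # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3               # 1-based position of a single error
    overall = p0
    for b in code:
        overall ^= b                              # 0 if total parity is even
    if syndrome and overall:                      # single error in the code bits
        code[syndrome - 1] ^= 1
        status = 'corrected'
    elif syndrome and not overall:                # two errors: detect only
        return None, 'double error'
    elif not syndrome and overall:                # error in the parity bit p0 itself
        status = 'corrected'
    else:
        status = 'ok'
    return [code[2], code[4], code[5], code[6]], status

word = encode([1, 0, 1, 1])
word[5] ^= 1                                      # flip one stored bit
print(decode(word))                               # -> ([1, 0, 1, 1], 'corrected')
```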
Abstract:
The quality of conceptual business process models is highly relevant for the design of corresponding information systems. In particular, a precise measurement of model characteristics can be beneficial from a business perspective, helping to save costs through early error detection. This is just as true from a software engineering point of view. In this latter case, models facilitate stakeholder communication and software system design. Research has investigated several proposals for measures of business process models, from a rather correlational perspective. This is helpful for understanding, for example, size and complexity as general driving forces of error probability. Yet design decisions usually have to build on thresholds, which can reliably indicate that a certain counter-action has to be taken. This cannot be achieved by providing measures alone; it requires a systematic identification of effective and meaningful thresholds. In this paper, we derive thresholds for a set of structural measures for predicting errors in conceptual process models. To this end, we use a collection of 2,000 business process models from practice to determine thresholds, applying an adaptation of the ROC curves method. Furthermore, an extensive validation of the derived thresholds was conducted using 429 EPC models from an Australian financial institution. Finally, significant thresholds were adapted to refine existing modeling guidelines in a quantitative way.
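The following sketch illustrates the general idea of deriving a threshold for one measure from binary has-error labels using a ROC-style criterion (Youden's J). The measure values and labels are invented, and the paper's adaptation of the ROC curves method may differ in detail.

```python
# Pick the cut-off that best separates error-containing models from error-free ones.
import numpy as np

def roc_threshold(values, has_error):
    values, has_error = np.asarray(values, float), np.asarray(has_error, bool)
    best_t, best_j = None, -1.0
    for t in np.unique(values):
        flagged = values >= t                      # "measure exceeds threshold"
        tpr = (flagged & has_error).sum() / max(has_error.sum(), 1)
        fpr = (flagged & ~has_error).sum() / max((~has_error).sum(), 1)
        j = tpr - fpr                              # Youden's J statistic
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

size = [12, 18, 25, 31, 40, 44, 52, 60, 75, 90]    # hypothetical measure (model size)
err  = [0,  0,  0,  1,  1,  1,  1,  1,  1,  1]     # 1 = model contains an error
print(roc_threshold(size, err))                    # -> (31.0, 1.0) for this toy data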
Abstract:
Wearable devices performing advanced bio-signal analysis algorithms aim to foster a revolution in healthcare provision for chronic cardiac diseases. In this context, energy efficiency is of paramount importance, as long-term monitoring must be ensured while relying on a tiny power source. Operating at a scaled supply voltage, just above the threshold voltage, effectively helps to save substantial energy, but it makes circuits, and especially memories, more prone to errors, threatening the correct execution of algorithms. The use of error detection and correction codes may help to protect the entire memory content; however, it incurs large area and energy overheads that may not be compatible with the tight energy budgets of wearable systems. To cope with this challenge, in this paper we propose to limit the overhead of traditional schemes by selectively detecting and correcting errors only in data that strongly impact the end-to-end quality of service of ultra-low power wearable electrocardiogram (ECG) devices. This partitioning protects either the significant words or the significant bits of each data element, according to the application characteristics (the statistical properties of the data in the application buffers) and their impact on the output. The proposed heterogeneous error protection scheme, applied to real ECG signals, allows substantial energy savings (11% in wearable devices) compared to state-of-the-art approaches, such as ECC, in which the whole memory is protected against errors. At the same time, it results in negligible output quality degradation in the evaluated power spectrum analysis application for ECG signals.
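A minimal sketch of the "significant bits" idea follows, under simplifying assumptions that are not the paper's exact design: only the top bits of each sample carry a parity check (detection only, no correction), so flips in the low-order bits are simply tolerated, while flips in the protected bits are at least flagged. The bit widths are hypothetical.

```python
SAMPLE_BITS = 12          # hypothetical ADC resolution
PROTECTED_BITS = 4        # protect only the 4 most significant bits

def parity(value):
    return bin(value).count("1") & 1

def store(sample):
    msb = sample >> (SAMPLE_BITS - PROTECTED_BITS)
    return sample, parity(msb)                   # value + 1-bit check for the MSBs

def check(stored):
    sample, p = stored
    msb = sample >> (SAMPLE_BITS - PROTECTED_BITS)
    return parity(msb) == p                      # False -> significant bits corrupted

word = store(0b101101110001)
corrupted_lsb = (word[0] ^ 0b000000000100, word[1])    # low-order flip: tolerated
corrupted_msb = (word[0] ^ 0b100000000000, word[1])    # high-order flip: detected
print(check(corrupted_lsb), check(corrupted_msb))       # True False
```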
Abstract:
Malapropism is a semantic error that is hard to detect because it usually retains the syntactic links between words in the sentence but replaces one content word with a similar word of quite different meaning. A method for automatic detection of malapropisms is described, based on Web statistics and a specially defined Semantic Compatibility Index (SCI). For correction of the detected errors, special dictionaries and heuristic rules are proposed, which retain only a few highly SCI-ranked correction candidates for the user's selection. Experiments on Web-assisted detection and correction of Russian malapropisms are reported, demonstrating the efficacy of the described method.
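As a rough illustration of a web-statistics compatibility score (the paper's SCI is defined differently in detail), the sketch below uses a PMI-style score over hard-coded, made-up hit counts: a suspect word with low compatibility is flagged, and candidates with higher scores are kept as corrections.

```python
import math

HITS = {                       # pretend web frequencies (made-up numbers)
    ("dance", "flamingo"): 120, ("dance", "flamenco"): 48000,
    "dance": 2_000_000, "flamingo": 900_000, "flamenco": 350_000,
}
TOTAL_DOCS = 1_000_000_000

def compatibility(w1, w2):
    """Pointwise-mutual-information-like score; higher = more compatible."""
    joint = HITS.get((w1, w2), 0) + 1             # +1 to avoid log(0)
    return math.log((joint * TOTAL_DOCS) / (HITS[w1] * HITS[w2]))

suspect, candidates = "flamingo", ["flamenco"]
base = compatibility("dance", suspect)
better = [(c, compatibility("dance", c)) for c in candidates
          if compatibility("dance", c) > base]     # keep higher-ranked corrections
print(base, better)                                # "flamenco" scores far higher
```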
Abstract:
Fallibility is inherent in human cognition, so a system that monitors performance is indispensable. While behavioral evidence for such a system derives from the finding that subjects slow down after trials that are likely to produce errors, the neural and behavioral characterization that enables such control is incomplete. Here, we report a specific role for dopamine/basal ganglia in response conflict by assessing deficits in performance monitoring in patients with Parkinson's disease. To characterize such a deficit, we used a modification of the oculomotor countermanding task to show that slowing of responses that generate robust response conflict, and not post-error slowing per se, is deficient in Parkinson's disease patients. Poor performance adjustment could be due either to an impaired ability to slow reaction times after conflicts or to impaired recognition of response conflict. If the latter hypothesis were true, then PD subjects should show evidence of impaired error detection/correction, which was found to be the case. These results make a strong case for impaired performance monitoring in Parkinson's patients.
Abstract:
Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. The codes perform poorly, though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used, while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple, yet powerful, framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can provide one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
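The sketch below shows only the underlying operation, not the paper's two-code construction: with a simple XOR parity code, repairing one failed node is exactly an erasure decode over the surviving nodes. The real framework partitions nodes into two classes and uses two constituent codes so that only a fraction of the data must be read for repair.

```python
def encode(data_chunks):
    """k equal-length data chunks + 1 XOR parity chunk (a [k+1, k] code)."""
    parity = bytes(len(data_chunks[0]))
    for chunk in data_chunks:
        parity = bytes(a ^ b for a, b in zip(parity, chunk))
    return data_chunks + [parity]

def repair(chunks, failed_index):
    """Erasure-decode the missing chunk by XOR-ing all surviving chunks."""
    rebuilt = bytes(len(next(c for c in chunks if c is not None)))
    for i, chunk in enumerate(chunks):
        if i != failed_index:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, chunk))
    return rebuilt

nodes = encode([b"node0-data", b"node1-data", b"node2-data"])
lost = 1
survivors = [c if i != lost else None for i, c in enumerate(nodes)]
print(repair(survivors, lost) == nodes[lost])      # True: node 1 rebuilt
```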
Abstract:
In this work we discuss several check digit systems used in Brazil, many of them similar to schemes used worldwide, and analyse their ability to detect the various types of errors that commonly occur during data entry in computer systems. The analysis shows that the chosen schemes are suboptimal choices and almost never achieve the best possible error detection rate. Check digit systems are based on three areas of algebra: modular arithmetic, group theory and quasigroups. For the systems based on modular arithmetic, we present several improvements that can be introduced. We develop a new optimal scheme based on modulo-10 arithmetic with three permutations for identifiers longer than seven digits. We also describe the Verhoeff scheme, long established but very rarely used, which is likewise an improved alternative for identifiers of length up to seven. We further develop optimal schemes for any prime modular base that detect all of the error types considered. The dissertation also draws on elements of statistics, in studying error detection probabilities, and of algorithms, in obtaining optimal schemes.
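For illustration, the sketch below implements one scheme the dissertation mentions, the Verhoeff check digit based on the dihedral group D5, using the standard published tables. It is a generic implementation, not the dissertation's own optimal modulo-10 three-permutation scheme.

```python
# Verhoeff check digit: catches all single-digit and adjacent-transposition errors.
d = [[0,1,2,3,4,5,6,7,8,9],[1,2,3,4,0,6,7,8,9,5],[2,3,4,0,1,7,8,9,5,6],
     [3,4,0,1,2,8,9,5,6,7],[4,0,1,2,3,9,5,6,7,8],[5,9,8,7,6,0,4,3,2,1],
     [6,5,9,8,7,1,0,4,3,2],[7,6,5,9,8,2,1,0,4,3],[8,7,6,5,9,3,2,1,0,4],
     [9,8,7,6,5,4,3,2,1,0]]                        # D5 group multiplication table
p = [[0,1,2,3,4,5,6,7,8,9],[1,5,7,6,2,8,3,0,9,4],[5,8,0,3,7,9,6,1,4,2],
     [8,9,1,6,0,4,3,5,2,7],[9,4,5,3,1,2,6,8,7,0],[4,2,8,6,5,7,3,9,0,1],
     [2,7,9,3,8,0,6,4,1,5],[7,0,4,6,9,1,3,2,5,8]]  # position-dependent permutation
inv = [0,4,3,2,1,5,6,7,8,9]                         # group inverses

def verhoeff_digit(number: str) -> str:
    c = 0
    for i, ch in enumerate(reversed(number)):
        c = d[c][p[(i + 1) % 8][int(ch)]]
    return str(inv[c])

def verhoeff_valid(number: str) -> bool:
    c = 0
    for i, ch in enumerate(reversed(number)):
        c = d[c][p[i % 8][int(ch)]]
    return c == 0

print(verhoeff_digit("236"))     # -> '3'
print(verhoeff_valid("2363"))    # -> True
```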
Abstract:
With the increasing complexity of microelectronic devices, the effect of space radiation on the correctness of computer programs is becoming more and more evident. In general, these effects are not permanent but transient faults. Whether in space-borne information processing systems, embedded real-time control systems, computer clusters or high-performance supercomputers, erroneous outputs may lead to catastrophic consequences. Traditional high-reliability systems meet their reliability requirements with radiation-hardened components and redundant hardware, but these are expensive and their performance lags behind today's commercial off-the-shelf (COTS) components. To address the fault-tolerance shortcomings of COTS, software fault-tolerance techniques can effectively improve the reliability of computer systems without changing the hardware. At the software level, transient faults mainly manifest as control-flow errors and data-flow errors; this thesis focuses on tolerating control-flow errors. Software-implemented control-flow fault tolerance inserts redundant fault-tolerance logic at compile time and detects and handles control-flow errors at run time. The main problem to be solved is how to minimize the overhead introduced by the redundant logic while preserving fault-tolerance capability. The thesis studies control-flow fault tolerance from several angles: the basic concepts of control-flow errors, the choice of fault-tolerance units, the construction of signature information, and the placement of signature points and check points. The main contributions are: 1. Common control-flow fault-tolerance methods are analysed and compared, and their advantages and shortcomings are described. 2. Control-flow errors are classified, and on this basis a control-flow fault-tolerance method based on related predecessor basic blocks (CFCLRB) is proposed. 3. A signature-flow model is proposed, together with a control-flow fault-tolerance method based on it (CFCSF). The method detects control-flow errors between basic blocks with low time and space overhead and high error coverage. It can flexibly insert and remove signature and check points according to the required fault-tolerance granularity, giving it strong extensibility, and it can handle control-flow situations that are hard to determine at compile time, such as dynamic function pointers. 4. The above methods are implemented at the assembly-instruction level, together with the widely used methods Control Flow Checking by Software Signatures (CFCSS) and Control-flow Error Detection through Assertions (CEDA) for comparison; fault tolerance is added to the original programs by inserting redundant instruction logic. 5. Control-flow error injection is implemented with the PIN tool, and CFCLRB, CFCSF, CFCSS and CEDA are compared in the same experimental environment. Experiments show that CFCLRB has a time overhead of 26.9% and a space overhead of 27.6%, raising error coverage from 66.50% (for the original, non-fault-tolerant program) to 97.32%. CFCSF has a time overhead of 14.7% and a space overhead of 22.1%, raising error coverage from 66.50% to 96.79%. Compared with CFCSS, its time overhead drops from 37.2% to 14.7%, its space overhead drops from 31.2% to 22.1%, and its error coverage rises from 95.16% to 96.79%. Compared with CEDA, the time overhead drops from 26.9% to 14.7%, the space overhead drops from 27.1% to 22.1%, while the error coverage only falls from 97.39% to 96.79%. Finally, future research directions for control-flow fault tolerance are discussed.
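As a toy illustration of signature-based control-flow checking in the spirit of CFCSS (not the thesis' CFCLRB/CFCSF methods), the sketch below gives each basic block a static signature, updates a run-time signature register with a per-block XOR "signature difference" on entry, and flags a mismatch as a control-flow error. The block graph and signature values are invented.

```python
SIG = {"A": 0b0001, "B": 0b0010, "C": 0b0100, "D": 0b0111}   # static block signatures
PRED = {"B": "A", "C": "A", "D": "C"}                         # single legal predecessor

def enter_block(G, block):
    diff = SIG[PRED[block]] ^ SIG[block]     # compile-time constant per block
    G ^= diff                                # run-time signature update
    if G != SIG[block]:                      # wrong predecessor -> wrong signature
        raise RuntimeError(f"control-flow error entering {block}")
    return G

G = SIG["A"]                                 # execution starts in block A
G = enter_block(G, "C")                      # legal edge A -> C: passes
G = enter_block(G, "D")                      # legal edge C -> D: passes
try:
    enter_block(SIG["B"], "D")               # illegal jump B -> D: detected
except RuntimeError as e:
    print(e)
```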
Abstract:
We discuss the design principles of TCP within the context of heterogeneous wired/wireless networks and mobile networking. We identify three shortcomings in TCP's behavior: (i) the protocol's error detection mechanism, which does not distinguish different types of errors and thus does not suffice for heterogeneous wired/wireless environments; (ii) the error recovery, which is not responsive to the distinctive characteristics of wireless networks, such as transient or burst errors due to handoffs and fading channels; and (iii) the protocol strategy, which does not control the tradeoff between performance measures such as goodput and energy consumption, and often entails a wasteful effort of retransmission and energy expenditure. We discuss a solution framework based on selected research proposals and the associated evaluation criteria for the suggested modifications. We highlight an important angle that has not so far attracted the attention it requires: the need for new performance metrics appropriate for evaluating the impact of protocol strategies on battery-powered devices.
Abstract:
A study was conducted to evaluate the effectiveness of four assistive technology (AT) tools on literacy: (1) speech synthesis, (2) spellchecker, (3) homophone tool, and (4) dictionary. All four of these programs are featured in TextHelp's Read&Write Gold software package. A total of 93 secondary-level students with reading disabilities participated in the study. The participants completed a number of computer-based literacy tests after being assigned to a Read&Write group or a control group that used Microsoft Word. The results indicated improvements in the following areas for the Read&Write group: (1) reading comprehension, (2) homophone error detection, (3) spelling error detection, and (4) word meanings. The Microsoft Word group also improved in the areas of word meanings and error detection, though it performed worse on homophone error detection. The authors contend that these results indicate that speech synthesis, spell checkers, homophone tools, and dictionary programs have a positive effect on literacy among students with reading disabilities. This study was conducted by researchers at Queen's University Belfast.
Abstract:
Students' learning process can be negatively affected when their control of reading comprehension is not appropriate. This research focuses on analyzing how a group of high school students evaluate their reading comprehension of manipulated scientific texts. An analysis tool was designed to determine the students' degree of comprehension control when reading a short scientific text with an added contradiction. The results reveal that students in 1st and 3rd ESO do not properly self-evaluate their reading comprehension. A different behavior was observed in 1st Bachillerato, where appropriate evaluation and regulation seem to be more frequent. Moreover, no significant differences were found regarding the type of text, year or gender. Finally, as identified by previous research, the correlation between the students' comprehension control and their school marks is weak and inversely proportional to the students' age.