11 results for Memory errors
at Cochin University of Science
Abstract:
In this thesis, the concept of the reversed lack of memory property and its generalizations is studied. We generalize this property to operations other than addition; in particular, an associative binary operator "*" is considered. The univariate reversed lack of memory property is generalized using this binary operator, and a class of probability distributions, which includes the Type 3 extreme value, power function, reflected Weibull and negative Pareto distributions, is characterized (Asha and Rejeesh (2009)). We also define the almost reversed lack of memory property and consider distributions with a reversed periodic hazard rate under the binary operation. Further, we give a bivariate extension of the generalized reversed lack of memory property and characterize a class of bivariate distributions which includes the characterized extension (CE) model of Roy (2002a) apart from the bivariate reflected Weibull and power function distributions. We prove the equivalence of local proportionality of the reversed hazard rate and the generalized reversed lack of memory property. The study of uncertainty is a subject of interest common to reliability, survival analysis, actuarial science, economics, business and many other fields. However, in many realistic situations, uncertainty is not necessarily related to the future but can also refer to the past. Recently, Di Crescenzo and Longobardi (2009) introduced a new measure of information called dynamic cumulative entropy. Dynamic cumulative entropy is suitable for measuring information when uncertainty is related to the past; it is a dual concept of the cumulative residual entropy, which relates to the uncertainty of the future lifetime of a system. We redefine this measure on the whole real line and study its properties. We also discuss the implications of the generalized reversed lack of memory property for dynamic cumulative entropy and past entropy. In this study, we extend the idea of the reversed lack of memory property to the discrete setup. Here we investigate the discrete class of distributions characterized by the discrete reversed lack of memory property. The concept is extended to the bivariate case, and bivariate distributions characterized by this property are also presented. The implications of this property for the discrete reversed hazard rate, mean past life, and discrete past entropy are also investigated.
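For orientation, the relation below is a hedged sketch of the standard way such a generalized reversed lack of memory property is written, with F the distribution function and "*" the associative binary operation; it is an assumed formulation given for illustration, not a quotation from the thesis.

```latex
% Assumed (standard) form of the generalized reversed lack of memory
% property: F is the distribution function, * the binary operation.
\[
  F(x * t) \;=\; F(x)\,F(t) \qquad \text{for all admissible } x,\ t .
\]
% For example, the power function distribution F(x) = x^{\theta}, 0 < x < 1,
% satisfies this when * is ordinary multiplication:
\[
  F(xt) \;=\; (xt)^{\theta} \;=\; x^{\theta}\,t^{\theta} \;=\; F(x)\,F(t).
\]
```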
Abstract:
The results of an investigation into the limits of the random errors contained in the basic data of Physical Oceanography, and their propagation through the computational procedures, are presented in this thesis. It also suggests a method which increases the reliability of the derived results. The thesis is presented in eight chapters, including the introductory chapter. Chapter 2 discusses the general theory of errors that is relevant in the context of the propagation of errors in Physical Oceanographic computations. The error components contained in the independent oceanographic variables, namely temperature, salinity and depth, are delineated and quantified in Chapter 3. Chapter 4 discusses and derives the magnitude of the errors in the computation of the dependent oceanographic variables (in-situ density, σt, specific volume and specific volume anomaly) due to the propagation of the errors contained in the independent oceanographic variables. The errors propagated into the computed values of the derived quantities, namely dynamic depth and relative currents, have been estimated and are presented in Chapter 5. Chapter 6 reviews the existing methods for the identification of the level of no motion and suggests a method for the identification of a reliable zero reference level. Chapter 7 discusses the available methods for the extension of the zero reference level into shallow regions of the oceans and suggests a new, more reliable method. A procedure of graphical smoothing of dynamic topographies between the error limits, to provide more reliable results, is also suggested in this chapter. Chapter 8 deals with the computation of the geostrophic current from these smoothed values of dynamic heights, with reference to the selected zero reference level. The summary and conclusions are also presented in this chapter.
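For readers unfamiliar with how measurement errors combine, the block below states the standard first-order rule for propagating independent random errors through a derived quantity; it is given here only as background and is not taken from the thesis itself.

```latex
% First-order propagation of independent random errors through a derived
% quantity f(x_1, ..., x_n); \sigma_{x_i} are the input uncertainties.
\[
  \sigma_f^{2} \;\approx\; \sum_{i=1}^{n}
    \left( \frac{\partial f}{\partial x_i} \right)^{\!2} \sigma_{x_i}^{2}
\]
% e.g. for in-situ density rho(T, S, p) computed from temperature, salinity
% and pressure, the partial derivatives weight each measurement error.
```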
Abstract:
Measurement is the act, or the result, of a quantitative comparison between a given quantity and a quantity of the same kind chosen as a unit. It is generally agreed that all measurements contain errors. In a measuring system where a human being takes the measurement with a measuring instrument using a preset process, the measurement error could be due to the instrument, the process or the human being involved. The first part of the study is devoted to understanding human errors in measurement. For that, selected person-related and work-related factors that could affect measurement errors have been identified. Though these are well known, the exact extent of the error and the extent of the effect of different factors on human errors in measurement are less reported. Characterization of human errors in measurement is done by conducting an experimental study using different subjects, where the factors were changed one at a time and the measurements made by the subjects were recorded. From the pre-experiment survey research studies, it is observed that the respondents could not give correct answers to questions related to the correct values (extent) of human-related measurement errors. This confirmed the fears expressed regarding the lack of knowledge about the extent of human-related measurement errors among professionals associated with quality. In the post-experiment phase of the survey study, however, it is observed that the answers regarding the extent of human-related measurement errors improved significantly, since the answer choices were provided based on the experimental study. It is hoped that this work will help users of measurement in practice to better understand and manage the phenomenon of human-related errors in measurement.
Abstract:
Embedded systems, especially wireless sensor nodes, are highly prone to type-safety and memory-safety issues. Contiki, a prominent operating system in this domain, is even more affected by the problem since it makes extensive use of type casts and pointers. This work is an attempt to nullify the possibility of safety violations in Contiki. We use a powerful yet efficient tool called Deputy to achieve this, and we also try to automate the process.
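To illustrate the class of defects such a checker targets, the plain-C sketch below performs a bounded buffer copy of the kind Deputy can enforce; it is a generic, hypothetical example rather than Contiki or Deputy code, and the COUNT-style bound annotation mentioned in the comment is only indicative of how Deputy attaches lengths to pointers.

```c
#include <stdio.h>
#include <string.h>

/* Plain-C sketch of the pointer/length pattern a safety checker enforces.
   Under Deputy the destination would carry a bound annotation (roughly
   `unsigned char * COUNT(dst_len) dst`), letting the tool insert or prove
   the bounds check; here the check is written by hand. */
static int copy_payload(unsigned char *dst, size_t dst_len,
                        const unsigned char *src, size_t src_len)
{
    if (src_len > dst_len) {   /* reject instead of overflowing dst */
        return -1;
    }
    memcpy(dst, src, src_len);
    return 0;
}

int main(void)
{
    unsigned char packet[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    unsigned char buf[4];

    /* An oversized copy is refused rather than corrupting memory. */
    if (copy_payload(buf, sizeof buf, packet, sizeof packet) != 0) {
        printf("copy rejected: source larger than destination\n");
    }
    return 0;
}
```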
Abstract:
The present study is an attempt to highlight the problem of typographical errors in OPACs. Errors made while typing catalogue entries, as well as while importing bibliographic records from other libraries, go unnoticed by librarians, resulting in the non-retrieval of available records and affecting the quality of OPACs. This paper follows previous research on the topic, mainly by Jeffrey Beall and Terry Ballard. The word "management" was chosen from the list of likely-to-be-misspelled words identified by previous research. It was found that the word is wrongly entered in several forms in local, national and international OPACs, supporting Ballard's observation that typos occur almost everywhere. Though many corrective measures have been proposed and are in use, the study asserts that human effort is needed to get rid of the problem. The paper is also an invitation to library professionals and system designers to construct a strategy to solve the issue.
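As one concrete example of an automated corrective measure (not a technique claimed by the paper), the C sketch below flags catalogue terms that lie within a small edit distance of the intended word "management"; the variant spellings are invented for illustration.

```c
#include <stdio.h>
#include <string.h>

/* Levenshtein edit distance between two short words (assumed < 64 chars). */
static int edit_distance(const char *a, const char *b)
{
    size_t la = strlen(a), lb = strlen(b);
    int d[64][64];

    if (la >= 64 || lb >= 64)
        return -1;
    for (size_t i = 0; i <= la; i++) d[i][0] = (int)i;
    for (size_t j = 0; j <= lb; j++) d[0][j] = (int)j;

    for (size_t i = 1; i <= la; i++) {
        for (size_t j = 1; j <= lb; j++) {
            int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
            int del  = d[i - 1][j] + 1;
            int ins  = d[i][j - 1] + 1;
            int sub  = d[i - 1][j - 1] + cost;
            int best = del < ins ? del : ins;
            d[i][j]  = best < sub ? best : sub;
        }
    }
    return d[la][lb];
}

int main(void)
{
    /* Hypothetical variant forms of "management" found in catalogue records. */
    const char *forms[] = { "managment", "mangement", "management", "menagement" };

    for (size_t i = 0; i < sizeof forms / sizeof forms[0]; i++) {
        int dist = edit_distance("management", forms[i]);
        printf("%-12s distance %d%s\n", forms[i], dist,
               (dist > 0 && dist <= 2) ? "  <- likely typo" : "");
    }
    return 0;
}
```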
Abstract:
This paper introduces a simple and efficient method, and its implementation in an FPGA, for reducing the odometric localization errors caused by over-count readings of an optical-encoder-based odometric system in a mobile robot due to wheel slippage and terrain irregularities. The detection and correction are based on redundant encoder measurements. The suggested method relies on the fact that wheel slippage or terrain irregularities cause more count readings from the encoder than correspond to the actual distance travelled by the vehicle. The standard quadrature technique is used to obtain four counts in each encoder period. In this work, a three-wheeled mobile robot with one driving-steering wheel and two fixed, in-axis rear wheels, fitted with incremental optical encoders, is considered. The CORDIC algorithm has been used for the computation of the sine and cosine terms in the update equations. The results presented demonstrate the effectiveness of the technique.
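Since the sine and cosine terms are said to come from CORDIC, the minimal fixed-point sketch below shows a CORDIC rotation in C of the kind commonly ported to an FPGA; the 15-iteration count, Q16 scaling and function names are illustrative assumptions, not the paper's implementation.

```c
#include <stdio.h>
#include <stdint.h>

#define CORDIC_ITER 15
#define Q16(x) ((int32_t)((x) * 65536.0))   /* Q16 fixed point */

/* Arctangent table: atan(2^-i) in Q16 radians. */
static const int32_t atan_tab[CORDIC_ITER] = {
    Q16(0.78539816), Q16(0.46364761), Q16(0.24497866), Q16(0.12435499),
    Q16(0.06241881), Q16(0.03123983), Q16(0.01562373), Q16(0.00781234),
    Q16(0.00390623), Q16(0.00195312), Q16(0.00097656), Q16(0.00048828),
    Q16(0.00024414), Q16(0.00012207), Q16(0.00006104)
};

/* 1/K, the reciprocal of the CORDIC gain, in Q16. */
#define CORDIC_INV_GAIN Q16(0.60725294)

/* sin/cos of 'angle' (Q16 radians, |angle| <= ~1.74 rad) via CORDIC
   rotation mode; outputs are Q16. */
static void cordic_sincos(int32_t angle, int32_t *sin_out, int32_t *cos_out)
{
    int32_t x = CORDIC_INV_GAIN;   /* pre-scaled so the final gain is 1 */
    int32_t y = 0;
    int32_t z = angle;

    for (int i = 0; i < CORDIC_ITER; i++) {
        int32_t xs = x >> i;       /* x * 2^-i (arithmetic shift assumed) */
        int32_t ys = y >> i;
        if (z >= 0) {              /* rotate towards the remaining angle */
            x -= ys;  y += xs;  z -= atan_tab[i];
        } else {
            x += ys;  y -= xs;  z += atan_tab[i];
        }
    }
    *cos_out = x;
    *sin_out = y;
}

int main(void)
{
    int32_t s, c;
    cordic_sincos(Q16(0.5236), &s, &c);            /* ~30 degrees */
    printf("sin = %f, cos = %f\n", s / 65536.0, c / 65536.0);
    return 0;
}
```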
Abstract:
Bank switching in embedded processors with a partitioned memory architecture results in code-size as well as run-time overhead. This work presents an algorithm, and its application, to assist the compiler in eliminating the redundant bank-switching code introduced and in deciding the optimum data allocation to banked memory. A relation matrix, formed for the memory-bank state transition corresponding to each bank-selection instruction, is used for the detection of redundant code. Data allocation to memory is done by considering all possible permutations of memory banks and combinations of data. The compiler output corresponding to each data-mapping scheme is subjected to a static machine-code analysis which identifies the one with the minimum number of bank-switching instructions. Even though the method is compiler independent, the algorithm utilizes certain architectural features of the target processor. A prototype based on PIC 16F87X microcontrollers is described. The method scales well to a larger number of memory banks and to other architectures, so that high-performance compilers can integrate this technique for efficient code generation. The technique is illustrated with an example.
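To make the redundancy-elimination idea concrete, the sketch below tracks the currently selected bank across a hypothetical straight-line access sequence and emits a bank-select only on a state transition; it is a simplified stand-in for the paper's relation-matrix formulation, and the variable names and access sequence are invented for illustration.

```c
#include <stdio.h>

/* Simplified illustration (not the paper's relation-matrix algorithm):
   emit a bank-select instruction only when an access targets a memory
   bank other than the one currently selected. */

#define UNKNOWN_BANK (-1)

typedef struct {
    const char *var;   /* variable accessed */
    int bank;          /* memory bank holding the variable */
} access_t;

int main(void)
{
    /* Hypothetical access sequence after data allocation to two banks. */
    access_t seq[] = { {"a", 0}, {"b", 0}, {"c", 1}, {"d", 1}, {"e", 0} };
    int current = UNKNOWN_BANK;
    int emitted = 0;

    for (size_t i = 0; i < sizeof seq / sizeof seq[0]; i++) {
        if (seq[i].bank != current) {     /* bank state transition needed */
            printf("BANKSEL %d      ; switch before access to %s\n",
                   seq[i].bank, seq[i].var);
            current = seq[i].bank;
            emitted++;
        } else {
            printf("               ; redundant bank switch elided for %s\n",
                   seq[i].var);
        }
        printf("ACCESS  %s\n", seq[i].var);
    }
    printf("; bank-select instructions emitted: %d\n", emitted);
    return 0;
}
```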