9 results for Pure error

at Cochin University of Science


Relevance:

20.00%

Publisher:

Abstract:

DC and AC electrical conductivity measurements in single crystals of diammonium hydrogen phosphate along the c axis show anomalous variations at 174, 246 and 416 K. The low-frequency dielectric constant also exhibits peaks exactly at these temperatures, with a thermal hysteresis of 13 degrees C for the peak at 416 K. These specific features of the electrical properties are in agreement with earlier NMR second-moment data and can be identified with three distinct phase transitions that occur in the crystal. The electrical conductivity values have been found to increase linearly with impurity concentration in specimens doped with specific amounts of SO₄²⁻ ions. The mechanisms of the phase transitions and of the electrical conduction process are discussed in detail.

Relevance:

20.00%

Publisher:

Abstract:

Results of axiswise measurements of the electrical conductivity (dc and ac) and dielectric constant of NH4H2PO4 confirm the occurrence of the recently suggested high‐temperature phase transition in this crystal (at 133 °C). The corresponding transition in ND4D2PO4 observed here for the first time takes place at 141.5 °C. The mechanism involved in these transitions and those associated with the electrical conduction and dielectric anomalies are explained on the basis of the motional effects of the ammonium ions in these crystals. Conductivity values for deuterated crystals give direct evidence for the predominance of protonic conduction throughout the entire range of temperatures studied (30–260 °C).

Relevance:

20.00%

Publisher:

Abstract:

Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. Such analysis can significantly improve software quality and is still a challenging field.

This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, using static analysis of machine code to make early detection of software bugs that are otherwise hard to detect more effective. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, thus improving both the debugging process and the quality of the code.

Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs. An incorrect sequence of machine-code patterns is identified using slicing techniques on the control-flow graph generated from the machine code.

An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum data allocation to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified.

This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation and contributes to improved model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
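The redundant bank-switching elimination described above can be sketched in a few lines. This is only a toy illustration of the core idea, not the dissertation's tool: the instruction mnemonics (`BANKSEL n`, `MOVWF reg`) are hypothetical placeholders, and the sketch tracks the active-bank state only along a straight-line code path rather than over the full control-flow graph used in the actual work.

```python
def eliminate_redundant_banksel(instructions):
    """Drop bank-select instructions that re-select the already active bank."""
    active_bank = None                  # bank state unknown at block entry
    optimized = []
    for instr in instructions:
        if instr.startswith("BANKSEL"):
            bank = instr.split()[1]
            if bank == active_bank:     # state transition is a self-loop:
                continue                # the instruction is redundant
            active_bank = bank
        optimized.append(instr)
    return optimized

program = [
    "BANKSEL 1", "MOVWF TRISB",         # bank 1 selected, then used
    "BANKSEL 1", "MOVWF TRISC",         # redundant re-selection of bank 1
    "BANKSEL 0", "MOVWF PORTB",         # genuine switch back to bank 0
]
print(eliminate_redundant_banksel(program))
```

The state kept per instruction is just the currently selected bank, which is the single-path analogue of the active-memory-bank state transition diagram mentioned in the abstract.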

Relevance:

20.00%

Publisher:

Abstract:

This thesis is entitled "INVESTIGATIONS ON THE STRUCTURAL, OPTICAL AND MAGNETIC PROPERTIES OF NANOSTRUCTURED CERIUM OXIDE IN PURE AND DOPED FORMS AND ITS POLYMER NANOCOMPOSITES". Synthesis and processing of nanomaterials and nanostructures are essential aspects of nanotechnology. Studies on new physical properties and applications of nanomaterials and nanostructures are possible only when nanostructured materials are made available with the desired size, morphology, crystal structure and chemical composition. Recently, several methods have been developed to prepare pure and doped CeO2 powder, including wet chemical synthesis, thermal hydrolysis, the flux method, hydrothermal synthesis, the gas condensation method, the microwave technique, etc. All of these involve special reaction conditions, such as high temperature, high pressure, capping agents, or expensive or toxic solvents. Another highlight of the present work is room-temperature ferromagnetism in cerium oxide thin films deposited by the spray pyrolysis technique. The observation of self-trapped exciton (STE) mediated PL in ceria nanocrystals is another important outcome of the present study. An STE-mediated mechanism has been proposed for CeO2 nanocrystals based on the dependence of the PL intensity on the annealing temperature. It would be interesting to extend these investigations to the doped forms of cerium oxide and to cerium oxide thin films to gain deeper insight into the STE mechanism. Due to time constraints, detailed investigations could not be carried out on the preparation and properties of free-standing films of polymer/ceria nanocomposites. It has been observed that good-quality free-standing films of PVDF/ceria, PS/ceria and PMMA/ceria can be obtained using the solution casting technique. These polymer nanocomposite films show a high dielectric constant of around 20 and offer prospects for application as gate electrodes in metal-oxide-semiconductor devices.

Relevance:

20.00%

Publisher:

Abstract:

In recent years, reversible logic has emerged as one of the most important approaches to power optimization, with applications in low-power CMOS, quantum computing and nanotechnology. Low-power circuits implemented using reversible logic that provide single error correction – double error detection (SEC-DED) are proposed in this paper. The design uses a new 4 x 4 reversible gate called 'HCG' for implementing Hamming error coding and detection circuits. A parity-preserving HCG (PPHCG) that preserves the input parity at the output bits is used to achieve fault tolerance for the Hamming error coding and detection circuits.
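The SEC-DED behaviour described above can be illustrated in software. The sketch below uses a conventional extended Hamming(8,4) code in plain Python; it is only a stand-in for the paper's reversible-logic design, which realises the coding with HCG/PPHCG gates rather than ordinary boolean operations.

```python
def encode(d):
    """Encode data bits [d1, d2, d3, d4] as [p1, p2, d1, p3, d2, d3, d4, p0]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4              # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4              # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4              # covers positions 4, 5, 6, 7
    word = [p1, p2, d1, p3, d2, d3, d4]
    p0 = 0
    for b in word:                 # overall parity bit enables double detection
        p0 ^= b
    return word + [p0]

def decode(c):
    """Return (data, status) with status 'ok', 'corrected', or 'double'."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3     # 1-based position of a single error
    overall = 0
    for b in c:
        overall ^= b                    # 0 if total parity is still even
    if syndrome and overall:            # one error in positions 1..7: fix it
        c[syndrome - 1] ^= 1
        return [c[2], c[4], c[5], c[6]], "corrected"
    if syndrome and not overall:        # two errors: detected, not correctable
        return None, "double"
    if overall:                         # single error in the overall parity bit
        return [c[2], c[4], c[5], c[6]], "corrected"
    return [c[2], c[4], c[5], c[6]], "ok"

cw = encode([1, 0, 1, 1])
cw[4] ^= 1                              # inject a single bit error
print(decode(cw))                       # -> ([1, 0, 1, 1], 'corrected')
```

The syndrome locates a single flipped bit, while the extra overall-parity bit distinguishes the single-error case (odd total parity) from the uncorrectable double-error case (even total parity with a nonzero syndrome).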

Relevance:

20.00%

Publisher:

Abstract:

While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds are placing a large burden on the energy efficiency of high-speed links and make the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides deeper insight into joint error behaviour in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
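To make the notion of correlated errors concrete, the toy Monte Carlo sketch below simulates a binary link with a short residual-ISI tail plus Gaussian noise and compares the measured probability of two consecutive errors against the independence baseline. All tap and noise values are assumed purely for illustration, and the error rate is kept artificially high so that direct simulation works; the paper's analytical framework targets the far lower error rates where such simulation fails.

```python
import random

random.seed(7)
N = 200_000
taps = [1.0, 0.5, 0.3]       # main cursor and residual ISI taps (assumed)
sigma = 0.4                  # Gaussian noise standard deviation (assumed)

bits = [random.choice((-1, 1)) for _ in range(N)]
errors = []
for k in range(2, N):
    # Received sample: current symbol plus residual ISI plus noise.
    y = (taps[0] * bits[k] + taps[1] * bits[k - 1]
         + taps[2] * bits[k - 2] + random.gauss(0.0, sigma))
    errors.append(int((y > 0) != (bits[k] > 0)))   # slicer decision error?

p = sum(errors) / len(errors)                      # marginal error probability
joint = sum(a & b for a, b in zip(errors, errors[1:])) / (len(errors) - 1)
print(f"P(error)              = {p:.4f}")
print(f"P(two in a row)       = {joint:.5f}")
print(f"independence baseline = {p * p:.5f}")
```

Because consecutive decisions share ISI contributions from the same past symbols, the joint error probability generally deviates from the product of the marginals, which is exactly the kind of correlation the error-region and correlation-distance concepts capture.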

Relevance:

20.00%

Publisher:

Abstract:

Coded OFDM is a transmission technique that is used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition. However, some subcarriers can suffer deep fading with multipath, and the power allocated to the faded subcarriers is likely to be wasted. In this paper, we compute the FER and BER bounds of a coded OFDM system, given as convex functions, for a given channel coder, interleaver and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimum power allocation for a given coded OFDM system and channel response, minimizing FER or BER under a constant transmission power constraint, is obtained.
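The convex power-optimization step can be sketched with a simplified surrogate objective. Assuming, purely for illustration, a per-subcarrier error bound of the form exp(-g_i * p_i) with channel gains g_i, minimising the sum subject to a total-power constraint gives waterfilling-style KKT conditions that can be solved by bisection on the Lagrange multiplier. The paper instead optimises the exact FER/BER bounds for a given coder, interleaver and channel response; only the convex structure is the same.

```python
import math

def allocate_power(gains, p_total, iters=100):
    """Minimise sum_i exp(-g_i * p_i) s.t. sum(p_i) = p_total, p_i >= 0.

    KKT conditions give p_i = max(0, ln(g_i / lam) / g_i); bisect on lam.
    """
    def total(lam):
        return sum(max(0.0, math.log(g / lam) / g) for g in gains)

    lo, hi = 1e-12, max(gains)       # total(hi) = 0, total(lo) is very large
    for _ in range(iters):
        mid = math.sqrt(lo * hi)     # geometric bisection on the multiplier
        if total(mid) > p_total:
            lo = mid                 # multiplier too small: spending too much
        else:
            hi = mid
    return [max(0.0, math.log(g / hi) / g) for g in gains]

gains = [2.0, 1.0, 0.25, 0.05]       # per-subcarrier SNR gains (assumed)
powers = allocate_power(gains, p_total=4.0)
print([round(q, 3) for q in powers])
print(round(sum(powers), 3))         # budget is met: sums to 4.0
```

Note how the deeply faded subcarrier (gain 0.05) receives zero power, matching the observation above that power allocated to faded subcarriers is wasted.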

Relevance:

20.00%

Publisher:

Abstract:

Optimum conditions and experimental details for the formation of γ-Fe2O3 from goethite have been worked out. In another method, a cheap complexing medium of starch was employed for precipitating acicular ferrous oxalate, which on decomposition in nitrogen and subsequent oxidation yielded acicular γ-Fe2O3. On the basis of thermal decomposition in dry and moist nitrogen, DTA, XRD, GC and thermodynamic arguments, the mechanism of decomposition was elucidated. New materials obtained by doping γ-Fe2O3 with 1–16 atomic percent magnesium, cobalt, nickel and copper were synthesised and characterized.

Relevance:

20.00%

Publisher:

Abstract:

The problem of using information available from one variable X to make inference about another variable Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(x) is the mean response at the predictor variable value X = x, and ɛ = Y - µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount absorbed by the plant, X, is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants Z can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model.

In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural and medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others.

In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is the observable d-dimensional r.v.
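A small simulation illustrates why the Berkson structure matters. For a linear µ(x) = a + b*x (parameter values assumed for illustration), E(Y | Z) = a + b*Z because E(η) = E(ɛ) = 0, so ordinary least squares of Y on the observed Z still recovers (a, b) even though X is never observed; in the classical model W = X + error, by contrast, naive regression attenuates the slope.

```python
import random

random.seed(0)
a, b = 2.0, 0.5                          # true regression parameters (assumed)
n = 50_000

zs, ys = [], []
for _ in range(n):
    z = random.uniform(0.0, 10.0)        # nominal (observed) dose Z
    x = z + random.gauss(0.0, 1.0)       # actual dose X = Z + eta, unobserved
    y = a + b * x + random.gauss(0.0, 0.5)   # response Y = mu(X) + eps
    zs.append(z)
    ys.append(y)

# Ordinary least squares of Y on the observed Z.
mz = sum(zs) / n
my = sum(ys) / n
b_hat = (sum((z - mz) * (y - my) for z, y in zip(zs, ys))
         / sum((z - mz) ** 2 for z in zs))
a_hat = my - b_hat * mz
print(a_hat, b_hat)                      # close to the true (2.0, 0.5)
```

The fitted slope and intercept are unbiased here only because the measurement error sits on X given Z; the talk's subject, fitting general parametric models µ, requires more care than this linear special case.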