876 results for Random error


Relevance: 20.00%

Publisher:

Abstract:

Measurement is the act, or the result, of a quantitative comparison between a given quantity and a quantity of the same kind chosen as a unit. It is generally agreed that all measurements contain errors. In a measuring system in which a measuring instrument and a human being take the measurement using a preset process, the measurement error can be due to the instrument, the process or the human being involved. The first part of the study is devoted to understanding human errors in measurement. For that, selected person-related and work-related factors that could affect measurement errors have been identified. Though these factors are well known, the exact extent of the error and the extent to which different factors affect human errors in measurement are less often reported. Human errors in measurement are characterized through an experimental study using different subjects, in which the factors were changed one at a time and the measurements made by the subjects were recorded. The pre-experiment survey showed that respondents could not correctly answer questions about the extent of human-related measurement errors, confirming the fears expressed regarding the lack of knowledge about this extent among professionals associated with quality. In the post-experiment phase of the survey, however, the answers regarding the extent of human-related measurement errors improved significantly, since the answer choices were based on the experimental study. It is hoped that this work will help practitioners to better understand and manage the phenomenon of human-related errors in measurement.
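
As a rough, hypothetical illustration of the error decomposition mentioned above (it is not part of the reported study), the following Python sketch simulates a measurement whose total error is the sum of instrument, process and human contributions; all numerical values are assumed.

import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0            # hypothetical reference length, mm
n = 10_000                   # number of simulated measurements

# assumed standard deviations of the three error sources (illustrative only)
sigma_instrument, sigma_process, sigma_human = 0.02, 0.01, 0.05

measurements = (true_value
                + rng.normal(0.0, sigma_instrument, n)   # instrument error
                + rng.normal(0.0, sigma_process, n)      # process error
                + rng.normal(0.0, sigma_human, n))       # human error

total_sd = measurements.std(ddof=1)
expected_sd = np.sqrt(sigma_instrument**2 + sigma_process**2 + sigma_human**2)
print(f"observed sd = {total_sd:.4f}, expected sd = {expected_sd:.4f}")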

Relevance: 20.00%

Publisher:

Abstract:

In recent years, reversible logic has emerged as one of the most important approaches to power optimization, with applications in low-power CMOS, quantum computing and nanotechnology. This paper proposes low-power circuits, implemented using reversible logic, that provide single error correction – double error detection (SEC-DED). The design uses a new 4 x 4 reversible gate called ‘HCG’ to implement Hamming error coding and detection circuits. A parity-preserving HCG (PPHCG), which preserves the input parity at the output bits, is used to achieve fault tolerance for the Hamming error coding and detection circuits.
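
For readers unfamiliar with SEC-DED, the following Python sketch shows a conventional (non-reversible) Hamming(7,4) code extended with an overall parity bit; it illustrates only the coding scheme itself, not the reversible HCG/PPHCG gate realization proposed in the paper.

def encode(d):                       # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]          # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]          # covers positions 4, 5, 6, 7
    code = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7
    p0 = sum(code) % 2                            # overall parity bit (SEC-DED)
    return code + [p0]

def decode(r):                       # r = 8 received bits
    c, p0 = r[:7], r[7]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    parity_ok = (sum(c) % 2) == p0
    if syndrome == 0 and parity_ok:
        status = "no error"
    elif syndrome != 0 and not parity_ok:
        c[syndrome - 1] ^= 1          # single error: flip the indicated bit
        status = "single error corrected"
    elif syndrome == 0 and not parity_ok:
        status = "error in overall parity bit"
    else:
        status = "double error detected"
    return [c[2], c[4], c[5], c[6]], status

word = encode([1, 0, 1, 1])
word[4] ^= 1                          # inject a single bit error
print(decode(word))                   # -> ([1, 0, 1, 1], 'single error corrected')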

Relevance: 20.00%

Publisher:

Abstract:

While channel coding is a standard method of improving a system’s energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds place a large burden on the energy efficiency of high-speed links and make the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides a deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
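
As a toy illustration of how residual ISI makes link errors correlated (this is not the paper's analytical framework), a short Monte Carlo sketch can compare the joint error probability at a given lag with the product of the marginal error probabilities; the pulse taps and noise level below are assumed.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000
bits = rng.integers(0, 2, n)
symbols = 2 * bits - 1                       # binary PAM symbols

# assumed residual-ISI pulse response after equalization (illustrative taps)
isi = np.array([1.0, 0.25, -0.15])
rx = np.convolve(symbols, isi)[:n] + rng.normal(0, 0.45, n)   # assumed noise sd

errors = (rx > 0).astype(int) != bits        # error indicator sequence
p = errors.mean()
for lag in (1, 2, 5, 10):
    joint = np.mean(errors[:-lag] & errors[lag:])
    print(f"lag {lag:2d}: P(joint)/P^2 = {joint / p**2:.2f}")   # >1 means correlated errors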

Relevance: 20.00%

Publisher:

Abstract:

Coded OFDM is a transmission technique used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition. However, some subcarriers can suffer deep fading due to multipath, and the power allocated to those faded subcarriers is likely to be wasted. In this paper, we compute FER and BER bounds of a coded OFDM system, expressed as convex functions, for a given channel coder, interleaver and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimum power allocation that minimizes the FER or BER of a given coded OFDM system and channel response under a constant transmission power constraint is obtained.
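
The abstract does not give the exact bound, so the sketch below substitutes a generic exponential (Chernoff-type) surrogate: it minimizes sum_k exp(-g_k * p_k) over the subcarrier powers p_k under a total-power constraint, which is a convex problem solvable by bisection on the Lagrange multiplier; the effective gains g_k are assumed.

import numpy as np

def allocate_power(gains, total_power):
    """Minimize sum_k exp(-g_k p_k) s.t. sum_k p_k = P, p_k >= 0 (KKT + bisection)."""
    def powers(lam):
        return np.maximum(0.0, np.log(gains / lam) / gains)
    lo, hi = 1e-12, gains.max()              # bracket for the Lagrange multiplier
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        if powers(lam).sum() > total_power:
            lo = lam                          # too much power used -> raise lambda
        else:
            hi = lam
    return powers(lam)

gains = np.array([2.0, 1.0, 0.1, 0.5])       # assumed effective subcarrier SNR gains
p = allocate_power(gains, total_power=4.0)
print(np.round(p, 3), round(p.sum(), 3))     # weak subcarriers may receive zero power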

Relevance: 20.00%

Publisher:

Abstract:

In this article, we study reliability measures such as the geometric vitality function and conditional Shannon’s measures of uncertainty, proposed by Ebrahimi (1996) and Sankaran and Gupta (1999), respectively, for doubly (interval) truncated random variables. In survival analysis and reliability engineering, these measures play a significant role in studying the various characteristics of a system/component when it fails between two time points. The interrelationships among these uncertainty measures are derived for various distributions, and characterization theorems arising out of them are proved.
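
For orientation, indicative forms of the two measures for a random variable X with density f and distribution function F, truncated to an interval (t_1, t_2), are sketched below in LaTeX; the notation is assumed here and may differ from the article's.

\[
H(X; t_1, t_2) = -\int_{t_1}^{t_2} \frac{f(x)}{F(t_2)-F(t_1)}\,\log\frac{f(x)}{F(t_2)-F(t_1)}\,dx,
\]
\[
\log G(t_1, t_2) = E\!\left[\log X \mid t_1 < X < t_2\right] = \frac{1}{F(t_2)-F(t_1)}\int_{t_1}^{t_2} (\log x)\, f(x)\,dx,
\]

where the first expression is the Shannon uncertainty of the interval-truncated variable and the second is the logarithm of the geometric vitality function on the interval.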

Relevance: 20.00%

Publisher:

Abstract:

In this paper, we study the relationship between the failure rate and the mean residual life of doubly truncated random variables. Accordingly, we develop characterizations for the exponential, Pareto II and beta distributions. Further, we generalize the identities for the Pearson and the exponential families of distributions given in Nair and Sankaran (1991) and Consul (1995), respectively. Applications of these measures in the context of length-biased models are also explored.
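
For orientation, indicative definitions for a random variable X with density f and distribution function F observed only on the interval (t_1, t_2) are sketched below in LaTeX; the notation is assumed and may differ from the paper's.

\[
h_1(t_1,t_2) = \frac{f(t_1)}{F(t_2)-F(t_1)}, \qquad
h_2(t_1,t_2) = \frac{f(t_2)}{F(t_2)-F(t_1)},
\]
\[
m(t_1,t_2) = E\!\left[X - t_1 \mid t_1 \le X \le t_2\right],
\]

where h_1 and h_2 are the failure rates at the two truncation points and m is the doubly truncated mean residual life.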

Relevance: 20.00%

Publisher:

Abstract:

Nanocrystalline Fe–Ni thin films were prepared by partial crystallization of vapour-deposited amorphous precursors. The microstructure was controlled by annealing the films at different temperatures. X-ray diffraction, transmission electron microscopy and energy dispersive x-ray spectroscopy investigations showed that the nanocrystalline phase was that of Fe–Ni. Grain growth was observed with an increase in the annealing temperature. X-ray photoelectron spectroscopy observations showed the presence of a native oxide layer on the surface of the films. Scanning tunnelling microscopy investigations support the biphasic nature of the nanocrystalline microstructure, which consists of a crystalline phase along with an amorphous phase. Magnetic studies using a vibrating sample magnetometer show that the coercivity has a strong dependence on grain size. This is attributed to the random magnetic anisotropy characteristic of the system. The observed dependence of coercivity on grain size is explained using a modified random anisotropy model.
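
For reference, the classical random anisotropy (Herzer) scaling, which the modified model builds on, predicts for grain sizes D below the exchange length

\[
\langle K \rangle \simeq \frac{K_1^4 D^6}{A^3}, \qquad
H_c \simeq p_c\,\frac{K_1^4 D^6}{J_s A^3},
\]

where K_1 is the local magnetocrystalline anisotropy, A the exchange stiffness, J_s the saturation polarization and p_c a dimensionless prefactor; the article's modified version of this model is not reproduced here.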

Relevance: 20.00%

Publisher:

Abstract:

In a number of situations, the observations one encounters are directions. The first inferential question to answer when dealing with such data is, “Are they isotropic or uniformly distributed?” The answer to this question goes back in history, which we retrace briefly, and we provide exact and approximate solutions to this so-called “Pearson’s Random Walk” problem.
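
One standard answer to the isotropy question is the Rayleigh test, whose null distribution is tied to Pearson's random walk; a minimal Python sketch (with a first-order p-value approximation) is given below for illustration only.

import numpy as np

def rayleigh_test(theta):
    """Rayleigh test of uniformity for angles theta (radians).

    Under isotropy the resultant length of the unit vectors is small; this is the
    classical link to Pearson's random walk.  The p-value uses the first-order
    approximation exp(-Z), adequate for moderate-to-large samples.
    """
    n = len(theta)
    C, S = np.cos(theta).sum(), np.sin(theta).sum()
    Rbar = np.hypot(C, S) / n            # mean resultant length
    Z = n * Rbar**2
    return Rbar, np.exp(-Z)              # (statistic, approximate p-value)

rng = np.random.default_rng(2)
print(rayleigh_test(rng.uniform(0, 2 * np.pi, 500)))   # uniform directions -> large p
print(rayleigh_test(rng.vonmises(0.0, 2.0, 500)))      # clustered directions -> tiny p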

Relevance: 20.00%

Publisher:

Abstract:

In many situations probability models are more realistic than deterministic models, and several phenomena occurring in physics are studied as random phenomena changing with time and space. Stochastic processes originated from the needs of physicists. Let X(t) be a random variable, where t is a parameter assuming values from a set T. Then the collection of random variables {X(t), t ∈ T} is called a stochastic process. We denote the state of the process at time t by X(t), and the collection of all possible values X(t) can assume is called the state space.
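
A minimal illustration in Python (not from the text): a symmetric random walk as a discrete-time stochastic process, with parameter set T = {0, 1, ..., n} and the integers as its state space.

import numpy as np

rng = np.random.default_rng(3)
n = 20
steps = rng.choice([-1, 1], size=n)             # i.i.d. +/-1 increments
X = np.concatenate(([0], np.cumsum(steps)))     # X(t) for t = 0, ..., n
for t, x in enumerate(X):
    print(f"X({t}) = {x}")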

Relevance: 20.00%

Publisher:

Abstract:

The aim of this paper is the investigation of the error which results from the method of approximate approximations applied to functions defined on compact intervals only. This method, which is based on an approximate partition of unity, was introduced by V. Maz'ya in 1991 and has mainly been used for functions defined on the whole space up to now. For the treatment of differential equations and boundary integral equations, however, an efficient approximation procedure on compact intervals is needed. In the present paper we apply the method of approximate approximations to functions which are defined on compact intervals. In contrast to the whole-space case, a truncation error has to be controlled in addition. For the resulting total error, pointwise estimates and L1-estimates are given, where all the constants are determined explicitly.
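
A minimal Python sketch of the one-dimensional approximate quasi-interpolant with a Gaussian generating function, restricted to nodes inside a compact interval as discussed above; the step size h and shape parameter D below are assumed for illustration.

import numpy as np

def approx_approx(f, x, h=0.05, D=2.0, a=0.0, b=1.0):
    """Approximate approximation of f on [a, b] with a Gaussian generating function:
       (pi*D)**(-1/2) * sum_m f(m*h) * exp(-(x - m*h)**2 / (D*h**2)),
    summing only over nodes m*h inside [a, b], which introduces the truncation
    error discussed in the abstract.
    """
    nodes = np.arange(np.ceil(a / h), np.floor(b / h) + 1) * h
    x = np.atleast_1d(x)[:, None]
    return (np.pi * D) ** -0.5 * np.sum(
        f(nodes) * np.exp(-((x - nodes) ** 2) / (D * h ** 2)), axis=1)

x = np.linspace(0.2, 0.8, 7)               # stay away from the interval endpoints
err = approx_approx(np.sin, x) - np.sin(x)
print(np.max(np.abs(err)))                 # small, but cannot fall below the saturation level ~ exp(-pi^2 D)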

Relevance: 20.00%

Publisher:

Abstract:

The aim of this paper is the numerical treatment of a boundary value problem for the system of Stokes' equations. For this we extend the method of approximate approximations to boundary value problems. This method was introduced by V. Maz'ya in 1991 and has been used until now for the approximation of smooth functions defined on the whole space and for the approximation of volume potentials. In the present paper we develop an approximation procedure for the solution of the interior Dirichlet problem for the system of Stokes' equations in two dimensions. The procedure is based on potential theoretical considerations in connection with a boundary integral equations method and consists of three approximation steps as follows. In a first step the unknown source density in the potential representation of the solution is replaced by approximate approximations. In a second step the decay behavior of the generating functions is used to gain a suitable approximation for the potential kernel, and in a third step Nyström's method leads to a linear algebraic system for the approximate source density. For every step a convergence analysis is established and corresponding error estimates are given.
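
As an illustration of the third step only, the following Python sketch applies Nyström's method with the trapezoidal rule to a generic second-kind integral equation on a periodic parameter interval; the smooth kernel and right-hand side are placeholders, not the actual Stokes potentials.

import numpy as np

# Nystrom discretization of a generic second-kind integral equation
#   sigma(t) + int_0^{2pi} K(t, s) sigma(s) ds = g(t)
# using the trapezoidal rule on a periodic parameter interval.
n = 64
t = 2 * np.pi * np.arange(n) / n
w = 2 * np.pi / n                               # equal trapezoidal weights (periodic case)

K = lambda x, y: 0.1 * np.cos(x - y)            # assumed smooth kernel
g = np.sin(t)                                   # assumed right-hand side

A = np.eye(n) + w * K(t[:, None], t[None, :])   # matrix of (I + K_h)
sigma = np.linalg.solve(A, g)                   # approximate source density at the nodes
print(sigma[:4])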

Relevance: 20.00%

Publisher:

Abstract:

This work presents a joint optimization of the hybrid operating strategy and the behaviour of the combustion engine. Transferring the function modules used in the engine control unit into the simulation environment for longitudinal vehicle dynamics provides an efficient way to apply the original parameterization. At the same time, the behaviour of the combustion engine must be reproduced in such a way that both its stationary and its dynamic behaviour, including all relevant influencing factors, can be represented. The tool developed for transferring the control unit functions defined in Ascet into the Simulink simulation environment not only enables simulation of the relevant function modules, but also fulfils further important requirements. Increased flexibility with respect to changes of data and function versions, as well as the parameterizability of the function modules, are improvements worth mentioning here. Artificial neural networks are used to model the stationary system behaviour of the combustion engine. The optimal number of neurons is selected by examining the SSE for the training and the verification data. If necessary, the interpolation behaviour is improved by adding Gaussian process models in order to ensure the targeted model quality. The Gaussian process models are used to generate additional support points, which are incorporated into the modelling with a reduced priority. Linear transfer functions are used to model the dynamic system behaviour. When minimizing the deviation between the model output and the measurement results, the 2σ interval of the relative error distribution is considered in addition to the SSE. Implementing the control unit function modules and the created actuator-sensor-plant models in the simulation environment for longitudinal vehicle dynamics increases the simulation time and enlarges the parameter space. The method of quality vector optimization known from control engineering contributes decisively to a systematic consideration and optimization of the objectives. The result of the method is characterized by the optimum of the Pareto front of the individual design specifications. The increasing simulation times penalize minimum-search methods that require a large number of iterations. To avoid the use of a random variable, which contributes significantly to the increase in the number of iterations, while at the same time allowing a global search of the parameter space, the newly developed method DelaunaySearch is employed. In contrast to well-known algorithms such as particle swarm optimization or evolutionary algorithms, the newly developed method relies on a systematic analysis of the simulation results obtained so far when searching for the minimum of a cost function. The information gained from this analysis is used to identify regions with the best prospects of containing a minimum. Thus, the iterative procedure does not use a random variable when determining the next iteration step. The result of the computation is a well-chosen starting point for a local optimization.
Building on the simulation of longitudinal vehicle dynamics, the control unit functions and the actuator-sensor-plant models in a single simulation environment, the hybrid operating strategy is optimized jointly with the control of the combustion engine. Furthermore, the interface between the operating strategy and the engine control is extended by developing and implementing a new function. The tools presented enabled not only a test of the new functionality, but also an assessment of the potential improvements in fuel consumption and exhaust emissions. Overall, an efficient test environment for a joint optimization of the operating strategy and the combustion engine behaviour of a hybrid vehicle was realized.
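
As a small illustration of reading off a Pareto front from a batch of simulation results (this is neither the DelaunaySearch method nor the toolchain described above), a simple non-dominated filter in Python could look as follows; the objective values are hypothetical.

import numpy as np

def pareto_front(costs):
    """Return a boolean mask marking non-dominated rows (all objectives minimized)."""
    n = costs.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(costs, i, axis=0)
        # point i is dominated if some other point is <= in every objective
        # and strictly < in at least one objective
        dominated = np.any(np.all(others <= costs[i], axis=1) &
                           np.any(others < costs[i], axis=1))
        mask[i] = not dominated
    return mask

# hypothetical (fuel consumption, NOx emission) results from a batch of simulations
costs = np.array([[5.2, 0.31], [5.0, 0.35], [5.4, 0.28], [5.1, 0.33], [5.6, 0.30]])
print(pareto_front(costs))     # the last point is dominated by the third one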

Relevance: 20.00%

Publisher:

Abstract:

Object recognition is complicated by clutter, occlusion, and sensor error. Since pose hypotheses are based on image feature locations, these effects can lead to false negatives and positives. In a typical recognition algorithm, pose hypotheses are tested against the image, and a score is assigned to each hypothesis. We use a statistical model to determine the score distribution associated with correct and incorrect pose hypotheses, and use binary hypothesis testing techniques to distinguish between them. Using this approach we can compare algorithms and noise models, and automatically choose values for internal system thresholds to minimize the probability of making a mistake.
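
A toy Python sketch of the threshold-selection idea: given assumed Gaussian score models for correct and incorrect pose hypotheses and an assumed prior, sweep the decision threshold and pick the one minimizing the probability of error. The model and numbers are illustrative, not those of the paper.

import numpy as np
from scipy.stats import norm

mu_c, sd_c = 0.8, 0.1          # assumed score distribution for correct hypotheses
mu_i, sd_i = 0.5, 0.15         # assumed score distribution for incorrect hypotheses
p_correct = 0.2                # assumed prior probability that a hypothesis is correct

thresholds = np.linspace(0.0, 1.0, 1001)
p_miss = norm.cdf(thresholds, mu_c, sd_c)            # correct hypothesis scored below threshold
p_false_alarm = norm.sf(thresholds, mu_i, sd_i)      # incorrect hypothesis scored above threshold
p_error = p_correct * p_miss + (1 - p_correct) * p_false_alarm

best = thresholds[np.argmin(p_error)]
print(f"threshold = {best:.3f}, P(error) = {p_error.min():.4f}")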