948 results for Error threshold


Relevance: 20.00%

Abstract:

The paratext framework is now used in a variety of fields to assess, measure, analyze, and comprehend the elements that provide thresholds, allowing scholars to better understand digital objects. Researchers from many disciplines revisit paratextual theories in order to grasp what surrounds text in the digital age. Examining Paratextual Theory and its Applications in Digital Culture suggests a theoretical and practical tool for building bridges between disciplines interested in conducting joint research and exploration of digital culture. Helping scholars from different fields find an interdisciplinary framework and common language to study digital objects, this book serves as a useful reference for academics, librarians, professionals, researchers, and students, offering a collaborative outlook and perspective.

Relevance: 20.00%

Abstract:

Animal models of osteoarthritis allow the potential of therapeutic agents to be evaluated during the preclinical phase of development. The present work considers the dog as a model of natural osteoarthritis (in companion animals) or of experimental osteoarthritis (induced by surgical transection of the cranial cruciate ligament). Within these experiments, the peak vertical ground reaction force, measured during kinetic gait analysis, is proposed as an indicator of functional and structural effects in these osteoarthritis models. In a canine model of natural osteoarthritis, the minimal detectable change threshold was determined. Changes in locomotor dysfunction can now be identified free of the measurement error inherent in recording peak vertical force, which in turn allows responders to be identified in clinical trials conducted in osteoarthritic dogs. A retrospective analysis subsequently determined a responder rate of 62.8% and an effect size of 0.7 for therapeutic approaches currently offered to osteoarthritic dogs. This analysis also established that the demonstration of a therapeutic response was favoured in the presence of severe locomotor dysfunction. In a canine model of osteoarthritis induced by surgical transection of the cranial cruciate ligament, peak vertical force showed an inverse relationship with certain types of osteoarthritic lesions assessed by magnetic resonance imaging. The sensitivity of peak vertical force for detecting structural effects on the subchondral bone produced by an anti-resorptive agent (tiludronate) was also demonstrated in the same model. Experiments in the setting of natural canine osteoarthritis provide further validation of the results of controlled clinical trials that use peak vertical force as a criterion of functional efficacy, and are expected to yield the convincing clinical evidence required for evidence-based medicine. In the experimental setting, the relevance of recording locomotor dysfunction is emphasised, since it is related to the state of the joint structures. By performing gait analysis together with structural assessment, it should be possible to establish the repercussion of structural benefits on joint discomfort. This work suggests that a preclinical investigation platform combining the canine cranial cruciate ligament transection model with a clinical trial in osteoarthritic dogs is a means of identifying structural benefits that have functional impact. The inferential potential of these canine osteoarthritis models toward humans would thus be enhanced by using peak vertical force.
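
The abstract does not spell out how the minimal detectable change threshold is obtained; a standard route derives it from test-retest reliability via SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. The Python sketch below illustrates this on hypothetical peak-vertical-force data (all numbers invented), counting as responders the dogs whose change exceeds the MDC.

```python
import numpy as np

# Hypothetical repeated baseline measurements of peak vertical force (PVF),
# in % body weight, for a group of dogs (rows) at two sessions (columns).
pvf = np.array([
    [62.1, 63.0], [55.4, 54.2], [60.3, 61.1], [58.7, 57.5],
    [64.9, 65.8], [52.3, 53.6], [59.0, 58.1], [61.7, 62.4],
])

# Intraclass correlation via one-way random-effects ANOVA (ICC(1,1)).
n, k = pvf.shape
grand = pvf.mean()
ms_between = k * np.sum((pvf.mean(axis=1) - grand) ** 2) / (n - 1)
ms_within = np.sum((pvf - pvf.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

sem = pvf.std(ddof=1) * np.sqrt(1.0 - icc)   # standard error of measurement
mdc95 = 1.96 * np.sqrt(2.0) * sem            # minimal detectable change (95%)
print(f"ICC={icc:.3f}  SEM={sem:.2f}  MDC95={mdc95:.2f} %BW")

# A treated dog counts as a responder if its improvement exceeds the MDC.
change = np.array([3.9, 0.8, 5.2, 1.1, 4.4])  # hypothetical post-pre changes
print("responders:", np.sum(change > mdc95), "of", len(change))
```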

Relevance: 20.00%

Abstract:

A pulsed Nd:YAG laser beam is used to produce a transient refractive index gradient in the air adjoining the plane surface of the sample material. This refractive index gradient is probed by a continuous He-Ne laser beam propagating parallel to the sample surface. The deflection signals produced by the probe beam exhibit drastic variations when the pump laser energy density crosses the damage threshold of the sample. The measurements are used to estimate the damage threshold for a few polymer samples, and the values obtained are in good agreement with those determined by other methods.
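
One simple way to extract such a threshold from deflection-versus-fluence data is a two-segment linear fit, scanning for the breakpoint that minimizes the total squared error. This is an illustrative sketch on synthetic data, not the authors' analysis procedure.

```python
import numpy as np

# Hypothetical probe-beam deflection amplitude vs pump energy density:
# weak linear response below threshold, steep rise above it.
fluence = np.linspace(0.2, 3.0, 40)                    # J/cm^2
signal = np.where(fluence < 1.5,
                  0.1 * fluence,
                  0.1 * 1.5 + 2.0 * (fluence - 1.5))
signal = signal + np.random.default_rng(0).normal(0, 0.02, fluence.size)

def sse_of_line(x, y):
    """Sum of squared residuals of a least-squares line through (x, y)."""
    coeffs = np.polyfit(x, y, 1)
    return np.sum((y - np.polyval(coeffs, x)) ** 2)

# Scan candidate breakpoints; the best split estimates the damage threshold.
best = min(range(3, fluence.size - 3),
           key=lambda i: sse_of_line(fluence[:i], signal[:i])
                       + sse_of_line(fluence[i:], signal[i:]))
print(f"estimated damage threshold ~ {fluence[best]:.2f} J/cm^2")
```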

Relevance: 20.00%

Abstract:

The acoustic signals generated in solids by interaction with a pulsed laser beam are used to determine the ablation threshold of bulk polymer samples of teflon (polytetrafluoroethylene) and nylon under irradiation from a Q-switched Nd:YAG laser at a wavelength of 1.06 µm. A suitably designed piezoelectric transducer is employed for the detection of the photoacoustic (PA) signals generated in this process. An abrupt increase in the amplitude of the PA signal is observed at the ablation threshold. Distinct threshold values also exist for the different damage mechanisms operative at different laser energy densities, such as changes in surface morphology, bond breaking, and melting.
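
Since the abstract describes abrupt jumps in PA amplitude at distinct thresholds, one illustrative way to locate such jumps is to look for peaks in the discrete derivative of amplitude versus energy density. The data, step sizes, and peak criterion below are all hypothetical.

```python
import numpy as np

# Hypothetical PA amplitude vs laser energy density with two abrupt steps,
# standing in for two damage mechanisms (e.g., surface change, then ablation).
fluence = np.linspace(0.5, 5.0, 90)
amp = 0.05 * fluence + 0.8 * (fluence > 1.8) + 2.5 * (fluence > 3.6)
amp = amp + np.random.default_rng(1).normal(0, 0.03, fluence.size)

# Steps in the signal show up as peaks in its discrete derivative.
d = np.diff(amp) / np.diff(fluence)
jumps = np.where(d > 5.0 * np.median(np.abs(d)))[0]   # crude peak criterion

# Merge adjacent indices so each step is reported once.
groups = np.split(jumps, np.where(np.diff(jumps) > 1)[0] + 1)
thresholds = [fluence[g[0]] for g in groups if g.size]
print("estimated thresholds (J/cm^2):", [round(t, 2) for t in thresholds])
```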

Relevance: 20.00%

Abstract:

Laser-induced damage and ablation thresholds of bulk superconducting samples of Bi2(SrCa)xCu3Oy (x = 2, 2.2, 2.6, 2.8, 3) and Bi1.6(Pb)xSr2Ca2Cu3Oy (x = 0, 0.1, 0.2, 0.3, 0.4) under irradiation with a 1.06 µm beam from a Nd:YAG laser have been determined as a function of x by the pulsed photothermal deflection technique. The threshold power densities for ablation as well as damage are found to increase with increasing x in both systems, while in the Pb-doped system the threshold values decrease above a specific value of x, coinciding with the point at which Tc also begins to fall.

Relevance: 20.00%

Abstract:

The photothermal deflection technique was used to determine the laser damage threshold of polymer samples of teflon (PTFE) and nylon. The experiment was conducted using a Q-switched Nd:YAG laser operating at its fundamental wavelength (1.06 µm, pulse width 10 ns FWHM) as the irradiation source and a He-Ne laser as the probe beam, along with a position-sensitive detector. The damage threshold values determined by the photothermal deflection method were in good agreement with those determined by other methods.

Relevance: 20.00%

Abstract:

In this paper we fit a threshold autoregressive (TAR) model to time series data of monthly coconut oil prices in the Cochin market. The procedure proposed by Tsay [7] for fitting the TAR model is briefly presented. The fitted model is compared with a simple autoregressive (AR) model. The results favour the TAR process: the monthly coconut oil prices exhibit a type of non-linearity that can be accounted for by a threshold model.
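
Tsay's full procedure is not reproduced in the abstract; the sketch below shows the basic idea of a two-regime TAR fit on simulated data: grid-search the threshold on the lagged series, fit an AR(1) in each regime by least squares, and compare the pooled residual sum of squares with that of a single AR(1). All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a two-regime TAR(1) series standing in for monthly prices.
y = np.zeros(300)
for t in range(1, 300):
    phi = 0.9 if y[t - 1] <= 0.0 else 0.3      # regime depends on lagged value
    y[t] = phi * y[t - 1] + rng.normal()

x, z = y[1:], y[:-1]                            # response and its lag

def ar1_rss(xx, zz):
    """Residual sum of squares of a least-squares AR(1) fit x = c + phi*z."""
    A = np.column_stack([np.ones_like(zz), zz])
    beta, rss, *_ = np.linalg.lstsq(A, xx, rcond=None)
    return rss[0] if rss.size else np.sum((xx - A @ beta) ** 2)

# Grid-search the threshold over interior sample quantiles of the lag.
candidates = np.quantile(z, np.linspace(0.15, 0.85, 30))
best_r = min(candidates,
             key=lambda r: ar1_rss(x[z <= r], z[z <= r])
                         + ar1_rss(x[z > r], z[z > r]))
tar_rss = (ar1_rss(x[z <= best_r], z[z <= best_r])
           + ar1_rss(x[z > best_r], z[z > best_r]))
print(f"threshold ~ {best_r:.2f}; AR RSS = {ar1_rss(x, z):.1f}; TAR RSS = {tar_rss:.1f}")
```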

Relevance: 20.00%

Abstract:

Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This necessitates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code; such tools can help meet all of these goals and can significantly improve software quality, yet they remain a challenging field.

This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making early detection of otherwise hard-to-find software bugs more effective through static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, thereby improving both the debugging process and the quality of the code.

Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually along all possible execution paths of the application programs. An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and in deciding on the optimum allocation of data to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active-memory-bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy, based on the stipulated rules for the target processor, are identified.

This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces the state space created, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices toward correct use of difficult microcontroller features when developing embedded systems.
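
As a toy illustration of the redundant bank-switching idea (not the dissertation's relation-matrix algorithm), the sketch below tracks the active-bank state through a straight-line instruction sequence and drops any bank-select instruction that re-selects the already active bank. The mnemonics are loosely modelled on PIC-style code and are hypothetical.

```python
def eliminate_redundant_banksel(instructions):
    """Remove bank-select instructions that do not change the active bank.
    Each instruction is a (mnemonic, operand) pair; ('BANKSEL', n) selects
    bank n. Valid only for straight-line code: a real tool must reset the
    tracked state at branch targets (or intersect states along CFG paths)."""
    active_bank = None                  # unknown on entry
    optimized = []
    for mnemonic, operand in instructions:
        if mnemonic == "BANKSEL":
            if operand == active_bank:
                continue                # redundant: bank already selected
            active_bank = operand
        optimized.append((mnemonic, operand))
    return optimized

code = [
    ("BANKSEL", 1), ("MOVWF", "TRISB"),
    ("BANKSEL", 1), ("MOVWF", "TRISA"),   # redundant re-selection of bank 1
    ("BANKSEL", 0), ("MOVWF", "PORTB"),
]
print(eliminate_redundant_banksel(code))
```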

Relevance: 20.00%

Abstract:

Biometrics deals with the physiological and behavioural characteristics of an individual to establish identity. Fingerprint-based authentication is the most advanced biometric authentication technology. The minutiae-based fingerprint identification method offers a reasonable identification rate; however, the minutiae feature map consists of about 70-100 minutia points, and matching accuracy drops as the size of the database grows. It is therefore desirable to make the fingerprint feature code as small as possible so that identification becomes easier. In this research, a novel global-singularity-based fingerprint representation is proposed. The fingerprint baseline, the line between the distal and intermediate phalangeal joint lines in the fingerprint, is taken as the reference line. A polygon is formed from the singularities and the fingerprint baseline. The feature vector comprises the polygon's angles, sides, area, and type, together with the ridge counts between the singularities. A 100% recognition rate is achieved with this method. The method is compared with the conventional minutiae-based recognition method in terms of computation time, receiver operating characteristic (ROC), and feature vector length. Speech is a behavioural biometric modality and can be used for speaker identification. In this work, MFCCs of text-dependent speech are computed and clustered using the k-means algorithm. A backpropagation-based artificial neural network is trained to identify the clustered speech code, and the performance of the neural network classifier is compared with a VQ-based Euclidean minimum-distance classifier. Biometric systems that use a single modality are usually affected by problems such as noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. A multi-finger, feature-level fusion based fingerprint recognition system is developed and its performance measured in terms of the ROC curve. Score-level fusion of the fingerprint and speech based recognition systems is performed, and 100% accuracy is achieved for a considerable range of matching thresholds.
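
The abstract does not give the fusion rule; a common textbook scheme, used here purely as an illustration with invented scores and weights, is min-max normalization of each matcher's score followed by a weighted sum compared against a matching threshold.

```python
import numpy as np

def min_max(scores):
    """Map raw matcher scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Hypothetical raw match scores from the two matchers for the same probes.
finger_raw = [412, 388, 455, 129, 98, 140]    # higher = better match
speech_raw = [0.81, 0.77, 0.90, 0.35, 0.22, 0.41]

w_finger = 0.6                                 # matcher weights (assumed)
fused = w_finger * min_max(finger_raw) + (1 - w_finger) * min_max(speech_raw)

threshold = 0.5                                # matching threshold (assumed)
print("accept decisions:", fused >= threshold)
```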

Relevance: 20.00%

Abstract:

In recent years, reversible logic has emerged as one of the most important approaches to power optimization, with applications in low-power CMOS, quantum computing, and nanotechnology. Low-power circuits implemented using reversible logic that provide single error correction and double error detection (SEC-DED) are proposed in this paper. The design uses a new 4 x 4 reversible gate called 'HCG' for implementing Hamming error coding and detection circuits. A parity-preserving HCG (PPHCG) that preserves the input parity at the output bits is used to achieve fault tolerance for the Hamming error coding and detection circuits.
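
The HCG gate itself is not specified in the abstract; as background, the Hamming SEC-DED code it implements can be sketched in software. The snippet below is a plain Hamming(7,4) code extended with an overall parity bit: a single bit error is corrected, a double bit error is detected.

```python
def secded_encode(d1, d2, d3, d4):
    """Hamming(7,4) plus an overall parity bit (SEC-DED). Positions 1..7
    hold the Hamming codeword (p1 p2 d1 p3 d2 d3 d4); position 0 is the
    overall parity bit, chosen so the total parity of all 8 bits is even."""
    p1 = d1 ^ d2 ^ d4        # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4        # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4        # covers positions 4, 5, 6, 7
    code = [0, p1, p2, d1, p3, d2, d3, d4]
    code[0] = sum(code) % 2
    return code

def secded_decode(code):
    """Return (data_bits, status), status in {'ok','corrected','double error'}."""
    syndrome = 0
    for pos in range(1, 8):
        if code[pos]:
            syndrome ^= pos                  # XOR of positions holding a 1
    overall = sum(code) % 2                  # 0 unless an odd number of errors
    if syndrome and overall:                 # one error: syndrome = its position
        code = code.copy()
        code[syndrome] ^= 1
        return [code[3], code[5], code[6], code[7]], "corrected"
    if syndrome and not overall:             # two errors: detectable, not fixable
        return None, "double error"
    if overall:                              # error in the parity bit itself
        return [code[3], code[5], code[6], code[7]], "corrected"
    return [code[3], code[5], code[6], code[7]], "ok"

cw = secded_encode(1, 0, 1, 1)
cw[5] ^= 1                                   # inject a single bit flip
print(secded_decode(cw))                     # -> ([1, 0, 1, 1], 'corrected')
```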

Relevance: 20.00%

Abstract:

In Wireless Sensor Networks (WSNs), neglecting the effects of varying channel quality can lead to unnecessary wastage of precious battery resources, which in turn can result in the rapid depletion of sensor energy and the partitioning of the network. Fairness is a critical issue when accessing a shared wireless channel, and fair scheduling must be employed to provide the proper flow of information in a WSN. In this paper, we develop a channel-adaptive MAC protocol with a traffic-aware dynamic power management algorithm for efficient packet scheduling and queuing in a sensor network, with the time-varying characteristics of the wireless channel also taken into consideration. The proposed protocol calculates a combined weight value based on the channel state and link quality. Transmission is then allowed only for those nodes whose weights exceed a minimum quality threshold; nodes attempting to access the wireless medium with a low weight may transmit only once their weight becomes high. This leaves many poor-quality nodes deprived of transmission for a considerable amount of time. To avoid buffer overflow and to achieve fairness for the poor-quality nodes, we design a load prediction algorithm. We also design a traffic-aware dynamic power management scheme that minimizes energy consumption by turning off the radio interface of all unnecessary nodes not included in the routing path. Simulation results show that the proposed protocol achieves higher throughput and fairness while also reducing delay.
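
The exact weight formula is not given in the abstract; a minimal sketch of the admission rule it describes, assuming the combined weight is a convex combination of normalized channel-state and link-quality metrics, follows. All constants are assumptions.

```python
# Minimal sketch of the weight-based admission rule described above.
ALPHA = 0.6                 # relative importance of channel state vs link quality
MIN_WEIGHT = 0.5            # minimum quality threshold for channel access

def combined_weight(channel_state, link_quality):
    """Both inputs are assumed normalized to [0, 1]."""
    return ALPHA * channel_state + (1 - ALPHA) * link_quality

nodes = {                   # node id -> (channel state, link quality), hypothetical
    "n1": (0.9, 0.8),
    "n2": (0.4, 0.3),       # poor channel: deferred until its weight recovers
    "n3": (0.7, 0.6),
}

for node, (cs, lq) in nodes.items():
    w = combined_weight(cs, lq)
    action = "transmit" if w >= MIN_WEIGHT else "defer (queue, load prediction)"
    print(f"{node}: weight={w:.2f} -> {action}")
```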

Relevance: 20.00%

Abstract:

Biclustering is the simultaneous clustering of both rows and columns of a data matrix. A measure called mean squared residue (MSR) is used to simultaneously evaluate the coherence of rows and columns within a submatrix. In this paper a novel algorithm is developed for biclustering gene expression data using the newly introduced concept of an MSR difference threshold. In the first step, high-quality bicluster seeds are generated using the K-Means clustering algorithm. Then more genes and conditions (nodes) are added to the bicluster. Before adding a node, the MSR of the bicluster is calculated, giving a value X; after adding the node, the MSR is calculated again, giving a value Y. The added node is deleted if Y minus X is greater than the MSR difference threshold, or if Y is greater than the MSR threshold, which depends on the dataset. The MSR difference threshold differs between the gene list and the condition list and also depends on the dataset; proper values should be identified through experimentation in order to obtain biclusters of high quality. The results obtained on benchmark datasets clearly indicate that this algorithm is better than many of the existing biclustering algorithms.
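
A minimal sketch of the node-addition rule described above: compute the mean squared residue of the bicluster before and after adding a row, and keep the row only if the MSR increase stays within the difference threshold and the new MSR stays within the dataset-dependent MSR threshold. The data and threshold values are illustrative, not the paper's.

```python
import numpy as np

def msr(sub):
    """Mean squared residue of a submatrix (Cheng-Church residue)."""
    row = sub.mean(axis=1, keepdims=True)
    col = sub.mean(axis=0, keepdims=True)
    return np.mean((sub - row - col + sub.mean()) ** 2)

def try_add_row(data, rows, cols, new_row, msr_delta, msr_max):
    """Add new_row to the bicluster only if the MSR rule allows it."""
    before = msr(data[np.ix_(rows, cols)])
    after = msr(data[np.ix_(rows + [new_row], cols)])
    if after - before > msr_delta or after > msr_max:
        return rows, False            # reject: residue grew too much
    return rows + [new_row], True

rng = np.random.default_rng(3)
data = rng.normal(size=(20, 10)) * 0.3
data[:6, :5] += np.arange(5)          # plant a coherent (additive) bicluster

rows, cols = [0, 1, 2], [0, 1, 2, 3, 4]          # seed bicluster
for candidate in [3, 4, 12]:                      # row 12 is not in the pattern
    rows, added = try_add_row(data, rows, cols, candidate,
                              msr_delta=0.1, msr_max=0.2)
    print(f"row {candidate}: {'added' if added else 'rejected'}")
```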

Relevance: 20.00%

Abstract:

While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds place a large burden on the energy efficiency of high-speed links and make the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
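
The paper's analytical framework is not reproduced here, but the underlying effect is easy to demonstrate: in a toy BPSK link with a hypothetical residual-ISI pulse and exaggerated noise, the bit-error indicator is positively correlated at separations within the channel memory and decorrelates beyond it, which is the intuition behind a correlation distance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residual-ISI pulse: main cursor plus two post-cursor taps.
pulse = np.array([1.0, 0.25, -0.1])

n = 200_000
bits = rng.integers(0, 2, n)
symbols = 2.0 * bits - 1.0                    # BPSK levels +/-1
rx = np.convolve(symbols, pulse)[:n]          # channel with residual ISI
rx += rng.normal(0.0, 0.45, n)                # noise exaggerated for speed

errors = ((rx > 0).astype(int) != bits).astype(float)
p = errors.mean()

# Empirical correlation of the error indicator at separation k.
for k in range(1, 6):
    joint = np.mean(errors[:-k] * errors[k:])
    corr = (joint - p * p) / (p * (1 - p))
    print(f"lag {k}: P(err,err)={joint:.2e}, correlation={corr:+.3f}")
```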

Relevance: 20.00%

Abstract:

Coded OFDM is a transmission technique used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved, and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition; however, some subcarriers can suffer deep fading due to multipath, and the power allocated to a faded subcarrier is likely to be wasted. In this paper, we compute FER and BER bounds of a coded OFDM system, given as convex functions for a given channel coder, interleaver, and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimum power allocation for a given coded OFDM system and channel response, minimizing FER or BER under a constant total transmission power constraint, is obtained.
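
The paper's actual FER/BER bounds are not reproduced in the abstract; as a stand-in convex surrogate, the sketch below minimizes the sum of exp(-g_k * p_k) over subcarrier powers p_k under a total-power equality constraint, using a generic convex solver. The channel gains and power budget are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical per-subcarrier channel gains (|H_k|^2).
g = np.array([2.1, 0.3, 1.4, 0.05, 0.9, 1.7, 0.6, 1.1])
P_total = 8.0

def bound(p):
    """Convex surrogate for the error bound: sum_k exp(-g_k * p_k)."""
    return np.sum(np.exp(-g * p))

res = minimize(
    bound,
    x0=np.full(len(g), P_total / len(g)),         # start from equal allocation
    bounds=[(0, None)] * len(g),                  # p_k >= 0
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - P_total}],
    method="SLSQP",
)
print("optimized allocation:", np.round(res.x, 3))
print("bound (equal power) :", bound(np.full(len(g), P_total / len(g))))
print("bound (optimized)   :", bound(res.x))
```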

Relevance: 20.00%

Abstract:

The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ε. Here µ(x) is the mean response at the predictor variable value X = x, and ε = Y − µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount absorbed by the plant, X, is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants, Z, can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease, since measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural and medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ε, X = Z + η, where η and ε are random errors with E(ε) = 0, X and η are d-dimensional, and Z is the observable d-dimensional random variable.
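
A small simulation makes the setting concrete: Z is observed, the true covariate X = Z + η is not, and Y = µ(X) + ε. With a hypothetical nonlinear µ(x; a, b) = a·exp(bx), naively regressing Y on Z estimates E[µ(Z + η) | Z] rather than µ itself, so the naive parameter estimates are biased, which motivates dedicated fitting methods for the Berkson model.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Hypothetical parametric mean response mu(x; a, b) = a * exp(b * x).
def mu(x, a, b):
    return a * np.exp(b * x)

n = 2000
Z = rng.uniform(0.0, 2.0, n)        # observed dose (e.g., nominal amount applied)
X = Z + rng.normal(0.0, 0.5, n)     # unobserved true dose: Berkson error X = Z + eta
Y = mu(X, 2.0, 0.8) + rng.normal(0.0, 0.2, n)   # response with E(eps) = 0

# Naive fit regresses Y on the observed Z. Under Berkson error this targets
# E[mu(Z + eta) | Z] rather than mu, so for nonlinear mu the estimates are
# biased (here the scale parameter a is inflated by roughly exp(b^2 var(eta)/2)).
(a_hat, b_hat), _ = curve_fit(mu, Z, Y, p0=(1.0, 1.0))
print(f"true (a, b) = (2.0, 0.8); naive fit = ({a_hat:.3f}, {b_hat:.3f})")
```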