939 results for Omission Error
Abstract:
Results of two experiments are reported that examined how people respond to rectangular targets of different sizes in simple hitting tasks. If a target moves in a straight line and a person is constrained to move along a linear track oriented perpendicular to the target's motion, then the length of the target along its direction of motion constrains the temporal accuracy and precision required to make the interception. The dimensions of the target perpendicular to its direction of motion place no constraints on performance in such a task. In contrast, if the person is not constrained to move along a straight track, the target's dimensions may constrain the spatial as well as the temporal accuracy and precision. The experiments reported here examined how people responded to targets of different vertical extent (height): the task was to strike targets that moved along a straight, horizontal path. In experiment 1 participants were constrained to move along a horizontal linear track to strike targets and so target height did not constrain performance. Target height, length and speed were co-varied. Movement time (MT) was unaffected by target height but was systematically affected by length (briefer movements to smaller targets) and speed (briefer movements to faster targets). Peak movement speed (Vmax) was influenced by all three independent variables: participants struck shorter, narrower and faster targets harder. In experiment 2, participants were constrained to move in a vertical plane normal to the target's direction of motion. In this task target height constrains the spatial accuracy required to contact the target. Three groups of eight participants struck targets of different height but of constant length and speed, hence constant temporal accuracy demand (different for each group; one group struck stationary targets, i.e. no temporal accuracy demand).
On average, participants showed little or no systematic response to changes in spatial accuracy demand on any dependent measure (MT, Vmax, spatial variable error). The results are interpreted in relation to previous results on movements aimed at stationary targets in the absence of visual feedback.
Abstract:
The purpose of the present study was to examine the benefits of providing audible speech to listeners with sensorineural hearing loss when the speech is presented in a background noise. Previous studies have shown that when listeners have a severe hearing loss in the higher frequencies, providing audible speech (in a quiet background) to these higher frequencies usually results in no improvement in speech recognition. In the present experiments, speech was presented in a background of multitalker babble to listeners with various severities of hearing loss. The signal was low-pass filtered at numerous cutoff frequencies and speech recognition was measured as additional high-frequency speech information was provided to the hearing-impaired listeners. It was found in all cases, regardless of hearing loss or frequency range, that providing audible speech resulted in an increase in recognition score. The change in recognition as the cutoff frequency was increased, along with the amount of audible speech information in each condition (articulation index), was used to calculate the "efficiency" of providing audible speech. Efficiencies were positive for all degrees of hearing loss. However, the gains in recognition were small, and the maximum score obtained by any listener was low, due to the noise background. An analysis of error patterns showed that, due to the limited speech audibility in a noise background, even severely impaired listeners used additional speech audibility in the high frequencies to improve their perception of the "easier" features of speech, including voicing.
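The efficiency measure described, the change in recognition score per unit of added audible speech information (articulation index), can be sketched with a few lines of arithmetic. The numbers below are illustrative stand-ins, not the study's data:

```python
# Hypothetical recognition scores (proportion correct) and articulation index
# (AI) values at successive low-pass cutoff frequencies -- illustrative only.
cutoff_hz = [1000, 2000, 3000, 4000]
score = [0.20, 0.32, 0.40, 0.44]   # recognition in multitalker babble
ai = [0.10, 0.22, 0.31, 0.36]      # audible speech information per condition

# Efficiency of each high-frequency increment: change in recognition score
# divided by the change in audible speech information that produced it.
efficiency = [(score[i + 1] - score[i]) / (ai[i + 1] - ai[i])
              for i in range(len(score) - 1)]
```

Positive efficiencies at every increment would correspond to the paper's finding that added audibility always helped, even for severe losses.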
Abstract:
The quantitative description of the quantum entanglement between a qubit and its environment is considered. Specifically, for the ground state of the spin-boson model, the entropy of entanglement of the spin is calculated as a function of α, the strength of the ohmic coupling to the environment, and ɛ, the level asymmetry. This is done by a numerical renormalization group treatment of the related anisotropic Kondo model. For ɛ=0, the entanglement increases monotonically with α, until it becomes maximal for α→1-. For fixed ɛ>0, the entanglement is a maximum as a function of α for a value, α=αM
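For context, the entropy of entanglement used here is the von Neumann entropy of the spin's reduced density matrix, which for a single qubit reduces to a binary entropy of one eigenvalue:

```latex
S(\alpha,\varepsilon)
  = -\operatorname{Tr}\!\left[\rho_s \log_2 \rho_s\right]
  = -p_+ \log_2 p_+ \;-\; p_- \log_2 p_- ,
```

where \rho_s is obtained by tracing the bosonic bath out of the ground state, p_± are its two eigenvalues (p_+ + p_- = 1), and S ranges from 0 for a product state to 1 for maximal spin-bath entanglement.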
Abstract:
In the context of cancer diagnosis and treatment, we consider the problem of constructing an accurate prediction rule on the basis of a relatively small number of tumor tissue samples of known type containing the expression data on very many (possibly thousands) genes. Recently, results have been presented in the literature suggesting that it is possible to construct a prediction rule from only a few genes such that it has a negligible prediction error rate. However, in these results the test error or the leave-one-out cross-validated error is calculated without allowance for the selection bias. There is no allowance because the rule is either tested on tissue samples that were used in the first instance to select the genes being used in the rule or because the cross-validation of the rule is not external to the selection process; that is, gene selection is not performed in training the rule at each stage of the cross-validation process. We describe how in practice the selection bias can be assessed and corrected for by either performing a cross-validation or applying the bootstrap external to the selection process. We recommend using 10-fold rather than leave-one-out cross-validation, and concerning the bootstrap, we suggest using the so-called .632+ bootstrap error estimate designed to handle overfitted prediction rules. Using two published data sets, we demonstrate that when correction is made for the selection bias, the cross-validated error is no longer zero for a subset of only a few genes.
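The selection-bias effect can be reproduced in a small sketch (hypothetical pure-noise data, a simple mean-difference gene filter, and a nearest-centroid classifier, not the authors' code): when the gene filter sees the full data set before cross-validation, the reported error is optimistic even though the data carry no signal; external cross-validation repeats the selection inside every training fold.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 40, 500, 10                       # samples, genes, genes retained
X = rng.normal(size=(n, p))                 # pure-noise "expression" matrix
y = np.array([0, 1] * (n // 2))             # balanced, arbitrary class labels

def select_genes(Xtr, ytr):
    # rank genes by absolute difference of class means (a crude t-like filter)
    diff = np.abs(Xtr[ytr == 0].mean(0) - Xtr[ytr == 1].mean(0))
    return np.argsort(diff)[-k:]

def nc_error(Xtr, ytr, Xte, yte):
    # nearest-centroid classification error on a held-out fold
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1))
    return np.mean(pred.astype(int) != yte)

folds = np.array_split(rng.permutation(n), 10)

def cv_error(select_inside):
    genes_all = select_genes(X, y)          # selection on ALL samples (biased)
    errs = []
    for te in folds:
        tr = np.setdiff1d(np.arange(n), te)
        g = select_genes(X[tr], y[tr]) if select_inside else genes_all
        errs.append(nc_error(X[tr][:, g], y[tr], X[te][:, g], y[te]))
    return float(np.mean(errs))

biased_error = cv_error(select_inside=False)    # internal CV (optimistic)
external_error = cv_error(select_inside=True)   # external CV (honest)
```

On pure-noise data the external estimate typically hovers near chance (0.5) while the internal one is optimistically low, mirroring the paper's finding that the corrected cross-validated error is no longer near zero.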
Abstract:
An investigation was undertaken to test the effectiveness of two procedures for recording boundaries and plot positions for scientific studies on farms on Leyte Island, the Philippines. The accuracy of a Garmin 76 Global Positioning System (GPS) unit and a compass and chain was checked under the same conditions. Tree canopies interfered with the ability of the satellite signal to reach the GPS and therefore the GPS survey was less accurate than the compass and chain survey. Where a high degree of accuracy is required, a compass and chain survey remains the most effective method of surveying land underneath tree canopies, providing operator error is minimised. For a large number of surveys and thus large amounts of data, a GPS is more appropriate than a compass and chain survey because data are easily up-loaded into a Geographic Information System (GIS). However, under dense canopies where satellite signals cannot reach the GPS, it may be necessary to revert to a compass survey or a combination of both methods.
Abstract:
Potential errors in the application of mixture theory to the analysis of multiple-frequency bioelectrical impedance data for the determination of body fluid volumes are assessed. Potential sources of error include conductive length, tissue fluid resistivity, body density, weight, and technical errors of measurement. Inclusion of inaccurate estimates of body density and weight introduces errors of typically < +/-3%, but incorrect assumptions regarding conductive length or fluid resistivities may each incur errors of up to 20%.
Abstract:
Bulk density of undisturbed soil samples can be measured using computed tomography (CT) techniques with a spatial resolution of about 1 mm. However, this technique may not be readily accessible. On the other hand, x-ray radiographs have only been considered as qualitative images to describe morphological features. A calibration procedure was set up to generate two-dimensional, high-resolution bulk density images from x-ray radiographs made with a conventional x-ray diffraction apparatus. Test bricks were made to assess the accuracy of the method. Slices of impregnated soil samples were made using hardsetting seedbeds that had been gamma scanned at 5-mm depth increments in a previous study. The calibration procedure involved three stages: (i) calibration of the image grey levels in terms of glass thickness using a staircase made from glass cover slips, (ii) measurement of the ratio between the soil and resin mass attenuation coefficients and the glass mass attenuation coefficient, using compacted bricks of known thickness and bulk density, and (iii) image correction accounting for the heterogeneity of the irradiation field. The procedure was simple, rapid, and the equipment was easily accessible. The accuracy of the bulk density determination was good (mean relative error 0.015). The bulk density images showed a good spatial resolution, so that many structural details could be observed. The depth functions were consistent with both the global shrinkage and the gamma probe data previously obtained. The suggested method would be easily applied to the new fuzzy set approach of soil structure, which requires generation of bulk density images. Also, it would be an invaluable tool for studies requiring high-resolution bulk density measurement, such as studies on soil surface crusts.
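The physical relation underlying any such grey-level calibration is the Beer-Lambert attenuation law. A minimal round-trip sketch (the coefficient and thickness values below are made up, not the authors' calibration constants):

```python
import math

def bulk_density(i, i0, mu_m, thickness_cm):
    """Solve Beer-Lambert, I = I0 * exp(-mu_m * rho * x), for bulk density rho.

    mu_m: mass attenuation coefficient (cm^2 g^-1); thickness_cm: sample
    thickness x along the beam (cm). All values here are illustrative.
    """
    return math.log(i0 / i) / (mu_m * thickness_cm)

# round-trip check with an assumed density of 1.4 g cm^-3
i0, mu_m, x, rho = 1000.0, 0.2, 0.5, 1.4
i = i0 * math.exp(-mu_m * rho * x)   # intensity transmitted through the sample
```

The paper's staircase and test-brick stages effectively calibrate mu_m and correct for field heterogeneity before this inversion is applied pixel by pixel.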
Abstract:
This study examined the relationship between isokinetic hip extensor/hip flexor strength, 1-RM squat strength, and sprint running performance for both a sprint-trained and non-sprint-trained group. Eleven male sprinters and 8 male controls volunteered for the study. On the same day subjects ran 20-m sprints from both a stationary start and with a 50-m acceleration distance, completed isokinetic hip extension/flexion exercises at 1.05, 4.74, and 8.42 rad.s(-1), and had their squat strength estimated. Stepwise multiple regression analysis showed that equations for predicting both 20-m maximum velocity run time and 20-m acceleration time may be calculated with an error of less than 0.05 sec using only isokinetic and squat strength data. However, a single regression equation for predicting both 20-m acceleration and maximum velocity run times from isokinetic or squat tests was not found. The regression analysis indicated that hip flexor strength at all test velocities was a better predictor of sprint running performance than hip extensor strength.
Abstract:
This study tested hypotheses that locus of causality attributions for the academic performance of others are influenced by whether the other is a specific individual, or a typical other, and whether the other is similar or dissimilar to self. The research was carried out in two studies. Study 1 entailed development of two scales to measure perceptions of interpersonal similarity: 254 Australian undergraduates rated their similarity to either a specific other or to typical other students. In Study 2, 332 subjects completed one of the 16-item scales developed in Study 1, along with Rosenberg's self-esteem scale, and self-attribution and other-attribution versions of the Multidimensional Multi-attribution Causation Scale (MMCS). Results showed that attributions for the academic performance of others were strongly affected by whether the other was perceived to be similar or dissimilar to self, especially when the other was a specific individual. In particular, causal attributions for similar specific others were more favourable than attributions for self.
Abstract:
The performance of three analytical methods for multiple-frequency bioelectrical impedance analysis (MFBIA) data was assessed. The methods were the established method of Cole and Cole, the newly proposed method of Siconolfi and co-workers and a modification of this procedure. Method performance was assessed from the adequacy of the curve fitting techniques, as judged by the correlation coefficient and standard error of the estimate, and the accuracy of the different methods in determining the theoretical values of impedance parameters describing a set of model electrical circuits. The experimental data were well fitted by all curve-fitting procedures (r = 0.9 with SEE 0.3 to 3.5% or better for most circuit-procedure combinations). Cole-Cole modelling provided the most accurate estimates of circuit impedance values, generally within 1-2% of the theoretical values, followed by the Siconolfi procedure using a sixth-order polynomial regression (1-6% variation). None of the methods, however, accurately estimated circuit parameters when the measured impedances were low (<20 Ω), reflecting the electronic limits of the impedance meter used. These data suggest that Cole-Cole modelling remains the preferred method for the analysis of MFBIA data.
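The Cole-Cole model the comparison favours fits the impedance locus Z(ω) = R∞ + (R0 − R∞)/(1 + (jωτ)^(1−α)). A minimal least-squares sketch on synthetic, noise-free data (the parameter values are illustrative, and this is not the authors' implementation):

```python
import numpy as np
from scipy.optimize import least_squares

def cole_cole(omega, r0, rinf, tau, alpha):
    # Z = Rinf + (R0 - Rinf) / (1 + (j*omega*tau)^(1 - alpha))
    return rinf + (r0 - rinf) / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

omega = 2 * np.pi * np.logspace(3, 6, 30)      # rad/s, an MFBIA-like sweep
true_p = (700.0, 300.0, 1e-5, 0.15)            # R0, Rinf (ohm), tau (s), alpha
z_meas = cole_cole(omega, *true_p)             # synthetic "measurements"

def residuals(p):
    dz = cole_cole(omega, *p) - z_meas
    return np.concatenate([dz.real, dz.imag])  # fit real and imaginary parts

fit = least_squares(residuals, x0=[500.0, 200.0, 3e-6, 0.05],
                    bounds=([0.0, 0.0, 1e-8, 0.0], [2e3, 2e3, 1e-3, 0.5]),
                    x_scale=[100.0, 100.0, 1e-6, 0.1])
r0, rinf, tau, alpha = fit.x
```

Fitting real and imaginary parts jointly is one way to use the whole impedance locus; the low-impedance failure the paper reports would appear here as residuals dominated by meter noise rather than by the model.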
Abstract:
The hepatic disposition and metabolite kinetics of a homologous series of diflunisal O-acyl esters (acetyl, butanoyl, pentanoyl, and hexanoyl) were determined using a single-pass perfused in situ rat liver preparation. The experiments were conducted using 2% BSA Krebs-Henseleit buffer (pH 7.4), and perfusions were performed at 30 mL/min in each liver. O-Acyl esters of diflunisal and pregenerated diflunisal were injected separately into the portal vein. The venous outflow samples containing the esters and the metabolite diflunisal were analyzed by high performance liquid chromatography (HPLC). The normalized outflow concentration-time profiles for each parent ester and the formed metabolite, diflunisal, were analyzed using statistical moments analysis and the two-compartment dispersion model. Data (presented as mean +/- standard error for triplicate experiments) were compared using repeated-measures ANOVA, significance level P < 0.05. The hepatic availability (AUC'), the fraction of the injected dose recovered in the outflowing perfusate, for O-acetyldiflunisal (C2D = 0.21 +/- 0.03) was significantly lower than for the other esters (0.34-0.38). However, R-N/f(u), the removal efficiency number R-N divided by the unbound fraction in perfusate f(u), which represents the removal efficiency of unbound ester by the liver, was significantly higher for the most lipophilic ester (O-hexanoyldiflunisal, C6D = 16.50 +/- 0.22) than for the other members of the series (9.57 to 11.17). The most lipophilic ester, C6D, had the largest permeability-surface area (PS) product (94.52 +/- 38.20 mL min(-1) g(-1) liver) and tissue distribution value VT (35.62 +/- 11.33 mL g(-1) liver) in this series. The MTTs of these O-acyl esters of diflunisal were not significantly different from one another. However, the MTTs of the metabolite diflunisal tended to increase with the parent ester's lipophilicity (11.41 +/- 2.19 s for C2D to 38.63 +/- 9.81 s for C6D).
The two-compartment dispersion model equations adequately described the outflow profiles for the parent esters and the metabolite diflunisal formed from the O-acyl esters of diflunisal in the liver.
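The statistical-moments analysis referred to computes the mean transit time (MTT) as the normalized first moment of the outflow concentration-time profile, MTT = ∫t·C(t)dt / ∫C(t)dt. A numeric sketch on a hypothetical profile (not the study's data):

```python
import numpy as np

t = np.linspace(0.0, 60.0, 601)         # time (s)
c = (t / 10.0) * np.exp(-t / 10.0)      # hypothetical outflow profile C(t)

def trap(y, x):
    # trapezoidal integration, kept explicit for self-containment
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

auc = trap(c, t)              # zeroth moment: area under the curve
mtt = trap(t * c, t) / auc    # first moment / zeroth moment = MTT (s)
```

For this profile the untruncated MTT is 20 s; the small shortfall in the computed value reflects truncating the tail at 60 s, the same tail-censoring issue that moments analysis faces with real outflow data.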
Abstract:
A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but for which its sum or product with a scrambling random variable of known distribution is known. The performance of two likelihood-based estimators is investigated, namely a Bayesian estimator achieved through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and that its relative performance improves as the responses become more scrambled.
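The multiplicative scrambling model can be illustrated with a simple moment-based correction (a stand-in for exposition, not the Bayesian MCMC or maximum-likelihood estimators the paper studies): because the scrambler s is independent of the covariate and has known mean, OLS moments computed from the scrambled responses, divided by E[s], recover the regression coefficients.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50000
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=n)  # unobserved sensitive response
s = rng.uniform(0.9, 1.5, size=n)                  # scrambler with known E[s] = 1.2
y_obs = s * y                                      # only the product is reported

# E[y_obs | x] = E[s] * (a + b*x), so the OLS slope and intercept computed on
# the scrambled responses, divided by E[s], estimate b and a respectively.
es = 1.2
b_hat = (np.cov(x, y_obs)[0, 1] / np.var(x)) / es
a_hat = (y_obs.mean() - b_hat * es * x.mean()) / es
```

The price of privacy is visible in the variance: s inflates the residual noise, which is exactly the regime where the paper finds the Bayesian estimator pulling ahead of such simple corrections.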
Abstract:
The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with fitting of the isotherm data by least squares using regularization. A rapid and simple technique for ensuring non-negativity of the solutions is also developed, which modifies an original solution that contains negative values. The technique yields stable and converged solutions, and is implemented in the package RIDFEC. The package is demonstrated to be robust, yielding results which are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice of relative or absolute error norm in the least-squares analysis is best based on the kind of error in the data. (C) 1998 Elsevier Science Ltd. All rights reserved.
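The regularized least-squares step at the core of such inversions can be sketched as Tikhonov-damped normal equations followed by a crude non-negativity repair. The kernel, grid, and clipping step below are illustrative stand-ins for the paper's finite element collocation and RIDFEC package:

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.linspace(0.01, 1.0, 60)                 # relative pressures
r = np.linspace(1.0, 10.0, 40)                 # pore radii (arbitrary units)
K = (p[:, None] * r[None, :]) / (1.0 + p[:, None] * r[None, :])  # toy kernel

f_true = np.exp(-0.5 * (r - 4.0) ** 2)         # assumed pore size distribution
a = K @ f_true + 1e-3 * rng.normal(size=p.size)    # noisy "isotherm" data

lam = 1e-2                                     # regularization parameter
A = K.T @ K + lam * np.eye(r.size)             # Tikhonov-damped normal equations
f = np.linalg.solve(A, K.T @ a)

f = np.clip(f, 0.0, None)                      # crude non-negativity repair
rel_misfit = np.linalg.norm(K @ f - a) / np.linalg.norm(a)
```

Without the damping term the inversion is ill-posed and noise is amplified into oscillations; the regularization trades a small fitting-error increase for a stable, physically plausible distribution, the same trade-off the paper tunes against the known data error.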
Abstract:
We present a review of perceptual image quality metrics and their application to still image compression. The review describes how image quality metrics can be used to guide an image compression scheme and outlines the advantages, disadvantages and limitations of a number of quality metrics. We examine a broad range of metrics ranging from simple mathematical measures to those which incorporate full perceptual models. We highlight some variation in the models for luminance adaptation and the contrast sensitivity function and discuss what appears to be a lack of a general consensus regarding the models which best describe contrast masking and error summation. We identify how the various perceptual components have been incorporated in quality metrics, and identify a number of psychophysical testing techniques that can be used to validate the metrics. We conclude by illustrating some of the issues discussed throughout the paper with a simple demonstration. (C) 1998 Elsevier Science B.V. All rights reserved.
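At the "simple mathematical measure" end of the range the review covers sits mean-squared error and its logarithmic form, PSNR. A minimal sketch (the images below are random test data, not from the review):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB: a purely mathematical quality measure,
    # in contrast with the perceptual models (luminance adaptation, CSF,
    # masking) that the review surveys.
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
degraded = np.clip(ref + rng.normal(0.0, 5.0, size=ref.shape), 0.0, 255.0)
```

Measures like this treat every pixel error equally, which is precisely the limitation that motivates the perceptual components (contrast masking, error summation) whose modelling the review finds no consensus on.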
Abstract:
A finite element model (FEM) of the cell-compression experiment has been developed in dimensionless form to extract the fundamental cell-wall-material properties (i.e. the constitutive equation and its parameters) from experimental force-displacement data. The FEM simulates the compression of a thin-walled, liquid-filled sphere between two flat surfaces. The cell-wall was taken to be permeable and the FEM therefore accounts for volume loss during compression. Previous models assume an impermeable wall and hence a conserved cell volume during compression. A parametric study was conducted for structural parameters representative of yeast. It was shown that the common approach of assuming reasonable values for unmeasured parameters (e.g. cell-wall thickness, initial radial stretch) can give rise to nonunique solutions for both the form and constants in the cell-wall constitutive relationship. Similarly, measurement errors can also lead to an incorrectly defined cell-wall constitutive relationship. Unique determination of the fundamental wall properties by cell compression requires accurate and precise measurement of a minimum set of parameters (initial cell radius, initial cell-wall thickness, and the volume loss during compression). In the absence of such measurements the derived constitutive relationship may be in considerable error, and should be evaluated against its ability to predict the outcome of other mechanical experiments. (C) 1998 Elsevier Science Ltd. All rights reserved.