959 results for testing method


Relevance: 30.00%

Abstract:

We have used an animal model to test the reliability of a new portable continuous-wave Doppler ultrasonic cardiac output monitor, the USCOM. In six anesthetized dogs, cardiac output was measured with a high-precision transit-time ultrasonic flowprobe placed on the ascending aorta. The dogs' cardiac output was increased with a dopamine infusion (0–15 µg·kg⁻¹·min⁻¹). Simultaneous flowprobe and USCOM cardiac output measurements were made. Up to 64 pairs of readings were collected from each dog. Data were compared by using the Bland and Altman plot method and Lin's concordance correlation coefficient. A total of 319 sets of paired readings were collected. The mean (±SD) cardiac output was 2.62 ± 1.04 L/min, and readings ranged from 0.79 to 5.73 L/min. The mean bias between the 2 sets of readings was -0.01 L/min, with limits of agreement (95% confidence intervals) of -0.34 to 0.31 L/min. This represents a 13% error. In five of six dogs, there was a high degree of concordance, or agreement, between the 2 methods, with coefficients >0.9. The USCOM provided reliable measurements of cardiac output over a wide range of values. Clinical trials are needed to validate the device in humans.
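The Bland-Altman comparison used above reduces to a few lines of arithmetic: the bias is the mean of the paired differences, the limits of agreement are bias ± 1.96 SD of those differences, and the percentage error relates the half-width of the limits to the mean reading. A minimal sketch with hypothetical paired readings (the study's raw data are not reproduced here):

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics for paired readings.

    Returns the mean bias, the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences), and the percentage
    error (half-width of the limits over the mean reading).
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    grand_mean = mean([(a + b) / 2 for a, b in zip(method_a, method_b)])
    pct_error = 100 * 1.96 * sd / grand_mean
    return bias, limits, pct_error

# Hypothetical paired cardiac output readings (L/min), illustration only.
flowprobe = [2.1, 2.5, 3.0, 3.6, 4.2]
uscom = [2.0, 2.6, 2.9, 3.7, 4.1]
bias, limits, pct = bland_altman(flowprobe, uscom)
print(f"bias={bias:.3f} L/min, limits=({limits[0]:.2f}, {limits[1]:.2f}), error={pct:.1f}%")
```

On the study's reported numbers (limits -0.34 to 0.31 L/min around a mean of 2.62 L/min), the same half-width-over-mean ratio gives roughly the 13% error quoted.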

Relevance: 30.00%

Abstract:

We have undertaken two-dimensional gel electrophoresis proteomic profiling on a series of cell lines with different recombinant antibody production rates. Due to the nature of gel-based experiments, not all protein spots are detected across all samples in an experiment, and hence datasets are invariably incomplete. New approaches are therefore required for the analysis of such incomplete datasets. We approached this problem in two ways. Firstly, we applied a missing value imputation technique to calculate missing data points. Secondly, we combined a singular value decomposition based hierarchical clustering with the expression variability test to identify protein spots whose expression correlates with increased antibody production. The results showed that while imputation of missing data was a useful method to improve the statistical analysis of such datasets, it was of limited use in differentiating between the samples investigated, and the analysis highlighted a small number of candidate proteins for further investigation.
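The specific imputation technique is not detailed in the abstract; as an illustration of the general idea only, here is a minimal sketch that fills each spot's missing intensities with the mean of that spot's observed values (the function name and example table are invented):

```python
from statistics import mean

def impute_missing(rows):
    """Fill missing values (None) in a spot-by-sample intensity table
    with the mean of the observed values for that protein spot (row).
    Rows with no observed values are left unchanged."""
    filled = []
    for row in rows:
        observed = [v for v in row if v is not None]
        if not observed:
            filled.append(list(row))
            continue
        row_mean = mean(observed)
        filled.append([row_mean if v is None else v for v in row])
    return filled

# Hypothetical 2-D gel spot intensities across four samples;
# None marks a spot not detected in that sample.
spots = [
    [1.0, None, 3.0, 2.0],
    [None, 5.0, 5.0, None],
]
print(impute_missing(spots))
```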

Relevance: 30.00%

Abstract:

Objective: Our aim was to determine if insomnia severity, dysfunctional beliefs about sleep, and depression predicted sleep-related safety behaviors. Method: Standard sleep-related measures (such as the Insomnia Severity Index; the Dysfunctional Beliefs About Sleep scale; the Depression, Anxiety, and Stress Scale; and the Sleep-Related Behaviors Questionnaire) were administered. Additionally, 14 days of sleep diary (Pittsburgh Sleep Diary) data and actual use of sleep-related behaviors were collected. Results: Regression analysis revealed that dysfunctional beliefs about sleep predicted sleep-related safety behaviors. Insomnia severity did not predict sleep-related safety behaviors. Depression accounted for the greatest amount of unique variance in the prediction of safety behaviors, followed by dysfunctional beliefs. Exploratory analysis revealed that participants with higher levels of depression used more sleep-related behaviors and reported greater dysfunctional beliefs about their sleep. Conclusion: The findings underline the significant influence that dysfunctional beliefs have on individuals' behaviors. Moreover, the results suggest that depression may need to be considered as an explicit component of cognitive-behavioral models of insomnia.

Relevance: 30.00%

Abstract:

Univariate linkage analysis is used routinely to localise genes for human complex traits. Often, many traits are analysed but the significance of linkage for each trait is not corrected for multiple trait testing, which increases the experiment-wise type-I error rate. In addition, univariate analyses do not realise the full power provided by multivariate data sets. Multivariate linkage is the ideal solution but it is computationally intensive, so genome-wide analysis and evaluation of empirical significance are often prohibitive. We describe two simple methods that efficiently address these limitations by combining P-values from multiple univariate linkage analyses. The first method estimates empirical pointwise and genome-wide significance between one trait and one marker when multiple traits have been tested. It is as robust as an appropriate Bonferroni adjustment, with the advantage that no assumptions are required about the number of independent tests performed. The second method estimates the significance of linkage between multiple traits and one marker and, therefore, it can be used to localise regions that harbour pleiotropic quantitative trait loci (QTL). We show that this method has greater power than individual univariate analyses to detect a pleiotropic QTL across different situations. In addition, when traits are moderately correlated and the QTL influences all traits, it can outperform formal multivariate variance components (VC) analysis. This approach is computationally feasible for any number of traits and was not affected by the residual correlation between traits. We illustrate the utility of our approach with a genome scan of three asthma traits measured in families with a twin proband.
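The abstract does not spell out its exact combination rule, but the general approach of pooling univariate P-values at one marker can be illustrated with Fisher's method, a classic combination statistic (used here purely as an illustration, not as the paper's own procedure). Under the null, X = -2 Σ ln pᵢ follows a chi-square distribution with 2k degrees of freedom, whose survival function has a closed form for even degrees of freedom:

```python
import math

def fisher_combined_p(p_values):
    """Combine k independent p-values with Fisher's method.

    X = -2 * sum(ln p_i) is chi-square with 2k degrees of freedom
    under the null; for even df the survival function is
    exp(-x/2) * sum_{i<k} (x/2)^i / i!.
    """
    k = len(p_values)
    x = -2.0 * sum(math.log(p) for p in p_values)
    half = x / 2.0
    # Closed-form chi-square survival function with 2k df.
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total

# Combining three hypothetical univariate linkage p-values at one marker.
print(fisher_combined_p([0.04, 0.10, 0.30]))
```

With a single p-value the statistic reduces to the p-value itself, which is a convenient sanity check on the implementation.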

Relevance: 30.00%

Abstract:

The Java programming language supports concurrency. Concurrent programs are harder to verify than their sequential counterparts due to their inherent nondeterminism and a number of specific concurrency problems such as interference and deadlock. In previous work, we proposed a method for verifying concurrent Java components based on a mix of code inspection, static analysis tools, and the ConAn testing tool. The method was derived from an analysis of concurrency failures in Java components, but was not applied in practice. In this paper, we explore the method by applying it to an implementation of the well-known readers-writers problem and a number of mutants of that implementation. We only apply it to a single, well-known example, and so we do not attempt to draw any general conclusions about the applicability or effectiveness of the method. However, the exploration does point out several strengths and weaknesses in the method, which enable us to fine-tune the method before we carry out a more formal evaluation on other, more realistic components.
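The component under test is a Java monitor, but the readers-writers discipline it implements can be sketched in a few lines; the Python analogue below is a hypothetical stand-in, not the paper's implementation. Any number of readers may hold the lock concurrently, while a writer needs exclusive access:

```python
import threading

class ReadersWriterLock:
    """Minimal readers-writers lock (no writer preference): concurrent
    readers are allowed, writers get exclusive access. A Python sketch
    of the kind of component the paper's method inspects; the original
    study used Java monitors."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writing = False

    def acquire_read(self):
        with self._cond:
            while self._writing:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writing or self._readers > 0:
                self._cond.wait()
            self._writing = True

    def release_write(self):
        with self._cond:
            self._writing = False
            self._cond.notify_all()
```

The interference and deadlock failures the paper targets live precisely in the wait/notify logic above, which is why such components need the specialised inspection and testing the method combines.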

Relevance: 30.00%

Abstract:

Achieving consistency between a specification and its implementation is an important part of software development. In previous work, we have presented a method and tool support for testing a formal specification using animation and then verifying an implementation of that specification. The method is based on a testgraph, which provides a partial model of the application under test. The testgraph is used in combination with an animator to generate test sequences for testing the formal specification. The same testgraph is used during testing to execute those same sequences on the implementation and to ensure that the implementation conforms to the specification. So far, the method and its tool support have been applied to software components that can be accessed through an application programmer interface (API). In this paper, we use an industrially-based case study to discuss the problems associated with applying the method to a software system with a graphical user interface (GUI). In particular, the lack of a standardised interface, as well as controllability and observability problems, make it difficult to automate the testing of the implementation. The method can still be applied, but the amount of testing that can be carried out on the implementation is limited by the manual effort involved.

Relevance: 30.00%

Abstract:

Testing concurrent software is difficult due to problems with inherent nondeterminism. In previous work, we have presented a method and tool support for the testing of concurrent Java components. In this paper, we extend that work by presenting and discussing techniques for testing Java thread interrupts and timed waits. Testing thread interrupts is important because every Java component that calls wait must have code dealing with these interrupts. For a component that uses interrupts and timed waits to provide its basic functionality, the ability to test these features is clearly even more important. We discuss the application of the techniques and tool support to one such component, which is a nontrivial implementation of the readers-writers problem.

Relevance: 30.00%

Abstract:

Achieving consistency between a specification and its implementation is an important part of software development. In this paper, we present a method for generating passive test oracles that act as self-checking implementations. The implementation is verified using an animation tool to check that the behavior of the implementation matches the behavior of the specification. We discuss how to integrate this method into a framework developed for systematically animating specifications, which means a tester can significantly reduce testing time and effort by reusing work products from the animation. One such work product is a testgraph: a directed graph that partially models the states and transitions of the specification. Testgraphs are used to generate sequences for animation, and during testing, to execute these same sequences on the implementation.

Relevance: 30.00%

Abstract:

A description of the background to testing friction materials for automotive brakes explains the need for a rapid, inexpensive means of assessing their behaviour in a way which is both accurate and meaningful. Various methods of controlling inertia dynamometers to simulate road vehicles are rejected in favour of programming by means of a commercially available XY plotter. Investigation of brake service conditions is used to set up test schedules, and a dynamometer programming unit built to enable service conditions on vehicles to be simulated on a full scale dynamometer. A technique is developed by which accelerated testing can be achieved without operating under overload conditions, saving time and cost without sacrificing validity. The development of programming by XY plotter is described, with a method of operating one XY plotter to programme the machine, monitor its own behaviour, and plot its own results in logical sequence. Commissioning trials are described and the generation of reproducible results in frictional behaviour and material durability is discussed. Techniques are developed to cross-check the operation of the machine in retrospect, and to correct results retrospectively in the event of malfunctions. Sensitivity errors in the measuring circuits are displayed between calibrations, whilst leaving the recorded results almost unaffected by error. Typical results of brake lining tests are used to demonstrate the range of performance parameters which can be studied by use of the machine. Successful test investigations completed on the machine are reported, including comments on behaviour of cast iron drums and discs. The machine shows that materials can repeat their complex friction/temperature/speed/pressure relationships at a reproducibility of the order of ±0.003 μ and ±0.0002 in. thickness loss during wear tests. Discussion of practical and academic implications completes the report with recommendations for further work in both fields.

Relevance: 30.00%

Abstract:

The aim of this Interdisciplinary Higher Degrees project was the development of a high-speed method of photometrically testing vehicle headlamps, based on the use of image processing techniques, for Lucas Electrical Limited. Photometric testing involves measuring the illuminance produced by a lamp at certain points in its beam distribution. Headlamp performance is best represented by an iso-lux diagram, showing illuminance contours, produced from a two-dimensional array of data. Conventionally, the tens of thousands of measurements required are made using a single stationary photodetector and a two-dimensional mechanical scanning system which enables a lamp's horizontal and vertical orientation relative to the photodetector to be changed. Even using motorised scanning and computerised data-logging, the data acquisition time for a typical iso-lux test is about twenty minutes. A detailed study was made of the concept of using a video camera and a digital image processing system to scan and measure a lamp's beam without the need for the time-consuming mechanical movement. Although the concept was shown to be theoretically feasible, and a prototype system designed, it could not be implemented because of the technical limitations of commercially-available equipment. An alternative high-speed approach was developed, however, and a second prototype system designed. The proposed arrangement again uses an image processing system, but in conjunction with a one-dimensional array of photodetectors and a one-dimensional mechanical scanning system in place of a video camera. This system can be implemented using commercially-available equipment and, although not entirely eliminating the need for mechanical movement, greatly reduces the amount required, resulting in a predicted data acquisition time of about twenty seconds for a typical iso-lux test. As a consequence of the work undertaken, the company initiated an £80,000 programme to implement the system proposed by the author.
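Once the two-dimensional array of illuminance readings has been captured, producing an iso-lux diagram is essentially a contour-banding operation over the grid. A toy sketch of that step (the grid values, thresholds, and function name are invented for illustration; a real beam test uses a far denser grid):

```python
def iso_lux_bands(grid, thresholds):
    """Classify a 2-D array of illuminance readings (lux) into contour
    bands: cell (i, j) gets the index of the highest threshold that its
    reading meets or exceeds, or -1 if it is below all thresholds.
    Thresholds must be given in increasing order."""
    bands = []
    for row in grid:
        band_row = []
        for lux in row:
            band = -1
            for i, t in enumerate(thresholds):
                if lux >= t:
                    band = i
            band_row.append(band)
        bands.append(band_row)
    return bands

# Hypothetical 3x3 patch of a headlamp beam, thresholds in lux.
grid = [[5, 20, 60], [10, 80, 120], [2, 15, 40]]
print(iso_lux_bands(grid, thresholds=[10, 50, 100]))
```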

Relevance: 30.00%

Abstract:

A new LIBS quantitative analysis method based on analytical line adaptive selection and Relevance Vector Machine (RVM) regression model is proposed. First, a scheme of adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines which will be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given with a confidence interval from the probabilistic predictive distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples have been carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness compared with the methods based on partial least squares regression, artificial neural network and standard support vector machine.
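The RVM model itself is beyond a short sketch, but the idea of reporting a concentration prediction together with an uncertainty interval can be illustrated with a much simpler probabilistic calibration: an ordinary least-squares line from analytical-line intensity to concentration, with a Gaussian predictive interval from the residual spread. Everything below (names, data, the use of OLS in place of RVM) is an illustrative assumption:

```python
import math
from statistics import mean

def calibrate(intensities, concentrations):
    """Least-squares line mapping line intensity to concentration,
    plus the residual standard deviation for predictive intervals.
    Needs at least three calibration points."""
    mx, my = mean(intensities), mean(concentrations)
    sxx = sum((x - mx) ** 2 for x in intensities)
    sxy = sum((x - mx) * (y - my) for x, y in zip(intensities, concentrations))
    slope = sxy / sxx
    intercept = my - slope * mx
    residuals = [y - (intercept + slope * x)
                 for x, y in zip(intensities, concentrations)]
    sigma = math.sqrt(sum(r * r for r in residuals) / (len(residuals) - 2))
    return slope, intercept, sigma

def predict_with_interval(model, x, z=1.96):
    """Point prediction plus an approximate 95% predictive interval."""
    slope, intercept, sigma = model
    y = intercept + slope * x
    return y, (y - z * sigma, y + z * sigma)

# Hypothetical calibration: line intensities vs. certified Cr concentrations (%).
model = calibrate([120.0, 180.0, 260.0, 330.0], [1.1, 1.9, 2.8, 3.6])
print(predict_with_interval(model, 200.0))
```

The RVM replaces this single line with a sparse Bayesian kernel model, but the output it reports has the same shape: a point estimate plus a distribution-derived interval.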

Relevance: 30.00%

Abstract:

The present research represents a coherent approach to understanding the root causes of ethnic group differences in ability test performance. Two studies were conducted, each of which was designed to address a key knowledge gap in the ethnic bias literature. In Study 1, both the logistic regression (LR) method of Differential Item Functioning (DIF) detection and Mixture Latent Variable Modelling were used to investigate the degree to which Differential Test Functioning (DTF) could explain ethnic group test performance differences in a large, previously unpublished dataset. Though mean test score differences were observed between a number of ethnic groups, neither technique was able to identify ethnic DTF. This calls into question the practical application of DTF to understanding these group differences. Study 2 investigated whether a number of non-cognitive factors might explain ethnic group test performance differences on a variety of ability tests. Two factors – test familiarity and trait optimism – were able to explain a large proportion of ethnic group test score differences. Furthermore, test familiarity was found to mediate the relationship between socio-economic factors – particularly participant educational level and familial social status – and test performance, suggesting that test familiarity develops over time through the mechanism of exposure to ability testing in other contexts. These findings represent a substantial contribution to the field’s understanding of two key issues surrounding ethnic test performance differences. The author calls for a new line of research into these performance facilitating and debilitating factors, before recommendations are offered for practitioners to ensure fairer deployment of ability testing in high-stakes selection processes.

Relevance: 30.00%

Abstract:

Since the damage of the onion thrips (Thrips tabaci Lindemann) first occurred on white cabbage in Hungary, several observations have been carried out, both in Hungary and abroad, to assess varietal resistance. The use of a new evaluation method for field screening is described and the result of the monitoring of 64 varieties is reported. The most susceptible varieties were ‘Bejo 1860’, ‘SG 3164’, ‘Quisto’, ‘Green Gem’ and ‘Ramada’. On the other hand, ‘Golden Cross’, ‘Balashi’, ‘Riana’, ‘Autumn Queen’, ‘Leopard’, ‘Ama-Daneza’ and ‘Galaxy’ suffered the least damage under natural infestation. Methods for testing the patterns of resistance are also described and evaluated. In the case of plants at the few-leaf growth stage, a significant negative correlation was found between egg mortality and the egg-laying preference of adults. The results of the other antibiotic and antixenotic tests were greatly affected by differences in the physiological age and condition of the varieties.
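The reported relationship between egg mortality and egg-laying preference is a plain correlation computation; a minimal sketch with invented per-variety values (the study's data are not reproduced here):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    as used to test the relation between egg mortality and the adults'
    egg-laying preference across varieties."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical per-variety values: higher preference, lower egg mortality.
preference = [0.9, 0.7, 0.5, 0.3, 0.1]
mortality = [10, 25, 40, 55, 70]
print(pearson_r(preference, mortality))
```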

Relevance: 30.00%

Abstract:

This research provides data which investigates the feasibility of using fourth generation evaluation during the process of instruction. A semester-length course entitled "Multicultural Communications" (PUR 5406/4934) was designed and used in this study, in response to the need for the communications profession to produce well-trained, culturally sensitive practitioners for the work force and the market place. A revised pause model consisting of three one-on-one in-depth interviews conducted outside of the class, three reflection periods during the class, and a self-reflective essay prepared one week before the end of the course was analyzed. Narrative and graphic summaries of participant responses produced significant results. The revised pause model was found to be an effective evaluation method for use in multicultural education under certain conditions as perceived by the participants in the study. Participant self-perceived behavior change and knowledge acquisition was identified through use of the revised pause model. Study results suggest that using the revised pause model of evaluation gives instructors teaching multicultural education in schools of journalism and mass communication yet another way of enhancing their ability to become both the researcher and the research subject. In addition, the introduction of a qualitative model has been found to be a more useful way of generating participant involvement and introspection. Finally, the instructional design of the course used in the study provides communication educators with a practical way of preparing their students to be effective communicators in a multicultural world.

Relevance: 30.00%

Abstract:

This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel to users with visual refractive errors. The target user groups for this system are individuals who have moderate to severe visual aberrations for which conventional means of compensation, such as glasses or contact lenses, do not improve their vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to this user, will counteract his/her visual aberration. The method described in this dissertation advances the development of techniques for providing such compensation by integrating spatial information in the image as a means to eliminate some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the method for providing said compensation. In order to provide a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, as well as by using a single-lens high-resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberrations. Experiments were conducted on these systems and the data collected from these experiments was evaluated using statistical analysis. The experimental results revealed that the pre-compensation method resulted in a statistically significant improvement in vision for all of the systems. Although significant, the improvement was not as large as expected for the human subject tests. Further analysis suggests that even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g. pupil diameter) and continuous adjustment in the pre-compensation process to yield maximum viewing enhancement.
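The pre-compensation idea can be sketched in one dimension: if display-to-retina blurring is modelled as circular convolution with a point-spread function (PSF), the displayed signal can be pre-filtered by regularized division in the frequency domain so that the blurred result approximates the target. The PSF and signal below are invented, and the real system works in two dimensions with a wavefront-measured aberration:

```python
import cmath

def dft(xs):
    """Naive discrete Fourier transform (O(n^2), fine for a sketch)."""
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(xs)) for k in range(n)]

def idft(Xs):
    """Inverse DFT, returning the real parts of a real-valued signal."""
    n = len(Xs)
    return [sum(X * cmath.exp(2j * cmath.pi * k * i / n)
                for i, X in enumerate(Xs)).real / n for k in range(n)]

def precompensate(target, psf, eps=1e-6):
    """Divide in the frequency domain (with Tikhonov-style damping eps)
    so that blurring the returned signal with psf reproduces target."""
    T, P = dft(target), dft(psf)
    comp = [t * p.conjugate() / (abs(p) ** 2 + eps) for t, p in zip(T, P)]
    return idft(comp)

def circular_blur(signal, psf):
    """Circular convolution: the simple model of the eye's blurring."""
    n = len(signal)
    return [sum(signal[(i - j) % n] * psf[j] for j in range(n))
            for i in range(n)]

# Hypothetical symmetric blur kernel and desired perceived signal.
psf = [0.6, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2]
target = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
pre = precompensate(target, psf)
check = circular_blur(pre, psf)
```

The damping term `eps` is what keeps the inversion stable where the PSF spectrum is weak; a dissertation-grade system must also respect display dynamic range and the physiological constraints the text mentions.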