993 results for automated testing
Abstract:
The growth of molds on cellulose-containing paper is a frequent occurrence when relative air humidity is high or when books become wet due to water leaks in libraries. The aim of this study is to differentiate the bioreceptivity of different types of book paper to different fungi. Laboratory tests were performed with strains of Aspergillus niger, Cladosporium sp., Chaetomium globosum and Trichoderma harzianum isolated from books. Four paper types were evaluated: couché, offset, recycled, and a reference paper containing only cellulose. The tests were carried out in chambers with relative air humidity of 95% and 100%. Mold growth was greatest in the tests at 100% relative humidity. Stereoscopic microscopy observation showed that Cladosporium sp. grew in 74% of these samples, A. niger in 75%, T. harzianum in 72% and C. globosum in 60%. In the chambers with 95% air humidity, Cladosporium sp. grew in only 9% of the samples, A. niger in 1%, T. harzianum in 3%, and C. globosum did not grow in any sample. The most bioreceptive paper was couché and the least receptive was recycled paper. The composition of recycled paper, however, varies depending on the types of waste materials used to make it. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
The correlation between the microdilution (MD), Etest® (ET), and disk diffusion (DD) methods was determined for amphotericin B, itraconazole and fluconazole. The minimal inhibitory concentration (MIC) of these antifungal agents was established for a total of 70 Candida spp. isolates from colonization and infection. The species distribution was: Candida albicans (n = 27), C. tropicalis (n = 17), C. glabrata (n = 16), C. parapsilosis (n = 8), and C. lusitaniae (n = 2). Non-Candida albicans Candida species showed higher MICs for the three antifungal agents when compared with C. albicans isolates. The overall concordance (based on the MIC value obtained within two dilutions) between the ET and the MD method was 83% for amphotericin B, 63% for itraconazole, and 64% for fluconazole. Considering the breakpoint, the agreement between the DD and MD methods was 71% for itraconazole and 67% for fluconazole. The DD zone diameters are highly reproducible and correlate well with the MD method, making agar-based methods a viable alternative to MD for susceptibility testing. However, data on agar-based tests for itraconazole and amphotericin B are still scarce. Thus, further research is needed to extend standardization to other antifungal agents. J. Clin. Lab. Anal. 23:324-330, 2009. (C) 2009 Wiley-Liss, Inc.
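To make the agreement criterion above concrete, here is a minimal Python sketch that counts MIC pairs agreeing within two two-fold dilutions; the paired values are hypothetical, not data from the study.

```python
import math

def within_two_dilutions(mic_a, mic_b):
    """True if two MICs agree within +/- two two-fold dilutions."""
    return abs(math.log2(mic_a) - math.log2(mic_b)) <= 2

# Hypothetical paired MICs (ug/mL) from Etest and microdilution
etest = [0.25, 0.5, 8.0, 1.0]
microdilution = [0.5, 0.25, 1.0, 1.0]

agree = sum(within_two_dilutions(a, b) for a, b in zip(etest, microdilution))
print(f"Concordance: {100 * agree / len(etest):.0f}%")
```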
Abstract:
The use of inter-laboratory test comparisons to determine the performance of individual laboratories for specific tests (or for calibration) [ISO/IEC Guide 43-1, 1997. Proficiency testing by interlaboratory comparisons - Part 1: Development and operation of proficiency testing schemes] is called Proficiency Testing (PT). In this paper we propose the use of the generalized likelihood ratio test to compare the performance of a group of laboratories on specific tests relative to the assigned value, and illustrate the procedure with actual data from a PT program in the area of volume. The proposed test extends the test criteria currently in use by making it possible to test the consistency of the group of laboratories as a whole. Moreover, the class of elliptical distributions is considered for the obtained measurements. (C) 2008 Elsevier B.V. All rights reserved.
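For reference, the generalized likelihood ratio test has the standard form below; the paper's specific construction for the group of laboratories is more elaborate, but rests on this statistic:

\[
\lambda = \frac{\sup_{\theta \in \Theta_0} L(\theta)}{\sup_{\theta \in \Theta} L(\theta)},
\qquad -2\log\lambda \xrightarrow{d} \chi^2_q \ \text{under } H_0,
\]

where \(\Theta_0\) encodes the hypothesis that the laboratories are consistent with the assigned value and \(q\) is the number of restrictions imposed.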
Abstract:
The main objective of this paper is to discuss maximum likelihood inference for the comparative structural calibration model (Barnett, in Biometrics 25:129-142, 1969), which is frequently used in the problem of assessing the relative calibrations and relative accuracies of a set of p instruments, each designed to measure the same characteristic on a common group of n experimental units. We consider asymptotic tests to address these questions. The methodology is applied to a real data set, and a small simulation study is presented.
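In its standard formulation, the comparative structural calibration model posits, for instrument \(i\) measuring unit \(j\) (the notation here may differ from the paper's exact parameterization):

\[
x_{ij} = \alpha_i + \beta_i \xi_j + e_{ij}, \qquad i = 1, \dots, p, \; j = 1, \dots, n,
\]

where \(\xi_j\) is the latent true value for unit \(j\), the pairs \((\alpha_i, \beta_i)\) describe the relative calibrations of the instruments, and the error variances \(\operatorname{Var}(e_{ij}) = \sigma_i^2\) describe their relative accuracies.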
Abstract:
Considering the Wald, score, and likelihood ratio asymptotic test statistics, we analyze a multivariate null-intercept errors-in-variables regression model in which the explanatory and response variables are subject to measurement errors, and a possible structure of dependency between the measurements taken on the same individual is incorporated, representing a longitudinal structure. This model was proposed by Aoki et al. (2003b) and analyzed under the Bayesian approach. In this article, adopting the classical approach, we analyze the asymptotic test statistics and present a simulation study to compare the behavior of the three test statistics for different sample sizes, parameter values, and nominal levels of the test. Closed-form expressions for the score function and the Fisher information matrix are also presented. We consider two real numerical illustrations: the odontological data set from Hadgu and Koch (1999) and a quality control data set.
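For a simple null hypothesis \(H_0: \theta = \theta_0\), the three statistics compared in the paper take their classical forms:

\[
LR = 2\{\ell(\hat\theta) - \ell(\theta_0)\}, \qquad
W = (\hat\theta - \theta_0)^\top I(\hat\theta)(\hat\theta - \theta_0), \qquad
S = U(\theta_0)^\top I(\theta_0)^{-1} U(\theta_0),
\]

where \(\ell\) is the log-likelihood, \(U\) the score function, \(I\) the Fisher information matrix, and \(\hat\theta\) the maximum likelihood estimator; all three are asymptotically \(\chi^2\) distributed under \(H_0\).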
Abstract:
Policy hierarchies and automated policy refinement are powerful approaches to simplify the administration of security services in complex network environments. A crucial issue for the practical use of these approaches is ensuring the validity of the policy hierarchy: since the policy sets for the lower levels are automatically derived from the abstract policies (defined by the modeller), we must be sure that the derived policies uphold the high-level ones. This paper builds upon previous work on Model-based Management, particularly on the Diagram of Abstract Subsystems approach, and goes further to propose a formal validation approach for the policy hierarchies yielded by the automated policy refinement process. We establish general validation conditions for a multi-layered policy model, i.e., necessary and sufficient conditions that a policy hierarchy must satisfy so that the lower-level policy sets are valid refinements of the higher-level policies according to the criteria of consistency and completeness. Relying upon these validation conditions and upon axioms about the representativeness of the model, two theorems are proved to ensure compliance between the resulting system behaviour and the abstract policies that are modelled.
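The consistency and completeness criteria can be pictured with a toy set-based sketch in Python; this is purely illustrative (the paper works with a formal multi-layered model and proofs, and the entity names below are invented):

```python
# Toy model: each policy level is a set of permitted (source, destination) flows.
high_level = {("lan", "dmz"), ("dmz", "internet")}
refined = {("host1", "web_srv"), ("web_srv", "router")}

# Hypothetical mapping from concrete entities to abstract subsystems.
abstraction = {"host1": "lan", "web_srv": "dmz", "router": "internet"}

def lift(flow):
    """Map a concrete flow to the abstract subsystems it connects."""
    src, dst = flow
    return (abstraction[src], abstraction[dst])

# Consistency: every refined policy is allowed by some high-level policy.
consistent = all(lift(f) in high_level for f in refined)
# Completeness: every high-level policy is realized by some refined policy.
complete = all(any(lift(f) == h for f in refined) for h in high_level)
print(consistent, complete)
```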
Abstract:
Tests are described showing the results obtained for the determination of rare earth elements (REE) and the trace elements Rb, Y, Zr, Nb, Cs, Ba, Hf, Ta, Pb, Th and U by ICP-MS for nine basaltic reference materials, and for thirteen basalts and amphibolites from the mafic-ultramafic Niquelandia Complex, central Brazil. Sample decomposition for the reference materials was performed by microwave oven digestion (HF and HNO3, 100 mg of sample), and that for the Niquelandia samples also by Parr bomb treatment (5 days at 200 °C, 40 mg of sample). Results for the reference materials were similar to published values, showing that the microwave technique can be used with confidence for basaltic rocks. No fluoride precipitates were observed in the microwave-digested solutions. Total recovery of elements, including Zr and Hf, was obtained for the Niquelandia samples, with the exception of one amphibolite. For this latter sample, the Parr method achieved total digestion, but the microwave decomposition did not; losses, however, were observed only for Zr and Hf, indicating difficulty in dissolving Zr-bearing minerals by microwave acid attack.
Abstract:
The widespread use of service-oriented architectures (SOAs) and Web services in commercial software requires the adoption of development techniques that ensure the quality of Web services. Testing techniques and tools play a critical role in achieving quality in SOA-based systems, yet existing techniques and tools for traditional systems are not appropriate for these new systems, making it necessary to develop testing techniques and tools specific to Web services. This article presents new testing techniques that automatically generate a set of test cases and test data for Web services. The techniques presented here apply data perturbation to Web service messages with respect to data types, integrity, and consistency. To support these techniques, a tool (GenAutoWS) was developed and applied to real problems. (C) 2010 Elsevier Inc. All rights reserved.
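Data perturbation of the kind described can be sketched as systematically mutating the fields of a valid message by type. The sketch below is illustrative only: the message fields and perturbation choices are invented, and this is not GenAutoWS itself.

```python
# A valid request message for a hypothetical web service operation.
valid_msg = {"customer_id": 42, "amount": 99.90, "currency": "USD"}

def perturb(msg):
    """Yield mutated copies of msg, one perturbation per test case."""
    for key, value in msg.items():
        if isinstance(value, bool):
            continue  # bool is a subclass of int; skip it
        if isinstance(value, (int, float)):
            bad_values = [-1, 0, value * 10**6, "not-a-number", None]
        elif isinstance(value, str):
            bad_values = ["", "A" * 10_000, None]
        else:
            continue
        for bad in bad_values:
            mutated = dict(msg)   # shallow copy of the valid message
            mutated[key] = bad    # inject one invalid field value
            yield mutated

test_cases = list(perturb(valid_msg))
print(len(test_cases), "perturbed messages generated")
```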
Abstract:
There has been great interest in deciding whether a combinatorial structure satisfies some property, or in estimating the value of some numerical function associated with this combinatorial structure, by considering only a randomly chosen substructure of sufficiently large, but constant, size. These problems are called property testing and parameter testing, where a property or parameter is said to be testable if it can be estimated accurately in this way. The algorithmic appeal is evident: conditional on sampling, this leads to reliable constant-time randomized estimators. Our paper addresses property testing and parameter testing for permutations from a subpermutation perspective; more precisely, we investigate permutation properties and parameters that can be well approximated based on a randomly chosen subpermutation of much smaller size. In this context, we use a theory of convergence of permutation sequences developed by the present authors [C. Hoppen, Y. Kohayakawa, C.G. Moreira, R.M. Sampaio, Limits of permutation sequences through permutation regularity, Manuscript, 2010, 34pp.] to characterize testable permutation parameters, along the lines of the work of Borgs et al. [C. Borgs, J. Chayes, L. Lovász, V.T. Sós, B. Szegedy, K. Vesztergombi, Graph limits and parameter testing, in: STOC '06: Proceedings of the 38th Annual ACM Symposium on Theory of Computing, ACM, New York, 2006, pp. 261-270] in the case of graphs. Moreover, we obtain a permutation result in the direction of a famous result of Alon and Shapira [N. Alon, A. Shapira, A characterization of the (natural) graph properties testable with one-sided error, SIAM J. Comput. 37 (6) (2008) 1703-1727] stating that every hereditary graph property is testable. (C) 2011 Elsevier B.V. All rights reserved.
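The sampling model behind subpermutation-based testing is simple to state: pick k positions uniformly at random and record the relative order of the values found there. A minimal Python sketch:

```python
import random

def subpermutation(perm, k):
    """Pattern induced by perm on k random positions (values relabeled 1..k)."""
    positions = sorted(random.sample(range(len(perm)), k))
    values = [perm[i] for i in positions]
    ranks = {v: r + 1 for r, v in enumerate(sorted(values))}
    return [ranks[v] for v in values]

perm = [3, 1, 4, 5, 2, 7, 6]    # a permutation of 1..7
print(subpermutation(perm, 3))  # e.g. [2, 1, 3]
```

A testable parameter is one whose value on `perm` is well approximated, with high probability, by its value on such a random subpermutation of constant size k.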
Abstract:
Mixed linear models are commonly used in repeated measures studies. They account for the dependence among observations obtained from the same experimental unit. Often, the number of observations is small, and it is thus important to use inference strategies that incorporate small-sample corrections. In this paper, we develop modified versions of the likelihood ratio test for fixed effects inference in mixed linear models. In particular, we derive a Bartlett correction to such a test, and also to a test obtained from a modified profile likelihood function. Our results generalize those in [Zucker, D.M., Lieberman, O., Manor, O., 2000. Improved small sample inference in the mixed linear model: Bartlett correction and adjusted likelihood. Journal of the Royal Statistical Society B, 62, 827-838] by allowing the parameter of interest to be vector-valued. Additionally, our Bartlett corrections allow for a nonlinear covariance matrix structure for the random effects. We report simulation results which show that the proposed tests display superior finite-sample behavior relative to the standard likelihood ratio test. An application is also presented and discussed. (C) 2008 Elsevier B.V. All rights reserved.
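The general shape of a Bartlett correction, which the paper derives for this model, is worth recalling: if the null expectation of the likelihood ratio statistic satisfies \(\mathrm{E}(LR) = q + b + O(n^{-2})\), where \(q\) is the dimension of the parameter of interest, the adjusted statistic

\[
LR^{*} = \frac{LR}{1 + b/q}
\]

has a null distribution closer to \(\chi^2_q\), with the error of the chi-squared approximation typically reduced from \(O(n^{-1})\) to \(O(n^{-2})\).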
Abstract:
The two-parameter Birnbaum-Saunders distribution has been used successfully to model fatigue failure times. Although censoring is typical in reliability and survival studies, little work has been published on the analysis of censored data for this distribution. In this paper, we address the issue of performing testing inference on the two parameters of the Birnbaum-Saunders distribution under type-II right-censored samples. The likelihood ratio statistic and a recently proposed statistic, the gradient statistic, provide a convenient framework for statistical inference in such a case, since they do not require obtaining, estimating, or inverting an information matrix, which is an advantage in problems involving censored data. An extensive Monte Carlo simulation study is carried out to investigate and compare the finite-sample performance of the likelihood ratio and gradient tests. Our numerical results show evidence that the gradient test should be preferred. Further, we also consider the generalized Birnbaum-Saunders distribution under type-II right-censored samples and present some Monte Carlo simulations for testing the parameters in this class of models using the likelihood ratio and gradient tests. Three empirical applications are presented. (C) 2011 Elsevier B.V. All rights reserved.
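The gradient statistic mentioned above, proposed by Terrell (2002), has a strikingly simple form: writing \(\hat\theta\) for the unrestricted and \(\tilde\theta\) for the restricted maximum likelihood estimator,

\[
S_T = U(\tilde\theta)^\top (\hat\theta - \tilde\theta),
\]

where \(U\) is the score function. No information matrix needs to be obtained, estimated, or inverted, which is precisely the advantage cited in the abstract.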
Abstract:
Classical hypothesis testing focuses on testing whether treatments have differential effects on outcome. However, clinicians are sometimes more interested in determining whether treatments are equivalent, or whether one has noninferior outcomes. We review the hypotheses for these noninferiority and equivalence research questions, consider power and sample size issues, and discuss how to perform such tests for both binary and survival outcomes. The methods are illustrated with two recent studies in hematopoietic cell transplantation.
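For a binary outcome with success probabilities \(p_T\) (new treatment) and \(p_C\) (control) and a prespecified margin \(\delta > 0\), the noninferiority hypotheses take the standard form

\[
H_0: p_T - p_C \le -\delta \qquad \text{vs.} \qquad H_1: p_T - p_C > -\delta,
\]

so rejecting \(H_0\) establishes noninferiority. Equivalence testing instead requires \(|p_T - p_C| < \delta\) and is commonly handled with two one-sided tests (TOST).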
Abstract:
In many epidemiological studies it is common to resort to regression models relating the incidence of a disease to its risk factors. The main goal of this paper is to consider inference on such models with error-prone observations and variances of the measurement errors changing across observations. We suppose that the observations follow a bivariate normal distribution and the measurement errors are normally distributed. Aggregate data allow the estimation of the error variances. Maximum likelihood estimates are computed numerically via the EM algorithm. Consistent estimation of the asymptotic variance of the maximum likelihood estimators is also discussed. Test statistics are proposed for testing hypotheses of interest. Further, we implement a simple graphical device that enables an assessment of the model's goodness of fit. Results of simulations concerning the properties of the test statistics are reported. The approach is illustrated with data from the WHO MONICA Project on cardiovascular disease. Copyright (C) 2008 John Wiley & Sons, Ltd.
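A generic heteroscedastic measurement error model of the kind analyzed can be written as follows; the notation is illustrative and not necessarily the paper's:

\[
y_i = \beta_0 + \beta_1 x_i + q_i, \qquad
Y_i = y_i + e_i, \qquad
X_i = x_i + u_i,
\]

where \((X_i, Y_i)\) are the error-prone observations of the true covariate \(x_i\) and response \(y_i\), and the measurement error variances \(\operatorname{Var}(u_i)\) and \(\operatorname{Var}(e_i)\) are allowed to change across observations, being estimated from the aggregate data.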
Abstract:
This paper describes the development and evaluation of a sequential injection method to automate the determination of methyl parathion by square-wave adsorptive cathodic stripping voltammetry, exploiting the concept of monosegmented flow analysis to perform in-line sample conditioning and standard addition. The accumulation and stripping steps are made in the sample medium conditioned with 40 mmol L-1 Britton-Robinson buffer (pH 10) in 0.25 mol L-1 NaNO3. The homogenized mixture is injected at a flow rate of 10 µL s-1 toward the flow cell, which is adapted to the capillary of a hanging drop mercury electrode. After a suitable deposition time, the flow is stopped and the potential is scanned from -0.3 to -1.0 V versus Ag/AgCl at a frequency of 250 Hz and a pulse height of 25 mV. The linear dynamic range is observed for methyl parathion concentrations between 0.010 and 0.50 mg L-1, with detection and quantification limits of 2 and 7 µg L-1, respectively. The sampling throughput is 25 h-1 if the in-line standard addition and sample conditioning protocols are followed, but this frequency can be increased up to 61 h-1 if the sample is conditioned off-line and quantified using an external calibration curve. The method was applied to the determination of methyl parathion in spiked water samples, and the accuracy was evaluated either by comparison to high performance liquid chromatography with UV detection or by recovery percentages. Although no evidence of statistically significant differences was observed between the expected and obtained concentrations, because of the susceptibility of the method to interference by other pesticides (e.g., parathion, dichlorvos) and natural organic matter (e.g., fulvic and humic acids), isolation of the analyte may be required when more complex sample matrices are encountered. (C) 2007 Elsevier B.V. All rights reserved.
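Quantification by standard addition, as performed in-line here, extrapolates the signal-versus-added-concentration line back to zero signal: the sample concentration equals the intercept divided by the slope. A minimal Python sketch with invented numbers:

```python
import numpy as np

# Hypothetical stripping peak currents for increasing spikes of methyl parathion
added = np.array([0.0, 0.10, 0.20, 0.30])    # mg/L added to the sample
signal = np.array([0.42, 0.81, 1.22, 1.60])  # peak current (arbitrary units)

slope, intercept = np.polyfit(added, signal, 1)
c_sample = intercept / slope  # extrapolation of the line to signal = 0
print(f"sample concentration ~ {c_sample:.3f} mg/L")
```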
Abstract:
This paper describes the automation of a fully electrochemical system for preconcentration, cleanup, separation and detection, comprising the hyphenation of a thin-layer electrochemical flow cell with CE coupled with contactless conductivity detection (CE-C4D). Traces of heavy metal ions were extracted from the pulsed-flowing sample and accumulated on a glassy carbon working electrode by electroreduction for several minutes. Anodic stripping of the accumulated metals was synchronized with hydrodynamic injection into the capillary. The effects of the angle of the slant-polished tip of the CE capillary and of its orientation against the working electrode in the electrochemical preconcentration (EPC) flow cell, as well as of the accumulation time, were studied, aiming at maximum CE-C4D signal enhancement. After 6 min of EPC, enhancement factors close to 50 were obtained for thallium, lead, cadmium and copper ions, and about 16 for zinc ions. Limits of detection below 25 nmol/L were estimated for all target analytes but zinc. A second separation dimension was added to the CE separation capabilities by staircase scanning of the potentiostatic deposition and/or stripping potentials of the metal ions, as implemented with the EPC-CE-C4D flow system. A matrix exchange between the deposition and stripping steps, highly valuable for sample cleanup, can be straightforwardly programmed with the multi-pumping flow management system. The automated simultaneous determination of traces of five accumulable heavy metals together with four non-accumulated alkali and alkaline earth metals in a single run was demonstrated, highlighting the potential of the system.