967 results for Standard fire tests
Abstract:
This paper examines the finite sample properties of three testing regimes for the null hypothesis of a panel unit root against stationary alternatives in the presence of cross-sectional correlation. The regimes of Bai and Ng (2004), Moon and Perron (2004) and Pesaran (2007) are assessed in the presence of multiple factors and also in other non-standard situations. The behaviour of some information criteria used to determine the number of factors in a panel is examined, and new information criteria with improved properties in small-N panels are proposed. An application to the efficient markets hypothesis is also provided. The null hypothesis of a panel random walk is not rejected by any of the tests, supporting the efficient markets hypothesis in the financial services sector of the Australian Stock Exchange.
Abstract:
PURPOSE: To assess the comparative accuracy of potential screening tests for open angle glaucoma (OAG).
METHODS: Medline, Embase, Biosis (to November 2005), Science Citation Index (to December 2005), and The Cochrane Library (Issue 4, 2005) were searched. Studies assessing candidate screening tests for detecting OAG in persons older than 40 years that reported true and false positives and negatives were included. Meta-analysis was undertaken using the hierarchical summary receiver operating characteristic model.
RESULTS: Forty studies enrolling over 48,000 people reported nine tests. Most tests were reported by only a few studies. Frequency-doubling technology (FDT; C-20-1) was significantly more sensitive than ophthalmoscopy (30, 95% credible interval [CrI] 0-62) and Goldmann applanation tonometry (GAT; 45, 95% CrI 17-68), whereas threshold standard automated perimetry (SAP) and Heidelberg Retinal Tomograph (HRT II) were both more sensitive than GAT (41, 95% CrI 14-64 and 39, 95% CrI 3-64, respectively). GAT was more specific than both FDT C-20-5 (19, 95% CrI 0-53) and threshold SAP (14, 95% CrI 1-37). Judging performance by diagnostic odds ratio, FDT, oculokinetic perimetry, and HRT II are promising tests. Ophthalmoscopy, SAP, retinal photography, and GAT had relatively poor performance as single tests. These findings are based on heterogeneous data of limited quality and as such are associated with considerable uncertainty.
CONCLUSIONS: No test or group of tests was clearly superior for glaucoma screening. Further research is needed to evaluate the comparative accuracy of the most promising tests.
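The diagnostic odds ratio used above to judge test performance is a simple function of the 2x2 table of true and false positives and negatives. A minimal sketch follows; the counts are hypothetical and not taken from the review.

```python
# Sketch: accuracy summaries for a screening test from a 2x2 table.
# All counts below are illustrative, not data from the meta-analysis.

def accuracy_summary(tp, fp, fn, tn):
    """Return sensitivity, specificity and the diagnostic odds ratio (DOR)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # DOR = (TP/FN) / (FP/TN); higher values indicate better discrimination.
    dor = (tp * tn) / (fp * fn)
    return sensitivity, specificity, dor

sens, spec, dor = accuracy_summary(tp=80, fp=30, fn=20, tn=170)
print(round(sens, 2), round(spec, 2), round(dor, 1))
```

A test with both high sensitivity and high specificity yields a large DOR, which is why single summaries of this kind are convenient when comparing heterogeneous screening tests.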
Abstract:
The current eight published ISO standards associated with semiconductor photocatalysis are considered. These standards cover: (1) air purification (specifically, the removal of NO, acetaldehyde and toluene), (2) water purification (the photobleaching of methylene blue and oxidation of DMSO), (3) self-cleaning surfaces (the removal of oleic acid and the subsequent change in water droplet contact angle), (4) photosterilisation (specifically, probing the antibacterial action of semiconductor photocatalyst films) and (5) UV light sources for semiconductor photocatalytic ISO work. For each standard, the background is first considered, followed by a brief discussion of the standard's particulars and concluding with a discussion of its pros and cons, often with recommendations for improvement. Other possible future standards that would either complement or enhance the current ones are discussed briefly.
Abstract:
In this paper, we propose new cointegration tests for single equations and panels. In both cases, the asymptotic distributions of the tests, which are derived with N fixed and T → ∞, are shown to be standard normal. The effects of serial correlation and cross-sectional dependence are mopped out via long-run variances. An effective bias correction is derived which is shown to work well in finite samples, particularly when N is smaller than T. Our panel tests are robust to possible cointegration across units.
Abstract:
This paper reports on the accuracy of new test methods developed to measure the air and water permeability of high-performance concretes (HPCs). Five representative HPC and one normal concrete (NC) mixtures were tested to estimate both repeatability and reliability of the proposed methods. Repeatability acceptance was adjudged using values of signal-noise ratio (SNR) and discrimination ratio (DR), and reliability was investigated by comparing against standard laboratory-based test methods (i.e., the RILEM gas permeability test and BS EN water penetration test). With SNR and DR values satisfying recommended criteria, it was concluded that test repeatability error has no significant influence on results. In addition, the research confirmed strong positive relationships between the proposed test methods and existing standard permeability assessment techniques. Based on these findings, the proposed test methods show strong potential to become recognized as international methods for determining the permeability of HPCs.
Abstract:
Roadside safety barrier designs are tested with passenger cars in Europe using the standard EN 1317, in which the impact angle for normal, high and very high containment level tests is 20°. In comparison to EN 1317, the US standard MASH specifies higher impact angles for cars and pickups (25°) and different vehicle masses. Studies in Europe (RISER) and the US have reported values for the 90th percentile impact angle of 30°–34°. Thus, the limited evidence available suggests that the 20° angle applied in EN 1317 may be too low.
The first goal of this paper is to use the US NCHRP database (Project NCHRP 17–22) to assess the distribution of impact angle and collision speed in recent run-off-road (ROR) accidents. Second, based on the findings of the statistical analysis and on the analysis of impact angles and speeds in the literature, an LS-DYNA finite element analysis was carried out to evaluate the normal containment level of concrete barriers in non-standard collisions. The FE model was validated against a crash test of a portable concrete barrier carried out at the UK Transport Research Laboratory (TRL).
The accident data analysis for run-off road accidents indicates that a substantial proportion of accidents have an impact angle in excess of 20°. The baseline LS-DYNA model showed good comparison with experimental acceleration severity index (ASI) data and the parametric analysis indicates a very significant influence of impact angle on ASI. Accordingly, a review of European run-off road accidents and the configuration of EN 1317 should be performed.
Abstract:
Recently there has been an increasing interest in the development of new methods using Pareto optimality to deal with multi-objective criteria (for example, accuracy and architectural complexity). Once one has learned a model with such a method, the problem is then how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. Unfortunately, the standard tests used for this purpose are not able to jointly consider multiple performance measures. The aim of this paper is to resolve this issue by developing statistical procedures that are able to account for multiple competing measures at the same time. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among the measures to reduce the number of parameters of such models, since the number of cases studied in such comparisons is usually small. Real data from a comparison among general-purpose classifiers are used to show a practical application of our tests.
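The multinomial-Dirichlet conjugate model mentioned above can be illustrated with a small sketch: outcomes of pairwise comparisons (classifier A dominates, tie/incomparable, classifier B dominates) are treated as multinomial counts with a Dirichlet prior, and the posterior probability that A beats B more often than the reverse is estimated by sampling. The counts and names are illustrative, not the authors' implementation.

```python
# Sketch of a multinomial-Dirichlet comparison of two classifiers.
# Counts are hypothetical outcomes over a collection of data sets,
# judged jointly on two measures (e.g. accuracy and complexity).
import random

random.seed(0)

counts = {"A_wins": 14, "ties": 4, "B_wins": 2}

# Uniform Dirichlet(1,1,1) prior -> posterior Dirichlet(counts + 1).
alphas = [counts["A_wins"] + 1, counts["ties"] + 1, counts["B_wins"] + 1]

def dirichlet_sample(alphas):
    """Draw one sample from a Dirichlet via normalized gamma variates."""
    g = [random.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

# Posterior probability that A dominates more often than B does.
draws, hits = 20000, 0
for _ in range(draws):
    theta = dirichlet_sample(alphas)
    if theta[0] > theta[2]:
        hits += 1
p_a_better = hits / draws
print(round(p_a_better, 3))
```

With 14 wins against 2, the posterior probability that A is the more frequently dominant classifier is close to one, which matches the intuition the frequentist test would formalize.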
Abstract:
Micro-abrasion wear tests in the ball-cratering configuration are widely used. Sources of variability have already been studied by different authors, and test conditions are parameterized by the BS EN 1071-6:2007 standard, which specifies silicon carbide as the abrasive. However, the use of other abrasives is possible and allowed. In this work, ball-cratering wear tests were performed using four different abrasive particles of three dissimilar materials: diamond, alumina and silicon carbide. Tests were carried out under the same conditions on a steel plate provided with a TiB2 hard coating. For each abrasive, five different test durations were used, allowing the initial wear phenomena to be understood. The composition and shape of the abrasive particles were investigated by SEM and EDS. Scar areas were observed by optical and electron microscopy in order to understand the wear effects caused by each abrasive. Scar geometry and grooves were analyzed and compared, and the wear coefficient was calculated for each situation. It was observed that diamond particles produce well-defined, circular wear scars. Different silicon carbide particles presented dissimilar results as a consequence of distinct particle shape and size distribution.
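The wear coefficient in ball-cratering tests is commonly obtained from an Archard-type analysis: for a crater much shallower than the ball, the removed volume is approximated as V ≈ πb⁴/(64R) from the crater diameter b and ball radius R, and k = V/(S·N) with sliding distance S and normal load N. A sketch under that standard approximation follows; the abstract does not give the authors' exact procedure, and all input values are hypothetical.

```python
import math

# Sketch of the usual ball-cratering wear-coefficient calculation
# (Archard-type analysis, valid when the crater is much shallower
# than the ball). All input values below are hypothetical.

def wear_coefficient(crater_diameter_m, ball_radius_m, sliding_distance_m, load_N):
    """k = V / (S * N), with crater volume V ~ pi*b**4 / (64*R) for b << R."""
    b, R = crater_diameter_m, ball_radius_m
    volume = math.pi * b**4 / (64.0 * R)           # removed volume, m^3
    return volume / (sliding_distance_m * load_N)  # m^3 per (N * m)

k = wear_coefficient(crater_diameter_m=1.2e-3, ball_radius_m=12.7e-3,
                     sliding_distance_m=50.0, load_N=0.2)
print(f"{k:.2e}")
```

Computing k per abrasive and per test duration, as in the study, makes the dissimilar behaviour of different particle types directly comparable.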
Abstract:
Every year, particularly during the summer period, Portuguese forests are devastated by forest fires that destroy their ecosystems. In order to prevent these forest fires, public and private authorities frequently use methods for the reduction of combustible mass, such as prescribed fire and mechanical vegetation pruning. All of these forest fire prevention methods alter the vegetation layer and/or soil [1-2]. This work aimed to study the variation of some chemical characteristics of soil subjected to prescribed fire. The studied area, of 54.6 ha, was located in the Serra of Cabreira (Figure 1). Twenty sampling points were randomly selected, and samples were collected with a shovel before, just after, and 125 and 196 days after the prescribed fire. The parameters studied were: pH, soil moisture, organic matter, and total iron, magnesium and potassium concentrations. All the analyses followed International Standard Methodologies. This work allowed the following conclusions: a) just after the prescribed fire, i) the pH remained practically equal to the initial value, and ii) a slight increase occurred in the average organic matter and total iron contents; b) at the end of the sampling period, compared to the initial values, i) the pH did not change significantly, ii) the average organic matter content decreased, and iii) the average total contents of Fe, Mg and K increased.
Abstract:
PURPOSE: To investigate the effect of intraocular straylight (IOS) induced by white opacity filters (WOF) on threshold measurements for stimuli employed in three perimeters: standard automated perimetry (SAP), pulsar perimetry (PP) and the Moorfields motion displacement test (MDT).
METHODS: Four healthy young (24-28 years old) observers were tested six times with each perimeter, each time with one of five different WOFs and once without, inducing various levels of IOS (from 10% to 200%). The increase in IOS was measured with a straylight meter. The change in sensitivity from baseline was normalized, allowing comparison of standardized (z) scores (change divided by the SD of normative values) for each instrument.
RESULTS: SAP and PP thresholds were significantly affected (P < 0.001) by moderate to large increases in IOS (50%-200%). The drop in motion displacement (MD) from baseline with WOF 5 was approximately 5 dB in both SAP and PP, which represents a clinically significant loss; in contrast, the change in MD with MDT was on average 1 minute of arc, which is not likely to indicate a clinically significant loss.
CONCLUSIONS: The Moorfields MDT is more robust to the effects of additional straylight in comparison with SAP or PP.
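The standardized (z) score described in the METHODS section, i.e. the change from baseline divided by the SD of normative values, can be sketched in a few lines. The numbers below are illustrative, not the study's data.

```python
# Sketch of the standardized (z) score used to compare instruments:
# change in sensitivity from baseline, divided by the SD of the
# instrument's normative values. Values are illustrative.

def z_score(baseline, with_filter, normative_sd):
    return (with_filter - baseline) / normative_sd

# e.g. a 5 dB sensitivity drop against a normative SD of 2 dB
z_sap = z_score(baseline=30.0, with_filter=25.0, normative_sd=2.0)
print(z_sap)
```

Dividing by each instrument's own normative SD is what makes drops measured in dB (SAP, PP) and in minutes of arc (MDT) comparable on a common scale.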
Abstract:
In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied on all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
Abstract:
A wide range of tests for heteroskedasticity have been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. There have been a number of recent studies that seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods. Yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values, for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves the intractable null distribution problems raised in particular by the sup-type and combined test statistics, as well as (when relevant) unidentified nuisance parameter problems under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation.
The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with: (i) one exogenous variable, and (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; and (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
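The Monte Carlo test technique underlying this and the neighbouring abstracts has a very compact core: with B statistics simulated under the null (and no nuisance parameters), the p-value p = (1 + #{S_b ≥ S_0})/(B + 1) is exact. A sketch follows; the statistic and data-generating process are toy choices, not those of the paper.

```python
# Sketch of an exact Monte Carlo test (Dwass/Barnard style): simulate the
# statistic B times under the null and count exceedances. The statistic
# below (squared sample mean of iid normals) is purely illustrative.
import random

random.seed(1)

def mc_p_value(stat_observed, simulate_stat, B=99):
    """Exact MC p-value: p = (1 + #{simulated >= observed}) / (B + 1)."""
    exceed = sum(1 for _ in range(B) if simulate_stat() >= stat_observed)
    return (1 + exceed) / (B + 1)

n = 50
def sim_stat():
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    return (sum(xs) / n) ** 2

p = mc_p_value(stat_observed=0.001, simulate_stat=sim_stat, B=99)
print(0.0 < p <= 1.0)
```

With B = 99 and a 5% level, rejecting when p ≤ 0.05 gives exact size regardless of how intractable the statistic's finite-sample distribution is, which is what makes the approach attractive for the sup-type and combined statistics above.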
Abstract:
In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points needed to perform the tests have been derived for the location-scale model; hence reliance on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of standard linear regressions, based on the technique of Monte Carlo tests.
Abstract:
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to simulation-based estimates of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926-1995.
Abstract:
The technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] provides an attractive method of building exact tests from statistics whose finite sample distribution is intractable but can be simulated (provided it does not involve nuisance parameters). We extend this method in two ways: first, by allowing for MC tests based on exchangeable possibly discrete test statistics; second, by generalizing the method to statistics whose null distributions involve nuisance parameters (maximized MC tests, MMC). Simplified asymptotically justified versions of the MMC method are also proposed and it is shown that they provide a simple way of improving standard asymptotics and dealing with nonstandard asymptotics (e.g., unit root asymptotics). Parametric bootstrap tests may be interpreted as a simplified version of the MMC method (without the general validity properties of the latter).
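The maximized Monte Carlo (MMC) idea can be sketched as follows: when the null distribution depends on a nuisance parameter θ, compute the MC p-value over a grid of θ values and take the maximum; rejecting when this sup is ≤ α yields a level-correct test. The AR(1) setup, statistic and grid below are illustrative choices, not the paper's application.

```python
# Sketch of a maximized Monte Carlo (MMC) test: the MC p-value is
# maximized over a grid of nuisance-parameter values (here, an AR(1)
# coefficient rho). Statistic and grid are purely illustrative.
import random

random.seed(2)

def mc_p_value(stat_obs, simulate, B=99):
    exceed = sum(1 for _ in range(B) if simulate() >= stat_obs)
    return (1 + exceed) / (B + 1)

def sim_mean_sq(rho, n=50):
    # statistic: squared sample mean of an AR(1) series with coefficient rho
    x, xs = 0.0, []
    for _ in range(n):
        x = rho * x + random.gauss(0.0, 1.0)
        xs.append(x)
    return (sum(xs) / n) ** 2

def mmc_p_value(stat_obs, grid, B=99):
    # Reject at level alpha only if the p-value exceeds alpha for NO theta
    # in the grid, i.e. if max_theta p(theta) <= alpha.
    return max(mc_p_value(stat_obs, lambda: sim_mean_sq(rho), B) for rho in grid)

p_sup = mmc_p_value(stat_obs=0.05, grid=[0.0, 0.3, 0.6, 0.9])
print(0.0 < p_sup <= 1.0)
```

A parametric bootstrap corresponds to evaluating the p-value at a single estimated θ rather than maximizing over the grid, which is why the abstract describes it as a simplified version of MMC without the latter's general validity.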