23 results for finite-sample test
in Aston University Research Archive
Abstract:
The Kolmogorov-Smirnov (KS) test is a non-parametric test which can be used in two different circumstances. First, it can be used as an alternative to chi-square (χ2) as a ‘goodness-of-fit’ test to compare whether a given ‘observed’ sample of observations conforms to an ‘expected’ distribution of results (KS, one-sample test). An example of the use of the one-sample test to determine whether a sample of observations was normally distributed was described previously. Second, it can be used as an alternative to the Mann-Whitney test to compare two independent samples of observations (KS, two-sample test). Hence, this statnote describes the use of the KS test with reference to two scenarios: (1) to compare the observed frequency (Fo) of soil samples containing cysts of the protozoan Naegleria collected each month for a year with an expected equal frequency (Fe) across months (one-sample test), and (2) to compare the abundance of bacteria on cloths and sponges sampled in a domestic kitchen environment (two-sample test).
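As a rough illustration of the two scenarios (not the statnote's own data), the following Python sketch applies both forms of the KS test using SciPy's standard routines; the simulated arrays stand in for the Naegleria counts and the cloth/sponge bacterial counts:

```python
# A minimal sketch of both uses of the KS test with SciPy; the data arrays
# are simulated for illustration, not taken from the statnote.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# One-sample test: does a sample conform to an expected distribution?
sample = rng.normal(loc=0.0, scale=1.0, size=100)
stat, p = stats.kstest(sample, "norm")
print(f"one-sample KS: D={stat:.3f}, p={p:.3f}")

# Two-sample test: do two independent samples share a distribution?
cloths = rng.lognormal(mean=2.0, sigma=0.5, size=40)
sponges = rng.lognormal(mean=2.5, sigma=0.5, size=40)
stat2, p2 = stats.ks_2samp(cloths, sponges)
print(f"two-sample KS: D={stat2:.3f}, p={p2:.3f}")
```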
Abstract:
We test for departures from normal, independent and identically distributed (NIID) log returns against alternative hypotheses under which log returns are self-affine and either long-range dependent or drawn randomly from an L-stable distribution with infinite higher-order moments. The finite-sample performance of estimators of the two forms of self-affinity is explored in a simulation study. In contrast to rescaled range analysis and other conventional estimation methods, the variant of fluctuation analysis that considers finite-sample moments only is able to identify both forms of self-affinity. When log returns are self-affine and long-range dependent under the alternative hypothesis, however, rescaled range analysis has higher power than fluctuation analysis. The techniques are illustrated by means of an analysis of the daily log returns for the indices of 11 stock markets of developed countries. Several of the smaller stock markets by capitalization exhibit evidence of long-range dependence in log returns.
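For readers unfamiliar with rescaled range analysis, the sketch below (illustrative only; it is not the authors' estimator, data, or fluctuation-analysis variant) estimates a Hurst exponent from the slope of log E[R/S] against log window size. NIID returns should give a value near 0.5, while long-range dependence pushes it above 0.5:

```python
# A minimal rescaled-range (R/S) sketch for estimating the Hurst exponent.
import numpy as np

def rescaled_range(x):
    """R/S statistic of one series segment."""
    y = np.cumsum(x - x.mean())          # cumulative deviations from the mean
    r = y.max() - y.min()                # range of the cumulative deviations
    s = x.std(ddof=1)                    # sample standard deviation
    return r / s

def hurst_rs(x, sizes=(16, 32, 64, 128, 256)):
    """Slope of log E[R/S] against log window size estimates H."""
    log_n, log_rs = [], []
    for n in sizes:
        segments = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        rs = np.mean([rescaled_range(s) for s in segments])
        log_n.append(np.log(n))
        log_rs.append(np.log(rs))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(1)
returns = rng.standard_normal(4096)      # NIID benchmark: H should be near 0.5
print(f"estimated H = {hurst_rs(returns):.2f}")
```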
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
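The variational proposal construction is beyond a short example, but the baseline it is mixed with, blockwise random-walk Metropolis over a discretized path, can be sketched as follows for a double-well diffusion observed with Gaussian noise (all settings are invented for illustration and this is not the paper's algorithm):

```python
# Hedged sketch: plain random-walk Metropolis path sampling for a discretized
# double-well diffusion with sparse noisy observations.
import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.01, 400                         # time step and number of path points
sigma = 0.5                               # diffusion coefficient
obs_every, obs_sd = 50, 0.2               # observation density and noise

def drift(x):                             # double-well potential drift
    return 4.0 * x * (1.0 - x ** 2)

def log_post(path, y, obs_idx):
    """Euler-Maruyama transition densities plus Gaussian observation terms."""
    inc = path[1:] - path[:-1] - drift(path[:-1]) * dt
    lp = -0.5 * np.sum(inc ** 2) / (sigma ** 2 * dt)
    lp += -0.5 * np.sum((y - path[obs_idx]) ** 2) / obs_sd ** 2
    return lp

# Simulate a path and noisy observations of it.
true = np.empty(n)
true[0] = -1.0
for t in range(n - 1):
    true[t + 1] = (true[t] + drift(true[t]) * dt
                   + sigma * np.sqrt(dt) * rng.standard_normal())
obs_idx = np.arange(0, n, obs_every)
y = true[obs_idx] + obs_sd * rng.standard_normal(len(obs_idx))

# Random-walk Metropolis over local blocks of the path.
path = np.zeros(n)
lp = log_post(path, y, obs_idx)
for it in range(20000):
    i = rng.integers(0, n - 20)           # propose a local block perturbation
    prop = path.copy()
    prop[i:i + 20] += 0.1 * rng.standard_normal(20)
    lp_prop = log_post(prop, y, obs_idx)
    if np.log(rng.random()) < lp_prop - lp:
        path, lp = prop, lp_prop
print(f"sampled path RMSE vs truth: {np.sqrt(np.mean((path - true) ** 2)):.3f}")
```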
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
Abstract:
Understanding the feasibility of applying the Team Climate Inventory (TCI) in non-Western cultures is essential for researchers attempting to understand the influence of culture on workers' perceived climate. This study describes the application of the TCI in such a setting using data from 203 administrators employed in a Taiwanese medical center. Reliability and factor analyses were performed to establish the feasibility and psychometric properties of the TCI Taiwan version. Reliabilities of both the four- and five-factor solutions exceeded .80. Factor analyses indicated a satisfactory four-factor structure, despite some variations in comparison with the U.K. version. The TCI Taiwan version is feasible and has acceptable psychometric properties. Further research is warranted regarding the degree to which disparities result from cultural differences and the specific nature of organizational systems in Chinese communities.
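As a pointer to the reliability figures quoted above, here is a minimal sketch of Cronbach's alpha, the usual coefficient behind statements such as "reliabilities exceeded .80", computed on an invented response matrix rather than the Taiwanese TCI data:

```python
# A small sketch of Cronbach's alpha on a simulated item-response matrix.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(203, 1))                 # shared factor per respondent
items = latent + 0.6 * rng.normal(size=(203, 8))   # 8 correlated Likert-like items
print(f"alpha = {cronbach_alpha(items):.2f}")
```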
Abstract:
The problem of regression under Gaussian assumptions is treated generally. The relationship between Bayesian prediction, regularization and smoothing is elucidated. The ideal regression is the posterior mean and its computation scales as O(n³), where n is the sample size. We show that the optimal m-dimensional linear model under a given prior is spanned by the first m eigenfunctions of a covariance operator, which is a trace-class operator. This is an infinite-dimensional analogue of principal component analysis. The importance of Hilbert space methods to practical statistics is also discussed.
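A minimal numerical sketch of the scaling point: the posterior mean of a Gaussian process regression requires solving an n × n linear system, which is the source of the O(n³) cost. The kernel and data below are illustrative assumptions, not the paper's setting:

```python
# Gaussian process regression posterior mean via an n x n linear solve.
import numpy as np

def rbf(a, b, length=0.5):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 1, 30))
noise = 0.1
y = np.sin(2 * np.pi * x) + noise * rng.standard_normal(30)

K = rbf(x, x) + noise ** 2 * np.eye(len(x))   # covariance matrix plus noise
x_star = np.linspace(0, 1, 5)
k_star = rbf(x_star, x)
mean = k_star @ np.linalg.solve(K, y)         # posterior mean: the O(n^3) step
print(mean)
```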
Abstract:
This paper investigates the relationship between systems of HRM policies and organizational performance. The research is based on a sample of 178 organizations operating in the Greek manufacturing sector. A mediation model is tested to examine the link between HRM and organizational performance. The results of this study support the hypothesis that the relationship between the HRM systems of resourcing-development and reward-relations, and organizational performance, is mediated through the HRM outcomes of skills and attitudes. The paper not only supports the theory that HRM systems have a positive impact on organizational performance but also explains the mechanisms through which HRM systems improve organizational performance.
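A hedged sketch of the kind of mediation test implied by such a model, using the product-of-coefficients (Sobel) approach on invented data rather than the study's Greek manufacturing sample:

```python
# Product-of-coefficients (Sobel) mediation sketch: X -> M -> Y.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 178
hrm = rng.standard_normal(n)                      # HRM system score
outcomes = 0.5 * hrm + rng.standard_normal(n)     # mediator: skills/attitudes
perf = 0.4 * outcomes + rng.standard_normal(n)    # organizational performance

m_a = sm.OLS(outcomes, sm.add_constant(hrm)).fit()                # X -> M
m_b = sm.OLS(perf, sm.add_constant(
    np.column_stack([hrm, outcomes]))).fit()                      # X, M -> Y
a, b = m_a.params[1], m_b.params[2]
se_a, se_b = m_a.bse[1], m_b.bse[2]
z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)            # Sobel statistic
print(f"indirect effect = {a*b:.3f}, Sobel z = {z:.2f}")
```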
Abstract:
Researchers often use 3-way interactions in moderated multiple regression analysis to test the joint effect of 3 independent variables on a dependent variable. However, further probing of significant interaction terms varies considerably and is sometimes error prone. The authors developed a significance test for slope differences in 3-way interactions and illustrated its importance for testing psychological hypotheses. Monte Carlo simulations revealed that sample size, magnitude of the slope difference, and data reliability affected test power. Application of the test to published data yielded detection of some slope differences that were undetected by alternative probing techniques and led to changes of results and conclusions. The authors conclude by discussing the test's applicability for psychological research.
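A rough sketch of the probing step on simulated data: fit a model with a 3-way interaction in statsmodels, then test the simple slope of x at chosen moderator values via a linear contrast on the coefficients. This illustrates the general idea, not the authors' published slope-difference test:

```python
# Probing a 3-way interaction: simple slope of x at high/low z and w.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 300
df = pd.DataFrame(rng.standard_normal((n, 3)), columns=["x", "z", "w"])
df["y"] = df.x + 0.3 * df.x * df.z * df.w + rng.standard_normal(n)

fit = smf.ols("y ~ x * z * w", data=df).fit()
names = fit.params.index

def slope_test(z0, w0):
    """Slope of x at (z0, w0): b_x + b_xz*z0 + b_xw*w0 + b_xzw*z0*w0."""
    c = np.zeros(len(names))
    c[names.get_loc("x")] = 1.0
    c[names.get_loc("x:z")] = z0
    c[names.get_loc("x:w")] = w0
    c[names.get_loc("x:z:w")] = z0 * w0
    return fit.t_test(c)

print(slope_test(1.0, 1.0))     # slope of x at high z, high w
print(slope_test(1.0, -1.0))    # slope of x at high z, low w
```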
Abstract:
The object of this work was to further develop the idea introduced by Muaddi et al (1981), which enables some of the disadvantages of earlier destructive adhesion test methods to be overcome. The test is non-destructive in nature but it does need to be calibrated against a destructive method. Adhesion is determined by measuring the effect of plating on internal friction. This is achieved by determining the damping of vibrations of a resonating specimen before and after plating. The level of adhesion was considered by the above authors to influence the degree of damping. In the major portion of the research work the electrodeposited metal was Watts nickel, which is ductile in nature and is therefore suitable for peel adhesion testing. The base metals chosen were aluminium alloys S1C and HE9, as it is relatively easy to produce varying levels of adhesion between the substrate and electrodeposited coating by choosing the appropriate process sequence. S1C alloy is commercially pure aluminium and was used to produce good adhesion; HE9 is a more difficult alloy to plate and was chosen to produce poorer adhesion. The "Modal Testing" method used for studying vibrations was investigated as a possible means of evaluating adhesion but was not successful, so research was concentrated on the "Q" meter. The method based on the use of a "Q" meter involves the principle of exciting vibrations in a sample, interrupting the driving signal and counting the number of oscillations of the freely decaying vibrations between two known preselected amplitudes of oscillation. It was not possible to reconstruct a working instrument from Muaddi's thesis (1982) as it contained either a serious error or incomplete information. Hence a modified "Q" meter had to be designed and constructed, but it proved difficult to resonate non-magnetic materials such as aluminium, so a comparison before and after plating could not be made. A new "Q" meter was then developed based on an impulse technique. A regulated miniature hammer was used to excite the test piece at the fundamental mode instead of an electronic hammer, and test pieces were supported at the two predetermined nodal points using nylon threads. The instrument developed was not very successful at detecting changes due to good and poor pretreatments given before plating; however, it was more sensitive to changes at the surface, such as room-temperature oxidation. Statistical analysis of test results from untreated aluminium alloys shows that the instrument is not always consistent, and the variation was even larger when readings were taken on different days. Although aluminium is said to form protective oxides at room temperature, there was evidence that the aluminium surface changes continuously due to film formation, growth and breakdown. Nickel-plated and zinc-alloy immersion-coated samples also showed variation in Q with time. In order to prove that the variations in Q were mainly due to surface oxidation, aluminium samples were lacquered and anodised. Such treatments enveloped the active surfaces reacting with the environment, and the Q variation with time was almost eliminated, especially after hard anodising. The instrument detected major differences between different untreated aluminium substrates, and Q values decreased progressively as coating thickness was increased. It was also able to detect changes in Q due to heat treatment of aluminium alloys.
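The counting principle behind the "Q" meter can be sketched numerically: count the oscillations of a freely decaying vibration between two preset amplitudes and convert the decay to a Q factor via the logarithmic decrement. The signal and parameters below are synthetic, not measurements from the thesis:

```python
# Q factor from a freely decaying vibration via the logarithmic decrement.
import numpy as np

fs, f0, zeta = 10_000, 250.0, 0.002        # sample rate, resonance, damping ratio
t = np.arange(0, 1.0, 1 / fs)
x = np.exp(-zeta * 2 * np.pi * f0 * t) * np.sin(2 * np.pi * f0 * t)

# Peak amplitudes of successive cycles (discrete local maxima).
peaks = np.array([x[i] for i in range(1, len(x) - 1)
                  if x[i] > x[i - 1] and x[i] > x[i + 1]])

hi, lo = 0.8, 0.2                          # preselected amplitude window
n_cycles = np.sum((peaks <= hi) & (peaks >= lo))   # oscillations counted

# Logarithmic decrement over that window gives the damping, hence Q.
delta = np.log(hi / lo) / n_cycles
Q = np.pi / delta                          # Q ~ pi / delta for light damping
print(f"counted {n_cycles} cycles, Q = {Q:.0f} (true = {1 / (2 * zeta):.0f})")
```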
Abstract:
Statistical software is now commonly available to calculate power (P') and sample size (N) for most experimental designs. In many circumstances, however, sample size is constrained by lack of time, cost and, in research involving human subjects, the problems of recruiting suitable individuals. In addition, the calculation of N is often based on erroneous assumptions about variability, and therefore such estimates are often inaccurate. At best, we would suggest that such calculations provide only a very rough guide as to how to proceed in an experiment. Nevertheless, calculation of P' is very useful, especially in experiments that have failed to detect a difference which the experimenter thought was present. We would recommend that P' should always be calculated in these circumstances to determine whether the experiment was actually too small to test null hypotheses adequately.
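A short sketch of both calculations using statsmodels; the effect size and alpha are invented for illustration:

```python
# Prospective sample size and retrospective power for a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size needed per group for 80% power at a medium effect (d = 0.5).
n = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"required n per group: {n:.0f}")

# Achieved power of an experiment that ended up with only 15 per group:
# useful when a 'non-significant' result may simply reflect low power.
p = analysis.power(effect_size=0.5, nobs1=15, alpha=0.05)
print(f"power with n=15 per group: {p:.2f}")
```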
Abstract:
The aim of this research was to investigate the integration of computer-aided drafting and finite-element analysis in a linked computer-aided design procedure and to develop the necessary software. The Bézier surface patch for surface representation was used to bridge the gap between the rather separate fields of drafting and finite-element analysis, because the surfaces are defined by analytical functions which allow systematic and controlled variation of the shape and provide continuous derivatives up to any required degree. The objectives of this research were achieved by establishing: (i) a package which interprets the engineering drawings of plate and shell structures and prepares the Bézier net necessary for surface representation; (ii) a general-purpose stand-alone meshed-surface modelling package for surface representation of plates and shells using the Bézier surface patch technique; (iii) a translator which adapts the geometric description of plate and shell structures as given by the meshed-surface modeller to the form needed by the finite-element analysis package. The translator was extended to suit fan impellers by taking advantage of their sectorial symmetry. The linking processes were carried out for simple test structures and for simplified and actual fan impellers to verify the flexibility and usefulness of the linking technique adopted. Finite-element results for thin plate and shell structures showed excellent agreement with those obtained by other investigators, while results for the simplified and actual fan impellers also showed good agreement with those obtained in an earlier investigation in which finite-element analysis input data were manually prepared. Some extensions of this work have also been discussed.
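For concreteness, here is a small sketch of evaluating a Bézier surface patch from its control net via Bernstein polynomials, the representation used above to link drafting and finite-element meshing; the control points are arbitrary illustrative values:

```python
# Evaluate a point on a Bézier surface patch from its control net.
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_patch(ctrl, u, v):
    """ctrl: (n+1, m+1, 3) control net; returns the surface point at (u, v)."""
    n, m = ctrl.shape[0] - 1, ctrl.shape[1] - 1
    point = np.zeros(3)
    for i in range(n + 1):
        for j in range(m + 1):
            point += bernstein(n, i, u) * bernstein(m, j, v) * ctrl[i, j]
    return point

# A 3x3 control net describing a gently curved quadratic patch.
ctrl = np.array([[[x, y, 0.5 * x * y] for y in range(3)] for x in range(3)],
                dtype=float)
print(bezier_patch(ctrl, 0.5, 0.5))   # point at the centre of the patch
```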
Abstract:
Product reliability and its environmental performance have become critical elements within a product's specification and design. To obtain a high level of confidence in the reliability of the design, it is customary to test the design under realistic conditions in a laboratory. The objective of the work is to examine the feasibility of designing mechanical test rigs which exhibit prescribed dynamical characteristics. The design is then attached to the rig and excitation is applied to the rig, which then transmits representative vibration levels into the product. The philosophical considerations made at the outset of the project are discussed, as they form the basis for the resulting design methodologies. An attempt is made to identify the parameters of a test rig directly from the spatial model derived during the system identification process; it is shown to be impossible to identify a feasible test rig design using this technique. A finite-dimensional optimal design methodology is therefore developed which identifies the parameters of a discrete spring/mass system that is dynamically similar to a point coordinate on a continuous structure. This design methodology is incorporated within another procedure which derives a structure comprising a continuous element and a discrete system. This methodology is used to obtain point-coordinate similarity for two planes of motion, which is validated by experimental tests. A limitation of this approach is that it is impossible to achieve multi-coordinate similarity, due to an interaction of the discrete system and the continuous element at points away from the coordinate of interest. During the work the importance of the continuous element is highlighted and a design methodology is developed for continuous structures. The design methodology is based upon distributed-parameter optimal design techniques and allows an initial poor design estimate to be moved in a feasible direction towards an acceptable design solution. Cumulative damage theory is used to provide a quantitative method of assessing the quality of dynamic similarity. It is shown that the combination of modal analysis techniques and cumulative damage theory provides a feasible design synthesis methodology for representative test rigs.
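A hedged sketch of the kind of calculation underlying dynamic similarity of a discrete spring/mass system: its natural frequencies follow from the generalized eigenvalue problem K v = ω² M v. The masses and stiffnesses below are illustrative, not values from the thesis:

```python
# Natural frequencies of a three-mass spring chain, fixed at the base.
import numpy as np
from scipy.linalg import eigh

m = np.array([1.0, 0.5, 0.5])            # lumped masses (kg)
k = np.array([2e4, 1e4, 1e4])            # spring stiffnesses (N/m)

M = np.diag(m)
K = np.array([[k[0] + k[1], -k[1], 0.0],
              [-k[1], k[1] + k[2], -k[2]],
              [0.0, -k[2], k[2]]])       # stiffness matrix of the chain

w2, modes = eigh(K, M)                   # generalized symmetric eigenproblem
freqs = np.sqrt(w2) / (2 * np.pi)
print("natural frequencies (Hz):", np.round(freqs, 1))
```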
Abstract:
Objectives - Powdered and granulated particulate materials make up most of the ingredients of pharmaceuticals and are often at risk of undergoing unwanted agglomeration, or caking, during transport or storage. This is particularly acute when bulk powders are exposed to extreme swings in temperature and relative humidity, which is now common as drugs are produced and administered in increasingly hostile climates and are stored for longer periods of time prior to use. This study explores the possibility of using a uniaxial unconfined compression test to compare the strength of caked agglomerates exposed to different temperatures and relative humidities. It is part of a longer-term study to construct a protocol to predict the caking tendency of a new bulk material from individual particle properties. The main challenge is to develop techniques that provide repeatable results yet are presented simply enough to be useful to a wide range of industries. Methods - Powdered sucrose, a major pharmaceutical ingredient, was poured into a split die and exposed to high and low relative humidity cycles at room temperature. The typical ranges were 20–30% for the lower value and 70–80% for the higher value. The outer die casing was then removed and the resultant agglomerate was subjected to an unconfined compression test using a plunger fitted to a Zwick compression tester. Force against displacement was logged so that the dynamics of failure, as well as the failure load of the sample, could be recorded. The experimental matrix included varying the number of humidity cycles, the difference between the maximum and minimum relative humidity, the heights and diameters of the samples, and the particle size. Results - Trends showed that the tensile strength of the agglomerates increased with the number of cycles and also with more extreme swings in relative humidity. This agrees with previous work on alternative methods of measuring the tensile strength of sugar agglomerates formed from humidity cycling (Leaper et al 2003). Conclusions - The results show that at the very least the uniaxial tester is a good comparative tester for examining the caking tendency of powdered materials, with a simple arrangement and operation that are compatible with the requirements of industry. However, further work is required to continue to optimize the height/diameter ratio during tests.
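A minimal sketch of reducing a logged force-displacement trace to a failure load and nominal stress, the quantity compared across humidity cycles above; the trace and sample dimensions are synthetic:

```python
# Failure load and nominal stress from an unconfined compression trace.
import numpy as np

diameter_mm = 25.0
area_m2 = np.pi * (diameter_mm / 2 / 1000) ** 2    # sample cross-section

# Synthetic force trace (N): roughly linear loading, then brittle failure.
disp = np.linspace(0, 2.0, 200)                    # plunger displacement (mm)
force = np.where(disp < 1.5, 80 * disp, 120 * np.exp(-5 * (disp - 1.5)))

peak = force.max()                                 # failure load
print(f"failure load = {peak:.0f} N")
print(f"nominal failure stress = {peak / area_m2 / 1000:.1f} kPa")
```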
Abstract:
The operating state of a photovoltaic Module Integrated Converter (MIC) is subject to change under different source and load conditions, and in past research the state-swap has usually been implemented with a flow-chart-based sequential controller. In this paper, the signatures of the different operational states are evaluated and investigated, leading to an effective control-integrated finite state machine (CIFSM) that provides real-time state-swaps as fast as the local control loop. The proposed CIFSM is implemented digitally for a boost-type MIC prototype and tested under a variety of load and source conditions. The test results prove the effectiveness of the proposed CIFSM design.
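A hedged Python sketch of a control-integrated finite state machine of this general kind, evaluated once per control cycle rather than via a sequential flow chart; the states, signatures and thresholds are invented stand-ins, not the paper's design:

```python
# Signature-driven state machine for a converter, one evaluation per cycle.
from enum import Enum, auto

class State(Enum):
    STANDBY = auto()
    MPPT = auto()                         # maximum power point tracking
    CURRENT_LIMIT = auto()

def next_state(state, v_in, i_out, v_min=18.0, i_max=5.0):
    """Pick the next state from measured signatures, with hysteresis."""
    if v_in < v_min:
        return State.STANDBY              # source too weak to operate
    if i_out > i_max:
        return State.CURRENT_LIMIT        # protect against overload
    if state is State.CURRENT_LIMIT and i_out > 0.9 * i_max:
        return State.CURRENT_LIMIT        # hysteresis before resuming MPPT
    return State.MPPT                     # normal tracking otherwise

state = State.STANDBY
for v_in, i_out in [(25.0, 2.0), (25.0, 6.0), (12.0, 0.5)]:
    state = next_state(state, v_in, i_out)
    print(v_in, i_out, "->", state.name)
```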
Abstract:
This research addressed the question: "Which factors predict the effectiveness of healthcare teams?" It was addressed by assessing the psychometric properties of a new measure of team functioning with the use of data collected from 797 team members in 61 healthcare teams. This new measure is the Aston Team Performance Inventory (ATPI), developed by West, Markiewicz and Dawson (2005) and based on the input-process-output (IPO) model. The ATPI was pilot tested in order to examine the reliability of this measure in the Jordanian cultural context. A sample of five teams comprising 3-6 members each was randomly selected from the Jordan Red Crescent health centers in Amman. Factors that predict team effectiveness were then explored in a Jordanian sample (comprising 1622 members in 277 teams, with 255 leaders, from healthcare teams in hospitals in Amman) using self-report and leader-rating measures adapted from work by West, Borrill et al (2000) to determine team effectiveness and innovation from the leaders' point of view. The results demonstrate the validity and reliability of the measures for use in healthcare settings. Team effort and skills and leader managing had the strongest associations with team processes in terms of team objectives, reflexivity, participation, task focus, creativity and innovation. Team inputs in terms of task design, team effort and skills, and organizational support were associated with team effectiveness and innovation, whereas team resources were associated only with team innovation. Team objectives had the strongest mediated and direct association with team effectiveness, whereas task focus had the strongest mediated and direct association with team innovation. Finally, among the leadership variables, leader managing had the strongest association with team effectiveness and innovation. The theoretical and practical implication of this thesis is that team effectiveness and innovation are influenced by multiple factors, all of which must be taken into account. The key factors managers need to ensure are in place for effective teams are team effort and skills, organizational support and team objectives. To conclude, the application of these findings to healthcare teams in Jordan will help improve their team effectiveness, and thus the healthcare services that they provide.