886 results for "Type of error"


Relevance: 100.00%

Abstract:

Experiments combining different groups or factors are a powerful method of investigation in applied microbiology. ANOVA enables not only the effects of individual factors to be estimated but also their interactions, information which cannot be obtained readily when factors are investigated separately. In addition, combining different treatments or factors in a single experiment is more efficient and often reduces the number of replications required to estimate treatment effects adequately. Because of the treatment combinations used in a factorial experiment, the degrees of freedom (DF) of the error term in the ANOVA are a more important indicator of the ‘power’ of the experiment than simply the number of replicates. A good method is to ensure, where possible, that sufficient replication is present to achieve 15 DF for each error term of the ANOVA. Finally, in a factorial experiment, it is important to define the design of the experiment in detail because this determines the appropriate type of ANOVA. We will discuss some of the common variations of factorial ANOVA in future statnotes. If there is doubt about which ANOVA to use, the researcher should seek advice from a statistician with experience of research in applied microbiology.
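As a quick illustration of the 15 DF guideline, the sketch below uses the standard formula for a fully replicated two-factor design, where the error DF equal a·b·(r − 1) for a levels of one factor, b of the other, and r replicates per cell; the factor levels chosen are arbitrary.

```python
# Error degrees of freedom in a fully replicated two-factor factorial
# ANOVA: with a and b factor levels and r replicates per cell,
# error DF = a * b * (r - 1).  The guideline above is to replicate
# until this reaches at least 15.

def error_df(a_levels, b_levels, replicates):
    """Residual (error) DF for a fully replicated a x b factorial."""
    return a_levels * b_levels * (replicates - 1)

def min_replicates(a_levels, b_levels, target_df=15):
    """Smallest number of replicates giving at least target_df error DF."""
    r = 2
    while error_df(a_levels, b_levels, r) < target_df:
        r += 1
    return r

# A 2 x 3 factorial needs 4 replicates per cell: 2 * 3 * (4 - 1) = 18 >= 15.
print(min_replicates(2, 3))  # -> 4
```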

Relevance: 100.00%

Abstract:

In any investigation in optometry involving more than two treatment or patient groups, an investigator should be using ANOVA to analyse the results, assuming that the data conform reasonably well to the assumptions of the analysis. Ideally, specific null hypotheses should be built into the experiment from the start so that the treatment variation can be partitioned to test these effects directly. If 'post-hoc' tests are used, then an experimenter should examine the degree of protection offered by the test against the possibilities of making either a Type I or a Type II error. All experimenters should be aware of the complexity of ANOVA. The present article describes only one common form of the analysis, viz., that which applies to a single classification of the treatments in a randomised design. There are many different forms of the analysis, each of which is appropriate to the analysis of a specific experimental design. The uses of some of the most common forms of ANOVA in optometry have been described in a further article. If in any doubt, an investigator should consult a statistician with experience of the analysis of experiments in optometry since, once embarked upon an experiment with an unsuitable design, there may be little that a statistician can do to help.
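The single-classification (one-way) analysis described here can be sketched as follows; the data are hypothetical, and the partition into between-treatment and within-treatment (error) sums of squares is the standard one.

```python
# One-way ANOVA by hand for three treatment groups (hypothetical data),
# partitioning total variation into between- and within-group sums of
# squares to form the F statistic.

def one_way_anova(groups):
    """Return (F statistic, between-groups DF, within-groups DF)."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    means = [sum(g) / len(g) for g in groups]
    # Between-treatment sum of squares.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Within-treatment (error) sum of squares.
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

groups = [
    [24.1, 25.3, 23.8, 26.0, 24.7],   # treatment A
    [27.9, 28.4, 26.8, 29.1, 27.5],   # treatment B
    [24.5, 25.0, 23.9, 25.8, 24.2],   # treatment C
]
f, df_b, df_w = one_way_anova(groups)
print(f"F({df_b},{df_w}) = {f:.2f}")  # compare against the F(2,12) critical value
```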

Relevance: 100.00%

Abstract:

As a basis for the commercial separation of normal paraffins, a detailed study has been made of factors affecting the adsorption of binary liquid mixtures of high molecular weight normal paraffins (C12, C16, and C20) from isooctane on type 5A molecular sieves. The literature relating to molecular sieve properties and applications, and to liquid-phase adsorption of high molecular weight normal paraffin compounds by zeolites, was reviewed. Equilibrium isotherms were determined experimentally for the normal paraffins under investigation at temperatures of 303 K, 323 K and 343 K and showed a non-linear, favourable type of isotherm. A higher equilibrium amount was adsorbed with lower molecular weight normal paraffins. An increase in adsorption temperature resulted in a decrease in the adsorption value. Kinetics of adsorption were investigated for the three normal paraffins at different temperatures. The effective diffusivity and the rate of adsorption of each normal paraffin increased with an increase in temperature in the range 303 to 343 K. The activation energy was between 2 and 4 kcal/mole. The dynamic properties of the three systems were investigated over a range of operating conditions (i.e. temperature, flow rate, feed concentration, and molecular sieve size in the range 0.032×10⁻³ to 2×10⁻³ m) with a packed column. The heights of adsorption zones calculated by two independent equations (one based on an adsorption zone of constant width and constant velocity, and the second on a solute material balance within the adsorption zone) agreed within 3%, which confirmed the validity of using the mass transfer zone concept to provide a simple design procedure for the systems under study. The dynamic capacity of type 5A sieves for n-eicosane was lower than for n-hexadecane and n-dodecane, corresponding to a lower equilibrium loading capacity and a lower overall mass transfer coefficient.
The values of the individual external, internal, theoretical and experimental overall mass transfer coefficients were determined. The internal resistance was in all cases rate-controlling. A mathematical model for the prediction of dynamic breakthrough curves was developed analytically and solved using the equilibrium isotherm and the mass transfer rate equation. The experimental breakthrough curves were tested against both the proposed model and a graphical method developed by Treybal. The model produced the best fit, with mean relative percent deviations of 26, 22, and 13% for the n-dodecane, n-hexadecane, and n-eicosane systems respectively.
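The goodness-of-fit figure quoted above is a mean relative percent deviation between experimental and predicted breakthrough concentrations; a minimal sketch of that metric, on hypothetical data, is:

```python
# Sketch of the fit metric used above: the mean relative percent
# deviation between observed and model-predicted values.  The
# breakthrough-curve data below are hypothetical.

def mean_relative_percent_deviation(observed, predicted):
    """Average of |obs - pred| / obs, expressed as a percentage."""
    terms = [abs(o - p) / o for o, p in zip(observed, predicted)]
    return 100.0 * sum(terms) / len(terms)

# Hypothetical normalised outlet concentrations c/c0 along a
# breakthrough curve, with corresponding model predictions.
observed  = [0.05, 0.15, 0.40, 0.70, 0.90, 0.98]
predicted = [0.06, 0.13, 0.44, 0.66, 0.93, 0.97]

print(f"{mean_relative_percent_deviation(observed, predicted):.1f}%")  # -> 8.9%
```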

Relevance: 100.00%

Abstract:

A homologous series of ultra-violet stabilisers containing the 2-hydroxybenzophenone (HBP) moiety as a UV-absorbing chromophore, with varying alkyl chain lengths and sizes, was prepared by known chemical syntheses. The strong absorbance of the HBP chromophore was utilised to evaluate the concentration of these stabilisers in low density polyethylene (LDPE) films and in relevant solvents by ultra-violet/visible spectroscopy. Intrinsic diffusion coefficients, equilibrium solubilities, volatilities from LDPE films and the volatility of the pure stabilisers were studied over a temperature range of 5–100 °C. The effects of structure, molecular weight and temperature on the above parameters were investigated and the results were analysed on the basis of theoretical models published in the literature. It was found that an increase in alkyl chain length does not change the diffusion coefficients to a significant level, while attachment of polar or branched alkyl groups changes their values considerably. An Arrhenius type of relationship for the temperature dependence of diffusion coefficients seems to be valid only for a narrow temperature range, and therefore extrapolation of data from one temperature to another leads to considerable error. The evidence showed that an increase in additive solubility in the polymer is favoured by lower heats of fusion and lower melting points of the additives. This implies that simple regular solution theory provides an adequate basis for understanding the solubility of additives in polymers. The volatility of stabilisers from low density polyethylene films showed that the loss of an additive from a polymer can be expressed in terms of a first-order kinetic equation.
In addition, the rate of loss of stabilisers was discussed in relation to their diffusion, solubility and volatility, and it was found that all these factors may contribute to the additive loss, although one may be rate-determining. Stabiliser migration from LDPE into various solvents and food simulants was studied at 5, 23, 40 and 70 °C; from the plots of rate of migration versus the square root of time, characteristic diffusion coefficients were obtained by using the solutions of Fick's diffusion equations. It was shown that the rate of migration depends primarily on the partition coefficient of the additive between the solvent and the polymer, and also on the swelling action of the contacting media. Characteristic diffusion coefficients were found to approach the intrinsic values in non-swelling solvents, whereas in the case of highly swollen polymer samples, the former may be orders of magnitude greater than the latter.
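The first-order kinetic description of additive loss mentioned above amounts to c(t) = c0·exp(−kt); the sketch below uses a hypothetical rate constant to show how a half-life follows from k.

```python
# First-order kinetic loss of a stabiliser from a polymer film:
# c(t) = c0 * exp(-k * t).  The rate constant below is hypothetical,
# chosen only to illustrate the form of the equation.
import math

def additive_concentration(c0, k, t):
    """Remaining additive concentration after time t (first-order decay)."""
    return c0 * math.exp(-k * t)

k = 0.05          # hypothetical rate constant, 1/day
c0 = 1.0          # initial (normalised) stabiliser concentration
half_life = math.log(2) / k
print(f"half-life = {half_life:.1f} days")
print(f"c after 30 days = {additive_concentration(c0, k, 30):.3f}")
```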

Relevance: 100.00%

Abstract:

Large monitoring networks are becoming increasingly common and can generate large datasets, from thousands to millions of observations in size, often with high temporal resolution. Processing large datasets using traditional geostatistical methods is prohibitively slow, and in real-world applications different types of sensor can be found across a monitoring network. Heterogeneities in the error characteristics of different sensors, both in terms of distribution and magnitude, present problems for generating coherent maps. An assumption in traditional geostatistics is that observations are made directly of the underlying process being studied and that the observations are contaminated with Gaussian errors. Under this assumption, sub-optimal predictions will be obtained if the error characteristics of the sensor are effectively non-Gaussian. One method, model-based geostatistics, imposes a Gaussian process prior over the (latent) process being studied, with the sensor model forming part of the likelihood term. One problem with this type of approach is that the corresponding posterior distribution will be non-Gaussian and computationally demanding, as Monte Carlo methods have to be used. An extension of a sequential, approximate Bayesian inference method enables observations with arbitrary likelihoods to be treated in a projected process kriging framework, which is less computationally intensive. The approach is illustrated using a simulated dataset with a range of sensor models and error characteristics.
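To make the setting concrete, the sketch below is a plain kriging predictor with a Gaussian process prior over the latent field and Gaussian sensor noise entering only through the likelihood; it is not the paper's sequential approximate method, and all values (kernel, noise level, data) are hypothetical.

```python
# Minimal kriging sketch: GP prior over the latent process, Gaussian
# sensor noise in the likelihood.  Hypothetical simulated data.
import numpy as np

def rbf(x1, x2, length=1.0, var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
x_obs = np.linspace(0, 10, 20)
y_obs = np.sin(x_obs) + rng.normal(0, 0.1, size=x_obs.size)

noise_var = 0.1 ** 2                      # per-sensor error variance
K = rbf(x_obs, x_obs) + noise_var * np.eye(x_obs.size)
x_new = np.array([5.0])
k_star = rbf(x_new, x_obs)

# Posterior mean of the latent process at x_new.
mean = (k_star @ np.linalg.solve(K, y_obs))[0]
print(mean)                               # close to sin(5.0), about -0.96
```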

Relevance: 100.00%

Abstract:

This paper presents the results of an empirical study of purchasing skills and how they vary by type of purchase. Building on the purchasing portfolio approach, purchase type was defined using 10 internally oriented, product-related dimensions and 8 externally oriented, supply-related dimensions. Based on these dimensions, experienced purchasing personnel described a specific purchase and profiled the skills required for effective performance in purchasing that item. Cluster analysis of the 72 valid responses provided three distinct groups of purchasing situations, which were labelled strategic, tactical and routine purchase types. Of the 33 skills identified from prior literature, 24 were found to differ significantly (p < 0.05) across purchase type clusters. For each cluster, the purchase situation and skills profile are described. Results show that, as product and market complexity and risk increase, both more purchasing skills are needed and their importance for successful performance increases. Further analysis through pairwise comparisons of the clusters reveals more detailed insights. Suggestions for refining this survey tool and for its potential application in companies are discussed.
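The clustering step can be pictured with a toy sketch: k-means on two profile dimensions, loosely mirroring the cluster analysis described above. The ratings are hypothetical and this is not the study's actual data or necessarily its clustering algorithm.

```python
# Toy illustration: grouping purchase situations by plain k-means on
# two hypothetical profile dimensions (product complexity, supply risk).

def kmeans(points, centroids, iters=10):
    """Plain k-means on 2-D points; returns final centroids and labels."""
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = [min(range(len(centroids)),
                      key=lambda j: (p[0] - centroids[j][0]) ** 2 +
                                    (p[1] - centroids[j][1]) ** 2)
                  for p in points]
        # Move each centroid to the mean of its members.
        for j in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, labels

# (product complexity, supply risk) on a 1-5 scale, hypothetical.
purchases = [(1.2, 1.0), (1.5, 1.4), (1.1, 1.3),   # routine-like
             (4.5, 4.2), (4.8, 4.6), (4.3, 4.9)]   # strategic-like
centroids, labels = kmeans(purchases, [(1.0, 1.0), (5.0, 5.0)])
print(labels)  # -> [0, 0, 0, 1, 1, 1]
```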

Relevance: 100.00%

Abstract:

In this paper we investigate whether consideration of store-level heterogeneity in marketing mix effects improves the accuracy of the marketing mix elasticities, fit, and forecasting accuracy of the widely-applied SCAN*PRO model of store sales. Models with continuous and discrete representations of heterogeneity, estimated using hierarchical Bayes (HB) and finite mixture (FM) techniques, respectively, are empirically compared to the original model, which does not account for store-level heterogeneity in marketing mix effects, and is estimated using ordinary least squares (OLS). The empirical comparisons are conducted in two contexts: Dutch store-level scanner data for the shampoo product category, and an extensive simulation experiment. The simulation investigates how between- and within-segment variance in marketing mix effects, error variance, the number of weeks of data, and the number of stores impact the accuracy of marketing mix elasticities, model fit, and forecasting accuracy. Contrary to expectations, accommodating store-level heterogeneity does not improve the accuracy of marketing mix elasticities relative to the homogeneous SCAN*PRO model, suggesting that little may be lost by employing the original homogeneous SCAN*PRO model estimated using ordinary least squares. Improvements in fit and forecasting accuracy are also fairly modest. We pursue an explanation for this result since research in other contexts has shown clear advantages from assuming some type of heterogeneity in market response models. In an Afterthought section, we comment on the controversial nature of our result, distinguishing factors inherent to household-level data and associated models vs. general store-level data and associated models vs. the unique SCAN*PRO model specification.
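SCAN*PRO is a multiplicative store-sales model, so its price elasticity is read off as the slope of a log-log regression. The reduced sketch below shows only that slope-as-elasticity reading on hypothetical single-store data; the full model also includes promotion, display and store terms omitted here.

```python
# Reduced sketch: price elasticity as the OLS slope of log(sales) on
# log(price), in the spirit of a multiplicative sales model.
# The weekly (price, unit sales) pairs below are hypothetical.
import math

data = [(2.0, 100.0), (1.8, 123.5), (2.2, 82.6), (1.9, 110.8), (2.1, 90.7)]
xs = [math.log(p) for p, s in data]
ys = [math.log(s) for p, s in data]

n = len(data)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
slope = sxy / sxx
print(f"estimated price elasticity: {slope:.2f}")  # close to -2
```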

Relevance: 100.00%

Abstract:

Long period gratings (LPGs) were written into a D-shaped optical fibre that has an elliptical core with a W-shaped refractive index profile, and the first detailed investigation of such LPGs is presented. The LPGs' attenuation bands were found to be sensitive to the polarisation of the interrogating light, with a spectral separation of about 15 nm between the two orthogonal polarisation states. A finite element method was successfully used to model many of the behavioural features of the LPGs. In addition, two spectrally overlapping attenuation bands corresponding to orthogonal polarisation states were observed; modelling successfully reproduced this spectral feature. The spectral sensitivity of both orthogonal states was experimentally measured with respect to temperature and bending. These LPG devices produced blue and red wavelength shifts depending upon the orientation of the bend, with measured maximum sensitivities of -3.56 and +6.51 nm m, suggesting that this type of fibre LPG may be useful as a shape/bend orientation sensor with reduced errors associated with polarisation dependence. The use of neighbouring bands to discriminate between temperature and bending was also demonstrated, leading to an overall curvature error of ±0.14 m⁻¹ and an overall temperature error of ±0.3 °C, with a maximum polarisation dependence error of ±8×10⁻² m⁻¹ for curvature and ±5×10⁻² °C for temperature.
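The two-parameter discrimination described above amounts to inverting a 2×2 sensitivity matrix: each band's wavelength shift is a linear mix of temperature and curvature responses. In the sketch below, the curvature sensitivities are the two values quoted in the abstract, but the temperature coefficients and measured shifts are hypothetical.

```python
# Recovering temperature and curvature changes from the wavelength
# shifts of two LPG attenuation bands via a 2 x 2 sensitivity matrix.

def solve_2x2(a, b, c, d, y1, y2):
    """Solve [[a, b], [c, d]] @ [x1, x2] = [y1, y2] by Cramer's rule."""
    det = a * d - b * c
    return (y1 * d - y2 * b) / det, (a * y2 - c * y1) / det

# Band sensitivities (temperature in nm/degC is hypothetical;
# curvature in nm m taken from the abstract).
kT1, kR1 = 0.05, -3.56
kT2, kR2 = 0.06, 6.51

shift1, shift2 = -1.324, 2.724      # hypothetical measured band shifts, nm
dT, dR = solve_2x2(kT1, kR1, kT2, kR2, shift1, shift2)
print(f"dT = {dT:.2f} degC, curvature change = {dR:.2f} 1/m")
```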

Relevance: 100.00%

Abstract:

The studies presented in this thesis were carried out because of a lack of previous research with respect to (a) the habits and attitudes towards retinoscopy and (b) the relative accuracy of dedicated retinoscopes compared to combined types, in which changing the bulb allows use in spot or streak mode. An online British survey received responses from 298 optometrists. Decision tree analyses revealed that optometrists working in multiple practices tended to rely less on retinoscopy than those in the independent sector. Only half of the respondents used dynamic retinoscopy. The majority, however, agreed that retinoscopy was an important test. The university attended also influenced the type of retinoscope used and the use of autorefractors. Combined retinoscopes were used most by the more recently qualified optometrists, and few agreed that combined retinoscopes were less accurate. A trial indicated that combined and dedicated retinoscopes were equally accurate. Here, 4 optometrists (2 using spot and 2 using streak retinoscopes) tested one eye of 6 patients using combined and dedicated retinoscopes. This trial also demonstrated the utility of the relatively unknown '15 degrees of freedom' rule, which exploits replication in factorial ANOVA designs to achieve sufficient statistical power when recruitment is limited. An opportunistic international survey explored the use of retinoscopy by 468 practitioners (134 ophthalmologists, 334 optometrists) attending contact lens-related courses. Decision tree analyses found (a) no differences in the habits of optometrists and ophthalmologists, (b) differences in the reliance on retinoscopy and use of dynamic techniques across the participating countries and (c) some evidence that younger practitioners were using static and dynamic retinoscopy least often.
In conclusion, this study has revealed infrequent use of static and dynamic retinoscopy by some optometrists, even though these techniques may be the only means of determining refractive error and evaluating accommodation in patients with communication difficulties.
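The decision tree analyses above repeatedly choose the respondent attribute that best separates groups; a toy single-split (decision stump) sketch of that idea, on hypothetical survey rows, is:

```python
# Toy decision stump: pick the binary attribute whose split minimises
# the weighted Gini impurity, as a one-level version of the decision
# tree analyses described above.  All respondent data are hypothetical.

def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(rows, labels):
    """Return the attribute giving the purest two-way split."""
    n = len(rows)
    best = None
    for attr in rows[0]:
        left = [l for r, l in zip(rows, labels) if r[attr]]
        right = [l for r, l in zip(rows, labels) if not r[attr]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if best is None or score < best[1]:
            best = (attr, score)
    return best[0]

# Hypothetical respondents: practice setting and years since qualifying.
rows = [{"independent": 1, "recent": 0}, {"independent": 1, "recent": 1},
        {"independent": 0, "recent": 0}, {"independent": 0, "recent": 1},
        {"independent": 1, "recent": 0}, {"independent": 0, "recent": 1}]
labels = [1, 1, 0, 0, 1, 0]   # 1 = relies on retinoscopy
print(best_split(rows, labels))  # -> independent
```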

Relevance: 100.00%

Abstract:

Long period gratings (LPGs) were written into a D-shaped optical fibre, which has an elliptical core with a W-shaped refractive index profile. The LPG's attenuation bands were found to be sensitive to the polarisation of the interrogating light, with a spectral separation of about 15 nm between the two orthogonal polarisation states. In addition, two spectrally overlapping attenuation bands corresponding to orthogonal polarisation states were observed; modelling successfully reproduced this spectral feature. The spectral sensitivity of both orthogonal states was experimentally measured with respect to temperature, surrounding refractive index, and directional bending. These LPG devices produced blue and red wavelength shifts of the stop-bands due to bending in different directions. The measured spectral sensitivities to curvature, dλ/dR, ranged from -3.56 nm m to +6.51 nm m. The results obtained with these LPGs suggest that this type of fibre may be useful as a shape/bend sensor. It was also demonstrated that the neighbouring bands could be used to discriminate between temperature and bending, and that overlapping orthogonal polarisation attenuation bands can be used to minimise error associated with polarisation.

Relevance: 100.00%

Abstract:

Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting the functions of polymorphic molecules is important in order to design more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design, transplantation rejection, etc. Most of the existing exploratory approaches cannot analyse these datasets because of the large number of molecules with a high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high-dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods such as generative topographic mapping (GTM) become computationally intractable. We propose variants of these methods, in which we use log-transformations at certain steps of the expectation maximisation (EM) based parameter learning process, to make them tractable for high-dimensional datasets. We demonstrate these proposed variants on both a synthetic dataset and an electrostatic potential dataset of MHC class-I. We also propose to extend a latent trait model (LTM), suitable for visualising high-dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter learning process of a visualisation model. This LTM variant not only gives better visualisation, by modifying the projection map based on feature relevance, but also helps users to assess the significance of each feature. Another problem which is not addressed much in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way, where appropriate noise models are used for each type of data, in order to visualise mixed-type data in a single plot. We call this model a generalised GTM (GGTM).
We also propose to extend the GGTM to estimate feature saliencies while training a visualisation model; this is called GGTM with feature saliency (GGTM-FS). We demonstrate the effectiveness of these proposed models on both synthetic and real datasets. We evaluate visualisation quality using quality metrics such as a distance distortion measure and rank-based measures: trustworthiness, continuity, and mean relative rank errors with respect to data space and latent space. In cases where the labels are known, we also use the quality metrics of KL divergence and nearest-neighbour classification error in order to determine the separation between classes. We demonstrate the efficacy of these proposed models on both synthetic and real biological datasets, with a main focus on the MHC class-I dataset.
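The log-transformation trick mentioned above is typically a log-sum-exp computation: EM responsibilities are formed in log space so that the very small likelihoods that arise in high dimensions do not underflow. A minimal sketch:

```python
# Computing EM responsibilities in log space with log-sum-exp, so that
# tiny high-dimensional likelihoods do not underflow to zero.
import math

def logsumexp(log_vals):
    """Numerically stable log(sum(exp(v))) for a list of log-values."""
    m = max(log_vals)
    return m + math.log(sum(math.exp(v - m) for v in log_vals))

def responsibilities(log_likelihoods):
    """Normalised posterior component probabilities from log-likelihoods."""
    norm = logsumexp(log_likelihoods)
    return [math.exp(v - norm) for v in log_likelihoods]

# Naively exponentiating these would underflow double precision;
# in log space the normalised result is exact up to rounding.
r = responsibilities([-1000.0, -1001.0, -1002.0])
print([round(x, 3) for x in r])  # -> [0.665, 0.245, 0.09]
```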

Relevance: 100.00%

Abstract:

The emergence of digital imaging and of digital networks has made duplication of original artwork easier. Watermarking techniques, also referred to as digital signatures, sign images by introducing changes that are imperceptible to the human eye but easily recoverable by a computer program. Error-correcting codes are a good choice for correcting the errors that can arise when extracting the signature. In this paper, we present a scheme of error correction based on a combination of Reed-Solomon codes and another optimal linear code as the inner code. We have investigated the strength of the noise that this scheme can withstand for a fixed capacity of the image and various lengths of the signature. Finally, we compare our results with other error-correcting techniques that are used in watermarking. We have also created a computer program for image watermarking that uses the newly presented scheme for error correction.
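A toy stand-in for the layered coding idea: the paper concatenates a Reed-Solomon outer code with an inner linear code, while the sketch below uses only a 3× repetition inner code (much weaker than the actual scheme) to show how the inner layer repairs isolated bit flips in an extracted signature.

```python
# Toy inner code for a watermark signature: 3x repetition with
# majority-vote decoding, correcting any single bit flip per triple.
# This is an illustration of the inner-code layer only, not the
# Reed-Solomon-based scheme described in the paper.

def inner_encode(bits):
    """Repeat each signature bit three times."""
    return [b for b in bits for _ in range(3)]

def inner_decode(coded):
    """Majority vote over each triple of received bits."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

signature = [1, 0, 1, 1, 0, 0, 1, 0]
tx = inner_encode(signature)
tx[4] ^= 1          # channel noise: two isolated bits of the
tx[13] ^= 1         # embedded watermark are damaged
print(inner_decode(tx) == signature)  # -> True
```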

Relevance: 100.00%

Abstract:

We consider a model eigenvalue problem (EVP) in 1D, with periodic or semi-periodic boundary conditions (BCs). The discretization of this type of EVP by consistent mass finite element methods (FEMs) leads to the generalized matrix EVP Kc = λMc, where K and M are real, symmetric matrices with a certain (skew-)circulant structure. In this paper we restrict our attention to the use of a quadratic FE mesh. Explicit expressions for the eigenvalues of the resulting algebraic EVP are established. This leads to an explicit form for the approximation error in terms of the mesh parameter, which confirms the theoretical error estimates obtained in [2].
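The explicit-eigenvalue idea can be checked numerically; the sketch below uses linear rather than quadratic elements for brevity. For −u″ = λu on [0, 1] with periodic BCs, linear consistent-mass FEM gives circulant K and M, and the generalized EVP Kc = λMc has the closed-form eigenvalues below, approximating the continuous eigenvalues λ_k = (2πk)².

```python
# Closed-form eigenvalues of the generalized EVP K c = lambda M c for
# linear consistent-mass FEM of -u'' = lambda u with periodic BCs
# (the paper treats quadratic elements; linear ones are used here
# for brevity).  Circulant stencils: K ~ (-1, 2, -1)/h, M ~ h/6*(1, 4, 1).
import math

def fem_eigenvalue(k, n):
    """k-th discrete eigenvalue on a uniform periodic mesh of n cells."""
    h = 1.0 / n
    theta = 2.0 * math.pi * k / n
    return 6.0 * (1.0 - math.cos(theta)) / (h * h * (2.0 + math.cos(theta)))

n = 32
exact = (2.0 * math.pi) ** 2          # first nonzero continuous eigenvalue
approx = fem_eigenvalue(1, n)
print(f"exact {exact:.3f} vs FEM {approx:.3f}")   # error shrinks as h -> 0
```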

Relevance: 100.00%

Abstract:

2000 Mathematics Subject Classification: 62P10, 92D10, 92D30, 62F03

Relevance: 100.00%

Abstract:

2000 Mathematics Subject Classification: 94A12, 94A20, 30D20, 41A05.