960 results for Error in essence


Relevance:

90.00%

Publisher:

Abstract:

Distributed Brillouin sensing of strain and temperature works by making spatially resolved measurements of the position of the measurand-dependent extremum of the resonance curve associated with the scattering process in the weakly nonlinear regime. Typically, measurements of backscattered Stokes intensity (the dependent variable) are made at a number of predetermined fixed frequencies covering the design measurand range of the apparatus and combined to yield an estimate of the position of the extremum. The measurand can then be found because its relationship to the position of the extremum is assumed known. We present analytical expressions relating the relative error in the extremum position to experimental errors in the dependent variable. This is done for two cases: (i) a simple non-parametric estimate of the mean based on moments and (ii) the case in which a least squares technique is used to fit a Lorentzian to the data. The question of statistical bias in the estimates is discussed and in the second case we go further and present for the first time a general method by which the probability density function (PDF) of errors in the fitted parameters can be obtained in closed form in terms of the PDFs of the errors in the noisy data.
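
As an illustration of the second case, the sketch below fits a Lorentzian to noisy intensity samples on a fixed frequency grid and looks at the spread of the fitted extremum position over repeated noise realisations; the grid, linewidth, and noise level are invented for the example, and the abstract's closed-form error expressions are not reproduced here.

```python
# Minimal sketch (not the paper's analytical treatment): fit a Lorentzian to
# noisy Stokes-intensity samples on a fixed frequency grid and look at the
# spread of the fitted extremum position over repeated noise realisations.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, peak, f0, width, offset):
    """Resonance curve; f0 is the measurand-dependent extremum position."""
    return peak / (1.0 + ((f - f0) / width) ** 2) + offset

rng = np.random.default_rng(0)
freqs = np.linspace(10.60, 11.00, 41)      # hypothetical scan grid (GHz)
true = dict(peak=1.0, f0=10.82, width=0.030, offset=0.05)

f0_fits = []
for _ in range(200):                       # repeat to see the error spread
    noisy = lorentzian(freqs, **true) + rng.normal(0.0, 0.02, freqs.size)
    popt, _ = curve_fit(lorentzian, freqs, noisy, p0=[0.9, 10.8, 0.04, 0.0])
    f0_fits.append(popt[1])

rel_err = np.std(f0_fits) / true["f0"]
print(f"relative error in extremum position: {rel_err:.2e}")
```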

Relevance:

90.00%

Publisher:

Abstract:

The aim of this research is to investigate how risk management in a healthcare organisation can be supported by knowledge management. The subject of the research is the development and management of existing logs called "risk registers", through the specific risk management processes employed in an N.H.S. (Foundation) Trust in England, in the U.K. The existing literature on organisational risk management stresses the importance of knowledge for the effective implementation of risk management programmes, claiming that the knowledge used to perceive risk is biased by the beliefs of the individuals and groups involved in risk management and is therefore incomplete. Further, the literature on organisational knowledge management presents several definitions and categorisations of knowledge and approaches for manipulating knowledge in the organisational context as a whole; however, there is no specific approach to dealing with knowledge in the course of organisational risk management. The research is based on a single case study of an N.H.S. (Foundation) Trust and is influenced by principles of interpretivism and the frame of mind of Soft Systems Methodology (S.S.M.), investigating the management of risk registers from the viewpoint of the people involved in the situation. The data revealed that knowledge about risks and about the existing risk management policy and procedures is situated in several locations in the Trust and is neither consolidated nor present where and when it is required. This study proposes a framework that identifies the knowledge required for each of the risk management processes and outlines methods for converting this knowledge, based on the SECI knowledge conversion model, together with activities to facilitate knowledge conversion, so that knowledge is used effectively for the development of risk registers and the monitoring of risks throughout the Trust under study. The study has theoretical impact in the management science literature, as it addresses the issue of incomplete knowledge raised in the risk management literature using concepts from the knowledge management literature, such as the knowledge conversion model. In essence, combining the required risk and risk management knowledge with the required type of communication for risk management yields the proposed methods for supporting each risk management process for the risk registers. Further, the indication of the importance of knowledge in risk management, and the presentation of a framework that consolidates the knowledge required for the risk management processes and proposes ways of communicating this knowledge within a healthcare organisation, have practical impact on the management of healthcare organisations.

Relevance:

90.00%

Publisher:

Abstract:

The problem of learning by examples in ultrametric committee machines (UCMs) is studied within the framework of statistical mechanics. Using the replica formalism we calculate the average generalization error in UCMs with L hidden layers and for a large enough number of units. In most of the regimes studied we find that the generalization error, as a function of the number of examples presented, develops a discontinuous drop at a critical value of the load parameter. We also find that when L>1 a number of teacher networks with the same number of hidden layers and different overlaps induce learning processes with the same critical points.
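
The replica calculation itself cannot be reconstructed from the abstract; for orientation only, the quantities referred to are, in standard notation (our assumption, not the paper's), the load parameter and the generalization error of a student network trained on p examples of a teacher, with the reported discontinuous drop occurring at a critical load.

```latex
% Generic definitions assumed here (standard notation, not taken from the paper):
% N input units, p training examples, student \sigma and teacher \tau.
\begin{align}
  \alpha &= \frac{p}{N} \quad \text{(load parameter)}, \\
  \epsilon_g(\alpha) &= \Pr_{\mathbf{x}}\!\left[\sigma(\mathbf{x}) \neq \tau(\mathbf{x})\right].
\end{align}
% A first-order transition corresponds to a discontinuous drop of
% \epsilon_g(\alpha) at a critical load \alpha_c.
```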

Relevance:

90.00%

Publisher:

Abstract:

Determining an appropriate research methodology is an important element of any research study, especially a doctoral study. It concerns the approach to the entire research process, from theoretical underpinnings through data collection and analysis to developing solutions for the problems investigated. Research methodology is, in essence, centred on the problems to be investigated and therefore varies according to those problems. Identifying the methodology that best suits the research at hand is thus important, not only because it helps achieve the set objectives of the research but also because it establishes the credibility of the work. Research philosophy, approach, strategy, choice, and techniques are inherent components of the methodology. The research strategy provides the overall direction of the research, including the process by which it is conducted; case study, experiment, survey, action research, grounded theory, and ethnography are examples of such strategies. A case study is documented as an empirical inquiry that investigates a contemporary phenomenon within its real-life context, especially when the boundaries between phenomenon and context are not clearly evident. Case study was adopted as the overarching research strategy in a doctoral study investigating the resilience of construction Small and Medium-sized Enterprises (SMEs) in the UK to extreme weather events (EWEs). The research sought to investigate how construction SMEs are affected by EWEs, how they respond to the risk of EWEs, and the means of enhancing their resilience to future EWEs. By comparing and contrasting the case study strategy with the alternatives available, it is argued that it benefits the study in achieving the set objectives of the research and answering the research questions raised. It is also claimed that the selected strategy contributes towards addressing the call for improved methodological pluralism in construction management research, enhancing understanding of the complex network of relationships pertinent to the industry and the phenomenon being studied.

Relevance:

90.00%

Publisher:

Abstract:

Background: It has been estimated that medication error harms 1-2% of patients admitted to general hospitals. There has been no previous systematic review of the incidence, cause or type of medication error in mental healthcare services. Methods: A systematic literature search was undertaken for studies that examined the incidence or cause of medication error in one or more stages of the medication-management process in the setting of a community or hospital-based mental healthcare service. The results were examined in the context of each study's design and the denominator used. Results: All studies examined medication management processes, as opposed to outcomes. The reported rate of error was highest in studies that retrospectively examined drug charts, intermediate in those that relied on reporting by pharmacists to identify error, and lowest in those that relied on organisational incident reporting systems. Only a few of the errors identified by the studies caused actual harm, mostly because they were detected and remedial action was taken before the patient received the drug. The focus of the research was on inpatients and on prescriptions dispensed by mental health pharmacists. Conclusion: Research on medication error in mental healthcare is limited. In particular, very little is known about the incidence of error in non-hospital settings or about the harm it causes. Evidence from other sources indicates that a substantial number of adverse drug events are caused by psychotropic drugs. Some of these are preventable and may therefore be due to medication error. On the basis of this, and of features of the organisation of mental healthcare that might predispose to medication error, priorities for future research are suggested.

Relevance:

90.00%

Publisher:

Abstract:

Portfolio analysis has existed, perhaps, for as long as people have thought about making rational decisions on the use of limited resources. However, the emergence of portfolio analysis as a discipline can be dated precisely to the publication of Harry Markowitz's pioneering work (Markowitz, H., "Portfolio Selection") in 1952. The model proposed in that work, simple enough in essence, captured the basic features of the financial market from the investor's point of view and supplied the investor with a tool for making rational investment decisions. The central problem in Markowitz's theory is the choice of a portfolio, that is, a set of operations. In evaluating both individual operations and portfolios, two major factors are considered: the profitability and the risk of the operations and of the portfolios, with risk receiving a quantitative estimate. An essential element of the theory is the treatment of the mutual correlations between the returns of operations. Accounting for these correlations allows effective diversification of the portfolio, leading to a substantial decrease in portfolio risk compared with the risk of the operations included in it. Finally, the quantitative characterization of the basic investment properties allows the problem of choosing an optimal portfolio to be defined and solved as a quadratic optimization problem.
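
To make the closing remark concrete, the following sketch solves the equality-constrained mean-variance problem (minimise portfolio variance subject to a target return and full investment) via its KKT linear system; the expected returns, covariances, and target are invented illustrative numbers, not data from any study.

```python
# Minimal sketch of the Markowitz mean-variance problem: choose weights w
# minimising w' Sigma w subject to a target expected return and full
# investment (short selling allowed, so the equality-constrained quadratic
# programme has a closed-form KKT solution). Numbers are made up.
import numpy as np

mu = np.array([0.08, 0.12, 0.15])                 # expected returns
sigma = np.array([[0.04, 0.01, 0.00],             # covariance matrix
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
target = 0.11                                      # required portfolio return

# KKT system for: min 1/2 w'Sigma w  s.t.  mu'w = target, 1'w = 1
A = np.vstack([mu, np.ones_like(mu)])              # constraint matrix (2 x n)
kkt = np.block([[sigma, A.T],
                [A, np.zeros((2, 2))]])
rhs = np.concatenate([np.zeros(len(mu)), [target, 1.0]])
w = np.linalg.solve(kkt, rhs)[:len(mu)]

print("weights:", np.round(w, 3))
print("portfolio variance:", float(w @ sigma @ w))
```

Because only equality constraints are used, the quadratic programme reduces to a single linear solve; adding no-short-sale constraints would require a proper QP solver.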

Relevance:

90.00%

Publisher:

Abstract:

Expert systems (ES) are a class of computer programs developed by researchers in artificial intelligence. In essence, they are programs made up of a set of rules that analyze information about a specific class of problems, provide an analysis of those problems and, depending upon their design, recommend a course of user action in order to implement corrections. ES are computerized tools designed to enhance the quality and availability of the knowledge required by decision makers in a wide range of industries. Decision-making is important for the financial institutions involved because of the high level of risk associated with wrong decisions. The process of making decisions is complex and unstructured, and the existing models for decision-making do not capture learned knowledge well enough. In this study, we analyze the beneficial aspects of using ES for the decision-making process.
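
As a toy illustration of the "set of rules" structure described above (the rule names, thresholds, and loan scenario are invented, not taken from the study), a minimal rule engine might look like this:

```python
# Toy illustration of a rule-based expert system: each rule inspects a case
# and either fires a recommendation or stays silent. Names and thresholds
# are invented for the example.
from dataclasses import dataclass

@dataclass
class LoanCase:
    credit_score: int
    debt_to_income: float   # fraction of income already committed to debt
    missed_payments: int

def rule_low_score(case):
    if case.credit_score < 580:
        return "reject: credit score below minimum"

def rule_high_leverage(case):
    if case.debt_to_income > 0.45:
        return "refer to analyst: debt-to-income above 45%"

def rule_payment_history(case):
    if case.missed_payments >= 3:
        return "refer to analyst: repeated missed payments"

RULES = [rule_low_score, rule_high_leverage, rule_payment_history]

def evaluate(case):
    """Run every rule; collect the recommendations that fired."""
    findings = [msg for rule in RULES if (msg := rule(case))]
    return findings or ["approve: no rule raised an objection"]

print(evaluate(LoanCase(credit_score=640, debt_to_income=0.52, missed_payments=1)))
```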

Relevance:

90.00%

Publisher:

Abstract:

Purpose: To investigate the relationship between pupil diameter and refractive error and how refractive correction, target luminance, and accommodation modulate this relationship. Methods: Sixty emmetropic, myopic, and hyperopic subjects (age range, 18 to 35 years) viewed an illuminated target (luminance: 10, 100, 200, 400, 1000, 2000, and 4100 cd/m2) within a Badal optical system, at 0 diopters (D) and −3 D vergence, with and without refractive correction. Refractive error was corrected using daily disposable contact lenses. Pupil diameter and accommodation were recorded continuously using a commercially available photorefractor. Results: No significant difference in pupil diameter was found between the refractive groups at 0 D or −3 D target vergence, in the corrected or uncorrected conditions. As expected, pupil diameter decreased with increasing luminance. Target vergence had no significant influence on pupil diameter. In the corrected condition, at 0 D target vergence, the accommodation response was similar in all refractive groups. At −3 D target vergence, the emmetropic and myopic groups accommodated significantly more than the hyperopic group at all luminance levels. There was no correlation between accommodation response and pupil diameter or refractive error in any refractive group. In the uncorrected condition, the accommodation response was significantly greater in the hyperopic group than in the myopic group at all luminance levels, particularly for near viewing. In the hyperopic group, the accommodation response was significantly correlated with refractive error but not pupil diameter. In the myopic group, accommodation response level was not correlated with refractive error or pupil diameter. Conclusions: Refractive error has no influence on pupil diameter, irrespective of refractive correction or accommodative demand. This suggests that the pupil is controlled by the pupillary light reflex and is not driven by retinal blur.

Relevance:

90.00%

Publisher:

Abstract:

It has never been easy for manufacturing companies to understand their confidence level in terms of how accurately, and to what degree of flexibility, parts can be made. This brings uncertainty in finding the most suitable manufacturing method as well as in controlling their product and process verification systems. The aim of this research is to develop a system for capturing the company’s knowledge and expertise and then reflecting it in an MRP (Manufacturing Resource Planning) system. A key activity here is measuring manufacturing and machining capabilities to a reasonable confidence level. For this purpose an in-line control measurement system is introduced to the company. Using SPC (Statistical Process Control) not only helps to predict the trend in the manufacturing of parts but also minimises human error in measurement. A Gauge R&R (Repeatability and Reproducibility) study identifies problems in measurement systems. Measurement is like any other process in terms of variability, and reducing this variation via an automated machine probing system helps to avoid defects in future products.

Developments in the aerospace, nuclear, and oil and gas industries demand materials with high performance and high temperature resistance under corrosive and oxidising environments. Superalloys were developed in the latter half of the 20th century as high-strength materials for such purposes. The same characteristics make superalloys difficult-to-cut alloys when it comes to forming and machining. Furthermore, due to the sensitivity of superalloy applications, in many cases they must be manufactured to tight tolerances. In addition, superalloys, specifically nickel-based ones, have unique features such as low thermal conductivity, owing to the high amount of nickel in their composition. This causes a high surface temperature on the workpiece at the machining stage, which leads to deformation in the final product.

As in every process, material variations have a significant impact on machining quality. The main causes of variation are chemical composition and mechanical hardness. The non-uniform distribution of metal elements is a major source of variation in metallurgical structures. Different heat treatment standards are designed for processing the material to the desired hardness levels based on the application. In order to take corrective actions, a study of the material aspects of superalloys has been conducted, in which samples from different batches of material were analysed. This involved material preparation for microscopy analysis and examination of the effect of chemical composition on hardness (before and after heat treatment). Some of the results are discussed and presented in this paper.
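
As a sketch of the SPC step mentioned above (synthetic measurements and standard X-bar/R chart constants for subgroups of five; not the company's actual data or control plan):

```python
# Illustrative SPC calculation: X-bar and R control limits from rational
# subgroups, using the standard chart constants for subgroup size 5
# (A2 = 0.577, D3 = 0, D4 = 2.114). Measurements are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
subgroups = rng.normal(loc=25.00, scale=0.02, size=(20, 5))  # 20 samples of 5 parts

xbar = subgroups.mean(axis=1)
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
xbar_bar, r_bar = xbar.mean(), ranges.mean()
A2, D3, D4 = 0.577, 0.0, 2.114

print(f"X-bar chart: CL={xbar_bar:.4f}, UCL={xbar_bar + A2*r_bar:.4f}, "
      f"LCL={xbar_bar - A2*r_bar:.4f}")
print(f"R chart:     CL={r_bar:.4f}, UCL={D4*r_bar:.4f}, LCL={D3*r_bar:.4f}")
```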

Relevance:

90.00%

Publisher:

Abstract:

The current study was designed to build on and extend the existing knowledge base of factors that cause, maintain, and influence child molestation. Theorized links among the type of offender and the offender's levels of moral development and social competence in the perpetration of child molestation were investigated. The conceptual framework for the study is based on the cognitive developmental stages of moral development as proposed by Kohlberg, the unified theory, or Four-Preconditions Model, of child molestation as proposed by Finkelhor, and the Information-Processing Model of Social Skills as proposed by McFall. The study sample consisted of 127 adult male child molesters participating in outpatient group therapy. All subjects completed a Self-Report Questionnaire which included questions designed to obtain relevant demographic data, questions similar to those used by the researchers for the Massachusetts Treatment Center: Child Molester Typology 3's social competency dimension, the Defining Issues Test (DIT) short form, the Social Avoidance and Distress Scale (SADS), the Rathus Assertiveness Schedule (RAS), and the Questionnaire Measure of Empathic Tendency (Empathy Scale). Data were analyzed utilizing confirmatory factor analysis, t-tests, and chi-square statistics. Partial support was found for the hypothesis that moral development is a separate but correlated construct from social competence. As predicted, although the actual mean score differences were small, a statistically significant difference was found in the current study between the mean DITP scores of the subject sample and that of the general male population, suggesting that child molesters, as a group, function at a lower level of moral development than does the general male population, and the situational offenders in the study sample demonstrated a statistically significantly higher level of moral development than the preferential offenders. The data did not support the hypothesis that situational offenders will demonstrate lower levels of social competence than preferential offenders. Relatively little significance is placed on this finding, however, because the measure for the social competency variable was likely subject to considerable measurement error in that the items used as indicators were not clearly defined. The last hypothesis, which involved the potential differences in social anxiety, assertion skills, and empathy between the situational and preferential offender types, was not supported by the data.

Relevance:

90.00%

Publisher:

Abstract:

Optical imaging is an emerging technology for non-invasive breast cancer diagnostics. In recent years, portable and patient-comfortable hand-held optical imagers have been developed for two-dimensional (2D) tumor detection. However, these imagers are not capable of three-dimensional (3D) tomography because they cannot register the positional information of the hand-held probe onto the imaged tissue. A hand-held optical imager has been developed in our Optical Imaging Laboratory with 3D tomography capabilities, as demonstrated in tissue phantom studies. The overall goal of my dissertation is the translation of our imager to the clinical setting for 3D tomographic imaging in human breast tissues. A systematic experimental approach was designed and executed as follows: (i) fast 2D imaging, (ii) coregistered imaging, and (iii) 3D tomographic imaging studies. (i) Fast 2D imaging was initially demonstrated in tissue phantoms (1% Liposyn solution) and in vitro (minced chicken breast and 1% Liposyn). A 0.45 cm3 fluorescent target at 1:0 contrast ratio was detectable up to 2.5 cm deep. Fast 2D imaging experiments performed in vivo with healthy female subjects also detected a 0.45 cm3 fluorescent target superficially placed ∼2.5 cm under the breast tissue. (ii) Coregistered imaging was automated and validated in phantoms with ∼0.19 cm error in the probe’s positional information. Coregistration also improved the target depth detection to 3.5 cm, using a multi-location imaging approach. Coregistered imaging was further validated in vivo, although the error in the probe’s positional information increased to ∼0.9 cm (subject to soft-tissue deformation and movement). (iii) Three-dimensional tomography studies were successfully demonstrated in vitro using 0.45 cm3 fluorescent targets. The feasibility of 3D tomography was demonstrated for the first time in breast tissues using the hand-held optical imager, wherein a superficially placed 0.45 cm3 fluorescent target was recovered along with artifacts. Diffuse optical imaging studies were performed in two breast cancer patients with invasive ductal carcinoma. The images showed greater absorption at the tumor sites (as observed from x-ray mammography, ultrasound, and/or MRI). In summary, my dissertation demonstrated the potential of a hand-held optical imager for 2D breast tumor detection and 3D breast tomography, holding promise for extensive clinical translational efforts.
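
A minimal sketch of how a positional-error figure of the kind quoted above (∼0.19 cm in phantoms, ∼0.9 cm in vivo) could be computed, assuming it is the RMS Euclidean distance between tracked and reference probe positions; the coordinates below are made-up placeholders.

```python
# Sketch only: RMS Euclidean distance between the tracker's reported probe
# positions and known reference positions (placeholder coordinates in cm).
import numpy as np

reference = np.array([[0.0, 0.0, 0.0],
                      [2.0, 0.0, 0.0],
                      [2.0, 2.0, 0.0],
                      [0.0, 2.0, 0.0]])
tracked = reference + np.random.default_rng(2).normal(0.0, 0.15, reference.shape)

rms_error = np.sqrt(np.mean(np.sum((tracked - reference) ** 2, axis=1)))
print(f"coregistration error: {rms_error:.2f} cm")
```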

Relevance:

90.00%

Publisher:

Abstract:

This dissertation analyzes a variety of religious texts such as catechisms, confession manuals, ecclesiastical legislation, saints' lives, and sermons to determine the definitions of orthodoxy held by the Spanish clergy and the origins of such visions. The conclusion posited by this research was that there was a definite continuity between the process of Catholic reform in Spain and the process of Catholic expansion into the New World, in that the objectives and concerns of the Spanish clergy in Europe and the New World were very similar. This dissertation also analyzes sources that predated the Council of Trent and demonstrates that, within the Iberian context, the Council of Trent cannot be used as a starting date for the attempts at Catholic reform. In essence, this work concludes that the Spanish clergy's activities were influenced by humanist concepts of models and model behaviour, which is reflected in their attempt to form model Catholics in Spain and the New World and in their impulse to produce written texts as standards.

Relevance:

90.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, even though uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in the value of n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
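
For reference, the latent-class (PARAFAC) factorization of a categorical probability mass function referred to here has the standard form below (k latent classes; notation ours); the Tucker form replaces the single shared index h with a core array indexed separately for each variable.

```latex
% Standard latent-class / PARAFAC factorization of a categorical pmf
% (notation ours, for reference only).
\begin{equation}
  \Pr(y_1 = c_1, \ldots, y_p = c_p)
    = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},
  \qquad \sum_{h=1}^{k} \nu_h = 1, \quad \sum_{c} \lambda^{(j)}_{h c} = 1.
\end{equation}
```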

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
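
The chapter's convergence-rate and finite-sample bounds cannot be reproduced from the abstract; the sense of "optimal" assumed here is the usual one, namely that among Gaussians the minimizer of the Kullback-Leibler divergence from the exact posterior matches its first two moments:

```latex
% The sense of "optimal" assumed here (not the chapter's specific bounds):
% among Gaussian distributions q, minimizing KL(pi || q) matches the
% posterior's mean and covariance.
\begin{equation}
  \hat{q}
    = \operatorname*{arg\,min}_{q = \mathcal{N}(m,\,\Sigma)}
        \mathrm{KL}\!\left(\pi \,\middle\|\, q\right)
    = \mathcal{N}\!\bigl(\mathbb{E}_{\pi}[\theta],\, \mathrm{Cov}_{\pi}(\theta)\bigr).
\end{equation}
```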

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov Chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov Chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
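
As a toy instance of the kind of approximating kernel analyzed in Chapter 6 (not the chapter's framework itself), the sketch below runs a random-walk Metropolis chain for a Normal mean in which each accept/reject step evaluates the log-likelihood on a random subsample scaled up by n/m; the data, step size, and subsample size are arbitrary illustrative choices.

```python
# Toy approximating kernel: random-walk Metropolis for a Normal mean where
# the log-likelihood is estimated from a random subset of the data (scaled
# by n/m). A sketch of the approximation only, with a flat prior.
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(1.5, 1.0, size=100_000)          # synthetic observations
n, m = data.size, 1_000                             # full size vs subsample size

def approx_loglik(theta):
    sub = rng.choice(data, size=m, replace=False)
    return (n / m) * np.sum(-0.5 * (sub - theta) ** 2)   # unit-variance Normal

theta, current = 0.0, approx_loglik(0.0)
samples = []
for _ in range(5_000):
    prop = theta + rng.normal(0.0, 0.05)
    prop_ll = approx_loglik(prop)
    if np.log(rng.uniform()) < prop_ll - current:
        theta, current = prop, prop_ll
    samples.append(theta)

print("approximate posterior mean:", np.mean(samples[1_000:]))
```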

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
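
To make concrete which sampler the mixing results concern, here is a compact sketch of the truncated-Normal data augmentation (Albert-Chib) Gibbs sampler for probit regression, with a flat prior on the coefficients to keep the conditionals simple and synthetic rare-event data; it illustrates the standard algorithm and is not code from the thesis.

```python
# Sketch of the truncated-Normal data-augmentation (Albert-Chib) Gibbs
# sampler for probit regression. Flat prior on beta; synthetic data with
# rare successes to mimic the setting studied.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)
n, p = 5_000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.5])                   # intercept makes successes rare
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
beta = np.zeros(p)
draws = []
for _ in range(2_000):
    # 1) z_i | beta, y_i: Normal(x_i'beta, 1) truncated to (0, inf) if y_i = 1,
    #    and to (-inf, 0) if y_i = 0.
    mean = X @ beta
    lower = np.where(y == 1, 0.0, -np.inf)
    upper = np.where(y == 1, np.inf, 0.0)
    z = truncnorm.rvs(lower - mean, upper - mean, loc=mean, scale=1.0,
                      random_state=rng)
    # 2) beta | z: Normal((X'X)^{-1} X'z, (X'X)^{-1}) under a flat prior.
    beta = rng.multivariate_normal(XtX_inv @ (X.T @ z), XtX_inv)
    draws.append(beta.copy())

print("posterior mean of beta:", np.round(np.mean(draws[500:], axis=0), 2))
```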