932 results for ensemble empirical mode decomposition with canonical correlation analysis (EEMD-CCA)


Relevance: 100.00%

Abstract:

Aim: To examine the use of image analysis to quantify changes in ocular physiology. Method: A purpose designed computer program was written to objectively quantify bulbar hyperaemia, tarsal redness, corneal staining and tarsal staining. Thresholding, colour extraction and edge detection paradigms were investigated. The repeatability (stability) of each technique to changes in image luminance was assessed. A clinical pictorial grading scale was analysed to examine the repeatability and validity of the chosen image analysis technique. Results: Edge detection using a 3 × 3 kernel was found to be the most stable to changes in image luminance (2.6% over a +60 to -90% luminance range) and correlated well with the CCLRU scale images of bulbar hyperaemia (r = 0.96), corneal staining (r = 0.85) and the staining of palpebral roughness (r = 0.96). Extraction of the red colour plane demonstrated the best correlation-sensitivity combination for palpebral hyperaemia (r = 0.96). Repeatability variability was <0.5%. Conclusions: Digital imaging, in conjunction with computerised image analysis, allows objective, clinically valid and repeatable quantification of ocular features. It offers the possibility of improved diagnosis and monitoring of changes in ocular physiology in clinical practice. © 2003 British Contact Lens Association. Published by Elsevier Science Ltd. All rights reserved.
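
A minimal sketch of the kind of analysis described, assuming an RGB photograph already loaded as a NumPy array; the 3 × 3 kernel and the redness index below are generic illustrative choices, not the study's exact parameters.

```python
import numpy as np
from scipy.ndimage import convolve

def edge_density(gray):
    """Fraction of pixels flagged by a generic 3x3 edge-detection kernel."""
    kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)
    edges = np.abs(convolve(gray.astype(float), kernel, mode="reflect"))
    return float((edges > edges.mean() + 2 * edges.std()).mean())

def redness_index(rgb):
    """Mean relative strength of the red colour plane, a simple hyperaemia proxy."""
    r, g, b = (rgb[..., k].astype(float) for k in range(3))
    return float((r / (r + g + b + 1e-9)).mean())

# Synthetic stand-in image; in practice `rgb` would be a photograph of the eye.
rgb = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
print(edge_density(rgb.mean(axis=-1)), redness_index(rgb))
```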

Relevance: 100.00%

Abstract:

Competing approaches exist for controlling phase noise and frequency tuning in mode-locked lasers, but no judgement of their pros and cons based on a comparative analysis has yet been presented. Here, we compare results of hybrid mode-locking, hybrid mode-locking with optical injection seeding, and sideband optical injection seeding performed on the same quantum dot laser under identical bias conditions. We achieved the lowest integrated jitter of 121 fs and a record-large radio-frequency (RF) tuning range of 342 MHz with sideband injection seeding of the passively mode-locked laser. The combination of hybrid mode-locking with optical injection locking resulted in 240 fs integrated jitter and an RF tuning range of 167 MHz. Using conventional hybrid mode-locking, the integrated jitter and the RF tuning range were 620 fs and 10 MHz, respectively. © 2014 AIP Publishing LLC.
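
For reference, integrated jitter figures of this kind are commonly obtained by integrating the single-sideband phase-noise spectrum $L(f)$ of the repetition-rate RF line; one standard definition (an assumption here, since the paper's exact convention is not quoted in the abstract) is

$$\sigma_{t}=\frac{1}{2\pi f_{\mathrm{rep}}}\sqrt{2\int_{f_{1}}^{f_{2}}L(f)\,\mathrm{d}f},$$

where $f_{\mathrm{rep}}$ is the pulse repetition frequency and $[f_{1},f_{2}]$ is the offset-frequency integration band.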

Relevance: 100.00%

Abstract:

The use of ex-transportation battery systems (i.e. second-life EV/HEV batteries) in grid applications is an emerging field of study. A hybrid battery scheme offers a more practical approach in second-life battery energy storage systems because battery modules may come from different sources/vehicle manufacturers, depending on the second-life supply chain, and have different characteristics, e.g. voltage levels, maximum capacity and levels of degradation. Recent research has suggested a dc-side modular multilevel converter topology to integrate these hybrid batteries with a grid-tie inverter. Depending on the battery module characteristics, the dc-side modular converter can adopt different modes, such as boost, buck or boost-buck, to suitably transfer power from the battery to the grid. These modes involve different switching techniques, control ranges and efficiencies, which gives a system designer a choice of operational mode. This paper presents an analysis and comparative study of all the modes of the converter, along with their switching performance, to clarify the relative advantages and disadvantages of each mode and to help select the most suitable converter mode. A detailed study of all the converter modes and thorough experimental results from a multi-modular converter prototype based on hybrid batteries are presented to validate the analysis.
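
As a rough reminder of what the converter modes imply for the battery-to-dc-link voltage transfer, the idealised continuous-conduction textbook gains (not the paper's full modular control law) are

$$V_{dc}=\frac{V_{bat}}{1-D}\ \text{(boost)},\qquad V_{dc}=D\,V_{bat}\ \text{(buck)},\qquad V_{dc}=\frac{D_{1}}{1-D_{2}}\,V_{bat}\ \text{(cascaded boost-buck)},$$

with duty cycles $D, D_{1}, D_{2}\in(0,1)$.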

Relevance: 100.00%

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2014.

Relevance: 100.00%

Abstract:

Our sleep timing preference, or chronotype, is a manifestation of our internal biological clock. Variation in chronotype has been linked to sleep disorders, cognitive and physical performance, and chronic disease. Here we perform a genome-wide association study of self-reported chronotype within the UK Biobank cohort (n=100,420). We identify 12 new genetic loci that implicate known components of the circadian clock machinery and point to previously unstudied genetic variants and candidate genes that might modulate core circadian rhythms or light-sensing pathways. Pathway analyses highlight central nervous and ocular systems and fear-response-related processes. Genetic correlation analysis suggests chronotype shares underlying genetic pathways with schizophrenia, educational attainment and possibly BMI. Further, Mendelian randomization suggests that evening chronotype relates to higher educational attainment. These results not only expand our knowledge of the circadian system in humans but also expose the influence of circadian characteristics over human health and life-history variables such as educational attainment.
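
A minimal sketch of the core association scan in such a GWAS, assuming a phenotype vector, a genotype-dosage matrix and a few covariates; the data and variable names below are hypothetical placeholders, not UK Biobank fields.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, m = 1_000, 50                                   # individuals, variants (toy sizes)
pheno = rng.normal(size=n)                         # e.g. a continuous chronotype score
covars = rng.normal(size=(n, 2))                   # e.g. age and sex, placeholders
dosage = rng.binomial(2, 0.3, size=(n, m)).astype(float)  # 0/1/2 allele counts

results = []
for j in range(m):
    X = sm.add_constant(np.column_stack([dosage[:, j], covars]))
    fit = sm.OLS(pheno, X).fit()
    results.append((j, fit.params[1], fit.pvalues[1]))     # per-variant beta, p-value

# Variants passing a genome-wide threshold would then feed pathway,
# genetic-correlation and Mendelian-randomization analyses.
print(sorted(results, key=lambda r: r[2])[:5])
```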

Relevance: 100.00%

Abstract:

We have theoretically and experimentally investigated the dual-peak feature of tilted fiber gratings with an excessively tilted structure (termed Ex-TFGs). We explain the dual-peak feature by solving the eigenvalue equations for the TM0m and TE0m modes of a circular waveguide, in which the TE (transverse electric) and TM (transverse magnetic) core modes are coupled into TE and TM cladding modes, respectively. Experimentally, we have verified that the peak at the shorter wavelength is due to TM mode coupling whereas the peak at the longer wavelength arises from TE mode coupling when linearly polarized light is launched into the Ex-TFG. We have also investigated the peak separation of the TE and TM cladding modes for different surrounding-medium refractive indices (SRI), revealing that the dual-peak separation decreases as the SRI increases, in close agreement with the theoretical analysis.
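
The characteristic (eigenvalue) equations solved for the azimuthally symmetric modes of a step-index circular waveguide take the standard form below, written here for inner index $n_{1}$, outer index $n_{2}$ and radius $a$ (the paper applies the analogous matching at the cladding/surrounding boundary):

$$\text{TE}_{0m}:\ \frac{J_{1}(ua)}{u\,J_{0}(ua)}+\frac{K_{1}(wa)}{w\,K_{0}(wa)}=0,\qquad
\text{TM}_{0m}:\ \frac{n_{1}^{2}\,J_{1}(ua)}{u\,J_{0}(ua)}+\frac{n_{2}^{2}\,K_{1}(wa)}{w\,K_{0}(wa)}=0,$$

with $u^{2}=n_{1}^{2}k_{0}^{2}-\beta^{2}$ and $w^{2}=\beta^{2}-n_{2}^{2}k_{0}^{2}$; the slightly different TE and TM roots $\beta$ are what give rise to the dual resonance peaks.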

Relevance: 100.00%

Abstract:

Correct specification of the simple location quotients in regionalizing the national direct requirements table is essential to the accuracy of regional input-output multipliers. The purpose of this research is to examine the relative accuracy of these multipliers when earnings, employment, number of establishments, and payroll data specify the simple location quotients. For each specification type, I derive a column of total output multipliers and a column of total income multipliers. These multipliers are based on the 1987 benchmark input-output accounts of the U.S. economy and 1988-1992 state of Florida data. Error sign tests and Standardized Mean Absolute Deviation (SMAD) statistics indicate that the output multiplier estimates overestimate the output multipliers published by the Department of Commerce-Bureau of Economic Analysis (BEA) for the state of Florida. In contrast, the income multiplier estimates underestimate the BEA's income multipliers. For a given multiplier type, the Spearman-rank correlation analysis shows that the multiplier estimates and the BEA multipliers have statistically different rank orderings of row elements. The above tests also find no significant differences, in either size or ranking distributions, among the vectors of multiplier estimates.
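
A compact sketch of the SLQ regionalization and multiplier computation being compared, using toy numbers; the `regional` and `national` activity vectors could be earnings, employment, establishments or payroll, as in the study.

```python
import numpy as np

# National direct-requirements (technical coefficients) matrix for 3 toy sectors.
A_nat = np.array([[0.10, 0.05, 0.02],
                  [0.20, 0.15, 0.10],
                  [0.05, 0.10, 0.08]])

# Regional and national activity by sector (earnings, employment, etc.).
regional = np.array([30.0, 50.0, 20.0])
national = np.array([400.0, 300.0, 300.0])

# Simple location quotient: regional share of sector i over its national share.
slq = (regional / regional.sum()) / (national / national.sum())

# Regionalize supplying-sector rows: scale down where SLQ < 1, cap at 1 otherwise.
A_reg = A_nat * np.minimum(slq, 1.0)[:, None]

# Total output multipliers are the column sums of the Leontief inverse.
output_multipliers = np.linalg.inv(np.eye(3) - A_reg).sum(axis=0)
print(slq.round(3), output_multipliers.round(3))
```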

Relevance: 100.00%

Abstract:

This study explores two important aspects of entrepreneurship, liquidity constraints and serial entrepreneurship, with an additional analysis of occupational choice among wage workers. In the first essay, I revisit the question of whether entrepreneurs face liquidity constraints in business formation. The principal challenge is that wealth is correlated with unobserved ability, and adequate instruments are often difficult to identify. This paper uses the son's birth order as an instrument for household wealth. I exploit the data available in the Korean Labor and Income Panel Study, and find evidence of liquidity constraints associated with self-employment in South Korea. The second essay develops and tests a model that explains entry into serial entrepreneurship and the performance of serial entrepreneurs as the result of selection on innate ability. The model supposes that agents establish businesses with imperfect information about their entrepreneurial ability and the profitability of business ideas. Agents continually observe signals with which they update their beliefs, and this process eventually determines their next business choice. Selection on ability induces a positive correlation between entrepreneurial experience (measured by previous business earnings and founding experience) and serial business formation, as well as its subsequent performance. The predictions of the model are tested using panel data from the NLSY79. The analysis permits a distinction to be made between selection on innate ability and learning by doing. Motivated by previous empirical findings that white-collar workers had higher turnover rates than blue-collar workers during firm expansion, the third essay further examines job turnover among workers with or without specific skills. I present a search-matching model, which predicts that when firm growth is driven by technological advance, workers whose skills are specific to the obsolete technology show a higher tendency to separate from their jobs. This hypothesis is tested with data from the PSID. I find supportive evidence that in the context of technological change, having an occupation requiring specific skills, such as computer specialists or engineers, increases the odds of job separation by nearly eight percent.
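
A minimal two-stage least squares sketch of the identification strategy in the first essay (birth order instrumenting household wealth), run on synthetic data; the variable names are illustrative, not the actual survey items.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
birth_order = rng.integers(1, 4, size=n).astype(float)         # instrument
ability = rng.normal(size=n)                                    # unobserved confounder
wealth = 1.0 - 0.3 * birth_order + 0.5 * ability + rng.normal(size=n)
self_emp = 0.2 * wealth + 0.5 * ability + rng.normal(size=n)    # outcome propensity

def add_const(x):
    return np.column_stack([np.ones(len(x)), x])

# Stage 1: project the endogenous regressor (wealth) on the instrument.
Z = add_const(birth_order)
wealth_hat = Z @ np.linalg.lstsq(Z, wealth, rcond=None)[0]

# Stage 2: regress the outcome on fitted wealth; compare with naive OLS.
beta_2sls = np.linalg.lstsq(add_const(wealth_hat), self_emp, rcond=None)[0]
beta_ols = np.linalg.lstsq(add_const(wealth), self_emp, rcond=None)[0]
print("2SLS wealth effect:", beta_2sls[1], "naive OLS:", beta_ols[1])
```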

Relevance: 100.00%

Abstract:

Hydrophobicity as measured by Log P is an important molecular property related to toxicity and carcinogenicity. With increasing public health concern about the effects of Disinfection By-Products (DBPs), there are considerable benefits in developing Quantitative Structure-Activity Relationship (QSAR) models capable of accurately predicting Log P. In this research, Log P values of 173 DBP compounds in 6 functional classes were used to develop QSAR models by Multiple Linear Regression (MLR) analysis with 3 molecular descriptors: the Energy of the Lowest Unoccupied Molecular Orbital (ELUMO), the Number of Chlorine atoms (NCl) and the Number of Carbon atoms (NC). The QSAR models developed were validated based on the Organization for Economic Co-operation and Development (OECD) principles, and the model Applicability Domain (AD) and mechanistic interpretation were explored. Considering the very complex nature of DBPs, the established QSAR models performed very well with respect to goodness-of-fit, robustness and predictability. The Log P values of DBPs predicted by the QSAR models were significant, with correlation coefficients R2 from 81% to 98%. The Leverage Approach by Williams Plot was applied to detect and remove outliers, consequently increasing R2 by approximately 2% to 13% for the different DBP classes. The developed QSAR models were statistically validated for their predictive power by the Leave-One-Out (LOO) and Leave-Many-Out (LMO) cross-validation methods. Finally, Monte Carlo simulation was used to assess the variations and inherent uncertainties in the QSAR models of Log P and to determine the most influential parameters in Log P prediction. The QSAR models developed in this dissertation have a broad applicability domain because the research data set covered six of the eight common DBP classes (halogenated alkane, halogenated alkene, halogenated aromatic, halogenated aldehyde, halogenated ketone, and halogenated carboxylic acid), which have been brought to the attention of regulatory agencies in recent years. Furthermore, the QSAR models are suitable for predicting similar DBP compounds within the same applicability domain. The selection and integration of the various methodologies developed in this research may also benefit future research in similar fields.
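
A hedged sketch of the model-building loop described (MLR on ELUMO, NCl and NC, leave-one-out cross-validation, and hat-matrix leverages as used in a Williams plot); the descriptor values below are random placeholders, not the dissertation's DBP data set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
n = 40
X = np.column_stack([rng.normal(-1.0, 0.5, n),    # ELUMO (eV), placeholder values
                     rng.integers(0, 4, n),        # NCl
                     rng.integers(1, 6, n)])       # NC
logp = 0.8 * X[:, 2] + 0.4 * X[:, 1] - 0.5 * X[:, 0] + rng.normal(0, 0.2, n)

model = LinearRegression().fit(X, logp)
r2 = model.score(X, logp)

# Leave-one-out predictions give a Q2-style check of predictive power.
pred = cross_val_predict(LinearRegression(), X, logp, cv=LeaveOneOut())
q2 = 1 - ((logp - pred) ** 2).sum() / ((logp - logp.mean()) ** 2).sum()

# Hat-matrix leverages, compared with the h* = 3p'/n warning limit of a Williams plot.
Xc = np.column_stack([np.ones(n), X])
leverage = np.diag(Xc @ np.linalg.inv(Xc.T @ Xc) @ Xc.T)
h_star = 3 * Xc.shape[1] / n
print(round(r2, 3), round(q2, 3), int((leverage > h_star).sum()))
```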

Relevance: 100.00%

Abstract:

Background: HIV is known for its ability to exploit numerous genetic and evolutionary mechanisms to ensure its proliferation, among them high replication, mutation and recombination rates. Sliding MinPD, a recently introduced computational method [1], was used to investigate the patterns of evolution of serially-sampled HIV-1 sequence data from eight patients, with a special focus on the emergence of X4 strains. Unlike other phylogenetic methods, Sliding MinPD combines distance-based inference with a nonparametric bootstrap procedure and automated recombination detection to reconstruct the evolutionary history of longitudinal sequence data. We present serial evolutionary networks as a longitudinal representation of the mutational pathways of a viral population in a within-host environment. The longitudinal representation of the evolutionary networks was complemented with charts of clinical markers to facilitate correlation analysis between pertinent clinical information and the evolutionary relationships. Results: Analysis based on the predicted networks suggests the following: significantly stronger recombination signals (p = 0.003) for the inferred ancestors of the X4 strains; recombination events between different lineages and between putative reservoir virus and virus from a later population; and an early star-like topology observed for four of the patients who died of AIDS. A significantly higher number of recombinants were predicted at sampling points that corresponded to peaks in the viral load levels (p = 0.0042). Conclusion: Our results indicate that serial evolutionary networks of HIV sequences enable systematic statistical analysis of the implicit relations embedded in the topology of the structure and can greatly facilitate identification of patterns of evolution that can lead to specific hypotheses and new insights. The conclusions of applying our method to empirical HIV data support the conventional wisdom behind the new generation of HIV treatments: to keep the virus in check, viral loads need to be suppressed to almost undetectable levels.
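
The sketch below is not an implementation of Sliding MinPD itself, only a toy illustration of the distance-based core idea it builds on: each sequence sampled at a later time point is linked to its minimum-distance candidate ancestor among earlier samples, yielding a network rather than a strict tree (the published method adds sliding windows, bootstrap support and recombination detection on top of this).

```python
def p_distance(a, b):
    """Proportion of mismatched sites between two aligned, equal-length sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Toy serially sampled alignment: (time point, sequence). Real input would be
# longitudinal HIV-1 sequences from a single patient.
samples = [
    (0, "ACGTACGTAC"),
    (0, "ACGTACGTTC"),
    (1, "ACGAACGTTC"),
    (1, "ACGTACGGAC"),
    (2, "ACGAACGTTA"),
]

links = []
for i, (t_i, seq_i) in enumerate(samples):
    earlier = [(j, p_distance(seq_i, s)) for j, (t_j, s) in enumerate(samples) if t_j < t_i]
    if earlier:
        j_min, d_min = min(earlier, key=lambda e: e[1])
        links.append((i, j_min, round(d_min, 2)))

# Each tuple is (descendant, putative ancestor, distance); together the links
# form a serial evolutionary network over the sampling time points.
print(links)
```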

Relevance: 100.00%

Abstract:

Correct specification of the simple location quotients in regionalizing the national direct requirements table is essential to the accuracy of regional input-output multipliers. The purpose of this research is to examine the relative accuracy of these multipliers when earnings, employment, number of establishments, and payroll data specify the simple location quotients. For each specification type, I derive a column of total output multipliers and a column of total income multipliers. These multipliers are based on the 1987 benchmark input-output accounts of the U.S. economy and 1988-1992 state of Florida data. Error sign tests and Standardized Mean Absolute Deviation (SMAD) statistics indicate that the output multiplier estimates overestimate the output multipliers published by the Department of Commerce-Bureau of Economic Analysis (BEA) for the state of Florida. In contrast, the income multiplier estimates underestimate the BEA's income multipliers. For a given multiplier type, the Spearman-rank correlation analysis shows that the multiplier estimates and the BEA multipliers have statistically different rank orderings of row elements. The above tests also find no significant differences, in either size or ranking distributions, among the vectors of multiplier estimates.

Relevance: 100.00%

Abstract:

Support foundations are a type of private-law legal entity created to support research, education and extension projects and the institutional, scientific and technological development of Brazil. Viewed as links in the relationship between companies, universities and government, support foundations emerged on the Brazilian scene from the principle of establishing an economic development platform based on three pillars: science, technology and innovation (ST&I). In applied terms, they operate as debureaucratisation tools, making management between public entities more agile, especially academic management, in line with the Triple Helix approach. Against this background, the present study aims to understand how the Triple Helix relationship intervenes in the fund-raising process of Brazilian support foundations. To understand these relations, the study draws on the university-company-government interaction models recommended by Sábato and Botana (1968), the Triple Helix approach proposed by Etzkowitz and Leydesdorff (2000), and the perspective of national innovation systems discussed by Freeman (1987, 1995), Nelson (1990, 1993) and Lundvall (1992). The research object consists of the 26 state research-support foundations associated with the National Council of State Foundations for Research Support (CONFAP) and the 102 foundations supporting higher-education institutions (IES) associated with the National Council of Foundations Supporting Institutions of Higher Education and Scientific and Technological Research (CONFIES), totalling 128 entities. As a research strategy, this is an applied study with a quantitative approach. Primary data were collected using an e-mail survey; 75 observations were obtained, corresponding to 58.59% of the research universe, and the bootstrap method was considered in order to validate the use of this sample in the analysis of results. For data analysis, descriptive statistics and multivariate techniques were used: cluster analysis, canonical correlation and binary logistic regression. Based on the canonical roots obtained, the results indicated that the dependency relationship between the relational variables (with the actors of the Triple Helix) and the financial resources invested in innovation projects is low, consistent with the study's null hypothesis that Triple Helix relations have not interfered, positively or negatively, with raising funds for investment in innovation projects. On the other hand, the results of the cluster analysis indicate that the entities with the largest numbers and financial volumes of projects are mostly large foundations (over 100 employees) that support up to five IES, publish management reports and rely more heavily on public-sector financing in their capital structure. Finally, the logistic model obtained in this study showed high predictive capacity (80.0% correct classification), allowing replication by the academic community in similar analysis settings.
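
A minimal sketch of the canonical correlation step (relational variables versus fund-raising variables), using scikit-learn's CCA on placeholder data; the variable meanings are hypothetical stand-ins for the survey items.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n = 75   # observations, matching the number of survey responses reported

# X: intensity of relations with Triple Helix actors (university, company, government).
X = rng.normal(size=(n, 3))
# Y: fund-raising outcomes, e.g. number and value of funded innovation projects.
Y = rng.normal(size=(n, 2))

cca = CCA(n_components=2).fit(X, Y)
X_c, Y_c = cca.transform(X, Y)

# Canonical correlations between paired canonical variates; small values would
# mirror the weak dependency relationship the study reports.
print([round(float(np.corrcoef(X_c[:, k], Y_c[:, k])[0, 1]), 3) for k in range(2)])
```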

Relevance: 100.00%

Abstract:

In order to map the modern distribution of diatoms and to establish a reliable reference data set for paleoenvironmental reconstruction in the northern North Pacific, a new data set was generated comprising the relative abundances of diatom species preserved in a total of 422 surface sediment samples, covering a broad range of environmental variables characteristic of the subarctic North Pacific, the Sea of Okhotsk and the Bering Sea between 30° and 70°N. The biogeographic distribution patterns as well as the sea surface temperature preferences of 38 diatom species and species groups are documented. A Q-mode factor analysis yields a three-factor model representing assemblages associated with the Arctic, Subarctic and Subtropical water masses, indicating a close relationship between diatom composition and sea surface temperature. The relative abundance patterns of the 38 diatom species and species groups were statistically compared with nine environmental variables, i.e. summer sea surface temperature and salinity, annual surface nutrient concentrations (nitrate, phosphate, silicate), summer and winter mixed layer depth, and summer and winter sea ice concentrations. Canonical Correspondence Analysis (CCA) indicates that 32 species and species groups correspond strongly with the pattern of summer sea surface temperature. In addition, total diatom flux data compiled from ten sediment traps reveal that the seasonal signals preserved in the surface sediments are mostly from spring through autumn. This close relationship between diatom composition and summer sea surface temperature will be useful in deriving a transfer function for quantitative paleoceanographic and paleoenvironmental studies in the subarctic North Pacific. A relative abundance of more than 20% of the sea-ice indicator diatoms Fragilariopsis cylindrus and F. oceanica in the diatom composition is used to represent the winter sea ice edge in the Bering Sea. The northern boundary of the distribution of F. doliolus in the open ocean is suggested to be an indicator of the Subarctic Front, while the abundance of Chaetoceros resting spores may indicate iron input from nearby continents and shelves and induced productivity events in the study area.
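
A rough sketch of the Q-mode factor step in the CABFAC spirit (row-normalize the samples-by-species matrix and extract a few factors via SVD); this is a generic outline on placeholder counts, not the exact software applied to the 422-sample data set.

```python
import numpy as np

rng = np.random.default_rng(4)
counts = rng.poisson(5.0, size=(422, 38)).astype(float)   # samples x taxa, placeholder

# Relative abundances, then scale each sample (row) to unit length (Q-mode).
rel = counts / counts.sum(axis=1, keepdims=True)
rows = rel / np.linalg.norm(rel, axis=1, keepdims=True)

# SVD: leading right-singular vectors act as factor assemblages, and the scaled
# left-singular vectors give per-sample factor loadings.
U, s, Vt = np.linalg.svd(rows, full_matrices=False)
k = 3
loadings = U[:, :k] * s[:k]
assemblages = Vt[:k]
explained = float((s[:k] ** 2).sum() / (s ** 2).sum())
print(loadings.shape, assemblages.shape, round(explained, 3))
```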

Relevance: 100.00%

Abstract:

In this study we investigate the potential of organic-walled dinoflagellate cysts (dinocysts) as tools for quantifying past sea-surface temperatures (SST) in the Southern Ocean. For this purpose, a dinocyst reference dataset has been formed, based on 138 surface sediment samples from different circum-Antarctic environments. The dinocyst assemblages of these samples are composed of phototrophic (gonyaulacoid) and heterotrophic (protoperidinioid) species that provide a broad spectrum of palaeoenvironmental information. The relationship between the environmental parameters in the upper water column and the dinocyst distribution patterns of individual species has been established using the statistical method of Canonical Correspondence Analysis (CCA). Among the variables tested, summer SST appeared to correspond to the maximum variance represented in the dataset. To establish quantitative summer SST reconstructions, a Modern Analogue Technique (MAT) has been performed on data from three Late Quaternary dinocyst records recovered from locations adjacent to prominent oceanic fronts in the Atlantic sector of the Southern Ocean. These dinocyst time series exhibit periodic changes in the dinocyst assemblage during the last two glacial/interglacial cycles. During glacial conditions the relative abundance of protoperidinioid cysts was highest, whereas interglacial conditions are characterised by generally lower cyst concentrations and increased relative abundance of gonyaulacoid cysts. The MAT palaeotemperature estimates show trends in summer SST changes following the global oxygen isotope signal and a strong correlation with past temperatures of the last 140,000 years based on other proxies. However, comparing the dinocyst results to quantitative estimates of summer SSTs based on diatoms, radiolarians and foraminifer-derived stable isotope records shows that in several core intervals the dinocyst-based summer SSTs appear to be extremely high. In these intervals the dinocyst record seems to be highly influenced by selective degradation, leading to unusual temperature ranges and to unrealistic palaeotemperatures. We used the selective degradation index (kt-index) to determine those intervals that have been biased by selective degradation in order to correct the palaeotemperature estimates. We show that after correction the dinocyst-based SSTs correspond reasonably well with other palaeotemperature estimates for this region, supporting the great potential of dinoflagellate cysts as a basis for quantitative palaeoenvironmental studies.
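
A compact sketch of a Modern Analogue Technique estimate, assuming a modern calibration set of dinocyst assemblages with known summer SSTs; the squared chord distance and distance-weighted k-nearest-analogue mean used here are one common MAT configuration, not necessarily the paper's exact settings.

```python
import numpy as np

def squared_chord(a, b):
    """Squared chord distance between two relative-abundance vectors."""
    return float(((np.sqrt(a) - np.sqrt(b)) ** 2).sum())

def mat_sst(fossil, modern_assemblages, modern_sst, k=5):
    """SST estimate as the distance-weighted mean of the k closest modern analogues."""
    d = np.array([squared_chord(fossil, m) for m in modern_assemblages])
    idx = np.argsort(d)[:k]
    weights = 1.0 / (d[idx] + 1e-9)
    return float(np.average(modern_sst[idx], weights=weights))

rng = np.random.default_rng(5)
modern = rng.dirichlet(np.ones(20), size=138)     # 138 reference samples, 20 taxa
sst = rng.uniform(-1.0, 18.0, size=138)           # placeholder summer SSTs
fossil_sample = rng.dirichlet(np.ones(20))        # one down-core assemblage
print(round(mat_sst(fossil_sample, modern, sst), 2))
```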

Relevance: 100.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
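
For orientation, the latent-class (PARAFAC) representation referred to writes the joint probability mass function of $p$ categorical variables as a nonnegative rank-$k$ factorization,

$$\Pr(y_{1}=c_{1},\dots,y_{p}=c_{p})=\sum_{h=1}^{k}\lambda_{h}\prod_{j=1}^{p}\psi^{(j)}_{hc_{j}},\qquad \lambda_{h}\ge 0,\ \sum_{h}\lambda_{h}=1,$$

whereas a Tucker-type decomposition replaces the single index $h$ by a core array over separate per-variable latent indices; the collapsed Tucker class proposed in Chapter 2 sits between these two extremes.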

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
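
As a reminder of the conjugate structure involved (written generically for an exponential-family likelihood, with the table's sufficient statistics collected in $t(y)$ for the log-linear case), the Diaconis--Ylvisaker prior takes the form

$$p(y\mid\theta)\propto\exp\{\theta^{\top}t(y)-\psi(\theta)\},\qquad
\pi(\theta\mid n_{0},t_{0})\propto\exp\{n_{0}\,t_{0}^{\top}\theta-n_{0}\,\psi(\theta)\},$$

so the posterior retains the same form with updated hyperparameters; for log-linear models, however, the implied normalizing constant is intractable, which is why closed-form credible regions are unavailable and a Gaussian approximation to the posterior is attractive.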

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
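
The spectral representation invoked here can be written, in one standard form, as

$$Z(s)=\max_{i\ge 1}\ \zeta_{i}\,W_{i}(s),$$

where $\{\zeta_{i}\}$ are the points of a Poisson process on $(0,\infty)$ with intensity $\zeta^{-2}\,\mathrm{d}\zeta$ and the $W_{i}$ are independent copies of a nonnegative process with $\mathbb{E}\,W(s)=1$, so that $Z$ has unit Fréchet margins; Chapter 5 endows these support points with velocities and lifetimes to obtain the time-indexed max-stable velocity processes described above.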

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov Chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov Chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
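
A toy sketch of the kind of approximating kernel analysed, in which a rescaled random-subset log-likelihood stands in for the full-data log-likelihood inside a random-walk Metropolis step; this only illustrates the accuracy/computation trade-off being formalized, not the chapter's specific algorithms.

```python
import numpy as np

rng = np.random.default_rng(6)
data = rng.normal(1.5, 1.0, size=10_000)   # full data set; unknown mean, unit variance

def loglik(theta, x):
    return -0.5 * ((x - theta) ** 2).sum()

def approx_mh(n_iter=5_000, step=0.05, m=1_000):
    """Random-walk Metropolis using a fresh random subset of size m at each step."""
    n, theta = len(data), 0.0
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        sub = rng.choice(data, size=m, replace=False)
        log_ratio = (n / m) * (loglik(prop, sub) - loglik(theta, sub))  # approximate
        if np.log(rng.uniform()) < log_ratio:        # flat prior on theta
            theta = prop
        chain[i] = theta
    return chain

# The subsampling noise perturbs the transition kernel, so the chain targets the
# exact posterior only approximately; bounding the effect of such kernel error is
# the kind of question the framework in Chapter 6 addresses.
print(round(approx_mh()[-1_000:].mean(), 3))
```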

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
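
A compact sketch of the truncated-Normal (Albert-Chib style) data-augmentation Gibbs sampler for probit regression discussed here, run on synthetic rare-event data; it is included to make the augmentation step concrete, and its sluggish mixing when successes are rare is exactly the behaviour Chapter 7 analyses.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(7)
n, p = 2_000, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([-2.0, 0.5, -0.3])          # negative intercept: rare successes
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

tau2 = 100.0                                     # N(0, tau2 I) prior on beta
V = np.linalg.inv(X.T @ X + np.eye(p) / tau2)    # posterior covariance of beta | z
L = np.linalg.cholesky(V)

beta, draws = np.zeros(p), []
for it in range(2_000):
    # 1) Latent utilities z_i | beta, y_i from Normals truncated at zero.
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)          # standardized lower bounds
    hi = np.where(y == 1, np.inf, -mu)           # standardized upper bounds
    z = truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)
    # 2) beta | z from its Gaussian full conditional.
    beta = V @ (X.T @ z) + L @ rng.normal(size=p)
    draws.append(beta.copy())

draws = np.array(draws[500:])
# With few successes, expect strong autocorrelation, especially in the intercept.
print(draws.mean(axis=0).round(2))
```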