996 results for Exact computation


Relevance: 20.00%

Abstract:

Four second-grade students participated in a B-A-B withdrawal single-subject design experiment. The intervention package consisted of three components: self-monitoring, performance feedback, and reinforcers. Participants completed math probes across phases. Accuracy and productivity were recorded and calculated. Results demonstrated that the intervention package improved accuracy and productivity for all participants.

Relevance: 20.00%

Abstract:

The response of the coccolithophore Emiliania huxleyi to rising CO2 concentrations is well documented for acclimated cultures, where cells are exposed to the CO2 treatments for several generations prior to the experiment. The exact number of generations required for acclimation to CO2-induced changes in seawater carbonate chemistry, however, is unknown. Here we show that Emiliania huxleyi's short-term response (26 h) after cultures (grown at 500 µatm) were abruptly exposed to changed CO2 concentrations (~190, 410, 800 and 1500 µatm) is similar to that obtained with acclimated cultures under comparable conditions in earlier studies. Most importantly, from the lower CO2 levels (190 and 410 µatm) to 800 and 1500 µatm, calcification decreased and organic carbon fixation increased within the first 8 to 14 h after exposing the cultures to changes in carbonate chemistry. This suggests that Emiliania huxleyi rapidly alters the rates of essential metabolic processes in response to changes in seawater carbonate chemistry, establishing a new physiological "state" (acclimation) within a matter of hours. If this relatively rapid response applies to other phytoplankton species, it may simplify the interpretation of studies with natural communities (e.g. mesocosm studies and ship-board incubations), where it is often not feasible to allow for a pre-conditioning phase before starting experimental incubations.

Relevance: 20.00%

Abstract:

It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after a proper taming of the large scaling corrections of the model by applying a combined approach of various techniques coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and we illustrate the extrapolation of several observables to the infinite-size limit within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.
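The quotients method mentioned above lends itself to a compact numerical illustration: at the crossing point of the correlation-length curves ξ/L for a size pair (L, 2L), an observable scaling as O ~ L^(x_O/ν) satisfies O(2L)/O(L) = 2^(x_O/ν). The sketch below is a minimal, self-contained example with synthetic placeholder values, not the authors' code.

```python
import numpy as np

def quotient_exponent(o_small, o_large):
    """Exponent ratio x_O/nu from the quotient O(2L)/O(L) = 2**(x_O/nu),
    evaluated at the crossing of xi/L for the size pair (L, 2L)."""
    return np.log2(o_large / o_small)

# Synthetic placeholder values of an observable at the xi/L crossings.
pairs = {(16, 32): (1.92, 3.71), (32, 64): (1.95, 3.80)}
for (L, L2), (o_small, o_large) in pairs.items():
    print(f"(L, 2L) = ({L}, {L2}): x_O/nu ~ {quotient_exponent(o_small, o_large):.3f}")
```

Estimates from successive size pairs are then extrapolated to the infinite-size limit, as described in the abstract.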

Relevance: 20.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation; the second is the design and characterization of computational algorithms that scale better in n or p. For the first strategy, the focus is on joint inference beyond the standard setting of multivariate continuous data that has dominated previous theoretical work in this area. For the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
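To make the latent structure concrete: a PARAFAC (latent class) factorization writes the joint pmf of p categorical variables as a finite mixture of product-multinomial kernels, P(y_1 = c_1, ..., y_p = c_p) = Σ_h ν_h Π_j λ^(h)_{j, c_j}. A minimal sketch of evaluating such a factorized pmf follows (illustrative only; the collapsed Tucker class proposed in Chapter 2 generalizes this form):

```python
import numpy as np

rng = np.random.default_rng(0)
k, p, d = 3, 4, 2                        # latent classes, variables, categories

nu = rng.dirichlet(np.ones(k))           # mixture weights over latent classes
lam = rng.dirichlet(np.ones(d), (k, p))  # lam[h, j]: pmf of variable j in class h

def joint_prob(cells):
    """P(y_1 = c_1, ..., y_p = c_p) under the PARAFAC mixture."""
    per_class = np.prod([lam[:, j, c] for j, c in enumerate(cells)], axis=0)
    return float(nu @ per_class)

print(joint_prob((0, 1, 1, 0)))          # probability of one cell of the table
```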

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4, we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
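As a rough illustration of the general idea of Gaussian posterior approximation (the chapter derives the KL-optimal Gaussian, which need not coincide with the Laplace approximation sketched here), one can approximate a log-linear posterior by a normal centered at the mode with covariance given by the inverse Hessian. All model choices below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Toy Poisson log-linear model: log mu = X @ theta, with a N(0, tau2 * I) prior.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = rng.poisson(np.exp(X @ np.array([0.5, -0.3, 0.2])))
tau2 = 10.0

def neg_log_post(theta):
    eta = X @ theta
    return np.sum(np.exp(eta) - y * eta) + theta @ theta / (2 * tau2)

mode = minimize(neg_log_post, np.zeros(3), method="BFGS").x
W = np.exp(X @ mode)                               # Poisson weights at the mode
H = X.T @ (W[:, None] * X) + np.eye(3) / tau2      # Hessian of -log posterior
cov = np.linalg.inv(H)                             # Gaussian covariance
print("mode:", mode.round(3), "sd:", np.sqrt(np.diag(cov)).round(3))
```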

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
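The waiting-time idea is easy to make concrete: record the gaps between successive exceedances of a high threshold. A minimal sketch on a synthetic heavy-tailed series (illustrative; the chapter builds a full inferential framework on top of these waiting times):

```python
import numpy as np

def exceedance_waiting_times(x, u):
    """Waiting times (in index units) between exceedances of threshold u."""
    idx = np.flatnonzero(x > u)
    return np.diff(idx)

rng = np.random.default_rng(4)
series = rng.standard_t(df=3, size=10_000)   # synthetic heavy-tailed series
u = np.quantile(series, 0.99)                # high threshold
print(exceedance_waiting_times(series, u)[:10])
```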

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
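One of the approximations named above, random subsets of data, can be sketched as a Metropolis-Hastings sampler whose log-likelihood is estimated from a subsample. This is a deliberately crude illustration of an approximating kernel, not the chapter's framework, which concerns how much such kernel error to tolerate:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.0, 1.0, size=100_000)     # large dataset, unknown mean

def subsampled_loglik(theta, m=1_000):
    """Estimate of the full log-likelihood from a random subset of size m."""
    sub = rng.choice(data, size=m, replace=False)
    return len(data) / m * np.sum(-0.5 * (sub - theta) ** 2)

theta, chain = 0.0, []
ll = subsampled_loglik(theta)
for _ in range(2_000):
    prop = theta + 0.05 * rng.normal()        # random-walk proposal
    ll_prop = subsampled_loglik(prop)
    if np.log(rng.uniform()) < ll_prop - ll:  # approximate MH acceptance
        theta, ll = prop, ll_prop
    chain.append(theta)
print("posterior mean estimate:", np.mean(chain[500:]))
```

The subsampling noise perturbs the transition kernel; weighing the resulting bias against the computational savings is exactly the trade-off the chapter formalizes.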

Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Pólya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
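For reference, one sweep of the Pólya-Gamma Gibbs sampler for Bayesian logistic regression has roughly the following shape. The sketch assumes a PG(b, c) sampler such as random_polyagamma from the third-party polyagamma package (an assumed dependency; any PG sampler would do) and a N(0, B) prior:

```python
import numpy as np
from polyagamma import random_polyagamma  # assumed third-party PG(b, c) sampler

def pg_gibbs(X, y, n_iter=500, prior_var=100.0, seed=0):
    """Pólya-Gamma Gibbs sampler for logistic regression with y in {0, 1}."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    B_inv = np.eye(p) / prior_var
    kappa = y - 0.5
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        omega = random_polyagamma(1, X @ beta, random_state=rng)  # PG(1, x_i'beta)
        V = np.linalg.inv(X.T @ (omega[:, None] * X) + B_inv)
        beta = rng.multivariate_normal(V @ (X.T @ kappa), V)
        draws[t] = beta
    return draws
```

In the rare-event regime described above, draws of beta from this sampler exhibit the high autocorrelation the chapter analyzes.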

Relevance: 20.00%

Abstract:

In this study, we developed and improved the numerical mode matching (NMM) method, which has previously been shown to be a fast and robust semi-analytical solver for investigating the propagation of electromagnetic (EM) waves in an isotropic layered medium. The applicable models, such as cylindrical waveguides, optical fibers, and boreholes with earth geological formations, are generally modeled as an axisymmetric structure, namely an orthogonal-plano-cylindrically layered (OPCL) medium consisting of materials stratified planarly and layered concentrically in the orthogonal directions.

In this report, several important improvements have been made to extend applications of this efficient solver to the anisotropic OPCL medium. The formulas for anisotropic media with three different diagonal elements in the cylindrical coordinate system are deduced to expand its application to more general materials. The perfectly matched layer (PML) is incorporated along the radial direction as an absorbing boundary condition (ABC) to make the NMM method more accurate and efficient for wave diffusion problems in unbounded media and applicable to scattering problems with lossless media. We manipulate the weak form of Maxwell's equations and impose the correct boundary conditions at the cylindrical axis to solve the singularity problem that was overlooked by previous researchers. The spectral element method (SEM) is introduced to compute eigenmodes of higher accuracy more efficiently with fewer unknowns, achieving a faster mode matching procedure between different horizontal layers. We also prove the relationship of the field between opposite mode indices for different types of excitations, which can reduce the computational time by half. The formulas for computing EM fields excited by an electric or magnetic dipole located at any position with an arbitrary orientation are deduced, and the excitation is generalized to line and surface current sources, which extends the application of NMM to simulations of controlled-source electromagnetic techniques. Numerical simulations have demonstrated the efficiency and accuracy of this method.
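The PML enters through complex stretching of the radial coordinate; a schematic of the standard polynomially graded stretching profile is below (illustrative only, with made-up parameter values; the report's actual discretization is part of the SEM formulation):

```python
import numpy as np

def pml_stretch(rho, rho0, d, omega, sigma_max, m=3):
    """Complex stretching s(rho) = 1 + sigma(rho)/(1j*omega) inside a PML of
    thickness d starting at rho0, with polynomial grading of order m."""
    sigma = np.where(rho > rho0, sigma_max * ((rho - rho0) / d) ** m, 0.0)
    return 1.0 + sigma / (1j * omega)

rho = np.linspace(0.0, 1.2, 7)
print(pml_stretch(rho, rho0=1.0, d=0.2, omega=2 * np.pi * 1e4, sigma_max=1e5))
```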

Finally, the improved NMM method is applied to efficiently compute the electromagnetic response of an induction tool to orthogonal transverse hydraulic fractures in open or cased boreholes in hydrocarbon exploration. The hydraulic fracture is modeled as a slim circular disk that is symmetric with respect to the borehole axis and filled with electrically conductive or magnetic proppant. The NMM solver is first validated by comparing the normalized secondary field with experimental measurements and a commercial software package. We then quantitatively analyze the sensitivity of the induction response to fracture parameters, such as length, conductivity, and permeability of the filled proppant, to evaluate the effectiveness of the induction logging tool for fracture detection and mapping. Casings with different thicknesses, conductivities, and permeabilities are modeled together with the fractures in boreholes to investigate their effects on fracture detection. The results reveal that the normalized secondary field is not weakened at low frequencies, so the induction tool remains applicable for fracture detection even though the attenuation of the electromagnetic field through the casing is significant. A hybrid approach combining the NMM method and an integral-equation solver based on BCGS-FFT has been proposed to efficiently simulate open or cased boreholes with tilted fractures, which constitute a non-axisymmetric model.

Relevance: 20.00%

Abstract:

A class of multi-process models is developed for collections of time-indexed count data. Autocorrelation in counts is achieved with dynamic models for the natural parameter of the binomial distribution. In addition to modeling binomial time series, the framework includes dynamic models for multinomial and Poisson time series. Markov chain Monte Carlo (MCMC) and Pólya-Gamma data augmentation (Polson et al., 2013) are critical for fitting multi-process models of counts. To facilitate computation when the counts are high, a Gaussian approximation to the Pólya-Gamma random variable is developed.
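The Pólya-Gamma distribution PG(b, c) has closed-form mean and variance, so a natural high-count construction is a moment-matched Gaussian. A sketch of that idea follows (the dissertation's approximation may differ in detail; the moment formulas below are the standard ones):

```python
import numpy as np

def pg_moments(b, c):
    """Mean and variance of the Pólya-Gamma PG(b, c) distribution."""
    mean = b / (2.0 * c) * np.tanh(c / 2.0)
    var = b * (np.sinh(c) - c) / (4.0 * c**3 * np.cosh(c / 2.0) ** 2)
    return mean, var

def pg_gaussian_draw(b, c, rng):
    """Moment-matched Gaussian stand-in for PG(b, c), useful when b is large."""
    m, v = pg_moments(b, c)
    return rng.normal(m, np.sqrt(v))

rng = np.random.default_rng(3)
print(pg_gaussian_draw(b=200.0, c=1.3, rng=rng))
```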

Three applied analyses are presented to explore the utility and versatility of the framework. The first analysis develops a model for complex dynamic behavior of themes in collections of text documents. Documents are modeled as a “bag of words”, and the multinomial distribution is used to characterize uncertainty in the vocabulary terms appearing in each document. State-space models for the natural parameters of the multinomial distribution induce autocorrelation in themes and their proportional representation in the corpus over time.

The second analysis develops a dynamic mixed membership model for Poisson counts. The model is applied to a collection of time series which record neuron-level firing patterns in rhesus monkeys. The monkey is exposed to two sounds simultaneously, and Gaussian processes are used to smoothly model the time-varying rate at which the neuron’s firing pattern fluctuates between features associated with each sound in isolation.

The third analysis presents a switching dynamic generalized linear model for the time-varying home run totals of professional baseball players. The model endows each player with an age-specific latent natural ability class and a performance-enhancing drug (PED) use indicator. As players age, they randomly transition through a sequence of ability classes in a manner consistent with traditional aging patterns. When the performance of the player significantly deviates from the expected aging pattern, he is identified as a player whose performance is consistent with PED use.

All three models provide a mechanism for sharing information across related series locally in time. The models are fit with variations on the Pólya-Gamma Gibbs sampler, MCMC convergence diagnostics are developed, and reproducible inference is emphasized throughout the dissertation.

Relevance: 20.00%

Abstract:

This thesis deals with the evaporation of non-ideal liquid mixtures using a multicomponent mass transfer approach. It develops the concept of evaporation maps as a convenient way of representing the dynamic composition changes of ternary mixtures during an evaporation process. Evaporation maps represent the residual composition of evaporating ternary non-ideal mixtures over the full range of composition, and are analogous to the commonly-used residue curve maps of simple distillation processes. The evaporation process initially considered in this work involves gas-phase limited evaporation from a liquid or wetted-solid surface, over which a gas flows at known conditions. Evaporation may occur into a pure inert gas, or into one pre-loaded with a known fraction of one of the ternary components. To explore multicomponent mass-transfer effects, a model is developed that uses an exact solution to the Maxwell-Stefan equations for mass transfer in the gas film, with a lumped approach applied to the liquid phase. Solutions to the evaporation model take the form of trajectories in temperature-composition space, which are then projected onto a ternary diagram to form the map. Novel algorithms are developed for computation of pseudo-azeotropes in the evaporating mixture, and for calculation of the multicomponent wet-bulb temperature at a given liquid composition. A numerical continuation method is used to track the bifurcations which occur in the evaporation maps, where the composition of one component of the pre-loaded gas is the bifurcation parameter. The bifurcation diagrams can in principle be used to determine the required gas composition to produce a specific terminal composition in the liquid. A simple homotopy method is developed to track the locations of the various possible pseudo-azeotropes in the mixture. The stability of pseudo-azeotropes in the gas-phase limited case is examined using a linearized analysis of the governing equations. Algorithms for the calculation of separation boundaries in the evaporation maps are developed using an optimization-based method, as well as a method employing eigenvectors derived from the linearized analysis. The flexure of the wet-bulb temperature surface is explored, and it is shown how evaporation trajectories cross ridges and valleys, so that ridges and valleys of the surface do not coincide with separation boundaries. Finally, the assumption of gas-phase limited mass transfer is relaxed, by employing a model that includes diffusion in the liquid phase. A finite-volume method is used to solve the system of partial differential equations that results. The evaporation trajectories for the distributed model reduce to those of the lumped (gas-phase limited) model as the diffusivity in the liquid increases; under the same gas-phase conditions the permissible terminal compositions of the distributed and lumped models are the same.
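A drastically simplified flavor of an evaporation trajectory can be given in a few lines by replacing the thesis's Maxwell-Stefan film model with constant relative volatilities, which reduces the dynamics to the classical residue-curve form dx/dξ = x − y. Everything here is a toy stand-in for the full model:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = np.array([3.0, 1.5, 1.0])       # toy constant relative volatilities

def rhs(_, x):
    y = alpha * x / np.dot(alpha, x)    # equilibrium-like vapor composition
    return x - y                        # residue-curve form dx/dxi = x - y

sol = solve_ivp(rhs, (0.0, 10.0), np.array([0.3, 0.3, 0.4]))
print("terminal composition ~", sol.y[:, -1].round(3))  # least volatile remains
```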

Relevance: 20.00%

Abstract:

In this paper, we describe a decentralized privacy-preserving protocol for securely casting trust ratings in distributed reputation systems. Our protocol allows n participants to cast their votes in a way that preserves the privacy of individual values against both internal and external attacks. The protocol is coupled with an extensive theoretical analysis in which we formally prove that our protocol is resistant to collusion against as many as n-1 corrupted nodes in the semi-honest model. The behavior of our protocol is tested in a real P2P network by measuring its communication delay and processing overhead. The experimental results uncover the advantages of our protocol over previous works in the area; without sacrificing security, our decentralized protocol is shown to be almost one order of magnitude faster than the previous best protocol for providing anonymous feedback.
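The paper's protocol is not reproduced here, but the core primitive behind many decentralized privacy-preserving aggregation schemes, an additive secret-sharing sum, can be sketched as follows (a minimal illustration, not the paper's construction):

```python
import secrets

Q = 2**61 - 1                 # public modulus; all arithmetic is over Z_Q

def share(value, n):
    """Split value into n additive shares that sum to it mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

votes = [4, 5, 3, 5]          # private trust ratings of four participants
all_shares = [share(v, len(votes)) for v in votes]
# Node i adds up the i-th share of every participant; partial sums reveal
# nothing individually, but their total equals the sum of the votes.
partials = [sum(s[i] for s in all_shares) % Q for i in range(len(votes))]
print("sum of ratings:", sum(partials) % Q)   # 17, without exposing any vote
```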

Relevance: 20.00%

Abstract:

Collecting data via a questionnaire and analyzing them while preserving respondents’ privacy may increase the number of respondents and the truthfulness of their responses. It may also reduce the systematic differences between respondents and non-respondents. In this paper, we propose a privacy-preserving method for collecting and analyzing survey responses using secure multi-party computation (SMC). The method is secure under the semi-honest adversarial model. The proposed method computes a wide variety of statistics. Total and stratified statistical counts are computed using the secure protocols developed in this paper. Then, additional statistics, such as a contingency table, a chi-square test, an odds ratio, and logistic regression, are computed within the R statistical environment using the statistical counts as building blocks. The method was evaluated on a questionnaire dataset of 3,158 respondents sampled for a medical study and simulated questionnaire datasets of up to 50,000 respondents. The computation time for the statistical analyses scales linearly as the number of respondents increases. The results show that the method is efficient and scalable for practical use. It can also be used for other applications in which categorical data are collected.
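Once the secure protocols have produced the total and stratified counts, conventional statistical tooling takes over. The paper performs this step in R; an equivalent sketch in Python, with placeholder counts standing in for the SMC output:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Placeholder aggregate counts, assumed to arrive from the SMC layer.
table = np.array([[120, 380],    # exposed:   cases, controls
                  [ 80, 420]])   # unexposed: cases, controls

chi2, p, dof, _ = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}, odds ratio={odds_ratio:.2f}")
```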

Relevance: 20.00%

Abstract:

The generalized KP (GKP) equations with an arbitrary nonlinear term model and characterize many nonlinear physical phenomena. The symmetries of the GKP equation with an arbitrary nonlinear term are obtained. The condition that must be satisfied for the existence of the symmetry group of the GKP equation is derived, and the obtained symmetries are classified according to different forms of the nonlinear term. The resulting similarity reductions are studied by analyzing the bifurcation and phase portrait of the GKP equation, and the corresponding solitary wave solutions are constructed.
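For orientation, one standard form of a GKP equation with an arbitrary nonlinear term f(u) is (an assumed convention; signs and coefficients vary across the literature):

\left( u_t + f(u)\, u_x + u_{xxx} \right)_x + \sigma\, u_{yy} = 0,

which reduces to the classical KP equation when f(u) = u.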

Relevance: 20.00%

Abstract:

This work examines analytically the forced convection in a channel partially filled with a porous material and subjected to constant wall heat flux. The Darcy–Brinkman–Forchheimer model is used to represent the fluid transport through the porous material. The local thermal non-equilibrium, two-equation model is further employed as the solid and fluid heat transport equations. Two fundamental models (models A and B) represent the thermal boundary conditions at the interface between the porous medium and the clear region. The governing equations of the problem are manipulated, and for each interface model, exact solutions for the solid and fluid temperature fields are developed. These solutions incorporate the porous material thickness, Biot number, fluid-to-solid thermal conductivity ratio, and Darcy number as parameters. The results can be readily used to validate numerical simulations. They are, further, applicable to the analysis of enhanced heat transfer in heat exchangers using porous materials.
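For reference, the local thermal non-equilibrium two-equation model couples separate energy equations for the solid and fluid phases through an interstitial exchange term; schematically (an assumed common form, since notation varies across studies):

k_{s,eff} \nabla^2 T_s - h_{sf} a_{sf} (T_s - T_f) = 0,
k_{f,eff} \nabla^2 T_f + h_{sf} a_{sf} (T_s - T_f) = \rho c_p u\, \partial T_f / \partial x,

where h_{sf} is the interstitial heat transfer coefficient and a_{sf} the specific surface area; nondimensionalizing these equations yields the Biot number and the fluid-to-solid conductivity ratio quoted above.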

Relevance: 20.00%

Abstract:

The bond formation between an oxide surface and oxygen, which is of importance for numerous surface reactions including catalytic reactions, is investigated within the framework of hybrid density functional theory that includes nonlocal Fock exchange. We show that there exists a linear correlation between the adsorption energies of oxygen on LaMO3 (M = Sc–Cu) surfaces obtained using a hybrid functional (e.g., Heyd–Scuseria–Ernzerhof) and those obtained using a semilocal density functional (e.g., Perdew–Burke–Ernzerhof), through the magnetic properties of the bulk phase as determined with a hybrid functional. The energetics of the spin-polarized surfaces follows the same trend as the corresponding bulk systems, which can be treated at a much lower computational cost. The difference in adsorption energy due to magnetism is linearly correlated to the magnetization energy of the bulk, that is, the energy difference between the spin-polarized and the non-spin-polarized solutions. Hence, one can estimate the correction ...
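The reported linear correlation suggests a simple estimation recipe: regress hybrid-level adsorption energies on semilocal ones plus the bulk magnetization energy, then predict at hybrid quality from cheap semilocal calculations. A sketch with placeholder numbers (not data from the paper):

```python
import numpy as np

# Placeholder training data: columns are (E_ads at PBE level, bulk
# magnetization energy); targets are E_ads at HSE level. All values invented.
X = np.array([[-1.20, 0.35], [-0.80, 0.10], [-1.60, 0.55], [-0.95, 0.20]])
y = np.array([-0.90, -0.70, -1.15, -0.78])

A = np.column_stack([X, np.ones(len(X))])      # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(e_pbe, de_mag):
    return float(np.dot(coef, [e_pbe, de_mag, 1.0]))

print("estimated HSE-level adsorption energy:", round(predict(-1.05, 0.25), 3))
```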

Relevance: 20.00%

Abstract:

The main goal of this thesis is to discuss the determination of homological invariants of polynomial ideals. Thereby we consider different coordinate systems and analyze their meaning for the computation of certain invariants. In particular, we provide an algorithm that transforms any ideal into strongly stable position if char k = 0. With a slight modification, this algorithm can also be used to achieve a stable or quasi-stable position. If our field has positive characteristic, the Borel-fixed position is the maximum we can obtain with our method. Further, we present some applications of Pommaret bases, focusing on how to read off invariants directly from such a basis. In the second half of this dissertation we take a closer look at another homological invariant, namely the (absolute) reduction number. It is a known fact that one can immediately read off the reduction number from the generic initial ideal. However, we show that it is not possible to formulate an algorithm, based on analyzing only the leading ideal, that transforms an ideal into a position which allows us to read this invariant directly off the leading ideal. So in general we cannot read off the reduction number from a Pommaret basis. This result motivates a deeper investigation of which properties a coordinate system must possess so that we can determine the reduction number easily, i.e. by analyzing the leading ideal. This approach leads to the introduction of generalized versions of the mentioned stable positions, such as the weakly D-stable and weakly D-minimal stable positions. The latter represents a coordinate system that allows one to determine the reduction number without any further computations. Finally, we introduce the notion of β-maximal position, which provides many interesting algebraic properties. In particular, this position, in combination with weak D-stability, is sufficient for the weakly D-minimal stable position and thus possesses a connection to the reduction number.
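In characteristic zero, a generic initial ideal can be computed with high probability by applying a random linear change of coordinates and reading off the leading ideal of a Gröbner basis. A small sympy sketch of this folklore recipe (illustrative; dedicated computer algebra systems provide proper routines, and the random matrix is only generically invertible):

```python
import random
from sympy import symbols, groebner, expand, LT

x, y, z = symbols("x y z")
gens = [x**2 - y*z, y**2 - x*z]

# Random linear change of coordinates (invertible with high probability).
random.seed(0)
rows = [[random.randint(1, 100) for _ in range(3)] for _ in range(3)]
subs = {v: sum(a * w for a, w in zip(row, (x, y, z)))
        for v, row in zip((x, y, z), rows)}
moved = [expand(g.subs(subs, simultaneous=True)) for g in gens]

G = groebner(moved, x, y, z, order="grevlex")
leading = [LT(g, x, y, z, order="grevlex") for g in G.exprs]
print("leading terms (generators of gin, with high probability):", leading)
```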