987 results for EFFECTIVE-MASS APPROXIMATION


Relevance:

30.00%

Publisher:

Abstract:

Limitations have been detected in a recently published method for macroion valence determination by an ultracentrifugal procedure for quantifying the Donnan distribution of small ions in macroion solutions dialyzed against buffer supplemented with chromate as an indicator ion. The limitations reflect an implicit assumption that sedimentation velocity affords an unequivocal means of separating the effects of chromate binding from those reflecting the Donnan redistribution of small ions. Although the assumed absence of significant Donnan redistribution of small ions across the sedimenting macroion boundary seemingly holds for some systems, this approximation is demonstrably invalid for others. Despite preliminary signs of promise, the ultracentrifugal procedure does not afford a simple, readily applied solution to the problem of unequivocal macroion valence determination. (C) 2004 Elsevier Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The diagrammatic strong-coupling perturbation theory (SCPT) for correlated electron systems is developed for intersite Coulomb interaction and for a nonorthogonal basis set. The construction is based on iterations of exact closed equations for many-electron Green functions (GFs) for Hubbard operators in terms of functional derivatives with respect to external sources. The graphs that do not contain contributions from fluctuations of the local population numbers of the ion states play a special role: a one-to-one correspondence is found between the subset of such graphs for the many-electron GFs and the complete set of Feynman graphs of weak-coupling perturbation theory (WCPT) for single-electron GFs. This fact is used to formulate the approximation of renormalized Fermions (ARF), in which the many-electron quasi-particles behave analogously to normal Fermions. Then, by analyzing (a) Sham's equation, which connects the self-energy and the exchange-correlation potential in density functional theory (DFT), and (b) the Galitskii and Migdal expressions for the total energy, written within WCPT and within ARF SCPT, we suggest a way to improve the description of systems with correlated electrons within the local density approximation (LDA) to DFT. The formulation in terms of renormalized Fermions, LDA (RF LDA), is obtained by introducing the spectral weights of the many-electron GFs into the definitions of the charge density, the overlap matrices, and the effective mixing and hopping matrix elements in existing electronic structure codes, whereas the weights themselves have to be found from an additional set of equations. Compared with the LDA+U and self-interaction correction (SIC) methods, RF LDA has the advantage of taking into account the transfer of spectral weights and, when formulated in terms of GFs, also allows for consideration of excitations and nonzero temperature. Going beyond the ARF SCPT, as well as RF LDA, and taking into account the fluctuations of ion population numbers would require writing completely new codes for ab initio calculations. The application of RF LDA to ab initio band structure calculations for rare earth metals is presented in part II of this study (this issue). (c) 2005 Wiley Periodicals, Inc.
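For orientation, the Galitskii and Migdal expression referred to above gives the total energy directly in terms of the single-electron GF. In a standard zero-temperature form (quoted from the general literature; the paper's own notation may differ),

$$ E = -\frac{i}{2} \int \mathrm{d}^3 r \, \lim_{\mathbf{r}' \to \mathbf{r}} \lim_{t' \to t^{+}} \left[ i\hbar \frac{\partial}{\partial t} + \hat{h}_0(\mathbf{r}) \right] G(\mathbf{r} t; \mathbf{r}' t'), $$

where \hat{h}_0 is the one-body part of the Hamiltonian; the comparison described above is between this expression evaluated within WCPT and within ARF SCPT.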

Relevance:

30.00%

Publisher:

Abstract:

The fundamental problem faced by noninvasive neuroimaging techniques such as EEG/MEG is to elucidate functionally important aspects of the microscopic neuronal network dynamics from macroscopic aggregate measurements. Because the activities of large neuronal populations are mixed in the observed macroscopic aggregate, recovering the underlying network that generates the signal, in the absence of any additional information, represents a considerable challenge. Recent MEG studies have shown that macroscopic measurements contain sufficient information to allow the differentiation between patterns of activity, which are likely to represent different stimulus-specific collective modes in the underlying network (Hadjipapas, A., Adjamian, P., Swettenham, J.B., Holliday, I.E., Barnes, G.R., 2007. Stimuli of varying spatial scale induce gamma activity with distinct temporal characteristics in human visual cortex. NeuroImage 35, 518–530). The next question arising in this context is whether aspects of collective network activity can be recovered from a macroscopic aggregate signal. We propose that this issue is most appropriately addressed if MEG/EEG signals are viewed as macroscopic aggregates arising from networks of coupled systems, as opposed to aggregates across a mass of largely independent neural systems. We show that collective modes arising in a network of simulated coupled systems can indeed be recovered from the macroscopic aggregate. Moreover, we show that nonlinear state space methods yield a good approximation of the number of effective degrees of freedom in the network. Importantly, information about hidden variables, which do not directly contribute to the aggregate signal, can also be recovered. Finally, this theoretical framework can be applied to experimental MEG/EEG data in the future, enabling the inference of state-dependent changes in the degree of local synchrony in the underlying network.
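As a toy illustration of the recovery idea, the sketch below simulates a small network of coupled phase oscillators, forms a macroscopic aggregate signal, delay-embeds it, and counts dominant singular values as a rough linear proxy for the number of effective degrees of freedom. This is a minimal sketch, not the authors' method: the network model, all parameter values, and the singular-spectrum criterion are illustrative assumptions (the paper uses nonlinear state space methods).

```python
# Simulate a coupled-oscillator network, aggregate it, delay-embed the
# aggregate, and count dominant singular values of the embedding matrix.
import numpy as np

rng = np.random.default_rng(0)

# Kuramoto-type phase oscillators (illustrative network and parameters).
N, T, dt, K = 8, 20000, 0.01, 1.5
omega = rng.normal(1.0, 0.1, N)            # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)
aggregate = np.empty(T)
for t in range(T):
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = theta + dt * (omega + coupling)
    aggregate[t] = np.cos(theta).sum()      # macroscopic aggregate signal

# Takens-style delay embedding of the aggregate.
tau, m = 25, 12                             # delay and embedding dimension
rows = T - (m - 1) * tau
X = np.column_stack([aggregate[i * tau : i * tau + rows] for i in range(m)])
X -= X.mean(axis=0)

# Singular spectrum: count components above a (arbitrary) variance threshold.
s = np.linalg.svd(X, compute_uv=False)
frac = s**2 / (s**2).sum()
eff_dof = int((frac > 0.01).sum())
print("estimated effective degrees of freedom:", eff_dof)
```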

Relevance:

30.00%

Publisher:

Abstract:

The first investigation of this study is concerned with the reasonableness of the assumptions related to diffusion of water vapour in concrete and with the development of a diffusivity equation for heated concrete. It has been demonstrated that diffusion of water vapour does occur in concrete at all temperatures and that the type of diffusion in concrete is Knudsen diffusion. Neglecting diffusion leads to underestimating the pressure: it results in a maximum pore pressure of less than 1 MPa. It has also been shown that the assumption that diffusion in concrete is molecular is unreasonable even when the tortuosity is considered. Molecular diffusivity leads to overestimating the pressure: it results in a maximum pore pressure of 2.7 MPa, of which the vapour pressure is 1.5 MPa and the air pressure 1.2 MPa. Also, the first diffusivity equation developed specifically for concrete, appropriately named 'concrete diffusivity', determines the effective diffusivity of any gas in concrete at any temperature. In thick walls and columns exposed to fire, concrete diffusivity leads to maximum pore pressures of 1.5 and 2.2 MPa (along diagonals), respectively, that are almost entirely due to water vapour pressure. Also, spalling is exacerbated, and thus higher pressures may occur, in thin heated sections, since there is less of a cool reservoir towards which vapour can migrate. Furthermore, the reduction of the cool reservoir is affected not only by the thickness, but also by the time of exposure to fire and by the type of exposure, i.e. whether the concrete member is exposed to fire from one or more sides. The second investigation is concerned with examining the effects of thickness and of exposure time and type. It has been demonstrated that the build-up of pore pressure is low in thick members, since there is a substantial cool zone towards which water vapour can migrate. Thus, if surface and/or explosive spalling occurs on a thick member, such spalling must be due to high thermal stresses, whereas corner spalling is likely to be pore pressure spalling. However, depending on the exposure time and type, the pore pressures in thin sections can be more than twice those occurring in thick members, which had previously been thought to be the maximum attainable; the enhanced propensity for pore pressure spalling on thin sections heated on opposite sides is thus conclusively demonstrated to be due to the lack of a cool zone towards which moisture can migrate. Expressions were developed for the determination of the maximum pore pressures that can occur in different concrete walls and columns exposed to fire, and of the corresponding times of exposure.
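For concreteness, the Knudsen regime asserted above has a standard kinetic-theory diffusivity that depends only on pore size, temperature and molar mass. A minimal sketch (the formula is textbook kinetic theory; the pore diameter is an illustrative assumption, not a value from this study):

```python
# Knudsen diffusivity D_K = (d_pore / 3) * sqrt(8 R T / (pi * M)), valid when
# the pore diameter is small relative to the mean free path, so that
# molecule-wall collisions dominate (the regime argued for above).
import math

R = 8.314            # J/(mol K)
M_H2O = 0.018015     # kg/mol, water vapour
d_pore = 50e-9       # m; illustrative mesopore diameter for concrete

def knudsen_diffusivity(d_pore_m: float, T_kelvin: float, M_kg_mol: float) -> float:
    """Knudsen diffusivity (m^2/s) in a single cylindrical pore."""
    mean_speed = math.sqrt(8 * R * T_kelvin / (math.pi * M_kg_mol))
    return d_pore_m * mean_speed / 3

for T in (293.0, 473.0, 873.0):  # ambient to fire temperatures
    print(f"T = {T:6.1f} K  D_K = {knudsen_diffusivity(d_pore, T, M_H2O):.3e} m^2/s")
```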

Relevance:

30.00%

Publisher:

Abstract:

An equivalent step index fibre with a silica core and air cladding is used to model photonic crystal fibres with large air holes. We model this fibre for linear polarisation (we focus on the lowest few transverse modes of the electromagnetic field). The equivalent step index radius is obtained by equating the lowest two eigenvalues of the model to those calculated numerically for the photonic crystal fibres. The step index parameters thus obtained can then be used to calculate nonlinear parameters like the nonlinear effective area of a photonic crystal fibre or to model nonlinear few-mode interactions using an existing model.
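A minimal sketch of how the eigenvalue matching could be carried out numerically (the scalar, weakly-guiding LP-mode equation is crude for the silica/air index contrast but serves to illustrate the step; all numbers, including the target eigenvalue standing in for one computed from the photonic crystal fibre, are illustrative assumptions):

```python
# Solve the LP-mode eigenvalue equation u*J_{l+1}(u)/J_l(u) = w*K_{l+1}(w)/K_l(w),
# with u^2 + w^2 = V^2, for a step-index fibre, then fit the equivalent core
# radius so the model's LP01 eigenvalue matches a target value (which, in the
# paper's scheme, would come from a numerical PCF mode solver).
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv, kv, jn_zeros

def lp_eigenvalue(V: float, l: int) -> float:
    """First-branch LP_{l1} eigenvalue u at normalized frequency V."""
    def f(u):
        w = np.sqrt(V**2 - u**2)
        return u * jv(l + 1, u) / jv(l, u) - w * kv(l + 1, w) / kv(l, w)
    u_hi = min(V, jn_zeros(l, 1)[0]) - 1e-9   # stay below the first zero of J_l
    return brentq(f, 1e-9, u_hi)

lam, n_core, n_clad = 1.06e-6, 1.45, 1.0      # wavelength, silica core, air cladding
NA = np.sqrt(n_core**2 - n_clad**2)           # numerical aperture
u01_target = 2.2                              # assumed value from a PCF mode solver

def mismatch(a):
    V = 2 * np.pi * a / lam * NA              # normalized frequency for radius a
    return lp_eigenvalue(V, 0) - u01_target

a_eq = brentq(mismatch, 0.4e-6, 5e-6)         # equivalent core radius, metres
print(f"equivalent step-index radius: {a_eq * 1e6:.3f} um")
```

Matching the second mode's eigenvalue as well, as described above, would add a second equation and a second free parameter to the fit.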

Relevance:

30.00%

Publisher:

Abstract:

Bubbling fluidized bed technology is one of the most effective means of achieving interaction between solids and a gas flow, mainly owing to its good mixing and high heat and mass transfer rates. It has been widely used at a commercial scale for drying of grains, for example in the pharmaceutical, fertilizer and food industries. When applied to drying of non-porous moist solid particles, the water is drawn off, driven by the difference in water concentration between the solid phase and the fluidizing gas. In most cases, the fluidizing gas, or drying agent, is air. Despite the simplicity of its operation, the design of a bubbling fluidized bed dryer requires an understanding of the combined complexity of the hydrodynamics and the mass transfer mechanism. On the other hand, reliable mass transfer coefficient equations are also required to satisfy the growing interest in mathematical modelling and simulation for accurate prediction of the process kinetics. This chapter presents an overview of the various mechanisms contributing to particulate drying in a bubbling fluidized bed and the mass transfer coefficient corresponding to each mechanism. In addition, a case study on measuring the overall mass transfer coefficient is discussed. These measurements are then used for the validation of mass transfer coefficient correlations and for assessing the various assumptions used in developing these correlations.
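As one concrete instance of the correlation-based prediction mentioned above, a Ranz-Marshall-type Sherwood correlation (a common choice in the drying literature, not necessarily the one validated in this chapter) gives the convective gas-particle mass transfer coefficient:

```python
# Sh = 2 + 0.6 Re^0.5 Sc^(1/3); k_c = Sh * D_AB / d_p.
# All property values below are illustrative assumptions for humid air.
import math

D_AB = 2.9e-5      # m^2/s, water vapour in air
rho_g = 1.06       # kg/m^3, gas density
mu_g = 2.0e-5      # Pa s, gas viscosity
d_p = 2.0e-3       # m, particle diameter
u_g = 0.5          # m/s, superficial gas velocity

Re = rho_g * u_g * d_p / mu_g               # particle Reynolds number
Sc = mu_g / (rho_g * D_AB)                  # Schmidt number
Sh = 2.0 + 0.6 * math.sqrt(Re) * Sc ** (1 / 3)
k_c = Sh * D_AB / d_p                       # mass transfer coefficient, m/s
print(f"Re={Re:.1f}  Sc={Sc:.2f}  Sh={Sh:.2f}  k_c={k_c:.3e} m/s")
```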

Relevance:

30.00%

Publisher:

Abstract:

Development of mass spectrometry techniques to detect protein oxidation, which contributes to signalling and inflammation, is important. Label-free approaches have the advantage of reduced sample manipulation, but are challenging in complex samples owing to undirected analysis of large data sets using statistical search engines. To identify oxidised proteins in biological samples, we previously developed a targeted approach involving precursor ion scanning for diagnostic MS3 ions from oxidised residues. Here, we tested this approach for other oxidations, and compared it with an alternative approach involving the use of extracted ion chromatograms (XICs) generated from high-resolution MS/MS data using very narrow mass windows. This accurate-mass XIC methodology was effective at identifying nitrotyrosine, chlorotyrosine, and oxidative deamination of lysine, and for tyrosine oxidations it highlighted more modified peptide species than precursor ion scanning or statistical database searches. Although some false positive peaks still occurred in the XICs, these could be identified by comparative assessment of the peak intensities. The method has the advantage that a number of different modifications can be analysed simultaneously in a single LC-MS/MS run. This article is part of a Special Issue entitled: Posttranslational Protein Modifications in Biology and Medicine. Biological significance: The use of accurate-mass extracted product ion chromatograms to detect oxidised peptides could improve the identification of oxidatively damaged proteins in inflammatory conditions. © 2013 Elsevier B.V.
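A minimal sketch of the narrow-window XIC idea (the target m/z and ppm tolerance are illustrative assumptions, not values from the paper; synthetic scans are used to keep the sketch self-contained, where real data would come from the instrument file):

```python
# Build an extracted ion chromatogram for a diagnostic product ion using a
# very narrow ppm window over centroided MS/MS scans.
import numpy as np

def extracted_ion_chromatogram(scans, target_mz, ppm_tol=5.0):
    """scans: list of (retention_time, mz_array, intensity_array) tuples.
    Returns (rt, intensity) arrays; intensity is the summed signal inside
    the +/- ppm_tol window around target_mz in each scan."""
    half_width = target_mz * ppm_tol * 1e-6
    rts, xic = [], []
    for rt, mz, inten in scans:
        mask = np.abs(np.asarray(mz) - target_mz) <= half_width
        rts.append(rt)
        xic.append(np.asarray(inten)[mask].sum())
    return np.array(rts), np.array(xic)

# Toy usage with synthetic scans:
rng = np.random.default_rng(1)
scans = [(rt, rng.uniform(100, 1000, 200), rng.exponential(1e4, 200))
         for rt in np.linspace(0, 30, 50)]
rt, xic = extracted_ion_chromatogram(scans, target_mz=181.06, ppm_tol=5.0)
print(xic.max())
```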

Relevance:

30.00%

Publisher:

Abstract:

An approach for effective implementation of greedy selection methodologies, to approximate an image partitioned into blocks, is proposed. The method is specially designed for approximating partitions on a transformed image. It evolves by selecting, at each iteration step, i) the elements for approximating each of the blocks partitioning the image and ii) the hierarchized sequence in which the blocks are approximated to reach the required global condition on sparsity. © 2013 IEEE.
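A minimal sketch of the global, hierarchized selection idea (simplified to thresholding on an orthogonal block DCT rather than the paper's greedy pursuit over a redundant dictionary; block size and sparsity budget are illustrative):

```python
# Coefficients from all blocks compete in one hierarchized queue, so blocks
# that are easy to approximate consume few atoms and the global sparsity
# target is met collectively rather than per block.
import numpy as np
from scipy.fft import dctn, idctn

def greedy_block_approx(img, block=8, global_atoms=2000):
    h, w = img.shape
    C = np.zeros_like(img, dtype=float)
    # Transform every block independently (orthonormal 2-D DCT).
    for i in range(0, h, block):
        for j in range(0, w, block):
            C[i:i+block, j:j+block] = dctn(img[i:i+block, j:j+block], norm='ortho')
    # Hierarchize all coefficients by magnitude; keep the top ones globally.
    flat = np.abs(C).ravel()
    keep = np.argsort(flat)[::-1][:global_atoms]
    mask = np.zeros(flat.size, dtype=bool)
    mask[keep] = True
    C *= mask.reshape(C.shape)
    # Inverse transform block by block.
    out = np.zeros_like(img, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i+block, j:j+block] = idctn(C[i:i+block, j:j+block], norm='ortho')
    return out

img = np.random.default_rng(2).random((64, 64))
approx = greedy_block_approx(img, block=8, global_atoms=500)
print("mean absolute error:", np.abs(img - approx).mean())
```

The hierarchized global queue is what distinguishes this from fixing a per-block sparsity: detailed blocks draw more of the shared budget.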

Relevance:

30.00%

Publisher:

Abstract:

We consider a model eigenvalue problem (EVP) in 1D, with periodic or semi-periodic boundary conditions (BCs). The discretization of this type of EVP by consistent-mass finite element methods (FEMs) leads to the generalized matrix EVP Kc = λMc, where K and M are real, symmetric matrices with a certain (skew-)circulant structure. In this paper we restrict our attention to the use of a quadratic FE mesh. Explicit expressions for the eigenvalues of the resulting algebraic EVP are established. This leads to an explicit form for the approximation error in terms of the mesh parameter, which confirms the theoretical error estimates obtained in [2].
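A minimal sketch of the discrete problem for the periodic case (illustrative, not the paper's derivation): assemble K and M from the standard quadratic (P2) element matrices with a periodic wrap, solve the generalized EVP, and compare with the exact eigenvalues of −u″ = λu on (0, 1):

```python
# Quadratic (P2) consistent-mass FEM for -u'' = lambda*u with periodic BCs;
# exact eigenvalues are lambda_k = (2*pi*k)^2, each nonzero value doubled.
import numpy as np
from scipy.linalg import eigh

n_el = 16
h = 1.0 / n_el
# Standard P2 element matrices on an element of length h (nodes at 0, h/2, h):
Ke = (1.0 / (3.0 * h)) * np.array([[7., -8., 1.], [-8., 16., -8.], [1., -8., 7.]])
Me = (h / 30.0) * np.array([[4., 2., -1.], [2., 16., 2.], [-1., 2., 4.]])

n_dof = 2 * n_el              # periodic wrap: last vertex node coincides with first
K = np.zeros((n_dof, n_dof))
M = np.zeros((n_dof, n_dof))
for e in range(n_el):
    # Vertex nodes get even indices, midside nodes odd; wrap at the boundary.
    dofs = [2 * e, 2 * e + 1, (2 * e + 2) % n_dof]
    for a in range(3):
        for b in range(3):
            K[dofs[a], dofs[b]] += Ke[a, b]
            M[dofs[a], dofs[b]] += Me[a, b]

lam = eigh(K, M, eigvals_only=True)
exact = sorted([0.0] + [(2 * np.pi * k) ** 2 for k in (1, 2, 3) for _ in range(2)])
print(np.round(lam[:7], 3))   # computed
print(np.round(exact, 3))     # exact
```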

Relevance:

30.00%

Publisher:

Abstract:

Issues of body image and ability to achieve intimacy are connected to body weight, yet remain largely unexplored and have not been evaluated by gender. The underlying purpose of this research was to determine whether avoidant attitudes and perceptions of one's body may hold implications for its use in intimate interactions, and whether an above-average body weight would tend to increase this avoidance. The National Health and Nutrition Examination Survey (NHANES, 1999-2002) finds that 64.5% of US adults are overweight, including 61.9% of women and 67.2% of men. The increasing prevalence of overweight and obesity in men and women shows no reverse trend, nor have prevention and treatment proven effective in the long term. The researcher gathered self-reported age, gender, height and weight data from 55 male and 58 female subjects (a sample size determined by a prospective power analysis with a desired medium effect size, r = .30) to determine body mass index (BMI); the sample had a mean age of 21.6 years and a mean BMI of 25.6. Survey instruments consisted of two scales germane to the variables being examined: (1) Descutner and Thelen of the University of Missouri's (1991) Fear-of-Intimacy scale, and (2) Rosen, Srebnik, Saltzberg, and Wendt's (1991) Body Image Avoidance Questionnaire. Results indicated that as body mass index increases, fear of intimacy increases (p<0.05), and that as body mass index increases, body image avoidance increases (p<0.05). The relationship that as body image avoidance increases, fear of intimacy increases was not supported, but approached significance (p<0.07). No differences in these relationships were found between gender groups. For age, the only observed relationship was a difference between scores for age groups [18 to 22 (group 1) and 23 to 34 (group 2)] for the relationship of body image avoidance and fear of intimacy (p<0.02). The results suggest that the relationship of body image avoidance and fear of intimacy, as well as age, bear consideration toward the escalating prevalence of overweight and obesity. An integrative approach to body weight that addresses issues of body image and intimacy may prove effective in prevention and treatment.

Relevance:

30.00%

Publisher:

Abstract:

The increasing emphasis on mass customization, shortened product lifecycles, and synchronized supply chains, coupled with advances in information systems, is driving most firms towards make-to-order (MTO) operations. Increasing global competition, lower profit margins, and higher customer expectations force MTO firms to plan their capacity by managing the effective demand. The goal of this research was to maximize the operational profits of a make-to-order operation by selectively accepting incoming customer orders and simultaneously allocating capacity for them at the sales stage. For integrating the two decisions, a Mixed-Integer Linear Program (MILP) was formulated which can aid an operations manager in an MTO environment to select a set of potential customer orders such that all the selected orders are fulfilled by their deadlines. The proposed model combines the order acceptance/rejection decision with detailed scheduling. Experiments with the formulation indicate that for larger problem sizes, the computational time required to determine an optimal solution is prohibitive. The formulation has a block diagonal structure, and can be decomposed into one or more sub-problems (i.e. one sub-problem for each customer order) and a master problem by applying Dantzig-Wolfe's decomposition principles. To efficiently solve the original MILP, an exact Branch-and-Price algorithm was developed. Various approximation algorithms were developed to further improve the runtime. Experiments conducted unequivocally show the efficiency of these algorithms compared to a commercial optimization solver. The existing literature addresses the static order acceptance problem for a single-machine environment with regular capacity, with an objective to maximize profits under a penalty for tardiness. This dissertation solves the order acceptance and capacity planning problem for a job shop environment with multiple resources, where both regular and overtime resources are considered. The Branch-and-Price algorithms developed in this dissertation are faster and can be incorporated in a decision support system that can be used on a daily basis to help make intelligent decisions in an MTO operation.
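A minimal sketch of the joint acceptance/allocation idea as a small time-indexed MILP (single resource with regular and overtime capacity; all data, names, and the solver choice are illustrative assumptions, and the dissertation's job shop formulation and Branch-and-Price machinery are much richer):

```python
# Accept/reject orders and allocate per-period capacity, including paid
# overtime, so that accepted orders finish by their deadlines.
import pulp

orders = {               # order: (profit, processing hours, deadline period)
    "A": (900, 5, 4), "B": (600, 3, 2), "C": (1200, 8, 6), "D": (450, 4, 3),
}
T = range(1, 7)          # planning periods
REG_CAP, OT_CAP, OT_COST = 3, 2, 60   # per-period hours and overtime cost/hour

prob = pulp.LpProblem("order_acceptance", pulp.LpMaximize)
accept = pulp.LpVariable.dicts("accept", orders, cat="Binary")
work = pulp.LpVariable.dicts("work", [(o, t) for o in orders for t in T], lowBound=0)
ot = pulp.LpVariable.dicts("overtime", T, lowBound=0, upBound=OT_CAP)

# Objective: profit of accepted orders minus overtime cost.
prob += (pulp.lpSum(orders[o][0] * accept[o] for o in orders)
         - OT_COST * pulp.lpSum(ot[t] for t in T))
for o, (profit, p, d) in orders.items():
    # An accepted order receives exactly its processing time before its deadline.
    prob += pulp.lpSum(work[o, t] for t in T if t <= d) == p * accept[o]
    for t in T:
        if t > d:
            prob += work[o, t] == 0
for t in T:
    # Per-period capacity: regular hours plus purchased overtime.
    prob += pulp.lpSum(work[o, t] for o in orders) <= REG_CAP + ot[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({o: int(accept[o].value()) for o in orders}, "profit:", pulp.value(prob.objective))
```

The block diagonal structure noted above is visible even here: each order's constraints touch only its own variables, coupled solely through the per-period capacity rows, which is what makes the Dantzig-Wolfe decomposition natural.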

Relevance:

30.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in the value of n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo (MCMC) algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
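A minimal sketch of the latent-class (PARAFAC) factorization underlying this chapter (dimensions and parameters are illustrative):

```python
# PARAFAC/latent-class pmf: p(x1,...,xp) = sum_h nu_h * prod_j psi_j[h, x_j].
import numpy as np

rng = np.random.default_rng(3)
p, k, d = 4, 3, 5              # variables, latent classes, categories per variable
nu = rng.dirichlet(np.ones(k))                               # class weights
psi = [rng.dirichlet(np.ones(d), size=k) for _ in range(p)]  # psi[j][h] is a pmf

def joint_pmf(x):
    """Probability of cell x = (x_1, ..., x_p) under the latent class model."""
    return sum(nu[h] * np.prod([psi[j][h, x[j]] for j in range(p)]) for h in range(k))

# The full probability tensor has nonnegative rank <= k by construction:
tensor = np.zeros((d,) * p)
for idx in np.ndindex(*tensor.shape):
    tensor[idx] = joint_pmf(idx)
print("tensor sums to", tensor.sum())   # ~1.0
```

The collapsed Tucker class proposed in the chapter bridges this PARAFAC form and the more general Tucker form, which shares a core array across dimensions.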

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
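For orientation, a related but simpler construction than the chapter's KL-optimal Gaussian approximation is the Laplace approximation, a Gaussian centred at the posterior mode with the inverse Hessian as covariance. A minimal sketch for a Poisson log-linear model (with a Gaussian prior standing in for the Diaconis-Ylvisaker prior, purely for illustration):

```python
# Laplace approximation for a Poisson log-linear model on a 2x2 table.
import numpy as np
from scipy.optimize import minimize

y = np.array([18.0, 7.0, 5.0, 12.0])        # cell counts of a 2x2 table
# Design: intercept, row, column, interaction (illustrative coding).
X = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]], dtype=float)
tau2 = 4.0                                   # prior variance

def neg_log_post(beta):
    eta = X @ beta
    # Poisson log-likelihood (up to constants) plus Gaussian prior penalty.
    return (np.exp(eta).sum() - y @ eta) + 0.5 * beta @ beta / tau2

res = minimize(neg_log_post, np.zeros(4), method="BFGS")
beta_hat = res.x
# Hessian of the negative log posterior at the mode gives the Gaussian precision.
W = np.diag(np.exp(X @ beta_hat))
H = X.T @ W @ X + np.eye(4) / tau2
Sigma = np.linalg.inv(H)
print("posterior mode:", np.round(beta_hat, 3))
print("Laplace covariance diag:", np.round(np.diag(Sigma), 4))
```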

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
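For orientation, de Haan's spectral representation referred to above writes a simple max-stable process Z with unit Fréchet margins as

$$ Z(s) = \max_{i \ge 1} \zeta_i \, W_i(s), $$

where {ζ_i} are the points of a Poisson process on (0, ∞) with intensity ζ^{-2} dζ and the W_i are i.i.d. nonnegative processes with E[W(s)] = 1; the construction in Chapter 5 endows the support points in this representation with velocities and lifetimes.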

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo, the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
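A minimal sketch of one such approximating kernel (a random-subset likelihood inside a random-walk Metropolis step; the Gaussian target and all tuning values are illustrative assumptions, not the chapter's examples):

```python
# Random-walk Metropolis whose acceptance ratio uses the log-likelihood of a
# random data subset, rescaled to the full sample size (flat prior implied).
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(1.5, 1.0, size=100_000)
n = data.size

def subsampled_loglik(theta, batch):
    # Unbiased rescaled estimate of the full-data Gaussian log-likelihood.
    return (n / batch.size) * (-0.5 * ((batch - theta) ** 2).sum())

def approx_mh(n_iter=5000, step=0.01, batch_size=1000):
    theta, chain = 0.0, []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        batch = rng.choice(data, size=batch_size, replace=False)
        # Same minibatch on both sides: errors partially cancel, but the
        # kernel is still only approximately invariant for the posterior.
        log_ratio = subsampled_loglik(prop, batch) - subsampled_loglik(theta, batch)
        if np.log(rng.uniform()) < log_ratio:
            theta = prop
        chain.append(theta)
    return np.array(chain)

chain = approx_mh()
print("posterior mean estimate:", chain[2000:].mean())
```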

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
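A minimal sketch of the truncated-Normal (Albert-Chib-type) data augmentation sampler in the rare-event regime described above, for an intercept-only probit model (sample sizes and iteration counts are illustrative):

```python
# Probit data augmentation Gibbs sampler: large n, few successes. Slow mixing
# shows up as near-unit lag-1 autocorrelation in the intercept chain.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(5)
n, n_success = 10_000, 20
y = np.zeros(n); y[:n_success] = 1.0

beta, draws = 0.0, []
for it in range(1000):
    # Latent z_i ~ N(beta, 1) truncated to (0, inf) if y=1, (-inf, 0) if y=0;
    # bounds below are in standardized units for z_i - beta.
    a = np.where(y == 1, -beta, -np.inf)
    b = np.where(y == 1, np.inf, -beta)
    z = beta + truncnorm.rvs(a, b, random_state=rng)
    # Conjugate update for the intercept (flat prior): beta | z ~ N(mean(z), 1/n).
    beta = rng.normal(z.mean(), 1.0 / np.sqrt(n))
    draws.append(beta)

draws = np.array(draws[200:])
ac1 = np.corrcoef(draws[:-1], draws[1:])[0, 1]
print(f"lag-1 autocorrelation: {ac1:.3f}")   # near 1 indicates poor mixing
```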
