Abstract:
Magnetic zeolite NaA with different Fe3O4 loadings was prepared by hydrothermal synthesis from metakaolin and Fe3O4. The effect of the added Fe3O4 on ammonium removal by zeolite NaA was investigated by varying the Fe3O4 loading, pH, adsorption temperature, initial concentration, and adsorption time. The Langmuir, Freundlich, and pseudo-second-order models were used to describe the nature and mechanism of ammonium ion exchange on both the zeolite and the magnetic zeolite. Thermodynamic parameters, namely the changes in Gibbs free energy, enthalpy, and entropy, were calculated. The results show that all the selected factors affect ammonium ion exchange by both materials; however, the added Fe3O4 apparently does not affect the ion-exchange performance of the zeolite towards the ammonium ion. The Freundlich model describes the adsorption process better than the Langmuir model. Moreover, kinetic analysis indicates that the exchange of ammonium on the two materials follows a pseudo-second-order model. Thermodynamic analysis makes it clear that the adsorption of ammonium is spontaneous and exothermic. Both the kinetic and thermodynamic analyses suggest that the addition of Fe3O4 has no considerable effect on ammonium adsorption by the zeolite. According to these results, magnetic zeolite NaA can be used for ammonium removal owing to its good adsorption performance and easy magnetic separation from aqueous solution.
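For reference, the models named above are conventionally written as follows (standard textbook forms from the adsorption literature, not equations taken from the paper):

```latex
% Langmuir and Freundlich isotherms
% (q_e: equilibrium uptake, C_e: equilibrium concentration)
q_e = \frac{q_m K_L C_e}{1 + K_L C_e},
\qquad
q_e = K_F \, C_e^{1/n}

% Pseudo-second-order kinetics (linearised form)
\frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e}

% Thermodynamics: Gibbs free energy and the van 't Hoff relation
\Delta G^\circ = -RT \ln K,
\qquad
\ln K = \frac{\Delta S^\circ}{R} - \frac{\Delta H^\circ}{RT}
```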
Abstract:
The Raman spectrum of callaghanite, Cu2Mg2(CO3)(OH)6⋅2H2O, was studied and compared with published Raman spectra of azurite, malachite and hydromagnesite. Stretching and bending vibrations of the carbonate and hydroxyl units and of the water molecules were tentatively assigned. Approximate O–H…O hydrogen bond lengths were inferred from the spectra. Because of the high content of hydroxyl ions in the crystal structure compared with the low content of carbonate units, callaghanite would be better classified as a carbonatohydroxide than as a hydroxycarbonate.
Abstract:
To recognize faces in video, face appearances have been widely modeled as piecewise-local linear models that linearly approximate the smooth yet non-linear low-dimensional face appearance manifolds. The choice of representation for the local models is crucial. Most existing methods learn each local model individually, meaning that they only capture variations within each class. In this work, we propose to represent the local models as Gaussian distributions that are learned simultaneously using heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With the PLDA, not only are the within-class variations estimated during training, but the separability between classes is also maximized, leading to improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately: instead of assuming a single global within-class covariance, it learns a different within-class covariance specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets have shown the merit of the proposed method, which achieves better performance than the state-of-the-art technique.
Abstract:
Sophisticated models of human social behaviour are fast becoming highly desirable in an increasingly complex and interrelated world. Here, we propose that rather than taking established theories from the physical sciences and naively mapping them into the social world, the advanced concepts and theories of social psychology should be taken as a starting point, and used to develop a new modelling methodology. In order to illustrate how such an approach might be carried out, we attempt to model the low elaboration attitude changes of a society of agents in an evolving social context. We propose a geometric model of an agent in context, where individual agent attitudes are seen to self-organise to form ideologies, which then serve to guide further agent-based attitude changes. A computational implementation of the model is shown to exhibit a number of interesting phenomena, including a tendency for a measure of the entropy in the system to decrease, and a potential for externally guiding a population of agents towards a new desired ideology.
Abstract:
Approximate Bayesian computation has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings that bypasses simulations from the model and the choices of the approximate Bayesian computation parameters (summary statistics, distance, tolerance), while being convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The Bayesian computation with empirical likelihood algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models.
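As background, the plain ABC rejection sampler that the empirical-likelihood route bypasses can be sketched as follows. This is a toy illustration of the generic scheme (and of the tuning choices it entails: summary statistic, distance, tolerance), not the paper's algorithm; the uniform prior, normal model, and sample-mean summary are all illustrative assumptions:

```python
import random
import statistics

def abc_rejection(observed, simulate, summary, n_draws=10000, tol=0.1):
    """Plain ABC rejection: keep prior draws whose simulated summary
    statistic lies within `tol` (the tolerance) of the observed one."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(-5.0, 5.0)      # draw from a flat prior
        s_sim = summary(simulate(theta))       # simulate pseudo-data
        if abs(s_sim - s_obs) < tol:           # distance + tolerance check
            accepted.append(theta)
    return accepted

# Toy example: infer the mean of a unit-variance normal from 50 samples.
random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(50)]
simulate = lambda mu: [random.gauss(mu, 1.0) for _ in range(50)]
posterior = abc_rejection(data, simulate, statistics.mean)
print(round(statistics.mean(posterior), 1))
```

The accepted draws approximate the posterior; shrinking `tol` sharpens the approximation at the cost of more rejected simulations, which is exactly the trade-off the empirical-likelihood approach avoids.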
Abstract:
In this paper, spectral approximations are used to compute the fractional integral and the Caputo derivative. Effective recursive formulae based on the Legendre, Chebyshev and Jacobi polynomials are developed to approximate the fractional integral, and a succinct scheme for approximating the Caputo derivative is also derived. A collocation method is proposed to solve fractional initial value problems and boundary value problems. Numerical examples are provided to illustrate the effectiveness of the derived methods.
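For context, the two operators referred to above are conventionally defined as follows (standard definitions, not formulas taken from the paper):

```latex
% Riemann-Liouville fractional integral of order \alpha > 0
I^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} f(s)\, ds

% Caputo fractional derivative of order \alpha, with n-1 < \alpha < n
{}^{C}\!D^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_0^t (t-s)^{n-\alpha-1} f^{(n)}(s)\, ds
```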
Abstract:
Topic modeling has been widely utilized in information retrieval, text mining and text classification. Most existing statistical topic modeling methods, such as LDA and pLSA, represent a topic by selecting single words from the multinomial word distribution over that topic. This term-based representation has two main shortcomings: firstly, popular or common words occur very often across different topics, which makes the topics ambiguous; secondly, single words lack the coherent semantic meaning needed to accurately represent topics. To overcome these problems, in this paper we propose a two-stage model that combines text mining and pattern mining with statistical modeling to generate more discriminative and semantically rich topic representations. Experiments show that the optimized topic representations generated by the proposed methods outperform the typical statistical topic modeling method LDA in terms of accuracy and certainty.
Abstract:
Introduction: The human patellar tendon is highly adaptive to changes in habitual loading, but little is known about its acute mechanical response to exercise. This research evaluated the immediate transverse strain response of the patellar tendon to a bout of resistive quadriceps exercise. Methods: Twelve healthy adult males (mean age 34.0±12.1 years, height 1.75±0.09 m and weight 76.7±12.3 kg) free of knee pain participated in the research. A 10–5 MHz linear-array transducer was used to acquire standardised sagittal sonograms of the right patellar tendon immediately prior to and following 90 repetitions of a double-leg parallel-squat exercise performed against a resistance of 175% bodyweight. Tendon thickness was determined 20 mm distal to the pole of the patella, and transverse Hencky strain was calculated as the natural log of the ratio of post- to pre-exercise tendon thickness, expressed as a percentage. Measures of tendon echotexture (echogenicity and entropy) were also calculated from subsequent gray-scale profiles. Results: Quadriceps exercise resulted in an immediate decrease in patellar tendon thickness (P<.05), equating to a transverse strain of -22.5±3.4%, and was accompanied by increased tendon echogenicity (P<.05) and decreased entropy (P<.05). The transverse strain response of the patellar tendon was significantly correlated with both tendon echogenicity (r=-0.58, P<.05) and entropy following exercise (r=0.73, P<.05), while older age was associated with greater entropy of the patellar tendon prior to exercise (r=0.79, P<.05) and a reduced transverse strain response (r=0.61, P<.05) following exercise. Conclusions: This study is the first to show that quadriceps exercise invokes structural realignment and fluid movement within the tendon matrix that are manifested as changes in echotexture and transverse strain in the patellar tendon. (C) 2012 The American College of Sports Medicine
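The transverse Hencky strain defined in the Methods is straightforward to compute. The sketch below uses hypothetical pre- and post-exercise thicknesses chosen only to land near the reported group mean of about -22.5%; the numbers are illustrative, not measurements from the study:

```python
import math

def transverse_hencky_strain(pre_mm: float, post_mm: float) -> float:
    """Transverse Hencky (true) strain as a percentage: the natural log
    of the post- to pre-exercise tendon thickness ratio, times 100."""
    return 100.0 * math.log(post_mm / pre_mm)

# Hypothetical thicknesses: 4.0 mm before exercise, 3.2 mm after.
print(round(transverse_hencky_strain(4.0, 3.2), 1))  # → -22.3
```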
Abstract:
In a recent paper, Gordon, Muratov, and Shvartsman studied a partial differential equation (PDE) model describing radially symmetric diffusion and degradation in two and three dimensions. They paid particular attention to the local accumulation time (LAT), also known in the literature as the mean action time, which is a spatially dependent timescale that can be used to estimate the time required for the transient solution to effectively reach steady state. They presented exact results for three-dimensional applications and gave approximate results for the two-dimensional analogue. Here we make two generalizations of Gordon, Muratov, and Shvartsman's work: (i) we present an exact expression for the LAT in any dimension, and (ii) we present an exact expression for the variance of the distribution. The variance provides useful information regarding the spread about the mean that is not captured by the LAT. We conclude by describing further extensions of the model that were not considered by Gordon, Muratov, and Shvartsman, and find that exact expressions for the LAT can also be derived for these important extensions.
Abstract:
Cold-formed steel lipped channels are commonly used in LSF wall construction as load-bearing studs with plasterboards on both sides. Under fire conditions, cold-formed thin-walled steel sections heat up quickly, resulting in a rapid reduction of their strength and stiffness. Usually, LSF wall panels are subjected to fire from one side, which causes thermal bowing, neutral axis shift and magnification effects due to the development of non-uniform temperature distributions across the stud. This induces an additional bending moment in the stud, and hence the studs in LSF wall panels should be designed as beam-columns considering both the applied axial compression load and the additional bending moment. Traditionally, the fire resistance rating of these wall panels is based on approximate prescriptive methods, which are very often limited to the standard wall configurations used by industry. Therefore a detailed research study is needed to develop fire design rules that predict the failure load, and hence the failure time, of LSF wall panels subject to non-uniform temperature distributions. This paper presents the details of an investigation to develop suitable fire design rules for LSF wall studs under non-uniform elevated temperature distributions. The application of previously developed fire design rules based on the AISI design manual and Eurocode 3 Parts 1.2 and 1.3 to LSF wall studs was investigated in detail, and new simplified fire design rules based on AS/NZS 4600 and Eurocode 3 Part 1.3 were proposed in the current study, with suitable allowances for the interaction effects of compression and bending actions. The accuracy of the proposed fire design rules was verified using the results from full-scale fire tests and extensive numerical studies.
Abstract:
Traditionally, the fire resistance rating of LSF wall systems is based on approximate prescriptive methods developed using limited fire tests. Therefore a detailed research study into the performance of load-bearing LSF wall systems under standard fire conditions was undertaken to develop improved fire design rules. It used the extensive fire performance results of eight different LSF wall systems from a series of full-scale fire tests and numerical studies for this purpose. The use of previous fire design rules, developed for LSF walls subjected to non-uniform elevated temperature distributions based on the AISI design manual and Eurocode 3 Parts 1.2 and 1.3, was investigated first. New simplified fire design rules based on AS/NZS 4600, the North American Specification and Eurocode 3 Part 1.3 were then proposed in this study, with suitable allowances for the interaction effects of compression and bending actions. The importance of considering thermal bowing, magnified thermal bowing and neutral axis shift in the fire design was also investigated. A spreadsheet-based design tool was developed from the new design rules to predict the failure load ratio versus time and temperature curves for varying LSF wall configurations. The accuracy of the proposed design rules was verified using the test and FEA results for different wall configurations, steel grades, thicknesses and load ratios. This paper presents the details and results of this study, including the improved fire design rules for predicting the load capacity of LSF wall studs and the failure times of LSF walls under standard fire conditions.
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high-performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for solving the conservation laws that describe sea water intrusion, and is well suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably-saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems, which use a combination of computational hardware such as CPUs and GPUs, are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function.
This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally intensive aspects of the implicit time stepping scheme run on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, where much finer meshes are required to obtain solutions of equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
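The matrix-free Jacobian-vector product at the heart of an inexact (Jacobian-free) Newton-Krylov solver can be sketched as follows. This is a minimal NumPy illustration of the general finite-difference technique, not the thesis code; the residual F and the perturbation size are illustrative assumptions:

```python
import numpy as np

def jacvec(F, u, v, eps=1e-7):
    """Matrix-free approximation of the Jacobian-vector product J(u) @ v
    using one extra residual evaluation: J v ≈ (F(u + eps*v) - F(u)) / eps.
    This is what lets a Krylov method run without ever forming J."""
    return (F(u + eps * v) - F(u)) / eps

# Toy nonlinear residual with a known Jacobian for comparison.
F = lambda u: np.array([u[0]**2 + u[1], np.sin(u[1])])
u = np.array([1.0, 0.5])
v = np.array([1.0, 2.0])
# Exact Jacobian is [[2*u0, 1], [0, cos(u1)]], so J @ v is:
exact = np.array([2*u[0]*v[0] + v[1], np.cos(u[1]) * v[1]])
print(np.allclose(jacvec(F, u, v), exact, atol=1e-5))  # → True
```

Since each Krylov iteration needs only such products, the entire linear solve reduces to residual evaluations, which is why it maps well onto a GPU implementation of the residual function.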
Abstract:
In this letter the core-core-valence Auger transitions of an atomic impurity, either in the bulk or adsorbed on a jellium-like surface, are computed within a DFT framework. The Auger rates calculated from the Fermi golden rule are compared with those determined by an approximate and simpler expression, based on the local density of states (LDOS), with a core hole present, in a region around the impurity nucleus. Different atoms (Na and Mg), solids (Al and Ag), and several impurity locations are considered. We obtain excellent agreement between the KL1V and KL23V rates worked out with the two approaches. The radius of the sphere in which the LDOS is calculated is the relevant parameter of the simpler approach; its value depends only on the atomic species, regardless of the location of the impurity and the type of substrate. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
The count-min sketch is a useful data structure for recording and estimating the frequency of string occurrences, such as passwords, in sub-linear space with high accuracy. However, it cannot be used to draw conclusions about groups of similar strings, for example strings close in Hamming distance. This paper introduces a variant of the count-min sketch that allows counts to be estimated within a specified Hamming distance of the queried string. This variant can be used to prevent users from choosing popular passwords, like the original sketch, but it also allows for a more efficient method of analysing password statistics.
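As background, a minimal version of the standard count-min sketch (the starting point, not the Hamming-distance variant this paper introduces) can be sketched as follows; the SHA-256-based row hashing is an illustrative choice:

```python
import hashlib

class CountMinSketch:
    """Textbook count-min sketch: d hash rows of w counters; the
    frequency estimate is the minimum counter across the rows, which
    can overestimate (on hash collisions) but never underestimate."""
    def __init__(self, width: int = 1000, depth: int = 4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _hash(self, item: str, row: int) -> int:
        # Derive an independent-looking hash per row by salting with it.
        digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, item: str, count: int = 1) -> None:
        for row in range(self.depth):
            self.table[row][self._hash(item, row)] += count

    def query(self, item: str) -> int:
        return min(self.table[row][self._hash(item, row)]
                   for row in range(self.depth))

# Toy password-frequency example.
cms = CountMinSketch()
for pw in ["123456"] * 5 + ["hunter2"] * 2:
    cms.add(pw)
print(cms.query("123456"), cms.query("hunter2"))  # → 5 2
```

Note that `query` only answers for an exact string; answering for all strings within a given Hamming distance is precisely the extension the paper develops.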
Abstract:
Purpose: Flat-detector, cone-beam computed tomography (CBCT) has enormous potential to improve the accuracy of treatment delivery in image-guided radiotherapy (IGRT). To assist radiotherapists in interpreting these images, we use a Bayesian statistical model to label each voxel according to its tissue type. Methods: The rich sources of prior information in IGRT are incorporated into a hidden Markov random field (MRF) model of the 3D image lattice. Tissue densities in the reference CT scan are estimated using inverse regression and then rescaled to approximate the corresponding CBCT intensity values. The treatment planning contours are combined with published studies of physiological variability to produce a spatial prior distribution for changes in the size, shape and position of the tumour volume and organs at risk (OAR). The voxel labels are estimated using the iterated conditional modes (ICM) algorithm. Results: The accuracy of the method has been evaluated using 27 CBCT scans of an electron density phantom (CIRS, Inc. model 062). The mean voxel-wise misclassification rate was 6.2%, with Dice similarity coefficient of 0.73 for liver, muscle, breast and adipose tissue. Conclusions: By incorporating prior information, we are able to successfully segment CBCT images. This could be a viable approach for automated, online image analysis in radiotherapy.
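The iterated conditional modes (ICM) update used in the Methods can be illustrated on a toy lattice. The sketch below uses a simple Potts smoothing prior and a Gaussian data term as assumptions; it is a minimal illustration of ICM on an MRF, not the paper's full CBCT segmentation pipeline:

```python
import numpy as np

def icm_segment(image, means, beta=1.0, iters=5):
    """ICM on a hidden MRF: each pixel greedily takes the label minimising
    (intensity - class mean)^2 + beta * (number of disagreeing
    4-neighbours), sweeping the lattice a fixed number of times."""
    labels = np.argmin((image[..., None] - means) ** 2, axis=-1)  # ML init
    H, W = image.shape
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                costs = []
                for k, m in enumerate(means):
                    data = (image[i, j] - m) ** 2
                    nb = sum(labels[x, y] != k
                             for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                             if 0 <= x < H and 0 <= y < W)
                    costs.append(data + beta * nb)
                labels[i, j] = int(np.argmin(costs))
    return labels

# Noisy two-class image: left half dark, right half bright.
rng = np.random.default_rng(0)
truth = np.zeros((8, 8), dtype=int)
truth[:, 4:] = 1
image = truth + rng.normal(0.0, 0.3, truth.shape)
seg = icm_segment(image, means=np.array([0.0, 1.0]))
print((seg == truth).mean())
```

The spatial prior term plays the role of the planning-contour and physiological-variability priors in the full model; ICM converges quickly but only to a local optimum, which is why a good initialisation matters.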