962 results for decomposition rank
Abstract:
This paper reports single pulse shock tube and ab initio studies on the thermal decomposition of 2-fluoro- and 2-chloroethanol at T = 1000–1200 K. Both molecules have HX (X = F/Cl) and H2O molecular elimination channels. The CH3CHO formed by HX elimination is chemically active and undergoes secondary decomposition, resulting in the formation of CH4, C2H6, and C2H4. A detailed kinetic simulation indicates that the formation of C2H4 cannot be quantitatively explained as arising exclusively from secondary CH3CHO decomposition; contributions from primary radical processes must also be considered. Ab initio calculations on the HX and H2O elimination reactions from the haloethanols at the HF, MP2, and DFT levels with various basis sets up to 6-311++G** are reported. It is pointed out that, due to strong correlations between A and Ea, comparison of these two parameters between experimental and theoretical results could be misleading.
Abstract:
Gadolinium iron garnet was milled in a high-energy ball mill to study its magnetic properties in the nanocrystalline regime. XRD reveals the decomposition of the garnet phase into Gd-orthoferrite and Gd2O3 on milling. The variation of saturation magnetization and coercivity with milling is attributed to a possible shift in the compensation temperature on grain size reduction and an increase in the orthoferrite content. The Mössbauer spectrum at 16 K is characteristic of the magnetically ordered state corresponding to GdIG, GdFeO3, and α-Fe2O3, whereas at room temperature it is a superparamagnetic doublet.
Abstract:
Current analytical work on the effect of convection and viscoelasticity on the early and late stages of spinodal decomposition is briefly described. In the early stages, the effect of viscoelastic stresses was analysed using a simple Maxwell model for the stress, which was incorporated in the Langevin equation for the momentum field. The viscoelastic stresses are found to enhance the rate of decomposition. In the late stages, the pattern formed depends on the relative composition of the two species. Droplet spinodal decomposition occurs when the concentration of one of the species is small. Convective transport does not have a significant effect on the growth of a single droplet, but it does result in an attractive interaction between non-Brownian droplets which could lead to coalescence. The effect of convective transport on the growth of random interfaces in a near-symmetric quench was analysed using an 'area distribution function', which gives the distribution of surface area of the interface in curvature space. It was found that the curvature of the interface decreases in proportion to t in the late stages of spinodal decomposition, and the surface area also decreases in proportion to t.
Abstract:
We associate a sheaf model to a class of Hilbert modules satisfying a natural finiteness condition. It is obtained as the dual to a linear system of Hermitian vector spaces (in the sense of Grothendieck). A refined notion of curvature is derived from this construction, leading to a new unitary invariant for the Hilbert module. A division problem with bounds, originating in Douady's privilege, is related to this framework. A series of concrete computations illustrates the abstract concepts of the paper.
Abstract:
In this paper, the well-known Adomian Decomposition Method (ADM) is modified to solve fracture problems in laminated multi-directional composites. The results are compared with existing analytical/exact or experimental methods. The known ADM is modified to improve accuracy and convergence; the modified method is therefore named the Modified Adomian Decomposition Method (MADM). The results from MADM are found to converge very quickly, are simple to apply to fracture (singularity) problems, and are more accurate than experimental and analytical methods. MADM is quite efficient and is practically well-suited for use in these problems. Several examples are given to check the reliability of the present method. The principle of the decomposition method is described, along with its advantages for the analysis of fracture of laminated uni-directional composites.
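The series idea behind the standard ADM (the abstract does not detail the specific modification made in MADM) can be sketched on a toy nonlinear ODE, y' = y^2 with y(0) = 1, whose exact solution is 1/(1 - t): each term of the series comes from integrating an Adomian polynomial of the nonlinearity.

```python
import sympy as sp

t = sp.symbols('t')
lam = sp.symbols('lam')     # grouping parameter used to extract Adomian polynomials

# Solve y' = y**2, y(0) = 1 with the standard Adomian decomposition
# (a generic illustration, not the paper's MADM for fracture problems).
terms = [sp.Integer(1)]     # y_0 comes from the initial condition
N_terms = 4
for n in range(1, N_terms):
    # Adomian polynomial A_{n-1}: coefficient of lam**(n-1) in N(sum lam^k y_k)
    y_lam = sum(lam**k * terms[k] for k in range(len(terms)))
    A = sp.expand(y_lam**2).coeff(lam, n - 1)
    terms.append(sp.integrate(A, t))    # y_n = integral of A_{n-1}, y_n(0) = 0

approx = sp.expand(sum(terms))
print(approx)   # 1 + t + t**2 + t**3, the partial sum of 1/(1 - t)
```

Each added term extends the Taylor series of the exact solution 1/(1 - t), which is the sense in which the decomposition "converges quickly" for smooth problems.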
Abstract:
Nonlinear equations in mathematical physics and engineering are solved by linearizing the equations and forming various iterative procedures, then executing the numerical simulation. For strongly nonlinear problems, the solution obtained in the iterative process can diverge due to numerical instability. As a result, the application of numerical simulation for strongly nonlinear problems is limited. Helicopter aeroelasticity involves the solution of systems of nonlinear equations in a computationally expensive environment. Reliable solution methods which do not need Jacobian calculation at each iteration are needed for this problem. In this paper, a comparative study is done by incorporating different methods for solving the nonlinear equations in helicopter trim. Three different methods based on calculating the Jacobian at the initial guess are investigated. (C) 2011 Elsevier Masson SAS. All rights reserved.
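The Jacobian-reuse idea can be illustrated with a minimal "chord" Newton iteration that builds a finite-difference Jacobian once, at the initial guess, and reuses it for every subsequent step. This is a sketch of the general strategy, not any of the three trim algorithms actually compared in the paper.

```python
import numpy as np

def chord_newton(f, x0, h=1e-6, tol=1e-10, max_iter=100):
    """Newton-type iteration that computes the Jacobian only once,
    at the initial guess x0, via finite differences, and reuses it
    at every iteration (no per-iteration Jacobian cost)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    fx = f(x)
    J = np.empty((n, n))
    for j in range(n):                      # one-sided finite-difference columns
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(x + e) - fx) / h
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - np.linalg.solve(J, fx)      # reuse the initial-guess Jacobian
    return x

# usage: solve x0^2 + x1^2 = 2 and x0 = x1, whose root is (1, 1)
root = chord_newton(lambda x: np.array([x[0]**2 + x[1]**2 - 2.0,
                                        x[0] - x[1]]),
                    [2.0, 0.5])
print(root)
```

The trade-off mirrors the paper's motivation: convergence is only linear rather than quadratic, but each iteration avoids the expensive Jacobian evaluation.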
Abstract:
Acoustic modeling using mixtures of multivariate Gaussians is the prevalent approach for many speech processing problems. Computing likelihoods against a large set of Gaussians is required in many speech processing systems, and it is the computationally dominant phase for LVCSR systems. We express the likelihood computation as a multiplication of matrices representing augmented feature vectors and Gaussian parameters. The computational gain of this approach over traditional methods comes from exploiting the structure of these matrices and an efficient implementation of their multiplication. In particular, we explore direct low-rank approximation of the Gaussian parameter matrix, and indirect derivation of low-rank factors of the Gaussian parameter matrix by optimum approximation of the likelihood matrix. We show that both methods lead to similar speedups, but the latter has far less impact on recognition accuracy. Experiments on a 1138-word vocabulary RM1 task using the Sphinx 3.7 system show that, for a typical case, the matrix multiplication approach leads to an overall speedup of 46%. Both low-rank approximation methods increase the speedup to around 60%, with the former increasing the word error rate (WER) from 3.2% to 6.6%, while the latter increases it from 3.2% to 3.5%.
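The matrix-multiplication formulation can be sketched for diagonal-covariance Gaussians: augmenting each feature vector with its element-wise squares and a constant 1 turns the whole table of log-likelihoods into a single matrix product, whose parameter factor can then be truncated by SVD. Sizes and data below are illustrative, not the paper's RM1 setup.

```python
import numpy as np

rng = np.random.default_rng(0)
T, G, d = 200, 64, 13           # frames, Gaussians, feature dim (illustrative)
X = rng.normal(size=(T, d))                 # feature vectors
mu = rng.normal(size=(G, d))                # Gaussian means
var = rng.uniform(0.5, 2.0, size=(G, d))    # diagonal covariances

# Reference: direct per-Gaussian log-likelihoods
const = -0.5 * (d * np.log(2 * np.pi) + np.log(var).sum(axis=1))
direct = const[None, :] - 0.5 * (((X[:, None, :] - mu[None, :, :])**2) / var).sum(-1)

# Same table as ONE matrix product: augment features with x^2 and 1,
# and pack each Gaussian's parameters into a matching row.
A = np.hstack([X**2, X, np.ones((T, 1))])                         # (T, 2d+1)
P = np.hstack([-0.5 / var, mu / var,
               (const - 0.5 * (mu**2 / var).sum(1))[:, None]])    # (G, 2d+1)
L = A @ P.T                                                       # (T, G)

# Direct low-rank approximation of the parameter matrix via truncated SVD
U, s, Vt = np.linalg.svd(P, full_matrices=False)
r = 10
L_lowrank = (A @ (Vt[:r].T * s[:r])) @ U[:, :r].T                 # rank-r version
```

With rank r, the per-frame cost drops from O(G(2d+1)) to O(r(2d+1) + rG), which is the source of the speedups reported in the abstract.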
Abstract:
Let X_1, ..., X_m be a set of m statistically dependent sources over the common alphabet F_q that are linearly independent when considered as functions over the sample space. We consider a distributed function computation setting in which the receiver is interested in the lossless computation of the elements of an s-dimensional subspace W spanned by the elements of the row vector [X_1, ..., X_m]Γ, in which the (m x s) matrix Γ has rank s. A sequence of three increasingly refined approaches is presented, all based on linear encoders. The first approach uses a common matrix to encode all the sources and a Korner-Marton-like receiver to directly compute W. The second improves upon the first by showing that it is often more efficient to compute a carefully chosen superspace U of W. The superspace is identified by showing that the joint distribution of the {X_i} induces a unique decomposition of the set of all linear combinations of the {X_i} into a chain of subspaces identified by a normalized measure of entropy. This subspace chain also suggests a third approach, one that employs nested codes. For any joint distribution of the {X_i} and any W, the sum-rate of the nested code approach is no larger than that under the Slepian-Wolf (SW) approach. Under the SW approach, W is computed by first recovering each of the {X_i}. For a large class of joint distributions and subspaces W, the nested code approach is shown to improve upon SW. Additionally, a class of source distributions and subspaces is identified for which the nested-code approach is sum-rate optimal.
Abstract:
Let X be a noncompact symmetric space of higher rank. We consider two types of averages of functions on X: one over level sets of the heat kernel on X, and the other over geodesic spheres. We prove injectivity results for such functions, which extend the results in Pati and Sitaram (Sankhya Ser. A 62:419-424, 2000).
Abstract:
We propose a novel numerical method based on a generalized eigenvalue decomposition for solving the diffusion equation governing the correlation diffusion of photons in turbid media. Medical imaging modalities such as diffuse correlation tomography and ultrasound-modulated optical tomography have the (elliptic) diffusion equation parameterized by a time variable as the forward model. Hitherto, for the computation of the correlation function, the diffusion equation is solved repeatedly over the time parameter. We show that the use of a certain time-independent generalized eigenfunction basis results in the decoupling of the spatial and time dependence of the correlation function, thus allowing greater computational efficiency in arriving at the forward solution. Besides presenting the mathematical analysis of the generalized eigenvalue problem on the basis of spectral theory, we put forth the numerical results that compare the proposed numerical method with the standard technique for solving the diffusion equation.
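The decoupling idea can be sketched on a 1D finite-element stand-in (K + alpha(tau)·M)·phi = s, rather than the actual correlation diffusion model: a single generalized eigendecomposition replaces a fresh linear solve for every value of the time parameter. The matrices and the toy absorption alpha(tau) = 1 + tau below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

# Stiffness K and mass M for -u'' on (0,1) with Dirichlet BCs (linear FEM)
n = 100
h = 1.0 / (n + 1)
K = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
M = h / 6.0 * (np.diag(np.full(n, 4.0)) + np.diag(np.ones(n - 1), 1)
               + np.diag(np.ones(n - 1), -1))
s = M @ np.ones(n)                      # source term

# ONE generalized eigendecomposition K v = lambda M v, with V^T M V = I
lam, V = eigh(K, M)
proj = V.T @ s

def solve_for(alpha):
    # each new parameter value costs a diagonal scaling, not a new solve
    return V @ (proj / (lam + alpha))

for tau in (0.0, 0.5, 1.0):
    phi = solve_for(1.0 + tau)          # toy absorption alpha(tau) = 1 + tau
    assert np.allclose(phi, np.linalg.solve(K + (1.0 + tau) * M, s))
```

Because the eigenbasis is independent of the parameter, sweeping many tau values (as required for the correlation function) reduces to cheap diagonal scalings, which is the computational gain the abstract describes.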
Abstract:
A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares QR (LSQR) decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstruction of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
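The regularization-parameter problem can be sketched with a plain SVD-based Tikhonov solver on a synthetic ill-conditioned system, with the parameter chosen by generalized cross-validation (GCV). The paper's contribution is computing such a parameter efficiently through the LSQR bidiagonalization for the photoacoustic forward model; this toy reproduces only the underlying problem, and all data below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.normal(size=(n, n)) @ np.diag(0.5 ** np.arange(n))  # ill-conditioned system
x_true = np.sin(np.linspace(0.0, 3.0, n))
b = A @ x_true + 0.01 * rng.normal(size=n)                  # noisy data

U, s, Vt = np.linalg.svd(A)
beta = U.T @ b

def tikhonov(reg):
    # filtered-SVD solution of min ||Ax - b||^2 + reg^2 ||x||^2
    return Vt.T @ (s / (s**2 + reg**2) * beta)

def gcv(reg):
    f = s**2 / (s**2 + reg**2)                              # Tikhonov filter factors
    return np.sum(((1.0 - f) * beta)**2) / (n - f.sum())**2

regs = np.logspace(-8, 2, 300)
reg_opt = regs[np.argmin([gcv(r) for r in regs])]
x_reg = tikhonov(reg_opt)

err_reg = np.linalg.norm(x_reg - x_true)
err_naive = np.linalg.norm(np.linalg.solve(A, b) - x_true)
print(reg_opt, err_reg, err_naive)      # regularized error is much smaller
```

The naive solve amplifies the noise along the small singular directions, while the automatically chosen parameter damps them, which is why an automated, efficient parameter search matters for image reconstruction.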
Abstract:
We present a computer simulation study of two-dimensional infrared spectroscopy (2D-IR) of water confined in reverse micelles (RMs) of various sizes. The study is motivated by the need to understand the altered dynamics of confined water by performing a layerwise decomposition of water, with the aim of quantifying the relative contributions of water molecules in different layers to the calculated 2D-IR spectrum. The 0-1 transition spectra clearly show substantial elongation along the diagonal in the surface water layer of different-sized RMs, due to inhomogeneous broadening and incomplete spectral diffusion. Fitting of the frequency fluctuation correlation functions reveals that the motion of the surface water molecules is sub-diffusive and indicates the constrained nature of their dynamics. This is further supported by the two-peak nature of the angular analogue of the van Hove correlation function. With increasing system size, the water molecules become more diffusive, and spectral diffusion is almost complete in the central layer of the larger RMs. Comparisons between experiments and simulations establish the correspondence between the spectral decomposition available in experiments and the spatial decomposition available in simulations. Simulations also allow a quantitative exploration of the relative roles of water, sodium ions, and sulfonate head groups in vibrational dephasing. Interestingly, the negative cross-correlation between the force on the oxygen and that on the hydrogen of the O-H bond in bulk water significantly decreases in the surface layer of each RM. This negative cross-correlation gradually increases in the central water pool with increasing RM size, and this is found to be partly responsible for the faster relaxation of water in the central pool. (C) 2013 AIP Publishing LLC.
Abstract:
A necessary step for the recognition of scanned documents is binarization, which is essentially the segmentation of the document. Several algorithms for binarizing a scanned document can be found in the literature. What is the best binarization result for a given document image? To answer this question, a user needs to check different binarization algorithms for suitability, since different algorithms may work better for different types of documents. Manually choosing the best from a set of binarized documents is time consuming. To automate the selection of the best segmented document, we either need the ground truth of the document or an evaluation metric. If ground truth is available, then precision and recall can be used to choose the best binarized document. What if ground truth is not available? Can we come up with a metric that evaluates these binarized documents? Hence, we propose a metric to evaluate binarized document images using eigenvalue decomposition. We have evaluated this measure on the DIBCO and H-DIBCO datasets. The proposed method chooses the best binarized document, i.e., the one closest to the ground truth of the document.
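The with-ground-truth case mentioned above can be sketched directly: pixel-level precision/recall (combined here into F1) is the standard selection criterion when ground truth exists. This is not the paper's eigenvalue-based metric for the no-ground-truth case, whose construction is not detailed in the abstract; the tiny "documents" below are toy data.

```python
import numpy as np

def precision_recall(binarized, ground_truth):
    """Pixel-level precision, recall, and F1 of a binarized image
    against ground truth (foreground pixels = 1/True)."""
    b = np.asarray(binarized, dtype=bool)
    g = np.asarray(ground_truth, dtype=bool)
    tp = np.logical_and(b, g).sum()
    precision = tp / b.sum() if b.sum() else 0.0
    recall = tp / g.sum() if g.sum() else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# toy example: pick the candidate binarization with the highest F1
gt = np.array([[1, 1, 0], [0, 1, 0]])
cand_a = np.array([[1, 1, 0], [0, 0, 0]])   # misses one foreground pixel
cand_b = np.array([[1, 1, 1], [1, 1, 1]])   # marks every pixel as foreground
scores = [precision_recall(c, gt) for c in (cand_a, cand_b)]
best = max(range(len(scores)), key=lambda i: scores[i][2])
print(best)  # 0 -> cand_a has the higher F1
```

Automating this selection without `gt` is exactly the gap the proposed eigenvalue-decomposition metric is meant to fill.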
Abstract:
Thermal decomposition of propargyl alcohol (C3H3OH), a molecule of interest in interstellar chemistry and combustion, was investigated using a single pulse shock tube over the temperature range 953–1262 K. The products identified include acetylene, propyne, vinylacetylene, propynal, propenal, and benzene. The experimentally observed overall rate constant for thermal decomposition of propargyl alcohol was found to be k = 10^(10.17 +/- 0.36) exp(-(39.70 +/- 1.83)/RT) s^-1. Ab initio theoretical calculations were carried out to understand the potential energy surfaces involved in the primary and secondary steps of propargyl alcohol thermal decomposition. Transition state theory was used to predict the rate constants, which were then used and refined in a kinetic simulation of the product profile. The first step in the decomposition is C-O bond dissociation, leading to the formation of two radicals important in combustion, OH and propargyl. This has been used to study the reverse OH + propargyl radical reaction, about which there appears to be no prior work. Depending on the site of attack, this reaction leads to propargyl alcohol or to propenal, one of the major products at temperatures below 1200 K. A detailed mechanism has been derived to explain all the observed products.
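The reported Arrhenius expression can be evaluated across the experimental temperature range. The abstract does not state the units of the activation energy; kcal/mol is assumed here, so the absolute values are illustrative.

```python
import math

# k = 10**10.17 * exp(-Ea / (R*T)) s^-1, central values from the abstract
A = 10 ** 10.17            # pre-exponential factor, s^-1
Ea = 39.70                 # activation energy, kcal/mol (assumed units)
R = 1.987204e-3            # gas constant, kcal/(mol*K)

def k(T):
    """Overall decomposition rate constant at temperature T (K)."""
    return A * math.exp(-Ea / (R * T))

for T in (953, 1100, 1262):
    print(f"T = {T} K: k = {k(T):.3e} s^-1")
```

Under the assumed units, k rises by roughly two orders of magnitude from the low to the high end of the shock-tube temperature range, consistent with the strong temperature sensitivity expected for a ~40 kcal/mol barrier.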