939 results for Legendre polynomial
Abstract:
This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as an algebraic classification of some basic objects in these models; it has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods for studying random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what a plausible relation between SLEs and conformal field theory would be. The first article studies multiple SLEs: several random curves considered simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, ``pure geometries''. We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The best known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas.
The fourth article states results of applying the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus supporting both reversibility and duality.
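For orientation, the SLE variants discussed in these abstracts build on the chordal Loewner equation; the following are the standard definitions from the SLE literature (not reproduced from the thesis itself):

```latex
\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z,
\qquad W_t = \sqrt{\kappa}\, B_t \quad \text{(chordal SLE}(\kappa)\text{)},
```

while SLE(kappa, rho) perturbs the driving function by marked boundary points $V_t^i$:

```latex
\mathrm{d}W_t = \sqrt{\kappa}\,\mathrm{d}B_t
  + \sum_i \frac{\rho_i\,\mathrm{d}t}{W_t - V_t^i},
\qquad
\mathrm{d}V_t^i = \frac{2\,\mathrm{d}t}{V_t^i - W_t}.
```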
Abstract:
The widespread deployment of commercial-scale cellulosic ethanol currently hinges on developing and evaluating scalable processes whilst broadening feedstock options. This study investigates whole Eucalyptus grandis trees as a potential feedstock and demonstrates dilute-acid pre-treatment (with steam explosion) followed by a pre-saccharification and simultaneous saccharification and fermentation (PSSF) process as a suitable, scalable strategy for the production of bioethanol. Biomass was pre-treated in dilute H2SO4 at laboratory scale (0.1 kg) and pilot scale (10 kg) to evaluate the effect of the combined severity factor (CSF) on pre-treatment effectiveness. Subsequently, pilot-scale pre-treated residues (15 wt.%) were converted to ethanol in a PSSF process at 2 L and 300 L scales. Good polynomial correlations (n = 2) of CSF with hemicellulose removal and glucan digestibility, with a minimum R2 of 0.91, were recorded. The laboratory-scale 72 h glucan digestibility and glucose yield were 68.0% and 51.3%, respectively, from biomass pre-treated at 190 °C/15 min/4.8 wt.% H2SO4. Pilot-scale pre-treatment (180 °C/15 min/2.4 wt.% H2SO4 followed by steam explosion) delivered higher glucan digestibility (71.8%) and glucose yield (63.6%). However, the ethanol yields using PSSF were calculated at 82.5 and 113 kg/ton of dry biomass for the pilot and laboratory scales, respectively. © 2016 Society of Chemical Industry and John Wiley & Sons, Ltd
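The combined severity factor used above is conventionally defined as the logarithm of the Overend-Chornet severity with a pH correction, CSF = log10[t * exp((T - 100)/14.75)] - pH. A small sketch (the pH value used below is illustrative, since the abstract does not state one):

```python
import math

def combined_severity_factor(t_min, temp_c, ph, temp_ref=100.0, omega=14.75):
    """CSF = log10(t * exp((T - Tref) / omega)) - pH, with t in minutes
    and temperatures in deg C. Higher CSF means a harsher pre-treatment."""
    return math.log10(t_min * math.exp((temp_c - temp_ref) / omega)) - ph

# Laboratory-scale conditions from the abstract (190 degC, 15 min);
# pH = 1.0 is an assumed value for illustration only.
csf = combined_severity_factor(15, 190, 1.0)
```

Comparing CSF values computed this way for the laboratory and pilot conditions is how the polynomial correlations with hemicellulose removal and digestibility would be built.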
Abstract:
The purpose of this article is to report the experience of designing and testing orifice-plate-based flow measuring systems for the evaluation of air leakages in components of air-conditioning systems. Two of the flow measuring stations were designed with beta values of 0.405 and 0.418. The third was a dual-path unit with orifice plates of beta values 0.613 and 0.525. The flow rates covered by all four orifices ranged from 4 to 94 L/s, with Reynolds numbers from 5600 to 76,000. The coefficients of discharge were evaluated and compared with the Stolz equation. Measured Cd values are generally higher than those obtained from the equation, the deviations being larger in the low-Reynolds-number region. Further, it is observed that a second-degree polynomial is inadequate to relate the pressure drop and flow rate. The lower Reynolds number limits set by standards appear to be somewhat conservative.
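For reference, the corner-tap form of the Stolz equation (as standardized in ISO 5167:1980, with the tap-location terms L1 and L2' set to zero) can be sketched as follows; the coefficients are quoted from memory of the standard, so they should be checked against the current text before use:

```python
def stolz_discharge_coefficient(beta, re_d):
    """Stolz equation for the orifice-plate discharge coefficient C,
    corner taps (L1 = L2' = 0); beta is the diameter ratio d/D and
    re_d the pipe Reynolds number."""
    return (0.5959
            + 0.0312 * beta**2.1
            - 0.1840 * beta**8
            + 0.0029 * beta**2.5 * (1e6 / re_d)**0.75)
```

The last term grows as Reynolds number falls, which is why the comparison in the abstract focuses on the low-Re region.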
Abstract:
The objective is to present the formulation of a numerically integrated modified virtual crack closure integral technique for concentrically and eccentrically stiffened panels, for the computation of strain-energy release rate and stress intensity factor based on linear elastic fracture mechanics principles. Fracture analysis of cracked stiffened panels under combined tensile, bending, and shear loads has been conducted by employing the stiffened plate/shell finite element model MQL9S2. This model can be used to analyze plates with arbitrarily located concentric/eccentric stiffeners without increasing the total number of degrees of freedom of the plate element. Parametric studies on the fracture analysis of stiffened plates under combined tensile and moment loads have been conducted. Based on the results of the parametric studies, polynomial curve fitting has been carried out to obtain best-fit equations corresponding to each of the stiffener positions. These equations can be used to compute the stress intensity factor for cracked stiffened plates subjected to tensile and moment loads for a given plate size, stiffener configuration, and stiffener position without conducting a finite element analysis.
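The best-fit equations mentioned above come from fitting polynomials to parametric-study results; a generic sketch with NumPy (the data points below are invented for illustration and do not come from the article):

```python
import numpy as np

# Hypothetical parametric-study output: normalized stress intensity factor
# versus crack-length-to-width ratio a/W for one stiffener position.
a_over_w = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
k_norm   = np.array([1.02, 1.06, 1.13, 1.24, 1.40])  # illustrative values only

coeffs = np.polyfit(a_over_w, k_norm, deg=2)  # least-squares quadratic fit
k_fit  = np.polyval(coeffs, 0.35)             # SIF estimate at a/W = 0.35
```

Once such coefficients are tabulated per stiffener position, the SIF follows from a single polynomial evaluation instead of a finite element run.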
Abstract:
The possibility of applying two approximate methods for determining the salient features of the response of undamped non-linear spring-mass systems subjected to a step input is examined. The results obtained on the basis of these approximate methods are compared with the exact results that are available for some particular types of spring characteristics. The extension of the approximate methods to non-linear systems with general polynomial restoring-force characteristics is indicated.
Abstract:
A detailed study is presented of the expected performance of the ATLAS detector. The reconstruction of tracks, leptons, photons, missing energy and jets is investigated, together with the performance of b-tagging and the trigger. The physics potential for a variety of interesting physics processes, within the Standard Model and beyond, is examined. The study comprises a series of notes based on simulations of the detector and physics processes, with particular emphasis given to the data expected from the first years of operation of the LHC at CERN.
Abstract:
This paper deals with the approximate solutions of non-linear autonomous systems by the application of ultraspherical polynomials. From the differential equations for amplitude and phase, set up by the method of variation of parameters, the approximate solutions are obtained by a generalized averaging technique based on the ultraspherical polynomial expansions. The method is illustrated with examples and the results are compared with the digital and analog computer solutions. There is a close agreement between the analytical and exact results.
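The generalized averaging described above reduces, for a particular ultraspherical polynomial, to the Krylov-Bogoliubov result. As a concrete check of that first-order result, the sketch below compares the averaged frequency 1 + 3*eps*a**2/8 of a Duffing oscillator x'' + x + eps*x**3 = 0 with direct numerical integration (the values eps = 0.1, a = 1 are illustrative):

```python
import math

def duffing_half_period(a=1.0, eps=0.1, dt=1e-4):
    """RK4 integration of x'' + x + eps*x**3 = 0 from (x, v) = (a, 0)
    until the velocity next returns to zero, i.e. for half a period."""
    def f(x, v):
        return v, -(x + eps * x**3)
    x, v, t = a, 0.0, 0.0
    while True:
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
        k3x, k3v = f(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
        k4x, k4v = f(x + dt*k3x, v + dt*k3v)
        xn = x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6
        vn = v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        t += dt
        if v < 0 and vn >= 0:   # velocity crosses zero: half period elapsed
            return t
        x, v = xn, vn

omega_kb  = 1 + 3 * 0.1 * 1.0**2 / 8        # first-order averaged frequency
omega_num = math.pi / duffing_half_period()  # frequency from integration
```

The two frequencies agree to a fraction of a percent at this amplitude, which is the kind of "close agreement" the abstract reports.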
Abstract:
The paper deals with a linearization technique in non-linear oscillations for systems which are governed by second-order non-linear ordinary differential equations. The method is based on approximation of the non-linear function by a linear function such that the error is least in the weighted mean square sense. The method has been applied to cubic, sine, hyperbolic sine, and odd polynomial types of non-linearities and the results obtained are more accurate than those given by existing linearization methods.
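The weighted mean-square linearization has a closed form: minimizing the integral of w(x)*(f(x) - k*x)**2 gives k as the ratio of the weighted integrals of x*f(x) and x**2. For f(x) = x**3 with uniform weight on [-A, A] this yields k = 3*A**2/5. A minimal numerical sketch (uniform weight is an illustrative choice, not necessarily the paper's):

```python
def equivalent_stiffness(f, A, w=lambda x: 1.0, n=50000):
    """Linear coefficient k minimizing the weighted mean-square error of
    k*x versus f(x) on [-A, A]: k = int(w*x*f) / int(w*x**2),
    evaluated here with the midpoint rule."""
    dx = 2.0 * A / n
    num = den = 0.0
    for i in range(n):
        x = -A + (i + 0.5) * dx
        num += w(x) * x * f(x) * dx
        den += w(x) * x * x * dx
    return num / den

k = equivalent_stiffness(lambda x: x**3, A=2.0)  # analytic value: 3*A**2/5 = 2.4
```

Changing the weight function w shifts where the fit is most accurate, which is the lever the paper uses to beat unweighted linearization.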
An approximate analysis of non-linear non-conservative systems subjected to step function excitation
Abstract:
This paper deals with the approximate analysis of the step response of non-linear non-conservative systems by the application of ultraspherical polynomials. From the differential equations for amplitude and phase, set up by the method of variation of parameters, the approximate solutions are obtained by a generalized averaging technique based on ultraspherical polynomial expansions. The Krylov-Bogoliubov results are given by a particular set of these polynomials. The method has been applied to study the step response of a cubic spring-mass system in the presence of viscous, material, quadratic, and mixed types of damping. The approximate results are compared with the digital and analogue computer solutions, and close agreement has been found between the analytical and the exact results.
Abstract:
The force constants of H2 and Li2 are evaluated employing their extended Hartree-Fock wavefunctions by a polynomial fit of their force curves. It is suggested that, based on incomplete multiconfiguration Hartree-Fock wavefunctions, force constants calculated from the energy derivatives are numerically more accurate than those obtained from the derivatives of the Hellmann-Feynman forces. It is observed that electrons relax during the nuclear vibrations in such a fashion as to facilitate the nuclear motions.
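The force-constant extraction described above can be sketched generically: fit a polynomial to the energy curve near the minimum and read off twice the quadratic coefficient. Below, a Morse potential stands in for the ab initio data (all parameters are illustrative, not those of H2 or Li2):

```python
import numpy as np

# Hypothetical Morse potential in place of an ab initio energy curve:
# V(r) = D*(1 - exp(-a*(r - re)))**2, analytic force constant k = 2*D*a**2.
D, a, re = 0.17, 1.0, 1.4   # illustrative parameters

r = np.linspace(re - 0.2, re + 0.2, 21)
V = D * (1 - np.exp(-a * (r - re)))**2

p = np.polyfit(r - re, V, deg=4)   # quartic fit of the energy curve
k_fit = 2 * p[-3]                  # V ~ ... + (k/2)*x**2, so k = 2*c2
```

The same fit applied to a force curve (dV/dr) would use the linear coefficient instead; the abstract's point is that the energy-derivative route is numerically the more reliable one for incomplete wavefunctions.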
Abstract:
The growth rates of the hydrodynamic modes in the homogeneous sheared state of a granular material are determined by solving the Boltzmann equation. The steady velocity distribution is considered to be the product of the Maxwell-Boltzmann distribution and a Hermite polynomial expansion in the velocity components; this form is inserted into the Boltzmann equation and solved to obtain the coefficients of the terms in the expansion. The solution is obtained using an expansion in the parameter epsilon = (1 - e)^(1/2), and terms correct to epsilon^4 are retained to obtain an approximate solution; the error due to the neglect of higher terms is estimated at about 5% for e = 0.7. A small perturbation is placed on the distribution function in the form of a Hermite polynomial expansion for the velocity variations and a Fourier expansion in the spatial coordinates; this is inserted into the Boltzmann equation and the growth rate of the Fourier modes is determined. It is found that in the hydrodynamic limit, the growth rates of the hydrodynamic modes in the flow direction have unusual characteristics. The growth rate of the momentum diffusion mode is positive, indicating that density variations are unstable in the limit k -> 0, and the growth rate increases proportional to |k|^(2/3) in the limit k -> 0 (in contrast to the k^2 increase in elastic systems), where k is the wave vector in the flow direction. The real and imaginary parts of the growth rates corresponding to the propagating modes also increase proportional to |k|^(2/3) (in contrast to the k^2 and k increases in elastic systems). The energy mode is damped due to inelastic collisions between particles. The scaling of the growth rates of the hydrodynamic modes with the wave vector in the gradient direction is similar to that in elastic systems. (C) 2000 Elsevier Science B.V. All rights reserved.
Abstract:
Tanner graph representation of linear block codes is widely used by iterative decoding algorithms for recovering data transmitted across a noisy communication channel from errors and erasures introduced by the channel. The stopping distance of a Tanner graph T for a binary linear block code C determines the number of erasures correctable using iterative decoding on the Tanner graph T when data is transmitted across a binary erasure channel using the code C. We show that the problem of finding the stopping distance of a Tanner graph is hard to approximate within any positive constant approximation ratio in polynomial time unless P = NP. It is also shown as a consequence that there can be no approximation algorithm for the problem achieving an approximation ratio of 2^((log n)^(1-epsilon)) for any epsilon > 0 unless NP is a subset of DTIME(n^(poly(log n))).
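A stopping set is a set S of variable nodes such that no check node has exactly one neighbour in S, and the stopping distance is the size of the smallest nonempty stopping set. The brute-force check below makes the definition concrete on a toy parity-check matrix; the exhaustive search is feasible only for tiny codes, consistent with the hardness result above:

```python
from itertools import combinations

def stopping_distance(H):
    """Size of the smallest nonempty stopping set of the Tanner graph of
    parity-check matrix H (rows = check nodes, columns = variable nodes):
    every check touching the set must touch it at least twice."""
    m, n = len(H), len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if all(sum(H[j][i] for i in S) != 1 for j in range(m)):
                return size
    return None

# Tanner graph of the length-3 repetition code; its stopping distance
# equals its minimum distance, 3.
H = [[1, 1, 0],
     [0, 1, 1]]
```

On the binary erasure channel, any erasure pattern smaller than this number is guaranteed recoverable by iterative decoding on the graph.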
Abstract:
The physical design of a VLSI circuit involves circuit partitioning as a subtask. Typically, it is necessary to partition a large electrical circuit into several smaller circuits such that the total cross-wiring is minimized. This problem is a variant of the more general graph partitioning problem, and no polynomial-time algorithm is known for obtaining an optimal partition. The heuristic procedure proposed by Kernighan and Lin [1,2] requires O(n^2 log n) time to obtain a near-optimal two-way partition of a circuit with n modules. In the VLSI context, due to the large problem sizes involved, this computational requirement is unacceptably high. This paper is concerned with the hardware acceleration of the Kernighan-Lin procedure on an SIMD architecture. The proposed parallel partitioning algorithm requires O(n) processors and has a time complexity of O(n log n). In the proposed scheme, the reduced array architecture is employed with due consideration towards cost effectiveness and VLSI realizability of the architecture. The authors are not aware of any earlier attempts to parallelize a circuit partitioning algorithm in general or the Kernighan-Lin algorithm in particular. The use of the reduced array architecture is novel and opens up the possibility of using this computing structure for several other applications in electronic design automation.
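Each Kernighan-Lin pass picks the swap pair (a, b) maximizing the gain g(a, b) = D_a + D_b - 2*c_ab, where D_v is a node's external minus internal connection cost; it is the repeated recomputation of these D-values that dominates the serial running time. A minimal sketch of the D-values and the best first swap on a hypothetical four-module circuit:

```python
def kl_gains(adj, part_a, part_b):
    """D-value of every node (external minus internal edge weight) and the
    Kernighan-Lin gain g(a, b) = D_a + D_b - 2*c_ab of the best first swap."""
    def d(v, own, other):
        return (sum(adj[v].get(u, 0) for u in other)
                - sum(adj[v].get(u, 0) for u in own if u != v))
    D = {v: d(v, part_a, part_b) for v in part_a}
    D.update({v: d(v, part_b, part_a) for v in part_b})
    best = max(((D[a] + D[b] - 2 * adj[a].get(b, 0), a, b)
                for a in part_a for b in part_b), key=lambda t: t[0])
    return D, best

# Hypothetical 4-module circuit (a 4-cycle) as an undirected adjacency map
adj = {0: {1: 1, 2: 1}, 1: {0: 1, 3: 1}, 2: {0: 1, 3: 1}, 3: {1: 1, 2: 1}}
D, (gain, a, b) = kl_gains(adj, {0, 1}, {2, 3})
```

The SIMD acceleration in the paper amounts to evaluating all these D-values and pairwise gains in parallel rather than one at a time.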
Abstract:
The max-coloring problem is to compute a legal coloring of the vertices of a graph G = (V, E) with a non-negative weight function w on V such that sum_{i=1..k} max_{v in C_i} w(v) is minimized, where C_1, ..., C_k are the color classes. Max-coloring general graphs is as hard as the classical vertex coloring problem, a special case in which vertices have unit weight. In fact, in some cases it can even be harder: for example, no polynomial-time algorithm is known for max-coloring trees. In this paper we consider the problem of max-coloring paths and its generalization, max-coloring a broad class of trees, and show it can be solved in time O(|V| + time for sorting the vertex weights). When vertex weights belong to R, we show a matching lower bound of Omega(|V| log |V|) in the algebraic computation tree model.
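A feature of max-coloring worth seeing concretely is that the optimum may use more colors than the chromatic number: on the toy path below, two colors cost 6 but three classes {v0, v3}, {v1}, {v2} cost 3 + 1 + 1 = 5. The exhaustive search is for illustration only and bears no relation to the paper's near-linear-time algorithm:

```python
from itertools import product

def max_coloring_cost(edges, weights):
    """Minimum over all proper colorings of the sum, per color class, of the
    maximum vertex weight in that class (brute force; tiny instances only)."""
    n = len(weights)
    best = float("inf")
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if any(col[u] == col[v] for u, v in edges):
                continue
            cost = sum(max(weights[v] for v in range(n) if col[v] == c)
                       for c in set(col))
            best = min(best, cost)
    return best

# Path v0-v1-v2-v3 with weights (3, 1, 1, 3)
edges = [(0, 1), (1, 2), (2, 3)]
best = max_coloring_cost(edges, (3, 1, 1, 3))
```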
Abstract:
The problem of denoising damage-indicator signals for improved operational health monitoring of systems is addressed by applying soft computing methods to design filters. Since measured data in operational settings are contaminated with noise and outliers, pattern recognition algorithms for fault detection and isolation can give false alarms. A direct approach to improving fault detection and isolation is to remove noise and outliers from time series of measured data or damage indicators before performing fault detection and isolation. Many popular signal-processing approaches do not work well with damage-indicator signals, which can contain sudden changes due to abrupt faults as well as non-Gaussian outliers. Signal-processing algorithms based on a radial basis function (RBF) neural network and weighted recursive median (WRM) filters are explored for denoising simulated time series. The RBF neural network filter is developed using a K-means clustering algorithm and is much less computationally expensive to develop than feedforward neural networks trained using backpropagation. The nonlinear multimodal integer-programming problem of selecting optimal integer weights of the WRM filter is solved using a genetic algorithm. Numerical results are obtained for helicopter rotor structural damage indicators based on simulated frequencies. The test signals consider low-order polynomial growth of the damage indicators with time, to simulate gradual or incipient faults, and step changes in the signal, to simulate abrupt faults. Noise and outliers are added to the test signals. The WRM and RBF filters result in noise reductions of 54-71% and 59-73%, respectively, for the test signals considered in this study. Their performance is much better than that of the moving-average FIR filter, which causes significant feature distortion and has poor outlier-removal capabilities; this shows the potential of soft computing methods for specific signal-processing applications.
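The key property claimed for the WRM filter is visible even in a minimal version: a recursive median rejects isolated outliers (abrupt-fault-like spikes) while passing genuine step changes unsmeared, which a moving average cannot do. The window and integer weights below are illustrative, not those selected by the genetic algorithm in the study:

```python
from statistics import median

def weighted_recursive_median(x, weights=(2, 1, 2)):
    """Weighted recursive median filter over the window
    (y[n-1], x[n], x[n+1]); integer weights act as repetition counts in the
    median, and the last sample is replicated at the right edge."""
    wa, wb, wc = weights
    y = [x[0]]
    for n in range(1, len(x)):
        nxt = x[n + 1] if n + 1 < len(x) else x[-1]
        y.append(median([y[-1]] * wa + [x[n]] * wb + [nxt] * wc))
    return y

# An isolated outlier (index 3) versus a genuine step change (index 6)
x = [0, 0, 0, 10, 0, 0, 5, 5, 5, 5]
y = weighted_recursive_median(x)
```

The spike is removed while the step survives at full amplitude; tuning the integer weights (the study's genetic-algorithm step) trades off exactly this outlier rejection against responsiveness.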