93 results for: Hilbert schemes of points, Poincaré polynomial, Betti numbers, Göttsche formula
Abstract:
A new framework is proposed in this work to solve multidimensional population balance equations (PBEs) using the method of discretization. A continuous PBE is considered as a statement of evolution of one evolving property of particles and conservation of their n internal attributes. Discretization must therefore preserve n + 1 properties of particles. The continuously distributed population is represented on discrete fixed pivots, as in the fixed pivot technique of Kumar and Ramkrishna [1996a. On the solution of population balance equations by discretization - I. A fixed pivot technique. Chemical Engineering Science 51(8), 1311-1332] for 1-d PBEs, but instead of the earlier extensions of this technique proposed in the literature, which preserve 2^n properties of non-pivot particles, the new framework requires only n + 1 properties to be preserved. This opens up the use of triangular and tetrahedral elements to solve 2-d and 3-d PBEs, instead of the rectangles and cuboids suggested in the literature. Capabilities of computational fluid dynamics and other packages available for generating complex meshes can also be harnessed. The numerical results obtained indeed show the effectiveness of the new framework. It also brings out the hitherto unknown role of the directionality of the grid in controlling the accuracy of the numerical solution of multidimensional PBEs. The numerical results show that the quality of the numerical solution can be improved significantly just by altering the directionality of the grid, which does not require any increase in the number of points, any refinement of the grid, or even redistribution of pivots in space. The directionality of a grid can be altered simply by regrouping pivots.
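As a minimal illustration of the n + 1 = 3 conservation constraints on a triangular element (not the authors' code; pivot locations and particle attributes below are made up), the following sketch distributes a newly formed 2-d particle over the three pivots of the triangle that contains it so that particle number and both internal attributes are preserved; the fractions are simply the barycentric coordinates of the particle with respect to the pivot vertices.

```python
# Sketch only: n + 1 property-preserving assignment on a triangular element.
import numpy as np

def triangle_fractions(pivots, particle):
    """pivots: (3, 2) array of pivot coordinates (triangle vertices);
    particle: (2,) attribute vector of the newly formed particle.
    Returns fractions a_k with sum(a_k) = 1 (number conservation) and
    sum(a_k * pivots[k]) = particle (conservation of both attributes)."""
    A = np.vstack([pivots.T, np.ones(3)])   # rows: attribute 1, attribute 2, number
    b = np.append(particle, 1.0)
    return np.linalg.solve(A, b)

# Hypothetical example: particle with attributes (2.5, 1.2) born inside
# the triangle spanned by three fixed pivots.
pivots = np.array([[2.0, 1.0], [3.0, 1.0], [2.0, 2.0]])
a = triangle_fractions(pivots, np.array([2.5, 1.2]))
print(a, a.sum(), a @ pivots)   # fractions, total number (= 1), reconstructed attributes
```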
Abstract:
Skew correction of complex document images is a difficult task. We propose an edge-based connected component approach for robust skew correction of documents with complex layout and content. The algorithm essentially consists of two steps: an 'initialization' step to determine the image orientation from the centroids of the connected components, and a 'search' step to find the actual skew of the image. During initialization, we choose two different sets of points regularly spaced across the image, one from left to right and the other from top to bottom. The image orientation is determined from the slope between the two successive nearest neighbors of each of the points in the chosen set. The search step finds successive nearest neighbors that satisfy the parameters obtained in the initialization step. The final skew is determined from the slopes obtained in the 'search' step. Unlike other connected component based methods, the proposed method does not require the binarization step that generally precedes connected component analysis. The method works well for scanned documents with complex layouts at any skew, with a precision of 0.5 degrees.
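For illustration only, the sketch below captures the slope-based idea described above by estimating skew as the median angle between component centroids and their nearest right-hand neighbours. It is not the published algorithm; the neighbour-search radius and the synthetic centroids are assumptions.

```python
# Toy skew estimate from centroid nearest-neighbour slopes.
import numpy as np

def estimate_skew(centroids, max_gap=80.0):
    """centroids: (N, 2) array of (x, y) component centroids.
    Returns a skew estimate in degrees from the median neighbour slope."""
    angles = []
    for x, y in centroids:
        dx = centroids[:, 0] - x
        dy = centroids[:, 1] - y
        dist = np.hypot(dx, dy)
        mask = (dx > 0) & (dist < max_gap)        # candidate neighbours to the right
        if not mask.any():
            continue
        j = np.argmin(np.where(mask, dist, np.inf))
        angles.append(np.degrees(np.arctan2(dy[j], dx[j])))
    return float(np.median(angles)) if angles else 0.0

# Synthetic test: centroids on regular text lines rotated by 2 degrees.
base = np.stack([np.tile(np.arange(0, 500, 25), 5),
                 np.repeat(np.arange(0, 250, 50), 20)], axis=1).astype(float)
theta = np.radians(2.0)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
print(estimate_skew(base @ R.T))   # close to 2.0
```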
Abstract:
The objective is to present the formulation of the numerically integrated modified virtual crack closure integral (MVCCI) technique for concentrically and eccentrically stiffened panels, for computation of the strain-energy release rate and stress intensity factor based on linear elastic fracture mechanics principles. Fracture analysis of cracked stiffened panels under combined tensile, bending, and shear loads has been conducted by employing the stiffened plate/shell finite element model MQL9S2. This model can be used to analyze plates with arbitrarily located concentric or eccentric stiffeners without increasing the total number of degrees of freedom of the plate element. Parametric studies on fracture analysis of stiffened plates under combined tensile and moment loads have been conducted. Based on the results of the parametric studies, polynomial curve fitting has been carried out to obtain best-fit equations corresponding to each of the stiffener positions. These equations can be used for computation of the stress intensity factor for cracked stiffened plates subjected to tensile and moment loads for a given plate size, stiffener configuration, and stiffener position without conducting a finite element analysis.
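The following is a minimal sketch of the basic MVCCI arithmetic only, not the MQL9S2 stiffened element or the paper's fitted equations: the mode-I strain-energy release rate from crack-tip nodal results and a plane-stress conversion to a stress intensity factor. All numerical values are made up for illustration.

```python
# Sketch of the basic MVCCI estimate under plane-stress LEFM assumptions.
def mvcci_mode_I(F_tip, delta_v, delta_a, thickness):
    """F_tip: nodal force normal to the crack plane at the tip [N];
    delta_v: crack-opening displacement of the node pair behind the tip [m];
    delta_a: crack-tip element length [m]; thickness [m].
    Returns G_I in J/m^2."""
    return F_tip * delta_v / (2.0 * delta_a * thickness)

def sif_from_serr(G_I, E):
    """Plane-stress conversion K_I = sqrt(E * G_I)."""
    return (E * G_I) ** 0.5

# Hypothetical aluminium panel numbers (E = 70 GPa).
G = mvcci_mode_I(F_tip=1.2e3, delta_v=4.0e-5, delta_a=2.0e-3, thickness=3.0e-3)
print(G, sif_from_serr(G, E=70e9))   # G ~ 4000 J/m^2, K ~ 17 MPa*sqrt(m)
```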
Abstract:
Approximate solutions of the B-G-K model equation are obtained for the structure of a plane shock, using various moment methods and a least-squares technique. Comparison with the available exact solution shows that while none of the methods is uniformly satisfactory, some of them can provide accurate values for the density-slope shock thickness delta_n. A detailed error analysis provides explanations for this result. An asymptotic analysis of delta_n for large Mach numbers shows that it scales with the Maxwell mean free path on the hot side of the shock, and that their ratio is relatively insensitive to the viscosity law for the gas.
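To fix the definition used above, the density-slope thickness is delta_n = (rho2 - rho1) / max|d rho/dx|. The snippet below evaluates it on an assumed tanh-shaped density profile, purely as an illustration of the definition and not as a solution of the B-G-K equation.

```python
# Density-slope shock thickness on an assumed tanh profile (illustration only).
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
rho1, rho2, L = 1.0, 3.0, 1.5      # upstream/downstream densities, profile scale (assumed)
rho = rho1 + 0.5 * (rho2 - rho1) * (1.0 + np.tanh(x / L))

drho_dx = np.gradient(rho, x)
delta_n = (rho2 - rho1) / np.abs(drho_dx).max()
print(delta_n)                     # equals 2*L for a tanh profile, here 3.0
```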
Abstract:
The growth rates of the hydrodynamic modes in the homogeneous sheared state of a granular material are determined by solving the Boltzmann equation. The steady velocity distribution is considered to be the product of the Maxwell-Boltzmann distribution and a Hermite polynomial expansion in the velocity components; this form is inserted into the Boltzmann equation and solved to obtain the coefficients of the terms in the expansion. The solution is obtained using an expansion in the parameter epsilon = (1 - e)^(1/2), and terms correct to epsilon^4 are retained to obtain an approximate solution; the error due to the neglect of higher terms is estimated at about 5% for e = 0.7. A small perturbation is placed on the distribution function in the form of a Hermite polynomial expansion for the velocity variations and a Fourier expansion in the spatial coordinates; this is inserted into the Boltzmann equation and the growth rate of the Fourier modes is determined. It is found that in the hydrodynamic limit, the growth rates of the hydrodynamic modes in the flow direction have unusual characteristics. The growth rate of the momentum diffusion mode is positive, indicating that density variations are unstable in the limit k -> 0, and the growth rate increases proportional to |k|^(2/3) in the limit k -> 0 (in contrast to the k^2 increase in elastic systems), where k is the wave vector in the flow direction. The real and imaginary parts of the growth rate corresponding to the propagating modes also increase proportional to |k|^(2/3) (in contrast to the k^2 and k increase in elastic systems). The energy mode is damped due to inelastic collisions between particles. The scaling of the growth rates of the hydrodynamic modes with the wave vector in the gradient direction is similar to that in elastic systems.
Abstract:
Methodologies are presented for minimization of risk in a river water quality management problem. A risk minimization model is developed to minimize the risk of low water quality along a river in the face of conflict among various stakeholders. The model consists of three parts: a water quality simulation model, a risk evaluation model with uncertainty analysis, and an optimization model. Sensitivity analysis, First Order Reliability Analysis (FORA) and Monte Carlo simulations are performed to evaluate the fuzzy risk of low water quality. Fuzzy multiobjective programming is used to formulate the multiobjective model. Probabilistic Global Search Lausanne (PGSL), a recently developed global search algorithm, is used for solving the resulting non-linear optimization problem. The algorithm is based on the assumption that better sets of points are more likely to be found in the neighborhood of good sets of points, and it therefore intensifies the search in the regions that contain good solutions. Another model is developed for risk minimization, which deals only with the moments of the generated probability density functions of the water quality indicators. Suitable skewness values of the water quality indicators, which lead to low fuzzy risk, are identified. Results of the models are compared with the results of a deterministic fuzzy waste load allocation model (FWLAM) when the methodologies are applied to the case study of the Tunga-Bhadra river system in southern India, with a steady-state BOD-DO model. The fractional removal levels resulting from the risk minimization model are slightly higher, but result in a significant reduction in the risk of low water quality.
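A toy Monte Carlo sketch of the fuzzy-risk evaluation step is given below. The dissolved-oxygen (DO) distribution and the membership function for "low water quality" are assumptions for illustration; this is not the paper's FWLAM, PGSL, or simulation-optimization models.

```python
# Toy fuzzy risk of low water quality: expected membership over Monte Carlo DO samples.
import numpy as np

def membership_low_do(do, lo=4.0, hi=6.0):
    """Assumed membership: 1 when DO <= lo mg/L, 0 when DO >= hi mg/L, linear between."""
    return np.clip((hi - do) / (hi - lo), 0.0, 1.0)

rng = np.random.default_rng(42)
do_samples = rng.normal(loc=5.5, scale=0.8, size=100_000)   # assumed DO distribution (mg/L)

fuzzy_risk = membership_low_do(do_samples).mean()
print(round(fuzzy_risk, 3))
```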
Abstract:
In this note we demonstrate the use of top polarization in the study of t t-bar resonances at the LHC, in the possible case where the dynamics implies a non-zero top polarization. As a probe of top polarization we construct an asymmetry in the decay-lepton azimuthal angle distribution (corresponding to the sign of cos φ_l) in the laboratory. The asymmetry is non-vanishing even for a symmetric collider like the LHC, where a positive z axis is not uniquely defined. The angular distribution of the leptons has the advantage of being a faithful top-spin analyzer, unaffected by possible anomalous tbW couplings to linear order. We study, for purposes of demonstration, the case of a Z' as might exist in the little Higgs models. We identify kinematic cuts which ensure that our asymmetry reflects the polarization in sign and magnitude. We investigate possibilities at the LHC with two energy options, √s = 14 TeV and √s = 7 TeV, as well as at the Tevatron. At the LHC the model predicts a net top quark polarization of the order of a few per cent for M_Z' ≃ 1200 GeV, and as high as 10% for a smaller Z' mass of 700 GeV and for the largest allowed coupling in the model, the values being higher for the 7 TeV option. These polarizations translate to a deviation from the standard-model value of the azimuthal asymmetry of up to about 4% (7%) for the 14 (7) TeV LHC, whereas for the Tevatron, values as high as 12% are attained. For the 14 TeV LHC with an integrated luminosity of 10 fb^(-1), these numbers translate into a 3σ sensitivity over a large part of the range 500 GeV ≲ M_Z' ≲ 1500 GeV.
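The asymmetry used above is of the form A_phi = [N(cos φ_l > 0) - N(cos φ_l < 0)] / N_total. The toy snippet below merely evaluates such an asymmetry on an assumed, purely illustrative lepton φ distribution; no event generator, cuts, or model input are implied.

```python
# Toy evaluation of a lab-frame azimuthal asymmetry from weighted phi samples.
import numpy as np

rng = np.random.default_rng(1)
phi = rng.uniform(-np.pi, np.pi, size=1_000_000)
weights = 1.0 + 0.05 * np.cos(phi)        # assumed mild cos(phi) modulation (illustrative)

cos_phi = np.cos(phi)
asym = (weights[cos_phi > 0].sum() - weights[cos_phi < 0].sum()) / weights.sum()
print(asym)   # about 0.1/pi ~ 0.03 for this toy distribution
```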
Abstract:
Electron energy loss spectra (EELS) of Cr, Mo and W hexacarbonyls in the vapour phase are reported. Most of the bands observed are similar to those in optical spectra, but the two high-energy transitions in the 9.8-11.2 eV region are reported here for the first time. Based on the orbital energies from the ultraviolet photoelectron spectra and the electronic transition energies from EELS and earlier optical studies, the molecular energy level schemes of these molecules are constructed.
Abstract:
We have developed a theory for an electrochemical way of measuring the statistical properties of a non-fractally rough electrode. We obtained the expression for the current transient on a rough electrode, which shows three time regions: the short- and long-time limits and the transition region between them. The expressions for these time ranges are exploited to extract morphological information about the surface roughness. In the short- and long-time regimes, we extract information regarding various morphological features such as the roughness factor, average roughness, curvature, correlation length, dimensionality of roughness, and a polynomial approximation for the correlation function. Formulas for the surface structure factors (the measure of surface roughness) of rough surfaces in terms of the measured reversible and diffusion-limited current transients are also obtained. Finally, we explore the feasibility of making such measurements.
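As a rough numerical illustration only, and not the paper's formalism: assuming the familiar short-time limit in which the diffusion layer is thin compared with the roughness scale, the measured diffusion-limited transient is approximately the roughness factor R* (true area / geometric area) times the planar Cottrell current, so R* can be read off the ratio of the two transients. All example numbers are made up.

```python
# Roughness factor from the short-time ratio of measured to planar Cottrell current.
import numpy as np

F = 96485.0   # Faraday constant, C/mol

def cottrell(t, n, A_geo, c_bulk, D):
    """Planar Cottrell current I(t) = n F A c sqrt(D / (pi t))."""
    return n * F * A_geo * c_bulk * np.sqrt(D / (np.pi * t))

# Assumed numbers: 1-electron couple, 1 cm^2 geometric area, 1 mM bulk, D = 1e-5 cm^2/s.
t = np.linspace(1e-4, 1e-2, 50)                               # short-time window (s)
I_planar = cottrell(t, n=1, A_geo=1.0, c_bulk=1e-6, D=1e-5)   # units: A
I_measured = 1.8 * I_planar                                   # synthetic "rough" data

R_star = np.mean(I_measured / I_planar)
print(R_star)   # recovers the roughness factor used to build the synthetic data (1.8)
```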
Abstract:
We present a complete solution to the problem of coherent-mode decomposition of the most general anisotropic Gaussian Schell-model (AGSM) beams, which constitute a ten-parameter family. Our approach is based on symmetry considerations. Concepts and techniques familiar from the context of quantum mechanics in the two-dimensional plane are used to exploit the Sp(4, R) dynamical symmetry underlying the AGSM problem. We take advantage of the fact that the symplectic group of first-order optical systems acts unitarily, through the metaplectic operators, on the Hilbert space of wave amplitudes over the transverse plane, and, using the Iwasawa decomposition for the metaplectic operator and the classic theorem of Williamson on the normal forms of positive definite symmetric matrices under linear canonical transformations, we demonstrate the unitary equivalence of the AGSM problem to a separable problem earlier studied by Li and Wolf [Opt. Lett. 7, 256 (1982)] and Gori and Guattari [Opt. Commun. 48, 7 (1983)]. This connection enables one to write down, almost by inspection, the coherent-mode decomposition of the general AGSM beam. A universal feature of the eigenvalue spectrum of the AGSM family is noted.
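A small numerical illustration of the Williamson ingredient mentioned above (the matrix below is a toy example, not an AGSM parametrization): the symplectic (Williamson) eigenvalues of a positive definite 4x4 matrix, which fix the normal form it reaches under a linear canonical Sp(4, R) transformation, can be computed as the moduli of the eigenvalues of iΩM.

```python
# Williamson (symplectic) eigenvalues of a positive definite 2n x 2n matrix.
import numpy as np

def symplectic_eigenvalues(M):
    """M: real symmetric positive definite 2n x 2n matrix (ordering q1..qn, p1..pn).
    Returns the n Williamson eigenvalues kappa_j > 0."""
    n = M.shape[0] // 2
    Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-np.eye(n), np.zeros((n, n))]])
    ev = np.linalg.eigvals(1j * Omega @ M)   # real, occurring in +/- kappa pairs
    kappas = np.sort(np.abs(ev.real))
    return kappas[::2]                        # each kappa appears twice

# Toy positive definite 4x4 matrix.
A = np.array([[2.0, 0.3, 0.1, 0.0],
              [0.3, 1.5, 0.0, 0.2],
              [0.1, 0.0, 1.0, 0.1],
              [0.0, 0.2, 0.1, 0.8]])
print(symplectic_eigenvalues(A))
```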
Abstract:
Real gas effects dominate the hypersonic flow fields encountered by modern-day hypersonic space vehicles. Measurement of aerodynamic data for the design of such aerospace vehicles calls for special kinds of wind tunnels capable of faithfully simulating real gas effects. A shock tunnel is an established facility commonly used, along with special instrumentation, for acquiring such data within a short time period. The hypersonic shock tunnel (HST1), established at the Indian Institute of Science (IISc) in the early 1970s, has been extensively used to measure the aerodynamic data of various bodies of interest at hypersonic Mach numbers in the range 4 to 13. Details of some important measurements made during the period 1975-1995, along with the performance capabilities of the HST1, are presented in this review. In view of the re-emergence of interest in hypersonics across the globe in recent times, the present review highlights the suitability of the hypersonic shock tunnel at IISc for future space application studies in India.
Abstract:
A method based on the minimal spanning tree is extended to a collection of points in three dimensions. Two parameters, the average edge length and its standard deviation, characterize the disorder. The structural phase diagram for a monatomic system of particles and the characteristic values for a uniform random distribution of points have been obtained. The method is applied to hard-sphere and Lennard-Jones systems. These systems occupy distinct regions in the structural phase diagram. The structure of the Lennard-Jones system approaches that of defective close-packed arrangements at low temperatures, whereas in the liquid regime it deviates from the close-packed configuration.
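A minimal sketch of the two disorder parameters named above, using SciPy's minimum-spanning-tree routine on the full Euclidean distance graph of a 3-d point set. Methods of this kind typically normalize the edge lengths by the number density before comparing systems; that normalization is omitted here for brevity, and the uniform random point set is only a stand-in reference system.

```python
# Mean MST edge length and its standard deviation for a 3-d point set.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_disorder(points):
    """points: (N, 3) array. Returns (mean edge length, std of edge lengths)."""
    dist = squareform(pdist(points))      # dense N x N distance matrix
    mst = minimum_spanning_tree(dist)     # sparse result holding the N-1 MST edges
    edges = mst.data
    return edges.mean(), edges.std()

rng = np.random.default_rng(3)
uniform_points = rng.random((500, 3))     # uniform random reference configuration
print(mst_disorder(uniform_points))
```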
Abstract:
Real-time simulation of deformable solids is essential for applications such as biological organ simulation in surgical simulators. In this work, deformable solids are approximated as linear elastic, and an easy and straightforward numerical technique, the Finite Point Method (FPM), is used to model three-dimensional linear elastostatics. A Graphics Processing Unit (GPU) is used to accelerate the computations. Results show that the Finite Point Method, together with the GPU, can compute three-dimensional linear elastostatic responses of solids at rates suitable for real-time graphics, for solids represented by a reasonable number of points.
Abstract:
The capacity region of a two-user Gaussian Multiple Access Channel (GMAC) with complex finite input alphabets and a continuous output alphabet is studied. When both users are equipped with the same code alphabet, it is shown that rotating one user's alphabet by an appropriate angle not only makes the new pair of alphabets uniquely decodable but also enlarges the capacity region. For this set-up, we identify the primary problem to be finding appropriate angle(s) of rotation between the alphabets such that the capacity region is maximally enlarged. It is shown that the angle of rotation which provides maximum enlargement of the capacity region also minimizes the union bound on the probability of error of the sum alphabet, and vice versa. The optimum angle(s) of rotation vary with the SNR. Through simulations, the optimal angle(s) of rotation that give maximum enlargement of the capacity region of the GMAC with some well known alphabets, such as M-QAM and M-PSK for some M, are presented for several values of SNR. It is shown that for a large number of points in the alphabets, the capacity gains due to rotation progressively reduce. As the number of points N tends to infinity, our results match the results in the literature, wherein the capacity region of the Gaussian code alphabet does not change with rotation for any SNR.
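A toy sketch of the rotation idea follows: both users use QPSK, user 2's alphabet is rotated by theta, and the minimum distance of the resulting sum alphabet is used as a crude stand-in for the union bound on sum-alphabet error probability. The paper's analysis works with the actual capacity region and union bound as functions of SNR; none of that is reproduced here.

```python
# Rotate user 2's QPSK alphabet and score the 16-point sum alphabet by minimum distance.
import numpy as np

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))   # unit-energy QPSK

def sum_alphabet_min_distance(theta):
    rotated = qpsk * np.exp(1j * theta)
    s = (qpsk[:, None] + rotated[None, :]).ravel()           # sum alphabet
    d = np.abs(s[:, None] - s[None, :])
    iu = np.triu_indices(len(s), k=1)                        # distinct pairs only
    return d[iu].min()

thetas = np.linspace(0.0, np.pi / 2, 181)
dmin = np.array([sum_alphabet_min_distance(t) for t in thetas])
best = thetas[dmin.argmax()]
# theta = 0 collapses sum-alphabet points (dmin = 0, i.e. not uniquely decodable);
# the argmax is the best rotation under this simple min-distance proxy.
print(np.degrees(best), dmin.max())
```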
Abstract:
Context-sensitive points-to analysis is critical for several program optimizations. However, as the number of contexts grows exponentially, the storage requirements of the analysis increase tremendously for large programs, making the analysis non-scalable. We propose a scalable flow-insensitive, context-sensitive, inclusion-based points-to analysis that uses a specially designed multi-dimensional Bloom filter to store the points-to information. Two key observations motivate our proposal: (i) points-to information (between pointer and object and between pointer and pointer) is sparse, and (ii) moving from an exact to an approximate representation of points-to information only leads to reduced precision without affecting the correctness of the (may-points-to) analysis. By using an approximate representation, a multi-dimensional Bloom filter can significantly reduce the memory requirements with a probabilistic bound on the loss in precision. Experimental evaluation on SPEC 2000 benchmarks and two large open source programs reveals that, with an average storage requirement of 4 MB, our approach achieves almost the same precision (98.6%) as the exact implementation. By increasing the average memory to 27 MB, it achieves precision up to 99.7% for these benchmarks. Using Mod/Ref analysis as the client, we find that the client analysis is not affected that often even when there is some loss of precision in the points-to representation. We find that the NoModRef percentage is within 2% of the exact analysis while requiring 4 MB (maximum 15 MB) of memory and less than 4 minutes on average for the points-to analysis. Another major advantage of our technique is that it allows precision to be traded off against the memory usage of the analysis.
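The core approximation can be conveyed with a plain single-vector Bloom filter that stores (context, pointer, object) facts: membership queries may return false positives (reduced precision) but never false negatives, so a may-points-to answer stays sound. This sketch is not the paper's multi-dimensional design, and the context and object names are hypothetical.

```python
# Minimal Bloom filter over (context, pointer, object) points-to facts.
import hashlib

class PointsToBloom:
    def __init__(self, n_bits=1 << 20, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, key):
        # Derive n_hashes bit positions from independent keyed hashes.
        for i in range(self.n_hashes):
            h = hashlib.blake2b(f"{i}:{key}".encode(), digest_size=8).digest()
            yield int.from_bytes(h, "little") % self.n_bits

    def add(self, context, pointer, obj):
        for p in self._positions((context, pointer, obj)):
            self.bits[p // 8] |= 1 << (p % 8)

    def may_point_to(self, context, pointer, obj):
        # True facts always answer True; unrelated queries may rarely collide.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions((context, pointer, obj)))

pts = PointsToBloom()
pts.add("main->foo", "p", "heap_alloc_12")
print(pts.may_point_to("main->foo", "p", "heap_alloc_12"))   # True
print(pts.may_point_to("main->bar", "p", "heap_alloc_12"))   # almost surely False
```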