77 results for Vector spaces -- Problems, exercises, etc.
Abstract:
A 4-degree-of-freedom single-input system and a 3-degree-of-freedom multi-input system are solved by the Coates', modified Coates' and Chan-Mai flowgraph methods. It is concluded that the Chan-Mai flowgraph method is superior to other flowgraph methods in such cases.
Abstract:
A numerical procedure, based on parametric differentiation and an implicit finite-difference scheme, has been developed for a class of problems in boundary-layer theory for saddle-point regions. Here, the results are presented for the case of a three-dimensional stagnation-point flow with massive blowing. The method compares very well with other methods for particular cases (zero or small mass blowing). The results emphasize that the present numerical procedure is well suited to the solution of saddle-point flows with massive blowing, which could not be solved by other methods.
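To make the parametric-differentiation idea concrete, here is a minimal sketch on a toy scalar problem, not the boundary-layer equations of the paper: the governing equation F(u, a) = 0 is differentiated with respect to the parameter a (which plays the role the blowing rate plays in the paper), and the resulting ODE in a is marched from a known solution at a = 0. The toy model and step size are illustrative assumptions.

```python
# Parametric-differentiation sketch on a toy problem: follow the solution
# u(a) of F(u, a) = u**3 + u - a = 0 as the parameter a grows.
# Differentiating F = 0 with respect to a gives
#   (3*u**2 + 1) * du/da - 1 = 0  =>  du/da = 1 / (3*u**2 + 1),
# which is marched in a from the known solution u(0) = 0. In the
# boundary-layer application this scalar step is replaced by an implicit
# finite-difference sweep at each parameter level.

def march_in_parameter(a_end, da=1e-3):
    u, a = 0.0, 0.0                      # known solution at a = 0
    while a < a_end:
        u += da / (3 * u**2 + 1)         # explicit Euler step of du/da
        a += da
    return u

u = march_in_parameter(2.0)
print(u, u**3 + u)                       # residual check: u**3 + u should be ~2.0
```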
Abstract:
Testing for mutagenicity and carcinogenicity has become an integral part of the toxicological evaluation of drugs and chemicals. Standard carcinogenicity tests in vivo require both large numbers of animals and prolonged experiments. To circumvent these problems, several rapid tests have been developed for preliminary screening of mutagens and carcinogens in vitro. Ames and his associates, the first to develop a mutation test, used mutant strains of Salmonella typhimurium [1]. Mutation tests with Escherichia coli, Bacillus subtilis, Neurospora crassa and Saccharomyces cerevisiae, and DNA-repair tests with E. coli and B. subtilis, have been developed. Cytogenetic assays, in vivo as well as in vitro, in both plant and animal systems, are also used to detect potential mutagens and carcinogens. Transfection is inhibited by base mutation, cleavage of DNA, loss of cohesive ends, interaction with histones, spermidine, nalidixic acid, etc. [3]. The efficiency of transfection is affected by temperature, DNA structure and the condition of competence of the recipient cells [3]. Transfection assays with phage MS2 RNA and φX174 DNA have been reported [15]. A fast and easy transfection assay using coli bacteriophage DNA is reported in this communication.
Abstract:
We investigate the use of a two-stage transform vector quantizer (TSTVQ) for coding of line spectral frequency (LSF) parameters in wideband speech coding. The first-stage quantizer of TSTVQ provides better matching of the source distribution, and the second-stage quantizer provides additional coding gain by using an individual cluster-specific decorrelating transform and variance normalization. Further coding gain is shown to be achieved by exploiting the slowly time-varying nature of speech spectra, and thus using an inter-frame cluster continuity (ICC) property in the first stage of the TSTVQ method. The proposed method saves 3-4 bits and reduces the computational complexity by 58-66% compared to the traditional split vector quantizer (SVQ), but at the expense of 1.5-2.5 times the memory.
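A minimal sketch of the two-stage idea follows, assuming a k-means first stage, a per-cluster PCA as the decorrelating transform, and a uniform scalar quantizer in the second stage; these are illustrative stand-ins, not the paper's actual coder, and the ICC refinement is not shown.

```python
# Two-stage transform VQ sketch (NumPy only): stage 1 picks a cluster
# centroid; stage 2 decorrelates and variance-normalizes the residual
# with that cluster's PCA basis before quantizing it.
import numpy as np

def train_tstvq(vectors, n_clusters=8, iters=20, rng=None):
    rng = np.random.default_rng(rng)
    codebook = vectors[rng.choice(len(vectors), n_clusters, replace=False)].copy()
    for _ in range(iters):                     # plain k-means first stage
        labels = np.argmin(((vectors[:, None] - codebook) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                codebook[k] = vectors[labels == k].mean(axis=0)
    labels = np.argmin(((vectors[:, None] - codebook) ** 2).sum(-1), axis=1)
    transforms, scales = [], []
    for k in range(n_clusters):                # per-cluster PCA + variances
        members = vectors[labels == k]
        if len(members) < 2:                   # degenerate-cluster guard
            transforms.append(np.eye(vectors.shape[1]))
            scales.append(np.ones(vectors.shape[1]))
            continue
        resid = members - codebook[k]
        cov = np.cov(resid.T) + 1e-9 * np.eye(vectors.shape[1])
        _, eigvecs = np.linalg.eigh(cov)
        transforms.append(eigvecs)
        scales.append((resid @ eigvecs).std(axis=0) + 1e-9)
    return codebook, transforms, scales

def encode(x, codebook, transforms, scales, step=0.25):
    k = int(np.argmin(((x - codebook) ** 2).sum(-1)))      # stage-1 index
    z = (x - codebook[k]) @ transforms[k] / scales[k]      # decorrelate + normalize
    return k, np.round(z / step).astype(int)               # stage-2 uniform quantizer

def decode(k, idx, codebook, transforms, scales, step=0.25):
    return codebook[k] + (idx * step * scales[k]) @ transforms[k].T

rng = np.random.default_rng(0)
lsf = rng.normal(size=(2000, 10))              # stand-in for LSF training vectors
cb, T, s = train_tstvq(lsf, rng=0)
k, idx = encode(lsf[0], cb, T, s)
x_hat = decode(k, idx, cb, T, s)
```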
Abstract:
A methodology for determining spacecraft attitude and autonomously calibrating the star camera, each independently of the other, is presented in this paper. Unlike most attitude determination algorithms, in which the attitude of the satellite depends on the camera calibration parameters (such as principal point offset, focal length, etc.), the proposed method has the advantage of computing spacecraft attitude independently of the camera calibration parameters, except lens distortion. In the proposed method, both attitude estimation and star camera calibration are carried out together, independently of each other, by directly utilizing the star coordinates in the image plane and the corresponding star vectors in the inertial coordinate frame. The satellite attitude, camera principal point offset, focal length (in pixels) and lens distortion coefficient are found by a simple two-step method. In the first step, all parameters (except lens distortion) are estimated using a closed-form solution based on a distortion-free camera model. In the second step, the lens distortion coefficient is estimated by a linear least-squares method, using the solution of the first step in a camera model that incorporates distortion. These steps are applied in an iterative manner to refine the estimated parameters. The whole procedure is fast enough for onboard implementation.
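The second step can be illustrated with a small sketch, assuming the common single-coefficient radial model r_d = r_u(1 + k1 r_u^2); the model form and the synthetic data are assumptions here, since the paper's exact distortion model is not given in the abstract.

```python
# Step-two sketch: linear least-squares estimate of a single radial
# distortion coefficient k1 under the assumed model
#   r_d = r_u * (1 + k1 * r_u**2),
# where r_u are undistorted star radii predicted from the step-one
# (distortion-free) solution and r_d are the measured centroid radii.
import numpy as np

def estimate_k1(r_u, r_d):
    r_u, r_d = np.asarray(r_u, float), np.asarray(r_d, float)
    # The residual r_d - r_u = k1 * r_u**3 is linear in k1, so the
    # least-squares solution reduces to a ratio of inner products.
    return float(np.dot(r_u**3, r_d - r_u) / np.dot(r_u**3, r_u**3))

# Synthetic check: recover a known k1 from noiseless radii.
r_u = np.linspace(0.05, 1.0, 50)
r_d = r_u * (1 + 2.5e-2 * r_u**2)
print(estimate_k1(r_u, r_d))                   # ~0.025
```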
Abstract:
When a uniform flow of any nature is interrupted, the readjustment of the flow results in concentrations and rarefactions, so that the peak value of the flow parameter will be higher than that which an elementary computation would suggest. When stress flow in a structure is interrupted, there are stress concentrations. These are generally localized and often large in relation to the values indicated by simple equilibrium calculations. With the advent of the industrial revolution, dynamic and repeated loading of materials became commonplace in engine parts and fast-moving vehicles. This led to serious fatigue failures arising from stress concentrations. Also, many metal-forming processes, fabrication techniques and weak-link type safety systems benefit substantially from the intelligent use or avoidance, as appropriate, of stress concentrations. As a result, in the last 80 years, the study and evaluation of stress concentrations has been a primary objective in the study of solid mechanics. Exact mathematical analysis of stress concentrations in finite bodies presents considerable difficulty for all but a few problems of infinite fields, concentric annuli and the like, treated under the presumption of small-deformation, linear elasticity. A whole series of techniques has been developed to deal with different classes of shapes and domains, causes and sources of concentration, material behaviour, phenomenological formulation, etc. These include real and complex functions, conformal mapping, transform techniques, integral equations, finite differences and relaxation, and, more recently, finite element methods. With the advent of large high-speed computers, the development of finite element concepts and a good understanding of functional analysis, it is now, in principle, possible to obtain with economy satisfactory solutions to a whole range of concentration problems by intelligently combining theory and computer application. An example is the hybridization of continuum concepts with computer-based finite element formulations. This new situation also makes possible a more direct approach to the problem of design, which is the primary purpose of most engineering analyses. The trend would appear to be clear: the computer will shape the theory, analysis and design.
Abstract:
An error-free computational approach is employed for finding the integer solution to a system of linear equations, using finite-field arithmetic. This approach is also extended to find the optimum solution for linear inequalities such as those arising in interval linear programming problems.
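As a minimal illustration of the finite-field idea, the sketch below solves an integer linear system exactly by Gauss-Jordan elimination modulo a single prime; it assumes the prime brackets the solution entries, whereas practical error-free solvers combine several primes via the Chinese Remainder Theorem. It is not the paper's specific algorithm.

```python
# Error-free solve of A x = b over Z_p: all arithmetic is exact modular
# integer arithmetic, so no floating-point rounding error can occur.

P = (1 << 31) - 1                          # Mersenne prime 2**31 - 1

def solve_mod_p(A, b, p=P):
    n = len(A)
    M = [[a % p for a in row] + [bi % p] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])   # partial pivot
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], p - 2, p)   # modular inverse via Fermat
        M[col] = [(v * inv) % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(vr - f * vc) % p for vr, vc in zip(M[r], M[col])]
    # Map residues in [0, p) back to signed integers (symmetric residue).
    return [x - p if x > p // 2 else x for x in (M[r][n] for r in range(n))]

print(solve_mod_p([[2, 1], [1, 3]], [3, -1]))   # exact integer solution [2, -1]
```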
Abstract:
Using the framework of a new relaxation system, which converts a nonlinear viscous conservation law into a system of linear convection-diffusion equations with nonlinear source terms, a finite variable difference method is developed for nonlinear hyperbolic-parabolic equations. The basic idea is to formulate a finite volume method with an optimum spatial difference, using the Locally Exact Numerical Scheme (LENS), leading to a finite variable difference method as introduced by Sakai [Katsuhiro Sakai, A new finite variable difference method with application to locally exact numerical scheme, Journal of Computational Physics, 124 (1996) pp. 301-308], for the linear convection-diffusion equations obtained by using the relaxation system. Source terms are treated with the well-balanced scheme of Jin [Shi Jin, A steady-state capturing method for hyperbolic systems with geometrical source terms, Mathematical Modelling and Numerical Analysis, 35 (4) (2001) pp. 631-645]. Benchmark test problems for scalar and vector conservation laws in one and two dimensions are solved using this new algorithm, and the results demonstrate the efficiency of the scheme in capturing the flow features accurately.
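The linearizing role of a relaxation system can be seen in a minimal sketch: a first-order Jin-Xin-type relaxed flux for the 1D inviscid Burgers equation. This illustrates only the generic relaxation mechanism, not the paper's LENS-based finite variable difference method or its well-balanced source treatment; the grid, speeds and CFL number are illustrative choices.

```python
# First-order relaxed flux in the zero-relaxation limit: the nonlinear
# flux f(u) is handled through the two linear characteristics +/- sqrt(a)
# of the relaxation system, giving the flux
#   F_{j+1/2} = (f_j + f_{j+1})/2 - sqrt(a)/2 * (u_{j+1} - u_j).
import numpy as np

def burgers_relaxed(u0, dx, t_end, a=1.5):
    # Subcharacteristic condition: sqrt(a) >= max|f'(u)| = max|u|.
    u, t, c = u0.copy(), 0.0, np.sqrt(a)
    while t < t_end:
        dt = min(0.4 * dx / c, t_end - t)
        f = 0.5 * u**2                                     # Burgers flux
        F = 0.5 * (f + np.roll(f, -1)) - 0.5 * c * (np.roll(u, -1) - u)
        u -= dt / dx * (F - np.roll(F, 1))                 # periodic domain
        t += dt
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = burgers_relaxed(np.sin(2 * np.pi * x), x[1] - x[0], 0.1)
```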
Abstract:
The remarkable advances made in recombinant DNA technology over the last two decades have paved the way for the use of gene transfer to treat human diseases. Several protocols have been developed for the introduction and expression of genes in humans, but clinical efficacy has not been conclusively demonstrated in any of them. The eventual success of gene therapy for genetic and acquired disorders depends on the development of better gene transfer vectors for sustained, long-term expression of foreign genes, as well as a better understanding of the pathophysiology of human diseases. It is heartening to note that some of the gene therapy protocols have found other applications, such as genetic immunization or DNA vaccines, which is being heralded as the third vaccine revolution. Gene therapy is yet to become a dream come true, but the light is seen at the end of the tunnel.
Abstract:
This paper describes a simple technique for the fermentation of untreated or partly treated leafy biomass in a digester of novel design, without incurring the usual problems of feeding, floating and scum formation of the feed. The solid-phase fermentation studied consists of a bed of biomass frequently sprinkled with an aqueous bacterial inoculum, with the leachate recycled to conserve moisture and improve bacterial dispersion in the bed. The decomposition of the leaf biomass and water hyacinth substrates used in this study was rapid, taking 45 and 30 days, respectively, for the production of 250 and 235 l of biogas per kg total solids (TS), at a sprinkled volume of 26 ml cm−2 of bed per day, applied at 12 h intervals. Very little volatile fatty acid (VFA) intermediate accumulated in the sprinkled liquid, suggesting acidogenesis to be rate-limiting in this process. From the pattern of VFA and gas produced, it is concluded that most of the biogas is produced from the biomass bed itself, making the operation of a separate methanogenic reactor unnecessary.
Abstract:
Site-specific geotechnical data are always random and variable in space. In the present study, a procedure for quantifying the variability in geotechnical characterization and design parameters is discussed, using site-specific cone tip resistance data (qc) obtained from the static cone penetration test (SCPT). The parameters for spatial variability modeling of geotechnical parameters, i.e. (i) the existing trend function in the in situ qc data; (ii) second-moment statistics, i.e. the mean, variance and auto-correlation structure of the soil strength and stiffness parameters; and (iii) inputs from the spatial correlation analysis, are utilized in numerical modeling procedures using the finite difference numerical code FLAC 5.0. The influence of considering spatially variable soil parameters on reliability-based geotechnical design is studied for two cases, i.e. (a) the bearing capacity analysis of a shallow foundation resting on a clayey soil, and (b) the analysis of the stability and deformation pattern of a cohesive-frictional soil slope. The study highlights the procedure for conducting a site-specific study using field test data such as SCPT in geotechnical analysis, and demonstrates that a few additional computations involving soil variability provide better insight into the role of variability in designs.
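The characterization steps (i) and (ii) can be sketched compactly: detrend qc against depth, then estimate the second-moment statistics and the sample autocorrelation of the residuals. The linear trend model, uniform depth spacing and synthetic data below are assumptions for illustration; the FLAC-based simulation step is not reproduced.

```python
# Second-moment characterization of cone tip resistance qc(z):
# (i) remove a least-squares linear depth trend, (ii) compute mean and
# variance of the residuals, (iii) estimate their sample autocorrelation.
import numpy as np

def characterize_qc(z, qc):
    coeffs = np.polyfit(z, qc, deg=1)              # (i) linear trend in depth
    resid = qc - np.polyval(coeffs, z)
    mean, var = resid.mean(), resid.var(ddof=1)    # (ii) second-moment statistics
    r = resid - mean
    n = len(r)
    acf = np.correlate(r, r, mode="full")[n - 1:] / (var * (n - 1))  # (iii)
    return coeffs, mean, var, acf

z = np.linspace(0.0, 10.0, 101)                    # depths (m), synthetic record
qc = 2.0 + 0.5 * z + np.random.default_rng(0).normal(0.0, 0.3, z.size)
trend, mean, var, acf = characterize_qc(z, qc)     # acf[0] == 1 at zero lag
```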
Abstract:
In this note, the fallacy in the method given by Sharma and Swarup, in their paper on the time-minimising transportation problem, for determining the set S_hk of all nonbasic cells which, when introduced into the basis, would either eliminate a given basic cell (h, k) from the basis or reduce the amount x_hk, is pointed out.
Abstract:
The subspace intersection method (SIM) provides unbiased bearing estimates of multiple acoustic sources in a range-independent shallow ocean using a one-dimensional search, without prior knowledge of source ranges and depths. The original formulation of this method is based on the deployment of a horizontal linear array of hydrophones which measure acoustic pressure. In this paper, we extend SIM to an array of acoustic vector sensors which measure pressure as well as all components of particle velocity. The use of vector sensors reduces the minimum number of sensors required by a factor of 4, and also eliminates the constraint that the intersensor spacing should not exceed half a wavelength. The additional information provided by the vector sensors leads to performance enhancement in the form of lower estimation error and higher resolution.
An FETI-preconditioned conjugate gradient method for large-scale stochastic finite element problems
Abstract:
In the spectral stochastic finite element method for analyzing an uncertain system, the uncertainty is represented by a set of random variables, and a quantity of interest such as the system response is considered as a function of these random variables. Consequently, the underlying Galerkin projection yields a block system of deterministic equations where the blocks are sparse but coupled. The solution of this algebraic system of equations rapidly becomes challenging when the size of the physical system and/or the level of uncertainty is increased. This paper addresses this challenge by presenting a preconditioned conjugate gradient method for such block systems, where the preconditioning step is based on the dual-primal finite element tearing and interconnecting (FETI-DP) method, equipped with a Krylov subspace reuse technique for accelerating the iterative solution of systems with multiple and repeated right-hand sides. Preliminary performance results on a Linux cluster suggest that the proposed solution method is numerically scalable, and demonstrate its potential for making the uncertainty quantification of realistic systems tractable.
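For orientation, here is a generic preconditioned conjugate gradient sketch with a pluggable preconditioner; a simple Jacobi preconditioner and a dense random SPD test matrix stand in for the FETI-DP preconditioner and the coupled block system, neither of which is reproduced here. The Krylov reuse of the paper is only hinted at, by seeding the initial guess across right-hand sides.

```python
# Preconditioned conjugate gradient for an SPD system A x = b, with the
# preconditioner applied through the callable M_inv(r) ~ M^{-1} r.
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, maxit=500):
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(1)
Q = rng.normal(size=(100, 100))
A = Q @ Q.T + 100 * np.eye(100)                 # SPD test matrix
b = rng.normal(size=100)
x = pcg(A, b, lambda r: r / np.diag(A))         # Jacobi stand-in preconditioner
# For a second, nearby right-hand side, passing x as x0 crudely reuses
# information from the previous solve.
```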
Abstract:
We propose a self-regularized pseudo-time marching strategy for ill-posed, nonlinear inverse problems involving the recovery of system parameters given partial and noisy measurements of the system response. While various regularized Newton methods are popularly employed to solve these problems, the resulting solutions are known to depend sensitively on the noise intensity in the data and on the regularization parameters, an optimal choice for which remains a tricky issue. Through limited numerical experiments on a couple of parameter reconstruction problems, one involving the identification of a truss bridge and the other related to imaging soft-tissue organs for early detection of cancer, we demonstrate the superior features of the pseudo-time marching schemes.
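The flavor of pseudo-time marching can be conveyed on a toy problem: treat parameter recovery as a gradient flow d(theta)/d(tau) = -J^T (G(theta) - d) in a fictitious time tau and march it with explicit Euler steps, with early stopping playing the role of regularization. The damped-oscillation forward model, step size and stopping point below are illustrative assumptions, not the authors' truss-bridge or tissue-imaging formulations.

```python
# Pseudo-time marching sketch: recover theta = (amplitude, decay) of a
# toy forward model G from noisy data d by integrating the gradient flow.
import numpy as np

def G(theta, t):
    return theta[0] * np.exp(-theta[1] * t) * np.cos(2 * np.pi * t)

def jacobian(theta, t):
    e = np.exp(-theta[1] * t) * np.cos(2 * np.pi * t)
    return np.column_stack([e, -theta[0] * t * e])

def pseudo_time_march(d, t, theta0, dtau=0.02, steps=1500):
    theta = np.asarray(theta0, float)
    for _ in range(steps):                     # stopping early regularizes
        r = G(theta, t) - d
        theta -= dtau * jacobian(theta, t).T @ r
    return theta

t = np.linspace(0.0, 2.0, 100)
rng = np.random.default_rng(2)
d = G(np.array([1.5, 0.8]), t) + rng.normal(0.0, 0.02, t.size)
print(pseudo_time_march(d, t, theta0=[1.0, 0.5]))   # ~[1.5, 0.8]
```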