907 results for fixed point method
Abstract:
The steady MHD mixed convection flow of a viscoelastic fluid in the vicinity of a two-dimensional stagnation point with a magnetic field has been investigated under the assumption that the fluid obeys the upper-convected Maxwell (UCM) model. Boundary layer theory is used to simplify the equations of motion, induced magnetic field and energy, which results in three coupled non-linear ordinary differential equations that are well-posed. These equations have been solved using the finite difference method. The results indicate a reduction in the surface velocity gradient, surface heat transfer and displacement thickness with an increase in the elasticity number. These trends are opposite to those reported in the literature for a second-grade fluid. The surface velocity gradient and heat transfer are enhanced by the magnetic and buoyancy parameters. The surface heat transfer increases with the Prandtl number, but the surface velocity gradient decreases.
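The paper's three coupled UCM similarity equations are not reproduced in the abstract, so as a generic stand-in the sketch below solves the classical Blasius boundary-layer equation f''' + (1/2) f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1, by shooting on the unknown wall value f''(0) with RK4 and bisection; the grid, bracket and equation are our illustrative choices, not the paper's method.

```python
import numpy as np

def rhs(y):
    """State y = (f, f', f''); Blasius gives f''' = -(1/2) f f''."""
    f, fp, fpp = y
    return np.array([fp, fpp, -0.5 * f * fpp])

def f_prime_at_infinity(fpp0, eta_max=10.0, n=1000):
    """Integrate from the wall with guessed f''(0) and return f'(eta_max)."""
    y = np.array([0.0, 0.0, fpp0])
    h = eta_max / n
    for _ in range(n):          # classical RK4 in the similarity variable
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[1]                 # should approach 1 for the correct f''(0)

lo, hi = 0.1, 1.0               # bracket for the wall shear f''(0)
for _ in range(40):             # bisection on the far-field condition
    mid = 0.5 * (lo + hi)
    if f_prime_at_infinity(mid) < 1.0:
        lo = mid
    else:
        hi = mid
fpp_wall = 0.5 * (lo + hi)      # classical value is about 0.33206
```

The recovered wall shear f''(0) ≈ 0.332 is the quantity whose UCM analogue (the surface velocity gradient) the abstract reports as decreasing with the elasticity number.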
Abstract:
A new form of a multi-step transversal linearization (MTL) method is developed and numerically explored in this study for a numeric-analytical integration of non-linear dynamical systems under deterministic excitations. As with other transversal linearization methods, the present version also requires that the linearized solution manifold transversally intersects the non-linear solution manifold at a chosen set of points or cross-section in the state space. However, a major point of departure of the present method is that it has the flexibility of treating non-linear damping and stiffness terms of the original system as damping and stiffness terms in the transversally linearized system, even though these linearized terms become explicit functions of time. From this perspective, the present development is closely related to the popular practice of tangent-space linearization adopted in finite element (FE) based solutions of non-linear problems in structural dynamics. The only difference is that the MTL method would require construction of transversal system matrices in lieu of the tangent system matrices needed within an FE framework. The resulting time-varying linearized system matrix is then treated as a Lie element using Magnus’ characterization [W. Magnus, On the exponential solution of differential equations for a linear operator, Commun. Pure Appl. Math., VII (1954) 649–673] and the associated fundamental solution matrix (FSM) is obtained through repeated Lie-bracket operations (or nested commutators). An advantage of this approach is that the underlying exponential transformation could preserve certain intrinsic structural properties of the solution of the non-linear problem. 
Yet another advantage of the transversal linearization lies in the non-unique representation of the linearized vector field – an aspect that has been specifically exploited in this study to enhance the spectral stability of the proposed family of methods and thus contain the temporal propagation of local errors. A simple analysis of the formal orders of accuracy is provided within a finite dimensional framework. Only a limited numerical exploration of the method is presently provided for a couple of popularly known non-linear oscillators, viz. a hardening Duffing oscillator, which has a non-linear stiffness term, and the van der Pol oscillator, which is self-excited and has a non-linear damping term.
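The Magnus machinery the abstract invokes can be illustrated on a toy linear time-varying system x' = A(t)x: the sketch below builds the fundamental solution matrix step by step from a two-term (fourth-order) Magnus expansion, whose second term is the nested commutator (Lie bracket) mentioned above. The oscillator A(t) is a hypothetical example, not the paper's linearized system.

```python
import numpy as np
from scipy.linalg import expm

def magnus_step(A, t, h):
    """Fundamental solution matrix Phi(t+h, t) for x' = A(t) x using a
    fourth-order, two-term Magnus expansion: the first term integrates A
    over the step (two Gauss-Legendre nodes); the second is a nested
    commutator correction."""
    c = 0.5 * h / np.sqrt(3.0)           # Gauss-Legendre offsets from midpoint
    A1 = A(t + 0.5 * h - c)
    A2 = A(t + 0.5 * h + c)
    omega = 0.5 * h * (A1 + A2) \
        + (np.sqrt(3.0) / 12.0) * h * h * (A2 @ A1 - A1 @ A2)
    return expm(omega)

# Hypothetical time-varying system: an oscillator whose stiffness drifts,
# x'' + (1 + 0.1 t) x = 0, written in first-order form.
def A(t):
    return np.array([[0.0, 1.0], [-(1.0 + 0.1 * t), 0.0]])

Phi, t, h = np.eye(2), 0.0, 0.01
for _ in range(100):
    Phi = magnus_step(A, t, h) @ Phi     # compose the per-step FSMs
    t += h
```

Because A(t) and the commutator are traceless here, so is the Magnus exponent, and det Φ = 1 is preserved to machine precision along the whole trajectory, a small instance of the structure preservation the exponential representation offers.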
Abstract:
Non-standard finite difference methods (NSFDM) introduced by Mickens [Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994] are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws, by a novel utilization of the decoupled equations using characteristic variables. In the second part of this paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature in capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers–Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791–797] recently introduced a NSFDM in conservative form. This method captures shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy to this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal on Numerical Analysis 35 (6) (1998) 2250–2271], in which the accurate NSFDM is used as the basic scheme and a localized relaxation NSFDM is used as the supporting scheme, which acts like a filter. Relaxation schemes introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications on Pure and Applied Mathematics 48 (1995) 235–276] are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term.
The relaxation parameter (λ) is chosen locally on the three-point stencil of the grid, which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures shock waves more accurately than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons with popular numerical methods such as the Roe and ENO schemes.
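The composite strategy can be sketched with standard stand-in schemes (not Mickens' NSFDM or the relaxation scheme themselves): a few steps of an accurate, low-dissipation scheme (Lax-Wendroff) followed by one step of a dissipative scheme (Lax-Friedrichs) acting as a filter, applied here to Burgers' equation with sonic-point Riemann data of the kind that trips non-dissipative schemes.

```python
import numpy as np

def flux(u):                    # Burgers flux f(u) = u^2 / 2
    return 0.5 * u * u

def lax_friedrichs(u, lam):     # dissipative supporting scheme (the filter)
    up, um = np.roll(u, -1), np.roll(u, 1)
    return 0.5 * (up + um) - 0.5 * lam * (flux(up) - flux(um))

def lax_wendroff(u, lam):       # accurate basic scheme (Richtmyer two-step)
    up = np.roll(u, -1)
    uh = 0.5 * (u + up) - 0.5 * lam * (flux(up) - flux(u))  # half-step states
    fh = flux(uh)
    return u - lam * (fh - np.roll(fh, 1))

def composite_step(u, lam, k=3):
    for _ in range(k):          # k accurate steps ...
        u = lax_wendroff(u, lam)
    return lax_friedrichs(u, lam)   # ... then one filtering step

# Riemann data with a sonic point (u_L = -1, u_R = +1): the entropy solution
# is a rarefaction fan through u = 0, the case where schemes lacking
# dissipation at the sonic point can admit an unphysical expansion shock.
N = 200
x = np.linspace(-1.0, 1.0, N, endpoint=False)   # periodic grid
u = np.where(x < 0.0, -1.0, 1.0)
lam = 0.4                       # dt/dx, CFL-safe for |u| <= 1
for _ in range(20):
    u = composite_step(u, lam)
```

Both building blocks are in conservation form, so the discrete total of u is preserved exactly, and the filtered solution opens into a smooth fan rather than retaining a jump at the sonic point.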
Abstract:
Randomness in the source condition, in addition to heterogeneity in the system parameters, can also be a major source of uncertainty in the concentration field. Hence, a more general form of the problem formulation is necessary to consider randomness in both the source condition and the system parameters. When the source varies with time, the unsteady problem can be solved using the unit response function. In the case of random system parameters, the response function becomes a random function and depends on the randomness in the system parameters. In the present study, the source is modelled as a random discrete process with either a fixed interval or a random interval (the Poisson process), and an attempt is made to assess the relative effects of various types of source uncertainties on the probabilistic behaviour of the concentration in a porous medium while the system parameters are also modelled as random fields. Analytical expressions for the mean and covariance of concentration due to a random discrete source are derived in terms of the mean and covariance of the unit response function. The probabilistic behaviour of the random response function is obtained by using a perturbation-based stochastic finite element method (SFEM), which performs well for mild heterogeneity. The proposed method is applied to analyse both 1-D and 3-D solute transport problems. The results obtained with SFEM are compared with Monte Carlo simulation for 1-D problems.
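The superposition step can be illustrated in a few lines. The sketch below takes a hypothetical deterministic 1-D advection-dispersion unit response (in the paper the response itself is random, with moments supplied by the SFEM) and a fixed-interval pulse source with i.i.d. random amplitudes, and propagates the source moments to concentration moments.

```python
import numpy as np

def g(t, v=1.0, D=0.1, x=1.0):
    """Hypothetical unit response: concentration at position x and elapsed
    time t due to a unit pulse released at t = 0 (advection-dispersion)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    ok = t > 0.0
    out[ok] = (x / np.sqrt(4.0 * np.pi * D * t[ok] ** 3)) * \
        np.exp(-(x - v * t[ok]) ** 2 / (4.0 * D * t[ok]))
    return out

# Pulses at fixed intervals with i.i.d. amplitudes of mean mu_q and
# standard deviation sigma_q -- the fixed-interval case of the abstract.
mu_q, sigma_q = 1.0, 0.3
release_times = np.arange(0.0, 2.0, 0.25)

def moments(t_obs):
    """Mean and variance of concentration at observation time t_obs."""
    G = g(t_obs - release_times)          # response of each past pulse
    return mu_q * G.sum(), sigma_q ** 2 * (G ** 2).sum()

mean_c, var_c = moments(3.0)
```

For uncorrelated amplitudes the variance is simply the sum of squared responses; correlated amplitudes or Poisson-distributed release times change only this second-moment bookkeeping, which is the generalization the paper carries out.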
Abstract:
An analysis is performed to study the unsteady combined forced and free convection flow (mixed convection flow) of a viscous incompressible electrically conducting fluid in the vicinity of an axisymmetric stagnation point adjacent to a heated vertical surface. The unsteadiness in the flow and temperature fields is due to the free stream velocity, which varies arbitrarily with time. Both constant wall temperature and constant heat flux conditions are considered in this analysis. By using suitable transformations, the Navier-Stokes and energy equations with four independent variables (x, y, z, t) are reduced to a system of partial differential equations with two independent variables (eta, tau). These transformations also uncouple the momentum and energy equations, resulting in a primary axisymmetric flow, an energy equation dependent on the primary flow, and a buoyancy-induced secondary flow dependent on both the primary flow and energy. The resulting system of partial differential equations has been solved numerically by using both an implicit finite-difference scheme and the differential-difference method. An interesting result is that, for a decelerating free stream velocity, flow reversal occurs in the primary flow after a certain instant of time, and the magnetic field delays or prevents the flow reversal. The surface heat transfer and the surface shear stress in the primary flow increase with the magnetic field, but the surface shear stress in the buoyancy-induced secondary flow decreases. Further, the heat transfer increases with the Prandtl number, but the surface shear stress in the secondary flow decreases.
Abstract:
It has been shown that the conventional practice of designing a compensated hot-wire amplifier with a fixed ceiling-to-floor ratio results in a considerable and unnecessary increase in noise level at compensation settings other than the optimum (which is at the maximum compensation at the highest frequency of interest). The optimum ceiling-to-floor ratio has been estimated to be between 1.5 and 2.0 ωmaxM. Applying the above considerations to an amplifier in which the ceiling-to-floor ratio is optimized at each compensation setting (for a given amplifier bandwidth) shows the usefulness of the method in improving the signal-to-noise ratio.
Abstract:
In this study we explore the concurrent, combined use of three research methods, statistical corpus analysis and two psycholinguistic experiments (a forced-choice and an acceptability rating task), using verbal synonymy in Finnish as a case in point. In addition to supporting conclusions from earlier studies concerning the relationships between corpus-based and experimental data (e.g., Featherston 2005), we show that each method adds to our understanding of the studied phenomenon, in a way which could not be achieved through any single method by itself. Most importantly, whereas relative rareness in a corpus is associated with dispreference in selection, such infrequency does not categorically always entail substantially lower acceptability. Furthermore, we show that forced-choice and acceptability rating tasks pertain to distinct linguistic processes, with category-wise incommensurable scales of measurement, and should therefore be merged with caution, if at all.
Abstract:
The growth of high-performance applications in computer graphics, signal processing and scientific computing is a key driver for high-performance, fixed-latency, pipelined floating point dividers. Solutions available in the literature use large lookup tables for double precision floating point operations. In this paper, we propose a cost-effective, fixed-latency pipelined divider using a modified Taylor-series expansion for double precision floating point operations. We reduce chip area by using a smaller lookup table. We show that the latency of the proposed divider is 49.4 times the latency of a full-adder. The proposed divider reduces chip area by about 81% compared with the pipelined divider in [9], which is also based on a modified Taylor-series expansion.
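The small-table Taylor-series idea can be sketched in software (this illustrates the arithmetic only, not the paper's pipelined hardware design; the table size and term count are our assumptions): seed y0 ≈ 1/b from a 64-entry table, then expand 1/b = y0/(1 − e) with e = 1 − b·y0 as y0·(1 + e + e² + e³).

```python
import math

TABLE_BITS = 6   # a 64-entry seed table, standing in for the large tables
SEED = [1.0 / (1.0 + (i + 0.5) / (1 << TABLE_BITS))
        for i in range(1 << TABLE_BITS)]

def recip(b):
    """Approximate 1/b for b in [1, 2).  The seed hits the interval
    midpoint, so |e| < 2^-7 and the error after the cubic term is below
    about 2^-28 relative."""
    assert 1.0 <= b < 2.0
    i = int((b - 1.0) * (1 << TABLE_BITS))   # index from the top bits of b
    y0 = SEED[i]
    e = 1.0 - b * y0
    return y0 * (1.0 + e + e * e + e * e * e)

def divide(a, b):
    """a / b for positive b: normalize b into [1, 2) by a power of two,
    as a floating point unit would via the exponent field."""
    exp = math.floor(math.log2(b))
    bn = b / (2.0 ** exp)
    return a * recip(bn) / (2.0 ** exp)
```

The trade-off on display is the one the abstract describes: a smaller table buys a shorter seed, paid for with a few extra multiply-accumulate terms of fixed, pipeline-friendly latency.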
Abstract:
The effect of surface mass transfer velocities having normal, principal and transverse direction components ("vectored" suction and injection) on the steady, laminar, compressible boundary layer at a three-dimensional stagnation point has been investigated both for nodal and saddle points of attachment. The similarity solutions of the boundary layer equations were obtained numerically by the method of parametric differentiation. The principal and transverse direction surface mass transfer velocities significantly affect the skin friction (both in the principal and transverse directions) and the heat transfer. Also the inadequacy of assuming a linear viscosity-temperature relation at low wall temperatures is shown.
Abstract:
Drug-induced liver injury is one of the most frequent reasons for drug removal from the market. In recent years there has been pressure to develop more cost-efficient, faster and easier ways to investigate drug-induced toxicity, in order to recognize hepatotoxic drugs in the earlier phases of drug development. A High Content Screening (HCS) instrument is an automated microscope equipped with image analysis software. It makes image analysis faster and decreases the risk of human error by always analyzing the images in the same way. Because less drug and time are needed in the analysis and multiple parameters can be analyzed from the same cells, the method should be more sensitive, effective and cheaper than conventional assays in cytotoxicity testing. Liver cells are rich in mitochondria, and many drugs target their toxicity to hepatocyte mitochondria. Mitochondria produce the majority of the ATP in the cell through oxidative phosphorylation. They maintain biochemical homeostasis in the cell and participate in cell death. Mitochondria are divided into two compartments by the inner and outer mitochondrial membranes, and oxidative phosphorylation takes place at the inner mitochondrial membrane. A part of the respiratory chain, a protein called cytochrome c, activates caspase cascades when released; this leads to apoptosis. The aim of this study was to implement, optimize and compare mitochondrial toxicity HCS assays in live cells and fixed cells in two cellular models: the human HepG2 hepatoma cell line and rat primary hepatocytes. Three hepatotoxic and mitochondria-toxic drugs (staurosporine, rotenone and tolcapone) were used. Cells were treated with the drugs, incubated with the fluorescent probes, and the images were then analyzed using a Cellomics ArrayScan VTI reader.
Finally, the results obtained after optimizing the methods were compared to each other and to the results of the conventional cytotoxicity assays, the ATP and LDH measurements. After optimization, the live cell method and rat primary hepatocytes were selected for use in the experiments. Staurosporine was the most toxic of the three drugs and damaged the cells most quickly. Rotenone was less toxic, but its results were more reproducible, so it would serve as a good positive control in the screening. Tolcapone was the least toxic. So far, the conventional cytotoxicity assays performed better than the HCS methods; further optimization is needed to make the HCS method more sensitive, which was not possible in this study due to time limits.
Measurement of acceleration while walking as an automated method for gait assessment in dairy cattle
Abstract:
The aims were to determine whether measures of acceleration of the legs and back of dairy cows while they walk could help detect changes in gait or locomotion associated with lameness and differences in the walking surface. In 2 experiments, 12 or 24 multiparous dairy cows were fitted with five 3-dimensional accelerometers, 1 attached to each leg and 1 to the back, and acceleration data were collected while cows walked in a straight line on concrete (experiment 1) or on both concrete and rubber (experiment 2). Cows were video-recorded while walking to assess overall gait, asymmetry of the steps, and walking speed. In experiment 1, cows were selected to maximize the range of gait scores, whereas no clinically lame cows were enrolled in experiment 2. For each accelerometer location, overall acceleration was calculated as the magnitude of the 3-dimensional acceleration vector; the variance of overall acceleration and the asymmetry of variance of acceleration within the front and rear pairs of legs were also computed. In experiment 1, the asymmetry of variance of acceleration in the front and rear legs was positively correlated with overall gait and the visually assessed asymmetry of the steps (r ≥ 0.6). Walking speed was negatively correlated with the asymmetry of variance of the rear legs (r = −0.8) and positively correlated with the acceleration and the variance of acceleration of each leg and the back (r ≥ 0.7). In experiment 2, cows had lower gait scores [2.3 vs. 2.6; standard error of the difference (SED) = 0.1, measured on a 5-point scale] and lower scores for asymmetry of the steps (18.0 vs. 23.1; SED = 2.2, measured on a continuous 100-unit scale) when they walked on rubber compared with concrete, and their walking speed increased (1.28 vs. 1.22 m/s; SED = 0.02). The acceleration of the front (1.67 vs. 1.72 g; SED = 0.02) and rear (1.62 vs. 1.67 g; SED = 0.02) legs and the variance of acceleration of the rear legs (0.88 vs. 0.94 g; SED = 0.03) were lower when cows walked on rubber compared with concrete. Despite the improvements in gait score that occurred when cows walked on rubber, the asymmetry of variance of acceleration of the front legs was higher (15.2 vs. 10.4%; SED = 2.0). The difference in walking speed between concrete and rubber correlated with the difference in the mean acceleration and the difference in the variance of acceleration of the legs and back (r ≥ 0.6). Three-dimensional accelerometers seem to be a promising tool for lameness detection on farm and for studying walking surfaces, especially when attached to a leg.
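The summary measures above are straightforward to compute from raw 3-axis traces. The sketch below uses synthetic signals (the waveforms and amplitudes are our illustrative assumptions, not the study's processing pipeline): a sound leg swings with larger amplitude than a leg the cow is favouring, so its acceleration variance is higher and the pair shows a large asymmetry.

```python
import numpy as np

def overall_acceleration(ax, ay, az):
    """Magnitude of the 3-dimensional acceleration vector, sample by sample."""
    return np.sqrt(ax**2 + ay**2 + az**2)

def asymmetry_of_variance(var_a, var_b):
    """Asymmetry within a pair of legs, in percent; 0 means symmetric."""
    return 100.0 * abs(var_a - var_b) / max(var_a, var_b)

# Synthetic 10 s traces at 100 Hz: 1 g baseline plus a stride oscillation,
# with a smaller amplitude on the favoured (lame-side) leg.
t = np.linspace(0.0, 10.0, 1000, endpoint=False)
zero = np.zeros_like(t)
sound_leg = overall_acceleration(zero, zero, 1.0 + 0.5 * np.sin(2 * np.pi * t))
lame_leg = overall_acceleration(zero, zero, 1.0 + 0.2 * np.sin(2 * np.pi * t))

var_sound = float(np.var(sound_leg))   # 0.5^2 / 2 = 0.125
var_lame = float(np.var(lame_leg))     # 0.2^2 / 2 = 0.02
asym = asymmetry_of_variance(var_sound, var_lame)
```

With these amplitudes the within-pair asymmetry comes out at 84%, the kind of signal the study found correlated with visually scored gait.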
Abstract:
A novel optical method is proposed and demonstrated for real-time dimension estimation of thin opaque cylindrical objects. The methodology relies on the free-space Fraunhofer diffraction principle. The central region of such a tailored diffraction pattern, obtained under a suitable choice of illumination conditions, comprises a pair of equal-intensity maxima whose separation remains constant and independent of the diameter of the diffracting object. An analysis of the intensity distribution in this region reveals the following. At a point symmetrically located between the said maxima, the light intensity varies characteristically with the diameter of the diffracting object, exhibiting a relatively stronger intensity modulation under spherical wave illumination than under plane wave illumination. The analysis further reveals that this intensity variation with diameter is controllable by the illumination conditions. Exploiting these hitherto unexplored features, the present communication reports, for the first time, a reliable method of estimating the diameter of thin opaque cylindrical objects in real time, with nanometer resolution, from a single-point intensity measurement. Based on the proposed methodology, results of a few simulation and experimental investigations carried out on metallic wires with diameters spanning the range 5 to 50 μm are presented. The results show that the proposed method is well suited for high-resolution on-line monitoring of ultrathin wire diameters, extensively used in micro-mechanics and the semiconductor industries, where conventional diffraction-based methods fail to produce accurate results.
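For context, the conventional diffraction-based approach the paper improves on can be stated in one line via Babinet's principle: under plane-wave illumination the far-field pattern of an opaque wire matches that of a slit of the same width, so the dark-fringe spacing Δy on a screen at distance L gives the diameter d = λL/Δy. The numbers below are a hypothetical worked example, not the paper's single-point-intensity method.

```python
# Classical fringe-spacing estimate of wire diameter (Babinet's principle).
wavelength = 632.8e-9    # He-Ne laser wavelength, m
L = 1.0                  # wire-to-screen distance, m (hypothetical setup)
dy = 12.656e-3           # measured dark-fringe spacing, m (hypothetical)
d = wavelength * L / dy  # inferred wire diameter, m
```

For these readings d comes out at 50 µm, the top of the diameter range the paper studies; the paper's single-point scheme avoids having to locate and measure the fringe spacing at all.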
Abstract:
Our ability to infer protein quaternary structure automatically from atom and lattice information is inadequate, especially for weak complexes and heteromeric quaternary structures. Several approaches exist, but they have limited performance. Here, we present a new scheme to infer protein quaternary structure from lattice and protein information, with all-around coverage for strong, weak and very weak affinity homomeric and heteromeric complexes. The scheme combines a naive Bayes classifier and point group symmetry under a Boolean framework to detect quaternary structures in the crystal lattice. It consistently produces ≥90% coverage across diverse benchmarking data sets, including a notably superior 95% coverage for recognition of heteromeric complexes, compared with 53% on the same data set by the current state-of-the-art method. A detailed study of a limited number of prediction-failed cases offers interesting insights into the intriguing nature of protein contacts in the lattice. The findings have implications for accurate inference of the quaternary states of proteins, especially weak affinity complexes.
Abstract:
Many physical problems can be modeled by scalar, first-order, nonlinear, hyperbolic, partial differential equations (PDEs). The solutions to these PDEs often contain shock and rarefaction waves, where the solution becomes discontinuous or has a discontinuous derivative. One can encounter difficulties using traditional finite difference methods to solve these equations. In this paper, we introduce a numerical method for solving first-order scalar wave equations. The method involves solving ordinary differential equations (ODEs) to advance the solution along the characteristics and to propagate the characteristics in time. Shocks are created when characteristics cross, and the shocks are then propagated by applying analytical jump conditions. New characteristics are inserted in spreading rarefaction fans. New characteristics are also inserted when values on adjacent characteristics lie on opposite sides of an inflection point of a nonconvex flux function. Solutions along characteristics are propagated using a standard fourth-order Runge-Kutta ODE solver. Shock waves are kept perfectly sharp. In addition, shock locations and velocities are determined without analyzing smeared profiles or taking numerical derivatives. In order to test the numerical method, we study analytically a particular class of nonlinear hyperbolic PDEs, deriving closed-form solutions for certain special initial data. We also find bounded, smooth, self-similar solutions using group theoretic methods. The numerical method is validated against these analytical results. In addition, we compare the errors in our method with those using the Lax-Wendroff method for both convex and nonconvex flux functions. Finally, we apply the method to solve a PDE with a convex flux function describing the development of a thin liquid film on a horizontally rotating disk and a PDE with a nonconvex flux function, arising in a problem concerning flow in an underground reservoir.
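The core mechanism, advancing characteristics with RK4 and detecting shock formation when they cross, can be sketched for Burgers' equation u_t + (u²/2)_x = 0 (our choice of example; initial data and grid are illustrative). Each characteristic carries a constant u and moves at speed f'(u) = u, so the RK4 step is exact here, and the first crossing time can be checked against the analytic breaking time t* = −1/min u₀'(x).

```python
import numpy as np

def rk4_step(x, u, h):
    """One RK4 step of dx/dt = u along each characteristic."""
    k1 = u
    k2 = u            # the speed depends on neither x nor t,
    k3 = u            # so all four RK4 stages coincide
    k4 = u
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.linspace(-2.0, 2.0, 401)   # fan of characteristic feet
u = -np.tanh(2.0 * x)             # decreasing data: characteristics converge
t, h = 0.0, 0.005
# March until two adjacent characteristics cross (shock formation).
while t < 2.0 and np.all(np.diff(x) > 0.0):
    x = rk4_step(x, u, h)
    t += h
t_shock_numeric = t
# Analytic breaking time: t* = -1 / min(u0'(x)) = 1/2 for u0(x) = -tanh(2x).
```

Past this time, the full method of the paper would merge the crossed characteristics into a shock and move it with the Rankine-Hugoniot jump condition; the sketch only locates the formation event.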
Abstract:
To evaluate the parameters of the two-parameter fracture model, i.e. the critical stress intensity factor and the critical crack tip opening displacement, for Mode I fracture of plain concrete in a given test configuration and geometry, considerable computational effort is necessary. A simple graphical method using normalized fracture parameters has been proposed for the three-point bend (3PB) notched specimen and the double-edge notched (DEN) specimen. A similar graphical method is proposed to compute the maximum load-carrying capacity of a specimen, using the critical fracture parameters, for both 3PB and DEN configurations.