979 results for Dimensional Accuracy
Abstract:
We apply the method of multiple scales (MMS) to a well-known model of regenerative cutting vibrations in the large delay regime. By "large" we mean that the delay is much larger than the timescale of typical cutting tool oscillations. The MMS up to second order, recently developed for such systems, is applied here to study the tool dynamics. The second order analysis is found to be much more accurate than the first order analysis. Numerical integration of the MMS slow flow is much faster than for the original equation, yet shows excellent accuracy: plotted solutions of moderate amplitude are visually near-indistinguishable. The advantages of the present analysis are that infinite dimensional dynamics is retained in the slow flow, whereas the more usual center manifold reduction gives a planar phase space; lower-dimensional dynamical features, such as Hopf bifurcations and families of periodic solutions, are also captured by the MMS; the strong sensitivity of the slow modulation dynamics to small changes in parameter values, peculiar to such systems with large delays, is seen clearly; and though certain parameters are treated as small (or, reciprocally, large), the analysis is not restricted to infinitesimal distances from the Hopf bifurcation.
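As a point of reference for the speed comparison above, direct simulation of a delay differential equation of this type must carry the full solution history over one delay interval. Below is a minimal Python sketch of such a direct integration, using a standard nondimensional form of the regenerative cutting model; the equation form and all parameter values are assumptions for illustration, not the paper's.

```python
import numpy as np

# Assumed nondimensional regenerative cutting model (the paper's exact
# equation and parameters may differ):
#   x''(t) + 2*zeta*x'(t) + x(t) = p * (x(t - tau) - x(t))
zeta, p, tau = 0.02, 0.05, 100.0   # hypothetical; tau >> O(1) oscillation period
dt = 0.01
nd = int(round(tau / dt))          # delay measured in time steps
n = 20 * nd

x, v = np.zeros(n), np.zeros(n)
x[:nd] = 1e-3                      # constant history function as initial data

for i in range(nd, n - 1):
    a = -2*zeta*v[i] - x[i] + p*(x[i - nd] - x[i])
    v[i + 1] = v[i] + dt*a         # semi-implicit Euler step
    x[i + 1] = x[i] + dt*v[i + 1]
```

Integrating the MMS slow flow instead advances only the slow modulation, so far fewer steps are needed over the same physical time, consistent with the speedup reported above.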
Abstract:
Details of an efficient optimal closed-loop guidance algorithm for a three-dimensional launch are presented, with simulation results. Two types of orbital injection, with either the true anomaly or the argument of perigee free at injection, are considered. Under the assumption of uniform gravity, the resulting steering-angle profile lies in a canted plane, which transforms the three-dimensional problem into an equivalent two-dimensional one. Effects of thrust are estimated recursively using a series expansion. Encke's method is used to predict the trajectory during powered flight and then to compute the changes due to actual gravity using two gravity-related vectors. Guidance parameters are evaluated using the linear differential correction method. Optimality of the algorithm is tested against a standard ground-based trajectory optimization package. The performance of the algorithm is tested for accuracy, robustness, and efficiency on a sun-synchronous mission involving guidance for a multistage vehicle that requires large pitch and yaw maneuvers. To demonstrate applicability of the algorithm to a range of missions, injection into a geostationary transfer orbit is also considered. The performance of the present algorithm compares favorably with that of existing methods.
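For readers unfamiliar with Encke's method mentioned above, the sketch below shows its core idea in Python: integrate only the deviation from an analytically known reference trajectory rather than the full state. This simplified version omits the classical f(q)-series that controls numerical cancellation, and all names and numbers are illustrative, not from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def kepler_accel(r):
    return -mu * r / np.linalg.norm(r)**3

def encke_rhs(t, y, ref_traj, perturb_accel):
    """Integrate only the deviation d = r_true - r_ref from a reference
    Keplerian trajectory (rectified when |d| grows large in practice)."""
    d, d_dot = y[:3], y[3:]
    r_ref = ref_traj(t)
    a_diff = kepler_accel(r_ref + d) - kepler_accel(r_ref)
    return np.concatenate([d_dot, a_diff + perturb_accel(t, r_ref + d)])

# Hypothetical usage: circular reference orbit plus a small constant thrust.
R = 6778.0
n_rate = np.sqrt(mu / R**3)
ref = lambda t: R * np.array([np.cos(n_rate*t), np.sin(n_rate*t), 0.0])
thrust = lambda t, r: np.array([0.0, 0.0, 1e-6])  # km/s^2, made-up value
sol = solve_ivp(encke_rhs, (0.0, 600.0), np.zeros(6),
                args=(ref, thrust), rtol=1e-9)
```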
Abstract:
Many previous studies on the estimation of mechanical properties of single-walled carbon nanotubes (SWCNTs) report that the modulus of SWCNTs depends on chirality, length, and diameter. Here, this dependence is described quantitatively in terms of high-accuracy curve-fit equations. These equations allow the modulus of long SWCNTs (lengths of about 100-120 nm) to be estimated once the value at prescribed short lengths (about 5-10 nm) is known, promising substantial savings in computational time and expense. Also, based on the observed length-dependent behavior of the SWCNT initial modulus, we predict that SWCNT mechanical properties such as Young's modulus, secant modulus, maximum tensile strength, failure strength, maximum tensile strain, and failure strain might also exhibit length dependence, in addition to chirality and diameter dependence.
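As an illustration of the kind of curve fit described (the paper's actual fit equations are not reproduced here), one plausible saturating form is E(L) = E_inf - a/L. The functional form and all data below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: SWCNT Young's modulus (TPa) computed at short lengths (nm).
L = np.array([5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
E = np.array([0.92, 0.94, 0.955, 0.965, 0.972, 0.977])

# Assumed saturating form E(L) = E_inf - a/L.
model = lambda L, E_inf, a: E_inf - a / L
(E_inf, a), _ = curve_fit(model, L, E, p0=(1.0, 0.5))

# Extrapolate to the long lengths that are expensive to simulate directly.
print(f"estimated modulus at L = 110 nm: {model(110.0, E_inf, a):.3f} TPa")
```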
Abstract:
This paper presents a new application of two-dimensional Principal Component Analysis (2DPCA) to the problem of online character recognition in the Tamil script. A novel set of features employing polynomial fits and quartiles, in combination with conventional features, is derived for each sample point of the Tamil character obtained after smoothing and resampling. These are stacked to form a matrix, from which a covariance matrix is constructed. A subset of the eigenvectors of the covariance matrix is used to obtain the features in the reduced subspace. Each character is modeled as a separate subspace, and a modified form of the Mahalanobis distance is derived to classify a given test character. Results indicate that the recognition accuracy of the 2DPCA scheme is approximately 3% higher than that of the conventional PCA technique.
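The 2DPCA step itself is compact enough to sketch. Following the standard formulation, the covariance matrix is built from the feature matrices directly, without vectorizing them; the shapes and data below are hypothetical, and the paper's modified Mahalanobis classifier is not reproduced.

```python
import numpy as np

def fit_2dpca(mats, k):
    """2DPCA: eigenvectors of the matrix covariance
    G = E[(A - Abar)^T (A - Abar)]; each sample stays a matrix,
    and features are Y = A @ X with X the top-k eigenvectors."""
    Abar = np.mean(mats, axis=0)
    G = sum((A - Abar).T @ (A - Abar) for A in mats) / len(mats)
    w, V = np.linalg.eigh(G)        # eigenvalues in ascending order
    return V[:, ::-1][:, :k]        # top-k eigenvectors

# Hypothetical usage with feature matrices of shape (n_points, n_features)
# built, as in the paper, from resampled pen-trajectory features:
rng = np.random.default_rng(0)
samples = [rng.standard_normal((60, 12)) for _ in range(50)]
X = fit_2dpca(samples, k=4)
features = [A @ X for A in samples]  # reduced (60 x 4) representations
```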
Abstract:
Three-dimensional effects are a primary source of discrepancy between the measured values of automotive muffler performance and those predicted by plane wave theory at higher frequencies. The essentially exact method of (truncated) eigenfunction expansions for simple expansion chambers involves very complicated algebra, and the numerical finite element method requires large computation time and core storage. A simple numerical method is presented in this paper. It makes use of compatibility conditions for acoustic pressure and particle velocity at a number of equally spaced points in the planes of the junctions (or area discontinuities) to generate the required number of algebraic equations for evaluating the relative amplitudes of the various modes (eigenfunctions), the total number of which is proportional to the area ratio. The method is demonstrated by evaluating the four-pole parameters of rigid-walled simple expansion chambers of rectangular as well as circular cross-section for a stationary medium. Computed values of transmission loss are compared with those from plane wave theory, in order to highlight the onset (cutting-on) of the various higher-order modes and their effect on the transmission loss of the muffler. They are also compared with predictions of the finite element method (FEM) and of the exact methods involving eigenfunction expansions, in order to demonstrate the accuracy of the simple method presented here.
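The plane wave baseline that the multimode results are compared against has a well-known closed form for a simple expansion chamber, TL = 10 log10[1 + (1/4)(m - 1/m)^2 sin^2(kL)], with m the expansion area ratio. A short script (chamber dimensions hypothetical):

```python
import numpy as np

c = 343.0    # speed of sound, m/s
L = 0.3      # chamber length, m (hypothetical)
m = 9.0      # area ratio (hypothetical)

f = np.linspace(10.0, 3000.0, 500)
k = 2 * np.pi * f / c
# Plane-wave transmission loss of a simple expansion chamber.
TL = 10 * np.log10(1 + 0.25 * (m - 1/m)**2 * np.sin(k * L)**2)
```

Above the first cut-on frequency of the higher-order modes this plane wave estimate deviates from measurement, which is exactly the regime the collocation method above targets.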
Abstract:
As an example of front propagation, we study the propagation of a three-dimensional nonlinear wavefront into a polytropic gas in a uniform state and at rest. The successive positions and geometry of the wavefront are obtained by solving the conservation form of the equations of a weakly nonlinear ray theory. The proposed set of equations forms a weakly hyperbolic system of seven conservation laws with an additional vector constraint, each of whose components is a divergence-free condition. This constraint is an involution for the system of conservation laws, and is termed a geometric solenoidal constraint. The analysis of a Cauchy problem for the linearized system shows that when this constraint is satisfied initially, the solution does not exhibit any Jordan mode. For the numerical simulation of the conservation laws we employ a high-resolution central scheme. Second-order accuracy is achieved by using MUSCL-type reconstructions and Runge-Kutta time discretizations. A constrained-transport-type technique is used to enforce the geometric solenoidal constraint. The results of several numerical experiments are presented, confirming the efficiency and robustness of the proposed numerical method and the control of the Jordan mode.
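The two numerical ingredients named, MUSCL-type reconstruction and Runge-Kutta time stepping, are illustrated below on a 1-D scalar conservation law (inviscid Burgers) rather than the paper's seven-law system; minmod limiting and a local Lax-Friedrichs flux stand in for the particular central scheme used.

```python
import numpy as np

def minmod(a, b):
    return np.where(a*b > 0, np.sign(a)*np.minimum(np.abs(a), np.abs(b)), 0.0)

def rhs(u, dx):
    """MUSCL reconstruction + local Lax-Friedrichs flux for
    u_t + (u^2/2)_x = 0 on a periodic grid."""
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
    uL = u + 0.5*du                  # left state at interface i+1/2
    uR = np.roll(u - 0.5*du, -1)     # right state at interface i+1/2
    f = lambda v: 0.5*v**2
    a = np.maximum(np.abs(uL), np.abs(uR))
    F = 0.5*(f(uL) + f(uR)) - 0.5*a*(uR - uL)
    return -(F - np.roll(F, 1)) / dx

N = 400
dx = 1.0 / N
u = np.sin(2*np.pi*np.arange(N)*dx)
dt = 0.4*dx
for _ in range(200):                 # second-order SSP Runge-Kutta
    u1 = u + dt*rhs(u, dx)
    u = 0.5*(u + u1 + dt*rhs(u1, dx))
```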
Abstract:
In this article, an extension of the total variation diminishing finite volume formulation of the lattice Boltzmann equation method on unstructured meshes is presented. A quadratic least squares procedure is used to estimate the first-order and second-order spatial gradients of the particle distribution functions, and the distribution functions are extrapolated quadratically to the virtual upwind node. Time integration is performed using the fourth-order Runge-Kutta procedure. A grid convergence study is performed to demonstrate the order of accuracy of the present scheme. The formulation is validated for the benchmark two-dimensional, laminar, unsteady flow past a single circular cylinder, and its behavior in low Mach number simulations is then investigated. Further validation is performed for flow past two circular cylinders arranged in tandem and side-by-side. Results of these simulations are compared extensively with previous numerical data.
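A quadratic least squares gradient estimate of the kind described can be sketched as follows: fit f(x0 + d) ≈ f0 + g·d + (1/2) d^T H d to neighbouring values and read off g and H. The 2-D version below, with a made-up test function, shows the idea.

```python
import numpy as np

def quadratic_ls_gradients(x0, f0, xs, fs):
    """Estimate gradient g and Hessian H at x0 from scattered neighbour
    values by a quadratic least-squares fit (2-D illustration)."""
    d = xs - x0                                   # (n, 2) offsets
    A = np.column_stack([d[:, 0], d[:, 1],
                         0.5*d[:, 0]**2, d[:, 0]*d[:, 1], 0.5*d[:, 1]**2])
    coef, *_ = np.linalg.lstsq(A, fs - f0, rcond=None)
    g = coef[:2]
    H = np.array([[coef[2], coef[3]], [coef[3], coef[4]]])
    return g, H

# Hypothetical check on f(x, y) = x^2 + 3xy:
rng = np.random.default_rng(1)
f = lambda p: p[..., 0]**2 + 3*p[..., 0]*p[..., 1]
x0 = np.array([0.2, -0.1])
xs = x0 + 0.05*rng.standard_normal((12, 2))
g, H = quadratic_ls_gradients(x0, f(x0), xs, f(xs))
print(g)   # expected [0.1, 0.6] for this quadratic test function
```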
Abstract:
We have developed an efficient, fully three-dimensional (3D) reconstruction algorithm for diffuse optical tomography (DOT). 3D DOT, a severely ill-posed problem, is tackled through a pseudodynamic (PD) approach, wherein an ordinary differential equation representing the evolution of the solution in pseudotime is integrated; this bypasses an explicit inversion of the associated, ill-conditioned system matrix. One of the most computationally expensive parts of the iterative DOT algorithm, the re-evaluation of the Jacobian in each iteration, is avoided by using the adjoint-Broyden update formula to provide low-rank updates to the Jacobian. In addition, wherever feasible, the algorithm is made more efficient by integrating along the quadratic path provided by the perturbation equation containing the Hessian. These algorithms are then tested through reconstructions using simulated and experimental data, and the PD results are verified against those from the popular Gauss-Newton scheme. The major findings of this work are as follows: (i) the PD reconstructions are comparatively artifact free, providing superior absorption coefficient maps in terms of quantitative accuracy and contrast recovery; (ii) the scaling of computation time with the dimension of the measurement set is much less steep with the Jacobian update formula in place than without it; and (iii) an increase in the data dimension, even though it renders the reconstruction problem less ill-conditioned and thus provides relatively artifact-free reconstructions, does not necessarily provide better recovery of contrast properties. For the latter, one should also take care to distribute the measurement points uniformly, avoiding regions close to the source, so that the relative strength of the derivatives for measurements away from the source does not become insignificant.
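The low-rank Jacobian update is the key cost saving. The sketch below uses the classical "good Broyden" formula, J ← J + (Δr - JΔx)Δx^T/(Δx^TΔx), to illustrate the idea on a toy residual; the paper's adjoint-Broyden variant differs in detail.

```python
import numpy as np

def broyden_update(J, dx, dr):
    """Rank-one secant update satisfying J_new @ dx = dr, used in place
    of recomputing the Jacobian at every iteration."""
    return J + np.outer(dr - J @ dx, dx) / (dx @ dx)

# Toy residual: solve x^2 + y = 1, x = y^2 (made-up example).
residual = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]**2])
x = np.array([0.8, 0.4])
J = np.array([[2*x[0], 1.0], [1.0, -2*x[1]]])  # exact Jacobian once, at start
r = residual(x)
for _ in range(20):
    x_new = x + np.linalg.solve(J, -r)         # Newton-type step
    r_new = residual(x_new)
    J = broyden_update(J, x_new - x, r_new - r)
    x, r = x_new, r_new
print(x, np.linalg.norm(r))
```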
Abstract:
In this work, a Fortran code for three-dimensional linear elastostatics using constant boundary elements is first developed, based on a MATLAB code developed earlier by the author. Next, the code is parallelized using BLACS, MPI, and ScaLAPACK. The parallelized code is then used to demonstrate the usefulness of the Boundary Element Method (BEM) for the real-time computational simulation of biological organs, focusing on the speed and accuracy offered by BEM. A computer cluster is used in this part of the work. The commercial software package ANSYS is used to obtain the 'exact' solution against which the BEM solution is compared; analytical solutions, wherever available, are also used to establish the accuracy of BEM. A pig liver is the biological organ considered. Next, a Graphics Processing Unit (GPU) is used as the parallel hardware in place of the computer cluster. Results indicate that BEM is an attractive choice for the simulation of biological organs. Although the use of BEM for the simulation of biological organs is not new, the results presented here are not found elsewhere in the literature. A serial MATLAB code, and both serial and parallel versions of the Fortran code, which can solve three-dimensional (3D) linear elastostatic problems using constant boundary elements, are provided as freely downloadable supplementary files.
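Constant-element BEM leads to dense influence matrices, which is what makes ScaLAPACK and GPU parallelization pay off. A rough Python sketch of the assembly structure follows, using the 3-D Laplace kernel 1/(4πr) as a stand-in for the elastostatic Kelvin solution and one-point quadrature; both are simplifications, and the self-term treatment here is deliberately crude.

```python
import numpy as np

def assemble_G(centroids, areas):
    """Single-layer influence matrix for constant elements with one-point
    quadrature; diagonal regularised with a characteristic element radius
    (real codes integrate the singular self-term properly)."""
    d = centroids[:, None, :] - centroids[None, :, :]
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.sqrt(areas / np.pi))
    return areas[None, :] / (4*np.pi*r)

# Hypothetical usage: random collocation points on a unit sphere.
rng = np.random.default_rng(3)
c = rng.standard_normal((200, 3))
c /= np.linalg.norm(c, axis=1, keepdims=True)
A = np.full(200, 4*np.pi/200)            # rough equal-area patches
G = assemble_G(c, A)
q = np.linalg.solve(G, np.ones(200))     # dense solve: O(n^2) memory, O(n^3) work
```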
Abstract:
A material model whose framework is parallel spring bundles oriented in 3-D space is proposed. Based on a discussion of discrete schemes and the optimum discretization of solid angles, a 3-D network cell consisting of one-dimensional components is developed, with its geometrical and physical parameters calibrated. It is proved that the 3-D network model can exactly simulate materials with arbitrary Poisson's ratio from 0 to 1/2, overcoming the limitation of previous models in the literature, which are suitable only for materials with Poisson's ratio from 0 to 1/3. A simplified model is also proposed to achieve high accuracy at low computational cost. Examples demonstrate that the 3-D network model is particularly well suited to the simulation of short-fiber reinforced composites.
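A generic near-uniform discretization of the solid angle, of the kind such network cells require, can be generated with a golden-angle (Fibonacci) spiral; whether this matches the paper's optimum scheme is not claimed. The isotropy check at the end verifies that the mean outer product of the directions approaches I/3, a prerequisite for a spring-bundle cell to reproduce isotropic elasticity.

```python
import numpy as np

def fibonacci_directions(n):
    """n near-uniformly distributed unit vectors on the sphere."""
    i = np.arange(n) + 0.5
    phi = i * np.pi * (3 - np.sqrt(5))   # golden-angle increments
    z = 1 - 2*i/n
    rho = np.sqrt(1 - z**2)
    return np.column_stack([rho*np.cos(phi), rho*np.sin(phi), z])

d = fibonacci_directions(200)
# Isotropy check: mean(n_i n_i^T) should approach I/3.
M = np.einsum('ij,ik->jk', d, d) / len(d)
print(np.round(M, 3))
```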
Abstract:
The relationships between indentation responses and the Young's modulus of an indented material were investigated by employing dimensional analysis and the finite element method. Three representative tip bluntness geometries were introduced to describe the shape of a real Berkovich indenter. It was demonstrated that for each of these bluntness geometries, a set of approximate indentation relationships can be derived correlating the ratio of nominal hardness to reduced Young's modulus, H_n/E_r, with the ratio of elastic work to total work, W_e/W. Consequently, a method for Young's modulus measurement, combined with an estimate of its accuracy, was established on the basis of these relationships. The effectiveness of this approach was verified by performing nanoindentation tests on S45C carbon steel and 6061 aluminum alloy, and microindentation tests on aluminum single crystal, GCr15 bearing steel, and fused silica.
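Once H_n/E_r has been extracted from the fitted H_n/E_r versus W_e/W relationship (the fit itself is not reproduced here), the specimen modulus follows from the standard reduced-modulus relation 1/E_r = (1 - ν²)/E + (1 - ν_i²)/E_i. A minimal sketch with the standard diamond-indenter constants and a hypothetical specimen:

```python
# Standard diamond indenter properties (Oliver-Pharr convention).
E_i, nu_i = 1141.0, 0.07   # GPa, dimensionless

def youngs_modulus(E_r, nu):
    """Solve 1/E_r = (1 - nu^2)/E + (1 - nu_i^2)/E_i for the specimen E (GPa)."""
    return (1 - nu**2) / (1/E_r - (1 - nu_i**2)/E_i)

# Hypothetical example: reduced modulus inferred for a steel-like specimen.
print(youngs_modulus(E_r=190.0, nu=0.3))
```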
Abstract:
This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) A novel linear-cost implicit solver based on the use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit (ADI) approach; 2) A fast explicit solver; 3) Dispersionless spectral spatial discretizations; and 4) A domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy; previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented which places the observed quasi-unconditional stability of the methods of orders two through six on a solid theoretical basis. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary layer effects at a Reynolds number of one million and a Mach number of 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall of the order of one hundred-thousandth of the length of the domain) was successfully tackled in a relatively short, approximately thirty-hour, single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. As demonstrated via a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations further exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
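To make the linear-cost-per-step claim concrete, the classical Peaceman-Rachford ADI scheme for the 2-D heat equation is sketched below; each half step solves only tridiagonal systems. This is a second-order illustration of the ADI mechanism, not the thesis's quasi-unconditionally stable BDF-based construction.

```python
import numpy as np
from scipy.linalg import solve_banded

# u_t = u_xx + u_yy on the unit square, zero Dirichlet boundary conditions.
n, dt, steps = 64, 1e-3, 100
h = 1.0 / (n + 1)
r = dt / (2 * h**2)

ab = np.zeros((3, n))        # banded form of (I - r*D), D the 1-D Laplacian
ab[0, 1:] = -r               # superdiagonal
ab[1, :] = 1 + 2*r           # diagonal
ab[2, :-1] = -r              # subdiagonal

def lap1d(u, axis):
    """Second difference along one axis, zero Dirichlet walls."""
    pad = [(1, 1) if ax == axis else (0, 0) for ax in range(u.ndim)]
    up = np.pad(u, pad)
    idx = lambda s: tuple(s if ax == axis else slice(None) for ax in range(u.ndim))
    return up[idx(slice(2, None))] - 2*u + up[idx(slice(0, -2))]

u = np.zeros((n, n))
u[n//4:3*n//4, n//4:3*n//4] = 1.0          # initial hot square

for _ in range(steps):
    rhs = u + r * lap1d(u, axis=1)          # explicit in y
    u = solve_banded((1, 1), ab, rhs)       # implicit tridiagonal solves in x
    rhs = u + r * lap1d(u, axis=0)          # explicit in x
    u = solve_banded((1, 1), ab, rhs.T).T   # implicit tridiagonal solves in y

print(f"max temperature after {steps} steps: {u.max():.4f}")
```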
Abstract:
Mean velocity profiles were measured in the 5" x 60" wind channel of the turbulence laboratory at GALCIT, using a hot-wire anemometer. The repeatability of the results was established, and the accuracy of the instrumentation estimated. Scatter of the experimental results is little, if any, beyond this limit, although some effects might be expected to arise from variations in atmospheric humidity, no account of this factor having been taken in the present work. Slight unsteadiness in flow conditions will also be responsible for some scatter.
Irregular behavior of a hot-wire in close proximity to a solid boundary at low speeds was observed, as has already been found by others.
It was checked that Kármán's logarithmic law holds reasonably well over the main part of a fully developed turbulent flow, the equation u/u_τ = 6.0 + 6.25 log10(y u_τ/ν) being obtained; as has previously been the case, the experimental points do not quite form one straight line in the region where viscosity effects are small. The values of the constants in this law giving the best over-all agreement were determined and compared with those obtained by others.
The range of Reynolds numbers used (based on half-width of channel) was from 20,000 to 60,000.
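Recovering the quoted constants from a measured profile amounts to a straight-line fit of u/u_τ against log10(y u_τ/ν); a minimal sketch with synthetic data standing in for the channel measurements:

```python
import numpy as np

# Hypothetical log-region stations and velocities consistent with the
# reported law u+ = 6.0 + 6.25 log10(y+), plus a little scatter.
yplus = np.logspace(1.7, 3.0, 12)
uplus = 6.0 + 6.25*np.log10(yplus) \
        + 0.05*np.random.default_rng(2).standard_normal(12)

slope, intercept = np.polyfit(np.log10(yplus), uplus, 1)
print(f"u+ = {intercept:.2f} + {slope:.2f} log10(y+)")
```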
Abstract:
Based on its mRNA secondary structure template and the Chou-Fasman principles of protein secondary structure prediction, one coil (residues 289-325) and two α-helices (α1: 368-373, α2: 381-388) were predicted in the C-terminal region of the p53 protein. The result was confirmed by four other methods of protein secondary structure prediction based on multiple sequence alignment (accuracy = 73.20%). Combined with the crystal structure of the 31-amino-acid oligomerization domain, the three-dimensional conformation of the C-terminal 108 residues of p53 was built using an SGI INDIGO2 computer. This structure further elucidates, at the three-dimensional level, the relationships among the biological function domains of the p53 C-terminus.
Abstract:
Image-based (i.e., photo/videogrammetry) and time-of-flight-based (i.e., laser scanning) technologies are typically used to collect spatial data of infrastructure. To help the architecture, engineering, and construction (AEC) industries make cost-effective choices between these two technologies for their settings, this paper attempts to measure the accuracy, quality, time efficiency, and cost of applying image-based and time-of-flight-based technologies to as-built 3D reconstruction of infrastructure. A novel comparison method is proposed, and preliminary experiments are conducted. The results reveal that if the accuracy and quality level desired for a particular application is not high (i.e., error < 10 cm and completeness rate > 80%), image-based technologies constitute a good alternative to time-of-flight-based technologies and significantly reduce the time and cost needed for collecting the data on site.
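One plausible way to compute such error and completeness figures, not necessarily the paper's exact definitions, is via nearest-neighbour distances between the reconstructed cloud and a reference scan:

```python
import numpy as np
from scipy.spatial import cKDTree

def accuracy_and_completeness(recon, reference, tol=0.10):
    """Compare a photogrammetric reconstruction with a laser-scan reference,
    both as (n, 3) arrays in metres. 'error' is the mean nearest-neighbour
    distance from reconstructed points to the reference; 'completeness' is
    the fraction of reference points with a reconstructed point within tol."""
    d_err, _ = cKDTree(reference).query(recon)
    d_cov, _ = cKDTree(recon).query(reference)
    return d_err.mean(), float(np.mean(d_cov < tol))

# Hypothetical usage with random stand-in clouds:
rng = np.random.default_rng(4)
ref = rng.uniform(0, 10, (5000, 3))
rec = ref[: 4500] + rng.normal(0, 0.03, (4500, 3))  # partial, noisy reconstruction
error, completeness = accuracy_and_completeness(rec, ref)
print(f"error = {error:.3f} m, completeness = {completeness:.1%}")
```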