921 results for PARAMETER-PRESERVING ANTIFERROMAGNET
Abstract:
During the motion of one-dimensional flexible objects such as ropes, chains, etc., the assumption of constant length is realistic. Moreover, their motion appears to naturally minimize some abstract distance measure, wherein the disturbance at one end gradually dies down along the curve defining the object. This paper presents purely kinematic strategies for deriving length-preserving transformations of flexible objects that minimize appropriate ‘motion’. The strategies involve sequential and overall optimization of the motion derived using variational calculus. Numerical simulations are performed for the motion of a planar curve, and results show stable converging behavior for single-step infinitesimal and finite perturbations as well as multi-step perturbations. Additionally, our generalized approach provides different intuitive motions for various problem-specific measures of motion, one of which is shown to converge to the conventional tractrix-based solution. Simulation results for arbitrary shapes and excitations are also included.
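The follow-the-leader character of the tractrix-based solution mentioned in this abstract can be sketched with a simple discrete length-preserving update. This is a minimal illustration, not the paper's variational scheme; the function name and step logic are assumptions:

```python
import numpy as np

def tractrix_step(points, head_target):
    """Follow-the-leader sketch: move the head node to head_target, then
    pull each subsequent node toward its already-updated predecessor so
    that every segment keeps its original length."""
    pts = points.astype(float).copy()
    lengths = np.linalg.norm(np.diff(points, axis=0), axis=1)
    pts[0] = head_target
    for i in range(1, len(pts)):
        d = pts[i] - pts[i - 1]           # old node relative to new predecessor
        pts[i] = pts[i - 1] + d * (lengths[i - 1] / np.linalg.norm(d))
    return pts
```

Applied repeatedly with small head displacements, the disturbance at the head dies down along the curve, mirroring the behavior described above.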
Abstract:
Welding parameters like welding speed, rotation speed, plunge depth, shoulder diameter, etc., influence the weld zone properties, the microstructure of friction stir welds, and the forming behavior of welded sheets in a synergistic fashion. The main aims of the present work are to (1) analyze the effect of welding speed, rotation speed, plunge depth, and shoulder diameter on the formation of internal defects during friction stir welding (FSW), (2) study the effect on axial force and torque during welding, (3) optimize the welding parameters for producing internal defect-free welds, and (4) propose and validate a simple criterion to identify defect-free weld formation. The base material used for FSW throughout the work is Al 6061-T6 with a thickness of 2.1 mm. Only butt welding of sheets is considered in the present work. It is observed from the present analysis that higher welding speed, higher rotation speed, and higher plunge depth are preferred for producing a weld without internal defects. All the shoulder diameters used for FSW in the present work produced defect-free welds. The axial force and torque are not constant, and a large variation is seen with respect to the FSW parameters that produced defective welds. In the case of defect-free weld formation, the axial force and torque are relatively constant. A simple criterion, (∂τ/∂p)_defective > (∂τ/∂p)_defect-free and (∂F/∂p)_defective > (∂F/∂p)_defect-free, is proposed from this observation for identifying the onset of defect-free weld formation. Here F is the axial force, τ is the torque, and p is the welding speed, tool rotation speed, or plunge depth. The same criterion is validated with respect to an Al 5xxx base material. Even in this case, the axial force and torque remained constant while producing defect-free welds.
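The slope criterion above can be evaluated numerically from a parameter sweep. The sketch below fits least-squares sensitivities of torque and axial force with respect to a welding parameter and compares them against slopes from known-defective welds; the function names and thresholds are illustrative, not from the paper:

```python
import numpy as np

def slope(p, y):
    """Least-squares slope dy/dp over a parameter sweep
    (y is torque or axial force, p a welding parameter)."""
    return np.polyfit(p, y, 1)[0]

def defect_free_window(p, torque, force, tau_slope_defective, f_slope_defective):
    """Criterion sketch: a parameter window is flagged defect-free when
    both the torque and axial-force sensitivities are smaller in
    magnitude than the corresponding slopes on defective welds."""
    return (abs(slope(p, torque)) < abs(tau_slope_defective) and
            abs(slope(p, force)) < abs(f_slope_defective))
```

Relatively constant force and torque over the sweep yield small slopes, matching the observation that defect-free welds show little variation.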
Abstract:
Purpose: To develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. Methods: The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. It is deployed here within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. Results: The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality and are superior to the L-curve and GCV-based methods. The proposed method's computational complexity is at least five times lower than that of the MRM-based method, making it an optimal technique. Conclusions: The LSQR-type method overcomes the inherent limitation of the computationally expensive MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making this method more suitable for real-time deployment. (C) 2013 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4792459]
Abstract:
A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least-squares QR (LSQR) decomposition, which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution, enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood-vessel phantom, where the initial pressure is exactly known for quantitative comparison. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
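A minimal sketch of an LSQR-based Tikhonov solve with an automated regularization-parameter choice, assuming SciPy. Note that the selection rule here is the discrepancy principle, used only as a stand-in for the simplex/MRM machinery described in these two abstracts, and `noise_norm` is an assumed input:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.sparse.linalg import lsqr

def solve_tikhonov(A, b, lam):
    # lsqr's `damp` argument adds lam*||x|| to the least-squares problem,
    # i.e. Tikhonov regularization without forming A^T A explicitly.
    return lsqr(A, b, damp=lam)[0]

def optimal_lambda(A, b, noise_norm):
    """Automated parameter choice sketch (discrepancy principle):
    pick lambda so the residual norm matches an assumed noise norm."""
    def gap(log_lam):
        x = solve_tikhonov(A, b, 10.0 ** log_lam)
        return (np.linalg.norm(A @ x - b) - noise_norm) ** 2
    res = minimize_scalar(gap, bounds=(-8.0, 2.0), method="bounded")
    return 10.0 ** res.x
```

Because LSQR never forms the normal equations, each trial solve stays cheap, which is the property both abstracts exploit for computational efficiency.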
Abstract:
There is a drop in the flutter boundary of an aeroelastic system placed in a transonic flow due to compressibility effects; this is known as the transonic dip. Viscous effects can shift the location of the shock, and depending on the shock strength the boundary layer may separate, leading to changes in the flutter speed. An unsteady Euler flow solver coupled with the structural dynamic equations is used to understand the effect of the shock on the transonic dip. The effects of various system parameters, such as the mass ratio, the location of the center of mass, the position of the elastic axis, and the ratio of uncoupled natural frequencies in heave and pitch, are also studied. Steady turbulent flow results are presented to demonstrate the effect of viscosity on the location and strength of the shock.
Abstract:
Growing consumer expectations continue to fuel advancements in vehicle ride comfort analysis, including the development of a comprehensive tool capable of aiding the understanding of ride comfort. To date, most work on biodynamic responses of the human body in the context of ride comfort concentrates on the driver or a designated occupant, and therefore leaves scope for further work on ride comfort analysis covering a larger number of occupants with detailed modeling of their body segments. In the present study, governing equations of a 13-DOF (degrees-of-freedom) lumped parameter model (LPM) of a full car with seats (7-DOF without seats) and a 7-DOF occupant model, a linear version of an earlier non-linear occupant model, are presented. One or more occupant models can be coupled with the vehicle model, resulting in a maximum 48-DOF LPM for a car with five occupants. These multi-occupant models can be formulated in a modular manner and solved efficiently using MATLAB/SIMULINK for a given transient road input. The vehicle model and the occupant model are independently verified by favorably comparing computed dynamic responses with published data. A number of cases with different dispositions of occupants in a small car are analyzed using the current modular approach, thereby underscoring its potential for efficient ride quality assessment and design of suspension systems.
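The modular assembly idea can be illustrated by stacking subsystem matrices into a block structure. The sketch below uses placeholder identity matrices and made-up sizes; the actual mass, damping, and stiffness matrices come from the governing equations in the paper:

```python
import numpy as np
from scipy.linalg import block_diag

# Modular assembly sketch: a 7-DOF car model plus one 7-DOF model per
# occupant. Seat coupling would add off-diagonal stiffness/damping
# entries between the car block and each occupant block.
M_car = np.eye(7)                       # placeholder car mass matrix
M_occ = np.eye(7)                       # placeholder occupant mass matrix
M = block_diag(M_car, M_occ, M_occ)     # car + two occupants: 21 DOF
```

With five occupants and seat DOFs included, the same pattern grows to the 48-DOF model described in the abstract.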
Abstract:
The objective of the current study is to evaluate the fidelity of the load cell reading during impact testing in a drop-weight impactor using lumped parameter modeling. For the most common configuration of a moving impactor-load cell system, in which the dynamic load is transferred from the impactor head to the load cell, a quantitative assessment is made of the possible discrepancy that can result in the load cell response. A 3-DOF (degrees-of-freedom) LPM (lumped parameter model) is considered to represent a given impact testing set-up. In this model, a test specimen in the form of a steel hat section similar to front rails of cars is represented by a nonlinear spring, while the load cell is assumed to behave in a linear manner due to its high stiffness. Assuming a given load-displacement response obtained in an actual test as the true behavior of the specimen, the numerical solution of the governing differential equations following an implicit time integration scheme is shown to yield an excellent reproduction of the mechanical behavior of the specimen, thereby confirming the accuracy of the numerical approach. The spring representing the load cell, however, predicts a response that qualitatively matches the assumed load-displacement response of the test specimen with a perceptibly lower magnitude of load.
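A reduced sketch of such a lumped parameter impact model, using a 2-DOF stand-in for the paper's 3-DOF system, entirely made-up parameter values, and an off-the-shelf implicit (Radau) integrator in place of the paper's implicit scheme:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 2-DOF sketch: impactor mass m1 compresses a nonlinear
# specimen spring; a stiff linear spring k_c stands in for the load cell.
m1, m2 = 50.0, 5.0          # impactor and load-cell masses [kg] (made up)
k_c = 1.0e7                 # load-cell stiffness [N/m] (made up)

def specimen_force(x):
    # hypothetical hardening specimen law, not the paper's measured curve
    return 2.0e5 * x + 5.0e7 * x**3

def rhs(t, y):
    x1, v1, x2, v2 = y
    f_spec = specimen_force(x1 - x2)   # force between the two masses
    f_cell = k_c * x2                  # load registered by the cell
    return [v1, -f_spec / m1, v2, (f_spec - f_cell) / m2]

# impactor arrives at 4 m/s; integrate implicitly over 10 ms
sol = solve_ivp(rhs, (0.0, 0.01), [0.0, 4.0, 0.0, 0.0],
                method="Radau", max_step=1e-4)
cell_load = k_c * sol.y[2]             # load-cell reading over time
```

Comparing `cell_load` with `specimen_force(sol.y[0] - sol.y[2])` reproduces, in miniature, the discrepancy the study quantifies.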
Abstract:
Elasticity in cloud systems provides the flexibility to acquire and relinquish computing resources on demand. However, in current virtualized systems resource allocation is mostly static. Resources are allocated during VM instantiation, and any change in workload leading to a significant increase or decrease in resources is handled by VM migration. Hence, cloud users tend to characterize their workloads at a coarse-grained level, which potentially leads to under-utilized VM resources or an underperforming application. A more flexible and adaptive resource allocation mechanism would benefit variable workloads, such as those characterized by web servers. In this paper, we present an elastic resources framework for the IaaS cloud layer that addresses this need. The framework provides an application workload forecasting engine that predicts the expected demand at run-time, which is input to the resource manager to modulate resource allocation based on the predicted demand. Depending on the prediction errors, resources can be over-allocated or under-allocated as compared to the actual demand made by the application. Over-allocation leads to unused resources, and under-allocation could cause underperformance. To strike a good trade-off between over-allocation and underperformance we derive an excess cost model. In this model, excess resources allocated are captured as an over-allocation cost, and under-allocation is captured as a penalty cost for violating the application service level agreement (SLA). The confidence interval of the predicted workload is used to minimize this excess cost with minimal effect on SLA violations. An example case study for an academic institute web server workload is presented. Using the confidence interval to minimize excess cost, we achieve a significant reduction in resource allocation requirements while restricting application SLA violations to below 2-3%.
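The over-allocation/penalty trade-off described above can be written down directly. This sketch assumes unit costs `c_over` and `c_sla`; these names and the linear cost form are illustrative, not the paper's exact model:

```python
def excess_cost(predicted, actual, c_over=1.0, c_sla=10.0):
    """Excess-cost sketch: over-allocation wastes resources at unit cost
    c_over; under-allocation incurs an SLA-penalty cost c_sla per unit of
    unmet demand. Allocating at the upper confidence bound of the
    forecast is one way to trade the two off."""
    over = sum(max(p - a, 0.0) for p, a in zip(predicted, actual))
    under = sum(max(a - p, 0.0) for p, a in zip(predicted, actual))
    return c_over * over + c_sla * under
```

With an SLA penalty much larger than the over-allocation cost, minimizing this total naturally pushes the allocation above the point forecast, which is why the confidence interval of the prediction is the right tuning knob.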
Abstract:
We investigate the isentropic index along the saturated vapor line as a correlating parameter with quantities in both the saturated liquid phase and the saturated vapor phase. The relation is established via the temperatures T_hgmax and T*, at which the saturated vapor enthalpy and the product of saturation temperature and saturated liquid density attain their maxima, respectively. We find that the saturated vapor isentropic index is correlated with these temperatures, and also with the saturated liquid Grüneisen parameters at T_hgmax and T*.
Abstract:
Finite volume methods traditionally employ dimension-by-dimension extension of the one-dimensional reconstruction and averaging procedures to achieve spatial discretization of the governing partial differential equations on a structured Cartesian mesh in multiple dimensions. This simple approach based on tensor product stencils introduces an undesirable grid orientation dependence in the computed solution. The resulting anisotropic errors lead to a disparity in the calculations that is most prominent between directions parallel and diagonal to the grid lines. In this work we develop isotropic finite volume discretization schemes which minimize such grid orientation effects in multidimensional calculations by eliminating the directional bias in the lowest order term in the truncation error. Explicit isotropic expressions that relate the cell face averaged line and surface integrals of a function and its derivatives to the given cell area and volume averages are derived in two and three dimensions, respectively. It is found that a family of isotropic approximations with a free parameter can be derived by combining isotropic schemes based on next-nearest and next-next-nearest neighbors in three dimensions. Use of these isotropic expressions alone in a standard finite volume framework, however, is found to be insufficient in enforcing rotational invariance when the flux vector is nonlinear and/or spatially non-uniform. The rotationally invariant terms which lead to a loss of isotropy in such cases are explicitly identified and recast in a differential form. Various forms of flux correction terms which allow for a full recovery of rotational invariance in the lowest order truncation error terms, while preserving the formal order of accuracy and discrete conservation of the original finite volume method, are developed. Numerical tests in two and three dimensions attest to the superior directional attributes of the proposed isotropic finite volume method.
Prominent anisotropic errors, such as spurious asymmetric distortions on a circular reaction-diffusion wave that feature in the conventional finite volume implementation are effectively suppressed through isotropic finite volume discretization. Furthermore, for a given spatial resolution, a striking improvement in the prediction of kinetic energy decay rate corresponding to a general two-dimensional incompressible flow field is observed with the use of an isotropic finite volume method instead of the conventional discretization. (C) 2014 Elsevier Inc. All rights reserved.
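As a concrete example of eliminating directional bias in the leading truncation-error term, the classical isotropic 9-point Laplacian (a standard textbook stencil, not one of the specific schemes derived in the paper) mixes in diagonal neighbors so the leading error term becomes rotationally invariant:

```python
import numpy as np

def laplacian_9pt(u, h):
    """Isotropic 9-point Laplacian on a uniform grid of spacing h:
    adding the diagonal neighbours removes the directional bias of the
    standard 5-point scheme in the leading truncation-error term."""
    edge = u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
    diag = u[:-2, :-2] + u[:-2, 2:] + u[2:, :-2] + u[2:, 2:]
    return (4.0 * edge + diag - 20.0 * u[1:-1, 1:-1]) / (6.0 * h * h)
```

On a circular reaction-diffusion wave, replacing a 5-point diffusion operator with this stencil suppresses exactly the kind of grid-aligned distortion the abstract describes.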
Abstract:
Two Chrastil-type expressions have been developed to model the solubility of supercritical fluids/gases in liquids. The proposed three-parameter expressions correlate the solubility as a function of temperature, pressure, and density. The equations can also be used to check the self-consistency of experimental data on liquid phase compositions for supercritical fluid-liquid equilibria. Fifty-three different binary systems (carbon dioxide + liquid) with around 2700 data points, encompassing a wide range of compounds such as esters, alcohols, carboxylic acids, and ionic liquids, were successfully modeled over a wide range of temperatures and pressures. Besides the test for self-consistency, based on data at one temperature the model can be used to predict the solubility of supercritical fluids in liquids at other temperatures. (C) 2014 Elsevier B.V. All rights reserved.
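A Chrastil-type correlation of the form ln S = k ln(ρ) + a/T + b is linear in its three parameters and straightforward to fit. The sketch below fits synthetic data generated from assumed parameter values; the functional form is the generic Chrastil density-temperature form, not necessarily either of the paper's two expressions:

```python
import numpy as np
from scipy.optimize import curve_fit

def chrastil_log(X, k, a, b):
    """Chrastil-type form ln S = k*ln(rho) + a/T + b with three
    parameters (k, a, b); X stacks density rho and temperature T."""
    rho, T = X
    return k * np.log(rho) + a / T + b

# synthetic isotherm data from assumed "true" parameters, then re-fitted
rho = np.tile(np.linspace(200.0, 900.0, 10), 3)      # kg/m^3
T = np.repeat(np.array([300.0, 320.0, 340.0]), 10)   # K
true = (1.8, -2500.0, 4.0)
lnS = chrastil_log((rho, T), *true)
fit, _ = curve_fit(chrastil_log, (rho, T), lnS, p0=(1.0, -1000.0, 0.0))
```

Because temperature enters only through a/T, a fit on one isotherm can be extrapolated to other temperatures, which is the predictive use the abstract mentions.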
Abstract:
A finite difference method for a time-dependent singularly perturbed convection-diffusion-reaction problem involving two small parameters in one space dimension is considered. We use the classical implicit Euler method for time discretization and an upwind scheme on the Shishkin-Bakhvalov mesh for spatial discretization. The method is analysed for convergence and is shown to be uniform with respect to both perturbation parameters. The use of the Shishkin-Bakhvalov mesh gives first-order convergence, unlike the Shishkin mesh, where convergence deteriorates due to the presence of a logarithmic factor. Numerical results are presented to validate the theoretical estimates obtained.
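For reference, the simpler piecewise-uniform Shishkin mesh that the abstract compares against can be generated in a few lines; the Bakhvalov-graded variant replaces the coarse/fine split with a smoothly graded generating function. Layer location and transition constant below are illustrative choices:

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] for a boundary layer at
    x = 1: N/2 coarse cells on [0, 1-tau] and N/2 fine cells on
    [1-tau, 1], with transition point tau = min(1/2, sigma*eps*ln N)."""
    tau = min(0.5, sigma * eps * np.log(N))
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)
    return np.concatenate([coarse, fine[1:]])
```

The ln N in the transition point is the source of the logarithmic factor in the Shishkin-mesh error bound that the Shishkin-Bakhvalov mesh avoids.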
Bayesian parameter identification in dynamic state space models using modified measurement equations
Abstract:
When Markov chain Monte Carlo (MCMC) samplers are used in problems of system parameter identification, one faces computational difficulties in dealing with large amounts of measurement data and (or) low levels of measurement noise. Such exigencies are likely to occur in problems of parameter identification in dynamical systems, where the amount of vibratory measurement data and the number of parameters to be identified can be large. In such cases, the posterior probability density function of the system parameters tends to have regions of narrow support, and a finite-length MCMC chain is unlikely to cover the pertinent regions. The present study proposes strategies based on modification of the measurement equations and subsequent corrections to alleviate this difficulty. These involve artificial enhancement of the measurement noise, assimilation of transformed packets of measurements, and a global iteration strategy to improve the choice of prior models. Illustrative examples cover laboratory studies on a time-variant dynamical system and a bending-torsion coupled, geometrically non-linear building frame under earthquake support motions. (C) 2015 Elsevier Ltd. All rights reserved.
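The basic random-walk Metropolis sampler underlying such studies, applied to a toy one-parameter identification problem with an artificially inflated noise level in the likelihood. The inflation factor, toy model, and all numbers are illustrative, not the paper's measurement-equation modification:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis(logpost, x0, steps=5000, scale=0.1):
    """Random-walk Metropolis sampler, the basic MCMC building block."""
    x, lp = x0, logpost(x0)
    chain = np.empty(steps)
    for i in range(steps):
        xp = x + scale * rng.normal()
        lpp = logpost(xp)
        if np.log(rng.uniform()) < lpp - lp:
            x, lp = xp, lpp                 # accept the proposal
        chain[i] = x
    return chain

# Toy problem: infer a stiffness-like parameter k from noisy data.
# Using sigma_eff larger than the true noise (0.2 vs 0.05) widens the
# posterior support so a finite chain can actually explore it.
data = 2.0 + 0.05 * rng.normal(size=50)

def logpost(k, sigma_eff=0.2):
    return -0.5 * np.sum((data - k) ** 2) / sigma_eff ** 2

chain = metropolis(logpost, 1.0)
```

In the paper's setting the widened posterior is then corrected in subsequent steps; here the sketch only shows why a sharply peaked posterior is hard for a finite chain.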
Abstract:
Quantifying the distributional behavior of extreme events is crucial in hydrologic design. Intensity-Duration-Frequency (IDF) relationships are used extensively in engineering, especially in urban hydrology, to obtain the return level of an extreme rainfall event for a specified return period and duration. Major sources of uncertainty in IDF relationships are the insufficient quantity and quality of data, leading to parameter uncertainty in the distribution fitted to the data, and uncertainty resulting from the use of multiple GCMs. It is important to study these uncertainties and propagate them to the future for accurate assessment of future return levels. The objective of this study is to quantify the uncertainties arising from the parameters of the distribution fitted to data and from the multiple GCM models using a Bayesian approach. The posterior distribution of the parameters is obtained from Bayes' rule, and the parameters are transformed to obtain return levels for a specified return period. The Markov chain Monte Carlo (MCMC) method with the Metropolis-Hastings algorithm is used to obtain the posterior distribution of the parameters. Twenty-six CMIP5 GCMs along with four RCP scenarios are considered for studying the effects of climate change and obtaining projected IDF relationships for the case study of Bangalore city in India. GCM uncertainty due to the use of multiple GCMs is treated using the Reliability Ensemble Averaging (REA) technique along with the parameter uncertainty. Scale invariance theory is employed for obtaining short-duration return levels from daily data. It is observed that the uncertainty in short-duration rainfall return levels is high compared to longer durations. Further, it is observed that the parameter uncertainty is large compared to the model uncertainty. (C) 2015 Elsevier Ltd. All rights reserved.
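Once posterior samples of the distribution parameters are available, each sample is transformed to a return level. For the Gumbel distribution (a common choice for annual-maximum rainfall; the abstract does not name the fitted distribution) the transformation is closed-form:

```python
import math

def gumbel_return_level(mu, beta, T):
    """Return level for return period T (years) under a Gumbel fit with
    location mu and scale beta: the level exceeded on average once every
    T years, z_T = mu - beta * ln(-ln(1 - 1/T))."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))
```

Applying this to every MCMC parameter sample yields a posterior distribution of return levels, which is exactly how parameter uncertainty is propagated into the IDF curves.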
Abstract:
For a multilayered specimen, the back-scattered signal in frequency-domain optical coherence tomography (FDOCT) is expressible as a sum of cosines, each corresponding to a change of refractive index in the specimen. Each of the cosines represents a peak in the reconstructed tomogram. We consider a truncated cosine series representation of the signal, with the constraint that the coefficients in the basis expansion be sparse. An l2 (sum of squared errors) data error is considered with an l1 (sum of absolute values) constraint on the coefficients. The optimization problem is solved using Weiszfeld's iteratively reweighted least squares (IRLS) algorithm. On real FDOCT data, improved results are obtained over the standard reconstruction technique, with lower levels of background measurement noise and artifacts due to the strong l1 penalty. Previous sparse tomogram reconstruction techniques in the literature proposed collecting sparse samples, necessitating a change in the data-capturing process conventionally used in FDOCT. The IRLS-based method proposed in this paper does not suffer from this drawback.
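A minimal IRLS loop for the l2-data/l1-penalty problem, as a generic sketch rather than the exact Weiszfeld-type update in the paper: each iteration replaces the l1 term with a weighted quadratic using weights 1/(|x_i| + eps) and solves the resulting ridge system:

```python
import numpy as np

def irls_l1(A, b, lam=0.1, iters=50, eps=1e-6):
    """IRLS for min ||Ax - b||^2 + lam*||x||_1. At each iteration the l1
    term is majorized by lam * sum_i x_i^2 / (|x_i^(k)| + eps), giving a
    weighted ridge problem with a closed-form solution."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # unregularized start
    for _ in range(iters):
        W = np.diag(lam / (np.abs(x) + eps))        # reweighting step
        x = np.linalg.solve(A.T @ A + W, A.T @ b)
    return x
```

In the FDOCT setting, the columns of `A` would be the truncated cosine basis and `x` the sparse coefficient vector whose surviving entries mark layer interfaces.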