66 results for value-based sales


Relevance: 30.00%

Publisher:

Abstract:

This article presents the buckling analysis of orthotropic nanoplates such as graphene using the two-variable refined plate theory and nonlocal small-scale effects. The two-variable refined plate theory accounts for transverse shear effects and a parabolic distribution of the transverse shear strains through the thickness of the plate; hence shear correction factors are unnecessary. Nonlocal governing equations of motion for the monolayer graphene are derived from the principle of virtual displacements. The closed-form solution for the buckling load of a simply supported rectangular orthotropic nanoplate subjected to in-plane loading has been obtained using Navier's method. Numerical results obtained by the present theory are compared with first-order shear deformation theory for various shear correction factors. It is shown that the nondimensional buckling load of the orthotropic nanoplate is always smaller than that of the isotropic nanoplate, and that small-scale effects contribute significantly to the mechanical behavior of orthotropic graphene sheets and cannot be neglected. Further, the buckling load decreases as the nonlocal scale parameter increases. The effects of the mode number, compression ratio and aspect ratio on the buckling load of the orthotropic nanoplate are also captured and discussed in detail. The results presented in this work may provide useful guidance for the design and development of orthotropic graphene-based nanodevices that make use of the buckling properties of orthotropic nanoplates.
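The reported trend, the buckling load falling as the nonlocal parameter grows, can be illustrated numerically. A minimal sketch using a generic Eringen-type scaling factor and hypothetical plate dimensions, not the paper's orthotropic closed-form solution:

```python
import numpy as np

def nonlocal_buckling_load(N_local, e0a, m, n, a, b):
    """Scale a classical (local) buckling load by the Eringen-type factor
    1 / (1 + (e0*a)^2 * (alpha^2 + beta^2)) for mode (m, n) of a simply
    supported a x b plate -- an illustrative form of the nonlocal effect."""
    alpha, beta = m * np.pi / a, n * np.pi / b
    return N_local / (1.0 + e0a**2 * (alpha**2 + beta**2))

# Buckling load decreases monotonically as the nonlocal scale parameter grows.
loads = [nonlocal_buckling_load(100.0, e0a, 1, 1, 10.0, 10.0)
         for e0a in (0.0, 0.5, 1.0, 2.0)]
```

With e0a = 0 the classical load is recovered exactly; any positive nonlocal parameter reduces it, matching the trend reported above.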

Relevance: 30.00%

Publisher:

Abstract:

A reliable method for service life estimation of a structural element is a prerequisite for service life design. A new methodology for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement is proposed. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved in service life estimation by using a hybrid method combining the vertex method of fuzzy set theory with the Monte Carlo simulation technique. It is also shown how to determine the bounds for the characteristic value of failure probability from the resulting fuzzy set for failure probability with minimal computational effort. Using the methodology, the bounds for the characteristic value of failure probability for a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of failure probability with the target failure probability. The methodology will be useful for durability-based service life design and also for making decisions regarding in-service inspections.
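The hybrid vertex/Monte Carlo idea can be sketched with a toy limit state g = R - S (resistance minus load) in place of the paper's chloride-ingress model; the fuzzy variable, its alpha-cut interval, and the distribution parameters below are illustrative assumptions:

```python
import numpy as np

def failure_prob(mean_R, n=50_000, seed=0):
    """Monte Carlo estimate of P[g < 0] for the toy limit state g = R - S,
    with resistance R ~ N(mean_R, 15) and load S ~ N(100, 10)."""
    rng = np.random.default_rng(seed)
    R = rng.normal(mean_R, 15.0, n)
    S = rng.normal(100.0, 10.0, n)
    return float(np.mean(R - S < 0.0))

# Vertex method: evaluate the failure probability at the endpoints (vertices)
# of the alpha-cut interval of each fuzzy variable (one here: mean resistance)
# and take min/max as bounds on the characteristic failure probability.
alpha_cut = (130.0, 150.0)
pfs = [failure_prob(v) for v in alpha_cut]
pf_lower, pf_upper = min(pfs), max(pfs)
```

The upper bound pf_upper is what the methodology compares against the target failure probability; with several fuzzy variables the vertex evaluations run over all 2^n endpoint combinations.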

Relevance: 30.00%

Publisher:

Abstract:

The current paper suggests a new procedure for designing helmets for head impact protection for users such as motorcycle riders. According to the approach followed here, a helmet is mounted on a featureless Hybrid III headform of the type used in assessing vehicles for compliance with the FMVSS 201 regulation in the USA for upper interior head impact safety. The requirement adopted in the latter standard, i.e. not exceeding a threshold HIC(d) limit of 1000, is applied in the present study as a likely criterion for adjudging the efficacy of helmets. An impact velocity of 6 m/s (13.5 mph) for the helmet-headform system striking a rigid target is suggested as acceptable for ascertaining a helmet's effectiveness as a countermeasure for minimizing the risk of severe head injury. The proposed procedure is demonstrated with the help of a validated LS-DYNA model of a featureless Hybrid III headform in conjunction with a helmet model comprising an outer polypropylene shell, to the inner surface of which is bonded a protective polyurethane foam padding of a given thickness. Based on simulation results of impact on a rigid surface, it appears that a minimum foam padding thickness of 40 mm is necessary for obtaining an acceptable value of HIC(d).
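For reference, the HIC(d) metric used as the pass/fail criterion can be computed from a headform acceleration trace as follows. The constant 50 g pulse is only a stand-in for a simulated impact signal; the conversion HIC(d) = 0.75446 HIC + 166.4 is the one used with the free-motion headform under FMVSS 201:

```python
import numpy as np

def hic(t, a_g, max_window=0.036):
    """Head Injury Criterion: max over windows (t1, t2) of
    (t2 - t1) * [mean acceleration in g]^2.5, window capped at 36 ms."""
    def trapz(y, x):
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = trapz(a_g[i:j + 1], t[i:j + 1]) / dt
            best = max(best, dt * avg**2.5)
    return best

def hic_d(h):
    """HIC(d) conversion used with the featureless headform in FMVSS 201."""
    return 0.75446 * h + 166.4

t = np.linspace(0.0, 0.1, 101)       # 100 ms trace at 1 ms resolution
a = np.full_like(t, 50.0)            # idealized constant 50 g pulse
score = hic_d(hic(t, a))             # well below the 1000 threshold
```

For the constant pulse the optimum window is simply the 36 ms cap, so the result can be checked by hand as 0.036 x 50^2.5.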

Relevance: 30.00%

Publisher:

Abstract:

This paper presents an artificial feed-forward neural network (FFNN) approach for the assessment of power system voltage stability. A novel approach based on the input-output relation between real and reactive power, as well as voltage vectors for generators and load buses, is used to train the neural network (NN). The input properties of the feed-forward network are generated from offline training data with various simulated loading conditions using a conventional voltage stability algorithm based on the L-index. The neural network is trained with the L-index output as the target vector for each of the system loads. Two separately trained NNs, corresponding to normal loading and contingency conditions, are investigated on a practical 367-node power system network. The performance of the trained artificial neural network (ANN) is also investigated on the system under various voltage stability assessment conditions. Compared to the computationally intensive benchmark conventional software, near-accurate results for the value of the L-index, and thus the voltage profile, were obtained. The proposed algorithm is fast, robust and accurate, and can be used online for predicting the L-indices of all the power system buses. The proposed ANN approach is also shown to be effective and computationally feasible in voltage stability assessment, as well as for potential enhancements within an overall energy management system in order to determine local and global stability indices.
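The L-index that generates the training targets can be computed directly from the bus admittance matrix. A minimal two-bus sketch with illustrative line impedance and voltages, not the 367-node system:

```python
import numpy as np

# Bus 0: generator, bus 1: load, connected by a line of impedance j0.1 pu.
y = 1.0 / 0.1j
Ybus = np.array([[y, -y],
                 [-y, y]])
Y_LL = Ybus[1:, 1:]                  # load-load block
Y_LG = Ybus[1:, :1]                  # load-generator block
F = -np.linalg.solve(Y_LL, Y_LG)     # participation of generator voltages

V_G = np.array([1.0 + 0.0j])                          # generator voltage (pu)
V_L = np.array([0.95 * np.exp(-1j * np.deg2rad(5))])  # load bus voltage (pu)

# L-index per load bus: L_j = |1 - sum_i F_ji * V_Gi / V_Lj|.
# L near 0 means a secure operating point; L -> 1 signals voltage collapse.
L = np.abs(1.0 - (F @ V_G) / V_L)
```

Running this conventional calculation over many simulated loading conditions is what produces the offline target vectors the FFNN is trained on.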

Relevance: 30.00%

Publisher:

Abstract:

Purpose: The authors aim at developing a pseudo-time, sub-optimal stochastic filtering approach based on a derivative-free variant of the ensemble Kalman filter (EnKF) for solving the inverse problem of diffuse optical tomography (DOT), while making use of a shape-based reconstruction strategy that enables representing a cross section of an inhomogeneous tumor boundary by a general closed curve. Methods: The optical parameter fields to be recovered are approximated via an expansion based on the circular harmonics (CH) (Fourier basis functions), and the EnKF is used to recover the coefficients in the expansion with both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the pseudo-dynamic EnKF (PD-EnKF) presently yield a parsimonious representation of the filter variables, which consist of only the Fourier coefficients and the constant scalar parameter value within the inclusion. Using fictitious, low-intensity Wiener noise processes in suitably constructed "measurement" equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible. Results: In our numerical simulations, we have considered both elliptical inclusions (two inhomogeneities) and those with more complex shapes (such as an annular ring and a dumbbell) in 2-D objects which are cross-sections of a cylinder, with background absorption and (reduced) scattering coefficients chosen as mu_a^b = 0.01 mm^-1 and mu_s'^b = 1.0 mm^-1, respectively. We also assume mu_a = 0.02 mm^-1 within the inhomogeneity (for the single-inhomogeneity case) and mu_a = 0.02 and 0.03 mm^-1 (for the two-inhomogeneities case). The reconstruction results by the PD-EnKF are shown to be consistently superior to those from a deterministic and explicitly regularized Gauss-Newton algorithm. We have also estimated the unknown mu_a from experimentally gathered fluence data and verified the reconstruction by matching the experimental data with the computed one. Conclusions: The PD-EnKF, which exhibits little sensitivity against variations in the fictitiously introduced noise processes, is also proven to be accurate and robust in recovering a spatial map of the absorption coefficient from DOT data. With the help of the shape-based representation of the inhomogeneities and an appropriate scaling of the CH expansion coefficients representing the boundary, we have been able to recover inhomogeneities representative of the shape of malignancies in medical diagnostic imaging. (C) 2012 American Association of Physicists in Medicine. [DOI: 10.1118/1.3679855]
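The core EnKF analysis step the filter relies on can be sketched generically. Here a small linear observation operator stands in for the DOT photon-propagation forward model, and the state, noise levels and ensemble size are illustrative:

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_std, rng):
    """One stochastic (perturbed-observation) EnKF analysis step.
    ensemble: (n_ens, n_state), H: (n_obs, n_state), obs: (n_obs,)."""
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)          # state anomalies
    Y = X @ H.T                                   # predicted-data anomalies
    P_yy = Y.T @ Y / (n_ens - 1) + obs_std**2 * np.eye(len(obs))
    P_xy = X.T @ Y / (n_ens - 1)
    K = P_xy @ np.linalg.inv(P_yy)                # Kalman gain
    perturbed = obs + rng.normal(0.0, obs_std, (n_ens, len(obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(1)
truth = np.array([0.02, 0.5])   # e.g. an absorption value and a shape coefficient
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
obs = H @ truth + rng.normal(0.0, 0.01, 3)
prior = rng.normal([0.05, 0.0], 0.1, (200, 2))   # broad prior ensemble
post = enkf_update(prior, obs, H, 0.01, rng)
```

In the PD-EnKF this update is iterated in pseudo-time, with the fictitious Wiener noise entering through the process and measurement equations; being derivative-free, the update needs only forward-model evaluations, not Jacobians.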

Relevance: 30.00%

Publisher:

Abstract:

This article presents a review of recent developments in parameter-based acoustic emission (AE) techniques applied to concrete structures. It recapitulates the significant milestones achieved by previous researchers, including the various methods and models developed in AE testing of concrete structures. The aim is to provide an overview of the specific features of parameter-based AE techniques for concrete structures developed over the years. Emphasis is given to traditional parameter-based AE techniques applied to concrete structures, on which a significant amount of research has already been published; considerable attention has been given to those publications. Some recent studies, such as AE energy analysis and b-value analysis used to assess damage of concrete bridge beams, are also discussed. The formation of the fracture process zone and the AE energy released during the fracture process in concrete beam specimens are summarised. A large body of experimental data on AE characteristics of concrete has accumulated over the last three decades. This review of parameter-based AE techniques applied to concrete structures may help researchers and engineers to better understand the failure mechanism of concrete and to evolve more useful methods and approaches for diagnostic inspection of structural elements and failure prediction/prevention of concrete structures.

Relevance: 30.00%

Publisher:

Abstract:

Notched three-point bend (TPB) specimens made with plain concrete and cement mortar were tested under crack mouth opening displacement (CMOD) control at a rate of 0.0004 mm/s, and the acoustic emissions (AE) released were recorded simultaneously during the experiments. Amplitude distribution analysis of the AE released during fracture was carried out to study the development of the fracture process in concrete and mortar specimens. The slope of the log-linear frequency-amplitude distribution of AE is known as the AE-based b-value. The AE-based b-value was computed in terms of the physical process of time-varying applied load using the cumulative frequency distribution (Gutenberg-Richter relationship) and the discrete frequency distribution (Aki's method) of AE released during concrete fracture. AE characteristics of plain concrete and cement mortar were studied and discussed, and it was observed that AE-based b-value analysis serves as a tool to identify damage in concrete structural members. (C) 2012 Elsevier Ltd. All rights reserved.
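The two b-value estimators named above can be reproduced on synthetic data. Under the Gutenberg-Richter law, magnitudes above a cutoff are exponentially distributed, so both the cumulative least-squares fit and Aki's maximum-likelihood formula should recover the generating b (the numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
b_true, M_c = 1.0, 2.0
# log10 N(>=M) = a - b*M  implies  M - M_c ~ Exponential(rate = b * ln 10)
M = M_c + rng.exponential(1.0 / (b_true * np.log(10.0)), 5000)

# Aki's maximum-likelihood estimator (discrete frequency distribution):
b_aki = np.log10(np.e) / (M.mean() - M_c)

# Least-squares fit to the cumulative frequency distribution (GR form):
M_sorted = np.sort(M)
N_cum = np.arange(len(M), 0, -1)       # events with magnitude >= M_sorted[i]
b_ls = -np.polyfit(M_sorted, np.log10(N_cum), 1)[0]
```

For AE data, amplitudes in dB are commonly mapped to magnitudes (e.g. M = A_dB/20) before applying either estimator; a drop in b-value then flags the transition from distributed microcracking to localized macrocrack growth.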

Relevance: 30.00%

Publisher:

Abstract:

Design optimisation of a helicopter rotor blade is performed. The objective is to reduce helicopter vibration, with constraints on frequencies and aeroelastic stability. The ply angles of the D-spar and skin of the composite rotor blade with NACA 0015 aerofoil section are considered as design variables. Polynomial response surfaces and space-filling experimental designs are used to generate surrogate models of the objective function with respect to cross-section properties. The stacking sequence corresponding to the optimal cross-section is found using a real-coded genetic algorithm. Ply angle discretisations of 1, 15, 30 and 45 degrees are used. The mean value of the objective function is used to find the optimal blade designs, and the resulting designs are tested for variance. The optimal designs show a vibration reduction of 26% to 33% from the baseline design. A substantial reduction in vibration and an aeroelastically stable blade are obtained even after accounting for composite material uncertainty.
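The surrogate-plus-GA workflow begins with fitting a polynomial response surface to sampled objective values. A minimal sketch with a stand-in quadratic objective (the real objective comes from aeroelastic analysis of the blade, and a genetic algorithm would then search the cheap surrogate):

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x1, x2):
    """Stand-in vibration objective over two normalized cross-section
    properties; the true function is evaluated by aeroelastic analysis."""
    return 2.0 + (x1 - 0.3)**2 + 0.5 * (x2 + 0.2)**2

# Space-filling design: 40 random points over the normalized design space.
X = rng.uniform(-1.0, 1.0, (40, 2))
y = objective(X[:, 0], X[:, 1])

# Full quadratic response surface: [1, x1, x2, x1^2, x1*x2, x2^2].
def basis(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

coef, *_ = np.linalg.lstsq(basis(X[:, 0], X[:, 1]), y, rcond=None)

# The surrogate reproduces the quadratic test objective essentially exactly.
pred = basis(np.array([0.1]), np.array([0.1])) @ coef
```

Because the surrogate is cheap to evaluate, the real-coded GA can afford the many evaluations needed to search the discretised ply-angle space.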

Relevance: 30.00%

Publisher:

Abstract:

Purpose: To optimize the data-collection strategy for diffuse optical tomography and to obtain a set of independent measurements among the total measurements using model-based data-resolution matrix characteristics. Methods: The data-resolution matrix is computed based on the sensitivity matrix and the regularization scheme used in the reconstruction procedure by matching the predicted data with the actual one. The diagonal values of the data-resolution matrix show the importance of a particular measurement, and the magnitude of the off-diagonal entries shows the dependence among measurements. Based on the closeness of the diagonal value magnitude to the off-diagonal entries, the choice of independent measurements is made. The reconstruction results obtained using all measurements were compared to the ones obtained using only independent measurements in both numerical and experimental phantom cases. A traditional singular value analysis was also performed for comparison with the results obtained using the proposed method. Results: The results indicate that choosing only independent measurements based on data-resolution matrix characteristics for the image reconstruction does not compromise the reconstructed image quality significantly, and in turn reduces the data-collection time associated with the procedure. When the same number of measurements (equivalent to the independent ones) was chosen at random, the reconstruction results had poor quality with major boundary artifacts. The number of independent measurements obtained using the data-resolution matrix analysis is much higher than that obtained using the singular value analysis. Conclusions: The data-resolution matrix analysis is able to provide the high level of optimization needed for effective data collection in diffuse optical imaging. The analysis itself is independent of noise characteristics in the data, resulting in a universal framework to characterize and optimize a given data-collection strategy. (C) 2012 American Association of Physicists in Medicine. [DOI: 10.1118/1.4736820]
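For a linear, Tikhonov-regularized reconstruction, the data-resolution matrix described above has the closed form N = J (J^T J + lambda I)^(-1) J^T. A small sketch with a random sensitivity matrix (the dimensions and lambda are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((16, 8))   # sensitivity (Jacobian): 16 measurements, 8 unknowns
lam = 0.1                          # Tikhonov regularization parameter

# Data-resolution matrix: maps measured data to the data predicted by the
# regularized reconstruction. diag(N) ranks measurement importance; large
# off-diagonal entries expose dependence among measurements.
N = J @ np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T)
importance = np.diag(N)

# Sanity check against the SVD: trace(N) = sum_i s_i^2 / (s_i^2 + lam).
s = np.linalg.svd(J, compute_uv=False)
trace_svd = np.sum(s**2 / (s**2 + lam))
```

Measurements whose diagonal value is large relative to the off-diagonal entries in their row are the natural candidates for the independent subset.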

Relevance: 30.00%

Publisher:

Abstract:

We describe a hybrid synthetic protocol, the solvated metal atom dispersion (SMAD) method, for the synthesis and stabilization of monodisperse amorphous cobalt nanoparticles. By employing an optimized ratio of a weakly coordinating solvent and a capping agent, monodisperse colloidal cobalt nanoparticles (2 +/- 0.5 nm) have been prepared by the SMAD method. However, the as-prepared samples were found to be oxidatively unstable, as elucidated by their magnetic studies. Oxidative stability in our case was achieved via a pyrolysis process that led to the decomposition of the organic solvent and the capping agent, resulting in the formation of carbon-encapsulated cobalt nanoparticles, which was confirmed by Raman spectroscopy. Controlled annealing at different temperatures led to the phase transformation of metallic cobalt from the hcp to the fcc phase. The magnetic behaviour varies with the phase and the particle size; in particular, the coercivity of the nanoparticles exhibited a strong dependence on the phase transformation of cobalt. A high saturation magnetization, close to the bulk value, was achieved in the case of the annealed samples. In addition to detailed structural and morphological characterization, the results of thermal and magnetic studies are also presented.

Relevance: 30.00%

Publisher:

Abstract:

Wave propagation in a graphene sheet embedded in an elastic medium (polymer matrix) has been a topic of great interest in the nanomechanics of graphene sheets, where equivalent continuum models are widely used. In this manuscript, we examine this issue by incorporating nonlocal theory into the classical plate model. The influence of the nonlocal scale effects has been investigated in detail. The results are qualitatively different from those obtained based on the local/classical plate theory and are thus important for the development of monolayer graphene-based nanodevices. In the present work, the graphene sheet is modeled as a one-atom-thick isotropic plate. Chemical bonds are assumed to be formed between the graphene sheet and the elastic medium. The polymer matrix is described by a Pasternak foundation model, which accounts for both normal pressure and the transverse shear deformation of the surrounding elastic medium. When the shear effects are neglected, the model reduces to the Winkler foundation model. The normal pressure or Winkler elastic foundation parameter is approximated as a series of closely spaced, mutually independent, vertical linear elastic springs, where the foundation modulus is assumed equivalent to the stiffness of the springs. For this model, the nonlocal governing differential equations of motion are derived from the minimization of the total potential energy of the entire system. An ultrasonic type of flexural wave propagation model is also derived, and the results of the wave dispersion analysis are shown for both local and nonlocal elasticity calculations. From this analysis we show that the elastic matrix strongly affects the flexural wave mode and rapidly increases the frequency band gap of the flexural mode. The flexural wavenumbers obtained from nonlocal elasticity calculations are higher than those from local elasticity calculations, and the corresponding wave group speeds are smaller in the nonlocal calculation. The effect of the y-directional wavenumber (eta_q) on the spectrum and dispersion relations of the graphene embedded in the polymer matrix is also observed. We also show that the cut-off frequencies of the flexural wave mode depend not only on the y-direction wavenumber but also on the nonlocal scaling parameter (e0a). The effect of eta_q and e0a on the cut-off frequency variation is also captured for the cases with and without the elastic matrix effect. For a given nanostructure, the nonlocal small-scale coefficient can be obtained by matching the results from molecular dynamics (MD) simulations with the nonlocal elasticity calculations; at that value of the nonlocal scale coefficient, waves will propagate in the nanostructure at the corresponding cut-off frequency. In the present paper, different values of e0a are used. One can obtain the exact e0a for a given graphene sheet by matching the MD simulation results of graphene with the results presented in this article. (c) 2012 Elsevier Ltd. All rights reserved.
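The dispersion trends summarized above (the nonlocal scale lowering frequency and group speed, the elastic matrix opening a band gap) can be illustrated with a generic nonlocal plate-on-foundation dispersion relation. The relation and all parameter values below are illustrative assumptions, not the paper's derived equations:

```python
import numpy as np

def flexural_freq(k, D=1.0, rho_h=1.0, e0a=0.0, K_w=0.0, K_g=0.0):
    """Illustrative flexural dispersion for a nonlocal plate on a Pasternak
    foundation: omega^2 = (D k^4 + K_w + K_g k^2) / (rho_h (1 + (e0a)^2 k^2)).
    K_w is the Winkler modulus, K_g the Pasternak shear modulus; K_g = 0
    recovers the Winkler foundation model."""
    return np.sqrt((D * k**4 + K_w + K_g * k**2) / (rho_h * (1.0 + e0a**2 * k**2)))

k = np.linspace(0.1, 5.0, 50)
w_local = flexural_freq(k)                               # classical plate theory
w_nonlocal = flexural_freq(k, e0a=0.5)                   # nonlocal scale effect
w_matrix = flexural_freq(k, e0a=0.5, K_w=2.0, K_g=0.5)   # with elastic matrix
```

The Winkler term K_w gives the flexural branch a nonzero cut-off frequency at k -> 0, which is how the foundation opens the frequency band gap.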

Relevance: 30.00%

Publisher:

Abstract:

Subsurface lithology and seismic site classification of the Lucknow urban center, located in the central part of the Indo-Gangetic Basin (IGB), are presented based on detailed shallow subsurface investigations and borehole analysis. These were carried out through 47 seismic surface wave tests using multichannel analysis of surface waves (MASW) and 23 boreholes drilled up to 30 m with standard penetration test (SPT) N values. Subsurface lithology profiles drawn from the drilled boreholes show low- to medium-compressibility clay and silty to poorly graded sand down to a depth of 30 m. In addition, deeper borehole records (depth >150 m) were collected from the Lucknow Jal Nigam (Water Corporation), Government of Uttar Pradesh, to understand the deeper subsoil stratification; deeper boreholes in this paper refer to those with depth over 150 m. These records show the presence of clay mixed with sand and Kankar at some locations down to a depth of 150 m, followed by layers of sand, clay, and Kankar up to 400 m. Based on the available details, shallow and deeper cross-sections through Lucknow are presented. Shear wave velocity (SWV) and N-SPT values were measured for the study area using MASW and SPT testing, and the measured SWV and N-SPT values for the same locations were found to be comparable. These values were used to estimate 30 m average values of N-SPT (N30) and SWV (Vs30) for seismic site classification of the study area as per the National Earthquake Hazards Reduction Program (NEHRP) soil classification system. Based on the NEHRP classification, the study area falls into site classes C and D based on Vs30, and site classes D and E based on N30. The issue of larger amplification during future seismic events is highlighted for the major part of the study area that comes under site classes D and E. Also, the mismatch of site classes based on N30 and Vs30 raises the question of the suitability of the NEHRP classification system for the study region. Further, 17 sets of SPT and SWV data are used to develop a correlation between N-SPT and SWV. This represents a first attempt at seismic site classification and correlation between N-SPT and SWV in the Indo-Gangetic Basin.
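The Vs30 averaging and NEHRP class assignment used above are simple to reproduce. The layer profile below is an illustrative soft-soil column, not measured Lucknow data; the class boundaries follow the NEHRP provisions:

```python
def vs30(thicknesses_m, velocities_mps):
    """Time-averaged shear wave velocity of the top 30 m:
    Vs30 = 30 / sum(d_i / v_i); the layer thicknesses must total 30 m."""
    assert abs(sum(thicknesses_m) - 30.0) < 1e-9
    return 30.0 / sum(d / v for d, v in zip(thicknesses_m, velocities_mps))

def nehrp_site_class(v):
    """NEHRP site class from Vs30 in m/s."""
    if v > 1500.0: return "A"
    if v > 760.0:  return "B"
    if v > 360.0:  return "C"
    if v > 180.0:  return "D"
    return "E"

# Illustrative profile: soft clay over medium sand over dense sand.
v = vs30([5.0, 10.0, 15.0], [150.0, 250.0, 400.0])
site = nehrp_site_class(v)      # slow shallow layers dominate the average
```

Because Vs30 is a travel-time (harmonic-style) average, thin slow layers pull the value down sharply, which is one reason Vs30- and N30-based classes can disagree for interbedded profiles like those described above.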

Relevance: 30.00%

Publisher:

Abstract:

This paper primarily intends to develop a GIS (geographical information system)-based data mining approach for optimally selecting the locations and determining the installed capacities for setting up distributed biomass power generation systems in the context of decentralized energy planning for rural regions. The optimal locations within a cluster of villages are obtained by matching the installed capacity needed with the demand for power, minimizing the cost of transporting biomass from dispersed sources to the power generation system, and minimizing the cost of distributing electricity from the power generation system to the demand centers or villages. The methodology was validated by using it to develop an optimal plan for implementing distributed biomass-based power systems to meet the rural electricity needs of Tumkur district in India, consisting of 2700 villages. The approach uses a k-medoid clustering algorithm to divide the total region into clusters of villages and locate biomass power generation systems at the medoids. The optimal value of k is determined iteratively by running the algorithm over the entire search space for different values of k along with demand-supply matching constraints. The optimal value of k is chosen such that it minimizes the total cost of system installation, transportation of biomass, and transmission and distribution. A smaller region, consisting of 293 villages, was selected to study the sensitivity of the results to varying demand and supply parameters. The results of the clustering are represented on a GIS map of the region.
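The clustering step can be sketched with a plain alternating k-medoid procedure on village coordinates. The coordinates below are illustrative, and the real method adds demand-supply matching and transport/transmission costs to the objective:

```python
import numpy as np

def k_medoids(points, k, n_iter=50, seed=0):
    """Alternating k-medoid clustering with Euclidean cost. Medoids are
    actual data points, so each cluster centre is a candidate plant site."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(points), k, replace=False)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(n_iter):
        dists = np.linalg.norm(points[:, None] - points[medoids][None], axis=2)
        labels = dists.argmin(axis=1)
        new = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size == 0:
                continue
            # New medoid: the member minimizing total intra-cluster distance.
            intra = np.linalg.norm(points[members][:, None]
                                   - points[members][None], axis=2)
            new[c] = members[intra.sum(axis=1).argmin()]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, labels

# Two well-separated groups of "villages" (illustrative coordinates):
pts = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                [10.0, 10.0], [10.0, 11.0], [11.0, 10.0]])
medoids, labels = k_medoids(pts, 2)
```

In the paper's workflow this inner clustering is re-run over a range of k, and the k minimizing total installation, transport, and transmission/distribution cost is retained.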

Relevance: 30.00%

Publisher:

Abstract:

The equivalence of triangle-comparison-based pulse width modulation (TCPWM) and space vector based PWM (SVPWM) during linear modulation is well known. This paper analyses TCPWM techniques, such as sine-triangle PWM (SPWM) and common-mode voltage injection PWM, during overmodulation from a space vector point of view. The average voltage vector produced by TCPWM during overmodulation is studied in the stationary (a-b) reference frame. This is compared and contrasted with the average voltage vector corresponding to the well-known standard two-zone algorithm for space vector modulated inverters. It is shown that the two-zone overmodulation algorithm itself can be derived from the variation of the average voltage vector with TCPWM. The average voltage vector is further studied in a synchronously revolving (d-q) reference frame. The RMS value of the low-order voltage ripple can be estimated and used to compare the harmonic distortion due to different PWM methods during overmodulation. The measured values of the total harmonic distortion (THD) in the line currents are presented at various fundamental frequencies. The relative values of measured current THD pertaining to the different PWM methods tally with those of the analytically evaluated RMS voltage ripple.
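The clipping of the sinusoidal reference against the carrier peak is what distorts the average voltage in overmodulation, and the resulting low-order distortion can be quantified numerically. A sketch for plain SPWM in per-unit quantities, with illustrative modulation indices:

```python
import numpy as np

def avg_voltage_distortion(m, n=20000):
    """THD of the average pole voltage from sine-triangle PWM: the reference
    m*sin(theta) saturates at the carrier peak (+/-1) in overmodulation."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    v = np.clip(m * np.sin(theta), -1.0, 1.0)
    b1 = 2.0 * np.mean(v * np.sin(theta))        # fundamental amplitude
    rms_total_sq = np.mean(v**2)
    rms_fund_sq = b1**2 / 2.0
    return np.sqrt(max(rms_total_sq - rms_fund_sq, 0.0) / rms_fund_sq)

thd_linear = avg_voltage_distortion(0.9)   # no clipping: pure sinusoid
thd_over = avg_voltage_distortion(1.5)     # clipped: low-order harmonics appear
```

Pushing m further drives the waveform toward a square wave, mirroring the progression toward six-step operation that the space-vector analysis above describes via the two-zone algorithm.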

Relevance: 30.00%

Publisher:

Abstract:

Purpose: In the present work, a numerical method based on the well-established enthalpy technique is developed to simulate the growth of binary alloy equiaxed dendrites in the presence of melt convection. The paper aims to discuss these issues.
Design/methodology/approach: The principle of volume averaging is used to formulate the governing equations (mass, momentum, energy and species conservation), which are solved using a coupled explicit-implicit method. The velocity and pressure fields are obtained using a fully implicit finite volume approach, whereas the energy and species conservation equations are solved explicitly to obtain the enthalpy and solute concentration fields. As a model problem, simulation of the growth of a single crystal in a two-dimensional cavity filled with an undercooled melt is performed.
Findings: Comparison of the simulation results with available solutions obtained using the level set method and the phase field method shows good agreement. The effects of melt flow on the dendrite growth rate and the solute distribution along the solid-liquid interface are studied. A faster growth rate of the upstream dendrite arm is observed in the case of binary alloys, which can be attributed to the enhanced heat transfer due to convection as well as the lower solute pile-up at the solid-liquid interface. Subsequently, the influence of the thermal and solutal Peclet numbers and of undercooling on the dendrite tip velocity is investigated.
Originality/value: As the present enthalpy-based microscopic solidification model with melt convection is based on a framework similar to the popularly used enthalpy models at the macroscopic scale, it lays the foundation for developing effective multiscale solidification models.
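The enthalpy formulation at the heart of the method can be illustrated with a minimal 1-D conduction-driven melting problem (pure substance, no convection or species transport, illustrative properties). Temperature and liquid fraction are recovered from a single enthalpy field rather than tracked explicitly:

```python
import numpy as np

nx, dx, dt = 50, 0.02, 1e-4
k_c, rho, cp, L = 1.0, 1.0, 1.0, 2.0   # conductivity, density, heat capacity, latent heat
T_melt, T_hot, T_init = 0.0, 1.0, -0.5

H = np.full(nx, cp * T_init)           # enthalpy field (per unit mass)
for _ in range(4000):
    # Recover temperature from enthalpy: solid, mushy (at T_melt), or liquid.
    T = np.where(H < 0.0, H / cp, np.where(H > L, (H - L) / cp, T_melt))
    T[0] = T_hot                       # hot wall drives the melt front
    lap = T[2:] - 2.0 * T[1:-1] + T[:-2]
    H[1:-1] += dt * k_c / (rho * dx**2) * lap   # explicit conduction update

f_liq = np.clip(H / L, 0.0, 1.0)       # liquid fraction per cell
```

Because phase change is absorbed into H, no explicit interface tracking is needed; macroscopic and microscopic enthalpy models share this structure, which is the multiscale point made above.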