83 results for Comprehensive Osteoarthritis Test
Abstract:
Purpose - This paper aims to validate a comprehensive aeroelastic analysis for a helicopter rotor against the higher harmonic control aeroacoustic rotor test (HART-II) wind tunnel test data. Design/methodology/approach - An aeroelastic analysis of a helicopter rotor with elastic blades, based on the finite element method in space and time and capable of considering higher harmonic control inputs, is carried out. Moderate deflection and Coriolis nonlinearities are included in the analysis. The rotor aerodynamics are represented using free wake and unsteady aerodynamic models. Findings - Good correlation between the analysis and HART-II wind tunnel test data is obtained for blade natural frequencies across a range of rotating speeds. The basic physics of the blade mode shapes is also well captured. In particular, the fundamental flap, lag and torsion modes compare very well. The blade response compares well with HART-II results and other high-fidelity aeroelastic code predictions for the flap and torsion modes. For the lead-lag response, the present analysis prediction is somewhat better than those of other aeroelastic analyses. Research limitations/implications - Predicted blade response trends with higher harmonic pitch control agree well with the wind tunnel test data but usually contain a constant offset in the mean values of the lead-lag and elastic torsion responses. Improvements in the modeling of the aerodynamic environment around the rotor can help reduce this gap between the experimental and numerical results. Practical implications - Correlation of predicted aeroelastic response with wind tunnel test data is a vital step towards validating any helicopter aeroelastic analysis. Such efforts lend confidence in using the numerical analysis to understand the actual physical behavior of the helicopter system. Also, validated numerical analyses can take the place of time-consuming and expensive wind tunnel tests during the initial stage of the design process.
Originality/value - While the basic physics appears to be well captured by the aeroelastic analysis, there is a need for improvement in the aerodynamic modeling, which appears to be the source of the gap between the numerical predictions and the HART-II wind tunnel experiments.
Abstract:
Careful study of various aspects presented in the note reveals basic fallacies in the concept and final conclusions. The Authors claim to have presented a new method of determining c_v. However, the note does not contain a new method. In fact, the method proposed is an attempt to generate settlement vs. time data using only two values of (t, δ). The Authors have used the rectangular hyperbola method to determine c_v from the predicted δ-t data. In this context, the title of the paper itself is misleading and questionable. The Authors have compared predicted c_v values with measured values, both of them being results of the rectangular hyperbola method.
Abstract:
Using Terzaghi's relationship between the degree of consolidation, U, and the time factor, T, if M_U1 and M_U2 (M_U1 ≠ M_U2) are the slopes of the U versus √T curve at any two time factors T_U1 and T_U2, then it can be shown that a unique relationship exists between T_U2/T_U1, M_U1/M_U2, and T_U1 (or T_U2), and, knowing any two of these, the third can be uniquely determined. A chart, called the T chart, has been plotted using these three variables for quickly determining T and U at any experimental time, t, in order to determine the coefficient of consolidation, c_v, the corrected zero settlement, δ_0, and the ultimate primary settlement, δ_100. The chart can be used even in those cases where the settlement and time at the instant of load increment are not known.
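As a rough numerical illustration of the U-T relationship the T chart is built on, the sketch below evaluates Terzaghi's series solution for U(T) and estimates slopes of the U versus √T curve; the function names and the sample time factors are illustrative, not from the paper.

```python
import math

def degree_of_consolidation(T, terms=100):
    """Terzaghi's series: U(T) = 1 - sum over m of (2/M^2) exp(-M^2 T),
    where M = pi (2m + 1) / 2. Returns U as a fraction in [0, 1]."""
    U = 1.0
    for m in range(terms):
        M = math.pi * (2 * m + 1) / 2.0
        U -= (2.0 / M ** 2) * math.exp(-M ** 2 * T)
    return U

def slope_U_vs_sqrtT(T, h=1e-6):
    """Slope of the U versus sqrt(T) curve, by central difference."""
    s = math.sqrt(T)
    return (degree_of_consolidation((s + h) ** 2)
            - degree_of_consolidation((s - h) ** 2)) / (2 * h)

# Two time factors: the ratios T2/T1 and M1/M2, together with T1 (or T2),
# are the three mutually determining quantities the T chart relates.
T1, T2 = 0.05, 0.40
ratio_T = T2 / T1
ratio_M = slope_U_vs_sqrtT(T1) / slope_U_vs_sqrtT(T2)
print(round(degree_of_consolidation(0.197), 3))  # ~0.5: the familiar 50% point
```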
Abstract:
Taylor (1948) suggested a method for determining the settlement, δ, corresponding to 90% consolidation utilizing the characteristics of the degree of consolidation, U, versus square root of time factor, √T, plot. Based on the properties of the slope of the U versus √T curve, a new method is proposed to determine δ corresponding to any U above 70% consolidation for evaluation of the coefficient of consolidation, c_v. The effects of secondary consolidation on the c_v value at different percentages of consolidation can be studied. Values of c_v closer to the field values can be determined in less time as compared to Taylor's method. At any U between 75 and 95% consolidation, c_v(U) due to the new method lies between Taylor's c_v and Casagrande's c_v.
Abstract:
The test based on comparison of the characteristic coefficients of the adjacency matrices of the corresponding graphs for detection of isomorphism in kinematic chains has been shown to fail in the case of two pairs of ten-link, simple-jointed chains, one pair corresponding to single-freedom chains and the other pair corresponding to three-freedom chains. An assessment of the merits and demerits of available methods for detection of isomorphism in graphs and kinematic chains is presented, keeping in view the suitability of the methods for use in computerized structural synthesis of kinematic chains. A new test based on the characteristic coefficients of the “degree” matrix of the corresponding graph is proposed for detection of isomorphism in kinematic chains. The new test is found to be successful in a number of examples of graphs where the test based on the characteristic coefficients of the adjacency matrix fails. It has also been found to be successful in distinguishing the structures of all known simple-jointed kinematic chains in the categories of (a) single-freedom chains with up to 10 links, (b) two-freedom chains with up to 9 links and (c) three-freedom chains with up to 10 links.
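The limitation of the adjacency-matrix test can be seen on a small classical example (not the paper's ten-link chains): C4 plus an isolated vertex and the star K1,4 are non-isomorphic yet share all characteristic coefficients. A minimal sketch:

```python
import numpy as np

def char_coeffs(A):
    """Coefficients of the characteristic polynomial det(xI - A)."""
    return np.poly(A)

def adjacency_test(A, B, tol=1e-8):
    """The test under discussion: equal characteristic coefficients of the
    adjacency matrices. Necessary for isomorphism, but not sufficient."""
    return np.allclose(char_coeffs(A), char_coeffs(B), atol=tol)

# C4 plus an isolated vertex vs. the star K1,4: both have spectrum
# {2, 0, 0, 0, -2}, yet their degree sequences already differ.
C4_plus_K1 = np.array([[0, 1, 0, 1, 0],
                       [1, 0, 1, 0, 0],
                       [0, 1, 0, 1, 0],
                       [1, 0, 1, 0, 0],
                       [0, 0, 0, 0, 0]])
K14 = np.array([[0, 1, 1, 1, 1],
                [1, 0, 0, 0, 0],
                [1, 0, 0, 0, 0],
                [1, 0, 0, 0, 0],
                [1, 0, 0, 0, 0]])
print(adjacency_test(C4_plus_K1, K14))  # True, despite non-isomorphism
```

A degree-based matrix, in the spirit of the abstract's proposal, separates this pair immediately, since the degree sequences [2, 2, 2, 2, 0] and [4, 1, 1, 1, 1] differ.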
Abstract:
Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss in data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the resulting restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler, and instantiated for the specific problem of constant propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running time over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path profile guided approach.
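The merge-induced precision loss that the restructuring targets can be sketched with a toy constant-propagation lattice (an invented example, not the Scale implementation):

```python
# Toy constant-propagation lattice: a value is a concrete constant or BOTTOM
BOTTOM = object()  # "not known to be constant"

def meet(a, b):
    """Lattice meet applied variable-wise at a control-flow merge."""
    return a if a == b else BOTTOM

# Two incoming paths assign different constants, yet x + y == 3 on both:
path1 = {"x": 1, "y": 2}
path2 = {"x": 2, "y": 1}

# A standard analysis meets the facts at the merge, losing both constants,
# and with them the fact that x + y folds to 3.
merged = {v: meet(path1[v], path2[v]) for v in path1}

# Restructuring duplicates the code after the merge per incoming path, so
# each copy keeps its own facts and x + y folds to a constant on both.
print(merged["x"] is BOTTOM, path1["x"] + path1["y"], path2["x"] + path2["y"])
```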
Abstract:
Combustion is a complex phenomenon involving a multiplicity of variables. Some important variables measured in flame tests follow [1]. In order to characterize ignition, such related parameters as ignition time, ease of ignition, flash ignition temperature, and self-ignition temperature are measured. For studying the propagation of the flame, parameters such as distance burned or charred, area of flame spread, time of flame spread, burning rate, charred or melted area, and fire endurance are measured. Smoke characteristics are studied by determining such parameters as specific optical density, maximum specific optical density, time of occurrence of the densities, maximum rate of density increase, visual obscuration time, and smoke obscuration index. In addition to the above variables, there are a number of specific properties of the combustible system which could be measured. These are soot formation, toxicity of combustion gases, heat of combustion, dripping phenomena during the burning of thermoplastics, afterglow, flame intensity, fuel contribution, visual characteristics, limiting oxygen concentration (OI), products of pyrolysis and combustion, and so forth. A multitude of flammability tests measuring one or more of these properties have been developed [2]. Admittedly, no one small-scale test is adequate to mimic or assess the performance of a plastic in a real fire situation. The conditions are much too complicated [3, 4]. Some conceptual problems associated with flammability testing of polymers have been reviewed [5, 6].
Abstract:
The pollen of Parthenium hysterophorus, an alien weed growing wild in India, was found to be a potential source of allergic rhinitis. A clinical survey showed that 34% of the patients suffering from rhinitis and 12% suffering from bronchial asthma gave positive skin-prick test reactions to Parthenium pollen antigen extracts. Parthenium-specific IgE was detected in the sera of sixteen out of twenty-four patients suffering from seasonal rhinitis. There was a 66% correlation between the skin test and RAST.
Abstract:
Salt-fog tests as per International Electrotechnical Commission (IEC) recommendations were conducted on station-type insulators with large leakage lengths. Later, tests were conducted to simulate natural conditions. From these tests, it was understood that pollution flashover occurs because of nonuniform pollution layers causing nonuniform voltage distribution during a natural drying-up period. The leakage current during the test conditions was very small, and the evidence was that the leakage current did not play any significant role in causing flashovers. In the light of the experimental results, some modification of the test procedure is suggested.
Abstract:
The behaviour of laterally loaded piles is considerably influenced by the uncertainties in soil properties. Hence probabilistic models for assessment of allowable lateral load are necessary. Cone penetration test (CPT) data are often used to determine soil strength parameters, whereby the allowable lateral load of the pile is computed. In the present study, the maximum lateral displacement and moment of the pile are obtained based on the coefficient of subgrade reaction approach, considering the nonlinear soil behaviour in undrained clay. The coefficient of subgrade reaction is related to the undrained shear strength of soil, which can be obtained from CPT data. The soil medium is modelled as a one-dimensional random field along the depth, and it is described by the standard deviation and scale of fluctuation of the undrained shear strength of soil. Inherent soil variability, measurement uncertainty and transformation uncertainty are taken into consideration. The statistics of maximum lateral deflection and moment are obtained using the first-order, second-moment technique. Hasofer-Lind reliability indices for component and system failure criteria, based on the allowable lateral displacement and moment capacity of the pile section, are evaluated. The geotechnical database from the Konaseema site in India is used as a case example. It is shown that the reliability-based design approach for pile foundations, considering the spatial variability of soil, permits a rational choice of allowable lateral loads.
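A toy one-variable version of the first-order, second-moment step (all numbers and the deflection model are invented for illustration; they are not the Konaseema data or the paper's subgrade-reaction model):

```python
# FOSM sketch: propagate the mean and std of the undrained shear strength
# s_u through a linearized pile-head deflection model, then form a
# reliability index for an allowable-deflection (serviceability) criterion.
# All numbers below are illustrative assumptions.

def deflection(s_u, load=100.0):
    """Hypothetical pile-head deflection (mm): softer soil deflects more."""
    return load / (0.5 * s_u)

mu_su, sigma_su = 40.0, 8.0          # kPa: assumed mean / std of s_u

# First-order propagation: linearize the response about the mean of s_u
h = 1e-4
dyds = (deflection(mu_su + h) - deflection(mu_su - h)) / (2 * h)
mu_y = deflection(mu_su)             # mean deflection
sigma_y = abs(dyds) * sigma_su       # first-order std of deflection

# For a linear limit state g = y_allow - y, the Hasofer-Lind index reduces
# to the simple mean-over-std form used below.
y_allow = 8.0                        # mm: assumed allowable deflection
beta = (y_allow - mu_y) / sigma_y
print(round(mu_y, 2), round(beta, 2))  # 5.0 3.0
```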
Abstract:
Pin joints in structures are designed for interference, push or clearance fits. That is, the diameter of the pin is made greater than, equal to or less than the hole diameter. Consider an interference fit pin in a plate subjected to a continuously increasing in-plane load.
Abstract:
Scan testing generally causes excessive switching activity compared to normal circuit operation. The higher switching activity in turn causes a higher peak power supply current, which results in supply voltage droop and eventually yield loss. This paper proposes an efficient methodology for test vector re-ordering to achieve the minimum peak power supported by the given test vector set. The proposed methodology also minimizes average power under the minimum peak power constraint. A methodology to further reduce the peak power below the minimum supported peak power, by inclusion of a minimum number of additional vectors, is also discussed. The paper defines the lower bound on peak power for a given test set. The results on several benchmarks show that the approach can reduce peak power by up to 27%.
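One way to picture the re-ordering idea is the greedy sketch below, which uses the Hamming distance between consecutive vectors as a stand-in for switching activity; this cost model and the sample data are assumptions for illustration, not the paper's exact methodology.

```python
def hamming(a, b):
    """Bits that flip between consecutive vectors: a proxy for switching."""
    return sum(x != y for x, y in zip(a, b))

def greedy_reorder(vectors):
    """Greedy re-ordering: repeatedly append the unused vector with the
    fewest transitions from the current one, to lower the peak (the
    maximum consecutive Hamming distance)."""
    order = [vectors[0]]
    remaining = list(vectors[1:])
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(order[-1], v))
        remaining.remove(nxt)
        order.append(nxt)
    return order

def peak(order):
    """Peak switching proxy over an ordering."""
    return max(hamming(a, b) for a, b in zip(order, order[1:]))

vectors = ["0000", "1111", "0001", "1110", "0011"]
print(peak(vectors), peak(greedy_reorder(vectors)))  # 4 2
```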
Abstract:
Abstract is not available.
Abstract:
The problem of identification of the stiffness, mass and damping properties of linear structural systems, based on multiple sets of measurement data originating from static and dynamic tests, is considered. A strategy, within the framework of Kalman filter based dynamic state estimation, is proposed to tackle this problem. The static tests consist of measurements of the response of the structure to slowly moving loads and to static loads whose magnitudes are varied incrementally; the dynamic tests involve measurement of a few elements of the frequency response function (FRF) matrix. These measurements are taken to be contaminated by additive Gaussian noise. An artificial independent variable τ, which simultaneously parameterizes the point of application of the moving load, the magnitude of the incrementally varied static load and the driving frequency in the FRFs, is introduced. The state vector is taken to consist of the system parameters to be identified. The fact that these parameters are independent of the variable τ is taken to constitute the set of ‘process’ equations. The measurement equations are derived based on the mechanics of the problem, and quantities such as displacements and/or strains are taken to be measured. A recursive algorithm that employs a linearization strategy based on Neumann’s expansion of the structural static and dynamic stiffness matrices, and which provides posterior estimates of the mean and covariance of the unknown system parameters, is developed. The satisfactory performance of the proposed approach is illustrated by considering the problem of the identification of the dynamic properties of an inhomogeneous beam and the axial rigidities of the members of a truss structure.
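The parameter-as-state idea can be illustrated with a one-parameter EKF-style toy: a single spring stiffness identified from displacements under incrementally varied static loads. All numbers are invented, and none of the paper's Neumann-expansion machinery is used.

```python
# Toy Kalman-filter identification: the unknown stiffness k is the "state",
# the process equation says k is constant, and each measurement is a
# displacement u = F / k under an incrementally varied static load F
# (playing the role of the artificial variable tau in the abstract).
k_true = 2000.0                      # N/m: hidden stiffness to recover
loads = [100.0 * i for i in range(1, 21)]

k_hat, P = 1000.0, 500.0 ** 2        # prior mean and variance of k
R = 1e-4                             # assumed measurement noise variance (m^2)

for F in loads:
    z = F / k_true                   # measurement (noise-free for determinism)
    H = -F / k_hat ** 2              # linearized dh/dk for h(k) = F / k
    S = H * P * H + R                # innovation variance
    K = P * H / S                    # Kalman gain
    k_hat += K * (z - F / k_hat)     # measurement update
    P -= K * H * P                   # covariance update

print(round(k_hat))                  # estimate converges toward 2000
```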
Abstract:
The present paper deals with a CAE-based study of the impact of jacketed projectiles on single- and multi-layered metal armour plates using LS-DYNA. The validation of the finite element modelling procedure is mainly based on a mesh convergence study using both shell and solid elements for representing single-layered mild steel target plates. It is shown that the proper choice of mesh density and strain rate-dependent material properties is essential for an accurate prediction of projectile residual velocity. The modelling requirements are initially arrived at by correlating against test residual velocities for single-layered mild steel plates of different depths at impact velocities in the range of approximately 800-870 m/s. The efficacy of correlation is judged in terms of a 'correlation index', defined in the paper, for which values close to unity are desirable. The experience gained for single-layered plates is next used in simulating projectile impacts on multi-layered mild steel target plates, and once again a high degree of correlation with experimental residual velocities is observed. The study is repeated for single- and multi-layered aluminium target plates with a similar level of success in test residual velocity prediction. To the authors' best knowledge, the present comprehensive study shows in particular for the first time that, with a proper modelling approach, LS-DYNA can be used with a great degree of confidence in designing perforation-resistant single- and multi-layered metallic armour plates.