897 results for "Load rejection test data"
Abstract:
Particulate composites based on polymer matrices generally contain fillers, especially those that are abundantly available and cheap. Their inclusion not only improves the properties but also makes the system cost-wise viable. In the present study, fly ash was tried as a filler in epoxy. The filler particle surfaces were modified using three chemical surface treatment techniques in order to elucidate the effect of interfacial adhesion on the mechanical properties of these composites. Compatibilizing the filler with a silane coupling agent yielded the best compression strength values. Scanning Electron Microscopy (SEM) was used to characterize and supplement the mechanical test data.
Abstract:
Reaction wheel assemblies (RWAs) are momentum exchange devices used in the fine pointing control of spacecraft. Even though the spinning rotor of a reaction wheel is precisely balanced to minimize vibration emitted due to static and dynamic imbalances, precision instrument payloads placed in the neighborhood can still be severely affected by the residual vibration forces emitted by RWAs. The vibration level at sensitive payloads can be reduced by placing the RWA on appropriate mountings. A low-frequency flexible space platform consisting of folded continuous beams has been designed to serve as a mount for isolating a disturbance source in spacecraft equipped with precision payloads. Analytical and experimental investigations have been carried out to test the usefulness of the low-frequency flexible platform as a vibration isolator for RWAs. Measurements and tests have been conducted at varying wheel speeds to quantify and characterize the amount of isolation obtained from the vibration generated by the reaction wheel. These tests were further extended to other variants of similar design in order to bring out the best isolation for given disturbance loads. Both time and frequency domain analyses of the test data show that the flexible beam platform is quite effective as a mount for reaction wheels and can be used in spacecraft for passive vibration control. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Accurate estimation of mass transport parameters is necessary for the overall design and evaluation of waste disposal facilities. The mass transport parameters, such as the effective diffusion coefficient, retardation factor and diffusion-accessible porosity, are estimated from observed diffusion data by inverse analysis. Recently, the particle swarm optimization (PSO) algorithm has been used to develop an inverse model for estimating these parameters, alleviating existing limitations of the inverse analysis. However, a PSO solver yields different solutions in successive runs because of the stochastic nature of the algorithm and the presence of multiple optimum solutions. Thus the mean solution estimated from independent runs is significantly different from the best solution. In this paper, two variants of the PSO algorithm are proposed to improve the performance of the inverse analysis. The proposed algorithms use a perturbation equation for the gbest particle to gain information around the gbest region of the search space, and catfish particles in alternate iterations to improve exploration capabilities. A performance comparison of the developed solvers on synthetic test data for two different diffusion problems reveals that one of the proposed solvers, CPPSO, significantly improves overall performance with improved best, worst and mean fitness values. The developed solver is further used to estimate transport parameters from 12 sets of experimentally observed diffusion data obtained from three diffusion problems, and the results are compared with published values from the literature. The proposed solver is quick, simple and robust on different diffusion problems. (C) 2012 Elsevier Ltd. All rights reserved.
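The core loop underlying such solvers is plain PSO; the sketch below is a generic illustration with a stand-in quadratic misfit and hypothetical normalized parameters, not the paper's CPPSO variant (no perturbation equation or catfish particles) and not its diffusion data.

```python
import random

def pso(objective, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (a generic sketch, not CPPSO)."""
    random.seed(42)  # fixed seed so the illustration is repeatable
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in misfit: squared error between trial and "observed" parameters,
# all scaled to [0, 1] (hypothetical values, e.g. normalized D_e, R_d, porosity).
target = [0.3, 0.7, 0.4]
misfit = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))

best, best_val = pso(misfit, bounds=[(0.0, 1.0)] * 3)
```

Because the seed is fixed, repeated runs here coincide; removing it reproduces the run-to-run scatter that motivates the paper's variants.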
Abstract:
Accurate supersymmetric spectra are required to confront data from direct and indirect searches for supersymmetry. SuSeFLAV is a numerical tool capable of computing supersymmetric spectra precisely for various supersymmetry breaking scenarios, applicable even in the presence of flavor violation. The program solves the MSSM RGEs with complete 3 x 3 flavor mixing at the 2-loop level and one-loop finite threshold corrections to all MSSM parameters, incorporating the radiative electroweak symmetry breaking conditions. The program also incorporates the Type-I seesaw mechanism with three massive right-handed neutrinos at user-defined mass scales and mixings. It also computes branching ratios of flavor violating processes such as l(j) -> l(i) gamma, l(j) -> 3 l(i), b -> s gamma, and supersymmetric contributions to flavor conserving quantities such as (g(mu) - 2). A large choice of executables suitable for various operations of the program is provided.
Program summary
Program title: SuSeFLAV
Catalogue identifier: AEOD_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOD_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License
No. of lines in distributed program, including test data, etc.: 76552
No. of bytes in distributed program, including test data, etc.: 582787
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Personal Computer, Work-Station
Operating system: Linux, Unix
Classification: 11.6
Nature of problem: Determination of the masses and mixings of supersymmetric particles within the context of the MSSM with conserved R-parity, with and without the presence of the Type-I seesaw. Inter-generational mixing is considered while calculating the mass spectrum. Supersymmetry breaking parameters are taken as inputs at a high scale specified by the mechanism of supersymmetry breaking.
RG equations including full inter-generational mixing are then used to evolve these parameters to the electroweak breaking scale. The low-energy supersymmetric spectrum is calculated at the scale where successful radiative electroweak symmetry breaking occurs. At the weak scale, standard model fermion masses and gauge couplings are determined including the supersymmetric radiative corrections. Once the spectrum is computed, the program proceeds to compute various lepton flavor violating observables (e.g., BR(mu -> e gamma), BR(tau -> mu gamma), etc.) at the weak scale.
Solution method: Two-loop RGEs with full 3 x 3 flavor mixing for all supersymmetry breaking parameters are used to compute the low-energy supersymmetric mass spectrum. An adaptive step size Runge-Kutta method is used to solve the RGEs numerically between the high scale and the electroweak breaking scale. An iterative procedure is employed to obtain a consistent radiative electroweak symmetry breaking condition. The masses of the supersymmetric particles are computed at 1-loop order. The third-generation SM particles and the gauge couplings are evaluated at 1-loop order including supersymmetric corrections. A further iteration of the full program is employed so that the SM masses and couplings are consistent with the supersymmetric particle spectrum.
Additional comments: Several executables are presented for the user.
Running time: 0.2 s on an Intel(R) Core(TM) i5 CPU 650 at 3.20 GHz. (c) 2012 Elsevier B.V. All rights reserved.
Abstract:
This study presents an overview of seismic microzonation and existing methodologies, together with a newly proposed methodology covering all aspects. Earlier seismic microzonation methods focused on parameters that affect structures or foundation-related problems, but seismic microzonation has generally come to be recognized as an important component of urban planning and disaster management. Seismic microzonation should therefore evaluate all possible hazards due to an earthquake and represent them by their spatial distribution. This paper presents a new methodology for seismic microzonation based on the location of the study area and the possible associated hazards. The new method consists of seven important steps, with a defined output for each step, and these steps are linked with each other. Addressing a single step and its result, as is widely practiced, may not constitute seismic microzonation. This paper also presents the importance of geotechnical aspects in seismic microzonation and how they affect the final map. For the case study, seismic hazard values at rock level are estimated considering the seismotectonic parameters of the region using deterministic and probabilistic seismic hazard analysis. Surface-level hazard values are estimated considering a site-specific study and local site effects based on site classification/characterization. The liquefaction hazard is estimated using standard penetration test data. These hazard parameters are integrated in a Geographical Information System (GIS) using the Analytic Hierarchy Process (AHP) and used to estimate a hazard index. The hazard index is arrived at by following a multi-criteria evaluation technique, AHP, in which each theme and its features are assigned weights and then ranked according to a consensus opinion about their relative significance to the seismic hazard.
The hazard values are integrated through spatial union to obtain the deterministic microzonation map and the probabilistic microzonation map for a specific return period. Seismological parameters are more widely used for microzonation than geotechnical parameters, but the present study shows that the hazard index values depend on site-specific geotechnical parameters.
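The AHP weighting step can be illustrated in a few lines. The pairwise comparison matrix, theme choices and cell ratings below are hypothetical, and the row-geometric-mean approximation stands in for the full principal-eigenvector method.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise
    comparison matrix via normalized row geometric means."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 3-theme comparison (Saaty 1-9 scale):
# ground shaking vs. site amplification vs. liquefaction potential.
pairwise = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]
w = ahp_weights(pairwise)

# Hazard index at one grid cell: weighted sum of normalized theme ratings.
ratings = [0.8, 0.5, 0.2]   # illustrative normalized hazard levels in [0, 1]
hazard_index = sum(wi * ri for wi, ri in zip(w, ratings))
```

In a GIS workflow this weighted sum would be evaluated per grid cell and the results mapped; a consistency-ratio check on the comparison matrix would normally precede the weighting.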
Abstract:
This work proposes a boosting-based transfer learning approach for head-pose classification from multiple low-resolution views. Head-pose classification performance is adversely affected when the source (training) and target (test) data arise from different distributions (due to changes in face appearance, lighting, etc.). Under such conditions, we employ Xferboost, a Logitboost-based transfer learning framework that integrates knowledge from a few labeled target samples with the source model to effectively minimize misclassifications on the target data. Experiments confirm that the Xferboost framework can improve classification performance by up to 6% when knowledge is transferred between the CLEAR and FBK four-view head-pose datasets.
Abstract:
This paper highlights the seismic microzonation carried out for a nuclear power plant site. Nuclear power plants are considered among the most important and critical structures and are designed to withstand all natural disasters. Seismic microzonation is the process of demarcating a region into individual areas having different levels of various seismic hazards. This helps in identifying regions of high seismic hazard, which is vital for engineering design and land-use planning. The main objective of this paper is to carry out the seismic microzonation of a nuclear power plant site situated on the east coast of South India, based on the spatial distribution of the hazard index value. The hazard index represents the consolidated effect of all major earthquake hazards and hazard-influencing parameters. The present work will provide new directions for assessing the seismic hazards of new power plant sites in the country. The major seismic hazards considered for the evaluation of the hazard index are (1) the intensity of ground shaking at bedrock, (2) site amplification, (3) liquefaction potential and (4) the predominant frequency of the earthquake motion at the surface. The intensity of ground shaking in terms of peak horizontal acceleration (PHA) was estimated for the study area using both deterministic and probabilistic approaches with a logic tree methodology. The site characterization of the study area has been carried out using the multichannel analysis of surface waves test and available borehole data. One-dimensional ground response analysis was carried out at major locations within the study area to evaluate the PHA and spectral accelerations at the ground surface. Based on the standard penetration test data, deterministic as well as probabilistic liquefaction hazard analysis has been carried out for the entire study area.
Finally, all the major earthquake hazards estimated above, together with other significant parameters representing the local geology, were integrated using the analytic hierarchy process, and a hazard index map for the study area was prepared. Maps showing the spatial variation of the seismic hazards (intensity of ground shaking, liquefaction potential and predominant frequency) and the hazard index are presented in this work.
Abstract:
Two of the aims of laboratory one-dimensional consolidation tests are prediction of the end of primary settlement, and determination of the coefficient of consolidation of soils required for the time rate of consolidation analysis from time-compression data. Of the many methods documented in the literature to achieve these aims, Asaoka's method is a simple and useful tool, and yet the most neglected one since its inception in the geotechnical engineering literature more than three decades ago. This paper appraises Asaoka's method, originally proposed for the field prediction of ultimate settlement, from the perspective of laboratory consolidation analysis along with recent developments. It is shown through experimental illustrations that Asaoka's method is simpler than the conventional and popular methods, and makes a satisfactory prediction of both the end of primary compression and the coefficient of consolidation from laboratory one-dimensional consolidation test data.
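Asaoka's construction can be sketched briefly: settlements sampled at equal time intervals are regressed as s_i = b0 + b1*s_{i-1}, and the fixed point b0/(1 - b1) of that line estimates the ultimate (end-of-primary) settlement. The synthetic consolidation curve below is purely illustrative, not the paper's test data.

```python
import math

def asaoka_ultimate(settlements):
    """Asaoka's method: least-squares fit of s_i = b0 + b1*s_{i-1} for
    settlements read at equal time intervals; the fixed point b0/(1 - b1)
    estimates the ultimate settlement."""
    x, y = settlements[:-1], settlements[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0 / (1.0 - b1)

# Synthetic curve s(t) = s_ult * (1 - exp(-t/tau)), sampled at equal steps.
s_ult, tau = 12.0, 5.0          # hypothetical ultimate settlement (mm) and time constant
s = [s_ult * (1 - math.exp(-t / tau)) for t in range(1, 15)]
est = asaoka_ultimate(s)        # recovers s_ult for this exponential decay
```

For an exactly exponential decay the (s_{i-1}, s_i) points lie on a straight line, so the fixed point reproduces the ultimate value; with real laboratory data the fit is applied to the later, near-linear portion of the plot.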
Abstract:
In this work, a methodology to achieve ordinary-, medium-, and high-strength self-consolidating concrete (SCC) with and without mineral additions is proposed. The inclusion of Class F fly ash increases the density of SCC but retards the hydration rate, resulting in substantial strength gain only after 28 days. This delayed strength gain due to the use of fly ash has been considered in the mixture design model. The accuracy of the proposed mixture design model is validated with the present test data and mixture and strength data obtained from diverse sources reported in the literature.
Abstract:
Damage mechanisms in unidirectional (UD) and bi-directional (BD) woven carbon fiber reinforced polymer (CFRP) laminates subjected to four-point flexure, under both static and fatigue loading, were studied. The damage progression in the composites was monitored by observing the slopes of the load vs. deflection data, which represent the stiffness of the given specimen geometry, over a number of cycles. It was observed that the unidirectional composites exhibit a gradual loss in stiffness, whereas the bidirectional woven composites show a relatively quicker loss during stage II of fatigue damage progression. Both the static and the fatigue failures in unidirectional CFRP composites originate from the generation of cracks on the compression face, while in bidirectional woven composites the damage ensues from both the compression and the tensile faces. These observations are supported by a detailed fractographic analysis.
Abstract:
Restricted Boltzmann Machines (RBMs) can be used either as classifiers or as generative models. The quality of a generative RBM is measured through the average log-likelihood on test data. Due to the high computational complexity of evaluating the partition function, exact calculation of the test log-likelihood is very difficult. In recent years, several estimation methods have been suggested for approximate computation of the test log-likelihood. In this paper we present an empirical comparison of the main estimation methods, namely the AIS algorithm for estimating the partition function, the CSL method for directly estimating the log-likelihood, and the RAISE algorithm that combines these two ideas.
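For a toy binary RBM, the quantity these methods approximate can be computed exactly by brute-force enumeration, which makes the intractability concrete: the partition function sums over 2^n visible states. The weights and test vectors below are random and purely illustrative.

```python
import itertools
import math
import random

def free_energy(v, W, b, c):
    """F(v) = -b.v - sum_j log(1 + exp(c_j + v.W[:, j])) for a binary RBM
    with visible bias b, hidden bias c, and weight matrix W (n_vis x n_hid)."""
    act = sum(bi * vi for bi, vi in zip(b, v))
    for j in range(len(c)):
        pre = c[j] + sum(v[i] * W[i][j] for i in range(len(v)))
        act += math.log1p(math.exp(pre))
    return -act

def exact_avg_log_likelihood(test_v, W, b, c):
    """Exact average test log-likelihood, log p(v) = -F(v) - log Z, obtained
    by enumerating all 2^n visible states. Feasible only for tiny models;
    AIS, CSL and RAISE exist precisely because this blows up for real ones."""
    n = len(b)
    logZ = math.log(sum(math.exp(-free_energy(v, W, b, c))
                        for v in itertools.product([0, 1], repeat=n)))
    return sum(-free_energy(v, W, b, c) - logZ for v in test_v) / len(test_v)

random.seed(0)
n_vis, n_hid = 4, 3     # tiny illustrative model
W = [[random.gauss(0, 0.1) for _ in range(n_hid)] for _ in range(n_vis)]
b, c = [0.0] * n_vis, [0.0] * n_hid
test_v = [(1, 0, 1, 0), (0, 1, 1, 1)]
avg_ll = exact_avg_log_likelihood(test_v, W, b, c)
```

With near-zero weights the model is close to uniform over the 16 visible states, so the average log-likelihood sits near -4*log(2); larger models make the 2^n sum in log Z the bottleneck that the estimators replace.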
Abstract:
The Monte Carlo method is used to simulate the surface fatigue crack growth rate for the offshore structural steel E36-Z35, and to determine the distributions and relevance of the parameters in the Paris equation. By this method, the time and cost of fatigue crack propagation testing can be reduced. The application of the method is demonstrated using four sets of fatigue crack propagation data for the offshore structural steel E36-Z35. A comparison of the test data with the theoretical prediction for the surface crack growth rate shows that the application of the simulation method to fatigue crack propagation tests is successful.
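A minimal version of such a simulation draws Paris-law parameters from assumed distributions and integrates the crack growth life for each draw. The distributions, geometry factor and load below are hypothetical placeholders, not the E36-Z35 test values.

```python
import math
import random

def cycles_to_grow(C, m, a0, af, dstress, Y=1.12, steps=2000):
    """Integrate the Paris law da/dN = C * (dK)^m from crack depth a0 to af,
    with dK = Y * dstress * sqrt(pi * a), by midpoint-rule integration.
    Units assumed: a in m, dstress in MPa, dK in MPa*sqrt(m)."""
    da = (af - a0) / steps
    n_cycles = 0.0
    for k in range(steps):
        a = a0 + (k + 0.5) * da
        dK = Y * dstress * math.sqrt(math.pi * a)
        n_cycles += da / (C * dK ** m)
    return n_cycles

random.seed(1)
# Hypothetical parameter distributions: lognormal C, normal m.
lives = []
for _ in range(1000):
    C = math.exp(random.gauss(math.log(1e-11), 0.3))
    m = random.gauss(3.0, 0.1)
    lives.append(cycles_to_grow(C, m, a0=1e-3, af=10e-3, dstress=100.0))
mean_life = sum(lives) / len(lives)
```

Sampling C and m jointly (they are typically negatively correlated in test data) and repeating the integration yields a simulated life distribution that can be compared against measured crack growth curves.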
Abstract:
A mathematical model for rain infiltration in a rock-soil slope has been established and solved using the finite element method. The unsteady water infiltration process has been simulated to obtain the water content in both homogeneous and heterogeneous media. The simulated results show that rock blocks in the rock-soil slope can cause the wetting front to move faster. If the rain intensity is increased while other conditions remain the same, the saturated region forms more quickly. If the rain intensity is held constant, the formation of the saturated region can be accelerated by properly increasing the vertical filtration rate of the rock-soil slope. However, if the vertical filtration rate is far greater than the rain intensity, it will be difficult for a saturated region to form in the rock-soil slope. The numerical method was verified by comparing the calculated results with field test data.
Abstract:
The propagation of numerous microcracks in a metal matrix composite, Al/SiCp, under impact loading was investigated. The test data were obtained with a specially designed impact experimental approach. Particular emphasis was placed on the analysis of the density, nucleation locations and distributions of the microcracks, as well as the microstructural effects of the original composite. The types of microcracks or debonding nucleated in the tested composite depended on the stress level and its duration. The distributions of the microcracks depended on those of the microstructures of the tested composite, while the total number of microcracks per unit area and unit duration was controlled by the stress level. It is also analysed and explained why the propagation velocity was much lower than theoretical estimates for elastic solids and why the microcrack propagation velocities increased with increasing stress level in the current experiments.
Abstract:
A significant cost in obtaining acoustic training data is the generation of accurate transcriptions. For some sources closed-caption data is available, which allows the use of lightly supervised training techniques. However, for some sources and languages closed captions are not available. In these cases unsupervised training techniques must be used. This paper examines the use of unsupervised techniques for discriminative training. In unsupervised training, automatic transcriptions from a recognition system are used for training. As these transcriptions may be errorful, data selection may be useful. Two forms of selection are described: one to remove non-target-language shows, the other to remove segments with low confidence. Experiments were carried out on a Mandarin transcription task. Two types of test data were considered: Broadcast News (BN) and Broadcast Conversations (BC). Results show that the gains from unsupervised discriminative training are highly dependent on the accuracy of the automatic transcriptions. © 2007 IEEE.
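The second form of selection, dropping low-confidence segments, can be sketched as a simple filter over automatically transcribed segments. The data layout, segment names and threshold below are assumptions for illustration, not the paper's setup.

```python
def select_segments(segments, min_conf=0.7):
    """Keep only segments whose average per-word recognizer confidence
    meets a threshold; low-confidence (likely errorful) segments are
    excluded from discriminative training."""
    kept = []
    for seg in segments:
        confs = [conf for _, conf in seg["words"]]
        if confs and sum(confs) / len(confs) >= min_conf:
            kept.append(seg)
    return kept

# Hypothetical automatic transcriptions: (word, confidence) pairs per segment.
segments = [
    {"id": "bn_001", "words": [("北京", 0.92), ("新闻", 0.88)]},   # high confidence
    {"id": "bc_004", "words": [("那个", 0.41), ("就是", 0.55)]},   # low confidence
]
kept = select_segments(segments)   # only the high-confidence segment survives
```

In practice the threshold trades training-set size against transcription accuracy, which is exactly the dependence the experiments in the paper quantify.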