986 results for Structural testing
Abstract:
Due to frequent accidental damage to prestressed concrete (P/C) bridges caused by impact from overheight vehicles, a project was initiated to evaluate the strength and load distribution characteristics of damaged P/C bridges. A comprehensive literature review was conducted; only a few references pertain to the assessment and repair of damaged P/C beams, and no reference was found that involves testing of a damaged bridge as well as the damaged beams following their removal. Structural testing of two bridges was conducted in the field. The first bridge tested, damaged by accidental impact, was the westbound (WB) I-680 bridge in Beebeetown, Iowa. This bridge had significant damage to the first and second beams, consisting of extensive loss of section and the exposure of numerous strands. The second bridge, the adjacent eastbound (EB) structure, was used as a baseline for the behavior of an undamaged bridge. Load testing showed that load was being redistributed away from the damaged beams of the WB bridge. Subsequent to these tests, the damaged beams in the WB bridge were replaced and the bridge retested. The repaired WB bridge behaved, for the most part, like the undamaged EB bridge, indicating that the beam replacement restored the original live load distribution patterns. A large-scale bridge model constructed for a previous project was tested to study the changes in behavior due to incrementally applied damage, consisting initially of concrete removal only and then of concrete removal and strand damage. A total of 180 tests were conducted, with the general conclusion that for exterior beam damage, the bridge load distribution characteristics were relatively unchanged until significant portions of the bottom flange were removed along with several strands. A large amount of the total moment applied to the exterior beam was redistributed to the interior beam of the model.
Four isolated P/C beams were tested, two removed from the Beebeetown bridge and two from the aforementioned bridge model. For the Beebeetown beams, the first beam, Beam 1W, was tested in an "as removed" condition to obtain the baseline characteristics of a damaged beam. The second beam, Beam 2W, was retrofitted with carbon fiber reinforced polymer (CFRP) longitudinal plates and transverse stirrups to strengthen the section. The strengthened beam was 12% stronger than Beam 1W. Beams 1 and 2 from the bridge model were also tested. Beam 1 was not damaged and served as the baseline for the behavior of a "new" beam, while Beam 2 was damaged and repaired, again using CFRP plates. Prior to debonding of the plates from the beam, the behavior of Beams 1 and 2 was similar. The retrofitted beam attained a capacity greater than that of a theoretically undamaged beam prior to plate debonding. Analytical models were created for the undamaged and damaged center spans of the WB bridge; stiffened plate and refined grillage models were used. Both models accurately predicted the deflections in the tested bridge and should be similarly accurate in modeling other P/C bridges. The moment fractions per beam were computed using both models for the undamaged and damaged bridges. The damaged model indicates a significant decrease in moment in the damaged beams and a redistribution of load to the adjacent curb and rail as well as to the undamaged beam lines.
Abstract:
The verification and validation activity plays a fundamental role in improving software quality. Determining which are the most effective techniques for carrying out this activity has been an aspiration of experimental software engineering researchers for years. This paper reports a controlled experiment evaluating the effectiveness of two unit testing techniques: the functional testing technique known as equivalence partitioning (EP) and the control-flow structural testing technique known as branch testing (BT). This experiment is a literal replication of Juristo et al. (2013). Both experiments serve the purpose of determining whether the effectiveness of BT and EP varies depending on whether or not the faults are visible to the technique (InScope or OutScope, respectively). We have used the materials, design and procedures of the original experiment, but in order to adapt the experiment to the context we have: (1) reduced the number of studied techniques from 3 to 2; (2) assigned subjects to experimental groups by means of stratified randomization to balance the influence of programming experience; (3) localized the experimental materials; and (4) adapted the training duration. We ran the replication at the Escuela Politécnica del Ejército Sede Latacunga (ESPEL) as part of a software verification & validation course. The experimental subjects were 23 master's degree students. EP is more effective than BT at detecting InScope faults. The session/program and group variables are found to have significant effects. BT is more effective than EP at detecting OutScope faults. The session/program and group variables have no effect in this case. The results of the replication and the original experiment are similar with respect to testing techniques. There are some inconsistencies with respect to the group factor; they can be explained by small sample effects.
The results for the session/program factor are inconsistent for InScope faults. We believe that these differences are due to a combination of the fatigue effect and a technique × program interaction. Although we were able to reproduce the main effects, the changes to the design of the original experiment make it impossible to identify the causes of the discrepancies for sure. We believe that further replications closely resembling the original experiment should be conducted to improve our understanding of the phenomena under study.
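To make the contrast between the two techniques concrete, here is a minimal sketch on a toy function (a hypothetical example, not material from the experiment): equivalence partitioning derives one test per input class in the specification, while branch testing derives tests so that every branch outcome in the code is exercised.

```python
def clamp(x, lo=0, hi=10):
    """Clamp x into the interval [lo, hi]."""
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Equivalence partitioning (EP): one representative input per spec-level
# partition -- below the range, inside the range, above the range.
ep_cases = [(-5, 0), (5, 5), (15, 10)]

# Branch testing (BT): inputs chosen so each `if` takes both its true and
# false outcomes at least once across the suite.
bt_cases = [(-1, 0), (11, 10), (3, 3)]

for x, expected in ep_cases + bt_cases:
    assert clamp(x) == expected
```

Note how a fault hidden in a branch the specification never mentions would be OutScope for EP but InScope for BT, which is exactly the visibility distinction the experiment manipulates.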
Abstract:
Structural Health Monitoring (SHM) is an emerging area of research associated with improving the maintainability and safety of aerospace, civil and mechanical infrastructures by means of monitoring and damage detection. The guided wave structural testing method is an approach for health monitoring of plate-like structures using smart-material piezoelectric transducers. Among the many kinds of transducers, those with a beam steering feature can perform more accurate surface interrogation. A frequency steerable acoustic transducer (FSAT) is capable of beam steering by varying the input frequency and consequently can detect and localize damage in structures. Guided wave inspection is typically performed through phased arrays, which feature a large number of piezoelectric transducers, with attendant complexity and limitations. To overcome the weight penalty, the complex circuitry and the maintenance concerns associated with wiring a large number of transducers, new FSATs are proposed that present inherent directional capabilities when generating and sensing elastic waves. The first generation of the Spiral FSAT has two main limitations: first, waves are excited or sensed in one direction and in the opposite one (180° ambiguity); second, only a relatively crude approximation of the desired directivity has been attained. A second generation of the Spiral FSAT is proposed to overcome the first generation's limitations. Simulation tools become all the more important when a new idea is proposed and begins to be developed. The shaped transducer concept, especially the second generation of the Spiral FSAT, is a novel idea in guided-wave-based Structural Health Monitoring systems; hence a simulation tool is a necessity to develop the various design aspects of this innovative transducer. In this work, numerical simulation of the 1st and 2nd generations of the Spiral FSAT has been conducted to prove the directional capability of the excited guided waves through a plate-like structure.
Abstract:
Master's degree in Electrical and Computer Engineering - Specialization in Automation and Systems
Abstract:
Aspect-oriented programming (AOP) is a promising technology that supports separation of crosscutting concerns (i.e., functionality that tends to be tangled with, and scattered through, the rest of the system). In AOP, a method-like construct named advice is applied to join points in the system through a special construct named pointcut. This mechanism supports the modularization of crosscutting behavior; however, since the added interactions are not explicit in the source code, it is hard to ensure their correctness. To tackle this problem, this paper presents a rigorous coverage analysis approach to ensure exercising the logic of each advice - statements, branches, and def-use pairs - at each affected join point. To make this analysis possible, a structural model based on Java bytecode - called the PointCut-based Def-Use Graph (PCDU) - is proposed, along with three integration testing criteria. Theoretical, empirical, and exploratory studies involving 12 aspect-oriented programs and several fault examples present evidence of the feasibility and effectiveness of the proposed approach.
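The advice/pointcut mechanism described above can be loosely sketched with Python decorators. This is only a hedged analogy (the paper targets Java bytecode and AspectJ-style constructs, and Python has no true pointcuts): the decorator plays the role of before/after advice woven at a single join point, and the trace list makes the implicit interaction observable, which is precisely what the coverage analysis aims to exercise.

```python
import functools

trace = []  # records the advice executions that are invisible in the base code

def logging_advice(func):
    """Before/after 'advice' applied at a function join point (decorator analogy)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        trace.append(f"before {func.__name__}")   # before advice
        result = func(*args, **kwargs)
        trace.append(f"after {func.__name__}")    # after advice
        return result
    return wrapper

@logging_advice  # the decoration selects the join point, like a pointcut match
def transfer(amount):
    return amount * 2

assert transfer(5) == 10
assert trace == ["before transfer", "after transfer"]
```

A test suite satisfying the paper's criteria would have to cover the statements, branches, and def-use pairs inside `logging_advice`'s wrapper at every decorated (affected) function, not just at one of them.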
Abstract:
Composites are fast becoming a cost-effective option in the design of engineering structures across a broad range of applications. If the strength-to-weight benefits of these material systems can be exploited, and the challenges in developing lower-cost manufacturing methods overcome, then advanced composite systems will play a bigger role in the diverse range of sectors outside the aerospace industry, where they have been used for decades.
This paper presents physical testing results that showcase the advantages of GRP (Glass Reinforced Plastics), such as the ability to endure loading with minimal deformation. The testing involved a cross-comparison of GRP grating versus a GRP-encapsulated foam core. The data gained in this paper will then be coupled with design optimization (utilising model simulation) to propose layup alterations that meet the specified load classifications.
Abstract:
We derive necessary and sufficient conditions under which a set of variables is informationally sufficient, i.e. it contains enough information to estimate the structural shocks with a VAR model. Based on such conditions, we suggest a procedure to test for informational sufficiency. Moreover, we show how to amend the VAR if informational sufficiency is rejected. We apply our procedure to a VAR including TFP, unemployment and per-capita hours worked. We find that the three variables are not informationally sufficient. When adding missing information, the effects of technology shocks change dramatically.
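As background to the setup above, here is a minimal sketch of estimating a VAR(1) by OLS on simulated data and recovering orthogonalized shocks via a Cholesky factorization of the residual covariance. This is an assumed, generic illustration of the VAR machinery, not the paper's sufficiency test or its TFP/unemployment/hours dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + u_t, with unit-variance noise.
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.3]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(size=2)

# OLS estimate of A: regress y_t on y_{t-1}.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Reduced-form residuals u_t, and Cholesky-identified shocks e_t = P^{-1} u_t,
# which are orthogonal with unit variance by construction.
U = Y - X @ A_hat.T
P = np.linalg.cholesky(np.cov(U.T))
E = U @ np.linalg.inv(P).T
```

Informational insufficiency means the econometrician's `y` omits state variables, so the recovered `E` mixes current and past structural shocks; the paper's test and the VAR amendment are designed to detect and repair exactly that.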
Abstract:
In this paper we examine the order of integration of EuroSterling interest rates by employing techniques that can allow for a structural break under the null and/or alternative hypothesis of the unit-root tests. In light of these results, we investigate the cointegrating relationship implied by the single, linear expectations hypothesis of the term structure of interest rates employing two techniques, one of which allows for the possibility of a break in the mean of the cointegrating relationship. The aim of the paper is to investigate whether or not the interest rate series can be viewed as I(1) processes and furthermore, to consider whether there has been a structural break in the series. We also determine whether, if we allow for a break in the cointegration analysis, the results are consistent with those obtained when a break is not allowed for. The main results reported in this paper support the conjecture that the ‘short’ Euro-currency rates are characterised as I(1) series that exhibit a structural break on or near Black Wednesday, 16 September 1992, whereas the ‘long’ rates are I(1) series that do not support the presence of a structural break. The evidence from the cointegration analysis suggests that tests of the expectations hypothesis based on data sets that include the ERM crisis period, or a period that includes a structural break, might be problematic if the structural break is not explicitly taken into account in the testing framework.
Abstract:
This paper considers the effect of GARCH errors on the tests proposed by Perron (1997) for a unit root in the presence of a structural break. We assess the impact of degeneracy and integratedness of the conditional variance individually and find that, apart from in the limit, the testing procedure is insensitive to the degree of degeneracy but does exhibit increasing over-sizing as the process becomes more integrated. When we consider the GARCH specifications that we are likely to encounter in empirical research, we find that the Perron tests are reasonably robust to the presence of GARCH and do not suffer from severe over- or under-rejection of a correct null hypothesis.
Abstract:
The main objective of this paper is to discuss maximum likelihood inference for the comparative structural calibration model (Barnett, in Biometrics 25:129-142, 1969), which is frequently used in the problem of assessing the relative calibrations and relative accuracies of a set of p instruments, each designed to measure the same characteristic on a common group of n experimental units. We consider asymptotic tests to answer the outlined questions. The methodology is applied to a real data set and a small simulation study is presented.
Abstract:
This paper investigates whether a multivariate cointegrated process with structural change can describe the Brazilian term structure of interest rate data from 1995 to 2006. In this work the break points and the number of cointegrating vectors are assumed to be known. The estimated model has four regimes, only three of which are statistically different: the first starts at the beginning of the sample and runs until September 1997; the second from October 1997 until December 1998; the third from January 1999 until the end of the sample. Monthly data are used. Models that allow for some similarities across the regimes are also estimated and tested. The models are estimated using the Generalized Reduced-Rank Regressions developed by Hansen (2003). All imposed restrictions can be tested using likelihood ratio tests with standard asymptotic chi-squared distributions. The results of the paper show evidence in favor of the long-run implications of the expectations hypothesis for Brazil.
Abstract:
We describe the design, manufacturing, and testing results of a Nb3Sn superconducting coil in which TiAlV alloys were used instead of stainless steel to reduce the magnetization contribution caused by the heat treatment for A15 Nb3Sn phase formation, which affects the magnetic field homogeneity. Prior to coil manufacturing, several structural materials were studied and evaluated in terms of their mechanical and magnetic properties in as-worked, welded, and heat-treated conditions. The manufacturing process employed the wind-and-react technique followed by vacuum-pressure impregnation (VPI) at 1 MPa. The critical steps of the manufacturing process, besides the heat treatment and impregnation, are the wire splicing and joint manufacturing, in which copper posts supported by Si3N4 ceramic were used. The coil was tested with and without a background NbTi coil, and the results showed performance exceeding the design quench current, confirming the successful coil construction.