5 results for Structural damage detection

in Digital Commons - Michigan Tech


Relevance:

100.00%

Publisher:

Abstract:

The acoustic emission (AE) technique, a non-intrusive and nondestructive evaluation technique, acquires and analyzes the signals emitted by the deformation or fracture of materials and structures under service loading. The AE technique has been successfully applied to damage detection in various materials such as metals, alloys, concrete, polymers, and other composite materials. In this study, the AE technique was used to detect crack behavior in concrete specimens under mechanical and environmental frost loading. The AE system, purchased from Mistras Group, consists of a low-frequency AE sensor, a computer-based data acquisition device, and a preamplifier linking the sensor to the data acquisition device. The AE technique was applied to detect damage in the following laboratory tests: the pencil lead test, the mechanical three-point single-edge notched beam bending (SEB) test, and the freeze-thaw damage test. First, the pencil lead test was conducted to verify and quantify the attenuation of AE signals through concrete; the acquired signals also confirmed that the AE system was properly set up to detect damage in concrete. Second, the SEB test on a lab-prepared concrete beam was conducted using a Mechanical Testing System (MTS) together with the AE system. The cumulative AE events and the measured loading curves were both plotted against crack-tip opening displacement (CTOD). The detected AE events were found to be qualitatively correlated with the global force-displacement behavior of the specimen. A Weibull distribution was proposed to quantitatively describe the rupture probability density function, and linear regression was used to calibrate the Weibull parameters against the detected AE signals and to predict the rupture probability of the specimen as a function of CTOD. Finally, controlled concrete freeze-thaw cyclic tests were designed, with the AE technique planned to investigate the internal frost damage process of concrete specimens.
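The rupture-probability calibration described above can be illustrated with a short sketch. The following is a minimal example, assuming (as is common) a two-parameter Weibull cumulative distribution linearized for least-squares fitting; the CTOD values, event counts, and variable names are hypothetical and not taken from the thesis.

# Hedged sketch: calibrating a two-parameter Weibull rupture-probability model
# from cumulative AE event counts, assuming the empirical rupture probability
# at each CTOD value is the cumulative fraction of AE events.
import numpy as np

ctod_mm = np.array([0.02, 0.05, 0.08, 0.12, 0.18, 0.25, 0.35])  # hypothetical CTOD values
cum_events = np.array([3, 12, 30, 65, 110, 160, 195])            # hypothetical cumulative AE events

# Empirical rupture probability (kept strictly below 1 so the log transform is defined)
F = cum_events / (cum_events[-1] + 1.0)

# Linearized Weibull CDF: ln(-ln(1 - F)) = k*ln(x) - k*ln(lambda)
y = np.log(-np.log(1.0 - F))
x = np.log(ctod_mm)
k, intercept = np.polyfit(x, y, 1)   # slope = shape parameter k
lam = np.exp(-intercept / k)         # scale parameter lambda

def rupture_probability(ctod):
    """Predicted rupture probability as a function of CTOD (mm)."""
    return 1.0 - np.exp(-(ctod / lam) ** k)

print(f"shape k = {k:.2f}, scale lambda = {lam:.3f} mm")
print(f"P(rupture) at CTOD = 0.20 mm: {rupture_probability(0.20):.2f}")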

Relevance:

80.00%

Publisher:

Abstract:

Does a brain store thoughts and memories the way a computer saves its files? How can a single hit or fall erase all those memories? Brain mapping and traumatic brain injuries (TBIs) have become widely researched fields. Many researchers have studied TBIs in adult American football players; however, youth athletes have rarely been considered in these studies, despite the fact that American football enrolls far more high-school and collegiate players than adults. This research is an attempt to contribute to the field of youth TBIs. Earlier studies have related head kinematics (linear and angular accelerations) to TBIs; however, fewer studies have dealt with the brain kinetics (impact pressures and stresses) occurring during head-on collisions. National Operating Committee on Standards for Athletic Equipment (NOCSAE) drop tests were conducted to obtain linear impact accelerations, and the head impact contact pressures (HICP) calculated from them were applied to a validated finite element (FE) model. The results identified the lateral region of the head as the most vulnerable to damage at any drop height or impact distance, followed by the posterior region. The TBI tolerance levels deduced for lateral impact were 30 MPa in terms of von Mises stress and 18 MPa in terms of maximum principal stress, corresponding to a drop height of 2.625 ft. Drop heights beyond this value produce TBI-causing stress concentrations in the human head without any detectable structural damage to the brain tissue. These data can be used to design helmets that cushion the brain while also providing resistance to shear.
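The two stress measures used for the tolerance levels above can be computed directly from an element's stress tensor. The sketch below shows the standard von Mises and maximum-principal-stress calculations checked against the 30 MPa and 18 MPa thresholds quoted in the abstract; the example stress tensor is illustrative, not output from the FE head model.

# Hedged sketch: von Mises and maximum principal stress for a single element,
# compared against the lateral-impact tolerance levels quoted in the abstract.
import numpy as np

VON_MISES_LIMIT_MPA = 30.0
MAX_PRINCIPAL_LIMIT_MPA = 18.0

def von_mises(sigma):
    """Von Mises equivalent stress from a symmetric 3x3 Cauchy stress tensor (MPa)."""
    dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))

def max_principal(sigma):
    """Largest principal stress = largest eigenvalue of the stress tensor."""
    return np.max(np.linalg.eigvalsh(sigma))

# Hypothetical element stress state (MPa), not taken from the FE model
sigma = np.array([[22.0,  5.0,  0.0],
                  [ 5.0, 14.0,  2.0],
                  [ 0.0,  2.0,  8.0]])

vm = von_mises(sigma)
mp = max_principal(sigma)
print(f"von Mises = {vm:.1f} MPa, exceeds limit: {vm > VON_MISES_LIMIT_MPA}")
print(f"max principal = {mp:.1f} MPa, exceeds limit: {mp > MAX_PRINCIPAL_LIMIT_MPA}")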

Relevance:

30.00%

Publisher:

Abstract:

Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, an understanding of the technology's capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument, and as such care must be taken when establishing scan locations and resolution so that the features contributing to the analysis of bridge deck surface condition are captured at adequate resolution. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds, which provide quantitative surface condition information and thereby support more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each displaying a different degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects, including the location, area, and volume of spalls. The results were displayed visually and numerically in a user-friendly, web-based decision support tool that integrates prior bridge condition metrics for comparison. LiDAR data processing procedures, along with the strengths and limitations of point clouds for defining features useful in assessing bridge deck condition, are discussed. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the collected data are of high quality and useful for bridge condition evaluation. When collected properly, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
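As a rough illustration of how spall area and volume can be quantified from a deck point cloud, the sketch below fits a reference plane to the deck surface, rasterizes point depths below that plane, and sums cell areas and depths. This is a generic approach under assumed thresholds and cell size, not the ArcPy algorithm developed in the thesis.

# Hedged sketch: estimating spall area and volume from a bridge-deck point cloud
# by comparing point depths against a least-squares reference plane.
import numpy as np

def detect_spalls(points, cell_size=0.05, depth_threshold=0.006):
    """points: (N, 3) array of x, y, z in meters. Returns (area_m2, volume_m3)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    # Least-squares reference plane z = a*x + b*y + c for the intact deck surface
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    depth = (a * x + b * y + c) - z   # positive where the surface dips below the plane

    # Rasterize: keep the maximum depth observed in each grid cell
    ix = np.floor((x - x.min()) / cell_size).astype(int)
    iy = np.floor((y - y.min()) / cell_size).astype(int)
    grid = np.zeros((ix.max() + 1, iy.max() + 1))
    np.maximum.at(grid, (ix, iy), depth)

    # Cells deeper than the threshold are counted as spalled
    spall_cells = grid > depth_threshold
    area = spall_cells.sum() * cell_size ** 2
    volume = grid[spall_cells].sum() * cell_size ** 2
    return area, volume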

Relevance:

30.00%

Publisher:

Abstract:

Inductive-capacitive (LC) resonant circuit sensors are low-cost, wireless, durable, simple to fabricate, and battery-less. Consequently, they are well suited to sensing applications in harsh environments or in situations where large numbers of sensors are needed. They are also advantageous in applications where access to the sensor is limited or impossible, or when sensors are needed on a disposable basis. Owing to these advantages, LC sensors have been used to sense a variety of parameters, including humidity, temperature, chemical concentrations, pH, stress/pressure, strain, food quality, and even biological growth. However, current versions of the LC sensor technology are limited to sensing only one parameter. The purpose of this work is to develop new types of LC sensor systems that are simpler to fabricate (hence lower in cost) or capable of monitoring multiple parameters simultaneously. One design presented in this work, referred to as the multi-element LC sensor, measures multiple parameters simultaneously using a second capacitive element. Compared to conventional LC sensors, this design senses multiple parameters with a higher detection range than two independent sensors while maintaining the same overall sensor footprint. In addition, the two-element sensor does not suffer from the interference issues normally encountered when implementing two LC sensors in close proximity. Another design, the single-spiral inductive-capacitive sensor, utilizes the parasitic capacitance of a coil or spring structure to form a single-layer LC resonant circuit. Unlike conventional LC sensors, this design is truly planar, which simplifies its fabrication process and reduces sensor cost. Because of its simple layout, the sensor is easier and more cost-effective to embed in common building or packaging materials during manufacturing, adding functionality to existing products (such as drywall sheets) with only a minor impact on overall unit cost. These modifications to the LC sensor design significantly improve the functionality and commercial feasibility of the technology, especially for applications where a large array of sensors or multiple sensing parameters are required.
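The readout principle common to these designs is that the sensing element shifts the circuit's resonant frequency, f = 1 / (2*pi*sqrt(L*C)), which an external interrogation coil can measure wirelessly. The sketch below shows this relationship and how a measured resonance can be inverted to recover the sensing capacitance; the component values are assumed for illustration and are not taken from the sensors described here.

# Hedged sketch: LC resonance readout, with assumed (illustrative) component values.
import math

def resonant_frequency(L_henry, C_farad):
    """Resonant frequency of an ideal LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

def capacitance_from_frequency(L_henry, f_hz):
    """Invert the resonance to recover the sensing capacitance."""
    return 1.0 / (L_henry * (2.0 * math.pi * f_hz) ** 2)

L = 10e-6      # 10 uH planar spiral inductor (assumed value)
C0 = 100e-12   # 100 pF baseline sensing capacitance (assumed value)
f0 = resonant_frequency(L, C0)
print(f"baseline resonance: {f0 / 1e6:.2f} MHz")

# If the sensed parameter (e.g. humidity) raises the capacitance by 5 %,
# the resonance drops and the new capacitance can be recovered from it.
f_shifted = resonant_frequency(L, 1.05 * C0)
print(f"shifted resonance:  {f_shifted / 1e6:.2f} MHz")
print(f"recovered C: {capacitance_from_frequency(L, f_shifted) * 1e12:.1f} pF")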

Relevance:

30.00%

Publisher:

Abstract:

Credible spatial information characterizing the structure and site quality of forests is critical to sustainable forest management and planning, especially given the increasing demands on, and threats to, forest products and services. Forest managers and planners are required to evaluate forest conditions over a broad range of scales, contingent on operational or reporting requirements. Traditionally, forest inventory estimates are generated via a design-based approach that generalizes sample plot measurements to characterize an unknown population across a larger area of interest. However, field plot measurements are costly, and as a consequence their spatial coverage is limited. Remote sensing technologies have shown remarkable success in augmenting limited sample plot data to generate stand- and landscape-level spatial predictions of forest inventory attributes. Further enhancement of forest inventory approaches that couple field measurements with cutting-edge remotely sensed and geospatial datasets is essential to sustainable forest management. We evaluated a novel Random Forest-based k-Nearest Neighbors (RF-kNN) imputation approach that couples remote sensing and geospatial data with field inventory collected by different sampling methods to generate forest inventory information across large spatial extents. Forest inventory data collected by the Forest Inventory and Analysis (FIA) program of the US Forest Service were integrated with optical remote sensing and other geospatial datasets to produce biomass distribution maps for part of the Lake States and species-specific site index maps for the entire Lake States region. Targeting small-area application of state-of-the-art remote sensing, LiDAR (light detection and ranging) data were integrated with field data collected by an inexpensive method, variable plot sampling, in Michigan Tech's Ford Forest to derive a standing volume map in a cost-effective way. The outputs of the RF-kNN imputation were compared with independent validation datasets and with extant map products based on different sampling and modeling strategies. The RF-kNN modeling approach was found to be very effective, especially for large-area estimation, and produced results statistically equivalent to the field observations or to estimates derived from secondary data sources. The models are useful to resource managers for operational and strategic purposes.
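At its core, RF-kNN imputation uses a Random Forest to define similarity between mapped pixels and reference field plots, then imputes plot attributes from the most similar plots. The sketch below is a minimal, assumed scikit-learn version of that workflow (RF proximity from shared leaf nodes, mean of the k most proximate plots); it is not the exact implementation or data used in the thesis, and all variable names and values are hypothetical.

# Hedged sketch: RF-kNN imputation of a plot attribute (e.g. biomass) onto
# target pixels, using Random Forest leaf co-occurrence as the proximity measure.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_knn_impute(X_ref, y_ref, X_target, k=5, n_trees=500, seed=0):
    rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
    rf.fit(X_ref, y_ref)

    # Leaf indices for reference plots and target pixels: shape (n_samples, n_trees)
    leaves_ref = rf.apply(X_ref)
    leaves_tgt = rf.apply(X_target)

    imputed = np.empty(len(X_target))
    for i, leaf_row in enumerate(leaves_tgt):
        # Proximity to each reference plot = fraction of trees sharing a leaf
        proximity = (leaves_ref == leaf_row).mean(axis=1)
        nearest = np.argsort(proximity)[-k:]   # k most similar reference plots
        imputed[i] = y_ref[nearest].mean()     # impute (could also distance-weight)
    return imputed

# Usage with hypothetical data: spectral/terrain predictors and plot biomass (Mg/ha)
rng = np.random.default_rng(0)
X_ref = rng.random((200, 6))
y_ref = 150 * X_ref[:, 0] + 40 * X_ref[:, 3] + rng.normal(0, 5, 200)
X_target = rng.random((10, 6))
print(rf_knn_impute(X_ref, y_ref, X_target))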