11 results for adjacent buildings, spatial ground motions, seismic response, soil-structure interaction, power spectrum density function

in Digital Commons - Michigan Tech


Relevance:

100.00%

Abstract:

The amount and type of ground cover is an important characteristic to measure when collecting soil disturbance monitoring data after a timber harvest. Estimates of ground cover and bare soil can be used for tracking changes in invasive species, plant growth and regeneration, woody debris loadings, and the risk of surface water runoff and soil erosion. A new method of assessing ground cover and soil disturbance was recently published by the U.S. Forest Service, the Forest Soil Disturbance Monitoring Protocol (FSDMP). This protocol uses the frequency of cover types in small circular (15 cm) plots to compare the ground surface in pre- and post-harvest conditions. While both frequency and percent cover are common methods of describing vegetation, frequency has rarely been used to measure ground surface cover. In this study, three methods for assessing ground cover percent (step-point, 15 cm dia. circular, and 1x5 m visual plot estimates) were compared to the FSDMP frequency method. Results show that the FSDMP method provides significantly higher estimates of ground surface condition for most soil cover types, except coarse wood. The three cover methods had similar estimates for most cover values. The FSDMP method also produced the highest value when bare soil estimates were used to model erosion risk. In a person-hour analysis, estimating ground cover percent in 15 cm dia. plots required the least sampling time, and provided standard errors similar to the other cover estimates even at low sampling intensities (n=18). If ground cover estimates are desired in soil monitoring, then a small plot size (15 cm dia. circle) or a step-point method can provide a more accurate estimate in less time than the current FSDMP method.
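
A minimal sketch of the contrast between the two estimator families compared above, frequency of occurrence versus mean percent cover for a single cover type; the plot data are hypothetical placeholders, not FSDMP field data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 18  # a low sampling intensity, as in the person-hour analysis

# Hypothetical percent cover of one type (e.g., bare soil) in n small plots:
# the type occurs in roughly 60% of plots; where present, cover is modest.
present = rng.random(n) < 0.6
percent_cover = np.where(present, rng.beta(2, 8, size=n) * 100, 0.0)

# FSDMP-style frequency: fraction of plots in which the type occurs at all.
frequency = present.mean()

# Percent-cover estimate: mean cover across all plots, with its standard error.
mean_cover = percent_cover.mean()
se_cover = percent_cover.std(ddof=1) / np.sqrt(n)

print(f"frequency (FSDMP-style): {frequency:.2f}")
print(f"mean percent cover:      {mean_cover:.1f}% (SE {se_cover:.1f})")
```

Because frequency counts any occurrence within a plot, it tends to read much higher than the mean percent cover for the same ground surface, consistent with the comparison reported above.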

Relevance:

100.00%

Abstract:

Sustainable development has only recently started examining the existing infrastructure, and a key aspect of this is hazard mitigation. Examining buildings from a sustainability perspective requires an understanding of a building's life-cycle environmental costs, including the environmental impacts induced by earthquake damage. Damage repair leads to additional material and energy consumption and thus to further harmful environmental impacts. Merging the results of a seismic evaluation with a life-cycle analysis for buildings gives a novel outlook on sustainable design decisions. To evaluate the environmental impacts caused by buildings, long-term impacts accrued throughout a building's lifetime and impacts associated with damage repair need to be quantified. A method and literature review for completing this examination have been developed and are discussed. Using the Athena and HAZUS-MH software, this study evaluated the performance of steel and concrete buildings, considering their life-cycle assessments and earthquake resistance. It was determined that the code design level greatly affects a building's repair and damage estimates. This study presented two case-study buildings; the specific results obtained depend on several pre-made assumptions. Future research recommendations were provided to make this methodology more useful in real-world applications. Examining the costs and environmental impacts of a building through a cradle-to-grave analysis and seismic damage assessment will help reduce material consumption and construction activity both before and after an earthquake occurs.
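
A minimal sketch of how damage-state probabilities (the kind of output HAZUS-MH produces) might be combined with repair-related environmental impacts on top of an initial embodied impact; every number here is a hypothetical placeholder, not a result from the study:

```python
# P(damage state | design-level earthquake); placeholder probabilities.
damage_state_prob = {
    "slight": 0.30,
    "moderate": 0.15,
    "extensive": 0.04,
    "complete": 0.01,
}
# Repair-related environmental impact per damage state, t CO2e; placeholders.
repair_impact_co2e = {
    "slight": 20.0,
    "moderate": 120.0,
    "extensive": 600.0,
    "complete": 2400.0,
}

initial_embodied_co2e = 3000.0  # cradle-to-gate impact of the structure, t CO2e

# Expected repair impact = sum over damage states of probability * impact.
expected_repair = sum(damage_state_prob[ds] * repair_impact_co2e[ds]
                      for ds in damage_state_prob)
life_cycle_total = initial_embodied_co2e + expected_repair
print(f"expected seismic repair impact: {expected_repair:.0f} t CO2e")
print(f"life-cycle total (initial + expected repair): {life_cycle_total:.0f} t CO2e")
```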

Relevance:

100.00%

Abstract:

One of the original ocean-bottom time-lapse seismic studies was performed at the Teal South oil field in the Gulf of Mexico during the late 1990s. This work reexamines some aspects of previous work using modern analysis techniques to provide improved quantitative interpretations. Using three-dimensional volume visualization of legacy data and the two phases of post-production time-lapse data, I provide additional insight into the fluid migration pathways and the pressure communication between different reservoirs separated by faults. This work supports a conclusion from previous studies that production from one reservoir caused a regional pressure decline that in turn resulted in the liberation of gas from multiple surrounding unproduced reservoirs. I also provide an explanation for unusual time-lapse changes in amplitude-versus-offset (AVO) data related to the compaction of the producing reservoir, which, in turn, changed an isotropic medium to an anisotropic medium. In the first part of this work, I examine regional changes in seismic response due to the production of oil and gas from one reservoir. The previous studies primarily used two post-production ocean-bottom surveys (Phase I and Phase II), and not the legacy streamer data, due to the unavailability of legacy prestack data and very different acquisition parameters. In order to incorporate the legacy data in the present study, all three poststack data sets were cross-equalized and examined using instantaneous amplitude and energy volumes. This approach appears quite effective and helps to suppress changes unrelated to production while emphasizing those large-amplitude changes that are related to production in this noisy (by current standards) suite of data. I examine the multiple data sets first by using the instantaneous amplitude and energy attributes, and then also examine specific apparent time-lapse changes through direct comparisons of seismic traces. In so doing, I identify time delays that, when corrected for, indicate water encroachment at the base of the producing reservoir. I also identify specific sites of leakage from various unproduced reservoirs, the result of the regional pressure blowdown explained in previous studies; those earlier studies, however, were unable to identify direct evidence of fluid movement. Of particular interest is the identification of one site where oil apparently leaked from one reservoir into a “new” reservoir that did not originally contain oil, but was ideally suited as a trap for fluids leaking past the neighboring reservoir’s spill point. With continued pressure drop, the oil in the new reservoir increased as more oil entered the reservoir and expanded, liberating gas from solution. Because of the limited volume available for oil and gas in that temporary trap, oil and gas also escaped from it into the surrounding formation. I also note that some of the reservoirs demonstrate time-lapse changes only in the “gas cap” and not in the oil zone, even though gas must be coming out of solution everywhere in the reservoir. This is explained by the interplay between the reduction of the pore-fluid modulus by liberated gas and the increase of the dry-frame modulus by frame stiffening. In the second part of this work, I examine various rock-physics models in an attempt to quantitatively account for the frame stiffening that results from reduced pore-fluid pressure in the producing reservoir, searching for a model that predicts the unusual AVO features observed in the time-lapse prestack and stacked data at Teal South.
While several rock-physics models are successful at predicting the time-lapse response for initial production, most fail to match the observations for continued production between Phase I and Phase II. Because the reservoir was initially overpressured and unconsolidated, reservoir compaction was likely significant, and is probably accomplished largely by uniaxial strain in the vertical direction; this implies that an anisotropic model may be required. Using Walton’s model for anisotropic unconsolidated sand, I successfully model the time-lapse changes for all phases of production. This observation may be of interest for application to other unconsolidated overpressured reservoirs under production.
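
A minimal sketch of the instantaneous-amplitude and energy attributes used above, computed per trace from the analytic (Hilbert-transform) signal; the synthetic trace is a stand-in for real poststack data:

```python
import numpy as np
from scipy.signal import hilbert

dt = 0.004                                 # sample interval, s
t = np.arange(0, 2.0, dt)
# Synthetic trace: a 30 Hz wavelet centered at 1.0 s, standing in for real data.
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 1.0) ** 2) / 0.01)

analytic = hilbert(trace)                  # trace + i * Hilbert(trace)
inst_amplitude = np.abs(analytic)          # envelope ("instantaneous amplitude")
energy = inst_amplitude ** 2               # sample-wise energy attribute

# A simple sliding-window energy, of the kind useful when comparing
# cross-equalized vintages trace by trace:
window = 25                                # ~100 ms at 4 ms sampling
windowed_energy = np.convolve(energy, np.ones(window) / window, mode="same")
```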

Relevance:

100.00%

Abstract:

Quantifying belowground dynamics is critical to our understanding of plant and ecosystem function and belowground carbon cycling, yet currently available tools for complex belowground image analyses are insufficient. We introduce novel techniques combining digital image processing tools and geographic information systems (GIS) analysis to permit semi-automated analysis of complex root and soil dynamics. We illustrate the methodologies with imagery from microcosms, minirhizotrons, and a rhizotron, in upland and peatland soils. We provide guidelines for correct image capture, a method that automatically stitches together numerous minirhizotron images into one seamless image, and image-analysis workflows using image segmentation and classification in SPRING or change analysis in ArcMap. These methods facilitate spatial and temporal studies of root and soil interactions, providing a framework for a more comprehensive understanding of belowground dynamics.
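
A minimal sketch, under simplifying assumptions, of the automatic stitching step described above: the overlap between two vertically adjacent frames is estimated from a correlation score and the frames are concatenated. The synthetic array stands in for real minirhizotron imagery, and the SPRING/ArcMap analysis steps are not reproduced:

```python
import numpy as np

def stitch_vertical(top: np.ndarray, bottom: np.ndarray, max_overlap: int) -> np.ndarray:
    """Concatenate two grayscale frames, removing the best-matching row overlap."""
    best_overlap, best_score = 0, -np.inf
    for k in range(1, max_overlap + 1):
        a = top[-k:, :].ravel().astype(float)
        b = bottom[:k, :].ravel().astype(float)
        # Pearson correlation of the candidate overlap regions.
        score = np.dot(a - a.mean(), b - b.mean()) / (a.std() * b.std() * a.size + 1e-12)
        if score > best_score:
            best_overlap, best_score = k, score
    return np.vstack([top, bottom[best_overlap:, :]])

frame = np.random.default_rng(1).random((120, 80))   # stand-in "ground truth" mosaic
top, bottom = frame[:80, :], frame[60:, :]           # two frames with a 20-row overlap
mosaic = stitch_vertical(top, bottom, max_overlap=40)
assert mosaic.shape == frame.shape                   # the true overlap was recovered
```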

Relevance:

100.00%

Abstract:

The numerical solution of the incompressible Navier-Stokes equations offers an alternative to experimental analysis of fluid-structure interaction (FSI). Considerable time, effort, and cost could be saved if such systems could be modeled accurately by numerical solution. These advantages are even more obvious when considering huge structures like bridges, high-rise buildings, or even wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation called the Kinematic Laplacian Equation (KLE) to solve the incompressible Navier-Stokes equations for such FSI problems. This scheme allows for the implementation of robust adaptive ordinary differential equation (ODE) time-integration schemes, allowing us to tackle each problem as a separate module. The current algorithm for the KLE uses an unstructured quadrilateral mesh for spatial discretization, formed by dividing each triangle of an unstructured triangular mesh into three quadrilaterals. This research deals with determining a suitable measure of mesh quality based on the physics of the problems being tackled. This is followed by exploring methods to improve the quality of the quadrilateral elements obtained from the triangles, thereby improving the overall mesh quality. A series of numerical experiments was designed and conducted for this purpose, and the results were tested on different geometries with varying degrees of mesh density.
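
A minimal sketch of the triangle-to-quadrilateral subdivision described above (each triangle split into three quads via its edge midpoints and centroid), together with one illustrative angle-based quality measure; the quality metric actually adopted in the thesis may differ:

```python
import numpy as np

def tri_to_quads(p0, p1, p2):
    """Split one triangle into three quads (vertex, two edge midpoints, centroid)."""
    p0, p1, p2 = map(np.asarray, (p0, p1, p2))
    m01, m12, m20 = (p0 + p1) / 2, (p1 + p2) / 2, (p2 + p0) / 2
    c = (p0 + p1 + p2) / 3
    return [
        np.array([p0, m01, c, m20]),
        np.array([p1, m12, c, m01]),
        np.array([p2, m20, c, m12]),
    ]

def quad_angle_quality(q):
    """1.0 for a rectangle; decreases as interior angles deviate from 90 degrees."""
    worst = 0.0
    for i in range(4):
        u = q[(i - 1) % 4] - q[i]
        v = q[(i + 1) % 4] - q[i]
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        worst = max(worst, abs(ang - 90.0))
    return 1.0 - worst / 90.0

quads = tri_to_quads([0.0, 0.0], [1.0, 0.0], [0.0, 1.0])
print([round(quad_angle_quality(q), 3) for q in quads])
```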

Relevance:

100.00%

Abstract:

Reducing the uncertainties related to blade dynamics by improving the quality of numerical simulations of the fluid-structure interaction process is key to a breakthrough in wind-turbine technology. A fundamental step in that direction is the implementation of aeroelastic models capable of capturing the complex features of innovative prototype blades, so they can be tested at realistic full-scale conditions with a reasonable computational cost. We make use of a code based on a combination of two advanced numerical models implemented on a parallel HPC supercomputer platform. First, a model of the structural response of heterogeneous composite blades, based on a variation of the dimensional reduction technique proposed by Hodges and Yu. This technique has the capacity to reduce the geometrical complexity of the blade section into a stiffness matrix for an equivalent beam. The reduced 1-D strain energy is equivalent to the actual 3-D strain energy in an asymptotic sense, allowing accurate modeling of the blade structure as a 1-D finite-element problem. This substantially reduces the computational effort required to model the structural dynamics at each time step. Second, a novel aerodynamic model based on an advanced implementation of BEM (Blade Element Momentum) theory, where all velocities and forces are re-projected through orthogonal matrices into the instantaneous deformed configuration to fully include the effects of large displacements and rotation of the airfoil sections in the computation of aerodynamic forces. This allows the aerodynamic model to take into account the effects of the complex flexo-torsional deformation that can be captured by the more sophisticated structural model mentioned above. In this thesis we have successfully developed a powerful computational tool for the aeroelastic analysis of wind-turbine blades. Due to the particular features mentioned above, in terms of a full representation of the combined modes of deformation of the blade as a complex structural part and of their effects on the aerodynamic loads, it constitutes a substantial advance over the state-of-the-art aeroelastic models currently available, such as the FAST-Aerodyn suite. In this thesis we also include the results of several experiments on the NREL-5MW blade, which is widely accepted today as a benchmark blade, together with some modifications intended to explore the capacities of the new code in terms of capturing features of blade-dynamic behavior that are normally overlooked by existing aeroelastic models.
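
A minimal sketch of the re-projection idea described above: the inflow velocity is rotated into a section's instantaneous deformed frame, sectional aerodynamic forces are evaluated there, and the result is rotated back to global axes. The 2-D frame, thin-airfoil coefficients, and all values are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def rotation_from_twist(theta: float) -> np.ndarray:
    """Orthogonal matrix for a section rotated by flexo-torsional angle theta (rad)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])  # 2-D section frame, for brevity

rho, chord = 1.225, 3.0                    # air density (kg/m^3) and chord (m); placeholders
v_global = np.array([60.0, 8.0])           # inflow velocity in global axes, m/s
R = rotation_from_twist(np.radians(12.0))  # instantaneous deformed orientation

v_local = R.T @ v_global                   # re-project inflow into the section frame
alpha = np.arctan2(v_local[1], v_local[0]) # local angle of attack
cl, cd = 2 * np.pi * alpha, 0.01           # placeholder thin-airfoil coefficients
q = 0.5 * rho * (v_local @ v_local)        # dynamic pressure

e_drag = v_local / np.linalg.norm(v_local) # unit vector along the local wind
e_lift = np.array([-e_drag[1], e_drag[0]]) # 90 degrees CCW from the wind
f_local = q * chord * (cd * e_drag + cl * e_lift)  # force per unit span, section axes
f_global = R @ f_local                     # rotate the force back to global axes
print(f_global)
```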

Relevance:

100.00%

Abstract:

The numerical solution of the incompressible Navier-Stokes equations offers an effective alternative to the experimental analysis of fluid-structure interaction (FSI), i.e., the dynamical coupling between a fluid and a solid, which is otherwise very complex, time consuming, and expensive. A method that can accurately model these types of mechanical systems numerically is therefore a great option, and these advantages are even more obvious when considering huge structures like bridges, high-rise buildings, or even wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation called the Kinematic Laplacian Equation (KLE) to solve the incompressible Navier-Stokes equations for such FSI problems. This scheme allows for the implementation of robust adaptive ordinary differential equation (ODE) time-integration schemes and thus allows us to tackle the various multiphysics problems as separate modules. The current algorithm for the KLE employs a structured or unstructured mesh for spatial discretization, and it allows the use of a self-adaptive or fixed-time-step ODE solver when dealing with unsteady problems. This research analyzes the effects of the Courant-Friedrichs-Lewy (CFL) condition for the KLE when applied to the unsteady Stokes problem. The objective is to conduct a numerical analysis for stability and, hence, for convergence. Our results confirm that the time step Δt is constrained by the CFL-like condition Δt ≤ const · h^α, where h denotes the mesh spacing of the spatial discretization.
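
A minimal sketch of enforcing the CFL-like bound stated above, Δt ≤ const · h^α, inside an adaptive time stepper; the values of the constant and of α below are illustrative, since in the thesis they follow from the stability analysis:

```python
def cfl_limited_step(dt_ode: float, h: float, C: float = 0.5, alpha: float = 2.0) -> float:
    """Clamp a solver-proposed time step to the CFL-like bound dt <= C * h**alpha."""
    dt_cfl = C * h ** alpha
    return min(dt_ode, dt_cfl)

h = 0.02                              # representative element size of the mesh
dt = cfl_limited_step(dt_ode=1e-3, h=h)
print(f"step used: {dt:.2e} s (CFL-like bound: {0.5 * h**2:.2e} s)")
```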

Relevance:

100.00%

Abstract:

Free space optical (FSO) communication links can experience extreme signal degradation due to atmospheric-turbulence-induced spatial and temporal irradiance fluctuations (scintillation) in the laser wavefront. In addition, turbulence can cause the laser beam centroid to wander, resulting in power fading and sometimes complete loss of the signal. Spreading of the laser beam and jitter are also artifacts of atmospheric turbulence. To accurately predict the signal fading that occurs in a laser communication system, and to get a true picture of how this affects crucial performance parameters like the bit error rate (BER), it is important to analyze the probability density function (PDF) of the integrated irradiance fluctuations at the receiver. In addition, it is desirable to find a theoretical distribution that accurately models these fluctuations under all propagation conditions. The PDF of integrated irradiance fluctuations is calculated from numerical wave-optics simulations of a laser after propagation through atmospheric turbulence, to investigate the evolution of the distribution as the aperture diameter is increased. The simulated data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes, from weak to very strong. Our results show that the gamma-gamma PDF provides a good fit to the simulated data distribution for all aperture sizes studied, from weak through moderate scintillation. In strong scintillation, the gamma-gamma PDF is a better fit to the distribution for point-like apertures, and the lognormal PDF is a better fit for apertures the size of the atmospheric spatial coherence radius ρ0 or larger. In addition, the PDF of received power from a Gaussian laser beam, adaptively compensated at the transmitter before propagation to the receiver of an FSO link in the moderate scintillation regime, is investigated. The complexity of the adaptive optics (AO) system is increased in order to investigate the changes in the distribution of the received power and how this affects the BER. For the 10 km link, due to the non-reciprocal nature of the propagation path, the optimal beam to transmit is unknown. These results show that a low order of AO complexity provides a better estimate of the optimal beam to transmit than a higher order for non-reciprocal paths. For the 20 km link distance it was found that, although the improvement was minimal, all AO complexity levels provided an equivalent improvement in BER, and that no AO complexity level provided the correction needed for the optimal beam to transmit. Finally, the temporal power spectral density of the received power from an FSO communication link is investigated. Simulated and experimental results for the coherence time calculated from the temporal correlation function are presented. Results for both simulated and experimental data show that the coherence time increases as the receiving aperture diameter increases. For finite apertures, the coherence time increases as the communication link distance is increased. We conjecture that this is due to the increasing speckle size within the pupil plane of the receiving aperture for increasing link distance.
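
A minimal sketch of the two candidate irradiance PDFs compared above, both normalized to unit mean irradiance; the (α, β) and log-variance parameters are illustrative stand-ins for values that, in the study, depend on the scintillation regime and aperture size:

```python
import numpy as np
from scipy.special import gamma as Gamma, kv

def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-gamma irradiance PDF with unit mean irradiance."""
    I = np.asarray(I, dtype=float)
    coef = 2 * (alpha * beta) ** ((alpha + beta) / 2) / (Gamma(alpha) * Gamma(beta))
    return coef * I ** ((alpha + beta) / 2 - 1) * kv(alpha - beta, 2 * np.sqrt(alpha * beta * I))

def lognormal_pdf(I, sigma2):
    """Lognormal irradiance PDF with log-irradiance variance sigma2, unit mean."""
    I = np.asarray(I, dtype=float)
    return (1 / (I * np.sqrt(2 * np.pi * sigma2))
            * np.exp(-(np.log(I) + sigma2 / 2) ** 2 / (2 * sigma2)))

I = np.linspace(0.01, 4.0, 400)
p_gg = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)  # illustrative moderate scintillation
p_ln = lognormal_pdf(I, sigma2=0.3)
dI = I[1] - I[0]
print((p_gg * dI).sum(), (p_ln * dI).sum())     # both should be close to 1
```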

Relevance:

100.00%

Abstract:

This dissertation presents a detailed study exploring quantum correlations of light in macroscopic environments. We have explored quantum correlations of single photons, weak coherent states, and polarization-correlated/polarization-entangled photons in macroscopic environments. These included macroscopic mirrors, macroscopic photon numbers, spatially separated observers, noisy photon sources, and propagation media with loss or disturbances. We proposed a measurement scheme for observing quantum correlations and entanglement in the spatial properties of two macroscopic mirrors using a single-photon spatial compass state. We explored the phase-space distribution features of spatial compass states, such as the chessboard pattern, by using the Wigner function. The displacement and tilt correlations of the two mirrors were manifested through the propensities of the compass states. This technique can be used to extract Einstein-Podolsky-Rosen (EPR) correlations of the two mirrors. We then formulated the discrete-like property of the propensity Pb(m,n), which can be used to explore environmentally perturbed quantum jumps of the EPR correlations in phase space. With a single-photon spatial compass state, the variances in position and momentum are much smaller than the standard quantum limit of a Gaussian TEM00 beam. We observed intrinsic quantum correlations of weak coherent states between two parties through balanced homodyne detection. Our scheme can be used as a supplement to the decoy-state BB84 protocol and the differential phase-shift QKD protocol. We prepared four types of bipartite correlations ±cos2(θ12) shared between two parties. We also demonstrated bit correlations between two parties separated by 10 km of optical fiber. The bit information is protected by the large quantum phase fluctuation of weak coherent states, adding another physical layer of security to these quantum key distribution protocols. Using 10 m of highly nonlinear fiber (HNLF) at 77 K, we observed a coincidence-to-accidental-coincidence ratio of 130±5 for correlated photon pairs and two-photon interference visibility >98% for entangled photon pairs. We also verified the non-local behavior of polarization-entangled photon pairs by violating the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality by more than 12 standard deviations. With the HNLF at 300 K (77 K), a photon-pair production rate about a factor of 3 (2) higher than that of a 300 m dispersion-shifted fiber is observed. We then studied the quantum correlation and interference of photon pairs, with one photon of the pair experiencing multiple scattering in a random medium. We observed that depolarizing noise photons introduced by multiple scattering degrade the purity of the photon pair, and that Raman noise photons in a photon-pair source contribute to this depolarization effect. We found that the quantum correlation of a polarization-entangled photon pair is better preserved than that of a polarization-correlated photon pair when one photon of the pair is scattered through a random medium. Our findings show that high-purity polarization-entangled photon pairs are the better candidate for long-distance quantum key distribution.
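
A minimal sketch of the CHSH test referenced above, using the ideal polarization correlation E(a,b) = −cos 2(a−b) (cf. the ±cos2(θ12) correlations in the abstract); a real experiment estimates each E from coincidence counts rather than evaluating it analytically:

```python
import numpy as np

def E(a: float, b: float) -> float:
    """Ideal singlet-state polarization correlation for analyzer angles a, b (rad)."""
    return -np.cos(2 * (a - b))

a, a_p = 0.0, np.pi / 4            # Alice's two analyzer settings
b, b_p = np.pi / 8, 3 * np.pi / 8  # Bob's two analyzer settings (optimal choice)

# Standard CHSH combination of the four correlations.
S = E(a, b) - E(a, b_p) + E(a_p, b) + E(a_p, b_p)
print(f"|S| = {abs(S):.4f} (local bound 2, quantum maximum 2*sqrt(2) ~ 2.8284)")
```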

Relevance:

100.00%

Abstract:

This thesis presents a methodology for measuring thermal properties in situ, with a special focus on obtaining properties of layered stack-ups commonly used in armored vehicle components. The technique involves attaching a thermal source to the surface of a component, measuring the heat flux transferred between the source and the component, and measuring the surface temperature response. The material properties of the component can subsequently be determined from measurement of the transient heat flux and temperature response at the surface alone. Experiments involving multilayered specimens show that the surface temperature response to a sinusoidal heat flux forcing function is also sinusoidal. A frequency domain analysis shows that sinusoidal thermal excitation produces a gain and phase shift behavior typical of linear systems. Additionally, this analysis shows that the material properties of sub-surface layers affect the frequency response function at the surface of a particular stack-up. The methodology involves coupling a thermal simulation tool with an optimization algorithm to determine the material properties from temperature and heat flux measurement data. Use of a sinusoidal forcing function not only provides a mechanism to perform the frequency domain analysis described above, but sinusoids also have the practical benefit of reducing the need for instrumentation of the backside of the component. Heat losses can be minimized by alternately injecting and extracting heat on the front surface, as long as sufficiently high frequencies are used.
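
A minimal sketch of extracting the gain and phase shift at the drive frequency, the frequency-response behavior described above; a lock-in-style projection onto a complex exponential at the known frequency is one simple way to do this, and the signals below are synthetic stand-ins for measured heat flux and surface temperature:

```python
import numpy as np

fs, f0, T = 100.0, 0.05, 400.0    # sample rate (Hz), drive frequency (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)   # covers an integer number of drive periods

rng = np.random.default_rng(2)
q = 500.0 * np.sin(2 * np.pi * f0 * t)               # heat flux forcing, W/m^2
theta = (2.0 * np.sin(2 * np.pi * f0 * t - 0.6)
         + 0.05 * rng.standard_normal(t.size))       # surface temperature rise, K

def phasor(x: np.ndarray, f: float, t: np.ndarray) -> complex:
    """Complex amplitude of x at frequency f (lock-in style projection)."""
    return 2.0 * np.mean(x * np.exp(-2j * np.pi * f * t))

H = phasor(theta, f0, t) / phasor(q, f0, t)          # frequency response at f0
print(f"gain  = {abs(H):.4e} K per (W/m^2)")         # expect ~2/500 = 4e-3
print(f"phase = {np.degrees(np.angle(H)):.1f} deg")  # expect ~ -34.4 deg
```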

Relevance:

100.00%

Abstract:

Patterns of increasing leaf mass per area (LMA), area-based leaf nitrogen (Narea), and carbon isotope composition (δ13C) with increasing height in the canopy have been attributed to light gradients or hydraulic limitation in tall trees. Theoretical optimal distributions of LMA and Narea that scale with light maximize canopy photosynthesis; however, sub-optimal distributions are often observed due to hydraulic constraints on leaf development. Using observational, experimental, and modeling approaches, we investigated the response of leaf functional traits (LMA, density, thickness, and leaf nitrogen), leaf δ13C, and cellular structure to light availability, height, and leaf water potential (Ψl) in an Acer saccharum forest to tease apart the influence of light and hydraulic limitations. LMA, leaf and palisade layer thickness, and leaf density were greater at greater light availability but similar heights, highlighting the strong control of light on leaf morphology and cellular structure. Experimental shading decreased both LMA and Narea and revealed that LMA and Narea were more strongly correlated with height earlier in the growing season and with light later in the growing season. The supply of CO2 to leaves at greater heights appeared to be constrained by stomatal sensitivity to vapor pressure deficit (VPD) or midday leaf water potential, as indicated by increasing δ13C and VPD and decreasing midday Ψl with height. Model simulations showed that daily canopy photosynthesis was biased during the early growing season when seasonality was not accounted for, and was biased throughout the growing season when vertical gradients in LMA and Narea were not accounted for. Overall, our results suggest that leaves acclimate to light soon after leaf expansion, through an accumulation of leaf carbon, thickening of palisade layers and increased LMA, and reduction in stomatal sensitivity to Ψl or VPD. This period of light acclimation in leaves appears to optimize leaf function over time, despite height-related constraints early in the growing season. Our results imply that vertical gradients in leaf functional traits and leaf acclimation to light should be incorporated in canopy function models in order to refine estimates of canopy photosynthesis.
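
A minimal sketch of the optimality argument referenced above: allocating a fixed canopy nitrogen budget in proportion to the light gradient yields higher modeled canopy photosynthesis than a uniform allocation. The saturating leaf-photosynthesis function and all parameter values are illustrative assumptions, not the canopy model used in the study:

```python
import numpy as np

layers = 10
light = np.exp(-0.5 * np.arange(layers))   # relative light, declining with canopy depth
N_total = 10.0                             # arbitrary canopy nitrogen budget (units)

def leaf_photosynthesis(light_i, n_i):
    """Toy saturating response: capacity scales with leaf N, uptake with light."""
    a_max = 1.0 * n_i
    return a_max * light_i / (light_i + 0.3)

N_uniform = np.full(layers, N_total / layers)  # same N in every layer
N_scaled = N_total * light / light.sum()       # N proportional to light

A_uniform = leaf_photosynthesis(light, N_uniform).sum()
A_scaled = leaf_photosynthesis(light, N_scaled).sum()
print(f"canopy A, uniform N: {A_uniform:.3f}; light-proportional N: {A_scaled:.3f}")
```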