996 results for Michigan Tech
Abstract:
The time course of lake recovery after a reduction in external nutrient loading is often controlled by conditions in the sediment. Remediation of eutrophication is hindered by the presence of legacy organic carbon deposits that exert a demand on the terminal electron acceptors of the lake and contribute to problems such as internal nutrient recycling, absence of sediment macrofauna, and flux of toxic metal species into the water column. Quantifying the timing of a lake's response requires determining the magnitude and lability, i.e., the susceptibility to biodegradation, of the organic carbon within the legacy deposit. This characterization is problematic for sediment organic carbon because of the presence of different carbon fractions, which vary from highly labile to refractory. The lability of carbon under varied conditions was tested with a bioassay approach. The majority of the organic material in the sediments was found to be conditionally labile, meaning that its mineralization potential depends on prevailing conditions. High labilities were noted under oxygenated conditions and a favorable temperature of 30 °C. Lability decreased when oxygen was removed, and was further reduced when the temperature was dropped to the hypolimnetic average of 8 °C. These results indicate that reversible preservation mechanisms exist in the sediment and are able to protect otherwise labile material from being mineralized under in situ conditions. The concept of an active sediment layer, a region of the sediments within which diagenetic reactions occur and below which they do not, was examined through three lines of evidence. First, porewater profiles of oxygen, nitrate, sulfate/total sulfide, ETSA (Electron Transport System Activity: the activity of oxygen, nitrate, iron/manganese, and sulfate), and methane were considered. Examination of the porewater profiles indicated that the edge of diagenesis occurred around 15-20 cm.
Second, historical and contemporary TOC profiles were compared to find the point at which the profiles were coincident, indicating the depth at which no change had occurred over the 13-year interval between core collections. This analysis suggested that no diagenesis has occurred in Onondaga Lake sediment below a depth of 15 cm. Finally, the time to 99% mineralization, the t99, was estimated using a literature value of the kinetic rate constant for diagenesis. A t99 of 34 years, or approximately 30 cm of sediment depth, resulted for the slowly decaying carbon fraction. Based on these three lines of evidence, an active sediment layer of 15-20 cm is proposed for Onondaga Lake, corresponding to a time since deposition of 15-20 years. While a large legacy deposit of conditionally labile organic material remains in the sediments of Onondaga Lake, preservation mechanisms that shield labile organic carbon from degradation protect this material from being mineralized and exerting a demand on the terminal electron acceptors of the lake. This has major implications for management of the lake, as it defines the time course of lake recovery following a reduction in nutrient loading.
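The t99 metric follows from first-order decay kinetics, G(t) = G0·exp(-kt): setting the remaining fraction to 1% gives t99 = ln(100)/k. A minimal sketch of the arithmetic (the rate constant shown is simply the value implied by the reported t99 of 34 years, not a figure taken from the thesis):

```python
import math

def t99(k):
    """Time to 99% mineralization for first-order decay G(t) = G0*exp(-k*t)."""
    return math.log(100) / k

# Rate constant implied by the reported t99 of ~34 yr for the slowly
# decaying carbon fraction (illustrative back-calculation only):
k_slow = math.log(100) / 34  # ~0.135 per year
```

With a known sedimentation rate, such a t99 converts directly to a depth below which the slowly decaying fraction is essentially exhausted.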
Abstract:
This study’s objective was to answer three research questions related to students’ knowledge of and attitudes about water quality and availability issues. It is important to understand what knowledge students have about environmental problems such as these, because today’s students will become the problem solvers of the future. If environmental problems, such as those related to water quality, are ever going to be solved, students must be environmentally literate. Several methods of data collection were used. Surveys were given to both Bolivian and Jackson High School students in order to compare their initial knowledge and attitudes about water quality issues. To study the effects of instruction, a unit about water quality issues was then taught to the Jackson High School students to see what impact it would have on their knowledge. In addition, the learning of two different groups of Jackson High School students was compared: one group of general education students, and a second group learning in an inclusion classroom that included special education students and struggling learners from the general education population. Student and teacher journals, a unit test, and post-survey responses were included in the data set. Results suggested that Jackson High School students were more knowledgeable than Bolivian students concerning clean water infrastructure and its importance, despite these issues being less relevant to their lives than to those of their Bolivian counterparts. Overall, the data suggested that the instruction impacted the knowledge of all the Jackson High students, although the advanced Biology students appeared to show stronger gains than their peers in the inclusion classroom.
Abstract:
Measurements of NOx within the snowpack at Summit, Greenland were carried out from June 2008 to July 2010, using a novel system to sample firn air with minimal disruption of the snowpack. These long-term measurements were motivated by the need to improve the representation of air-snow interactions in global models. Results indicated that the NOx budget within the snowpack reached a maximum on the order of 550 pptv and was composed primarily of NO2. NOx production was observed within the first 50 cm of the snowpack during the sunlit season between February and August. The presence of NOx at greater depths was attributed to high wind speeds and vertical transport processes. Production of NO correlated with the seasonal incoming radiation profile, while the NO2 maximum was observed in April. These measurements constitute the largest data set of NOx within the firn to date and will improve the representation of processes driving snow photochemistry at Summit.
Abstract:
The delivery of oxygen and nutrients and the removal of waste are essential for cellular survival. Culture systems for 3D bone tissue engineering have addressed this issue by utilizing perfusion flow bioreactors that stimulate osteogenic activity through the delivery of oxygen and nutrients by low-shear fluid flow. It is also well established that bone responds to mechanical stimulation but may desensitize under continuous loading. While perfusion flow and mechanical stimulation are used to increase cellular survival in vitro, 3D tissue-engineered constructs face additional limitations upon in vivo implantation. Because vascular infiltration by the host requires a significant amount of time, implants are subject to an increased risk of necrosis. One solution is to introduce tissue-engineered bone that has been pre-vascularized through the co-culture of osteoblasts and endothelial cells on 3D constructs. It is unclear from previous studies: 1) how 3D bone tissue constructs will respond to partitioned mechanical stimulation, 2) how gene expression compares in 2D and in 3D, 3) how co-cultures will affect osteoblast activity, and 4) how perfusion flow will affect co-cultures of osteoblasts and endothelial cells. We have used an integrated approach to address these questions, utilizing mechanical stimulation, perfusion flow, and a co-culture technique to increase the success of 3D bone tissue engineering. We measured the expression of several osteogenic and angiogenic genes in both 2D and 3D (static culture and mechanical stimulation), as well as in 3D cultures subjected to perfusion flow, mechanical stimulation, and partitioned mechanical stimulation. Finally, we co-cultured osteoblasts and endothelial cells on 3D scaffolds and subjected them to long-term incubation in either static culture or under perfusion flow to determine changes in gene expression as well as histological measures of osteogenic and angiogenic activity.
We discovered that 2D and 3D osteoblast cultures react differently to shear stress, and that partitioning mechanical stimulation does not affect gene expression in our model. Furthermore, our results suggest that perfusion flow may rescue 3D tissue-engineered constructs from hypoxic-like conditions by reducing hypoxia-specific gene expression and increasing histological indices of both osteogenic and angiogenic activity. Future research to elucidate the mechanisms behind these results may contribute to a more mature bone-like structure that integrates more quickly into host tissue, increasing the potential of bone tissue engineering.
Abstract:
This report explores the problem of detecting complex point targets in a MIMO radar system. A complex point target is a mathematical and statistical model for a radar target that is not resolved in space but exhibits varying complex reflectivity across the different bistatic view angles. The complex reflectivity can be modeled as a complex stochastic process whose index set is the set of all bistatic view angles; the parameters of the stochastic process follow from an analysis of a target model comprising a number of ideal point scatterers randomly located within some radius of the target's center of mass. The proposed complex point targets may be applicable to statistical inference in multistatic or MIMO radar systems. Six different target models are summarized here: three 2-dimensional (Gaussian, Uniform Square, and Uniform Circle) and three 3-dimensional (Gaussian, Uniform Cube, and Uniform Sphere), which differ in the assumed distribution of point-scatterer locations within the target. We develop data models for the received signals from such targets in a MIMO radar system with distributed assets and partially correlated signals, and consider the resulting detection problem, which reduces to the familiar Gauss-Gauss detection problem. We illustrate that the target parameters and transmit signals influence detector performance through the target extent and the SNR, respectively. A series of receiver operating characteristic (ROC) curves is generated to show the impact of varying SNR on the detector. Kullback-Leibler (KL) divergence is applied to obtain the approximate mean difference between the density functions the scatterers assume inside the target models, showing how detector performance changes with the target extent of the point scatterers.
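The Gauss-Gauss detection problem mentioned above can be illustrated with a toy scalar version (not the report's actual data model): under both hypotheses the observation is zero-mean Gaussian, differing only in variance, so the likelihood-ratio test reduces to an energy threshold, and sweeping that threshold traces out a ROC curve. All variances and sample counts below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
s0, s1 = 1.0, 2.0            # noise-only vs target-present std dev (assumed)
x0 = rng.normal(0, s0, n)    # samples under H0 (no target)
x1 = rng.normal(0, s1, n)    # samples under H1 (target present)

# For zero-mean Gauss-Gauss hypotheses, the likelihood-ratio test reduces
# to comparing the sample energy x^2 against a threshold. Sweeping the
# threshold yields (Pfa, Pd) pairs, i.e. points on the ROC curve.
thresholds = np.linspace(0, 25, 200)
pfa = [(x0**2 > t).mean() for t in thresholds]  # false-alarm probability
pd = [(x1**2 > t).mean() for t in thresholds]   # detection probability
```

Raising the variance ratio s1/s0 (analogous to larger target extent or higher SNR) pushes the ROC curve toward the upper-left corner.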
Abstract:
This thesis presents a paleoclimatic/paleoenvironmental study conducted on clastic cave sediments of the Moravian Karst, Czech Republic. The study is based on environmental magnetic techniques, yet a wide range of other scientific methods was used to obtain a clearer picture of Quaternary climate. My thesis also presents an overview of the significance of cave deposits for paleoclimatic reconstructions, explains basic environmental magnetic techniques, and offers background information on the study area, a famous karst region in Central Europe with a rich history. In Kulna Cave, magnetic susceptibility variations, and in particular variations in pedogenic susceptibility, yield a detailed record of the palaeoenvironmental conditions during the Last Glacial Stage. The Kulna long-term climatic trends agree with the deep-sea SPECMAP record, while the short-term oscillations correlate with rapid changes in North Atlantic sea surface temperatures. Kulna Cave sediments reflect the intensity of pedogenesis controlled by short-term warmer events and precipitation over the mid-continent, and provide a link between continental European climate and sea surface temperatures in the North Atlantic during the Last Glacial Stage. Given the number of independent climate proxies determined from the entrance facies of the cave and their high resolution, Kulna is an extremely important site for studying Late Pleistocene climate. In the interior of Spiralka Cave, a five-meter-high section of fine-grained sediments deposited during floods yields information on the climatic and environmental conditions of the last millennium. In the upper 1.5 meters of this profile, mineral magnetic and other non-magnetic data indicate that susceptibility variations are controlled by the concentration of magnetite and its magnetic grain size. Comparison of our susceptibility record to the instrumental record of winter temperature anomalies shows a remarkable correlation.
This correlation is explained by the coupling of flooding events, cultivation of land, and pedogenic processes in the cave catchment area. A combination of mineral magnetic and geochemical proxies yields a detailed picture of the rapidly evolving climate of the near past and tracks both natural and human-induced environmental changes taking place in the broader region.
Abstract:
Volcanoes are the surficial expressions of complex pathways that vent magma and gases generated deep in the Earth. Geophysical data record at least a partial history of magma and gas movement in the conduit and venting to the atmosphere. This work focuses on developing a more comprehensive understanding of explosive degassing at Fuego volcano, Guatemala, through observations and analysis of geophysical data collected in 2005-2009. A pattern of eruptive activity was observed during 2005-2007 and quantified with seismic and infrasound data, satellite thermal and gas measurements, and lava flow lengths. Eruptive styles are related to variable magma flux and accumulation of gas. Explosive degassing was recorded on broadband seismic and infrasound sensors in 2008 and 2009. Explosion energy partitioning between the ground and the atmosphere shows an increase in acoustic energy from 2008 to 2009, indicating a shift toward increased gas pressure in the conduit. Very-long-period (VLP) seismic signals are associated with the strongest explosions recorded in 2009, and waveform modeling in the 10-30 s band produces a best-fit source location 300 m west of and 300 m below the summit crater. The calculated moment tensor indicates a volumetric source, which is modeled as a dike feeding a SW-dipping (35°) sill. The sill is the dominant component, and its projection to the surface nearly intersects the summit crater. The deformation history of the sill is interpreted as: 1) an initial inflation due to pressurization, followed by 2) a rapid deflation as overpressure is explosively released, and finally 3) a reinflation as fresh magma flows into the sill and degasses. Tilt signals derived from the horizontal components of the seismometer show repetitive inflation-deflation cycles with a 20-minute period coincident with strong explosions.
These cycles represent the pressurization of the shallow conduit and the explosive venting of overpressure that develops beneath a partially crystallized plug of magma. The energy released during the strong explosions has allowed imaging of Fuego’s shallow conduit, which appears to have migrated west of the summit crater. In summary, Fuego is becoming more gas charged and its summit-centered vent is shifting to the west, with likely serious hazard consequences.
Abstract:
Spectrum sensing is currently one of the most challenging design problems in cognitive radio. A robust spectrum sensing technique is important in allowing implementation of practical dynamic spectrum access in noisy and interference-uncertain environments. In addition, it is desirable to minimize the sensing time while meeting the stringent cognitive radio application requirements. To cope with this challenge, cyclic spectrum sensing techniques have been proposed. However, such techniques require very high sampling rates in the wideband regime and thus are costly in hardware implementation and power consumption. In this thesis the concept of compressed sensing is applied to circumvent this problem by utilizing the sparsity of the two-dimensional cyclic spectrum. Compressive sampling is used to reduce the sampling rate, and a recovery method is developed for reconstructing the sparse cyclic spectrum from the compressed samples. The reconstruction solution exploits the sparsity structure in the two-dimensional cyclic spectrum domain, which differs from conventional compressed sensing techniques for vector-form sparse signals. The entire wideband cyclic spectrum is reconstructed from sub-Nyquist-rate samples for simultaneous detection of multiple signal sources. After cyclic spectrum recovery, two methods are proposed for making spectral occupancy decisions from the recovered cyclic spectrum: a band-by-band multi-cycle detector that works for all modulation schemes, and a fast and simple thresholding method that works for Binary Phase Shift Keying (BPSK) signals only. In addition, a method for recovering the power spectrum of stationary signals is developed as a special case. Simulation results demonstrate that the proposed spectrum sensing algorithms can significantly reduce the sampling rate without sacrificing performance. The robustness of the algorithms to the noise uncertainty of the wireless channel is also shown.
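The sparse-recovery step can be sketched generically. The snippet below is not the thesis's two-dimensional cyclic-spectrum algorithm; it applies iterative soft-thresholding (ISTA) to a one-dimensional sparse vector observed through a random sub-Nyquist sensing matrix, which conveys the core idea of reconstructing a sparse signal from fewer samples than its ambient dimension. Sizes, sparsity, and the regularization weight are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 32, 4                       # signal length, measurements, sparsity (assumed)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)  # sparse "spectrum"
A = rng.normal(0, 1 / np.sqrt(m), (m, n))  # random sub-Nyquist sensing matrix
y = A @ x                                  # compressed measurements (m < n)

# ISTA: iterative soft-thresholding for min_z ||y - Az||^2 / 2 + lam * ||z||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2     # step size from the spectral norm of A
z = np.zeros(n)
for _ in range(500):
    z = z + step * (A.T @ (y - A @ z))     # gradient step on the data-fit term
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
```

Occupancy decisions would then follow by thresholding the magnitude of the recovered coefficients, band by band.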
Abstract:
For countless communities around the world, acquiring access to safe drinking water is a daily challenge, one which many organizations endeavor to meet. The villages in the interior of Suriname have been the focus of many improved drinking water projects, as most communities are without year-round access. Unfortunately, as many as 75% of the systems in Suriname fail within several years of implementation. These communities, scattered along the rivers and throughout the jungle, lack many of the resources required to sustain a centralized water treatment system. However, the centralized system in the village of Bendekonde on the Upper Suriname River has been operational for over 10 years and is often touted by other communities, even though its technology does not differ significantly from that of other, failed systems. Many of the water systems in the interior fail because the community lacks the resources needed to maintain them, and typically, the more complex a system becomes, the greater its demand for resources. Alternatives to centralized systems include technologies such as point-of-use water filters, which can greatly reduce the need for outside resources. In particular, ceramic point-of-use water filters offer a technology that can be reasonably managed in a low-resource setting such as the interior of Suriname. This report investigates the appropriateness and effectiveness of ceramic filters constructed with local Suriname clay and compares their treatment effectiveness to that of the Bendekonde system. Results of this study showed that functional filters could be produced from Surinamese clay and that, in a controlled laboratory setting, they were more effective at removing total coliform than the field performance of the Bendekonde system. However, the Bendekonde system was more successful at removing E. coli.
In a life-cycle assessment, ceramic water filters manufactured in Suriname and used in homes for a lifespan of 2 years were shown to have lower cumulative energy demand, as well as lower global warming potential than a centralized system similar to that used in Bendekonde.
Abstract:
Standard procedures for forecasting flood risk (Bulletin 17B) assume annual maximum flood (AMF) series are stationary, meaning the distribution of flood flows is not significantly affected by climatic trends/cycles, or anthropogenic activities within the watershed. Historical flood events are therefore considered representative of future flood occurrences, and the risk associated with a given flood magnitude is modeled as constant over time. However, in light of increasing evidence to the contrary, this assumption should be reconsidered, especially as the existence of nonstationarity in AMF series can have significant impacts on planning and management of water resources and relevant infrastructure. Research presented in this thesis quantifies the degree of nonstationarity evident in AMF series for unimpaired watersheds throughout the contiguous U.S., identifies meteorological, climatic, and anthropogenic causes of this nonstationarity, and proposes an extension of the Bulletin 17B methodology which yields forecasts of flood risk that reflect climatic influences on flood magnitude. To appropriately forecast flood risk, it is necessary to consider the driving causes of nonstationarity in AMF series. Herein, large-scale climate patterns—including El Niño-Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), and Atlantic Multidecadal Oscillation (AMO)—are identified as influencing factors on flood magnitude at numerous stations across the U.S. Strong relationships between flood magnitude and associated precipitation series were also observed for the majority of sites analyzed in the Upper Midwest and Northeastern regions of the U.S. Although relationships between flood magnitude and associated temperature series are not apparent, results do indicate that temperature is highly correlated with the timing of flood peaks. 
Despite the consideration of watersheds classified as unimpaired, analyses also suggest that identified change-points in AMF series are due to dam construction and other types of regulation and diversion. Although not explored herein, trends in AMF series are also likely to be partially explained by changes in land use and land cover over time. Results obtained herein suggest that improved forecasts of flood risk may be obtained using a simple modification of the Bulletin 17B framework, wherein the mean and standard deviation of the log-transformed flows are modeled as functions of climate indices associated with oceanic-atmospheric patterns (e.g., AMO, ENSO, NAO, and PDO) with lead times between 3 and 9 months. Herein, one-year-ahead forecasts of the mean and standard deviation, and subsequently flood risk, are obtained by applying site-specific multivariate regression models, which reflect the phase and intensity of a given climate pattern, as well as possible impacts of coupling of the climate cycles. These forecasts of flood risk are compared with forecasts derived using the existing Bulletin 17B model; large differences in the one-year-ahead forecasts are observed in some locations. The increased knowledge of the inherent structure of AMF series and the improved understanding of the physical and/or climatic causes of nonstationarity gained from this research should serve as insight for the formulation of a physical-causal statistical model, incorporating both climatic variations and human impacts, for flood risk over the longer planning horizons (e.g., 10-, 50-, and 100-year) necessary for water resources design, planning, and management.
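The proposed extension can be sketched as a site-specific regression of log-flow statistics on lagged climate indices. The snippet below uses synthetic data and a lognormal approximation for the flood quantile (Bulletin 17B actually fits a log-Pearson III distribution, of which the lognormal is the zero-skew special case); the indices, coefficients, record length, and standard deviation are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
years = 40  # hypothetical length of the annual maximum flood record

# Design matrix: intercept plus two lagged climate indices (e.g. ENSO, PDO),
# here drawn at random purely for illustration.
X = np.column_stack([np.ones(years),
                     rng.normal(size=years),    # stand-in for a lagged ENSO index
                     rng.normal(size=years)])   # stand-in for a lagged PDO index

# Synthetic "observed" mean of log-transformed flows, with assumed coefficients.
mu_logQ = 3.0 + 0.3 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.05, years)

# Fit the site-specific multivariate regression by least squares.
beta, *_ = np.linalg.lstsq(X, mu_logQ, rcond=None)

# One-year-ahead forecast from next year's (assumed) lagged index values.
x_next = np.array([1.0, 0.8, -0.5])
mu_hat = x_next @ beta                 # forecast mean of log flows
sigma_hat = 0.4                        # std of log flows; could be forecast analogously (assumed)
q100 = np.exp(mu_hat + 2.326 * sigma_hat)  # ~1% annual-chance flood, lognormal approximation
```

The same machinery, refit per site and per climate pattern, yields the forecasted flood-risk estimates that are compared against the stationary Bulletin 17B values.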
Abstract:
Fieldwork is supportive of students’ natural inquiry abilities. Educational research findings suggest that instructors can foster the growth of thinking skills and promote science literacy by incorporating active learning strategies (McConnel et al., 2003). Huntoon (2001) states that there is a need to determine optimal learning strategies and to document the procedure of assessing those optimal geoscience curricula. This study seeks to determine whether Earth Space II, a high school geological field course, can increase students’ knowledge of selected educational objectives, and to measure any impact Earth Space II has on students’ attitudes toward science. Assessment of the Earth Space II course objectives provided data on the impact of field courses on high school students’ scientific literacy, scientific inquiry skills, and understanding of selected course objectives. Knowledge was assessed using a multiple-choice test, the Geoscience Concept Inventory, and an open-ended essay test; attitude was assessed using a preexisting science attitude survey. Both knowledge assessment items showed a positive effect size from pretest to posttest: the essay exam effect size was 17 and the Geoscience Concept Inventory effect size was 0.18. A positive impact on students’ attitudes toward science was observed as an increase in the overall mean Likert value from the pre-survey to the post-survey.
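The effect sizes reported above are standardized mean differences; a common form is Cohen's d, the pre/post mean difference divided by the pooled standard deviation (a sketch only; the study does not specify which effect-size formula was used, and all numbers in the example are hypothetical):

```python
import math

def cohens_d(pre_mean, post_mean, pre_sd, post_sd, n_pre, n_post):
    """Standardized mean difference between pre- and post-test scores."""
    pooled_sd = math.sqrt(((n_pre - 1) * pre_sd**2 + (n_post - 1) * post_sd**2)
                          / (n_pre + n_post - 2))
    return (post_mean - pre_mean) / pooled_sd
```

For example, with hypothetical scores, `cohens_d(10, 12, 4, 4, 30, 30)` returns 0.5, conventionally read as a medium effect.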
Abstract:
Writing centers work with writers; traditionally, services have focused on undergraduates taking composition classes. More recently, centers have started to attract a wider client base, including students taking labs that require writing, graduate students, and ESL students learning the conventions of U.S. communication. There are very few centers, however, that identify themselves as open to working with all members of the campus community. Michigan Technological University has one such center. In the Michigan Tech writing center, doors are open to “all students, faculty and staff.” While graduate students, postdocs, and professors preparing articles for publication have used the center, in the summer of 2008, for the first time in the collective memory of the center, UAW staff members requested center appointments. These working-class employees were in the process of filling out a work-related document, the UAW Position Audit, an approximately seven-page form. This form was their one avenue for requesting a review of the job they were doing; the review was the first step in requesting a raise in job level and pay. This study grew out of the realization that implicit literacy expectations between working-class United Auto Workers (UAW) staff and professional-class staff were complicating the filling out and filing of the position audit form. Professional-class supervisors had designed the form as a measure of fairness, in that each UAW employee on campus was responding to the same set of questions about their work. However, the implicit literacy expectations of supervisors were different from those of many of the employees who were to fill out the form. As a result, questions that were meant to be straightforward to answer were, in the eyes of the employees filling out the form, complex.
Before coming to the writing center, UAW staff had spent months writing out responses to the form; they expressed concerns that their responses still would not meet audience expectations. These writers recognized that they did not yet know exactly what the audience was expecting. The results of this study include a framework for planning writing center sessions that facilitate the acquisition of literacy practices that are new to the user. One important realization from this dissertation is that the social nature of literacy must be kept in the forefront, both when planning sessions and when educating tutors to lead them. Literacy scholars such as James Paul Gee, Brian Street, and Shirley Brice Heath are used to show that a person can only know those literacy practices that they have previously acquired. In order to acquire new literacy practices, a person must have social opportunities for hands-on practice and mentoring from someone with experience. The writing center can adapt theory and practices from this dissertation to facilitate sessions for a range of writers wishing to learn “new” literacy practices. This study also calls for specific changes to writing center tutor education.
Abstract:
The patterning of photoactive purple membrane (PM) films onto electronic substrates to create a biologically based light detection device was investigated. This research is part of a larger collaborative effort to develop a miniaturized toxin detection platform, which will utilize PM films containing the photoactive protein bacteriorhodopsin to convert light energy into electrical energy. Following an effort to pattern PM films using focused ion beam machining, the photolithography-based bacteriorhodopsin patterning technique (PBBPT) was developed. This technique utilizes conventional photolithography to pattern oriented PM films onto flat substrates. After the basic patterning process was developed, studies were conducted that confirmed the photoelectric functionality of the PM films after patterning. Several process variables were studied and optimized in order to increase the pattern quality of the PM films. Optical microscopy, scanning electron microscopy, and interferometric microscopy were used to evaluate the PM films produced by the patterning technique. Patterned PM films with lateral dimensions of 15 μm have been demonstrated using this technique. Unlike other patterning techniques, the PBBPT uses standard photolithographic processes, making its integration with conventional semiconductor fabrication feasible. The final effort of this research involved integrating PM films patterned using the PBBPT with PMOS transistors. An indirect integration was successfully demonstrated, in which the voltage produced by a patterned PM film under light exposure modulates the gate of a PMOS transistor, activating the transistor. Following this success, a study was conducted to investigate how this PM-based light detection system responds to variations in the light intensity supplied to the PM film.
This work provides a successful proof of concept for a portion of the toxin detection platform currently under development.
Abstract:
Transformers are very important elements of any power system. Unfortunately, they are subjected to through-faults and abnormal operating conditions which can affect not only the transformer itself but also other equipment connected to it. Thus, it is essential to provide sufficient protection for transformers, as well as the best possible selectivity and sensitivity of that protection. Nowadays, microprocessor-based relays are widely used to protect power equipment. Current differential and voltage protection strategies are used in transformer protection applications and provide fast and sensitive multi-level protection and monitoring. The elements responsible for detecting turn-to-turn and turn-to-ground faults are the negative-sequence percentage differential element and the restricted earth-fault (REF) element, respectively. During severe internal faults, current transformers can saturate and slow relay operation, which affects the degree of equipment damage. The scope of this work is to develop a modeling methodology for performing simulations and laboratory tests of internal faults such as turn-to-turn and turn-to-ground faults for two step-down power transformers with capacity ratings of 11.2 MVA and 290 MVA. The simulated current waveforms are injected into a microprocessor relay to check its sensitivity to these internal faults. Saturation of current transformers is also studied in this work. All simulations are performed with the Alternative Transients Program (ATP) utilizing the internal fault model for three-phase two-winding transformers. The tested microprocessor relay is the SEL-487E current differential and voltage protection relay. The results showed that the ATP internal fault model can be used for testing microprocessor relays for any percentage of turns involved in an internal fault.
An interesting observation from the experiments was that the SEL-487E relay is more sensitive to turn-to-turn faults than advertised for the transformers studied. The sensitivity of the restricted earth-fault element was confirmed. CT saturation cases showed that low-accuracy CTs can saturate for turn-to-turn faults involving a high percentage of turns, with the CT burden affecting the extent of saturation. Recommendations for future work include more accurate simulation of internal faults, transformer energization inrush, and other scenarios involving core saturation, using the newest version of the internal fault model. The SEL-487E relay, or other microprocessor relays, should again be tested for performance. Also, application of a grounding bank to the delta-connected side of a transformer would increase the zone of protection, and relay performance could then be tested for internal ground faults on both sides of a transformer.
Abstract:
A Reynolds-stress turbulence model has been successfully incorporated into the KIVA code, a computational fluid dynamics hydrocode for three-dimensional simulation of fluid flow in engines. The newly implemented Reynolds-stress turbulence model greatly improves the robustness of KIVA, which in its original version has only eddy-viscosity turbulence models. Validation of the Reynolds-stress turbulence model is accomplished by conducting pipe-flow and channel-flow simulations and comparing the computed results with experimental and direct numerical simulation data. Flows in engines of various geometries and operating conditions are calculated using the model, both to study the complex flow fields and to confirm the model’s validity. Results show that the Reynolds-stress turbulence model is able to resolve flow details such as swirl and recirculation bubbles. The model proves to be an appropriate choice for engine simulations, offering consistency and robustness while requiring relatively low computational effort.