1000 results for Michigan Tech Lode
Abstract:
Due to warmer and drier conditions, wildland fire has expanded into peatland ecosystems in recent decades. There is therefore a growing need for broadly applicable tools to detect surface peat moisture, in order to assess the susceptibility of peat to burning and the vulnerability to deep peat consumption in the event of a wildfire. In this thesis, a field-portable spectroradiometer was used to measure surface reflectance of two Sphagnum moss-dominated peatlands. Relationships were developed correlating spectral indices to surface moisture as well as water table position. Spectral convolutions were also applied to the high-resolution spectra to represent the spectral sensitivity of Earth-observing sensors, and band ratios previously used to monitor surface moisture with these sensors were assessed. Strong relationships to surface moisture and water table position are evident for both the narrowband and the broadened indices. By leveraging an experimental vegetation manipulation, this study also found that certain spectral relationships depend on changes in vegetation cover. Results indicate that broadened indices employing the 1450-1650 nm region may be less stable under changing vegetation cover than those located in the 1200 nm region.
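A minimal sketch of the kind of normalized-difference band ratio described above, with the reflectance averaged over a window to mimic a broadened sensor band. The band centers, window width, and toy spectrum are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def band_ratio_index(wavelengths, reflectance, band_a, band_b, width=10.0):
    """Normalized-difference index between two spectral bands.

    Reflectance is averaged over +/- width nm around each band center,
    mimicking the broadened response of an earth-observing sensor band.
    """
    wavelengths = np.asarray(wavelengths, dtype=float)
    reflectance = np.asarray(reflectance, dtype=float)

    def mean_band(center):
        mask = np.abs(wavelengths - center) <= width
        return reflectance[mask].mean()

    ra, rb = mean_band(band_a), mean_band(band_b)
    return (ra - rb) / (ra + rb)

# Toy spectrum: flat reflectance with a water-absorption dip near 1450 nm.
wl = np.arange(350, 2500, 1.0)
refl = np.full_like(wl, 0.5)
refl[(wl > 1400) & (wl < 1500)] = 0.2   # deeper dip -> wetter surface

ndwi_like = band_ratio_index(wl, refl, band_a=860, band_b=1450)
```

A wetter surface deepens the absorption dip, raising the index, which is the behavior such ratios exploit for moisture monitoring.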
Abstract:
The capability to detect combustion in a diesel engine has the potential to be an important control feature for meeting increasingly stringent emission regulations, developing alternative combustion strategies, and using biofuels. In this dissertation, block-mounted accelerometers were investigated as potential feedback sensors for detecting combustion characteristics in a high-speed, high-pressure common rail (HPCR), 1.9 L diesel engine. Accelerometers were positioned in multiple placements and orientations on the engine, and engine testing was conducted under motored, single-injection, and pilot-main injection conditions. Tests were conducted at varying injection timings, engine loads, and engine speeds to observe the resulting time- and frequency-domain changes of the cylinder pressure and accelerometer signals. The frequency content of the cylinder-pressure-based signals and the accelerometer signals between 0.5 kHz and 6 kHz indicated a strong correlation, with coherence values of nearly 1. The accelerometers were used to produce estimated combustion signals using frequency response functions (FRFs) measured from the frequency-domain characteristics of the cylinder pressure signals and the response of the accelerometers attached to the engine block. When compared to the actual combustion signals, the estimated combustion signals produced from the accelerometer response had root mean square errors (RMSE) between 7% and 25% of the actual signal's peak value. Weighting the FRFs from multiple test conditions along their frequency axis with the coherent output power reduced both the median RMSE of the estimated combustion signals and the 95th percentile of the RMSE produced from each test condition. The RMSEs of the magnitude-based combustion metrics, including peak cylinder pressure, MPG, peak ROHR, and work, estimated from the combustion signals produced by the accelerometer responses were between 15% and 50% of their actual values.
The MPG measured from the estimated pressure gradient shared a direct relationship with the actual MPG. The location-based combustion metrics, such as the locations of peak values and burn durations, achieved RMSEs as low as 0.9°. Overall, the accelerometer-based combustion sensing system was capable of detecting combustion and providing feedback regarding the in-cylinder combustion process.
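The FRF estimation step above can be illustrated with a standard H1 estimator (cross-spectrum over input auto-spectrum, averaged across segments). The toy signals below stand in for cylinder pressure and block vibration; they, the segment count, and the flat gain of 2 are assumptions for illustration, not the dissertation's data.

```python
import numpy as np

def h1_frf(x, y, nseg=8):
    """H1 frequency response estimate H(f) = Sxy(f) / Sxx(f),
    averaged over nseg non-overlapping segments."""
    n = len(x) // nseg
    Sxx = np.zeros(n // 2 + 1)
    Sxy = np.zeros(n // 2 + 1, dtype=complex)
    for k in range(nseg):
        X = np.fft.rfft(x[k * n:(k + 1) * n])
        Y = np.fft.rfft(y[k * n:(k + 1) * n])
        Sxx += (X.conj() * X).real   # input auto-spectrum
        Sxy += X.conj() * Y          # input-output cross-spectrum
    return Sxy / Sxx

# Toy "input" signal and an output that is the input scaled by 2
# plus noise, standing in for the measured block response.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = 2.0 * x + 0.1 * rng.standard_normal(4096)
H = h1_frf(x, y)
gain = np.abs(H[1:]).mean()   # recovers the flat gain of ~2
```

Averaging across segments is what suppresses the uncorrelated noise term, which is the same reason coherence-based weighting of FRFs from multiple test conditions helps.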
Abstract:
We used active remote sensing technology to characterize forest structure in a northern temperate forest at the landscape and local levels in the Upper Peninsula of Michigan. Specifically, we used light detection and ranging (LiDAR) to aid in the depiction of current forest structural stages and in the estimation of total canopy gap area. At the landscape level, LiDAR data are shown not only to be a useful tool for characterizing forest structure in both coniferous and deciduous forest cover types, but also an effective basis for data-driven surrogates for the classification of forest structure. At the local level, LiDAR data serve as a benchmark reference for evaluating field-based canopy gap area estimates, owing to the high accuracy of such remotely sensed data. The application of LiDAR data can help facilitate current and future sustainable forest management.
Abstract:
Anonymity systems maintain the anonymity of communicating nodes by camouflaging them, either with peer nodes generating dummy traffic or with peer nodes participating in the actual communication process. The probability of an adversary breaking the anonymity of the communicating nodes is inversely proportional to the number of peer nodes participating in the network; hence a large number of peer nodes is needed to maintain anonymity, and a lack of peer availability weakens any large-scale anonymity system. This work proposes PayOne, an incentive-based scheme for promoting peer availability. PayOne aims to increase peer availability, and thereby anonymity strength, by awarding incentives to nodes that participate in the anonymity system. Existing incentive schemes are designed for single-path approaches; there is no incentive scheme for multipath or epidemic anonymity systems. This work is specifically designed for epidemic protocols and has been implemented over MuON, one of the latest entries in the area of multicast-based anonymity systems. MuON is a peer-to-peer anonymity system that uses an epidemic protocol for data dissemination. Existing incentive schemes involve paying every intermediate node involved in the communication between the initiator and the receiver; such schemes are not appropriate for epidemic anonymity systems because of the overhead they incur. PayOne differs from existing schemes in that it pays a single intermediate node. That node can be any random node participating in the communication and does not necessarily lie on the communication path between the initiator and the receiver. The lightweight characteristics of PayOne make it viable for large-scale epidemic anonymity systems.
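The contrast between pay-every-relay and pay-one-random-participant can be sketched with a toy epidemic simulation. Everything here (node count, fanout, round structure) is an invented illustration, not the MuON or PayOne protocol itself.

```python
import random

def disseminate(n_nodes, fanout, initiator, rng):
    """Toy epidemic dissemination: each informed node forwards the
    message to `fanout` random peers per round until every node has
    it. Returns the set of nodes that forwarded the message."""
    informed = {initiator}
    participants = set()
    while len(informed) < n_nodes:
        new = set()
        for node in informed:
            participants.add(node)
            for peer in rng.sample(range(n_nodes), fanout):
                new.add(peer)
        informed |= new
    return participants

def pay_one(participants, initiator, receiver, rng):
    """PayOne-style incentive: reward a single random intermediate
    node rather than every relay on a path."""
    candidates = list(participants - {initiator, receiver})
    return rng.choice(candidates)

rng = random.Random(5)
parts = disseminate(n_nodes=50, fanout=3, initiator=0, rng=rng)
payee = pay_one(parts, initiator=0, receiver=49, rng=rng)
```

One payment per message, regardless of how many nodes relayed it, is what keeps the per-message accounting overhead constant as the epidemic fan-out grows.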
Abstract:
Waste effluents from the forest products industry are sources of lignocellulosic biomass that can be converted to ethanol by yeast after pretreatment. However, the challenge of improving ethanol yields from a mixed pentose and hexose fermentation of a potentially inhibitory hydrolysate remains. Hardboard manufacturing process wastewater (HPW) was evaluated as a potential feedstream for lignocellulosic ethanol production by native xylose-fermenting yeasts. After screening of xylose-fermenting yeasts, Scheffersomyces stipitis CBS 6054 was selected as the ideal organism for conversion of the HPW hydrolysate material. The individual and synergistic effects of inhibitory compounds present in the hydrolysate were evaluated using response surface methodology, and it was concluded that organic acids have an additive negative effect on fermentations. Fermentation conditions were also optimized in terms of aeration and pH, and methods for improving productivity and achieving higher ethanol yields were investigated, including adaptation to the conditions present in the hydrolysate through repeated cell sub-culturing. The objectives of the present study were to adapt S. stipitis CBS 6054 to a dilute-acid-pretreated lignocellulosic waste stream; compare the physiological, metabolic, and proteomic profiles of the adapted strain to those of its parent; quantify changes in protein expression/regulation, metabolite abundance, and enzyme activity; and determine the biochemical and molecular mechanisms of adaptation. The adapted culture showed improvement in both substrate utilization and ethanol yield compared with the unadapted parent strain, and displayed an altered growth phenotype based on its physiological and proteomic profiles. Several potential targets that could be responsible for the strain improvement were identified.
These targets could have implications for the metabolic engineering of strains for improved ethanol production from lignocellulosic feedstocks. Although this work focuses specifically on the conversion of HPW to ethanol, the methods developed can be applied to any feedstock/product system that employs a microbial conversion step. The benefit of this research is that the organisms will be optimized for a company's specific system.
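The response-surface step above amounts to fitting a model with main effects and an interaction term and checking whether the interaction is negligible (additive effects). A minimal least-squares sketch, with invented concentrations and coefficients rather than the study's data:

```python
import numpy as np

def fit_rsm(x1, x2, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b12*x1*x2.
    A near-zero b12 suggests additive (non-synergistic) effects."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# Toy data: two inhibitor concentrations with purely additive
# negative effects on yield (no interaction term in the truth).
rng = np.random.default_rng(1)
x1 = rng.uniform(0, 1, 50)
x2 = rng.uniform(0, 1, 50)
y = 1.0 - 0.4 * x1 - 0.3 * x2 + 0.01 * rng.standard_normal(50)
b0, b1, b2, b12 = fit_rsm(x1, x2, y)
```

Recovering negative main-effect coefficients with an interaction term near zero is the signature of the additive inhibition the abstract reports.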
Abstract:
With the development of genotyping and next-generation sequencing technologies, multi-marker testing in genome-wide association studies and rare variant association studies has become an active research area in statistical genetics. This dissertation presents three methodologies for association studies that exploit different features of genetic data, and demonstrates how to use these methods to test genetic association hypotheses. The methods address three scenarios: 1) multi-marker testing for regions in strong linkage disequilibrium, 2) multi-marker testing for family-based association studies, and 3) multi-marker testing for rare variant association studies. I also discuss the advantages of these methods and demonstrate their power through simulation studies and applications to real genetic data.
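As one common baseline for the rare-variant scenario, a burden-style test collapses many rare variants into a single score per individual before testing association. This sketch is a generic illustration of that idea, not one of the dissertation's three methods; the MAF threshold and simulated cohort are assumptions.

```python
import numpy as np

def burden_score(genotypes, maf_threshold=0.05):
    """Collapse rare variants (MAF below threshold) into one burden
    score per individual: the count of rare alleles carried."""
    g = np.asarray(genotypes, dtype=float)   # individuals x variants, coded 0/1/2
    maf = g.mean(axis=0) / 2.0
    rare = maf < maf_threshold
    return g[:, rare].sum(axis=1)

# Toy cohort: 500 individuals, 20 rare variants, phenotype driven by
# the aggregate rare-allele count plus noise.
rng = np.random.default_rng(2)
mafs = rng.uniform(0.005, 0.03, 20)
g = rng.binomial(2, mafs, size=(500, 20))
score = burden_score(g)
pheno = 0.5 * score + rng.standard_normal(500)
r = np.corrcoef(score, pheno)[0, 1]   # association signal of the collapsed score
```

Collapsing trades per-variant resolution for power when single rare variants are too infrequent to test individually.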
Abstract:
Eutrophication is a persistent problem in many freshwater lakes. Delays in lake recovery following reductions in external loading of phosphorus, the limiting nutrient in freshwater ecosystems, are often observed. Models have been created to assist with lake remediation efforts; however, management tools are seldom applied to sediment diagenesis because of its conceptual and mathematical complexity. SED2K (Chapra et al. 2012) is proposed as a "middle way", offering engineering rigor while remaining accessible to users. An objective of this research is to further support the development and application of SED2K for sediment phosphorus diagenesis and release to the water column of Onondaga Lake. SED2K has previously been applied to eutrophic Lake Alice in Minnesota, whose relatively homogeneous sediments, in contrast to the industrially polluted sediment layers of Onondaga Lake, allowed an invariant rate coefficient to be applied to describe first-order phosphorus decay kinetics. When a similar approach was attempted for Onondaga Lake, an invariant rate coefficient failed to simulate the sediment phosphorus profile. Therefore, labile P was accounted for by progressive preservation after burial, and a rate coefficient that gradually decreased with depth was applied. In this study, sediment profile samples were chemically extracted into five operationally defined fractions: CaCO3-P, Fe/Al-P, Biogenic-P, Ca Mineral-P, and Residual-P. The chemical fractionation data showed that preservation is not the only mechanism by which phosphorus may be maintained in a non-reactive state in the profile; sorption also contributes substantially to P burial. A new kinetic approach involving the partitioning of P into process-based fractions is applied here.
Results from this approach indicate that labile P (Ca Mineral-P and Organic P) contributes to internal P loading to Onondaga Lake through diagenesis and diffusion to the water column, while the sorbed P fractions (Fe/Al-P and CaCO3-P) remain constant. Sediment profile concentrations of labile and total phosphorus at the time of deposition were also modeled and compared with current labile and total phosphorus to quantify the remaining phosphorus that will continue to contribute to internal P loading and influence the trophic status of Onondaga Lake. The results also allowed estimation of the depth of the active sediment layer and the attendant response time, as well as the sediment burden of labile P and the associated efflux.
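The idea of first-order labile-P decay with a rate coefficient that declines as material is buried can be sketched numerically. The exponential-in-time form of k and all coefficient values below are illustrative assumptions, not the SED2K formulation or values fitted for Onondaga Lake.

```python
import numpy as np

def labile_p_profile(p0, k0, beta, years, dt=0.1):
    """Labile P remaining versus time since deposition, with a
    first-order decay coefficient that declines as the material ages
    (and is progressively buried):
        dP/dt = -k(t) * P,   k(t) = k0 * exp(-beta * t)
    Returns time and P as a fraction of the initial labile P."""
    t = np.arange(0.0, years + dt, dt)
    p = np.empty_like(t)
    p[0] = p0
    for i in range(1, len(t)):
        k = k0 * np.exp(-beta * t[i - 1])
        p[i] = p[i - 1] * (1.0 - k * dt)   # explicit Euler step
    return t, p / p0

t, frac = labile_p_profile(p0=1.0, k0=0.2, beta=0.05, years=100.0)
```

Because k decays, the profile does not go to zero: a residual labile fraction is "progressively preserved" at depth, which is the qualitative behavior an invariant rate coefficient cannot reproduce.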
Abstract:
The purpose of this research was to develop a working physical model of the focused plenoptic camera and software that can process the measured image intensity, reconstruct a full-resolution image, and derive a depth map from the corresponding rendered image. The plenoptic camera is a specialized imaging system designed to acquire spatial, angular, and depth information in a single intensity measurement; it can also computationally refocus an image by adjusting the patch size used in reconstruction. Published methods have been vague and conflicting, so the motivation behind this research was to decipher the prior work in order to develop a working proof-of-concept model. This thesis outlines the theory behind plenoptic camera operation and shows how the measured intensity from the image sensor can be turned into a full-resolution rendered image with its corresponding depth map. The depth map is created by cross-correlating adjacent sub-images formed by the microlenslet array (MLA). The full-resolution image is reconstructed by taking a patch from each MLA sub-image and piecing the patches together like a puzzle, with the patch size determining which object plane is in focus. This thesis also gives a rigorous explanation of the design constraints involved in building a plenoptic camera. Plenoptic camera data from Adobe were used to help develop the algorithms for creating a rendered image and its depth map. Finally, using these algorithms and the knowledge gained in developing the plenoptic camera, a working experimental system was built that successfully generated a rendered image and its corresponding depth map.
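The patch-tiling reconstruction described above can be sketched in a few lines: cut a centered patch from each microlens sub-image and tile the patches. The 4D layout, grid size, and patch size below are assumptions for illustration, not the thesis's actual sensor geometry.

```python
import numpy as np

def render(lightfield, patch):
    """Assemble a full-resolution image from a focused-plenoptic capture.

    lightfield: 4D array (sub-image rows, sub-image cols, sub_h, sub_w),
    one sub-image per microlens. A centered patch x patch window is cut
    from each sub-image and the patches are tiled like puzzle pieces;
    the patch size selects which object plane is rendered in focus.
    """
    ny, nx, sh, sw = lightfield.shape
    r0 = (sh - patch) // 2
    c0 = (sw - patch) // 2
    out = np.zeros((ny * patch, nx * patch))
    for i in range(ny):
        for j in range(nx):
            out[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = \
                lightfield[i, j, r0:r0+patch, c0:c0+patch]
    return out

# Toy capture: an 8x8 grid of 16x16-pixel sub-images.
lf = np.random.default_rng(3).random((8, 8, 16, 16))
img = render(lf, patch=6)   # yields a 48x48 rendered image
```

Re-rendering with a different `patch` value is the computational refocusing step: each patch size corresponds to a different in-focus object plane.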
Abstract:
In recent years there has been a tremendous amount of research in the area of nanotechnology. History tells us that the commercialization of technologies is always accompanied by both positive and negative effects on society and the environment. Products containing nanomaterials are already available in the market, yet there is still little information regarding the potential negative effects these products may cause. The work presented in this dissertation describes a holistic approach to addressing different dimensions of nanotechnology sustainability. Life cycle analysis (LCA) was used to study the potential use of polyethylene filled with nanomaterials to manufacture automobile body panels; results showed that the nanocomposite does not provide an environmental benefit over traditional steel panels. A new methodology based on design of experiments (DOE) techniques, coupled with LCA, was implemented to investigate the impact of inventory uncertainties. Results showed that data variability does not have a significant effect on the prediction of environmental impacts, whereas the material profiles of input materials have a highly significant effect on the overall impact. Energy consumption and material characterization were identified as two areas where additional research is needed to predict the overall impact of nanomaterials more effectively. A study was undertaken to gain insight into the behavior of small particles in contact with a surface exposed to air flow, to determine when particles lift off from the surface. A mapping strategy was implemented that identifies the conditions for particle lift-off based on particle size and separation distance from the wall. Main results showed that particles smaller than 0.1 mm will not become airborne under shear flow unless the separation distance is greater than 15 nm. These results may be used to minimize exposure to airborne materials.
Societal implications that may arise in the workplace were also researched. This task explored topics including health, ethics, and worker perception, with the aim of identifying the base knowledge available in the literature. Recommendations are given for different scenarios describing how workers and employers can minimize the unwanted effects of nanotechnology production.
Abstract:
It has been proposed that inertial clustering may lead to an increased collision rate of water droplets in clouds. Atmospheric clouds and electrosprays contain electrically charged particles embedded in turbulent flows, often under the influence of an externally imposed, approximately uniform gravitational or electric force. In this thesis, we investigate charged inertial particles embedded in turbulence. We have developed a theoretical description of the dynamics of such systems of charged, sedimenting particles in turbulence, allowing radial distribution functions to be predicted for both monodisperse and bidisperse particle size distributions. The governing parameters are the particle Stokes number (the particle inertial time scale relative to the turbulence dissipation time scale), the Coulomb-turbulence parameter (the ratio of the Coulomb terminal speed to the turbulence dissipation velocity scale), and the settling parameter (the ratio of the gravitational terminal speed to the turbulence dissipation velocity scale). For monodisperse particles, the peak in the radial distribution function is well predicted by the balance between the particle terminal velocity under Coulomb repulsion and a time-averaged 'drift' velocity arising from the nonuniform sampling of fluid strain and rotation due to finite particle inertia. The theory is compared to radial distribution functions measured for water particles in homogeneous, isotropic air turbulence, with particle positions measured in three dimensions using digital holography. The measurements support the general theoretical expression, consisting of a power-law increase in particle clustering due to the particle response to dissipative turbulent eddies, modulated by an exponential electrostatic interaction term.
Both terms are modified by a gravitational diffusion-like term, and the role of 'gravity' is explored by imposing a macroscopic uniform electric field to create an enhanced, effective gravity. The relation between the radial distribution function and the inward mean radial relative velocity is established for charged particles.
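The measurement step above — computing a radial distribution function from 3D particle positions — can be sketched directly. The periodic cubic box, point count, and binning below are assumptions for illustration; the thesis's holographic measurement volume would require its own edge corrections.

```python
import numpy as np

def radial_distribution(points, box, bins=20, r_max=None):
    """Radial distribution function g(r) for points in a periodic cube.

    g(r) compares the observed pair-separation histogram to that of a
    uniform (ideal-gas) arrangement at the same number density;
    g(r) > 1 indicates clustering at that separation.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    if r_max is None:
        r_max = box / 2.0
    # All pairwise separations with minimum-image periodic wrapping.
    d = pts[:, None, :] - pts[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=bins, range=(0, r_max))
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = n / box ** 3
    expected = 0.5 * n * density * shell   # ideal-gas pair counts per bin
    return edges, hist / expected

# Uniform random points should give g(r) close to 1 at all separations.
rng = np.random.default_rng(4)
edges, g = radial_distribution(rng.uniform(0, 1, (400, 3)), box=1.0)
```

For inertially clustered (and charge-repelled) particles, the same estimator would show the power-law rise at small r modulated by the exponential electrostatic term described in the abstract.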
Abstract:
This report shares my efforts in developing a solid unit of instruction with a clear focus on student outcomes. I have been a teacher for 20 years and have been writing and revising curricula for much of that time. However, most of it was developed without the benefit of current research on how students learn, and did not focus on what and how students are learning. My journey as a teacher has involved a lot of trial and error. My traditional method of teaching is to look at the benchmarks (now content expectations) to see what needs to be covered. My unit consists of having students read the appropriate sections in the textbook, complete worksheets, watch a video, and take some notes. I try to include at least one hands-on activity, one or more quizzes, and the traditional end-of-unit test consisting mostly of multiple-choice questions I find in the textbook. I try to be engaging, make the lessons fun, and hope that at the end of the unit my students get whatever concepts I've presented so that we can move on to the next topic. I want to increase students' understanding of science concepts and their ability to connect that understanding to the real world. However, sometimes I feel that my lessons are missing something. For a long time I have wanted to develop a unit of instruction that I know is an effective tool for the teaching and learning of science. In this report, I describe my efforts to reform my curricula using the "Understanding by Design" process. I want to see if this style of curriculum design will help me be a more effective teacher and if it will lead to an increase in student learning. My hypothesis is that this new (for me) approach to teaching will lead to increased understanding of science concepts among students because it is based on purposefully thinking about learning targets grounded in the "big ideas" of science.
For my reformed curricula I incorporate lessons from several outstanding programs I've been involved with, including EpiCenter (Purdue University), Incorporated Research Institutions for Seismology (IRIS), the Master of Science program in Applied Science Education at Michigan Technological University, and the Michigan Association for Computer Users in Learning (MACUL). In this report, I present the methodology I used to develop a new unit of instruction based on the Understanding by Design process. Several lessons and learning plans I developed for the unit, which follow the 5E Learning Cycle, appear as appendices at the end of this report, along with the results of pilot testing one of the lessons. Although the lesson I pilot-tested was not as successful in increasing student learning outcomes as I had anticipated, the development process was helpful in that it required me to focus on important concepts. Conducting the pilot test was also helpful because it led me to identify ways to improve the lesson in the future.
Abstract:
The scaphoid is one of the eight carpal bones; it lies adjacent to the thumb and is supported proximally by the radius. During a free fall onto an outstretched hand, the impact load is transferred to the scaphoid at its free anterior end. The unique arrangement of the other carpal bones in the palm is also one of the reasons the load is transferred to the scaphoid: about half of the total load acting on the carpal bones is transferred to the scaphoid at its distal pole. There are about 10 to 12 clinically observed fracture patterns in the scaphoid due to free falls. The aim of this study is to determine the orientation of the load, the magnitude of the load, and the corresponding fracture pattern. The study includes both static and dynamic finite element models validated by experiments. The scaphoid model was prepared from CT scans of a 27-year-old subject, with the 2D CT slices converted to a 3D model using MIMICS software. Four loading cases, considered to occur most frequently in clinical practice, were studied: in case (i) the load is applied at the posterior end of the distal pole, whereas in cases (ii), (iii), and (iv) the load is applied at the anterior end in different directions. The model is given a fixed boundary condition in the region supported by the radius during impact. The same loading and boundary conditions were used in both the static and the dynamic explicit finite element analyses. The site of fracture initiation and the path of fracture propagation were identified using maximum principal stress/gradient and maximum principal strain/gradient criteria, respectively, in the static and dynamic explicit finite element analyses. Static and dynamic impact experiments were performed on polyurethane foam specimens to validate the finite element results, and experimental results such as the load at fracture, the site of fracture initiation, and the path of fracture propagation were compared with the results of the finite element analysis.
Four different types of fracture patterns observed in clinical studies have been identified in this study.
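The maximum-principal-stress criterion used to locate fracture initiation reduces, at each material point, to an eigenvalue problem on the stress tensor. A minimal sketch (the stress state below is an invented example, not a value from the scaphoid model):

```python
import numpy as np

def principal_stresses(sigma):
    """Principal stresses (sorted, most tensile first) of a symmetric
    3x3 Cauchy stress tensor. In a maximum-principal-stress criterion,
    fracture is predicted to initiate where the largest value exceeds
    the material's tensile strength."""
    vals = np.linalg.eigvalsh(np.asarray(sigma, dtype=float))
    return vals[::-1]   # eigvalsh returns ascending order

# Illustrative stress state (MPa) at a candidate initiation site.
sigma = np.array([[40.0, 10.0,  0.0],
                  [10.0, 25.0,  5.0],
                  [ 0.0,  5.0, -15.0]])
p = principal_stresses(sigma)
```

Scanning this quantity over the mesh and following its gradient is, in outline, how the initiation site and propagation path criteria described above operate.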
Abstract:
Epoxies find a variety of applications, during which they are exposed to conditions such as elevated temperatures and hydrothermal or chemical environments. It has been observed that the properties of epoxies are substantially affected by extended exposure to these conditions, and because of the variety of applications, researchers have found it necessary to study the effects on thermal, mechanical, physical, and chemical properties. This report focuses on the effects of physical aging, the aging that occurs on exposure to elevated temperatures, on the mechanical properties of EPON 862 cured with DETDA. A fair amount of computational work has been performed on EPON 862-DETDA to study the effects of physical aging, but very little known experimental work has addressed these effects. Young's modulus, hardness, failure strength, strain to failure, density, and glass transition temperature were obtained using various experimental methods: tensile testing, nanoindentation, and differential scanning calorimetry. Experimental work on other epoxies has shown no increase, or a very slight increase, in Young's modulus and hardness with increased aging time, along with decreases in failure strength and strain to failure; through this work on EPON 862-DETDA we observe similar trends.
Abstract:
This technical report discusses the application of the lattice Boltzmann method (LBM) to fluid flow simulation through the porous filter wall of a disordered medium. The diesel particulate filter (DPF) is an example of such a medium: developed as a cutting-edge technology to reduce harmful particulate matter in engine exhaust, its porous filter wall traps soot particles during after-treatment of the exhaust gas. To examine the phenomena inside the DPF, researchers are looking to the lattice Boltzmann method as a promising alternative simulation tool. The LBM is a comparatively new numerical scheme that can simulate single-component single-phase and single-component multi-phase fluid flow, and it is also an excellent method for modelling flow through disordered media. The current work focuses on single-phase fluid flow simulation inside a porous micro-structure using the LBM. First, the theory concerning the development of the LBM is discussed. The evolution of the LBM is usually related to lattice gas cellular automata (LGCA), but it is also shown that the method is a special discretized form of the continuous Boltzmann equation. Since all simulations are conducted in two dimensions, the equations are developed with reference to the D2Q9 (two-dimensional, 9-velocity) model. An artificially created porous micro-structure is used in this study, and the flow simulations are conducted with air and CO2 as the fluids. The numerical model is explained with a flowchart and the coding steps, and the numerical code is written in MATLAB. Different types of boundary conditions and their importance are discussed separately, and the equations specific to the boundary conditions are derived. The pressure and velocity contours over the porous domain are studied and recorded, and the results are compared with published work.
The permeability values obtained in this study can be fitted to the relation proposed by Nabovati [8], and the results are in excellent agreement within the porosity range of 0.4 to 0.8.
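The D2Q9 collide-and-stream cycle at the core of such a simulation fits in a short script. The report's code is in MATLAB; this is an equivalent Python sketch of the standard BGK scheme on a periodic box (no porous geometry or inlet/outlet boundaries, which would be added via the bounce-back and pressure boundary conditions the report derives).

```python
import numpy as np

# D2Q9 lattice: discrete velocity set and weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """D2Q9 equilibrium distributions for density rho, velocity (ux, uy)."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau):
    """One BGK collision + periodic streaming step."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau          # collide
    for i in range(9):                                  # stream
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(1, 0))
    return f

# Periodic box seeded with a small density perturbation at the center.
nx = ny = 32
rho0 = np.ones((ny, nx))
rho0[ny // 2, nx // 2] += 0.01
f = equilibrium(rho0, np.zeros_like(rho0), np.zeros_like(rho0))
for _ in range(50):
    f = step(f, tau=0.8)
```

Both the collision (relaxation toward a same-density equilibrium) and the streaming (a pure shift) conserve mass exactly, which is a useful sanity check on any LBM implementation before porous walls are introduced.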