996 results for Michigan Tech


Relevance:

60.00%

Publisher:

Abstract:

Anonymity systems maintain the anonymity of communicating nodes by camouflaging them among peer nodes, which either generate dummy traffic or participate in the actual communication. The probability that an adversary breaks the anonymity of the communicating nodes is inversely proportional to the number of peer nodes participating in the network; hence a large number of peers is needed to maintain anonymity, and a lack of peer availability weakens any large-scale anonymity system. This work proposes PayOne, an incentive-based scheme for promoting peer availability. PayOne aims to increase peer availability, and thereby anonymity strength, by awarding incentives to nodes that participate in the anonymity system. Existing incentive schemes are designed for single-path approaches; there is no incentive scheme for multipath-based or epidemic-based anonymity systems. This work is specifically designed for epidemic protocols and has been implemented over MuON, one of the latest entries in the area of multicast-based anonymity systems. MuON is a peer-to-peer anonymity system that uses an epidemic protocol for data dissemination. Existing incentive schemes pay every intermediate node involved in the communication between the initiator and the receiver; they are not appropriate for epidemic-based anonymity systems because of the overhead they incur. PayOne differs from existing schemes in that it pays a single intermediate node: any random node that participates in the communication, not necessarily one lying on the path between the initiator and the receiver. The light-weight characteristics of PayOne make it viable for large-scale epidemic-based anonymity systems.
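The single-payee idea can be sketched in a few lines. This is a minimal illustration only; the function name, node names, and credit amount are invented for the example, not taken from the thesis.

```python
import random

def pay_one(participants, initiator, receiver, amount):
    """Sketch of the single-payee idea: award the incentive to one randomly
    chosen participating node instead of paying every node on the path.
    Any participant other than the endpoints is eligible, whether or not
    it lies on the initiator-receiver communication path."""
    eligible = [n for n in participants if n not in (initiator, receiver)]
    payee = random.choice(eligible)
    return payee, amount

# Hypothetical epidemic round with five participating nodes.
nodes = ["A", "B", "C", "D", "E"]
payee, credit = pay_one(nodes, initiator="A", receiver="E", amount=1)
print(payee, credit)
```

Because only one payment is made per communication regardless of how many nodes the epidemic dissemination touches, the accounting overhead stays constant rather than growing with the number of intermediaries.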

Relevance:

60.00%

Publisher:

Abstract:

Waste effluents from the forest products industry are sources of lignocellulosic biomass that can be converted to ethanol by yeast after pretreatment. However, the challenge remains of improving ethanol yields from a mixed pentose and hexose fermentation of a potentially inhibitory hydrolysate. Hardboard manufacturing process wastewater (HPW) was evaluated as a potential feedstream for lignocellulosic ethanol production by native xylose-fermenting yeasts. After screening of xylose-fermenting yeasts, Scheffersomyces stipitis CBS 6054 was selected as the ideal organism for conversion of the HPW hydrolysate material. The individual and synergistic effects of inhibitory compounds present in the hydrolysate were evaluated using response surface methodology; it was concluded that organic acids have an additive negative effect on fermentations. Fermentation conditions were also optimized in terms of aeration and pH, and methods for improving productivity and achieving higher ethanol yields were investigated. Adaptation to the conditions present in the hydrolysate was carried out through repeated cell sub-culturing. The objectives of this study were to adapt S. stipitis CBS 6054 to a dilute-acid-pretreated, lignocellulose-containing waste stream; compare the physiological, metabolic, and proteomic profiles of the adapted strain to those of its parent; quantify changes in protein expression/regulation, metabolite abundance, and enzyme activity; and determine the biochemical and molecular mechanisms of adaptation. The adapted culture showed improvement in both substrate utilization and ethanol yield compared with the unadapted parent strain, and its physiological and proteomic profiles indicated a distinct growth phenotype. Several potential targets that could be responsible for the strain improvement were identified. These targets could have implications for metabolic engineering of strains for improved ethanol production from lignocellulosic feedstocks. Although this work focuses specifically on the conversion of HPW to ethanol, the methods developed can be applied to any feedstock/product system that employs a microbial conversion step; the benefit of this research is that organisms can be optimized for a company's specific system.
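The response-surface screening of inhibitor effects can be illustrated with a small factorial fit. The design, the coded factors (two hypothetical organic acids), and the yield values below are all invented for illustration; a full response-surface analysis would add quadratic terms and axial runs.

```python
import numpy as np

# Hypothetical 2^2 factorial with center points: ethanol yield (g/g) at
# coded levels of two organic-acid inhibitors. Values are illustrative.
x1 = np.array([-1, -1, 1, 1, 0, 0, 0])   # e.g., acetic acid level
x2 = np.array([-1, 1, -1, 1, 0, 0, 0])   # e.g., formic acid level
y  = np.array([0.42, 0.35, 0.33, 0.24, 0.40, 0.39, 0.41])

# First-order model with interaction: y = b0 + b1*x1 + b2*x2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b12 = coef

# Negative main effects with a near-zero interaction term are the signature
# of the additive (non-synergistic) inhibition described above.
print(b1, b2, b12)
```

Here b1 and b2 come out negative (each acid depresses yield) while b12 is small, which is how an additive rather than synergistic effect shows up in the fitted model.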

Relevance:

60.00%

Publisher:

Abstract:

With the development of genotyping and next-generation sequencing technologies, multi-marker testing in genome-wide association studies and rare-variant association studies has become an active research area in statistical genetics. This dissertation presents three methodologies for association testing that exploit different features of genetic data and demonstrates how to use them to test genetic association hypotheses. The methods address three scenarios: 1) multi-marker testing in regions of strong linkage disequilibrium, 2) multi-marker testing for family-based association studies, and 3) multi-marker testing for rare-variant association studies. The advantages of these methods are discussed, and their power is demonstrated through simulation studies and applications to real genetic data.
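A common multi-marker strategy for the rare-variant scenario is a burden-style collapse of variants into a single score per individual. The sketch below uses simulated data with invented parameters; it illustrates the collapsing idea only and is not the dissertation's actual test statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical genotype matrix: 200 individuals x 10 rare variants,
# coded as 0/1/2 minor-allele counts with a 2% minor-allele frequency.
n, m = 200, 10
G = rng.binomial(2, 0.02, size=(n, m))

# Burden score: collapse the rare variants into one count per individual,
# trading per-variant resolution for power when effects share a direction.
burden = G.sum(axis=1)

# Simulate a quantitative trait with a true burden effect plus noise.
y = 0.5 * burden + rng.normal(size=n)

# A simple association measure: correlation between burden score and trait
# (a regression-based score or SKAT-type test would be used in practice).
r = np.corrcoef(burden, y)[0, 1]
print(r)
```

Collapsing works well when the rare variants act in the same direction; variance-component tests relax that assumption at the cost of a more involved null distribution.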

Relevance:

60.00%

Publisher:

Abstract:

Eutrophication is a persistent problem in many freshwater lakes. A delay in lake recovery following reductions in external loading of phosphorus, the limiting nutrient in freshwater ecosystems, is often observed. Models have been created to assist with lake remediation efforts; however, management tools are seldom applied to sediment diagenesis because of its conceptual and mathematical complexity. SED2K (Chapra et al. 2012) is proposed as a "middle way", offering engineering rigor while remaining accessible to users. An objective of this research is to further support the development and application of SED2K for sediment phosphorus diagenesis and release to the water column of Onondaga Lake. SED2K has previously been applied to eutrophic Lake Alice in Minnesota. The more homogeneous sediment characteristics of Lake Alice, compared with the industrially polluted sediment layers of Onondaga Lake, allowed an invariant rate coefficient to be applied to describe first-order decay kinetics of phosphorus. When a similar approach was attempted on Onondaga Lake, an invariant rate coefficient failed to simulate the sediment phosphorus profile. Therefore, labile P was accounted for by progressive preservation after burial, and a rate coefficient that gradually decreased with depth was applied. In this study, profile sediment samples were chemically extracted into five operationally defined fractions: CaCO3-P, Fe/Al-P, Biogenic-P, Ca Mineral-P, and Residual-P. The chemical fractionation data showed that preservation is not the only mechanism by which phosphorus may be maintained in a non-reactive state in the profile; sorption was shown to contribute substantially to P burial. A new kinetic approach involving partitioning of P into process-based fractions is applied here. Results from this approach indicate that labile P (Ca Mineral-P and organic P) contributes to internal P loading of Onondaga Lake through diagenesis and diffusion to the water column, while the sorbed P fraction (Fe/Al-P and CaCO3-P) remains stable. Sediment profile concentrations of labile and total phosphorus at the time of deposition were also modeled and compared with current labile and total phosphorus to quantify the remaining phosphorus that will continue to contribute to internal P loading and influence the trophic status of Onondaga Lake. The results also allowed estimation of the depth of the active sediment layer and the attendant response time, as well as the sediment burden of labile P and its associated efflux.
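The kinetic idea of a rate coefficient that declines with burial can be sketched analytically. The parameter values below are illustrative, not calibrated to Onondaga Lake, and the exponential form for the declining coefficient is one simple choice for "progressive preservation".

```python
import numpy as np

# First-order decay of labile sediment P with a rate coefficient that
# declines as the material ages (is buried deeper). Illustrative values.
k0 = 0.5      # 1/yr, rate coefficient of freshly deposited labile P
beta = 0.1    # 1/yr, rate at which reactivity is lost with burial age

ages = np.linspace(0, 50, 6)        # years since deposition (depth proxy)
k = k0 * np.exp(-beta * ages)       # gradually decreasing rate coefficient

# Remaining labile fraction = exp(-integral of k(t) dt), which for the
# exponential k(t) integrates to exp(-(k0/beta) * (1 - exp(-beta*t))).
labile_fraction = np.exp(-(k0 / beta) * (1.0 - np.exp(-beta * ages)))
print(labile_fraction)
```

Unlike an invariant coefficient, which drives labile P toward zero, this form levels off at a nonzero floor (here exp(-k0/beta)), which is qualitatively why a depth-varying coefficient can reproduce a profile that an invariant one cannot.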

Relevance:

60.00%

Publisher:

Abstract:

The purpose of this research was to develop a working physical model of the focused plenoptic camera and software that can process the measured image intensity, reconstruct it into a full-resolution image, and develop a depth map from the corresponding rendered image. The plenoptic camera is a specialized imaging system designed to acquire spatial, angular, and depth information in a single intensity measurement; it can also computationally refocus an image by adjusting the patch size used in reconstruction. The published methods have been vague and conflicting, so the motivation behind this research is to decipher the work that has been done and develop a working proof-of-concept model. This thesis outlines the theory behind plenoptic camera operation and shows how the measured intensity from the image sensor can be turned into a full-resolution rendered image with its corresponding depth map. The depth map is created by cross-correlating adjacent sub-images formed by the microlenslet array (MLA). The full-resolution image is reconstructed by taking a patch from each MLA sub-image and piecing the patches together like a puzzle; the patch size determines which object plane will be in focus. The thesis also gives a rigorous explanation of the design constraints involved in building a plenoptic camera. Plenoptic camera data from Adobe were used to help develop the algorithms written to create a rendered image and its depth map. Finally, using these algorithms and the knowledge gained in developing the camera, a working experimental system was built that successfully generated a rendered image and its corresponding depth map.
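The patch-and-tile rendering step can be sketched directly. The function below is a minimal illustration of the idea (centered square patches, square sub-images, grayscale sensor); the names, grid size, and patch size are chosen for the example, not taken from the thesis.

```python
import numpy as np

def render_full_resolution(raw, sub_size, patch):
    """Take a centered `patch` x `patch` window from each microlens
    sub-image of the raw sensor data and tile the windows into one
    rendered image. Changing `patch` re-renders at a different focal
    plane, which is the computational-refocusing idea described above."""
    rows = raw.shape[0] // sub_size
    cols = raw.shape[1] // sub_size
    out = np.zeros((rows * patch, cols * patch), dtype=raw.dtype)
    off = (sub_size - patch) // 2      # center the patch in each sub-image
    for i in range(rows):
        for j in range(cols):
            sub = raw[i*sub_size:(i+1)*sub_size, j*sub_size:(j+1)*sub_size]
            out[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = \
                sub[off:off+patch, off:off+patch]
    return out

# Toy sensor: a 4x4 grid of 8x8-pixel sub-images behind the MLA.
raw = np.arange(32 * 32, dtype=float).reshape(32, 32)
img = render_full_resolution(raw, sub_size=8, patch=4)
print(img.shape)
```

With a 4-pixel patch, each 8x8 sub-image contributes a 4x4 tile, so the rendered image is 16x16; rendering again with a different `patch` brings a different object plane into focus.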

Relevance:

60.00%

Publisher:

Abstract:

In recent years there has been a tremendous amount of research in the area of nanotechnology. History tells us that the commercialization of a technology is always accompanied by both positive and negative effects for society and the environment. Products containing nanomaterials are already on the market, yet there is still little information on the potential negative effects these products may cause. The work presented in this dissertation describes a holistic approach to the different dimensions of nanotechnology sustainability. Life cycle analysis (LCA) was used to study the potential use of polyethylene filled with nanomaterials to manufacture automobile body panels; results showed that the nanocomposite does not provide an environmental benefit over traditional steel panels. A new methodology based on design of experiments (DOE) techniques, coupled with LCA, was implemented to investigate the impact of inventory uncertainties. Results showed that data variability does not have a significant effect on the prediction of environmental impacts, whereas the material profiles chosen for input materials had a highly significant effect on the overall impact. Energy consumption and material characterization were identified as two areas where additional research is needed to predict the overall impact of nanomaterials more effectively. A study was undertaken to gain insight into the behavior of small particles in contact with a surface exposed to air flow, to determine when particles lift off from the surface. A mapping strategy was implemented that identifies conditions for particle lift-off based on particle size and separation distance from the wall. The main results showed that particles smaller than 0.1 mm will not become airborne under shear flow unless the separation distance is greater than 15 nm; these results may be used to minimize exposure to airborne materials. Societal implications in the workplace were also researched. This task explored topics including health, ethics, and worker perception with the aim of identifying the base knowledge available in the literature, and recommendations are given for different scenarios describing how workers and employers could minimize the unwanted effects of nanotechnology production.

Relevance:

60.00%

Publisher:

Abstract:

It has been proposed that inertial clustering may lead to an increased collision rate of water droplets in clouds. Atmospheric clouds and electrosprays contain electrically charged particles embedded in turbulent flows, often under the influence of an externally imposed, approximately uniform gravitational or electric force. This thesis presents an investigation of charged inertial particles embedded in turbulence. We have developed a theoretical description of the dynamics of such systems of charged, sedimenting particles in turbulence, allowing radial distribution functions to be predicted for both monodisperse and bidisperse particle size distributions. The governing parameters are the particle Stokes number (the particle inertial time scale relative to the turbulence dissipation time scale), the Coulomb-turbulence parameter (the ratio of the Coulomb terminal speed to the turbulence dissipation velocity scale), and the settling parameter (the ratio of the gravitational terminal speed to the turbulence dissipation velocity scale). For monodisperse particles, the peak in the radial distribution function is well predicted by the balance between the particle terminal velocity under Coulomb repulsion and a time-averaged 'drift' velocity arising from the nonuniform sampling of fluid strain and rotation due to finite particle inertia. The theory is compared to measured radial distribution functions for water particles in homogeneous, isotropic air turbulence; the radial distribution functions are obtained from particle positions measured in three dimensions using digital holography. The measurements support the general theoretical expression, consisting of a power-law increase in particle clustering due to the particle response to dissipative turbulent eddies, modulated by an exponential electrostatic interaction term. Both terms are modified by the gravitational diffusion-like term, and the role of 'gravity' is explored by imposing a macroscopic uniform electric field to create an enhanced, effective gravity. The relation between the radial distribution functions and the inward mean radial relative velocity is established for charged particles.
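The radial distribution function g(r) used throughout the analysis above can be estimated from measured 3D positions by counting pairs in spherical shells and normalizing by the count expected for uniformly random particles. The estimator below is a generic sketch (periodic cubic box, illustrative bin edges), not the holography pipeline's actual code; for a uniform point set it should return values near 1.

```python
import numpy as np

def radial_distribution(positions, box, edges):
    """Estimate g(r): pair counts in spherical shells, normalized by the
    counts expected for a uniform distribution of the same density."""
    n = len(positions)
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.round(d / box)                 # minimum-image convention
    r = np.sqrt((d**2).sum(-1))[np.triu_indices(n, k=1)]
    counts, _ = np.histogram(r, bins=edges)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    pair_density = n * (n - 1) / 2 / box**3      # pairs per unit volume
    expected = pair_density * shell_vol
    return counts / expected

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(500, 3))       # uniform: g(r) should be ~1
g = radial_distribution(pos, box=1.0, edges=np.linspace(0.05, 0.45, 9))
print(g)
```

Clustered charged-particle data would instead show g(r) rising above 1 at small r as a power law, suppressed at the smallest separations by the exponential Coulomb-repulsion factor described above.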

Relevance:

60.00%

Publisher:

Abstract:

This report shares my efforts in developing a solid unit of instruction that has a clear focus on student outcomes. I have been a teacher for 20 years and have been writing and revising curricula for much of that time. However, most of that work was done without the benefit of current research on how students learn and did not focus on what and how students are learning. My journey as a teacher has involved a lot of trial and error. My traditional method of teaching is to look at the benchmarks (now content expectations) to see what needs to be covered. My unit consists of having students read the appropriate sections in the textbook, complete worksheets, watch a video, and take some notes. I try to include at least one hands-on activity, one or more quizzes, and the traditional end-of-unit test consisting mostly of multiple-choice questions I find in the textbook. I try to be engaging, make the lessons fun, and hope that at the end of the unit my students get whatever concepts I’ve presented so that we can move on to the next topic. I want to increase students’ understanding of science concepts and their ability to connect that understanding to the real world. However, sometimes I feel that my lessons are missing something. For a long time I have wanted to develop a unit of instruction that I know is an effective tool for the teaching and learning of science. In this report, I describe my efforts to reform my curricula using the “Understanding by Design” process. I want to see if this style of curriculum design will help me be a more effective teacher and if it will lead to an increase in student learning. My hypothesis is that this new (for me) approach to teaching will lead to increased understanding of science concepts among students because it is based on purposefully thinking about learning targets grounded in the “big ideas” of science. For my reformed curricula I incorporate lessons from several outstanding programs I’ve been involved with, including EpiCenter (Purdue University), Incorporated Research Institutions for Seismology (IRIS), the Master of Science Program in Applied Science Education at Michigan Technological University, and the Michigan Association for Computer Users in Learning (MACUL). In this report, I present the methodology I used to develop a new unit of instruction based on the Understanding by Design process. Several lessons and learning plans I’ve developed for the unit, which follow the 5E Learning Cycle, appear as appendices at the end of this report. I also include the results of pilot testing one of the lessons. Although the lesson I pilot-tested was not as successful in increasing student learning outcomes as I had anticipated, the development process I followed was helpful in that it required me to focus on important concepts. Conducting the pilot test was also helpful because it led me to identify ways I could improve the lesson in the future.

Relevance:

60.00%

Publisher:

Abstract:

The scaphoid is one of the eight carpal bones, located adjacent to the thumb and supported proximally by the radius. During a fall on an outstretched hand, the impact load is transferred to the scaphoid at its free anterior end; the arrangement of the other carpal bones in the palm also directs load to the scaphoid. About half of the total load acting on the carpal bones is transferred to the scaphoid at its distal pole. There are about 10 to 12 clinically observed fracture patterns in the scaphoid due to falls. The aim of this study is to determine the orientation of the load, the magnitude of the load, and the corresponding fracture pattern. The study includes both static and dynamic finite element models validated by experiments. The scaphoid model was prepared from CT scans of a 27-year-old subject; the 2D CT slices were converted to a 3D model using MIMICS software. Four loading cases, considered to occur most frequently in clinical practice, were studied: in case (i) the load is applied at the posterior end of the distal pole, whereas in cases (ii), (iii), and (iv) the load is applied at the anterior end in different directions. The model is given a fixed boundary condition in the region supported by the radius during impact. The same loading and boundary conditions were used in both static and dynamic explicit finite element analyses. The site of fracture initiation and the path of fracture propagation were identified using the maximum principal stress/gradient and maximum principal strain/gradient criteria, respectively, in the static and dynamic explicit analyses. Static and dynamic impact experiments were performed on polyurethane foam specimens to validate the finite element results, and experimental results such as load at fracture, site of fracture initiation, and path of fracture propagation were compared with the finite element analysis. Four different fracture patterns observed in clinical studies were reproduced in this study.
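The maximum principal stress criterion for locating fracture initiation can be sketched on element stress tensors. The two element stress states below are invented for illustration; the criterion simply picks the element whose largest eigenvalue of the Cauchy stress tensor is greatest.

```python
import numpy as np

def max_principal_stress(sigma):
    """Largest principal stress (largest eigenvalue) of a symmetric
    3x3 Cauchy stress tensor."""
    return np.linalg.eigvalsh(sigma).max()

# Two hypothetical element stress states (MPa); values are illustrative.
elem_a = np.array([[50.0, 10.0,  0.0],
                   [10.0, 20.0,  0.0],
                   [ 0.0,  0.0,  5.0]])
elem_b = np.array([[30.0,  0.0,  0.0],
                   [ 0.0, 25.0,  0.0],
                   [ 0.0,  0.0, 10.0]])

stresses = [max_principal_stress(s) for s in (elem_a, elem_b)]
# The fracture-initiation criterion selects the element with the highest
# maximum principal stress; propagation then follows the stress gradient.
initiation_site = int(np.argmax(stresses))
print(stresses, initiation_site)
```

Note that the shear components in `elem_a` rotate its principal axes, so its maximum principal stress (about 53 MPa) exceeds its largest normal component, which is why the eigenvalue computation, not a direct read of the diagonal, is needed.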

Relevance:

60.00%

Publisher:

Abstract:

Epoxies find a variety of applications, and during these applications they are exposed to different conditions such as elevated temperature, hydrothermal, and chemical environments. It has been observed that the properties of epoxies are substantially affected by extended exposure to these conditions, and because of the variety of applications, researchers have found it necessary to study the effects on thermal, mechanical, physical, and chemical properties. This report focuses on the effects of physical aging, the aging that occurs on exposure to elevated temperatures, on the mechanical properties of EPON 862 cured with DETDA. A fair amount of computational work has been performed on EPON 862-DETDA to study the effects of physical aging, but very little known experimental work has addressed them. Young's modulus, hardness, failure strength, strain to failure, density, and glass transition temperature were obtained using various experimental methods: tensile testing, nanoindentation, and differential scanning calorimetry. Experimental work on other epoxies has shown no increase or only a slight increase in Young's modulus and hardness with increased aging time, along with decreases in failure strength and strain to failure; this work on EPON 862-DETDA shows similar trends.

Relevance:

60.00%

Publisher:

Abstract:

This technical report discusses the application of the Lattice Boltzmann Method (LBM) to fluid flow simulation through the porous filter wall of a disordered medium. The diesel particulate filter (DPF) is an example of such a medium: developed as a cutting-edge technology to reduce harmful particulate matter in engine exhaust, its porous filter wall traps soot particles during exhaust after-treatment. To examine the phenomena inside the DPF, researchers are looking to the Lattice Boltzmann Method as a promising alternative simulation tool. The LBM is a comparatively new numerical scheme that can simulate single-phase and multi-phase single-component flows, and it is an excellent method for modelling flow through disordered media. The current work focuses on single-phase fluid flow simulation inside the porous micro-structure using LBM. First, the theory behind the development of LBM is discussed: its evolution is usually related to Lattice Gas Cellular Automata (LGCA), but it is also shown that the method is a special discretized form of the continuous Boltzmann equation. Since all the simulations are conducted in two dimensions, the equations are developed with reference to the D2Q9 (two-dimensional, 9-velocity) model. An artificially created porous micro-structure is used in this study, and the flow simulations are conducted with air and CO2 as fluids. The numerical model is explained with a flowchart and the coding steps; the code is written in MATLAB. Different types of boundary conditions and their importance are discussed separately, and the equations specific to each boundary condition are derived. The pressure and velocity contours over the porous domain are studied and recorded, and the results are compared with published work. The permeability values obtained in this study can be fitted to the relation proposed by Nabovati [8], and the results are in excellent agreement within the porosity range of 0.4 to 0.8.
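The D2Q9 scheme described above can be sketched in a few lines. The report's code is in MATLAB; the sketch below is a Python/NumPy equivalent of a minimal BGK collide-and-stream step on a periodic grid, with an illustrative grid size and relaxation time rather than the report's actual configuration. A key property to check is exact mass conservation.

```python
import numpy as np

# D2Q9 lattice: rest particle, 4 axis-aligned and 4 diagonal velocities.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                 # lattice weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])       # lattice velocities
tau = 0.8                                                # BGK relaxation time
nx = ny = 16

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann expansion with c_s^2 = 1/3."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# Start at rest and add a small perturbation to one population.
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
f[1] += 0.01 * np.random.default_rng(2).random((nx, ny))
mass0 = f.sum()

for _ in range(50):
    rho = f.sum(axis=0)                                  # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho     # macroscopic velocity
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau           # BGK collision
    for k in range(9):                                   # periodic streaming
        f[k] = np.roll(f[k], shift=tuple(c[k]), axis=(0, 1))

mass = f.sum()
print(mass)
```

In a porous-media run, the periodic streaming would be supplemented by bounce-back boundary conditions at solid nodes, which is the standard LBM treatment of the filter-wall geometry.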

Relevance:

60.00%

Publisher:

Abstract:

This dissertation presents competitive control methodologies for small-scale power systems (SSPS). A SSPS is a collection of sources and loads sharing a common network that can be isolated during terrestrial disturbances. Micro-grids, naval ship electric power systems (NSEPS), aircraft power systems, and telecommunication power systems are typical examples. A difficulty in the analysis and development of control systems for a SSPS is the lack of a defined slack bus; in addition, a change in a load or source alters the real-time parameters of the system. The control system should therefore provide the flexibility required to ensure operation as a single aggregated system. In most SSPS the sources and loads are equipped with power electronic interfaces, which can be modeled as dynamically controllable quantities. The mathematical formulation of the micro-grid is carried out with the help of game theory, optimal control, and the fundamental theory of electrical power systems; the micro-grid can then be viewed as a dynamic multi-objective optimization problem with nonlinear objectives and variables. Detailed analysis of optimal solutions was carried out for startup transient modeling, bus selection modeling, and the level of communication within the micro-grid, with a detailed mathematical model formed in each approach to observe the system response. A differential game-theoretic approach was used for modeling and optimization of startup transients; the startup transient controller was implemented with open-loop, PI, and feedback control methodologies, and a hardware implementation was carried out to validate the theoretical results. The proposed game-theoretic controller outperforms the traditional PI controller during startup; in addition, an optimal transient surface is necessary when implementing the feedback controller for the startup transient. The experimental results agree with the theoretical simulations. Bus selection and team communication were modeled with discrete and continuous game-theoretic models. Although players have multiple choices, the controller is capable of choosing the optimal bus, and the team communication structures are able to optimize the players' Nash equilibrium point. All the mathematical models are based on local information at the load or source; as a result, they are the keys to developing accurate distributed controllers.
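The discrete bus-selection game can be illustrated with a pure-strategy Nash equilibrium search. The two-player, two-bus payoff matrices below are invented for illustration (they are not the dissertation's models); the point is only the equilibrium condition itself: each player's choice must be a best response to the other's.

```python
import numpy as np

# Hypothetical payoffs for two sources each choosing bus 0 or bus 1;
# both players prefer to avoid sharing a congested bus.
A = np.array([[1.0, 3.0],    # row player's payoff
              [2.0, 1.0]])
B = np.array([[1.0, 2.0],    # column player's payoff
              [3.0, 1.0]])

def pure_nash(A, B):
    """Enumerate pure-strategy Nash equilibria of a bimatrix game:
    cells where neither player can gain by deviating unilaterally."""
    eq = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eq.append((i, j))
    return eq

equilibria = pure_nash(A, B)
print(equilibria)
```

Here the two equilibria are the anti-coordination outcomes in which the players split across the buses, which mirrors why a game formulation driven only by local payoffs can still select a sensible system-wide bus assignment.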

Relevance:

60.00%

Publisher:

Abstract:

Despite failed attempts at obtaining a potable water system, the village of El Caracol in southern Honduras remains committed to improving access to water resources. To assist in this endeavor, an investigation of the hydrogeological characteristics of the local watershed was conducted. Daily precipitation was recorded to examine the relationship between precipitation and approximated river and spring discharges. A Thornthwaite-Mather water balance model was used to predict monthly discharges for comparison with observed values, and to infer the percentage of each topographic watershed contributing to the respective discharge. As aquifer porosity in this region is thought to be primarily secondary (i.e., fractures), field-observed lineaments were compared with those interpreted from remote sensing imagery in an attempt to determine the usefulness of such interpretations in locating potential water sources for a future project.
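The monthly water-balance idea can be sketched as a simple soil-moisture bucket. This is a deliberately simplified version: the actual Thornthwaite-Mather procedure uses an exponential soil-moisture depletion function during dry months, and the capacity, initial storage, and monthly precipitation/PET series below are all invented for illustration.

```python
# Simplified monthly bucket water balance: soil moisture fills toward a
# water-holding capacity; any excess becomes surplus (a discharge proxy).
capacity = 100.0                  # soil water-holding capacity (mm)
storage = 50.0                    # initial soil moisture (mm)

precip = [0, 5, 10, 60, 200, 250, 180, 120, 80, 30, 10, 5]   # mm/month
pet    = [90, 95, 100, 110, 100, 90, 85, 90, 95, 100, 95, 90]

surplus = []
for p, e in zip(precip, pet):
    storage += p - e              # wet months recharge, dry months draw down
    storage = max(storage, 0.0)   # cannot evaporate more than is stored
    over = max(storage - capacity, 0.0)
    storage -= over               # excess over capacity leaves as surplus
    surplus.append(over)

print(surplus, round(storage, 1))
```

Comparing a surplus series like this with observed river and spring discharges is what allows the contributing fraction of each topographic watershed to be inferred: scaling the modeled surplus by a catchment-area fraction until it matches the observed discharge.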

Relevance:

60.00%

Publisher:

Abstract:

We consider the question of optimal shapes, e.g., those causing minimal extinction among all shapes of equal volume. Guided by the isoperimetric property of the sphere, relevant in the geometrical-optics limit of scattering by large particles, we examine an analogous question in the low-frequency (electrostatics) approximation, seeking to disentangle electric and geometric contributions. To that end, we survey the literature on shape functionals and focus on ellipsoids, giving a simple proof of spherical optimality for the coated ellipsoidal particle. A monotonic increase with asphericity in the low-frequency regime is also shown for orientation-averaged induced dipole moments and scattering cross-sections. Additional physical insight is obtained from the Rayleigh-Gans (transparent) limit and from eccentricity expansions. We propose connecting the low- and high-frequency regimes in a single minimum principle valid for all size parameters, provided that reasonable size distributions of randomly oriented aspherical particles wash out the resonances at intermediate size parameters. This proposal is further supported by the sum rule for integrated extinction.

Relevance:

60.00%

Publisher:

Abstract:

The purpose of this research was to address how culturally informed ethnomathematical methods of teaching can be utilized to support the learning of Navajo students in mathematics. The study was conducted over the course of four years on the Navajo Reservation at Tohatchi Middle School in Tohatchi, New Mexico. The students involved in the study were all in 8th grade and were enrolled either in Algebra 1 or in a Response to Intervention (RTI) class. The data collected came in the form of a student survey, student observations, and student assessments; the sources for the three studies were a teacher-written survey, a math textbook word problem, and two original math textbook problems along with their rewritten versions. The first year of the study consisted of a math attitude survey examining how Navajo students perceived math as a subject of interest. The students answered four questions about their thoughts on mathematics, and their written responses were positive. The second year of the study involved observing how students worked through a math word problem as a group. This method tested how the students interacted culturally in order to solve a math problem: their questions and reasoning were shared with peers and the teacher, and the teacher supported the students in understanding and solving the problem by asking questions that kept them focused on the goal. The students worked collaboratively and openly to complete the activity. During this study, the teacher was better able to notice the students' deficiencies, individually or as a group, and was therefore able to support them in a more specific manner. The last study was conducted over two different years and was used to determine how textbook bias, in the form of sentence structure or word choice, affects the performance of students who are not culturally familiar with one or both. It was found that the students performed better and took less time on the rewritten problems than on the originals. The data suggest that focusing on the culture, language, and education of Navajo students can affect how the students learn and understand math.