Abstract:
The past decade has seen the energy consumption in servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on powering and cooling servers has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption has posed a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of reducing both the energy consumption of server systems and the cost to Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30%. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DV/FS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for greater energy savings, and corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC's cost management, helping OSPs conserve energy, manage their electricity costs, and lower carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively. With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The VM/PM mapping probability matrix takes into account resource limitations, VM operation overheads, server reliability, and energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services pose great challenges for improving the energy efficiency of IDCs, and we also outline several potential areas for future research in each chapter.
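The abstract does not give the construction of the VM/PM mapping probability matrix, so the following is only a minimal sketch under assumed inputs (vm_cpu_demand, pm_cpu_capacity, and pm_efficiency are hypothetical names and values, not taken from the dissertation): each row is a probability distribution over PMs, weighted toward energy-efficient machines that still have capacity for the VM.

    import numpy as np

    rng = np.random.default_rng(0)

    n_vms, n_pms = 4, 3
    vm_cpu_demand = np.array([2.0, 1.0, 4.0, 3.0])    # hypothetical CPU units per VM request
    pm_cpu_capacity = np.array([8.0, 8.0, 16.0])      # hypothetical PM capacities
    pm_efficiency = np.array([0.6, 0.8, 1.0])         # assumed energy-efficiency scores (higher is better)

    # Each row of prob is a probability distribution over PMs for one VM,
    # biased toward efficient PMs that can still host the VM's demand.
    prob = np.zeros((n_vms, n_pms))
    remaining = pm_cpu_capacity.copy()
    for v in range(n_vms):
        feasible = remaining >= vm_cpu_demand[v]      # resource limitation
        weights = pm_efficiency * feasible            # zero weight for PMs that cannot host the VM
        prob[v] = weights / weights.sum()
        chosen = rng.choice(n_pms, p=prob[v])         # sample a placement (greedy illustration)
        remaining[chosen] -= vm_cpu_demand[v]

    print(np.round(prob, 2))

In a fuller treatment the weights would also reflect VM operation overheads and server reliability, as the abstract lists; the sketch only shows the matrix structure.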
Abstract:
Bidirectional promoters regulate adjacent genes organized in a divergent fashion (head-to-head orientation). Several reports on genome-scale analyses of bidirectional promoters exist for mammals. This work provides the essential theoretical and experimental background for carrying out a genome-scale analysis of bidirectional promoters in plants. A computational study was performed to identify putative bidirectional promoters and the over-represented cis-regulatory motifs from three sequenced plant genomes, rice (Oryza sativa), Arabidopsis thaliana, and Populus trichocarpa, using the Plant Cis-acting Regulatory DNA Elements (PLACE) and PlantCARE databases. Over-represented motifs, along with their possible functions, were described with the help of a few conserved representative putative bidirectional promoters from the three model plants. In doing so, a foundation was laid for the experimental evaluation of bidirectional promoters in plants. A novel Agrobacterium tumefaciens-mediated transient expression assay (AmTEA) was developed for young plants of different cereal species and the model dicot Arabidopsis thaliana. AmTEA was evaluated using five promoters (six constructs) and two reporter genes, gus and egfp. The efficacy and stability of AmTEA were compared with stable transgenics using the Arabidopsis DEAD-box RNA helicase family gene promoter. AmTEA was primarily developed to overcome the many problems associated with the development of transgenics and expression studies in plants. Finally, a possible mechanism for the bidirectional activity of these promoters was highlighted. Deletion analysis using promoter-reporter gene constructs identified three rice promoters as bidirectional. Regulatory elements located in the 5'-untranslated region (UTR) of one of the genes of the divergent gene pair were found to be responsible for the bidirectional activity.
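The identification step can be illustrated with a minimal sketch of one common working definition, which is an assumption here rather than the author's stated criterion: a putative bidirectional promoter is the intergenic region between a divergent, head-to-head gene pair whose transcription start sites lie within about 1 kb of each other. The gene records and names below are invented.

    from dataclasses import dataclass

    @dataclass
    class Gene:
        name: str
        chrom: str
        tss: int        # transcription start site coordinate
        strand: str     # '+' or '-'

    def putative_bidirectional_pairs(genes, max_gap=1000):
        """Return (minus-strand gene, plus-strand gene, intergenic distance) tuples."""
        pairs = []
        genes = sorted(genes, key=lambda g: (g.chrom, g.tss))
        for a, b in zip(genes, genes[1:]):
            if a.chrom != b.chrom:
                continue
            # Divergent orientation: the left gene reads leftward ('-') and the right
            # gene reads rightward ('+'), so their TSSs face each other across a
            # short shared intergenic region.
            if a.strand == '-' and b.strand == '+' and b.tss - a.tss <= max_gap:
                pairs.append((a.name, b.name, b.tss - a.tss))
        return pairs

    demo = [Gene("OsA", "chr1", 5000, "-"), Gene("OsB", "chr1", 5600, "+")]
    print(putative_bidirectional_pairs(demo))   # [('OsA', 'OsB', 600)]

The candidate intergenic regions found this way would then be scanned against PLACE and PlantCARE motifs, as described in the abstract.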
Abstract:
In this research, a modification to initiation-aid ignition in bomb calorimetry is developed that involves systematically blending levels of boron and potassium nitrate initiation aids with a bulk structural energetic elemental powder blend. A regression is used to estimate the nominal heat of reaction for the primary reaction. The technique is first applied to the synthesis of TiB2 as a validation study to see whether values close to literature values can be achieved. The technique is then applied to two systems of interest, Al-Ti-B and Al-Ti-B4C. In all three investigations, X-ray diffraction is used to characterize the product phases of the reactions, to determine the extent and identity of the product phases and any by-products that may have formed as a result of adding the initiation aid. The experimental data indicate that the technique approximates the heat of reaction value for the synthesis of TiB2 from Ti-B powder blends, and the formation of TiB2 is supported by volume fraction analysis by X-ray diffraction. Application to the Al-Ti-B and Al-Ti-B4C blends shows some correlation with variation of the initiation aid, with X-ray diffraction showing the formation of equilibrium products. However, these blends require further investigation to resolve more complex interactions and rule out extraneous variables.
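The abstract does not specify the regression form; one plausible reading, stated here only as an assumption, is a linear fit of measured heat release against the initiation-aid mass fraction, extrapolated to zero aid to estimate the nominal heat of reaction of the bulk blend. All values below are invented.

    import numpy as np

    aid_fraction = np.array([0.05, 0.10, 0.15, 0.20])    # B/KNO3 initiation-aid mass fraction (hypothetical)
    heat_measured = np.array([4.61, 4.93, 5.22, 5.55])   # kJ/g from bomb calorimetry (invented data)

    # Linear fit; the intercept is the extrapolated heat of reaction at zero initiation aid.
    slope, intercept = np.polyfit(aid_fraction, heat_measured, 1)
    print(f"estimated nominal heat of reaction at zero aid: {intercept:.2f} kJ/g")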
Abstract:
In 1998-2001 Finland suffered its most severe insect outbreak ever recorded, covering over 500,000 hectares. The outbreak was caused by the common pine sawfly (Diprion pini L.) and has continued in the study area, Palokangas, ever since. To find a good method for monitoring this type of outbreak, the purpose of this study was to examine the efficacy of multi-temporal ERS-2 and ENVISAT SAR imagery for estimating Scots pine (Pinus sylvestris L.) defoliation. Three methods were tested: unsupervised k-means clustering, supervised linear discriminant analysis (LDA), and logistic regression. In addition, I assessed whether harvested areas could be differentiated from the defoliated forest using the same methods. Two different speckle filters were used to determine the effect of filtering on the SAR imagery and the subsequent results. Logistic regression performed best, producing a classification accuracy of 81.6% (kappa 0.62) with two classes (no defoliation, >20% defoliation). With two classes, LDA accuracy was at best 77.7% (kappa 0.54) and k-means 72.8% (kappa 0.46). In general, the largest speckle filter, a 5 x 5 image window, performed best. When additional classes were added, the accuracy usually degraded step by step. The results were good, but because of the restrictions of the study they should be confirmed with independent data before firm conclusions about their reliability can be drawn. The restrictions include the small amount of field data and, thus, problems with accuracy assessment (no separate testing data), as well as the lack of meteorological data from the imaging dates.
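As an illustration of the best-performing method, here is a minimal sketch of two-class defoliation classification with logistic regression, evaluated with overall accuracy and Cohen's kappa as in the study. The backscatter features and data are invented and do not reflect the study's actual SAR processing chain.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, cohen_kappa_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 200
    backscatter = rng.normal(-8.0, 1.5, size=(n, 4))             # dB, one column per image date (invented)
    defoliated = (backscatter.mean(axis=1) < -8.0).astype(int)   # 0 = no defoliation, 1 = >20% defoliation

    X_train, X_test, y_train, y_test = train_test_split(backscatter, defoliated, random_state=1)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    pred = model.predict(X_test)
    print("accuracy:", accuracy_score(y_test, pred), "kappa:", cohen_kappa_score(y_test, pred))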
Abstract:
Indoor air pollution from combustion of solid fuels is the fifth leading contributor to disease burden in low-income countries. This, together with the potential to reduce environmental impacts, has resulted in an emphasis on the use of improved stoves. However, many efforts have failed to meet expectations, and effective coverage remains limited. A disconnect exists between technologies, delivery methods, and long-term adoption. The purpose of this research is to develop a framework to increase the long-term success of improved stove projects. The framework integrates sustainability factors into the project life-cycle. It is represented as a matrix and checklist that encourage consideration of social, economic, and environmental issues in projects. A case study was conducted in which an improved stove project in Honduras was evaluated using the framework. Results indicated areas of strength and weakness in project execution and highlighted potential improvements for future projects. The framework is also useful as a guide during project planning.
Abstract:
The polymer electrolyte membrane fuel cell (PEMFC) is a promising source of clean power for many applications ranging from portable electronics to automotive and land-based power generation. However, widespread commercialization of PEMFCs is primarily challenged by degradation, and the mechanisms of fuel cell degradation are not well understood. Even though the number of installed units around the world continues to increase and dominate the pre-markets, the present lifetime requirements for fuel cells cannot be guaranteed, creating the need for a more comprehensive knowledge of the materials' ageing mechanisms. The objective of this project is to conduct experiments on membrane electrode assembly (MEA) components of PEMFCs to study structural, mechanical, electrical, and chemical changes during ageing and to understand the failure/degradation mechanisms. The first part of this project was devoted to surface roughness analysis of the catalyst layer (CL) and gas diffusion layer (GDL) using surface mapping microscopy. This study was motivated by the need for a quantitative understanding of the GDL and CL surface morphology at the submicron level to predict interfacial contact resistance. Nanoindentation studies using an atomic force microscope (AFM) were introduced to investigate the effect of degradation on the mechanical properties of the CL. The elastic modulus decreased by 45% in the end-of-life (EOL) CL as compared to the beginning-of-life (BOL) CL. In another set of experiments, conductive AFM (cAFM) was used to probe the local electric current in the CL; the conductivity dropped by 62% in the EOL CL. Future tasks will include characterization of MEA degradation using Raman and Fourier transform infrared (FTIR) spectroscopy. Raman spectroscopy will help to detect the degree of structural disorder in the CL during degradation, FTIR will help to study the effect of CO in the CL, and XRD will be used to determine Pt particle size and crystallinity. In-situ conductive AFM studies using an electrochemical cell will be carried out on the CL to correlate its structure with oxygen reduction reaction (ORR) reactivity.
Abstract:
The goal of this research is to provide a framework for vibro-acoustic analysis and design of a multiple-layer constrained damping structure. The existing research on damping and the viscoelastic damping mechanism is limited to four mainstream approaches: modeling techniques for damping treatments/materials; control through the electrical-mechanical effect using a piezoelectric layer; optimization by adjusting the parameters of the structure to meet design requirements; and identification of the damping material's properties through the response of the structure. This research proposes a systematic design methodology for the multiple-layer constrained damping beam that gives consideration to vibro-acoustics. A modeling technique to study the vibro-acoustics of multiple-layered viscoelastic laminated beams using the Biot damping model is presented using a hybrid numerical model: the boundary element method (BEM) is used to model the acoustical cavity, whereas the finite element method (FEM) is the basis for vibration analysis of the multiple-layered beam structure. Through the proposed procedure, the analysis can easily be extended to other complex geometries with arbitrary boundary conditions. The nonlinear behavior of viscoelastic damping materials is represented by the Biot damping model, taking into account the effects of frequency, temperature, and different damping materials for individual layers. A curve-fitting procedure used to obtain the Biot constants for different damping materials at each temperature is explained. The results from structural vibration analysis for selected beams agree with published closed-form results, and the radiated noise for a sample beam structure obtained using commercial BEM software is compared with the acoustical results of the same beam using the Biot damping model. The extension of the Biot damping model to the multiple-degree-of-freedom (MDOF) dynamics equations of a discrete system is demonstrated in order to introduce different types of viscoelastic damping materials. The mechanical properties of viscoelastic damping materials, such as shear modulus and loss factor, change with ambient temperature and frequency. The application of multiple-layer treatment increases the damping characteristic of the structure significantly and thus helps to attenuate vibration and noise over a broad range of frequency and temperature. The main contributions of this dissertation include the following three major tasks: 1) study of the viscoelastic damping mechanism and the dynamics equation of a multilayer damped system incorporating the Biot damping model; 2) building the finite element model of the multiple-layer constrained viscoelastic damping beam and conducting the vibration analysis; and 3) extending the vibration problem to the boundary-element-based acoustical problem and comparing the results with commercial simulation software.
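The curve-fitting step can be sketched as follows, assuming a one-term model of the form G*(w) = G0 (1 + a*iw/(iw + b)) for the complex shear modulus at a fixed temperature; this specific form, the constants, and the data are assumptions for illustration and are not quoted from the dissertation.

    import numpy as np
    from scipy.optimize import curve_fit

    w = np.linspace(10.0, 1000.0, 30)               # angular frequency, rad/s
    G0, a_true, b_true = 1.0e6, 1.5, 300.0          # hypothetical material constants
    G_meas = G0 * (1.0 + a_true * 1j * w / (1j * w + b_true))   # stand-in for measured modulus data
    data = np.concatenate([G_meas.real, G_meas.imag])

    def biot_model(w, a, b):
        # One-term model evaluated as stacked real and imaginary parts.
        G = G0 * (1.0 + a * 1j * w / (1j * w + b))
        return np.concatenate([G.real, G.imag])

    (a_fit, b_fit), _ = curve_fit(biot_model, w, data, p0=[1.0, 100.0])
    print(f"fitted constants: a = {a_fit:.2f}, b = {b_fit:.1f} rad/s")

In practice the fit would be repeated per material and per temperature, yielding the temperature-dependent constant sets the abstract refers to.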
Abstract:
The Modeling method of teaching has demonstrated well-documented success in improving student learning. The teacher/researcher in this study was introduced to Modeling through the use of a technique called White Boarding. Without formal training, the researcher began using the White Boarding technique for a limited number of laboratory experiences with his high school physics classes. The question that arose and was investigated in this study is: "What specific aspects of the White Boarding process support student understanding?" For the purposes of this study, the White Boarding process was broken down into three aspects – the Analysis of data through the use of Logger Pro software, the Preparation of White Boards, and the Presentations each group gave about their specific lab data. The lab used in this study, an Acceleration of Gravity Lab, was chosen because of the documented difficulties students experience in the graphing of motion. In the lab, students filmed a given motion, utilized Logger Pro software to analyze the motion, prepared a White Board that described the motion with position-time and velocity-time graphs, and then presented their findings to the rest of the class. The Presentation included a class discussion with minimal contribution from the teacher. The three different aspects of the White Boarding experience – Analysis, Preparation, and Presentation – were compared through the use of student learning logs, video analysis of the Presentations, and follow-up interviews with participants. The information and observations gathered were used to determine the level of understanding of each participant during each phase of the lab. The researcher then looked for improvement in the level of student understanding, the number of "aha" moments students had, and the students' perceptions about which phase was most important to their learning. The results suggest that while all three phases of the White Boarding experience play a part in the learning process for students, the Presentations provided the most significant changes. The implications for instruction are discussed.
Abstract:
Heroin prices are a reflection of supply and demand, and as in any other market, profits motivate participation. The intent of this research is to examine how changes in Afghan opium production due to political conflict affect Europe's heroin market and government policies. If the Taliban remain in power or a new Afghan government is formed, the changes will affect the heroin market in Europe to a certain degree. In the heroin market, the degree of change depends on many socioeconomic forces such as law enforcement, corruption, and proximity to Afghanistan. An econometric model that examines the degree of these socioeconomic effects has not previously been applied to the heroin trade in Afghanistan. This research uses a two-stage least squares econometric model to reveal the supply and demand of heroin in 36 different countries from the Middle East to Western Europe in 2008. An application of the two-stage least squares model to the heroin market in Europe will attempt to predict the socioeconomic consequences of Afghan opium production.
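A minimal sketch of the two-stage least squares idea follows; the data, variable names, and the choice of instrument (distance from Afghanistan) are invented for illustration and are not taken from the study. The endogenous price is first regressed on the instrument and exogenous controls, and the fitted price is then used in the demand equation.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 36                                        # one observation per country, as in the study
    distance = rng.uniform(1.0, 10.0, n)          # instrument: proximity to Afghanistan (assumed exogenous)
    enforcement = rng.uniform(0.0, 1.0, n)        # exogenous control (hypothetical)
    price = 2.0 + 1.5 * distance + rng.normal(0, 0.5, n)                      # endogenous regressor
    quantity = 10.0 - 0.8 * price - 1.2 * enforcement + rng.normal(0, 0.5, n) # demand (invented)

    # Stage 1: regress the endogenous price on the instrument and exogenous controls.
    Z = np.column_stack([np.ones(n), distance, enforcement])
    price_hat = Z @ np.linalg.lstsq(Z, price, rcond=None)[0]

    # Stage 2: regress quantity on the fitted price and exogenous controls.
    X2 = np.column_stack([np.ones(n), price_hat, enforcement])
    beta = np.linalg.lstsq(X2, quantity, rcond=None)[0]
    print("estimated demand slope with respect to price:", round(beta[1], 2))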
Abstract:
Green-tree retention under the conceptual framework of ecological forestry has the potential both to provide biomass feedstock for industry and to maintain quality wildlife habitat. I examined the effects of retained canopy trees as biological legacies ("legacy trees") in aspen (Populus spp.) forests on above-ground live woody biomass, understory plant floristic quality, and bird diversity. Additionally, I evaluated habitat quality for a high conservation priority species, the Golden-winged Warbler (Vermivora chrysoptera). I selected 27 aspen-dominated forest stands in northern Wisconsin, with nine stands in each of three legacy tree retention treatments (conifer retention, hardwood retention, and clearcuts or no retention), across a chronosequence (4-36 years post-harvest). Conifer retention stands had greater legacy tree and all-tree-species biomass but lower regenerating tree biomass than clearcuts. Coniferous but not hardwood legacy trees appeared to suppress regenerating tree biomass. I evaluated the floristic quality of the understory plant assemblage by estimating the mean coefficient of conservatism (C). Mean C was lower in young stands than in middle-aged or old stands; there was a marginally significant (p=0.058) interaction effect between legacy tree retention treatment and stand age. Late-seral plant species were positively associated with stand age and legacy tree diameter or age, revealing an important relationship between legacy tree retention and stand development. Bird species richness was greatest in stands with hardwood retention, particularly early in stand development. Six conservation priority bird species were indicators of legacy tree retention or clearcuts. Retention of legacy trees in aspen stands provided higher-quality nesting habitat for the Golden-winged Warbler than clearcuts did, based on high pairing success and nesting activity. Retention of hardwoods, particularly northern red oak (Quercus rubra), yielded the most consistent positive effects in this study, with the highest bird species richness and the highest-quality habitat for the Golden-winged Warbler. This treatment maintained stand biomass comparable to clearcuts and did not suppress regenerating tree biomass. In conclusion, legacy tree retention can enhance even-aged management techniques to produce a win-win scenario for the conservation of declining bird species and late-seral understory plants and for production of woody biomass feedstock from naturally regenerating aspen forests.
Abstract:
A major deficiency in disaster management plans is the assumption that pre-disaster civil society does not have the capacity to respond effectively during crises. Following from this assumption, a dominant emergency management strategy is to replace weak civil-society organizations with specialized disaster organizations that are often either military or para-military and that seek to centralize decision-making. Many criticisms have been made of this approach, but few specifically address disasters in the developing world. Disasters in the developing world present unique problems not seen in the developed world because they often occur in the context of compromised governments and marginalized populations. In this context it is often community members themselves who possess the greatest capacity to respond to disasters. This paper focuses on the capacity of community groups to respond to disaster in a small town in rural Guatemala. Key informant interviews and ethnographic observations are used to reconstruct the community response to the disaster instigated by Hurricane Stan (2005) in the municipality of Tectitán in the Huehuetenango department. The interviews were analyzed using techniques adapted from grounded theory to construct a narrative of the events and identify themes in the community's disaster behavior. These themes are used to critique the emergency management plans advocated by the Guatemalan National Coordination for the Reduction of Disasters (CONRED). This paper argues that CONRED uncritically adopts emergency management strategies that do not account for the local realities in communities throughout Guatemala. The response in Tectitán was characterized by the formation of new organizations whose actions and leadership structure were derived from "normal" or routine life. Pre-existing social networks were found to be resilient and easily re-oriented to meet the novel needs of a crisis. New or emergent groups that formed during the disaster utilized social capital accrued through routine collective behavior and employed organizational strategies derived from "normal" community relations. Based on the effectiveness of this response, CONRED could improve its emergency planning at the local level by utilizing pre-existing community organizations rather than insisting that new disaster-specific organizations be formed.
Abstract:
Internal combustion engines are, and will continue to be, a primary mode of power generation for ground transportation. Challenges exist in meeting fuel consumption regulations and emission standards while upholding performance, as fuel prices rise and resource depletion and environmental impacts become of increasing concern. Diesel engines are advantageous due to their inherent efficiency advantage over spark-ignition engines; however, their NOx and soot emissions can be difficult to control and reduce due to an inherent tradeoff. Diesel combustion is spray- and mixing-controlled, providing an intrinsic link between spray and emissions and motivating detailed, fundamental studies of spray, vaporization, mixing, and combustion characteristics under engine-relevant conditions. An optical combustion vessel facility has been developed at Michigan Technological University for these studies, with detailed tests and analysis being conducted. In this combustion vessel facility a preburn procedure for thermodynamic state generation is used and is validated using chemical kinetics modeling, both for the MTU vessel and for institutions comprising the Engine Combustion Network international collaborative research initiative. It is shown that the minor species produced are representative of modern diesel engines running exhaust gas recirculation and do not impact the autoignition of n-heptane. Diesel spray testing of a high-pressure (2000 bar) multi-hole injector is undertaken, including non-vaporizing, vaporizing, and combusting tests, with sprays characterized using Mie back-scatter imaging diagnostics. Liquid-phase spray parameter trends agree with the literature. Fluctuations in liquid length about a quasi-steady value are quantified, along with plume-to-plume variations. Hypotheses are developed for their causes, including fuel pressure fluctuations, nozzle cavitation, internal injector flow and geometry, chamber temperature gradients, and turbulence. These are explored using a mixing-limited vaporization model with an equation-of-state approach for thermophysical properties. This model is also applied to single- and multi-component surrogates. Results include the development of the combustion research facility and a validated thermodynamic state generation procedure. The developed equation-of-state approach provides a means of improving surrogate fuels, both single- and multi-component, in terms of diesel spray liquid length, with knowledge of only critical fuel properties. Experimental studies are coupled with modeling incorporating improved thermodynamic treatment of non-ideal gas behavior and fuel properties.
Abstract:
State standardized testing has always been a tool to measure a school's performance and to help evaluate school curriculum. However, with the school of choice legislation in 1992, the MEAP test became a measuring stick by which to grade schools and a major tool in attracting school of choice students. Now, declining enrollment and a state budget struggling to stay out of the red have made school of choice students more important than ever before, and MEAP scores have become the deciding factor in some cases. For the past five years, the Hancock Middle School staff has been working hard to improve their students' MEAP scores in accordance with President Bush's "No Child Left Behind" legislation. In 2005, the school was awarded a grant that enabled staff to work for two years on writing and working towards school goals based on the improvement of MEAP scores in writing and math. As part of this effort, the school purchased an internet-based program geared toward giving students practice on state content standards. This study examined the results of efforts by Hancock Middle School to help improve student scores in mathematics on the MEAP test through the use of an online program called "Study Island." In the past, the program was used to remediate students and as a review, with an incentive at the end of the year for students completing a certain number of objectives. It had also been used as a review before upcoming MEAP testing in the fall. All of these methods may have helped a few students perform at an increased level on their standardized test, but the question remained whether sustained use of the program in a classroom setting would increase understanding of concepts and performance on the MEAP for the masses. This study addressed that question. Student MEAP scores and Study Island data from experimental and comparison groups of students were compared to understand how sustained use of Study Island in the classroom would impact student test scores on the MEAP. In addition, these data were analyzed to determine whether Study Island results provide a good indicator of students' MEAP performance. The results of the study suggest that there were limited benefits related to sustained use of Study Island and give some indications about the effectiveness of the mathematics curriculum at Hancock Middle School. These results and implications for instruction are discussed.
Abstract:
The energy crisis and worldwide environmental problems make hydrogen a prospective energy carrier. However, storage and transportation of hydrogen in large quantities at small volume is currently not practical. Many materials and devices have been developed for hydrogen storage, but to date none is able to meet the DOE targets. Activated carbon has been found to be a good hydrogen adsorbent due to its high surface area; however, the weak van der Waals force between hydrogen and the adsorbent has limited the adsorption capacity. Previous studies have found that enhanced adsorption can be obtained with an applied electric field, which is attributed to a stronger interaction between the polarized hydrogen and the charged sorbent under high voltage. This study was initiated to investigate whether the adsorption can be further enhanced when the activated carbon particles are separated by a dielectric coating. Dielectric TiO2 nanoparticles were first utilized. Hydrogen adsorption measurements on the TiO2-coated carbon materials, with and without an external electric field, were made. The results showed that the adsorption capacity enhancement increased with the increasing amount of TiO2 nanoparticles when an electric field was applied. Since the hydrogen adsorption capacity of TiO2 particles is very low and there is no hydrogen adsorption enhancement on TiO2 particles alone when an electric field is applied, the effect of the dielectric coating is demonstrated. Another set of experiments investigated the behavior of hydrogen adsorption over TiO2-coated activated carbon under various electric potentials. The results revealed that the hydrogen adsorption first increased and then decreased with increasing electric field. The improved storage was due to a stronger interaction between the charged carbon surface and polarized hydrogen molecules caused by field-induced polarization of the TiO2 coating; when the electric field was sufficient to cause considerable ionization of hydrogen, the decrease in hydrogen adsorption occurred. The current leak detected at 3000 V was a sign of ionization of hydrogen. Experiments were also carried out to examine the hydrogen adsorption performance over activated carbon separated by other dielectric materials: MgO, ZnO, and BaTiO3. For the samples partitioned with MgO and ZnO, the measurements with and without an electric field indicated negligible differences, whereas electric-field-enhanced adsorption was observed on the activated carbon separated with BaTiO3, a material with an unusually high dielectric constant. Corresponding computational calculations using Density Functional Theory were performed on hydrogen interaction with a charged TiO2 molecule, as well as with a TiO2 molecule, coronene, and TiO2-doped coronene in the presence of an electric field. The simulated results were consistent with the observations from the experiments, further confirming the proposed hypotheses.
Abstract:
Supercritical carbon dioxide is used to exfoliate graphite, producing small, several-layer graphitic flakes. Supercritical conditions of 2000, 2500, and 3000 psi and temperatures of 40, 50, and 60 °C were used to study the effect of critical density on the sizes and zeta potentials of the treated flakes. Photon correlation spectroscopy (PCS), Brunauer-Emmett-Teller (BET) surface area measurement, field emission scanning electron microscopy (FE-SEM), and atomic force microscopy (AFM) are used to observe the features of the flakes. N-methyl-2-pyrrolidinone (NMP), dimethylformamide (DMF), and isopropanol are used as co-solvents to enhance the supercritical carbon dioxide treatment. The PCS results show that the flakes obtained from high-critical-density treatment (low temperature and high pressure) are more stable, owing to more negative zeta potentials, but are smaller than those from low-critical-density treatment (high temperature and low pressure). However, when an additional 1-hour sonication is applied, the flakes from low-critical-density treatment become smaller than those from high-critical-density treatment, probably because more CO2 molecules are stacked between the layers of the graphitic flakes. The zeta potentials of the sonicated samples were slightly more negative than those of the non-sonicated samples. The NMP and DMF co-solvents maintain stability and prevent re-aggregation of the flakes better than isopropanol. The flakes tend to be larger and more stable as the treatment time increases, since a larger flat area of graphite is exfoliated. In these experiments, the temperature has more impact on the flakes than the pressure. The BET surface area results show that CO2 penetrates the graphite layers more than N2 does. Moreover, the negative surface area of the treated graphite indicates that CO2 molecules may be adsorbed between the graphite layers during supercritical treatment. The FE-SEM and AFM images show that the flakes have various shapes and sizes. The effects of surfactants can be observed in the FE-SEM images of the samples treated in a one percent by weight solution of sodium dodecylbenzene sulfonate (SDBS) in water, since the SDBS residue covers all of the remaining flakes. The AFM images show that the vertical thickness of the graphitic flakes can range from several nanometers (less than ten layers thick) to more than a hundred nanometers. In conclusion, supercritical carbon dioxide treatment is a promising step, compared to mechanical and chemical exfoliation techniques, toward the large-scale production of thin graphitic flakes, breaking graphite down into flakes only a few graphene layers thick.