Abstract:
Corporate Social Responsibility (CSR) addresses the responsibility of companies for their impacts on society. The concept of strategic CSR is becoming increasingly mainstream in the forest industry, but there is little consensus on the definition and implementation of CSR. The objective of this research is to build knowledge of the characteristics of CSR and to provide insights into the emerging trend of increasing the credibility and legitimacy of CSR through standardization. The study explores how the sustainability managers of European and North American forest companies perceive CSR and the recently released ISO 26000 guidance standard on social responsibility. The conclusions were drawn from an analysis of two data sets: multivariate survey data based on a subset of 30 European and 13 North American responses, and data obtained through in-depth interviews with 10 sustainability managers who volunteered for an hour-long phone discussion about social responsibility practices at their companies. The analysis concluded that there are no major differences in the characteristics of cross-Atlantic CSR. Hence, the results were consistent with previous research suggesting that CSR is a case- and company-specific concept. Regarding the components of CSR, environmental issues and organizational governance were key priorities in both regions. Consumer issues, human rights, and financial issues were among the least addressed categories. The study reveals that there are varying perceptions of the ISO 26000 guidance standard, both positive and negative. Moreover, sustainability managers of European and North American forest companies are still uncertain about the applicability of the ISO 26000 guidance standard to the forest industry. This study is among the first to provide a preliminary review of the practical implications of the ISO 26000 standard in the forest sector. The results may be utilized by sustainability managers interested in best practices in CSR, and also by a variety of forest industry stakeholders interested in the practical outcomes of the long-running CSR debate.
Abstract:
Chapter 1 introduces the basic tools and mechanics used within this thesis. Most of the definitions used in the thesis are given there, and we provide a basic survey of topics in graph theory and design theory pertinent to the topics studied in this thesis. In Chapter 2, we are concerned with the study of fixed block configuration group divisible designs, GDD(n, m, k; λ1, λ2). We study those GDDs in which each block has configuration (s, t), that is, GDDs in which each block has exactly s points from one of the two groups and t points from the other. Chapter 2 begins with an overview of previous results and constructions for small group size and block sizes 3, 4, and 5. Chapter 2 is largely devoted to presenting constructions and results about GDDs with two groups and block size 6. We show the necessary conditions are sufficient for the existence of GDD(n, 2, 6; λ1, λ2) with fixed block configuration (3, 3). For configuration (1, 5), we give minimal or near-minimal index constructions for all group sizes n ≥ 5 except n = 10, 15, 160, or 190. For configuration (2, 4), we provide constructions for several families of GDD(n, 2, 6; λ1, λ2)s. Chapter 3 addresses characterizing (3, r)-regular graphs. We begin by providing previous results on the well-studied class of (2, r)-regular graphs and some results on the structure of large (t, r)-regular graphs. In Chapter 3, we completely characterize all (3, 1)-regular and (3, 2)-regular graphs, as well as sharpen existing bounds on the order of large (3, r)-regular graphs of a certain form for r ≥ 3. Finally, the appendix gives computational data resulting from Sage and C programs used to generate (3, 3)-regular graphs on fewer than 10 vertices.
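To make the (t, r)-regular condition concrete, here is a minimal sketch, assuming the definition standard in this literature (every independent set of t vertices has exactly r common neighbors); the function name and the brute-force approach are ours, not the thesis's Sage and C programs.

```python
from itertools import combinations

def is_t_r_regular(adj, t, r):
    """Check whether a graph is (t, r)-regular, assuming the definition
    that every independent set of t vertices has exactly r common
    neighbors. `adj` maps each vertex to the set of its neighbors."""
    vertices = list(adj)
    for subset in combinations(vertices, t):
        # Only independent (pairwise non-adjacent) t-sets are constrained.
        if any(v in adj[u] for u, v in combinations(subset, 2)):
            continue
        common = set.intersection(*(adj[u] for u in subset))
        if len(common) != r:
            return False
    return True

# Example: the 5-cycle is (2, 1)-regular -- each pair of
# non-adjacent vertices has exactly one common neighbor.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(is_t_r_regular(c5, 2, 1))  # True
```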
Abstract:
Isolated water-soluble analytes extracted from fog water collected during a radiation fog event near Fresno, CA were analyzed using collision-induced dissociation and ultrahigh-resolution mass spectrometry. Tandem mass analysis was performed on scan ranges between 100 and 400 u to characterize the structures of nitrogen- and/or sulfur-containing species. CHNO, CHOS, and CHNOS compounds were targeted specifically because of the high number of oxygen atoms contained in their molecular formulas. The presence of 22 neutral losses corresponding to fragment ions was evaluated for each of the 1308 precursors. Priority neutral losses represent specific polar functional groups (H2O, CO2, CH3OH, HNO3, SO3, etc., and several combinations of these). Additional neutral losses represent non-specific functional groups (CO, CH2O, C3H8, etc.). Five distinct monoterpene-derived organonitrates, organosulfates, and nitroxy-organosulfates were observed in this study: C10H16O7S, C10H17NO7S, C10H17NO8S, C10H17NO9S, and C10H17NO10S. Nitrophenols and linear alkylbenzene sulfonates were present in high abundance. A liquid chromatography/mass spectrometry method was developed to isolate and quantify nitrophenols based on their fragmentation behavior.
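As an illustration of the neutral loss screening described above, the following sketch matches precursor-to-fragment mass differences against a few of the priority and non-specific losses; the monoisotopic masses are standard values, while the tolerance and the example spectrum are hypothetical.

```python
# Monoisotopic masses (u) of a few of the neutral losses screened for.
NEUTRAL_LOSSES = {
    "H2O": 18.010565, "CO2": 43.989830, "CH3OH": 32.026215,
    "HNO3": 62.995644, "SO3": 79.956815,
    "CO": 27.994915, "CH2O": 30.010565, "C3H8": 44.062600,
}

def match_neutral_losses(precursor_mz, fragment_mzs, tol=0.002):
    """Return the neutral losses (within `tol` u) that explain each
    precursor-to-fragment mass difference."""
    hits = {}
    for frag in fragment_mzs:
        delta = precursor_mz - frag
        for name, mass in NEUTRAL_LOSSES.items():
            if abs(delta - mass) <= tol:
                hits.setdefault(frag, []).append(name)
    return hits

# Hypothetical CID spectrum of a deprotonated organosulfate precursor:
print(match_neutral_losses(294.0657, [276.0551, 250.0759, 214.1089]))
# {276.0551: ['H2O'], 250.0759: ['CO2'], 214.1089: ['SO3']}
```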
Abstract:
A body sensor network solution for personal healthcare in an indoor environment is developed. The system is capable of logging the physiological signals of human beings, tracking the orientation of the human body, and monitoring environmental attributes, which covers all the information necessary for personal healthcare in an indoor environment. The three major chapters of this dissertation each describe one of the three subsystems of this work: BioLogger, PAMS, and CosNet. Each chapter covers the background and motivation of the subsystem, the related theory, the hardware/software design, and the evaluation of the prototype's performance.
Abstract:
Intermediaries permeate modern economic exchange. Most classical models of intermediated exchange are driven by information asymmetry and inventory management. These two factors are of reduced significance in modern economies. This makes it necessary to develop models that correspond more closely to modern financial marketplaces. The goal of this dissertation is to propose and examine such models in a game-theoretical context. The proposed models are driven by asymmetries in the goals of different market participants. Hedging pressure, one of the most critical aspects of the behavior of commercial entities, plays a crucial role. The first market model shows that no equilibrium solution can exist in a market consisting of a commercial buyer, a commercial seller, and a non-commercial intermediary. This indicates a clear economic need for non-commercial trading intermediaries: a direct trade from seller to buyer does not result in an equilibrium solution. The second market model has two distinct intermediaries between buyer and seller: a spread trader/market maker and a risk-neutral intermediary. In this model a unique, natural equilibrium solution is identified in which the supply-demand surplus is traded by the risk-neutral intermediary, whilst the market maker trades the remainder from seller to buyer. Since the market maker's payoff for trading at the identified equilibrium price is zero, this second model does not provide any motivation for the market maker to enter the market. The third market model introduces an explicit transaction fee that enables the market maker to secure a positive payoff. Under certain assumptions on this transaction fee, the equilibrium solution of the previous model applies and now also provides a financial motivation for the market maker to enter the market. If the transaction fee violates an upper bound that depends on supply, demand, and the risk aversion of buyer and seller, the market will be in disequilibrium.
Abstract:
This dissertation investigates high performance cooperative localization in wireless environments based on multi-node time-of-arrival (TOA) and direction-of-arrival (DOA) estimation in line-of-sight (LOS) and non-LOS (NLOS) scenarios. Here, two categories of nodes are assumed: base nodes (BNs) and target nodes (TNs). BNs are equipped with antenna arrays and are capable of estimating TOA (range) and DOA (angle). TNs are equipped with omnidirectional antennas and communicate with BNs to allow BNs to localize TNs; thus, the proposed localization is maintained by BN and TN cooperation. First, a LOS localization method is proposed, based on semi-distributed multi-node TOA-DOA fusion. The proposed technique is applicable to mobile ad-hoc networks (MANETs). We assume LOS is available between BNs and TNs. One BN is selected as the reference BN, and other nodes are localized in the coordinates of the reference BN. Each BN can independently localize the TNs located in its coverage area. In addition, a TN might be localized by multiple BNs. High performance localization is attainable via multi-node TOA-DOA fusion. The complexity of the semi-distributed multi-node TOA-DOA fusion is low because the total computational load is distributed across all BNs. To evaluate the localization accuracy of the proposed method, we compare it with global positioning system (GPS) aided TOA (DOA) fusion, which is also applicable to MANETs. The comparison criterion is the localization circular error probability (CEP). The results confirm that the proposed method is suitable for moderate-scale MANETs, while GPS-aided TOA fusion is suitable for large-scale MANETs. Usually, the TOA and DOA of TNs are periodically estimated by BNs. Thus, a Kalman filter (KF) is integrated with multi-node TOA-DOA fusion to further improve its performance. The integration of the KF and multi-node TOA-DOA fusion is compared with an extended KF (EKF) applied to multiple TOA-DOA estimations made by multiple BNs. The comparison shows that the proposed integration is stable (no divergence takes place) and that its accuracy is only slightly lower than that of the EKF, when the EKF converges. However, the EKF may diverge while the integration of the KF and multi-node TOA-DOA fusion does not; thus, the reliability of the proposed method is higher. In addition, the computational complexity of the integration of the KF and multi-node TOA-DOA fusion is much lower than that of the EKF. In wireless environments, LOS might be obstructed, which degrades localization reliability. The antenna array installed at each BN is used to allow each BN to identify NLOS scenarios independently. Here, a single BN measures the phase difference across two antenna elements using a synchronized bi-receiver system and maps it onto the wireless channel's K-factor. The larger K is, the more likely the channel is LOS. Next, the K-factor is used to identify NLOS scenarios. The performance of this system is characterized in terms of the probability of LOS and NLOS identification. The latency of the method is small. Finally, a multi-node NLOS identification and localization method is proposed to improve localization reliability. In this case, multiple BNs engage in the process of NLOS identification, shared reflector determination and localization, and NLOS TN localization. In NLOS scenarios, when there are three or more shared reflectors, those reflectors are localized via DOA fusion, and then a TN is localized via TOA fusion based on the localization of the shared reflectors.
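A minimal sketch of the single-BN TOA-DOA fix and a multi-node fusion step: each BN converts its range and bearing into a TN position in the reference coordinates, and the per-BN estimates are combined. The inverse-variance weighting used here is an illustrative assumption, not the dissertation's exact fusion rule.

```python
import numpy as np

def toa_doa_position(bn_xy, rng, bearing):
    """Single-BN fix: place the TN at the measured range and bearing."""
    return bn_xy + rng * np.array([np.cos(bearing), np.sin(bearing)])

def fuse_estimates(positions, variances):
    """Inverse-variance weighted fusion of per-BN position estimates."""
    w = 1.0 / np.asarray(variances)
    return (w[:, None] * np.asarray(positions)).sum(axis=0) / w.sum()

# Two BNs (hypothetical geometry) observing the same TN at (40, 30):
bns = [np.array([0.0, 0.0]), np.array([100.0, 0.0])]
meas = [(50.0, np.arctan2(30, 40)), (67.08, np.arctan2(30, -60))]
est = [toa_doa_position(b, r, th) for b, (r, th) in zip(bns, meas)]
print(fuse_estimates(est, variances=[4.0, 9.0]))  # ~[40, 30]
```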
Abstract:
Global climate change might significantly impact future ecosystems. The purpose of this thesis was to investigate potential changes in woody plant fine root respiration in response to a changing climate. In a sugar maple dominated northern hardwood forest, the soil was experimentally warmed (+4 °C) to determine whether the tree roots could metabolically acclimate to warmer soil conditions. After one and a half years of soil warming, there was an indication of slight acclimation in the fine roots of sugar maple, helping the ecosystem avoid excessive carbon (C) loss to the atmosphere. In a poor fen peatland in northern Michigan, the impacts of water level changes on woody plant fine root respiration were investigated. In areas of both increased and decreased water levels, there were increases in the CO2 efflux from ecosystem fine root respiration. These studies show the importance of further investigating the impacts climate change may have on the C balance of northern ecosystems.
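The abstract does not specify a respiration model; a common way to express temperature response and acclimation in fine root respiration studies is the Q10 relation, sketched here with hypothetical rates.

```python
def respiration(temp_c, r_ref, q10=2.0, t_ref=15.0):
    """Q10 temperature response commonly used for fine root
    respiration: the rate doubles (for q10=2) per 10 C of warming."""
    return r_ref * q10 ** ((temp_c - t_ref) / 10.0)

# Hypothetical numbers: +4 C warming without acclimation...
control = respiration(15.0, r_ref=1.0)
warmed = respiration(19.0, r_ref=1.0)
# ...versus partial acclimation, expressed as a lowered basal rate.
acclimated = respiration(19.0, r_ref=0.9)
print(warmed / control, acclimated / control)  # ~1.32 vs ~1.19
```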
Abstract:
Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data are unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. It is generally accepted that hydrologic similarity results from similar physiographic characteristics, and thus these characteristics can be used to delineate regions and classify ungauged sites. However, as currently practiced, the delineation is highly subjective and dependent on the similarity measures and classification techniques employed. A standardized procedure for the delineation of hydrologically homogeneous regions is presented herein. Key aspects are a new statistical metric to identify physically discordant sites and the identification of an appropriate set of physically based measures of extreme hydrological similarity. A combination of multivariate statistical techniques applied to multiple flood statistics and basin characteristics for gauging stations in the Southeastern U.S. revealed that basin slope, elevation, and soil drainage largely determine the extreme hydrological behavior of a watershed. Use of these characteristics as similarity measures in the standardized approach for region delineation yields regions that are more homogeneous and more efficient for quantile estimation at ungauged sites than those delineated using the alternative physically based procedures typically employed in practice. The proposed methods and key physical characteristics are also shown to be efficient for region delineation and quantile development in other areas composed of watersheds with statistically different physical composition. In addition, the use of aggregated values of key watershed characteristics was found to be sufficient for the regionalization of flood data; the added time and computational effort required to derive spatially distributed watershed variables does not increase the accuracy of quantile estimators for ungauged sites. This dissertation also presents a methodology by which flood quantile estimates in Haiti can be derived using relationships developed for data-rich regions of the U.S. As currently practiced, regional flood frequency techniques can only be applied within the predefined area used for model development. However, results presented herein demonstrate that the regional flood distribution can successfully be extrapolated to areas of similar physical composition located beyond the extent of the area used for model development, provided differences in precipitation are accounted for and the site in question can be appropriately classified within a delineated region.
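A generic sketch of the regionalization idea: standardize the key basin characteristics (slope, elevation, soil drainage) and cluster gauged sites into candidate homogeneous regions, then classify an ungauged site by its nearest centroid. The k-means choice and all values are illustrative assumptions, not the dissertation's multivariate procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical basin characteristics for 200 gauged sites:
# columns = basin slope, mean elevation, soil drainage index.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))

# Standardize so each characteristic contributes equally,
# then cluster sites into candidate homogeneous regions.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
regions = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Z)

# An ungauged site is classified by its nearest region centroid.
site = (np.array([[0.5, -1.0, 0.2]]) - X.mean(axis=0)) / X.std(axis=0)
print(regions.predict(site))
```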
Abstract:
Osteoarthritis (OA) is a debilitating disease that is becoming more prevalent in today's society. OA affects approximately 28 million adults in the United States alone and, when present in the knee joint, usually leads to a total knee replacement. Numerous studies have been conducted to determine possible methods to halt the initiation of OA, and the structural integrity of the menisci has been shown to have a direct effect on the progression of OA. Menisci are two C-shaped structures that are attached to the tibial plateau and aid in facilitating proper load transmission within the knee. The meniscal cross-section is wedge-like to fit the contour of the femoral condyles and to help attenuate stresses on the tibial plateau. While meniscal tears are common, only the outer one-third of the meniscus is vascularized and has the capacity to heal; hence, tears of the inner two-thirds are generally treated via meniscectomy, leading to OA. To help combat this OA epidemic, an effective biomimetic meniscal replacement is needed. Numerous mechanical and biochemical studies have been conducted on the human meniscus, but very little is known about its mechanical properties on the nano-scale and how meniscal constituents are distributed in the meniscal cross-section. The regional (anterior, central, and posterior) nano-mechanical properties of the meniscal superficial layers (both tibial- and femoral-contacting) and the meniscal deep zone were investigated via nanoindentation to examine the regional inhomogeneity of both the lateral and medial menisci. Additionally, these results were compared to quantitative histological values to better formulate a structure-function relationship on the nano-scale. These data will prove invaluable for further advancement of a tissue-engineered meniscal replacement.
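The abstract does not detail the indentation analysis; nanoindentation data are commonly reduced with the Oliver-Pharr relations, sketched below. The tip properties are standard diamond values, and the measurement numbers are hypothetical.

```python
import math

def oliver_pharr_modulus(stiffness, contact_area,
                         nu_sample=0.5, e_indenter=1141e9, nu_indenter=0.07):
    """Standard Oliver-Pharr reduction: reduced modulus from unloading
    stiffness S and contact area A, then the sample modulus. Defaults
    assume a diamond tip and a nearly incompressible soft tissue."""
    e_reduced = (math.sqrt(math.pi) / 2.0) * stiffness / math.sqrt(contact_area)
    inv_sample = 1.0 / e_reduced - (1.0 - nu_indenter**2) / e_indenter
    return (1.0 - nu_sample**2) / inv_sample

# Hypothetical indent: S = 5 uN/nm and A = 1e6 nm^2, in SI units.
print(oliver_pharr_modulus(stiffness=5e3, contact_area=1e-12))  # Pa
```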
Abstract:
A push to reduce dependency on foreign energy and increase the use of renewable energy has many gas stations pumping ethanol-blended fuels. Recreational engines typically have less complex fuel management systems than those of the automotive sector. This prevents the engine from being able to adapt to different ethanol concentrations. Using ethanol-blended fuels in recreational engines raises several consumer concerns. Engine performance and emissions are both affected by ethanol-blended fuels. This research focused on assessing the impact of E22 on two-stroke and four-stroke snowmobiles. Three snowmobiles were used for this study: a 2009 Arctic Cat Z1 Turbo with a closed-loop fuel injection system, a 2009 Yamaha Apex with an open-loop fuel injection system, and a 2010 Polaris Rush with an open-loop fuel injection system. A five-mode emissions test was conducted on each of the snowmobiles with E0 and E22 to determine the impact of the E22 fuel. All of the snowmobiles were left in stock form to assess the effect of E22 on snowmobiles currently on the trail. Brake-specific emissions of the snowmobiles running on E22 were compared to those on E0 fuel. Engine parameters such as exhaust gas temperature, fuel flow, and relative air-to-fuel ratio (λ) were also compared on all three snowmobiles. Combustion data were taken on the Polaris Rush using an AVL combustion analysis system to compare in-cylinder pressures, combustion duration, and the location of 50% mass fraction burned. E22 decreased total hydrocarbons and carbon monoxide for all of the snowmobiles and increased carbon dioxide. Peak power increased for the closed-loop fuel-injected Arctic Cat. A smaller increase in peak power was observed for the Polaris due to the partial ability of its fuel management system to adapt to ethanol. A decrease in peak power was observed for the open-loop fuel-injected Yamaha.
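A sketch of how mode-weighted brake-specific emissions are typically computed from a multi-mode test: the weighted pollutant mass flow divided by the weighted brake power. The mode weights and measurements below are placeholders, not the certification values or the study's data.

```python
def brake_specific(mass_rates_gph, powers_kw, weights):
    """Mode-weighted brake-specific emissions, g/kWh: the weighted
    pollutant mass flow divided by the weighted brake power."""
    num = sum(w * m for w, m in zip(weights, mass_rates_gph))
    den = sum(w * p for w, p in zip(weights, powers_kw))
    return num / den

# Hypothetical five-mode data (weights are placeholders):
# CO mass flow in g/h and brake power in kW at each mode.
weights = [0.12, 0.27, 0.25, 0.31, 0.05]
co_gph = [9000.0, 6500.0, 4200.0, 2100.0, 300.0]
power_kw = [90.0, 63.0, 36.0, 18.0, 2.0]
print(brake_specific(co_gph, power_kw, weights))  # ~107 g/kWh
```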
Abstract:
This report presents the development of a Stochastic Knock Detection (SKD) method for combustion knock detection in a spark-ignition engine using a model-based design approach. A Knock Signal Simulator (KSS) was developed as the plant model for the engine. The KSS generates cycle-to-cycle accelerometer knock intensities using a Monte Carlo method, drawing from a lognormal distribution whose parameters were predetermined from engine tests and depend upon spark timing, engine speed, and load. The lognormal distribution has been shown in previous studies to be a good approximation to the distribution of measured knock intensities over a range of engine conditions and spark timings for multiple engines. The SKD method is implemented in a Knock Detection Module (KDM), which processes the knock intensities generated by the KSS with a stochastic distribution estimation algorithm and outputs estimates of the high and low knock intensity levels, which characterize knock and the reference level, respectively. These estimates are then used to determine a knock factor, which provides a quantitative measure of the knock level and can be used as a feedback signal to control engine knock. The knock factor is analyzed and compared with a traditional knock detection method for detecting engine knock under various engine operating conditions. To verify the effectiveness of the SKD method, a knock controller was also developed and tested in a model-in-the-loop (MIL) system. The objective of the knock controller is to allow the engine to operate as close as possible to its borderline spark timing without significant engine knock. The controller parameters were tuned to minimize the cycle-to-cycle variation in spark timing and the settling time of the controller in responding to a step increase in spark advance resulting in the onset of engine knock. The simulation results showed that the combined system can adequately model engine knock and evaluate knock control strategies for a wide range of engine operating conditions.
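A minimal sketch of the KSS/KDM pipeline described above: knock intensities are drawn from a lognormal distribution, and a knock factor is estimated from them. The percentile-based estimator stands in for the report's stochastic distribution estimation algorithm, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_knock_intensities(mu, sigma, n_cycles=1000):
    """KSS-style plant model sketch: draw cycle-to-cycle knock
    intensities from a lognormal distribution whose parameters
    (arbitrary here) would come from engine test data."""
    return rng.lognormal(mean=mu, sigma=sigma, size=n_cycles)

def knock_factor(intensities, lo_pct=50, hi_pct=95):
    """One plausible knock metric: ratio of a high-intensity level to
    a reference (low) level, estimated from percentiles."""
    lo = np.percentile(intensities, lo_pct)
    hi = np.percentile(intensities, hi_pct)
    return hi / lo

light = simulate_knock_intensities(mu=0.0, sigma=0.3)   # little knock
heavy = simulate_knock_intensities(mu=0.5, sigma=0.9)   # heavy knock
print(knock_factor(light), knock_factor(heavy))
```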
Abstract:
It is an important and difficult challenge to protect the modern interconnected power system from blackouts. Applying advanced power system protection techniques and increasing power system stability are ways to improve the reliability and security of power systems. Phasor-domain software packages such as the Power System Simulator for Engineers (PSS/E) can be used to study large power systems but cannot be used for transient analysis. In order to observe both power system stability and the transient behavior of the system during disturbances, modeling has to be done in the time domain. This work focuses on the modeling of power systems and various control systems in the Alternative Transients Program (ATP). ATP is a time-domain power system modeling software package in which all the power system components can be modeled in detail. Models are implemented with attention to component representation and parameters. The synchronous machine model includes the saturation characteristics and a control interface. Transient Analysis of Control Systems (TACS) is used to model the excitation control system, the power system stabilizer, and the turbine governor system of the synchronous machine. Several base cases of a single-machine system are modeled and benchmarked against PSS/E. A two-area system is modeled, and inter-area and intra-area oscillations are observed. The two-area system is reduced to a two-machine system using reduced dynamic equivalencing. The original and the reduced systems are benchmarked against PSS/E. This work also includes the simulation of single-pole tripping using one of the base case models. The advantages of single-pole tripping and a comparison of system behavior against three-pole tripping are studied. Results indicate that the built-in control system models in PSS/E can be effectively reproduced in ATP. The benchmarked models correctly simulate the power system dynamics. The successful implementation of a dynamically reduced system in ATP shows promise for studying a small subsystem of a large system without losing the dynamic behavior. Other aspects such as relaying can be investigated using the benchmarked models. It is expected that this work will provide guidance in modeling different control systems for the synchronous machine and in representing dynamic equivalents of large power systems.
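To illustrate why disturbance behavior calls for time-domain simulation, here is a toy classical-model swing equation integrated step by step through a brief fault. This stands in for ATP's detailed machine and control models; all parameter values are hypothetical.

```python
import numpy as np

# Toy classical machine model (not ATP's detailed synchronous machine):
# M * d(omega)/dt = Pm - Pmax*sin(delta) - D*omega,  d(delta)/dt = omega
M, D, Pm, Pmax = 0.1, 0.05, 0.8, 2.0    # per-unit, hypothetical values
dt, t_end = 0.001, 5.0

delta, omega = np.arcsin(Pm / Pmax), 0.0   # start at equilibrium
for step in range(int(t_end / dt)):
    t = step * dt
    # Three-phase fault from t=1.0 s to 1.1 s collapses electrical power.
    pe = 0.0 if 1.0 <= t < 1.1 else Pmax * np.sin(delta)
    omega += dt * (Pm - pe - D * omega) / M   # forward-Euler step
    delta += dt * omega
print(f"final angle {np.degrees(delta):.1f} deg, speed dev {omega:.4f} pu")
```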
Abstract:
This study describes the development and establishment of a proposed Simple Performance Test (SPT) specification intended to contribute to asphalt materials technology in the state of Michigan. The properties and characteristics of materials, performance testing of specimens, and field analyses are used in developing the draft SPT specifications. These advanced and more effective specifications should significantly improve the quality of designed and constructed hot mix asphalt (HMA), leading to improved pavement life in Michigan. The objectives of this study include the following: 1) using the SPT, conduct a laboratory study to measure parameters including the dynamic modulus terms (E*/sinϕ and E*) and the flow number (Fn) for typical Michigan HMA mixtures; 2) correlate the results of the laboratory study to field performance as they relate to flexible pavement performance (rutting, fatigue, and low-temperature cracking); and 3) make recommendations for SPT criteria at specific traffic levels (e.g., E3, E10, E30), including recommendations for a draft test specification for use in Michigan. The specification criteria for the dynamic modulus were developed based upon field rutting performance and contractor warranty criteria.
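A short sketch of how the SPT parameters named above are obtained from a sinusoidal load test: |E*| is the ratio of stress to strain amplitude, the phase angle ϕ follows from the time lag between them, and E*/sinϕ is the rutting term. The test values are hypothetical.

```python
import math

def dynamic_modulus(stress_amp_pa, strain_amp, freq_hz, lag_s):
    """|E*| from stress/strain amplitudes of a sinusoidal load, phase
    angle from the time lag, and the rutting term |E*|/sin(phi)."""
    e_star = stress_amp_pa / strain_amp          # Pa
    phi = 2.0 * math.pi * freq_hz * lag_s        # radians
    return e_star, math.degrees(phi), e_star / math.sin(phi)

# Hypothetical 10 Hz test: 600 kPa stress, 80 microstrain, 5 ms lag.
e_star, phi_deg, rut_term = dynamic_modulus(600e3, 80e-6, 10.0, 0.005)
print(f"|E*| = {e_star/1e6:.0f} MPa, phi = {phi_deg:.0f} deg, "
      f"|E*|/sin(phi) = {rut_term/1e6:.0f} MPa")
```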
Abstract:
Virtualization has become a common abstraction layer in modern data centers. By multiplexing hardware resources into multiple virtual machines (VMs), and thus enabling several operating systems to run on the same physical platform simultaneously, it can effectively reduce power consumption and building size or improve security by isolating VMs. In a virtualized system, memory resource management plays a critical role in achieving high resource utilization and performance. Insufficient memory allocation to a VM will degrade its performance dramatically. Conversely, over-allocation wastes memory resources. Meanwhile, a VM's memory demand may vary significantly. As a result, effective memory resource management calls for a dynamic memory balancer, which, ideally, can adjust memory allocation in a timely manner for each VM based on its current memory demand and thus achieve the best memory utilization and the optimal overall performance. In order to estimate the memory demand of each VM and to arbitrate possible memory resource contention, a widely proposed approach is to construct an LRU-based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. Unfortunately, the cost of constructing an MRC is nontrivial. In this dissertation, we first present a low-overhead LRU-based memory demand tracking scheme, which includes three orthogonal optimizations: AVL-based LRU organization, dynamic hot set sizing, and intermittent memory tracking. Our evaluation results show that, for the whole SPEC CPU 2006 benchmark suite, after applying the three optimizing techniques, the mean overhead of MRC construction is lowered from 173% to only 2%. Based on the current WSS, we then predict its trend in the near future and take different strategies for different prediction results. When there is a sufficient amount of physical memory on the host, the balancer locally adjusts the memory allocated to its VMs. Once the local memory resource is insufficient and the memory pressure is predicted to persist for a sufficiently long time, a relatively expensive solution, VM live migration, is used to move one or more VMs from the overloaded host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. Our experimental results show that this design achieves a 49% center-wide speedup.
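A baseline sketch of LRU MRC construction via the Mattson stack algorithm: one pass over the access trace yields the miss ratio for every memory size at once. This is the naive version whose nontrivial cost motivates the dissertation's three optimizations (AVL-based organization, dynamic hot set sizing, intermittent tracking); those are not reproduced here.

```python
from collections import Counter

def lru_miss_ratio_curve(trace, max_size):
    """Mattson stack algorithm sketch: record the LRU stack distance of
    every access, then derive the miss ratio for each cache size."""
    stack, hist = [], Counter()        # LRU stack; stack-distance counts
    for page in trace:
        if page in stack:
            depth = stack.index(page) + 1   # reuse (stack) distance
            hist[depth] += 1
            stack.remove(page)
        else:
            hist["inf"] += 1                # cold miss
        stack.insert(0, page)               # accessed page becomes MRU
    total = len(trace)
    mrc = []
    for size in range(1, max_size + 1):
        hits = sum(n for d, n in hist.items() if d != "inf" and d <= size)
        mrc.append(1.0 - hits / total)
    return mrc

# Hypothetical page trace; the miss ratio falls as memory grows.
print(lru_miss_ratio_curve([1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4], 4))
# [1.0, 1.0, 0.727..., 0.363...]
```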