13 results for Open circuit potential
in Digital Commons - Michigan Tech
Abstract:
Titanium oxide is an important semiconductor that is widely used in solar cells. In this research, titanium oxide nanotube arrays were synthesized by anodization of Ti foil in an electrolyte composed of ethylene glycol containing 2 vol% H2O and 0.3 wt% NH4F. Anodization voltages of 40-50 V were employed. Pore diameters and lengths of the TiO2 nanotubes were evaluated by field emission scanning electron microscopy (FESEM). The resulting highly ordered titanium oxide nanotube arrays were used to fabricate photoelectrodes for dye-sensitized solar cells (DSSCs). The TiO2 nanotube-based DSSCs exhibited excellent performance, with a high short circuit current and open circuit voltage as well as good power conversion efficiency. This performance can be attributed to the high surface area and one-dimensional structure of the TiO2 nanotubes, which can hold a large amount of dye to absorb light and promote electron percolation, hindering recombination during electron diffusion in the electrolyte.
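The efficiency claim above rests on the standard photovoltaic relation eta = (Jsc x Voc x FF) / Pin. A minimal sketch of that calculation follows; the input values are illustrative placeholders, not results from the thesis.

```python
# Minimal sketch: conventional solar-cell power-conversion-efficiency relation.
# The numbers below are illustrative placeholders, not results from the thesis.

def pce(j_sc_mA_cm2: float, v_oc: float, fill_factor: float,
        p_in_mW_cm2: float = 100.0) -> float:
    """Efficiency eta = (J_sc * V_oc * FF) / P_in, with P_in = 100 mW/cm^2
    for standard AM1.5G illumination."""
    return j_sc_mA_cm2 * v_oc * fill_factor / p_in_mW_cm2

# Example: J_sc = 12 mA/cm^2, V_oc = 0.72 V, FF = 0.65 -> eta ~ 5.6 %
print(f"{pce(12.0, 0.72, 0.65) * 100:.1f} %")
```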
Abstract:
Studies suggest that hurricane hazard patterns (e.g., intensity and frequency) may change as a consequence of the changing global climate. As hurricane patterns change, hurricane damage risks and costs can be expected to change as a result. This indicates the need to develop hurricane risk assessment models capable of accounting for changing hurricane hazard patterns, and to develop hurricane mitigation and climate adaptation strategies. This thesis proposes a comprehensive hurricane risk assessment and mitigation framework that accounts for a changing global climate and can be adapted to various types of infrastructure, including residential buildings and power distribution poles. The framework includes hurricane wind field models, hurricane surge height models, and hurricane vulnerability models to estimate damage risks due to hurricane wind speed, hurricane frequency, and hurricane-induced storm surge, and it accounts for the time-dependent properties of these parameters as a result of climate change. The research then uses median insured house values, discount rates, housing inventory, etc., to estimate hurricane damage costs to residential construction. The framework was also adapted to timber distribution poles to assess the impacts climate change may have on timber distribution pole failure. This research finds that climate change may have a significant impact on the hurricane damage risks and damage costs of residential construction and timber distribution poles. In an effort to reduce damage costs, this research develops mitigation/adaptation strategies for residential construction and timber distribution poles. The cost-effectiveness of these adaptation/mitigation strategies is evaluated through a Life-Cycle Cost (LCC) analysis. In addition, a scenario-based analysis of mitigation strategies for timber distribution poles is included. For both residential construction and timber distribution poles, adaptation/mitigation measures were found to reduce damage costs. Finally, the research develops the Coastal Community Social Vulnerability Index (CCSVI) to include the social vulnerability of a region to hurricane hazards within this hurricane risk assessment. This index quantifies the social vulnerability of a region by combining various social characteristics of the region with time-dependent parameters of hurricanes (i.e., hurricane wind and hurricane-induced storm surge). Climate change was found to have an impact on the CCSVI (i.e., climate change may have an impact on the social vulnerability of hurricane-prone regions).
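The Life-Cycle Cost analysis mentioned above compares discounted cost streams for mitigated and unmitigated scenarios. A minimal sketch of that comparison is shown below; the cash flows and discount rate are assumed for illustration and are not taken from the thesis.

```python
# Minimal sketch of a Life-Cycle Cost (LCC) comparison for a mitigation measure:
# LCC = initial cost + sum over years of (expected annual damage cost, discounted).
# All numbers are illustrative assumptions, not values from the thesis.

def life_cycle_cost(initial_cost: float, annual_damage: list[float],
                    discount_rate: float) -> float:
    return initial_cost + sum(
        d / (1.0 + discount_rate) ** (t + 1)
        for t, d in enumerate(annual_damage)
    )

years = 50
rate = 0.03  # assumed real discount rate

# Hypothetical expected annual hurricane damage, with and without retrofitting.
baseline  = life_cycle_cost(0.0,     [4000.0] * years, rate)
mitigated = life_cycle_cost(15000.0, [1500.0] * years, rate)

print(f"baseline LCC:  ${baseline:,.0f}")
print(f"mitigated LCC: ${mitigated:,.0f}  (cost-effective: {mitigated < baseline})")
```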
Abstract:
The electric utility business is an inherently dangerous field in which employees are exposed to many potential hazards daily. One such hazard is an arc flash, a rapid release of energy, referred to as incident energy, caused by an electric arc. Due to the random nature and occurrence of an arc flash, one can only prepare for and minimize the extent of harm to oneself and other employees, and damage to equipment, from such a violent event. Effective January 1, 2009, the National Electrical Safety Code (NESC) requires that an arc-flash assessment be performed by companies whose employees work on or near energized equipment to determine the potential exposure to an electric arc. To comply with the NESC requirement, Minnesota Power's (MP's) current short-circuit and relay-coordination software package, ASPEN OneLiner™, one of the first software packages to implement an arc-flash module, is used to conduct an arc-flash hazard analysis. At the same time, the package is benchmarked against the equations provided in IEEE Std. 1584-2002 and ultimately used to determine the incident energy levels on the MP transmission system. This report covers the history of arc-flash hazards; analysis methods, including both software tools and empirically derived equations; issues of concern with calculation methods; and the work conducted at MP. This work also produced two offline software products to conduct and verify an offline arc-flash hazard analysis.
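For readers unfamiliar with the benchmark, the IEEE Std. 1584-2002 empirical incident-energy model can be sketched compactly. The coefficients below follow the published standard as commonly cited; the system inputs are hypothetical, not MP transmission system data.

```python
import math

def incident_energy_1584_2002(i_arc_kA, t_arc_s, gap_mm, dist_mm,
                              enclosed=False, grounded=True, x=2.0):
    """Incident energy E (J/cm^2) per the IEEE Std. 1584-2002 empirical model:
        lg(En) = K1 + K2 + 1.081*lg(Ia) + 0.0011*G   (normalized: 0.2 s, 610 mm)
        E      = 4.184 * Cf * En * (t/0.2) * (610/D)**x
    """
    k1 = -0.555 if enclosed else -0.792          # arc in a box vs. open air
    k2 = -0.113 if grounded else 0.0             # grounded vs. ungrounded system
    en = 10 ** (k1 + k2 + 1.081 * math.log10(i_arc_kA) + 0.0011 * gap_mm)
    cf = 1.0                                     # calculation factor, systems > 1 kV
    return 4.184 * cf * en * (t_arc_s / 0.2) * (610.0 / dist_mm) ** x

# Hypothetical 13.8 kV open-air fault: 10 kA arcing current, 0.5 s clearing time,
# 153 mm conductor gap, 910 mm working distance, distance exponent x = 2 (open air).
e_j = incident_energy_1584_2002(10.0, 0.5, 153.0, 910.0)
print(f"{e_j:.1f} J/cm^2 = {e_j / 4.184:.1f} cal/cm^2")
```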
Abstract:
Prediction of radiated fields from transmission lines has not previously been studied from a panoptical power system perspective. The application of BPL technologies to overhead transmission lines would benefit greatly from an ability to simulate real power system environments, not limited to the transmission lines themselves. Presently, circuit-based transmission line models used by EMTP-type programs utilize Carson's formula for a waveguide parallel to an interface. This formula is not valid for calculations at high frequencies, considering effects of earth return currents. This thesis explains the challenges of developing improved models, explores an approach to combining circuit-based and electromagnetics modeling to predict radiated fields from transmission lines, exposes inadequacies of simulation tools, and suggests methods of extending the validity of transmission line models into very high frequency ranges. Electromagnetics programs are commonly used to study radiated fields from transmission lines. However, an approach is proposed here which is also able to incorporate the components of a power system through the combined use of EMTP-type models.
Carson's formulas address the series impedance of electrical conductors above and parallel to the earth. These equations have been analyzed to show their inherent assumptions and their implications. Additionally, their lack of validity at higher frequencies has been demonstrated, showing the need to replace Carson's formulas for these types of studies. This body of work leads to several conclusions about the relatively new study of BPL. Foremost, there is a gap in modeling capabilities which has been bridged through integration of circuit-based and electromagnetics modeling, allowing more realistic prediction of BPL performance and radiated fields. The proposed approach is limited in its scope of validity due to the formulas used by EMTP-type software. To extend the range of validity, a new set of equations must be identified and implemented in the approach. Several potential methods of implementation have been explored. Though an appropriate set of equations has not yet been identified, further research in this area will benefit from a clear depiction of the next important steps and how they can be accomplished.
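To make the limitation concrete, the widely used low-frequency approximation of Carson's equations (Kersting's modified form, per mile, with earth return) is sketched below; it is exactly this power-frequency model that the thesis argues must be replaced for high-frequency BPL studies. The conductor data are assumed.

```python
import math

def modified_carson_self(r_cond, gmr_ft, f=60.0, rho=100.0):
    """Self impedance (ohm/mile) of a conductor with earth return, using the
    modified Carson approximation (Kersting's form). Valid only near power
    frequency -- precisely the limitation discussed above."""
    re = 0.00158836 * f
    xe = 0.00202237 * f * (math.log(1.0 / gmr_ft) + 7.6786 + 0.5 * math.log(rho / f))
    return complex(r_cond + re, xe)

def modified_carson_mutual(d_ij_ft, f=60.0, rho=100.0):
    """Mutual impedance (ohm/mile) between two conductors with earth return."""
    re = 0.00158836 * f
    xe = 0.00202237 * f * (math.log(1.0 / d_ij_ft) + 7.6786 + 0.5 * math.log(rho / f))
    return complex(re, xe)

# Hypothetical 336.4 kcmil ACSR phase conductor: r = 0.306 ohm/mile,
# GMR = 0.0244 ft; 4.5 ft spacing to the adjacent phase; rho = 100 ohm-m earth.
print(modified_carson_self(0.306, 0.0244))  # ~0.401 + j1.413 ohm/mile
print(modified_carson_mutual(4.5))          # ~0.095 + j0.780 ohm/mile
```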
Abstract:
Reuse distance analysis, the prediction of how many distinct memory addresses will be accessed between two accesses to a given address, has been established as a useful technique in profile-based compiler optimization, but the cost of collecting the memory reuse profile has been prohibitive for some applications. In this report, we propose using the hardware monitoring facilities available in existing CPUs to gather an approximate reuse distance profile. The difficulties associated with this monitoring technique are discussed, most importantly that there is no obvious link between the reuse profile produced by hardware monitoring and the actual reuse behavior. Potential applications which would be made viable by a reliable hardware-based reuse distance analysis are identified.
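A software-only reference implementation of the metric may help: the sketch below computes exact reuse distances from an address trace using an LRU stack, which is the quantity a hardware-sampled profile would approximate. The trace is illustrative.

```python
from collections import OrderedDict

def reuse_distances(trace):
    """Return the reuse distance of each access: the number of *distinct*
    addresses touched since the previous access to the same address
    (infinity for first-time accesses). LRU-stack implementation."""
    stack = OrderedDict()          # addresses ordered from LRU to MRU
    out = []
    for addr in trace:
        if addr in stack:
            # distance = distinct addresses above `addr` in the LRU stack
            keys = list(stack)
            out.append(len(keys) - 1 - keys.index(addr))
            del stack[addr]
        else:
            out.append(float("inf"))
        stack[addr] = None         # move/insert addr as most-recently-used
    return out

# 'a' is re-accessed after 2 distinct addresses (b, c); 'c' after 1 (a);
# 'b' after 2 (c, a).
print(reuse_distances(["a", "b", "c", "a", "c", "b"]))
# [inf, inf, inf, 2, 1, 2]
```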
Abstract:
The push for improved fuel economy and reduced emissions has led to great achievements in engine performance and control. These achievements have increased the efficiency and power density of gasoline engines dramatically over the last two decades. With the added power density, thermal management of the engine has become increasingly important, so it is critical to have accurate temperature and heat transfer models as well as data to validate them. With the recent adoption of the 2025 Corporate Average Fuel Economy (CAFE) standard, there has been a push to improve the thermal efficiency of internal combustion engines even further. Lean and dilute combustion regimes, along with waste heat recovery systems, are being explored as options for improving efficiency. In order to understand how these technologies will impact engine performance and each other, this research analyzed the engine both from a 1st law energy balance perspective and from a 2nd law exergy perspective. This research also provides insights into the effects of various parameters on in-cylinder temperatures and heat transfer, as well as data for validating other models. Engine load was found to be the dominant factor in the energy distribution, with higher loads resulting in lower coolant heat transfer and higher brake work and exhaust energy. From an exergy perspective, the exhaust system provided the best waste heat recovery potential due to its significantly higher temperatures compared to the cooling circuit. EGR and lean combustion both resulted in lower combustion chamber and exhaust temperatures; however, in most cases the increased flow rates resulted in a net increase in exhaust energy. The exhaust exergy, on the other hand, either increased or decreased depending on the location in the exhaust system and the other operating conditions. The effects of dilution from lean operation and EGR were compared using a dilution ratio, and the results showed that lean operation yielded a larger increase in efficiency than the same amount of dilution with EGR. Finally, a method for identifying fuel spray impingement from piston surface temperature measurements was found. Note: The material contained in this section is planned for submission as part of a journal article and/or conference paper in the future.
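The exergy comparison above reduces to the standard flow-exergy relation psi = (h - h0) - T0*(s - s0). A sketch under ideal-gas, constant-cp assumptions follows; the gas properties and states are assumed for illustration, not measured engine data.

```python
import math

def flow_exergy(T, p, T0=298.15, p0=101.325, cp=1.10, R=0.287):
    """Specific flow exergy (kJ/kg) of an ideal-gas stream relative to the
    dead state (T0, p0), neglecting kinetic/potential terms:
        psi = (h - h0) - T0*(s - s0)
            = cp*(T - T0) - T0*(cp*ln(T/T0) - R*ln(p/p0))
    cp ~ 1.10 kJ/kg-K is an assumed average for hot exhaust gas."""
    dh = cp * (T - T0)
    ds = cp * math.log(T / T0) - R * math.log(p / p0)
    return dh - T0 * ds

# Why the exhaust beats the cooling circuit for waste heat recovery in this sketch:
print(f"exhaust gas (900 K):          {flow_exergy(900.0, 105.0):6.1f} kJ/kg")
print(f"gas at coolant level (360 K): {flow_exergy(360.0, 101.325):6.1f} kJ/kg")
```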
Abstract:
This Ph.D. research comprises three major components: (i) a characterization study to analyze the composition of defatted corn syrup (DCS) from a dry corn mill facility, (ii) hydrolysis experiments to optimize the production of fermentable sugars and an amino acid platform using DCS, and (iii) sustainability analyses. Analyses of DCS included total solids, ash content, total protein, amino acids, inorganic elements, starch, total carbohydrates, lignin, organic acids, glycerol, and presence of functional groups. Total solids content was 37.4% (± 0.4%) by weight, and the mass balance closure was 101%. Total carbohydrates [27% (± 5%) wt.] comprised starch (5.6%), soluble monomer carbohydrates (12%), and non-starch carbohydrates (10%). Hemicellulose components (structural and non-structural) were: xylan (6%), xylose (1%), mannan (1%), mannose (0.4%), arabinan (1%), arabinose (0.4%), galactan (3%), and galactose (0.4%). Based on the measured physical and chemical components, a biochemical conversion route with subsequent fermentation to value-added products was identified as promising; DCS has the potential to serve as an important fermentation feedstock for bio-based chemicals production. In the sugar hydrolysis experiments, reaction parameters such as acid concentration and retention time were analyzed to determine the optimal conditions for maximizing monomer sugar yields while keeping inhibitors at a minimum. Total fermentable sugars produced can reach approximately 86% of theoretical yield when subjected to dilute acid pretreatment (DAP). DAP followed by subsequent enzymatic hydrolysis was most effective for 0 wt% acid hydrolysate samples and least efficient for 1 and 2 wt% acid hydrolysate samples. The best hydrolysis scheme for DCS from an industry point of view is a standalone 60-minute dilute acid hydrolysis at 2 wt% acid concentration. The combined effects of hydrolysis reaction time, temperature, and enzyme-to-substrate ratio were studied to develop a hydrolysis process that optimizes the production of amino acids from DCS. Four key hydrolysis pathways were investigated for the production of amino acids using DCS. The first hydrolysis pathway is amino acid analysis using DAP. The second pathway is DAP of DCS followed by protein hydrolysis using proteases [Trypsin, Pronase E (Streptomyces griseus), and Protex 6L]. The third pathway investigated a standalone experiment using proteases (Trypsin, Pronase E, Protex 6L, and Alcalase) on the DCS without any pretreatment. The final pathway investigated the use of Accellerase 1500® and Protex 6L to simultaneously produce fermentable sugars and amino acids over a 24-hour hydrolysis reaction time. The three key objectives of the techno-economic analysis component of this Ph.D. research were: (i) development of a process design for the production of both the sugar and amino acid platforms with DAP using DCS, (ii) a preliminary cost analysis to estimate the initial capital cost and operating cost of this facility, and (iii) a greenhouse gas analysis to understand the environmental impact of this facility. Using Aspen Plus®, a conceptual process design was constructed. Finally, Aspen Plus Economic Analyzer® and SimaPro® software were employed to conduct the cost analysis and the carbon footprint analysis of this process facility, respectively. Another section of my Ph.D. research focused on the life cycle assessment (LCA) of commonly used dairy feeds in the U.S.
Greenhouse gas (GHG) emissions analysis was conducted for cultivation, harvesting, and production of common dairy feeds used for the production of dairy milk in the U.S. The goal was to determine the carbon footprint [grams CO2 equivalents (gCO2e)/kg of dry feed] in the U.S. on a regional basis, identify key inputs, and make recommendations for emissions reduction. The final section of my Ph.D. research work was an LCA of a single dairy feed mill located in Michigan, USA. The primary goal was to conduct a preliminary assessment of dairy feed mill operations and ultimately determine the GHG emissions for 1 kilogram of milled dairy feed.
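Returning to the hydrolysis results above, the "86% of theoretical yield" figure presupposes the standard theoretical-sugar computation from polymer composition using anhydro-correction factors. A sketch follows; treating the full reported polysaccharide content as hydrolyzable is an illustrative simplification, not the thesis's exact procedure.

```python
# Sketch of the standard theoretical-sugar-yield calculation: each polysaccharide
# gains one water per monomer on hydrolysis (hexosans x 180/162, pentosans x 150/132).
# Composition (g per 100 g dry DCS) echoes the characterization numbers above.

HEXOSE_FACTOR = 180.0 / 162.0   # starch/glucan, mannan, galactan
PENTOSE_FACTOR = 150.0 / 132.0  # xylan, arabinan

polymers = {"starch":   (5.6, HEXOSE_FACTOR),
            "xylan":    (6.0, PENTOSE_FACTOR),
            "mannan":   (1.0, HEXOSE_FACTOR),
            "arabinan": (1.0, PENTOSE_FACTOR),
            "galactan": (3.0, HEXOSE_FACTOR)}

theoretical = sum(mass * f for mass, f in polymers.values())
measured = 0.86 * theoretical   # "~86% of theoretical yield" from the abstract
print(f"theoretical monomer sugars: {theoretical:.1f} g/100 g dry DCS")
print(f"at 86% of theoretical:      {measured:.1f} g/100 g dry DCS")
```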
Abstract:
Optical waveguides have shown promising results for use within printed circuit boards. These optical waveguides have higher bandwidth than traditional copper transmission systems and are immune to electromagnetic interference. Design parameters for these optical waveguides are needed to ensure an optimal link budget. Modeling and simulation methods are used to determine the optimal design parameters for the waveguides. As a result, optical structures necessary for incorporating optical waveguides into printed circuit boards are designed and optimized. Embedded siloxane polymer waveguides are investigated for their use in optical printed circuit boards. This material was chosen because it has low absorption and high temperature stability and can be deposited using common processing techniques. Two sizes of waveguides are investigated: 50 µm multimode and 4-9 µm single mode waveguides. A beam propagation method is developed for simulating the multimode and single mode waveguide parameters. The attenuation of the simulated multimode waveguides matches the attenuation of fabricated waveguides with a root mean square error of 0.192 dB. Using the same process as for the multimode waveguides, the parameters needed to ensure a low link loss are found for single mode waveguides, including maximum size, minimum cladding thickness, minimum waveguide separation, and minimum bend radius. To couple light out of plane to a transmitter or receiver, a structure such as a vertical interconnect assembly (VIA) is required. For multimode waveguides, the optimal placement of a total internal reflection mirror can be found without prior knowledge of the waveguide length. The optimal placement is either 60 µm or 150 µm from the end of the waveguide, depending on which metric a designer wants to optimize: the average output power, the output power variance, or the maximum possible power loss. For single mode waveguides, a volume grating coupler is designed to couple light from a silicon waveguide to a polymer single mode waveguide. A focusing grating coupler is compared to a perpendicular grating coupler focused by a micro-molded lens. The focusing grating coupler had an optical loss of over -14 dB, while the grating coupler with a lens had an optical loss of -6.26 dB.
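The thesis's beam propagation method is not reproduced here, but the standard scalar split-step Fourier BPM on which such simulations are commonly built can be sketched briefly; all waveguide parameters below are assumptions for illustration, not the fabricated-waveguide values.

```python
import numpy as np

def bpm_2d(n_profile, n0, wavelength_um, x_um, dz_um, n_steps, field0):
    """Scalar split-step Fourier beam propagation (paraxial, 2-D: x by z).
    n_profile: refractive index n(x); n0: reference index; field0: input E(x)."""
    k0 = 2 * np.pi / wavelength_um
    kx = 2 * np.pi * np.fft.fftfreq(x_um.size, d=x_um[1] - x_um[0])
    diffract = np.exp(-1j * kx**2 * dz_um / (2 * k0 * n0))  # diffraction half-step
    lens = np.exp(1j * k0 * (n_profile - n0) * dz_um)       # index (phase) step
    field = field0.astype(complex)
    for _ in range(n_steps):
        field = np.fft.ifft(diffract * np.fft.fft(field)) * lens
    return field

# Assumed multimode siloxane guide: 50 um core, n_core = 1.52, n_clad = 1.51.
x = np.linspace(-100, 100, 1024)               # transverse grid, um
n = np.where(np.abs(x) <= 25, 1.52, 1.51)      # step-index profile
e0 = np.exp(-(x / 15.0) ** 2)                  # Gaussian launch field
out = bpm_2d(n, 1.51, 0.85, x, 0.5, 2000, e0)  # 1 mm of propagation at 850 nm
core = np.abs(x) <= 25
print(f"power fraction in core: "
      f"{np.sum(np.abs(out[core])**2) / np.sum(np.abs(out)**2):.2f}")
```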
Abstract:
Shippers want to improve their transportation efficiency, and rail transportation has the potential to provide an economical alternative to trucking, though it also has potential drawbacks. The pressure to optimize transportation supply chain logistics has resulted in growing interest in multimodal alternatives, such as a combination of truck and rail transportation, but the comparison of multimodal and single-mode alternatives can be complicated. Shippers in Michigan's Upper Peninsula (UP) face these challenges, compounded by the distance from major markets and the absence of available facilities for transloading activities. This study reviewed three potential locations for a transload facility (Nestoria, Ishpeming, and Amasa) where truck shipments could be transferred to rail and vice versa. These locations were evaluated on the basis of transportation costs for shippers when compared to single-mode truck transportation to Wisconsin, Chicago, Minneapolis, and Sault Ste. Marie. In addition to shipping costs, the study also evaluated the potential impact of future carbon emission penalties and of changing fuel prices on shipping cost. The study used data obtained from the TRANSEARCH database (2009) and found that although there were slight differences in percent savings among the three locations, any of them could provide potential benefits for movements to Chicago and Minneapolis, as long as the final destination could be accessed by rail for delivery. Short-haul movements of less than 200 miles (Wisconsin and Sault Ste. Marie) were not cost effective for multimodal transport. The study also found that for every dollar increase in fuel price, cost savings from the multimodal option increased by three to five percent, while the inclusion of emission costs would add only one to two percent additional savings. In a specific case study addressing shipments by Northern Hardwoods, the most distant locations in Wisconsin would also provide cost savings, partially due to the possibility of using Michigan trucks with higher carrying capacity for the initial movement from the facility to the transload location. In addition, Minneapolis movements were found to provide savings for Northern Hardwoods even without final rail access.
Abstract:
To tackle challenges in circuit-level and system-level VLSI and embedded system design, this dissertation proposes several novel algorithms to explore efficient solutions. At the circuit level, a new reliability-driven minimum-cost Steiner routing and layer assignment scheme is proposed, along with the first transceiver insertion algorithmic framework for optical interconnect. At the system level, a reliability-driven task scheduling scheme for multiprocessor real-time embedded systems is proposed, which optimizes system energy consumption under stochastic fault occurrences. Embedded system design is also widely used in the smart home area to improve health, wellbeing, and quality of life. The proposed scheduling scheme for multiprocessor embedded systems is therefore extended to handle energy consumption scheduling for smart homes. The extended scheme schedules household appliances for operation so as to minimize a customer's monetary expense under a time-varying pricing model.
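As a toy version of the household-appliance scheduling problem described last, the sketch below places each non-interruptible appliance at its cheapest feasible start hour under a time-of-use price vector. This is a greedy per-appliance placement, not the dissertation's actual algorithm, and all prices and appliance data are invented.

```python
# Toy sketch of appliance scheduling under time-varying pricing: each appliance
# runs for a contiguous block of hours and is placed at the start hour that
# minimizes its energy cost. Prices and appliances are illustrative assumptions.

def cheapest_start(prices, duration_h, power_kw, earliest, latest):
    """Return (best_start_hour, cost); window = earliest start .. latest finish."""
    best = None
    for start in range(earliest, latest - duration_h + 2):
        cost = power_kw * sum(prices[start:start + duration_h])
        if best is None or cost < best[1]:
            best = (start, cost)
    return best

prices = [0.08] * 7 + [0.15] * 10 + [0.25] * 4 + [0.08] * 3  # $/kWh over 24 h

appliances = {"dishwasher": (2, 1.2, 0, 23),   # duration h, kW, window
              "dryer":      (1, 3.0, 8, 22),
              "ev_charger": (4, 7.2, 18, 23)}

for name, (dur, kw, lo, hi) in appliances.items():
    start, cost = cheapest_start(prices, dur, kw, lo, hi)
    print(f"{name:>10s}: start at hour {start:2d}, cost ${cost:.2f}")
```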
Abstract:
Silver and mercury are both dissolved during cyanide leaching, and the mercury co-precipitates with silver during metal recovery. Mercury must then be removed from the silver/mercury amalgam by vaporizing it in a retort, leading to environmental and health hazards. The need for retorting silver can be greatly reduced if mercury is selectively removed from leaching solutions. Theoretical calculations were carried out based on the thermodynamics of the Ag/Hg/CN- system to determine possible approaches to either preventing mercury dissolution or selectively precipitating it without silver loss. Preliminary experiments were then carried out based on these calculations to determine whether the reactions would be spontaneous with reasonably fast kinetics. In an attempt to stop mercury from dissolving and leaching out of the heap, the first set of experiments examined whether selenium and mercury would form mercury selenide under leaching conditions, lowering the amount of mercury in solution while forming a stable compound. From the results of the synthetic ore experiments with selenium, it was determined that another effect was already suppressing mercury dissolution, and the effect of the selenium could not be well analyzed given the small amount of change. The effect dominating the reactions led to the second set of experiments, which used silver sulfide as a selective precipitant of mercury. These experiments determined whether adding solutions containing mercury cyanide to unleached silver sulfide would facilitate a precipitation reaction, putting silver into solution and precipitating mercury as mercury sulfide. Counter-current flow experiments using the high-selenium ore showed 99.8% removal of mercury from solution. Compared to leaching with only cyanide, about 60% of the silver was removed per pass for the high-selenium ore, and around 90% for the high-mercury ore. Since silver sulfide is rather expensive to use solely as a mercury precipitant, another compound was sought that could selectively precipitate mercury and leave silver in solution. In the search for a less expensive selective precipitant, zinc sulfide was tested. The third set of experiments showed that zinc sulfide (as sphalerite) could be used to selectively precipitate mercury while leaving silver cyanide in solution. Parameters such as particle size, reduction potential, and degree of oxidation of the sphalerite were tested. Batch experiments worked well, showing 99.8% mercury removal with only ≈1% silver loss (starting with 930 ppb mercury and 300 ppb silver) at one hour. A continuous flow process would work better for industrial applications, which was demonstrated with the filter funnel setup. Funnels fitted with filter paper and sphalerite showed good mercury removal (87% mercury removal and 7% silver loss through one funnel, starting from 31 ppb mercury and 333 ppb silver). A counter-current flow setup showed 100% mercury removal and under 0.1% silver loss, starting with 704 ppb silver and 922 ppb mercury. The resulting sphalerite coated with mercury sulfide was also shown to be stable (not releasing mercury) under leaching tests. Sphalerite could be easily implemented through such means as sphalerite-impregnated filter paper placed in currently existing processes. In summary, this work focuses on preventing mercury from following silver through the leaching circuit.
Currently, the only possible means of removing mercury is by retort, creating possible health hazards in the distillation process and in the transportation and storage of the final mercury waste product. Preventing mercury from following silver in the earlier stages of the leaching process will greatly reduce the risk of mercury spills, human exposure to mercury, and possible environmental disasters. This will save mining companies millions of dollars in mercury handling and storage and in projects to clean up spilled mercury, and it will result in better health for those living near and working in the mines.
Abstract:
The Big Manistee River was one of the best-known Michigan rivers to have historically supported a population of Arctic grayling (Thymallus arcticus). Overfishing, competition with introduced fish, and habitat loss due to logging are believed to have caused their decline and ultimate extirpation from the Big Manistee River around 1900 and from the State of Michigan by 1936. Grayling are a species of great cultural importance to the Little River Band of Ottawa Indians, and although past attempts to reintroduce Arctic grayling have been unsuccessful, continued interest in their return led to an assessment of environmental conditions in tributaries within a 21-kilometer section of the Big Manistee River to determine whether suitable habitat exists. Although data describing historical conditions in the Big Manistee River are limited, we reviewed the literature to determine abiotic conditions prior to the Arctic grayling's disappearance and the habitat conditions in rivers in western and northwestern North America where they currently exist. We assessed abiotic habitat metrics at 23 sites distributed across 8 tributaries within the Manistee River watershed. Data collected included basic water parameters, streambed substrate composition, channel profiles, areal measurements of channel geomorphic units, and stream velocity and discharge measurements. These environmental condition values were compared to literature values, habitat suitability thresholds, and current conditions of rivers with Arctic grayling populations to assess the feasibility of the abiotic habitat in Big Manistee River tributaries supporting Arctic grayling. Although the historic grayling habitat in the region was disturbed during the era of major logging around the turn of the 20th century, our results indicate that some important abiotic conditions within Big Manistee River tributaries are within the range of conditions that support current and past populations of Arctic grayling. Seven tributaries contained between 20% and 30% pools by area, which grayling use for refuge. All but two tributaries were composed primarily of pebbles, with the remaining two dominated by fine substrates (sand, silt, clay). Basic water parameters and channel depth for all tributaries were within the ranges found for populations of Arctic grayling persisting in Montana, Alaska, and Canada. Based on the metrics analyzed in this study, suitable abiotic grayling habitat does exist in Big Manistee River tributaries.
Abstract:
Gravity-flow aqueducts are used to bring clean water from mountain springs in the Comarca Ngäbe-Buglé, Panama, to the homes of the indigenous people who reside there. Spring captures enclose a spring to direct the flow of water into the transmission line. Seepage contact springs are most common, with water appearing above either hard basalt bedrock or a dense clay layer. Spring flows vary dramatically between wet and dry seasons, and the discharge points of springs can shift, sometimes enough to impact the capture structure and its ability to properly collect all of the available water. Traditionally, spring captures are concrete boxes. The spring boxes observed by the author were dilapidated or out of alignment with the spring itself, capturing only part of the discharge. To address these issues, an improved design approach was developed that mimics the terrain surrounding the spring source. Over the course of a year, three different spring sites were evaluated, and spring captures were designed and constructed based on the new approach. Spring flow data from each case study demonstrate increased flow capture in the improved structures. Rural water systems, including spring captures, can be sustainably maintained through the Circuit Rider model, a technical support system in which technical assistance is provided for the operation of the water systems. During 2012-2013, the author worked as a Circuit Rider and facilitated a water system improvement project while exploring methods of community empowerment to increase the capacity for system maintenance. Based on these experiences, recommendations are provided to expand the Circuit Rider model in the Comarca Ngäbe-Buglé under the Panamanian Ministry of Health's Water and Sanitation Project (PASAP).