1000 results for Michigan Tech


Relevance:

60.00%

Publisher:

Abstract:

In this thesis we study weak isometries of Hamming spaces. These are permutations of a Hamming space that preserve some, but not necessarily all, distances. We wish to find conditions under which a weak isometry is in fact an isometry. This type of problem was first posed by Beckman and Quarles for R^n. In Chapter 2 we give definitions pertinent to our research. The third chapter focuses on some known results in this area, with special emphasis on papers by V. Krasin as well as S. De Winter and M. Korb, who solved this problem for the Boolean cube, that is, the binary Hamming space. We attempted to generalize some of their methods to the non-Boolean case. The fourth chapter contains our new results and is split into two major contributions. Our first contribution shows that if p = n or p < n/2, then every weak isometry of the Hamming space H(n, q) that preserves distance p is an isometry. Our second contribution gives a possible method to check whether a weak isometry is an isometry using linear algebra and graph theory.
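As an illustration of the objects involved, the distance-p preservation property can be checked exhaustively on a small Hamming space. The sketch below (Python, with toy parameters chosen for illustration) is not the thesis's method, just a direct restatement of the definition of a weak isometry:

```python
from itertools import product

def hamming(x, y):
    """Hamming distance: number of coordinates where x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def preserves_distance(sigma, space, p):
    """True if the permutation sigma of the space maps every pair at
    distance p to a pair at distance p (a weak isometry for p)."""
    return all(hamming(sigma[x], sigma[y]) == p
               for x in space for y in space
               if hamming(x, y) == p)

# H(2, 3): the ternary Hamming space of length 2 (9 points)
space = list(product(range(3), repeat=2))
identity = {x: x for x in space}
# swapping coordinates is a genuine isometry, so it preserves every distance
swap = {(a, b): (b, a) for a, b in space}
```

An actual isometry (such as the coordinate swap above) preserves every distance; the thesis's question is when preserving a single distance p already forces this.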


By providing vehicle-to-vehicle and vehicle-to-infrastructure wireless communications, vehicular ad hoc networks (VANETs), also known as "networks on wheels", can greatly enhance traffic safety, traffic efficiency, and the driving experience for intelligent transportation systems (ITS). However, the unique features of VANETs, such as high mobility and uneven distribution of vehicular nodes, impose critical efficiency and reliability challenges on the implementation of VANETs. This dissertation is motivated by the great application potential of VANETs in the design of efficient in-network data processing and dissemination. Considering the significance of message aggregation, data dissemination, and data collection, this dissertation research targets enhancing traffic safety and traffic efficiency, as well as developing novel commercial applications based on VANETs, along four aspects: 1) accurate and efficient message aggregation to detect on-road safety-relevant events, 2) reliable data dissemination to notify remote vehicles, 3) efficient and reliable spatial data collection from vehicular sensors, and 4) novel promising applications to exploit the commercial potential of VANETs. Specifically, to enable cooperative detection of safety-relevant events on the roads, the structure-less message aggregation (SLMA) scheme is proposed to improve communication efficiency and message accuracy. The scheme of relative position based message dissemination (RPB-MD) is proposed to reliably and efficiently disseminate messages to all intended vehicles in the zone-of-relevance under varying traffic density. Given the large volume of vehicular sensor data available in VANETs, the scheme of compressive sampling based data collection (CS-DC) is proposed to efficiently collect spatially relevant data at large scale, especially in dense traffic.
In addition, with novel and efficient solutions proposed for the application-specific issues of data dissemination and data collection, several appealing value-added applications for VANETs are developed to exploit their commercial potential, namely general purpose automatic survey (GPAS), VANET-based ambient ad dissemination (VAAD), and VANET-based vehicle performance monitoring and analysis (VehicleView). Thus, by improving the efficiency and reliability of in-network data processing and dissemination, including message aggregation, data dissemination, and data collection, together with the development of novel promising applications, this dissertation helps push VANETs further toward massive deployment.


It is remarkable that there are no deployed military hybrid vehicles, given that battlefield fuel costs approximately 100 times as much as civilian fuel. In the commercial marketplace, where fuel prices are much lower, electric hybrid vehicles have become increasingly common due to their increased fuel efficiency and the associated operating cost benefit. The absence of military hybrid vehicles is not due to a lack of investment in research and development, but rather because applying hybrid vehicle architectures to a military application poses unique challenges. These challenges include inconsistent duty cycles for propulsion requirements and the absence of methods to examine vehicle energy holistically. This dissertation provides a remedy to these challenges by presenting a method to quantify the benefits of a military hybrid vehicle by regarding that vehicle as a microgrid. This innovative concept allowed for the creation of an expandable multiple-input numerical optimization method that was implemented for both real-time control and system design optimization; an example of each implementation is presented. Optimization in the loop using this new method was compared to a traditional closed-loop control system and proved to be more fuel efficient. System design optimization using this method successfully illustrated battery size optimization by iterating through various electric duty cycles. By utilizing this new multiple-input numerical optimization method, a holistic view of duty cycle synthesis, vehicle energy use, and vehicle design optimization can be achieved.


Aluminum alloyed with small atomic fractions of Sc, Zr, and Hf has been shown to exhibit high-temperature microstructural stability that may improve high-temperature mechanical behavior. These quaternary alloys were designed using thermodynamic modeling to increase the volume fraction of precipitated tri-aluminide phases and thereby improve thermal stability. When aged during a multi-step, isochronal heat treatment, two compositions showed a secondary room-temperature hardness peak of up to 700 MPa at 450°C. Elevated-temperature hardness profiles also indicated an increase in hardness from 200-300°C, attributed to the precipitation of Al3Sc; however, no secondary hardness response was observed from the Al3Zr or Al3Hf phases in this alloy.


I utilized state-of-the-art remote sensing and GIS (Geographical Information System) techniques to study large-scale biological, physical, and ecological processes of coastal, nearshore, and offshore waters of Lake Michigan and Lake Superior. These processes ranged from chlorophyll a and primary production time series analyses in Lake Michigan to coastal stamp sand threats to Buffalo Reef in Lake Superior. I used SeaWiFS (Sea-viewing Wide Field-of-view Sensor) satellite imagery to trace various biological, chemical, and optical water properties of Lake Michigan during the past decade and to investigate the collapse of early spring primary production. Using spatial analysis techniques, I was able to connect these changes to important biological processes in the lake (quagga mussel filtration). In a separate study on Lake Superior, using LiDAR (Light Detection and Ranging) and aerial photos, we examined natural coastal erosion in Grand Traverse Bay, Michigan, and discussed a variety of geological features that influence general sediment accumulation patterns and interactions with migrating tailings from legacy mining. These sediments are moving southwesterly toward Buffalo Reef, threatening the breeding grounds of lake trout and lake whitefish.


Early water resources modeling efforts were aimed mostly at representing hydrologic processes, but the need for interdisciplinary studies has led to increasing complexity and the integration of environmental, social, and economic functions. The gradual shift from merely employing engineering-based simulation models to applying more holistic frameworks is an indicator of promising changes in the traditional paradigm for the application of water resources models, supporting more sustainable management decisions. This dissertation contributes to the application of a quantitative-qualitative framework for sustainable water resources management using system dynamics simulation, as well as environmental systems analysis techniques, to provide insights for water quality management in the Great Lakes basin. The traditional linear-thinking paradigm lacks the mental and organizational framework for sustainable development trajectories and may lead to quick-fix solutions that fail to address the key drivers of water resources problems. To facilitate holistic analysis of water resources systems, systems thinking seeks to understand interactions among the subsystems. System dynamics provides a suitable framework for operationalizing systems thinking and applying it to water resources problems by offering useful qualitative tools such as causal loop diagrams (CLD), stock-and-flow diagrams (SFD), and system archetypes. The approach provides a high-level quantitative-qualitative modeling framework for "big-picture" understanding of water resources systems, stakeholder participation, policy analysis, and strategic decision making. While quantitative modeling using extensive computer simulations and optimization is still very important and needed for policy screening, qualitative system dynamics models can improve understanding of general trends and the root causes of problems, and thus promote sustainable water resources decision making.
Within the system dynamics framework, a growth and underinvestment (G&U) system archetype governing Lake Allegan's eutrophication problem was hypothesized to explain the system's problematic behavior and to identify policy leverage points for mitigation. A system dynamics simulation model was developed to characterize the lake's recovery from its hypereutrophic state and to assess a number of proposed total maximum daily load (TMDL) reduction policies, including phosphorus load reductions from point sources (PS) and non-point sources (NPS). It was shown that, for a TMDL plan to be effective, it should be considered a component of a continuous sustainability process that accounts for the dynamic feedback relationships between socio-economic growth, land use change, and environmental conditions. Furthermore, a high-level simulation-optimization framework was developed to guide watershed-scale BMP implementation in the Kalamazoo watershed. Agricultural BMPs should be given priority in the watershed in order to facilitate cost-efficient attainment of Lake Allegan's TP concentration target. However, without adequate support policies, agricultural BMP implementation may adversely affect agricultural producers. Results from a case study of the Maumee River basin show that coordinated BMP implementation across upstream and downstream watersheds can significantly improve the cost efficiency of TP load abatement.
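A stock-and-flow model of the kind used in system dynamics reduces, in its simplest form, to numerically integrating a stock with inflows and outflows. The one-stock sketch below (Python; the equation and coefficients are made-up illustrations, not the dissertation's Lake Allegan model) shows the mechanism:

```python
def simulate_tp(load, flush_rate, tp0, dt=1.0, steps=100):
    """Euler integration of a one-stock phosphorus balance:
    d(TP)/dt = load - flush_rate * TP   (all units illustrative).
    'load' is the inflow (e.g. PS + NPS sources); the outflow term
    lumps flushing and settling into one first-order loss."""
    tp = tp0
    history = [tp]
    for _ in range(steps):
        tp += dt * (load - flush_rate * tp)
        history.append(tp)
    return history

# a hypereutrophic initial state relaxes toward the equilibrium
# load / flush_rate (here 2.0 / 0.1 = 20.0)
traj = simulate_tp(load=2.0, flush_rate=0.1, tp0=60.0)
```

Cutting `load` (a TMDL-style policy) lowers the equilibrium the stock converges to, which is the behavior such models are used to explore.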


This thesis presents a load sharing method applied in a distributed microgrid system. The goal of this method is to balance the state of charge (SoC) of each parallel-connected battery and to make it possible for all connected modules to detect the average SoC of the system by measuring the bus voltage. In this method the reference voltage for each battery converter is adjusted by adding a proportional SoC factor. Under such a setting, the battery with a higher SoC outputs more power, whereas the one with a lower SoC outputs less. The higher-SoC battery therefore uses its energy faster than the lower ones, and eventually the SoC and output power of each battery converge. Because the reference voltage is related to SoC status, information about the system's average SoC can be shared with all modules by measuring the bus voltage. The SoC balancing speed is related to the SoC droop factors. This SoC-based load sharing control system is analyzed for feasibility and stability. Simulations in MATLAB/Simulink are presented, which indicate that this control scheme can balance the battery SoCs as predicted. The observation of SoC sharing through the bus voltage was validated in both software simulation and hardware experiments. The method could be of use to communication-free distributed power systems for load shedding and power planning.
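The droop idea can be sketched in a few lines: each converter raises its voltage reference in proportion to its battery's SoC, so higher-SoC modules sit above the bus average, source more power, and drain faster. The Python toy below abstracts away the power electronics entirely; the gain, droop factor, and update rule are illustrative stand-ins, not the thesis's controller:

```python
def droop_reference(v_nom, k_soc, soc):
    """Converter voltage reference raised by a proportional SoC factor."""
    return v_nom + k_soc * soc

def balance_step(socs, v_nom=48.0, k_soc=0.5, gain=0.1):
    """One discharge step: modules whose reference sits above the bus
    average output more power, so their SoC drains faster (toy dynamics)."""
    refs = [droop_reference(v_nom, k_soc, s) for s in socs]
    mean_ref = sum(refs) / len(refs)
    return [s - gain * (r - mean_ref) for s, r in zip(socs, refs)]

socs = [0.9, 0.6]          # two parallel batteries, unequal SoC
for _ in range(200):
    socs = balance_step(socs)
# the SoC gap shrinks geometrically; both converge toward the mean (0.75)
```

The mean reference also encodes the average SoC, which is why measuring the bus voltage reveals it without a communication link.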


The main objectives of this thesis are: to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition, and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition, and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. MATLAB software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data so as to highlight similarities and differences. Identifying patterns in high-dimensional data (more than three dimensions) is difficult because graphical representation of the data is impossible; PCA is therefore a powerful method for analyzing such data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve on PCA. The joint transform correlator (JTC) is an optical correlator used to synthesize a frequency-plane filter for coherent optical systems. The IPCA algorithm generally behaves better than the PCA algorithm in most of these applications. It outperforms PCA in image compression because it achieves higher compression, more accurate reconstruction, and faster processing with acceptable errors; in addition, it is better than PCA in real-time image detection because it achieves the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and reasonable speed. Finally, in detection and recognition, the digital model performs better than the optical model.
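For reference, standard PCA-based image compression (the baseline that IPCA improves on) can be sketched with an SVD. The Python fragment below uses random arrays in place of images and makes no attempt at the information-theoretic IPCA variant:

```python
import numpy as np

def pca_compress(images, k):
    """Project images (one flattened image per row) onto the top-k
    principal components. Returns mean, components, and coefficients;
    storing (mean, components, coeffs) instead of the images is the
    compression."""
    mean = images.mean(axis=0)
    centered = images - mean
    # rows of vt are the principal directions of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]
    coeffs = centered @ components.T
    return mean, components, coeffs

def pca_reconstruct(mean, components, coeffs):
    """Approximate the original images from the compressed form."""
    return mean + coeffs @ components

rng = np.random.default_rng(0)
data = rng.normal(size=(20, 64))   # 20 flattened 8x8 "images" (toy data)
mean, comps, coeffs = pca_compress(data, k=10)
recon = pca_reconstruct(mean, comps, coeffs)
# relative reconstruction error; it shrinks as k grows
err = np.linalg.norm(data - recon) / np.linalg.norm(data)
```

With k equal to the rank of the centered data the reconstruction is exact, which is the trade-off (compression ratio vs. error) the thesis measures.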


The technique of delineating Populus tremuloides (Michx.) clonal colonies based on morphology and phenology has been utilized in many studies and forestry applications since the 1950s. Recently, the availability and robustness of molecular markers have challenged the validity of such approaches for accurate clonal identification. However, genetically sampling an entire stand is largely impractical or impossible. For that reason, it is often necessary to delineate putative genet boundaries for a more selective approach when genetically analyzing a clonal population. Here I re-evaluated the usefulness of phenotypic delineation by: (1) genetically identifying clonal colonies using nuclear microsatellite markers, (2) assessing phenotypic inter- and intraclonal agreement, and (3) determining the accuracy of visible characters in correctly assigning ramets to their respective genets. The long-term soil productivity study plot 28, located in the Ottawa National Forest, MI (46° 37'60.0" N, 89° 12'42.7" W), was chosen for analysis. In total, 32 genets were identified from 181 stems using seven microsatellite markers. The average genet size was 5.5 ramets, and the six largest genets were selected for phenotypic analyses. Phenotypic analyses included budbreak timing, DBH, bark thickness, bark color or brightness, leaf senescence, leaf serrations, and leaf length ratio. All phenotypic characters except DBH were useful for the analysis of inter- and intraclonal variation and phenotypic delineation. Generally, phenotypic expression was related to genotype, with multiple response permutation procedure (MRPP) intraclonal distance values ranging from 0.148 to 0.427 and an observed MRPP delta value of 0.221 against an expected delta of 0.5. The phenotypic traits, though, overlapped significantly among some clones. When stems were assigned to phenotypic groups, six phenotypic groups were identified, each containing a dominant genotype or clonal colony.
All phenotypic groups contained stems from at least two clonal colonies and no clonal colony was entirely contained within one phenotypic group. These results demonstrate that phenotype varies with genotype and stand clonality can be determined using phenotypic characters, but phenotypic delineation is less precise. I therefore recommend that some genetic identification follow any phenotypic delineation. The amount of genetic identification required for clonal confirmation is likely to vary based on stand and environmental conditions. Further analysis, however, is needed to test these findings in other forest stands and populations.
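The genetic identification step amounts to grouping stems that share an identical multilocus genotype. A minimal sketch (Python, with hypothetical allele data; real clonal assignment must also tolerate genotyping error and somatic mutation, which this exact-match grouping ignores):

```python
from collections import defaultdict

def delineate_genets(genotypes):
    """Group sampled stems (ramets) that share an identical multilocus
    microsatellite genotype into putative genets (exact matching)."""
    genets = defaultdict(list)
    for stem_id, alleles in genotypes.items():
        # a hashable key built from the full multilocus genotype
        key = tuple(sorted(alleles.items()))
        genets[key].append(stem_id)
    return list(genets.values())

# toy data: per-stem {locus: (allele1, allele2)} calls (hypothetical)
samples = {
    "stem1": {"loc1": (102, 104), "loc2": (88, 90)},
    "stem2": {"loc1": (102, 104), "loc2": (88, 90)},  # same clone as stem1
    "stem3": {"loc1": (100, 104), "loc2": (88, 92)},  # distinct genet
}
groups = delineate_genets(samples)
# -> two genets: {stem1, stem2} and {stem3}
```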


Harmonic distortion of voltages and currents increases with the increased penetration of Plug-in Electric Vehicle (PEV) loads in distribution systems. Wind Generators (WGs), which are sources of harmonic currents, have some harmonic profiles in common with PEVs. Thus, WGs can be utilized in careful ways to mitigate the effect of PEVs on harmonic distortion. This work studies the impact of PEVs on harmonic distortion and the integration of WGs to reduce it. A decoupled harmonic three-phase unbalanced distribution system model is developed in OpenDSS, where PEVs and WGs are represented by harmonic current loads and sources, respectively. The developed model is first used to solve the harmonic power flow on the IEEE 34-bus distribution system with low, moderate, and high penetration of PEVs, and the impact on current/voltage Total Harmonic Distortion (THD) is studied. This study shows that the voltage and current THDs can increase up to 9.5% and 50%, respectively, in distribution systems with high PEV penetration; these THD values are significantly larger than the limits prescribed by IEEE standards. Next, carefully sized WGs are placed at different locations in the 34-bus distribution system to demonstrate reduction of the current/voltage THDs. A framework is also developed to find the optimal size of WGs that reduces THDs below the prescribed operational limits in distribution circuits with PEV loads. The optimization framework is implemented in MATLAB using a Genetic Algorithm, interfaced with the harmonic power flow model developed in OpenDSS. The developed framework is used to find the optimal size of WGs for the 34-bus distribution system with low, moderate, and high penetration of PEVs, with the objective of reducing voltage/current THD deviations throughout the distribution circuits.
With the optimal size of WGs in distribution systems with PEV loads, the current and voltage THDs are reduced below 5% and 7% respectively, which are within the limits prescribed by IEEE.
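The THD figures quoted above follow the usual definition: the RMS of the harmonic components (orders 2 and above) relative to the fundamental. A minimal computation (Python; the spectrum values are illustrative, not from the study):

```python
import math

def thd(harmonic_rms, fundamental_rms):
    """Total harmonic distortion in percent: RMS of the harmonic
    components relative to the fundamental component."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonic_rms)) / fundamental_rms

# illustrative current spectrum: 10 A fundamental plus odd harmonics
fund = 10.0
harmonics = [0.8, 0.5, 0.3]   # e.g. 3rd, 5th, 7th harmonic RMS values
print(f"THD = {thd(harmonics, fund):.2f}%")   # prints THD = 9.90%
```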


The proton exchange membrane (PEM) fuel cell is known as a promising power source for applications such as automotive, residential, and stationary power. During the operation of a PEM fuel cell, hydrogen is oxidized at the anode and oxygen is reduced at the cathode to produce the intended power. Water and heat are inevitable byproducts of these reactions. The water produced at the cathode must be properly removed from inside the cell; otherwise, it may block the path of reactants passing through the gas channels and/or the gas diffusion layer (GDL). This deteriorates the performance of the cell and can eventually halt its operation. Water transport in PEM fuel cells has been the subject of this PhD study. Water transport on the surface of the GDL, through the gas flow channels, and through the GDL has been studied in detail. For water transport on the surface of the GDL, droplet detachment was measured for different GDL conditions and for anode and cathode gas flow channels. Water transport through the gas flow channels was investigated by measuring the two-phase flow pressure drop along the channels. Because accumulated liquid water within the gas flow channels resists the gas flow, the pressure drop increases along the flow channels; the two-phase pressure drop can therefore reveal useful information about the amount of liquid water accumulated within the channels. Liquid water transport through the GDL was also investigated by measuring the liquid water breakthrough pressure for the region between capillary fingering and stable displacement on the drainage phase diagram. The breakthrough pressure was measured for different variables such as GDL thickness, PTFE/Nafion content within the GDL, GDL compression, the inclusion of a micro-porous layer (MPL), and different water flow rates through the GDL. Prior to all these studies, GDL microstructural properties were examined.
GDL microstructural properties such as mean pore diameter, pore diameter distribution, and pore roundness distribution have been investigated by analyzing SEM images of GDL samples.


Silver and mercury both dissolve in cyanide leaching, and the mercury co-precipitates with silver during metal recovery. Mercury must then be removed from the silver/mercury amalgam by vaporizing it in a retort, leading to environmental and health hazards. The need for retorting silver can be greatly reduced if mercury is selectively removed from leaching solutions. Theoretical calculations were carried out based on the thermodynamics of the Ag/Hg/CN- system to determine possible approaches to either preventing mercury dissolution or selectively precipitating it without silver loss. Preliminary experiments were then carried out based on these calculations to determine whether the reactions would be spontaneous with reasonably fast kinetics. In an attempt to stop mercury from dissolving and leaching through the heap, the first set of experiments determined whether selenium and mercury would form a mercury selenide under leaching conditions, lowering the amount of mercury in solution while forming a stable compound. The results of the synthetic ore experiments with selenium indicated that another effect was already suppressing mercury dissolution, and the effect of the selenium could not be well analyzed given the small amount of change. The effect dominating the reactions led to the second set of experiments, which used silver sulfide as a selective precipitant of mercury. The next experiments determined whether adding solutions containing mercury cyanide to un-leached silver sulfide would facilitate a precipitation reaction, putting silver into solution and precipitating mercury as mercury sulfide. Counter-current flow experiments using the high-selenium ore showed 99.8% removal of mercury from solution. Compared to leaching with cyanide alone, about 60% of the silver was removed per pass for the high-selenium ore, and around 90% for the high-mercury ore.
Since silver sulfide is rather expensive to use solely as a mercury precipitant, another compound was sought that could selectively precipitate mercury and leave silver in solution. In looking for a more inexpensive selective precipitant, zinc sulfide was tested. The third set of experiments showed that zinc sulfide (as sphalerite) could be used to selectively precipitate mercury while leaving silver cyanide in solution. Parameters such as particle size, reduction potential, and degree of oxidation of the sphalerite were tested. Batch experiments worked well, showing 99.8% mercury removal with only ≈1% silver loss (starting with 930 ppb mercury and 300 ppb silver) at one hour. A continuous flow process would work better for industrial applications, which was demonstrated with the filter funnel setup. Funnels fitted with filter paper and sphalerite showed good mercury removal (starting from 31 ppb mercury and 333 ppb silver, an 87% mercury removal and 7% silver loss through one funnel). A counter-current flow setup showed 100% mercury removal and under 0.1% silver loss starting with 704 ppb silver and 922 ppb mercury. The resulting sphalerite coated with mercury sulfide was also shown to be stable (not releasing mercury) under leaching tests. Use of sphalerite could easily be implemented through such means as sphalerite-impregnated filter paper placed in existing processes. In summary, this work focuses on preventing mercury from following silver through the leaching circuit. Currently the only practical means of removing mercury is by retort, creating possible health hazards in the distillation process and in the transportation and storage of the final mercury waste product. Preventing mercury from following silver in the earlier stages of the leaching process will greatly reduce the risk of mercury spills, human exposure to mercury, and possible environmental disasters. This will save mining companies millions of dollars in mercury handling and storage and in projects to clean up spilled mercury, and will result in better health for those living near and working in the mines.


"Seeing is believing" is a proverb that well suits fluorescent imaging probes. Because we can selectively and sensitively visualize small biomolecules, organelles such as lysosomes, neutral molecules, metal ions, and anions through cellular imaging, fluorescent probes can help shed light on physiological and pathophysiological pathways. Because these biomolecules are produced at low concentrations in biochemical pathways, general analytical techniques either fail to detect them or are not sensitive enough to differentiate their relative concentrations. During my Ph.D. study, I exploited synthetic organic techniques to design and synthesize fluorescent probes with desirable properties such as high water solubility, high sensitivity, and varying fluorescence quantum yields. I synthesized a highly water-soluble BODIPY-based turn-on fluorescent probe for endogenous nitric oxide. I also synthesized a series of cell-membrane-permeable near-infrared (NIR) pH-activatable fluorescent probes for lysosomal pH sensing. Fluorescent dyes are molecular tools for designing fluorescent bioimaging probes. This prompted me to design and synthesize a hybrid fluorescent dye with a functionalizable chlorine atom and to test the chlorine reactivity for fluorescent probe design. Carbohydrate-protein interactions are key to many biological processes, such as viral and bacterial infections, cell recognition and adhesion, and immune response. Among the analytical techniques aimed at studying these interactions, electrochemical biosensing is particularly efficient due to its low cost, ease of operation, and possibility for miniaturization. During my Ph.D., I synthesized a mannose-bearing aniline molecule that was successfully tested as an electrochemical biosensor. A ferrocene-mannose conjugate with an anchoring group was also synthesized, which can be used as a potential electrochemical biosensor.


Sporulation is a process in which some bacteria divide asymmetrically to form tough protective endospores, which help them survive in a hazardous environment for a long time. The factors that can trigger this process are diverse: heat, radiation, chemicals, and lack of nutrients can all lead to the formation of endospores. This phenomenon leads to low productivity during industrial production. However, the sporulation mechanism in the spore-forming bacterium Clostridium thermocellum is still unclear. Therefore, if a regulatory network for sporulation can be built, we may find ways to inhibit this process. In this study, a computational method is applied to predict the sporulation network in Clostridium thermocellum. A working sporulation network model with 40 newly predicted genes and 4 function groups is built using a network construction program, CINPER. Five sets of microarray expression data for Clostridium thermocellum under different conditions have been collected, and the analysis shows that the predicted result is reasonable.


For the past three decades the automotive industry has faced two main conflicting challenges: improving fuel economy and meeting emissions standards. This has driven engineers and researchers around the world to develop engines and powertrains that can meet these two daunting challenges. Focusing on internal combustion engines, there are very few options to enhance their performance beyond current standards without increasing the price considerably. Homogeneous Charge Compression Ignition (HCCI) engine technology is one combustion technique that has the potential to partially meet the current critical challenges, including CAFE standards and stringent EPA emissions standards. HCCI works on very lean mixtures compared to current SI engines, resulting in very low combustion temperatures and ultra-low NOx emissions. These engines, when controlled accurately, result in ultra-low soot formation. On the other hand, HCCI engines suffer from high unburnt hydrocarbon and carbon monoxide emissions. This technology also faces an acute combustion control problem which, if not dealt with properly, yields highly unfavorable operating conditions and exhaust emissions. This thesis contains two main parts. One part deals with developing an HCCI experimental setup, and the other focuses on developing a grey-box modelling technique to control HCCI exhaust gas emissions. The experimental part gives complete details on the modifications made to the stock engine to run it in HCCI mode. This part also comprises details and specifications of all the sensors, actuators, and other auxiliary parts attached to the conventional SI engine in order to run and monitor the engine in SI mode and in future SI-HCCI mode-switching studies. In the latter part, around 600 data points from two different HCCI setups for two different engines are studied, and a grey-box model for emission prediction is developed.
The grey-box model is trained on 75% of the data, and the remaining data is used for validation. An average 70% increase in accuracy in predicting engine performance was found using the grey-box model over an empirical (black-box) model in this study. The grey-box model provides a solution to the difficulty of real-time control of an HCCI engine, and this thesis is the first study in the literature to develop a control-oriented model for predicting HCCI engine emissions.
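A grey-box model in this sense pairs a physics-based (white-box) estimate with a data-driven (black-box) correction fitted to its residuals. The sketch below (Python; the physical relation, inputs, and data are invented stand-ins, not the thesis's engine model) shows the structure:

```python
import numpy as np

def physics_estimate(x):
    """White-box part: a simplified physical emissions relation
    (illustrative stand-in for an actual engine model)."""
    load, afr = x                 # engine load, air-fuel ratio
    return 0.5 * load / afr

def fit_grey_box(X, y):
    """Black-box part: a least-squares linear correction fitted to
    the residuals the physics model leaves unexplained."""
    base = np.array([physics_estimate(x) for x in X])
    A = np.hstack([X, np.ones((len(X), 1))])    # affine features
    coef, *_ = np.linalg.lstsq(A, y - base, rcond=None)
    return coef

def predict_grey_box(coef, x):
    """Grey-box prediction = physics estimate + fitted correction."""
    return physics_estimate(x) + np.append(x, 1.0) @ coef

# synthetic "measurements": physics term plus a structured residual
rng = np.random.default_rng(1)
X = rng.uniform([1, 14], [10, 22], size=(60, 2))
true = np.array([0.5 * l / a + 0.02 * l - 0.01 * a + 0.3 for l, a in X])
coef = fit_grey_box(X, true)
pred = predict_grey_box(coef, X[0])
```

The appeal for real-time control is that the correction term is cheap to evaluate while the physics term keeps the model extrapolating sensibly.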