19 results for modeling of data sources
in Digital Commons - Michigan Tech
Abstract:
Volcán Pacaya is one of three currently active volcanoes in Guatemala. Volcanic activity originates from the local tectonic subduction of the Cocos plate beneath the Caribbean plate along the Pacific Guatemalan coast. Pacaya is characterized by generally strombolian-type activity with occasional larger vulcanian-type eruptions approximately every ten years. One particularly large eruption occurred on May 27, 2010. Using GPS data collected for approximately eight years before this eruption and an additional three years of data collected afterwards, surface movement spanning the period of the eruption can be measured and used as a tool to help understand activity at the volcano. Initial positions were obtained from raw data using the Automatic Precise Positioning Service provided by the NASA Jet Propulsion Laboratory. Forward modeling of observed 3-D displacements for three time periods (before, covering, and after the May 2010 eruption) revealed that a plausible source for deformation is a vertical dike or planar surface trending NNW-SSE through the cone. The best-fitting models describe the deformation of the volcano in three distinct periods: 0.45 m right-lateral movement and 0.55 m tensile opening along the dike from October 2001 through January 2009 (pre-eruption); 0.55 m left-lateral slip along the dike from January 2009 through January 2011 (covering the eruption); and -0.025 m dip slip along the dike from January 2011 through March 2013 (post-eruption). In all best-fit models the dike dips 75° westward. The three modeled periods have RMS misfit values of 5.49 cm, 12.38 cm, and 6.90 cm, respectively. During the period that includes the eruption, the volcano most likely experienced a combination of slip and inflation below the edifice, which created a large scar at the surface down the northern flank of the volcano.
All models suggest that the dipping dike may be experiencing a combination of inflation and oblique slip below the edifice, which raises the possibility of a westward collapse in the future.
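The forward-modeling comparison above hinges on the RMS misfit between observed and predicted 3-D station displacements. A minimal sketch of that misfit computation (the station displacement values below are hypothetical, not the thesis data):

```python
import math

def rms_misfit(observed, predicted):
    """Root-mean-square misfit between observed and modeled 3-D
    displacement vectors, one (east, north, up) tuple per GPS station."""
    residuals = [
        (o - p) ** 2
        for obs_v, pred_v in zip(observed, predicted)
        for o, p in zip(obs_v, pred_v)
    ]
    return math.sqrt(sum(residuals) / len(residuals))

# Hypothetical station displacements in metres (three stations).
obs = [(0.021, -0.015, 0.004), (0.035, -0.022, 0.010), (0.012, -0.008, 0.002)]
pred = [(0.018, -0.012, 0.006), (0.030, -0.025, 0.008), (0.015, -0.010, 0.001)]
print(f"RMS misfit: {rms_misfit(obs, pred) * 100:.2f} cm")
```

The best-fit dike parameters in the abstract are those minimizing exactly this kind of scalar over all candidate source geometries.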
Abstract:
Adding conductive carbon fillers to insulating thermoplastic resins increases composite electrical and thermal conductivity. Often, as much of a single type of carbon filler as possible is added to achieve the desired conductivity while still allowing the material to be molded into a bipolar plate for a fuel cell. In this study, varying amounts of three different carbons (carbon black, synthetic graphite particles, and carbon fiber) were added to Vectra A950RX Liquid Crystal Polymer. The in-plane thermal conductivity of the resulting single-filler composites was tested. The results showed that adding synthetic graphite particles caused the largest increase in the in-plane thermal conductivity of the composite. The composites were modeled using ellipsoidal inclusion problems to predict the effective in-plane thermal conductivities at varying volume fractions using only the physical property data of the constituents. The synthetic graphite and carbon black composites were modeled using the average field approximation with ellipsoidal inclusions, and the carbon fiber polymer composite was modeled using an assemblage of coated ellipsoids; both models showed good agreement with the experimental data.
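The average field approximation treats fillers as ellipsoidal inclusions; in the special spherical limit it reduces to the classical Maxwell-Garnett expression, sketched below. The conductivity values are illustrative only; the thesis models use full ellipsoidal geometry and measured constituent data:

```python
def maxwell_keff(k_matrix, k_filler, phi):
    """Effective thermal conductivity of a composite with spherical
    inclusions (Maxwell-Garnett form, the spherical limit of the
    average-field approximation)."""
    num = k_filler + 2 * k_matrix + 2 * phi * (k_filler - k_matrix)
    den = k_filler + 2 * k_matrix - phi * (k_filler - k_matrix)
    return k_matrix * num / den

# Illustrative values: LCP matrix ~0.2 W/m.K, synthetic graphite ~300 W/m.K.
for phi in (0.1, 0.2, 0.3):
    print(f"phi={phi:.1f}  k_eff={maxwell_keff(0.2, 300.0, phi):.3f} W/m.K")
```

At typical filler-to-matrix conductivity ratios the prediction is insensitive to the exact filler conductivity and grows with volume fraction, consistent with the trends reported above.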
Abstract:
Thermally conductive resins are a class of materials that show promise in many different applications. One growing field for their use is bipolar plate technology for fuel cell applications. In this work, an LCP was mixed with different types of carbon fillers to determine the effects of the individual carbon fillers on the thermal conductivity of the composite resin. In addition, mathematical modeling was performed on the thermal conductivity data with the goal of developing predictive models for the thermal conductivity of highly filled composite resins.
Abstract:
Materials are inherently multi-scale in nature, consisting of distinct characteristics at various length scales from atoms to bulk material. There are no widely accepted predictive multi-scale modeling techniques that span from the atomic level to the bulk and relate the effects of structure at the nanometer scale (10⁻⁹ m) to macro-scale properties. Traditional engineering treats matter as continuous with no internal structure. In contrast to engineers, physicists have dealt with matter in its discrete structure at small length scales to understand the fundamental behavior of materials. Multiscale modeling is of great scientific and technical importance, as it can aid in designing novel materials with properties tailored to a specific application, such as multi-functional materials. Polymer nanocomposite materials have the potential to provide significant increases in mechanical properties relative to current polymers used for structural applications. Nanoscale reinforcements have the potential to increase the effective interface between the reinforcement and the matrix by orders of magnitude for a given reinforcement volume fraction relative to traditional micro- or macro-scale reinforcements. To facilitate the development of polymer nanocomposite materials, constitutive relationships must be established that predict the bulk mechanical properties of the materials as a function of the molecular structure. A computational hierarchical multiscale modeling technique is developed to study the bulk-level constitutive behavior of polymeric materials as a function of their molecular chemistry. Various parameters and modeling techniques from computational chemistry to continuum mechanics are utilized in the current modeling method. The cause-and-effect relationships of the parameters are studied to establish an efficient modeling framework.
The proposed methodology is applied to three different polymers and validated using experimental data available in the literature.
Abstract:
Particulate matter (PM) emissions standards set by the US Environmental Protection Agency (EPA) have become increasingly stringent over the years. The EPA regulation for PM in heavy-duty diesel engines has been reduced to 0.01 g/bhp-hr for the year 2010. Heavy-duty diesel engines make use of an aftertreatment filtration device, the Diesel Particulate Filter (DPF). DPFs are highly efficient in filtering PM (known as soot) and are an integral part of the 2010 heavy-duty diesel aftertreatment system. PM accumulates in the DPF as the exhaust gas flows through it. This PM needs to be removed periodically by oxidation for the efficient functioning of the filter. This oxidation process is also known as regeneration. There are two types of regeneration processes: active regeneration (oxidation of PM by external means) and passive oxidation (oxidation of PM by internal means). Active regeneration typically occurs in high temperature regions, about 500-600 °C, which is much higher than normal diesel exhaust temperatures. Thus, the exhaust temperature has to be raised with the help of external devices such as a Diesel Oxidation Catalyst (DOC) or a fuel burner. The O2 then oxidizes PM, producing CO2 as the oxidation product. In passive oxidation, one means of regeneration is the use of NO2, which oxidizes the PM, producing NO and CO2 as oxidation products. The passive oxidation process occurs at lower temperatures (200-400 °C) than active regeneration. Generally, DPF substrate walls are washcoated with catalyst material, which is observed to increase the rate of PM oxidation. The goal of this research is to develop a simple mathematical model to simulate PM depletion during the active regeneration process in a DPF (catalyzed and non-catalyzed). A simple, zero-dimensional kinetic model was developed in MATLAB.
Experimental data required for calibration were obtained from active regeneration experiments performed on PM-loaded mini DPFs in an automated flow reactor. The DPFs were loaded with PM from the exhaust of a commercial heavy-duty diesel engine. The model was calibrated to the data obtained from the active regeneration experiments, and numerical gradient-based optimization techniques were used to estimate the kinetic parameters of the model.
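A zero-dimensional kinetic model of this kind can be sketched as a single first-order Arrhenius rate equation integrated in time. The thesis model was built and calibrated in MATLAB; the sketch below is a Python analogue, and the pre-exponential factor and activation energy are placeholder values, not the calibrated kinetic parameters:

```python
import math

def pm_depletion(m0, T, y_o2, A, Ea, t_end, dt=0.1):
    """Zero-dimensional PM oxidation: dm/dt = -A*exp(-Ea/(R*T))*y_O2*m.
    First-order in PM mass and O2 mole fraction; A and Ea are
    placeholder values, not the calibrated parameters of the thesis."""
    R = 8.314  # universal gas constant, J/(mol K)
    k = A * math.exp(-Ea / (R * T)) * y_o2  # rate constant, 1/s
    m, t = m0, 0.0
    while t < t_end:
        m -= k * m * dt  # explicit Euler step
        t += dt
    return m

# 2 g of PM at 600 C (873 K) with 10 % O2 by mole.
remaining = pm_depletion(m0=2.0, T=873.0, y_o2=0.10, A=1.0e7, Ea=1.5e5, t_end=600.0)
print(f"PM remaining after 600 s: {remaining:.2f} g")
```

Calibration then amounts to adjusting A and Ea until the predicted mass history matches the reactor data, which is what the gradient-based optimization performs.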
Abstract:
The South Florida Water Management District (SFWMD) manages and operates numerous water control structures that are subject to scour. In an effort to reduce scour downstream of these gated structures, laboratory experiments were performed to investigate the effect of active air injection downstream of the terminal structure of a gated spillway on the depth of the scour hole. A literature review of similar research revealed significant variables such as the ratio of headwater-to-tailwater depths, the diffuser angle, sediment uniformity, and the ratio of air-to-water volumetric discharge. The experimental design was based on the analysis of several of these non-dimensional parameters. Bed scouring at stilling basins downstream of gated spillways has been identified as posing a serious risk to a spillway's structural stability. Although this type of scour has been studied in the past, it continues to represent a real threat to water control structures and requires additional attention. A hydraulic scour channel comprising a head tank, flow straightening section, gated spillway, stilling basin, scour section, sediment trap, and tail tank was used to further this analysis. Experiments were performed in a laboratory channel consisting of a 1:30 scale model of the SFWMD S65E spillway structure. To ascertain the feasibility of air injection for scour reduction, a proof-of-concept study was performed. Experiments were conducted without air entrainment and with high, medium, and low air entrainment rates for high and low headwater conditions. For the cases with no air entrainment it was found that there was excessive scour downstream of the structure due to a downward roller formed upon exiting the downstream sill of the stilling basin.
When air was introduced vertically just downstream of, and at the same level as, the stilling basin sill, it was found that air entrainment does reduce scour depth by up to 58% depending on the air flow rate, but shifts the deepest scour location to the sides of the channel bed instead of the center. Various hydraulic flow conditions were tested without air injection to determine which scenario caused more scour. That scenario, uncontrolled free flow, in which water does not contact the gate and the water elevation in the stilling basin is lower than the spillway crest, was used for the remainder of the experiments testing air injection. Various air flow rates, diffuser elevations, air hole diameters, air hole spacings, diffuser angles, and widths were tested in over 120 experiments. Optimal parameters include air injection at a rate that results in a water-to-air ratio of 0.28, air holes 1.016 mm in diameter across the entire width of the stilling basin, and a vertically oriented injection pattern. Detailed flow measurements were collected for one case with air injection and one without. An identical flow scenario was used for each experiment, namely a high flow rate, a high upstream headwater depth, and a low tailwater depth. Equilibrium bed scour and velocity measurements were taken with an Acoustic Doppler Velocimeter at nearly 3000 points. The velocity data were used to construct a vector plot in order to identify which flow components contribute to the scour hole. Additionally, turbulence parameters were calculated in an effort to help understand why air injection reduced bed scour. Turbulence intensity, normalized mean flow, normalized kinetic energy, and anisotropy-of-turbulence plots were constructed. A clear trend emerged showing that air injection reduces turbulence near the bed and therefore reduces scour potential.
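The turbulence parameters mentioned above are computed from the velocity time series at each measurement point. A minimal sketch of the turbulence intensity and turbulent kinetic energy calculation (the ADV samples below are hypothetical, standing in for one of the ~3000 measurement points):

```python
from statistics import mean

def turbulence_stats(u, v, w):
    """Streamwise turbulence intensity and turbulent kinetic energy
    from ADV velocity time series (m/s). Fluctuations are deviations
    from the time-averaged velocity at the point."""
    ub, vb, wb = mean(u), mean(v), mean(w)
    up2 = mean((x - ub) ** 2 for x in u)   # variance of u-fluctuations
    vp2 = mean((x - vb) ** 2 for x in v)
    wp2 = mean((x - wb) ** 2 for x in w)
    tke = 0.5 * (up2 + vp2 + wp2)          # turbulent kinetic energy
    ti = (up2 ** 0.5) / abs(ub)            # streamwise intensity
    return ti, tke

# Hypothetical short ADV record (m/s).
u = [0.52, 0.48, 0.55, 0.45, 0.50]
v = [0.02, -0.01, 0.03, -0.02, 0.00]
w = [-0.01, 0.01, 0.00, -0.02, 0.02]
ti, tke = turbulence_stats(u, v, w)
print(f"TI = {ti:.3f}, TKE = {tke:.5f} m^2/s^2")
```

Normalizing such point values by a reference velocity scale yields the normalized kinetic energy and anisotropy fields used in the plots described above.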
Abstract:
This document demonstrates the methodology used to create an energy- and conductance-based model for power electronic converters. The work is intended as a replacement for voltage- and current-based models, which have limited applicability to the network nodal equations. Conductance-based modeling allows direct application of load differential equations to the bus admittance matrix (Y-bus) with a unified approach. When applied directly to the Y-bus, the system becomes much easier to simulate since the state variables do not need to be transformed. The proposed transformation applies to loads, sources, and energy storage systems and is useful for DC microgrids. Transformed state models of a complete microgrid are compared to experimental results and show that the models accurately reflect the system's dynamic behavior.
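The nodal equations targeted by the transformation take the form [Y][V] = [I]; for a purely resistive DC network the Y-bus reduces to a conductance matrix. A toy two-bus example (all component values are illustrative, not from the experimental microgrid):

```python
def solve_2node(G11, G12, G21, G22, I1, I2):
    """Solve the nodal equation [G][V] = [I] for a two-bus DC network
    by Cramer's rule. G is the bus conductance (Y-bus) matrix."""
    det = G11 * G22 - G12 * G21
    V1 = (I1 * G22 - I2 * G12) / det
    V2 = (G11 * I2 - G21 * I1) / det
    return V1, V2

# Hypothetical DC microgrid: Norton-equivalent source at bus 1,
# resistive load at bus 2, joined by a line of conductance 10 S.
g_load, g_line = 2.0, 10.0
G11 = g_line                 # bus 1 self-conductance
G12 = G21 = -g_line          # mutual term
G22 = g_load + g_line        # bus 2 self-conductance
I1, I2 = 48.0, 0.0           # current injections (A)
V1, V2 = solve_2node(G11, G12, G21, G22, I1, I2)
print(f"V1 = {V1:.2f} V, V2 = {V2:.2f} V")
```

In the full method, converter and storage dynamics enter as time-varying conductance and injection terms in this same matrix, which is what avoids transforming the state variables.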
Abstract:
The municipality of San Juan La Laguna, Guatemala is home to approximately 5,200 people and is located on the western side of the Lake Atitlán caldera. Steep slopes surround all but the eastern side of San Juan. The Lake Atitlán watershed is susceptible to many natural hazards, but the most predictable are the landslides that can occur annually with each rainy season, especially during high-intensity events. Hurricane Stan hit Guatemala in October 2005; the resulting flooding and landslides devastated the Atitlán region. Locations of landslide and non-landslide points were obtained from field observations and orthophotos taken following Hurricane Stan. This study used data from multiple attributes, at every landslide and non-landslide point, and applied different multivariate analyses to optimize a model for landslide prediction during high-intensity precipitation events like Hurricane Stan. The attributes considered in this study are: geology, geomorphology, distance to faults and streams, land use, slope, aspect, curvature, plan curvature, profile curvature, and topographic wetness index. The attributes were pre-evaluated for their ability to predict landslides using four different attribute evaluators, all available in the open-source data mining software Weka: filtered subset, information gain, gain ratio, and chi-squared. Three multivariate algorithms (decision tree J48, logistic regression, and BayesNet) were optimized for landslide prediction using different attributes. The following statistical parameters were used to evaluate model accuracy: precision, recall, F-measure, and area under the receiver operating characteristic (ROC) curve. The algorithm BayesNet yielded the most accurate model and was used to build a probability map of landslide initiation points. The probability map developed in this study was also compared to the results of a bivariate landslide susceptibility analysis conducted for the watershed, encompassing Lake Atitlán and San Juan.
Landslides from Tropical Storm Agatha 2010 were used to independently validate this study’s multivariate model and the bivariate model. The ultimate aim of this study is to share the methodology and results with municipal contacts from the author's time as a U.S. Peace Corps volunteer, to facilitate more effective future landslide hazard planning and mitigation.
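Of the four attribute evaluators, information gain is the simplest to illustrate: it measures the reduction in class-label entropy obtained by splitting the data on one attribute, the same criterion Weka's information-gain evaluator uses. A sketch on a toy landslide/non-landslide dataset (the attribute names and values below are hypothetical):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label list, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute, labels):
    """Reduction in label entropy from splitting on one attribute."""
    n = len(labels)
    gain = entropy(labels)
    for value in set(attribute):
        subset = [l for a, l in zip(attribute, labels) if a == value]
        gain -= len(subset) / n * entropy(subset)
    return gain

# Toy data: 1 = landslide point, 0 = non-landslide point.
labels  = [1, 1, 1, 0, 0, 0, 1, 0]
steep   = [1, 1, 1, 0, 0, 0, 1, 1]   # hypothetical "steep slope" attribute
geology = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical geology class
print(f"gain(steep)   = {information_gain(steep, labels):.3f}")
print(f"gain(geology) = {information_gain(geology, labels):.3f}")
```

Attributes with higher gain (here the slope indicator) would be retained for the multivariate models, while uninformative ones score near zero.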
Abstract:
The thermoset epoxy resin EPON 862, coupled with the DETDA hardening agent, is utilized as the polymer matrix component in many graphite (carbon fiber) composites. Because it is difficult to experimentally characterize the interfacial region, computational molecular modeling is a necessary tool for understanding the influence of the interfacial molecular structure on bulk-level material properties. The purpose of this research is to investigate the many possible variables that may influence the interfacial structure and the effect they have on the mechanical behavior of the bulk-level composite. Molecular models are established for EPON 862-DETDA polymer in the presence of a graphite surface. Material characteristics such as polymer mass density, residual stresses, and molecular potential energy are investigated near the polymer/fiber interface. Because the exact degree of crosslinking in these thermoset systems is not known, many different crosslink densities (degrees of curing) are investigated. It is determined that a region exists near the carbon fiber surface in which the polymer mass density differs from the bulk mass density. These surface effects extend ~10 Å into the polymer from the center of the outermost graphite layer. Early simulations predict polymer residual stress levels to be higher near the graphite surface. It is also seen that the molecular potential energy in polymer atoms decreases with increasing crosslink density. New models are then established to investigate the interface between EPON 862-DETDA polymer and graphene nanoplatelets (GNPs) of various atomic thicknesses. Mechanical properties are extracted from the models using Molecular Dynamics techniques. These properties are then implemented in micromechanics software that utilizes the generalized method of cells to create representations of macro-scale composites.
Micromechanics models are created representing GNP-doped epoxy with varying numbers of graphene layers and interfacial polymer crosslink densities. The initial micromechanics results for the GNP-doped epoxy are then taken to represent the matrix component and are re-run through the micromechanics software with the addition of a carbon fiber to simulate a GNP-doped epoxy/carbon fiber composite. Micromechanics results agree well with experimental data and indicate that GNPs of 1 to 2 atomic layers are highly favorable. Finally, the effect of oxygen bonded to the surface of the GNPs is investigated. Molecular models are created for systems of varying graphene atomic thickness with different amounts of oxygen species attached: hydroxyl groups only, epoxide groups only, and a combination of epoxide and hydroxyl groups. Results show that models of oxidized graphene decrease in both tensile and shear modulus. Attaching only epoxide groups gives the best results for mechanical properties, though pristine graphene is still favored.
Abstract:
EPON 862 is an epoxy resin which is cured with the hardening agent DETDA to form a crosslinked epoxy polymer used as a component in modern aircraft structures. These crosslinked polymers are often exposed to prolonged periods at temperatures below the glass transition range, which cause physical aging to occur. Because physical aging can compromise the performance of epoxies and their composites, and because experimental techniques cannot provide all of the physical insight needed to fully understand physical aging, efficient computational approaches to predict the effects of physical aging on thermo-mechanical properties are needed. In this study, Molecular Dynamics and Molecular Minimization simulations are used to establish well-equilibrated, validated molecular models of the EPON 862-DETDA epoxy system with a range of crosslink densities using a united-atom force field. These simulations are subsequently used to predict the glass transition temperature, thermal expansion coefficients, and elastic properties of each of the crosslinked systems for validation of the modeling techniques. The results indicate that the glass transition temperature and elastic properties increase with increasing crosslink density, and that the thermal expansion coefficient decreases with crosslink density, both above and below the glass transition temperature. The results also indicate that there may be an upper limit to the crosslink density that can be realistically achieved in epoxy systems. After evaluation of the thermo-mechanical properties, a method is developed to efficiently establish molecular models of epoxy resins that represent the corresponding real molecular structure at specific aging times. Although this approach does not model the physical aging process itself, it is useful in establishing a molecular model that resembles the physically-aged state for further use in predicting thermo-mechanical properties as a function of aging time.
An equation has been derived from the results that directly correlates aging time to the aged volume of the molecular model. This equation can be helpful for modelers who want to study properties of epoxy resins at different levels of aging but have little information about the volume shrinkage that occurs during physical aging.
Abstract:
Intraneural ganglion cysts expand within a nerve, causing neurological deficits in afflicted patients. Modeling the propagation of these cysts, originating in the articular branch and then expanding radially outward, will help prove the articular theory and ultimately allow for more purposeful treatment of this condition. In Finite Element Analysis, traditional Lagrangian meshing methods fail to model the excessive deformation that occurs in the propagation of these cysts. This report explores manual adaptive remeshing as a method that allows the use of Lagrangian meshing while circumventing the severe mesh distortions typical of a Lagrangian mesh under large deformation. Manual adaptive remeshing is the process of remeshing a deformed meshed part and then reapplying loads in order to achieve a larger deformation than a single mesh can achieve without excessive distortion. The methods of manual adaptive remeshing described in this Master's Report are sufficient for modeling large deformations.
Abstract:
Transformers are very important elements of any power system. Unfortunately, they are subjected to through-faults and abnormal operating conditions which can affect not only the transformer itself but also other equipment connected to the transformer. Thus, it is essential to provide sufficient protection for transformers as well as the best possible selectivity and sensitivity of the protection. Nowadays, microprocessor-based relays are widely used to protect power equipment. Current differential and voltage protection strategies are used in transformer protection applications and provide fast and sensitive multi-level protection and monitoring. The elements responsible for detecting turn-to-turn and turn-to-ground faults are the negative-sequence percentage differential element and the restricted earth-fault (REF) element, respectively. During severe internal faults, current transformers can saturate and slow down the relay operation, which affects the degree of equipment damage. The scope of this work is to develop a modeling methodology to perform simulations and laboratory tests for internal faults such as turn-to-turn and turn-to-ground for two step-down power transformers with capacity ratings of 11.2 MVA and 290 MVA. The simulated current waveforms are injected into a microprocessor relay to check its sensitivity to these internal faults. Saturation of current transformers is also studied in this work. All simulations are performed with the Alternative Transients Program (ATP) utilizing the internal fault model for three-phase two-winding transformers. The tested microprocessor relay is the SEL-487E current differential and voltage protection relay. The results showed that the ATP internal fault model can be used for testing microprocessor relays for any percentage of turns involved in an internal fault.
An interesting observation from the experiments was that the SEL-487E relay is more sensitive to turn-to-turn faults than advertised for the transformers studied. The sensitivity of the restricted earth-fault element was confirmed. CT saturation cases showed that low-accuracy CTs can be saturated by a high percentage of turn-to-turn faults, where the CT burden affects the extent of saturation. Recommendations for future work include more accurate simulation of internal faults, transformer energization inrush, and other scenarios involving core saturation, using the newest version of the internal fault model. The SEL-487E relay or other microprocessor relays should again be tested for performance. Also, application of a grounding bank to the delta-connected side of a transformer will increase the zone of protection, and relay performance can then be tested for internal ground faults on both sides of a transformer.
Abstract:
Due to their high thermal efficiency, diesel engines have excellent fuel economy and have been widely used as a power source for many vehicles. Diesel engines emit less greenhouse gas (carbon dioxide) than gasoline engines. However, diesel engines emit large amounts of particulate matter (PM), which can imperil human health. The best way to reduce the particulate matter is the Diesel Particulate Filter (DPF) system, which consists of a wall-flow monolith that traps particulates; the DPF can be periodically regenerated to remove the collected particulates. Estimating the PM mass accumulated in the DPF and the total pressure drop across the filter is very important in order to determine when to carry out active regeneration of the DPF. In this project, a filtration model and a pressure drop model were developed to estimate the PM mass and the total pressure drop; these two models can then be linked with a previously developed regeneration model to predict when to regenerate the filter. The results of this project were:

1. Reproduce a filtration model and simulate the processes of filtration. By studying deep-bed filtration and cake filtration, the stages and quantity of mass accumulated in the DPF can be estimated. It was found that the filtration efficiency increases faster during deep-bed filtration than during cake filtration. A "unit collector" theory was used in the filtration model, which explains the mechanism of filtration very well.

2. Perform a parametric study on the pressure drop model for changes in engine exhaust flow rate, deposit layer thickness, and inlet temperature. It was found that five primary variables impact the pressure drop in the DPF: temperature gradient along the channel, deposit layer thickness, deposit layer permeability, wall thickness, and wall permeability.

3. Link the filtration model and the pressure drop model with the regeneration model to determine the time to carry out regeneration of the DPF. It was found that regeneration should be initiated when the cake layer is at a certain thickness, since a cake layer with either too much or too little particulate matter will need more thermal energy to reach a higher regeneration efficiency.

4. Formulate diesel particulate trap regeneration strategies for real-world driving conditions to find the most desirable conditions for DPF regeneration. It was found that regeneration should be initiated when the vehicle's speed is high and during a period without any stops. Moreover, the regeneration duration is about 120 seconds and the inlet temperature for regeneration is 710 K.
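Two of the dominant pressure-drop variables identified above (deposit layer thickness/permeability and wall thickness/permeability) can be illustrated with a simplified Darcy-law sketch. This neglects channel friction and axial temperature gradients, and every value below is illustrative rather than taken from the project:

```python
def dpf_wall_pressure_drop(mu, u_wall, w_wall, k_wall, w_cake, k_cake):
    """Darcy-law pressure drop across the soot cake and porous wall:
    dp = mu * u * (w_cake/k_cake + w_wall/k_wall).
    A simplified sketch covering only the wall and deposit-layer terms;
    channel friction and temperature gradients are neglected."""
    return mu * u_wall * (w_cake / k_cake + w_wall / k_wall)

# Illustrative values: exhaust viscosity ~3e-5 Pa.s near 600 K,
# through-wall velocity 0.03 m/s, 40-micron cake, 400-micron wall.
dp = dpf_wall_pressure_drop(
    mu=3.0e-5, u_wall=0.03,
    w_wall=400e-6, k_wall=1.0e-12,
    w_cake=40e-6, k_cake=1.0e-14,
)
print(f"Wall + cake pressure drop: {dp:.0f} Pa")
```

Because the cake permeability is orders of magnitude lower than the wall permeability, cake thickness dominates the pressure drop, which is why a cake-thickness threshold is a natural regeneration trigger.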
Abstract:
The intraneural ganglion cyst is a 200-year-old mystery related to nerve injury which is yet to be solved. Current treatments for this problem are relatively simple procedures that remove the cystic contents from the nerve. However, these treatments may result in neuropathic pain and recurrence of the cyst. The articular theory proposed by Spinner et al. (Spinner et al. 2003) considers the neurological deficit in the Common Peroneal Nerve (CPN) branch of the sciatic nerve and affirms that, in addition to the above treatments, ligation of the articular branch results in reliable eradication of the deficit. Mechanical modeling of the affected nerve cross section will reinforce the articular theory (Spinner et al. 2003). As the cyst propagates, it compresses the neighboring fascicles and the nerve cross section appears like a signet ring. Hence, in order to mechanically model the affected nerve cross section, computational methods capable of modeling excessively large deformations are required. Traditional FEM produces distorted elements while modeling such deformations, resulting in inaccuracies and premature termination of the analysis. The methods described in this Master's Thesis are effective enough to simulate such deformations. The results obtained from the model adequately resemble the MRI image obtained at the same location and show the appearance of a signet ring. This Master's Thesis describes the neurological deficit in brief, followed by a detailed explanation of the advanced computational methods used to simulate this problem. Finally, qualitative results show the resemblance of the mechanical model to MRI images of the nerve cross section at the same location, validating the capability of these methods to study this neurological deficit.
Abstract:
Prediction of radiated fields from transmission lines has not previously been studied from a panoptical power system perspective. The application of BPL technologies to overhead transmission lines would benefit greatly from an ability to simulate real power system environments, not limited to the transmission lines themselves. Presently, circuit-based transmission line models used by EMTP-type programs utilize Carson's formula for a waveguide parallel to an interface. This formula is not valid for calculations at high frequencies, considering effects of earth return currents. This thesis explains the challenges of developing such improved models, explores an approach to combining circuit-based and electromagnetics modeling to predict radiated fields from transmission lines, exposes inadequacies of simulation tools, and suggests methods of extending the validity of transmission line models into very high frequency ranges. Electromagnetics programs are commonly used to study radiated fields from transmission lines. However, an approach is proposed here which is also able to incorporate the components of a power system through the combined use of EMTP-type models. Carson's formulas address the series impedance of electrical conductors above and parallel to the earth. These equations have been analyzed to show their inherent assumptions and what the implications are. Additionally, the lack of validity into higher frequencies has been demonstrated, showing the need to replace Carson's formulas for these types of studies. This body of work leads to several conclusions about the relatively new study of BPL. Foremost, there is a gap in modeling capabilities which has been bridged through integration of circuit-based and electromagnetics modeling, allowing more realistic prediction of BPL performance and radiated fields. The proposed approach is limited in its scope of validity due to the formulas used by EMTP-type software.
To extend the range of validity, a new set of equations must be identified and implemented in the approach. Several potential methods of implementation have been explored. Though an appropriate set of equations has not yet been identified, further research in this area will benefit from a clear depiction of the next important steps and how they can be accomplished.
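For context, the low-frequency simplification of Carson's formula that EMTP-type tools rely on can be written compactly. The sketch below uses the commonly quoted approximate constants (equivalent earth-return depth De = 658.368*sqrt(rho/f) and an added earth-return resistance of 9.869e-4*f ohm/km) with illustrative conductor data; it is valid only near power frequency, which is precisely the limitation discussed above:

```python
import math

def carson_self_impedance(r_c, gmr, f, rho):
    """Simplified low-frequency Carson self-impedance with earth
    return, in ohm/km. r_c: conductor AC resistance (ohm/km),
    gmr: geometric mean radius (m), f: frequency (Hz),
    rho: earth resistivity (ohm.m). Approximate constants only;
    not valid at the high frequencies relevant to BPL."""
    de = 658.368 * math.sqrt(rho / f)                 # earth-return depth, m
    r = r_c + 9.869e-4 * f                            # ohm/km
    x = 4.0 * math.pi * f * 1e-4 * math.log(de / gmr) # ohm/km
    return complex(r, x)

# Illustrative conductor: 0.115 ohm/km, GMR 7.8 mm, 100 ohm.m earth.
z60 = carson_self_impedance(r_c=0.115, gmr=0.0078, f=60.0, rho=100.0)
print(f"Z at 60 Hz: {z60.real:.4f} + j{z60.imag:.4f} ohm/km")
```

The frequency-dependent correction terms dropped in this truncation are exactly what break down at BPL frequencies, motivating the search for replacement equations described above.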