14 results for Process-based model
in Digital Commons - Michigan Tech
Abstract:
This work presents a 1-D process-scale model used to investigate the chemical dynamics and temporal variability of nitrogen oxides (NOx) and ozone (O3) within and above the snowpack at Summit, Greenland for March-May 2009, and estimates the surface exchange of NOx between the snowpack and the surface layer in April-May 2009. The model assumes that the surfaces of snowflakes have a Liquid-Like Layer (LLL) where aqueous chemistry occurs and interacts with the interstitial air of the snowpack. Model parameters and initialization are physically and chemically representative of the snowpack at Summit, Greenland, and model results are compared to measurements of NOx and O3 collected by our group at Summit, Greenland from 2008-2010. The model paired with measurements confirmed the main hypothesis in the literature: that photolysis of nitrate on the surface of snowflakes is responsible for nitrogen dioxide (NO2) production in the top ~50 cm of the snowpack at solar noon for March-May 2009. Nighttime peaks of NO2 in the snowpack for April and May were reproduced by aqueous formation of peroxynitric acid (HNO4) in the top ~50 cm of the snowpack, with subsequent mass transfer to the gas phase, decomposition to form NO2 at night, and transport of the NO2 to depths of 2 meters. Modeled production of HNO4 was hindered in March 2009 by the low production of its precursor, the hydroperoxy radical, resulting in underestimation of nighttime NO2 in the snowpack for March 2009. The aqueous reaction of O3 with formic acid was the major sink of O3 in the snowpack for March-May 2009. Nitrogen monoxide (NO) production in the top ~50 cm of the snowpack is tied to the photolysis of NO2; the model underestimates NO in May 2009. Modeled surface exchanges of NOx in April and May are on the order of 10¹¹ molecules m⁻² s⁻¹. Removing downward fluxes of NO and NO2 from the measured fluxes resulted in agreement between measured NOx fluxes and modeled surface exchange in April, and an order-of-magnitude deviation in May. Modeled transport of NOx above the snowpack in May shows an order-of-magnitude increase of NOx fluxes in the first 50 cm of the snowpack, attributed to daytime production of NO2 from the thermal decomposition and photolysis of peroxynitric acid, with minor contributions of NO from HONO photolysis in the early morning.
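The abstract does not reproduce the model equations. As a rough illustration of the 1-D framework it describes, the sketch below integrates a single gas-phase species (NO2) in snowpack interstitial air with Fickian diffusion, a depth-decaying photolytic source, and an assumed first-order loss. All parameter values, the exponential attenuation of actinic flux, and the loss term are illustrative assumptions, not the dissertation's values.

```python
import numpy as np

# Illustrative 1-D diffusion + photolysis sketch for interstitial NO2.
# All parameters are assumed placeholders, not those of the model.
nz, dz = 40, 0.05           # 40 layers of 5 cm -> 2 m snowpack
dt = 1.0                    # time step (s)
D = 1e-5                    # effective diffusivity in firn air (m^2/s), assumed
j0 = 1e6                    # surface photolytic NO2 source (molec cm^-3 s^-1), assumed
k_loss = 1e-3               # lumped first-order NO2 loss (1/s), assumed
z = (np.arange(nz) + 0.5) * dz
source = j0 * np.exp(-z / 0.1)   # actinic flux assumed to decay with 10 cm e-folding

c = np.zeros(nz)            # NO2 concentration (molec cm^-3)
for step in range(3600):    # integrate one hour
    lap = np.zeros(nz)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dz**2
    lap[0] = (c[1] - c[0]) / dz**2        # zero-gradient boundaries for simplicity
    lap[-1] = (c[-2] - c[-1]) / dz**2
    c += dt * (D * lap + source - k_loss * c)

print(f"peak NO2 at {z[np.argmax(c)]:.2f} m depth")
```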
Abstract:
Eutrophication is a persistent problem in many freshwater lakes. Delay in lake recovery following reductions in external loading of phosphorus, the limiting nutrient in freshwater ecosystems, is often observed. Models have been created to assist with lake remediation efforts; however, the application of management tools to sediment diagenesis is often neglected because of conceptual and mathematical complexity. SED2K (Chapra et al. 2012) is proposed as a "middle way", offering engineering rigor while remaining accessible to users. An objective of this research is to further support the development and application of SED2K for sediment phosphorus diagenesis and release to the water column of Onondaga Lake. SED2K has previously been applied to eutrophic Lake Alice in Minnesota. The more homogeneous sediment characteristics of Lake Alice, compared with the industrially polluted sediment layers of Onondaga Lake, allowed an invariant rate coefficient to be applied to describe first-order decay kinetics of phosphorus. When a similar approach was attempted on Onondaga Lake, an invariant rate coefficient failed to simulate the sediment phosphorus profile. Therefore, labile P was accounted for by progressive preservation after burial, and a rate coefficient that gradually decreased with depth was applied. In this study, profile sediment samples were chemically extracted into five operationally defined fractions: CaCO3-P, Fe/Al-P, Biogenic-P, Ca Mineral-P, and Residual-P. Chemical fractionation data from this study showed that preservation is not the only mechanism by which phosphorus may be maintained in a non-reactive state in the profile; sorption was shown to contribute substantially to P burial. A new kinetic approach involving partitioning of P into process-based fractions is applied here. Results from this approach indicate that labile P (Ca Mineral-P and Organic P) is contributing to internal P loading to Onondaga Lake through diagenesis and diffusion to the water column, while the sorbed P fraction (Fe/Al-P and CaCO3-P) remains constant. Sediment profile concentrations of labile and total phosphorus at the time of deposition were also modeled and compared with current labile and total phosphorus to quantify the remaining phosphorus that will continue to contribute to internal P loading and influence the trophic status of Onondaga Lake. The results also allowed estimation of the depth of the active sediment layer and the attendant response time, as well as the sediment burden of labile P and the associated efflux.
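SED2K's governing kinetics are not reproduced in the abstract. As a minimal sketch of the modification described, the snippet below contrasts an invariant first-order decay coefficient with one that declines with depth (here an assumed exponential form) when tracing labile P along its burial path. The burial velocity, rate coefficients, and k(z) form are assumptions for illustration.

```python
import numpy as np

# Sketch: labile-P decay during burial, invariant vs. depth-declining
# first-order rate coefficient. All values and the exponential k(z)
# form are assumed, not SED2K's calibrated parameters.
v = 0.005                      # burial (sedimentation) velocity, m/yr (assumed)
z = np.linspace(0, 0.5, 101)   # depth below sediment-water interface (m)
age = z / v                    # approximate age of each layer (yr)

k_const = 0.05                                      # 1/yr, invariant coefficient
k_depth = 0.005 + (0.05 - 0.005) * np.exp(-z / 0.1) # 1/yr, declines with depth

P0 = 1.0                                   # labile P at deposition (normalized)
P_const = P0 * np.exp(-k_const * age)
# with k varying along the burial path, integrate k over layer age:
P_var = P0 * np.exp(-np.cumsum(k_depth * np.gradient(age)))

# the depth-declining coefficient preserves more labile P at depth,
# mimicking progressive preservation after burial
print(P_const[-1], P_var[-1])
```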
Abstract:
Light-frame wood buildings are widely built in the United States (U.S.). Natural hazards cause huge losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment so that the influence of uncertainty on the collapse risk of light-frame wood construction can be evaluated. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant, causes huge economic losses, and threatens life safety. Limited study has been performed on snow hazard in combination with seismic hazard. A Filtered Poisson Process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing simulation results to weather records obtained from the National Climatic Data Center. The FPP model is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. Snow accumulation has a significant influence on the seismic losses of the building, and the Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction. For homeowners and stakeholders, risk expressed as economic loss is much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. One is to assess the loss of the building subjected to mainshock-aftershock sequences; aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The framework is also applied to a wood building in the state of Washington to assess the loss of the building subjected to combined earthquake and snow loads. The proposed framework proves to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
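The abstract describes the FPP snow model only qualitatively. A minimal sketch of a filtered Poisson process for ground snow load is given below: storms arrive as a Poisson process, each deposits a random load, and deposits decay (melt) after arrival. The arrival rate, load distribution, and exponential melt kernel are illustrative assumptions, not the dissertation's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Filtered Poisson process sketch for seasonal ground snow load:
# S(t) = sum_i w_i * h(t - t_i), with exponential decay kernel h.
# Rate, magnitudes, and melt rate are assumed placeholder values.
lam = 0.2            # storm arrival rate (1/day) during the snow season
season = 150.0       # season length (days)
mean_load = 0.3      # mean load per storm (kPa), exponential magnitudes
melt = 0.05          # melt decay rate (1/day)

n = rng.poisson(lam * season)
t_i = np.sort(rng.uniform(0, season, n))       # storm arrival times
w_i = rng.exponential(mean_load, n)            # storm load magnitudes

t = np.linspace(0, season, 1500)
S = np.where(t[:, None] >= t_i,
             w_i * np.exp(-melt * (t[:, None] - t_i)),
             0.0).sum(axis=1)
print(f"seasonal peak snow load: {S.max():.2f} kPa")
```

Repeating the simulation over many seasons yields the distribution of peak snow load, which can then be paired with a seismic hazard model for the combined-load risk assessment.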
Abstract:
Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand, and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for bias induced by the measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models were fitted for each species: one ignoring the measurement error (the “naïve” approach) and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination when the competition variable was statistically significant. The effect of RC was more obvious where the measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that past emphasis on DBH as a predictor variable for mortality, while producing models with strong metrics of fit, may make models less generalizable. Evaluation of the error variance estimator developed by Stage and Wykoff (1998), which is core to the implementation of RC, across different spatial patterns and diameter distributions revealed that the Stage and Wykoff estimate notably overestimated the true variance in all simulated stands except those that were clustered. Results show a systematic bias even when all the assumptions made by the authors hold. I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of the variance estimate, especially in the application phase, justify future efforts to improve its accuracy. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and the interval-censored nature of data collected from remeasured plots. The performance of the model is compared with the traditional logistic regression model as a tool to predict individual tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and live trees for all species. In conclusion, I showed that the proposed techniques do increase the accuracy of individual tree mortality models and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to undertake in order to advance mortality models further.
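As the abstract gives no formulas, the sketch below shows the core regression-calibration step in its simplest linear, single-covariate form: the error-prone competition covariate W is replaced by an estimate of E[X | W] before fitting the logistic mortality model. The simulated data, known error variance, and single covariate are assumptions for illustration, not the dissertation's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulated illustration (assumed, not the dissertation data):
# X = true competition index, W = X + U with known error variance.
n = 5000
X = rng.normal(10, 2, n)                     # true competition variable
W = X + rng.normal(0, 1.5, n)                # error-prone measurement
p = 1 / (1 + np.exp(-(-6 + 0.5 * X)))        # true mortality model
y = rng.binomial(1, p)

# Naive fit: use the error-prone W directly (slope is attenuated).
naive = LogisticRegression().fit(W.reshape(-1, 1), y)

# Regression calibration: replace W with E[X | W] using the known
# error variance (here sigma_u^2 = 1.5^2) before fitting.
sig_u2 = 1.5 ** 2
sig_x2 = W.var() - sig_u2                    # estimate var(X) from var(W)
x_hat = W.mean() + (sig_x2 / (sig_x2 + sig_u2)) * (W - W.mean())
rc = LogisticRegression().fit(x_hat.reshape(-1, 1), y)

print("naive slope:", naive.coef_[0][0], " RC slope:", rc.coef_[0][0])
```

The RC slope lands near the true value of 0.5, while the naive slope is shrunk by the attenuation factor sig_x2 / (sig_x2 + sig_u2).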
Abstract:
A novel solution to the long-standing problem of chip entanglement and breakage in metal cutting is presented in this dissertation. Through this work, an attempt is made to achieve universal chip control in machining by using chip guidance and subsequent breakage by backward bending (tensile loading of the chip's rough top surface) to effectively reduce long continuous chips to small segments. One major limitation of the chip breaker geometries on disposable carbide inserts is that their application range is restricted to a narrow band of cutting conditions. Even within a recommended operating range, chip breakers do not function as effectively as designed because of the inherent variations of the cutting process. Moreover, for a particular process, matching the chip breaker geometry with the right cutting conditions to achieve effective chip control is a highly iterative process. The existence of a large variety of proprietary chip breaker designs further exacerbates the problem of implementing a robust and comprehensive chip control technique. To address the need for a robust and universal chip control technique, a new method is proposed in this work. By using a single tool top form geometry coupled with a tooling system for inducing chip breaking by backward bending, the proposed method achieves comprehensive chip control over a wide range of cutting conditions. A geometry-based model is developed to predict a variable edge inclination angle that guides the chip flow to a predetermined target location. Chip kinematics for the new tool geometry is examined via photographic evidence from experimental cutting trials, using both qualitative and quantitative methods to characterize the chip kinematics. Results from the chip characterization studies indicate that the chip flow and final chip form are remarkably consistent across multiple levels of workpiece and tool configurations as well as cutting conditions. A new tooling system is then designed to comprehensively break the chip by backward bending. Test results with the new tooling system prove that, by utilizing the chip guidance and backward bending mechanism, long continuous chips can be consistently broken into smaller segments that are generally deemed acceptable or good chips. The proposed tool can be applied effectively over a wider range of cutting conditions than present chip breakers, thus possibly taking the first step towards universal chip control in machining.
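The geometry-based model itself is not given in the abstract. As a loose stand-in for the idea of relating edge inclination to chip flow direction, the sketch below applies Stabler's classical chip-flow rule (chip flow angle approximately proportional to the edge inclination angle) to solve for the inclination that points the chip at a target flow direction. This textbook relation is an assumption standing in for the dissertation's model, not the model itself.

```python
# Illustrative only: Stabler's rule (eta_c ~= k * lambda_s, chip flow
# angle proportional to edge inclination angle) stands in for the
# dissertation's geometry model, which the abstract does not give.
def inclination_for_target(target_flow_angle_deg: float,
                           stabler_constant: float = 1.0) -> float:
    """Edge inclination angle (deg) that directs chip flow toward the
    target chip-flow angle, assuming eta_c = k * lambda_s."""
    return target_flow_angle_deg / stabler_constant

for eta in (5.0, 10.0, 15.0):
    print(f"target chip-flow angle {eta:4.1f} deg -> "
          f"edge inclination {inclination_for_target(eta):4.1f} deg")
```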
Abstract:
Determination of combustion metrics for a diesel engine has the potential to provide feedback for closed-loop combustion phasing control to meet current and upcoming emission and fuel consumption regulations. This thesis focused on the estimation of combustion metrics including start of combustion (SOC), crank angle location of 50% cumulative heat release (CA50), peak pressure crank angle location (PPCL), peak pressure amplitude (PPA), peak apparent heat release rate crank angle location (PACL), mean absolute pressure error (MAPE), and peak apparent heat release rate amplitude (PAA). In-cylinder pressure has been used in the laboratory as the primary mechanism for characterizing combustion rates, and more recently in-cylinder pressure has been used in series production vehicles for feedback control. However, the intrusive in-cylinder pressure measurement is expensive and requires a special mounting process and engine structure modification. As an alternative, this work investigated block-mounted accelerometers for estimating combustion metrics in a 9L I6 diesel engine. The transfer path between the accelerometer signal and the in-cylinder pressure signal therefore needs to be modeled. Given the transfer path, the in-cylinder pressure signal and the combustion metrics can be estimated (recovered) from accelerometer signals. The method for determining the transfer path, and its applicability, is critical to utilizing accelerometers for feedback. The single-input single-output (SISO) frequency response function (FRF) is the most common transfer path model; however, it is shown here to have low robustness to varying engine operating conditions. This thesis examines mechanisms to improve the robustness of the FRF for combustion metrics estimation. First, an adaptation process based on the particle swarm optimization algorithm was developed and added to the single-input single-output model. Second, a multiple-input single-output (MISO) FRF model coupled with principal component analysis and an offset compensation process was investigated and applied. Both approaches improved the robustness of the FRF. Furthermore, a neural network was investigated as a nonlinear model of the transfer path between the accelerometer signal and the apparent heat release rate. The transfer path between acoustical emissions and the in-cylinder pressure signal was also investigated in this dissertation on a high pressure common rail (HPCR) 1.9L TDI diesel engine. Acoustical emissions are an important factor in the powertrain development process. In this part of the research, a transfer path was developed between the two signals and then used to predict the engine noise level with the measured in-cylinder pressure as the input. Three transfer path modeling methods were applied; the method based on the cepstral smoothing technique led to the most accurate results, with average estimation errors of 2 dBA and a root mean square error of 1.5 dBA. Finally, a linear model for engine noise level estimation was proposed with the in-cylinder pressure signal and the engine speed as components.
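The abstract names the SISO FRF as the baseline transfer-path model without giving its estimator. The common H1 estimator, H(f) = S_xy(f) / S_xx(f) from averaged cross- and auto-spectra, is sketched below on synthetic signals; a known digital filter plus noise stands in for the engine, and the sample rate and filter are assumed values.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)

# SISO H1 frequency response function estimate between an "input"
# (stand-in for cylinder pressure) and a noisy "output" (stand-in for
# the accelerometer). A known filter replaces the real transfer path.
fs = 20000                         # sample rate (Hz), assumed
x = rng.standard_normal(fs * 10)   # broadband stand-in input
b, a = signal.butter(4, 3000, fs=fs)               # "unknown" transfer path
y = signal.lfilter(b, a, x) + 0.1 * rng.standard_normal(x.size)

# H1 estimator: H(f) = S_xy(f) / S_xx(f), robust to output noise.
f, Sxy = signal.csd(x, y, fs=fs, nperseg=2048)
_, Sxx = signal.welch(x, fs=fs, nperseg=2048)
H1 = Sxy / Sxx

# Inverting H1 then maps measured output spectra back to input spectra,
# which is the recovery step the estimation scheme builds on.
print(f"|H1| at {f[100]:.0f} Hz: {abs(H1[100]):.3f}")
```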
Abstract:
A mass-balance model for Lake Superior was applied to polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), and mercury to determine the major routes of entry and the major mechanisms of loss from this ecosystem, as well as the time required for each contaminant class to approach steady state. A two-box model (water column, surface sediments) incorporating seasonally adjusted environmental parameters was used. Both numerical (forward Euler) and analytical solutions were employed and compared. For validation, the model was compared with current and historical concentrations and fluxes in the lake and sediments. Results for PCBs were similar to prior work showing that air-water exchange is the most rapid input and loss process. The model indicates that mercury behaves similarly to a moderately chlorinated PCB, with air-water exchange being a relatively rapid input and loss process. Modeled accumulation fluxes of PBDEs in sediments agreed with measured values reported in the literature. Wet deposition rates were about three times greater than dry particulate deposition rates for PBDEs. Gas deposition was an important process for tri- and tetra-BDEs (BDEs 28 and 47), but not for higher-brominated BDEs. Sediment burial was the dominant loss mechanism for most of the PBDE congeners, while volatilization was still significant for tri- and tetra-BDEs. Because volatilization is a relatively rapid loss process for both mercury and the most abundant PCBs (tri- through penta-), the model predicts that similar times (2 to 10 yr) are required for these compounds to approach steady state in the lake. The model predicts that if inputs of Hg(II) to the lake decrease in the future, then mercury concentrations in the lake will decrease at a rate similar to the historical decline in PCB concentrations following the ban on production and most uses in the U.S. In contrast, PBDEs are likely to respond more slowly if atmospheric concentrations are reduced in the future, because loss by volatilization is a much slower process for PBDEs, leading to lower overall loss rates in comparison to PCBs and mercury. Uncertainties in the chemical degradation rates and partitioning constants of PBDEs are the largest source of uncertainty in the modeled times to steady state for this class of chemicals. The modeled organic PBT loading rates are sensitive to uncertainties in scavenging efficiencies by rain and snow, dry deposition velocity, watershed runoff concentrations, and air-water exchange factors such as the effect of atmospheric stability.
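The sketch below shows the structure of a two-box (water column / surface sediment) mass balance advanced with the forward Euler scheme the abstract mentions, with loading, air-water exchange, settling, resuspension, and burial as first-order processes. All rate constants are placeholders, not the calibrated, seasonally adjusted Lake Superior values.

```python
# Two-box contaminant mass balance (water column w, surface sediment s),
# integrated with forward Euler. All rate constants are assumed
# placeholders, not the study's calibrated values.
L = 100.0        # external load to water column (kg/yr)
k_aw = 0.5       # net air-water exchange loss (1/yr)
k_set = 0.3      # settling, water -> sediment (1/yr)
k_res = 0.05     # resuspension, sediment -> water (1/yr)
k_bur = 0.02     # burial loss from surface sediment (1/yr)

dt, years = 0.01, 100.0
Mw, Ms = 0.0, 0.0                 # contaminant mass in each box (kg)
for _ in range(int(years / dt)):
    dMw = L - (k_aw + k_set) * Mw + k_res * Ms
    dMs = k_set * Mw - (k_res + k_bur) * Ms
    Mw += dt * dMw
    Ms += dt * dMs

print(f"near steady state: water {Mw:.1f} kg, sediment {Ms:.1f} kg")
```

The time to approach steady state falls out of the same integration: it is governed by the eigenvalues of the loss-rate matrix, so faster loss processes (e.g., volatilization for light PCBs) mean shorter response times.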
Abstract:
The effect of shot particles on the high-temperature, low-cycle fatigue of a hybrid fiber/particulate metal-matrix composite (MMC) was studied. Two hybrid composites with the general composition A356/35% SiC particle/5% fiber, one with shot and one without, were tested. It was found that shot particles, although acting as stress concentrators, had little effect on fatigue performance. Fibers with a high silica content appeared more likely to debond from the matrix. Final failure of the composite was found to occur preferentially in the matrix: SiC particles fracture progressively during fatigue testing, leading to higher stress in the matrix and final failure by matrix overload. A continuum mechanics based model was developed to predict fatigue failure from the tensile properties of the matrix and particles. By accounting for matrix yielding and recovery, composite creep, and the particle strength distribution, failure of the composite was predicted.
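The model's equations are not given in the abstract. The sketch below illustrates only the load-shedding idea it describes: particles fracture according to an assumed Weibull strength distribution whose scale degrades slowly with cycling, their load transfers to the matrix, and failure occurs when the matrix stress reaches the matrix strength. Every value and functional form here is an illustrative assumption, not the dissertation's model.

```python
import numpy as np

# Illustrative load-shedding sketch (not the dissertation's model):
# particles fail per an assumed Weibull strength distribution, broken
# particles shed load to the matrix, and the composite fails when the
# matrix stress reaches the matrix strength. All values are assumed.
sigma_applied = 200.0      # applied composite stress (MPa)
concentration = 2.0        # particle/matrix stress ratio (assumed)
weibull_m, weibull_s0 = 6.0, 900.0   # Weibull modulus and scale (MPa)
matrix_strength = 210.0    # MPa (assumed, after yielding/recovery)
vp = 0.35                  # particle volume fraction

frac_broken = 0.0
for cycle in range(1, 1_000_000):
    # matrix carries applied stress plus load shed by broken particles
    sigma_matrix = sigma_applied * (1 + vp * frac_broken)
    if sigma_matrix >= matrix_strength:
        print(f"matrix overload predicted at ~{cycle} cycles")
        break
    s0_eff = weibull_s0 * cycle ** -0.05          # assumed cyclic degradation
    sigma_particle = concentration * sigma_matrix
    frac_broken = 1 - np.exp(-(sigma_particle / s0_eff) ** weibull_m)
else:
    print("no failure within 1e6 cycles")
```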
Abstract:
Riparian zones are dynamic, transitional ecosystems between aquatic and terrestrial ecosystems with well-defined vegetation and soil characteristics. Developing an all-encompassing definition for riparian ecotones is challenging because of their high variability. However, all riparian ecotones depend on two primary factors: the watercourse and its associated floodplain. Previous approaches to riparian boundary delineation have utilized fixed-width buffers, but this methodology has proven inadequate, as it only takes the watercourse into consideration and ignores critical geomorphology and the associated vegetation and soil characteristics. Our approach offers advantages over previously used methods by utilizing: the geospatial modeling capabilities of ArcMap GIS; a better sampling technique along the watercourse that can distinguish the 50-year floodplain, the optimal hydrologic descriptor of riparian ecotones; the Soil Survey Geographic (SSURGO) and National Wetland Inventory (NWI) databases to distinguish contiguous areas beyond the 50-year floodplain; and land use/cover characteristics associated with the delineated riparian zones. The model utilizes spatial data readily available from federal and state agencies and geospatial clearinghouses. An accuracy assessment was performed to assess the impact on the boundary placement of the delineated variable-width riparian ecotones of varying the 50-year flood height, changing the DEM spatial resolution (1, 3, 5, and 10 m), and positional inaccuracies in the National Hydrography Dataset (NHD) streams layer. The result of this study is a robust, automated GIS-based model, attached to ESRI ArcMap software, that delineates and classifies variable-width riparian ecotones.
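The ArcMap toolchain itself cannot be reproduced here, but the core flood-height test can be sketched on a toy DEM grid: cells whose elevation lies within the 50-year flood height of the stream, and that are contiguous with the stream channel, are flagged as riparian. The DEM values, stream mask, and flood height below are assumptions, and scipy.ndimage.label supplies the contiguity test.

```python
import numpy as np
from scipy import ndimage

# Toy sketch of variable-width riparian delineation on a DEM: flag
# cells within the 50-year flood height above the channel elevation
# that are contiguous with the stream. All values are assumed.
dem = np.array([[5.0, 3.0, 1.0, 3.0, 6.0],
                [5.0, 2.5, 1.0, 2.0, 6.0],
                [4.0, 2.0, 1.0, 2.5, 5.0],
                [4.0, 3.5, 1.0, 3.0, 5.0]])
stream = dem == 1.0                  # stream cells (channel elevation 1.0 m)
flood_height = 1.5                   # 50-year flood stage above channel (m)

inundable = dem <= (1.0 + flood_height)
labels, _ = ndimage.label(inundable)            # 4-connected regions
riparian = np.isin(labels, np.unique(labels[stream]))
print(riparian.astype(int))
```

Varying flood_height and the grid resolution in such a scheme is exactly the kind of sensitivity the study's accuracy assessment probes.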
Abstract:
Heterogeneous materials are ubiquitous in nature and as synthetic materials. These materials provide unique combinations of desirable mechanical properties emerging from heterogeneities at different length scales. Future structural and technological applications will require the development of advanced lightweight materials with superior strength and toughness. Cost-effective design of advanced high-performance synthetic materials by tailoring their microstructure is the challenge facing the materials design community, and prior knowledge of structure-property relationships for these materials is imperative for optimal design. Thus, understanding such relationships for heterogeneous materials is of primary interest. Furthermore, computational burden is becoming a critical concern in several areas of heterogeneous materials design, so computationally efficient and accurate predictive tools are essential. In the present study, we mainly focus on the mechanical behavior of soft cellular materials and a tough biological material, the mussel byssus thread. Cellular materials exhibit microstructural heterogeneity as an interconnected network of a single material phase, whereas the mussel byssus thread comprises two distinct material phases. A robust numerical framework is developed to investigate the micromechanisms behind the macroscopic response of both of these materials. Using this framework, the effect of microstructural parameters on the stress state of cellular specimens during the split Hopkinson pressure bar test is addressed. A Voronoi tessellation based algorithm has been developed to simulate the cellular microstructure. The micromechanisms (microinertia, microbuckling, and microbending) governing the macroscopic behavior of cellular solids are investigated thoroughly with respect to various microstructural and loading parameters. To understand the origin of the high toughness of the mussel byssus thread, a Genetic Algorithm (GA) based optimization framework has been developed; it shows that the thread's two material phases (collagens) are optimally distributed along its length. These applications demonstrate that the presence of heterogeneity in a system demands substantial computational resources for simulation and modeling. Thus, a Higher Dimensional Model Representation (HDMR) based surrogate modeling concept has been proposed to reduce computational complexity, and its applicability has been demonstrated in failure envelope construction and in multiscale finite element techniques. It is observed that surrogate-based models can capture the behavior of complex material systems with sufficient accuracy. The computational algorithms presented in this thesis pave the way for accurate prediction of the macroscopic deformation behavior of various classes of advanced materials from their measurable microstructural features at a reasonable computational cost.
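As a minimal sketch of the Voronoi-based microstructure generation the abstract mentions, the snippet below seeds random points in a unit square and builds the tessellation with scipy.spatial.Voronoi; the finite ridge segments would then be meshed as cell walls of the solid phase. The seed count and unit-square domain are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(3)

# Sketch: random Voronoi tessellation as the skeleton of a cellular
# microstructure; ridge segments become candidate cell walls.
# Seed count and the unit-square domain are assumed for illustration.
seeds = rng.uniform(0.0, 1.0, size=(50, 2))
vor = Voronoi(seeds)

# finite ridges (vertex-index pairs without -1) define cell walls
walls = [vor.vertices[pair] for pair in vor.ridge_vertices if -1 not in pair]
print(f"{len(walls)} finite cell walls from {len(seeds)} seeds")
```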
Abstract:
Time-averaged discharge rates (TADR) were calculated for five lava flows at Pacaya Volcano (Guatemala) using an adapted version of a previously developed satellite-based model. Imagery acquired during periods of effusive activity between 2000 and 2010 was obtained from two sensors of differing temporal and spatial resolutions: the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Geostationary Operational Environmental Satellites (GOES) Imager. A total of 2873 MODIS and 2642 GOES images were searched manually for volcanic “hot spots”. MODIS imagery, with its superior spatial resolution, produced better results than GOES imagery, so only MODIS data were used for quantitative analyses. Spectral radiances were transformed into TADR via two methods: first, by best-fitting some of the parameters of the TADR estimation model (i.e., density, vesicularity, crystal content, temperature change) to match flow volumes previously estimated from ground surveys and aerial photographs; and second, by measuring those parameters from lava samples to make independent estimates. A relatively stable relationship was defined using the second method, which suggests the possibility of estimating lava discharge rates in near-real time during future volcanic crises at Pacaya.
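The parameters listed (density, vesicularity, crystal content, temperature change) match the widely used radiant-flux-to-TADR conversion of the Harris-style family, TADR = Φ_rad / (ρ (c_p ΔT + φ c_L)). The sketch below evaluates that relation with generic basalt values, which are assumptions, not the best-fit or sample-measured Pacaya values reported in the study.

```python
# Sketch of a standard satellite TADR conversion (Harris-style):
# TADR = Phi_rad / (rho_bulk * (cp * dT + phi * cL)).
# All values are generic basalt assumptions, not the study's.
phi_rad = 2.0e9      # radiant heat flux from MODIS hot-spot pixels (W)
rho = 2600.0         # dense-rock lava density (kg/m^3)
vesicularity = 0.2   # reduces bulk density
cp = 1150.0          # specific heat capacity (J/kg/K)
dT = 250.0           # cooling range from eruption to solidus (K)
phi_xtal = 0.45      # post-eruption crystallization fraction
cL = 3.5e5           # latent heat of crystallization (J/kg)

rho_bulk = rho * (1.0 - vesicularity)
tadr = phi_rad / (rho_bulk * (cp * dT + phi_xtal * cL))
print(f"TADR ~ {tadr:.2f} m^3/s")
```

With such a relation fixed by sample-measured parameters, each new MODIS radiance observation maps directly to a discharge-rate estimate, which is what makes near-real-time monitoring feasible.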
Abstract:
Large power transformers, an aging and vulnerable part of our energy infrastructure, sit at choke points in the grid and are key to its reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead times of 12 months. Transient overvoltages can cause great damage, and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients, and component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple; however, due to their nonlinear and frequency-dependent behaviors, they can be among the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Transformer modeling is thus not a mature field, and newer, improved models must be made available. In this work, improved topologically correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in implementing detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where the available information is incomplete. The transformer nameplate data are required, and relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including core hysteresis, the λ-i saturation characteristic, capacitive effects, and the frequency dependency of winding resistance and core loss. Steady-state excitation and de-energization and re-energization transients are simulated and compared with an earlier-developed BCTRAN-based model. Black-start energization cases are also simulated as a means of model evaluation and compared with actual event records. The simulated results using the model developed here are reasonable and more correct than those of the BCTRAN-based model. Simulation accuracy depends on the accuracy of the equipment model and its parameters. This work is significant in that it advances existing parameter estimation methods in cases where the available data and measurements are incomplete, enhancing the accuracy of EMTP simulation for power systems that include three-phase autotransformers. The theoretical results obtained in this work provide a sound foundation for developing transformer parameter estimation methods using engineering optimization. In addition, it should be possible to refine which information and measurement data are necessary for complete duality-based transformer models. To further refine and develop the models and parameter estimation methods developed here, iterative full-scale laboratory tests using high-voltage, high-power three-phase transformers would be helpful.
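The full duality-derived model is too involved to sketch from the abstract alone, but one of its listed ingredients, the λ-i saturation characteristic of a core segment, can be illustrated. The arctangent-plus-linear form and the coefficients below are assumptions chosen only to show the shape (a knee followed by the air-core slope), not fitted parameters from this work.

```python
import numpy as np

# Sketch of a lambda-i core saturation characteristic of the kind used
# per core segment in duality-based transformer models. The functional
# form (arctangent knee plus air-core slope) and all coefficients are
# illustrative assumptions, not fitted values from the dissertation.
def flux_linkage(i_mag: np.ndarray) -> np.ndarray:
    lam_sat = 1.6      # knee flux linkage (Wb-turns), assumed
    i_knee = 2.0       # magnetizing current at the knee (A), assumed
    L_air = 0.01       # air-core (fully saturated) inductance (H), assumed
    return lam_sat * (2 / np.pi) * np.arctan(i_mag / i_knee) + L_air * i_mag

i = np.linspace(0, 50, 6)
for ii, ll in zip(i, flux_linkage(i)):
    print(f"i = {ii:5.1f} A -> lambda = {ll:5.2f} Wb-t")
```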
Abstract:
Highway infrastructure plays a significant role in society. The building and upkeep of America's highways provide the necessary means of transportation for the goods and services needed to develop as a nation. However, as a result of economic and social development, vast amounts of greenhouse gas (GHG) emissions are emitted into the atmosphere, contributing to global climate change. Recognizing this, future policies may mandate the monitoring of GHG emissions from public agencies and private industries in order to reduce the effects of global climate change. To effectively reduce these emissions, agencies need methods to quantify the GHG emissions associated with constructing and maintaining the nation's highway infrastructure. Current methods for assessing the impacts of highway infrastructure include methodologies that examine the economic impacts (costs) of constructing and maintaining highway infrastructure over its life cycle, known as Life Cycle Cost Analysis (LCCA). With the recognition of global climate change, transportation agencies and contractors are also investigating the environmental impacts associated with highway infrastructure construction and rehabilitation. A common tool for doing so is Life Cycle Assessment (LCA). Traditionally, LCA is used to assess the environmental impacts of products or processes; it is an emerging concept in highway infrastructure assessment and is now being implemented and applied to transportation systems. This research focuses on life-cycle GHG emissions associated with the construction and rehabilitation of highway infrastructure using an LCA approach. Life-cycle phases of the highway section include the material acquisition and extraction, construction and rehabilitation, and service phases. Departing from traditional approaches that tend to use LCA to compare alternative pavement materials or designs based on estimated inventories, this research proposes a shift to a context-sensitive, process-based approach that uses actual observed construction and performance data to calculate the greenhouse gas emissions associated with highway construction and rehabilitation. The goal is to support strategies that reduce long-term environmental impacts. Ultimately, this thesis outlines techniques that can be used to assess the GHG emissions associated with construction and rehabilitation operations to support an overall pavement LCA.
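At its core, a process-based calculation of this kind multiplies observed activity quantities by emission factors and sums over life-cycle phases. The sketch below shows that bookkeeping; the activity quantities and emission factors are assumed for illustration and are not values from the research.

```python
# Sketch of a process-based GHG tally: observed activity quantities
# times emission factors, summed per life-cycle phase. Quantities and
# factors below are assumed for illustration, not from the thesis.
emission_factors = {        # kg CO2e per unit (assumed)
    "asphalt_mix_t": 55.0,
    "diesel_L": 2.68,
    "aggregate_t": 4.0,
}
phases = {                  # observed activity data per phase (assumed)
    "materials":    {"asphalt_mix_t": 12000, "aggregate_t": 30000},
    "construction": {"diesel_L": 85000},
    "rehab":        {"asphalt_mix_t": 4000, "diesel_L": 30000},
}

for phase, activities in phases.items():
    ghg = sum(qty * emission_factors[k] for k, qty in activities.items())
    print(f"{phase:12s}: {ghg / 1000:8.1f} t CO2e")
```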
Abstract:
This document demonstrates the methodology used to create an energy- and conductance-based model for power electronic converters. The work is intended to replace voltage- and current-based models, which have limited applicability to the network nodal equations. Conductance-based modeling allows direct application of load differential equations to the bus admittance matrix (Y-bus) with a unified approach. When applied directly to the Y-bus, the system becomes much easier to simulate since the state variables do not need to be transformed. The proposed transformation applies to loads, sources, and energy storage systems and is useful for DC microgrids. Transformed state models of a complete microgrid are compared to experimental results and show that the models accurately reflect the system's dynamic behavior.
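As a minimal sketch of the nodal formulation this work builds on, the snippet below assembles a small DC-network bus admittance (conductance) matrix from branch conductances and solves G V = I for the node voltages; once converter loads and sources are expressed as conductances plus injection terms, they enter this same matrix directly. The 3-bus network and all values are an assumed example, not the document's test microgrid.

```python
import numpy as np

# Minimal nodal-analysis sketch: assemble the bus admittance
# (conductance) matrix for a 3-bus DC network and solve G V = I.
# Conductance-based converter models contribute terms to G and I in
# the same way. Network values are assumed for illustration.
branches = [(0, 1, 10.0), (1, 2, 5.0), (0, 2, 2.0)]  # (bus_i, bus_j, G in S)
n = 3
G = np.zeros((n, n))
for i, j, g in branches:
    G[i, i] += g
    G[j, j] += g
    G[i, j] -= g
    G[j, i] -= g
G += np.diag([0.1, 0.0, 0.5])        # shunt conductances (source/load models)

I = np.array([10.0, 0.0, -2.0])      # injected currents at each bus (A)
V = np.linalg.solve(G, I)
print("bus voltages (V):", np.round(V, 3))
```

In a dynamic simulation, the converter state equations update their equivalent conductance and injection each step, so the network solve above is repeated without transforming the state variables.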