4 results for Scale models.
in DRUM (Digital Repository at the University of Maryland)
Abstract:
This dissertation presents new constraints on isotope fractionation factors in inorganic aqueous sulfur systems, based on theoretical and experimental techniques relevant to studies of the sulfur cycle in modern environments and the geologic rock record. These include theoretical estimates of equilibrium isotope fractionation factors, obtained with quantum mechanical software and a water-cluster model approach, for aqueous sulfur compounds that span the entire range of sulfur oxidation states. These theoretical calculations generally reproduce the available experimental determinations from the literature and provide new constraints where none are available. The calculations also illustrate in detail the relationship between the sulfur bonding environment and the mass dependence associated with equilibrium isotope exchange reactions involving all four isotopes of sulfur. I additionally highlight the effect of isomers of protonated compounds (compounds with the same chemical formula but different structures, in which protons are bound to either sulfur or oxygen atoms) on isotope partitioning in the sulfite (S4+) and sulfoxylate (S2+) systems, both of which are key intermediates in oxidation-reduction processes in the sulfur cycle. I demonstrate that isomers with the highest degree of coordination around sulfur (where protonation occurs on the sulfur atom) have a strong influence on isotopic fractionation factors, and argue that these isomerization phenomena should be considered in models of the sulfur cycle. Additionally, experimental results on the reaction rates and isotope fractionations associated with the chemical oxidation of aqueous sulfide are presented. Sulfide oxidation is a major process in the global sulfur cycle, due largely to the sulfide-producing activity of anaerobic microorganisms in organic-rich marine sediments. These experiments reveal relationships between isotope fractionations and reaction rate as a function of both temperature and trace-metal (ferrous iron) catalysis, which I interpret in the context of the complex mechanism of sulfide oxidation. I also demonstrate that sulfide oxidation exhibits a mass dependence that does not conform to the mass dependence typically associated with equilibrium isotope exchange. This observation has implications for the inclusion of oxidative processes in environmental- and global-scale models of the sulfur cycle based on the mass balance of all four isotopes of sulfur. The contents of this dissertation provide key reference information on isotope fractionation factors in aqueous sulfur systems, with far-reaching applicability to studies of the sulfur cycle in a wide variety of natural settings.
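As background for the quadruple-isotope mass-balance framing above, the sketch below is not taken from the dissertation; the isotope masses are standard values and the delta values are illustrative. It computes the canonical high-temperature equilibrium exponents relating the three sulfur isotope ratios and the capital-delta quantity commonly used to measure departures from that reference mass dependence.

```python
# Illustrative sketch: canonical mass-dependent exponents for the quadruple
# sulfur isotope system and the Delta'33S notation used to quantify departures
# from a reference mass-dependent relation. Values are standard/illustrative,
# not results from the dissertation.

# Atomic masses of the sulfur isotopes (u)
M32, M33, M34, M36 = 31.97207, 32.97146, 33.96787, 35.96708

# High-temperature equilibrium limit of the exponents relating 33S/32S and
# 36S/32S fractionation to 34S/32S fractionation
theta_33 = (1/M32 - 1/M33) / (1/M32 - 1/M34)   # ~0.515
theta_36 = (1/M32 - 1/M36) / (1/M32 - 1/M34)   # ~1.89

def cap_delta_33(d33, d34, lam=0.515):
    """Delta'33S (permil): deviation of d33S from the reference
    mass-dependent relation with exponent lam."""
    return d33 - 1000.0 * ((1.0 + d34 / 1000.0) ** lam - 1.0)

print(f"theta_33 ~ {theta_33:.3f}, theta_36 ~ {theta_36:.3f}")
# A hypothetical sample with d33S = 10.4 permil and d34S = 20.0 permil
print(f"Delta'33S = {cap_delta_33(10.4, 20.0):+.3f} permil")
```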
Abstract:
In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes, and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information -- due to noise and ambiguity in the underlying data and errors made by the extraction system -- combined with the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes, and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows inference over large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem that runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, these contributions retain the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied to a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
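To make the hinge-loss Markov random field formulation concrete, the sketch below is a hypothetical toy grounding, not the dissertation's implementation: the predicate names, rule weights, and confidences are invented. It encodes an extraction rule, a co-reference rule, and an ontological mutual-exclusion constraint as squared hinge-loss potentials over soft truth values in [0, 1], and recovers the most probable assignment by convex minimization.

```python
# Hypothetical toy example of MAP inference in a hinge-loss Markov random field:
# soft truth values in [0,1], an energy that is a weighted sum of (squared)
# hinge-loss potentials, and a convex minimization for the most probable
# knowledge graph fragment. Not the dissertation's implementation.
import numpy as np
from scipy.optimize import minimize

# Variables: y[0] = Lbl(A, person), y[1] = Lbl(B, person), y[2] = Lbl(B, city)
SAME_AB = 0.8        # soft truth value of SameEntity(A, B) from co-reference
CONF_A_PERSON = 0.9  # extractor confidence that A is labeled person
CONF_B_CITY = 0.6    # extractor confidence that B is labeled city

def energy(y):
    h = lambda x: max(0.0, x)  # hinge: distance to satisfaction of a ground rule
    return (
        # extraction rules: CandidateLbl(X, L) -> Lbl(X, L)
        1.0 * h(CONF_A_PERSON - y[0]) ** 2
        + 1.0 * h(CONF_B_CITY - y[2]) ** 2
        # co-reference rule: SameEntity(A, B) & Lbl(A, L) -> Lbl(B, L)
        + 2.0 * h(SAME_AB + y[0] - 1.0 - y[1]) ** 2
        # ontological constraint: person and city labels are mutually exclusive
        + 5.0 * h(y[1] + y[2] - 1.0) ** 2
    )

res = minimize(energy, x0=np.full(3, 0.5), bounds=[(0.0, 1.0)] * 3)
print(dict(zip(["Lbl(A,person)", "Lbl(B,person)", "Lbl(B,city)"],
               np.round(res.x, 2).tolist())))
```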
Abstract:
This dissertation focuses on design challenges caused by secondary impacts to printed wiring assemblies (PWAs) within handheld electronics due to accidental drop or impact loading. The continuing increase in functionality, miniaturization, and affordability has resulted in a decrease in the size and weight of handheld electronic products. As a result, PWAs have become thinner and the clearances to surrounding structures have decreased. The resulting increase in the flexibility of the PWAs, in combination with the reduced clearances, requires new design rules to minimize and survive possible internal collisions between PWAs and surrounding structures. Such collisions are termed ‘secondary impacts’ in this study. The effect of secondary impact on the board-level drop reliability of printed wiring boards (PWBs) assembled with MEMS microphone components is investigated using a combination of testing, response and stress analysis, and damage modeling. The response analysis is conducted using a combination of numerical finite element modeling and simplified analytic models for additional parametric sensitivity studies.
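As an illustration of the kind of simplified analytic model mentioned above, the sketch below is entirely hypothetical (frequency, damping, shock, and clearance values are invented, and this is not the dissertation's model): it idealizes the PWB's fundamental flexural mode as a single-degree-of-freedom oscillator driven by a half-sine drop shock and checks whether the peak relative deflection exceeds the available clearance, i.e., whether a secondary impact would occur.

```python
# Hypothetical sketch of a simplified analytic drop-impact model: the PWB's
# fundamental flexural mode as a single-degree-of-freedom oscillator excited by
# a half-sine base shock, integrated numerically to compare peak deflection
# against the PWB-to-housing clearance. All values are illustrative.
import numpy as np

FN_HZ = 250.0        # assumed fundamental frequency of the PWB (Hz)
ZETA = 0.03          # assumed modal damping ratio
PEAK_G = 1500.0      # half-sine shock amplitude (g)
PULSE_MS = 0.5       # half-sine shock duration (ms)
CLEARANCE_MM = 0.8   # assumed PWB-to-housing clearance (mm)

wn = 2.0 * np.pi * FN_HZ
tp = PULSE_MS / 1000.0
dt = 1.0e-6
t = np.arange(0.0, 20.0 * tp, dt)
# base acceleration: half-sine pulse, then zero
a_base = np.where(t < tp, PEAK_G * 9.81 * np.sin(np.pi * t / tp), 0.0)

# integrate relative motion z of the board w.r.t. the housing:
# z'' + 2*zeta*wn*z' + wn^2*z = -a_base
z, v, z_peak = 0.0, 0.0, 0.0
for ab in a_base:
    acc = -ab - 2.0 * ZETA * wn * v - wn * wn * z
    v += acc * dt
    z += v * dt
    z_peak = max(z_peak, abs(z))

print(f"peak relative deflection ~ {z_peak * 1000.0:.2f} mm")
print("secondary impact likely" if z_peak * 1000.0 > CLEARANCE_MM
      else "clears housing")
```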
Abstract:
The predictive capabilities of computational fire models have improved in recent years to the point that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model, which provides a mathematical representation of the rate of gaseous fuel production from condensed-phase fuels given a heat flux incident on the material surface. Modern, comprehensive pyrolysis sub-models require the definition of many model parameters to accurately represent materials that are ubiquitous in the built environment. This growing number of required parameters is coupled with the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology for determining the parameters required to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. In this work, the methodology is applied to four common composites that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to determine the heats of complete combustion of the volatiles produced in each reaction. Inverse analyses were conducted on sample temperature data collected in bench-scale tests to determine the thermal transport parameters of each component through degradation. Simulations of quasi-one-dimensional bench-scale gasification tests, generated from the resultant models using the ThermaKin modeling environment, were compared to experimental data to independently validate the models.
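To illustrate the form of the reaction-kinetics description determined from thermogravimetry data, the sketch below uses hypothetical kinetic parameters and is neither ThermaKin nor one of the models fitted in this work: it integrates a single first-order Arrhenius decomposition reaction over a constant-heating-rate experiment and reports the temperature of peak mass-loss rate, the quantity one would read off a DTG curve.

```python
# Illustrative sketch (not ThermaKin, not a fitted model from this work):
# a single first-order Arrhenius decomposition reaction integrated over a
# constant-heating-rate thermogravimetry experiment. The kinetic parameters
# (A, E) are the kind determined from TGA data; values here are hypothetical.
import numpy as np

A = 1.0e13        # pre-exponential factor (1/s), hypothetical
E = 180.0e3       # activation energy (J/mol), hypothetical
R = 8.314         # gas constant (J/mol-K)
BETA = 10.0 / 60  # heating rate: 10 K/min expressed in K/s
T0, T1 = 300.0, 900.0

dT = 0.01
T = np.arange(T0, T1, dT)
dt = dT / BETA    # time step corresponding to each temperature step

alpha = 0.0                   # extent of conversion (0 = virgin, 1 = decomposed)
alphas = np.empty_like(T)
for i, Ti in enumerate(T):
    k = A * np.exp(-E / (R * Ti))      # Arrhenius rate constant
    alpha += k * (1.0 - alpha) * dt    # first-order: d(alpha)/dt = k * (1 - alpha)
    alpha = min(alpha, 1.0)
    alphas[i] = alpha

# temperature of peak mass-loss rate, as one would report from a DTG curve
dadT = np.gradient(alphas, dT)
print(f"peak mass-loss rate at ~{T[np.argmax(dadT)]:.0f} K")
```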