18 results for Digital applications ecosystem
in DRUM (Digital Repository at the University of Maryland)
Abstract:
A model to estimate the mean monthly growth of Crassostrea virginica oysters in Chesapeake Bay was developed. The model is based on the classic von Bertalanffy growth function; however, the growth constant is updated at every monthly timestep in response to short-term changes in temperature and salinity. Using a dynamically varying growth constant allows the model to capture seasonal oscillations in growth, and growth responses to changing environmental conditions, that previous applications of the von Bertalanffy model do not capture. This model is further expanded to include an estimation of Perkinsus marinus impacts on growth rates, as well as estimations of the ecosystem services provided by a restored oyster bar over time. The model was validated by comparing its growth estimates to oyster shell height observations from a variety of restoration sites in the upper Chesapeake Bay. Without the P. marinus impact on growth, the model consistently overestimates mean oyster growth. However, when P. marinus effects are included, the model estimates match the observed mean shell height closely for at least the first 3 years of growth. The estimates of ecosystem services suggested by this model imply that even with high levels of mortality on an oyster reef, the ecosystem services provided by that reef can still be maintained by growth for several years. Because larger oysters filter more water than smaller ones, larger oysters contribute more to the filtration and nutrient removal ecosystem services of the reef. Therefore, a reef with an abundance of larger oysters will provide better filtration and nutrient removal. This implies that if an oyster restoration project aims to improve water quality through oyster filtration, it is important to maintain the larger, older oysters on the reef.
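The core mechanism described in this abstract, a von Bertalanffy update applied month by month with a growth constant that changes each timestep, can be sketched in a few lines. This is an illustrative reconstruction, not the dissertation's calibrated model; the discrete update form and all parameter values are assumptions.

```python
import math

def monthly_increment(length, l_inf, k_month):
    """One monthly von Bertalanffy step: dL = (L_inf - L) * (1 - e^(-k))."""
    return (l_inf - length) * (1.0 - math.exp(-k_month))

def simulate_growth(l0, l_inf, k_series):
    """Step shell height forward one month at a time; k_series holds a
    (possibly different) growth constant for each monthly timestep."""
    lengths = [l0]
    for k in k_series:
        lengths.append(lengths[-1] + monthly_increment(lengths[-1], l_inf, k))
    return lengths
```

A seasonally oscillating `k_series` (e.g. larger in warm months) reproduces the seasonal growth oscillations the abstract describes, while a constant k recovers the classic monotone curve.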
Abstract:
This dissertation develops an explanation of damage and reliability of critical components and structures within the second law of thermodynamics. The approach relies on the fundamentals of irreversible thermodynamics, specifically the concept of entropy generation due to materials degradation as an index of damage. All failure mechanisms that cause degradation, damage accumulation, and ultimate failure share a common feature, namely energy dissipation. Energy dissipation, as a fundamental measure of irreversibility in a thermodynamic treatment of non-equilibrium processes, leads to and can be expressed in terms of entropy generation. The dissertation proposes a theory of damage that relates entropy generation to energy dissipation via generalized thermodynamic forces and fluxes and formally describes the resulting damage. Following the proposed theory of entropic damage, an approach to reliability and integrity characterization based on thermodynamic entropy is discussed. It is shown that the variability in the amount of thermodynamic-based damage, together with uncertainties about the parameters of a distribution model describing that variability, leads to a more consistent and broader definition of the well-known time-to-failure distribution in reliability engineering. It follows that the reliability function can be derived from the thermodynamic laws rather than estimated from observed failure histories. Furthermore, exploiting the advantages of entropy generation and accumulation as a damage index over common observable markers of damage such as crack size, a method is proposed to frame prognostics and health management (PHM) in terms of entropic damage. The proposed entropic damage approach to reliability and integrity is then demonstrated through experimental validation.
Using this framework, the corrosion-fatigue entropy generation function is derived, evaluated, and employed for structural integrity and reliability assessment and remaining useful life (RUL) prediction of tested Aluminum 7075-T651 specimens.
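The claim that a reliability function can be derived from accumulated damage rather than fitted to failure histories can be illustrated with a toy Monte Carlo sketch: R(t) is the probability that accumulated damage has not yet crossed a critical threshold. The Gaussian per-unit damage rate and the threshold here are hypothetical stand-ins, not the dissertation's thermodynamic entropy generation function.

```python
import random

def reliability_curve(rate_mean, rate_sd, damage_critical, times, n_units=20000, seed=1):
    """R(t) = fraction of simulated units whose accumulated damage stays below
    the critical threshold at time t; damage grows linearly at a per-unit rate."""
    rng = random.Random(seed)
    rates = [max(rng.gauss(rate_mean, rate_sd), 0.0) for _ in range(n_units)]
    return [sum(1 for r in rates if r * t < damage_critical) / n_units
            for t in times]
```

Because damage is monotone in time, the resulting curve starts at 1 and is non-increasing, exactly the shape a time-to-failure distribution implies.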
Abstract:
Causal inference with a continuous treatment is a relatively under-explored problem. In this dissertation, we adopt the potential outcomes framework. Potential outcomes are responses that would be seen for a unit under all possible treatments. In an observational study where the treatment is continuous, the potential outcomes are an uncountably infinite set indexed by treatment dose. We parameterize this unobservable set as a linear combination of a finite number of basis functions whose coefficients vary across units. This leads to new techniques for estimating the population average dose-response function (ADRF). Some techniques require a model for the treatment assignment given covariates, some require a model for predicting the potential outcomes from covariates, and some require both. We develop these techniques using a framework of estimating functions, compare them to existing methods for continuous treatments, and simulate their performance in a population where the ADRF is linear and the models for the treatment and/or outcomes may be misspecified. We also extend the comparisons to a data set of lottery winners in Massachusetts. Next, we describe the methods and functions in the R package causaldrf using data from the National Medical Expenditure Survey (NMES) and Infant Health and Development Program (IHDP) as examples. Additionally, we analyze the National Growth and Health Study (NGHS) data set and deal with the issue of missing data. Lastly, we discuss future research goals and possible extensions.
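The parameterization described above, where each unit's dose-response curve is a linear combination of basis functions with unit-specific coefficients, can be sketched as follows. The basis choice and coefficients are illustrative assumptions, not the estimators developed in the dissertation or the causaldrf package.

```python
def adrf(unit_coeffs, basis, t):
    """Average dose-response at dose t: mean over units of sum_j b_ij * phi_j(t)."""
    total = sum(sum(b * phi(t) for b, phi in zip(coeffs, basis))
                for coeffs in unit_coeffs)
    return total / len(unit_coeffs)

# A linear basis {1, t}: each unit's curve is an intercept plus a slope,
# so the ADRF is the line through the average intercept and average slope.
linear_basis = [lambda t: 1.0, lambda t: t]
```

In practice the per-unit coefficients are unobservable and must be estimated from covariates and/or a treatment-assignment model, which is where the estimating-function techniques of the dissertation come in.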
Abstract:
The aim of this dissertation was to investigate flexible polymer-nanoparticle composites with unique magnetic and electrical properties. Toward this goal, two distinct projects were carried out. The first project explored the magneto-dielectric properties and morphology of flexible polymer-nanoparticle composites that possess high permeability (µ), high permittivity (ε), and minimal dielectric and magnetic loss (tan δε, tan δµ). The main materials challenges were the synthesis of magnetic nanoparticle fillers displaying high saturation magnetization (Ms) and limited coercivity, and their homogeneous dispersion in a polymeric matrix. Nanostructured magnetic fillers, including polycrystalline iron core-shell nanoparticles and constructively assembled superparamagnetic iron oxide nanoparticles, were synthesized and dispersed uniformly in an elastomer matrix to minimize conductive losses. The resulting composites demonstrated promising permittivity (22.3), permeability (3), and sustained low dielectric (0.1) and magnetic (0.4) loss for frequencies below 2 GHz. This study demonstrated nanocomposites with a tunable magnetic resonance frequency, which can be used to develop compact and flexible radio frequency devices with high efficiency. The second project focused on fundamental research into methods for the design of highly conductive polymer-nanoparticle composites that can maintain high electrical conductivity under tensile strain exceeding 100%. We investigated a simple solution spraying method to fabricate stretchable conductors based on elastomeric block copolymer fibers and silver nanoparticles. Silver nanoparticles were assembled both in and around the block copolymer fibers, forming interconnected dual nanoparticle networks that provide both in-fiber conductive pathways and additional conductive pathways on the outer surface of the fibers.
Stretchable composites with conductivity values reaching 9000 S/cm maintained 56% of their initial conductivity after 500 cycles at 100% strain. The manufacturing method developed in this research could pave the way toward direct deposition of flexible electronic devices on substrates of any shape. The electrical and electromechanical properties of these dual silver nanoparticle network composites make them promising materials for the future construction of stretchable circuitry for displays, solar cells, antennas, and strain and tactile sensors.
Abstract:
Resource allocation decisions are made to serve the current emergency without knowing which future emergency will occur. Different ordered combinations of emergencies result in different performance outcomes. Even though future decisions can be anticipated with scenarios, previous models assume that events over a time interval are independent. This dissertation assumes instead that events are interdependent, because speed reduction and rubbernecking due to an initial incident provoke secondary incidents. The misconception that secondary incidents are uncommon has resulted in overlooking a look-ahead concept. This dissertation pioneers the relaxation of the structural assumption of independence during the assignment of emergency vehicles. When an emergency is detected and a request arrives, an appropriate emergency vehicle is immediately dispatched. We provide tools for quantifying impacts based on the fundamentals of incident occurrence through identification, prediction, and interpretation of secondary incidents. A proposed online dispatching model minimizes the cost of moving the next emergency unit while keeping the response as close to optimal as possible. Using the look-ahead concept, the online model flexibly re-computes the solution, basing future decisions on present requests. We introduce various online dispatching strategies with visualization of the algorithms, and provide insights into their differences in behavior and solution quality. The experimental evidence indicates that the algorithm works well in practice. After having served a designated request, the available and/or remaining vehicles are relocated to a new base for the next emergency. System costs will be excessive if the delay in dispatching decisions is ignored when relocating response units.
This dissertation presents an integrated method that begins with a location phase to manage initial incidents and progresses through a dispatching phase to manage the stochastic occurrence of subsequent incidents. Previous studies used the frequency of independent incidents and ignored scenarios in which two incidents occurred within proximal regions and intervals. The proposed analytical model relaxes the structural assumptions of the Poisson process (independent increments) and incorporates the evolution of primary and secondary incident probabilities over time. The mathematical model overcomes several limiting assumptions of previous models, such as no waiting time, the rule of returning to the original depot, and fixed depots. The temporal locations made flexible with look-ahead are compared with current practice, which locates units in depots based on Poisson theory. A linearization of the formulation is presented, and an efficient heuristic algorithm is implemented to deal with a large-scale problem in real time.
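As a point of reference for the online dispatching problem discussed above, the simplest myopic rule sends the nearest available vehicle to each arriving request; the look-ahead strategies in the dissertation improve on exactly this kind of baseline. The Manhattan-distance metric and grid coordinates here are illustrative assumptions, not the dissertation's model.

```python
def nearest_vehicle(vehicles, request):
    """Myopic baseline: return the index of the closest available vehicle,
    measured in Manhattan (grid) distance."""
    return min(range(len(vehicles)),
               key=lambda i: abs(vehicles[i][0] - request[0])
                           + abs(vehicles[i][1] - request[1]))
```

A look-ahead dispatcher would instead weigh this immediate cost against the expected cost of covering likely secondary incidents, which is what makes interdependent events matter.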
Abstract:
This thesis deals with tensor completion for the solution of multidimensional inverse problems. We study the problem of reconstructing an approximately low rank tensor from a small number of noisy linear measurements. New recovery guarantees, numerical algorithms, non-uniform sampling strategies, and parameter selection algorithms are developed. We derive a fixed point continuation algorithm for tensor completion and prove its convergence. A restricted isometry property (RIP) based tensor recovery guarantee is proved. Probabilistic recovery guarantees are obtained for sub-Gaussian measurement operators and for measurements obtained by non-uniform sampling from a Parseval tight frame. We show how tensor completion can be used to solve multidimensional inverse problems arising in NMR relaxometry. Algorithms are developed for regularization parameter selection, including accelerated k-fold cross-validation and generalized cross-validation. These methods are validated on experimental and simulated data. We also derive condition number estimates for nonnegative least squares problems. Tensor recovery promises to significantly accelerate N-dimensional NMR relaxometry and related experiments, enabling previously impractical experiments. Our methods could also be applied to other inverse problems arising in machine learning, image processing, signal processing, computer vision, and other fields.
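A minimal sketch of the singular-value-thresholding idea behind low-rank completion, shown in the matrix special case (the tensor algorithms in the thesis generalize this); the threshold `tau` and step size are hypothetical, and this is not the fixed point continuation algorithm of the thesis itself.

```python
import numpy as np

def svt_step(x, mask, m_obs, tau, step=1.0):
    """One singular-value-thresholding iteration for low-rank matrix completion:
    soft-threshold the singular values of the current iterate, then correct the
    result on the observed entries (mask is 1 where an entry was measured)."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    x_low = (u * np.maximum(s - tau, 0.0)) @ vt
    return x_low + step * mask * (m_obs - mask * x_low)
```

Iterating this step drives the estimate toward a matrix that is simultaneously low rank and consistent with the noisy measurements; recovery guarantees like the RIP-based one in the thesis say when that limit is close to the true tensor.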
Abstract:
This dissertation proposes statistical methods to formulate, estimate, and apply complex transportation models. Two main problems are part of the analyses conducted and presented in this dissertation. The first method solves an econometric problem and is concerned with the joint estimation of models that contain both discrete and continuous decision variables. The use of ordered models along with a regression is proposed, and their effectiveness is evaluated with respect to unordered models. Procedures to calculate and optimize the log-likelihood functions of both discrete-continuous approaches are derived, and the difficulties associated with the estimation of unordered models are explained. Numerical approximation methods based on the Genz algorithm are implemented to solve the multidimensional integral associated with the unordered modeling structure. The problems deriving from the lack of smoothness of the probit model around the maximum of the log-likelihood function, which makes the optimization and the calculation of standard deviations very difficult, are carefully analyzed. A methodology to perform out-of-sample validation in the context of a joint model is proposed. Comprehensive numerical experiments have been conducted on both simulated and real data. In particular, the discrete-continuous models are estimated and applied to vehicle ownership and use models on data extracted from the 2009 National Household Travel Survey. The second part of this work offers a comprehensive statistical analysis of the free-flow speed distribution; the method is applied to data collected on a sample of roads in Italy. A linear mixed model that includes speed quantiles in its predictors is estimated. Results show that there is no road effect in the analysis of free-flow speeds, which is particularly important for model transferability.
A very general framework to predict random effects with few observations and incomplete access to model covariates is formulated and applied to predict the distribution of free-flow speed quantiles. The speed distribution of most road sections is successfully predicted; jack-knife estimates are calculated and used to explain why some sections are poorly predicted. Ultimately, this work contributes to the literature in transportation modeling by proposing econometric model formulations for discrete-continuous variables, more efficient methods for the calculation of multivariate normal probabilities, and random effects models for free-flow speed estimation that take into account the survey design. All methods are rigorously validated on both real and simulated data.
Abstract:
Life Cycle Climate Performance (LCCP) is an evaluation method by which heating, ventilation, air conditioning, and refrigeration systems can be evaluated for their global warming impact over the course of their complete life cycle. LCCP is more inclusive than previous metrics such as Total Equivalent Warming Impact. It is calculated as the sum of direct and indirect emissions generated over the lifetime of the system, “from cradle to grave”. Direct emissions include all effects from the release of refrigerants into the atmosphere during the lifetime of the system, including annual leakage and losses during disposal of the unit. Indirect emissions include emissions from the energy consumed during manufacturing, lifetime operation, and disposal of the system. This thesis proposes a standardized approach to the use of LCCP and traceable data sources for all aspects of the calculation. An equation is proposed that unifies the efforts of previous researchers, and data sources are recommended for average values of all LCCP inputs. A residential heat pump sample problem illustrates the methodology, with the heat pump evaluated at five U.S. locations in different climate zones. An Excel tool was developed for residential heat pumps using the proposed method. The primary factor in the LCCP calculation is the energy consumption of the system. The effects of advanced vapor compression cycles are then investigated for heat pump applications. Advanced cycle options attempt to reduce energy consumption in various ways and fall into three categories: subcooling cycles, expansion loss recovery cycles, and multi-stage cycles. The cycles selected for research are the suction line heat exchanger cycle, the expander cycle, the ejector cycle, and the vapor injection cycle. The cycles are modeled using Engineering Equation Solver and the results are applied to the LCCP methodology.
The expander cycle, ejector cycle, and vapor injection cycle are effective in reducing the LCCP of a residential heat pump by 5.6%, 8.2%, and 10.5%, respectively, in Phoenix, AZ. The advanced cycles are also evaluated with low-GWP refrigerants and are capable of reducing the LCCP of a residential heat pump by 13.7%, 16.3%, and 18.6% using a refrigerant with a GWP of 10. To meet the U.S. Department of Energy’s goal of reducing residential energy use by 40% by 2025, with a proportional reduction in all other categories of residential energy consumption, a reduction in the energy consumption of a residential heat pump of 34.8%, with a refrigerant GWP of 10, is necessary for Phoenix, AZ. A combination of advanced cycles, control options, and low-GWP refrigerants is necessary to meet this goal.
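The LCCP definition above, direct refrigerant emissions plus indirect energy-related emissions over the lifetime "from cradle to grave", can be written as a short calculation. The input names and sample values are placeholders, not the standardized equation or traceable data sources proposed in the thesis.

```python
def lccp(annual_leak_kg, eol_loss_kg, lifetime_yr, gwp,
         annual_energy_kwh, grid_kg_co2_per_kwh, embodied_kg_co2):
    """LCCP in kg CO2-eq: direct refrigerant releases (annual leakage over the
    lifetime plus end-of-life loss) times GWP, plus indirect emissions from
    lifetime energy use and from manufacturing/disposal."""
    direct = (annual_leak_kg * lifetime_yr + eol_loss_kg) * gwp
    indirect = annual_energy_kwh * lifetime_yr * grid_kg_co2_per_kwh + embodied_kg_co2
    return direct + indirect
```

The structure makes the abstract's point visible: with a GWP of 10 the direct term nearly vanishes, so further LCCP reduction must come from the energy-consumption (indirect) term, which is why the advanced cycles matter.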
Abstract:
In this dissertation I draw a connection between quantum adiabatic optimization, spectral graph theory, heat diffusion, and sub-stochastic processes through the operators that govern these processes and their associated spectra. In particular, we study Hamiltonians which have recently become known as "stoquastic" or, equivalently, the generators of sub-stochastic processes. The operators corresponding to these Hamiltonians are of interest in all of the settings mentioned above. I predominantly explore the connection between the spectral gap of an operator, that is, the difference between the two lowest energies of that operator, and certain equilibrium behavior. In the context of adiabatic optimization, this corresponds to the likelihood of solving the optimization problem of interest. I provide an instance of an optimization problem that is easy to solve classically, but leaves open the possibility of being difficult adiabatically. Aside from this concrete example, the work in this dissertation is predominantly mathematical, and we focus on bounding the spectral gap. Our primary tool for doing this is spectral graph theory, which provides the most natural approach to this task by simply considering Dirichlet eigenvalues of subgraphs of host graphs. I derive tight bounds for the gap of one-dimensional, hypercube, and general convex subgraphs. The techniques used also adapt methods recently used by Andrews and Clutterbuck to prove the long-standing "Fundamental Gap Conjecture".
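For a concrete instance of the Dirichlet-eigenvalue approach mentioned above, the gap of the path graph (the one-dimensional case) is available in closed form. This sketch assumes the standard combinatorial graph Laplacian with Dirichlet boundary conditions, whose eigenvalues on a path of n interior vertices are 2 - 2cos(kπ/(n+1)); it is an illustration, not one of the dissertation's bounds.

```python
import math

def path_dirichlet_gap(n):
    """Spectral gap of the path graph on n interior vertices with Dirichlet
    boundary conditions: lambda_2 - lambda_1, where
    lambda_k = 2 - 2*cos(k*pi/(n+1)) for k = 1..n."""
    lam1 = 2 - 2 * math.cos(math.pi / (n + 1))
    lam2 = 2 - 2 * math.cos(2 * math.pi / (n + 1))
    return lam2 - lam1
```

The gap shrinks like 3π²/(n+1)² as n grows, which is the kind of scaling that determines how slowly an adiabatic algorithm must run.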
Abstract:
Magnetic nanoparticles (MNPs) are known for the unique properties conferred by their small size and have found wide application in food safety analyses. However, their high surface energy and strong magnetization often lead to aggregation, compromising their functions. In this study, iron oxide magnetic particles (MPs) spanning the nano to micro size range were synthesized, from which particles with less aggregation and excellent magnetic properties were obtained. MPs were synthesized via three different hydrothermal procedures, using poly(acrylic acid) (PAA) of different molecular weights (Mw) as the stabilizer. The particle size, morphology, and magnetic properties of the MPs from these synthesis procedures were characterized and compared. Among the three syntheses, one-step hydrothermal synthesis demonstrated the highest yield and the most efficient magnetic collection of the resulting PAA-coated magnetic microparticles (PAA-MMPs, >100 nm). The iron oxide content of these PAA-MMPs was around 90%, and the saturation magnetization ranged from 70.3 emu/g to 57.0 emu/g, depending on the Mw of the PAA used. In this approach, the particles prepared using PAA with a Mw of 100K g/mol exhibited superparamagnetic behavior, with ~65% lower coercivity and remanence compared to the others. They were therefore less susceptible to aggregation and remained remarkably water-dispersible even after one month of storage. Three applications of the PAA-MMPs from one-step hydrothermal synthesis were explored: immobilization of food proteins and enzymes, antibody conjugation for pathogen capture, and magnetic hydrogel film fabrication. These studies demonstrated their versatile functions as well as their potential applications in the food science area.
Abstract:
Traditional organic chemistry has long been dominated by ground-state thermal reactions. The alternative is excited-state chemistry, which uses light to drive chemical transformations. There is considerable interest in using this clean, renewable energy source due to concerns surrounding the combustion byproducts associated with the consumption of fossil fuels. The work presented in this text focuses on the use of light (both ultraviolet and visible) for the following quantitative chemical transformations: (1) the release of compounds containing carboxylic acid and alcohol functional groups and (2) the conversion of carbon dioxide into other usable chemicals. Chapters 1-3 introduce and explore the use of photoremovable protecting groups (PPGs) for the spatiotemporal control of molecular concentrations. Two new PPGs are discussed: the 2,2,2-tribromoethoxy group for the protection of carboxylic acids and the 9-phenyl-9-tritylone group for the protection of alcohols. Fundamental interest in the factors that affect C–X bond breaking has driven the work presented in this text on the release of carboxylic acid substrates. Product analysis from the UV photolysis of 2,2,2-tribromoethyl-(2′-phenylacetate) in various solvents reveals the formation of H-atom abstraction products as well as the release of phenylacetic acid. The deprotection of alcohols is realized through UV or visible light photolysis of 9-phenyl-9-tritylone ethers. Central to this study is the use of photoinduced electron transfer chemistry for the generation of ion diradicals capable of undergoing bond-breaking chemistry leading to the release of the alcohol substrates. Chapters 4 and 5 explore the use of N-heterocyclic carbenes (NHCs) as catalysts for the photochemical reduction of carbon dioxide. Previous experiments have demonstrated that NHCs can add to CO2 to form stable zwitterionic species known as N-heterocyclic-2-carboxylates (NHC–CO2).
Work presented in this text illustrates that the stability of these species is highly dependent on solvent polarity, consistent with a lengthening of the imidazolium-to-carbon dioxide bond (CNHC–CCO2). Furthermore, these adducts interact with excited-state electron donors, resulting in the generation of ion diradicals capable of converting carbon dioxide into formic acid.
Abstract:
The past several years have seen the surprising and rapid rise of Bitcoin and other “cryptocurrencies.” These are decentralized peer-to-peer networks that allow users to transmit money, to compose financial instruments, and to enforce contracts between mutually distrusting peers, and that show great promise as a foundation for financial infrastructure that is more robust, efficient, and equitable than ours today. However, it is difficult to reason about the security of cryptocurrencies. Bitcoin is a complex system, comprising many intricate and subtly-interacting protocol layers. At each layer it features design innovations that (prior to our work) have not undergone any rigorous analysis. Compounding the challenge, Bitcoin is but one of hundreds of competing cryptocurrencies in an ecosystem that is constantly evolving. The goal of this thesis is to formally reason about the security of cryptocurrencies, reining in their complexity, and providing well-defined and justified statements of their guarantees. We provide a formal specification and construction for each layer of an abstract cryptocurrency protocol, and prove that our constructions satisfy their specifications. The contributions of this thesis are centered around two new abstractions: “scratch-off puzzles,” and the “blockchain functionality” model. Scratch-off puzzles are a generalization of the Bitcoin “mining” algorithm, its most iconic and novel design feature. We show how to provide secure upgrades to a cryptocurrency by instantiating the protocol with alternative puzzle schemes. We construct secure puzzles that address important and well-known challenges facing Bitcoin today, including wasted energy and dangerous coalitions. The blockchain functionality is a general-purpose model of a cryptocurrency rooted in the “Universal Composability” cryptography theory.
We use this model to express a wide range of applications, including transparent “smart contracts” (like those featured in Bitcoin and Ethereum), and also privacy-preserving applications like sealed-bid auctions. We also construct a new protocol compiler, called Hawk, which translates user-provided specifications into privacy-preserving protocols based on zero-knowledge proofs.
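The Bitcoin mining algorithm that scratch-off puzzles generalize is, at its core, a hashcash-style search for a nonce that drives a hash below a difficulty target. This is a simplified illustration of that baseline puzzle, not a scratch-off puzzle construction from the thesis; the payload and difficulty are arbitrary.

```python
import hashlib

def solve_puzzle(payload, difficulty_bits):
    """Hashcash-style proof of work: find a nonce such that
    SHA-256(payload || nonce), read as a 256-bit integer, falls below a
    target with `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1
```

Solving takes about 2^difficulty_bits hash evaluations on average, while verifying takes one, and it is exactly this asymmetric, memoryless structure that alternative puzzle schemes must preserve while changing properties like energy use.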
Abstract:
Social network sites (SNS), such as Facebook, Google+ and Twitter, have attracted hundreds of millions of daily users since their appearance. Within SNS, users connect to each other, express their identity, disseminate information, and form cooperation by interacting with their connected peers. The increasing popularity and ubiquity of SNS usage, and the invaluable user behaviors and connections, give rise to many applications and business models. We look into several important problems within the social network ecosystem. The first is the SNS advertisement allocation problem. The other two are related to trust mechanism design in the social network setting: local trust inference and global trust evaluation. In SNS advertising, we study the problem of advertisement allocation from the ad platform's angle, and discuss its differences from the advertising model in the search engine setting. By leveraging the connection between social networks and hyperbolic geometry, we propose to solve the problem via approximation using hyperbolic embedding and convex optimization. A hyperbolic embedding method is designed for the SNS ad allocation problem, and several components are introduced to realize the optimization formulation. We show the advantages of our new approach in solving the problem compared to the baseline integer programming (IP) formulation. In studying trust mechanisms in social networks, we consider the existence of distrust (i.e., negative trust) relationships, and differentiate between the concepts of local trust and global trust in the social network setting. For local trust inference, we propose a 2-D trust model and, based on it, develop a semiring-based trust inference framework. For global trust evaluation, we consider a general setting with conflicting opinions, and propose a consensus-based approach to solve this complex problem in signed trust networks.
Abstract:
The Li-ion rechargeable battery (LIB) is widely used as an energy storage device, but has significant limitations in cycle life and safety. During initial charging, decomposition of the ethylene carbonate (EC)-based electrolytes of the LIB leads to the formation of a passivating layer on the anode known as the solid electrolyte interphase (SEI). The formation of the SEI has a great impact on the cycle life and safety of LIBs, yet mechanistic aspects of SEI formation are not fully understood. In this dissertation, two surface science model systems were created under ultra-high vacuum (UHV) to probe the very initial stage of SEI formation at model carbon anode surfaces of the LIB. The first model system, Model System I, is a lithium-carbonate electrolyte/graphite C(0001) system. I developed a temperature programmed desorption/temperature programmed reaction spectroscopy (TPD/TPRS) instrument as part of my dissertation to study Model System I in quantitative detail. The binding strengths and film growth mechanisms of key electrolyte molecules on model carbon anode surfaces with varying extents of lithiation were measured by TPD. TPRS was further used to track the gases evolved from different reduction products in early-stage SEI formation. The branching ratio of multiple reaction pathways was quantified for the first time and determined to be 70% organolithium products vs. 30% inorganic lithium product. The obtained branching ratio provides important information on the distribution of lithium salts that form at the very onset of SEI formation. One of the key reduction products formed from EC in early-stage SEI formation is lithium ethylene dicarbonate (LEDC). Despite intensive study, the LEDC structure in either bulk or thin-film (SEI) form is unknown. To enable structural study, pure LEDC was synthesized and subjected to synchrotron X-ray diffraction measurements (bulk material) and STM measurements (deposited films).
To enable studies of LEDC thin films, Model System II, a lithium ethylene dicarbonate (LEDC)-dimethylformamide (DMF)/Ag(111) system, was created by a solution microaerosol deposition technique. The produced films were then imaged by ultra-high vacuum scanning tunneling microscopy (UHV-STM). As a control, the DMF-Ag(111) system was first prepared, and its complex 2D phase behavior was mapped out as a function of coverage. The evolution of three distinct monolayer phases of DMF was observed with increasing surface pressure: a 2D gas phase, an ordered DMF phase, and an ordered Ag(DMF)2 complex phase. The addition of LEDC to this mixture seeded the nucleation of the ordered DMF islands at lower surface pressures (DMF coverages), and was interpreted through nucleation theory. A structural model of the nucleation seed was proposed, and the implication of ionic SEI products, such as LEDC, in early-stage SEI formation was discussed.
Abstract:
The atomic-level structure and chemistry of materials ultimately dictate their observed macroscopic properties and behavior. As such, an intimate understanding of these characteristics allows for better materials engineering and improvements in the resulting devices. In our work, two material systems were investigated using advanced electron and ion microscopy techniques, relating the measured nanoscale traits to overall device performance. First, transmission electron microscopy and electron energy loss spectroscopy (TEM-EELS) were used to analyze interfacial states at the semiconductor/oxide interface in wide bandgap SiC microelectronics. This interface contains defects that significantly diminish SiC device performance, and their fundamental nature remains generally unresolved. The impacts of various microfabrication techniques were explored, examining both current commercial and next-generation processing strategies. In further investigations, machine learning techniques were applied to the EELS data, revealing previously hidden Si, C, and O bonding states at the interface, which help explain the origins of mobility enhancement in SiC devices. Finally, the impacts of SiC bias temperature stressing on the interfacial region were explored. In the second system, focused ion beam/scanning electron microscopy (FIB/SEM) was used to reconstruct 3D models of solid oxide fuel cell (SOFC) cathodes. Since the specific degradation mechanisms of SOFC cathodes are poorly understood, FIB/SEM and TEM were used to analyze and quantify changes in the microstructure during performance degradation. Novel strategies for microstructure calculation from FIB-nanotomography data were developed and applied to LSM-YSZ and LSCF-GDC composite cathodes, aged with environmental contaminants to promote degradation. In LSM-YSZ, migration of both La and Mn cations to the grain boundaries of YSZ was observed using TEM-EELS. 
Few substantial changes, however, were observed in the overall microstructure of the cells, correlating with a lack of performance degradation induced by H2O. Using similar strategies, a series of LSCF-GDC cathodes aged in H2O, CO2, and Cr-vapor environments were analyzed. FIB/SEM observation revealed considerable formation of secondary phases within these cathodes, and quantifiable modifications of the microstructure. In particular, Cr-poisoning was observed to cause substantial byproduct formation, which was correlated with drastic reductions in cell performance.