973 results for High-fidelity simulators


Relevance:

80.00%

Publisher:

Abstract:

The predictive capability of high-fidelity finite element modelling to accurately capture the damage and crush behaviour of composite structures relies on the acquisition of accurate material properties, some of which have necessitated the development of novel approaches. This paper details the measurement of the interlaminar and intralaminar fracture toughness and the non-linear shear behaviour of carbon fibre (AS4)/polyetherketoneketone (PEKK) thermoplastic composite laminates, and the utilisation of these properties for the accurate computational modelling of crush. Double-cantilever-beam (DCB), four-point end-notched flexure (4ENF) and mixed-mode bending (MMB) test configurations were used to determine the initiation and propagation fracture toughness under mode I, mode II and mixed-mode loading, respectively. Compact tension (CT) and compact compression (CC) test samples were employed to determine the intralaminar longitudinal tensile and compressive fracture toughness. V-notched rail shear tests were used to measure the highly non-linear shear behaviour associated with thermoplastic composites, and the corresponding fracture toughness. Corresponding numerical models of these tests were developed for verification and yielded good correlation with the experimental response. This also confirmed the accuracy of the measured values, which were then employed as input material parameters for modelling the crush behaviour of a corrugated test specimen.
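As a hedged illustration of the data reduction behind such tests, the simplest estimate of the mode I strain energy release rate from a DCB specimen uses simple beam theory (the paper's actual reduction scheme is not specified here; the load, opening displacement, width and crack length below are hypothetical):

```python
def gic_simple_beam_theory(load_n, opening_m, width_m, crack_len_m):
    """Mode I strain energy release rate from a DCB test,
    simple beam theory: G_I = 3*P*delta / (2*b*a)."""
    return 3.0 * load_n * opening_m / (2.0 * width_m * crack_len_m)

# Hypothetical data point: 60 N at 4 mm opening, 20 mm wide arms, 50 mm crack
g1c = gic_simple_beam_theory(60.0, 0.004, 0.020, 0.050)
print(f"G_IC ~ {g1c:.0f} J/m^2")
```

Corrected beam theory adds a crack-length correction term, but the proportionality to P·δ/(b·a) is the same.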

Relevance:

80.00%

Publisher:

Abstract:

Thermoplastic composites are likely to emerge as the preferred solution for meeting the high-volume production demands of passenger road vehicles. Substantial effort is currently being directed towards the development of new modelling techniques to reduce the extent of costly and time-consuming physical testing. Developing a high-fidelity numerical model to predict the crush behaviour of composite laminates depends on the accurate measurement of material properties as well as a thorough understanding of the damage mechanisms associated with crush events. This paper details the manufacture, testing and modelling of self-supporting corrugated thermoplastic composite specimens for crashworthiness assessment. These specimens demonstrated a 57.3% higher specific energy absorption compared to identical specimens made from thermoset composites. The corresponding damage mechanisms were investigated in situ using digital microscopy and post-analysed using scanning electron microscopy (SEM). Splaying and fragmentation were the two primary failure modes, involving fibre breakage, matrix cracking and delamination. A mesoscale composite damage model with new non-linear shear constitutive laws, which combines a range of novel techniques to accurately capture the material response under crushing, is presented. The force-displacement curves, damage parameter maps and dissipated energy obtained from the numerical analysis are shown to be in good qualitative and quantitative agreement with experimental results. The proposed approach could significantly reduce the extent of physical testing required in the development of crashworthy structures.
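Specific energy absorption (SEA), the metric behind the 57.3% comparison, is the energy absorbed (area under the force-displacement curve) per unit crushed mass. A minimal sketch with illustrative numbers (an idealised constant-force crush, not the paper's data):

```python
import numpy as np

def specific_energy_absorption(force_n, disp_m, crushed_mass_kg):
    """SEA in J/kg: trapezoidal area under the force-displacement
    curve divided by the mass of crushed material."""
    energy_j = float(np.sum(0.5 * (force_n[1:] + force_n[:-1]) * np.diff(disp_m)))
    return energy_j / crushed_mass_kg

# Idealised crush: constant 5 kN plateau over a 50 mm stroke, 10 g crushed
disp = np.linspace(0.0, 0.050, 101)
force = np.full_like(disp, 5000.0)
sea = specific_energy_absorption(force, disp, 0.010)
print(f"SEA = {sea / 1000.0:.1f} kJ/kg")  # 5000 N * 0.05 m / 0.01 kg = 25 kJ/kg
```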

Relevance:

80.00%

Publisher:

Abstract:

Background: The use of simulation in medical education is increasing, with students taught and assessed using simulated patients and manikins. Medical students at Queen's University Belfast are taught advanced life support cardiopulmonary resuscitation as part of the undergraduate curriculum. Teaching and feedback in these skills have been developed at Queen's University with high-fidelity manikins. This study aimed to evaluate the effectiveness of video compared to verbal feedback in the assessment of student cardiopulmonary resuscitation performance.

Methods: Final year students participated in this study using a high-fidelity manikin in the Clinical Skills Centre, Queen's University Belfast. Cohort A received verbal feedback only on their performance and cohort B received video feedback only. Video analysis using 'StudioCode' software was distributed to students. Each group returned for a second scenario and evaluation 4 weeks later. An assessment tool was created for performance assessment, which included individual skill and global score evaluation.

Results: One hundred and thirty-eight final year medical students completed the study; 62% were female and the mean age was 23.9 years. Students receiving video feedback had significantly greater improvement in overall scores compared to those receiving verbal feedback (p = 0.006, 95% CI: 2.8-15.8). Individual skills, including ventilation quality, and the global score were significantly better with video feedback (p = 0.002 and p < 0.001, respectively) when compared with cohort A. There was a positive change in overall score for cohort B from session one to session two (p < 0.001, 95% CI: 6.3-15.8), indicating that video feedback significantly benefited skill retention. In addition, video feedback showed a significant improvement in the global score (p < 0.001, 95% CI: 3.3-7.2) and drug administration timing (p = 0.004, 95% CI: 0.7-3.8) of cohort B participants from session one to session two.

Conclusions: There is increased use of simulation in medicine but a paucity of published data comparing feedback methods in cardiopulmonary resuscitation training. Our study shows that the use of video feedback when teaching cardiopulmonary resuscitation is more effective than verbal feedback and enhances skill retention. This is one of the first studies to demonstrate the benefit of video feedback in cardiopulmonary resuscitation teaching.
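Confidence intervals for a difference in mean scores, as reported in this study, can be sketched with a normal (Welch) approximation; the score lists below are invented for illustration and are not the study's data:

```python
from statistics import mean, stdev
from math import sqrt

def diff_of_means_ci(a, b, z=1.96):
    """Approximate 95% CI for mean(a) - mean(b) using the normal
    approximation with an unpooled (Welch) standard error."""
    d = mean(a) - mean(b)
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return d - z * se, d + z * se

# Hypothetical score improvements for two feedback cohorts
video = [12, 9, 15, 11, 8, 14, 10, 13]
verbal = [4, 6, 2, 5, 7, 3, 5, 4]
lo, hi = diff_of_means_ci(video, verbal)
print(f"95% CI for difference in means: ({lo:.1f}, {hi:.1f})")
```

An interval that excludes zero, as in the study's CIs, indicates a statistically significant difference.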

Relevance:

80.00%

Publisher:

Abstract:

This chapter examines four papers that have been influential in the use of virtual worlds for learning, but also draws on a range of other research and literature in order to locate virtual world learning across the landscape of higher education. Whilst there is sometimes a misconception that research into learning in virtual worlds is very new, the field began to develop in the late 1990s and has continued since then. Typical examples of the first iterations of virtual worlds include Second Life, Active Worlds, and Kaneva, which have been available for up to 20 years. The second generation is currently being developed, examples being High Fidelity and Project Sansar. The chapter reviews the literature in this field and suggests that the central themes that emerge are: socialisation; presence and immersion in virtual world learning; learning collaboratively; and trajectories of participation.

Relevance:

80.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

80.00%

Publisher:

Abstract:

The aim of this work was to use action research to study how well user interface development carried out with agile software development methods can meet customers' real needs. Implementation alternatives for the user interface of a new version of the case-study company's existing tool were explored, and high-fidelity prototypes were built from them. The values and principles of agile software development proved to be an excellent fit for the development process. An iterative approach to development and close collaboration between the case-study company and the bachelor's thesis author made it possible to meet the company's expectations. The tool's user interface was brought to a level that allows further development to begin. Including more comprehensive user testing in the development process would have helped achieve an even better result.

Relevance:

80.00%

Publisher:

Abstract:

In this study the relationship between heterogeneous nucleate boiling surfaces and the deposition of suspended metallic colloidal particles, popularly known as crud or corrosion products in the process industries, on those heterogeneous sites is investigated. Various researchers have reported that hematite is a major constituent of crud, which makes it the primary material of interest; however, the models developed in this work are independent of the material choice. Qualitative hypotheses on the deposition process under boiling proposed by previous researchers have been tested and fail to explain several of the physical mechanisms observed and analyzed. In this study a quantitative model of the deposition rate has been developed on the basis of bubble dynamics and the colloid-surface interaction potential. Boiling from a heating surface aids the aggregation of metallic particulates (e.g. nanoparticles and crud particulate) suspended in a liquid, which helps transport them to the heating surface. Consequently, clusters of particles deposit onto the heating surface due to various interactive forces, resulting in the formation of porous or impervious layers. The deposit layer grows or recedes depending upon variations in the interparticle and surface forces, fluid shear, fluid chemistry, etc. This deposit layer in turn affects the rate of bubble generation, the formation of porous chimneys, the critical heat flux (CHF) of the surface, and the activation and deactivation of nucleation sites on the heating surface. The effect of boiling on colloidal deposition poses several problems, ranging from research initiatives involving nano-fluids as a heat transfer medium to industrial applications such as light water nuclear reactors. This study attempts to integrate colloid and surface science with vapor bubble dynamics, boiling heat transfer and evaporation rate.
Pool boiling experiments with dilute metallic colloids have been conducted to investigate several parameters impacting the system. The experimental data available in the literature were obtained from flow experiments, which do not help in correlating the boiling mechanism with the deposited amount or structure. With the help of experimental evidence and analysis, the previously proposed hypothesis of particle transport to the contact line due to hydrophobicity has been challenged. The experimental observations suggest that deposition occurs around the bubble contact line and extends to the area underneath the bubble microlayer as well. During evaporation, a concentration gradient of the non-volatile species is created, which induces an osmotic pressure. The osmotic pressure developed inside the microlayer draws more particles into the microlayer region and towards the contact line. The colloidal escape time is slower than the evaporation time, which leads to the aggregation of particles in the evaporating microlayer. These aggregated particles deposit onto, or are removed from, the heating surface depending upon their total interaction potential. The interaction potential has been computed from the surface charge and the van der Waals potential for the materials in aqueous solution. Based upon the interaction-force boundary layer thickness, which is governed by the Debye radius (or ionic concentration and pH), a simplified quantitative model for the attachment kinetics is proposed. This attachment kinetics model gives reasonable results in predicting the attachment rate against data reported by previous researchers. The attachment kinetics study has been carried out for different pH levels and particle sizes for hematite particles. Quantification of colloidal transport under boiling scenarios is done using overall average evaporation rates, because waiting times for bubbles at the same position are generally much larger than growth times.
In other words, from a larger, measurable-scale perspective, the frequency of bubbles, rather than the evaporation rate during the microlayer evaporation of a single bubble, dictates the rate of particle collection. The combination of attachment kinetics and colloidal transport kinetics has been used to build a consolidated model for predicting the amount of deposition, validated against high-fidelity experimental data. In an attempt to understand and explain the boiling characteristics, high-speed visualization of bubble dynamics from a single artificial large cavity and from multiple naturally occurring cavities was conducted. A bubble growth and departure dynamics model was developed for artificial active sites and validated with the experimental data. The variation of bubble departure diameter with wall temperature was analyzed from the experimental results and shows coherence with earlier studies. However, deposit traces left after the boiling experiments show that the bubble contact diameter, previously ignored by various researchers, is essential for predicting bubble departure dynamics. The relationship between the porosity of colloid deposits and the bubbles has been developed under the influence of Jakob number, sub-cooling and particle size. This can be further utilized to tailor the wettability of the surface. Designed porous surfaces can have a vast range of applications, from high wettability, such as high critical heat flux boilers, to low wettability, such as efficient condensers.
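The colloid-surface interaction potential invoked above is commonly estimated from DLVO theory: a van der Waals attraction plus an electric double-layer repulsion screened over the Debye length. A minimal sphere-plate sketch with illustrative, hematite-like parameters (the thesis's exact expressions may differ):

```python
import math

def dlvo_sphere_plate(h_m, radius_m, hamaker_j, debye_len_m, psi_v, eps_r=78.5):
    """Total DLVO interaction energy (J) between a sphere and a flat plate:
    nonretarded van der Waals attraction plus a weak-overlap electric
    double-layer repulsion (equal surface potentials assumed)."""
    eps0 = 8.854e-12                              # vacuum permittivity, F/m
    v_vdw = -hamaker_j * radius_m / (6.0 * h_m)   # attractive, diverges at contact
    kappa = 1.0 / debye_len_m                     # inverse Debye length
    v_edl = (2.0 * math.pi * eps0 * eps_r * radius_m * psi_v ** 2
             * math.exp(-kappa * h_m))            # screened repulsion
    return v_vdw + v_edl

# Illustrative values: 100 nm sphere, A = 1e-20 J, 10 nm Debye length, 25 mV
for h_nm in (1, 5, 20):
    v = dlvo_sphere_plate(h_nm * 1e-9, 100e-9, 1e-20, 10e-9, 0.025)
    print(f"h = {h_nm:2d} nm  V_total = {v:.2e} J")
```

With these numbers the repulsive barrier dominates at a few nanometres of separation, while the van der Waals term wins very close to contact, which is the qualitative behaviour behind attachment/detachment criteria.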

Relevance:

80.00%

Publisher:

Abstract:

The first part of the thesis describes a new patterning technique, microfluidic contact printing, that combines several of the desirable aspects of microcontact printing and microfluidic patterning and addresses some of their important limitations through the integration of a track-etched polycarbonate (PCTE) membrane. Using this technique, biomolecules (e.g., peptides, polysaccharides, and proteins) were printed with high fidelity on a receptor-modified polyacrylamide hydrogel substrate. The patterns obtained can be controlled through modifications of channel design and secondary programming via selective membrane wetting. The protocols support the printing of multiple reagents without registration steps and offer fast recycle times. The second part describes a non-enzymatic, isothermal method to discriminate single nucleotide polymorphisms (SNPs). SNP discrimination using alkaline dehybridization has long been neglected because the pH range in which thermodynamic discrimination can be done is quite narrow. We found, however, that SNPs can be discriminated by the kinetic differences exhibited in the dehybridization of perfect-match (PM) and mismatch (MM) DNA duplexes in an alkaline solution using fluorescence microscopy. We combined this method with a multifunctional encoded hydrogel particle array (fabricated by stop-flow lithography) to achieve fast kinetics and high versatility. This approach may serve as an effective alternative to temperature-based methods for analyzing unamplified genomic DNA in point-of-care diagnostics.
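The kinetic discrimination described above can be sketched as first-order dehybridization with a faster off-rate for the mismatched duplex; the rate constants below are illustrative, not measured values:

```python
import math

def fraction_hybridised(t_s, k_off_per_s):
    """First-order dehybridisation: fraction of duplexes remaining
    after time t at off-rate k_off."""
    return math.exp(-k_off_per_s * t_s)

# Hypothetical off-rates in alkaline buffer: the mismatched (MM) duplex
# dehybridises faster than the perfect-match (PM) duplex
k_pm, k_mm = 0.01, 0.10   # 1/s, illustrative only
t = 30.0                  # observation time, s
pm = fraction_hybridised(t, k_pm)
mm = fraction_hybridised(t, k_mm)
print(f"PM remaining: {pm:.2f}, MM remaining: {mm:.2f}, ratio: {pm / mm:.1f}")
```

The ratio of remaining fluorescent signal grows with time, which is why a kinetic readout can discriminate a SNP even when the thermodynamic window is narrow.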

Relevance:

80.00%

Publisher:

Abstract:

Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and chemically more complex, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide 2-dimensional projection (shadow) images of the 3D structure, leaving the 3-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometer resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of the post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method, named dictionary learning electron tomography (DLET), is proposed.
DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) in an adaptive way and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, sparsity is applied to overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulated and real ET experiments on several morphologies were performed with a variety of setups. The reconstruction results validate its efficiency in both noiseless and noisy cases and show that it yields an improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select or whether the images used strictly follow the pre-conditions of a certain transform (e.g. strictly piecewise constant for Total Variation minimisation). This also avoids artifacts that can be introduced by specific sparsifying transforms (e.g. the staircase artifacts that may result when using Total Variation minimisation). Moreover, this thesis shows how reliable elementally sensitive tomography using electron energy loss spectroscopy (EELS) is possible with the aid of both the appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss EELS from nanoparticles of an industrially important material.
Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
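The sparsity prior that compressed sensing relies on can be illustrated in a few lines: a signal that is exactly sparse in an orthonormal dictionary (here a fixed DCT basis, whereas DLET learns its dictionary adaptively) is recovered perfectly from its few largest coefficients:

```python
import math
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis; row k is the k-th basis vector."""
    d = np.array([[math.cos(math.pi * (2 * j + 1) * k / (2 * n))
                   for j in range(n)] for k in range(n)])
    d[0] *= math.sqrt(1.0 / n)
    d[1:] *= math.sqrt(2.0 / n)
    return d

n = 64
D = dct_matrix(n)
x = 2.0 * D[2] + 1.0 * D[7]       # a signal that is exactly 2-sparse in this basis
coeffs = D @ x                    # analysis step (D is orthonormal)
keep = np.argsort(np.abs(coeffs))[-2:]   # indices of the 2 largest coefficients
sparse = np.zeros(n)
sparse[keep] = coeffs[keep]
x_hat = D.T @ sparse              # synthesis from only 2 of 64 coefficients
print("kept indices:", sorted(keep.tolist()))
print("reconstruction error:", float(np.linalg.norm(x - x_hat)))
```

Real tomograms are only approximately sparse, and the measurements are undersampled projections rather than the signal itself, which is why DLET needs the alternating dictionary-learning and data-restoration steps described above.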

Relevance:

80.00%

Publisher:

Abstract:

Successful implementation of fault-tolerant quantum computation on a system of qubits places severe demands on the hardware used to control the many-qubit state. It is known that an accuracy threshold Pa exists for any quantum gate that is to be used for such a computation to be able to continue for an unlimited number of steps. Specifically, the error probability Pe for such a gate must fall below the accuracy threshold: Pe < Pa. Estimates of Pa vary widely, though Pa ∼ 10⁻⁴ has emerged as a challenging target for hardware designers. I present a theoretical framework based on neighboring optimal control that takes as input a good quantum gate and returns a new gate with better performance. I illustrate this approach by applying it to a universal set of quantum gates produced using non-adiabatic rapid passage. Performance improvements are substantial compared to the original (unimproved) gates, both for ideal and non-ideal controls. Under suitable conditions, all gate error probabilities fall by 1 to 4 orders of magnitude below the target threshold of 10⁻⁴. After applying neighboring optimal control theory to improve the performance of quantum gates in a universal set, I further apply the general control theory in a two-step procedure for fault-tolerant logical state preparation, and I illustrate this procedure by preparing a logical Bell state fault-tolerantly. The two-step preparation procedure is as follows: Step 1 provides a one-shot procedure using neighboring optimal control theory to prepare a physical two-qubit state which is a high-fidelity approximation to the Bell state |β01⟩ = 1/√2(|01⟩ + |10⟩). I show that for ideal (non-ideal) control, an approximate |β01⟩ state can be prepared with error probability ϵ ∼ 10⁻⁶ (10⁻⁵) with one-shot local operations.
Step 2 then takes a block of p pairs of physical qubits, each prepared in |β01⟩ state using Step 1, and fault-tolerantly prepares the logical Bell state for the C4 quantum error detection code.
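The fidelity of an approximate Bell-state preparation can be checked directly from the state vector; a minimal sketch (the noisy amplitudes below are illustrative, not the thesis's results):

```python
import numpy as np

# Two-qubit computational basis ordering: |00>, |01>, |10>, |11>
bell_01 = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2.0)   # |beta_01>

def fidelity(psi, phi):
    """Pure-state fidelity |<psi|phi>|^2."""
    return abs(np.vdot(psi, phi)) ** 2

# A slightly imperfect preparation (amplitudes are illustrative only)
noisy = np.array([0.02, 0.70, 0.71, 0.03])
noisy = noisy / np.linalg.norm(noisy)
f = fidelity(bell_01, noisy)
print(f"fidelity = {f:.6f}, error probability = {1.0 - f:.2e}")
```

The quantity 1 − F plays the role of the preparation error probability ϵ quoted in the abstract.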

Relevance:

80.00%

Publisher:

Abstract:

The creation of thermostable enzymes has wide-ranging applications in industrial, scientific, and pharmaceutical settings. As various stabilization techniques exist, it is often unclear how to best proceed. To this end, we have redesigned Cel5A (HjCel5A) from Hypocrea jecorina (anamorph Trichoderma reesei) to comparatively evaluate several significantly divergent stabilization methods: 1) consensus design, 2) core repacking, 3) helix dipole stabilization, 4) FoldX ΔΔG approximations, 5) Triad ΔΔG approximations, and 6) entropy reduction through backbone stabilization. As several of these techniques require structural data, we initially solved the first crystal structure of HjCel5A to 2.05 Å. Results from the stabilization experiments demonstrate that consensus design works best at accurately predicting highly stabilizing and active mutations. FoldX and helix dipole stabilization, however, also performed well. Both methods rely on structural data and can reveal non-conserved, structure-dependent mutations with high fidelity. HjCel5A is a prime target for stabilization. Capable of cleaving cellulose strands from agricultural waste into fermentable sugars, this protein functions as the primary endoglucanase in an organism commonly used in the sustainable biofuels industry. Creating a long-lived, highly active thermostable HjCel5A would allow cellulose hydrolysis to proceed more efficiently, lowering production expenses. We employed information gleaned during the survey of stabilization techniques to generate HjCel5A variants demonstrating a 12-15 °C increase in the temperature at which 50% of the total activity persists, an 11-14 °C increase in optimal operating temperature, and a 60% increase over the maximal amount of hydrolysis achievable using the wild type enzyme. We anticipate that our comparative analysis of stabilization methods will prove useful in future thermostabilization experiments.
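The core of consensus design, picking the most frequent residue at each position of an alignment of homologs, can be sketched in a few lines; the toy alignment below is invented for illustration:

```python
from collections import Counter

def consensus(alignment):
    """Most frequent residue at each column of a set of aligned,
    equal-length homologous sequences (the core of consensus design)."""
    return "".join(Counter(col).most_common(1)[0][0]
                   for col in zip(*alignment))

# Toy alignment of equal-length homolog fragments (illustrative sequences)
homologs = ["MKTAYIA",
            "MKSAYLA",
            "MRTAYIA",
            "MKTGYIA"]
print(consensus(homologs))  # prints "MKTAYIA"
```

In practice, positions where a target sequence deviates from the consensus are candidate stabilizing mutations, subject to activity screening as in the work above.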

Relevance:

80.00%

Publisher:

Abstract:

The overarching theme of this thesis is mesoscale optical and optoelectronic design of photovoltaic and photoelectrochemical devices. In a photovoltaic device, light absorption and charge carrier transport are coupled together on the mesoscale, and in a photoelectrochemical device, light absorption, charge carrier transport, catalysis, and solution species transport are all coupled together on the mesoscale. The work discussed herein demonstrates that simulation-based mesoscale optical and optoelectronic modeling can lead to detailed understanding of the operation and performance of these complex mesostructured devices, serve as a powerful tool for device optimization, and efficiently guide device design and experimental fabrication efforts. In-depth studies of two mesoscale wire-based device designs illustrate these principles—(i) an optoelectronic study of a tandem Si|WO3 microwire photoelectrochemical device, and (ii) an optical study of III-V nanowire arrays.

The study of the monolithic, tandem, Si|WO3 microwire photoelectrochemical device begins with development and validation of an optoelectronic model with experiment. This study capitalizes on synergy between experiment and simulation to demonstrate the model’s predictive power for extractable device voltage and light-limited current density. The developed model is then used to understand the limiting factors of the device and optimize its optoelectronic performance. The results of this work reveal that high fidelity modeling can facilitate unequivocal identification of limiting phenomena, such as parasitic absorption via excitation of a surface plasmon-polariton mode, and quick design optimization, achieving over a 300% enhancement in optoelectronic performance over a nominal design for this device architecture, which would be time-consuming and challenging to do via experiment.

The work on III-V nanowire arrays also starts as a collaboration of experiment and simulation aimed at gaining understanding of unprecedented, experimentally observed absorption enhancements in sparse arrays of vertically-oriented GaAs nanowires. To explain this resonant absorption in periodic arrays of high index semiconductor nanowires, a unified framework that combines a leaky waveguide theory perspective and that of photonic crystals supporting Bloch modes is developed in the context of silicon, using both analytic theory and electromagnetic simulations. This detailed theoretical understanding is then applied to a simulation-based optimization of light absorption in sparse arrays of GaAs nanowires. Near-unity absorption in sparse, 5% fill fraction arrays is demonstrated via tapering of nanowires and multiple wire radii in a single array. Finally, experimental efforts are presented towards fabrication of the optimized array geometries. A hybrid self-catalyzed and selective area MOCVD growth method is used to establish morphology control of GaP nanowire arrays. Similarly, morphology and pattern control of nanowires is demonstrated with ICP-RIE of InP. Optical characterization of the InP nanowire arrays gives proof of principle that tapering and multiple wire radii can lead to near-unity absorption in sparse arrays of InP nanowires.
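The 5% fill fraction quoted above follows from simple geometry: for a square-lattice array of vertical wires, ff = πr²/p². A quick sketch (the 500 nm pitch is illustrative):

```python
import math

def fill_fraction(radius_nm, pitch_nm):
    """Areal fill fraction of a square-lattice vertical nanowire array:
    ff = pi * r^2 / p^2."""
    return math.pi * radius_nm ** 2 / pitch_nm ** 2

def radius_for_fill(ff, pitch_nm):
    """Wire radius giving a target fill fraction on a square lattice."""
    return pitch_nm * math.sqrt(ff / math.pi)

p = 500.0  # nm pitch, illustrative
r = radius_for_fill(0.05, p)
print(f"r = {r:.1f} nm gives ff = {fill_fraction(r, p):.3f}")
```

Tapered wires and mixed radii, as used in the optimised arrays, change the local radius but the areal fill fraction is still set by this ratio.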

Relevance:

80.00%

Publisher:

Abstract:

1D and 2D patterning of uncharged micro- and nanoparticles via dielectrophoretic forces on photovoltaic z-cut Fe:LiNbO3 has been investigated for the first time. The technique has been successfully applied to dielectric microparticles of CaCO3 (diameter d = 1-3 µm) and metal nanoparticles of Al (d = 70 nm). In contrast to previous experiments on x- and y-cut crystals, the obtained patterns locally reproduce the light distribution with high fidelity. A simple model is provided to analyse the trapping process. The results show the remarkably good capabilities of this geometry for high-quality 2D light-induced dielectrophoretic patterning, overcoming the important limitations presented by previous configurations.
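Whether dielectrophoresis pulls a particle toward or away from high-field regions is set by the sign of the Clausius-Mossotti factor. A minimal sketch with illustrative static permittivities (the experiment's effective values may differ):

```python
def clausius_mossotti(eps_p, eps_m):
    """Real-valued Clausius-Mossotti factor for a sphere of relative
    permittivity eps_p in a medium of eps_m. A positive value means the
    particle is attracted to high-field regions (positive DEP); a
    negative value means it is repelled (negative DEP)."""
    return (eps_p - eps_m) / (eps_p + 2.0 * eps_m)

# Illustrative static permittivities: CaCO3-like particle (~9) in air (~1)
print(f"CM (particle in air):      {clausius_mossotti(9.0, 1.0):+.3f}")
# The same particle in a high-permittivity medium shows negative DEP
print(f"CM (particle in eps_m=80): {clausius_mossotti(9.0, 80.0):+.3f}")
```

The time-averaged DEP force scales as this factor times the gradient of |E|², so the photovoltaic space-charge fields of the illuminated crystal set where particles collect.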

Relevance:

80.00%

Publisher:

Abstract:

Understanding the ecology of migratory birds during the non-breeding season is necessary for ensuring their conservation. Using satellite telemetry data, we describe the winter ranging behaviour and movements of pallid harriers Circus macrourus that bred in Kazakhstan. We developed an ecological niche model for the species in Africa to identify the most suitable wintering areas for pallid harriers and the importance of habitat in determining the location of those areas. We also assessed how well represented suitable areas are in the network of protected areas. Individual harriers showed relatively high fidelity to wintering areas, but with the potential for interannual changes. The ecological niche model highlighted the importance of open habitats with natural vegetation. The most suitable areas for the species were located in eastern Africa. Suitable areas had a patchy distribution but were relatively well covered by the network of protected areas. The preferential use of habitats with natural vegetation by wintering pallid harriers and the patchiness of the most suitable areas highlight the harrier's vulnerability to land-use changes and the associated loss of natural vegetation in Africa. Conservation of harriers could be enhanced by preserving natural grasslands within protected areas and improving habitat management in the human-influenced portions of the species' core wintering areas.

Relevance:

80.00%

Publisher:

Abstract:

A NOx reduction efficiency higher than 95% with NH3 slip less than 30 ppm is desirable for heavy-duty diesel (HDD) engines using selective catalytic reduction (SCR) systems to meet the US EPA 2010 NOx standard and the 2014-2018 fuel consumption regulation. The SCR performance needs to be improved through experimental and modeling studies. In this research, a high fidelity global kinetic 1-dimensional 2-site SCR model with mass transfer, heat transfer and global reaction mechanisms was developed for a Cu-zeolite catalyst. The model simulates the SCR performance for the engine exhaust conditions with NH3 maldistribution and aging effects, and the details are presented. SCR experimental data were collected for the model development, calibration and validation from a reactor at Oak Ridge National Laboratory (ORNL) and an engine experimental setup at Michigan Technological University (MTU) with a Cummins 2010 ISB engine. The model was calibrated separately to the reactor and engine data. The experimental setup, test procedures including a surrogate HD-FTP cycle developed for transient studies and the model calibration process are described. Differences in the model parameters were determined between the calibrations developed from the reactor and the engine data. It was determined that the SCR inlet NH3 maldistribution is one of the reasons causing the differences. The model calibrated to the engine data served as a basis for developing a reduced order SCR estimator model. The effect of the SCR inlet NO2/NOx ratio on the SCR performance was studied through simulations using the surrogate HD-FTP cycle. The cumulative outlet NOx and the overall NOx conversion efficiency of the cycle are highest with a NO2/NOx ratio of 0.5. The outlet NH3 is lowest for the NO2/NOx ratio greater than 0.6. A combined engine experimental and simulation study was performed to quantify the NH3 maldistribution at the SCR inlet and its effects on the SCR performance and kinetics. 
The uniformity index (UI) of the SCR inlet NH3 and of the NH3/NOx ratio (ANR) was determined to be below 0.8 for the production system. The UI was improved to 0.9 after installation of a swirl mixer in the SCR inlet cone. A multi-channel model was developed to simulate the maldistribution effects. The results showed that reducing the UI of the inlet ANR from 1.0 to 0.7 caused a 5-10% decrease in NOx reduction efficiency and a 10-20 ppm increase in NH3 slip. Simulations of the steady-state engine data with the multi-channel model showed that NH3 maldistribution is a factor causing the differences between the calibrations developed from the engine and the reactor data. Reactor experiments were performed at ORNL using a Spaci-IR technique to study thermal aging effects. The test results showed that thermal aging (at 800°C for 16 hours) caused a 30% reduction in the NH3 stored on the catalyst under NH3 saturation conditions, and different axial concentration profiles under SCR reaction conditions. The kinetics analysis showed that thermal aging caused a reduction in total NH3 storage capacity (94.6 compared to 138 gmol/m³), different NH3 adsorption/desorption properties, and a decrease in the activation energy and pre-exponential factor for NH3 oxidation and for the standard and fast SCR reactions. Both the reduction in storage capability and the change in kinetics of the major reactions contributed to the change in the axial storage and concentration profiles observed in the experiments.
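A uniformity index like the one quoted above can be computed with one common definition, UI = 1 − Σ|cᵢ − c̄| / (2Nc̄); the ANR samples below are illustrative, not the study's measurements:

```python
def uniformity_index(values):
    """A common uniformity index for mixing quality:
    UI = 1 - sum(|c_i - mean|) / (2 * N * mean).
    UI = 1.0 means perfectly uniform; lower values mean a more
    maldistributed quantity across the catalyst face."""
    n = len(values)
    mean = sum(values) / n
    return 1.0 - sum(abs(v - mean) for v in values) / (2.0 * n * mean)

# Illustrative ANR samples across the catalyst inlet face
uniform = [1.0, 1.0, 1.0, 1.0]
skewed = [1.4, 1.2, 0.8, 0.6]
print(f"UI uniform: {uniformity_index(uniform):.2f}")  # 1.00
print(f"UI skewed:  {uniformity_index(skewed):.2f}")   # 0.85
```

Sweeping the inlet UI of a multi-channel SCR model in this way is how the sensitivity of NOx conversion and NH3 slip to maldistribution can be quantified.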