895 results for Vehicle Operating Performance Modeling.
Abstract:
Problems involving multiphase flow in porous media are of great interest in many scientific and engineering applications, including Carbon Capture and Storage, oil recovery and groundwater remediation. The intrinsic complexity of multiphase systems and the multi-scale heterogeneity of geological formations represent the major challenges to understanding and modeling immiscible displacement in porous media. Upscaled descriptions based on generalizations of Darcy's law are widely used, but they are subject to several limitations for flows that exhibit hysteretic and history-dependent behaviors. Recent advances in high performance computing and the development of accurate methods to characterize pore space and phase distribution have fostered the use of models that allow sub-pore resolution. These models provide insight into flow characteristics that cannot be easily observed in laboratory experiments and can be used to explain the gap between physical processes and existing macro-scale models. We focus on direct numerical simulation: we solve the Navier-Stokes equations for mass and momentum conservation in the pore space and employ the Volume Of Fluid (VOF) method to track the evolution of the interface. In the VOF method, the distribution of the phases is described by a fluid function (whole-domain formulation) and special boundary conditions account for the wetting properties of the porous medium. In the first part of this thesis we simulate drainage in a 2-D Hele-Shaw cell filled with cylindrical obstacles. We show that the proposed approach can handle very large density and viscosity ratios and is able to model the transition from stable displacement to viscous fingering.
We then focus on the interpretation of the macroscopic capillary pressure, showing that pressure-averaging techniques are subject to several limitations and are not accurate in the presence of viscous effects and trapping. On the contrary, an energy-based definition allows separating viscous and capillary contributions. In the second part of the thesis we investigate inertia effects associated with abrupt and irreversible reconfigurations of the menisci caused by interface instabilities. As a prototype of these phenomena we first consider the dynamics of a meniscus in an angular pore. We show that, in a network of cubic pores, jumps and reconfigurations are so frequent that inertia effects lead to different fluid configurations. Due to the non-linearity of the problem, the distribution of the fluids influences the work done by pressure forces, which is in turn related to the pressure drop in Darcy's law. This suggests that these phenomena should be taken into account when upscaling multiphase flow in porous media. The last part of the thesis is devoted to proving the accuracy of the numerical approach by validation against experiments of unstable primary drainage in a quasi-2D porous medium (i.e., a Hele-Shaw cell filled with cylindrical obstacles). We perform simulations under different boundary conditions and using different models (2-D integrated and full 3-D), and we compare several macroscopic quantities with the corresponding experiments. Despite the intrinsic challenges of modeling unstable displacement, where by definition small perturbations can grow without bounds, the numerical method gives satisfactory results for all the cases studied.
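The whole-domain VOF formulation described above can be summarized compactly; a schematic reference form (standard one-fluid Navier-Stokes with a volume-fraction function α, not transcribed from the thesis itself):

```latex
% One-fluid momentum and mass conservation in the pore space
\rho(\alpha)\left(\partial_t \mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p
  + \nabla\cdot\!\left[\mu(\alpha)\left(\nabla\mathbf{u}+\nabla\mathbf{u}^{\top}\right)\right]
  + \mathbf{f}_\sigma ,
\qquad \nabla\cdot\mathbf{u} = 0

% Advection of the fluid (volume-fraction) function and mixture properties
\partial_t \alpha + \nabla\cdot(\alpha\,\mathbf{u}) = 0 ,
\qquad \rho(\alpha) = \alpha\rho_1 + (1-\alpha)\rho_2
```

Here \(\mathbf{f}_\sigma\) denotes the surface-tension force, localized at the interface; the single fluid function \(\alpha\) carries the phase distribution over the whole domain.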
Abstract:
English summary of the research project L'empresa xarxa a Catalunya. TIC, productivitat, competitivitat, salaris i beneficis a l'empresa catalana (The network enterprise in Catalonia: ICT, productivity, competitiveness, wages and profits in Catalan firms). The project's main objective is to show that the consolidation of a new strategic, organisational and business-activity model linked to investment in and use of ICT (the network enterprise) substantially modifies the behaviour patterns of business results, in particular productivity, competitiveness, workers' pay and profit. We tested the working hypotheses empirically using data from a survey of a representative sample of 2,038 Catalan firms. From the perspective of ICT investment and use alone, no direct relationship is observed between digital innovation processes and the results of Catalan firms' activity. We therefore had to segment the Catalan productive fabric in order to find the organisations in which digital technological and organisational co-innovation is most present, and in which intensive use of knowledge is a very common resource, so as to detect relevant impacts on the main business results. This is because the Catalan economy today has a dual productive structure.
Abstract:
Joints are always a concern in the construction and long-term performance of concrete pavements. Research has shown that some type of positive load transfer is needed across transverse joints. The same research has directed pavement designers to use round dowels spaced at regular intervals across the transverse joint to distribute vehicle loads both longitudinally and transversely across the joint. The goal is to reduce bearing stresses on the dowels and the two pavement slab edges and erosion of the underlying surface, and hence improve long-term joint and pavement structure performance. However, road salts cause metal corrosion in doweled joints, excessive bearing stresses hollow dowel ends, and construction processes are associated with pavement cracking at the ends of dowels. Dowels also become a larger cost factor when joint spacing is reduced to control curling and warping distress in pavements. Designers want to place adequate numbers of dowels at the proper locations to handle the anticipated loads and bearing stresses for the design life of the pavement. This interim report is the second of three reports on the evaluation of elliptical steel dowels. It provides an update on the testing and performance of the various shapes and sizes of dowels. It also documents the results of the first series of performance surveys and draws interim conclusions about the performance of various bar shapes, sizes, spacings, and basket configurations. In addition to the study of elliptical steel dowel performance, fiber reinforced polymers (FRP) are also tested as an elliptical dowel material (in contrast to steel) on a section of the highway construction north of the elliptical steel test sections.
Abstract:
Part 6 of the Manual on Uniform Traffic Control Devices (MUTCD) describes several types of channelizing devices that can be used to warn road users and guide them through work zones; these devices include cones, tubular markers, vertical panels, drums, barricades, and temporary raised islands. On higher speed/volume roadways, drums and/or vertical panels have been popular choices in many states due to their formidable appearance and the enhanced visibility they provide compared to standard cones. However, due to their larger size, drums also require more effort and storage space to transport, deploy and retrieve. Recent editions of the MUTCD have introduced new channelizing devices; of specific interest for this study is a taller (>36 inches) but thinner cone. While this new device does not offer a target value comparable to that of drums, it is significantly larger than standard cones and offers improved stability as well. In addition, these devices are more easily deployed and stored than drums, and they cost less. Further, for applications previously using both drums and tall cones, using tall cones alone allows delivery and setup by a single vehicle. An investigation of the effectiveness of the new channelizing devices provides a reference for states to use in selecting appropriate traffic control for high speed, high volume applications, especially for short-term or limited-duration exposures. This study includes a synthesis of common practices by state DOTs, as well as daytime and nighttime field observations of driver reactions using video detection equipment. The results of this study are promising for the day and night performance of the new tall cones, which compare favorably to drums when used for channelizing in tapers. The evaluation showed no statistical difference in merge distance and location, shy distance, or operating speed in either daytime or nighttime conditions.
The study should provide a valuable resource for state DOTs to utilize in selecting the most effective channelizing device for use on high speed/high volume roadways where timely merging by drivers is critical to safety and mobility.
Abstract:
A wide range of modelling algorithms is used by ecologists, conservation practitioners, and others to predict species ranges from point locality data. Unfortunately, the amount of data available is limited for many taxa and regions, making it essential to quantify the sensitivity of these algorithms to sample size. This is the first study to address this need by rigorously evaluating a broad suite of algorithms with independent presence-absence data from multiple species and regions. We evaluated predictions from 12 algorithms for 46 species (from six different regions of the world) at three sample sizes (100, 30, and 10 records). We used data from natural history collections to run the models, and evaluated the quality of model predictions with area under the receiver operating characteristic curve (AUC). With decreasing sample size, model accuracy decreased and variability increased across species and between models. Novel modelling methods that incorporate both interactions between predictor variables and complex response shapes (i.e. GBM, MARS-INT, BRUTO) performed better than most methods at large sample sizes but not at the smallest sample sizes. Other algorithms were much less sensitive to sample size, including an algorithm based on maximum entropy (MAXENT) that had among the best predictive power across all sample sizes. Relative to other algorithms, a distance metric algorithm (DOMAIN) and a genetic algorithm (OM-GARP) had intermediate performance at the largest sample size and among the best performance at the lowest sample size. No algorithm predicted consistently well with small sample size (n < 30) and this should encourage highly conservative use of predictions based on small sample size and restrict their use to exploratory modelling.
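The AUC score used above to evaluate model predictions can be computed directly from presence-absence test data and continuous suitability scores; a minimal pure-Python sketch using the rank-based (Mann-Whitney) formulation, with illustrative variable names not taken from the study:

```python
def auc(scores_presence, scores_absence):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen presence site receives a
    higher suitability score than a randomly chosen absence site
    (ties count one half)."""
    n_pairs = len(scores_presence) * len(scores_absence)
    wins = 0.0
    for p in scores_presence:
        for a in scores_absence:
            if p > a:
                wins += 1.0
            elif p == a:
                wins += 0.5
    return wins / n_pairs

# Perfect separation of presences from absences -> AUC = 1.0
print(auc([0.9, 0.8, 0.7], [0.3, 0.2, 0.1]))  # -> 1.0
```

An AUC of 0.5 corresponds to a model no better than random, which is why declining AUC with shrinking sample size signals loss of predictive power.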
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, combined with sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from data with ML algorithms. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be efficiently used in the decision-making process. (C) 2003 Elsevier Ltd. All rights reserved.
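The two-step hybrid scheme (an ML model for the long-range trend, stochastic simulation of its residuals) can be sketched as follows. This is a deliberately simplified stand-in: an ordinary least-squares trend replaces the paper's MLP/SVR models, and plain residual resampling replaces sequential conditional simulation; all names are illustrative:

```python
import random

def fit_linear_trend(xs, ys):
    """Least-squares line y = a + b*x, standing in for the ML trend model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def mlrss_realizations(xs, ys, grid, n_real=100, seed=0):
    """Trend prediction plus simulated residuals -> an ensemble of
    equally probable realizations, from which probability maps and
    uncertainty estimates can be derived."""
    a, b = fit_linear_trend(xs, ys)
    residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
    rng = random.Random(seed)
    return [[(a + b * g) + rng.choice(residuals) for g in grid]
            for _ in range(n_real)]
```

In the paper itself the residual step is a sequential simulation honouring the residual variogram; resampling is used here only to keep the sketch self-contained.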
Abstract:
Performance measurements of computer system components and software yield information that can be used to improve performance and to support hardware procurement decisions. This thesis introduces performance measurement and measurement programs, i.e. benchmarks. Various freely available benchmark programs suitable for analysing the performance of a Linux computing cluster were sought and evaluated. The benchmarks were grouped and assessed by testing their features on a Linux cluster. The thesis also discusses the challenges of making measurements and of parallel computing. Benchmarks were found for many purposes, and they proved to vary in quality and scope. They have also been assembled into software suites in order to give a broader picture of hardware performance than a single program can provide. It is essential to understand the rate at which data can be transferred to the processor from main memory, disk systems and other compute nodes. A typical benchmark program contains a computationally intensive mathematical algorithm of the kind used in scientific software. Depending on the benchmark, understanding and exploiting the results can be challenging.
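The kind of measurement such benchmarks perform can be illustrated with a toy memory-transfer timing: copying a large in-memory buffer and reporting the achieved rate (a simplified stand-in for STREAM-style bandwidth benchmarks; absolute numbers depend heavily on the interpreter and hardware):

```python
import time

def copy_bandwidth(n_bytes=64 * 1024 * 1024, repeats=3):
    """Time copying a large byte buffer and report the best rate in MB/s.
    The best of several repeats is kept, as benchmarks commonly do, to
    reduce the influence of transient system load."""
    src = bytearray(n_bytes)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst = bytes(src)              # one full pass over the buffer
        best = min(best, time.perf_counter() - t0)
        assert len(dst) == n_bytes    # keep the copy from being optimized away
    return n_bytes / best / 1e6

print(f"copy bandwidth: {copy_bandwidth():.0f} MB/s")
```

Real suites measure the same quantity for disk and network transfers as well, since cluster performance is often limited by the slowest of these paths.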
Abstract:
This Master's thesis deals with applying process-oriented thinking to the development of the operations of a public healthcare organisation. The goal of the work is to study healthcare service processes and make their operation more efficient. The work analyses a particular set of service processes, a service chain from the customer's point of view, and seeks to identify its problem areas and development targets. The theoretical part of the work grounds the study with an overview of the operation of a public organisation and the principles of process management, with emphasis on customer orientation and technology. The analysis of the service chain was carried out according to the principles of process modelling, using process-description software. The modelling was done in sessions attended by healthcare professionals, a technical expert in process modelling, and representatives of the research group. The current state of the process under study was documented as a process diagram with supporting documents. As a basis for addressing the problem areas, assumptions were defined concerning both information-technology solutions and changes to working practices. Based on the development proposals, two alternative target states were created for the service chain. The principles of process-oriented management proved applicable to analysing and developing the operational processes of public healthcare. The proposed improvements could achieve cost savings and better allocation of working time. It was also possible to improve the quality of patient care and patients' involvement in the service.
Abstract:
A highly sensitive ultra-high performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS) method was developed for the quantification of buprenorphine and its major metabolite norbuprenorphine in human plasma. In order to speed up the process and decrease costs, sample preparation was performed by simple protein precipitation with acetonitrile. To the best of our knowledge, this is the first application of this extraction technique for the quantification of buprenorphine in plasma. Matrix effects were strongly reduced and selectivity increased by using an efficient chromatographic separation on a sub-2μm column (Acquity UPLC BEH C18 1.7μm, 2.1×50mm) in 5min with a gradient of ammonium formate 20mM pH 3.05 and acetonitrile as mobile phase at a flow rate of 0.4ml/min. Detection was made using a tandem quadrupole mass spectrometer operating in positive electrospray ionization mode, using multiple reaction monitoring. The procedure was fully validated according to the latest Food and Drug Administration guidelines and the Société Française des Sciences et Techniques Pharmaceutiques. Very good results were obtained by using a stable isotope-labeled internal standard for each analyte, to compensate for the variability due to the extraction and ionization steps. The method was very sensitive with lower limits of quantification of 0.1ng/ml for buprenorphine and 0.25ng/ml for norbuprenorphine. The upper limit of quantification was 250ng/ml for both drugs. Trueness (98.4-113.7%), repeatability (1.9-7.7%), intermediate precision (2.6-7.9%) and internal standard-normalized matrix effects (94-101%) were in accordance with international recommendations. The procedure was successfully used to quantify plasma samples from patients included in a clinical pharmacogenetic study and can be transferred for routine therapeutic drug monitoring in clinical laboratories without further development.
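Quantification against a calibration curve with a stable isotope-labelled internal standard works by regressing the analyte/IS peak-area ratio on concentration and back-calculating unknowns from the fitted line; a schematic sketch (the calibrator responses below are invented for illustration and are not the validation data):

```python
def fit_calibration(concs, area_ratios):
    """Unweighted least-squares line: ratio = slope * conc + intercept."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(area_ratios) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(concs, area_ratios)) / \
            sum((x - mx) ** 2 for x in concs)
    return slope, my - slope * mx

def back_calculate(slope, intercept, ratio):
    """Concentration of an unknown from its measured analyte/IS area ratio."""
    return (ratio - intercept) / slope

# Calibrators spanning 0.1-250 ng/ml (the validated range for buprenorphine);
# response ratios are hypothetical, roughly linear values
concs = [0.1, 1, 10, 50, 100, 250]
ratios = [0.002, 0.021, 0.199, 1.003, 2.01, 4.98]
slope, intercept = fit_calibration(concs, ratios)
print(back_calculate(slope, intercept, 0.199))  # close to the 10 ng/ml calibrator
```

Routine methods often use weighted regression (e.g. 1/x²) so that the lower limit of quantification is not dominated by the high calibrators; the unweighted fit here is only for brevity.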
Abstract:
The goal of this Master's thesis is the detailed modelling of a pressurizer using the APROS and TRACE thermal-hydraulics codes. The pressurizer models built were tested by comparing the calculation results with measurement data from separate-effects tests on pressurizer filling, emptying and spraying. The main objective of the study is the validation of the APROS pressurizer model, using as reference data suitable pressurizer experiments from the PACTEL ATWS test series together with the MIT Pressurizer and Neptunus separate-effects tests. In addition, a model of the pressurizer of the Loviisa nuclear power plant was built and used to simulate a turbine trip transient, in order to determine possible effects on the APROS pressurizer calculations arising from the difference in scale between the power plant and the test facilities. Different nodalizations and modelling options, such as first- and second-order discretization of enthalpy, were tested in the simulations, and the results given by APROS and TRACE were compared comprehensively with each other. A significant shortcoming was found in the heat transfer correlations of the APROS pressurizer model, and a considerable improvement in the calculation results was obtained by introducing a new wall-condensation model. The TRACE simulations performed in this work are part of the United States Nuclear Regulatory Commission's international CAMP code-development and validation programme.
Abstract:
The use of belts in high-precision applications has become feasible because of the rapid development in motor and drive technology as well as the adoption of timing belts in servo systems. Belt drive systems provide high speed and acceleration, accurate and repeatable motion with high efficiency, long stroke lengths and low cost. The modeling of a linear belt-drive system and the design of its position control are examined in this work. Friction phenomena and the position-dependent elasticity of the belt are analyzed. Computer-simulated results show that the developed model is adequate. A PID controller for accurate tracking and position control is designed and applied to the real test setup. Both the simulation and the experimental results demonstrate that the designed controller meets the specified performance requirements.
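A discrete PID position controller of the kind applied to such a test setup can be sketched as follows; the plant here is a generic damped mass rather than the actual belt model, and all gains and parameters are illustrative, not the values tuned for the real drive:

```python
def simulate_pid(kp=120.0, ki=20.0, kd=30.0, setpoint=1.0,
                 mass=1.0, damping=2.0, dt=0.001, steps=5000):
    """Discrete PID loop driving a damped mass toward a position setpoint."""
    pos = vel = integral = 0.0
    prev_error = setpoint - pos
    for _ in range(steps):
        error = setpoint - pos
        integral += error * dt
        derivative = (error - prev_error) / dt
        force = kp * error + ki * integral + kd * derivative
        prev_error = error
        # plant: m * a = F - c * v  (simple stand-in for the belt dynamics)
        acc = (force - damping * vel) / mass
        vel += acc * dt
        pos += vel * dt
    return pos

print(simulate_pid())  # settles near the setpoint 1.0
```

A real belt-drive controller must additionally cope with the friction and position-dependent belt elasticity analysed in the work, which this rigid-mass plant omits.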
Abstract:
BACKGROUND: Reading volume and mammography screening performance appear positively correlated. Quality and effectiveness were compared across low-volume screening programmes targeting relatively small populations and operating under the same decentralised healthcare system. Except for accreditation of 2nd readers (restrictive vs non-restrictive strategy), these organised programmes had similar screening regimen/procedures and duration, which maximises comparability. Variation in performance and its determinants were explored in order to improve mammography practice and optimise screening performance. METHODS: Circa 200,000 screens performed between 1999 and 2006 (4 rounds) in 3 longest standing Swiss cantonal programmes (of Vaud, Geneva and Valais) were assessed. Indicators of quality and effectiveness were assessed according to European standards. Interval cancers were identified through linkage with cancer registries records. RESULTS: Swiss programmes met most European standards of performance with a substantial, favourable cancer stage shift. Up to a two-fold variation occurred for several performance indicators. In subsequent rounds, compared with programmes (Vaud and Geneva) that applied a restrictive selection strategy for 2nd readers, proportions of in situ lesions and of small cancers (≤1cm) were one third lower and halved, respectively, and the proportion of advanced lesions (stage II+) nearly 50% higher in the programme without a restrictive selection strategy. Discrepancy in second-year proportional incidence of interval cancers appears to be multicausal. CONCLUSION: Differences in performance could partly be explained by a selective strategy for second readers and a prior experience in service screening, but not by the levels of opportunistic screening and programme attendance. This study provides clues for enhancing mammography screening performance in low-volume programmes.
Abstract:
The accumulation of aqueous pollutants is becoming a global problem. The search for suitable methods and/or combinations of water treatment processes is a task that can slow down and stop the process of water pollution. In this work, the method of wet oxidation was considered as an appropriate technique for the elimination of the impurities present in paper mill process waters. It has been shown that, when combined with traditional wastewater treatment processes, wet oxidation offers many advantages. The combination of coagulation and wet oxidation offers a new opportunity for the improvement of the quality of wastewater designated for discharge or recycling. First of all, the utilization of coagulated sludge via wet oxidation provides a conditioning process for the sludge, i.e. dewatering, which is rather difficult to carry out with untreated waste. Secondly, Fe2(SO4)3, which is employed earlier as a coagulant, transforms the conventional wet oxidation process into a catalytic one. The use of coagulation as the post-treatment for wet oxidation can offer the possibility of the brown hue that usually accompanies the partial oxidation to be reduced. As a result, the supernatant is less colored and also contains a rather low amount of Fe ions to be considered for recycling inside mills. The thickened part that consists of metal ions is then recycled back to the wet oxidation system. It was also observed that wet oxidation is favorable for the degradation of pitch substances (LWEs) and lignin that are present in the process waters of paper mills. Rather low operating temperatures are needed for wet oxidation in order to destruct LWEs. The oxidation in the alkaline media provides not only the faster elimination of pitch and lignin but also significantly improves the biodegradable characteristics of wastewater that contains lignin and pitch substances.
During the course of the kinetic studies, a model, which can predict the enhancements of the biodegradability of wastewater, was elaborated. The model includes lumped concentrations such as the chemical oxygen demand and biochemical oxygen demand and reflects a generalized reaction network of oxidative transformations. Later developments incorporated a new lump, the immediately available biochemical oxygen demand, which increased the fidelity of the predictions made by the model. Since changes in biodegradability occur simultaneously with the destruction of LWEs, an attempt was made to combine these two facts for modeling purposes.
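The lumped-concentration idea can be illustrated with a minimal reaction network in which partial oxidation converts non-biodegradable COD into biodegradable matter (BOD) while both lumps are also mineralized. The network structure and all rate constants below are invented for illustration; they are not the fitted values from the kinetic study:

```python
def simulate_lumps(cod0=1000.0, bod0=200.0,
                   k_to_bod=0.010, k_cod_min=0.004, k_bod_min=0.006,
                   dt=0.1, t_end=120.0):
    """Explicit Euler integration of a toy two-lump network:
       COD_nb -> BOD  (partial oxidation to biodegradables)
       COD_nb -> CO2  (direct mineralization)
       BOD    -> CO2  (mineralization of biodegradables)"""
    cod_nb = cod0 - bod0            # non-biodegradable fraction of the COD
    bod = bod0
    t = 0.0
    while t < t_end:
        d_conv = k_to_bod * cod_nb * dt
        d_min = k_cod_min * cod_nb * dt
        d_bod = k_bod_min * bod * dt
        cod_nb -= d_conv + d_min
        bod += d_conv - d_bod
        t += dt
    total_cod = cod_nb + bod
    return total_cod, bod / total_cod   # remaining COD, biodegradability ratio

total_cod, biodeg = simulate_lumps()
```

With conversion dominating mineralization, total COD falls while the BOD/COD ratio rises, which is the biodegradability enhancement the model is meant to predict.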
Abstract:
Firms operating in a changing environment have a need for structures and practices that provide flexibility and enable rapid response to changes. Given the challenges they face in attempts to keep up with market needs, they have to continuously improve their processes and products, and develop new products to match market requirements. Success in changing markets depends on the firm's ability to convert knowledge into innovations, and consequently their internal structures and capabilities have an important role in innovation activities. According to the dynamic capability view of the firm, firms thus need dynamic capabilities in the form of assets, processes and structures that enable strategic flexibility and support entrepreneurial opportunity sensing and exploitation. Dynamic capabilities are also needed in conditions of rapid change in the operating environment, and in activities such as new product development and expansion to new markets. Despite the growing interest in these issues and the theoretical developments in the field of strategy research, there are still only very few empirical studies, and large-scale empirical studies in particular, that provide evidence that firms' dynamic capabilities are reflected in performance differences. This thesis represents an attempt to advance the research by providing empirical evidence of the linkages between the firm's dynamic capabilities and performance in internationalization and innovation activities. The aim is thus to increase knowledge and enhance understanding of the organizational factors that explain interfirm performance differences. The study is in two parts. The first part is the introduction and the second part comprises five research publications covering the theoretical foundations of the dynamic capability view and subsequent empirical analyses. Quantitative research methodology is used throughout. The thesis contributes to the literature in several ways.
While a lot of prior research on dynamic capabilities is conceptual in nature, or conducted through case studies, this thesis introduces empirical measures for assessing the different aspects, and uses large-scale sampling to investigate the relationships between them and performance indicators. The dynamic capability view is further developed by integrating theoretical frameworks and research traditions from several disciplines. The results of the study provide support for the basic tenets of the dynamic capability view. The empirical findings demonstrate that the firm's ability to renew its knowledge base and other intangible assets, its proactive, entrepreneurial behavior, and the structures and practices that support operational flexibility are positively related to performance indicators.
Abstract:
The application of forced unsteady-state reactors to the selective catalytic reduction (SCR) of nitrogen oxides (NOx) with ammonia (NH3) is supported by the fact that favorable temperature and composition distributions, which cannot be achieved in any steady-state regime, can be obtained by means of unsteady-state operation. In normal operation, the low exothermicity of the SCR reaction (usually carried out in the range of 280-350°C) is not enough to sustain the chemical reaction by itself, so a supply of supplementary heat is usually required, increasing the overall operating cost. Through forced unsteady-state operation, the main advantage obtained for exothermic reactions is the possibility of trapping, besides the ammonia, the moving heat wave inside the catalytic bed. Unsteady-state operation enables the exploitation of the thermal storage capacity of the catalytic bed: the bed acts as a regenerative heat exchanger, allowing auto-thermal behaviour when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behavior are highly important steps in configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR), a forced unsteady-state reactor, corresponds to the above-mentioned characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. Beside its advantages, the RFR has a main disadvantage: the 'wash out' phenomenon, i.e. emissions of unconverted reactants at every switch of the flow direction. As a consequence, our attention was focused on finding an alternative reactor configuration that is not affected by these uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated.
Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the reactant feeding position. In the RN the flow direction is maintained, ensuring uniform catalyst exploitation, and at the same time the 'wash out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network presents only a small range of switching times which allow an ignited state to be reached and maintained. Even so, a proper study of the complex behavior of the RN may give the information necessary to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interaction between heat and mass transport phenomena. Such complex interactions can give rise to remarkably complex dynamic behavior characterized by spatio-temporal patterns, chaotic changes in concentration and traveling waves of heat or chemical reactivity. The main efforts of current research concern the improvement of contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used. Attention to these aspects is important when high activity, even at low feed temperatures, and low emissions of unconverted reactants are the main operational concerns.
The prediction of the reactor pseudo-steady or steady-state performance (conversion, selectivity and thermal behavior) and of the dynamic reactor response during exploitation are also important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge of the influence of its operating conditions on the overall process performance and a precise evaluation of the range of operating parameters for which a sustained dynamic behavior is obtained. An a priori estimation of the system parameters results in a reduction of the computational effort: the convergence of unsteady-state reactor systems usually requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation modes and thermal transfer strategies provides reliable means to obtain recuperative and regenerative devices capable of maintaining auto-thermal behavior in the case of low-exothermic reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the presentation of the general problems related to the effect of noxious emissions on the environment, the analysis of the catalyst types suitable for the process, the mathematical approach for modeling and finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to obtain, quickly and easily, information about forced unsteady-state reactor design and operation, the important system parameters and their values, the mathematical description, the mathematical methods for solving systems of partial differential equations and other specific aspects, a case-based reasoning (CBR) approach has been used. 
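The retrieve step at the heart of such a CBR system can be sketched generically; the case attributes and solutions below (`t_switch`, `T_feed`, the stored configurations) are hypothetical placeholders, not the actual case base built in the thesis. Given a new problem description, the system returns the stored case whose operating parameters are closest, here by a weighted L1 distance.

```python
# Hypothetical case base: each case pairs a problem description
# (operating parameters) with the solution that worked for it.
CASES = [
    {"params": {"t_switch": 100.0, "T_feed": 300.0}, "solution": "RFR, 2 beds"},
    {"params": {"t_switch": 600.0, "T_feed": 320.0}, "solution": "RN, 3 reactors"},
    {"params": {"t_switch": 150.0, "T_feed": 450.0}, "solution": "RFR, inert ends"},
]

def retrieve(query, cases, weights):
    """Return the stored case whose parameters are closest to the query
    (weighted L1 distance) -- the 'retrieve' step of the CBR cycle."""
    def distance(case):
        return sum(w * abs(case["params"][k] - query[k])
                   for k, w in weights.items())
    return min(cases, key=distance)

best = retrieve({"t_switch": 550.0, "T_feed": 310.0}, CASES,
                {"t_switch": 1.0, "T_feed": 1.0})
```

The retrieved case's solution is then adapted to the new problem, which is where most of the domain knowledge about forced unsteady-state reactors enters the cycle.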
This approach, which reuses the experience of past similar problems and their adapted solutions, may provide a method for obtaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, giving up the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was adopted because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity in order to improve the operation, but it is possible to change the operating regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices was carried out. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation simplified the analysis, making it possible to focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance. The non-isothermal system was then investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and the RN as recuperative and regenerative devices and of achieving a sustained auto-thermal behavior in the case of the low-exothermic SCR of NOx with ammonia with low-temperature gas feeding. 
Beside the influence of the thermal effect, the influence of the principal operating parameters, such as the switching time, the inlet flow rate and the initial catalyst temperature, has also been stressed. This analysis is important not only because it allows a comparison between the two devices and an optimisation of the operation, but also because the switching time is the main operating parameter: an appropriate choice of this parameter enables the process constraints to be fulfilled. The level of conversion achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation establish the RN as the more suitable device for the SCR of NOx with ammonia, both in usual operation and in the perspective of control strategy implementation. Simplified theoretical models have also been proposed in order to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics to perspectives that have not been analyzed yet. The experimental investigation of the RN revealed good agreement between the data obtained by model simulation and those obtained experimentally.