985 results for Solid separation problems
Abstract:
The activated sludge system is the biological treatment most widely used around the world for wastewater purification. Its performance depends on the correct operation of both the biological reactor and the secondary settler. When the sedimentation phase does not proceed correctly, unsettled biomass escapes with the effluent, causing an impact on the receiving environment. Solids separation problems are currently one of the main causes of inefficiency in the operation of activated sludge systems worldwide. They include filamentous bulking, viscous bulking, biological foaming, dispersed growth, pin-point floc and uncontrolled denitrification. The origin of separation problems generally lies in an imbalance between the main communities of microorganisms involved in the sedimentation of the biomass: floc-forming bacteria and filamentous bacteria. Because of this microbiological origin, their identification and control is not an easy task for plant managers. Knowledge-Based Decision Support Systems (KBDSS) are a group of software tools characterized by their capacity to represent heuristic knowledge and to handle large amounts of data. The objective of the present thesis is the development and validation of a KBDSS specifically designed to support plant managers in the control of solids separation problems of microbiological origin in activated sludge systems. To achieve this main objective, the KBDSS must exhibit the following characteristics: (1) the implementation of the system must be feasible and realistic to guarantee its correct operation; (2) the reasoning of the system must be dynamic and evolutionary in order to adapt to the needs of the target domain; and (3) the reasoning of the system must be intelligent. First, in order to guarantee the feasibility of the system, a small-scale study (Catalonia) was carried out, which made it possible to determine the variables most commonly used for the diagnosis and monitoring of the problems and the most feasible control methods, as well as to detect the main limitations that the system should overcome. The results of previous applications have shown that the main limitation in the development of KBDSSs is the structure of the knowledge base (KB), where all the knowledge acquired about the domain is represented together with the reasoning processes to follow. In our case, given the dynamics of the domain, these limitations could be aggravated if this design were not optimal. To this end, the Domino Model has been proposed as a tool for the conceptual design of the system. Finally, in line with the last objective, concerning intelligent reasoning, an Expert System (based on expert knowledge) and a Case-Based Reasoning system (based on experience) have been integrated as the main intelligent systems in charge of carrying out the reasoning of the KBDSS. Chapters 5 and 6 present, respectively, the development of the dynamic Expert System (ES) and of the temporal Case-Based Reasoning system, called the Episode-Based Reasoning System (EBRS). Next, Chapter 7 presents the details of the implementation of the overall system (KBDSS) in the G2 environment.
Then, Chapter 8 reports the results obtained during the 11 months of validation of the system, in which aspects such as its accuracy, capacity and usefulness were validated both experimentally (prior to implementation) and through its real implementation at the Girona WWTP. Finally, Chapter 9 lists the main conclusions derived from the present thesis.
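As a minimal illustration of the kind of heuristic knowledge such an Expert System encodes, the sketch below diagnoses the separation problems listed above from routine settler and microscopy observations. The variables, thresholds and rules are hypothetical placeholders, not the actual knowledge base of the thesis (which was implemented in G2).

```python
# Minimal sketch of a heuristic diagnosis rule set for activated sludge
# solids separation problems. All thresholds are hypothetical.
def diagnose(svi, filament_index, effluent_turbidity, foam_cover):
    """svi: sludge volume index [mL/g]; filament_index: 0-5 microscopy
    scale; effluent_turbidity [NTU]; foam_cover: fraction of surface (0-1)."""
    if svi > 150 and filament_index >= 4:
        return "filamentous bulking"
    if svi > 150 and filament_index <= 2:
        return "viscous (zoogloeal) bulking"
    if foam_cover > 0.3:
        return "biological foaming"
    if effluent_turbidity > 10 and filament_index <= 1:
        return "dispersed growth / pin-point floc"
    return "no solids separation problem detected"

print(diagnose(svi=180, filament_index=5, effluent_turbidity=8, foam_cover=0.1))
```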
Abstract:
A hybrid formulation for coupled pore fluid-solid deformation problems is proposed. The scheme is hybrid in the sense that a vertex-centred finite volume formulation is used for the analysis of the pore fluid and a particle method for the solid. The pore fluid formally occupies the same space as the solid particles. The size of the particles is not necessarily equal to the physical size of the material grains. A finite volume mesh for the pore fluid flow is generated by Delaunay triangulation. Each triangle possesses an initial porosity. Changes of the porosity are specified by the translations of the mass centers of the particles. Net pore pressure gradients are applied to the particle centers and are considered in the particle momentum balance. The potential of the model is illustrated by means of a simulation of coupled fracture and fluid flow developing in porous rock under biaxial compression conditions.
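A minimal sketch of one coupling step as described above: a Delaunay mesh over the particle mass centres, porosity updated from the volumetric strain of each triangle, and the pressure gradient of a linear interpolant fed back to the particle momentum balance. The 2-D setting, linear pressure interpolation and equal force sharing among vertices are illustrative assumptions, not the paper's exact discretization.

```python
# Sketch of the fluid-solid coupling step (illustrative assumptions: 2-D,
# linear pressure interpolation, triangle connectivity taken as fixed
# between the reference and current configurations).
import numpy as np
from scipy.spatial import Delaunay

def coupling_step(centers, pressures, porosity0, area0):
    """centers: (n,2) particle mass centres; pressures: (n,) nodal pore
    pressures; porosity0/area0: reference porosity and area per triangle."""
    tri = Delaunay(centers)                  # finite volume mesh from particle centres
    forces = np.zeros_like(centers)
    porosity = np.empty(len(tri.simplices))
    for k, simplex in enumerate(tri.simplices):
        x = centers[simplex]                 # triangle vertices (particle centres)
        p = pressures[simplex]
        # Triangle area from the cross product of two edge vectors.
        area = 0.5 * abs((x[1, 0] - x[0, 0]) * (x[2, 1] - x[0, 1])
                         - (x[1, 1] - x[0, 1]) * (x[2, 0] - x[0, 0]))
        # Porosity change follows the volumetric strain of the triangle:
        # the solid volume (1 - phi)*A is conserved.
        porosity[k] = 1.0 - (1.0 - porosity0[k]) * area0[k] / area
        # Constant gradient of the linear pressure interpolant, solved from
        # g . (x_i - x_0) = p_i - p_0 on the two edges.
        A = np.array([x[1] - x[0], x[2] - x[0]])
        g = np.linalg.solve(A, np.array([p[1] - p[0], p[2] - p[0]]))
        # Net pore-pressure force, shared equally by the three vertices,
        # enters each particle's momentum balance.
        forces[simplex] += -g * area / 3.0
    return porosity, forces
```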
Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied to improve the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as optimal operating conditions are needed. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography, with the assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse. It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows the feasible range of operating parameters that lead to the desired product purities to be predicted. It can be applied for the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is used to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and on the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable to high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects.
The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach to the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach performs better the higher the column efficiency and the lower the purity constraints.
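For reference, the competitive Langmuir isotherm model named above has the closed form q_i = a_i c_i / (1 + Σ_j b_j c_j). A minimal sketch with illustrative parameter values (not taken from the thesis):

```python
# Competitive Langmuir isotherm underlying the analytical design
# equations; parameter values are illustrative placeholders.
import numpy as np

def langmuir_competitive(c, a, b):
    """Solid-phase loadings q_i = a_i*c_i / (1 + sum_j b_j*c_j) for a
    binary (or n-ary) mixture; c, a, b are arrays of equal length."""
    c, a, b = map(np.asarray, (c, a, b))
    return a * c / (1.0 + np.dot(b, c))

# Example: binary feed where component 2 adsorbs more strongly.
a = np.array([2.0, 3.0])       # Henry coefficients a_i = q_sat_i * b_i
b = np.array([0.05, 0.10])     # equilibrium constants [L/g]
feed = np.array([10.0, 10.0])  # feed concentrations [g/L]
print(langmuir_competitive(feed, a, b))
```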
Abstract:
One of the key hindrances to the development of solid catalysts containing cobalt species for the partial oxidation of organic molecules at mild conditions in a conventional liquid phase is severe metal leaching. The leached, soluble Co species, with their higher degree of freedom, always outperform solid-supported Co species in oxidation catalysis. However, the homogeneous Co species concomitantly introduce separation problems. We recently reported, for the first time, a new catalyst system for the oxidation of organic molecules in supercritical CO2 using the principle of micellar catalysis. [CF3(CF2)8COO]2Co·xH2O (the fluorinated anionic moiety forms aqueous reverse micelles carrying water-soluble Co2+ cations in scCO2) was previously shown to be extremely active for the oxidation of toluene in the presence of sodium bromide in a water-CO2 mixture, giving 98% conversion and 99% selectivity to benzoic acid at 120 °C. In this study, we show the effects of varying the type of surfactant counterion and the length of the surfactant chain on the catalysis. It is found that [CF3(CF2)8COO]2Mg·yH2O/Co(II) acetate is as effective as [CF3(CF2)8COO]2Co·xH2O, and that the fluorinated chain length used has a subtle effect on the measured catalytic rate. It is also demonstrated that this new type of micellar catalyst in scCO2 can easily be separated via CO2 depressurisation and reused without noticeable deactivation. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
The synthesis of nano-sized ZIF-11 with an average size of 36 ± 6 nm is reported. This material has been named nano-zeolitic imidazolate framework-11 (nZIF-11). It has the same chemical composition and thermal stability as, and H2 and CO2 adsorption properties analogous to, conventional microcrystalline ZIF-11 (1.9 ± 0.9 μm). nZIF-11 has been obtained following the centrifugation route, typically used for solid separation, as a fast new technique (pioneering for MOFs) for obtaining nanomaterials in which the temperature, time and rotation speed can easily be controlled. Compared to the traditional synthesis consisting of stirring + separation, the reaction time was lowered from several hours to a few minutes when using this centrifugation synthesis technique. For the same reaction time (2, 5 or 10 min), micro-sized ZIF-11 was obtained with the traditional synthesis, while nano-scale ZIF-11 was achieved only by centrifugation synthesis. The small particle size obtained for nZIF-11 allowed the wet MOF sample to be used as a colloidal suspension stable in chloroform. This helped to prepare mixed matrix membranes (MMMs) by direct addition of the membrane polymer (polyimide Matrimid®) to the colloidal suspension, avoiding the particle agglomeration that results from drying. The MMMs were tested for H2/CO2 separation, improving on the pure polymer membrane performance, with a H2 permeability of 95.9 Barrer and a H2/CO2 separation selectivity of 4.4 at 35 °C. When measured at 200 °C, these values increased to 535 Barrer and 9.1, respectively.
Abstract:
This study presents the first part of a CFD study on the performance of a downer reactor for biomass pyrolysis. The reactor was equipped with a novel gas-solid separation method developed by the co-authors at ICFAR (Canada). The separator, which was designed to allow fast separation of clean pyrolysis gas, consisted of a cone deflector and a gas exit pipe installed inside the downer reactor. A multi-fluid (Eulerian-Eulerian) model with constitutive relations adopted from the kinetic theory of granular flow was used to simulate the multiphase flow. The effects of various parameters, including operating conditions, separator geometry and particle properties, on the overall hydrodynamics and separation efficiency were investigated. The model prediction of the separator efficiency was compared with experimental measurements. The results revealed distinct hydrodynamic features around the cone separator, allowing for up to 100% separation efficiency. The developed model provides a platform for the second part of the study, in which the biomass pyrolysis is simulated and the product quality is analysed as a function of operating conditions. Crown Copyright © 2014 Published by Elsevier B.V. All rights reserved.
Abstract:
A Eulerian-Eulerian CFD model was used to investigate the fast pyrolysis of biomass in a downer reactor equipped with a novel gas-solid separation mechanism. The highly endothermic pyrolysis reaction was assumed to be entirely driven by an inert solid heat carrier (sand). A one-step global pyrolysis reaction, along with the equations describing biomass drying and heat transfer, was implemented in the hydrodynamic model presented in part I of this study (Fuel Processing Technology, V126, 366-382). The predictions of the gas-solid separation efficiency, temperature distribution, residence time and pyrolysis product yield are presented and discussed. For the operating conditions considered, the devolatilisation efficiency was found to be above 60%, and the yield composition in mass fractions was 56.85% bio-oil, 37.87% bio-char and 5.28% non-condensable gas (NCG). This was found to agree reasonably well with recent relevant published experimental data. The novel gas-solid separation mechanism achieved a separation efficiency greater than 99.9% and a pyrolysis gas residence time below 2 s. The model has been found to be robust and fast in terms of computational time, and thus has great potential to aid in the future design and optimisation of the biomass fast pyrolysis process.
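A minimal sketch of a one-step global pyrolysis reaction of the kind implemented in the model: biomass decomposes at an Arrhenius rate into bio-oil, bio-char and NCG in fixed proportions. The kinetic parameters and the product split below are generic placeholders, not the values used in the study.

```python
# One-step global pyrolysis kinetics sketch (generic placeholder
# parameters, not the study's values).
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314             # gas constant [J/(mol K)]
A, E = 1.0e5, 8.0e4   # pre-exponential factor [1/s], activation energy [J/mol]
# Assumed fixed product split (mass fractions) for the single reaction:
Y_OIL, Y_CHAR, Y_NCG = 0.57, 0.38, 0.05

def rhs(t, m, T):
    """m = [biomass, bio-oil, bio-char, NCG] masses; T in kelvin."""
    k = A * np.exp(-E / (R * T))   # Arrhenius rate constant
    r = k * m[0]                   # biomass -> products
    return [-r, Y_OIL * r, Y_CHAR * r, Y_NCG * r]

# Product masses after a 2 s residence time at 500 C (773 K).
sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0, 0.0, 0.0], args=(773.0,))
print(sol.y[:, -1])
```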
Abstract:
This paper presents the results of experiments carried out in a laboratory-scale photochemical reactor on the photodegradation of different polymers in aqueous solutions by the photo-Fenton process. Solutions of three polymers, polyethylene glycol (PEG), polyacrylamide (PAM), and polyvinylpyrrolidone (PVP), were tested under different conditions. The reaction progress was evaluated by sampling and analyzing the total organic carbon (TOC) concentration in solution over the reaction time. The behavior of the different polymers is discussed based on the evolution of the TOC-time curves. Under specific reaction conditions, the formation and coalescence of solid particles was visually observed. Solids formation occurred simultaneously with a sharp decrease in the TOC of the liquid phase. This may be favorable for the treatment of industrial wastewater containing polymers, since the photodegradation process can be coupled with solid separation systems, which may reduce the treatment cost. (C) 2008 Elsevier B.V. All rights reserved.
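Where TOC-time curves are compared quantitatively, an apparent first-order decay is a common first model. A minimal sketch with synthetic data (the abstract itself discusses the curves only qualitatively):

```python
# Fitting an apparent first-order decay to TOC-time data; the data
# points below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 15, 30, 60, 90, 120])       # reaction time [min]
toc = np.array([100, 74, 55, 30, 17, 9.5])   # TOC [mg C / L] (synthetic)

def first_order(t, toc0, k):
    return toc0 * np.exp(-k * t)

(toc0, k), _ = curve_fit(first_order, t, toc, p0=(100.0, 0.01))
print(f"TOC0 = {toc0:.1f} mg/L, k = {k:.4f} 1/min")
```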
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
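A minimal sketch of the projection loop just described (data assumed already reduced to the signal subspace and pure pixels assumed present, as in the chapter; the published VCA algorithm adds SNR-dependent preprocessing omitted here):

```python
# Simplified VCA-style endmember extraction: iteratively project onto a
# direction orthogonal to the span of the endmembers found so far and
# take the extreme of the projection as the next endmember.
import numpy as np

def vca_sketch(X, p, seed=0):
    """X: (bands, pixels) spectral vectors; p: number of endmembers.
    Returns the indices of the selected (purest) pixels."""
    rng = np.random.default_rng(seed)
    E = np.zeros((X.shape[0], p))   # endmember signatures found so far
    idx = []
    for i in range(p):
        # Random direction projected onto the orthogonal complement of span(E).
        w = rng.standard_normal(X.shape[0])
        if i > 0:
            Q, _ = np.linalg.qr(E[:, :i])
            w = w - Q @ (Q.T @ w)
        w /= np.linalg.norm(w)
        # The extreme of the projection gives the new endmember.
        k = int(np.argmax(np.abs(w @ X)))
        idx.append(k)
        E[:, i] = X[:, k]
    return idx

# Usage: indices of 3 candidate endmembers in a random test matrix.
print(vca_sketch(np.random.rand(50, 1000), p=3))
```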
Abstract:
A simple method for determining airborne monoethanolamine has been developed. Monoethanolamine determination has traditionally been difficult due to analytical separation problems. Even in recent sophisticated methods, this difficulty remains the major issue, often resulting in time-consuming sample preparation. Impregnated glass fiber filters were used for sampling. Desorption of monoethanolamine was followed by capillary GC analysis with nitrogen-phosphorus selective detection. Separation was achieved using a column suited to monoethanolamine (35% diphenyl and 65% dimethyl polysiloxane). The internal standard was quinoline. Derivatization steps were not needed. The calibration range was 0.5-80 μg/mL with good correlation (R² = 0.996). Averaged overall precisions and accuracies were 4.8% and -7.8% for intraday (n = 30), and 10.5% and -5.9% for interday (n = 72). Mean recovery from spiked filters was 92.8% for the intraday variation and 94.1% for the interday variation. Monoethanolamine on stored spiked filters was stable for at least 4 weeks at 5 °C. The newly developed method was applied among professional cleaners; air concentrations (n = 4) were 0.42 and 0.17 mg/m³ for personal and 0.23 and 0.43 mg/m³ for stationary measurements. The monoethanolamine air concentration method described here is simple, sensitive, and convenient in terms of both sampling and analysis.
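A minimal sketch of the internal-standard calibration step implied here: a linear fit of the analyte/quinoline peak-area ratio against standard concentrations over the reported 0.5-80 μg/mL range. The peak-area values below are synthetic placeholders.

```python
# Internal-standard calibration sketch (synthetic peak-area data).
import numpy as np

conc = np.array([0.5, 5, 20, 40, 80])                    # standards [ug/mL]
area_ratio = np.array([0.031, 0.30, 1.22, 2.41, 4.90])   # MEA / quinoline (synthetic)

slope, intercept = np.polyfit(conc, area_ratio, 1)       # linear calibration
r2 = np.corrcoef(conc, area_ratio)[0, 1] ** 2
print(f"R^2 = {r2:.3f}")

def quantify(ratio):
    """Concentration of an unknown from its MEA/IS peak-area ratio."""
    return (ratio - intercept) / slope

print(quantify(1.0))   # ug/mL for an area ratio of 1.0
```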
Abstract:
In industrial chromatography, the column is loaded as heavily as possible in order to maximize the amount of separated component per unit time. In this work, loading was studied by increasing the sample concentration of the feed solution, a synthetic molasses, at 80-125 °C. The eluent was pressurized hot water and the resin was a strong cation exchange resin in Na form based on PS-DVB. Raising the temperature narrowed the peaks and made them more symmetrical, speeded up the separation, and often improved the separation of the salt from the sugars. The dry solids content of the feed solution was increased stepwise up to 55 wt%, at which point no separation problems were yet observed. At 125 °C, inversion of sucrose, independent of the loading, was observed during the separation. When different stationary phases were compared, the Na-form PS-DVB-based cation exchange resin was found, almost without exception, to separate sugars, sugar alcohols, oligosaccharides and betaine better at low concentrations than a neutral resin or a Na-form zeolite. The separation generally did not improve with increasing temperature, but the peaks narrowed and the separation became faster. The separation of monosaccharides on the cation exchange resin deteriorated at 125 °C. When the suitability of health-promoting xylo-oligosaccharides for subcritical separation was investigated, they were found to hydrolyse considerably under acidic conditions in a test tube at 100 °C within two hours. Impurities present in the sample were found to have catalysed the hydrolysis. Hydrolysis was slower under neutral conditions at elevated temperature. From this it was concluded that subcritical conditions are not suitable for the separation of xylo-oligosaccharides.
Abstract:
The implementation of European Directive 91/271/EEC concerning urban wastewater treatment promoted the construction of new facilities as well as the introduction of new technologies for nutrient removal in areas designated as sensitive. Both the design of these new infrastructures and the redesign of existing ones were carried out using approaches based essentially on economic objectives, owing to the need to complete the works within a relatively short period of time. These studies were based on heuristic knowledge or on numerical correlations derived from simplified deterministic models. As a result, many of the resulting wastewater treatment plants (WWTPs) were characterized by a lack of robustness and flexibility, poor controllability, frequent microbiological solids separation problems in the secondary settler, high operating costs and only partial nutrient removal, keeping them far from optimal operation. Many of these problems arose from inadequate design, which made the scientific community aware of the importance of the early conceptual design stages. Precisely for this reason, traditional design methods must evolve towards more complex evaluation systems that take multiple objectives into account, thus ensuring better plant performance. Despite the importance of conceptual design considering multiple objectives, there is still an important gap in the scientific literature addressing this research field. The objective pursued by this thesis is to develop a conceptual design method for WWTPs considering multiple objectives, so that it serves as a decision support tool when selecting the best alternative among different design options. This research work contributes a modular and evolutionary design method that combines different techniques such as: the hierarchical decision process, multicriteria analysis, preliminary multiobjective optimization based on sensitivity analysis, knowledge extraction and data mining techniques, multivariate analysis and uncertainty analysis based on Monte Carlo simulations. This has been achieved by subdividing the design method developed in this thesis into four main blocks: (1) hierarchical generation and multicriteria analysis of alternatives, (2) analysis of critical decisions, (3) multivariate analysis and (4) uncertainty analysis. The first block combines a hierarchical decision process with multicriteria analysis. The hierarchical decision process subdivides the conceptual design into a series of questions that are easier to analyse and evaluate, while the multicriteria analysis allows different objectives to be considered at the same time. In this way the number of alternatives to be evaluated is reduced, and the future design and operation of the plant is influenced by environmental, economic, technical and legal aspects. Finally, this block includes a sensitivity analysis of the weights, which provides information on how the ranking of alternatives varies as the relative importance of the design objectives changes. The second block combines sensitivity analysis, preliminary multiobjective optimization and knowledge extraction techniques to support the conceptual design of WWTPs, selecting the best alternative once critical decisions have been identified.
Critical decisions are those in which a choice must be made between alternatives that fulfil the design objectives to a similar degree but with different implications for the future structure and operation of the plant. This type of analysis provides a broader view of the design space and makes it possible to identify desirable (or undesirable) directions in which the design process may evolve. The third block of the thesis provides the multivariate analysis of the multicriteria matrices obtained during the evaluation of the design alternatives. Specifically, the techniques used in this research work comprise: 1) cluster analysis, 2) principal component analysis/factor analysis and 3) discriminant analysis. As a result, better access to the data is possible when selecting among the alternatives, providing more information for a more effective evaluation and ultimately increasing the knowledge of the evaluation process of the generated design alternatives. In the fourth and last block developed in this thesis, the different design alternatives are evaluated under uncertainty. The objective of this block is to study the change in decision making when an alternative is evaluated with or without uncertainty in the parameters of the models that describe its behaviour. Uncertainty in the model parameters is introduced by means of probability functions. Monte Carlo simulations are then carried out, in which random numbers drawn from these distributions are substituted for the model parameters, making it possible to study how the uncertainty propagates through the model. In this way it is possible to analyse the variation in the overall fulfilment of the design objectives for each alternative, the contributions of environmental, legal, economic and technical aspects to this variation, and finally the change in the selection of alternatives when the relative importance of the design objectives varies. Compared with traditional design approaches, the method developed in this thesis addresses design/redesign problems taking into account multiple objectives and multiple criteria. At the same time, the decision-making process shows objectively, transparently and systematically why one alternative is selected over the others, providing the option that best fulfils the stated objectives, showing its strengths and weaknesses and the main correlations between objectives and alternatives, and finally taking into account the possible uncertainty inherent in the model parameters used during the analyses. The capabilities of the developed method are demonstrated in this thesis through different case studies: selection of the type of biological nitrogen removal (case study #1), optimization of a control strategy (case study #2), redesign of a plant to achieve simultaneous carbon, nitrogen and phosphorus removal (case study #3) and finally analysis of plant-wide control strategies (case studies #4 and #5).
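A minimal sketch of the fourth block's core idea: propagating parameter uncertainty through a weighted multicriteria score by Monte Carlo simulation. The alternatives, criterion scores, spreads and weights below are purely illustrative, not taken from the case studies.

```python
# Monte Carlo propagation of uncertainty through a weighted
# multicriteria score (all numbers are illustrative placeholders).
import numpy as np

rng = np.random.default_rng(42)
weights = np.array([0.4, 0.3, 0.2, 0.1])  # environmental, economic, technical, legal

# Each alternative: mean criterion scores and standard deviations, standing
# in for probability functions on the underlying model parameters.
alternatives = {
    "A1": (np.array([0.8, 0.6, 0.7, 0.9]), np.array([0.05, 0.10, 0.05, 0.02])),
    "A2": (np.array([0.7, 0.8, 0.6, 0.9]), np.array([0.10, 0.05, 0.10, 0.02])),
}

n = 10_000
for name, (mu, sigma) in alternatives.items():
    samples = rng.normal(mu, sigma, size=(n, len(mu)))  # sampled criterion scores
    scores = samples @ weights                          # overall degree of fulfilment
    print(f"{name}: mean={scores.mean():.3f}, 5-95% range="
          f"({np.quantile(scores, 0.05):.3f}, {np.quantile(scores, 0.95):.3f})")
```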
Abstract:
In this work, we prepared a new magnetically recoverable CoO catalyst through the deposition of catalytically active metal nanoparticles of 2-3 nm on silica-coated magnetite nanoparticles, to facilitate solid separation from the liquid medium. The catalyst was fully characterized and presented interesting properties in the oxidation of cyclohexene, for example selectivity to the allylic oxidation product. It was also observed that CoO is the most active species when compared to Co2+, Co3O4 and Fe3O4 under the catalytic conditions studied.
Abstract:
The theoretical, practical and bibliographic aspects involved in the development of the doctoral thesis entitled "Structural modification of national bentonites: characterization and adsorption studies" are presented. The work consisted in the development of an adsorbent material from bentonites, of the montmorillonite type, structurally modified with the objective of increasing their capacity to adsorb organic and inorganic pollutants. The study aims to increase the added value of this mineral resource and falls within the area of liquid effluent treatment using non-traditional, efficient and low-cost adsorbents to replace activated carbon or ion exchange resins. The physical and chemical properties were studied: particle size distribution, surface area, electrokinetic potentials, cation exchange capacity, mineralogical composition, surface morphology and basal spacing, as well as the adsorptive properties of the untreated and modified (non-modified and pillared, respectively) clay minerals. The adsorption mechanisms involved and the development of a continuous reactor (adsorption on flocs) with solid/liquid separation are also discussed. The structural modifications of the clay minerals were carried out via homoionization with calcium chloride and subsequent intercalation with organic compounds with metal-chelating action. FENAN, the bentonite obtained by intercalation with ortho-phenanthroline (OF), showed the best technical feasibility in terms of adsorption, adsorption/desorption, flocculation, and pollutant accumulation in flocculated and non-flocculated form. In addition, studies of the reversibility of the intercalation revealed the high stability of OF in FENAN in strongly acidic solutions, where approximately 90% of the OF remains bound to the clay surface. The amount of OF adsorbed in the form of micellar units was 112 mg per gram of bentonite at pH 8.5 ± 0.5. Characterization of the bentonites by X-ray diffraction, thermal analysis, scanning electron microscopy and atomic force microscopy revealed that FENAN has a very stable structural behaviour throughout the adsorption/desorption sequence and that, after the adsorption of inorganic pollutants, the metal chelate formed shows high stability within the organobentonite structure. The accumulation capacity achieved by FENAN was 110 mg Cu/g of bentonite, higher than that of several alternative adsorbent materials proposed in other similar works. The accumulation studies of flocculated FENAN (FENANFLOC) indicated that the presence of flocculant, in the amount used, does not significantly affect the removal capacity of the modified bentonites. This behaviour allowed the development of the Expanded Adsorbent Floc Reactor (REFA), whose characteristics and operating parameters are discussed in detail. Finally, the results are discussed in terms of the interfacial phenomena involved and of the practical potential of this new adsorbent and of the new floc adsorption technique in the REFA.