901 results for Filter-rectify-filter-model
Abstract:
In mammography, the image contrast and dose delivered to the patient are determined by the x-ray spectrum and the scatter to primary ratio S/P. Thus the quality of the mammographic procedure is highly dependent on the choice of anode and filter material and on the method used to reduce the amount of scattered radiation reaching the detector. Synchrotron radiation is a useful tool to study the effect of beam energy on the optimization of the mammographic process because it delivers a high flux of monochromatic photons. Moreover, because the beam is naturally flat collimated in one direction, a slot can be used instead of a grid for scatter reduction. We have measured the ratio S/P and the transmission factors for grids and slots for monoenergetic synchrotron radiation. In this way the effect of beam energy and scatter rejection method were separated, and their respective importance for image quality and dose analyzed. Our results show that conventional mammographic spectra are not far from optimum and that the use of a slot instead of a grid has an important effect on the optimization of the mammographic process. We propose a simple numerical model to quantify this effect.
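The abstract's central quantity, the scatter-to-primary ratio S/P, degrades subject contrast in a well-known first-order way: the contrast of a detail is reduced by the factor 1/(1 + S/P). A minimal sketch of this relation, with hypothetical S/P values chosen only for illustration (the abstract itself reports measured values):

```python
def contrast_degradation_factor(s_over_p):
    """Fraction of the primary-beam contrast that survives scatter:
    C = C0 / (1 + S/P)."""
    return 1.0 / (1.0 + s_over_p)

# Hypothetical figures: an open (no-rejection) geometry vs a scanning slot,
# which typically lowers S/P much more than a grid does.
c0 = 0.10                                        # primary contrast of a detail
c_open = c0 * contrast_degradation_factor(0.6)   # S/P ~ 0.6, no rejection
c_slot = c0 * contrast_degradation_factor(0.1)   # S/P ~ 0.1 behind a slot
```

The slot wins because it cuts S/P without attenuating the primary beam the way a grid's septa do, which is the dose argument made in the abstract.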
Abstract:
Rotaviruses are the major cause of severe diarrhea in infants and young children worldwide. Due to their restricted site of replication, i.e., mature enterocytes, local intestinal antibodies have been proposed to play a major role in protective immunity. Whether secretory immunoglobulin A (IgA) antibodies alone can provide protection against rotavirus diarrhea has not been fully established. To address this question, a library of IgA monoclonal antibodies (MAbs) previously developed against different proteins of rhesus rotavirus was used. A murine hybridoma "backpack tumor" model was established to examine if a single MAb secreted onto mucosal surfaces via the normal epithelial transport pathway was capable of protecting mice against diarrhea upon oral challenge with rotavirus. Of several IgA and IgG MAbs directed against VP8 and VP6 of rotavirus, only IgA VP8 MAbs (four of four) were found to protect newborn mice from diarrhea. An IgG MAb recognizing the same epitope as one of the IgA MAbs tested failed to protect mice from diarrhea. We also investigated if antibodies could be transcytosed in a biologically active form from the basolateral domain to the apical domain through filter-grown Madin-Darby canine kidney (MDCK) cells expressing the polymeric immunoglobulin receptor. Only IgA antibodies with VP8 specificity (four of four) neutralized apically administered virus. The results support the hypothesis that secretory IgA antibodies play a major role in preventing rotavirus diarrhea. Furthermore, the results show that the in vivo and in vitro methods described are useful tools for exploring the mechanisms of viral mucosal immunity.
Abstract:
This Bachelor's thesis presents the results of the analysis of genetic data from the EurGast2 project "Genetic susceptibility, environmental exposure and gastric cancer risk in an European population", a case-control study nested in the European EPIC cohort "European Prospective Investigation into Cancer and Nutrition", whose objective is the study of the genetic and environmental factors associated with the risk of developing gastric cancer (GC). Starting from the data of the EurGast2 study, in which 1,294 SNPs were analysed in 365 gastric cancer cases and 1,284 controls in a previous single-SNP analysis, the working hypothesis of this thesis is that some variants with a very weak marginal effect, but which jointly with other variants would be associated with GC risk, may have gone undetected. The main objective of the project is therefore the identification of second-order interactions between genetic variants of candidate genes involved in gastric carcinogenesis. The interaction analysis was carried out using the Model-Based Multifactor Dimensionality Reduction (MB-MDR) method, developed by Calle et al. in 2008, and two filtering strategies were applied to select the interactions to explore: 1) filtering of interactions involving a SNP significant in the single-SNP analysis, and 2) filtering of interactions according to the Synergy measure. The project identified 5 second-order SNP interactions significantly associated with an increased risk of developing gastric cancer, with p-values below 10^-4. The interactions identified involve the gene pairs MPO and CDH1, XRCC1 and GAS6, ADH1B and NR5A2, and IL4R and IL1RN (the latter validated under both filtering strategies).
Except for CDH1, none of these genes had previously been significantly associated with GC or prioritized in earlier analyses, which confirms the interest of analysing second-order genetic interactions. These interactions can serve as a starting point for further analyses aimed at confirming putative genes, at studying the biological and molecular mechanisms of carcinogenesis, and at the search for new therapeutic targets and more efficient diagnostic and prognostic methods.
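The "Synergy" filtering idea mentioned above can be illustrated with the information-theoretic interaction information, a common synergy score for SNP pairs: Syn(X1, X2; Y) = I(X1, X2; Y) − I(X1; Y) − I(X2; Y), which is positive when the pair carries information about disease status beyond the two marginals. This is a generic sketch of that measure, not the specific statistic or MB-MDR test used in the thesis:

```python
import numpy as np

def mutual_info(joint):
    """I(A;B) in bits from a joint count table."""
    p = joint / joint.sum()
    pa = p.sum(axis=1, keepdims=True)
    pb = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (pa @ pb)[nz])).sum())

def synergy(x1, x2, y, levels=3):
    """Interaction information of two SNP genotypes (coded 0/1/2) with a
    binary phenotype: I(X1,X2;Y) - I(X1;Y) - I(X2;Y)."""
    x12 = x1 * levels + x2                       # joint genotype code
    def counts(a, b, na, nb):
        t = np.zeros((na, nb))
        np.add.at(t, (a, b), 1)
        return t
    i_joint = mutual_info(counts(x12, y, levels * levels, 2))
    i_1 = mutual_info(counts(x1, y, levels, 2))
    i_2 = mutual_info(counts(x2, y, levels, 2))
    return i_joint - i_1 - i_2

# XOR-like epistasis: phenotype depends on the pair, not on either SNP alone,
# so each marginal I is 0 and the synergy is a full 1 bit.
x1 = np.array([0, 0, 1, 1] * 50)
x2 = np.array([0, 1, 0, 1] * 50)
y = x1 ^ x2
score = synergy(x1, x2, y)
```

Pairs like this are exactly the ones a single-SNP screen misses, which is the motivation the abstract gives for second-order analysis.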
Abstract:
We propose an in-depth study of tissue modelling and classification techniques on T1-weighted MR images. Three approaches have been taken into account in this validation study. Two of them are based on the Finite Gaussian Mixture (FGM) model: the first uses only pure Gaussian distributions (FGM-EM), while the second uses a different model for partial volume (PV) (FGM-GA). The third is based on a Hidden Markov Random Field (HMRF) model. All methods have been tested on a digital brain phantom image taken as the ground truth. Noise and intensity non-uniformities were added to simulate real image conditions, and the effect of an anisotropic filter was also considered. Results demonstrate that methods relying on both intensity and spatial information are in general more robust to noise and inhomogeneities; however, in some cases there are no significant differences between the presented methods.
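The FGM-EM approach fits a finite Gaussian mixture to the voxel intensities with the EM algorithm. A minimal one-dimensional two-class sketch (the real method uses several tissue classes plus a partial-volume model; the synthetic intensities below are assumptions for illustration):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Minimal EM for a 1-D Gaussian mixture: returns weights, means, stds."""
    mu = np.percentile(x, [100.0 * (i + 0.5) / k for i in range(k)])  # spread init
    sig = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each intensity
        d = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        r = w * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the weighted samples
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sig

# Two synthetic "tissue" intensity clusters (values are hypothetical)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(60, 5, 4000), rng.normal(110, 8, 6000)])
w, mu, sig = em_gmm_1d(x)
```

Classifying a voxel then amounts to picking the component with the highest responsibility; the HMRF variant adds a spatial prior on neighbouring labels, which is what buys the extra robustness to noise reported above.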
Abstract:
The noise power spectrum (NPS) is the reference metric for understanding the noise content of computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64-slice and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes, with an identical CT dose index for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process, and 2D and 3D NPSs were then computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed while the magnitude of the NPS remained constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak towards the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were thus studied using a local 3D NPS metric; however, the impact of non-stationary noise effects may need further investigation.
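The local 2D NPS is conventionally estimated from an ensemble of detrended uniform ROIs as NPS(u, v) = (Δx·Δy / (Nx·Ny)) · ⟨|DFT2(ROI − trend)|²⟩. A minimal sketch (mean detrending only; in practice a low-order polynomial fit is used, and the pixel size here is an assumed value):

```python
import numpy as np

def nps_2d(rois, pixel_mm=0.5):
    """Local 2D noise power spectrum from an ensemble of uniform ROIs:
    NPS(u,v) = (dx*dy / (Nx*Ny)) * <|FFT2(roi - mean)|^2>."""
    rois = np.asarray(rois, dtype=float)
    n_roi, nx, ny = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
    ps = np.abs(np.fft.fft2(detrended)) ** 2
    return (pixel_mm ** 2 / (nx * ny)) * ps.mean(axis=0)

# Sanity check via Parseval: integrating the NPS over frequency
# should recover the noise variance (here white noise with sigma = 2).
rng = np.random.default_rng(0)
rois = rng.normal(0.0, 2.0, size=(64, 32, 32))
nps = nps_2d(rois, pixel_mm=0.5)
variance = nps.sum() / (0.5 * 32 * 0.5 * 32)   # sum * du*dv, du = 1/(N*dx)
```

The 3D NPS used in the study is the same construction with a 3D FFT over sub-volumes, which is what captures the z-direction and lobular-shape effects the abstract describes.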
Abstract:
Assessment of image quality for digital x-ray mammography systems used in European screening programs relies mainly on contrast-detail CDMAM phantom scoring and requires the acquisition and analysis of many images in order to reduce variability in threshold detectability. Part II of this study proposes an alternative method based on the detectability index (d') calculated for a non-prewhitened model observer with an eye filter (NPWE). The detectability index was calculated from the normalized noise power spectrum and image contrast, both measured from an image of a 5 cm poly(methyl methacrylate) phantom containing a 0.2 mm thick aluminium square, and the pre-sampling modulation transfer function. This was performed as a function of air kerma at the detector for 11 different digital mammography systems. These calculated d' values were compared against threshold gold thickness (T) results measured with the CDMAM test object and against derived theoretical relationships. A simple relationship was found between T and d', as a function of detector air kerma; a linear relationship was found between d' and contrast-to-noise ratio. The values of threshold thickness used to specify acceptable performance in the European Guidelines for 0.10 and 0.25 mm diameter discs were equivalent to threshold calculated detectability indices of 1.05 and 6.30, respectively. The NPWE method is a validated alternative to CDMAM scoring for use in the image quality specification, quality control and optimization of digital x-ray systems for screening mammography.
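The NPWE detectability index combines the measured signal spectrum, MTF, eye filter and NPS as d'² = [∫∫ S² MTF² E² df]² / ∫∫ S² MTF² E⁴ NPS df. A radially symmetric numeric sketch, with hypothetical MTF, eye-filter and object shapes standing in for the measured quantities:

```python
import numpy as np

def dprime_npwe(contrast, obj_ft, mtf, eye, nps, f):
    """NPWE detectability index, radially integrated:
    d'^2 = [int S^2 MTF^2 E^2 2*pi*f df]^2 / int S^2 MTF^2 E^4 NPS 2*pi*f df."""
    df = f[1] - f[0]
    s2 = (contrast * obj_ft) ** 2 * mtf ** 2
    num = ((s2 * eye ** 2 * 2 * np.pi * f).sum() * df) ** 2
    den = (s2 * eye ** 4 * nps * 2 * np.pi * f).sum() * df
    return np.sqrt(num / den)

f = np.linspace(1e-3, 10, 2000)          # spatial frequency, mm^-1
mtf = np.exp(-(f / 8.0) ** 2)            # hypothetical detector MTF
eye = f * np.exp(-0.6 * f)               # assumed band-pass eye-filter shape
obj = np.exp(-(np.pi * 0.1 * f) ** 2)    # Gaussian-profile detail (~0.2 mm)
nps_white = np.full_like(f, 1e-5)        # flat NPS, arbitrary units

d_low = dprime_npwe(0.05, obj, mtf, eye, nps_white, f)
# halving the NPS (e.g. higher detector air kerma, quantum-limited noise)
d_high = dprime_npwe(0.05, obj, mtf, eye, nps_white / 2, f)
```

For white noise d' scales as 1/sqrt(NPS), which is the d'-versus-air-kerma behaviour the study exploits when relating d' to CDMAM threshold thickness.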
Abstract:
Introduction: We recently observed in a chronic ovine model that a shortening of action potential duration (APD), as assessed by the activation recovery interval (ARI), may be a mechanism whereby pacing-induced atrial tachycardia (PIAT) facilitates atrial fibrillation (AF), mediated by a return to 1:1 atrial capture after the effective refractory period has been reached. The aim of the present study is to evaluate the effect of long-term intermittent burst pacing on ARI before induction of AF.
Methods: We specifically developed a chronic ovine model of PIAT using two pacemakers (PM), each with a right atrial (RA) lead, separated by ~2 cm. The first PM (Vitatron T70) was used to record a broadband unipolar RA EGM (800 Hz, 0.4 Hz high-pass filter). The second was used to deliver PIAT during electrophysiological protocols at decremental pacing CL (400 beats, from 400 to 110 ms) and long-term intermittent RA burst pacing to promote electrical remodeling (5 s of burst followed by 2 s of sinus rhythm) until onset of sustained AF. ARI was defined as the time difference between the peak of the atrial repolarization wave and the first atrial depolarization. The mean ARIs of paired sequences (before and after remodeling), each consisting of 20 beats, were compared.
Results: As shown in the figure, ARIs (n=4 sheep, 46 recordings) decreased post remodeling compared to baseline (86±19 vs 103±12 ms, p<0.05). There was no difference in atrial structure as assessed by light microscopy between control and remodeled sheep.
Conclusions: Using standard pacemaker technology, atrial ARIs as a surrogate of APDs were successfully measured in vivo during the electrical remodeling process leading to AF. The facilitation of AF by PIAT mimicking salvos from pulmonary veins is heralded by a significant shortening of ARI.
Abstract:
In this paper we develop a new linear approach to identify the parameters of a moving average (MA) model from the statistics of the output. First, we show that, under some constraints, the impulse response of the system can be expressed as a linear combination of cumulant slices. Then, this result is used to obtain a new well-conditioned linear method to estimate the MA parameters of a non-Gaussian process. The proposed method presents several important differences with respect to existing linear approaches. The linear combination of slices used to compute the MA parameters can be constructed from different sets of cumulants of different orders, providing a general framework where all the statistics can be combined. Furthermore, it is not necessary to use second-order statistics (the autocorrelation slice), and therefore the proposed algorithm still provides consistent estimates in the presence of colored Gaussian noise. Another advantage of the method is that, while most linear methods developed so far give totally erroneous estimates if the order is overestimated, the proposed approach does not require a previous estimation of the filter order. The simulation results confirm the good numerical conditioning of the algorithm and the improvement in performance with respect to existing methods.
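The classical closed-form example of estimating MA parameters from a single cumulant slice is the Giannakis c(q, k) formula for an MA(q) process: b(k) = C3y(q, k) / C3y(q, 0) with b(0) = 1. The paper generalizes this kind of slice-based relation; the sketch below only illustrates the principle, using exact analytic cumulants rather than sample estimates:

```python
import numpy as np

def c3_slice(b, gamma3, tau1, tau2):
    """Third-order output cumulant of y(n) = sum_k b(k) e(n-k) with i.i.d.
    input of skewness gamma3: C3y(t1,t2) = gamma3 * sum_n b(n)b(n+t1)b(n+t2)."""
    q = len(b) - 1
    total = 0.0
    for n in range(q + 1):
        if 0 <= n + tau1 <= q and 0 <= n + tau2 <= q:
            total += b[n] * b[n + tau1] * b[n + tau2]
    return gamma3 * total

def giannakis_ma(b_true, gamma3, q):
    """Recover MA coefficients from one third-order cumulant slice:
    b(k) = C3y(q,k) / C3y(q,0), assuming b(0) = 1."""
    c0 = c3_slice(b_true, gamma3, q, 0)
    return np.array([c3_slice(b_true, gamma3, q, k) / c0 for k in range(q + 1)])

b = [1.0, -0.8, 0.35]                  # hypothetical MA(2), b(0) = 1
est = giannakis_ma(b, gamma3=2.0, q=2)
```

Because only higher-order cumulants enter, additive Gaussian noise (whose cumulants above order two vanish) does not bias such estimates, which is the consistency property the abstract highlights.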
Abstract:
The aim of this work is to survey municipal sewage sludge treatment from the perspective of a sludge incineration plant. Clarifying sludge treatment techniques and transport alternatives is thus the central goal of the work, together with an examination of the cost structure of these factors. Shedding light on the properties of municipal sludge and on the problem areas related to its treatment is likewise among the objectives. The work also includes case studies concerning the South-East Finland region, with the aim of forming a suitable sludge treatment model for each case. The first part of the work gives a general introduction to sludge in its roles as both a fuel and a waste, including information on sludge properties, the quantities generated, and the legislative aspects essential to sludge treatment. The wastewater treatment process, and hence the formation of sludge, is also briefly reviewed. Sludge pre-treatment, mechanical dewatering, thermal drying and incineration are examined in general terms; for mechanical dewatering, the equipment is also itemized and compared. The decanter centrifuge in particular, but also the belt filter press, proved especially suitable for municipal sludge treatment. The latter half of the work pays attention to sludge storage, feeding and discharge methods, short-distance conveying and longer-distance transport. The case studies consider local sludge treatment possibilities in Kymenlaakso and South Karelia; in these cases, 6,000 t and 15,000 t of mechanically dewatered sludge are treated annually. The electric and heat output produced by sludge incineration appears to depend strongly on the dry solids content of the sludge rather than on its other properties. Sludge treatment costs, from thickened sludge to a fuel suitable for thermal drying, range from 10 to 20 per tonne of sludge, depending on the number of treatment stages; mechanical dewatering and storage generate the largest costs.
Abstract:
The aim of this work was to study the characteristics of a good customer reference from the viewpoints of the sales and service functions of the filter manufacturer Larox and of the company's customers. Larox can use the information obtained to select and exploit references more effectively. Two internet surveys were carried out using Webropol. A preliminary survey was directed at Larox's sales and service personnel; it consisted of five categories of customer-reference characteristics whose importance was rated, plus open-ended answers. The identified characteristics of a good customer reference were a good relationship with the reference customer, positive and honest recommendations from the customer, good performance of the reference equipment, and a customer who understands the importance of service. The main survey was directed at Larox's customers. Statistical analyses were used to study the relationships between the variables of a perceived-risk model. The analyses revealed no significant differences in the importance of customer-reference characteristics between respondents of different backgrounds or between situational factors, but the factors extracted for the reference characteristics support the model. The performance of the reference equipment appears to be the most important characteristic, and the importance of service is also significant.
Abstract:
The first phase of this Master's thesis studied the characteristics of a hydraulic orifice and introduced a numerically efficient orifice model that uses a polynomial function to describe the laminar and transition regions of the flow. The advantage of the semi-empirical model is that the geometry of the orifice is not needed when the flow is computed from the pressure difference. Real-time simulation involves compromises between accuracy and computation speed; this issue was studied with a two-region orifice model, examining the effect of the choice of the transition pressure difference and of the integration time step on accuracy and computation speed. The second phase studied how to produce the best possible sensation of motion with a motion platform by developing the control signal. Because of the limited workspace of the motion platform, control has traditionally used washout filtering, which extracts only the fast accelerations from the acceleration signal of the simulated system. In this work, the inclusion of slow accelerations in the platform control, within the limits of the platform workspace, was studied. This was implemented by representing slow accelerations by tilting the platform, so that the force experienced by the user could be reproduced by exploiting gravity.
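The classical-washout idea described above splits the acceleration command into a high-pass channel (reproduced by platform translation) and a low-pass tilt-coordination channel, where a sustained acceleration a is rendered by tilting so that gravity supplies the force (θ ≈ arcsin(a/g)). A first-order discrete sketch, with assumed time constants, not the thesis's actual controller:

```python
import numpy as np

G = 9.81  # m/s^2

def washout(accel, dt=0.01, tau_hp=1.0, tau_lp=1.0):
    """Split a longitudinal acceleration command into a high-pass part
    (platform translation) and a low-pass part rendered as platform tilt."""
    a_hp = np.zeros_like(accel)
    a_lp = np.zeros_like(accel)
    alpha_hp = tau_hp / (tau_hp + dt)          # first-order high-pass
    alpha_lp = dt / (tau_lp + dt)              # first-order low-pass
    for i in range(1, len(accel)):
        a_hp[i] = alpha_hp * (a_hp[i - 1] + accel[i] - accel[i - 1])
        a_lp[i] = a_lp[i - 1] + alpha_lp * (accel[i] - a_lp[i - 1])
    tilt = np.arcsin(np.clip(a_lp / G, -1, 1))  # tilt angle, rad
    return a_hp, tilt

# Sustained 2 m/s^2 command: the translational cue washes out to zero
# while the tilt channel takes over the steady part.
a = np.full(3000, 2.0)
a[0] = 0.0
a_hp, tilt = washout(a)
```

The washout of the translational channel is what keeps the platform within its workspace; the tilt channel is exactly the "slow accelerations via gravity" mechanism the abstract describes.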
Abstract:
The epithelial Na+ channel (ENaC) is highly selective for Na+ and Li+ over K+ and is blocked by the diuretic amiloride. ENaC is a heterotetramer made of two alpha, one beta, and one gamma homologous subunits, each subunit comprising two transmembrane segments. Amino acid residues involved in binding of the pore blocker amiloride are located in the pre-M2 segment of beta and gamma subunits, which precedes the second putative transmembrane alpha helix (M2). A residue in the alpha subunit (alphaS589) at the NH2 terminus of M2 is critical for the molecular sieving properties of ENaC. ENaC is more permeable to Li+ than Na+ ions. The concentration of half-maximal unitary conductance is 38 mM for Na+ and 118 mM for Li+, a kinetic property that can account for the differences in Li+ and Na+ permeability. We show here that mutation of amino acid residues at homologous positions in the pre-M2 segment of alpha, beta, and gamma subunits (alphaG587, betaG529, gammaS541) decreases the Li+/Na+ selectivity by changing the apparent channel affinity for Li+ and Na+. Fitting single-channel data of the Li+ permeation to a discrete-state model including three barriers and two binding sites revealed that these mutations increased the energy needed for the translocation of Li+ from an outer ion binding site through the selectivity filter. Mutation of betaG529 to Ser, Cys, or Asp made ENaC partially permeable to K+ and larger ions, similar to the previously reported alphaS589 mutations. We conclude that the residues alphaG587 to alphaS589 and homologous residues in the beta and gamma subunits form the selectivity filter, which tightly accommodates Na+ and Li+ ions and excludes larger ions like K+.
Abstract:
Self-categorization theory is a social psychology theory dealing with the relation between the individual and the group. It explains group behaviour through the conception of self and others as members of social categories, and through the attribution of the prototypical characteristics of these categories to individuals. Hence, it is a theory of the individual that intends to explain collective phenomena. Situations involving a large number of non-trivially interacting individuals typically generate complex collective behaviours, which are difficult to anticipate on the basis of individual behaviour. Computer simulation of such systems is a reliable way of systematically exploring the dynamics of the collective behaviour as a function of individual specifications. In this thesis, we present a formal model of a part of self-categorization theory called the metacontrast principle. Given the distribution of a set of individuals on one or several comparison dimensions, the model generates categories and their associated prototypes. We show that the model behaves coherently with respect to the theory and is able to replicate experimental data concerning various group phenomena, for example polarization. 
Moreover, it makes it possible to systematically describe the predictions of the theory from which it is derived, especially in situations not previously studied. At the collective level, several dynamics can be observed, among them convergence towards consensus, towards fragmentation, or towards the emergence of extreme attitudes. We also study the effect of the social network on the dynamics and show that, except for the convergence speed, which rises as the mean distances on the network decrease, the observed convergence types depend little on the chosen network. We further note that individuals located at the border of the groups (whether in the social network or spatially) have a decisive influence on the outcome of the dynamics. In addition, the model can be used as an automatic classification algorithm: it identifies prototypes around which groups are built, and the prototypes are positioned so as to accentuate the typical characteristics of the groups, not necessarily centrally. Finally, if we consider the set of pixels of an image as individuals in a three-dimensional colour space, the model provides a filter that can attenuate noise, help detect objects, and simulate perception biases such as chromatic induction.
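The metacontrast principle favours the grouping for which inter-category differences are large relative to intra-category differences, often summarized as a metacontrast ratio (mean distance to non-members over mean distance among members). A toy one-dimensional sketch of that idea, not the thesis's full model:

```python
import numpy as np

def metacontrast_ratio(members, others):
    """Metacontrast ratio of a candidate category: mean distance to
    non-members divided by mean distance among members."""
    members = np.asarray(members, dtype=float)
    others = np.asarray(others, dtype=float)
    inter = np.abs(members[:, None] - others[None, :]).mean()
    intra = np.abs(members[:, None] - members[None, :]).mean()
    return inter / intra if intra > 0 else np.inf

def best_split(positions):
    """Try every contiguous 2-way split of sorted 1-D positions and keep
    the one whose two categories maximise the summed metacontrast ratio."""
    x = np.sort(np.asarray(positions, dtype=float))
    best, best_cut = -np.inf, None
    for cut in range(1, len(x)):
        a, b = x[:cut], x[cut:]
        if len(a) < 2 or len(b) < 2:
            continue
        score = metacontrast_ratio(a, b) + metacontrast_ratio(b, a)
        if score > best:
            best, best_cut = score, cut
    return best_cut

# Two clear attitude clusters on one comparative dimension
pos = [0.0, 0.2, 0.4, 5.0, 5.3, 5.6]
cut = best_split(pos)      # split lands between 0.4 and 5.0
```

Maximizing this ratio is also why the derived prototypes are pushed away from the out-group, i.e. accentuated rather than central, as the abstract notes.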
Abstract:
The main objective of this thesis was to generate better filtration technologies for the effective production of pure starch products, and thereby the optimisation of filtration sequences using the created models, as well as a synthesis of the theories of the different filtration stages that are suitable for starches. First, the structure and characteristics of the different starch grades are introduced, and each starch grade is shown to have special characteristics. These are taken as the basis for understanding the differences in the behaviour of the different native starch grades and their modifications in pressure filtration. Next, the pressure filtration process is divided into stages: filtration, cake washing, compression dewatering and displacement dewatering. Each stage is considered individually in its own chapter. The order of the different suitable combinations of the process stages is studied, as well as the proper durations and pressures of the stages. The principles of the theory of each stage are reviewed, the methods for monitoring the progress of each stage are presented and, finally, their modelling is introduced. The experimental results obtained from the different stages of the starch filtration tests are given, and the suitability of the theories and models for starch filtration is shown. Finally, the theories and models are gathered together and it is shown that the analysis of the whole starch pressure filtration process can be performed with the software developed.
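The filtration stage of such a process is commonly monitored and modelled through the linearized constant-pressure (Ruth) cake filtration equation, t/V = (μαc / (2A²Δp))·V + μRm/(AΔp): a straight-line fit of t/V against V yields the specific cake resistance α and the medium resistance Rm. A sketch with synthetic data and assumed operating values (not the thesis's measurements):

```python
import numpy as np

MU = 1e-3   # filtrate viscosity, Pa*s (water, assumed)
A = 0.01    # filter area, m^2 (assumed)
DP = 2e5    # pressure difference, Pa (assumed)
C = 50.0    # dry cake mass per filtrate volume, kg/m^3 (assumed)

def cake_parameters(t, v):
    """Fit t/V = (mu*alpha*c / (2*A^2*dp)) * V + mu*Rm / (A*dp) and
    return the specific cake resistance alpha and medium resistance Rm."""
    slope, intercept = np.polyfit(v, t / v, 1)
    alpha = slope * 2 * A ** 2 * DP / (MU * C)
    rm = intercept * A * DP / MU
    return alpha, rm

# Synthetic filtration curve generated from known alpha and Rm
alpha_true, rm_true = 1e11, 1e10
v = np.linspace(1e-4, 2e-3, 20)   # cumulative filtrate volume, m^3
t = (MU * alpha_true * C / (2 * A ** 2 * DP)) * v ** 2 + (MU * rm_true / (A * DP)) * v
alpha_est, rm_est = cake_parameters(t, v)
```

The same fit, repeated per starch grade, is one way the progress of the filtration stage can be monitored and compared across materials.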
Abstract:
The environmentally harmful consequences of fossil fuel utilisation and the landfilling of wastes have increased the interest among energy producers in considering the use of alternative fuels such as wood fuels and Refuse-Derived Fuels (RDFs). Fluidised bed technology, which allows the flexible use of a variety of different fuels, is commonly used at small- and medium-sized power plants of municipalities and industry in Finland. Since there is only one mass-burn plant currently in operation in the country and no intention to build new ones, the co-firing of pre-processed wastes in fluidised bed boilers has become the most generally applied waste-to-energy concept in Finland. The recently adopted EU Directive on the Incineration of Waste aims to mitigate the environmentally harmful pollutants of waste incineration and of the co-incineration of wastes with conventional fuels. Apart from gaseous flue gas pollutants and dust, the emissions of toxic trace metals are limited. The implementation of the Directive's restrictions in Finnish legislation is assumed to limit the co-firing of waste fuels, due to the insufficient reduction of the regulated air pollutants in existing flue gas cleaning devices. Trace metal emission formation and reduction in the ESP, the condensing wet scrubber, the fabric filter and the humidification reactor were studied experimentally, in full- and pilot-scale combustors utilising bubbling fluidised bed technology, and theoretically, by means of reactor model calculations. The core of the model is a thermodynamic equilibrium analysis. The experiments were carried out with wood chips, sawdust and peat, and their refuse-derived fuel (RDF) blends. In all, ten different fuels or fuel blends were tested.
The relatively high concentrations of trace metals in RDFs compared to those in wood fuels increased the trace metal concentrations in the flue gas after the boiler ten- to hundred-fold when RDF was co-fired with sawdust in a full-scale BFB boiler. In the case of peat, a smaller increase in trace metal concentrations was observed, due to the higher initial trace metal concentrations of peat compared to sawdust. Despite the high removal rate of most of the trace metals in the ESP, the Directive emission limits for trace metals were exceeded in each of the RDF co-firing tests. The dominant trace metals in the flue gas after the ESP were Cu, Pb and Mn. In the condensing wet scrubber, the flue gas trace metal emissions were reduced below the Directive emission limits when RDF pellets were used as a co-firing fuel together with sawdust and peat. The high chlorine content of the RDFs enhanced mercuric chloride formation and hence mercury removal in the ESP and scrubber. Mercury emissions were lower than the Directive emission limit for total Hg, 0.05 mg/Nm3, in all full-scale co-firing tests already in the flue gas after the ESP. The pilot-scale experiments with a BFB combustor equipped with a fabric filter revealed that the fabric filter alone is able to reduce the trace metal concentrations, including mercury, in the flue gas during RDF co-firing to approximately the same level as during wood chip firing. Trace metal emissions lower than the Directive limits were easily reached even with a 40% thermal share of RDF co-fired with sawdust. Enrichment of trace metals in the submicron fly ash particle fraction due to RDF co-firing was not observed in the test runs where sawdust was used as the main fuel. The combustion of RDF pellets with peat caused an enrichment of As, Cd, Co, Pb, Sb and V in the submicron particle mode.
The accumulation and release of trace metals in the bed material were examined by means of a bed material analysis, mass balance calculations and a reactor model. Lead, zinc and copper were found to have a tendency to accumulate in the bed material, but also to be released from the bed material into the combustion gases if the combustion conditions were changed. The concentration of a trace metal in the combustion gases of the bubbling fluidised bed boiler was found to be the sum of trace metal fluxes from three main sources: (1) the trace metal flux from the burning fuel particle, (2) the trace metal flux from the ash in the bed, and (3) the trace metal flux from the active alkali metal layer on the sand (and ash) particles in the bed. The amount of chlorine in the system, the combustion temperature, the fuel ash composition and the saturation state of the bed material with regard to trace metals were found to be the key factors affecting the release process. During the co-firing of waste fuels with variable amounts of e.g. ash and chlorine, it is extremely important to consider the possible ongoing accumulation and/or release of trace metals in the bed when determining the flue gas trace metal emissions. If the state of the combustion process with regard to trace metal accumulation and/or release in the bed material is not known, it may happen that emissions from the bed material, rather than from the combustion of the fuel in question, are measured and reported.