981 results for cosmological parameters from CMBR
Abstract:
Protein-coding genes evolve at different rates, and the influence of different parameters, from gene size to expression level, has been extensively studied. While in yeast gene expression level is the major causal factor of gene evolutionary rate, the situation is more complex in animals. Here we investigate these relations further, especially taking into account gene expression in different organs as well as indirect correlations between parameters. We used RNA-seq data from two large datasets, covering 22 mouse tissues and 27 human tissues. Over all tissues, evolutionary rate correlates only weakly with the level and breadth of expression. The strongest explanatory factors of purifying selection are GC content, expression in many developmental stages, and expression in brain tissues. While the main component of evolutionary rate is purifying selection, we also find tissue-specific patterns for sites under neutral evolution and for positive selection. We observe fast evolution of genes expressed in testis, but also in other tissues, notably liver, which is explained by weak purifying selection rather than by positive selection.
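To illustrate the control for indirect correlations mentioned above, here is a minimal sketch (hypothetical data and variable names, not the study's actual pipeline) of a partial rank correlation between evolutionary rate and expression level with a confounder such as GC content held fixed, computed by correlating the residuals of both variables on the confounder:

```python
# Partial Spearman correlation: rank-transform, regress out the
# confounder z from both x and y, then correlate the residuals.
import numpy as np
from scipy import stats

def partial_spearman(x, y, z):
    x, y, z = (stats.rankdata(v) for v in (x, y, z))
    Z = np.column_stack([z, np.ones_like(z)])        # confounder + intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)[0]

# e.g. partial_spearman(evolutionary_rate, expression_level, gc_content)
```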
Abstract:
To remain competitive on the deregulated electricity market, a power plant's energy production costs must be kept as low as possible without compromising high availability. Making the best possible use of the energy content of the fuel is crucial to the profitability of a power plant. At conventional thermal plants, fuel costs usually account for more than half of the total life-cycle costs. As emission limits tighten continuously, efficient fuel utilization becomes even more important. High reliability and availability of energy production are also vital when striving to minimize costs. This work reviews the concepts that affect power plant costs, such as efficiency, availability, fuel prices, start-ups and shutdowns, and the most important losses. The operating strategy and the management of deviations aim at good efficiency and low emissions in every operating situation. In addition, the effect on energy production costs of deviations of certain quantities from their set points is examined, namely steam temperature and pressure, flue gas oxygen content, flue gas exit temperature, and condenser pressure. The oxygen/carbon monoxide optimization also takes into account the unburned material in the bottom ash.
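As a rough illustration of why such deviations matter, the fuel cost per MWh of electricity is the fuel price divided by the net efficiency, so every lost percentage point of efficiency raises the production cost directly (the numbers below are assumed for illustration, not figures from this work):

```python
# Fuel cost of electricity as a function of net plant efficiency.
fuel_price = 20.0            # EUR per MWh of fuel energy (assumed)
for eta in (0.42, 0.41):     # nominal vs. slightly degraded efficiency
    print(f"efficiency {eta:.2f}: {fuel_price / eta:.2f} EUR/MWh_e")
# efficiency 0.42: 47.62 EUR/MWh_e
# efficiency 0.41: 48.78 EUR/MWh_e
```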
Abstract:
Atherosclerosis is a chronic cardiovascular disease that involves the thickening of the artery walls as well as the formation of plaques (lesions) causing the narrowing of the lumens, in vessels such as the aorta, the coronary and the carotid arteries. Magnetic resonance imaging (MRI) is a promising modality for the assessment of atherosclerosis, as it is a non-invasive and patient-friendly procedure that does not use ionizing radiation. MRI offers high soft tissue contrast even without intravenous contrast media, while modification of the MR pulse sequences allows for further adjustment of the contrast for specific diagnostic needs. As such, MRI can create angiographic images of the vessel lumens to assess stenoses at the late stage of the disease, as well as blood flow-suppressed images for the early investigation of the vessel wall and the characterization of the atherosclerotic plaques. However, despite the great technical progress of the past two decades, MRI is intrinsically a low-sensitivity technique and some limitations still exist in terms of accuracy and performance. A major challenge for coronary artery imaging is respiratory motion. State-of-the-art diaphragmatic navigators rely on an indirect measure of motion, perform a 1D correction, and have long and unpredictable scan times. In response, self-navigation (SN) strategies have recently been introduced that offer 100% scan efficiency and increased ease of use. SN detects respiratory motion directly from the image data obtained at the level of the heart, and retrospectively corrects the same data before final image reconstruction. Thus, SN holds potential for multi-dimensional motion compensation. In this regard, this thesis presents novel SN methods that estimate 2D and 3D motion parameters from aliased sub-images that are obtained from the same raw data composing the final image. Combination of all corrected sub-images produces a final image with reduced motion artifacts for the visualization of the coronaries. The first study (section 2.2, 2D Self-Navigation with Compressed Sensing) consists of a method for 2D translational motion compensation. Here, the use of compressed sensing (CS) reconstruction is proposed and investigated to support motion detection by reducing aliasing artifacts. In healthy human subjects, CS demonstrated an improvement in motion detection accuracy in simulations on in vivo data, while improved coronary artery visualization was demonstrated on in vivo free-breathing acquisitions. However, the motion of the heart induced by respiration has been shown to occur in three dimensions and to be more complex than a simple translation. Therefore, the second study (section 2.3, 3D Self-Navigation) consists of a method for 3D affine motion correction rather than 2D only. Here, different techniques were adopted to reduce the background signal contribution in respiratory motion tracking, as this can be adversely affected by the static tissue that surrounds the heart. The proposed method was demonstrated to improve conspicuity and visualization of coronary arteries in healthy and cardiovascular disease patient cohorts in comparison to a conventional 1D SN method. In the third study (section 2.4, 3D Self-Navigation with Compressed Sensing), the same tracking methods were used to obtain sub-images sorted according to the respiratory position. Then, instead of motion correction, a compressed sensing reconstruction was performed on all sorted sub-image data.
This process exploits the consistency of the sorted data to reduce aliasing artifacts such that the sub-image corresponding to the end-expiratory phase can directly be used to visualize the coronaries. In a healthy volunteer cohort, this strategy improved conspicuity and visualization of the coronary arteries when compared to a conventional 1D SN method. For the visualization of the vessel wall and atherosclerotic plaques, the state-of-the-art dual inversion recovery (DIR) technique is able to suppress the signal coming from flowing blood and provide positive wall-lumen contrast. However, optimal contrast may be difficult to obtain and is subject to RR variability. Furthermore, DIR imaging is time-inefficient and multislice acquisitions may lead to prolonged scan times. In response, and as the fourth study of this thesis (chapter 3, Vessel Wall MRI of the Carotid Arteries), a phase-sensitive DIR method has been implemented and tested in the carotid arteries of a healthy volunteer cohort. By exploiting the phase information of images acquired after DIR, the proposed phase-sensitive method enhances wall-lumen contrast while widening the window of opportunity for image acquisition. As a result, a 3-fold increase in volumetric coverage is obtained at no extra cost in scan time, while image quality is improved. In conclusion, this thesis presented novel methods to address some of the main challenges of MRI of atherosclerosis: the suppression of motion and flow artifacts for improved visualization of vessel lumens, walls and plaques. These methods were shown to significantly improve image quality in healthy human subjects, as well as the scan efficiency and ease of use of MRI. Extensive validation is now warranted in patient populations to ascertain their diagnostic performance. Eventually, these methods may bring the use of atherosclerosis MRI closer to clinical practice.
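As background to the self-navigation principle, the following is a minimal sketch of 1D translational respiratory motion correction (a deliberate simplification for illustration; the thesis methods estimate 2D and 3D parameters from aliased sub-images): the shift of a superior-inferior projection profile relative to a reference is estimated by cross-correlation and undone in k-space using the Fourier shift theorem.

```python
import numpy as np

def estimate_shift(profile, reference):
    """Shift (in pixels) that best aligns `profile` with `reference`."""
    xcorr = np.correlate(profile, reference, mode="full")
    return np.argmax(xcorr) - (len(reference) - 1)

def correct_kspace_line(kline, shift):
    """Linear phase that undoes a spatial shift along the readout."""
    k = np.fft.fftfreq(len(kline))
    return kline * np.exp(2j * np.pi * k * shift)
```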
Abstract:
The most suitable method for estimation of size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables, such as size. It takes the form of an integral involving the probability density function (pdf) of the size of the individuals. Different approaches for the estimation of the pdf are compared: parametric methods, assuming that data come from a determinate family of pdfs, and nonparametric methods, where the pdf is estimated using some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions have been used to generate simulated samples using estimated parameters from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives an accurate estimation of size diversity, whilst parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. The division of data by the sample geometric mean is proposed as the most suitable standardization method, which shows additional advantages: the same size diversity value is obtained when using original size or log-transformed data, and size measurements with different dimensionality (lengths, areas, volumes, or biomasses) may be immediately compared with the simple addition of ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus, kernel estimation, after data standardization by division by the sample geometric mean, emerges as the most reliable and generalizable method of size diversity evaluation.
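A minimal sketch of this kernel-based estimate, assuming a Gaussian kernel (scipy's gaussian_kde) and the standardization by the sample geometric mean proposed above:

```python
import numpy as np
from scipy.stats import gaussian_kde

def size_diversity(sizes):
    x = np.asarray(sizes, dtype=float)
    x = x / np.exp(np.mean(np.log(x)))       # divide by geometric mean
    kde = gaussian_kde(x)                     # kernel estimate of the pdf
    grid = np.linspace(x.min(), x.max(), 2048)
    p = np.clip(kde(grid), 1e-300, None)
    # H = -integral of p(x) ln p(x) dx, approximated on the grid
    return -np.sum(p * np.log(p)) * (grid[1] - grid[0])
```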
Abstract:
Antimicrobial peptides offer a new class of therapeutic agents to which bacteria may not be able to develop genetic resistance, since their main activity is on the lipid component of the bacterial cell membrane. We have developed a series of synthetic cationic cyclic lipopeptides based on natural polymyxin, and in this work we explore the interaction of sp-85, an analog that contains a C12 fatty acid at the N-terminus and two arginine residues. This analog was selected for its broad-spectrum antibacterial activity in the micromolar range, and it has a disruptive action on the cytoplasmic membrane of bacteria, as demonstrated by TEM. In order to obtain information on the interaction of this analog with membrane lipids, we have obtained thermodynamic parameters from mixed monolayers prepared with POPG and POPE/POPG (molar ratio 6:4), as models of Gram-positive and Gram-negative bacteria, respectively. Langmuir-Blodgett films have been extracted on glass plates and observed by confocal microscopy, and the images are consistent with a strong destabilizing effect of sp-85 on the membrane organization. The effect of sp-85 on the membrane is confirmed with unilamellar lipid vesicles of the same composition, where biophysical experiments based on fluorescence are indicative of membrane fusion and permeabilization starting at very low peptide concentrations and only if anionic lipids are present. Overall, the results described here provide strong evidence that the mode of action of sp-85 is the alteration of the bacterial membrane permeability barrier.
Abstract:
Hydroxymethylnitrofurazone (NFOH) is a prodrug that is active against Trypanosoma cruzi. However, it presents low solubility and high toxicity. Hydroxypropyl-beta-cyclodextrin (HP-beta-CD) can be used as a drug-delivery system for NFOH, modifying its physico-chemical properties. The aim of this work is to characterize the inclusion complex between NFOH and HP-beta-CD. The rate of NFOH release decreases after complexation, and the thermodynamic parameters from the solubility isotherm studies revealed that a stable complex is formed (ΔG° = 1.7 kJ/mol). This study focuses on the physico-chemical characterization of a drug-delivery formulation that emerges as a potentially new therapeutic option for the treatment of Chagas disease.
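For reference, a stability constant derived from a phase-solubility isotherm is conventionally related to the free energy of complexation as follows (standard Higuchi-Connors relations for a 1:1 complex, not equations quoted from this study):

$$K_{1:1} = \frac{\mathrm{slope}}{S_0\,(1 - \mathrm{slope})}, \qquad \Delta G^{\circ} = -RT\,\ln K_{1:1}$$

where S_0 is the intrinsic solubility of the drug and "slope" is the slope of the linear solubility isotherm.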
Abstract:
Al(C9H6ON)3·2.5H2O was precipitated from the mixture of an aqueous solution of aluminium ion and an acid solution of 8-hydroxyquinoline, by increasing the pH value to 9.5 with aqueous ammonia solution. The TG curves in nitrogen atmosphere present mass losses due to dehydration and partial volatilisation (sublimation plus vaporisation) of the anhydrous compound, followed by thermal decomposition with the formation of a mixture of carbonaceous material and residues. The relation between sublimation and vaporisation depends on the heating rate used. The non-isothermal integral isoconversional methods, i.e. the linear equations of Ozawa-Flynn-Wall (OFW) and Kissinger-Akahira-Sunose (KAS), were used to obtain the kinetic parameters from the TG and DTA curves, respectively. Although both the dehydration and volatilisation reactions show linearity with both methods, the validity condition 20 ≤ E/RT ≤ 50 was verified only for the volatilisation reaction.
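For reference, the standard linearized forms of these two isoconversional methods, for heating rate β, apparent activation energy E, pre-exponential factor A, and integral conversion function g(α), are

$$\mathrm{OFW:}\quad \ln\beta = \ln\!\frac{A\,E}{R\,g(\alpha)} - 5.331 - 1.052\,\frac{E}{RT}$$

$$\mathrm{KAS:}\quad \ln\frac{\beta}{T^{2}} = \ln\!\frac{A\,R}{E\,g(\alpha)} - \frac{E}{RT}$$

so that E is obtained from the slope of ln β (or ln(β/T²)) plotted against 1/T at fixed conversion.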
Abstract:
This study aimed to apply mathematical models to the growth of Nile tilapia (Oreochromis niloticus) reared in net cages in the lower São Francisco basin and to choose the model(s) that best represent the rearing conditions of the region. The nonlinear Brody, Bertalanffy, Logistic, Gompertz, and Richards models were tested. The models were fitted to the weight-for-age series using the Gauss, Newton, Gradient, and Marquardt methods. The NLIN procedure of the SAS® system (2003) was used to obtain estimates of the parameters from the available data. The best fits were obtained with the Bertalanffy, Gompertz, and Logistic models, which are equivalent in explaining the growth of the animals up to 270 days of rearing. From a commercial point of view, commercialization of tilapia from at least 600 g is recommended, a mass estimated by the Bertalanffy, Gompertz, and Logistic models to be reached after 183, 181, and 184 days of rearing, respectively; for masses up to 1 kg, suspension of the rearing at 244, 244, and 243 days, respectively, is suggested.
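For reference, common parameterizations of the fitted models (standard forms; the study's exact parameterization may differ), with asymptotic weight A, integration constant b, maturity rate k, and shape parameter m:

$$\mathrm{Brody:}\; W_t = A(1 - be^{-kt}) \qquad \mathrm{Bertalanffy:}\; W_t = A(1 - be^{-kt})^{3} \qquad \mathrm{Logistic:}\; W_t = \frac{A}{1 + be^{-kt}}$$

$$\mathrm{Gompertz:}\; W_t = A\,e^{-be^{-kt}} \qquad \mathrm{Richards:}\; W_t = A(1 - be^{-kt})^{m}$$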
Abstract:
Pigs are more sensitive to high environmental temperatures than other farmed livestock species, owing to their inability to sweat and pant properly. An evaporative cooling system might favor the thermal comfort of the animals during exposure to extreme environmental heat and reduce the harmful effects of heat stress. The purpose of this study was to assess the sensible heat loss and thermoregulation parameters of lactating sows submitted during summer to two different acclimatization systems: natural and evaporative cooling. The experiment was carried out on a commercial farm with 72 lactating sows. The ambient variables (temperature, relative humidity, and air velocity) and the sows' physiological parameters (rectal temperature, surface temperature, and respiratory rate) were monitored, and the sensible heat loss at 21 days of lactation was then calculated. The rectal temperature did not differ between treatments. However, the evaporative cooling led to a significant reduction in surface temperature and respiratory rate and a significant increase in the sows' sensible heat loss. It was concluded that the use of the evaporative cooling system was essential to increase sensible heat loss; thus, it should reduce the negative effects of heat on the sows' thermoregulation during summer.
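Sensible (non-evaporative) heat loss is typically computed as the sum of convective and radiative exchange; a standard formulation (the abstract does not give the study's exact equations) is

$$q_s = h_c A\,(T_s - T_a) + \varepsilon\sigma A\,(T_s^{4} - T_r^{4})$$

where h_c is the convective heat transfer coefficient (which increases with air velocity), A the exchange surface area, T_s the surface temperature, T_a the air temperature, T_r the mean radiant temperature, ε the emissivity of the skin, and σ the Stefan-Boltzmann constant.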
Abstract:
Higher travel speeds of rail vehicles will become possible by developing sophisticated top-performance bogies with creep-controlled wheelsets. In this case the torque transmission between the right and the left wheel is realized by an actively controlled creep coupling. To investigate hunting stability and curving capability, the linear equations of motion are written in state-space notation. Simulation results are obtained with realistic system parameters from industry and various controller gains. The advantage of the "creep-controlled wheelset" is discussed by comparing the simulation results with the dynamic behaviour of the special cases "solid-axle wheelset" and "loose wheelset" (independent rotation of the wheels). The stability is also investigated with a root-locus analysis.
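A minimal sketch of how such a stability check reads in code (the model builder below is a placeholder, not the actual bogie model): with the equations of motion in state-space form dx/dt = A(v, K)x + Bu, hunting stability at speed v requires every eigenvalue of A to have a negative real part, and sweeping v or the controller gain K traces the root locus.

```python
import numpy as np

def is_stable(A):
    """Asymptotic stability of dx/dt = A x."""
    return np.all(np.linalg.eigvals(A).real < 0.0)

def critical_speed(build_A, speeds, gain):
    """First speed at which hunting instability appears.

    `build_A(v, gain)` is a placeholder returning the system matrix
    for speed v and controller gain `gain`.
    """
    for v in speeds:
        if not is_stable(build_A(v, gain)):
            return v
    return None
```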
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three sections: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part of this dissertation, a new filtering technique, based on a combination of ensemble and variational Kalman filtering approaches, is presented, tested, and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part of this dissertation, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered and an alternative algorithm to retrieve atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also considered in a chemical state estimation study in order to retrieve stratospheric temperature profiles. The main result of this dissertation is the consideration of likelihood calculations via Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis. In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating filter-specific parameters related, e.g., to model error covariance matrix parameters.
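A minimal sketch of the likelihood-via-Kalman-filter idea (written here for a linear-Gaussian model for brevity; the dissertation applies the concept to chaotic, non-linear systems): the prediction error decomposition accumulates the log-likelihood of the parameters from the filter innovations, and an MCMC sampler can then target this quantity.

```python
import numpy as np

def kf_loglik(y, A, H, Q, R, m0, P0):
    """log p(y_1:T | parameters) for x_k = A x_{k-1} + q, y_k = H x_k + r."""
    m, P, ll = m0, P0, 0.0
    for yk in y:
        m, P = A @ m, A @ P @ A.T + Q                  # predict
        v = yk - H @ m                                 # innovation
        S = H @ P @ H.T + R                            # innovation covariance
        ll += -0.5 * (v @ np.linalg.solve(S, v)
                      + np.linalg.slogdet(2 * np.pi * S)[1])
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        m, P = m + K @ v, P - K @ S @ K.T              # update
    return ll
```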
Abstract:
Waste combustion has gone from being a volume-reducing discarding method to an energy recovery process for unwanted material that cannot be reused or recycled. Different fractions of waste are used as fuel today, such as municipal solid waste, refuse-derived fuel, and solid recovered fuel. Furthermore, industrial waste, normally a mixture of commercial waste and building and demolition waste, is common, either as a separate fuel or mixed with, for example, municipal solid waste. Compared to fossil or biomass fuels, waste mixtures are extremely heterogeneous, making waste a complicated fuel. Differences in calorific value, ash content, and moisture content, and changing levels of elements such as Cl and alkali metals, are common in waste fuel. Moreover, waste contains much higher levels of troublesome trace elements, such as Zn, which is thought to accelerate a corrosion process. Varying fuel quality can be strenuous on the boiler system and may cause fouling and corrosion of heat exchanger surfaces. This thesis examines waste fuels and waste combustion from different angles, with the objective of giving a better understanding of waste as an important fuel in today's fuel economy. Several chemical characterisation campaigns of waste fuels over longer time periods (10-12 months) were used to determine the fossil content of Swedish waste fuels, to investigate possible seasonal variations, and to study the presence of Zn in waste. Data from the characterisation campaigns were used for thermodynamic equilibrium calculations to follow trends and determine the effect of changing concentrations of various elements. The thesis also includes a study of the thermal behaviour of Zn and a full-scale study of how the bed temperature affects the volatilisation of alkali metals and Zn from the fuel. As mixed waste fuel contains considerable amounts of fresh biomass, such as wood, food waste, paper etc., it would be wrong to classify it as a fossil fuel. When Sweden introduced waste combustion as a part of the European Union emission trading system at the beginning of 2013, combustion plants needed to find a usable and reliable method to determine the fossil content. Four different methods were studied in full scale at seven combustion plants: 14C analysis of solid waste, 14C analysis of flue gas, sorting analysis followed by calculations, and a patented balance method that uses a software program to calculate the fossil content based on parameters from the plant. The study showed that approximately one third of the carbon in Swedish waste mixtures has fossil origins and presented the plants with information about the four different methods and their advantages and disadvantages. The characterisation campaigns also showed that industrial waste contains higher levels of trace elements, such as Zn. The content of Zn in Swedish waste fuels was determined to be approximately 800 mg kg-1 on average, based on 42 samples of solid waste from seven different plants with varying mixtures of municipal solid waste and industrial waste. A review study of the occurrence of Zn in fuels confirmed that the highest amounts of Zn are present in waste fuels rather than in fossil or biomass fuels. In tires, Zn is used as a vulcanizing agent and can reach concentrations of 9600-16800 mg kg-1. Waste electrical and electronic equipment is the second Zn-richest fuel; even though the average Zn content is around 4000 mg kg-1, values of over 19000 mg kg-1 have also been reported.
Increased amounts of Zn, 3000-4000 mg kg-1, are also found in municipal solid waste, in sludge with over 2000 mg kg-1 on average (with some exceptions up to 49000 mg kg-1), and in other waste-derived fuels (over 1000 mg kg-1). Zn is also found in fossil fuels. In coal, the average level of Zn is 100 mg kg-1; higher amounts have been reported only for oil shale, with values between 20 and 2680 mg kg-1. The content of Zn in biomass is basically determined by its natural occurrence and is typically 10-100 mg kg-1. The thermal behaviour of Zn is important for understanding the possible reactions taking place in the boiler. Using thermal analysis, three common Zn compounds (ZnCl2, ZnSO4, and ZnO) were studied and compared to phase diagrams produced with thermodynamic equilibrium calculations. The results of the study suggest that ZnCl2(s/l) cannot readily exist in the boiler due to its volatility at high temperatures and its conversion to ZnO in oxidising conditions. Also, ZnSO4 decomposes around 680°C, while ZnO is relatively stable in the temperature range prevailing in the boiler. Furthermore, by exposing ZnO to HCl in a hot environment (240-330°C) it was shown that chlorination of ZnO with HCl gas is possible. Waste fuel containing high levels of elements known to be corrosive, for example Na and K in combination with Cl, and also significant amounts of trace elements, such as Zn, is demanding on the whole boiler system. A full-scale study of how the volatilisation of Na, K, and Zn is affected by the bed temperature in a fluidised bed boiler was performed in parallel with a lab-scale study under the same conditions. The study showed that the fouling rate on deposit probes was decreased by 20% when the bed temperature was decreased from 870°C to below 720°C. In addition, the lab-scale experiments clearly indicated that the amount of alkali metals and Zn volatilised depends on the reactor temperature.
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form. Hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter heavily depends on the chosen importance distribution. For instance, an inappropriate choice of the importance distribution can lead to failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ convergence of the particle filter with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is Gaussian. In this kind of proposal, the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
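For concreteness, a minimal bootstrap particle filter, the baseline case in which the importance distribution is the dynamic model itself (f, h, and the noise levels below are placeholders, not the models of the thesis):

```python
import numpy as np

def bootstrap_pf(y, f, h, q_std, r_std, x0, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = x0 + q_std * rng.standard_normal((n, x0.size))
    means = []
    for yk in y:
        x = f(x) + q_std * rng.standard_normal(x.shape)   # propagate (proposal)
        logw = -0.5 * np.sum((yk - h(x)) ** 2, axis=-1) / r_std ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()                                      # normalized weights
        means.append(w @ x)                               # filtering mean
        x = x[rng.choice(n, n, p=w)]                      # multinomial resampling
    return np.array(means)
```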
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated, and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase of the classifier. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure. The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures and their optimal parameters have been found, the resulting distances are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the previous method, called the differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
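A toy sketch of the underlying idea, using scipy's general-purpose differential evolution and a plain Euclidean distance in place of the optimized pool-of-distances measure described above:

```python
import numpy as np
from scipy.optimize import differential_evolution

def train_prototypes(X, y, n_classes):
    """DE searches for the class prototype vectors that maximize
    training accuracy under nearest-prototype classification."""
    d = X.shape[1]

    def neg_accuracy(theta):
        protos = theta.reshape(n_classes, d)
        dists = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=-1)
        return -(np.argmin(dists, axis=1) == y).mean()

    bounds = [(X.min(), X.max())] * (n_classes * d)
    res = differential_evolution(neg_accuracy, bounds, seed=0, maxiter=200)
    return res.x.reshape(n_classes, d)
```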
Abstract:
Diabetic retinopathy, age-related macular degeneration, and glaucoma are the leading causes of blindness worldwide. Automatic methods for diagnosis exist, but their performance is limited by the quality of the data. Spectral retinal images provide a significantly better representation of the colour information than common grayscale or red-green-blue retinal imaging, and thus have the potential to improve the performance of automatic diagnosis methods. This work studies the image processing techniques required for composing spectral retinal images with accurate reflection spectra, including wavelength channel image registration, spectral and spatial calibration, illumination correction, and the estimation of depth information from image disparities. The composition of a spectral retinal image database of patients with diabetic retinopathy is described. The database includes gold standards for a number of pathologies and retinal structures, marked by two expert ophthalmologists. The diagnostic applications of the reflectance spectra are studied using supervised classifiers for lesion detection. In addition, inversion of a model of light transport is used to estimate histological parameters from the reflectance spectra. Experimental results suggest that the methods for composing, calibrating, and postprocessing spectral images presented in this work can be used to improve the quality of the spectral data. The experiments on the direct and indirect use of the data show the diagnostic potential of spectral retinal data over standard retinal images. The use of spectral data could improve automatic and semi-automated diagnostics for the screening of retinal diseases, the quantitative detection of retinal changes for follow-up, clinically relevant end-points for clinical studies, and the development of new therapeutic modalities.
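As an illustration of the model-inversion step mentioned above, a minimal sketch (forward_model is a placeholder for the actual light transport model, and the parameter vector is hypothetical):

```python
import numpy as np
from scipy.optimize import least_squares

def invert_spectrum(wavelengths, measured, forward_model, theta0, bounds):
    """Fit model parameters so the predicted reflectance matches the
    measured spectrum; returns the estimated histological parameters."""
    residual = lambda theta: forward_model(wavelengths, theta) - measured
    return least_squares(residual, theta0, bounds=bounds).x
```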