854 results for Probabilistic functions
Abstract:
The peroxisome proliferator-activated receptors (PPARs) are fatty acid and eicosanoid inducible nuclear receptors, which occur in three different isotypes. Upon activator binding, they modulate the expression of various target genes implicated in several important physiological pathways. During the past few years, the identification of both PPAR ligands, natural and synthetic, and PPAR targets and their associated functions has been one of the most important achievements in the field. It underscores the potential therapeutic application of PPAR-specific compounds on the one side, and the crucial biological roles of endogenous PPAR ligands on the other.
Abstract:
After antigenic challenge, naive T lymphocytes enter a program of proliferation and differentiation during the course of which they acquire effector functions and may ultimately become memory cells. In humans, the pathways of effector and memory T-cell differentiation remain poorly defined. Here we describe the properties of 2 CD8+ T-lymphocyte subsets, RA+CCR7-27+28+ and RA+CCR7-27+28-, in human peripheral blood. These cells display phenotypic and functional features that are intermediate between naive and effector T cells. Like naive T lymphocytes, both subsets show relatively long telomeres. However, unlike the naive population, these T cells exhibit reduced levels of T-cell receptor excision circles (TRECs), indicating they have undergone additional rounds of in vivo cell division. Furthermore, we show that they also share effector-type properties. At equivalent in vivo replicative history, the 2 subsets express high levels of Fas/CD95 and CD11a, as well as increasing levels of effector mediators such as granzyme B, perforin, interferon gamma, and tumor necrosis factor alpha. Both display partial ex vivo cytolytic activity and can be found among cytomegalovirus-specific cytolytic T cells. Taken together, our data point to the presence of T cells with intermediate effector-like functions and suggest that these subsets consist of T lymphocytes that are evolving toward a more differentiated effector or effector-memory stage.
Abstract:
Unlike the evaluation of single items of scientific evidence, the formal study and analysis of the joint evaluation of several distinct items of forensic evidence has to date received only occasional, rather than systematic, attention. Questions about (i) the relationships among a set of (usually unobservable) propositions and a set of (observable) items of scientific evidence, (ii) the joint probative value of a collection of distinct items of evidence, and (iii) the contribution of each individual item within a given group of pieces of evidence still represent fundamental areas of research. To some degree this is remarkable, since forensic science theory and practice, as well as many daily inference tasks, require the consideration of multiple items, if not masses, of evidence. A recurrent and particular complication that arises in such settings is that the application of probability theory, i.e. the reference method for reasoning under uncertainty, becomes increasingly demanding. The present paper takes this as a starting point and discusses graphical probability models, i.e. Bayesian networks, as a framework within which the joint evaluation of scientific evidence can be approached in a viable way. Based on a review of the main existing contributions in this area, the article presents instances of real case studies from the author's institution in order to point out the usefulness and capacities of Bayesian networks for the probabilistic assessment of the probative value of multiple and interrelated items of evidence. A main emphasis is placed on underlying general patterns of inference, their representation, and their graphical probabilistic analysis. Attention is also drawn to inferential interactions, such as redundancy, synergy and directional change, which distinguish the joint evaluation of evidence from assessments of isolated items of evidence. Together, these topics present aspects of interest to both domain experts and recipients of expert information, because they have a bearing on how multiple items of evidence are meaningfully and appropriately set into context.
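The inferential interactions mentioned in the abstract (redundancy, synergy) can be made concrete with a small numerical sketch; every probability below is an invented placeholder, not a value from the reviewed case studies. With two dependent items of evidence, the chain rule gives the joint likelihood ratio, and redundancy shows up as that joint ratio falling short of the product of the individually assessed ratios:

```python
# Hypothetical numbers only: joint likelihood ratio for two dependent
# items of evidence under a hypothesis H versus its negation.

def lr(p_given_h, p_given_not_h):
    """Likelihood ratio of a single finding."""
    return p_given_h / p_given_not_h

# Item E1 on its own
lr1 = lr(0.9, 0.1)
# Item E2 evaluated in isolation (ignoring E1)
lr2_alone = lr(0.9, 0.15)
# Item E2 evaluated given that E1 has already been observed:
# it adds little new information, i.e. the items are partly redundant
lr2_given_e1 = lr(0.92, 0.4)

# Chain rule: P(E1, E2 | H) / P(E1, E2 | not-H) = LR(E1) * LR(E2 | E1)
lr_joint = lr1 * lr2_given_e1

# Redundancy: the joint value stays below the naive product
print(lr_joint, lr1 * lr2_alone)
```

Synergy would appear as the opposite inequality, with the conditional ratio exceeding the isolated one; a Bayesian network encodes exactly these conditional dependencies, so the chain-rule bookkeeping scales beyond two items.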
Abstract:
Abstract: This thesis is devoted to the analysis, modeling and visualisation of spatially referenced environmental data using machine learning algorithms. Machine learning can be considered, in a broad sense, as a subfield of artificial intelligence concerned in particular with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, non-linear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, by way of automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences.
The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organised maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both through the traditional geostatistical approach of experimental variography and through the principles of machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool of the geostatistical analysis of anisotropic spatial correlations and allows the detection of spatial patterns describable by a two-point statistic. The machine learning approach to ESDA is presented through the application of the k-nearest-neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, for which the GRNN significantly outperforms all the other methods, particularly in emergency situations. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office. This software collection has been developed over the past 15 years and has been used in teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a broad spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the estimation of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation have also been developed, with care taken to create a user-friendly, easy-to-use interface.
Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision-support systems, for the purposes of environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with that of geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to their software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to reveal the presence of spatial patterns, at least those described by two-point statistics.
A machine learning approach to ESDA is presented by applying the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualisation properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed during the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, soil-type and hydro-geological-unit classification, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
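The GRNN at the heart of the automatic-mapping work is, in essence, Nadaraya-Watson kernel regression. A minimal one-dimensional sketch (toy data, not the SIC 2004 set used in the thesis) looks like:

```python
# Minimal 1-D sketch of a general regression neural network (GRNN),
# i.e. Gaussian-kernel regression; data and bandwidth are toy values.
import math

def grnn_predict(x_query, xs, ys, sigma):
    """Gaussian-kernel-weighted average of the training targets."""
    weights = [math.exp(-((x_query - x) ** 2) / (2.0 * sigma ** 2)) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Toy training samples (y = x**2)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]

# A prediction between two samples falls between their targets
print(grnn_predict(1.5, xs, ys, sigma=0.5))
```

The single bandwidth `sigma` is the only free parameter, which is part of what makes the GRNN attractive for automatic mapping: it can be selected by cross-validation without any iterative training.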
Abstract:
Well-developed experimental procedures currently exist for retrieving and analyzing particle evidence from the hands of individuals suspected of being associated with the discharge of a firearm. Although analytical approaches (e.g. automated Scanning Electron Microscopy with Energy Dispersive X-ray microanalysis, SEM-EDS) allow the determination of the presence of elements typically found in gunshot residue (GSR) particles, such analyses provide no information about a given particle's actual source. Possible origins for which scientists may need to account are a primary exposure to the discharge of a firearm or a secondary transfer due to a contaminated environment. In order to approach such sources of uncertainty in the context of evidential assessment, this paper studies the construction and practical implementation of graphical probability models (i.e. Bayesian networks). These can assist forensic scientists in making the issue tractable within a probabilistic perspective. The proposed models focus on likelihood ratio calculations at various levels of detail as well as on case pre-assessment.
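A hedged sketch of the kind of likelihood ratio such a network evaluates, with the secondary-transfer route entering the denominator; every probability below is a hypothetical placeholder, not an assignment from the paper:

```python
# Illustrative placeholder probabilities only: likelihood ratio for
# finding GSR-type particles on a suspect's hands, with secondary
# transfer folded into the denominator.

p_if_fired = 0.95        # transfer and persistence if the suspect fired
p_secondary = 0.05       # pickup from a contaminated environment
p_background = 0.01      # chance presence in the general population

# Particles present although the suspect did not fire:
p_if_not_fired = p_secondary + (1 - p_secondary) * p_background

lr = p_if_fired / p_if_not_fired
print(lr)
```

Even this two-route denominator shows why the secondary-transfer possibility matters: the larger the plausible contamination probability, the smaller the support the finding lends to the primary-exposure proposition.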
Abstract:
In this paper we claim that capital is as important in the production of ideas as in the production of final goods. Hence, we introduce capital in the production of knowledge and discuss the associated problems arising from the public good nature of knowledge. We show that although population growth can affect economic growth, it is not necessary for growth to arise. We derive both the social planner and the decentralized economy growth rates and show the optimal subsidy that decentralizes it. We also show numerically that the effects of population growth on the market growth rate, the optimal growth rate and the optimal subsidy are small. In addition, we find that physical capital is more important for the production of knowledge than for the production of goods.
Abstract:
The three peroxisome proliferator-activated receptors (PPAR alpha, PPAR beta, and PPAR gamma) are ligand-activated transcription factors belonging to the nuclear hormone receptor superfamily. They are regarded as being sensors of physiological levels of fatty acids and fatty acid derivatives. In the adult mouse skin, they are found in hair follicle keratinocytes but not in interfollicular epidermis keratinocytes. Skin injury stimulates the expression of PPAR alpha and PPAR beta at the site of the wound. Here, we review the spatiotemporal program that triggers PPAR beta expression immediately after an injury, and then gradually represses it during epithelial repair. The opposing effects of the tumor necrosis factor-alpha and transforming growth factor-beta-1 signalling pathways on the activity of the PPAR beta promoter are the key elements of this regulation. We then compare the involvement of PPAR beta in the skin in response to an injury and during hair morphogenesis, and underscore the similarity of its action on cell survival in both situations.
Abstract:
Calcineurin signaling plays diverse roles in fungi in regulating stress responses, morphogenesis and pathogenesis. Although calcineurin signaling is conserved among fungi, recent studies indicate important divergences in calcineurin-dependent cellular functions among different human fungal pathogens. Fungal pathogens utilize the calcineurin pathway to effectively survive the host environment and cause life-threatening infections. The immunosuppressive calcineurin inhibitors (FK506 and cyclosporine A) are active against fungi, making targeting calcineurin a promising antifungal drug development strategy. Here we summarize current knowledge on calcineurin in yeasts and filamentous fungi, and review the importance of understanding fungal-specific attributes of calcineurin to decipher fungal pathogenesis and develop novel antifungal therapeutic approaches.
Abstract:
A chronic inflammatory microenvironment favors tumor progression through molecular mechanisms that are still incompletely defined. In inflammation-induced skin cancers, IL-1 receptor- or caspase-1-deficient mice, or mice specifically deficient for the inflammasome adaptor protein ASC (apoptosis-associated speck-like protein containing a CARD) in myeloid cells, had reduced tumor incidence, pointing to a role for IL-1 signaling and inflammasome activation in tumor development. However, mice fully deficient for ASC were not protected, and mice specifically deficient for ASC in keratinocytes developed more tumors than controls, suggesting that, in contrast to its proinflammatory role in myeloid cells, ASC acts as a tumor suppressor in keratinocytes. Accordingly, ASC protein expression was lost in human cutaneous squamous cell carcinoma, but not in psoriatic skin lesions. Stimulation of primary mouse keratinocytes or the human keratinocyte cell line HaCaT with UVB induced an ASC-dependent phosphorylation of p53 and expression of p53 target genes. In HaCaT cells, ASC interacted with p53 at the endogenous level upon UVB irradiation. Thus, ASC in different tissues may influence tumor growth in opposite directions: it has a proinflammatory role in infiltrating cells that favors tumor development, but it also limits keratinocyte proliferation in response to noxious stimuli, possibly through p53 activation, which helps to suppress tumors.
Abstract:
In this paper we study network structures in which the possibilities for cooperation are restricted and cannot be described by a cooperative game. The benefits of a group of players depend on how these players are internally connected. One way to represent this type of situation is the so-called reward function, which represents the profits obtainable by the total coalition if links can be used to coordinate agents' actions. The starting point of this paper is the work of Vilaseca et al., who characterized the reward function. We concentrate on situations in which there are costs for establishing communication links. Given a reward function and a cost function, our aim is to analyze under what conditions it is possible to associate a cooperative game with them. We characterize the reward function in network structures with costs for establishing links by means of two conditions, component permanence and component additivity. Finally, an economic application is developed to illustrate the main theoretical result.
Abstract:
In this paper, two probabilistic adaptive algorithms for jointly detecting active users in a DS-CDMA system are reported. The first one, which is based on the theory of hidden Markov models (HMMs) and the Baum–Welch (BW) algorithm, is proposed within the CDMA scenario and compared with the second one, a previously developed Viterbi-based algorithm. Both techniques are completely blind in the sense that no knowledge of the signatures, channel state information, or training sequences is required for any user. Once convergence has been achieved, an estimate of the signature of each user convolved with its physical channel response (CR) and estimated data sequences are provided. This CR estimate can be used to switch to any decision-directed (DD) adaptation scheme. Performance of the algorithms is verified via simulations as well as on experimental data obtained in an underwater acoustics (UWA) environment. In both cases, performance is found to be highly satisfactory, showing the near–far resistance of the analyzed algorithms.
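Since Baum–Welch re-estimation is built on the forward-backward recursions, a toy version of the forward pass for a two-state, two-symbol HMM (all parameters invented, unrelated to the CDMA setting) may clarify the mechanics:

```python
# Toy forward algorithm for a discrete HMM; the parameters are invented
# examples, not estimates from the DS-CDMA experiments.

def forward(obs, pi, A, B):
    """Return P(obs) via the forward recursion alpha_t(j)."""
    n = len(pi)
    # Initialisation: alpha_1(i) = pi_i * B_i(o_1)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # Induction: alpha_{t+1}(j) = (sum_i alpha_t(i) * A_ij) * B_j(o_{t+1})
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    # Termination: P(obs) = sum_j alpha_T(j)
    return sum(alpha)

pi = [0.6, 0.4]                      # initial state distribution
A = [[0.7, 0.3], [0.4, 0.6]]         # state transition matrix
B = [[0.9, 0.1], [0.2, 0.8]]         # emission probabilities for symbols 0/1

print(forward([0, 1, 0], pi, A, B))
```

Baum–Welch would combine these forward variables with the symmetric backward pass to re-estimate `pi`, `A` and `B` until the sequence likelihood stops improving, which is what allows the detector to operate blindly, without training sequences.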