172 results for Nonlinear Decision Functions
Abstract:
What genotype should the scientist specify for conducting a database search to try to find the source of a low-template-DNA (lt-DNA) trace? When the scientist answers this question, he or she makes a decision. Here, we approach this decision problem from a normative point of view by defining a decision-theoretic framework for answering this question for one locus. This framework combines the probability distribution describing the uncertainty over the trace's donor's possible genotypes with a loss function describing the scientist's preferences concerning false exclusions and false inclusions that may result from the database search. According to this approach, the scientist should choose the genotype designation that minimizes the expected loss. To illustrate the results produced by this approach, we apply it to two hypothetical cases: (1) the case of observing one peak for allele xi on a single electropherogram, and (2) the case of observing one peak for allele xi on one replicate, and a pair of peaks for alleles xi and xj, i ≠ j, on a second replicate. Given that the probabilities of allele drop-out are defined as functions of the observed peak heights, the threshold values marking the turning points when the scientist should switch from one designation to another are derived in terms of the observed peak heights. For each case, sensitivity analyses show the impact of the model's parameters on these threshold values. The results support the conclusion that the procedure should not focus on a single threshold value for making this decision for all alleles, all loci and in all laboratories.
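The decision rule described above, choosing the designation that minimizes expected loss, can be sketched in a few lines. The genotypes, posterior probabilities and loss values below are hypothetical illustrations of the mechanics, not the paper's actual model.

```python
# Sketch of the expected-loss decision rule for a single-locus genotype
# designation. All numbers below are hypothetical.

def expected_loss(designation, posterior, loss):
    """Expected loss of a designation, averaging the loss over the
    posterior distribution of the trace donor's true genotype."""
    return sum(p * loss(designation, true_g) for true_g, p in posterior.items())

def best_designation(candidates, posterior, loss):
    """Choose the designation that minimizes expected loss."""
    return min(candidates, key=lambda d: expected_loss(d, posterior, loss))

# Posterior over the donor's genotype given a single peak for allele x_i:
# either a homozygote (x_i, x_i), or a heterozygote whose second allele
# dropped out (probabilities here are made up).
posterior = {("xi", "xi"): 0.7, ("xi", "other"): 0.3}

# Asymmetric loss: designating the homozygote when the donor is in fact
# heterozygous risks a false exclusion, penalized here 10x more heavily
# than the false inclusions risked by the broader designation.
def loss(designation, true_genotype):
    if designation == true_genotype:
        return 0.0
    return 10.0 if designation == ("xi", "xi") else 1.0

candidates = [("xi", "xi"), ("xi", "other")]
choice = best_designation(candidates, posterior, loss)
```

With these made-up values the broader designation wins; shifting the posterior (e.g. via drop-out probabilities derived from peak heights) moves the decision across a threshold, which is exactly the turning point the paper derives analytically.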
Abstract:
The purpose of this study was to evaluate longitudinally, using the Iowa Gambling Task (IGT), the dynamics of decision-making capacity at a two-year interval (median: 2.1 years) in a group of patients with multiple sclerosis (MS) (n = 70) and minor neurological disability [Expanded Disability Status Scale (EDSS) < or = 2.5 at baseline]. Cognition (memory, executive functions, attention), behavior, handicap, and perceived health status were also investigated. Standardized change scores [(score at retest-score at baseline)/standard deviation of baseline score] were computed. Results showed that IGT performances decreased from baseline to retest (from 0.3, SD = 0.4 to 0.1, SD = 0.3, p = .005). MS patients who worsened in the IGT were more likely to show a decreased perceived health status and emotional well-being (SEP-59; p = .05 for both). Relapsing rate, disability progression, cognitive, and behavioral changes were not associated with decreased IGT performances. In conclusion, decline in decision making can appear as an isolated deficit in MS.
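The standardized change score defined above is a one-line computation; the numbers plugged in below are the group means and baseline SD reported in the abstract, used purely for illustration.

```python
def standardized_change(baseline, retest, baseline_sd):
    """Standardized change score: (retest - baseline) / SD of baseline."""
    return (retest - baseline) / baseline_sd

# IGT net score dropping from 0.3 (baseline) to 0.1 (retest), baseline SD 0.4
z = standardized_change(baseline=0.3, retest=0.1, baseline_sd=0.4)  # -> -0.5
```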
Abstract:
Modeling concentration-response functions has become extremely popular in ecotoxicology during the last decade, because a model captures the full response pattern of a given substance. Reliable modeling, however, is data-hungry, which conflicts with the current trend in ecotoxicology of reducing, for cost and ethical reasons, the amount of data produced during an experiment. It is therefore crucial to choose experimental designs in a cost-effective manner. In this paper, we propose to use the theory of locally D-optimal designs to determine the set of concentrations to be tested so that the parameters of the concentration-response function can be estimated with high precision. We illustrate this approach by determining the locally D-optimal designs for estimating the toxicity of the herbicide dinoseb to daphnids and algae. The results show that the number of concentrations to be tested is often equal to the number of parameters, and that the chosen concentrations are often related to the parameters' meaning, i.e. they are located close to them. Furthermore, the locally D-optimal design often has the minimal number of support points and is not very sensitive to small changes in the nominal values of the parameters. To reduce the experimental cost and the use of test organisms, especially in long-term studies, reliable nominal values may therefore be fixed based on prior knowledge and literature research instead of on preliminary experiments.
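The D-optimality idea can be illustrated numerically: score a candidate set of concentrations by the determinant of the Fisher information matrix at nominal parameter values. The log-logistic curve, nominal values and candidate designs below are invented for illustration; they are not the dinoseb results.

```python
import numpy as np

# Hedged sketch of the locally D-optimal criterion for a hypothetical
# two-parameter log-logistic concentration-response curve
# r(c) = 1 / (1 + (c/ec50)**slope). Nominal values and designs are made up.

def response(c, ec50, slope):
    return 1.0 / (1.0 + (c / ec50) ** slope)

def gradient(c, theta, eps=1e-6):
    """Central-difference gradient of the response w.r.t. (ec50, slope)."""
    g = np.empty(2)
    for i in range(2):
        t_hi, t_lo = list(theta), list(theta)
        t_hi[i] += eps
        t_lo[i] -= eps
        g[i] = (response(c, *t_hi) - response(c, *t_lo)) / (2 * eps)
    return g

def d_criterion(concentrations, theta):
    """det of the normalized Fisher information matrix; larger is better."""
    M = np.zeros((2, 2))
    for c in concentrations:
        g = gradient(c, theta)
        M += np.outer(g, g) / len(concentrations)
    return np.linalg.det(M)

theta = (1.0, 2.0)  # nominal (EC50, slope), e.g. from prior knowledge

# Two support points placed in the informative region of the curve, versus
# a naive evenly spaced design using more concentrations.
focused = [0.6, 1.7]
naive = [0.2, 0.9, 1.6, 2.3, 3.0]
```

For these made-up values the two-point focused design beats the five-point naive one, echoing the abstract's finding that the number of support points often equals the number of parameters.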
Abstract:
Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question: how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task, the problem being that the variance process is non-observable. Several estimation methodologies deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps, and thus it became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of the stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equations are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index with a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained: in the absence of a benchmark or any ground for comparison, there is no way to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets.
The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question quickly arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, owing to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can thus be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data.
It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, owing to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
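The core of the ECF approach, matching the empirical characteristic function of the data to the model's closed-form characteristic function, can be sketched in miniature. A Gaussian model stands in here for the far richer stochastic-volatility jump-diffusion characteristic function, and a crude grid search replaces a real optimizer; everything in the block is an illustrative assumption.

```python
import numpy as np

# Toy version of characteristic-function matching: estimate (mu, sigma) of a
# Gaussian sample by minimizing the discretized integrated squared distance
# between the empirical CF and the model CF exp(i*t*mu - (sigma*t)**2 / 2).

rng = np.random.default_rng(0)
sample = rng.normal(loc=1.0, scale=2.0, size=20_000)

t = np.linspace(-2.0, 2.0, 81)                       # grid approximating the integral
ecf = np.exp(1j * np.outer(t, sample)).mean(axis=1)  # empirical CF on the grid

def model_cf(t, mu, sigma):
    return np.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)

def distance(mu, sigma):
    """Discretized integrated squared error between empirical and model CF."""
    return np.sum(np.abs(ecf - model_cf(t, mu, sigma)) ** 2)

# Crude grid search in place of a numerical optimizer.
grid = [(mu, sigma) for mu in np.linspace(0.0, 2.0, 41)
                    for sigma in np.linspace(1.0, 3.0, 41)]
mu_hat, sigma_hat = min(grid, key=lambda p: distance(*p))
```

The thesis's estimator works on the same principle but matches the joint (bi- or three-dimensional) unconditional characteristic function of the latent-variance model rather than a one-dimensional Gaussian CF.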
Abstract:
Rhythmic activity plays a central role in neural computations and brain functions ranging from homeostasis to attention, as well as in neurological and neuropsychiatric disorders. Despite this pervasiveness, little is known about the mechanisms whereby the frequency and power of oscillatory activity are modulated, and how they reflect the inputs received by neurons. Numerous studies have reported input-dependent fluctuations in peak frequency and power (as well as couplings across these features). However, it remains unresolved what mediates these spectral shifts among neural populations. Extending previous findings regarding stochastic nonlinear systems and experimental observations, we provide analytical insights regarding oscillatory responses of neural populations to stimulation from either endogenous or exogenous origins. Using a deceptively simple yet sparse and randomly connected network of neurons, we show how spiking inputs can reliably modulate the peak frequency and power expressed by synchronous neural populations without any changes in circuitry. Our results reveal that a generic, nonlinear and input-induced mechanism can robustly mediate these spectral fluctuations, and thus provide a framework in which inputs to the neurons bidirectionally regulate both the frequency and power expressed by synchronous populations. Theoretical and computational analysis showed that the ensuing spectral fluctuations reflect the underlying dynamics of the input stimuli driving the neurons. Our results provide insights regarding a generic mechanism supporting spectral transitions observed across cortical networks and spanning multiple frequency bands.
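The quantities at stake here, peak frequency and peak power of an oscillating signal, are easy to make concrete. The sketch below fakes a "population signal" as a noisy sinusoid whose frequency and amplitude depend on a hypothetical input level, then reads both features off a periodogram; it illustrates the measurement, not the network model of the paper.

```python
import numpy as np

# Illustration: input-dependent shifts in the peak frequency and power of a
# periodogram. The signal model (input shifts a 10 Hz oscillation upward and
# scales its amplitude) is an assumption made for this sketch.

def peak_frequency_and_power(signal, fs):
    """Location and height of the largest periodogram peak (DC excluded)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    k = np.argmax(power[1:]) + 1          # skip the DC component
    return freqs[k], power[k]

fs = 1000.0                               # sampling rate, Hz
tt = np.arange(0, 10, 1.0 / fs)           # 10 s of data
rng = np.random.default_rng(1)

def population_signal(input_level):
    # Hypothetical input effect: frequency 10 -> 10 + 5*input Hz,
    # amplitude 1 -> 1 + input, plus background noise.
    f = 10.0 + 5.0 * input_level
    amp = 1.0 + input_level
    return amp * np.sin(2 * np.pi * f * tt) + rng.normal(0, 0.5, tt.size)

f_low, p_low = peak_frequency_and_power(population_signal(0.0), fs)
f_high, p_high = peak_frequency_and_power(population_signal(1.0), fs)
```

Raising the input level moves the spectral peak from about 10 Hz to about 15 Hz and increases its power, the kind of joint frequency-power modulation the abstract describes.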
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies

Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition, modeling and prediction to automatic data mapping. They are competitive in efficiency with geostatistical models in low-dimensional geographical spaces but are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for geostatistical analysis of anisotropic spatial correlations which helps to understand the presence of spatial patterns, at least those described by two-point statistics.
A machine learning approach for ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for solving this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed during the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
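At its core, the GRNN used for automatic mapping is Nadaraya-Watson kernel regression: a prediction is a distance-weighted average of all training values, governed by a single smoothing parameter. The coordinates and measurements below are made up purely to show the mechanics.

```python
import numpy as np

# Minimal GRNN-style (Nadaraya-Watson) spatial interpolation sketch.
# Station locations and measured values are hypothetical.

def grnn_predict(x_train, y_train, x_query, bandwidth):
    """Gaussian-kernel distance-weighted average of the training targets."""
    d2 = np.sum((x_train - x_query) ** 2, axis=1)   # squared distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))        # kernel weights
    return np.sum(w * y_train) / np.sum(w)

# Four hypothetical monitoring stations (x, y) with measured values.
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([1.0, 2.0, 3.0, 4.0])

# Query the centre of the square; by symmetry all weights are equal,
# so the estimate is the plain mean of the four values, 2.5.
estimate = grnn_predict(stations, values, np.array([0.5, 0.5]), bandwidth=0.5)
```

In practice the bandwidth is the only parameter to tune (e.g. by cross-validation), which is what makes the model attractive for automatic mapping.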
Abstract:
We all make decisions of varying levels of importance every day. Because making a decision implies that there are alternative choices to consider, almost every decision involves some conflict or dissatisfaction. Traditional economic models assume that a person weighs the positive and negative outcomes of each option and, based on all these inferences, determines which option is best for that particular situation. However, individuals rather act as irrational agents and tend to deviate from these rational choices. They instead evaluate the outcomes' subjective value: when facing a risky choice leading to losses, people are inclined to prefer risk over certainty, while when facing a risky choice leading to gains, they often avoid taking risks and choose the most certain option. It is now assumed that decision making is balanced between deliberative and emotional components. Distinct neural regions underpin these factors: the deliberative pathway, which corresponds to executive functions, involves activation of the prefrontal cortex, while the emotional pathway tends to activate the limbic system. These circuits appear to be altered in individuals with ADHD and result, among other things, in impaired decision-making capacities. Their impulsive and inattentive behaviors are likely the cause of their irrational attitude towards risk taking. One possible solution is to administer a drug treatment to these individuals, with the knowledge that it might have several side effects; an alternative treatment relying on cognitive rehabilitation might therefore be appropriate. This project was thus aimed at investigating whether an intensive working memory training could have a spillover effect on decision making in adults with ADHD and in age-matched healthy controls.
We designed a decision-making task in which participants had to select an amount to gamble, with a one-in-three chance of winning four times the chosen amount; otherwise they lost their investment. Their performances were recorded using electroencephalography before and after a one-month Dual N-Back training, and the possible near- and far-transfer effects were investigated. Overall, we found that performance during the gambling task was modulated by personality factors and by the severity of the symptoms at the pretest session. At posttest, we found that all individuals demonstrated an improvement on the Dual N-Back and on similar untrained dimensions. In addition, we discovered that not only did the adults with ADHD show a stable decrease of the symptomatology, as evaluated by the CAARS inventory, but this reduction was also detected in the control sample. Event-Related Potential (ERP) data also point to a change within the prefrontal and parietal cortices. These results suggest that cognitive remediation can be effective in adults with ADHD as well as in healthy controls. An important complement to this work would be an examination of the data with regard to the attentional networks, which could reinforce the conclusion that complex programs covering several dimensions of executive function are not required: working memory training alone can be sufficient.
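The payoff structure of the gambling task described above has a positive expected value, which is worth making explicit. Whether "win four times the chosen amount" means a net gain of 4a or a gross payout of 4a is an interpretation; a net gain is assumed in this sketch.

```python
# Expected value of the task's gamble: with probability 1/3 the participant
# gains four times the stake (assumed net), otherwise the stake is lost.

def expected_value(stake, p_win=1/3, multiplier=4):
    return p_win * multiplier * stake - (1 - p_win) * stake

ev = expected_value(3.0)  # -> 2.0
```

Under this reading every stake has expected value (2/3)·stake, so a risk-neutral agent should always stake the maximum; deviations from that strategy are what make the task informative about risk attitudes.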
Abstract:
In rats, neonatal treatment with monosodium L-glutamate (MSG) induces several metabolic and neuroendocrine abnormalities, which result in hyperadiposity. No data exist, however, regarding neuroendocrine, immune and metabolic responses to acute endotoxemia in the MSG-damaged rat. We studied the consequences of MSG treatment during the acute phase response of inflammatory stress. Neonatal male rats were treated with MSG or vehicle (controls, CTR) and studied at age 90 days. Pituitary, adrenal, adipo-insular axis, immune, metabolic and gonadal functions were explored before and up to 5 h after a single sub-lethal i.p. injection of bacterial lipopolysaccharide (LPS; 150 microg/kg). Our results showed that, during the acute phase response of inflammatory stress in MSG rats: (1) the corticotrope-adrenal, leptin, insulin and triglyceride responses were higher than in CTR rats, (2) the pro-inflammatory (TNFalpha) cytokine response was impaired and the anti-inflammatory (IL-10) cytokine response was normal, and (3) changes in peripheral estradiol and testosterone levels after LPS varied as in CTR rats. These data indicate that metabolic and neuroendocrine-immune functions are altered in MSG-damaged rats. Our study also suggests that the enhanced corticotrope-corticoadrenal activity in MSG animals could be responsible, at least in part, for the immune and metabolic derangements characterizing hypothalamic obesity.
Abstract:
INTRODUCTION: Hip fractures are responsible for excessive mortality, decreasing the 5-year survival rate by about 20%. From an economic perspective, they represent a major source of expense, with direct costs in hospitalization, rehabilitation, and institutionalization. The incidence rate sharply increases after the age of 70, but it can be reduced in women aged 70-80 years by therapeutic interventions. Recent analyses suggest that the most efficient strategy is to implement such interventions in women at the age of 70 years. As several guidelines recommend bone mineral density (BMD) screening of postmenopausal women with clinical risk factors, our objective was to assess the cost-effectiveness of two screening strategies applied to elderly women aged 70 years and older. METHODS: A cost-effectiveness analysis was performed using decision-tree analysis and a Markov model. Two alternative strategies, one measuring BMD of all women, and one measuring BMD only of those having at least one risk factor, were compared with the reference strategy "no screening". Cost-effectiveness ratios were measured as cost per year gained without hip fracture. Most probabilities were based on data observed in the EPIDOS, SEMOF and OFELY cohorts. RESULTS: In this model, which is mostly based on observed data, the strategy "screen all" was more cost-effective than "screen women at risk". For one woman screened at the age of 70 and followed for 10 years, the incremental (additional) cost-effectiveness ratios of these two strategies compared with the reference were 4,235 euros and 8,290 euros, respectively. CONCLUSION: The results of this model, under the assumptions described in the paper, suggest that in women aged 70-80 years, screening all women with dual-energy X-ray absorptiometry (DXA) would be more effective than no screening or screening only women with at least one risk factor.
Cost-effectiveness studies based on decision-analysis trees may be useful tools for helping decision makers, and further models based on different assumptions should be developed to improve the level of evidence on cost-effectiveness ratios of the usual screening strategies for osteoporosis.
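The incremental cost-effectiveness ratio (ICER) behind the figures quoted above is extra cost divided by extra effect relative to the reference strategy. The costs and effects below are hypothetical placeholders chosen only to illustrate the arithmetic, not outputs of the EPIDOS/SEMOF/OFELY-based model.

```python
# ICER sketch: additional cost per additional hip-fracture-free year,
# relative to the "no screening" reference. All numbers are hypothetical.

def icer(cost, effect, ref_cost, ref_effect):
    """Incremental cost-effectiveness ratio versus a reference strategy."""
    return (cost - ref_cost) / (effect - ref_effect)

# Hypothetical example: "screen all" costs 900 euros more per woman and
# yields 0.2125 extra fracture-free years over the 10-year horizon.
ratio = icer(cost=1200.0, effect=9.8125, ref_cost=300.0, ref_effect=9.6)
# -> roughly 4,235 euros per fracture-free year gained
```

A strategy dominates when it has both lower cost and greater effect; otherwise the ICER is compared against a willingness-to-pay threshold.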
Abstract:
INTRODUCTION: This study sought to increase understanding of women's thoughts and feelings about decision making and the experience of subsequent pregnancy following stillbirth (intrauterine death after 24 weeks' gestation). METHODS: Eleven women were interviewed, 8 of whom were pregnant at the time of the interview. Modified grounded theory was used to guide the research methodology and to analyze the data. RESULTS: A model was developed to illustrate women's experiences of decision making in relation to subsequent pregnancy and of subsequent pregnancy itself. DISCUSSION: The results of the current study have significant implications for women who have experienced stillbirth and the health professionals who work with them. Based on the model, women may find it helpful to discuss their beliefs about healing, and health professionals can provide support with this in mind. Women and their partners may also benefit from explanations and support regarding the potentially conflicting emotions they may experience during this time.
Abstract:
The majority of diseases in the retina are caused by genetic mutations affecting the development and function of photoreceptor cells. The transcriptional networks directing these processes are regulated by genes such as nuclear hormone receptors. The nuclear hormone receptor gene Rev-erb alpha/Nr1d1 has been widely studied for its role in the circadian cycle and cell metabolism; however, its role in the retina is unknown. In order to understand the role of Rev-erb alpha/Nr1d1 in the retina, we evaluated the effects of loss of Nr1d1 on the developing retina and its co-regulation with the photoreceptor-specific nuclear receptor gene Nr2e3 in the developing and mature retina. Knock-down of Nr1d1 expression in the developing retina results in pan-retinal spotting and reduced retinal function by electroretinogram. Our studies show that NR1D1 protein is co-expressed with NR2E3 in the outer neuroblastic layer of the developing mouse retina. In the adult retina, NR1D1 is expressed in the ganglion cell layer and is co-expressed with NR2E3 in the outer nuclear layer, within rods and cones. Several genes co-targeted by NR2E3 and NR1D1 were identified, including Nr2c1, Recoverin, Rgr, Rarres2, Pde8a, and Nupr1. We examined the expression of Nr1d1 and Nr2e3 over a twenty-four-hour period and observed that both nuclear receptors cycle in a similar manner. Taken together, these studies reveal a novel role for Nr1d1, in conjunction with its cofactor Nr2e3, in regulating transcriptional networks critical for photoreceptor development and function.
Abstract:
Cerebral microangiopathy (CMA) has been associated with executive dysfunction and fronto-parietal neural network disruption. Advances in magnetic resonance imaging allow more detailed analyses of gray (e.g., voxel-based morphometry-VBM) and white matter (e.g., diffusion tensor imaging-DTI) than traditional visual rating scales. The current study investigated patients with early CMA and healthy control subjects with all three approaches. Neuropsychological assessment focused on executive functions, the cognitive domain most discussed in CMA. The DTI and age-related white matter changes rating scales revealed convergent results showing widespread white matter changes in early CMA. Correlations were found in frontal and parietal areas exclusively with speeded, but not with speed-corrected executive measures. The VBM analyses showed reduced gray matter in frontal areas. All three approaches confirmed the hypothesized fronto-parietal network disruption in early CMA. Innovative methods (DTI) converged with results from conventional methods (visual rating) while allowing greater spatial and tissue accuracy. They are thus valid additions to the analysis of neural correlates of cognitive dysfunction. We found a clear distinction between speeded and nonspeeded executive measures in relationship to imaging parameters. Cognitive slowing is related to disease severity in early CMA and therefore important for early diagnostics.
Abstract:
Fatty acid degradation in most organisms occurs primarily via the beta-oxidation cycle. In mammals, beta-oxidation occurs in both mitochondria and peroxisomes, whereas plants and most fungi harbor the beta-oxidation cycle only in the peroxisomes. Although several of the enzymes participating in this pathway in both organelles are similar, some distinct physiological roles have been uncovered. Recent advances in the structural elucidation of numerous mammalian and yeast enzymes involved in beta-oxidation have shed light on the basis of the substrate specificity for several of them. Of particular interest is the structural organization and function of the type 1 and 2 multifunctional enzyme (MFE-1 and MFE-2), two enzymes evolutionarily distant yet catalyzing the same overall enzymatic reactions but via opposite stereochemistry. New data on the physiological roles of the various enzymes participating in beta-oxidation have been gathered through the analysis of knockout mutants in plants, yeast and animals, as well as by the use of polyhydroxyalkanoate synthesis from beta-oxidation intermediates as a tool to study carbon flux through the pathway. In plants, both forward and reverse genetics performed on the model plant Arabidopsis thaliana have revealed novel roles for beta-oxidation in the germination process that is independent of the generation of carbohydrates for growth, as well as in embryo and flower development, and the generation of the phytohormone indole-3-acetic acid and the signal molecule jasmonic acid.
Abstract:
BACKGROUND: We previously reported that myeloid cells can induce mucosal healing in a mouse model of acute colitis. Promotion of mucosal repair is becoming a major goal in the treatment of Crohn's disease. Our aim in this study was to investigate the pro-repair function of myeloid cells in healthy donors (HD) and Crohn's disease patients (CD). METHODS: Peripheral blood mononuclear cells (PBMC) from HD and CD patients were isolated from blood samples by Ficoll density gradient. Monocytic CD14+ cells were positively selected by the MACS procedure and then differentiated ex vivo into macrophages (Mφ). The repair function of PBMC, CD14+ monocytic cells, and macrophages was evaluated in an in vitro wound healing assay. RESULTS: PBMC and CD14+ myeloid cells from HD and CD were not able to repair at any tested cell concentration. Remarkably, HD Mφ were able to induce wound healing only at high concentration (10^5 added Mφ), but, if activated with heat-killed bacteria, they were able to repair even at very low concentration. By contrast, non-activated CD Mφ were unable to promote healing at any concentration, but this function was restored upon activation. CONCLUSION: We showed that CD Mφ in their steady state, unlike HD Mφ, are defective in promoting wound healing. Our results are in keeping with the current theory of CD as an innate immunodeficiency. Defective Mφ may be responsible for the mucosal repair defects in CD patients and for the subsequent chronic activation of the adaptive immune response.