154 results for Bayesian approaches
Abstract:
Angiogenesis plays a key role in tumor growth and cancer progression. TIE-2-expressing monocytes (TEM) have been reported to account critically for tumor vascularization and growth in mouse tumor models, but the molecular basis of their pro-angiogenic activity is largely unknown. Moreover, differences in pro-angiogenic activity between blood-circulating and tumor-infiltrating TEM in human patients have not been established to date, hindering the identification of specific targets for therapeutic intervention. In this work, we investigated these differences and the phenotypic reversal of breast tumor pro-angiogenic TEM to a weakly pro-angiogenic phenotype by combining Boolean modelling and experimental approaches. Firstly, we show that in breast cancer patients the pro-angiogenic activity of TEM increases drastically from blood to tumor, suggesting that the tumor microenvironment shapes the highly pro-angiogenic phenotype of TEM. Secondly, we predicted in silico all minimal perturbations transitioning the highly pro-angiogenic phenotype of tumor TEM to the weakly pro-angiogenic phenotype of blood TEM and vice versa. The in silico predicted perturbations were validated experimentally using patient TEM. In addition, gene expression profiling of TEM transitioned to a weakly pro-angiogenic phenotype confirmed that TEM are plastic cells that can be reverted to immunologically potent monocytes. Finally, relapse-free survival analysis showed a statistically significant difference between patients whose tumors had high versus low expression of the genes encoding the transitioning proteins detected in silico and validated on patient TEM. In conclusion, the inferred TEM regulatory network accurately captured experimental TEM behavior and highlighted crosstalk between specific angiogenic and inflammatory signaling pathways of outstanding importance for controlling their pro-angiogenic activity.
The results showed the successful in vitro reversion of this activity by perturbation of in silico predicted target genes in tumor-derived TEM, and indicated that targeting tumor TEM plasticity may constitute a novel and valid therapeutic strategy in breast cancer.
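The "minimal perturbation" search described above can be illustrated on a toy synchronous Boolean network. The node names and update rules below are invented for the sketch (they are not the inferred TEM network): we look for the single-node knockouts that move an all-on "tumor-like" attractor to the all-off "blood-like" attractor.

```python
# Toy synchronous Boolean network (hypothetical nodes/rules, for illustration only).
rules = {
    "VEGF": lambda s: s["NFKB"] or s["ANG"],
    "NFKB": lambda s: s["NFKB"],
    "ANG":  lambda s: s["VEGF"] and s["NFKB"],
}
nodes = list(rules)

def attractor(state, clamp=None):
    """Iterate the synchronous update until a previously seen state recurs."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = {n: bool(rules[n](state)) for n in nodes}
        if clamp:                      # a perturbation = clamping a node's value
            state.update(clamp)
    return state

# "Tumor-like" phenotype: all nodes on; "blood-like" phenotype: all nodes off.
tumor = {n: True for n in nodes}
blood = {n: False for n in nodes}

# Minimal (single-node) knockouts that transition tumor -> blood, the analogue
# of the minimal perturbations predicted in silico in the abstract.
minimal_hits = [n for n in nodes if attractor(dict(tumor), {n: False}) == blood]
```

In this toy network only the self-sustaining node qualifies; real analyses enumerate multi-node perturbations over the inferred network in the same spirit.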
Abstract:
The peroxisome proliferator-activated receptors (PPARs) are a group of nuclear receptors that function as transcription factors regulating the expression of genes involved in cellular differentiation, development, metabolism and also tumorigenesis. Three PPAR isotypes (α, β/δ and γ) have been identified, among which PPARβ/δ is the most difficult to examine functionally, owing to its tissue-specific diversity in cell fate determination, energy metabolism and housekeeping activities. PPARβ/δ acts in both a ligand-dependent and a ligand-independent manner. The specific type of regulation, activation or repression, is determined by many factors, among which the type of ligand, the presence or absence of PPARβ/δ-interacting corepressor or coactivator complexes, and PPARβ/δ protein post-translational modifications play major roles. Recently, new global approaches to the study of nuclear receptors have made it possible to evaluate their molecular activity in a more systemic fashion, rather than digging deeply into a single pathway or function. This systemic approach is ideally suited to studying PPARβ/δ, owing to its ubiquitous expression in various organs and its overlapping and tissue-specific transcriptomic signatures. The aim of the present review is to describe in detail the diversity of PPARβ/δ function, focusing on the information gained at the systemic level, and on the global and unbiased approaches that combine a systems view with molecular understanding.
Abstract:
The design of therapeutic cancer vaccines aims at inducing high numbers of potent T cells that are able to target and eradicate malignant cells. This calls for close collaboration between cells of the innate immune system, in particular dendritic cells (DCs), and cells of the adaptive immune system, notably CD4+ helper T cells and CD8+ cytotoxic T cells. Therapeutic vaccines are aided by adjuvants, which can be, for example, Toll-like receptor agonists or agents promoting the cytosolic delivery of antigens, among others. Vaccination with long synthetic peptides (LSPs) is a promising strategy, as the requirement for their intracellular processing will mainly target LSPs to professional antigen-presenting cells (APCs), hence avoiding the immune tolerance elicited by the presentation of antigens by non-professional APCs. The unique antigen cross-processing and cross-presentation activity of DCs plays an important role in eliciting antitumour immunity, given that antigens from engulfed dead tumour cells require this distinct biological process to be processed and presented to CD8+ T cells in the context of MHC class I molecules. DCs expressing the XCR1 chemokine receptor are characterised by their superior capability for antigen cross-presentation and for priming highly cytotoxic T lymphocyte (CTL) responses. Recently, XCR1 was found to be expressed also in tissue-resident DCs in humans, with a transcriptional profile similar to that of cross-presenting murine DCs. This sheds light on the value of harnessing this subtype of XCR1+ cross-presenting DCs for therapeutic cancer vaccination.
In this study, we explored ways of adjuvanting and optimising LSP therapeutic vaccinations by the use, in Part I, of the XCL1 chemokine that selectively binds to the XCR1 receptor, as a means to target antigen to the cross-presenting XCR1+ DCs; and in Part II, by the inclusion in the LSP vaccine formulation of QS21, a saponin with adjuvant activity as well as the ability to promote cytosolic delivery of LSP antigens owing to its intrinsic cell membrane insertion activity. In Part I, we designed and produced XCL1-(OVA LSP)-Fc fusion proteins, and showed that their binding to XCR1+ DCs mediates their chemoattraction. In addition, therapeutic vaccinations adjuvanted with XCL1-(OVA LSP)-Fc fusion proteins significantly enhanced the OVA-specific CD8+ T cell response, and led to complete tumour regression in the EL4-OVA model and significant control of tumour growth in the B16-OVA tumour model. With the aim of optimising the co-delivery of LSP antigen and XCL1 to skin-draining lymph nodes, we also tested immunisations using nanoparticle (NP)-conjugated OVA LSP in the presence or absence of the XCL1 chemokine. The NP-mediated delivery of LSP potentiated the CTL response seen in the blood of vaccinated mice, and the NP-OVA LSP vaccine in the presence of XCL1 led to higher blood frequencies of OVA-specific memory-precursor effector cells. Nevertheless, in these settings, the addition of XCL1 to the NP-OVA LSP vaccine formulation did not increase its antitumour therapeutic effect. In Part II, we assessed in HLA-A2/DR1 mice the immunogenicity of the Melan-A A27L LSP or the Melan-A26-35 A27L short synthetic peptide (SSP) used in conjunction with the saponin adjuvant QS21, aiming to identify a potent adjuvant formulation that elicits a quantitatively and qualitatively strong immune response to tumour antigens.
We showed a high CTL immune response elicited by the use of Melan-A LSP or SSP with QS21, both of which exerted similar killing capacity upon in vivo transfer of target cells expressing the Melan-A peptide in the context of HLA-A2 molecules. However, the response generated by LSP immunisation comprised higher percentages of CD8+ T cells of the central memory phenotype (CD44hi CD62L+ and CCR7+ CD62L+) than that of SSP immunisation, and most importantly, the strong LSP+QS21 response was strictly CD4+ T cell-dependent, as shown upon CD4 T cell depletion. Altogether, these results suggest that both XCL1 and QS21 may enhance the ability of LSP to prime CD8-specific T cell responses and promote a long-term memory response. Therefore, these observations may have important implications for the design of protein- or LSP-based cancer vaccines for specific immunotherapy of cancer. -- Therapeutic cancer vaccines aim to induce a strong and durable immune response against residual cancer cells. This response requires collaboration between the innate immune system, in particular dendritic cells (DCs), and the adaptive immune system, namely CD4 helper and CD8 cytotoxic T lymphocytes. The development of adjuvants and pathogen-mimicking molecules, such as TLR ligands or other agents facilitating antigen internalization, is essential to break the immune system's tolerance of cancer cells and generate an effector and memory antitumour response. The use of long synthetic peptides (LSPs) is a promising approach because their presentation as antigens requires their internalization and processing by dendritic cells (DCs), which are best placed to avoid immune tolerance.
Recently, a subpopulation of DCs expressing the XCR1 receptor was described as having a superior capacity for antigen cross-presentation, hence the interest in developing vaccines targeting XCR1-expressing DCs. During my doctoral thesis, I explored different approaches to optimise LSP vaccines. The first part aimed at targeting XCR1+ DCs using the XCL1 chemokine specific for the XCR1 receptor, either as an XCL1-OVA LSP-Fc fusion protein or associated with nanoparticles. The second part consisted in testing the combination of LSPs with the saponin-derived adjuvant QS21 in order to optimise the cytosolic delivery of the long peptides. The XCL1-OVA-Fc fusion proteins developed in the first part of my work demonstrated specific binding to XCR1+ DCs together with chemoattractant capacity. When included in an immunisation of mice bearing established tumours, the XCL1-OVA LSP-Fc and XCL1-Fc plus OVA LSP fusion proteins induced a strong OVA-specific CD8 response allowing complete regression of tumours in the EL4-OVA model and a significant growth delay of B16-OVA tumours. In order to optimise the drainage of LSPs to the lymph nodes, we also tested LSPs covalently bound to nanoparticles, co-injected or not with the XCL1 chemokine. This formulation also elicited a strong CD8 response accompanied by a significant therapeutic effect, but the addition of the XCL1 chemokine provided no further antitumour benefit.
In the second part of my thesis, I compared the immunogenicity of the human Melan-A antigen either as an LSP containing both a CD4 and a CD8 epitope or as a peptide containing only the CD8 epitope (SSP). The peptides were formulated with the QS21 adjuvant and tested in a mouse model transgenic for human MHC class I and II, HLA-A2 and DR1 respectively. The two peptides, LSP and SSP, generated a similarly strong CD8 response associated with an equivalent cytotoxic capacity upon in vivo transfer of target cells presenting the SSP peptide. However, mice immunised with the Melan-A LSP displayed a higher percentage of CD8 T cells with a central memory phenotype (CD44hi CD62L+ and CCR7+ CD62L+) than mice immunised with the SSP, even ten months after immunisation. Moreover, the CD8 response to the Melan-A LSP was strictly dependent on CD4 lymphocytes, in contrast to immunisation with the Melan-A SSP, which was unaffected. Altogether, these results suggest that the XCL1 chemokine and the QS21 adjuvant improve the CD8 response to a long synthetic peptide, thereby favouring the development of a durable antitumour memory response. These observations could be useful for the development of new therapeutic vaccines against tumours.
Abstract:
The variability observed in drug exposure has a direct impact on the overall response to a drug. The largest part of the variability between dose and drug response resides in the pharmacokinetic phase, i.e. in the dose-concentration relationship. Among the possibilities offered to clinicians, Therapeutic Drug Monitoring (TDM; the monitoring of drug concentration measurements) is one of the useful tools to guide pharmacotherapy. TDM aims at optimizing treatments by individualizing dosage regimens based on blood drug concentration measurements. Bayesian calculations, relying on the population pharmacokinetic approach, currently represent the gold-standard TDM strategy. However, this strategy requires expertise and computational assistance, thus limiting its large-scale implementation in routine patient care. The overall objective of this thesis was to implement robust tools to bring Bayesian TDM to clinicians in modern routine patient care. To that end, the aims were (i) to elaborate an efficient and ergonomic computer tool for Bayesian TDM, EzeCHieL; (ii) to provide algorithms for Bayesian forecasting of drug concentrations and for software validation, relying on population pharmacokinetics; and (iii) to address some relevant issues encountered in clinical practice, with a focus on neonates and drug adherence. First, the current state of existing software was reviewed, which allowed us to establish specifications for the development of EzeCHieL. Then, in close collaboration with software engineers, a fully integrated software package, EzeCHieL, was developed. EzeCHieL provides population-based predictions, Bayesian forecasting and an easy-to-use interface. It enables clinicians to assess how expected an observed concentration is in a patient compared with the whole population (via percentiles), to assess the suitability of the predicted concentration relative to the targeted concentration, and to provide dosing adjustment. It thus allows both a priori and a posteriori Bayesian individualization of drug dosing.
Implementation of Bayesian methods requires characterisation of drug disposition and quantification of its variability through a population approach. Population pharmacokinetic analyses were performed and Bayesian estimators were provided for candidate drugs in populations of interest: anti-infective drugs administered to neonates (gentamicin and imipenem). The developed models were implemented in EzeCHieL and also served as a validation tool, by comparing EzeCHieL concentration predictions against predictions from the reference software (NONMEM®). The models used need to be adequate and reliable; for instance, extrapolation from adults or children to neonates is not possible. Therefore, this work proposes models for neonates based on the concept of developmental pharmacokinetics. Patient adherence is also an important concern for drug model development and for a successful outcome of pharmacotherapy. A final study assessed the impact of routine measurement of patient adherence on model definition and TDM interpretation. In conclusion, our results offer solutions to assist clinicians in interpreting blood drug concentrations and to improve the appropriateness of drug dosing in routine clinical practice.
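The core of a posteriori Bayesian forecasting can be sketched as a maximum a posteriori (MAP) estimate that balances a population prior against an observed concentration. All numbers below are illustrative (a one-compartment IV-bolus model with made-up parameters), not a validated model from the thesis or from EzeCHieL.

```python
import math

# One-compartment IV bolus: C(t) = (dose / V) * exp(-(CL / V) * t)
dose, V, t_obs, c_obs = 100.0, 20.0, 8.0, 1.2   # mg, L, h, mg/L (illustrative)
cl_pop, omega = 5.0, 0.3        # population clearance (L/h), lognormal sd (illustrative)
sigma = 0.15                    # residual error sd on the log scale (illustrative)

def neg_log_post(cl):
    """Negative log-posterior for clearance: lognormal prior + lognormal residual."""
    c_pred = (dose / V) * math.exp(-(cl / V) * t_obs)
    prior = (math.log(cl / cl_pop) / omega) ** 2
    lik = (math.log(c_obs / c_pred) / sigma) ** 2
    return 0.5 * (prior + lik)

# Grid search over plausible clearances for the MAP estimate.
grid = [cl_pop * (0.25 + 0.005 * i) for i in range(400)]
cl_map = min(grid, key=neg_log_post)
```

Because the observed trough (1.2 mg/L) is above the population prediction, the MAP clearance is pulled below the population value, and the individualized estimate can then drive the dose adjustment.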
Abstract:
The perceived low levels of genetic diversity, poor interspecific competitive and defensive ability, and loss of dispersal capacities of insular lineages have driven the view that oceanic islands are evolutionary dead ends. Focusing on the Atlantic bryophyte flora distributed across the archipelagos of the Azores, Madeira, the Canary Islands, Western Europe, and northwestern Africa, we used an integrative approach with species distribution modeling and population genetic analyses based on approximate Bayesian computation to determine whether this view applies to organisms with inherently high dispersal capacities. Genetic diversity was found to be higher in island than in continental populations, contributing to mounting evidence that, contrary to theoretical expectations, island populations are not necessarily genetically depauperate. Patterns of genetic variation among island and continental populations consistently fitted those simulated under a scenario of de novo foundation of continental populations from insular ancestors better than those expected if islands represented a sink or a refugium of continental biodiversity. We suggest that the northeastern Atlantic archipelagos have played a key role as stepping stones for transoceanic migrants. Our results challenge the traditional notion that oceanic islands are the end of the colonization road and illustrate the significant role of oceanic islands as reservoirs of novel biodiversity for the assembly of continental floras.
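The scenario comparison above rests on approximate Bayesian computation (ABC): simulate summary statistics under each demographic scenario and keep the simulations that fall close to the observed data. The toy version below (invented scenario names, Gaussian stand-ins for the real demographic simulators) shows rejection-ABC model choice in miniature.

```python
import random

random.seed(1)

# Rejection-ABC model choice in miniature: each "scenario" predicts a summary
# statistic (here, a diversity index) and we accept simulations within a
# tolerance of the observed value. Scenarios and numbers are illustrative.
observed = 0.62

def simulate(scenario):
    # island-source scenario tends to predict higher island diversity
    mu = 0.6 if scenario == "island_source" else 0.4
    return random.gauss(mu, 0.08)

tol = 0.05
counts = {"island_source": 0, "continent_source": 0}
for _ in range(20000):
    m = random.choice(list(counts))          # uniform prior over scenarios
    if abs(simulate(m) - observed) < tol:    # rejection step
        counts[m] += 1

# Posterior scenario probability ~ fraction of accepted simulations per model.
post_island = counts["island_source"] / sum(counts.values())
```

The accepted-simulation counts approximate the posterior odds of the scenarios, which is the logic behind fitting "islands as source" against "islands as sink or refugium".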
Abstract:
PURPOSE: According to estimates, around 230 people die as a result of radon exposure in Switzerland. This public health concern makes reliable methods for indoor radon prediction and mapping necessary in order to improve risk communication to the public. The aim of this study was to develop an automated method to classify lithological units according to their radon characteristics, and to develop mapping and predictive tools in order to improve local radon prediction. METHOD: About 240 000 indoor radon concentration (IRC) measurements in about 150 000 buildings were available for our analysis. The automated classification of lithological units was based on k-medoids clustering via pairwise Kolmogorov distances between the IRC distributions of lithological units. For IRC mapping and prediction we used random forests and Bayesian additive regression trees (BART). RESULTS: The automated classification groups lithological units well in terms of their IRC characteristics. In particular, the IRC differences in metamorphic rocks such as gneiss are well revealed by this method. The maps produced by random forests soundly represent the regional differences in IRCs in Switzerland and improve the spatial detail compared with existing approaches. We could explain 33% of the variation in the IRC data with random forests. Additionally, the variable importance evaluated by random forests shows that building characteristics are less important predictors of IRCs than spatial and geological influences. BART could explain 29% of the IRC variability and produced maps that indicate the prediction uncertainty. CONCLUSION: Ensemble regression trees are a powerful tool for modelling and understanding the multidimensional influences on IRCs. Automatic clustering of lithological units complements this method by facilitating the interpretation of the radon properties of rock types. This study provides an important element for radon risk communication.
Future approaches should consider taking into account further variables, such as soil-gas radon measurements, as well as more detailed geological information.
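The classification step above combines two ingredients that are easy to sketch: the Kolmogorov distance (maximum gap between two empirical CDFs) and k-medoids clustering on the resulting distance matrix. The lognormal samples below are synthetic stand-ins for real IRC data, and the unit names are invented.

```python
import bisect
import random
from itertools import combinations

random.seed(0)

# Synthetic IRC samples per lithological unit (illustrative names and parameters).
units = {
    "gneiss_a":  [random.lognormvariate(5.0, 0.6) for _ in range(300)],
    "gneiss_b":  [random.lognormvariate(5.1, 0.6) for _ in range(300)],
    "limestone": [random.lognormvariate(3.8, 0.5) for _ in range(300)],
    "molasse":   [random.lognormvariate(3.9, 0.5) for _ in range(300)],
}

def ks_distance(x, y):
    """Maximum gap between two empirical CDFs, checked at every sample point."""
    xs, ys = sorted(x), sorted(y)
    cdf = lambda s, v: bisect.bisect_right(s, v) / len(s)
    return max(abs(cdf(xs, v) - cdf(ys, v)) for v in xs + ys)

names = list(units)
d = {(a, b): ks_distance(units[a], units[b]) for a in names for b in names}

# Exhaustive k-medoids (fine for four units): choose the k medoids minimising
# the summed distance of every unit to its closest medoid, then assign units.
k = 2
medoids = min(combinations(names, k),
              key=lambda ms: sum(min(d[n, m] for m in ms) for n in names))
clusters = {n: min(medoids, key=lambda m: d[n, m]) for n in names}
```

With distributions this different, the two gneiss-like units end up together and apart from the two sedimentary units, which is the grouping behaviour the abstract describes.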
Abstract:
Our consumption of groundwater, in particular as drinking water or for irrigation, has increased considerably over the years. Many problems have accordingly emerged, ranging from the prospection of new resources to the remediation of polluted aquifers. Whatever the hydrogeological problem considered, the main challenge remains the characterization of subsurface properties. A stochastic approach is then necessary to represent this uncertainty, by considering multiple geological scenarios and generating a large number of geostatistical realizations. We then run into the main limitation of these approaches, namely the computational cost of simulating complex flow processes for each of these realizations. In the first part of the thesis, this problem is investigated in the context of uncertainty propagation, where an ensemble of realizations is identified as representative of the subsurface properties. To propagate this uncertainty to the quantity of interest while limiting the computational cost, current methods rely on approximate flow models. This allows the identification of a subset of realizations representing the variability of the initial ensemble. The complex flow model is then evaluated only for this subset, and inference is made on the basis of these complex responses. Our objective is to improve the performance of this approach by using all the available information. To this end, the subset of approximate and exact responses is used to construct an error model, which then serves to correct the remaining approximate responses and predict the response of the complex model. This method maximizes the use of the available information without any perceptible increase in computation time. Uncertainty propagation is then more accurate and more robust.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and complex flow models. In the second part of the thesis, this methodology is formalized mathematically by introducing a regression model between the functional responses. As this problem is ill-posed, its dimensionality must be reduced. Here, the novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows the quality of the error model to be diagnosed in this functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model yields a strong reduction in computation time while correctly estimating the uncertainty. Moreover, for each approximate response, a prediction of the complex response is provided by the error model. The concept of a functional error model is therefore relevant for uncertainty propagation, but also for Bayesian inference problems. Markov chain Monte Carlo (MCMC) methods are the most commonly used algorithms for generating geostatistical realizations consistent with the observations. However, these methods suffer from a very low acceptance rate for high-dimensional problems, resulting in a large number of wasted flow simulations. A two-step approach, "two-stage MCMC", has been introduced to avoid unnecessary simulations of the complex model through a preliminary evaluation of the proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for two-stage MCMC.
We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 compared with a classical MCMC implementation. One question remains open: how to choose the size of the training set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy so that, with each new flow simulation, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline-intrusion problem in a coastal aquifer. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we face many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation but also, and maybe even more so, for performing Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in the acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
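The two-stage (delayed-acceptance) MCMC idea can be shown in one dimension: a cheap proxy likelihood screens each proposal, and the expensive "exact" likelihood is only evaluated for proposals that survive the first stage, with the proxy ratio divided out so the exact posterior is still targeted. The two likelihoods below are cartoon stand-ins for the flow proxy and the full flow simulation.

```python
import math
import random

random.seed(2)

def exact_loglik(x):      # stands in for the expensive exact flow model
    return -0.5 * x * x

def proxy_loglik(x):      # biased but cheap approximation of it
    return -0.5 * (1.1 * x) ** 2

x, exact_calls, accepted, n = 0.0, 0, 0, 5000
for _ in range(n):
    prop = x + random.gauss(0.0, 2.0)          # symmetric random-walk proposal
    # Stage 1: screen the proposal with the proxy only (no exact call).
    if math.log(random.random()) >= proxy_loglik(prop) - proxy_loglik(x):
        continue
    # Stage 2: evaluate the exact model; dividing out the proxy ratio keeps
    # the chain exactly invariant for the exact target.
    exact_calls += 1
    a = (exact_loglik(prop) - exact_loglik(x)) - (proxy_loglik(prop) - proxy_loglik(x))
    if math.log(random.random()) < a:
        x, accepted = prop, accepted + 1

stage2_rate = accepted / exact_calls   # acceptance rate among exact evaluations
```

Because poor proposals are discarded at stage 1, the exact model is called far fewer times than there are iterations, and the proposals that do reach stage 2 are accepted at a high rate, which is the mechanism behind the reported acceptance-rate gain.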
Abstract:
Two cost-efficient genome-scale methodologies for assessing DNA methylation are MethylCap-seq and Illumina's Infinium HumanMethylation450 BeadChips (HM450). Objective information regarding the methodology best suited to a specific research question is scant. Therefore, we performed a large-scale evaluation on a set of 70 brain tissue samples, i.e. 65 glioblastomas and 5 non-tumoral tissues. As MethylCap-seq coverage was limited, we focused on the inherent capacity of the methodology to detect methylated loci rather than on a quantitative analysis. MethylCap-seq and HM450 data were dichotomized, and performance was compared using a gold-standard-free Bayesian modelling procedure. While conditional specificity was adequate for both approaches, conditional sensitivity was systematically higher for HM450. In addition, genome-wide characteristics were compared, revealing that HM450 probes identified substantially fewer regions than MethylCap-seq. Although the results indicated that the latter method can detect more potentially relevant DNA methylation, this did not translate into the discovery of more differentially methylated loci between tumours and controls than with HM450. Our results therefore indicate that the two methodologies are complementary, with a higher sensitivity for HM450 and a far larger genome-wide coverage for MethylCap-seq, but also that a more comprehensive character does not automatically imply more significant results in biomarker studies.
Abstract:
The survival of preterm babies has increased over the last few decades. However, disorders associated with preterm birth, known as the oxygen radical diseases of neonatology, such as retinopathy, bronchopulmonary dysplasia, periventricular leukomalacia, and necrotizing enterocolitis, are severe complications related to oxidative stress, which can be defined as an imbalance between the production of reactive oxygen species and antioxidant defenses. Oxidative stress causes lipid, protein, and DNA damage. Preterm infants have decreased antioxidant defenses in response to oxidative challenges, because the physiological increase in antioxidant capacity occurs at the end of gestation, in preparation for the transition to extrauterine life. Therefore, preterm infants are more sensitive to neonatal oxidative stress, notably when supplemental oxygen is being delivered. Furthermore, despite recent advances in the management of neonatal respiratory distress syndrome, controversies persist concerning the oxygen saturation targets that should be used in caring for preterm babies. Identification of adequate biomarkers of oxidative stress in preterm infants, such as 8-iso-prostaglandin F2α and malondialdehyde adducts of hemoglobin, is important to promote specific therapeutic approaches. At present, no therapeutic strategy has been validated for the prevention or treatment of oxidative stress. Breastfeeding should be considered the main measure to improve the antioxidant status of preterm infants. In the last few years, melatonin has emerged in experimental and preliminary human studies as a protective molecule against oxidative stress, with antioxidant and free-radical scavenger roles, giving hope that it can be used in preterm infants in the near future.
Abstract:
Over the past few decades, age estimation of living persons has represented a challenging task for many forensic services worldwide. In general, the process of age estimation includes the observation of the degree of maturity reached by some physical attributes, such as the dentition or several ossification centers. The estimated chronological age, or the probability that an individual belongs to a meaningful class of ages, is then obtained from the observed degree of maturity by means of various statistical methods. Among these methods, those developed in a Bayesian framework offer users the possibility of coherently dealing with the uncertainty associated with age estimation and of assessing, in a transparent and logical way, the probability that an examined individual is younger or older than a given age threshold. Recently, a Bayesian network for age estimation has been presented in the scientific literature; this kind of probabilistic graphical tool may facilitate the use of the probabilistic approach. Probabilities of interest in the network are assigned by means of transition analysis, a parametric statistical model that links chronological age and degree of maturity through specific regression models, such as logit or probit models. Since different regression models can be employed in transition analysis, the aim of this paper is to study the influence of the model on the classification of individuals. The analysis was performed using a dataset on the ossification status of the medial clavicular epiphysis, and the results support the conclusion that the classification of individuals does not depend on the choice of the regression model.
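Transition analysis with a logit model can be sketched compactly: a cumulative logistic curve gives the probability of having reached each ossification stage at a given age, and Bayes' theorem then converts an observed stage into the probability of being older than a threshold. The slope, transition ages and the uniform 10-25-year prior below are invented for illustration, not the fitted clavicle parameters.

```python
import math

slope = 0.8
thresholds = {1: 14.0, 2: 18.0, 3: 22.0}   # illustrative mean ages of entering stages 2..4

def p_stage_given_age(stage, age):
    """P(observed stage | age) under a cumulative logit transition model."""
    def p_ge(s):   # P(stage >= s | age)
        if s <= 1:
            return 1.0
        if s > 4:
            return 0.0
        return 1.0 / (1.0 + math.exp(-slope * (age - thresholds[s - 1])))
    return p_ge(stage) - p_ge(stage + 1)

ages = [10 + 0.1 * i for i in range(151)]          # uniform prior on 10-25 years

def p_adult_given_stage(stage):
    """Bayes: posterior mass on ages >= 18 given the observed stage."""
    post = [p_stage_given_age(stage, a) for a in ages]
    z = sum(post)
    return sum(p for a, p in zip(ages, post) if a >= 18.0) / z

p4 = p_adult_given_stage(4)   # fully fused epiphysis
p1 = p_adult_given_stage(1)   # no ossification of the epiphysis
```

A fully fused stage pushes the posterior P(age ≥ 18) close to one, while an unfused stage pushes it close to zero, which is exactly the threshold question the network is designed to answer.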
Abstract:
In the past few decades, the rise in criminal, civil, and asylum cases involving young people lacking valid identification documents has generated an increased demand for age estimation. The chronological age, or the probability that an individual is older or younger than a given age threshold, is generally estimated by statistical methods based on observations of specific physical attributes. Among these statistical methods, those developed in the Bayesian framework allow users to provide coherent and transparent assignments that fulfill forensic and medico-legal purposes. The application of the Bayesian approach is facilitated by probabilistic graphical tools such as Bayesian networks. The aim of this work is to test the performance of the Bayesian network for age estimation recently presented in the scientific literature in classifying individuals as older or younger than 18 years of age. For these exploratory analyses, a sample describing the ossification status of the medial clavicular epiphysis, available in the scientific literature, was used. The classification results are promising: in the criminal context, the Bayesian network achieved, on average, a rate of correct classifications of approximately 97%, whilst in the civil context the rate is, on average, close to 88%. These results encourage further development and testing of the method to support its practical application in casework.
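The binary question at the heart of these analyses (is the individual older than 18?) reduces to a single application of Bayes' rule. A minimal sketch, with entirely hypothetical stage frequencies, prior, and decision cutoff:

```python
def p_over_threshold(p_stage_given_over, p_stage_given_under, prior_over):
    """Posterior probability of being over the age threshold given an
    observed maturity stage, by Bayes' rule. All inputs hypothetical."""
    num = p_stage_given_over * prior_over
    den = num + p_stage_given_under * (1.0 - prior_over)
    return num / den

# Hypothetical frequencies: a fully fused medial clavicular epiphysis is
# assumed far more common among individuals over 18 than under 18.
post = p_over_threshold(p_stage_given_over=0.80,
                        p_stage_given_under=0.05,
                        prior_over=0.5)

# Classify as over 18 only if the posterior clears a context-dependent
# cutoff; a stricter cutoff suits the criminal context, where false
# "over 18" classifications are the costlier error.
decision = "over 18" if post > 0.95 else "inconclusive"  # post ~ 0.94 here
```

The gap between the abstract's criminal-context and civil-context accuracies reflects exactly this kind of cutoff choice: the stricter the cutoff, the fewer confident (and hence correct-or-incorrect) classifications are issued.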
Abstract:
In recent years, technological advances have allowed manufacturers to implement dual-energy computed tomography (DECT) on clinical scanners. With its unique ability to differentiate basis materials by their atomic number, DECT has opened new perspectives in imaging. DECT has been used successfully in musculoskeletal imaging with applications ranging from detection, characterization, and quantification of crystal and iron deposits, to simulation of noncalcium (improving the visualization of bone marrow lesions) or noniodine images. Furthermore, the data acquired with DECT can be postprocessed to generate monoenergetic images of varying kiloelectron volts, providing new methods for image contrast optimization as well as metal artifact reduction. The first part of this article reviews the basic principles and technical aspects of DECT, including radiation dose considerations. The second part focuses on applications of DECT to musculoskeletal imaging, including gout and other crystal-induced arthropathies, virtual noncalcium images for the study of bone marrow lesions, the study of collagenous structures, applications in computed tomography arthrography, as well as the detection of hemosiderin and metal particles.
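The basis-material decomposition underlying DECT can be illustrated with a toy per-voxel calculation: attenuation measured at two energies yields a 2x2 linear system whose solution gives two effective basis-material densities, which can then be recombined at any target energy to form a virtual monoenergetic value. The coefficients below are made-up placeholders (real values come from standard attenuation tables), so this is a sketch of the arithmetic, not of a clinical reconstruction.

```python
import numpy as np

# Hypothetical attenuation coefficients for two basis materials
# (water, calcium) at the two acquisition energies and at a target
# virtual monoenergetic energy.
mu = np.array([[0.20, 0.35],    # low-kVp:  [water, calcium]
               [0.17, 0.22]])   # high-kVp: [water, calcium]
mu_target = np.array([0.18, 0.26])  # target keV: [water, calcium]

# Measured attenuation of one voxel at low and high energy
measured = np.array([0.26, 0.19])

# Basis decomposition: solve mu @ densities = measured for the two
# effective basis-material densities
densities = np.linalg.solve(mu, measured)

# Virtual monoenergetic value: recombine the densities at the target
# energy (this is what varying the kiloelectron-volt setting changes)
vmi = mu_target @ densities
```

A calcium-heavy voxel yields a large second density component; zeroing that component before recombination is, in spirit, how virtual noncalcium images suppress bone to reveal marrow lesions.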