141 results for Hedge and Offer
Abstract:
PURPOSE: Recurrent head and neck cancer is associated with a poor survival prognosis. Combining surgery and/or radiotherapy and/or chemotherapy yields high toxicity rates, and the duration of such treatment is often not ethically compatible with the expected survival (median survival < 1 year). Normal-tissue tolerance limits the use of reirradiation, whereas stereotactic body radiotherapy (SBRT) can deliver precise irradiation while sparing healthy tissues. After completion of a feasibility study, we report the results of a multicentric study (Lille, Nancy and Nice) of SBRT with cetuximab. The aim of the study was to deliver non-toxic, short-course SBRT (2 weeks) achieving the same local control as that demonstrated with longer protocols. METHODS AND MATERIALS: Patients with an inoperable recurrence, or a new primary tumor in a previously irradiated area, were included (WHO performance status < 3). The reirradiation (RT) dose was 36 Gy in six fractions of 6 Gy prescribed to the 85% isodose line covering 95% of the PTV, with five injections of concomitant cetuximab (CT). All patients had received previous radiotherapy, 85% previous surgery and 48% previous chemotherapy. RESULTS: Between 11/2007 and 08/2010, 60 patients were included (46 men and 14 women); 56 received CT + RT, 3 were not treated and 1 received CT only. Median age was 60 years (range, 42-87); all 56 treated patients had squamous cell carcinoma and received concomitant cetuximab. Mean time between previous radiotherapy and the start of SBRT was 38 months. Cutaneous toxicity was observed in 41 patients. There was one toxic death from hemorrhage and undernutrition. Median follow-up was 11.4 months. At 3 months, the response rate was 58.4% (95% CI: 43.2-72.4%) and the disease control rate was 91.7% (95% CI: 80.0-97.7%). The one-year overall survival rate was 47.5% (95% CI: 30.8-62.4%). CONCLUSION: These results suggest that short-course SBRT with cetuximab is an effective salvage treatment with a good response rate in this poor-prognosis population with previously irradiated head and neck cancer. The treatment is feasible and, with appropriate care to limit dose to critical structures, acute toxicities are acceptable. This combination may become the reference treatment in this population.
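As a quick numerical check, the reported confidence intervals are consistent with an exact (Clopper-Pearson) binomial interval. The sketch below is illustrative only: the abstract gives percentages, not the underlying counts, so the counts here are assumed.

```python
# Illustrative sketch: exact (Clopper-Pearson) binomial CI, the kind of
# interval consistent with the reported 58.4% (43.2-72.4%) response rate.
# The counts below are assumed; the abstract reports only percentages.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Two-sided exact binomial CI for k successes out of n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

print(clopper_pearson(31, 53))  # ~ (0.44, 0.72) for an assumed 31/53 responders
```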
Abstract:
A geophysical and geochemical study was conducted in a fractured carbonate aquifer at Combioula in the southwestern Swiss Alps, with the objective of detecting and characterizing hydraulically active fractures along a 260-m-deep borehole. Hydrochemical analyses and borehole-diameter, temperature and fluid electrical conductivity logging data were integrated in order to relate electrokinetic self-potential signals to groundwater flow inside the fracture network. The results show a generally good, albeit locally variable, correlation between variations in the self-potential signals and variations in temperature, fluid electrical conductivity and borehole diameter. Together with the hydrochemical evidence, which proved critical for the interpretation of the self-potential data, these measurements made it possible not only to detect the hydraulically active fractures but also to characterize them as zones of fluid gain or fluid loss. The results complement the available information from the corresponding litholog and illustrate the potential of electrokinetic self-potential signals, in conjunction with temperature, fluid electrical conductivity and hydrochemical analyses, for the characterization of fractured aquifers, and thus may offer a perspective for an effective quantitative characterization of this increasingly important class of aquifers and geothermal reservoirs.
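For context (not stated in the abstract, but standard in self-potential studies), the electrokinetic coupling that links groundwater flow to the measured voltages is commonly described by the Helmholtz-Smoluchowski coupling coefficient:

```latex
C = \frac{\partial V}{\partial P} = \frac{\epsilon\,\zeta}{\eta\,\sigma_f}
```

where $V$ is the streaming potential, $P$ the pore-fluid pressure, $\epsilon$ the fluid dielectric permittivity, $\zeta$ the zeta potential, $\eta$ the dynamic viscosity and $\sigma_f$ the fluid electrical conductivity. The inverse dependence on $\sigma_f$ is one reason self-potential signals co-vary with the fluid electrical conductivity logs.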
Abstract:
Structures built by animals are a widespread and ecologically important 'extended phenotype'. While their taxonomic diversity has been well described, the factors affecting short-term evolution of building behavior within a species have received little experimental attention. Here we describe how, given the opportunity, wandering Drosophila melanogaster larvae often build long tunnels in agar substrates and embed their pupae within them. Larvae that embed are characterized by a longer egg-to-pupariation developmental time than larvae that pupate on the surface. Assuming that such building behaviors are energetically costly and/or time-consuming, we hypothesized that they should evolve to be less pronounced under resource or time limitation. In accord with this prediction, larvae from populations that had evolved for 160 generations under a regime combining larval malnutrition with limited developmental time dug shorter tunnels than larvae from unselected control populations. However, the proportion of larvae that embedded before pupation did not differ between the malnutrition-adapted and control populations, suggesting that tunnel length and the likelihood of embedding before pupation are controlled by different genetic loci. The behaviors exhibited by wandering D. melanogaster larvae prior to pupation offer a model system for studying the evolution of animal building behaviors, because the tunneling and embedding phenotypes are simple, facultative and highly variable.
Abstract:
The metabolic response of "apparently healthy" obese individuals to acute injury (multiple trauma, head injury, surgical patients, extensive burns, elective surgery) differs little, if at all, from that of non-obese individuals. However, the medical complications that follow injury (respiratory and cardiac insufficiency, bronchopneumonia, wound infections, thrombophlebitis and embolism) are more prevalent in morbidly obese patients than in individuals of normal body weight. Because of their large energy stores, "apparently healthy" obese individuals have an advantage over lean subjects when exposed to a chronic nutritional insult such as prolonged fasting. The functional factor limiting survival depends primarily on initial body composition and on the magnitude of metabolic (and behavioral) adaptation, in particular the extent to which fat-free mass is preserved relative to fat mass during weight loss. The increased proportion of body fat mobilized during weight loss in obese subjects, compared with their non-obese counterparts, favors prolonged survival: because more body fat is burned, the fraction of endogenous body protein used for energy is smaller. It follows that obese individuals do not reach a level of fat-free mass critical for survival until their energy stores have been very markedly depleted. In contrast, lean subjects lose more fat-free mass during weight loss and, as a result, their resting metabolism falls more rapidly than that of obese subjects. This may offer a clear advantage in terms of energy economy (energy saving through metabolic adaptation), but a major disadvantage in terms of duration of survival.
Abstract:
Posaconazole (POS) is a new antifungal agent for the prevention and therapy of mycoses in immunocompromised patients. Variable POS pharmacokinetics after oral dosing may influence efficacy: a trough threshold of 0.5 µg/ml has recently been proposed. Measurement of POS plasma concentrations by complex chromatographic techniques may thus contribute to optimizing the prevention and management of life-threatening infections. No microbiological analytical method is available. The objective of this study was to develop and validate a new simplified ultra-performance liquid chromatography tandem mass spectrometry (UPLC-MS/MS) method and a sensitive bioassay for quantification of POS over the clinical plasma concentration range. The UPLC-MS/MS equipment consisted of a triple quadrupole mass spectrometer, an electrospray ionization (ESI) source, and a C(18) analytical column. The POS-hypersusceptible Candida albicans mutant (MIC of 0.002 µg/ml) Δcdr1 Δcdr2 Δflu Δmdr1 Δcan, constructed by targeted deletion of multidrug efflux transporter and calcineurin genes, was used for the bioassay. POS was extracted from plasma by protein precipitation with acetonitrile-methanol (75%/25%, vol/vol). Reproducible standard curves were obtained over the ranges 0.014 to 12 µg/ml (UPLC-MS/MS) and 0.028 to 12 µg/ml (bioassay). Intra- and interrun accuracy levels were 106% ± 2% and 103% ± 4% for UPLC-MS/MS and 102% ± 8% and 104% ± 1% for the bioassay, respectively. The intra- and interrun coefficients of variation were 7% ± 4% and 7% ± 3% for UPLC-MS/MS and 5% ± 3% and 4% ± 2% for the bioassay, respectively. An excellent correlation between POS plasma concentrations measured by UPLC-MS/MS and by bioassay was found (concordance, 0.96). In 26 hemato-oncological patients receiving oral POS, 27/69 (39%) trough plasma concentrations were lower than 0.5 µg/ml. The UPLC-MS/MS method and the sensitive bioassay offer alternative tools for accurate and precise quantification of plasma concentrations in patients receiving oral posaconazole.
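A minimal sketch of how the reported accuracy and coefficient-of-variation figures are conventionally computed from replicate quality-control measurements (the replicate values below are invented; this is not the authors' code):

```python
# Assumed QC workflow: accuracy (% of nominal) and CV (%) from replicates
# of a plasma sample spiked at a known POS concentration.
import numpy as np

nominal = 0.5  # spiked concentration, ug/ml (the proposed trough threshold)
replicates = np.array([0.53, 0.52, 0.54, 0.51, 0.52])  # hypothetical values

accuracy = replicates.mean() / nominal * 100           # ~105% of nominal
cv = replicates.std(ddof=1) / replicates.mean() * 100  # ~2% CV
print(f"accuracy {accuracy:.0f}%, CV {cv:.1f}%")
```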
Abstract:
The implementation of new imaging techniques in the daily practice of the radiation oncologist has been a major advance of the last 10 years, making it possible to optimize the therapeutic interval and locoregional control of the disease while limiting side effects. Among these techniques, positron emission tomography (PET) offers the clinician data on tumoral biological mechanisms while benefiting from the morphological images of the computed tomography (CT) scan. Hybrid PET/CT has recently been developed, and numerous studies have aimed at optimizing its use in treatment planning, in the evaluation of treatment response and in assessing prognostic value. The choice of the radiotracer (according to the type of cancer and the biological mechanism studied) and the various methods of tumoral delineation require regular updating to optimize practice. We propose in this article an exhaustive review of the research published (and in press) up to December 2011, as a user guide to PET/CT in all aspects of modern radiotherapy (from diagnosis to follow-up): biopsy guidance, optimization of treatment planning and dosimetry, evaluation of tumor response and prognostic value, and follow-up with early discrimination of recurrence from tumoral necrosis. For didactic purposes, each of these aspects is addressed by primary tumor location and illustrated with representative iconographic examples. The current contribution of PET/CT and its development perspectives are described, to offer the radiation oncologist a clear and up-to-date overview of this expanding domain.
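As an illustration of one family of delineation methods such reviews cover, a fixed percentage-of-SUVmax threshold can be sketched as follows (the 40% cut-off and the data are illustrative assumptions, not a recommendation from the article):

```python
# Sketch of fixed-threshold PET delineation: keep voxels above a set
# fraction of the maximum SUV. Cut-off and data are illustrative.
import numpy as np

def delineate(suv_volume: np.ndarray, fraction: float = 0.4) -> np.ndarray:
    """Boolean tumor mask: voxels with SUV >= fraction * SUVmax."""
    return suv_volume >= fraction * suv_volume.max()

suv = np.random.default_rng(0).gamma(2.0, 1.5, size=(32, 32, 16))  # fake volume
mask = delineate(suv, fraction=0.4)
print(mask.sum(), "voxels retained")
```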
Abstract:
Polyploidization, which is expected to trigger major genomic reorganizations, occurs much less commonly in animals than in plants, possibly because of constraints imposed by sex-determination systems. We investigated the origins and consequences of allopolyploidization in Palearctic green toads (Bufo viridis subgroup) from Central Asia, with three ploidy levels and different modes of genome transmission (sexual versus clonal), to (i) establish a topology for the reticulate phylogeny in a species-rich radiation involving several closely related lineages and (ii) explore processes of genomic reorganization that may follow polyploidization. Sibship analyses based on 30 cross-amplifying microsatellite markers substantiated the maternal origins and revealed the paternal origins and relationships of subgenomes in allopolyploids. Analyses of the synteny of linkage groups identified three markers affected by translocation events, which occurred only within the paternally inherited subgenomes of allopolyploid toads and exclusively affected the linkage group that determines sex in several diploid species of the green toad radiation. Recombination rates did not differ between diploid and polyploid toad species, and were overall much reduced in males, independent of linkage group and ploidy levels. Clonally transmitted subgenomes in allotriploid toads provided support for strong genetic drift, presumably resulting from recombination arrest. The Palearctic green toad radiation seems to offer unique opportunities to investigate the consequences of polyploidization and clonal transmission on the dynamics of genomes in vertebrates.
Abstract:
Automated genome sequencing and annotation, together with large-scale gene expression measurement methods, generate massive amounts of data for model organisms such as human and mouse. Searching for gene-specific or organism-specific information throughout all the different databases has become a very difficult task, and often yields fragmented and unrelated answers. A database that federates and integrates genomic and transcriptomic data can greatly improve search speed as well as the quality of the results, by allowing a direct comparison of expression results obtained by different techniques. The main goal of this project, called the CleanEx database, is thus to provide access to public gene expression data via unique official gene names, and to represent heterogeneous expression data produced by different technologies in a way that facilitates joint analysis and cross-dataset comparisons.
A consistent and up-to-date gene nomenclature is achieved by associating each single gene expression experiment with a permanent target identifier consisting of a physical description of the targeted RNA population or the hybridization reagent used. These targets are then mapped at regular intervals to the growing and evolving catalogues of genes from model organisms. This fully automatic mapping procedure relies partly on external genome information resources such as UniGene and RefSeq. The central part of CleanEx is a weekly built gene index containing cross-references to all public expression data already incorporated into the system. In addition, the expression target database of CleanEx provides gene mapping and quality-control information for various types of experimental resources, such as cDNA clones or Affymetrix probe sets. The Affymetrix mapping files are accessible as text files, for further use in external applications, and as individual entries via the web-based interfaces. The CleanEx web-based query interfaces offer access to individual entries via text-string searches or quantitative expression criteria, as well as cross-dataset analysis tools and cross-chip gene comparison. These tools have proven very efficient in expression data comparison and even, to a certain extent, in the detection of differentially expressed splice variants. The CleanEx flat files and tools are available online at http://www.cleanex.isb-sib.ch/.
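A toy sketch of the kind of target-to-gene mapping described above (all identifiers and mappings are invented for illustration; CleanEx's actual schema is not given in the abstract):

```python
# Hypothetical illustration: permanent target identifiers are periodically
# re-mapped to the current gene catalogue to build the weekly gene index.
targets = {
    "TARGET:AFFY:probe_set_A": {"refseq": "NM_000546"},  # invented mapping
    "TARGET:CDNA_CLONE:B":     {"refseq": "NM_004333"},  # invented mapping
}
gene_catalogue = {  # would be rebuilt from RefSeq/UniGene releases
    "NM_000546": "TP53",
    "NM_004333": "BRAF",
}

gene_index: dict[str, list[str]] = {}
for target_id, info in targets.items():
    symbol = gene_catalogue.get(info["refseq"])  # None would flag a QC problem
    if symbol is not None:
        gene_index.setdefault(symbol, []).append(target_id)

print(gene_index)  # gene symbol -> expression targets
```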
Abstract:
Palinspastic reconstructions offer an ideal framework for geological, geographical, oceanographic and climatological studies. As historians of the Earth, "reconstructers" try to decipher its past. Ever since they have known that continents move, geologists have been trying to retrace their evolution through the ages. If Wegener's view of continental motion was revolutionary at the beginning of the 20th century, we have known since the early 1960s that continents do not drift aimlessly in the oceanic realm but are part of a larger ensemble combining continental and oceanic crust: the tectonic plates. Unfortunately, for historical as well as technical reasons, this idea still does not receive sufficient echo within the reconstruction community. Nevertheless, we are convinced that, by applying specific methods and principles, it is possible to escape the traditional "Wegenerian" approach and at last move towards true plate tectonics. The main aim of this work is to defend this point of view by presenting, with all the necessary details, our tools and methods. Starting from the paleomagnetic and paleogeographic data classically used in reconstruction studies, we developed a new methodology that places the tectonic plates and their kinematics at the heart of the problem.
Using continental assemblies (referred to as "key assemblies") as anchor points distributed over the whole time span of our study (from the Eocene back to the Cambrian), we develop geodynamic scenarios leading from one assembly to the next, from the past towards the present. In between, the lithospheric plates are progressively reconstructed by adding or removing oceanic material (symbolized by synthetic isochrons) to the continents. Except during collisions, plates are moved as single rigid entities. Through the ages, the only evolving elements are the plate boundaries: they are preserved through time, follow a consistent geodynamic evolution and always form an interconnected network in space. This "dynamic plate boundaries" approach integrates multiple factors, including plate buoyancy, spreading rates at ridges, subsidence patterns, stratigraphic and paleobiogeographic data, and major tectonic and magmatic events. It thus offers good control on plate kinematics and provides strong constraints for the model. This multi-source approach requires efficient data organization and management. Before this study, the critical mass of necessary data had become an almost insurmountable obstacle. GIS (Geographic Information Systems) and geodatabases are informatics tools specifically devoted to the storage, management and analysis of spatially referenced data and their attributes. By developing the PaleoDyn database in ArcGIS, we converted this mass of scattered data into valuable geodynamic information, easily accessible for the creation of reconstructions.
At the same time, with specially developed tools, we both facilitated the reconstruction work (task automation) and improved the model, strongly enhancing the kinematic control of plate motions through the creation of plate velocity models. On the basis of the 340 newly defined terranes, we developed a set of 35 reconstructions, each associated with its own velocity model. With this unique dataset we can now tackle major issues of modern geology, such as the study of sea-level variations and climate change. We began by addressing another major (and still unresolved) problem of modern tectonics: the mechanisms driving plate motions. We observed that, throughout the Earth's history, the rotation poles of the plates (describing plate motions across the Earth's surface) tend to be distributed along a band running from the northern Pacific through northern South America, the central Atlantic, northern Africa and central Asia up to Japan. Essentially, this distribution means that plates tend to escape this median plane. Barring an unidentified methodological bias, we interpret this phenomenon as reflecting a secular influence of the Moon on plate motions. The oceanic domain is the cornerstone of our model, and we took particular care to reconstruct it in great detail. In this model, the oceanic crust is preserved from one reconstruction to the next; crustal material is symbolized by synthetic isochrons of known age. We also reconstructed the margins (active or passive), the mid-ocean ridges and the intra-oceanic subduction zones. Using this very detailed oceanic dataset, we were able to develop unique 3-D bathymetric models offering far better precision than previous ones.
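The elementary kinematic operation underlying such reconstructions, rotating a point on the sphere about an Euler pole, can be sketched as follows (pole, angle and point are arbitrary illustrative values, not taken from the model):

```python
# Sketch: finite rotation of a surface point about an Euler pole,
# via Rodrigues' rotation formula on unit vectors.
import numpy as np

def to_xyz(lat_deg, lon_deg):
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def rotate(point, pole, angle_deg):
    """Rotate unit vector `point` about unit vector `pole` by angle_deg."""
    a = np.radians(angle_deg)
    return (point * np.cos(a)
            + np.cross(pole, point) * np.sin(a)
            + pole * np.dot(pole, point) * (1 - np.cos(a)))

p = rotate(to_xyz(10.0, -30.0), pole=to_xyz(62.0, -40.0), angle_deg=15.0)
lat, lon = np.degrees(np.arcsin(p[2])), np.degrees(np.arctan2(p[1], p[0]))
print(f"reconstructed position: lat {lat:.1f}, lon {lon:.1f}")
```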
Abstract:
PURPOSE: The aim of this study was to compare multidetector CT (MDCT), MRI, and FDG PET/CT for the detection of peritoneal carcinomatosis (PC) in ovarian cancer. PATIENTS AND METHODS: Fifteen women with ovarian cancer and suspected PC underwent MDCT, MRI, and FDG PET/CT shortly before surgery. Nine abdominopelvic regions were defined according to the peritoneal cancer index. We applied lesion-size scores on MDCT and MRI and measured FDG PET/CT standardized uptake. MDCT, MRI, and PET/CT were read blindly before joint review and comparison with histopathology. Receiver operating characteristic (ROC) analysis was performed. RESULTS: Ten women had PC (67%). Altogether, 135 abdominopelvic sites were compared. MDCT, MRI, and FDG PET/CT had sensitivities of 96%, 98%, and 95% and specificities of 92%, 84%, and 96%, respectively. The corresponding ROC areas were 0.94, 0.90, and 0.96, respectively, without any significant differences between them (P = 0.12). FDG PET/CT detected supradiaphragmatic disease in 3 women (20%) that was not seen by MDCT or MRI. CONCLUSIONS: Although MRI had the highest sensitivity and FDG PET/CT the highest specificity, no significant differences were found between the 3 techniques. Thus MDCT, as the fastest, most economical, and most widely available modality, is the examination of choice if a stand-alone technique is required. If inconclusive, PET/CT or MRI may offer additional insights. Whole-body FDG PET/CT may be more accurate for supradiaphragmatic metastatic extension.
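For readers less familiar with the per-region metrics, a hedged worked example follows; the counts are invented to be consistent with 135 sites and the reported MDCT figures, since the paper's actual contingency tables are not given in the abstract:

```python
# Worked illustration of per-site sensitivity/specificity against
# histopathology. Counts are invented (82 + 3 + 46 + 4 = 135 sites).
def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec(tp=82, fn=3, tn=46, fp=4)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # 96%, 92%
```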
Abstract:
Schizophrenia is a neurodevelopmental disorder reflecting a convergence of genetic risk and early-life stress. The slow progression to the first psychotic episode represents both a window of vulnerability and an opportunity for therapeutic intervention. Here, we consider recent neurobiological insight into the cellular and molecular components of developmental critical periods and their vulnerability to redox dysregulation. In particular, the consistent loss of parvalbumin-positive interneuron (PVI) function, of their surrounding perineuronal nets (PNNs) and of myelination in patient brains is consistent with a delayed or extended period of circuit instability. This linkage to critical-period triggers (PVI) and brakes (PNN, myelin) implicates mistimed trajectories of brain development in mental illness. Strategically introduced antioxidant treatment, or later reinforcement of molecular brakes, may then offer a novel form of prophylactic psychiatry.
Abstract:
Lactate has been shown to offer neuroprotection in several pathologic conditions. This beneficial effect has been attributed to its use as an alternative energy substrate. However, the recent description of the expression of the HCA1 receptor for lactate in the central nervous system calls for a reassessment of the mechanism by which lactate exerts its neuroprotective effects. Here, we show that HCA1 receptor expression is enhanced in the ischemic cortex 24 hours after reperfusion in a middle cerebral artery occlusion stroke model. Interestingly, intravenous injection of L-lactate at reperfusion led to further enhancement of HCA1 receptor expression in the cortex and striatum. Using an in vitro oxygen-glucose deprivation model, we show that the HCA1 receptor agonist 3,5-dihydroxybenzoic acid reduces cell death. We also observed that D-lactate, a reputedly non-metabolizable substrate but a partial HCA1 receptor agonist, also provided neuroprotection in both the in vitro and in vivo ischemia models. Quite unexpectedly, we show D-lactate to be partly extracted and oxidized by the rodent brain. Finally, pyruvate offered neuroprotection in vitro, whereas acetate was ineffective. Our data suggest that L- and D-lactate offer neuroprotection in ischemia most likely by acting both as an HCA1 receptor agonist for non-astrocytic (most likely neuronal) cells and as an energy substrate.
Abstract:
This paper contains a joint ESHG/ASHG position document with recommendations regarding responsible innovation in prenatal screening with non-invasive prenatal testing (NIPT). By virtue of its greater accuracy and safety with respect to prenatal screening for common autosomal aneuploidies, NIPT has the potential to help the practice better achieve its aim of facilitating autonomous reproductive choices, provided that balanced pretest information and non-directive counseling are available as part of the screening offer. Depending on the health-care setting, different scenarios for NIPT-based screening for common autosomal aneuploidies are possible. The trade-offs involved in these scenarios should be assessed in light of the aim of screening, the balance of benefits and burdens for pregnant women and their partners, and considerations of cost-effectiveness and justice. With improving screening technologies and decreasing costs of sequencing and analysis, it will become possible in the near future to significantly expand the scope of prenatal screening beyond common autosomal aneuploidies. Commercial providers have already begun expanding their tests to include sex-chromosomal abnormalities and microdeletions. However, multiple false positives may undermine the main achievement of NIPT in the context of prenatal screening: the significant reduction of the invasive testing rate. This document argues for a cautious expansion of the scope of prenatal screening to serious congenital and childhood disorders, only following sound validation studies and a comprehensive evaluation of all relevant aspects. A further core message of this document is that, in countries where prenatal screening is offered as a public health programme, governments and public health authorities should take an active role in ensuring the responsible innovation of prenatal screening on the basis of ethical principles. Crucial elements are: the quality of the screening process as a whole (including non-laboratory aspects such as information and counseling); education of professionals; systematic evaluation of all aspects of prenatal screening; development of better evaluation tools in light of the aim of the practice; accountability to all stakeholders, including children born from screened pregnancies and persons living with the conditions targeted by prenatal screening; and promotion of equity of access.
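The false-positive concern can be made concrete with Bayes' rule: for rare conditions, even a highly specific test has a low positive predictive value, so most positives would need invasive confirmation. The parameters below are illustrative assumptions, not figures from the document:

```python
# Illustrative PPV calculation: sens*prev / (sens*prev + (1-spec)*(1-prev)).
def ppv(sens, spec, prev):
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

# a common aneuploidy vs. a much rarer microdeletion (assumed parameters)
print(f"prev 1/500:    PPV {ppv(0.99, 0.999, 1/500):.0%}")    # ~66%
print(f"prev 1/10000:  PPV {ppv(0.99, 0.999, 1/10000):.1%}")  # ~9%
```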
Abstract:
Regulation has in many cases been delegated to independent agencies, raising the question of how the democratic accountability of these agencies is ensured. There are few empirical approaches to agency accountability. We offer such an approach, resting on three propositions. First, we scrutinize agency accountability both de jure (accountability is ensured by the formal rights of accountability 'fora' to receive information and impose consequences) and de facto (the capability of fora to use these rights depends on resources and decision costs that affect the credibility of their sanctioning capacity). Second, accountability must be evaluated separately at the political, operational and managerial levels. Third, at each level accountability is enacted by a system of several (partially) interdependent fora, which together form an accountability regime. The proposed framework is applied to the case of the German Bundesnetzagentur's accountability regime, demonstrating its suitability for empirical purposes. Regulatory agencies are often considered independent, yet accountable. This article provides a realistic framework for the study of the accountability 'regimes' in which they are embedded. It emphasizes the need to identify the various actors (accountability fora) to which agencies are formally accountable (parliamentary committees, auditing bodies, courts, and so on) and to consider possible relationships between them. It argues that formal accountability 'on paper', as defined in official documents, does not fully account for de facto accountability, which depends on the resources possessed by the fora (mainly information-processing and decision-making capacities) and the credibility of their sanctioning capacities. The article applies this framework to the German Bundesnetzagentur.
Abstract:
OBJECTIVES: To determine the inter-session and intra-/inter-individual variations of the attenuation of aortic blood and myocardium with MDCT in the context of calcium scoring, and to evaluate whether these variations depend on patients' characteristics. METHODS: Fifty-four volunteers were evaluated with non-enhanced calcium-scoring CT. We measured the attenuation (inter-individual variation) and standard deviation (SD, intra-individual variation) of the blood in the ascending aorta and of the myocardium of the left ventricle. Every volunteer was examined twice to study the inter-session variation. The fat-pad thickness at the sternum and the noise (SD of air) were also measured. These values were correlated with the measured aortic/ventricular attenuations and their SDs (Pearson). The historically fixed thresholds (90 and 130 HU) were tested against different models based on the attenuation of blood/ventricle. RESULTS: The mean attenuation was 46 HU (range, 17-84 HU) with a mean SD of 23 HU for the blood, and 39 HU (10-82 HU) with a mean SD of 18 HU for the myocardium. The attenuation and SD of the blood were significantly higher than those of the myocardium (p < 0.01). The inter-session variation was not significant. There was a poor correlation between the SD of aortic blood/ventricle and fat thickness/noise. Based on the existing models, the 90 HU threshold offers a confidence interval of approximately 95%, and the 130 HU threshold of more than 99%. CONCLUSIONS: The historical thresholds offer high confidence intervals for the exclusion of aortic blood/myocardium and thereby for the detection of calcifications. Nevertheless, considering the large variations of blood/myocardium CT values and the influence of patients' characteristics, a better approach might be an adaptive threshold.
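A minimal sketch of the adaptive-threshold idea suggested in the conclusion, assuming roughly Gaussian blood-pool attenuation (the z-values are our illustrative choice; note that mean + ~2 SD with the reported 46 ± 23 HU lands near the historical 90 HU threshold):

```python
# Sketch: per-patient threshold from measured blood-pool statistics,
# instead of the fixed 90/130 HU cut-offs.
def adaptive_threshold(mean_hu: float, sd_hu: float, z: float = 1.96) -> float:
    """Upper bound of blood/myocardium attenuation at normal quantile z."""
    return mean_hu + z * sd_hu

print(adaptive_threshold(46, 23))         # ~91 HU, close to the 90 HU threshold
print(adaptive_threshold(46, 23, z=3.6))  # ~129 HU, close to the 130 HU threshold
```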