923 results for complex place-based initiatives


Relevance: 30.00%

Abstract:

OBJECTIVES: This study aimed to characterize myocardial infarction after percutaneous coronary intervention (PCI) based on cardiac marker elevation, as recommended by the new universal definition, and on the detection of late gadolinium enhancement (LGE) by cardiovascular magnetic resonance (CMR). We also assessed whether baseline inflammatory biomarkers are higher in patients who develop myocardial injury. BACKGROUND: Cardiovascular magnetic resonance accurately assesses infarct size. Baseline C-reactive protein (CRP) and neopterin predict prognosis after stent implantation. METHODS: Consecutive patients with baseline troponin I (Tn-I) within normal limits and no LGE in the target vessel underwent baseline and post-PCI CMR. Tn-I was measured up to 24 h after PCI. Serum high-sensitivity CRP and neopterin were assessed before coronary angiography. RESULTS: Of 45 patients (median age 64 years; interquartile range: 53 to 72), 33% developed LGE, with an infarct size of 0.83 g (interquartile range: 0.32 to 1.30 g). Tn-I elevation above the 99th percentile upper reference limit (i.e., myocardial necrosis; median Tn-I: 0.51 μg/l, interquartile range: 0.16 to 1.23) and Tn-I >3× the upper reference limit (i.e., type 4a myocardial infarction [MI]) occurred in 58% and 47% of patients, respectively. LGE was undetectable in 42% and 43% of patients with periprocedural myocardial necrosis and type 4a MI, respectively. Agreement between LGE and type 4a MI was moderate (kappa = 0.45). CRP and neopterin levels did not differ significantly between patients with and without myocardial injury, whether detected by CMR or according to the new definition (p = NS). CONCLUSIONS: This study shows a lack of substantial agreement between the new universal definition and CMR for the diagnosis of small-size periprocedural myocardial damage after complex PCI. Baseline levels of CRP and neopterin were not predictive of the development of periprocedural myocardial damage.
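The moderate agreement reported above (kappa = 0.45) can be reproduced from the percentages in the abstract (15/45 patients LGE-positive, 21/45 with type 4a MI, 9 of the latter LGE-negative). A minimal sketch in Python; the 2x2 counts below are reconstructed from those figures, not taken from the paper's tables:

    # Cohen's kappa for agreement between CMR-detected LGE and type 4a MI.
    def cohens_kappa(a, b, c, d):
        """a: LGE+/MI+, b: LGE+/MI-, c: LGE-/MI+, d: LGE-/MI-."""
        n = a + b + c + d
        p_obs = (a + d) / n                              # observed agreement
        p_lge, p_mi = (a + b) / n, (a + c) / n           # marginal positive rates
        p_exp = p_lge * p_mi + (1 - p_lge) * (1 - p_mi)  # chance agreement
        return (p_obs - p_exp) / (1 - p_exp)

    # counts reconstructed from the abstract: 12 + 3 + 9 + 21 = 45 patients
    print(round(cohens_kappa(12, 3, 9, 21), 2))  # -> 0.45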

Relevance: 30.00%

Abstract:

The concept of energy gap(s) is useful for understanding the consequences of a small daily, weekly, or monthly positive energy balance and the inconspicuous weight gain that ultimately leads to overweight and obesity. The energy gap is a dynamic concept: an initial positive energy gap, incurred via an increase in energy intake (or a decrease in physical activity), is not constant, may fade out with time if the initial conditions are maintained, and depends on the 'efficiency' with which the energy imbalance is readjusted over time. The metabolic response to an energy imbalance gap and the magnitude of the energy gap(s) can be estimated by at least two methods: i) assessment by longitudinal overfeeding studies, imposing (by design) an initial positive energy imbalance gap; ii) retrospective assessment based on epidemiological surveys, whereby the accumulated endogenous energy storage per unit of time is calculated from the change in body weight and body composition. To illustrate the difficulty of accurately assessing an energy gap, we have used, as an example, a recent epidemiological study which tracked changes in total energy intake (estimated from gross food availability) and body weight over three decades in the US, combined with prediction of total energy expenditure from body weight using doubly labelled water data. At the population level, the study attributed the energy gap entirely to increased food intake. Based on an estimate of the change in energy intake judged to be more reliable (i.e., from the same study population), together with calculations of simple energetic indices, our analysis suggests that conclusions about the fundamental causes of obesity development in a population (excess intake vs. low physical activity, or both) are clouded by a high level of uncertainty.
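A back-of-the-envelope version of the retrospective method (ii): converting a change in body composition into an average daily energy gap. The energy densities are standard approximations and the weight changes are invented for illustration:

    # Accumulated endogenous energy storage per day from body-composition change.
    FAT_KCAL_PER_KG = 9400.0    # approx. energy density of fat mass gained
    LEAN_KCAL_PER_KG = 1800.0   # approx. energy density of lean mass gained

    def energy_gap_kcal_per_day(delta_fat_kg, delta_lean_kg, years):
        stored = delta_fat_kg * FAT_KCAL_PER_KG + delta_lean_kg * LEAN_KCAL_PER_KG
        return stored / (years * 365.25)

    # e.g. gaining 7.5 kg fat and 2.5 kg lean mass over 30 years
    print(round(energy_gap_kcal_per_day(7.5, 2.5, 30), 1))  # -> ~6.8 kcal/day

Even a 10 kg weight gain over three decades corresponds to a persistent imbalance of only a few kcal/day, far below the measurement error of intake or expenditure surveys, which is why attributing the gap to intake versus activity is so uncertain.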

Relevance: 30.00%

Abstract:

The objective of this work was to assess the effects of conventional tillage and of different direct seeding mulch-based cropping systems (DMC) on soil nematofauna characteristics. The long-term field experiment was carried out in the highlands of Madagascar on an andic Dystrustept soil. Soil samples were taken once a year during three successive years (14 to 16 years after installation of the treatments) from the 0-5-cm soil layer of a conventional tillage system and of three kinds of DMC: direct seeding on a dead mulch of soybean-maize rotation residues; direct seeding of a maize-maize rotation on a living mulch of silverleaf (Desmodium uncinatum); and direct seeding of a bean (Phaseolus vulgaris)-soybean rotation on a living mulch of kikuyu grass (Pennisetum clandestinum). The samples were compared with samples from natural fallows. The soil nematofauna, characterized by the abundance of different trophic groups and by indices (MI, maturity index; EI and SI, enrichment and structure indices), allowed discrimination of the different cropping systems. The DMC treatments had a more complex soil food web than the tillage treatment: SI and MI were significantly greater in DMC systems. Moreover, DMC with dead mulch had a lower density of free-living nematodes than DMC with living mulch, suggesting lower microbial activity.
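The maturity index used above is, in Bongers' formulation, an abundance-weighted mean of colonizer-persister (c-p) scores of the free-living nematode taxa; higher values indicate a more structured, less disturbed food web. A minimal sketch with invented counts:

    # Nematode maturity index (MI): weighted mean c-p score of free-living taxa.
    def maturity_index(counts_by_cp):
        """counts_by_cp maps c-p score (1..5) -> abundance of free-living nematodes."""
        total = sum(counts_by_cp.values())
        return sum(cp * n for cp, n in counts_by_cp.items()) / total

    tilled = {1: 120, 2: 260, 3: 40, 4: 10, 5: 0}   # enrichment-dominated community
    dmc    = {1: 40, 2: 180, 3: 90, 4: 60, 5: 20}   # more structured community
    print(round(maturity_index(tilled), 2))  # -> 1.86, lower MI under tillage
    print(round(maturity_index(dmc), 2))     # -> 2.59, higher MI under DMC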

Relevance: 30.00%

Abstract:

We uncover the global organization of clustering in real complex networks. To this end, we ask whether triangles in real networks organize as in maximally random graphs with given degree and clustering distributions, or as in maximally ordered graph models where triangles are forced into modules. The answer comes by way of exploring m-core landscapes, where the m-core is defined, akin to the k-core, as the maximal subgraph whose edges all participate in at least m triangles. This property defines a set of nested subgraphs that, in contrast to k-cores, is able to distinguish between hierarchical and modular architectures. We find that the clustering organization in real networks is neither completely random nor ordered, although, surprisingly, it is more random than modular. This supports the idea that the structure of real networks may in fact be the outcome of self-organized processes based on local optimization rules, rather than global optimization principles.
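A minimal sketch of the m-core construction described above, assuming the straightforward peeling algorithm (analogous to k-core decomposition): repeatedly delete every edge that sits in fewer than m triangles until none remain.

    import networkx as nx

    def m_core(G, m):
        """Maximal subgraph in which every edge participates in >= m triangles."""
        H = G.copy()
        while True:
            # an edge (u, v) lies in as many triangles as u and v share neighbors
            weak = [(u, v) for u, v in H.edges()
                    if len(set(H[u]) & set(H[v])) < m]
            if not weak:
                return H
            H.remove_edges_from(weak)
            H.remove_nodes_from([n for n in list(H) if H.degree(n) == 0])

    G = nx.karate_club_graph()
    for m in range(4):
        print(m, m_core(G, m).number_of_edges())  # nested: edge counts shrink with m

Each deletion can drop the triangle count of surviving edges, hence the fixed-point loop; m = 0 returns the original graph, and the resulting subgraphs are nested by construction.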

Relevance: 30.00%

Abstract:

A straightforward methodology for the synthesis of conjugates between a cytotoxic organometallic ruthenium(II) complex and amino- and guanidinoglycosides, as potential RNA-targeted anticancer compounds, is described. Under microwave irradiation, the imidazole ligand incorporated on the aminoglycoside moiety (neamine or neomycin) was found to replace one triphenylphosphine ligand from the ruthenium precursor [(η6-p-cym)RuCl(PPh3)2]+, allowing the assembly of the target conjugates. The guanidinylated analogue was easily prepared from the neomycin-ruthenium conjugate by reaction with N,N′-di-Boc-N″-triflylguanidine, a powerful guanidinylating reagent that was compatible with the integrity of the metal complex. All conjugates were purified by semipreparative high-performance liquid chromatography (HPLC) and characterized by electrospray ionization (ESI) and matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry (MS) and NMR spectroscopy. The cytotoxicity of the compounds was tested in MCF-7 (breast) and DU-145 (prostate) human cancer cells, as well as in the normal HEK293 (human embryonic kidney) cell line, revealing a dependence on the nature of the glycoside moiety and the type of cell (cancer or healthy). Indeed, the neomycin-ruthenium conjugate (2) displayed moderate antiproliferative activity in both cancer cell lines (IC50 ≈ 80 μM), whereas the neamine conjugate (4) was inactive (IC50 ≈ 200 μM). However, the guanidinylated analogue of the neomycin-ruthenium conjugate (3) required much lower concentrations than the parent conjugate for equal effect (IC50 = 7.17 μM in DU-145 and IC50 = 11.33 μM in MCF-7). Although the same ranking in antiproliferative activity was found in the nontumorigenic cell line (3 > 2 > 4), IC50 values indicate that the aminoglycoside-containing conjugates are about 2-fold more cytotoxic in normal cells (e.g., IC50 = 49.4 μM for 2) than in cancer cells, whereas the opposite tendency was found with the guanidinylated conjugate, since its cytotoxicity in the normal cell line (IC50 = 12.75 μM for 3) was similar to or even lower than that found in the MCF-7 and DU-145 cancer cell lines, respectively. Cell uptake studies performed by ICP-MS with conjugates 2 and 3 revealed that guanidinylation of the neomycin moiety had a positive effect on accumulation (about 3-fold higher in DU-145 and 4-fold higher in HEK293), which correlates well with the higher antiproliferative activity of 3. Interestingly, despite its slightly higher accumulation in the normal cell line than in the cancer cell line (about 1.4-fold), the guanidinoneomycin-ruthenium conjugate (3) was more cytotoxic to cancer cells (about 1.8-fold), whereas the opposite tendency applied to the neomycin-ruthenium conjugate (2). Such differences in cytotoxic activity and cellular accumulation between cancer and normal cells open the way to the creation of more selective, less toxic anticancer metallodrugs by conjugating cytotoxic metal-based complexes such as ruthenium(II) arene derivatives to guanidinoglycosides.
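The fold-differences quoted above follow directly from the IC50 values; a quick check (values in μM, taken from the abstract, with "≈80" treated as exactly 80):

    # Selectivity of conjugates 2 and 3: ratio of IC50 in normal vs. cancer cells.
    ic50 = {
        "2 (neomycin-Ru)":          {"HEK293": 49.4,  "DU-145": 80.0},
        "3 (guanidinoneomycin-Ru)": {"HEK293": 12.75, "DU-145": 7.17},
    }
    for name, v in ic50.items():
        ratio = v["HEK293"] / v["DU-145"]   # >1: more cytotoxic to cancer cells
        print(f"{name}: IC50(normal)/IC50(cancer) = {ratio:.1f}")
    # -> ~0.6 for conjugate 2 (more toxic to normal cells)
    # -> ~1.8 for conjugate 3 (the ~1.8-fold cancer selectivity quoted above)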

Relevance: 30.00%

Abstract:

EXECUTIVE SUMMARY: Evaluating Information Security posture within an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately this is ineffective, because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One: Information Security Evaluation Issues consists of four chapters. Chapter 1 introduces the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. We then introduce the baseline attributes of our model and set out the results expected from evaluations performed according to it. Chapter 2 focuses on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in a holistic, baseline Information Security Program are defined. Based on this, the most common roots-of-trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk Management and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework, which is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. The operation of the model is then discussed. Assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two: Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each Information Security dimension is discussed in a separate chapter.
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation:
- identification of the key elements within the dimension;
- identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension;
- identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing the security issues identified for that dimension.
The second phase concerns the evaluation of each Information Security dimension by:
- the implementation of the evaluation model, based on the elements identified for each dimension within the first phase, by identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection;
- the maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by every organization in order to define its own security requirements.
Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach. Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security.
RÉSUMÉ: General context of the thesis. The evaluation of security in general, and of Information Security in particular, has become for organizations not only a crucial mission to carry out, but also an increasingly complex one. At present, this evaluation relies mainly on methodologies, best practices, norms or standards that address separately the different aspects making up Information Security. We believe that this way of evaluating security is inefficient, because it does not take into account the interaction between the different dimensions and components of security, even though it has long been accepted that the overall security level of an organization is always that of the weakest link in the security chain.
We have identified the need for a global, integrated, systemic and multidimensional approach to Information Security evaluation. Indeed, and this is the starting point of our thesis, we show that only a global consideration of security can meet the requirements of optimal security as well as the specific protection needs of an organization. Our thesis therefore proposes a new evaluation paradigm to satisfy the effectiveness and efficiency needs of a given organization. We propose a model that aims to evaluate all dimensions of security holistically, in order to minimize the probability that a potential threat could exploit vulnerabilities and cause direct or indirect damage. This model is based on a formalized structure that takes into account all the elements of a security system or program. We thus propose a methodological evaluation framework that considers Information Security from a global perspective. Structure of the thesis and topics covered: our document is structured in three parts. The first, entitled "The problem of Information Security evaluation", consists of four chapters. Chapter 1 introduces the object of the research and the basic concepts of the proposed evaluation model. The traditional way of evaluating security is critically analysed in order to identify the principal, invariant elements to be taken into account in our holistic approach. The basic elements of our evaluation model and its expected operation are then presented, so as to outline the results expected from this model. Chapter 2 focuses on the definition of the notion of Information Security. It is not a redefinition of the notion of security, but a putting into perspective of the dimensions, criteria and indicators to be used as a reference base in order to determine the object of the evaluation that will be used throughout our work. The concepts inherent in the holistic character of security, as well as the constituent elements of a security baseline, are defined accordingly. This makes it possible to identify what we have called the "roots of trust". Chapter 3 presents and analyses the difference and the relationships that exist between the Risk Management and Security Management processes, in order to identify the constituent elements of the protection framework to be included in our evaluation model. Chapter 4 is devoted to the presentation of our evaluation model, the Information Security Assurance Assessment Model (ISAAM), and the way it meets the evaluation requirements presented earlier. In this chapter, the underlying concepts relating to the notions of assurance and trust are analysed. Based on these two concepts, the structure of the evaluation model is developed in order to obtain a platform that offers a certain level of assurance, relying on three evaluation attributes, namely: "trust structure", "process quality", and "achievement of requirements and objectives".
The issues related to each of these evaluation attributes are analysed on the basis of the state of the art in research and the literature, of the various existing methods, and of the most common norms and standards in the security field. On this basis, three different evaluation levels are constructed, namely: the assurance level, the quality level and the maturity level, which together constitute the basis for evaluating the overall security state of an organization. The second part, "Application of the Information Security Assurance Assessment Model by security domain", also consists of four chapters. In this part, the evaluation model already constructed and analysed is placed in a specific context according to the four predefined security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each of these dimensions, and its specific evaluation, is the subject of a separate chapter. For each dimension, a two-phase evaluation is constructed as follows. The first phase concerns the identification of the elements that constitute the basis of the evaluation:
- identification of the key elements of the evaluation;
- identification of the "Focus Areas" of each dimension, which represent the issues found within that dimension;
- identification of the "Specific Factors" for each Focus Area, which represent the security and control measures that help resolve or reduce the impact of the risks.
The second phase concerns the evaluation of each dimension presented above. It consists, on the one hand, of the implementation of the general evaluation model for the dimension concerned, by:
- relying on the elements specified in the first phase;
- identifying the specific security tasks, processes and procedures that should have been carried out to reach the desired level of protection.
On the other hand, the evaluation of each dimension is completed by the proposal of a maturity model specific to that dimension, to be considered as a reference base for the overall security level. For each dimension we propose a generic maturity model that can be used by any organization in order to specify its own security requirements. This constitutes an innovation in the field of evaluation, which we justify for each dimension and whose added value we systematically highlight. The third part of our document concerns the overall validation of our proposal and contains, by way of conclusion, a critical perspective on our work and final remarks. This last part is completed by a bibliography and annexes. Our security evaluation model integrates and builds on numerous sources of expertise, such as best practices, norms, standards, methods, and the expertise of scientific research in the field. Our constructive proposal addresses a real and still unsolved problem that all organizations face, regardless of their size and profile.
This would allow them to specify their particular requirements in terms of the security level to be met, and to instantiate an evaluation process specific to their needs, so that they can ensure that their Information Security is managed appropriately, thus providing a certain level of confidence in the degree of protection achieved. We have integrated into our model the best of the know-how, experience and expertise currently available at the international level, with the aim of providing an evaluation model that is simple, generic and applicable to a large number of public or private organizations. The added value of our evaluation model lies precisely in the fact that it is sufficiently generic and easy to implement while answering the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient and dynamic evaluation tool derived from a coherent evaluation approach. Consequently, our evaluation system can be implemented internally by the organization itself, without recourse to additional resources, and likewise gives it the possibility of better governing its Information Security.
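To make the dimension / Focus Area / Specific Factor hierarchy concrete, here is a minimal sketch of how it could be represented in code. The class names, the 0-5 maturity scale and the weakest-link aggregation rule are illustrative assumptions (the latter echoing the thesis's weakest-link premise), not the model as specified in ISAAM:

    from dataclasses import dataclass, field

    @dataclass
    class SpecificFactor:              # a security measure/control for a Focus Area
        name: str
        maturity: int                  # assumed scale: 0 (absent) .. 5 (optimized)

    @dataclass
    class FocusArea:                   # a security issue within a dimension
        name: str
        factors: list[SpecificFactor] = field(default_factory=list)
        def maturity(self) -> int:     # weakest link among its controls (assumption)
            return min((f.maturity for f in self.factors), default=0)

    @dataclass
    class Dimension:                   # Organizational, Functional, Human, Legal
        name: str
        areas: list[FocusArea] = field(default_factory=list)
        def maturity(self) -> int:
            return min((a.maturity() for a in self.areas), default=0)

    org = Dimension("Organizational", [
        FocusArea("Security governance", [SpecificFactor("Policy approved", 3),
                                          SpecificFactor("Roles assigned", 2)]),
    ])
    print(org.name, "maturity:", org.maturity())  # -> 2 (weakest control dominates)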

Relevance: 30.00%

Abstract:

The human brain is the most complex structure known. With its vast numbers of cells, connections and pathways, it is the source of every thought in the world. It consumes 25% of our oxygen and suffers very quickly when that supply is disrupted. An acute event, like a stroke, results in rapid dysfunction referable to the affected area. After a few minutes without oxygen, neuronal cells die and subsequently degenerate. Changes in the brain's incoming blood flow alter its anatomy and physiology. All stroke events leave behind a brain tissue lesion. To react rapidly and improve the prediction of outcome in stroke patients, accurate lesion detection and reliable lesion-based function correlation would be very helpful. Using neuroimaging and clinical data from cerebrally injured patients, this study aims to investigate correlations of structural lesion locations with sensory functions.

Relevance: 30.00%

Abstract:

In the past decades, drug discovery practice has escaped from the complexity of the formerly used phenotypic screening in animals to focus on assessing drug effects on isolated protein targets. The aim has been to find drugs that exclusively and potently hit one selected target, thought to be critical for a given disease, while not affecting any other target, so as to avoid side-effects. However, reality does not conform to these expectations; on the contrary, this approach has coincided with increased attrition figures in late-stage clinical trials, precisely due to lack of efficacy and safety. In this context, a network biology perspective of human disease and treatment has burst into the drug discovery scenario, bringing it back to the consideration of the complexity of living organisms and particularly of the (patho)physiological environment in which protein targets are (mal)functioning and in which drugs have to exert their restoring action. Under this perspective, it has been found that there is usually not one but several disease-causing genes and, therefore, not one but several relevant protein targets to be hit, which do not work in isolation but in a highly interconnected manner, and that most known drugs are inherently promiscuous. In this light, the rationale behind the currently prevailing single-target-based drug discovery approach might even seem a Utopia, while, conversely, the notion that the complexity of human disease must be tackled with complex polypharmacological therapeutic interventions constitutes a difficult-to-refuse argument that is spurring the development of multitarget therapies.

Relevance: 30.00%

Abstract:

The discovery in 1988 of endothelin, the most potent human endogenous vasoconstrictor, opened the race towards a new weapon against arterial hypertension. The development of endothelin receptor antagonists (ERAs) and the demonstration of their efficacy in preclinical models initially raised a wave of enthusiasm, which was, however, tempered by their unfavorable side-effect profile. In this article we review the phases of ERA development and the current and future place of these agents as therapeutic tools against arterial hypertension.

Relevance: 30.00%

Abstract:

Recognition by the T-cell receptor (TCR) of immunogenic peptides presented by class I major histocompatibility complexes (MHCs) is the determining event in the specific cellular immune response against virus-infected cells or tumor cells. It is of great interest, therefore, to elucidate the molecular principles upon which the selectivity of a TCR is based. These principles can in turn be used to design therapeutic approaches, such as peptide-based immunotherapies of cancer. In this study, free energy simulation methods are used to analyze the binding free energy difference of a particular TCR (A6) for a wild-type peptide (Tax) and a mutant peptide (Tax P6A), both presented in HLA A2. The computed free energy difference is 2.9 kcal/mol, in good agreement with the experimental value. This agreement makes it possible to use the simulation results to understand the origin of the free energy difference, an understanding that was not available from the experimental results. A free energy component analysis permits decomposition of the free energy difference between the binding of the wild-type and mutant peptides into its components. Of particular interest is the fact that better solvation of the mutant peptide when bound to the MHC molecule is an important contribution to the greater affinity of the TCR for the latter. The results permit identification of the residues of the TCR which are important for the selectivity. This provides an understanding of the molecular principles that govern the recognition. The possibility of using free energy simulations in designing peptide derivatives for cancer immunotherapy is briefly discussed.
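Relative binding free energies of this kind are usually obtained from a thermodynamic cycle rather than by simulating binding directly; a sketch of the standard relation (the study's exact protocol may differ):

    \Delta\Delta G_{\mathrm{bind}}
      = \Delta G_{\mathrm{bind}}^{\mathrm{P6A}} - \Delta G_{\mathrm{bind}}^{\mathrm{Tax}}
      = \Delta G_{\mathrm{Tax\to P6A}}^{\mathrm{TCR\ bound}} - \Delta G_{\mathrm{Tax\to P6A}}^{\mathrm{free\ pMHC}}

where the two right-hand terms are the alchemical free energies of mutating the peptide in the TCR-bound and free peptide-MHC states; the value reported above corresponds to \Delta\Delta G_{\mathrm{bind}} \approx 2.9 kcal/mol.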

Relevance: 30.00%

Abstract:

Oculo-auriculo-vertebral spectrum is a complex developmental disorder characterised mainly by anomalies of the ear, hemifacial microsomia, epibulbar dermoids and vertebral anomalies. The aetiology is largely unknown, and the epidemiological data are limited and inconsistent. We present the largest population-based epidemiological study to date, using data provided by the large network of congenital anomalies registries in Europe. The study population included infants diagnosed with oculo-auriculo-vertebral spectrum during the 1990-2009 period from 34 registries active in 16 European countries. Of the 355 infants diagnosed with oculo-auriculo-vertebral spectrum, 95.8% (340/355) were live born, 0.8% (3/355) were fetal deaths, 3.4% (12/355) were terminations of pregnancy for fetal anomaly, and 1.5% (5/340) of live births ended in neonatal death. In 18.9% of cases there was prenatal detection of anomaly/anomalies associated with oculo-auriculo-vertebral spectrum; 69.7% were diagnosed at birth, 3.9% in the first week of life, and 6.1% within the first year of life. Microtia (88.8%), hemifacial microsomia (49.0%) and ear tags (44.4%) were the most frequent anomalies, followed by atresia/stenosis of the external auditory canal (25.1%) and diverse vertebral (24.3%) and eye (24.3%) anomalies. There was a high rate (69.5%) of associated anomalies of other organs/systems, the most common being congenital heart defects, present in 27.8% of patients. The prevalence of oculo-auriculo-vertebral spectrum, defined as microtia/ear anomalies and at least one major characteristic anomaly, was 3.8 per 100,000 births. Twinning, assisted reproductive techniques and maternal pre-pregnancy diabetes were confirmed as risk factors. The high rate of diverse associated anomalies points to the need for early ultrasound screening in all infants born with this disorder.

Relevance: 30.00%

Abstract:

In the circum-Pacific ophiolitic belts, when no other biogenic constituents are found, radiolarians have the potential to provide significant biostratigraphic information. The Santa Rosa Accretionary Complex, which crops out in several half-windows (Carrizal, Sitio Santa Rosa, Bahia Nancite, Playa Naranjo) along the south shores of the Santa Elena Peninsula in northwestern Costa Rica, is one of these little-known ophiolitic mélanges. It contains various oceanic assemblages of alkaline basalt, radiolarite and polymictic breccias. The radiolarian biochronology presented in this work is based mainly on correlation with the biozonations of Carter et al. (2010), Baumgartner et al. (1995b) and O'Dogherty (1994), and indicates an Early Jurassic to early Late Cretaceous (early Pliensbachian to earliest Turonian) age for the sediments associated with oceanic basalts or recovered from blocks in breccias or megabreccias. The 19 illustrated assemblages from the Carrizal tectonic window and Sitio Santa Rosa contain in total 162 species belonging to 65 genera. The nomenclature of tectonic units follows Baumgartner and Denyer (2006). This study brings to light the Early Jurassic age of a succession of radiolarite, previously thought to be of Cretaceous age, intruded by alkaline basalt sills (Unit 3). The presence of large reworked Early Jurassic blocks in a polymictic megabreccia, first reported by De Wever et al. (1985), is confirmed (Unit 4). Therefore, the alkaline basalt associated with the radiolarites of these two units (and perhaps also Units 5 and 8) could be of Jurassic age. In the Carrizal tectonic window, Middle to early Late Jurassic radiolarian chert blocks associated with massive tholeiitic basalts, and Early Cretaceous brick-red ribbon cherts overlying pillow basalts, are interpreted as fragments of a Middle Jurassic oceanic basement accreted to an Early Cretaceous oceanic plate in an intra-oceanic subduction context. In contrast, the knobby radiolarites and black shales of Playa Carrizal are indicative of a shallower middle Cretaceous paleoenvironment. Other remnants of this oceanic basin are found in Units 2, 6 and 7, which document the rapid approach of the depocentre to a subduction trench during the late Early Cretaceous (Albian-Cenomanian) to possibly early Late Cretaceous (Turonian).

Relevance: 30.00%

Abstract:

Diffusion MRI has evolved into an important clinical diagnostic and research tool. Although clinical routine mainly uses diffusion-weighted and tensor imaging approaches, Q-ball imaging and diffusion spectrum imaging techniques have become more widely available. They are frequently used in research-oriented investigations, in particular those aiming at measuring brain network connectivity. In this work, we assess the dependency of connectivity measurements on various diffusion encoding schemes in combination with appropriate data modeling. We process and compare the structural connection matrices computed from several diffusion encoding schemes, including diffusion tensor imaging, q-ball imaging and high angular resolution schemes such as diffusion spectrum imaging, with a publicly available processing pipeline for data reconstruction, tracking and visualization of diffusion MR imaging. The results indicate that the high angular resolution schemes maximize the number of obtained connections when identical processing strategies are applied to the different diffusion schemes. Compared to conventional diffusion tensor imaging, the added connectivity is mainly found for pathways in the 50-100 mm range, corresponding to neighboring association fibers and long-range associative, striatal and commissural fiber pathways. The analysis of the major associative fiber tracts of the brain reveals striking differences between the applied diffusion schemes. More complex data modeling techniques (beyond the tensor model) are recommended 1) if the tracts of interest run through large fiber crossings such as the centrum semiovale, or 2) if non-dominant fiber populations, e.g. the neighboring association fibers, are the subject of investigation. An important caveat is that, since the ground-truth sensitivity and specificity are not known, results arising from different strategies in data reconstruction and/or tracking remain difficult to compare.
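A minimal sketch of the connectivity measurement compared above: building a region-by-region structural connection matrix from tractography streamlines and grouping connections by fiber length. The inputs `streamlines` (lists of xyz points, in mm) and `region_of` (a point-to-parcellation-label lookup) are assumed; this illustrates the idea, not the pipeline used in the study:

    import numpy as np

    def connection_matrix(streamlines, region_of, n_regions):
        C = np.zeros((n_regions, n_regions))   # streamline counts per region pair
        L = np.zeros((n_regions, n_regions))   # summed lengths, for averaging
        for sl in streamlines:
            a, b = region_of(sl[0]), region_of(sl[-1])   # endpoint labels
            if a is None or b is None or a == b:
                continue
            length = np.sum(np.linalg.norm(np.diff(sl, axis=0), axis=1))
            C[a, b] += 1; C[b, a] += 1
            L[a, b] += length; L[b, a] += length
        mean_len = np.divide(L, C, out=np.zeros_like(L), where=C > 0)
        return C, mean_len

    # e.g. count distinct connections whose mean fiber length falls in 50-100 mm:
    # counts, mean_len = connection_matrix(streamlines, region_of, 90)
    # mask = (counts > 0) & (mean_len >= 50) & (mean_len < 100)
    # print(np.count_nonzero(np.triu(mask, 1)))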

Relevance: 30.00%

Abstract:

We have constructed a forward modelling code in Matlab capable of handling several commonly used electrical and electromagnetic methods in a 1D environment. We review the implemented electromagnetic field equations for grounded wires, frequency and transient soundings, and present new solutions for the case of a non-magnetic first layer. The CR1Dmod code evaluates the Hankel transforms occurring in the field equations using either the fast Hankel transform, based on digital filter theory, or a numerical integration scheme applied between the zeros of the Bessel function. A graphical user interface allows easy construction of 1D models and control of the parameters. Modelling results are in agreement with those of other authors, but computation times are longer than for other available codes. Nevertheless, the CR1Dmod routine handles complex resistivities and offers solutions based on the full EM equations as well as the quasi-static approximation. Thus, modelling of effects based on changes in the magnetic permeability and the permittivity is also possible.
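To illustrate the second evaluation strategy mentioned above, here is a small sketch (in Python rather than CR1Dmod's Matlab) that evaluates a Hankel transform by integrating piecewise between consecutive zeros of the Bessel function, checked against a kernel with a known closed form: the integral of exp(-a·λ)·J0(λr) over λ from 0 to infinity equals 1/sqrt(a² + r²).

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import j0, jn_zeros

    def hankel_j0(kernel, r, n_intervals=200):
        """Integrate kernel(lam) * J0(lam * r) over [0, inf), splitting the
        oscillatory integrand at the zeros of J0 (rescaled to the lam axis)."""
        breaks = np.concatenate(([0.0], jn_zeros(0, n_intervals) / r))
        total = 0.0
        for lo, hi in zip(breaks[:-1], breaks[1:]):
            part, _ = quad(lambda lam: kernel(lam) * j0(lam * r), lo, hi)
            total += part  # decaying kernel: plain summation converges here
        return total

    a, r = 2.0, 3.0
    print(hankel_j0(lambda lam: np.exp(-a * lam), r))  # numerical result
    print(1.0 / np.hypot(a, r))                        # analytic: ~0.277350

Production codes typically add convergence acceleration for slowly decaying, oscillatory tails; for the exponentially decaying test kernel used here, plain piecewise summation already matches the analytic value.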

Relevance: 30.00%

Abstract:

Metadherin (MTDH), a recently discovered gene, is overexpressed in more than 40% of breast cancers. Recent studies have revealed that MTDH favors an oncogenic course and chemoresistance. Using a number of breast cancer cell lines and breast tumor samples, we found that the relative expression of MTDH correlated with tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) sensitivity in breast cancer. In this study, we found that knockdown of endogenous MTDH sensitized MDA-MB-231 cells to TRAIL-induced apoptosis both in vitro and in vivo. Conversely, stable overexpression of MTDH in MCF-7 cells enhanced cell survival under TRAIL treatment. Mechanistically, MTDH down-regulated caspase-8, decreased caspase-8 recruitment into the TRAIL death-inducing signaling complex, decreased caspase-3 and poly(ADP-ribose) polymerase-2 processing, increased Bcl-2 expression, and stimulated TRAIL-induced Akt phosphorylation, without altering death receptor status. In MDA-MB-231 breast cancer cells, sensitization to TRAIL upon MTDH down-regulation was inhibited by the caspase inhibitor Z-VAD-fmk (benzyloxycarbonyl-VAD-fluoromethyl ketone), suggesting that MTDH depletion stimulates activation of caspases. In MCF-7 breast cancer cells, resistance to TRAIL upon MTDH overexpression was abrogated by depletion of Bcl-2, suggesting that MTDH-induced Bcl-2 expression contributes to TRAIL resistance. We further confirmed that MTDH may control Bcl-2 expression partly by suppressing miR-16. Collectively, our results point to a protective function of MTDH against TRAIL-induced death, whereby it inhibits the intrinsic apoptosis pathway through miR-16-mediated Bcl-2 up-regulation and the extrinsic apoptosis pathway through caspase-8 down-regulation.