947 results for SYNTAX SYNERGY
Abstract:
Eukaryotic gene expression depends on a complex interplay between the transcriptional apparatus and chromatin structure. We report here a yeast model system for investigating the functional interaction between the human estrogen receptor (hER) and CTF1, a member of the CTF/NFI transcription factor family. We show that a CTF1-fusion protein and the hER transactivate a synthetic promoter in yeast in a synergistic manner. This interaction requires the proline-rich transactivation domain of CTF1. When the natural estrogen-dependent vitellogenin B1 promoter is tested in yeast, CTF1 and CTF1-fusion proteins are unable to activate transcription, and no synergy is observed between hER, which activates the B1 promoter, and these factors. Chromatin structure analysis on this promoter reveals positioned nucleosomes at -430 to -270 (+/-20 bp) and at -270 to -100 (+/-20 bp) relative to the start site of transcription. The positions of the nucleosomes remain unchanged upon hormone-dependent transcriptional activation of the promoter, and the more proximal nucleosome appears to mask the CTF/NFI site located at -101 to -114. We conclude that a functional interaction of hER with the estrogen response element located upstream of a basal promoter occurs in yeast despite the nucleosomal organization of this promoter, whereas the interaction of CTF1 with its target site is apparently precluded by a nucleosome.
Abstract:
PURPOSE: We preoperatively assessed neurovesical function and spinal cord function in children with anorectal malformations. In cases of neurovesical dysfunction we looked for an association with vertebral malformation or myelodysplasia. MATERIALS AND METHODS: We prospectively evaluated 80 children with anorectal malformations via preoperative urodynamics and magnetic resonance imaging of the spine. Urodynamic evaluation assessed bladder compliance and volume, detrusor activity, and vesicosphincteric synergy during voiding. Results were reported according to the Wingspread and Krickenbeck classifications of anorectal malformations. RESULTS: Urodynamic findings were pathological in 14 children (18%). Pathological evaluations did not seem related to type of fistula or level of anorectal malformation. Vertebral anomalies were seen in 34 patients (43%) and myelodysplasia in 16 (20%). Neither vertebral anomaly nor myelodysplasia seemed associated with type of fistula or severity of anorectal malformation. Of the 14 children with pathological urodynamics, 7 had no vertebral anomaly or myelodysplasia. Of the 66 children with normal urodynamics, 40 presented with vertebral or spinal malformation. CONCLUSIONS: Lower urinary tract dysfunction is common in patients with anorectal malformations. A normal spine or spinal cord does not exclude neurovesical dysfunction, and myelodysplasia or vertebral anomaly does not determine lower urinary tract dysfunction. Thus, we recommend preoperative urodynamic assessment of the bladder and magnetic resonance imaging of the spine in children with anorectal malformations.
Abstract:
Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.
The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are now available to users. However, even if these advances open more and more possibilities for the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. Emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, in order to avoid overly complex models that users would not adopt. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed. First, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features; this model automatically provides an accurate classifier together with a ranking of the relevance of the individual features (the spectral bands). The scarcity and unreliability of labeled information are the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine, or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs, so far never considered in remote sensing, is addressed in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
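The active-learning idea sketched in this abstract, building the labeled set iteratively through direct interaction between the user and the machine, can be illustrated with a minimal uncertainty-sampling loop. The sketch below is a generic illustration using scikit-learn, not the specific algorithm developed in the thesis; the function name, parameters and the choice of an SVM with class posteriors are assumptions made for the example.

```python
# Minimal uncertainty-sampling active-learning loop (illustrative sketch only;
# not the algorithm developed in the thesis).
import numpy as np
from sklearn.svm import SVC

def active_learning(X, y, n_initial=20, n_rounds=50, batch_size=5, seed=0):
    """Grow the training set by repeatedly querying the least confident samples.

    X: (n_samples, n_features) pixel features; y: labels, standing in for the
    human photo-interpreter who would answer the queries in a real setting.
    Assumes the initial random sample covers at least two classes.
    """
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X), size=n_initial, replace=False))
    pool = [i for i in range(len(X)) if i not in labeled]

    clf = SVC(kernel="rbf", gamma="scale", probability=True)
    for _ in range(n_rounds):
        clf.fit(X[labeled], y[labeled])
        # Uncertainty = 1 - maximum posterior probability over the classes.
        proba = clf.predict_proba(X[pool])
        uncertainty = 1.0 - proba.max(axis=1)
        # Move the most uncertain pool samples into the labeled set.
        query = np.argsort(uncertainty)[-batch_size:]
        for q in sorted(query, reverse=True):
            labeled.append(pool.pop(q))
    clf.fit(X[labeled], y[labeled])
    return clf, labeled
```

In a remote sensing setting, X would hold the spectral (and possibly contextual) features of the pixels, and each queried label would be provided by the analyst rather than read from y.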
Abstract:
The investigation of traffic accidents is very complex. It requires the involvement of a large number of specialties from very different domains. While some of these domains are already well exploited, others remain incomplete, and gaps can still be observed in practice that it is essential to remedy. This thesis, entitled "The exploitation of traces in traffic accidents", arises from an interdisciplinary reflection between multiple aspects of forensic science. It is primarily research aimed at demonstrating the benefits of a synergy between microtrace evidence and the study of accident dynamics. To give the work a very operational dimension, all of the undertaken initiatives were oriented towards optimising the activity of first responders at the scene. After an introductory part dealing with the research project and the theoretical aspects of accident scene reconstruction, the reader is invited to work through five practical chapters, organised according to the doctrine "from the general to the particular". The first stage of this practical part concerns the morphology of traces: sequences of examinations are proposed to improve the interpretation of contacts between the vehicles and obstacles involved in an accident. The mechanisms of transfer of paint traces are then studied, and a series of laboratory tests is carried out on automobile body parts; various parameters are tested to understand their impact on the fragility of a paint system. A list of treated cases (crash tests and real cases) is then compiled, providing useful information on how a case is handled and confirming the results obtained. This is followed by a collection of traces, drawn from the practical experience acquired, intended to guide the search for and collection of evidence at the scene. Finally, the question of an "accident" database allowing optimal management of the collected traces is addressed.
Abstract:
This dissertation is concerned with the development of algorithmic methods for the unsupervised learning of natural language morphology, using a symbolically transcribed wordlist. It focuses on the case of languages approaching the introflectional type, such as Arabic or Hebrew. The morphology of such languages is traditionally described in terms of discontinuous units: consonantal roots and vocalic patterns. Inferring this kind of structure is a challenging task for current unsupervised learning systems, which generally operate with continuous units. In this study, the problem of learning root-and-pattern morphology is divided into a phonological and a morphological subproblem. The phonological component of the analysis seeks to partition the symbols of a corpus (phonemes, letters) into two subsets that correspond as well as possible to the phonetic classes of consonants and vowels; building on this result, the morphological component attempts to establish the list of roots and patterns in the corpus and to infer the rules that govern their combination. We assess the extent to which this can be done on the basis of two hypotheses: (i) the distinction between consonants and vowels can be learned by observing their tendency to alternate in speech; (ii) roots and patterns can be identified as sequences of the previously discovered consonants and vowels, respectively. The proposed algorithm uses a purely distributional method for partitioning symbols. It then applies analogical principles to identify a preliminary set of reliable roots and patterns and gradually enlarges it. This extension process is guided by an evaluation procedure based on the minimum description length principle, in line with the approach to morphological learning embodied in LINGUISTICA (Goldsmith, 2001). The algorithm is implemented as a computer program named ARABICA; it is evaluated with regard to its ability to account for the system of plural formation in a corpus of Arabic nouns. This thesis shows that complex linguistic structures can be discovered without recourse to a rich set of a priori hypotheses about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and attempts to determine where and why such cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structure is crucial for understanding the strengths and weaknesses of this approach.
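The first, phonological step described above, separating consonants from vowels purely from their tendency to alternate, is the kind of task addressed by Sukhotin's classical distributional algorithm. The sketch below illustrates that idea; it is not necessarily the procedure implemented in ARABICA, and the toy word list is invented.

```python
# Sukhotin-style distributional separation of vowels and consonants
# (a classical baseline, shown only as an illustration of the idea).
from collections import defaultdict

def sukhotin_vowels(words):
    """Return the set of symbols classified as vowels in a list of words."""
    # Symmetric counts of adjacencies between distinct neighbouring symbols.
    adj = defaultdict(lambda: defaultdict(int))
    for w in words:
        for a, b in zip(w, w[1:]):
            if a != b:
                adj[a][b] += 1
                adj[b][a] += 1

    symbols = set(adj)
    sums = {s: sum(adj[s].values()) for s in symbols}
    vowels = set()
    while True:
        # The remaining symbol with the largest positive sum is taken as a vowel.
        remaining = symbols - vowels
        candidate = max(remaining, key=lambda s: sums[s], default=None)
        if candidate is None or sums[candidate] <= 0:
            break
        vowels.add(candidate)
        # Adjacency to a known vowel no longer counts as evidence of vowelhood.
        for s in remaining - {candidate}:
            sums[s] -= 2 * adj[s][candidate]
    return vowels

print(sukhotin_vowels(["kitab", "kutub", "kataba", "maktab"]))  # toy forms
```

On this toy list the loop classifies a, i and u as vowels and the remaining symbols as consonants; roots and patterns could then be read off as the consonant and vowel subsequences of each word, as hypothesis (ii) suggests.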
Abstract:
The human body is composed of a huge number of cells acting together in a concerted manner. The current understanding is that proteins perform most of the activities necessary to keep a cell alive, while the DNA stores the information on how to produce the different proteins encoded in the genome. Regulating gene transcription is therefore a first important step that can affect the life of a cell, modify its functions and shape its responses to the environment. Regulation is a complex operation that involves specialized proteins, the transcription factors (TFs), which can bind to DNA and activate the processes leading to the expression of genes into new proteins. Errors in this process may lead to disease. In particular, some transcription factors have been associated with a lethal pathological state, commonly known as cancer, characterized by uncontrolled cellular proliferation, invasion of healthy tissues and abnormal responses to stimuli. Understanding cancer-related regulatory programs is a difficult task, often involving several TFs interacting and influencing each other's activity. This Thesis presents new computational methodologies to study gene regulation, together with applications of these methods to the understanding of cancer-related regulatory programs. Understanding transcriptional regulation is a major challenge; we address this difficult question by combining computational approaches with large collections of heterogeneous experimental data. In detail, we design signal processing tools to recover transcription factor binding sites on the DNA from genome-wide surveys such as chromatin immunoprecipitation assays on tiling arrays (ChIP-chip). We then use the localization of TF binding to explain the expression levels of regulated genes. In this way we identify a regulatory synergy between two TFs, the oncogene C-MYC and SP1: C-MYC and SP1 bind preferentially at promoters, and when SP1 binds next to C-MYC on the DNA, the nearby gene is strongly expressed. The association between the two TFs at promoters is reflected by the conservation of the binding sites across mammals and by the permissive underlying chromatin states; it represents an important control mechanism involved in cellular proliferation, and thereby in cancer. Secondly, we identify the characteristics of the target genes of the TF estrogen receptor alpha (hERa) and study the influence of hERa on transcriptional regulation. Upon estrogen signaling, hERa binds to DNA to regulate transcription of its targets in concert with its co-factors. To overcome the scarcity of experimental data about the binding sites of other TFs that may interact with hERa, we conduct an in silico analysis of the sequences underlying the ChIP sites using the position weight matrices (PWMs) of the hERa partners FOXA1 and SP1. We combine ChIP-chip and ChIP-paired-end-diTag (ChIP-PET) data on hERa binding with this sequence information to explain gene expression levels in a large collection of cancer tissue samples and in studies of the response of cells to estrogen. We confirm that hERa binding sites are distributed throughout the genome. However, we distinguish between binding sites near promoters and binding sites along transcripts. The first group shows weak binding of hERa and a high occurrence of SP1 motifs, in particular near estrogen-responsive genes.
The second group shows strong binding of hERa and a significant correlation between the number of binding sites along a gene and the strength of gene induction in the presence of estrogen. Some binding sites in the second group also show the presence of FOXA1, but the role of this TF remains to be investigated. Different mechanisms have been proposed to explain hERa-mediated induction of gene expression. Our work supports a model in which hERa activates gene expression from distal binding sites by interacting with promoter-bound TFs such as SP1. hERa has been associated with survival rates of breast cancer patients, though explanatory models are still incomplete; this result is therefore important for a better understanding of how hERa can control gene expression. Thirdly, we address the difficult question of regulatory network inference. We tackle this problem by analyzing time series of biological measurements, such as quantifications of mRNA levels or protein concentrations. Our approach uses well-established penalized linear regression models in which we impose sparseness on the connectivity of the regulatory network. We extend this method by enforcing the coherence of the regulatory dependencies: a TF must behave coherently as an activator or as a repressor across all its targets. This requirement is implemented as constraints on the signs of the regression coefficients in the penalized linear model. Our approach reconstructs meaningful biological networks better than previous methods based on penalized regression. The method was tested on the DREAM2 challenge of reconstructing a five-gene/TF regulatory network, where it obtained the best performance in the "undirected signed excitatory" category. These bioinformatics methods, which are reliable, interpretable and fast enough to handle large biological datasets, have enabled us to better understand gene regulation in humans.
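The sign-coherence constraint described here, a TF acting as an activator or as a repressor consistently across all of its targets, can be mimicked for a handful of TFs by enumerating the possible sign assignments, flipping the corresponding regressor columns, and fitting a positivity-constrained Lasso for each target gene. The sketch below is a brute-force toy built on scikit-learn to illustrate that idea, not the estimator used in the thesis; the function and parameter names are hypothetical.

```python
# Toy illustration of sign-coherent sparse network inference: every TF keeps a
# single sign (activator or repressor) across all target genes. Brute-force
# enumeration is only feasible for a handful of TFs; shown for illustration.
import itertools
import numpy as np
from sklearn.linear_model import Lasso

def sign_coherent_lasso(X, Y, alpha=0.05):
    """X: (samples, n_tfs) TF activities; Y: (samples, n_targets) expression.

    Returns W of shape (n_targets, n_tfs) whose columns have coherent signs,
    together with the chosen sign vector.
    """
    n_tfs = X.shape[1]
    best_loss, best_W, best_signs = np.inf, None, None
    for signs in itertools.product([1.0, -1.0], repeat=n_tfs):
        s = np.asarray(signs)
        Xs = X * s                        # flip columns so fitted coefficients are >= 0
        W = np.zeros((Y.shape[1], n_tfs))
        loss = 0.0
        for j in range(Y.shape[1]):
            model = Lasso(alpha=alpha, positive=True, max_iter=10000).fit(Xs, Y[:, j])
            W[j] = model.coef_ * s        # restore the shared sign of each TF
            resid = Y[:, j] - model.predict(Xs)
            loss += 0.5 * np.mean(resid ** 2) + alpha * np.abs(model.coef_).sum()
        if loss < best_loss:
            best_loss, best_W, best_signs = loss, W, s
    return best_W, best_signs
```

With five genes/TFs, as in the DREAM2 example, only 32 sign patterns need to be tried; a real implementation would enforce the constraint inside the optimization rather than by enumeration.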
Abstract:
This is the second edition of the compendium. Since the first edition, a number of important initiatives have been launched in the shape of large projects targeting the integration of research infrastructure and new technology for toxicity studies and exposure monitoring. The demand for research in the area of human health and environmental safety management of nanotechnologies has been present for a decade and has been identified by several landmark reports and studies. Several guidance documents have been published. It is not the intention of this compendium to report on these, as they are widely available. Nor is it the intention to publish scientific papers and research results, as this task is covered by scientific conferences and the peer-reviewed press. The intention of the compendium is to bring together researchers, create synergy in their work, and establish links and communication between them, mainly during the actual research phase before publication of results. Towards this purpose we find it useful to give emphasis to communication of projects' strategic aims, extensive coverage of specific work objectives and of the methods used in research, strengthening human capacities and laboratory infrastructure, and supporting collaboration for common goals and the joint elaboration of future plans, without compromising scientific publication potential or IP rights. These targets are far from being achieved with the publication in its present shape. We shall continue working, though, and hope, with the assistance of the research community, to make significant progress. The publication will take the shape of a dynamic, frequently updated, web-based document available free of charge to all interested parties. Researchers in this domain are invited to join the effort by communicating the work being done.
Abstract:
This article reviews: 1) some of the results of drug-eluting stents (SYNTAX and FAME); 2) the questionable benefit of physical training in heart failure patients (HF-ACTION); 3) the benefit of cardiac resynchronisation on cardiac remodelling in heart failure patients (REVERSE study); and 4) the role of rate versus rhythm control in patients with atrial fibrillation and heart failure (AF-CHF study). This article also reports the encouraging evolution of new technology allowing percutaneous implantation of stent-valves. Finally, it addresses the screening of athletes for cardiac diseases.
Abstract:
Background: Well-conducted behavioural surveillance (BS) is essential for policy planning and evaluation. Data should be comparable across countries. In 2008, the European Centre for Disease Prevention and Control (ECDC) began a programme to support Member States in the implementation of BS for Second Generation Surveillance. Methods: Data from a mapping exercise on current BS activities in EU/EFTA countries led to recommendations for establishing national BS systems and international coordination, and to the definition of a set of core and transversal (UNGASS-Dublin compatible) indicators for BS in the general population and in eight specific populations. A toolkit for establishing BS has been developed and a BS needs-assessment survey has been launched in 30 countries. Tools for BS self-assessment and planning are currently being tested during interactive workshops with country representatives. Results: The mapping exercise revealed extreme diversity between countries. Around half had established a BS system, but this did not always correspond to the epidemiological situation. Challenges to implementation and harmonisation at all levels emerged from the survey findings and workshop feedback. These include: a lack of synergy between biological and behavioural surveillance and the absence of actors with an overall view of all system elements; limited awareness of the relevance of BS and poor coordination between agencies; insufficient use of available data; financial constraints; poor sustainability, data quality and access to certain key populations; and unfavourable legislative environments. Conclusions: There is widespread need in the region not only for technical support but also for BS advocacy: BS remains the neglected partner of second generation surveillance and requires increased political support and capacity-building in order to become effective. Dissemination of validated tools for BS, developed in interaction with country experts, proves feasible and acceptable.
Abstract:
The purpose of this work is to analyze the bronze table found in Montealegre de Campos. The study of the syntax, anthroponymy and formulaic expressions of this text reveals two different chronological levels, with the insertion of an older text as a fragmentum into another of Trajanic date.
Abstract:
Summary: Syntactic features of Finno-Swedish
Abstract:
Interactions between zinc (Zn) and phosphate (Pi) nutrition in plants have long been recognized, but little information is available on their molecular bases and biological significance. This work aimed at examining the effects of Zn deficiency on Pi accumulation in Arabidopsis thaliana and at uncovering genes involved in the Zn-Pi synergy. Wild-type plants as well as mutants affected in Pi signalling and transport genes, namely the transcription factor PHR1, the E2-conjugase PHO2, and the Pi exporter PHO1, were examined. Zn deficiency caused an increase in shoot Pi content in the wild type as well as in the pho2 mutant, but not in the phr1 or pho1 mutants. This indicated that PHR1 and PHO1 participate in the coregulation of Zn and Pi homeostasis. Zn deprivation had a very limited effect on the transcript levels of Pi-starvation-responsive genes such as AT4, IPS1, and microRNA399, or on members of the high-affinity Pi transporter family PHT1. Interestingly, one of the PHO1 homologues, PHO1;H3, was upregulated in response to Zn deficiency. The expression patterns of PHO1 and PHO1;H3 were similar, both being expressed in cells of the root vascular cylinder and both localizing to the Golgi when expressed transiently in tobacco cells. When grown in Zn-free medium, pho1;h3 mutant plants displayed higher Pi contents in the shoots than wild-type plants. This was, however, not observed in a pho1 pho1;h3 double mutant, suggesting that, in response to Zn deficiency, PHO1;H3 restricts root-to-shoot Pi transfer in a manner that requires PHO1 function, thereby contributing to Pi homeostasis.
Abstract:
Finding an adequate paraphrase representation formalism is a challenging issue in Natural Language Processing. In this paper, we analyse the performance of Tree Edit Distance as a baseline for paraphrase representation. Our experiments using the Edit Distance Textual Entailment Suite show that, because Tree Edit Distance is a purely syntactic approach, paraphrase alternations that are not based on structural reorganization do not find an adequate representation. They also show that there is much scope for better modelling of the way trees are aligned.
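As a concrete illustration of the baseline analysed in this paper, the sketch below computes a tree edit distance between two toy dependency-like trees. It assumes the open-source zss package (an implementation of the Zhang-Shasha algorithm) with unit edit costs; it does not reproduce the paper's experimental setup with the Edit Distance Textual Entailment Suite, and the example trees are invented.

```python
# Tree Edit Distance between two toy dependency-like trees, using the
# third-party "zss" package (Zhang-Shasha algorithm). Illustration only.
from zss import Node, simple_distance

# "The cat ate the mouse"  vs.  "The mouse was eaten by the cat"
t1 = (Node("ate")
      .addkid(Node("cat").addkid(Node("the")))
      .addkid(Node("mouse").addkid(Node("the"))))

t2 = (Node("eaten")
      .addkid(Node("mouse").addkid(Node("the")))
      .addkid(Node("was"))
      .addkid(Node("by").addkid(Node("cat").addkid(Node("the")))))

# Minimum number of node insertions, deletions and relabelings
# needed to turn t1 into t2.
print(simple_distance(t1, t2))
```

In this purely syntactic view, a lexical alternation such as a synonym substitution surfaces only as a relabeling operation, which illustrates the limitation noted above for paraphrase alternations that do not reorganize the tree structure.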