434 results for LDPC decoding
Abstract:
This thesis examines the associations between ADHD dimensions and reading skills at the phenotypic, genetic, and cognitive levels. First, the associations between ADHD dimensions (inattention and hyperactivity/impulsivity) and reading skills (decoding and reading comprehension) were examined in children at the beginning of elementary school (ages 6-8). The results reveal similar associations; however, only those between inattention and reading skills remain after controlling for hyperactivity/impulsivity, conduct disorder symptoms, and non-verbal abilities. Moreover, the associations between inattention and reading skills are largely explained by genetic factors. Second, the associations between ADHD dimensions and reading skills (word reading and accuracy/speed when reading a text) were studied at ages 14-15. Only inattention remains associated with reading skills after controlling for hyperactivity/impulsivity, verbal abilities, and non-verbal abilities. Inattention and reading skills are also genetically correlated, but these correlations become non-significant when verbal abilities are controlled for. Finally, cognitive abilities were examined as potential underlying mechanisms of the association between inattention and reading skills (decoding and reading comprehension) in childhood. Phonological awareness, digit naming speed, bimodal temporal processing, and vocabulary mediate the association between inattention and decoding, whereas phonological awareness, digit and colour naming speed, and vocabulary mediate the association between inattention and reading comprehension. In addition, common genetic factors were observed between some mediators (phonological awareness, digit naming speed, and bimodal temporal processing), inattention, and decoding. Overall, this thesis shows that genetic factors partly explain these associations in childhood and adolescence. Cognitive mediators underlie these associations, possibly through genetic and environmental processes that remain to be specified in future work.
Abstract:
This doctoral dissertation examines parental literacy practices as predictors of early differences in reading. Previous studies have reported preferential links between two types of parental literacy practices, two precursor reading skills, and reading development in elementary school. On the one hand, parental letter teaching, a formal practice, contributed to letter knowledge, which was a predictor of the child's reading skills. On the other hand, parent-child reading and exposure to books, informal practices, predicted the child's language development, which in turn would contribute more specifically to reading comprehension. Several researchers have proposed that mediation processes are involved, but none had formally tested this hypothesis. Moreover, the contribution of parental literacy practices was assessed only once, in kindergarten or at the beginning of first grade, which did not make it possible to identify the age at which it becomes relevant to introduce the child to the world of print. The objective of this dissertation was therefore to formally evaluate a double-mediation model using path analyses while considering exposure to literacy throughout early childhood. Consistent with the preferential links suggested in the literature, parental letter teaching at ages 4 and 5 was found to indirectly predict reading skills (age 8; decoding and reading comprehension) through its contribution to letter knowledge (age 5). Likewise, the child's receptive vocabulary (age 5) mediated the contributions of parent-child reading at 2.5, 4, and 5 years to reading comprehension (age 8). This dissertation underscores the importance of introducing children to literacy at an early age in order to support their subsequent acquisition of reading.
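As a hedged illustration of the kind of double-mediation path model described above, the sketch below estimates two indirect effects (formal practice via letter knowledge, informal practice via vocabulary) on synthetic data. The variable names mirror the abstract, but the data, effect sizes and estimation shortcut (ordinary least squares, product of coefficients) are illustrative assumptions rather than the study's actual path analysis.

```python
# Minimal double-mediation sketch on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 500
letter_teaching = rng.normal(size=n)                    # formal practice (ages 4-5)
shared_reading  = rng.normal(size=n)                    # informal practice (2.5-5 yrs)
letter_knowledge = 0.5 * letter_teaching + rng.normal(size=n)   # mediator 1 (age 5)
vocabulary       = 0.4 * shared_reading  + rng.normal(size=n)   # mediator 2 (age 5)
reading_comp = 0.3 * letter_knowledge + 0.35 * vocabulary + rng.normal(size=n)  # outcome (age 8)

def slope(y, *xs):
    """OLS slope of y on the first predictor, controlling for the remaining ones."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a1 = slope(letter_knowledge, letter_teaching)
b1 = slope(reading_comp, letter_knowledge, vocabulary, letter_teaching, shared_reading)
a2 = slope(vocabulary, shared_reading)
b2 = slope(reading_comp, vocabulary, letter_knowledge, letter_teaching, shared_reading)
print("indirect effect via letter knowledge:", round(a1 * b1, 3))
print("indirect effect via vocabulary:      ", round(a2 * b2, 3))
```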
Abstract:
A survey of primary schools in England found that girls outperform boys in English across all phases (Ofsted, Moving English Forward, Manchester, 2012). The gender gap remains an ongoing issue in England, especially for reading attainment. This paper presents evidence of gender differences in learning to read that emerged during the development of a reading scheme for 4- and 5-year-old children, in which 372 children from Reception classes in sixteen schools participated in 12-month trials. There were three arms per trial: Intervention non-PD (non-phonically decodable text with mixed-methods teaching); Intervention PD (phonically decodable text with mixed-methods teaching); and a 'business as usual' control condition SP (synthetic phonics and decodable text). Assignment to intervention condition was randomised. Standardised measures of word reading and comprehension were used. The research provides statistically significant evidence suggesting that boys learn more easily using a mix of whole-word and synthetic phonics approaches. In addition, the evidence indicates that boys learn to read more easily using the natural-style language of 'real' books, including vocabulary that goes beyond their assumed decoding ability. At post-test, boys using the non-phonically decodable text with mixed methods (Intervention non-PD) were 8 months ahead in reading comprehension compared to boys using a wholly synthetic phonics approach.
Abstract:
This thesis presents the discovery of new inhibitors of the tRNA-dependent amidotransferase (AdT) and summarizes recent knowledge on the biosynthesis of Gln-tRNAGln and Asn-tRNAAsn via the indirect pathway in the bacterium Helicobacter pylori. In the eukaryotic cytoplasm, twenty amino acids are attached to their corresponding tRNAs by twenty aminoacyl-tRNA synthetases (aaRSs). These enzymes are highly specific, and their function is important for correct decoding of the genetic code. However, most bacteria, including H. pylori, lack an asparaginyl-tRNA synthetase and/or a glutaminyl-tRNA synthetase. To form Gln-tRNAGln, H. pylori uses a non-canonical GluRS named GluRS2 that specifically glutamylates tRNAGln; a trimeric AdT, GatCAB, then corrects the mischarged Glu-tRNAGln by transamidating it into Gln-tRNAGln, which will correctly read glutamine codons during protein synthesis on the ribosome. The formation of Asn-tRNAAsn is similar to that of Gln-tRNAGln and uses the same GatCAB together with a non-discriminating AspRS. Since the early 2000s, GatCAB has been considered a promising target for the development of new antibiotics, since it is absent from the human cytoplasm and is encoded in the genomes of several pathogenic bacteria. In Chapter 3, we present the discovery, by phage display, of cyclic peptides rich in tryptophan and proline that inhibit the activity of H. pylori GatCAB. Peptides P10 (CMPVWKPDC) and P9 (CSAHNWPNC) inhibit this enzyme competitively with respect to the Glu-tRNAGln substrate, with inhibition constants (Ki) of 126 μM for P10 and 392 μM for P9. Molecular models showed that they bind the active site of the transamidation reaction catalyzed by GatCAB through a π-π interaction between the Trp residue of these peptides and residue Tyr81 of the GatB subunit, as does the 3'-terminal A76 of the tRNA. In another study of small sulfone-containing compounds that mimic the transamidation reaction intermediate, we identified compounds that also inhibit H. pylori GatCAB competitively with respect to the Glu-tRNAGln substrate. Five times smaller than the cyclic peptides mentioned above, these compounds inhibit GatCAB activity with Ki values of 139 μM for compound 7 and 214 μM for compound 4. These GatCAB inhibitors could be useful for mechanistic studies and could serve as lead molecules for the development of new classes of antibiotics against infections caused by H. pylori.
Abstract:
The non-standard decoding of the CUG codon in Candida cylindracea raises a number of questions about the evolutionary process of this organism and of other species in the Candida clade for which this codon is ambiguous. In order to find some answers, we studied the transcriptome of C. cylindracea, comparing its behaviour with that of Saccharomyces cerevisiae (a standard decoder) and Candida albicans (an ambiguous decoder). The transcriptome characterization was performed using RNA-seq, an approach that has several advantages over microarrays and whose use is expanding rapidly. TopHat and Cufflinks were the software used to build the protocol that allowed gene quantification. About 95% of the reads were mapped on the genome. 3693 genes were analyzed, of which 1338 had a non-standard start codon (TTG/CTG), and the percentage of expressed genes was 99.4%. Most genes have intermediate levels of expression, some have little or no expression, and a minority are highly expressed. The distribution profile of CUGs across the three species is different, but it can be significantly associated with gene expression levels: genes with fewer CUGs are the most highly expressed. However, CUG content is not related to conservation level: more and less conserved genes have, on average, an equal number of CUGs. The most conserved genes are the most expressed. The lipase genes corroborate the results obtained for most genes of C. cylindracea, since they are very rich in CUGs and not at all conserved. The reduced number of CUG codons observed in highly expressed genes may possibly be due to an insufficient number of tRNA genes to cope with more CUGs without compromising translational efficiency. The enrichment analysis confirmed that the most conserved genes are associated with basic functions such as translation, pathogenesis and metabolism. Within this set, genes with more or fewer CUGs seem to have different functions. The key issues in this evolutionary phenomenon remain unclear. However, the results are consistent with previous observations and suggest a variety of conclusions that future analyses should take into consideration, since this is the first time such a study has been conducted.
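The following minimal sketch (not the thesis pipeline) illustrates how per-gene in-frame CTG (CUG) counts could be tallied from coding sequences and compared against expression estimates; the file name, gene identifiers and FPKM values are hypothetical placeholders.

```python
# Sketch: count in-frame CTG (CUG) codons per coding sequence and compare with
# expression levels. File name and the expression table are hypothetical.
from statistics import mean

def read_fasta(path):
    """Yield (gene_id, sequence) pairs from a FASTA file."""
    header, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line[1:].split()[0], []
            else:
                seq.append(line.upper())
        if header is not None:
            yield header, "".join(seq)

def ctg_count(cds):
    """Number of in-frame CTG codons in a coding sequence."""
    return sum(1 for i in range(0, len(cds) - 2, 3) if cds[i:i + 3] == "CTG")

# expression[gene] = FPKM, e.g. as output by Cufflinks (toy values here)
expression = {"LIP1": 2.1, "ACT1": 850.0, "GENE3": 40.0}

counts = {g: ctg_count(s) for g, s in read_fasta("cds.fasta") if g in expression}
low  = [expression[g] for g, c in counts.items() if c <= 2]
high = [expression[g] for g, c in counts.items() if c > 2]
print("mean FPKM, few CUGs: ", mean(low) if low else "n/a")
print("mean FPKM, many CUGs:", mean(high) if high else "n/a")
```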
Abstract:
With the rapid development of Internet technologies, video and audio processing have become increasingly important because of the constant demand for high-quality media content. Along with improvements in network infrastructure and hardware, this demand is becoming ever more pressing: people prefer high-quality video and audio as well as streamed media resources. FFmpeg is an open-source suite of audio/video decoding libraries and tools, and many commercial players use it as their decoding core. This paper presents the design of a simple, easy-to-use video player based on FFmpeg. The first part covers the basic theory and related knowledge of video playback, including concepts such as data formats, streaming media, and video encoding and decoding. In short, the player is built around a video decoding pipeline: receive video packets from the Internet, read and de-encapsulate the relevant protocols, de-encapsulate the container to obtain the encoded data, and decode that data into pixel data that can be displayed directly through the graphics card. During encoding and decoding there can be varying degrees of data loss (lossy compression), but this usually does not noticeably affect the user experience. The second part covers the FFmpeg decoding process, which is one of the key points of the paper. In this project FFmpeg is used for the main decoding task: by calling the main functions and structures of the FFmpeg libraries, packaged video formats can be converted into pixel data, after which SDL is used for display. The third part covers the SDL display flow. Similarly, it invokes the relevant display functions from the SDL libraries, although SDL can handle not only display tasks but also many other tasks used in games. After that, an independent video player is completed, providing all the key functions of a player. The fourth part adds a simple user interface based on MFC, making the player usable by most people. Finally, in view of the rapid growth of the mobile Internet and the fact that people nowadays can hardly put down their mobile phones, a brief introduction is given to porting the video player to the Android platform, one of the most widely used mobile systems.
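A minimal sketch of the demux-decode-to-pixel-data flow described above, using the PyAV Python bindings to FFmpeg rather than the paper's C/MFC player; the input file name is a placeholder, and the display step (handled by SDL in the paper) is omitted.

```python
# Sketch of the container -> packets -> decoded frames -> pixel data flow,
# via the PyAV bindings to FFmpeg. "input.mp4" is a placeholder file name.
import av

container = av.open("input.mp4")            # open and de-encapsulate the container
video = container.streams.video[0]          # pick the first video stream
print("codec:", video.codec_context.name)

for i, frame in enumerate(container.decode(video)):   # packets -> decoded frames
    rgb = frame.to_ndarray(format="rgb24")  # pixel data a renderer (e.g. SDL) could display
    if i == 0:
        print("first frame shape:", rgb.shape)         # (height, width, 3)
    if i >= 30:                              # stop after a few frames in this sketch
        break
container.close()
```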
Abstract:
Over the past few years, the number of wireless network users has been increasing. Until now, Radio-Frequency (RF) has been the dominant technology, but the electromagnetic spectrum in this region is becoming saturated, demanding alternative wireless technologies. Recently, with the growing market for LED lighting, Visible Light Communication (VLC) has been drawing attention from the research community. First, the LED is an efficient illumination device. Second, it is easy to modulate and offers high bandwidth. Finally, it can combine illumination and communication in the same device; in other words, it allows highly efficient wireless communication systems to be implemented. One of the most important aspects of a communication system is its reliability over noisy channels, in which the received data can be affected by errors. To ensure proper system operation, a channel encoder is usually employed. Its function is to code the data to be transmitted in order to increase system performance. It commonly uses error-correcting codes (ECC), which append redundant information to the original data; at the receiver side, this redundant information is used to recover the erroneous data. This dissertation presents the implementation steps of a channel encoder for VLC. Several techniques were considered, such as Reed-Solomon and convolutional codes, block and convolutional interleaving, CRC, and puncturing. A detailed analysis of each technique's characteristics was made in order to choose the most appropriate ones. Simulink models were created to simulate how different codes behave in different scenarios. The models were then implemented on an FPGA and simulations were performed; hardware co-simulations were also used to speed up the simulations. In the end, different techniques were combined to create a complete channel encoder capable of detecting and correcting both random and burst errors, thanks to the use of an RS(255,213) code with a block interleaver. Furthermore, after the decoding process, the proposed system can identify uncorrectable errors in the decoded data thanks to the CRC-32 algorithm.
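A hedged sketch of two of the building blocks listed above, a block interleaver and a CRC-32 integrity check; this is a Python illustration, not the dissertation's Simulink/FPGA implementation, and the RS(255,213) stage is omitted.

```python
# Sketch: block interleaver (write rows, read columns) plus a CRC-32 check.
# Message contents, row count and framing are illustrative assumptions.
import zlib

def interleave(data: bytes, rows: int) -> bytes:
    """Write row-by-row into a rows x cols matrix, read column-by-column.
    Spreads a burst of channel errors across several codewords."""
    cols = -(-len(data) // rows)                      # ceiling division
    padded = data.ljust(rows * cols, b"\x00")
    return bytes(padded[r * cols + c] for c in range(cols) for r in range(rows))

def deinterleave(data: bytes, rows: int) -> bytes:
    cols = len(data) // rows
    return bytes(data[c * rows + r] for r in range(rows) for c in range(cols))

payload = b"warning: example VLC frame payload"
frame = payload + zlib.crc32(payload).to_bytes(4, "big")   # CRC appended by the encoder
tx = interleave(frame, rows=4)

rx = deinterleave(tx, rows=4)[:len(frame)]            # drop interleaver padding
body, rx_crc = rx[:-4], int.from_bytes(rx[-4:], "big")
print("CRC ok:", zlib.crc32(body) == rx_crc)          # flags uncorrectable errors
```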
Abstract:
Abstract Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions.
In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the roulette-wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1), pp. 1-18.
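A minimal sketch of the LCS-style steps described above (constant initial strengths, roulette-wheel rule selection, reinforcement of the rules used in a good solution); the rule names, the placeholder fitness function and the reinforcement constant are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the LCS-style loop: initialize strengths, pick rules by roulette
# wheel, reinforce rules used in a good solution (unused rules are unchanged).
import random

rules = ["highest_demand_first", "least_flexible_nurse_first",
         "cheapest_feasible_shift", "random_fill"]
strength = {r: 1.0 for r in rules}                       # initialization: constant strength

def roulette_pick(strength):
    """Pick one rule with probability proportional to its current strength."""
    total = sum(strength.values())
    x = random.uniform(0, total)
    acc = 0.0
    for rule, s in strength.items():
        acc += s
        if x <= acc:
            return rule
    return rule                                          # guard against rounding

def build_schedule(n_moves=20):
    """Construct a schedule as a sequence of rule choices; fitness is a stand-in."""
    moves = [roulette_pick(strength) for _ in range(n_moves)]
    fitness = random.random()                            # placeholder for a real evaluation
    return moves, fitness

best = 0.0
for _ in range(50):
    moves, fitness = build_schedule()
    if fitness > best:                                   # keep fitter rule strings
        best = fitness
        for rule in moves:                               # reinforce rules used in the good solution
            strength[rule] += 0.1
print({r: round(s, 2) for r, s in strength.items()})
```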
Abstract:
Common computational principles underlie the processing of various visual features in the cortex. They are thought to create similar patterns of contextual modulation in behavioral studies for different features such as orientation and direction of motion. Here, I studied the possibility that a single theoretical framework of circular feature coding and processing, implemented in different visual areas, could explain these similarities in the observations. Stimuli were created that allowed direct comparison of the contextual effects on orientation and motion direction with two different psychophysical probes: changes in weak and in strong signal perception. A single simplified theoretical model of circular feature coding, including only inhibitory interactions and decoding through a standard vector average, successfully predicted the similarities between the two domains, while different feature population characteristics explained well the differences in modulation on the two experimental probes. These results demonstrate how a single computational principle can underlie the processing of various features across the cortices.
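A minimal sketch of the standard vector-average decoding referenced above, applied to a toy population tuned to a circular feature; the tuning-curve form and parameters are illustrative assumptions, not the thesis model.

```python
# Sketch: decode a circular feature (e.g. orientation or motion direction) from
# a population response by a standard vector average. Parameters are illustrative.
import numpy as np

preferred = np.linspace(0, 2 * np.pi, 36, endpoint=False)   # preferred directions of 36 units

def population_response(stimulus, kappa=2.0):
    """Von Mises-like tuning: response of each unit to a stimulus direction."""
    return np.exp(kappa * (np.cos(stimulus - preferred) - 1.0))

def vector_average(responses):
    """Decoded direction = angle of the response-weighted sum of unit vectors."""
    x = np.sum(responses * np.cos(preferred))
    y = np.sum(responses * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

stim = np.deg2rad(75.0)
decoded = vector_average(population_response(stim))
print("stimulus:", round(np.rad2deg(stim), 1), "deg; decoded:", round(np.rad2deg(decoded), 1), "deg")
```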
Abstract:
The next generation of vehicles will be equipped with automated Accident Warning Systems (AWSs) capable of warning neighbouring vehicles about hazards that might lead to accidents. The key enabling technology for these systems is the Vehicular Ad-hoc Network (VANET), but the dynamics of such networks make the crucial timely delivery of warning messages challenging. While most previously attempted implementations have used broadcast-based data dissemination schemes, these do not cope well as data traffic load or network density increases. This thesis addresses the problem of sending warning messages in a timely manner by employing a network coding technique. The proposed NETwork COded DissEmination (NETCODE) is a VANET-based AWS responsible for generating and sending warnings to the vehicles on the road. NETCODE offers an XOR-based data dissemination scheme that sends multiple warnings in a single transmission and therefore reduces the total number of transmissions required to send the same number of warnings that broadcast schemes send. Hence, it reduces contention and collisions in the network, improving the delivery time of the warnings. The first part of this research (Chapters 3 and 4) asserts that, in order to build a warning system, it is necessary to ascertain the system requirements, the information to be exchanged, and the protocols best suited for communication between vehicles. Therefore, a study of these factors is carried out, along with a review of existing proposals identifying their strengths and weaknesses. An analysis of existing broadcast-based warning schemes is then conducted, which concludes that although broadcasting is the most straightforward scheme, increasing load can lead to an effective collapse, resulting in unacceptably long transmission delays. The second part of this research (Chapter 5) proposes the NETCODE design, including the main contribution of this thesis: a pair of encoding and decoding algorithms that use an XOR-based technique to reduce transmission overheads and thus allow warnings to be delivered in time. The final part of this research (Chapters 6--8) evaluates the performance of the proposed scheme in terms of how it reduces the number of transmissions in the network as data traffic load and network density grow, and investigates its capacity to detect potential accidents. The evaluations use a custom-built simulator to model real-world scenarios such as city areas, junctions, roundabouts, motorways and so on. The study shows that the reduction in the number of transmissions significantly reduces contention in the network, allowing vehicles to deliver warning messages more rapidly to their neighbours. It also examines the relative performance of NETCODE when handling both sudden event-driven and longer-term periodic messages in diverse scenarios under stress caused by increasing numbers of vehicles and transmissions per vehicle. This work confirms the thesis' primary contention that XOR-based network coding provides a potential solution on which a more efficient AWS data dissemination scheme can be built.
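A minimal sketch of the XOR idea behind NETCODE as described above: two warnings are combined into one coded transmission, and a vehicle that already holds one of them recovers the other. The message contents and fixed-length padding are illustrative, not the thesis's packet format.

```python
# Sketch of XOR network coding: one coded transmission carries two warnings;
# a neighbour that already overheard one warning recovers the other.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def pad(msg: bytes, size: int = 32) -> bytes:
    """Pad to a fixed length so both warnings can be XORed byte for byte."""
    return msg.ljust(size, b"\x00")

w1 = pad(b"hazard: ice on bridge, km 12")
w2 = pad(b"hazard: stalled truck, exit 4")

coded = xor_bytes(w1, w2)             # one transmission instead of two

# A vehicle that previously received w1 decodes w2 from the coded packet:
recovered = xor_bytes(coded, w1).rstrip(b"\x00")
print(recovered.decode())             # -> hazard: stalled truck, exit 4
```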
Abstract:
Dissertation (Master's), Universidade de Brasília, Faculdade de Tecnologia, 2016.
Abstract:
Positive-sense RNA viruses are important animal, plant, insect and bacterial pathogens and constitute the largest group of RNA viruses. Due to the relatively small size of their genomes, these viruses have evolved a variety of non-canonical translation mechanisms to optimize coding capacity and expand their proteome diversity. One such strategy is codon redefinition, or recoding. First described in viruses, recoding is a programmed translation event in which codon alterations are context dependent. Recoding takes place in a subset of messenger RNAs (mRNAs), with some products reflecting new, and some reflecting standard, meanings. The ratio between the two is both critical and highly regulated. While a variety of recoding mechanisms have been documented (ribosome shunting, stop-carry on, termination-reinitiation, and translational bypassing), the two most extensively employed by RNA viruses are Programmed Ribosomal Frameshifting (PRF) and Programmed Ribosomal Readthrough (PRT). While both subvert normal decoding to express C-terminal extension products, PRF involves an alteration of the reading frame, whereas PRT requires decoding of a nonsense (stop) codon. Both processes occur at a low but defined frequency, and both require Recoding Stimulatory Elements (RSE) for regulation and optimum functionality. These stimulatory signals can be embedded in the RNA in the form of sequence or secondary structure, or can be trans-acting factors outside the mRNA such as proteins or micro RNAs (miRNAs). Despite 40+ years of study, the precise mechanisms by which viral RSEs mediate ribosome recoding for the synthesis of viral proteins, or how the ratio of these products is maintained, are poorly defined. This study reveals that, in addition to a long-distance RNA:RNA interaction, three alternate conformations and a phylogenetically conserved pseudoknot regulate PRT in the carmovirus Turnip crinkle virus (TCV).
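A toy sketch contrasting the two recoding modes described above: -1 programmed ribosomal frameshifting changes the reading frame, while programmed readthrough decodes a stop codon and continues. The sequence and the slippery-site position are illustrative, not TCV's actual recoding signals.

```python
# Toy contrast of PRF vs PRT on a made-up mRNA: PRF re-reads one nucleotide and
# continues in the -1 frame (bypassing the in-frame stop), PRT decodes the stop
# codon itself and continues in the same frame.
STOPS = {"UAA", "UAG", "UGA"}

def codons(mrna, start=0):
    """Split an mRNA into codons from a given offset (i.e. reading frame)."""
    return [mrna[i:i + 3] for i in range(start, len(mrna) - 2, 3)]

def decoded(cods, readthrough=False):
    """Return the codons actually decoded; a stop terminates unless readthrough."""
    out = []
    for c in cods:
        if c in STOPS and not readthrough:
            break
        out.append(c)
    return out

mrna = "AUGGGAAAAUUUUAGAGUGCCCUU"
zero_frame = codons(mrna)                       # AUG GGA AAA UUU UAG ...
print("standard:   ", decoded(zero_frame))      # stops at UAG
print("readthrough:", decoded(zero_frame, readthrough=True))

slip = 12                                       # hypothetical -1 PRF site, before the UAG
prf = zero_frame[:slip // 3] + codons(mrna, start=slip - 1)
print("-1 PRF:     ", decoded(prf))             # continues in the shifted frame
```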
Abstract:
Several studies have reported impairments in decoding emotional facial expressions in intimate partner violence (IPV) perpetrators. However, the mechanisms that underlie these impaired skills are not well known. Given this gap in the literature, we aimed to establish whether IPV perpetrators (n = 18) differ from controls (n = 20) in their emotion decoding process, attentional skills, testosterone (T) and cortisol (C) levels, and T/C ratio, and also to examine the moderating role of group and hormonal parameters in the relationship between attention skills and the emotion decoding process. Our results demonstrated that IPV perpetrators showed poorer emotion recognition and higher attention-switching costs than controls. Nonetheless, they did not differ in attention to detail or in hormonal parameters. Finally, the slope predicting emotion recognition from deficits in attention switching became steeper as T levels increased, especially in IPV perpetrators, although basal C and T/C ratios were unrelated to emotion recognition and attention deficits in both groups. These findings contribute to a better understanding of the mechanisms underlying emotion recognition deficits, which therefore constitute a target for future interventions.
Abstract:
This doctoral thesis seeks to better understand, on the one hand, what influences salivary cortisol secretion and, on the other hand, what influences burnout. Several objectives follow from this. First, it aims to delineate the contribution of work organization conditions (skill utilization, decision authority, psychological demands, physical demands, irregular work schedule, number of hours worked, social support from coworkers, social support from supervisors, job insecurity) to salivary cortisol secretion, as well as the moderating role of certain personality traits (extraversion, agreeableness, neuroticism, conscientiousness, openness, self-esteem, locus of control) in the relationship between work organization conditions and salivary cortisol secretion. This thesis also aims to establish the contribution of work organization conditions to burnout, as well as the moderating role of personality traits in the relationship between work organization conditions and burnout. Finally, it aims to verify whether salivary cortisol secretion mediates the relationship between work organization conditions and burnout, and to identify mediation effects moderated by personality traits in the relationship between work organization conditions and salivary cortisol secretion. These objectives are motivated by several limitations observed in the literature, chiefly the need to integrate biological, psychological, and work-related determinants in understanding burnout. The thesis proposes a conceptual model that seeks to explain how these various stressors lead to a dysregulation of cortisol secretion in workers' saliva. The model then examines whether this dysregulation is associated with burnout. Finally, it seeks to explain how personality may influence the way these variables are related to one another, that is, whether personality plays a moderating role. The model draws on four theories, notably Selye's (1936) biological perspective. Selye's work focuses on the physiological reaction of an organism exposed to a stressor. Under such circumstances, the organism is in a perpetual effort to maintain its equilibrium (homeostasis) and tolerates very little change to this equilibrium. When changes are excessive, a stress response is activated to ensure adaptation by maintaining the organism's basic equilibrium. The conceptual model also draws on Lazarus and Folkman's (1984) model, which posits that the stress response depends instead on individuals' appraisal of the stressful situation, and on Pearlin's (1999) model, which posits that individuals exposed to the same stressors are not necessarily affected in the same way. Finally, the conceptual model draws on Marchand's (2004) model, which posits that reactions depend on how actors decode the constraints and resources that affect them. Several hypotheses emerge from this theoretical framework.
The first is that work organization conditions contribute directly to variations in salivary cortisol secretion. The second is that work organization conditions contribute directly to burnout. The third is that salivary cortisol secretion mediates the relationship between work organization conditions and burnout. The fourth is that the relationship between work organization conditions and salivary cortisol secretion is moderated by personality traits. The fifth is that the relationship between work organization conditions, salivary cortisol secretion, and burnout is moderated by personality traits. Multilevel regression models and path analyses were carried out on a sample of Canadian workers from the SALVEO study. The results are presented as three articles, submitted for publication, which constitute Chapters 4 to 6 of this thesis. Overall, the integrative biopsychosocial model proposed in this doctoral thesis provides a better grasp of the complexity of burnout, which has biological, organizational, and individual explanations. This offers a broader, multilevel understanding and advances knowledge on an issue of concern for organizations, society, and workers. Indeed, taking personality traits and salivary cortisol secretion into account in the study of burnout allows a more integrated and objective analysis. The thesis concludes with the implications of these results for research and their practical consequences for workplaces.