632 results for Decoding


Relevance:

10.00%

Publisher:

Abstract:

Abstract Scheduling problems are generally NP-hard combinatorial problems, and much research has been devoted to solving them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary principle of survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for difficult scheduling problems in recent years. Two obstacles arise when using GAs: there is no canonical mechanism for handling the constraints commonly met in real-world scheduling problems, and small changes to a solution are difficult to make. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs search an encoded solution space and separate decoding routines then build solutions to the original problem.

In our previous indirect GAs, learning is implicit and restricted to the efficient adjustment of weights for a set of rules used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, so using fixed rules at each move is unreasonable and inconsistent with human learning. A human scheduler normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well together, can identify good parts and is aware of solution quality even before the scheduling process is complete, and can thus finish a schedule using flexible, rather than fixed, rules.

In this research we intend to design more human-like scheduling algorithms, using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and here each variable corresponds to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning amounts to 'counting' in the case of multinomial distributions.
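To make the sampling loop concrete, here is a minimal Python sketch of a BOA-style estimation-of-distribution step, under simplifying assumptions not in the abstract: the Bayesian network is reduced to the special case of independent nodes (so learning really is just counting), and the fitness function is a stand-in for a real schedule evaluator.

    import random
    from collections import Counter

    # Hypothetical sizes: 5 construction steps, 4 candidate rules per step.
    N_STEPS, N_RULES, POP, GENS = 5, 4, 20, 30

    def fitness(rule_string):
        # Placeholder objective; a real system would build the schedule from
        # the rules and score constraint violations.
        return -sum(rule_string)

    def learn_probs(promising):
        # "Learning amounts to counting": per-step multinomial frequencies
        # estimated from the current set of promising rule strings.
        return [[Counter(s[step] for s in promising)[r] / len(promising)
                 for r in range(N_RULES)] for step in range(N_STEPS)]

    def sample(probs):
        # Generate a new rule string node by node from the learned model.
        return [random.choices(range(N_RULES), weights=p)[0] for p in probs]

    population = [[random.randrange(N_RULES) for _ in range(N_STEPS)]
                  for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=fitness, reverse=True)      # fitness selection
        probs = learn_probs(population[:POP // 2])      # promising half
        new_strings = [sample(probs) for _ in range(POP // 2)]
        population = population[:POP // 2] + new_strings  # replace the rest

One generation here mirrors one cycle of the abstract: estimate the model from good rule strings, sample another set of strings, and replace part of the population by fitness.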
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly reassessed [4]. To implement more sophisticated learning from previous solutions, an improved LCS-based algorithm is designed, consisting of three steps. The initialization step assigns each rule at each stage a constant initial strength; rules are then selected using the roulette-wheel strategy. The reinforcement step strengthens the rules used in the previous solution, leaving the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will serve as a hill climber for the BOA. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented in general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in scheduling and evolutionary computation.

References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
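A companion sketch of the three-step LCS strength cycle described above; the initial strength and reward constants are invented for illustration, and the improvement test is a placeholder for a real schedule evaluation.

    import random

    N_STEPS, N_RULES = 5, 4           # hypothetical problem size
    INIT_STRENGTH, REWARD = 1.0, 0.5  # assumed constants, not from the abstract

    # Step 1 (initialization): a constant initial strength for every rule
    # at every stage.
    strength = [[INIT_STRENGTH] * N_RULES for _ in range(N_STEPS)]

    def build_solution():
        # Roulette-wheel selection: each stage picks a rule with probability
        # proportional to its current strength.
        return [random.choices(range(N_RULES), weights=stage)[0]
                for stage in strength]

    def reinforce(solution, improved):
        # Step 2 (reinforcement): strengthen only the rules used in the
        # previous solution; unused rules keep their strength unchanged.
        if improved:
            for stage, rule in enumerate(solution):
                strength[stage][rule] += REWARD

    # Step 3 (selection) keeps fitter rules for the next generation; as a
    # hill climber for the BOA, this loop would refine BOA's rule strings.
    solution = build_solution()
    reinforce(solution, improved=True)  # dummy improvement signal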

Relevance:

10.00%

Publisher:

Abstract:

Common computational principles underlie the processing of various visual features in the cortex, and they are thought to create similar patterns of contextual modulation in behavioral studies for different features such as orientation and direction of motion. Here, I studied the possibility that a single theoretical framework of circular feature coding and processing, implemented in different visual areas, could explain these similarities in observations. Stimuli were created that allowed direct comparison of the contextual effects on orientation and motion direction with two different psychophysical probes: changes in weak and in strong signal perception. A single simplified theoretical model of circular feature coding, including only inhibitory interactions and decoding through a standard vector average, successfully predicted the similarities between the two domains, while differences in feature population characteristics explained the differences in modulation on both experimental probes. These results demonstrate how a single computational principle can underlie the processing of various features across the cortices.
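As a hedged aside, the 'standard vector average' readout mentioned above is commonly implemented as a population vector: each unit votes with a unit vector at its preferred angle, weighted by its response, and the decoded feature is the angle of the resultant. A minimal sketch with an invented tuning curve and population size, not the study's actual model:

    import numpy as np

    # Hypothetical population: 16 units with preferred directions tiling the circle.
    preferred = np.linspace(0, 2 * np.pi, 16, endpoint=False)
    responses = np.exp(np.cos(preferred - np.pi / 3))  # toy tuning to a 60 deg stimulus

    # Vector average: sum response-weighted unit vectors, take the resultant angle.
    x = np.sum(responses * np.cos(preferred))
    y = np.sum(responses * np.sin(preferred))
    decoded = np.degrees(np.arctan2(y, x))
    print(decoded)  # ~60, recovering the stimulus direction

Inhibitory interactions between such units shift the effective responses, which is how contextual modulations bias the decoded angle in models of this family.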

Relevance:

10.00%

Publisher:

Abstract:

The next generation of vehicles will be equipped with automated Accident Warning Systems (AWSs) capable of warning neighbouring vehicles about hazards that might lead to accidents. The key enabling technology for these systems is the Vehicular Ad-hoc Network (VANET), but the dynamics of such networks make the crucial timely delivery of warning messages challenging. While most previously attempted implementations have used broadcast-based data dissemination schemes, these do not cope well as data traffic load or network density increases. This thesis addresses the problem of sending warning messages in a timely manner by employing a network coding technique. The proposed NETwork COded DissEmination (NETCODE) is a VANET-based AWS responsible for generating and sending warnings to the vehicles on the road. NETCODE offers an XOR-based data dissemination scheme that sends multiple warnings in a single transmission and therefore reduces the total number of transmissions required to deliver the same number of warnings that broadcast schemes send. Hence, it reduces contention and collisions in the network, improving the delivery time of the warnings. The first part of this research (Chapters 3 and 4) asserts that in order to build a warning system, it is necessary to ascertain the system requirements, the information to be exchanged, and the protocols best suited for communication between vehicles. Therefore, a study of these factors is carried out along with a review of existing proposals identifying their strengths and weaknesses. An analysis of existing broadcast-based warning schemes is then conducted, which concludes that although broadcasting is the most straightforward approach, loading can result in an effective collapse, with unacceptably long transmission delays. The second part of this research (Chapter 5) presents the NETCODE design, including the main contribution of this thesis: a pair of encoding and decoding algorithms that make use of an XOR-based technique to reduce transmission overheads and thus allow warnings to be delivered in time. The final part of this research (Chapters 6-8) evaluates the performance of the proposed scheme in terms of how it reduces the number of transmissions in the network as data traffic load and network density grow, and investigates its capacity to detect potential accidents. The evaluations use a custom-built simulator to model real-world scenarios such as city areas, junctions, roundabouts and motorways. The study shows that the reduction in the number of transmissions significantly reduces contention in the network, allowing vehicles to deliver warning messages more rapidly to their neighbours. It also examines the relative performance of NETCODE when handling both sudden event-driven and longer-term periodic messages in diverse scenarios under stress caused by increasing numbers of vehicles and transmissions per vehicle. This work confirms the thesis' primary contention that XOR-based network coding provides a basis on which a more efficient AWS data dissemination scheme can be built.
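To make the XOR idea concrete, here is a minimal hedged Python sketch, not NETCODE's actual algorithms (framing, padding and neighbour bookkeeping are omitted): when two neighbours each already hold one of two warnings, a relay can XOR the warnings into one coded packet, and each receiver recovers the warning it is missing by XORing the coded packet with the one it has.

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        # XOR two equal-length payloads (a real protocol would pad them).
        return bytes(x ^ y for x, y in zip(a, b))

    w1 = b"ICE AHEAD "          # warning already held by vehicle B
    w2 = b"CRASH LANE"          # warning already held by vehicle A
    coded = xor_bytes(w1, w2)   # relay sends one coded packet instead of two

    # Each receiver XORs the coded packet with the warning it already has:
    assert xor_bytes(coded, w1) == w2  # vehicle B recovers w2
    assert xor_bytes(coded, w2) == w1  # vehicle A recovers w1

One transmission thus delivers two warnings, which is the source of the reduction in contention and collisions measured in the evaluation chapters.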

Relevance:

10.00%

Publisher:

Abstract:

Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, 2016.

Relevance:

10.00%

Publisher:

Abstract:

Positive-sense RNA viruses are important pathogens of animals, plants, insects and bacteria, and constitute the largest group of RNA viruses. Because of the relatively small size of their genomes, these viruses have evolved a variety of non-canonical translation mechanisms that optimize coding capacity and expand proteome diversity. One such strategy is codon redefinition, or recoding. First described in viruses, recoding is a programmed translation event in which codon alterations are context dependent. Recoding takes place in a subset of messenger RNAs (mRNAs), with some products reflecting new, and some reflecting standard, codon meanings; the ratio between the two is both critical and highly regulated. While a variety of recoding mechanisms have been documented (ribosome shunting, stop-carry on, termination-reinitiation, and translational bypassing), the two most extensively employed by RNA viruses are Programmed Ribosomal Frameshifting (PRF) and Programmed Ribosomal Readthrough (PRT). Both PRT and PRF subvert normal decoding to express C-terminal extension products, but the former requires decoding of a non-sense codon while the latter involves an alteration of reading frame. Both processes occur at a low but defined frequency, and both require Recoding Stimulatory Elements (RSE) for regulation and optimum functionality. These stimulatory signals can be embedded in the RNA as sequence or secondary structure, or supplied by trans-acting factors outside the mRNA such as proteins or micro RNAs (miRNAs). Despite more than 40 years of study, the precise mechanisms by which viral RSE mediate ribosome recoding, and how the ratio of recoded products is maintained, remain poorly defined. This study reveals that, in addition to a long-distance RNA:RNA interaction, three alternate conformations and a phylogenetically conserved pseudoknot regulate PRT in the carmovirus Turnip crinkle virus (TCV).
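As a hedged toy contrast between the two mechanisms (the codon table is heavily truncated and the recoding sites and amino-acid choice are invented): PRT decodes a stop codon as an amino acid and stays in frame, while -1 PRF slips the ribosome back one nucleotide and continues in the new frame.

    # Toy decoder illustrating PRT vs -1 PRF; positions and choices are invented.
    CODE = {"UUU": "F", "UUC": "F", "CUU": "L", "GGA": "G",
            "UGA": "*", "UAA": "*", "UAG": "*"}  # truncated standard table

    def translate(mrna, readthrough_at=None, frameshift_at=None):
        protein, i = [], 0
        while i + 3 <= len(mrna):
            codon = mrna[i:i + 3]
            aa = CODE.get(codon, "X")
            if aa == "*":
                if i != readthrough_at:
                    break                # ordinary termination
                protein.append("W")      # PRT: stop decoded as an amino acid (assumed Trp)
            else:
                protein.append(aa)
            i += 3
            if i == frameshift_at:
                i -= 1                   # -1 PRF: slip back one nucleotide
        return "".join(protein)

    print(translate("UUUUGAUUC", readthrough_at=3))  # FWF: readthrough product

In a real virus both the terminated and the extended product are made, at a ratio set by the RSE; the sketch shows only a single deterministic pass.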

Relevance:

10.00%

Publisher:

Abstract:

Several studies have reported impairments in decoding emotional facial expressions in intimate partner violence (IPV) perpetrators. However, the mechanisms that underlie these impaired skills are not well known. Given this gap in the literature, we aimed to establish whether IPV perpetrators (n = 18) differ from controls (n = 20) in their emotion decoding process, attentional skills, testosterone (T) and cortisol (C) levels and T/C ratio, and to examine the moderating role of group and hormonal parameters in the relationship between attentional skills and the emotion decoding process. Our results demonstrated that IPV perpetrators showed poorer emotion recognition and higher attention-switching costs than controls, but did not differ in attention to detail or hormonal parameters. Finally, the slope predicting emotion recognition from deficits in attention switching became steeper as T levels increased, especially in IPV perpetrators, although basal C and T/C ratios were unrelated to emotion recognition and attention deficits in both groups. These findings contribute to a better understanding of the mechanisms underlying emotion recognition deficits, which therefore constitute a target for future interventions.

Relevance:

10.00%

Publisher:

Abstract:

This doctoral thesis seeks to better understand, on the one hand, what influences salivary cortisol secretion and, on the other, what influences professional burnout. Several objectives follow. First, it aims to determine the contribution of work-organization conditions (skill utilization, decision authority, psychological demands, physical demands, irregular work schedule, number of hours worked, co-worker social support, supervisor social support, job insecurity) to salivary cortisol secretion, as well as the moderating role of certain personality traits (extraversion, agreeableness, neuroticism, conscientiousness, openness, self-esteem, locus of control) in the relationship between work-organization conditions and salivary cortisol secretion. It also aims to establish the contribution of work-organization conditions to burnout, and the moderating role of personality traits in the relationship between work-organization conditions and burnout. Finally, it aims to determine whether salivary cortisol secretion mediates the relationship between work-organization conditions and burnout, and to identify mediation effects moderated by personality traits in the relationship between work-organization conditions and salivary cortisol secretion. These objectives are motivated by a number of limitations observed in the literature, chiefly the need to integrate biological, psychological and work-related determinants in the understanding of burnout. The thesis proposes a conceptual model that asks how these various stressors lead to a dysregulation of cortisol secretion in workers' saliva, whether this dysregulation is associated with burnout, and how personality may influence the way these variables are related, that is, whether personality plays a moderating role. The model draws on four theories, notably Selye's (1936) biological perspective. Selye's work focuses on the physiological reaction of an organism subjected to a stressor: the organism constantly strives to maintain its equilibrium (homeostasis) and tolerates very little deviation from it, and when deviations become excessive, a stress response is activated to ensure adaptation by maintaining the organism's basic equilibrium. The conceptual model also builds on Lazarus and Folkman's (1984) model, which holds that the stress response depends rather on individuals' appraisal of the stressful situation, and on Pearlin's (1999) model, which holds that individuals exposed to the same stressors are not necessarily affected in the same way. Finally, it draws on Marchand's (2004) model, which holds that reactions depend on how actors decode the constraints and resources that affect them. Several hypotheses emerge from this theoretical framework.
The first is that work-organization conditions contribute directly to variations in salivary cortisol secretion. The second is that work-organization conditions contribute directly to burnout. The third is that salivary cortisol secretion mediates the relationship between work-organization conditions and burnout. The fourth is that the relationship between work-organization conditions and salivary cortisol secretion is moderated by personality traits. The fifth is that the relationship between work-organization conditions, salivary cortisol secretion and burnout is moderated by personality traits. Multilevel regression models and path analyses were performed on a sample of Canadian workers from the SALVEO study. The results are presented as three articles, submitted for publication, which constitute Chapters 4 to 6 of this thesis. Overall, the integrative biopsychosocial model proposed in this doctoral thesis offers a better grasp of the complexity of burnout, which has biological, organizational and individual explanations. This provides a broader, multilevel understanding and advances knowledge on a problem of concern to organizations, society and workers. Indeed, taking personality traits and salivary cortisol secretion into account in the study of burnout allows a more integrated and objective analysis. The thesis concludes with the implications of these results for research and their practical significance for workplaces.

Relevance:

10.00%

Publisher:

Abstract:

The CATARINA Leg1 cruise was carried out from June 22 to July 24, 2012 on board the B/O Sarmiento de Gamboa, under the scientific supervision of Aida Rios (CSIC-IIM). It included a repeat of the OVIDE hydrological section, previously occupied in June 2002, 2004, 2006, 2008 and 2010 as part of the CLIVAR program (name A25), under the supervision of Herlé Mercier (CNRS-LPO). This section begins near Lisbon (Portugal), runs through the West European Basin and the Iceland Basin, crosses the Reykjanes Ridge (300 miles north of the Charlie-Gibbs Fracture Zone), and ends at Cape Hoppe (southeast tip of Greenland). The objective of this repeated hydrological section is to monitor the variability of water mass properties and main current transports in the basin, complementing the international observation array relevant for climate studies. In addition, the Labrador Sea was partly sampled (stations 101-108) between Greenland and Newfoundland, but heavy weather prevented completion of the section south of 53°40'N. The quality of the CTD data is essential to the first objective of the CATARINA project, i.e. to quantify the Meridional Overturning Circulation and water mass ventilation changes and their effect on changes in the anthropogenic carbon ocean uptake and storage capacity. The CATARINA project was mainly funded by the Spanish Ministry of Science and Innovation and co-funded by the Fondo Europeo de Desarrollo Regional. The hydrological OVIDE section includes 95 surface-to-bottom stations from coast to coast, collecting profiles of temperature, salinity, oxygen and currents, spaced 2 to 25 nautical miles apart depending on the steepness of the topography. The positions of the stations closely follow those of OVIDE 2002. In addition, 8 stations were carried out in the Labrador Sea. From the 24 bottles closed at various depths at each station, samples of sea water were used for salinity and oxygen calibration and for measurements of biogeochemical components that are not reported here. The data were acquired with a Seabird CTD (SBE911+) and an SBE43 dissolved-oxygen sensor belonging to the Spanish UTM group. The SBE data-processing software was used after decoding and cleaning the raw data. The LPO Matlab toolbox was then used to calibrate and bin the data, as for previous OVIDE cruises, using pre- and post-cruise calibration results for the pressure and temperature sensors (done at Ifremer) on the one hand, and the water samples from the 24 rosette bottles at each station for the salinity and dissolved oxygen data on the other. A final accuracy of 0.002°C, 0.002 psu and 0.04 ml/l (2.3 µmol/kg) was obtained for the final profiles of temperature, salinity and dissolved oxygen, compatible with the international requirements issued from the WOCE program.

Relevance:

10.00%

Publisher:

Abstract:

Doctoral thesis, Universidade de Brasília, Faculdade de Comunicação, Programa de Pós-Graduação em Comunicação, 2016.

Relevance:

10.00%

Publisher:

Abstract:

We propose weakly-constrained stream and block codes with tunable pattern-dependent statistics and demonstrate that the block code capacity at large block sizes is close to the prediction obtained from a simple Markov model published earlier. We demonstrate the feasibility of the codes by presenting original encoding and decoding algorithms with complexity log-linear in the block size and with modest table memory requirements. We also show that when such codes are used to mitigate patterning effects in optical fibre communications, a gain of about 0.5 dB is possible under realistic conditions, at the expense of small redundancy (≈10%). © 2010 IEEE
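As a hedged aside on the kind of Markov-model prediction referred to above (this is the textbook calculation for a hard constraint, not the paper's weakly-constrained model): the capacity of a constrained code is log2 of the largest eigenvalue of the constraint graph's adjacency matrix. For example, forbidding the bit pattern '11':

    import numpy as np

    # States record the last emitted bit; transitions avoid the pattern "11".
    A = np.array([[1, 1],   # after a 0 we may emit 0 or 1
                  [1, 0]])  # after a 1 we may only emit 0
    capacity = np.log2(max(np.linalg.eigvals(A).real))
    print(capacity)  # ~0.694 bits/symbol (the golden-ratio constraint)

A weakly-constrained code, by contrast, only biases pattern statistics rather than forbidding patterns outright, which is why its capacity penalty can be held to the small redundancy quoted above.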

Relevance:

10.00%

Publisher:

Abstract:

mRNA translation in many ciliates utilizes variant genetic codes where stop codons are reassigned to specify amino acids. To characterize the repertoire of ciliate genetic codes, we analyzed ciliate transcriptomes from marine environments. Using codon substitution frequencies in ciliate protein-coding genes and their orthologs, we inferred the genetic codes of 24 ciliate species. Nine did not match genetic code tables currently assigned by NCBI. Surprisingly, we identified a novel genetic code where all three standard stop codons (TAA, TAG, and TGA) specify amino acids in Condylostoma magnum. We provide evidence suggesting that the functions of these codons in C. magnum depend on their location within mRNA. They are decoded as amino acids at internal positions, but specify translation termination when in close proximity to an mRNA 3' end. The frequency of stop codons in protein coding sequences of closely related Climacostomum virens suggests that it may represent a transitory state.
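As a hedged toy model of the context dependence described above (the 3'-proximity cutoff is an invented parameter, and the table is heavily truncated; the reassignments follow the reported Condylostoma code, TAA/TAG as glutamine and TGA as tryptophan):

    # Illustrative context-dependent decoding: standard stop codons are read
    # as amino acids internally but terminate translation near the 3' end.
    REASSIGNED = {"TAA": "Q", "TAG": "Q", "TGA": "W"}     # reported assignments
    STANDARD = {"ATG": "M", "TGG": "W", "CAA": "Q"}       # truncated table
    NEAR_3PRIME = 30  # assumed cutoff in nucleotides, not a measured value

    def translate(mrna: str) -> str:
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            codon = mrna[i:i + 3]
            if codon in REASSIGNED:
                if len(mrna) - i <= NEAR_3PRIME:
                    break                              # close to 3' end: true stop
                protein.append(REASSIGNED[codon])      # internal: amino acid
            else:
                protein.append(STANDARD.get(codon, "X"))
        return "".join(protein)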

Relevance:

10.00%

Publisher:

Abstract:

Ribosome profiling (Ribo-seq), a promising technology for exploring ribosome decoding rates, produces data characterized by infrequent high peaks in ribosome footprint density and by long alignment gaps. Here, to reduce the impact of data heterogeneity, we introduce a simple normalization method, Ribo-seq Unit Step Transformation (RUST). RUST is robust and outperforms other normalization techniques in the presence of heterogeneous noise. We illustrate how RUST can be used to identify mRNA sequence features that affect ribosome footprint densities globally, and show that a few parameters extracted with RUST are sufficient to predict experimental densities with high accuracy. Importantly, the application of RUST to 30 publicly available Ribo-seq data sets revealed substantial variation in the sequence determinants of ribosome footprint frequencies, questioning the reliability of Ribo-seq as an accurate representation of local ribosome densities without prior quality control. This emphasizes our incomplete understanding of how protocol parameters affect ribosome footprint densities.
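A minimal sketch of the unit-step idea, schematic and under the assumption that the step is taken at each profile's mean (the codon-enrichment machinery built on top of RUST is omitted):

    import numpy as np

    def rust_transform(footprints: np.ndarray) -> np.ndarray:
        # Unit step: positions above the profile mean become 1, others 0, so an
        # infrequent extreme peak contributes no more than any enriched position.
        return (footprints > footprints.mean()).astype(float)

    # Toy profile with one extreme peak that would dominate a plain average.
    profile = np.array([2.0, 0.0, 3.0, 1.0, 250.0, 2.0, 0.0, 1.0])
    print(rust_transform(profile))  # [0. 0. 0. 0. 1. 0. 0. 0.]

Because the transform is binary, sequence features are then scored by how often they coincide with a 1, rather than by raw footprint counts.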

Relevance:

10.00%

Publisher:

Abstract:

The aim of this chapter is to take stock of what is known about the relations between reading comprehension and written composition. Two approaches are considered. The first relies on global assessments of the so-called low-level dimensions (the code: transcription and decoding) and of the high-level dimensions (comprehension and text production). It reveals strong relations between the former, but rather weak ones between the latter. The second approach relates components of the two activities to each other: vocabulary (in reading and in production), syntax, text structure, and so on. It brings out a greater complexity of relations and a relative independence of the components. It leads to raising anew the question of the respective and reciprocal impacts of reading comprehension on written composition, to asking what the effects would be of a practice giving priority to production rather than to reading comprehension, and to seeking intervention methods for improving the one through the other, and vice versa.

Relevance:

10.00%

Publisher:

Abstract:

Cardiovascular diseases are the leading cause of death worldwide, accounting for one third of annual deaths according to the World Health Organization. Hypercholesterolemia, characterized by elevated plasma levels of low-density lipoproteins (LDL), is one of the major risk factors for cardiovascular disease. Proprotein convertase subtilisin/kexin type 9 (PCSK9) plays an essential role in blood cholesterol homeostasis by regulating protein levels of the LDL receptor (LDLR). PCSK9 can bind the LDLR and promote the receptor's internalization and degradation in lysosomes. Inhibition of PCSK9 is a validated therapeutic target for the treatment of hypercholesterolemia and the prevention of cardiovascular disease. However, several mechanisms responsible for the regulation and degradation of the PCSK9-LDLR complex have not yet been fully characterized, such as regulation by the protein annexin A2 (AnxA2), an endogenous PCSK9 inhibitor. Moreover, several lines of evidence suggest the existence of one or more still unknown proteins involved in PCSK9's mechanism of action, which could regulate the internalization and transport of the PCSK9-LDLR complex to the lysosomes. The objectives of this thesis are to better define the role and impact of AnxA2 on PCSK9, and to identify new PCSK9 interaction partners in order to better characterize its mechanism of action in the regulation of LDLR levels. We showed that the inhibition of PCSK9 by extracellular AnxA2 occurs through binding to the M1+M2 domains of the C-terminal region of PCSK9, and we provide the first evidence of intracellular control by AnxA2 over the translation of PCSK9 mRNA. Our results reveal that AnxA2 binds PCSK9 messenger RNA and causes translational repression. We also identified the protein glypican-3 (GPC3) as a new interaction partner of PCSK9 extracellularly, and of the PCSK9-LDLR complex intracellularly, in the endoplasmic reticulum of HepG2 and Huh7 cells. Our studies show that GPC3 reduces the extracellular activity of PCSK9 by acting as a competitor of the LDLR for binding to PCSK9. A better understanding of the mechanisms of regulation and degradation of the PCSK9-LDLR complex will make it possible to better assess the impact and efficacy of PCSK9 inhibitors.