522 results for Strands
Abstract:
Permanent magnet generators (PMGs) represent the cutting-edge technology in modern wind turbines. Their efficiency remains high (over 90%) at partial loads. To improve the machine efficiency even further, every aspect of machine losses has to be analyzed. Additional losses are often given as a certain percentage without any detailed information about the actual calculation process; meanwhile, many design-dependent losses affect the total amount of additional losses and have to be taken into consideration. Additional losses are most often eddy current losses in different parts of the machine. These losses are usually difficult to calculate in the design process. In this doctoral thesis, some additional losses are identified and modeled, and suggestions on how to minimize them are given. Iron losses can differ significantly between the measured no-load values and the loss values under load. In addition, with embedded-magnet rotors, the quadrature-axis armature reaction adds losses to the stator iron by altering the harmonic content of the flux. The traditional guideline for machine designers, keeping the flux density below 1.5 T in the stator yoke, was therefore re-evaluated for salient-pole machines as a means of minimizing both the losses and the loss difference between no-load and load operation. Eddy current losses may occur in the end-winding area and in the support structure of the machine, that is, in the finger plate and the clamping ring. With construction steel, these losses account for 0.08% of the input power of the machine. They can be reduced almost to zero by using nonmagnetic stainless steel. In addition, the machine housing may be subject to eddy current losses if the flux density exceeds 1.5 T in the stator yoke. Winding losses can rise rapidly when high frequencies and 10–15 mm high conductors are used. In general, minimizing the winding losses is simple.
For example, it can be done by dividing the conductor into transposed subconductors, although this comes at the expense of an increased DC resistance. In this doctoral thesis, a new method is presented to minimize the winding losses by applying a litz wire with noninsulated strands. The construction is the same as in a normal litz wire, but the insulation between the subconductors has been left out. The idea is that the galvanic connection between strands is kept weak enough to prevent harmful eddy currents from flowing. Moreover, the analytical solution for calculating the AC resistance factor of the litz wire is supplemented by including an end-winding resistance. A simple measurement device is developed to measure the AC resistance of the windings. In the case of a litz wire with originally noninsulated strands, vacuum pressure impregnation (VPI) is used to insulate the subconductors. In one of the two cases studied, the VPI affected the AC resistance factor, but in the other case it had no effect; more research is needed to determine the effect of the VPI on litz wire with noninsulated strands. An empirical model is developed to calculate the AC resistance factor of a single-layer form-wound winding. The model includes the end-winding length and the numbers of strands and turns. The end winding carries both the main current and circulating currents (eddy currents that travel through the whole winding between parallel strands). The end-winding length also affects the total AC resistance factor.
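The scaling that makes strand subdivision effective can be illustrated with the standard skin-depth estimate for an isolated round conductor. The sketch below is not the thesis's analytical model (which also accounts for the end winding and circulating currents between parallel strands); it uses the common low-frequency approximation R_AC/R_DC ≈ 1 + (d/δ)⁴/48, and the copper material constants are assumed defaults.

```python
import math

def skin_depth(freq_hz, resistivity=1.68e-8, mu_r=1.0):
    """Skin depth (m) of a conductor; defaults are for copper at room temperature."""
    mu0 = 4e-7 * math.pi
    return math.sqrt(resistivity / (math.pi * freq_hz * mu_r * mu0))

def ac_resistance_factor(strand_diameter_m, freq_hz):
    """Low-frequency approximation R_AC/R_DC ~ 1 + (d/delta)^4 / 48
    for an isolated round strand (valid roughly while d < 2*delta)."""
    d_over_delta = strand_diameter_m / skin_depth(freq_hz)
    return 1.0 + d_over_delta**4 / 48.0

# A solid 10 mm conductor vs. 1 mm strands at 50 Hz:
print(ac_resistance_factor(0.010, 50))  # noticeable skin-effect increase
print(ac_resistance_factor(0.001, 50))  # essentially 1.0
```

At 50 Hz the copper skin depth is about 9.2 mm, so a 10 mm solid conductor already shows a measurable AC resistance increase, while 1 mm strands are nearly unaffected, which is the motivation for subdividing tall conductors.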
Abstract:
Although wall protuberances have been suggested to be pectic in composition, no clear evidence for this has been presented. Here we show the occurrence of such structures in the walls of cells from cotyledons of Hymenaea courbaril L. These cells are known to accumulate large amounts of storage xyloglucan in the wall, and in this case the protuberances seem to contain this storage polysaccharide rather than pectin. A hypothetical sequence of events leading from wall strands to protuberances was assembled on the basis of scanning electron microscopy observations. On this basis, a tentative model of how polysaccharides are distributed in the wall near the regions where protuberances are found is proposed to explain the presence of storage xyloglucan in their composition.
Abstract:
The lianas observed in this study, Abuta convexa (Vell.) Diels, Abuta imene (Mart.) Eichler, and Chondrodendron platyphyllum (A. St.-Hil.) Miers, all have successive cambia in their stems. The terminology applied to stem histology in species with successive cambia is as diverse as the interpretations of the origins of this cambial variant. Therefore, this study specifically investigates the origin of successive cambia through a developmental analysis of the above-mentioned species, including an analysis of the terminology used to describe this cambial variation. For the first time, we have identified several developmental stages giving rise to the origins of successive cambia in this family. First, 1-3 layers of conjunctive tissue originate from the pericycle. After the differentiation of the first ring, the conjunctive tissue undergoes new divisions, developing approximately 10 rows of parenchyma cells. In the middle portion, a layer of sclereids is formed, subdividing the conjunctive tissue into two parts: internal and external. New cambia originate in the internal part, from which new secondary vascular strands will originate, giving rise to the second successive vascular ring of the stem. The external part remains parenchymatous during the installation of the second ring and will undergo new periclinal divisions, repeating the entire process. New cambia will originate from the neoformed strands, which will form only rays. In the literature, successive cambia are said to be formed by a meristem called the "diffuse lateral meristem." However, based on the species of Menispermaceae studied in this report, it is demonstrated that the diffuse lateral meristem is the pericycle itself.
Abstract:
A master in the periphery of capitalism. Maria da Conceição Tavares is an eminent figure in Brazilian economic thought, especially in heterodox circles. She has tackled various issues, such as underdevelopment, from the perspective of a "critique of political economy". The purpose of this article is to identify the main theoretical references, as well as the methodological stance, in Tavares's works, by revisiting the author's critical dialogue with some strands of Political Economy. Although Tavares's work sets up a dialogue with various economists, the paper will focus on her interpretation of Marx, Keynes and Kalecki, whose ideas are of utmost importance for the construction of her analytical framework.
Abstract:
Metal-ion-mediated base-pairing of nucleic acids has attracted considerable attention during the past decade, since it offers a means to expand the genetic code with artificial base pairs, to create predesigned molecular architectures through metal-ion-mediated inter- or intra-strand cross-links, or to convert double-stranded DNA into a nano-scale wire. Such applications largely depend on the presence of a modified nucleobase in both strands engaged in the duplex formation. Hybridization of metal-ion-binding oligonucleotide analogs with natural nucleic acid sequences has received much less attention in spite of obvious applications. While natural oligonucleotides hybridize with high selectivity, their affinity for complementary sequences is inadequate for a number of applications. In the case of DNA, for example, more than 10 consecutive Watson-Crick base pairs are required for a stable duplex at room temperature, making the targeting of shorter sequences challenging. For example, many types of cancer exhibit distinctive profiles of oncogenic miRNA, the diagnostics of which is, however, difficult owing to the presence of only short single-stranded loop structures. Metallo-oligonucleotides, with their superior affinity towards their natural complements, would offer a way to overcome the low stability of short duplexes. In this study, a number of metal-ion-binding surrogate nucleosides were prepared and their interaction with nucleoside 5′-monophosphates (NMPs) was investigated by 1H NMR spectroscopy. To find metal ion complexes that could discriminate between natural nucleobases upon double helix formation, glycol nucleic acid (GNA) sequences carrying a PdII ion with vacant coordination sites at a predetermined position were synthesized, and their affinity for complementary as well as mismatched counterparts was quantified by UV-melting measurements.
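The UV-melting measurements mentioned above are conventionally analyzed with a two-state (all-or-none) van't Hoff model. As a reminder of the quantities involved (the symbols below are the standard ones, not necessarily those used in this work), the duplex stability and the melting temperature of a non-self-complementary duplex at total strand concentration $C_T$ are

```latex
\Delta G^{\circ} = \Delta H^{\circ} - T\,\Delta S^{\circ},
\qquad
T_m = \frac{\Delta H^{\circ}}{\Delta S^{\circ} + R \ln\!\left(C_T/4\right)}
```

so a higher-affinity metallo-oligonucleotide shows up directly as an upward shift in $T_m$ at a fixed strand concentration.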
Abstract:
IT outsourcing (ITO) refers to the transfer of IT/IS activities from inside an organization to external providers. In prior research, the governance of ITO is recognized as having persistent strategic importance for practice, because it is tightly related to ITO success. Amid the rapid transformation of the global market, the evolving practice of ITO requires updated knowledge on effective governance. However, research on ITO governance is still underdeveloped, owing to the lack of integrated theoretical frameworks and the variety of empirical settings beyond dyadic client-vendor relationships. In particular, as multi-sourcing has become an increasingly common practice in ITO, its new governance challenges must be attended to by both ITO researchers and practitioners. To address this research gap, this study aims to understand multi-sourcing governance through an integrated theoretical framework incorporating both governance structure and governance mechanisms. The focus is on the emerging deviations among formal, perceived, and practiced governance. From an interpretive perspective, a single case study is conducted with mixed methods of Social Network Analysis (SNA) and qualitative inquiry. The empirical setting embraces one client firm and its two IT suppliers for IT infrastructure services. The empirical material is analyzed at three levels: within one supplier firm, between the client and one supplier, and among all three firms. Empirical evidence at all levels illustrates various deviations in governance mechanisms, through which emerging governance structures are shaped. This dissertation contributes to the understanding of ITO governance in three domains: the governance of ITO in general, the governance of multi-sourcing in particular, and research methodology. For ITO governance in general, this study has identified two research strands, on governance structure and governance mechanisms, and integrated both concepts under a unified framework.
The composition of four research papers contributes to multi-sourcing research by illustrating the benefits of zooming in and out across the multilateral relationships with different aspects and scopes. Methodologically, the viability and benefits of mixed methods are illustrated and confirmed for both researchers and practitioners.
Abstract:
Introduction. The question of the meaning, methods, and philosophical manifestations of history is currently rife with contention. The problem that I will address in an exposition of the thought of Wilhelm Dilthey and Martin Heidegger centers on the intersubjectivity of an historical world. Specifically, there are two interconnected issues. First, since all knowledge occurs to a person from within his or her historical age, how can any person in any age make truth claims? In order to answer this concern we must understand the essence and role of history. Yet how can we come to an individual understanding of what history is when the meanings that we use are themselves historically enveloped? But can we, we who are well aware of the knowledge that archaeology has dredged up from old texts or even from 'living' monuments of past ages, really neglect to notice these artifacts that exist within and enrich our world? Charges of wilful blindness would arise if any attempt were made to suggest that certain things of our world did not come down to us from the past. Thus it appears more important to determine what this 'past' is, and therefore how history operates, than simply to derail the possibility of historical understanding. Wilhelm Dilthey, the great German historicist of the 19th century, did not question the existence of historical artifacts as coming from the past, but in treating knowledge as one such artifact he placed the onus on knowledge to show itself as true, or meaningful, in light of the fact that other historical periods relied on different facts and generated different truths or meanings. The problem for him was not just determining what the role of history is, but moreover discovering how knowledge could make any claim as true knowledge. As he stated, there is a problem of "historical anarchy." Martin Heidegger picked up these two strands of Dilthey's thought and wanted to answer the problem of truth and meaning in order to solve the problem of historicism.
This problem underscored, perhaps for the first time, that societal presuppositions about the past and present of an era are not immutable. Penetrating to the core of the raison d'etre of the age was an historical reflection about the past, which was now conceived as separated both temporally and attitudinally from the present. But further than this, Heidegger's focus on asking the question of the meaning of Being meant that history must be ontologically explicated, not merely ontically treated. Heidegger hopes to remove barriers to a genuine ontology by including history in an assessment of previous philosophical systems. He does this in order that the question of Being be more fully explicated, which necessarily for him includes the question of the Being of history. One approach to the question of what history is, given the information that we get from historical knowledge, is to ask whether such knowledge can be formalized into a science. Additionally, we can approach the question of what the essence and role of history is by revealing its underlying characteristics, that is, by focusing on historicality. Thus we will begin with an expository look at Dilthey's conception of history and historicality. We will then explore these issues first in Heidegger's Being and Time, then, in the third chapter, in his middle and later works. Finally, we shall examine how Heidegger's conception may reflect a development in the conception of historicality over Dilthey's historicism, and what such a conception means for a contemporary historical understanding. The problem of existing in a common world which is perceived only individually has been philosophically addressed in many forms. Escaping a pure subjectivist interpretation of 'reality' has occupied Western thinkers not only in order to discover metaphysical truths, but also to provide a foundation for politics and ethics.
Many thinkers accept a solipsistic view as inevitable and reject attempts at justifying truth in an intersubjective world. The problem of historicality raises similar problems. We exist in a common historical age, presumably, yet are only aware of the historicity of the age through our own individual thoughts. Thus the question arises: do we actually exist within a common history, or do we merely individually interpret this as communal? What is the reality of history, individual or communal? Dilthey answers this question by asserting a 'reality' to the historical age, thus overcoming solipsism by encasing individual human experience within the historical horizon of the age. This, however, does nothing to address the epistemological concern over the discoverability of truth. Heidegger, on the other hand, rejects a metaphysical construal of history and seeks to ground history first within the ontology of Dasein, and second within the so-called "sending" of Being. Thus there can be no solipsism for Heidegger, because Dasein's Being is necessarily "co-historical," Being-with-Others, and furthermore, this historical-Being-in-the-world-with-Others is the horizon of Being over which truth can appear. Heidegger's solution to the problem of solipsism appears to satisfy both that the world is not just a subjective idealist creation and that one need not appeal to any universal measures of truth or presumed eternal verities. Thus, in elucidating Heidegger's notion of history, I will also confront the issues of Dasein's Being-alongside-things as well as the Being of Dasein as Being-in-the-world, so that Dasein's historicality is explicated vis-a-vis the "sending of Being" (das Schicken des Seins).
Abstract:
I am a part-time graduate student who works in industry. This study is my narrative about how six workers and I describe shop-floor learning activities, that is, learning activities that occur where work is done, outside a classroom. Because this study is narrative inquiry, you will learn about me, the narrator, more than you would in a more conventional study. This is a common approach in narrative inquiry, and it is important because my intentions shape the way that I tell these six workers' stories. I developed a typology of learning activities by synthesizing various theoretical frameworks. This typology categorizes shop-floor learning activities into five types: on-the-job training, participative learning, educational advertising, incidental learning, and self-directed learning. Although learning can occur in each of these activities in isolation, it often comprises a mixture of these activities. The literature review contains a number of cases that have been developed from situations described in the literature. These cases are here to make the similarities and differences between the types of learning activities that they represent more understandable to the reader, and to ground the typology in practice as well as in theory. The findings are presented as reader's theatre, a dramatic presentation of these workers' narratives. The workers tell us that learning involves "being shown," and if this is not done properly they "learn the hard way." I found that many of their best-case learning activities involved on-the-job training, participative learning, incidental learning, and self-directed learning. Worst-case examples were typically lacking in properly designed and delivered participative learning activities and, to a lesser degree, lacking carefully planned and delivered on-the-job training activities. Included are two reflective chapters that describe two cases: Learning "Engels" (English), and Learning to Write.
In these chapters you will read about how I came to see that my own shop-floor learning, learning to write this thesis, could be enhanced through participative learning activities. I came to see my thesis supervisor not only as my instructor who directed and judged my learning activities, but also as a more experienced researcher who was there to participate in this process with me and to help me begin to enter the research community. Shop-floor learning involves learners and educators participating in multi-stranded learning activities, which require an organizational factor of careful planning and delivery. As learning activities can be multi-stranded, so too there can be multiple orientations to learning on the shop floor. In our stories, you will see that these six workers and I didn't exhibit just one orientation to learning. Our stories demonstrate that we could be behaviorist and cognitivist and humanist and social and constructivist in our orientations to learning. Our stories show that learning is complex and involves multiple strands, orientations, and factors. They show that learning narratives capture the essence of learning: the learners, the educators, the learning activities, the organizational factors, and the learning orientations. Learning narratives can help learners and educators make sense of shop-floor learning.
Abstract:
Forty grade 9 students were selected from a small rural board in southern Ontario. The students were in two classes and were treated as two groups. The treatment group received instruction in the Logical Numerical Problem Solving Strategy every day for 37 minutes over a 6-week period. The control group received instruction in problem solving without this strategy over the same time period. Then the control group received the treatment and the treatment group received the instruction without the strategy. Quite a large variance was found in the problem solving ability of students in grade 9. It was also found that the growth of students' problem solving achievement could be measured using growth strands based upon the results of the pilot study. The analysis of the results of the study using t-tests and a MANOVA demonstrated that the teaching of the strategy did not significantly (at p ≤ 0.05) increase the problem solving achievement of the students. However, there was an encouraging trend in the data.
Abstract:
The design of a large and reliable DNA codeword library is a key problem in DNA-based computing. DNA codes, namely sets of fixed-length edit-metric codewords over the alphabet {A, C, G, T}, satisfy certain combinatorial constraints arising from the biological and chemical restrictions of DNA strands. The primary constraints that we consider are the reverse-complement constraint and the fixed GC-content constraint, as well as the basic edit distance constraint between codewords. We focus on exploring the theory underlying DNA codes and discuss several approaches to searching for optimal DNA codes. We use Conway's lexicode algorithm and an exhaustive search algorithm to produce provably optimal DNA codes for small parameter values. A genetic algorithm is proposed to search for sub-optimal DNA codes with relatively large parameter values, whose sizes can be taken as reasonable lower bounds on the sizes of optimal codes. Furthermore, we provide tables of bounds on the sizes of DNA codes with lengths from 1 to 9 and minimum distances from 1 to 9.
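The three constraints can be made concrete with a toy greedy construction in the spirit of Conway's lexicode algorithm. This is an illustrative sketch, not the thesis's exact procedure; the precise form of the reverse-complement constraint varies in the literature, and here a candidate must be at edit distance at least d from every kept word, from every kept word's reverse complement, and from its own reverse complement.

```python
from itertools import product

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def edit_distance(s, t):
    """Levenshtein distance by dynamic programming, one row at a time."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                 # delete from s
                            curr[j - 1] + 1,             # insert into s
                            prev[j - 1] + (cs != ct)))   # substitute
        prev = curr
    return prev[-1]

def reverse_complement(word):
    return "".join(COMPLEMENT[c] for c in reversed(word))

def gc_content(word):
    return sum(c in "GC" for c in word)

def lexicode(length, min_dist, gc):
    """Greedily scan all words in lexicographic order, keeping those that
    satisfy the fixed GC-content, edit-distance, and reverse-complement
    constraints against every word kept so far."""
    code = []
    for word in ("".join(p) for p in product("ACGT", repeat=length)):
        if gc_content(word) != gc:
            continue
        if edit_distance(word, reverse_complement(word)) < min_dist:
            continue
        if all(edit_distance(word, w) >= min_dist and
               edit_distance(word, reverse_complement(w)) >= min_dist
               for w in code):
            code.append(word)
    return code
```

For instance, `lexicode(4, 3, 2)` returns a small code of length-4 words with GC-content 2 and pairwise edit distance at least 3; sizes produced by such greedy passes serve only as lower bounds on the optimum.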
Abstract:
Learning to write is a daunting task for many young children. The purpose of this study was to examine the impact of a combined approach to writing instruction and assessment on the writing performance of students in two grade 3 classes. Five forms and traits of writing were purposefully connected during writing lessons while exhibiting links to the four strands of the grade 3 Ontario science curriculum. Students then had opportunities to engage in the writing process and to self-assess their compositions using either student-developed (experimental group/teacher-researcher's class) or teacher-created (control group/teacher-participant's class) rubrics. Paired samples t-tests revealed that both the experimental and control groups exhibited statistically significant growth from pretest to posttest on all five integrated writing units. Independent samples t-tests showed that the experimental group outperformed the control group on the persuasive + sentence fluency and procedure + word choice writing tasks. Pearson product-moment correlation tests revealed significant correlations between the experimental group and the teacher-researcher on the recount + ideas and report + organization tasks, while students in the control group showed significant correlations with the teacher-researcher on the narrative + voice and procedure + word choice tasks. Significant correlations between the control group and the teacher-participant were evident on the persuasive + sentence fluency and procedure + word choice tasks. Qualitative analyses revealed five themes that highlighted how students' self-assessments and reflections can be used to guide teachers in their instructional decision making. These findings suggest that educators should adopt an integrated writing program in their classrooms, while working with students to create and utilize purposeful writing assessment tools.
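As a reminder of the statistic behind those correlation analyses, the Pearson product-moment coefficient can be computed directly from paired scores; the data below are invented for illustration, not drawn from the study.

```python
import math

def pearson_r(x, y):
    """r = sum((x - mx)(y - my)) / sqrt(sum((x - mx)^2) * sum((y - my)^2))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linear made-up scores give r = 1 (up to floating-point rounding).
print(pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))
```

Whether an r value counts as a "significant correlation," as in the abstract, then depends on the sample size and the chosen significance level.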
Abstract:
Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structures, both of which are useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only the factor risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates, and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals.
We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
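In standard notation (not necessarily the authors'), the unifying SDF principle and the beta pricing representation discussed above can be written as

```latex
E_t\!\left[m_{t+1}\,R_{i,t+1}\right] = 1,
\qquad
m_{t+1} = a(Z_t) + b(Z_t)^{\top} F_{t+1},
\qquad
E_t\!\left[R_{i,t+1}\right] - r_{f,t} = \beta_{i,t}^{\top}\,\lambda_t,
```

where $Z_t$ are the state variables, $F_{t+1}$ the factors spanning the SDF, $\beta_{i,t}$ the conditional factor loadings of asset $i$, and $\lambda_t$ the factor risk premia. The middle equation expresses the paper's specification that the SDF coefficients $a(\cdot)$ and $b(\cdot)$ are deterministic functions of the state variables.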
Abstract:
A classical argument of de Finetti holds that Rationality implies Subjective Expected Utility (SEU). In contrast, the Knightian distinction between Risk and Ambiguity suggests that a rational decision maker would obey the SEU paradigm when the information available is in some sense good, and would depart from it when the information available is not good. Unlike de Finetti's, however, this view does not rely on a formal argument. In this paper, we study the set of all information structures that might be available to a decision maker, and show that they are of two types: those compatible with SEU theory and those for which SEU theory must fail. We also show that the former correspond to "good" information, while the latter correspond to information that is not good. Thus, our results provide a formalization of the distinction between Risk and Ambiguity. As a consequence of our main theorem (Theorem 2, Section 8), behavior not conforming to SEU theory is bound to emerge in the presence of Ambiguity. We give two examples of situations of Ambiguity. One concerns the uncertainty over the class of measure-zero events; the other is a variation on Ellsberg's three-color urn experiment. We also briefly link our results to two other strands of literature: the study of ambiguous events and the problem of unforeseen contingencies. We conclude the paper by reconsidering de Finetti's argument in light of our findings.
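For concreteness, the inconsistency in the classic three-color urn (of which the paper studies a variation) can be stated in two lines. With 30 red balls and 60 balls that are black or yellow in unknown proportion, typical subjects prefer betting on red over black, yet on black-or-yellow over red-or-yellow. Under SEU with a single subjective probability $p$, these two preferences imply

```latex
p(R) > p(B)
\quad\text{and}\quad
p(B) + p(Y) > p(R) + p(Y)
\;\Longrightarrow\; p(B) > p(R),
```

a contradiction, so no subjective probability rationalizes both choices, which is exactly the kind of behavior the paper's theorem predicts under Ambiguity.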
Abstract:
Multiple sclerosis (MS) is an inflammatory demyelinating autoimmune disease of the central nervous system (CNS), in which inflammatory cells from the peripheral blood infiltrate the CNS and cause cellular damage. In these neuroinflammatory reactions, immune cells cross the vasculature of the CNS, the blood-brain barrier (BBB), to gain access to the CNS and accumulate there. The BBB is therefore the first entity that inflammatory blood cells encounter during their migration to the brain. This gives it considerable therapeutic potential for influencing the infiltration of blood cells into the brain, and thereby limiting neuroinflammatory reactions. Indeed, the interactions between immune cells and vessel walls are still poorly understood, as they are numerous and complex. Various mechanisms that can influence the permeability of the BBB to immune cells have been described, and today they represent potential targets for controlling neuro-immune reactions. The objective of this thesis is to describe new molecular mechanisms operating at the BBB that are involved in neuroinflammatory reactions and that have therapeutic potential for influencing neuro-immune interactions. This doctoral work is divided into three sections. The first section describes the characterization of the role of angiotensin II in the regulation of BBB permeability. The second section identifies and characterizes the function of a new adhesion molecule of the BBB, ALCAM, in the transmigration of inflammatory cells from the blood to the CNS. The third section deals with the secretory properties of the BBB and the role of the chemokine MCP-1 in the interactions between the BBB and stem cells. First, we demonstrate the importance of angiotensinogen (AGT) in the regulation of BBB permeability.
AGT is secreted by astrocytes and metabolized into angiotensin II, which acts on the endothelial cells (ECs) of the BBB through the angiotensin II receptors AT1 and AT2. At the BBB, angiotensin II induces the phosphorylation and enrichment of occludin within lipid rafts, a phenomenon associated with increased tightness of the BBB. Moreover, in MS lesions, the expression of AGT and occludin is reduced. This is consistent with our in vitro observations, which show that pro-inflammatory cytokines limit the secretion of AGT. This study elucidates a new mechanism by which astrocytes influence and increase the tightness of the BBB, and implicates a dysfunction of this mechanism in MS lesions, where inflammatory cells accumulate. Second, the techniques established in the first section were used to identify BBB proteins that accumulate in lipid rafts. Using a proteomics approach, we identified ALCAM (Activated Leukocyte Cell Adhesion Molecule) as a membrane protein expressed by the ECs of the BBB. ALCAM behaves like a typical adhesion molecule. Indeed, ALCAM mediates binding between blood cells and the vessel wall through homotypic (ALCAM-ALCAM, for monocytes) or heterotypic (ALCAM-CD6, for lymphocytes) interactions. Inflammatory cytokines increase the expression level of ALCAM by the BBB, which allows local recruitment of inflammatory cells. Finally, inhibition of the ALCAM-ALCAM and ALCAM-CD6 interactions limits the transmigration of inflammatory cells (monocytes and CD4+ T cells) across the BBB in vitro and in vivo in a mouse model of MS. This second part identifies ALCAM as a potential target for influencing the transmigration of inflammatory cells to the brain.
Third, we demonstrated the importance of the secretory properties specific to the BBB in its interactions with neural stem cells (NSCs). NSCs hold unique therapeutic potential for CNS diseases in which cell regeneration is limited, as in MS. Factors limiting the therapeutic use of NSCs include the route of administration and their maturation into neural or glial cells. Although the preferred route of administration for NSCs is intrathecal, intravenous injection is the easiest and least invasive route. In this context, it is important to understand the possible interactions between stem cells and the CNS vasculature, which is responsible for their recruitment into the brain parenchyma. In collaboration with researchers in Belgium specializing in NSCs, our work confirmed, in vitro, that human neural stem cells migrate across human BBB ECs before beginning their differentiation into CNS cells. Following migration across BBB cells, NSCs differentiate spontaneously into neurons, astrocytes, and oligodendrocytes. These effects are observed preferentially with BBB cells compared with non-cerebral ECs. These BBB-specific properties depend on the chemokine MCP-1/CCL2 secreted by these cells. This last part thus suggests that the BBB is not an obstacle to the migration of NSCs to the CNS. Moreover, the chemokine MCP-1 is identified as a BBB-secreted factor that allows the accumulation and preferential differentiation of neural stem cells in the subendothelial space.
These three studies demonstrate the importance of the BBB in the migration of inflammatory cells and NSCs to the CNS, and indicate that multiple molecular mechanisms contribute to the disruption of CNS homeostasis in neuro-immune reactions. Using in vitro, in situ, and in vivo models, we identified three new mechanisms through which the interactions between blood cells and the BBB can be influenced. The identification of these mechanisms not only allows a better understanding of the pathophysiology of neuroinflammatory reactions of the CNS and of the associated diseases, but also suggests potential therapeutic targets for influencing the infiltration of blood cells into the brain.
Abstract:
Amyloid proteins are found in the form of fibrils in many neurodegenerative diseases. In attempting to elucidate the fibrillation mechanism, researchers have discovered that this reaction proceeds by a nucleation phenomenon passing through oligomers. These species appear to be the main cause of the toxicity observed in the cells of patients suffering from amyloidosis. For this reason, particular interest is devoted to the first steps of oligomerization. In this thesis, we focus on a strongly hydrophobic amino acid sequence of α-synuclein called the non-amyloid β component (NAC). It is found in fibrillar form in the Lewy bodies and Lewy neurites of patients with Parkinson's disease. In addition, it is a minor component of the fibrils involved in Alzheimer's disease. We examined the structural changes that occur for the monomer, dimer, and trimer of the NAC sequence of α-synuclein. We also investigated the structural consequences observed in heterogeneous oligomers involving Aβ1-40. To this end, we used replica-exchange molecular dynamics coupled with the coarse-grained potential OPEP. We observe a disappearance of α-helices in favor of β-sheets, as well as the polymorphism characteristic of amyloid fibrils. Certain regions stood out for their ability to form β-sheets. The disappearance of these regions when NAC is combined with Aβ suggests the importance of the placement of hydrophobic residues in structures likely to form amyloid fibrils.