987 results for Software Defined Receiver
Abstract:
Landslides are one of the main natural hazards in mountainous regions. In Switzerland, landslides cause damage every year that affects infrastructure and incurs significant financial costs. An in-depth understanding of sliding mechanisms may help limit their impact. In particular, this can be achieved through better knowledge of the internal structure of the landslide, the determination of its volume, and of its sliding surface or surfaces. In a landslide, the disorganization and the presence of fractures in the displaced material change the physical parameters, in particular decreasing the seismic velocities and the density of the material.
Therefore, seismic methods are well adapted to the study of landslides. Among seismic methods, surface-wave dispersion analysis is easy to implement. Through it, shear-wave velocity variations with depth can be estimated without having to resort to an S-wave source and horizontal geophones. Its three-step implementation involves measuring surface-wave dispersion with long arrays, determining the dispersion curves and finally inverting these curves. Velocity models obtained through this approach are only valid when the investigated medium does not include lateral variations. In practice, this assumption is seldom correct, in particular for landslides in which reworked layers are likely to include strong lateral heterogeneities. To assess the possibility of determining dispersion curves from short arrays, we carried out test measurements on a site (Arnex, VD) that includes a borehole. A 190 m long seismic profile was acquired in a valley carved into limestone and filled with 30 m of glacio-lacustrine sediments. The data acquired along this profile confirmed that the presence of lateral variations under the geophone array influences the dispersion-curve shape, sometimes so much that it prevents the determination of the dispersion curves. Our approach to applying surface-wave dispersion analysis on sites that include lateral variations consists in obtaining dispersion curves for a series of short arrays, inverting each curve, and interpolating the resulting velocity models. The choice of the location as well as of the geophone array length is important. It takes into account the location of the heterogeneities revealed by the interpretation of the seismic refraction data, but also the location of signal-amplitude anomalies observed on maps that represent, for a given frequency, the measured amplitude in the shot position - receiver position domain. The procedure proposed by Lin and Lin (2007) turned out to be an efficient way to determine dispersion curves using short arrays. It consists in building, from an array of geophones, a time-offset gather covering a wide offset range by assembling seismograms acquired with different source-to-receiver offsets. When assembling the different data, a phase correction is applied in order to reduce the static phase error induced by lateral variations. To evaluate this correction, we suggest calculating, for two successive shots, the cross power spectral density of common-offset traces. On the Arnex site, 22 dispersion curves were determined with 10 m long geophone arrays. We also took advantage of the borehole to acquire an S-wave vertical seismic profile. The S-wave velocity-depth model derived from the interpretation of the vertical seismic profile is used as prior information in the inversion of the dispersion curves. Finally, a 2D velocity model was established from the analysis of the different dispersion curves. It reveals a three-layer structure in good agreement with the lithologies observed in the borehole. In it, a clay layer with a shear-wave velocity of about 175 m/s overlies, at a depth of 9 m, clayey-sandy till deposits characterized by an S-wave velocity of about 300 m/s down to 14 m and of 400 m/s or more between 14 and 20 m depth. The La Grande Combe landslide (Ballaigues, VD) occurs inside the Quaternary filling of a valley carved into Portlandian limestone. As at the Arnex site, the Quaternary deposits correspond to glacio-lacustrine sediments.
In the upper part of the landslide, the sliding surface is located at a depth of about 20 m, coinciding with the discontinuity between Jurassian till and glacio-lacustrine deposits. At the toe of the landslide, we determined 14 dispersion curves along a 144 m long profile using 10 m long geophone arrays. The obtained curves are discontinuous and defined within a frequency range of 7 to 35 Hz. The use of a wide range of offsets (from 8 to 72 m) enabled us to identify 2 to 4 propagation modes for each dispersion curve. Taking these higher modes into account in the dispersion-curve inversion allowed us to reach an investigation depth of about 20 m. A four-layer 2D model was derived (Vs1 < 175 m/s, 175 m/s < Vs2 < 225 m/s, 225 m/s < Vs3 < 400 m/s, Vs4 > 400 m/s) with variable layer thicknesses. S-wave seismic reflection profiles acquired with a source built as part of this work complete and corroborate the velocity model revealed by surface-wave analysis. In particular, a reflector at a depth of 5 to 10 m, associated with a 180 m/s stacking velocity, images the geometry of the discontinuity between the second and third layers of the model derived from the surface-wave dispersion analysis.
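The phase correction mentioned in this abstract is evaluated from the cross power spectral density of common-offset traces for two successive shots. The following is a purely illustrative sketch of that computation using scipy.signal.csd; the trace names, sampling rate and synthetic wavelet are assumptions, not the thesis data.

```python
# Hedged sketch: estimating a per-frequency phase difference between two traces
# recorded at the same source-receiver offset for two successive shot positions,
# via the cross power spectral density. All names and values are illustrative.
import numpy as np
from scipy.signal import csd

def phase_correction(trace_shot1, trace_shot2, fs=500.0, nperseg=256):
    """Return frequencies and the phase (radians) of the cross power spectral
    density between two common-offset traces."""
    f, Pxy = csd(trace_shot1, trace_shot2, fs=fs, nperseg=nperseg)
    return f, np.angle(Pxy)

# Synthetic check: a 20 Hz wavelet delayed by 4 ms on the second shot.
fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
wavelet = lambda t0: np.exp(-((t - t0) * 40.0) ** 2) * np.cos(2 * np.pi * 20.0 * (t - t0))
f, dphi = phase_correction(wavelet(0.5), wavelet(0.504), fs=fs)
print(dphi[np.argmin(np.abs(f - 20.0))])  # magnitude near 2*pi*20*0.004 ~ 0.5 rad
```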
Abstract:
The human Me14-D12 antigen is a cell surface glycoprotein regulated by interferon-gamma (IFN-gamma) on tumor cell lines of neuroectodermal origin. It consists of two non-covalently linked subunits with apparent molecular weights of 33,000 and 38,000. Here we describe the molecular cloning of a genomic probe for the Me14-D12 gene using the gene transfer approach. Mouse Ltk- cells were stably cotransfected with human genomic DNA and the Herpes Simplex virus thymidine kinase (TK) gene. Primary and secondary transfectants expressing the Me14-D12 antigen were isolated after selection in HAT medium by repeated sorting on a fluorescence-activated cell sorter (FACS). A recombinant phage harboring a 14.3 kb insert of human DNA was isolated from a genomic library made from a positive secondary transfectant cell line. A specific probe derived from the phage DNA insert allowed the identification of two mRNAs of 3.5 kb and 2.2 kb in primary and secondary L cell transfectants, as well as in human melanoma cell lines expressing the Me14-D12 antigen. The regulation of the Me14-D12 antigen by IFN-gamma was retained in the L cell transfectants and could be detected at the level of both protein and mRNA expression.
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot, one set of points, usually the rows of the data matrix, optimally represents the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
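As a rough illustration of the scaling choices discussed above, the following is a minimal, generic SVD biplot sketch (rows in principal coordinates, columns in standard coordinates). It is not the paper's exact standard-biplot definition, and the data and variable names are invented.

```python
# Hedged sketch: a generic SVD-based biplot of a centred, optionally standardized
# data matrix. With this scaling, rows @ cols.T approximates the processed data
# in two dimensions. This is an illustration, not the proposed "standard biplot".
import numpy as np
import matplotlib.pyplot as plt

def svd_biplot_coords(X, standardize=True):
    """Return 2-D row and column coordinates from the SVD of the centred matrix."""
    Z = X - X.mean(axis=0)
    if standardize:
        Z = Z / Z.std(axis=0, ddof=1)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    rows = U[:, :2] * s[:2]   # rows in principal coordinates
    cols = Vt.T[:, :2]        # columns in standard (unit-scale) coordinates
    return rows, cols

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                 # invented example data
rows, cols = svd_biplot_coords(X)
plt.scatter(rows[:, 0], rows[:, 1], s=10)
for j, (x, y) in enumerate(cols):
    plt.arrow(0, 0, x, y, head_width=0.05)
    plt.text(x, y, f"var{j}")
plt.axis("equal"); plt.show()
```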
Abstract:
Barrels are discrete cytoarchitectonic clusters of neurons located in layer IV of the somatosensory cortex of the mouse brain. Each barrel is related to a specific whisker on the mouse snout. The whisker-to-barrel pathway is a part of the somatosensory system that is intensively used to explore sensory-activation-induced plasticity in the cerebral cortex. Different recording methods exist to explore the cortical response induced by whisker deflection in the cortex of anesthetized mice. In this work, we used a method called Single-Unit Analysis, by which we recorded the extracellular electric signals of a single barrel neuron using a microelectrode. After recording, the signal was processed by discriminators to isolate specific neuronal waveforms (action potentials). The objective of this thesis was to become familiar with barrel cortex recording during whisker deflection and its theoretical background, and to compare two different ways of discriminating and sorting the cortical signal: the Waveform Window Discriminator (WWD) and the Spike Shape Discriminator (SSD). The WWD is an electronic module allowing the selection of a specific electric signal shape. A trigger level and a window potential level are set manually. During measurements, every time the electric signal passes through the two levels, a dot is generated on the time line. This was the method used in previous extracellular recording studies at the Département de Biologie Cellulaire et de Morphologie (DBCM) in Lausanne. The SSD is a function provided by the signal analysis software Spike2 (Cambridge Electronic Design). The neuronal signal is discriminated by a complex algorithm allowing the creation of specific templates. Each of these templates is supposed to correspond to a cell response profile. The templates are saved as a number of points (62 in this study) and are set for each new cortical location. During measurements, every time the recorded cortical signal matches a defined proportion of the template points (60% in this study), a dot is generated on the time line. The advantage of the SSD is that multiple templates can be used during a single stimulation, allowing simultaneous recording of multiple signals. There are different ways to represent data after discrimination and sorting. The most commonly used in Single-Unit Analysis of the barrel cortex are the time between stimulation and the first cell response (the latency), the Response Magnitude (RM) after whisker deflection corrected for spontaneous activity, and the time distribution of neuronal spikes after whisker stimulation (Peri-Stimulus Time Histogram, PSTH). The results show that the RMs and the latencies in layer IV were significantly different between the WWD- and SSD-discriminated signals. The temporal distribution of the latencies shows that the SSD values were spread between 6 and 60 ms with no peak value, whereas the WWD data were gathered around a peak of 11 ms (consistent with previous studies). The scattered distribution of the latencies recorded with the SSD did not correspond to a cell response. The SSD appears to be a powerful tool for signal sorting, but we did not succeed in using it for Single-Unit Analysis extracellular recordings. Further recordings with different SSD template settings and larger sample sizes may help to show the utility of this tool in Single-Unit Analysis studies.
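To make the discrimination and display steps concrete, here is a minimal sketch of a window discriminator of the kind described for the WWD (a trigger level plus a second window level checked a fixed delay later) together with a simple PSTH. The sampling rate, thresholds and array names are illustrative assumptions; this is neither the DBCM hardware nor Spike2's SSD algorithm.

```python
# Hedged sketch of a trigger + window discriminator and a peri-stimulus time
# histogram. All parameter values and signal names are illustrative assumptions.
import numpy as np

def window_discriminate(signal, fs, trigger, window_low, window_high, window_delay_ms=0.3):
    """Return spike times (s): samples that cross `trigger` upward and whose
    value `window_delay_ms` later falls inside [window_low, window_high]."""
    delay = int(round(window_delay_ms * 1e-3 * fs))
    crossings = np.flatnonzero((signal[1:] >= trigger) & (signal[:-1] < trigger)) + 1
    crossings = crossings[crossings + delay < signal.size]
    later = signal[crossings + delay]
    accepted = crossings[(later >= window_low) & (later <= window_high)]
    return accepted / fs

def psth(spike_times, stim_times, bin_ms=1.0, window_ms=50.0):
    """Peri-stimulus time histogram: spike counts relative to each stimulus onset."""
    edges = np.arange(0.0, window_ms + bin_ms, bin_ms) * 1e-3
    rel = np.concatenate([spike_times - t0 for t0 in stim_times]) if len(stim_times) else np.array([])
    rel = rel[(rel >= 0) & (rel <= window_ms * 1e-3)]
    counts, _ = np.histogram(rel, bins=edges)
    return edges[:-1] * 1e3, counts  # bin starts in ms, counts per bin
```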
Abstract:
Objective: Although 24-hour arterial blood pressure can be monitored in a free-moving animal using pressure telemetric transmitters, mostly from Data Science International (DSI), accurate monitoring of 24-hour mouse left ventricular pressure (LVP) is not available because of their insufficient frequency response to high-frequency signals such as the maximum derivative of mouse LVP (LVdP/dtmax and LVdP/dtmin). The aim of the study was to develop a tiny implantable flow-through LVP telemetric transmitter for small rodents, which could potentially be adapted for accurate 24-hour BP and LVP monitoring in humans. Design and Method: The mouse LVP telemetric transmitter (diameter ~12 mm, ~0.4 g) was assembled from a pressure sensor, a passive RF telemetry chip and a 1.2F polyurethane (PU) catheter tip. The device was developed in two configurations and compared with the existing DSI system: (a) prototype-I, a new flow-through pressure sensor with a wire link, and (b) prototype-II, prototype-I plus a telemetry chip and its receiver. All the devices were applied in C57BL/6J mice. Data are mean ± SEM. Results: A high-frequency-response (>100 Hz) heparinized-saline-filled PU catheter was inserted into the mouse left ventricle via the right carotid artery and implanted; LV systolic pressure (LVSP), LVdP/dtmax and LVdP/dtmin were recorded on days 2, 3, 4, 5 and 7 in conscious mice. The hemodynamic values were consistent and comparable (139 ± 4 mmHg, 16634 ± 319 and -12283 ± 184 mmHg/s, n = 5) to those recorded by a validated Pebax03 catheter (138 ± 2 mmHg, 16045 ± 443 and -12112 ± 357 mmHg/s, n = 9). Similar LV hemodynamic values were obtained with prototype-I. The same LVP waveforms were synchronously recorded by Notocord wired and Senimed wireless software through prototype-II in anesthetized mice. Conclusion: An implantable flow-through LVP transmitter (prototype-I) was generated for accurate LVP assessment in conscious mice. Prototype-II needs further improvement in data-transmission bandwidth and signal coupling distance to its receiver for accurate monitoring of LVP in a free-moving mouse.
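Since the limiting quantity discussed above is the maximum derivative of the LVP signal, the following minimal sketch shows how LVdP/dtmax and LVdP/dtmin can be computed from a sampled pressure trace; the synthetic waveform, sampling rate and heart-rate value are assumptions for illustration only.

```python
# Hedged sketch: computing LVdP/dtmax and LVdP/dtmin from a sampled left-ventricular
# pressure trace. The waveform below is synthetic, not recorded data.
import numpy as np

def dpdt_extrema(lvp_mmHg, fs_hz):
    """Return (LVdP/dtmax, LVdP/dtmin) in mmHg/s from a pressure trace."""
    dpdt = np.gradient(lvp_mmHg, 1.0 / fs_hz)
    return dpdt.max(), dpdt.min()

fs = 2000.0                          # Hz; must resolve the fast dP/dt transients
t = np.arange(0, 1.0, 1.0 / fs)
heart_rate_hz = 10.0                 # ~600 bpm, a plausible rate for a mouse
lvp = 5 + 65 * np.clip(np.sin(2 * np.pi * heart_rate_hz * t), 0, None) ** 3
print(dpdt_extrema(lvp, fs))
```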
Abstract:
Background: The analysis and usage of biological data is hindered by the spread of information across multiple repositories and the difficulties posed by different nomenclature systems and storage formats. In particular, there is an important need for data unification in the study and use of protein-protein interactions. Without good integration strategies, it is difficult to analyze the whole set of available data and its properties. Results: We introduce BIANA (Biologic Interactions and Network Analysis), a tool for biological information integration and network management. BIANA is a Python framework designed to achieve two major goals: i) the integration of multiple sources of biological information, including biological entities and their relationships, and ii) the management of biological information as a network where entities are nodes and relationships are edges. Moreover, BIANA uses properties of proteins and genes to infer latent biomolecular relationships by transferring edges to entities sharing similar properties. BIANA is also provided as a plugin for Cytoscape, which allows users to visualize and interactively manage the data. A web interface to BIANA providing basic functionalities is also available. The software can be downloaded under the GNU GPL license from http://sbi.imim.es/web/BIANA.php. Conclusions: BIANA's approach to data unification solves many of the nomenclature issues common to systems dealing with biological data. BIANA can easily be extended to handle new specific data repositories and new specific data types. The unification protocol allows BIANA to be a flexible tool suitable for different user requirements: non-expert users can use a suggested unification protocol while expert users can define their own specific unification rules.
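The edge-transfer idea described above (inferring relationships by copying edges to entities that share a property) can be illustrated with a small networkx sketch. This is not BIANA's API, and all identifiers, properties and edges below are invented for illustration.

```python
# Minimal sketch (not BIANA's API) of edge transfer: copy interaction edges to
# entities that share a chosen property, e.g. the same gene symbol.
import networkx as nx

g = nx.Graph()
g.add_node("uniprot:P12345", gene="ABC1")
g.add_node("uniprot:Q99999", gene="ABC1")   # same gene, different database entry
g.add_node("uniprot:P54321", gene="XYZ2")
g.add_edge("uniprot:P12345", "uniprot:P54321", source="experimental")

def transfer_edges(graph, prop="gene"):
    """Add inferred edges from a node's neighbours to every other node sharing
    the same value of `prop`."""
    by_prop = {}
    for n, data in graph.nodes(data=True):
        by_prop.setdefault(data.get(prop), set()).add(n)
    for n in list(graph.nodes):
        for twin in by_prop.get(graph.nodes[n].get(prop), set()) - {n}:
            for neighbour in list(graph.neighbors(n)):
                if twin != neighbour and not graph.has_edge(twin, neighbour):
                    graph.add_edge(twin, neighbour, source="transferred")

transfer_edges(g)
print(list(g.edges(data=True)))  # the P54321 edge is now shared by Q99999
```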
Abstract:
The work presented here has as its theme "The Teaching of Physical and Motor Expression and Education in the Basic Education Schools of São Lourenço dos Órgãos - Post-Reform". Its time frame is the 2006/07 school year and it covers the basic education schools of the newly created Municipality of São Lourenço dos Órgãos. The teaching of Physical and Motor Expression and Education (Expressão e Educação Físico-Motora, E.E.F.M.) is of great importance and, in recent years, much attention has been devoted to this area, both worldwide and at the national level. Today, many scholars are concerned with the teaching of this area and consequently seek to promote it so that it is regarded on a par with the other so-called academic areas, since everyone is aware of its importance and relevance within the school system. In view of the above, and hoping to contribute to education in our country in general, and in the Municipality of São Lourenço dos Órgãos in particular, we set out to understand why many teachers do not teach, or only rarely teach, the area of Physical and Motor Expression and Education.
Abstract:
MHC class II-peptide multimers are important tools for the detection, enumeration and isolation of antigen-specific CD4+ T cells. However, their erratic and often poor performance has impeded their broad application and thus in-depth analysis of key aspects of antigen-specific CD4+ T cell responses. In the first part of this thesis we demonstrate that a major cause of poor MHC class II tetramer staining performance is incomplete peptide loading on MHC molecules. We observed that peptide binding affinity for "empty" MHC class II molecules correlates poorly with peptide loading efficacy. Addition of a His-tag or desthiobiotin (DTB) at the peptide N-terminus allowed us to isolate "immunopure" MHC class II-peptide monomers by affinity chromatography; this significantly, often dramatically, improved tetramer staining of antigen-specific CD4+ T cells. Insertion of a photosensitive amino acid between the tag and the peptide permitted removal of the tag from the "immunopure" MHC class II-peptide complex by UV irradiation, and hence elimination of its potential interference with TCR and/or MHC binding. Moreover, to improve loading of self and tumor antigen-derived peptides onto "empty" MHC class II molecules, we first loaded these with a photocleavable variant of the influenza A hemagglutinin peptide HA306-318 and subsequently exchanged it with a poorly loading peptide (e.g. NY-ESO-1119-143) upon photolysis of the conditional ligand. Finally, we established a novel type of MHC class II multimer built on reversible chelate formation between 2xHis-tagged MHC molecules and a fluorescent nitrilotriacetic acid (NTA)-containing scaffold. Staining of antigen-specific CD4+ T cells with these "NTAmers" is fully reversible and allows gentle cell sorting. In the second part of the thesis we investigated the role of the CD8α transmembrane domain (TMD) in CD8 coreceptor function. The sequence of the CD8α TMD, but not that of the CD8β TMD, is highly conserved and homodimerizes efficiently. We replaced the CD8α TMD with that of the interleukin-2 receptor α chain (CD8αTac), thus ablating CD8α TMD interactions. We observed that T1 T cell hybridomas expressing CD8αTacβ exhibited severely impaired intracellular calcium flux, IL-2 responses and Kd/PbCS(ABA) P255A tetramer binding. By means of fluorescence resonance energy transfer (FRET) experiments we established that CD8αTacβ associated with TCR:CD3 considerably less efficiently than CD8αβ, both in the presence and the absence of Kd/PbCS(ABA) complexes. Moreover, we observed that CD8αTacβ partitioned substantially less into lipid rafts and, related to this, associated less efficiently with p56Lck (Lck), a Src kinase that plays key roles in TCR-proximal signaling. Our results support the view that the CD8α TMD promotes the formation of CD8αβ-CD8αβ dimers on cell surfaces. Because these contain two CD8β chains, and because CD8β, unlike CD8α, mediates association of CD8 with TCR:CD3 as well as with lipid rafts and hence with Lck, we propose that the CD8α TMD plays an important and hitherto unrecognized role in CD8 coreceptor function, namely by promoting CD8αβ dimer formation. We discuss what implications this might have for TCR oligomerization and TCR signaling.
Abstract:
A 'next' operator, s, is built on the set R1 = (0,1] \ {1 - 1/e}, defining a partial order that, with the help of the axiom of choice, can be extended to a total order on R1. Moreover, the orbits {s^n(a)}_n are all dense in R1 and consist of elements of the same arithmetical character: if a is an algebraic irrational of degree k, all the elements in a's orbit are algebraic of degree k; if a is transcendental, all are transcendental. Furthermore, the asymptotic distribution function of the sequence formed by the elements in any of the half-orbits is a continuous, strictly increasing, singular function very similar to the well-known Minkowski ?(x) function.
Abstract:
Current methods for constructing house price indices are based on comparisons of sale prices of residential properties sold two or more times and on regression of the sale prices on the attributes of the properties and of their locations. The two methods have well-recognised deficiencies, selection bias and model assumptions, respectively. We introduce a new method based on propensity score matching. The average house prices for two periods are compared by selecting pairs of properties, one sold in each period, that are as similar on a set of available attributes (covariates) as is feasible to arrange. The uncertainty associated with such matching is addressed by multiple imputation, framing the problem as one involving missing values. The method is applied to a register of transactions of residential properties in New Zealand and compared with the established alternatives.
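A minimal sketch of the matching step described above, assuming a logistic-regression propensity score and nearest-neighbour matching of period-1 sales to period-0 sales; the column names and simulated data are illustrative, and the paper's multiple-imputation treatment of matching uncertainty is not reproduced.

```python
# Hedged sketch of propensity-score matching between two sale periods: estimate
# the probability of a sale belonging to period 1 from property attributes,
# match each period-1 sale to the nearest period-0 sale on that score, and
# compare mean log prices. Names and data are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def matched_price_index(df, covariates, period_col="period", price_col="log_price"):
    X, period = df[covariates].to_numpy(), df[period_col].to_numpy()
    score = LogisticRegression(max_iter=1000).fit(X, period).predict_proba(X)[:, 1]
    s0, s1 = score[period == 0], score[period == 1]
    p0, p1 = df.loc[period == 0, price_col].to_numpy(), df.loc[period == 1, price_col].to_numpy()
    nearest = np.abs(s1[:, None] - s0[None, :]).argmin(axis=1)   # match with replacement
    return (p1 - p0[nearest]).mean()   # average log-price change between periods

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "floor_area": rng.normal(120, 30, n),
    "age": rng.integers(0, 80, n),
    "period": rng.integers(0, 2, n),
})
df["log_price"] = 11 + 0.005 * df.floor_area - 0.002 * df.age + 0.04 * df.period + rng.normal(0, 0.1, n)
print(matched_price_index(df, ["floor_area", "age"]))   # should be near 0.04
```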
Abstract:
Aim: This study used data from temperate forest communities to assess: (1) five different stepwise selection methods with generalized additive models, (2) the effect of weighting absences to ensure a prevalence of 0.5, (3) the effect of limiting absences beyond the environmental envelope defined by presences, (4) four different methods for incorporating spatial autocorrelation, and (5) the effect of integrating an interaction factor defined by a regression tree on the residuals of an initial environmental model. Location: State of Vaud, western Switzerland. Methods: Generalized additive models (GAMs) were fitted using the grasp package (generalized regression analysis and spatial predictions, http://www.cscf.ch/grasp). Results: Model selection based on cross-validation appeared to be the best compromise between model stability and performance (parsimony) among the five methods tested. Weighting absences returned models that perform better than models fitted with the original sample prevalence. This appeared to be mainly due to the impact of very low prevalence values on evaluation statistics. Removing zeroes beyond the range of presences on main environmental gradients changed the set of selected predictors, and potentially their response curve shape. Moreover, removing zeroes slightly improved model performance and stability when compared with the baseline model on the same data set. Incorporating a spatial trend predictor improved model performance and stability significantly. Even better models were obtained when including local spatial autocorrelation. A novel approach to include interactions proved to be an efficient way to account for interactions between all predictors at once. Main conclusions: Models and spatial predictions of 18 forest communities were significantly improved by using either: (1) cross-validation as a model selection method, (2) weighted absences, (3) limited absences, (4) predictors accounting for spatial autocorrelation, or (5) a factor variable accounting for interactions between all predictors. The final choice of model strategy should depend on the nature of the available data and the specific study aims. Statistical evaluation is useful in searching for the best modelling practice. However, one should not neglect to consider the shapes and interpretability of response curves, as well as the resulting spatial predictions, in the final assessment.
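The absence-weighting idea in point (2) can be written down in a few lines: choose a weight for absences so that the weighted prevalence of presences equals 0.5. The sketch below is not the grasp package's implementation, and the example data are invented.

```python
# Hedged sketch: per-observation weights giving presences weight 1 and absences
# a weight chosen so that the weighted prevalence equals a target value (0.5).
import numpy as np

def prevalence_weights(y, target=0.5):
    """Weights for a 0/1 presence-absence vector achieving the target weighted prevalence."""
    y = np.asarray(y)
    n_pres, n_abs = (y == 1).sum(), (y == 0).sum()
    w_abs = n_pres * (1 - target) / (target * n_abs)  # solves weighted prevalence = target
    return np.where(y == 1, 1.0, w_abs)

y = np.array([1] * 30 + [0] * 470)       # raw prevalence 0.06
w = prevalence_weights(y)
print(w[y == 1].sum() / w.sum())         # 0.5
```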
Abstract:
MHC-peptide tetramers have become essential tools for T-cell analysis, but few MHC class II tetramers incorporating peptides from human tumor and self-antigens have been developed. Among the limiting factors are the high polymorphism of class II molecules and the low binding capacity of the peptides. Here, we report the generation of molecularly defined tetramers using His-tagged peptides and isolation of folded MHC/peptide monomers by affinity purification. Using this strategy we generated tetramers of DR52b (DRB3*0202), an allele expressed by approximately half of Caucasians, incorporating an epitope from the tumor antigen NY-ESO-1. Molecularly defined tetramers avidly and stably bound to specific CD4(+) T cells with negligible background on nonspecific cells. Using molecularly defined DR52b/NY-ESO-1 tetramers, we could demonstrate that in DR52b(+) cancer patients immunized with a recombinant NY-ESO-1 vaccine, vaccine-induced tetramer-positive cells represent ex vivo, on average, 1 in 5,000 circulating CD4(+) T cells, include central and transitional memory polyfunctional populations, and do not include CD4(+)CD25(+)CD127(-) regulatory T cells. This approach may significantly accelerate the development of reliable MHC class II tetramers to monitor immune responses to tumor and self-antigens.
Abstract:
This paper extends the theory of network competition between telecommunications operators by allowing receivers to derive a surplus from receiving calls (call externality) and to affect the volume of communications by hanging up (receiver sovereignty). We investigate the extent to which receiver charges can lead to an internalization of the calling externality. When the receiver charge and the termination (access) charge are both regulated, there exists an efficient equilibrium. Efficiency requires a termination discount. When reception charges are market determined, it is optimal for each operator to set the prices for emission and reception at their off-net costs. For an appropriately chosen termination charge, the symmetric equilibrium is again efficient. Lastly, we show that network-based price discrimination creates strong incentives for connectivity breakdowns, even between equal networks.