15 results for supply chains and system supplier

at Université de Lausanne, Switzerland


Relevance: 100.00%

Abstract:

Purpose - The purpose of this paper is to document the outcome of a global, three-year-long supply chain improvement initiative at a multi-national producer of branded sporting goods that is transforming from a holding structure into an integrated company. The case company comprises seven internationally well-known sport brands, which form a diverse set of independent sub-cases to which the same supply chain metrics and change project approach were applied to improve supply chain performance. Design/methodology/approach - Using an in-depth case study and statistical analysis, the paper analyzes across the brands how supply chain complexity (SKU count), supply chain type (make or buy) and seasonality affect completeness and punctuality of deliveries, and inventory, as the change project progresses. Findings - Results show that reducing supply chain complexity improves delivery performance but has no impact on inventory. Supply chain type has no impact on service level, but brands with in-house production are better at improving inventory than those with outsourced production. Non-seasonal business units improve service faster than seasonal ones, yet there is no impact on inventory. Research limitations/implications - The longitudinal data used for the analysis are biased by the general business trend, yet the rich data from different cases and three years of data collection enable generalization to a certain level. Practical implications - The in-depth case study serves as an example for other companies of how to initiate a supply chain improvement project across business units with tangible results. Originality/value - The seven sub-cases, with their different characteristics, to which the same improvement initiative was applied set a unique ground for longitudinal analysis of supply chain complexity, type and seasonality.

Relevance: 100.00%

Abstract:

The competitiveness of businesses is increasingly dependent on their electronic networks with customers, suppliers, and partners. While the strategic and operational impact of external integration and IOS adoption has been studied extensively, much less attention has been paid to the organizational and technical design of electronic relationships. The objective of our longitudinal research project is the development of a framework for understanding and explaining B2B integration. Drawing on the existing literature and empirical cases, we present a reference model (a classification scheme for B2B integration). The reference model comprises technical, organizational, and institutional levels to reflect the multiple facets of B2B integration. In this paper we investigate the current state of electronic collaboration in global supply chains, focusing on the technical view. Using an in-depth case analysis we identify five integration scenarios. In the subsequent confirmatory phase of the research we analyse 112 real-world company cases to validate these five integration scenarios. Our research advances and deepens existing studies by developing a B2B reference model that reflects the current state of practice and is independent of specific implementation technologies. In the next stage of the research the emerging reference model will be extended to create an assessment model for analysing the maturity level of a given company in a specific supply chain.

Relevance: 100.00%

Abstract:

Biofuels are considered a promising substitute for fossil fuels in view of their potential to reduce greenhouse gas emissions. However, limiting the assessment of their impacts to potential climate change benefits is shortsighted; global sustainability assessments are necessary to determine the sustainability of supply chains. We propose a new criteria-based framework enabling a comprehensive international comparison of bioethanol supply chains. Its distinctive feature is that the selection of sustainability indicators is qualified against three criteria: relevance, reliability and adaptability to the local context. Sustainability issues are handled along environmental, social and economic dimensions. The framework has been applied to a specific question: from a Swiss perspective, is bioethanol produced locally in Switzerland more sustainable than bioethanol imported from Brazil? Because the framework integrates the local context into its indicator definitions, Brazilian bioethanol production is shown to be energy efficient and economically interesting for Brazil. From a strictly economic point of view, bioethanol production within Switzerland is not justified for Swiss consumption, and it is questionable on environmental grounds. The social dimension is delicate to assess owing to the lack of reliable data and is strongly linked to agricultural policy in both countries. Minimum sustainability criteria for imported bioethanol need to be established to avoid unwanted negative or leakage effects.
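The indicator-qualification step can be illustrated with a minimal sketch: each candidate indicator is scored on the three criteria named above and retained only if it passes all of them. The indicator names, the 1-3 scoring scale and the threshold are assumptions for illustration, not the study's actual selection:

```python
# Candidate sustainability indicators scored (1 = weak, 3 = strong) on
# the three qualification criteria from the framework: relevance,
# reliability, and adaptability to the local context. Names and scores
# are hypothetical.
candidates = {
    "net energy ratio":       {"relevance": 3, "reliability": 3, "adaptability": 2},
    "production cost per MJ": {"relevance": 3, "reliability": 2, "adaptability": 3},
    "farm labour conditions": {"relevance": 3, "reliability": 1, "adaptability": 2},
}

THRESHOLD = 2  # an indicator must reach this score on every criterion

selected = [
    name for name, scores in candidates.items()
    if all(s >= THRESHOLD for s in scores.values())
]
print(selected)
```

Here "farm labour conditions" is dropped for low reliability, echoing the abstract's remark that the social dimension is hard to assess for lack of reliable data.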

Relevance: 100.00%

Abstract:

When one speaks of lactic acid (also known as lactate), one of the first things that comes to mind is its involvement in intense muscular activity. Its production during prolonged physical exercise is associated with the sensation of fatigue. It is therefore not surprising that this molecule was long considered a metabolic waste product, possibly toxic and thus to be eliminated. In fact, lactate has been found to play a prominent role in metabolism thanks to its high energy potential. The brain, and in particular its neurons, is a highly energy-demanding organ. Recently, it has been shown that astrocytes, brain cells belonging to the glial family, use glucose to produce lactate as an energy source and distribute it to neurons in a manner matched to their activity. This discovery renewed scientific interest in lactate, and several studies have since demonstrated its involvement in other functions of cerebral physiology. In our study, we examined the relationship between neurons and astrocytes, with particular attention to the role of lactate. We discovered that lactate can modify communication between neurons, and we deciphered the mechanism through which it acts, which is based on a receptor present at the neuronal surface. This study reveals a hitherto unsuspected function of lactate, with a strong impact on our understanding of the neuron-astrocyte relationship. - Relative to its volume, the brain uses a large amount of glucose as an energy source, and a tight link exists between the level of synaptic activity and the consumption of energy equivalents. Astrocytes have been shown to play a central role in the regulation of this so-called neurometabolic coupling: they are thought to deliver the metabolic substrate lactate to neurons in register with glutamatergic activity. The astrocytic uptake of glutamate released in the synaptic cleft is the trigger signal that activates an intracellular cascade of events leading to the production and release of lactate from astrocytes. The main goal of this thesis work was to obtain detailed information on the metabolic and functional interplay between neurons and astrocytes, in particular on the influence of lactate beyond its metabolic effects. To gain access to both spatial and temporal aspects of these dynamic interactions, we used optical microscopy associated with specific fluorescent indicators, as well as electrophysiology. In the first part of this thesis, we show that lactate decreases spontaneous neuronal activity in a concentration-dependent manner and independently of its metabolism. We further identified a receptor-mediated pathway underlying this modulatory action of lactate, which constitutes a novel mechanism for the modulation of neuronal transmission. In the second part, we characterized a new pharmacological tool, a high-affinity glutamate transporter inhibitor. The aim of this study was to investigate the detailed pharmacological properties of the compound in order to optimize its use as a suppressor of the glutamate signal from neurons to astrocytes. In conclusion, both studies have implications not only for the understanding of the metabolic cooperation between neurons and astrocytes, but also in the context of the glial modulation of neuronal activity.

Relevance: 100.00%

Abstract:

The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundreds of kilometers depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. 
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden when the parameter space is high-dimensional, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.

Relevance: 100.00%

Abstract:

RPE65 is a retinoid isomerase required for the production of 11-cis-retinal, the chromophore of both cone and rod visual pigments. We recently established an R91W knock-in mouse strain as a homologous animal model for patients afflicted by this mutation in RPE65. These mice have impaired vision and can only synthesize minute amounts of 11-cis-retinal. Here, we investigated the consequences of this chromophore insufficiency for cone function and pathophysiology. We found that the R91W mutation caused cone opsin mislocalization and progressive geographic cone atrophy. Remnant visual function was mostly mediated by rods. Ablation of rod opsin corrected the localization of cone opsin and improved cone retinal function. Thus, our analyses indicate that under conditions of limited chromophore supply, rods and cones compete for 11-cis-retinal derived from regeneration pathway(s) reliant on RPE65. Due to their higher number and the instability of cone opsin, rods are privileged under this condition, while cones suffer chromophore deficiency and degenerate. These findings reinforce the notion that any effective RPE65 gene therapy in patients needs to target the cone-rich macula directly to restore the cones' chromophore supply locally, outside the reach of rods.

Relevance: 100.00%

Abstract:

Glyoxysomes are specialized peroxisomes present in various plant organs such as germinating cotyledons or senescing leaves. They are the site of beta-oxidation and of the glyoxylate cycle. These consecutive pathways are essential to the maintenance of gluconeogenesis initiated by the degradation of reserve or structural lipids. In contrast to mitochondrial beta-oxidation, which is prevalent in animal cells, glyoxysomal beta-oxidation and the glyoxylate cycle have no direct access to the mitochondrial respiratory chain because the glyoxysomal membrane is impermeable to the reduced cofactors. The need for NAD(+) regeneration can conceivably be fulfilled by membrane redox chains and/or by transmembrane shuttles. Experimental evidence based on the active metabolic roles of higher plant glyoxysomes and yeast peroxisomes suggests the coexistence of two mechanisms, namely a reductase/peroxidase membrane redox chain and a malate/aspartate shuttle capable of transferring electrons to the mitochondrial ATP-generating system. Such a model interconnects beta-oxidation, the glyoxylate cycle, the respiratory chain and gluconeogenesis in such a way that glyoxysomal malate dehydrogenase is an essential and exclusive component of beta-oxidation (NAD(+) regeneration). Consequently, the classical view of the glyoxylate cycle is superseded by a tentative reaction scheme deprived of cyclic character.

Relevance: 100.00%

Abstract:

The protein topology database KnotProt, http://knotprot.cent.uw.edu.pl/, collects information about protein structures with open polypeptide chains forming knots or slipknots. The knotting complexity of the cataloged proteins is presented in the form of a matrix diagram that shows users the knot type of the entire polypeptide chain and of each of its subchains. The pattern visible in the matrix gives the knotting fingerprint of a given protein and permits users to determine, for example, the minimal length of the knotted regions (knot's core size) or the depth of a knot, i.e. how many amino acids can be removed from either end of the cataloged protein structure before converting it from a knot to a different type of knot. In addition, the database presents extensive information about the biological functions, families and fold types of proteins with non-trivial knotting. As an additional feature, the KnotProt database enables users to submit protein or polymer chains and generate their knotting fingerprints.
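The fingerprint matrix described above can be sketched as follows. This is a hypothetical skeleton: `knot_type` is a placeholder assumption that always returns the unknot, standing in for the chain-closure and knot-polynomial computation that KnotProt actually performs for each subchain:

```python
def knot_type(coords, i, j):
    """Placeholder classifier for the subchain from residue i to j.

    A real implementation would close the open subchain and compute a
    knot invariant; here we simply return "0_1" (the unknot)."""
    return "0_1"

def fingerprint(coords, step=10):
    """Classify every subchain (i, j) on a coarse grid of endpoints.

    The resulting (i, j) -> knot-type map is the matrix whose pattern
    gives the knotting fingerprint of the chain."""
    n = len(coords)
    return {
        (i, j): knot_type(coords, i, j)
        for i in range(0, n, step)
        for j in range(i + step, n + 1, step)
    }

backbone = [(0.0, 0.0, float(k)) for k in range(100)]  # dummy CA trace
matrix = fingerprint(backbone)
print(len(matrix), "subchains classified")
```

Scanning this matrix for the smallest (i, j) window that still carries a non-trivial knot type is how quantities like the knot core size and knot depth are read off.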

Relevance: 100.00%

Abstract:

Geological, hydrogeological and geochemical surveys were carried out in the Piedilago area (Ossola-Simplon region) in order to investigate the geothermal resources present in this area. Following these surface exploration efforts, an exploratory geothermal well of 248 m was drilled in 1991. It discharges thermal water with temperatures up to 43 degrees C and a calcium (sodium) sulphate composition with a TDS close to 1350 mg/l. Chemical geothermometers suggest a reservoir temperature close to 45 degrees C, indicating that the well virtually produces the pure, uncooled thermal water. The Piedilago example is here considered as the departure point both to establish general criteria for further geothermal investigations in young mountain chains and, taking into consideration all the available data on the geology and fluid geochemistry of thermal systems in the Ossola-Simplon region, to constrain a geothermal model for the Lower Pennine Zone.
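A classical chemical geothermometer of the kind used for such reservoir-temperature estimates is the quartz (no steam loss) geothermometer of Fournier (1977). The dissolved-silica value below is an assumption chosen for illustration, since the abstract does not report the Piedilago analyses:

```python
from math import log10

def quartz_geothermometer(sio2_mg_per_kg):
    """Fournier (1977) quartz geothermometer, no steam loss.

    Takes dissolved silica in mg/kg and returns the estimated
    reservoir temperature in degrees C (valid roughly 0-250 degC)."""
    return 1309.0 / (5.19 - log10(sio2_mg_per_kg)) - 273.15

# Assumed silica content picked so the estimate lands near the ~45 degC
# reservoir temperature reported for Piedilago.
print(f"{quartz_geothermometer(12.0):.0f} degC")
```

In practice several geothermometers (silica, Na-K, Na-K-Ca) are compared; agreement between them, as implied here, suggests the well water reaches the surface essentially unmixed and uncooled.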

Relevance: 100.00%

Abstract:

MHC class II-peptide multimers are important tools for the detection, enumeration and isolation of antigen-specific CD4+ Τ cells. However, their erratic and often poor performance impeded their broad application and thus in-depth analysis of key aspects of antigen-specific CD4+ Τ cell responses. In the first part of this thesis we demonstrate that a major cause for poor MHC class II tetramer staining performance is incomplete peptide loading on MHC molecules. We observed that peptide binding affinity for "empty" MHC class II molecules poorly correlates with peptide loading efficacy. Addition of a His-tag or desthiobiotin (DTB) at the peptide N-terminus allowed us to isolate "immunopure" MHC class II-peptide monomers by affinity chromatography; this significantly, often dramatically, improved tetramer staining of antigen-specific CD4+ Τ cells. Insertion of a photosensitive amino acid between the tag and the peptide, permitted removal of the tag from "immunopure" MHC class II-peptide complex by UV irradiation, and hence elimination of its potential interference with TCR and/or MHC binding. Moreover, to improve loading of self and tumor antigen- derived peptides onto "empty" MHC II molecules, we first loaded these with a photocleavable variant of the influenza A hemagglutinin peptide HA306-318 and subsequently exchanged it with a poorly loading peptide (e.g. NY-ESO-1119-143) upon photolysis of the conditional ligand. Finally, we established a novel type of MHC class II multimers built on reversible chelate formation between 2xHis-tagged MHC molecules and a fluorescent nitrilotriacetic acid (NTA)-containing scaffold. Staining of antigen-specific CD4+ Τ cells with "NTAmers" is fully reversible and allows gentle cell sorting. In the second part of the thesis we investigated the role of the CD8α transmembrane domain (TMD) for CD8 coreceptor function. The sequence of the CD8α TMD, but not the CD8β TMD, is highly conserved and homodimerizes efficiently. 
We replaced the CD8α TMD with the one of the interleukin-2 receptor a chain (CD8αTac) and thus ablated CD8α TMD interactions. We observed that ΤΙ Τ cell hybridomas expressing CD8αTacβ exhibited severely impaired intracellular calcium flux, IL-2 responses and Kd/PbCS(ABA) P255A tetramer binding. By means of fluorescence resonance energy transfer experiments (FRET) we established that CD8αTacβ associated with TCR:CD3 considerably less efficiently than CD8αβ, both in the presence and the absence of Kd/PbCS(ABA) complexes. Moreover, we observed that CD8αTacβ partitioned substantially less in lipid rafts, and related to this, associated less efficiently with p56Lck (Lck), a Src kinase that plays key roles in TCR proximal signaling. Our results support the view that the CD8α TMD promotes the formation of CD8αβP-CD8αβ dimers on cell surfaces. Because these contain two CD8β chains and that CD8β, unlike CD8α, mediates association of CD8 with TCR:CD3 as well as with lipid rafts and hence with Lck, we propose that the CD8αTMD plays an important and hitherto unrecognized role for CD8 coreceptor function, namely by promoting CD8αβ dimer formation. We discuss what implications this might have on TCR oligomerization and TCR signaling. - Les multimères de complexes MHC classe II-peptide sont des outils importants pour la détection, le dénombrement et l'isolation des cellules Τ CD4+ spécifiques pour un antigène d'intérêt. Cependant, leur performance erratique et souvent inadéquate a empêché leur utilisation généralisée, limitant ainsi l'analyse des aspects clés des réponses des lymphocytes Τ CD4+. Dans la première partie de cette thèse, nous montrons que la cause principale de la faible efficacité des multimères de complexes MHC classe II-peptide est le chargement incomplet des molécules MHC par des peptides. Nous montrons également que l'affinité du peptide pour la molécule MHC classe II "vide" n'est pas nécessairement liée au degré du chargement. 
Grâce à l'introduction d'une étiquette d'histidines (His-tag) ou d'une molécule de desthiobiotine à l'extrémité N-terminale du peptide, des monomères MHC classe II- peptide dits "immunopures" ont pu être isolés par chromatographic d'affinité. Ceci a permis d'améliorer significativement et souvent de façon spectaculaire, le marquage des cellules Τ CD4+ spécifiques pour un antigène d'intérêt. L'insertion d'un acide aminé photosensible entre l'étiquette et le peptide a permis la suppression de l'étiquette du complexe MHC classe- Il peptide "immunopure" par irradiation aux UV, éliminant ainsi de potentielles interférences de liaison au TCR et/ou au MHC. De plus, afin d'améliorer le chargement des molécules MHC classe II "vides" avec des peptides dérivés d'auto-antigènes ou d'antigènes tumoraux, nous avons tout d'abord chargé les molécules MHC "vides" avec un analogue peptidique photoclivable issu du peptide HA306-318 de l'hémagglutinine de la grippe de type A, puis, sous condition de photolyse, nous l'avons échangé avec de peptides à chargement faible (p.ex. NY-ESO-1119-143). Finalement, nous avons construit un nouveau type de multimère réversible, appelé "NTAmère", basé sur la formation chélatante reversible entre les molécules MHC-peptide étiquettés par 2xHis et un support fluorescent contenant des acides nitrilotriacetiques (NTA). Le marquage des cellules Τ CD4+ spécifiques pour un antigène d'intérêt avec les "NTAmères" est pleinement réversible et permet également un tri cellulaire plus doux. Dans la deuxième partie de cette thèse nous avons étudié le rôle du domaine transmembranaire (TMD) du CD8α pour la fonction coréceptrice du CD8. La séquence du TMD du CD8α, mais pas celle du TMD du CD8β, est hautement conservée et permet une homodimérisation efficace. Nous avons remplacé le TMD du CD8α avec celui de la chaîne α du récepteur à l'IL-2 (CD8αTac), éliminant ainsi les interactions du TMD du CD8α. 

Abstract:

We evaluated the benefits of a novel formulation of vasoactive intestinal peptide (VIP) based on the incorporation of VIP-loaded rhodamine-conjugated liposomes (VIP-Rh-Lip) within hyaluronic acid (HA) gel (Gel-VIP-Rh-Lip) for the treatment of endotoxin-induced uveitis (EIU), in comparison with VIP-Rh-Lip alone. An in vitro release study and rheological analysis showed that interactions between HA chains and liposomes increased the viscosity and reinforced the elasticity of the gel. In vivo, a single intravitreal injection of Gel-VIP-Rh-Lip was performed in rats 7 days prior to uveitis induction by subcutaneous lipopolysaccharide injection; maximal ocular inflammation occurred within 16-24 h in controls (VIP-Rh-Lip, unloaded Rh-Lip). Whereas intraocular injection of VIP-Rh-Lip had no effect on EIU severity compared with controls, Gel-VIP-Rh-Lip significantly reduced the clinical score and the number of inflammatory cells infiltrating the eye. The fate of liposomes, VIP and HA in the eyes, regional and inguinal lymph nodes, and spleen was analyzed by immunostaining and fluorescence microscopy. Retention of liposomes by the HA gel was observed both in vitro and in vivo. Inflammation severity appeared to affect system stability, resulting in delayed release of VIP. Thus, HA gel containing VIP-Rh-Lip is an efficient strategy to obtain sustained delivery of VIP to ocular and lymph node tissues.

Abstract:

Switzerland, the country with the highest health expenditure per capita, lacks data on trauma care and system planning. Recently, 12 trauma centres were designated, to be reassessed through a future national trauma registry by 2015. Lausanne University Hospital launched the first Swiss trauma registry in 2008, which contains the largest database on trauma activity nationwide. METHODS: Prospective analysis of data from consecutively admitted shock room patients from 1 January 2008 to 31 December 2012. Shock room admission is based on physiology and mechanism of injury, as assessed by prehospital physicians. Management follows a surgeon-led multidisciplinary approach. Injuries are coded by Association for the Advancement of Automotive Medicine (AAAM) certified coders. RESULTS: Over the 5 years, 1,599 trauma patients were admitted, predominantly males, with a median age of 41.4 years and a median injury severity score (ISS) of 13. The rate of ISS >15 was 42%. The principal mechanisms of injury were road traffic (40.4%) and falls (34.4%), with 91.5% blunt trauma. The principal injury patterns were brain (64.4%), chest (59.8%) and extremity/pelvic girdle (52.9%) injuries. Severe (abbreviated injury scale [AIS] score ≥3) orthopaedic injuries, defined as extremity and spine injuries together, accounted for 67.1%. Overall, 29.1% of patients underwent immediate intervention, mainly by orthopaedic surgeons (27.3%), neurosurgeons (26.3%) and visceral surgeons (13.9%); 43.8% underwent a surgical intervention within the first 24 hours and 59.1% during their hospitalisation. In-hospital mortality for patients with ISS >15 was 26.2%. CONCLUSION: This is the first 5-year report on trauma in Switzerland. The trauma workload was similar to that of other European countries. Despite high levels of healthcare, mortality exceeds published rates by >50%. Notwithstanding the importance of a multidisciplinary approach, trauma remains a surgical disease and needs dedicated surgical resources.

Abstract:

In this thesis, we study the behavioural aspects of agents interacting in queueing systems using simulation models and experimental methodologies. Each period, customers must choose a service provider. The objective is to analyse the impact of the customers' and providers' decisions on queue formation. We first consider customers with some degree of risk-aversion. Based on their perception of the average waiting time and of its variability, they form an estimate of the upper bound of the waiting time at each provider. Each period, they choose the provider for which this estimate is lowest. Our results indicate that there is no monotonic relationship between the degree of risk-aversion and overall performance: a population of customers with an intermediate degree of risk-aversion generally incurs a higher average waiting time than a population of risk-neutral or strongly risk-averse agents. We then incorporate the providers' decisions by allowing them to adjust their service capacity based on their perception of the average arrival rate. The results show that customer behaviour and provider decisions exhibit strong path-dependence. Moreover, we show that the providers' decisions cause the weighted average waiting time to converge to the market's benchmark waiting time. Finally, a laboratory experiment in which subjects played the role of a service provider allowed us to conclude that the lead times for installing and dismantling capacity significantly affect subjects' performance and decisions.
In particular, the provider's decisions are influenced by his order backlog, his currently available service capacity, and the capacity adjustments he has already decided on but not yet implemented. - Queuing is a fact of life that we witness daily. We all have had the experience of waiting in line for some reason, and we also know that it is an annoying situation. As the adage goes, "time is money"; this is perhaps the best way of stating what queueing problems mean for customers. Human beings are not very tolerant, and they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queueing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. The individual behaviour of the agents involved in queueing systems, and their decision-making processes, have received little attention. Although this work has been useful for improving the efficiency of many queueing systems and for designing new processes in social and physical systems, it has only provided a limited ability to explain the behaviour observed in many real queues. In this dissertation we depart from this traditional research by analysing how the agents involved in the system make decisions, instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010). We focus on studying behavioural aspects of queueing systems and incorporate this still underdeveloped framework into the operations management field. The first chapter of this thesis provides a general introduction to the area, as well as an overview of the results.
In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive, interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of the sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk-aversion who take uncertainty into account. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that none of them wishes to switch facility. In this case, the system has reached the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically incur higher sojourn times; in particular, they rarely reach the Nash equilibrium.
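The adaptive expectations update and the risk-averse facility choice described above can be sketched as follows. This is a minimal illustration, not the dissertation's actual implementation: the weight `alpha`, the risk-aversion factor `k` and all function names are assumptions.

```python
def update_expectation(old_estimate, new_observation, alpha):
    """Adaptive expectations: blend memory with new information.

    alpha < 0.5 -> a 'conservative' customer (memory dominates);
    alpha > 0.5 -> a 'reactive' customer (new information dominates).
    """
    return (1 - alpha) * old_estimate + alpha * new_observation


def choose_facility(mean_sojourn, std_sojourn, k):
    """Join the facility with the lowest estimated upper bound mean + k * std,
    where k encodes the degree of risk-aversion (k = 0 recovers a
    risk-neutral customer who looks only at the perceived mean)."""
    bounds = [m + k * s for m, s in zip(mean_sojourn, std_sojourn)]
    return bounds.index(min(bounds))


# A reactive, moderately risk-averse customer: facility 1 wins despite a
# higher perceived mean, because its sojourn time is far less uncertain.
est = update_expectation(old_estimate=10.0, new_observation=14.0, alpha=0.75)
facility = choose_facility(mean_sojourn=[8.0, 9.0], std_sojourn=[4.0, 0.5], k=2.0)
```

With these illustrative numbers the updated estimate is 13.0 and the upper bounds are 16.0 and 10.0, so the customer joins facility 1; a risk-neutral customer (k = 0) would instead join facility 0.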
Risk-neutral customers have the highest probability of reaching the Nash equilibrium. Chapter 3 considers a service system similar to the previous one, but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates, and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherently they account for previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path-dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked in" to a monopoly or duopoly situation. The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we find that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out a laboratory experiment aimed at analysing the way human subjects, taking on the role of the manager, make decisions regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010).
This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who do not currently patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns, which we label gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we show that these decisions are positively correlated with the decisions taken one period earlier. Subsequently, we formulate a heuristic to model the decision rule used by subjects in the laboratory. We find that this decision rule fits very well for those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects in the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical queueing literature, which focuses on optimising performance measures and on the analysis of equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages, and accordingly there is large potential for further work spanning several research topics. Interesting extensions include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information for their decisions (e.g.
service price, quality, customers' profile); analysing different decision rules and studying other characteristics which determine the profile of customers and managers.
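The endogenous-capacity mechanism of Chapter 3, in which a manager's adjustment only takes effect after an implementation delay and a "coherent" manager accounts for adjustments still in the pipeline, can be sketched as follows. The update weight, the delay, the sizing rule and the benchmark (phrased here as a target utilisation rather than the model's benchmark sojourn time) are all illustrative assumptions, not the model's exact equations.

```python
from collections import deque


class Manager:
    """Hypothetical sketch of a capacity-adjusting facility manager."""

    def __init__(self, capacity, alpha, delay, benchmark):
        self.capacity = capacity              # currently available service capacity
        self.perceived_rate = capacity        # initial perception of the arrival rate
        self.alpha = alpha                    # speed of updating perceptions
        self.pending = deque([0.0] * delay)   # decisions not yet implemented
        self.benchmark = benchmark            # exogenous target (here: utilisation)

    def step(self, observed_arrivals):
        # 1. Update the perceived arrival rate (adaptive expectations).
        self.perceived_rate += self.alpha * (observed_arrivals - self.perceived_rate)
        # 2. Desired capacity: enough to meet the benchmark utilisation.
        desired = self.perceived_rate / self.benchmark
        # 3. A 'coherent' manager accounts for adjustments still in the pipeline.
        in_pipeline = sum(self.pending)
        adjustment = desired - (self.capacity + in_pipeline)
        # 4. The new decision only takes effect after the implementation delay.
        self.pending.append(adjustment)
        self.capacity += self.pending.popleft()
        return self.capacity


# With a 2-period delay, capacity stays flat for two periods after demand
# rises, then jumps once the first adjustment comes out of the pipeline.
m = Manager(capacity=10.0, alpha=0.5, delay=2, benchmark=0.8)
trajectory = [m.step(12.0) for _ in range(3)]
```

The pipeline is what produces the path-dependence discussed above: an incoherent manager (one that ignores `in_pipeline`) would keep re-ordering the same capacity and overshoot.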

Abstract:

The Internet is becoming more and more popular among drug users. The use of websites and forums to obtain illicit drugs, and relevant information about the means of consumption, is a growing phenomenon, mainly for new synthetic drugs. Gamma-butyrolactone (GBL), a chemical precursor of gamma-hydroxybutyric acid (GHB), is used as a "club drug" and also in drug-facilitated sexual assaults. Its market operates mainly through online shops, but the structure of the market remains unknown. This research aims to combine digital, physical and chemical information to help understand the distribution routes and the structure of the GBL market. Based on an Internet monitoring process, thirty-nine websites selling GBL, mainly in the Netherlands, were detected between January 2010 and December 2011. Seventeen websites were categorised into six groups based on digital traces (e.g. IP addresses and contact information). In parallel, twenty-five bulk GBL specimens were purchased from sixteen websites for packaging comparisons and carbon isotopic measurements. Packaging information showed a high correlation with the digital data, confirming the links previously established, whereas chemical information revealed undetected links and provided complementary information. Indeed, while digital and packaging data give relevant information about the retailers, the supply routes and the distribution close to the consumer, the carbon isotopic data provide upstream information about the production level, in particular the synthesis pathways and the chemical precursors. A three-level market structure was thereby identified, with a production level mainly located in China and Germany, an online distribution level mainly hosted in the Netherlands, and the customers who order on the Internet.
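The grouping of shops by shared digital traces can be illustrated with a simple connected-components sketch: two sites sharing any trace (a hosting IP, a contact e-mail) end up in the same group, and links chain transitively. The data, trace labels and function names below are invented for illustration; this is not the study's actual dataset or procedure.

```python
from collections import defaultdict


def group_websites(traces):
    """Cluster websites into groups via shared traces (union-find).

    traces: dict mapping website name -> set of trace strings.
    Returns the groups as a list of sets, sorted for reproducibility.
    """
    parent = {site: site for site in traces}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link every pair of sites that shares at least one trace.
    by_trace = defaultdict(list)
    for site, ts in traces.items():
        for t in ts:
            by_trace[t].append(site)
    for sites in by_trace.values():
        for other in sites[1:]:
            union(sites[0], other)

    groups = defaultdict(set)
    for site in traces:
        groups[find(site)].add(site)
    return sorted(groups.values(), key=lambda g: sorted(g))


# Invented example: shop-a and shop-b share an IP; shop-c shares a
# contact address with shop-b; shop-d is unrelated.
clusters = group_websites({
    "shop-a": {"ip:1.2.3.4"},
    "shop-b": {"ip:1.2.3.4", "mail:x@example.com"},
    "shop-c": {"mail:x@example.com"},
    "shop-d": {"ip:5.6.7.8"},
})
```

Here the transitive link through shop-b merges shop-a and shop-c into one group even though they share no trace directly, which is how indirect connections between retailers surface.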