939 results for sequential benchmarks


Relevance: 10.00%

Publisher:

Abstract:

OBJECTIVE: The purpose of this study was to compare the use of different variables to measure the clinical wear of two denture tooth materials in two analysis centers. METHODS: Twelve edentulous patients were provided with full dentures. Two different denture tooth materials (experimental material and control) were placed randomly in accordance with the split-mouth design. For wear measurements, impressions were made after an adjustment phase of 1-2 weeks and after 6, 12, 18, and 24 months. The occlusal wear of the posterior denture teeth of 11 subjects was assessed in two study centers by use of plaster replicas and 3D laser-scanning methods. In both centers, sequential scans of the occlusal surfaces were digitized and superimposed. Wear was described by use of four different variables. Statistical analysis was performed after log-transformation of the wear data by use of the Pearson and Lin correlations and a mixed linear model. RESULTS: Mean occlusal vertical wear of the denture teeth after 24 months was between 120 μm and 212 μm, depending on the wear variable and the material. For three of the four variables, wear of the experimental material was statistically significantly less than that of the control. Comparison of the two study centers, however, revealed that the correlation of the wear variables between centers was only moderate, whereas strong correlation was observed among the different wear variables evaluated within each center. SIGNIFICANCE: Moderate correlation was observed for clinical wear measurements by optical 3D laser scanning in two different study centers. For the two denture tooth materials, wear measurements limited to the attrition zones led to the same qualitative assessment.
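For reference, the two agreement measures used in this analysis can be computed as follows; this is a minimal sketch on made-up, log-transformed wear values rather than study data, with Lin's concordance correlation coefficient written out from its standard definition.

```python
import numpy as np

def pearson_and_lin(x, y):
    """Pearson correlation and Lin's concordance correlation
    coefficient (CCC) for paired measurements x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    pearson = np.corrcoef(x, y)[0, 1]
    # Lin's CCC penalizes scatter and systematic offset alike.
    cov = np.cov(x, y, bias=True)[0, 1]
    ccc = 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
    return pearson, ccc

# Hypothetical paired vertical wear values (micrometres) from the two
# centers, log-transformed as in the study before correlation analysis.
center_a = np.log([120, 150, 180, 210, 140, 165])
center_b = np.log([130, 140, 200, 190, 150, 170])
print(pearson_and_lin(center_a, center_b))
```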

Relevance: 10.00%

Publisher:

Abstract:

PURPOSE: It is generally assumed that the biodistribution and pharmacokinetics of radiolabelled antibodies remain similar between dosimetric and therapeutic injections in radioimmunotherapy. However, circulation half-lives of unlabelled rituximab have been reported to increase progressively after the weekly injections of standard therapy doses. The aim of this study was to evaluate the evolution of the pharmacokinetics of repeated 131I-rituximab injections during treatment with unlabelled rituximab in patients with non-Hodgkin's lymphoma (NHL). METHODS: Patients received standard weekly therapy with rituximab (375 mg/m2) for 4 weeks and a fifth injection at 7 or 8 weeks. Each patient had three additional injections of 185 MBq 131I-rituximab, in either treatment weeks 1, 3 and 7 (two patients) or weeks 2, 4 and 8 (two patients). Each of the 12 radiolabelled antibody injections was followed over 1 week by three whole-body (WB) scintigraphic studies, with blood sampling on the same occasions. Additional WB scans were performed 2 and 4 weeks after 131I-rituximab injection, prior to the second and third injections, respectively. RESULTS: A single-exponential radioactivity decrease was observed for WB, liver, spleen, kidneys and heart. Biodistribution and half-lives were patient specific, without significant change after the second or third injection compared with the first one. Blood T(1/2)beta, calculated from the sequential blood samples fitted to a bi-exponential curve, was similar to the T(1/2) of heart and liver but shorter than that of WB and kidneys. The effective radiation dose calculated from attenuation-corrected WB scans and blood using Mirdose3.1 was 0.53 ± 0.05 mSv/MBq (range 0.48-0.59 mSv/MBq). The radiation dose was highest for spleen and kidneys, followed by heart and liver. CONCLUSION: These results show that the biodistribution and tissue kinetics of 131I-rituximab, while specific to each patient, remained constant during unlabelled antibody therapy. RIT radiation doses can therefore be reliably extrapolated from a preceding dosimetry study.
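The blood T(1/2)beta reported above is obtained by fitting a bi-exponential curve to the sequential blood samples. The sketch below shows such a fit with scipy; the activity values are hypothetical, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a, alpha, b, beta):
    """Bi-exponential clearance: fast (alpha) and slow (beta) phases."""
    return a * np.exp(-alpha * t) + b * np.exp(-beta * t)

# Hypothetical decay-corrected blood activity (% of injected dose per
# litre) versus time after injection (hours).
t = np.array([1.0, 4.0, 24.0, 48.0, 96.0, 168.0])
y = np.array([9.0, 7.5, 5.2, 4.0, 2.4, 0.9])

(a, alpha, b, beta), _ = curve_fit(biexp, t, y, p0=(5.0, 0.5, 5.0, 0.01))
t_half_beta = np.log(2) / beta  # terminal half-life, in hours
print(f"T(1/2)beta = {t_half_beta:.1f} h")
```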

Relevance: 10.00%

Publisher:

Abstract:

To test the dose-response effect of infused fish oil (FO) rich in n-3 PUFAs on the inflammatory response to endotoxin (LPS) and on membrane incorporation of fatty acids in healthy subjects. Prospective, sequential investigation comparing three different FO doses. Three groups of male subjects aged 26.8 ± 3.2 years (BMI 22.5 ± 2.1). One of three FO doses (Omegaven 10%) was given as a slow infusion before LPS: 0.5 g/kg 1 day before LPS, 0.2 g/kg 1 day before, or 0.2 g/kg 2 h before. Temperature, hemodynamic variables, indirect calorimetry and blood samples (TNF-alpha, stress hormones) were collected. After LPS, temperature, ACTH and TNF-alpha concentrations increased in the three groups: the responses were significantly blunted (p < 0.0001) compared with the control group of the Pluess et al. trial. Cortisol was unchanged. The lowest plasma ACTH, TNF-alpha and temperature AUC values were observed after a single 0.2 g/kg dose of FO. EPA incorporation into platelet membranes was dose-dependent. Having previously shown that the response to LPS was reproducible, this study shows that the three FO doses blunted it to various degrees. The 0.2 g/kg infusion immediately before LPS was the most efficient in blunting the responses, suggesting LPS capture in addition to the systemic and membrane effects.
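The ACTH, TNF-alpha and temperature responses compared above are summarized as areas under the post-LPS response curves (AUC). A short sketch of a baseline-corrected AUC with the trapezoidal rule, on made-up temperature samples:

```python
import numpy as np

# Hypothetical post-LPS core temperature (deg C) at sampling times (h).
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
temp = np.array([36.8, 37.2, 38.1, 38.6, 38.3, 37.6, 37.0])

# Area under the response curve above baseline, trapezoidal rule.
auc = np.trapz(temp - temp[0], t)
print(f"temperature AUC = {auc:.2f} degC*h")
```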

Relevance: 10.00%

Publisher:

Abstract:

NanoImpactNet (NIN) is a multidisciplinary European Commission funded network on the environmental, health and safety (EHS) impact of nanomaterials. The 24 founding scientific institutes are leading European research groups active in the fields of nanosafety, nanorisk assessment and nanotoxicology. This 4-year project is the new focal point for information exchange within the research community. Contact with other stakeholders is vital and their needs are being surveyed. NIN is communicating with hundreds of stakeholders: businesses; internet platforms; industry associations; regulators; policy makers; national ministries; international agencies; standard-setting bodies and NGOs concerned with labour rights, EHS or animal welfare. To improve this communication, internet research, a questionnaire distributed via partners and targeted phone calls were used to identify stakeholders' interests and needs. Knowledge gaps and the necessity for further data mentioned by representatives of all stakeholder groups in the targeted phone calls concerned: potential toxic and safety hazards of nanomaterials throughout their lifecycles; fate and persistence of nanoparticles in humans, animals and the environment; risks associated with nanoparticle exposure; participation in the preparation of nomenclature, standards, methodologies, protocols and benchmarks; development of best-practice guidelines; voluntary schemes on responsibility; and databases of materials, research topics and themes. The findings show that stakeholders and NIN researchers share very similar knowledge needs, and that open communication and free movement of knowledge will benefit both researchers and industry. Consequently, NIN will encourage stakeholders to become active members. These survey findings will be used to improve NIN's communication tools and to further build interdisciplinary relationships towards a healthy future with nanotechnology.

Relevance: 10.00%

Publisher:

Abstract:

WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: The AMS 800 urinary control system is the gold standard for the treatment of urinary incontinence due to sphincter insufficiency. Despite excellent functional outcomes and the latest technological improvements, the revision rate remains significant. To overcome the shortcomings of the current device, we developed a modern electromechanical artificial urinary sphincter. The results demonstrated that this new sphincter is effective and well tolerated for up to 3 months. This preliminary study represents a first step in the clinical application of novel technologies and of an alternative compression mechanism for the urethra. OBJECTIVES: To evaluate the effectiveness in continence achievement of a new electromechanical artificial urinary sphincter (emAUS) in an animal model. To assess the urethral response and the animals' general response to short-term and mid-term activation of the emAUS. MATERIALS AND METHODS: The principle of the emAUS is electromechanical induction of alternating compression of successive segments of the urethra by a series of cuffs activated by artificial muscles. Between February 2009 and May 2010 the emAUS was implanted in 17 sheep divided into three groups. The first phase aimed to measure bladder leak point pressure during activation of the device. The second and third phases aimed to assess tissue response to the presence of the device after 2-9 weeks and after 3 months, respectively. Histopathological and immunohistochemical evaluation of the urethra was performed. RESULTS: Bladder leak point pressure was measured at levels between 1091 ± 30.6 cmH2O and 1244.1 ± 99 cmH2O (mean ± standard deviation) depending on the number of cuffs used. At gross examination, the explanted urethra showed no sign of infection, atrophy or stricture. On microscopic examination, no significant difference in structure was found between urethra surrounded by a cuff and control urethra. In the peripheral tissues, the implanted material elicited a chronic foreign-body reaction. Apart from one case, specimens did not show significant presence of lymphocytes, polymorphonuclear leucocytes, necrosis or cell degeneration. Immunohistochemistry confirmed the absence of macrophages in the samples. CONCLUSIONS: This animal study shows that the emAUS can provide continence. This new electronically controlled sequential alternating compression mechanism can avoid damage to the urethral vascularity, at least up to 3 months after implantation. After this positive proof of concept, long-term studies are needed before clinical application can be considered.

Relevance: 10.00%

Publisher:

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. 
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges.

In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proved to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
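The gradual-deformation perturbation mentioned in the second part is classically implemented by mixing the current Gaussian model realization with an independent one, so that the prior statistics are preserved while a single parameter tunes the step size. A minimal sketch under that assumption (the thesis implementation may differ in detail):

```python
import numpy as np

def gradual_deformation(m_current, m_independent, theta):
    """Mix two independent standard-Gaussian realizations; the result is
    again standard Gaussian because cos^2 + sin^2 = 1. Small theta gives
    a small perturbation, theta = pi/2 an independent redraw."""
    return np.cos(theta) * m_current + np.sin(theta) * m_independent

# Hypothetical MCMC proposal step on a 100-cell Gaussian model vector.
rng = np.random.default_rng(0)
m = rng.standard_normal(100)
proposal = gradual_deformation(m, rng.standard_normal(100), theta=0.2)
```

Because such a proposal preserves the Gaussian prior, the Metropolis acceptance test reduces to a likelihood ratio, which makes the perturbation strength directly tunable.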

Relevance: 10.00%

Publisher:

Abstract:

In this paper the iterative MSFV method is extended to include the sequential implicit simulation of time-dependent problems involving the solution of a system of pressure-saturation equations. To control numerical errors in the simulation results, an error estimate based on the residual of the MSFV approximate pressure field is introduced. In the initial time steps of a simulation, iterations are employed until a specified accuracy in pressure is achieved. This initial solution is then used to improve the localization assumption at later time steps. Additional iterations in the pressure solution are employed only when the pressure residual becomes larger than a specified threshold value. The efficiency of the strategy and the error-control criteria are numerically investigated. This paper also shows that it is possible to derive an a priori estimate and control, based on the allowed pressure-equation residual, to guarantee the desired accuracy in the saturation calculation.
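A minimal sketch of the residual-based control loop described above, assuming a linearized pressure system A p = q; msfv_cycle is a placeholder for one iterative multiscale cycle, and the names are illustrative rather than taken from the paper:

```python
import numpy as np

def solve_pressure_adaptively(p, A, q, msfv_cycle, tol=1e-3, max_iter=50):
    """Apply further MSFV iterations only while the normalized
    pressure-equation residual exceeds the threshold tol."""
    for _ in range(max_iter):
        residual = np.linalg.norm(q - A @ p) / np.linalg.norm(q)
        if residual <= tol:   # desired pressure accuracy reached
            break
        p = msfv_cycle(p)     # one more multiscale smoothing cycle
    return p

# Demo with a dummy relaxation cycle on a small random system.
rng = np.random.default_rng(1)
A = np.eye(5) + 0.1 * rng.standard_normal((5, 5))
q = rng.standard_normal(5)
cycle = lambda p: p + 0.5 * (q - A @ p)  # stand-in smoother
p = solve_pressure_adaptively(np.zeros(5), A, q, cycle)
```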

Relevance: 10.00%

Publisher:

Abstract:

Oolitic carbonates belonging to the Hauptrogenstein Formation of Bajocian (Middle Jurassic) age have been shown to be anomalously enriched in cadmium (Cd) throughout the Jura Mountains. Soils associated with this type of rock substratum may therefore be naturally polluted with regard to Cd. At Schleifenberg (Canton Basel-Land, Switzerland) the Hauptrogenstein Formation is almost entirely exposed along a trail on its SW flank. Cadmium concentrations were systematically measured throughout this formation, and Cd enrichments in rocks are shown to occur up to a maximum content of 4.9 mg kg(-1). We investigated the associated soils, which cover the entire outcrop, and show that they have been formed through the weathering of the underlying bedrock and through the uptake of colluvial limestone fragments from the same and older formations. Cadmium contents in the soils reach a maximum value of 2.0 mg kg(-1), thereby exceeding the official Swiss indicative guideline value for soils, fixed at 0.8 mg kg(-1). Mineralogical analyses of the soils and associated bedrock suggest that no allochthonous component related to aeolian transport is present. Sequential extractions applied to selected soil samples show that about half of the Cd resides in the carbonate fraction coming from the fractured parent rock, while the Cd released from the weathered carbonates is associated either with organic matter (over 10%) or with Fe- and Mn-oxyhydroxides (approximately 30%). No exchangeable Cd phase was found and this, together with the buffer capacity of this calcareous soil, suggests that the amount of mobile Cd is quite negligible in this soil, which also greatly reduces the amount of bioavailable Cd.

Relevance: 10.00%

Publisher:

Abstract:

Performance prediction and application behavior modeling have been the subject of extensive research that aims to estimate application performance with acceptable precision. A novel approach to predicting the performance of parallel applications is based on the concept of Parallel Application Signatures, which consists in extracting an application's most relevant parts (phases) and the number of times they repeat (weights). By executing these phases on a target machine and multiplying each phase's execution time by its weight, an estimate of the application's total execution time can be made. One problem is that the performance of an application depends on the program workload. Different workloads affect how an application performs on a given system in different ways, and thus affect the signature's execution time. Since the workloads used in most scientific parallel applications have well-known dimensions and data ranges, and the behavior of these applications is mostly deterministic, a model of how a program's workload affects its performance can be obtained. We create a new methodology to model how a program's workload affects the parallel application signature. Using regression analysis, we are able to generalize each phase's execution time and weight as functions of the workload, in order to predict an application's performance on a target system for any type of workload within a predefined range. We validate our methodology using a synthetic program, benchmark applications, and well-known real scientific applications.
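As an illustration of this prediction step, the sketch below fits a linear regression of each phase's execution time against workload size and sums the weighted phase times; the phase names, weights and timings are hypothetical.

```python
import numpy as np

# Hypothetical training data: measured phase times (s) on the target
# machine for several workload sizes within the predefined range.
workload_sizes = np.array([1e5, 2e5, 4e5, 8e5])
phase_times = {
    "phase1": np.array([0.8, 1.6, 3.3, 6.5]),
    "phase2": np.array([0.2, 0.5, 1.1, 2.3]),
}
phase_weights = {"phase1": 120, "phase2": 480}  # repetition counts

# Regression step: fit time(w) = a*w + b for each phase.
models = {name: np.polyfit(workload_sizes, times, deg=1)
          for name, times in phase_times.items()}

def predict_total_time(workload):
    """Predicted runtime: sum over phases of weight * phase time."""
    return sum(phase_weights[name] * np.polyval(coef, workload)
               for name, coef in models.items())

print(f"predicted runtime at w=6e5: {predict_total_time(6e5):.1f} s")
```

In practice the weights may also vary with the workload, in which case they too are fitted as functions of the workload rather than treated as constants.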

Relevance: 10.00%

Publisher:

Abstract:

The present study developed and standardized an enzyme-linked immunosorbent assay (ELISA) to detect Giardia antigen in feces using rabbit polyclonal antibodies. Giardia cysts were purified from human fecal samples by sucrose and Percoll gradients. Gerbils (Meriones unguiculatus) were infected to obtain trophozoites. Rabbits were inoculated with either cyst or trophozoite antigens of 14 Colombian Giardia isolates to develop antibodies against the respective stages. The anti-Giardia IgG was purified by sequential caprylic acid and ammonium sulfate precipitation. A portion of these polyclonal antibodies was linked to alkaline phosphatase (conjugate). One hundred and ninety-six samples of human feces, from different patients, were tested by parasitologic diagnosis: 69 were positive for Giardia cysts, 56 had no Giardia parasites, and 71 revealed parasites other than Giardia. The optimal concentration of polyclonal antibodies for antigen capture was 40 µg/ml and the optimal conjugate dilution was 1:100. The absorbance cut-off value was 0.24. The parameters of the ELISA test for Giardia antigen detection were: sensitivity, 100% (95% CI: 93.4-100%); specificity, 95% (95% CI: 88.6-97.6%); positive predictive value, 91% (95% CI: 81.4-95.9%); and negative predictive value, 100% (95% CI: 96.1-100%). This ELISA will improve the diagnosis of Giardia infections in Colombia and will be useful for following patients after treatment.
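The reported test parameters follow from the standard two-by-two confusion-matrix formulas. A minimal sketch with illustrative counts chosen to be roughly consistent with the reported values (the study's exact cell counts are not given here):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and predictive values from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),  # positives correctly detected
        "specificity": tn / (tn + fp),  # negatives correctly rejected
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative: all 69 microscopy-positive samples ELISA-positive, and a
# few false positives among the 127 Giardia-negative samples.
print(diagnostic_metrics(tp=69, fp=7, fn=0, tn=120))
```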

Relevance: 10.00%

Publisher:

Abstract:

A 19-month mark-release-recapture study of Neotoma micropus with sequential screening for Leishmania mexicana was conducted in Bexar County, Texas, USA. The overall prevalence rate was 14.7% and the seasonal prevalence rates ranged from 3.8 to 26.7%. Nine incident cases were detected, giving an incidence rate of 15.5/100 rats/year. Follow-up of 101 individuals captured two or more times ranged from 14 to 462 days. Persistence of L. mexicana infections averaged 190 days and ranged from 104 to 379 days. Data on dispersal, density, dispersion, and weight are presented, and the role of N. micropus as a reservoir host for L. mexicana is discussed.
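The reported rates follow from the usual prevalence and incidence-density formulas; the sketch below uses an assumed follow-up denominator of about 58 rat-years, chosen only to reproduce the reported 15.5/100 rats/year, since the actual denominator is not stated here.

```python
def prevalence_pct(n_infected, n_examined):
    """Point prevalence as a percentage of animals examined."""
    return 100 * n_infected / n_examined

def incidence_rate(new_cases, animal_years):
    """Incidence density per 100 rats per year of follow-up."""
    return 100 * new_cases / animal_years

print(incidence_rate(9, 58))  # -> about 15.5 cases/100 rats/year
```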

Relevance: 10.00%

Publisher:

Abstract:

The proprotein convertase subtilisin kexin isozyme 1 (SKI-1)/site 1 protease (S1P) plays crucial roles in cellular homeostatic functions and is hijacked by pathogenic viruses for the processing of their envelope glycoproteins. Zymogen activation of SKI-1/S1P involves sequential autocatalytic processing of its N-terminal prodomain at sites B'/B followed by the herein newly identified C'/C sites. We found that SKI-1/S1P autoprocessing results in intermediates whose catalytic domain remains associated with prodomain fragments of different lengths. In contrast to other zymogen proprotein convertases, all incompletely matured intermediates of SKI-1/S1P showed full catalytic activity toward cellular substrates, whereas optimal cleavage of viral glycoproteins depended on B'/B processing. Incompletely matured forms of SKI-1/S1P further process cellular and viral substrates in distinct subcellular compartments. Using a cell-based sensor for SKI-1/S1P activity, we found that 9 amino acid residues at the cleavage site (P1-P8) and P1' are necessary and sufficient to define the subcellular location of processing and to determine to what extent processing of a substrate depends on SKI-1/S1P maturation. In sum, our study reveals novel and unexpected features of SKI-1/S1P zymogen activation and subcellular specificity of activity toward cellular and pathogen-derived substrates.

Relevance: 10.00%

Publisher:

Abstract:

Numerous organelles undergo membrane fission and fusion events during cell division, vesicular traffic, or in response to changes in environmental conditions. Examples include the Golgi (Acharya et al., 1998), mitochondria (Bleazard et al., 1999), peroxisomes (Kuravi et al., 2006) and lysosomes (Ward et al., 1997). In the yeast Saccharomyces cerevisiae the vacuole is the terminal compartment of the endocytic pathway and corresponds to the lysosomes of mammalian cells. Yeast vacuoles fragment into multiple small vesicles in response to a hypertonic shock, and this rapid and homogeneous reaction can serve as a model to study the requirements of the fragmentation process. Here, I investigated osmotically induced fragmentation by time-lapse microscopy. I observe that the small fragmentation products originate directly from the large central vacuole by asymmetric scission rather than by consecutive equal divisions, and that fragmentation occurs in two distinct phases. During the first minute, vacuoles shrink and generate deep invaginations, leaving behind tubular structures. This phase requires the dynamin-like GTPase Vps1 and the vacuolar proton gradient. In the subsequent 10-15 minutes, vesicles pinch off from the tubular structures in a polarized fashion, directly generating fragmentation products of the final size. This phase depends on the production of phosphatidylinositol-3,5-bisphosphate by the Fab1 complex. Based on my microscopy study I established a sequential involvement of the different fission factors, and I suggest a possible regulation of vacuole fragmentation by the cyclin-dependent kinase Pho85.

In addition to the morphological description of vacuole fragmentation, I aimed to shed some light on the role of Vps1 in vacuole fragmentation and fusion. I find that both functions depend on the GTPase activity of the protein and that the membrane association of this dynamin-like protein is coupled to the GTPase cycle. Vps1 has the capacity for direct lipid binding on the vacuole, and this lipid binding is at least partially mediated through residues in the GTPase domain, a complete novelty for a dynamin family member. A second stretch, located in the region of insert B, either has membrane-binding activity itself or regulates the association with the vacuole through the GTPase domain. Under the assumption of two membrane-binding regions, I speculate that Vps1 may act as a tethering factor for vacuole fusion.

The cell contains several subunits, called organelles, each of which has a specific function. Depending on the processes taking place inside them, a specific chemical environment is required, and to maintain these different conditions the organelles are separated by membranes. During cell division, or to adapt to changes in their surroundings, organelles must be able to modify their morphology, which often happens through fusion or division. The same principle applies to the vacuole of yeast, an organelle that serves mainly for nutrient storage and for the degradation of various cellular components. Whereas vacuole fusion is already a well-described process, vacuole fragmentation has so far received little study. Fragmentation can be induced by an osmotic shock: because of the high salt concentration in the medium, the yeast cytosol loses water, and the cell compensates through a flow of water from the vacuole into the cytosol. When the vacuole loses volume it must readjust its surface-to-volume ratio, which is achieved efficiently by fragmenting one large vacuole into several small vesicles. How this process unfolds morphologically had not been described before. Analysing vacuole fragmentation by microscopy, I found that it proceeds in two phases. During the first minute after the osmotic shock, vacuoles shrink and form long tubular invaginations; this phase depends on the protein Vps1, a member of the dynamin-related protein family, and on the transmembrane proton gradient. This gradient is established by a membrane pump, the V-ATPase, which transports protons into the vacuole using the energy released by ATP hydrolysis. After this initial phase, the formation of new vacuolar vesicles depends on the synthesis of the lipid PI(3,5)P2. In the second part of the study, I investigated how Vps1 binds the membrane to remodel the vacuole. Vps1 is necessary for both vacuole fusion and fragmentation, and I discovered that both processes depend on its capacity to hydrolyse GTP, with membrane association coupled to the GTP hydrolysis cycle. Vps1 can bind the membrane without the presence of another protein and therefore most likely interacts directly with membrane lipids. Two different parts of the protein are involved in this binding, one of them, unexpectedly, being the GTPase domain.

Relevance: 10.00%

Publisher:

Abstract:

Although the T-cell receptor αδ (TCRαδ) locus harbours large libraries of variable (TRAV) and junctional (TRAJ) gene segments, according to previous studies the TCRα chain repertoire is of limited diversity due to restrictions imposed by sequential coordinate TRAV-TRAJ recombinations. By sequencing tens of millions of TCRα chain transcripts from naive mouse CD8(+) T cells, we observed a hugely diverse repertoire, comprising nearly all possible TRAV-TRAJ combinations. Our findings are not compatible with sequential coordinate gene recombination, but rather with a model in which contraction and DNA looping in the TCRαδ locus provide equal access to TRAV and TRAJ gene segments, similarly to that demonstrated for IgH gene recombination. Generation of the observed highly diverse TCRα chain repertoire necessitates deletion of failed attempts by thymic positive selection and is essential for the formation of highly diverse TCRαβ repertoires, capable of providing good protective immunity.

Relevance: 10.00%

Publisher:

Abstract:

Whereas handling of the transcutaneous PO2 (tcPO2) and PCO2 (tcPCO2) sensor has been simplified during the last few years, the high electrode temperature and the short application time remain major drawbacks. In order to determine whether the application of a topical metabolic inhibitor allows reliable measurement at a sensor temperature of 42 degrees C for a period of up to 12 h, we performed a prospective, open, nonrandomized study in a sequential sample of 20 critically ill neonates. A total of 120 comparisons (six repeated measurements per patient) between arterial and transcutaneous values were obtained. Transcutaneous values were measured with a control sensor at 44 degrees C (conventional contact medium, average application time 3 h) and a test sensor at 42 degrees C (Eugenol solution, average application time 8 h). Comparison of tcPO2 and PaO2 at 42 degrees C (Eugenol solution) showed a mean difference of +0.16 kPa (range +1.60 to -2.00 kPa), limits of agreement +1.88 and -1.56 kPa. Comparison of tcPO2 and PaO2 at 44 degrees C (control sensor) revealed a mean difference of +0.02 kPa (range +2.60 to -1.90 kPa), limits of agreement +2.12 and -2.08 kPa. Comparison of tcPCO2 and PaCO2 at 42 degrees C (Eugenol solution) showed a mean difference of +0.91 kPa (range +2.30 to +0.10 kPa), limits of agreement +2.24 and -0.42 kPa. Comparison of tcPCO2 and PaCO2 at 44 degrees C (control sensor) revealed a mean difference of +0.63 kPa (range +1.50 to -0.30 kPa), limits of agreement +1.73 and -0.47 kPa. CONCLUSION: Our results show that the use of a Eugenol solution allows reliable measurement of tcPO2 at a heating temperature of 42 degrees C; the application time can be prolonged up to a maximum of 12 h without aggravating the skin lesions. The performance of the tcPCO2 monitor was slightly worse at 42 degrees C than at 44 degrees C, suggesting that for the Eugenol solution the metabolic offset should be corrected.
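The mean differences and limits of agreement reported above are Bland-Altman statistics: the bias of the paired differences and bias +/- 1.96 standard deviations. A minimal sketch with hypothetical paired values:

```python
import numpy as np

def bland_altman(measured, reference):
    """Mean difference (bias) and 95% limits of agreement."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    bias = d.mean()
    spread = 1.96 * d.std(ddof=1)
    return bias, bias + spread, bias - spread

# Hypothetical paired tcPO2 vs. PaO2 values (kPa).
tc = np.array([8.1, 9.4, 7.2, 10.3, 6.8, 9.0])
art = np.array([8.0, 9.0, 7.5, 10.0, 7.1, 8.6])
print(bland_altman(tc, art))
```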