972 results for: Inspection to propagate
Abstract:
This paper presents a stylized model of international trade and asset price bubbles. Its central insight is that bubbles tend to appear and expand in countries where productivity is low relative to the rest of the world. These bubbles absorb local savings, eliminating inefficient investments and freeing resources that are in part used to invest in high-productivity countries. Through this channel, bubbles act as a substitute for international capital flows, improving the international allocation of investment and reducing rate-of-return differentials across countries. This view of asset price bubbles could eventually provide a simple account of some real-world phenomena that have been difficult to model before, such as the recurrence and depth of financial crises or their puzzling tendency to propagate across countries.
Abstract:
Hepatitis A virus (HAV), the prototype of the genus Hepatovirus, has several unique biological characteristics that distinguish it from other members of the Picornaviridae family. Among these, the need for an intact eIF4G factor for the initiation of translation results in an inability to shut down host protein synthesis by a mechanism similar to that of other picornaviruses. Consequently, HAV must compete inefficiently for the cellular translational machinery, which may explain its poor growth in cell culture. In this context of virus/cell competition, HAV has strategically adopted a naturally and highly deoptimized codon usage with respect to that of its cellular host. With the aim of optimizing its codon usage, the virus was adapted to propagate in cells with impaired protein synthesis, in order to make tRNA pools more available to the virus. The immediate response to the adaptation process was a significant loss of fitness, which was later recovered and was associated with a re-deoptimization, rather than an optimization, of codon usage specifically in the capsid coding region. These results exclude translational selection and instead suggest selection for fine-tuned translation kinetics as the underlying mechanism of the codon usage bias in this genome region. Additionally, the results provide clear evidence of Red Queen dynamics of evolution, since the virus evolved to re-adapt its codon usage to the changing cellular environment in order to recover its original fitness.
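For readers unfamiliar with codon usage measures, the hedged sketch below computes relative synonymous codon usage (RSCU), one common way to quantify the kind of codon usage bias discussed above. The example sequence is made up and the codon table is deliberately truncated to two amino-acid families; this is not the analysis pipeline used in the paper.

```python
# Hedged illustration (not the paper's pipeline): relative synonymous codon
# usage (RSCU) for two amino-acid families, computed from a made-up coding
# sequence. RSCU = observed codon count / mean count of its synonyms, so
# values below 1 indicate under-used ("deoptimized") codons.
from collections import Counter

# Truncated codon table for illustration: only the alanine and glycine families.
SYNONYMS = {
    "Ala": ["GCT", "GCC", "GCA", "GCG"],
    "Gly": ["GGT", "GGC", "GGA", "GGG"],
}

def rscu(coding_sequence):
    codons = [coding_sequence[i:i + 3] for i in range(0, len(coding_sequence) - 2, 3)]
    counts = Counter(codons)
    result = {}
    for family in SYNONYMS.values():
        total = sum(counts[c] for c in family)
        if total == 0:
            continue
        mean = total / len(family)
        for c in family:
            result[c] = counts[c] / mean
    return result

# Made-up sequence biased towards GCC (Ala) and GGC (Gly).
seq = "GCCGCCGCCGCAGGCGGCGGTGGC"
for codon, value in sorted(rscu(seq).items()):
    print(codon, round(value, 2))
```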
Abstract:
We analyse the variations in tsunami propagation and impact along a straight coastline caused by the presence of a submarine canyon incised in the continental margin. For ease of calculation we assume that the shoreline and the shelf edge are parallel and that the incident wave approaches them normally. A total of 512 synthetic scenarios have been computed by combining the bathymetry of a continental margin incised by a single parameterised canyon and the incident tsunami waves. The margin bathymetry, the canyon and the tsunami waves have been generated using mathematical functions (e.g. Gaussian). The canyon parameters analysed are: (i) incision length into the continental shelf, which for a constant shelf width relates directly to the distance from the canyon head to the coast, (ii) canyon width, and (iii) canyon orientation with respect to the shoreline. The tsunami wave parameters considered are period and sign. The COMCOT tsunami model from Cornell University was applied to propagate the waves across the synthetic bathymetric surfaces. Five simulations of tsunami propagation over a non-canyoned margin were also performed for reference. The analysis of the results reveals a strong variation in the tsunami arrival times and amplitudes reaching the coastline when a tsunami wave travels over a submarine canyon, with changes in the location and alongshore extension of the maximum wave height. In general, the presence of a submarine canyon shortens the arrival time at the shoreline but prevents wave build-up just over the canyon axis. This leads to a decrease in tsunami amplitude at the coastal stretch located just shoreward of the canyon head, which results in a lower run-up in comparison with a non-canyoned margin. Conversely, an increased wave build-up occurs on both sides of the canyon head, generating two coastal stretches with an enhanced run-up. These aggravated or reduced tsunami effects are modified by (i) the proximity of the canyon tip to the coast, which amplifies the wave height, (ii) the canyon width, which enlarges the stretches of coastline with lower and higher maximum wave height, and (iii) the canyon obliquity with respect to the shoreline and shelf edge, which increases the wave height shoreward of the leeward flank of the canyon. Moreover, the presence of a submarine canyon near the coast produces a variation of wave energy along the shore, eventually resulting in edge waves shoreward of the canyon head. Edge waves subsequently spread alongshore, reaching significant amplitudes, especially when coupling with secondary tsunami waves occurs. Model results have been ground-truthed using the actual bathymetry of the Blanes Canyon area in the North Catalan margin. This paper underlines the presence, morphology and orientation of submarine canyons as determining factors in tsunami propagation and impact, which may prevail over other effects deriving from coastal configuration.
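As a hedged illustration of how such synthetic inputs can be built, the sketch below generates a shelf-and-slope bathymetry incised by a Gaussian-shaped canyon. The functional forms, parameter names and values are invented for illustration and are not the study's actual COMCOT configuration.

```python
# Hedged sketch: synthetic shelf/slope bathymetry incised by a Gaussian canyon,
# of the kind used as input to tsunami propagation models. All shapes and
# parameters are illustrative, not the study's actual set-up.
import numpy as np

def synthetic_bathymetry(x_km, y_km, shelf_width=20.0, shelf_depth=120.0,
                         basin_depth=2000.0, canyon_width=4.0,
                         canyon_incision=10.0, canyon_depth=600.0):
    """Depth (m, positive down) on a grid; x is cross-shore distance from the coast."""
    x, y = np.meshgrid(x_km, y_km)
    # Gentle shelf out to the shelf edge, then a steeper slope towards the basin.
    depth = np.where(x < shelf_width,
                     shelf_depth * x / shelf_width,
                     shelf_depth + (basin_depth - shelf_depth)
                     * (x - shelf_width) / (x.max() - shelf_width))
    # Gaussian canyon incised into shelf and slope, its head `canyon_incision`
    # kilometres shoreward of the shelf edge, centred on y = 0.
    head = shelf_width - canyon_incision
    incision = canyon_depth * np.exp(-(y / canyon_width) ** 2) * np.clip(
        (x - head) / (x.max() - head), 0.0, 1.0)
    return depth + incision

x = np.linspace(0.0, 80.0, 401)    # cross-shore distance, km
y = np.linspace(-40.0, 40.0, 401)  # alongshore distance, km
z = synthetic_bathymetry(x, y)
print("max depth along the canyon axis (m):", z[200, :].max().round(0))
```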
Abstract:
Multicast is one method of transferring information in IPv4-based communication; the other methods are unicast and broadcast. Multicast is based on the group concept, where data is sent from one point to a group of receivers, which saves considerable bandwidth. Group members express an interest in receiving data by using the Internet Group Management Protocol (IGMP), and traffic is delivered only to those receivers that want it. The most common multicast applications are media streaming, surveillance and data collection applications. Many data security methods exist to protect unicast communication, the most common transfer method on the Internet; popular examples are encryption, authentication, access control and firewalls. Characteristics of multicast, such as dynamic membership, mean that not all of these security mechanisms can be used to protect multicast traffic. At present, multicast traffic can be protected through traffic restrictions, where traffic is allowed to propagate only to certain areas; one way to implement this is packet filters. The methods tested in this thesis are MVR, IGMP filtering and access control lists, all of which worked as expected. These methods restrict the propagation of multicast but are laborious to configure at a large scale. There are also a few manufacturer-specific products that make it possible to encrypt multicast traffic; these separate products are expensive and mainly intended to protect video transmissions via satellite. Research on multicast security has been under way for several years, and the resulting security methods are nearing completion. An IETF working group called MSEC is standardizing these methods, with the goal of standardizing data security protocols for multicast during 2004.
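To make the group concept concrete, the hedged example below shows a receiver joining an IPv4 multicast group, which is what triggers the IGMP membership report mentioned above. The group address and port are arbitrary illustrative values.

```python
# Hedged example: a receiver joins an IPv4 multicast group. Binding and the
# IP_ADD_MEMBERSHIP option cause the host to signal its interest via IGMP,
# so routers and switches forward the group's traffic to this segment.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004   # arbitrary example group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group on the default interface (0.0.0.0 = INADDR_ANY).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

print(f"joined {GROUP}, waiting for one datagram on port {PORT}...")
data, sender = sock.recvfrom(2048)
print(f"received {len(data)} bytes from {sender}")
```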
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has increased considerably over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and the remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization.

In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Error models are proposed to correct the approximate responses following a machine-learning approach: for the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known, and this information is used to construct an error model that corrects the ensemble of approximate responses and predicts the "expected" responses of the exact model. The proposed methodology uses all the available information without perceptible additional computational cost and leads to more accurate and more robust uncertainty propagation.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the proxy and exact flow models. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty, and the individual correction of each proxy response leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is useful not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy coupled with an error model provides this preliminary evaluation for the two-stage MCMC set-up, and we demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy in which, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline intrusion problem in a coastal aquifer.
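As a hedged, self-contained sketch of the general idea (learn a regression between proxy and exact response curves in a reduced principal-component space, then correct the remaining proxy responses), the example below uses synthetic curves and ordinary PCA as a stand-in for FPCA; none of it is the thesis code.

```python
# Illustrative sketch only (not the thesis implementation): correct approximate
# ("proxy") flow responses using an error model learned on a small training
# subset where both proxy and exact responses are available.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 realizations, responses sampled at 50 time steps.
n_real, n_t = 200, 50
t = np.linspace(0, 1, n_t)
exact = np.array([np.exp(-t / (0.2 + 0.6 * rng.random())) for _ in range(n_real)])
proxy = exact + 0.05 * np.sin(6 * t) + 0.02 * rng.standard_normal((n_real, n_t))

# Training subset: realizations for which the exact (expensive) model was run.
train = rng.choice(n_real, size=20, replace=False)

# Reduce the dimensionality of the curves (plain PCA as a stand-in for FPCA).
pca_proxy = PCA(n_components=3).fit(proxy)
pca_exact = PCA(n_components=3).fit(exact[train])

# Error model: map proxy scores to exact scores on the training subset.
reg = LinearRegression().fit(pca_proxy.transform(proxy[train]),
                             pca_exact.transform(exact[train]))

# Correct all proxy responses to predict the "expected" exact curves.
predicted_exact = pca_exact.inverse_transform(reg.predict(pca_proxy.transform(proxy)))
print("mean abs error after correction:", np.abs(predicted_exact - exact).mean())
```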
Abstract:
This chapter presents possible uses and examples of Monte Carlo methods for the evaluation of uncertainties in the field of radionuclide metrology. The method is already well documented in GUM supplement 1, but here we present a more restrictive approach, where the quantities of interest calculated by the Monte Carlo method are estimators of the expectation and standard deviation of the measurand, and the Monte Carlo method is used to propagate the uncertainties of the input parameters through the measurement model. This approach is illustrated by an example of the activity calibration of a 103Pd source by liquid scintillation counting and the calculation of a linear regression on experimental data points. An electronic supplement presents some algorithms which may be used to generate random numbers with various statistical distributions, for the implementation of this Monte Carlo calculation method.
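As an illustration of the approach described (input uncertainties propagated through a measurement model, with the mean and standard deviation of the output serving as estimators of the measurand and its standard uncertainty), here is a minimal sketch. The measurement model, count rate and efficiency values are hypothetical and are not the chapter's 103Pd example.

```python
# Hedged illustration of Monte Carlo uncertainty propagation (GUM Supplement 1
# style): sample the inputs, evaluate the measurement model, summarize the output.
import numpy as np

rng = np.random.default_rng(42)
n = 10**6  # number of Monte Carlo trials

# Hypothetical inputs: net count rate (Gaussian) and detection efficiency (uniform).
count_rate = rng.normal(loc=1520.0, scale=8.0, size=n)   # counts per second
efficiency = rng.uniform(low=0.72, high=0.76, size=n)    # dimensionless

# Measurement model: activity = count rate / efficiency.
activity = count_rate / efficiency

# Estimators of the measurand and its standard uncertainty.
print(f"activity    = {activity.mean():.1f} Bq")
print(f"u(activity) = {activity.std(ddof=1):.1f} Bq")
```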
Abstract:
It is well established that cytotoxic T lymphocytes play a pivotal role in the protection against intracellular pathogens and tumour cells. Such protective immune responses rely on the specific T cell receptor (TCR)-mediated recognition by CD8 T cells of small antigenic peptides presented in the context of class-I Major Histocompatibility Complex molecules (pMHCs) on the surface of infected or malignant cells. The strength (affinity/avidity) of this interaction is a major correlate of protection. Although tumour-reactive CD8 T cells can be observed in cancer patients, anti-tumour immune responses are often ineffective in controlling or eradicating the disease due to the relatively low TCR affinity of these cells. To overcome this limitation, tumour-specific CD8 T cells can be genetically modified to express TCRs of improved binding strength against a defined tumour antigen before adoptive cell transfer into cancer patients. We previously generated a panel of TCRs specific for the cancer-testis antigen NY-ESO-1 (157-165) with progressively increased affinities for the pMHC complex, thus providing us with a unique tool to investigate the causal link between the surface expression of such TCRs and T cell activation and function. We recently demonstrated that anti-tumour CD8 T cell reactivity could only be improved within physiological affinity limits, beyond which drastic functional declines were observed, suggesting the presence of multiple regulatory mechanisms limiting T cell activation and function in a TCR affinity-dependent manner. The overarching goal of this thesis was (i) to assess the precise impact of TCR affinity on T cell activation and signalling at the molecular level and (ii) to gain further insights into the mechanisms that regulate and delimit maximal/optimized CD8 T cell activation and signalling. Specifically, by combining several technical approaches we characterized the activation status of proximal (i.e. CD3ζ, Lck, and ZAP-70) and distal (i.e. ERK1/2) signalling molecules along the TCR affinity gradient. Moreover, we assessed the extent of TCR downmodulation, a critical step for initial T cell activation. CD8 T cells engineered with the optimal TCR affinity variants showed increased activation levels of both proximal and distal signalling molecules when compared to the wild-type T cells. Our analyses also highlighted the "paradoxical" status of tumour-reactive CD8 T cells bearing very high TCR affinities, which retained strong proximal signalling capacity and TCR downmodulation, but were unable to propagate signalling distally (i.e. pERK1/2), resulting in impaired cell-mediated functions. Importantly, these very high affinity T cells displayed maximal levels of SHP-1 and SHP-2 phosphatases, two negative regulatory molecules, and this correlated with a partial pERK1/2 signalling recovery upon pharmacological SHP-1/SHP-2 inhibition. These findings revealed the putative presence of inhibitory regulators of the TCR signalling cascade acting very rapidly following tumour-specific stimulation. Moreover, the very high affinity T cells were only able to transiently express enhanced proximal signalling molecules, suggesting the presence of an additional level of regulation that operates through the activation of negative feedback loops over time, limiting the duration of TCR-mediated signalling.
Overall, the determination of the TCR-pMHC binding parameters eliciting optimal CD8 T cell activation, signalling and effector function while guaranteeing high antigen specificity, together with the identification of critical regulatory mechanisms acting proximally in the TCR signalling cascade, will directly contribute to optimizing and supporting the development of future TCR-based adoptive T cell strategies for the treatment of malignant diseases.
Abstract:
Limited evidence exists to suggest that the ability to invade and escape protozoan host cell bactericidal activity extends to members of the Chlamydiaceae, intracellular pathogens of humans and animals and evolutionary descendants of amoeba-resisting Chlamydia-like organisms. PCR and microscopic analyses of Chlamydophila abortus infections of Acanthamoeba castellanii revealed uptake of this chlamydial pathogen but, unlike the well-described inhabitant of A. castellanii, Parachlamydia acanthamoebae, Cp. abortus did not appear to propagate and is likely digested by its amoebal host. These data raise doubts about the ability of free-living amoebae to serve as hosts and vectors of pathogenic members of the Chlamydiaceae but reveal opportunities, via comparative genomics, to understand virulence mechanisms used by Chlamydia-like organisms to avoid amoebal digestion.
Abstract:
We aimed to evaluate the technical efficiency of the mini-cutting technique for the vegetative propagation of Paulownia fortunei (Seem.) Hemsl. var. Mikado, as well as the possible existence of anatomical barriers to its rooting. Plants originating from cuttings formed the mini-stumps and, consequently, the clonal mini-garden, which was conducted in a semi-hydroponic system. We evaluated the survival of the mini-stumps and the production of sprouts over five successive collections, the percentage of mini-cutting rooting and their anatomical description. The mini-cuttings were prepared with about 6 to 8 cm in length and two leaves reduced by about 50% in the upper third, retaining an area of approximately 78 cm² (10 cm diameter). The mini-cuttings were placed in 53 cm³ tubes, with a substrate of fine vermiculite and carbonized rice hulls (1:1 v/v), and rooted in an acclimatized greenhouse. After 30 days we evaluated the percentage of rooted mini-cuttings, root vigour (number and length of roots per mini-cutting), callus formation, emission of new shoots and maintenance of the original leaves. The mini-stumps showed 100% survival after five collections, with an average production of 76-114 mini-cuttings/m²/month, and rooting ranged from 70 to 90%. The mini-cutting technique is efficient for propagating adult propagules of the species, and there are no anatomical barriers preventing root emission.
Abstract:
Tau is a microtubule-associated protein enriched in the axon. In Alzheimer's disease, Tau becomes abnormally hyperphosphorylated, accumulates in the somato-dendritic compartment and aggregates to form neurofibrillary tangles (NFTs). These NFTs propagate through the brain in a well-defined order. They appear first in the transentorhinal cortex and then spread to the regions these neurons project to, namely the entorhinal cortex. The NFTs then extend to the hippocampus and to different regions of the cortex and neocortex. Moreover, recent studies have shown that the Tau protein can be secreted by neuronal cell lines and that, when Tau aggregates are injected into a mouse brain, they can enter neurons and induce Tau pathology in the brain. These observations have led to the hypothesis that pathological Tau could be secreted by neurons, then endocytosed by neighbouring cells, thereby propagating the disease. The objective of the present study was therefore to demonstrate the secretion of Tau by neurons and to identify the pathway by which it is secreted. Our results demonstrate that Tau is secreted by wild-type mouse cortical neurons as well as in an overexpression model in HeLa and PC12 cells. Our results indicate that Tau secretion occurs via autophagosomes. Finally, we showed that secreted Tau is dephosphorylated and cleaved compared with intracellular, non-secreted Tau.
Abstract:
Studies on pulse propagation in single-mode optical fibers have attracted interest from a wide area of science and technology, as they have laid down the foundation for an in-depth understanding of the underlying physical principles, especially in the field of optical telecommunications. Foremost among them is the discovery of the optical soliton, which is considered to be one of the most significant events of the twentieth century owing to its remarkable ability to propagate undistorted over long distances and to remain unaffected after collisions with other solitons. To exploit the important properties of optical solitons, innovative mathematical models that take into account the relevant physical properties of single-mode optical fibers demand special attention. This thesis contains a theoretical analysis of studies on soliton pulse propagation in single-mode optical fibers.
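For reference, pulse propagation in a lossless single-mode fibre is commonly modelled by the nonlinear Schrödinger equation, whose fundamental bright soliton balances anomalous group-velocity dispersion against the Kerr nonlinearity. The form below is the standard textbook convention, not necessarily the exact notation used in the thesis.

```latex
% Nonlinear Schrodinger equation for the slowly varying pulse envelope A(z,T)
% in a lossless single-mode fibre (beta_2 < 0: anomalous group-velocity
% dispersion; gamma: Kerr nonlinearity), with its fundamental soliton solution.
i\,\frac{\partial A}{\partial z}
  - \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial T^2}
  + \gamma \left|A\right|^2 A = 0,
\qquad
A(z,T) = \sqrt{P_0}\,\operatorname{sech}\!\left(\frac{T}{T_0}\right)
          \exp\!\left(\frac{i\,\gamma P_0 z}{2}\right),
\qquad
P_0 = \frac{|\beta_2|}{\gamma T_0^{2}} .
```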
Abstract:
Context awareness, dynamic reconfiguration at runtime and heterogeneity are key characteristics of future distributed systems, particularly in ubiquitous and mobile computing scenarios. The main contributions of this dissertation are theoretical as well as architectural concepts facilitating information exchange and fusion in heterogeneous and dynamic distributed environments. Our main focus is on bridging the heterogeneity issues while, at the same time, accounting for uncertain, imprecise and unreliable sensor information in information fusion and reasoning approaches. A domain ontology is used to establish a common vocabulary for the exchanged information. We thereby explicitly support different representations for the same kind of information and provide Inter-Representation Operations that convert between them. Special account is taken of the conversion of associated meta-data that express uncertainty and imprecision. The Unscented Transformation, for example, is applied to propagate Gaussian normal distributions across highly non-linear Inter-Representation Operations. Uncertain sensor information is fused using the Dempster-Shafer Theory of Evidence, as it allows explicit modelling of partial and complete ignorance. We also show how to incorporate the Dempster-Shafer Theory of Evidence into probabilistic reasoning schemes such as Hidden Markov Models in order to be able to consider the uncertainty of sensor information when deriving high-level information from low-level data. For all these concepts we provide architectural support as a guideline for developers of innovative information exchange and fusion infrastructures that are particularly targeted at heterogeneous dynamic environments. Two case studies serve as proof of concept. The first case study focuses on heterogeneous autonomous robots that have to spontaneously form a cooperative team in order to achieve a common goal. The second case study is concerned with an approach to user activity recognition which serves as a baseline for a context-aware adaptive application. Both case studies demonstrate the viability and strengths of the proposed solution and emphasize that the Dempster-Shafer Theory of Evidence should be preferred to pure probability theory in applications involving non-linear Inter-Representation Operations.
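As a concrete illustration of the evidence-fusion step, the sketch below applies Dempster's rule of combination to two mass functions. The frame of discernment, the mass values and the function names are invented for the example and are not taken from the dissertation.

```python
# Hedged sketch of Dempster's rule of combination for two mass functions
# defined over subsets of a common frame of discernment (here: user activities).
from itertools import product

def combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dictionaries."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass that falls on the empty set
    # Normalize by the non-conflicting mass (Dempster's normalization).
    return {s: m / (1.0 - conflict) for s, m in combined.items()}, conflict

# Two sensors reporting evidence about the user's activity, with explicit
# ignorance expressed as mass assigned to the whole frame {walking, sitting}.
frame = frozenset({"walking", "sitting"})
m_accelerometer = {frozenset({"walking"}): 0.7, frame: 0.3}
m_location      = {frozenset({"sitting"}): 0.4, frame: 0.6}

fused, k = combine(m_accelerometer, m_location)
print("conflict:", round(k, 3))
for subset, mass in fused.items():
    print(set(subset), round(mass, 3))
```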
Abstract:
This work demonstrates how partial evaluation can be put to practical use in the domain of high-performance numerical computation. I have developed a technique for performing partial evaluation by using placeholders to propagate intermediate results. For an important class of numerical programs, a compiler based on this technique improves performance by an order of magnitude over conventional compilation techniques. I show that by eliminating inherently sequential data-structure references, partial evaluation exposes the low-level parallelism inherent in a computation. I have implemented several parallel scheduling and analysis programs that study the tradeoffs involved in the design of an architecture that can effectively utilize this parallelism. I present these results using the 9-body gravitational attraction problem as an example.
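The following toy sketch illustrates the general idea of specializing a program by propagating known values and leaving placeholders for unknown inputs. It is a deliberately simplified illustration, not the compiler described above, and every name in it is invented.

```python
# Toy partial evaluator for arithmetic expressions: constants are folded
# immediately, while operations involving unknown placeholders are kept
# as residual code (here, emitted as a Python expression string).

class Placeholder:
    """Stands in for a value that will only be known at run time."""
    def __init__(self, name):
        self.name = name

def specialize(expr, env):
    """Partially evaluate ('op', lhs, rhs) trees; returns a number or a string."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):                      # variable reference
        value = env[expr]
        return value.name if isinstance(value, Placeholder) else value
    op, lhs, rhs = expr
    left, right = specialize(lhs, env), specialize(rhs, env)
    if isinstance(left, (int, float)) and isinstance(right, (int, float)):
        return {"+": left + right, "*": left * right}[op]   # fold constants
    return f"({left} {op} {right})"                          # residual code

# Specialize a * x + b for known coefficients a=3, b=4 and unknown x.
expr = ("+", ("*", "a", "x"), "b")
print(specialize(expr, {"a": 3, "b": 4, "x": Placeholder("x")}))  # -> ((3 * x) + 4)
```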
Abstract:
In this session we look at the sorts of errors that occur in programs, and at how we can use different testing and debugging strategies (such as unit testing and inspection) to track them down. We also look at error handling within the program and at how we can use exceptions to manage errors in a more sophisticated way. These slides are based on Chapter 6 of the book 'Objects First with BlueJ'.
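As a minimal sketch of handling an error with an exception rather than a special return value (shown here in Python for brevity, whereas the course material itself uses Java and BlueJ), with the example class and scenario invented for illustration:

```python
# Minimal sketch (in Python, though the course itself uses Java/BlueJ) of
# signalling an error with an exception and letting the caller decide how
# to recover from it.

class InsufficientFundsError(Exception):
    """Raised when a withdrawal exceeds the current balance."""

def withdraw(balance, amount):
    if amount > balance:
        raise InsufficientFundsError(f"cannot withdraw {amount} from {balance}")
    return balance - amount

try:
    new_balance = withdraw(50, 80)
except InsufficientFundsError as err:
    # The caller handles the error; here we simply report the problem.
    print("withdrawal refused:", err)
```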