890 results for Context Model


Relevance:

30.00%

Publisher:

Abstract:

This study aimed to develop equipment and a system for resistance exercise (RE), based on a squat-type exercise for rodents, with control of training variables. We developed an operant conditioning system composed of sound, light and feeding devices that allowed optimized RE performance by the animal. With this system, it is not necessary to impose fasting or electric shock for the animal to perform the proposed task (muscle contraction). Furthermore, it is possible to perform muscle function tests in vivo within the context of the proposed exercise and to control variables such as intensity, volume (sets and repetitions), exercise session length, rest interval between sets and repetitions, and concentric strength. Based on the experiments conducted, we demonstrated that the proposed model allows more specific control of RE variables, especially the rest interval between sets and repetitions, and encourages the animal to exercise through short-term energy restriction and a "disturbing" stimulus that does not promote alterations in body weight. Therefore, despite experimental limitations, we believe that this RE apparatus is closer to the physiological context observed in humans.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To conduct a cost-effectiveness analysis of a universal childhood hepatitis A vaccination program in Brazil. Methods: An age- and time-dependent dynamic model was developed to estimate the incidence of hepatitis A over 24 years. The analysis was run separately according to the pattern of regional endemicity, one for the South + Southeast (low endemicity) and one for the North + Northeast + Midwest (intermediate endemicity). The decision analysis model compared universal childhood vaccination with the current program of vaccinating high-risk individuals. Epidemiologic and cost estimates were based on data from a nationwide seroprevalence survey of viral hepatitis, primary data collection, National Health Information Systems, and the literature. The analysis was conducted from both the health system and societal perspectives. Costs are expressed in 2008 Brazilian currency (Real). Results: A universal immunization program would have a significant impact on disease epidemiology in all regions, resulting in a 64% reduction in the number of cases of icteric hepatitis, a 59% reduction in deaths from the disease, and a 62% decrease in life years lost, from a national perspective. With a vaccine price of R$16.89 (US$7.23) per dose, vaccination against hepatitis A was a cost-saving strategy in the low and intermediate endemicity regions and in Brazil as a whole, from both the health system and societal perspectives. Results were most sensitive to the frequency of icteric hepatitis, ambulatory care costs, and vaccine costs. Conclusions: A universal childhood vaccination program against hepatitis A could be a cost-saving strategy in all regions of Brazil. These results are useful for the Brazilian government for vaccine-related decisions and for monitoring population impact if the vaccine is included in the National Immunization Program. (C) 2012 Elsevier Ltd. All rights reserved.
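To make the decision-analytic comparison concrete, here is a minimal sketch of how a strategy is classified as cost-saving (dominant) versus assigned an incremental cost-effectiveness ratio; all numbers are hypothetical placeholders, not the study's data.

```python
# Minimal cost-effectiveness comparison sketch; all figures are
# hypothetical placeholders, not the study's data.

def compare(cost_new, ly_lost_new, cost_old, ly_lost_old):
    """Return 'dominant' if the new strategy saves money and health,
    otherwise the ICER in currency units per life year gained."""
    d_cost = cost_new - cost_old            # incremental cost
    ly_gained = ly_lost_old - ly_lost_new   # life years saved vs comparator
    if d_cost <= 0 and ly_gained > 0:
        return "dominant (cost-saving)"
    return d_cost / ly_gained

# Hypothetical 24-year totals, in 2008 R$ and life years lost
current = {"cost": 1_000_000.0, "ly_lost": 10_000.0}
universal = {"cost": 900_000.0, "ly_lost": 3_800.0}   # ~62% fewer life years lost

print(compare(universal["cost"], universal["ly_lost"],
              current["cost"], current["ly_lost"]))   # dominant (cost-saving)
```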

Relevance:

30.00%

Publisher:

Abstract:

A non-Markovian one-dimensional random walk model is studied with emphasis on the phase diagram, showing all the diffusion regimes along with the exactly determined critical lines. The model, known as the Alzheimer walk, is endowed with memory-controlled diffusion, responsible for the model's long-range correlations, and is characterized by a rich variety of diffusive regimes. The importance of this model is that superdiffusion arises not from memory per se, but rather from the loss of memory. Recently reported numerical and analytical estimates of the Hurst exponent are reviewed here. We report the finding of two previously overlooked phases, namely evanescent log-periodic diffusion and log-periodic diffusion with escape, both with Hurst exponent H = 1/2. In the former, the log-periodicity is damped, whereas in the latter the first moment diverges. These phases further enrich the already intricate phase diagram. The results are discussed in the context of phase transitions, aging phenomena, and symmetry breaking.
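For readers who want to experiment with the diffusion regimes, here is a minimal simulation sketch of a memory-loss walk of the kind discussed above, following the convention common in the literature: at each step the walker recalls a uniformly random step from the first fraction f of its history and repeats it with probability p (reverses it otherwise). The parameter names p and f and the crude Hurst estimate from the ensemble spread are assumptions of this sketch, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def alzheimer_walk(n_steps, p, f):
    """One realization of the memory-loss walk.

    p: probability of repeating the recalled step (reversed otherwise)
    f: fraction of the early history the walker can still recall
    """
    steps = np.empty(n_steps, dtype=int)
    steps[0] = 1  # conventional first step
    for t in range(1, n_steps):
        recalled = steps[rng.integers(0, max(1, int(f * t)))]
        steps[t] = recalled if rng.random() < p else -recalled
    return np.cumsum(steps)

# Crude Hurst estimate: the ensemble spread grows as sigma(x_t) ~ t^H
n, runs = 4000, 200
walks = np.array([alzheimer_walk(n, p=0.9, f=0.3) for _ in range(runs)])
t = np.array([250, 500, 1000, 2000, 3999])
H = np.polyfit(np.log(t), np.log(walks[:, t].std(axis=0)), 1)[0]
print(f"estimated Hurst exponent H = {H:.2f}")  # H = 1/2 is ordinary diffusion
```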

Relevance:

30.00%

Publisher:

Abstract:

Spin systems in the presence of disorder are described by two sets of degrees of freedom, associated with orientational (spin) and disorder variables, which may be characterized by two distinct relaxation times. Disordered spin models have been mostly investigated in the quenched regime, which is the usual situation in solid state physics, and in which the relaxation time of the disorder variables is much larger than the typical measurement times. In this quenched regime, disorder variables are fixed, and only the orientational variables are duly thermalized. Recent studies in the context of lattice statistical models for the phase diagrams of nematic liquid-crystalline systems have stimulated interest in going beyond the quenched regime. The phase diagrams predicted by these calculations for a simple Maier-Saupe model turn out to be qualitatively different from the quenched case if the two sets of degrees of freedom are allowed to reach thermal equilibrium during the experimental time, which is known as the fully annealed regime. In this work, we develop a transfer matrix formalism to investigate annealed disordered Ising models on two hierarchical structures, the diamond hierarchical lattice (DHL) and the Apollonian network (AN). The calculations follow the same steps used for the analysis of simple uniform systems, which amounts to deriving proper recurrence maps for the thermodynamic and magnetic variables in terms of the generations of the construction of the hierarchical structures. In this context, we may consider different kinds of disorder and different types of ferromagnetic and antiferromagnetic interactions. In the present work, we analyze the effects of dilution, which are produced by the removal of some magnetic ions. The system is treated in a "grand canonical" ensemble. The introduction of two extra fields, related to the concentrations of two different types of particles, leads to higher-rank transfer matrices as compared with the formalism for the usual uniform models. Preliminary calculations on a DHL indicate that there is a phase transition for a wide range of dilution concentrations. Ising spin systems on the AN are known to be ferromagnetically ordered at all temperatures; in the presence of dilution, however, there are indications of a disordered (paramagnetic) phase at low concentrations of magnetic ions.
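As a point of reference for the recurrence-map approach, the sketch below implements the standard transmissivity recursion for the uniform (undiluted) Ising model on the diamond hierarchical lattice, t' = 2t^2/(1 + t^4) with t = tanh(J/k_B T), and locates its unstable fixed point, which gives the critical temperature. The annealed diluted case treated in the work requires the higher-rank transfer matrices described above and is not reproduced here.

```python
import numpy as np

def rg_map(t):
    """One generation of the diamond-lattice decimation: two bonds in
    series (t * t), then two such branches in parallel."""
    s = t * t
    return 2 * s / (1 + s * s)

# The critical point is the unstable fixed point of the map; bisection
# on the direction of the renormalization flow locates it.
lo, hi = 1e-6, 1 - 1e-6
for _ in range(100):
    m = 0.5 * (lo + hi)
    t = m
    for _ in range(80):       # iterate many generations
        t = rg_map(t)
    if t < 0.5:               # flowed to the paramagnetic fixed point t = 0
        lo = m
    else:                     # flowed to the ferromagnetic fixed point t = 1
        hi = m

t_c = 0.5 * (lo + hi)
print(f"t_c = tanh(K_c) = {t_c:.4f},  K_c = J/(k_B T_c) = {np.arctanh(t_c):.4f}")
```

The map's stable fixed points at t = 0 and t = 1 correspond to the paramagnetic and ferromagnetic phases; the unstable fixed point in between (t_c close to 0.5437) marks the transition.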

Relevance:

30.00%

Publisher:

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are supposed to form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a "revision" of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of the magnetic fields and into the acceleration of high-energy particles via shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, this model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena, while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint in the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:
• Is it possible to model "self-consistently" the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters and the energetics of turbulence in the ICM from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM that are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that during a merger a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of the relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass.
The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, L_X) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power ≈ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ≈ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency, and it allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a "revision" of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05-0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an "average" size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection process of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is however well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (R_H) of Radio Halos and their radio power, and between R_H and the cluster mass within the Radio Halo region, M_H. In particular, this last "geometrical" M_H-R_H correlation allows us to "observationally" overcome the limitation of the "average" size of Radio Halos. Thus in this Chapter, by making use of this "geometrical" correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new powerful tool of investigation, and we show that all the observed correlations (P_R-R_H, P_R-M_H, P_R-T, P_R-L_X, ...) now become well understood in the context of the re-acceleration model.
In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio-emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
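To make the cut-off argument concrete, the sketch below balances a systematic turbulent acceleration rate (gamma/tau_acc) against synchrotron plus inverse-Compton losses and converts the resulting break energy into a synchrotron frequency. The acceleration time, field strength, and redshift are illustrative assumptions, not values fitted in the thesis.

```python
import math

# cgs constants
e, m_e, c, sigma_T = 4.803e-10, 9.109e-28, 2.998e10, 6.652e-25
yr = 3.156e7

def break_frequency(B_uG, z, tau_acc_yr):
    """Synchrotron frequency of the electron-spectrum cut-off set by the
    balance between systematic acceleration (gamma / tau_acc) and
    radiative losses (synchrotron + inverse Compton on the CMB)."""
    B = B_uG * 1e-6
    B_ic = 3.25e-6 * (1 + z) ** 2             # CMB-equivalent field, Gauss
    beta = (4 / 3) * sigma_T / (m_e * c) * (B**2 + B_ic**2) / (8 * math.pi)
    gamma_b = 1.0 / (tau_acc_yr * yr * beta)  # acceleration rate = loss rate
    nu_g = e * B / (2 * math.pi * m_e * c)    # electron gyrofrequency
    return 1.5 * gamma_b**2 * nu_g            # characteristic frequency, Hz

# Illustrative numbers: 1 muG field, z = 0.2, 10^8 yr acceleration time
print(f"nu_b = {break_frequency(1.0, 0.2, 1e8) / 1e6:.0f} MHz")
```

With these inputs the break falls at a few hundred MHz, which illustrates why low-frequency surveys should catch halos that are invisible at 1.4 GHz.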

Relevance:

30.00%

Publisher:

Abstract:

The dynamicity and heterogeneity that characterize pervasive environments raise new challenges in the design of mobile middleware. Pervasive environments are characterized by a significant degree of heterogeneity, variability, and dynamicity that conventional middleware solutions are not able to manage adequately. Originally designed for use in a relatively static context, such middleware systems tend to hide low-level details to provide applications with a transparent view of the underlying execution platform. In mobile environments, however, the context is extremely dynamic and cannot be managed by a priori assumptions. Novel middleware should therefore support mobile computing applications in the task of adapting their behavior to frequent changes in the execution context, that is, it should become context-aware. In particular, this thesis has identified the following key requirements for novel context-aware middleware that existing solutions do not yet fulfil. (i) Middleware solutions should support interoperability between possibly unknown entities by providing expressive representation models that make it possible to describe interacting entities, their operating conditions and the surrounding world, i.e., their context, according to an unambiguous semantics. (ii) Middleware solutions should support distributed applications in the task of reconfiguring and adapting their behavior/results to ongoing context changes. (iii) Context-aware middleware support should be deployable on heterogeneous devices under variable operating conditions, such as different user needs, application requirements, available connectivity and device computational capabilities, as well as changing environmental conditions. Our main claim is that the adoption of semantic metadata to represent context information and context-dependent adaptation strategies makes it possible to build context-aware middleware suitable for all dynamically available portable devices. Semantic metadata provide powerful knowledge representation means to model even complex context information, and support automated reasoning to infer additional and/or more complex knowledge from available context data. In addition, we suggest that, by adopting proper configuration and deployment strategies, semantic support features can be provided to differentiated users and devices according to their specific needs and current context. This thesis has investigated novel design guidelines and implementation options for semantic-based context-aware middleware solutions targeted at pervasive environments. These guidelines have been applied to different application areas within pervasive computing that would particularly benefit from the exploitation of context. Common to all applications is the key role of context in enabling mobile users to personalize applications based on their needs and current situation. The main contributions of this thesis are (i) the definition of a metadata model to represent and reason about context, (ii) the definition of a model for the design and development of context-aware middleware based on semantic metadata, (iii) the design of three novel middleware architectures and the development of a prototype implementation for each of these architectures, and (iv) the proposal of a viable approach to the portability issues raised by the adoption of semantic support services in pervasive applications.
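As a toy illustration of the semantic-metadata idea (an expressive representation plus automated inference), the sketch below stores context as subject-predicate-object triples and applies one forward-chaining rule to derive new facts. The vocabulary and rule are invented for this example; they are not the thesis's metadata model.

```python
# Toy semantic context store: facts as (subject, predicate, object) triples,
# plus one forward-chaining rule to infer new context from available data.

facts = {
    ("alice", "locatedIn", "room42"),
    ("room42", "partOf", "museumWing"),
    ("alice", "usesDevice", "pda-7"),
}

def infer(facts):
    """If X locatedIn Y and Y partOf Z, infer X locatedIn Z (transitivity)."""
    new = set(facts)
    changed = True
    while changed:
        changed = False
        for (s, p, o) in list(new):
            if p == "locatedIn":
                for (s2, p2, o2) in list(new):
                    if p2 == "partOf" and s2 == o and (s, "locatedIn", o2) not in new:
                        new.add((s, "locatedIn", o2))
                        changed = True
    return new

for triple in sorted(infer(facts) - facts):
    print("inferred:", triple)   # ('alice', 'locatedIn', 'museumWing')
```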

Relevance:

30.00%

Publisher:

Abstract:

Context-aware computing is currently considered the most promising approach to overcoming information overload and speeding up access to relevant information and services. Context-awareness may be derived from many sources, including user profile and preferences, network information, and sensor analysis; usually context-awareness relies on the ability of computing devices to interact with the physical world, i.e., with the natural and artificial objects hosted within the "environment". Ideally, context-aware applications should not be intrusive and should be able to react according to the user's context, with minimum user effort. Context is an application-dependent multidimensional space, and location has been an important part of it since the very beginning. Location can be used to guide applications, providing the information or functions that are most appropriate for a specific position. Hence location systems play a crucial role. There are several technologies and systems for computing location to varying degrees of accuracy, tailored to specific space models, i.e., indoor or outdoor, structured or unstructured spaces. The research challenge faced by this thesis is pedestrian positioning in heterogeneous environments. In particular, the focus is on pedestrian identification, localization, orientation and activity recognition. This research was mainly carried out within the "mobile and ambient systems" workgroup of EPOCH, an FP6 NoE on the application of ICT to Cultural Heritage. Therefore applications in Cultural Heritage sites were the main target of the context-aware services discussed. Cultural Heritage sites are considered significant test-beds for context-aware computing for many reasons. For example, building a smart environment in museums or in protected sites is a challenging task, because localization and tracking are usually based on technologies that are difficult to hide or harmonize within the environment. Therefore it is expected that the experience gained through this research may be useful also in domains other than Cultural Heritage. This work presents three different approaches to pedestrian identification, positioning and tracking: pedestrian navigation by means of a wearable inertial sensing platform assisted by a vision-based tracking system for initial settings and real-time calibration; pedestrian navigation by means of a wearable inertial sensing platform augmented with GPS measurements; and pedestrian identification and tracking, combining the vision-based tracking system with WiFi localization. The proposed localization systems have been mainly used to enhance Cultural Heritage applications, providing information and services depending on the user's actual context, in particular on the user's location.
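A minimal sketch of the second approach listed above (inertial dead reckoning corrected by GPS), under simplifying assumptions invented here: step events, headings, and step lengths are taken as given, and the fusion is a fixed-gain blend rather than a full Kalman filter.

```python
import math

def dead_reckoning_step(pos, heading_rad, step_len_m):
    """Advance the pedestrian position by one detected step."""
    x, y = pos
    return (x + step_len_m * math.sin(heading_rad),
            y + step_len_m * math.cos(heading_rad))

def fuse_with_gps(pos_dr, pos_gps, gain=0.2):
    """Fixed-gain correction: pull the dead-reckoned position toward
    the noisier but drift-free GPS fix."""
    return tuple(d + gain * (g - d) for d, g in zip(pos_dr, pos_gps))

pos = (0.0, 0.0)
track = [(0.0, 0.75), (0.1, 0.75), (0.2, 0.74)]  # (heading, step length) per step
for heading, step in track:
    pos = dead_reckoning_step(pos, heading, step)
pos = fuse_with_gps(pos, pos_gps=(0.35, 2.10))   # hypothetical GPS fix
print(f"fused position = ({pos[0]:.2f}, {pos[1]:.2f}) m")
```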

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with inflation theory, focusing on the model of Jarrow & Yildirim, which is nowadays used when pricing inflation derivatives. After recalling the main results about short and forward interest rate models, the dynamics of the main components of the market are derived. Then the most important inflation-indexed derivatives are explained (zero-coupon swap, year-on-year, cap and floor), and their pricing procedure is shown step by step. Calibration is explained and performed both with a common method and with a heuristic, non-standard one. The model is also enriched with credit risk, which makes it possible to take into account the possibility of default of the counterparty of a contract. In this context, the general method of pricing is derived, with the introduction of defaultable zero-coupon bonds, and the Monte Carlo method is treated in detail and used to price a concrete example of a contract. Appendices: A: martingale measures, Girsanov's theorem and the change of numeraire. B: some aspects of the theory of Stochastic Differential Equations; in particular, the solution of linear SDEs, and the Feynman-Kac Theorem, which shows the connection between SDEs and Partial Differential Equations. C: some useful results about the normal distribution.
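As a concrete anchor for the zero-coupon inflation swap mentioned above, the sketch below uses the standard model-independent valuation (the foreign-currency analogy, as in Brigo & Mercurio): the inflation leg paying N[I(T)/I(0) - 1] at T is worth N[P_r(0,T) - P_n(0,T)], so the par fixed rate K solves (1+K)^T = P_r(0,T)/P_n(0,T). The flat nominal and real rates are illustrative inputs, not calibrated values.

```python
import math

def zciis_par_rate(P_n, P_r, T):
    """Par fixed rate K of a T-year zero-coupon inflation-indexed swap.
    The fixed leg N[(1+K)^T - 1] matches the inflation leg, whose value
    N[P_r(0,T) - P_n(0,T)] is model-independent (and so holds under
    Jarrow & Yildirim as well)."""
    return (P_r / P_n) ** (1.0 / T) - 1.0

T = 10.0
P_n = math.exp(-0.040 * T)   # nominal discount bond, flat 4.0% rate
P_r = math.exp(-0.015 * T)   # real discount bond, flat 1.5% rate
K = zciis_par_rate(P_n, P_r, T)
print(f"10y ZCIIS par (break-even inflation) rate = {K:.3%}")  # about 2.5%
```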

Relevance:

30.00%

Publisher:

Abstract:

The central objective of research in Information Retrieval (IR) is to discover new techniques to retrieve relevant information in order to satisfy an Information Need. The Information Need is satisfied when relevant information can be provided to the user. In IR, relevance is a fundamental concept which has changed over time, from popular to personal: what was considered relevant before was information for the whole population, whereas what is considered relevant now is specific information for each user. Hence, there is a need to connect the behavior of the system to the condition of a particular person and their social context; thereby an interdisciplinary sector called Human-Centered Computing was born. For the modern search engine, the information extracted for the individual user is crucial. According to Personalized Search (PS), two different techniques are necessary to personalize a search: contextualization (interconnected conditions that occur in an activity), and individualization (characteristics that distinguish an individual). This shift of focus to the individual's need undermines the rigid linearity of the classical model, which was overtaken by the "berry picking" model; the latter explains that search terms change thanks to the informational feedback received from the search activity, introducing the concept of evolution of search terms. The development of Information Foraging theory, which observed the correlations between animal foraging and human information foraging, also contributed to this transformation through attempts to optimize the cost-benefit ratio. This thesis arose from the need to satisfy human individuality when searching for information, and it develops a synergistic collaboration between the frontiers of technological innovation and the recent advances in IR. The search method developed exploits what is relevant for the user by radically changing the way in which an Information Need is expressed, because it is now expressed through the generation of the query and its own context. In fact, the method was conceived to improve the quality of search by rewriting the query based on contexts automatically generated from a local knowledge base. Furthermore, the idea of optimizing each IR system has led to developing it as a middleware of interaction between the user and the IR system. Thereby the system has just two possible actions: rewriting the query, and reordering the results. Actions equivalent to this approach are described in the PS literature, which generally exploits information derived from the analysis of user behavior, while the proposed approach exploits knowledge provided by the user. The thesis goes further to propose a novel method for an assessment procedure, according to the "Cranfield paradigm", in order to evaluate this type of IR system. The results achieved are interesting considering both the effectiveness achieved and the innovative approach undertaken, together with the several applications inspired by the use of a local knowledge base.
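A toy sketch of the middleware's two actions (query rewriting against a local knowledge base, then result reordering). The knowledge base, scoring, and data are invented for illustration and do not reproduce the thesis's method.

```python
# Toy IR middleware with exactly two actions: rewrite the query using a
# local knowledge base, then reorder the engine's results by term overlap.

knowledge_base = {            # hypothetical user-provided context terms
    "model": ["context", "representation"],
    "search": ["retrieval", "query"],
}

def rewrite_query(query):
    terms = query.lower().split()
    expanded = terms + [t for w in terms for t in knowledge_base.get(w, [])]
    return " ".join(dict.fromkeys(expanded))   # dedupe, keep order

def reorder(results, query):
    q = set(query.split())
    return sorted(results, key=lambda doc: -len(q & set(doc.lower().split())))

q = rewrite_query("context Model")
docs = ["a context representation model", "shopping list", "query retrieval model"]
print(q)                 # context model representation
print(reorder(docs, q))  # most context-relevant document first
```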

Relevance:

30.00%

Publisher:

Abstract:

The present thesis investigates the issue of work-family conflict and facilitation in a healthcare context, using the DISC Model (De Jonge and Dormann, 2003, 2006). The general aim is articulated in two empirical studies reported in this dissertation's chapters. Chapter 1 reports the psychometric properties of the Demand-Induced Strain Compensation Questionnaire. Although the empirical evidence on the DISC Model has received a fair amount of attention in the literature, both for its theoretical principles and for the instrument developed to operationalize them (DISQ; De Jonge, Dormann, Van Vegchel, Von Nordheim, Dollard, Cotton and Van den Tooren, 2007), there are no studies based solely on a psychometric investigation of the instrument. In addition, no previous studies have ever used the DISC as a model or measurement instrument in an Italian context. Thus the first chapter of the present dissertation is based on a psychometric investigation of the DISQ. Chapter 2 reports a longitudinal study. The purpose was to examine, using the DISC model, the relationship between emotional job characteristics, the work-family interface and emotional exhaustion among a health care population. We started by testing the Triple Match Principle of the DISC Model using solely the emotional dimension of the strain-stress process (i.e., emotional demands, emotional resources and emotional exhaustion). Then we investigated the mediating role played by w-f conflict and w-f facilitation in the relation between emotional job characteristics and emotional exhaustion. Finally, we compared the mediation model across workers involved in chronic illness home demands and workers who are not. A general conclusion integrates and discusses the main findings of the studies reported in this dissertation.

Relevance:

30.00%

Publisher:

Abstract:

The work presented in this thesis focuses on the open-ended coaxial-probe frequency-domain reflectometry technique for complex permittivity measurement, at microwave frequencies, of dispersive dielectric multilayer materials. An effective dielectric model is introduced and validated to extend the applicability of this technique to multilayer materials in an on-line system context. In addition, the thesis presents: 1) a numerical study regarding the imperfection of the contact at the probe-material interface, 2) a review of the available models and techniques, and 3) a new classification of the extraction schemes, with guidelines on how they can be used to improve the overall performance of the probe according to the problem requirements.
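To give a flavour of an effective-medium treatment of a multilayer dielectric (the thesis's own effective model is more elaborate), the sketch below computes the series-capacitor approximation for layers stacked normal to the probe field, 1/eps_eff = sum_i f_i/eps_i, using complex permittivities to capture loss. The layer stack is hypothetical.

```python
# Series (normal-field) effective permittivity of a dielectric multilayer:
# 1/eps_eff = sum_i f_i / eps_i, with f_i the thickness fraction of layer i.
# Complex values: eps = eps' - j*eps'' (eps'' encodes dielectric loss).

def effective_permittivity(layers):
    """layers: list of (thickness_m, complex_permittivity)."""
    total = sum(t for t, _ in layers)
    return 1.0 / sum((t / total) / eps for t, eps in layers)

stack = [
    (0.5e-3, 2.1 - 0.02j),   # hypothetical low-loss cover layer
    (2.0e-3, 30.0 - 8.0j),   # hypothetical dispersive bulk material
]
eps = effective_permittivity(stack)
print(f"eps_eff = {eps.real:.2f} - {-eps.imag:.2f}j")
```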

Relevance:

30.00%

Publisher:

Abstract:

Neoplastic overgrowth depends on the cooperation of several mutations ultimately leading to major rearrangements in cellular behaviour. The molecular crosstalk occurring between precancerous and normal cells strongly influences the early steps of the tumourigenic process as well as later stages of the disease. Precancerous cells are often removed by cell death from normal tissues, but the mechanisms responsible for such fundamental safeguard processes remain in part elusive. To gain insight into these phenomena I took advantage of the clonal analysis methods available in Drosophila to study the phenotypes due to loss of function of the neoplastic tumour suppressor lethal giant larvae (lgl). I found that lgl mutant cells growing in wild-type imaginal wing discs are subject to the phenomenon of cell competition and are eliminated by JNK-dependent cell death because they express very low levels of dMyc oncoprotein compared to those in the surrounding tissue. Indeed, in non-competitive backgrounds lgl mutant clones are able to overgrow and upregulate dMyc, overwhelming the neighbouring tissue and forming tumourous masses that display several cancer hallmarks. These phenotypes are completely abolished by reducing dMyc abundance within mutant cells, while increasing it in lgl clones growing in a competitive context re-establishes their tumourigenic potential. Similarly, the neoplastic growth observed upon the oncogenic cooperation between lgl mutation and activated Ras/Raf/MAPK signalling was found to be characterised by, and dependent on, the ability of cancerous cells to upregulate dMyc with respect to the adjacent normal tissue, through both transcriptional and post-transcriptional mechanisms, thereby confirming its key role in lgl-induced tumourigenesis. These results provide the first evidence that the dMyc oncoprotein is required in lgl mutant tissue to promote invasive overgrowth in developing and adult epithelial tissues, and that dMyc abundance inside versus outside lgl mutant clones plays a key role in driving neoplastic overgrowth.

Relevance:

30.00%

Publisher:

Abstract:

The resolution of infections with Leishmania major is based on the secretion of IFN-γ by both CD4+ and CD8+ T cells. To date, only one epitope from the parasitic LACK protein has been described in the literature for an effective CD4+ T cell-mediated immune response. The aim of the present work was therefore to investigate possible MHC I-dependent CD8+ T cell responses. For this approach, the effect of vaccination with LACK protein fused to the protein transduction domain of HIV-1 (TAT) was analyzed first. The efficacy of TAT-LACK with respect to CD8+ T cells was demonstrated by in vivo protein vaccination of resistant C57BL/6 mice in depletion experiments. The processing of proteins prior to the presentation of immunogenic peptides to T cells is strictly required. Therefore, this work investigated the role of the IFN-γ-inducible immunoproteasome in the processing of parasitic proteins and the presentation of peptides bound to MHC I molecules, by in vivo and in vitro experiments. An immunoproteasome-independent processing could be demonstrated in this work. Furthermore, parasite lysate (SLA) from both promastigotes and amastigotes was fractionated. In further experiments, these fractions can be screened for immunodominant proteins/peptides. Finally, epitope predictions for CD8+ T cells were performed with computer-assisted software for both parasitic life stages. 300 of these epitopes were synthesized and will be used in further experiments to characterize their immunogenic properties. Taken together, the present work contributes substantially to the understanding of the complex mechanisms of processing and, ultimately, to the identification of possible CD8+ T cell epitopes. A detailed understanding of the processing of CD8+ T cell epitopes of Leishmania major via the MHC class I pathway is of utmost importance. The characterization and identification of these peptides will have a decisive influence on the further development of vaccines against this important human-pathogenic parasite.

Relevance:

30.00%

Publisher:

Abstract:

Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmer's responsibility to explicitly manage the memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages working at a high level of abstraction were proposed, which rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm on modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de-facto standard for shared memory programming. In the first part, the costs of algorithms for synchronization and data partitioning are analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders of magnitude of speedup and energy efficiency compared to the "pure software" version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with them.
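To illustrate why irregular parallelism stresses the data-partitioning algorithms discussed above, the sketch below compares the makespan of static block partitioning against dynamic (work-queue) scheduling on an irregular workload. It is a scheduling simulation invented for illustration, not the OpenMP runtime itself.

```python
import random

random.seed(0)
n_iters, n_cores = 64, 4
work = [random.choice([1, 1, 1, 10]) for _ in range(n_iters)]  # irregular costs

def static_makespan(work, cores):
    """Block partitioning: each core gets one contiguous chunk of iterations
    (roughly what OpenMP 'static' scheduling does)."""
    loads = [0] * cores
    chunk = len(work) // cores
    for c in range(cores):
        loads[c] = sum(work[c * chunk:(c + 1) * chunk])
    return max(loads)

def dynamic_makespan(work, cores):
    """Work queue: each iteration goes to whichever core frees up first."""
    loads = [0] * cores
    for w in work:
        idx = loads.index(min(loads))
        loads[idx] += w
    return max(loads)

print("static :", static_makespan(work, n_cores))
print("dynamic:", dynamic_makespan(work, n_cores))  # lower = better balance
```

The dynamic schedule balances the heavy iterations across cores at the price of per-iteration queue operations, which is exactly the runtime cost that an embedded OpenMP implementation must keep small.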

Relevance:

30.00%

Publisher:

Abstract:

In this work, the Generalized Beam Theory (GBT) is used as the main tool to analyze the mechanics of thin-walled beams. After an introduction to the subject and a quick review of some of the most well-known approaches to describing the behaviour of thin-walled beams, a novel formulation of the GBT is presented. This formulation contains the classic shear-deformable GBT available in the literature and contributes an additional description of cross-section warping that is variable along the wall thickness as well as along the wall midline. Shear deformation is introduced in such a way that the classical shear strain components of the Timoshenko beam theory are recovered exactly. According to the new kinematics proposed, a revised form of the cross-section analysis procedure is devised, based on a unique modal decomposition. Later, a procedure for the a posteriori reconstruction of all three-dimensional stress components in the finite element analysis of thin-walled beams using the GBT is presented. The reconstruction is simple and based on the use of three-dimensional equilibrium equations and of the RCP procedure. Finally, once the stress reconstruction procedure is presented, a study of several open issues concerning the constitutive relations in the GBT is carried out. Specifically, a constitutive law based on mirroring the kinematic constraints of the GBT model into a specific stress field assumption is proposed. It is shown that this method is equally valid for isotropic and orthotropic beams and coincides with the conventional GBT approach available in the literature. Later on, an analogous procedure is presented for the case of laminated beams. Lastly, as a way to improve the inherently poor description of shear deformability in the GBT, the introduction of shear correction factors is proposed. Throughout this work, numerous examples are provided to assess the validity of all the proposed contributions to the field.
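As a small worked example of the shear correction factors mentioned at the end, the sketch below recovers the classical k = 5/6 for a rectangular section by energy equivalence between the parabolic shear-stress distribution and the Timoshenko average. This is the textbook calculation, not the GBT-specific factors derived in the work.

```python
import numpy as np

# Energy-equivalence shear correction factor for a rectangular section:
# tau(y) = (3V / 2A) * (1 - (2y/h)^2), the Jourawski parabolic profile;
# k = V^2 / (A * integral(tau^2 dA)) gives the classical 5/6.

b, h, V = 1.0, 1.0, 1.0
A = b * h
n = 200_000
dy = h / n
y = -h / 2 + (np.arange(n) + 0.5) * dy        # midpoint quadrature nodes
tau = (3 * V / (2 * A)) * (1 - (2 * y / h) ** 2)
integral = np.sum(tau**2) * dy * b            # integral of tau^2 over the section
k = V**2 / (A * integral)
print(f"shear correction factor k = {k:.4f}")  # 0.8333 = 5/6
```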