883 results for One and many


Relevance:

100.00%

Publisher:

Abstract:

Includes bibliography

Relevance:

100.00%

Publisher:

Abstract:

Augmented Reality (AR) systems that use optical tracking with fiducial markers for registration have had an important role in popularizing this technology, since only a personal computer with a conventional webcam is required. However, in most of these applications the virtual elements are always shown in the foreground, so a real element never occludes a virtual one. The method presented enables AR environments based on fiducial markers to support mutual occlusion between a real element and many virtual ones, according to the elements' positions (depth) in the environment. © 2012 IEEE.
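As a rough illustration of depth-based occlusion (a minimal sketch with placeholder data, not the paper's implementation), a virtual pixel can be composited only where it is closer to the camera than the real scene:

import numpy as np

# Depth-test compositing: the virtual object is drawn only where it is
# closer to the camera than the estimated real-scene depth; elsewhere
# the camera image shows through, so real elements occlude virtual ones.
h, w = 4, 4
real_rgb   = np.full((h, w, 3), 0.2)   # camera image (placeholder values)
real_depth = np.full((h, w), 2.0)      # estimated depth of the real scene [m]
real_depth[:, :2] = 0.5                # a real object close to the camera
virt_rgb   = np.ones((h, w, 3))        # rendered virtual object
virt_depth = np.full((h, w), 1.0)      # virtual depth from the marker pose
mask = virt_depth < real_depth         # virtual wins only where it is closer
composite = np.where(mask[..., None], virt_rgb, real_rgb)
print(mask.astype(int))                # 0 where the real object occludes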

Relevance:

100.00%

Publisher:

Abstract:

Triatoma lenti and Triatoma sherlocki are hemipterans that belong to the brasiliensis subcomplex. In triatomines, the constitutive heterochromatin pattern is species-specific and, in many cases, allows for the grouping of species. Thus, we cytogenetically analyzed T. sherlocki and T. lenti using C-banding, and we compared the results with previous ones obtained in other species of the brasiliensis subcomplex. Both species were found to have a male diploid chromosome number of 22 chromosomes (2n = 20A + XY) with heterochromatic blocks at one or both chromosomal ends of all autosomal pairs. During early meiotic prophase, they showed a large heteropycnotic chromocenter constituted by the association of both sex chromosomes plus two autosomal pairs, and many heterochromatic blocks dispersed inside the nucleus. All of these cytogenetic characteristics are similar to those observed in other species of the brasiliensis subcomplex, results which confirm the grouping of T. sherlocki and T. lenti within this subcomplex. However, we emphasize the importance of other approaches, such as molecular analysis, to confirm the placement of T. lenti within the brasiliensis subcomplex. © 2012 Elsevier B.V.

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, cancer is one of the main causes of death, and many efforts worldwide have been directed at finding new treatments and approaches to eliminate or reduce this group of disorders. Chemotherapy is the main treatment for cancer; however, conventional schedules based on the maximum tolerated dose (MTD) show several side effects and frequently allow the development of drug resistance. In this review we present the evidence that metronomic chemotherapy, based on the frequent administration of low or intermediate doses of chemotherapeutics, is as efficient as MTD and works better in some situations. Finally, we present some data indicating that noncytotoxic concentrations of antineoplastic agents are able both to up-regulate the immune system and to increase the susceptibility of tumor cells to cytotoxic T lymphocytes. Taken together, data from the literature provide evidence that low concentrations of selected chemotherapeutic agents, rather than conventional high doses, should be chosen for combination with immunotherapy.

Relevance:

100.00%

Publisher:

Abstract:

Hypoxia is one of many factors involved in the regulation of the IGF system. However, no information is available regarding the regulation of the IGF system by acute hypoxia in humans. Objective: The aim of this study was to evaluate the effect of acute hypoxia on the IGF system of children. Design: Twenty-seven previously healthy children (14 boys and 13 girls) aged 15 days to 9.5 years were studied in two different situations: during a hypoxemic state (HS) due to acute respiratory distress and after full recovery to a normoxemic state (NS). In these two situations oxygen saturation was assessed with a pulse oximeter and blood samples were collected for serum IGF-I, IGF-II, IGFBP-1, IGFBP-3, ALS and insulin determination by ELISA, GH determination by fluoroimmunometric assay, and IGF1R gene expression analysis in peripheral lymphocytes by quantitative real-time PCR. Data were paired and analyzed by the Wilcoxon non-parametric test. Results: Oxygen saturation was significantly lower during HS than in NS (P<0.0001). IGF-I and IGF-II levels were lower during HS than in NS (P<0.0001 and P=0.0004, respectively). IGFBP-3 levels were also lower in HS than in NS (P=0.0002), while ALS and basal GH levels were higher during HS (P=0.0015 and P=0.014, respectively). Moreover, IGFBP-1 levels were higher during HS than in NS (P=0.004). No difference was found regarding insulin levels. The expression of IGF1R mRNA, quantified as 2^(-ΔΔCt), was higher during HS than in NS (P=0.03). Conclusion: The above results confirm a role of hypoxia in the regulation of the IGF system in humans as well. This effect could be exerted directly on the liver and/or mediated by GH, and it is not restricted to hepatocytes but involves other cell lines. During acute hypoxia, a combination of alterations usually associated with reduced IGF action was observed. The higher expression of IGF1R mRNA may reflect an up-regulation of the transcriptional process. (C) 2012 Elsevier Ltd. All rights reserved.
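For reference, the 2^(-ΔΔCt) notation refers to the standard Livak relative-quantification formula (a general method, not detailed in the abstract; the reference-gene label below is generic):

\Delta\Delta C_T = \left(C_T^{\mathrm{IGF1R}} - C_T^{\mathrm{ref}}\right)_{\mathrm{HS}} - \left(C_T^{\mathrm{IGF1R}} - C_T^{\mathrm{ref}}\right)_{\mathrm{NS}}, \qquad \text{relative expression} = 2^{-\Delta\Delta C_T}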

Relevance:

100.00%

Publisher:

Abstract:

Abstract

Background: Over the last years, a number of researchers have investigated how to improve the reuse of crosscutting concerns. New possibilities have emerged with the advent of aspect-oriented programming, and many frameworks were designed considering the abstractions provided by this new paradigm. We call this type of framework Crosscutting Frameworks (CF), as they usually encapsulate a generic and abstract design of one crosscutting concern. However, most of the proposed CFs employ white-box strategies in their reuse process, requiring mainly two technical skills: (i) knowing syntax details of the programming language employed to build the framework and (ii) being aware of the architectural details of the CF and its internal nomenclature. Another problem is that the reuse process can only be initiated once the development process reaches the implementation phase, preventing it from starting earlier.

Method: In order to solve these problems, we present in this paper a model-based approach for reusing CFs which shields application engineers from technical details, letting them concentrate on what the framework really needs from the application under development. To support our approach, two models are proposed: the Reuse Requirements Model (RRM) and the Reuse Model (RM). The former is used to describe the framework structure, and the latter is in charge of supporting the reuse process. As soon as the application engineer has filled in the RM, the reuse code can be automatically generated.

Results: We also present the results of two comparative experiments using two versions of a Persistence CF: the original one, whose reuse process is based on writing code, and the new one, which is model-based. The first experiment evaluated the productivity during the reuse process, and the second one evaluated the effort of maintaining applications developed with both CF versions. The results show an improvement of 97% in productivity; however, little difference was perceived regarding the effort for maintaining the applications.

Conclusion: By using the approach presented herein, it was possible to conclude the following: (i) it is possible to automate the instantiation of CFs, and (ii) developers' productivity improves when they use a model-based instantiation approach.
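As a rough sketch of what "filling in the RM and generating the reuse code" could look like (hypothetical names and output; the paper's actual RRM/RM metamodels are not given in the abstract):

# A filled-in Reuse Model (RM) as plain data: what a Persistence CF
# needs from the application under development (names are invented).
REUSE_MODEL = {
    "persistent_classes": ["Customer", "Order"],
    "id_attribute": "id",
}

def generate_reuse_code(model: dict) -> str:
    """Emit (pseudo-AspectJ) binding code from the filled-in RM, so the
    application engineer never touches the framework's internals."""
    lines = [
        f"declare parents: {cls} implements Persistent;"
        for cls in model["persistent_classes"]
    ]
    lines.append(f"// identity attribute: {model['id_attribute']}")
    return "\n".join(lines)

print(generate_reuse_code(REUSE_MODEL))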

Relevance:

100.00%

Publisher:

Abstract:

Understanding the complex relationships between the quantities measured by volcanic monitoring networks and shallow magma processes is a crucial step towards the comprehension of volcanic processes and a more realistic evaluation of the associated hazard. This question is very relevant at Campi Flegrei, a quiescent volcanic caldera immediately north-west of Napoli (Italy). The system's activity shows high fumarole release and periodic slow ground movement (bradyseism) accompanied by high seismicity. This activity, together with the high population density and the presence of military and industrial buildings, makes Campi Flegrei one of the areas with the highest volcanic hazard in the world. In such a context, my thesis has focused on magma dynamics due to the refilling of shallow magma chambers, and on the geophysical signals, detectable by seismic, deformation and gravimetric monitoring networks, that are associated with these phenomena. Indeed, the refilling of magma chambers is a process frequently occurring just before a volcanic eruption; therefore, the ability to identify these dynamics by means of recorded signal analysis is important for evaluating the short-term volcanic hazard.

The space-time evolution of the dynamics due to the injection of new magma into the magma chamber has been studied by performing numerical simulations with, and implementing additional features in, the code GALES (Longo et al., 2006), recently developed and still being upgraded at the Istituto Nazionale di Geofisica e Vulcanologia in Pisa (Italy). GALES is a finite element code based on a physico-mathematical, two-dimensional, transient model able to treat fluids as multiphase homogeneous mixtures, from compressible to incompressible. The fundamental equations of mass, momentum and energy balance are discretised both in time and space using the Galerkin Least-Squares and discontinuity-capturing stabilisation techniques. The physical properties of the mixture are computed as a function of the local conditions of magma composition, pressure and temperature. The model features enable the study of a broad range of phenomena characterizing pre- and syn-eruptive magma dynamics in a wide domain, from the volcanic crater to deep magma feeding zones.

The study of the displacement field associated with the simulated fluid dynamics has been carried out with a numerical code developed by the Geophysics group at University College Dublin (O'Brien and Bean, 2004b), with whom we started a very profitable collaboration. In this code, seismic wave propagation in heterogeneous media with a free surface (e.g. the Earth's surface) is simulated using a discrete elastic lattice where particle interactions are governed by Hooke's law. This method makes it possible to consider medium heterogeneities and complex topography. The initial and boundary conditions for the simulations have been defined within a coordinated project (INGV-DPC 2004-06 V3_2 "Research on active volcanoes, precursors, scenarios, hazard and risk - Campi Flegrei"), to which this thesis contributes, and in which many researchers with experience of Campi Flegrei in the volcanological, seismic, petrological and geochemical fields collaborate. Numerical simulations of magma and rock dynamics have been coupled as described in the thesis.

The first part of the thesis consists of a parametric study aimed at understanding the effect of the presence of carbon dioxide in magma on the convection dynamics. Indeed, this volatile was relevant in many Campi Flegrei eruptions, including some eruptions commonly considered as references for a future activity of this volcano. A set of simulations has been performed considering an elliptical, compositionally uniform magma chamber refilled from below by a magma with volatile content equal to or different from that of the resident magma. To do this, a multicomponent non-ideal magma saturation model (Papale et al., 2006), which considers the simultaneous presence of CO2 and H2O, has been implemented in GALES. Results show that the presence of CO2 in the incoming magma increases its buoyancy, promoting convection and mixing. The simulated dynamics produce pressure transients with frequency and amplitude in the sensitivity range of modern geophysical monitoring networks such as the one installed at Campi Flegrei.

In the second part, simulations more closely related to the Campi Flegrei volcanic system have been performed. The simulated system has been defined on the basis of conditions consistent with the bulk of knowledge of Campi Flegrei, and in particular of the Agnano-Monte Spina eruption (4100 B.P.), commonly considered as the reference for a future high-intensity eruption in this area. The magmatic system has been modelled as a long dyke refilling a small shallow magma chamber; magmas with trachytic and phonolitic compositions and variable volatile contents of H2O and CO2 have been considered. The simulations have been carried out changing the conditions of magma injection, the system configuration (magma chamber geometry, dyke size) and the composition and volatile content of the resident and refilling magmas, in order to study the influence of these factors on the simulated dynamics. Simulation results make it possible to follow each step of the gas-rich magma ascent in the denser magma, highlighting the details of magma convection and mixing. In particular, the presence of more CO2 in the deep magma results in more efficient and faster dynamics. Through these simulations the variation of the gravimetric field has been determined. Afterwards, the space-time distribution of stress resulting from the numerical simulations has been used as a boundary condition for the simulations of the displacement field imposed by the magmatic dynamics on the rocks. The properties of the simulated domain (rock density, P and S wave velocities) have been based on literature data from active and passive tomographic experiments, obtained through a collaboration with A. Zollo at the Dept. of Physics of the Federico II University in Napoli. The elasto-dynamics simulations allow determination of the variations of the space-time distribution of deformation and of the seismic signal associated with the studied magmatic dynamics. In particular, results show that these dynamics induce deformations similar to those measured at Campi Flegrei, and seismic signals with energies concentrated in the frequency bands typically observed in volcanic areas.

The present work shows that an approach based on the solution of the equations describing the physics of processes within a magmatic fluid and the surrounding rock system is able to recognise and describe the relationships between geophysical signals detectable at the surface and deep magma dynamics. Therefore, the results suggest that the combined study of geophysical data and information from numerical simulations may, in the near future, allow a more efficient evaluation of the short-term volcanic hazard.
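To make the discrete elastic lattice idea concrete, here is a minimal one-dimensional sketch (illustrative parameters only, not the O'Brien and Bean code): particles interact with their neighbours through Hooke's law, and a displacement perturbation propagates as an elastic wave under explicit time stepping.

import numpy as np

# 1-D elastic lattice: n particles of mass m connected by springs of
# stiffness k; Hooke's law gives the net force from the two
# neighbouring springs, and an explicit (symplectic Euler) scheme
# advances the system in time.
n, k, m, dt = 200, 40.0, 1.0, 0.01
u = np.zeros(n)          # particle displacements
v = np.zeros(n)          # particle velocities
u[n // 2] = 1.0          # initial perturbation acting as a source

for _ in range(500):
    f = k * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))  # Hooke's law
    f[0] = f[-1] = 0.0   # crude fixed boundaries
    v += dt * f / m
    u += dt * v

print("max |u| after 500 steps:", np.abs(u).max())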

Relevance:

100.00%

Publisher:

Abstract:

This thesis evaluated in vivo and in vitro enamel permeability in different physiological and clinical conditions by means of SEM inspection of replicas of the enamel surface, obtained from polyvinyl siloxane impressions subsequently cast in polyether impression material. This technique, non-invasive and risk-free, allows the evaluation of fluid outflow from the enamel surface and is able to detect the presence of small quantities of fluid, visualized as droplets. Fluid outflow on the enamel surface represents enamel permeability. This property has a paramount importance in enamel physiology and pathology, although its effective role in adhesion, caries pathogenesis and prevention is still not fully understood. The aim of the studies proposed was to evaluate enamel permeability changes in different conditions and to correlate the findings with the current knowledge about enamel physiology, caries pathogenesis, fluoride and etching treatments. To corroborate the data, the replica technique has been supported by other specific techniques such as Raman and IR spectroscopy and EDX analysis.

The first study visualized fluid movement through dental enamel in vivo, confirmed that enamel is a permeable substrate, and demonstrated that age and enamel permeability are closely related. Samples from subjects of different ages showed a decreasing number and size of droplets with increasing age: freshly erupted permanent teeth showed many droplets covering the entire enamel surface, and droplets in permanent teeth were prominent along enamel perikymata. These results, obtained through SEM inspection of replicas, allowed innovative remarks on enamel physiology. An analogous test was developed for the evaluation of permeability in primary enamel. The results of this second study showed that primary enamel displays substantial permeability, with droplets covering the entire enamel surface without any specific localization, in accordance with histological features, and without changes during aging or signs of post-eruptive maturation. These results confirmed clinical data showing a higher caries susceptibility for primary enamel and suggested a strong relationship between caries susceptibility and enamel permeability.

Topical fluoride application represents the gold standard for caries prevention, although the mechanism of the cariostatic effect of fluoride still needs to be clarified. The effects of topical fluoride application on enamel permeability were evaluated. In particular, two different treatments (NaF and APF), with different pH, were examined. The major product of topical fluoride application was the deposition of CaF2-like globules. Replica inspection before and after both treatments, at different time intervals and after specific additional clinical interventions, showed that such globules formed in vivo could be removed by professional toothbrushing, sonically, and chemically by KOH. The results obtained in relation to enamel permeability showed that fluoride treatments temporarily reduced enamel water permeability even when CaF2-like globules were removed. The in vivo permanence of decreased enamel permeability after CaF2 globule removal was demonstrated for 1 h for NaF-treated teeth and for at least 7 days for APF-treated teeth. Important clinical considerations follow from these results. In fact, the caries-preventing action of fluoride application may be due, in part, to its ability to decrease enamel water permeability, and CaF2-like globules seem to be indirectly involved in enamel protection over time by maintaining low permeability. Other results, obtained by metallographic microscopy and SEM/EDX analyses of fluoride-releasing and non-fluoride-releasing orthodontic resins, demonstrated the relevance of topical fluoride application in decreasing demineralization marks and modifying the chemical composition of the enamel in the treated area. The data obtained in both experiments confirmed the efficacy of fluoride in caries prevention and contribute to clarifying its mechanism of action.

Adhesive dentistry is the gold standard for caries treatment and tooth rehabilitation and is founded on important chemical and physical principles involving both enamel and dentine substrates. In particular, acid etching of dental enamel is usually employed in bonding procedures to increase microscopic roughness. Different acids have been tested in the literature, suggesting several etching procedures. The acid-induced structural transformations in enamel after different etching treatments were evaluated by means of Raman and IR spectroscopy, and these findings were correlated with enamel permeability. Conventional etching with 37% phosphoric acid gel (H3PO4) for 30 s and etching with 15% HCl for 120 s were investigated. Raman and IR spectroscopy showed that treatment with both hydrochloric and phosphoric acids induced a decrease in the carbonate content of the enamel apatite. At the same time, both acids induced the formation of HPO4^2- ions. After H3PO4 treatment the bands due to the organic component of enamel decreased in intensity, while they increased after HCl treatment. Replicas of H3PO4-treated enamel showed a strongly reduced permeability, while replicas of 15% HCl-treated samples showed maintained permeability. A decrease in the enamel organic component, as resulted after H3PO4 treatment, involves a decrease in enamel permeability, while the increase of the organic matter (achieved by HCl treatment) maintains enamel permeability. These results suggested a correlation between the amount of organic matter, enamel permeability and caries.

The results of the different studies carried out in this thesis contributed to clarifying and improving the knowledge about enamel properties, with important repercussions for theoretical and clinical aspects of Dentistry.

Relevance:

100.00%

Publisher:

Abstract:

Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as, e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community is more than active in trying to answer some of them. As a consequence, a huge number of papers are continuously being developed and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first one occurs when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general purpose techniques. The second one occurs when mixed integer programming is used to address a somewhat structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special purpose techniques. This thesis tries to give some insights into both of the above-mentioned situations.

The first part of the work is focused on general purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature in the context of disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers have brought attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling other important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableau, arising from lattice-free triangles, together with some preliminary computational results.

The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP where each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one which has proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of the time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (and, in particular, the usage of general purpose cutting planes) can be useful to improve on the branch-and-cut methods proposed in the literature.
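As an illustration of the destroy-and-repair paradigm described above (a generic skeleton with a placeholder repair step; the thesis' actual ILP neighbourhood formulation is not given in the abstract):

import random

# Destroy-and-repair skeleton: repeatedly remove a random fraction of
# customers from the incumbent VRP solution, then re-insert them. In
# the matheuristic described above, repair() would build an ILP over
# the reinsertion choices and hand it to a general purpose MIP solver;
# a naive greedy stand-in keeps this sketch self-contained.

def destroy(solution, fraction=0.2):
    customers = [c for route in solution for c in route]
    removed = set(random.sample(customers, max(1, int(fraction * len(customers)))))
    partial = [[c for c in route if c not in removed] for route in solution]
    return partial, removed

def repair(partial, removed):
    for c in removed:
        min(partial, key=len).append(c)  # placeholder for the MIP step
    return partial

def destroy_and_repair(solution, cost, iterations=100):
    best, best_cost = solution, cost(solution)
    for _ in range(iterations):
        candidate = repair(*destroy([route[:] for route in best]))
        if cost(candidate) < best_cost:
            best, best_cost = candidate, cost(candidate)
    return best

# Toy example: balance two routes by minimizing the longest route.
print(destroy_and_repair([[1, 2, 3, 4, 5], [6]], cost=lambda s: max(map(len, s))))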

Relevance:

100.00%

Publisher:

Abstract:

Alzheimer's disease (AD) and cancer represent two of the main causes of death worldwide. They are complex multifactorial diseases, and several biochemical targets have been recognized to play a fundamental role in their development. Given their complex nature, a promising therapeutic approach could be represented by the so-called "Multi-Target-Directed Ligand" (MTDL) approach. This new strategy is based on the assumption that a single molecule could hit several targets responsible for the onset and/or progression of the pathology. In particular, in AD most currently prescribed drugs aim to increase the level of acetylcholine in the brain by inhibiting the enzyme acetylcholinesterase (AChE). However, clinical experience shows that AChE inhibition is a palliative treatment, and the simple modulation of a single target does not address AD aetiology. Research into newer and more potent anti-AD agents is thus focused on compounds whose properties go beyond AChE inhibition (such as inhibition of the enzyme β-secretase and inhibition of the aggregation of beta-amyloid). Therefore, the MTDL strategy seems a more appropriate approach for addressing the complexity of AD and may provide new drugs for tackling its multifactorial nature. This thesis describes the design of new MTDLs able to tackle the multifactorial nature of AD. These new MTDLs are less flexible analogues of Caproctamine, one of the first MTDLs with biological properties useful for AD treatment. The new compounds are able to inhibit the enzymes AChE and β-secretase and to inhibit both AChE-induced and self-induced beta-amyloid aggregation. In particular, the most potent compound of the series is able to inhibit AChE in the subnanomolar range, to inhibit β-secretase at micromolar concentration and to inhibit both AChE-induced and self-induced beta-amyloid aggregation at micromolar concentration. Cancer, like AD, is a very complex pathology, and many different therapeutic approaches are currently in use for its treatment. Due to its multifactorial nature, the MTDL approach could, in principle, be applied to this pathology as well. A further aim of this thesis has been the development of new molecules bearing different structural motifs able to simultaneously interact with some of the multitude of targets responsible for the pathology. The designed compounds displayed cytotoxic activity in different cancer cell lines. In particular, the most potent compounds of the series were further evaluated and were able to bind DNA, proving 100-fold more potent than the reference compound Mitonafide. Furthermore, these compounds were able to trigger apoptosis through caspase activation and to inhibit PIN1 (preliminary result). This last protein is a very promising target because it is overexpressed in many human cancers, functions as a critical catalyst for multiple oncogenic pathways, and in several cancer cell lines its depletion determines arrest of mitosis followed by apoptosis induction. In conclusion, this study may represent a promising starting point for the development of new MTDLs hopefully useful for cancer and AD treatment.

Relevance:

100.00%

Publisher:

Abstract:

Phylogeography is a recent field of biological research that links phylogenetics to biogeography through deciphering the imprint that evolutionary history has left on the genetic structure of extant populations. During the cold phases of the successive ice ages, which have drastically shaped species' distributions since the Pliocene, populations of numerous species were isolated in refugia, where many of them evolved into distinct genetic lineages. My dissertation deals with the phylogeography of the Woodland Ringlet (Erebia medusa [Denis and Schiffermüller] 1775) in Central and Eastern Europe. This Palaearctic butterfly species is currently distributed from central France and south-eastern Belgium over large parts of Central Europe and southern Siberia to the Pacific. It is absent from those parts of Europe with Mediterranean, oceanic and boreal climates. It was supposed to be a Siberian faunal element with a rather homogeneous population structure in Central Europe, due to its postglacial expansion out of a single eastern refugium. An existing evolutionary scenario for the Woodland Ringlet in Central and Eastern Europe is based on nuclear data (allozymes). To determine whether this is corroborated by organelle evolutionary history, I sequenced two mitochondrial markers (part of the cytochrome oxidase subunit one and the control region) for populations sampled over the same area. Phylogeography largely relies on the construction of networks of uniparentally inherited haplotypes, which are compared with the geographic haplotype distribution using recently developed methods such as nested clade phylogeographic analysis (NCPA). Several ring-shaped ambiguities (loops) emerged from both haplotype networks in E. medusa. They can be attributed to recombination and homoplasy. Such loops usually prevent the straightforward extraction of the phylogeographic signal contained in a gene tree. I developed several new approaches to extract phylogeographic information in the presence of loops, considering either homoplasy or recombination. This allowed me to deduce a consistent evolutionary history for the species from the mitochondrial data and also adds plausibility to the occurrence of recombination in E. medusa mitochondria. Although the control region is assumed to lack resolving power in other species, I found considerable genetic variation in this marker in E. medusa, which makes it a useful tool for phylogeographic studies. In combination with the allozyme data, the mitochondrial genome supports the following phylogeographic scenario for E. medusa in Europe: (i) a first vicariance, due to the onset of the Würm glaciation, led to the formation of several major lineages, and is mirrored in the NCPA by restricted gene flow; (ii) later on, further vicariances led to the formation of two sub-lineages within the Western lineage and two sub-lineages within the Eastern lineage during the Last Glacial Maximum or Older Dryas, and additionally the NCPA supports a restriction of gene flow with isolation by distance; (iii) finally, vicariance resulted in two secondary sub-lineages in the area of Germany and, possibly, two other secondary sub-lineages in the Czech Republic. The last postglacial warming was accompanied by strong range expansions in most of the genetic lineages. The scenario expected for a presumably Siberian faunal element such as E. medusa is a continuous loss of genetic diversity during postglacial westward expansion. Hence, the pattern found in this thesis contradicts a typical Siberian origin of E. medusa. In contrast, it corroborates the importance of multiple extra-Mediterranean refugia for the European fauna, as recently assumed for other continental species.

Relevance:

100.00%

Publisher:

Abstract:

During the last few years, a great deal of interest has arisen concerning the applications of stochastic methods to several biochemical and biological phenomena. Phenomena like gene expression, cellular memory, the bet-hedging strategy in bacterial growth and many others cannot be described by continuous stochastic models due to their intrinsic discreteness and randomness. In this thesis I have used the Chemical Master Equation (CME) technique to model some feedback cycles and analyze their properties, including experimental data. In the first part of this work, the effect of stochastic stability is discussed on a toy model of the genetic switch that triggers cellular division, whose malfunctioning is known to be one of the hallmarks of cancer. The second system I have worked on is the so-called futile cycle, a closed cycle of two enzymatic reactions that adds and removes a chemical compound, called a phosphate group, to and from a specific substrate. I have investigated how adding noise to the enzyme (which is usually present in the order of a few hundred molecules) modifies the probability of observing a specific number of phosphorylated substrate molecules, and confirmed theoretical predictions with numerical simulations. In the third part, the results of the study of a chain of multiple phosphorylation-dephosphorylation cycles are presented. We discuss an approximation method for the exact solution in the bidimensional case and the relationship between this method and the thermodynamic properties of the system, which is an open system far from equilibrium. In the last section, the agreement between the theoretical prediction of the total protein quantity in a population of mouse cells and the quantity observed via fluorescence microscopy is shown.
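As a minimal illustration of the kind of stochastic simulation used to check CME predictions (a Gillespie sketch of the futile cycle with illustrative rates and copy numbers, enzyme kinetics folded into the effective rate constants; not the thesis' actual model):

import math
import random

# Gillespie stochastic simulation of a simplified futile cycle:
# S -> Sp (phosphorylation) and Sp -> S (dephosphorylation), with the
# enzyme concentrations absorbed into the effective rate constants.
k_phos, k_dephos = 1.0, 0.8    # illustrative effective rates
S, Sp = 100, 0                 # initial copy numbers
t, t_end = 0.0, 50.0

while t < t_end:
    a1, a2 = k_phos * S, k_dephos * Sp     # reaction propensities
    a0 = a1 + a2
    if a0 == 0.0:
        break
    t += -math.log(1.0 - random.random()) / a0  # exponential waiting time
    if random.random() * a0 < a1:               # pick which reaction fires
        S, Sp = S - 1, Sp + 1
    else:
        S, Sp = S + 1, Sp - 1

print(f"t = {t:.1f}: S = {S}, Sp = {Sp}")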

Relevance:

100.00%

Publisher:

Abstract:

Synthetic biology has recently undergone great development: many papers have been published and many applications have been presented, spanning from the production of biopharmaceuticals to the synthesis of bioenergetic substrates or industrial catalysts. Despite these advances, however, most of the applications are quite simple and do not fully exploit the potential of this discipline. This limitation in complexity has many causes, like the incomplete characterization of some components or the intrinsic variability of biological systems, but one of the most important is the incapability of the cell to sustain the additional metabolic burden introduced by a complex circuit. The objective of the project of which this work is part is to try to solve this problem through the engineering of a multicellular behaviour in prokaryotic cells. This system will introduce a cooperative behaviour allowing the implementation of complex functionalities that cannot be obtained with a single cell. In particular, the goal is to implement Leader Election, a procedure first devised in the field of distributed computing to identify a single process as organizer and coordinator of a series of tasks assigned to the whole population. The election of the Leader greatly simplifies the computation by providing centralized control. Furthermore, this system may even be useful for evolutionary studies that aim to explain how complex organisms evolved from unicellular systems.

The work presented here describes, in particular, the design and the experimental characterization of a component of the circuit that solves the Leader Election problem. This module, composed of a hybrid promoter and a gene, is activated in the non-leader cells after receiving the signal that a leader is present in the colony. The most important element, in this case, is the hybrid promoter; it has been realized in different versions, applying the heuristic rules stated in [22], and their activity has been experimentally tested. The objective of the experimental characterization was to test the response of the genetic circuit to the introduction, into the cellular environment, of particular molecules, inducers, that can be considered inputs of the system. The desired behaviour is similar to that of a logic AND gate, in which the output, represented by the luminous signal produced by a fluorescent protein, is one only in the presence of both inducers. The robustness and stability of this behaviour have been tested by changing the concentrations of the input signals and building dose-response curves. From these data it is possible to conclude that the analysed constructs show an AND-like behaviour over a wide range of inducer concentrations, even if many differences can be identified in the expression profiles of the different constructs. This variability reflects the fact that the input and output signals are continuous, so their binary representation is not able to capture the complexity of the behaviour.

The module of the circuit considered in this analysis has a fundamental role in the realization of the intercellular communication system that is necessary for the cooperative behaviour to take place. For this reason, the second phase of the characterization focused on the analysis of signal transmission. In particular, the interaction between this element and the one responsible for emitting the chemical signal has been tested. The desired behaviour is still similar to a logic AND since, even in this case, the output signal is determined by the hybrid promoter activity. The experimental results have demonstrated that the systems behave correctly, even if there is still substantial variability between them. The dose-response curves highlighted that stricter constraints on the inducer concentrations need to be imposed in order to obtain a clear separation between the two levels of expression. In the concluding chapter, the DNA sequences of the hybrid promoters are analysed, trying to identify the regulatory elements that are most important for the determination of gene expression. Given the available data, it was not possible to draw definitive conclusions. In the end, a few considerations on promoter engineering and complex circuit realization are presented. This section aims to briefly recall some of the problems outlined in the introduction and to provide a few possible solutions.
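A common way to model such AND-like dose-response behaviour (an illustrative sketch with assumed parameters, not fitted to the thesis' data) is the product of two Hill activation functions, so that the output is high only when both inducers are present:

# AND-gate promoter model: basal leak plus the product of two Hill
# activation terms, one per inducer; output is high only when both
# inducer concentrations exceed their activation thresholds.

def hill(x, K, n):
    """Fractional activation at inducer concentration x (threshold K)."""
    return x**n / (K**n + x**n)

def promoter_output(i1, i2, K1=10.0, K2=5.0, n1=2.0, n2=2.0, basal=0.01):
    return basal + (1.0 - basal) * hill(i1, K1, n1) * hill(i2, K2, n2)

# Dose-response grid: output stays near basal unless both inputs are high.
for i1 in (0.0, 1.0, 100.0):
    for i2 in (0.0, 1.0, 100.0):
        print(f"inducer1={i1:6.1f}  inducer2={i2:6.1f}  "
              f"output={promoter_output(i1, i2):.3f}")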

Relevance:

100.00%

Publisher:

Abstract:

LRP1 modulates APP trafficking and metabolism within compartments of the secretory pathway. The amyloid precursor protein (APP) is the parent protein of the amyloid beta peptide (Abeta) and is a central player in Alzheimer's disease (AD) pathology. Abeta liberation depends on APP cleavage by beta- and gamma-secretases. To date, only a unilateral view of APP processing exists, excluding other proteins which might be transported together with APP and/or processed in a mutually dependent manner by the secretases described above. The low density lipoprotein receptor related protein 1 (LRP1) was shown to function as such a mediator of APP processing at multiple steps. Newly synthesized LRP1 can interact with APP, implying an interaction between these two proteins early in the secretory pathway. Therefore, we wanted to investigate whether LRP1 can mediate APP trafficking along the secretory pathway and, if so, whether it affects APP processing. Indeed, we demonstrate that APP trafficking is strongly influenced by LRP1 transport through the endoplasmic reticulum (ER) and Golgi compartments. LRP1 constructs with ER- and Golgi-retention motifs (LRP-CT KKAA, LRP-CT KKFF) had the capacity to retard APP trafficking at the respective steps of the secretory pathway. Here, we provide evidence that APP metabolism occurs in close conjunction with LRP1 trafficking, highlighting a new role of lipoprotein receptors in neurodegenerative diseases.

Increased AICD generation is ineffective in nuclear translocation and transcriptional activity. A sequence of amyloid precursor protein (APP) cleavages gives rise to the APP intracellular domain (AICD) together with the amyloid beta peptide (Abeta) and/or the p3 fragment. One of the environmental factors identified as favouring the accumulation of AICD appears to be a rise in intracellular pH. This accumulation is the result of an abrogated cleavage event and does not extend to other secretase substrates. AICD can activate the transcription of artificially expressed constructs, and many downstream gene targets have been discussed. Here we further characterized the metabolism and subcellular localization of the constructs used in this well-documented gene reporter assay. We also examined the mechanistic lead-up to the AICD accumulation and explored the possible significance of its increased expression. We found that most of the AICD generated under pH-neutralized conditions is likely cleaved from C83. Furthermore, the AICD surplus is not transcriptionally active but rather remains membrane-tethered and free in the cytosol, where it interacts with Fe65. However, Fe65 is still essential in AICD-mediated transcriptional transactivation, although its exact role in this set of events is unclear.

Relevance:

100.00%

Publisher:

Abstract:

Over the past twenty years, new technologies have required an increasing use of mathematical models in order to better understand structural behavior; the finite element method is the most widely used. However, the reliability of this method, applied to different situations, has to be verified each time. Since it is not possible to completely model reality, different hypotheses must be made: this is the main problem of FE modeling. The following work deals with this problem and tries to figure out a way to identify some of the unknown main parameters of a structure. This research focuses on a particular path of study and development, but the same concepts can be applied to other objects of research. The main purpose of this work is the identification of the unknown boundary conditions of a bridge pier using data acquired experimentally with field tests and a FEM modal updating process. This work does not claim to be new or innovative: a lot of work has been done on this problem during the past years, and many solutions have been shown and published. This thesis simply reworks some of the main aspects of the structural optimization process, using a real structure as a fitting model.
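As a toy illustration of the modal updating idea (all values assumed; a real pier model has many degrees of freedom and several measured modes), an unknown boundary stiffness can be identified by minimizing the mismatch between the measured and model-predicted natural frequency:

import numpy as np
from scipy.optimize import minimize_scalar

# Single-DOF idealization: tune the boundary spring stiffness k so
# that the model's first natural frequency matches the measured one.
m = 2.0e5              # modal mass [kg] (assumed)
f_measured = 1.8       # measured first natural frequency [Hz] (assumed)

def first_frequency(k):
    return np.sqrt(k / m) / (2.0 * np.pi)   # f = sqrt(k/m) / (2*pi)

def mismatch(k):
    return (first_frequency(k) - f_measured) ** 2

result = minimize_scalar(mismatch, bounds=(1e6, 1e10), method="bounded")
print(f"identified boundary stiffness: {result.x:.3e} N/m")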