13 results for Pipelines--Maintenance and repair
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In recent years it has emerged that peripheral arterial disease (PAD) has become a growing health problem in Western countries. It is a progressive manifestation of atherothrombotic vascular disease, which results in the narrowing of the blood vessels of the lower limbs and, ultimately, in critical leg ischemia. PAD often occurs along with other cardiovascular risk factors, including diabetes mellitus (DM), low-grade inflammation, hypertension, and lipid disorders. Patients with DM have an increased risk of developing PAD, and that risk increases with the duration of DM. Moreover, there is a growing population of patients identified with insulin resistance (IR), impaired glucose tolerance, and obesity, a pathological condition known as “metabolic syndrome”, which carries increased cardiovascular risk. Atherosclerosis is the earliest manifestation of PAD and is a dynamic and progressive disease arising from the combination of endothelial dysfunction and inflammation. Endothelial dysfunction is a broad term that implies diminished production or availability of nitric oxide (NO) and/or an imbalance in the relative contribution of endothelium-derived relaxing factors. The secretion of these agents is considerably reduced in association with the major risk factors for atherosclerosis, especially hyperglycaemia and diabetes, and reduced vascular repair has been observed in response to wound healing and to ischemia. Neovascularization does not rely only on the proliferation of local endothelial cells, but also involves bone marrow-derived stem cells, referred to as endothelial progenitor cells (EPCs) because they exhibit endothelial surface markers and properties. They can promote postnatal vasculogenesis by homing to sites of new vessel formation, differentiating into an endothelial phenotype, proliferating and incorporating into the new vessels. Consequently, EPCs are critical to endothelium maintenance and repair, and their dysfunction contributes to vascular disease. The aim of this study has been the characterization of EPCs from healthy peripheral blood in terms of proliferation, differentiation and function. Given the importance of NO in neovascularization and in the homing process, the expression of the NO synthase (NOS) isoforms eNOS, nNOS and iNOS, and the effects of their inhibition on EPC function, were investigated. Moreover, the expression of the NADPH oxidase (Nox) isoforms, which are the principal source of ROS in the cell, was examined. In fact, several lines of evidence have shown a correlation between ROS and NO metabolism, since oxidative stress causes NOS inactivation via enzyme uncoupling. In particular, the expression of Nox2 and Nox4, which are constitutively expressed in endothelium, and of Nox1 was studied. The second part of this research focused on the study of EPCs under pathological conditions. Firstly, EPCs isolated from healthy subjects were cultured in a hyperglycaemic medium in order to evaluate the effects of high glucose concentration on EPCs. Secondly, EPCs were isolated from the peripheral blood of patients affected by PAD, both diabetic and non-diabetic, and their capacity to proliferate, differentiate, and participate in neovasculogenesis was assessed. Furthermore, the expression of NOS and Nox in these cells was investigated. Mononuclear cells isolated from the peripheral blood of healthy subjects, if cultured under differentiating conditions, differentiate into EPCs.
These cells are not able to form capillary-like structures ex novo, but participate in vasculogenesis by incorporation into the new vessels formed by mature endothelial cells, such as HUVECs. With respect to NOS expression, these cells have high levels of iNOS, the inducible isoform of NOS, 3-4 fold higher than in HUVECs, while the endothelial isoform, eNOS, is poorly expressed in EPCs. The higher iNOS expression could be a form of compensation for the lower eNOS levels. Under hyperglycaemic conditions, both iNOS and eNOS expression are enhanced compared to control EPCs, consistent with experimental studies in animal models. In patients affected by PAD, the EPCs may behave in different ways. Non-diabetic patients, and diabetic patients with greater vascular damage, as evidenced by a higher number of circulating endothelial cells (CECs), show reduced proliferation and a reduced ability to participate in vasculogenesis. On the other hand, diabetic patients with a lower CEC number have proliferative and vasculogenic capacities more similar to those of healthy EPCs. eNOS levels in both patient types are equivalent to those of controls, while iNOS expression is enhanced. Interestingly, nNOS is not detected in diabetic patients, analogously to other cell types in diabetic subjects, which show reduced or no nNOS expression. Concerning Nox expression, EPCs present higher levels of both Nox1 and Nox2 in comparison with HUVECs, while Nox4 is poorly expressed, probably because of incomplete differentiation into an endothelial phenotype. Nox1 is more highly expressed in PAD patients, diabetic or not, than in controls, suggesting increased ROS production. Nox2, instead, is lower in patients than in controls. Since Nox2 is involved in the cellular response to VEGF, its reduced expression may account for the impaired vasculogenic potential of PAD patients.
Abstract:
Asset Management (AM) is a set of procedures, operable at the strategic, tactical and operational levels, for the management of a physical asset's performance, associated risks and costs over its whole life cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the technique with which to rehabilitate is far less straightforward. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late may entail extra expenses en route, in addition to the cost of the rehabilitation exercise itself. One is confronted with a typical Hamlet-esque dilemma: 'to repair or not to repair'; or, put another way, 'to replace or not to replace'. The decision in this case is governed by three factors, not necessarily interrelated: quality of customer service, costs, and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and keeping the asset in good working condition for as long as possible. Effective planning is used to target maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning. The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between investment for replacing an asset and operational expenditure for maintaining it. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a vis-à-vis comparison between maintenance and replacement expenditures. The cost of maintaining the assets should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration and risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast the life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides an insight into the various definitions of 'asset lifetime': service life, economic life and physical life.
The results suggest that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines, to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include when estimating maintenance costs for the existing assets, the more precise the estimate of the expected service life will be. The ability to include social costs makes it possible to compute the asset life not only on the basis of its physical characterisation, but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is the effort to demonstrate that it is possible to include, in decision-making, factors such as the cost of the risk associated with a decline in the level of performance, the level of this deterioration and the asset's depreciation rate, without looking at age as the sole criterion for making decisions regarding replacements.
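A minimal Python sketch of the replace-or-maintain comparison described above: the optimal replacement year is taken as the first year in which the rising annual cost of keeping the existing pipe (operation, maintenance and risk cost) exceeds the equivalent annual cost (annuity) of investing in a new pipe. The cost functions, parameter names and figures are illustrative assumptions, not the calibrated functions used in the dissertation.

```python
def annuity(investment, rate, service_life):
    """Equivalent annual cost of investing in a new pipe at the best market price."""
    return investment * rate / (1.0 - (1.0 + rate) ** (-service_life))

def optimal_replacement_year(om_cost, risk_cost, investment, rate, service_life, horizon=100):
    """First year in which keeping the existing pipe (O&M plus risk cost, both
    rising with age/deterioration) costs more per year than a new pipe's annuity."""
    new_pipe_annuity = annuity(investment, rate, service_life)
    for year in range(1, horizon + 1):
        if om_cost(year) + risk_cost(year) > new_pipe_annuity:
            return year
    return None  # replacement not justified within the planning horizon

# Example with simple, hypothetical ageing cost functions and figures:
year = optimal_replacement_year(
    om_cost=lambda t: 200 + 15 * t,              # maintenance grows with deterioration
    risk_cost=lambda t: 0.002 * t ** 2 * 5000,   # failure probability times consequence cost
    investment=20000, rate=0.04, service_life=80)
print(year)
```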
Abstract:
Ancient pavements are composed of a variety of preparatory or foundation layers constituting the substrate, and of a layer of tesserae, pebbles or marble slabs forming the surface of the floor. In other cases, the surface consists of a beaten and polished mortar layer. The term mosaic is associated with the presence of tesserae or pebbles, while the more general term pavement is used in all cases. As past and modern excavations of ancient pavements have demonstrated, not all pavements display the stratigraphy of the substrate described in the ancient literary sources. In fact, the number and thickness of the preparatory layers, as well as the nature and properties of their constituent materials, often vary among pavements located in different sites, in different buildings within the same site, or even within the same building. For this reason, an investigation that takes into account the whole structure of the pavement is important when studying the archaeological context of the site where it is located, when designing materials to be used for its maintenance and restoration, when documenting it and when presenting it to the public. Five case studies, represented by archaeological sites containing floor mosaics and other kinds of pavements dated to the Hellenistic and Roman periods, have been investigated by means of in situ and laboratory analyses. The results indicate that the characteristics of the studied pavements, namely the number and thickness of the preparatory layers and the properties of the mortars constituting them, vary according to the ancient use of the room where the pavements are located and to the type of surface upon which they were built. The study contributed to the understanding of the function and technology of the pavements' substrate and to the characterization of its constituent materials. Furthermore, the research underlined the importance of investigating the whole structure of the pavement, including the foundation surface, in the interpretation of the archaeological context where it is located. A series of practical applications of the results of the research have been suggested, concerning the design of repair mortars for pavements, the documentation of ancient pavements in conservation practice, and the presentation of ancient pavements to the public in situ and in museums.
Abstract:
Sustainable computer systems require some flexibility to adapt to unpredictable environmental changes. A solution lies in autonomous software agents, which can adapt autonomously to their environments. Although autonomy allows agents to decide which behavior to adopt, a disadvantage is a lack of control and, as a side effect, even untrustworthiness: we want to keep some control over such autonomous agents. How can autonomous agents be controlled while respecting their autonomy? A solution is to regulate agents' behavior by norms. The normative paradigm makes it possible to control autonomous agents while respecting their autonomy, limiting untrustworthiness and increasing system compliance. It can also facilitate the design of the system, for example by regulating the coordination among agents. However, an autonomous agent may follow norms or violate them under certain conditions. Under which conditions is a norm binding upon an agent? While autonomy is regarded as the driving force behind the normative paradigm, cognitive agents provide a basis for modeling the bindingness of norms. In order to cope with the complexity of modeling cognitive agents and normative bindingness, we adopt an intentional stance. Since agents are embedded in a dynamic environment, events may not all occur at the same instant. Accordingly, our cognitive model is extended to account for some temporal aspects. Special attention is given to the temporal peculiarities of the legal domain such as, among others, the time in force and the time in efficacy of provisions. Some types of normative modifications are also discussed in the framework. It is noteworthy that our temporal account of legal reasoning is integrated with our commonsense temporal account of cognition. As our intention is to build sustainable reasoning systems running in unpredictable environments, we adopt a declarative representation of knowledge. A declarative representation of norms makes it easier to update their representation in the system, thus facilitating system maintenance, and to improve system transparency, thus easing system governance. Since agents are bounded and embedded in unpredictable environments, and since conflicts may appear among mental states and norms, agent reasoning has to be defeasible, i.e. new pieces of information can invalidate formerly derivable conclusions. In this dissertation, our model is formalized in a non-monotonic logic, namely a temporal modal defeasible logic, in order to account for the interactions between normative systems and software cognitive agents.
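As a toy illustration of defeasibility only (not of the temporal modal defeasible logic developed in the dissertation), the following Python sketch shows how a newly added fact can block a conclusion that was derivable without it; all rule and fact names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    premises: frozenset   # literals that must hold for the rule to be applicable
    conclusion: str       # literal derived if the rule fires
    defeasible: bool = True

def derive(facts, rules, exceptions):
    """Naive forward chaining: a defeasible rule fires only if none of the
    exceptions registered for its conclusion is among the given facts.
    (Toy model: exceptions are assumed to appear only among the input facts.)"""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for r in rules:
            if r.premises <= known and r.conclusion not in known:
                blocked = r.defeasible and any(
                    e in known for e in exceptions.get(r.conclusion, ()))
                if not blocked:
                    known.add(r.conclusion)
                    changed = True
    return known

# A norm in force normally binds the agent, unless an emergency is reported.
rules = [Rule(frozenset({"norm_in_force"}), "obligation_binding")]
exceptions = {"obligation_binding": {"emergency"}}

print(derive({"norm_in_force"}, rules, exceptions))               # conclusion derived
print(derive({"norm_in_force", "emergency"}, rules, exceptions))  # conclusion blocked
```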
Abstract:
Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as, e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community remains very active in trying to answer some of them. As a consequence, a huge number of papers continue to appear and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first occurs when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general-purpose techniques. The second occurs when mixed integer programming is used to address a somewhat structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insights into both of the above-mentioned situations. The first part of the work is focused on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature in the context of disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers has brought attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling other important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableau, based on lattice-free triangles, together with some preliminary computational results.
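For context, a standard textbook form of the Cut Generating LP (CGLP) for a split disjunction, with one common normalization condition used to truncate the cone, is sketched below; this is a generic formulation, not the specific variants analyzed in Chapter 3.

```latex
% Disjunctive cut \alpha^T x \ge \beta for the relaxation P = \{x : Ax \ge b\} and the
% split disjunction (\pi^T x \le \pi_0) \vee (\pi^T x \ge \pi_0 + 1), chosen to be
% maximally violated by the fractional point x^*:
\begin{align*}
\min\;& \alpha^T x^* - \beta \\
\text{s.t.}\;& \alpha = A^T u - u_0 \pi, & \alpha &= A^T v + v_0 \pi, \\
& \beta \le b^T u - u_0 \pi_0, & \beta &\le b^T v + v_0 (\pi_0 + 1), \\
& u, v \ge 0,\; u_0, v_0 \ge 0, \\
& \mathbf{1}^T u + \mathbf{1}^T v + u_0 + v_0 = 1 \quad \text{(a standard normalization condition)}.
\end{align*}
```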
The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution), in which a class of exponentially large neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven to be extremely effective in the classical TSP context. Here we present an overall (quite) general idea based on a relaxed discretization of the time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (and, in particular, the use of general-purpose cutting planes) can be useful to improve on the branch-and-cut methods proposed in the literature.
Abstract:
The research is part of a survey on the hydraulic and geotechnical conditions of river embankments funded by the Reno River Basin Regional Technical Service of the Emilia-Romagna Region. The hydraulic safety of the Reno River, one of the main rivers in North-Eastern Italy, is indeed of primary importance to the Emilia-Romagna regional administration. The large longitudinal extent of the banks (several hundred kilometres) has generated great interest in non-destructive geophysical methods, which, compared to other methods such as drilling, allow faster and often less expensive acquisition of high-resolution data. The present work aims to assess Ground Penetrating Radar (GPR) for the detection of local non-homogeneities (mainly stratigraphic contacts, cavities and conduits) inside the embankments of the Reno River and its tributaries, taking into account supplementary data collected with traditional destructive tests (boreholes, cone penetration tests, etc.). A comparison with other non-destructive methodologies, such as electrical resistivity tomography (ERT), Multichannel Analysis of Surface Waves (MASW) and FDEM induction, was also carried out in order to verify the usability of GPR and to support the integration of various geophysical methods in the regular maintenance and inspection of the embankments' condition. The first part of this thesis is dedicated to the state of the art concerning the geographic, geomorphologic and geotechnical characteristics of the embankments of the Reno River and its tributaries, as well as to the description of some geophysical surveys performed on embankments of European and North American rivers, which served as the bibliographic basis for this work. The second part is an overview of the geophysical methods employed for this research (with particular attention to GPR), also reporting their theoretical basis and examining in more depth some techniques for the analysis and representation of geophysical data when applied to river embankments. The subsequent chapters, in line with the main scope of this research, which is to highlight the advantages and drawbacks of applying Ground Penetrating Radar to the embankments of the Reno River and its tributaries, show the results obtained by analyzing different situations that could lead to the formation of weakness zones and, subsequently, to embankment failure. Among the advantages, a considerable acquisition speed and a spatial resolution of the acquired data unmatched by the other methodologies were recorded. With regard to the drawbacks, some factors related to attenuation losses during wave propagation, due to the varying content of clay, silt and sand, as well as surface effects, significantly limited the correlation between GPR profiles and geotechnical information and therefore compromised the embankment safety assessment. In summary, Ground Penetrating Radar could be a suitable tool for checking river dike conditions, but its use is significantly limited by the geometric and geotechnical characteristics of the levees of the Reno River and its tributaries. In fact, only the shallower part of the embankment could be investigated, and the information obtained relates only to changes in electrical properties, without any quantitative measurement.
Furthermore, GPR application is ineffective for a preliminary assessment of embankment safety conditions, whereas for detailed campaigns at shallow depth, aimed at achieving immediate results with optimal precision, its use is highly recommended. The cases in which a multidisciplinary approach was tested reveal an effective complementarity of the various geophysical methodologies employed, producing qualitative results in the preliminary phase (FDEM), ensuring a quantitative and highly reliable description of the subsoil (ERT) and, finally, providing fast and highly detailed analysis (GPR). As a recommendation for future research, the combined use of several geophysical devices to assess the safety conditions of river embankments is strongly suggested, especially in view of a likely flood event, when the entire extent of the embankments must be investigated.
Abstract:
Combinatorial Optimization is a branch of optimization that deals with problems where the set of feasible solutions is discrete. Routing is a well-studied branch of Combinatorial Optimization that concerns the process of deciding the best way of visiting the nodes (customers) in a network. Routing problems appear in many real-world applications, including transportation and telephone or electronic data networks. Over the years, many solution procedures have been introduced for different routing problems. Some are based on exact approaches that solve the problems to optimality, while others are based on heuristic or metaheuristic search to find optimal or near-optimal solutions. There is also a less studied approach, which combines heuristic and exact techniques to tackle different problems, including those in the Combinatorial Optimization area. The aim of this dissertation is to develop solution procedures based on the combination of heuristics and Integer Linear Programming (ILP) techniques for some important problems in Routing Optimization. In this approach, given an initial feasible solution to be possibly improved, the method follows a destroy-and-repair paradigm, where the given solution is randomly destroyed (i.e., customers are removed in a random way) and repaired by solving an ILP model, in an attempt to find a new improved solution.
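A minimal Python sketch of the destroy-and-repair loop just described, under illustrative assumptions: the solution is encoded as a list of routes, and `repair_by_ilp` is a placeholder for the ILP reinsertion model (solved with a general-purpose MIP solver in the dissertation), which is not reproduced here.

```python
import random

def destroy(solution, removal_fraction=0.2, rng=random):
    """Randomly remove a fraction of the customers from their routes."""
    customers = [c for route in solution for c in route]
    removed = set(rng.sample(customers, max(1, int(removal_fraction * len(customers)))))
    partial = [[c for c in route if c not in removed] for route in solution]
    return partial, removed

def destroy_and_repair(initial_solution, cost, repair_by_ilp, iterations=100, rng=random):
    """Generic loop: destroy the incumbent, repair it (e.g. by an ILP reinsertion
    model), and keep the repaired solution only if it improves the incumbent."""
    best, best_cost = initial_solution, cost(initial_solution)
    for _ in range(iterations):
        partial, removed = destroy(best, rng=rng)
        candidate = repair_by_ilp(partial, removed)   # optimal (or heuristic) reinsertion
        if candidate is not None and cost(candidate) < best_cost:
            best, best_cost = candidate, cost(candidate)
    return best
```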
Abstract:
The mitochondrion is an essential cytoplasmic organelle that provides most of the energy necessary for eukaryotic cell physiology. Mitochondrial structure and functions are maintained by proteins of both mitochondrial and nuclear origin. These organelles are organized in an extended network that dynamically fuses and divides. Mitochondrial morphology results from the equilibrium between fusion and fission processes, controlled by a family of “mitochondria-shaping” proteins. It is becoming clear that defects in mitochondrial dynamics can impair mitochondrial respiration, morphology and motility, leading to apoptotic cell death in vitro and to more or less severe neurodegenerative disorders in vivo in humans. Mutations in OPA1, a nuclear-encoded mitochondrial protein, cause autosomal dominant optic atrophy (DOA), a heterogeneous blinding disease characterized by retinal ganglion cell degeneration leading to optic neuropathy (Delettre et al., 2000; Alexander et al., 2000). OPA1 is a mitochondrial dynamin-related guanosine triphosphatase (GTPase) involved in mitochondrial network dynamics, cytochrome c storage and apoptosis. This protein is anchored to or associated with the inner mitochondrial membrane, facing the intermembrane space. Eight OPA1 isoforms resulting from alternative splicing combinations of exons 4, 4b and 5b have been described (Delettre et al., 2001). These variants vary greatly among organs, and the presence of specific isoforms has been associated with various mitochondrial functions. The differentially spliced exons encode domains included in the amino-terminal region and contribute to determining OPA1 functions (Olichon et al., 2006). It has been shown that exon 4, which is conserved throughout evolution, confers on OPA1 functions involved in the maintenance of the mitochondrial membrane potential and in the fusion of the network. Conversely, exon 4b and exon 5b, which are vertebrate-specific, are involved in the regulation of cytochrome c release from mitochondria and in the activation of apoptosis, a process restricted to vertebrates (Olichon et al., 2007). While Mgm1p was identified thanks to its role in mtDNA maintenance, it is only recently that OPA1 has been linked to mtDNA stability. Missense mutations in OPA1 cause the accumulation of multiple deletions in skeletal muscle. The syndrome associated with these mutations (DOA-1 plus) is complex, consisting of a combination of dominant optic atrophy, progressive external ophthalmoplegia, peripheral neuropathy, ataxia and deafness (Amati-Bonneau et al., 2008; Hudson et al., 2008). OPA1 is the fifth gene associated with mtDNA “breakage syndrome”, together with ANT1, PolG1-2 and TYMP (Spinazzola et al., 2009). In this thesis we show for the first time that specific OPA1 isoforms associated with exon 4b are important for mtDNA stability, anchoring the nucleoids to the inner mitochondrial membrane. Our results clearly demonstrate that OPA1 isoforms including exon 4b are intimately associated with the maintenance of the mitochondrial genome, as their silencing leads to mtDNA depletion. The mechanism leading to mtDNA loss is associated with replication inhibition in cells where exon 4b-containing isoforms were down-regulated. Furthermore, silencing of exon 4b-associated isoforms is responsible for an alteration in mtDNA nucleoid distribution in the mitochondrial network.
In this study we found that the OPA1 exon 4b isoform is cleaved to yield a 10 kDa peptide, embedded in the inner membrane by a second transmembrane domain, which seems to be crucial for mitochondrial genome maintenance and corresponds to the second transmembrane domain of the yeast orthologues encoded by MGM1 or Msp1, which is also required for this process (Diot et al., 2009; Herlan et al., 2003). Furthermore, in this thesis we show that the NT-OPA1-exon 4b peptide co-immunoprecipitates with mtDNA and specifically interacts with two major components of the mitochondrial nucleoids: polymerase gamma and Tfam. Thus, the conclusion from these experiments is that the NT-OPA1-exon 4b peptide contributes to anchoring the nucleoids to the inner mitochondrial membrane, a process that is required for the initiation of mtDNA replication and for the distribution of nucleoids along the network. These data provide crucial new insights into the mechanisms involved in the maintenance of mtDNA integrity, because they clearly demonstrate that, besides genes implicated in mtDNA replication (i.e., polymerase gamma, Tfam, Twinkle and genes involved in nucleotide pool metabolism), OPA1 and mitochondrial membrane dynamics also play an important role. Noticeably, the effect on mtDNA differs depending on which specific OPA1 isoforms are down-regulated, suggesting the involvement of two different, combined mechanisms. Over two hundred OPA1 mutations, spread throughout the coding region of the gene, have been described to date, including substitutions, deletions and insertions. Some mutations are predicted to generate a truncated protein, inducing haploinsufficiency, whereas the missense nucleotide substitutions result in amino acid changes affecting conserved positions of the OPA1 protein. So far, the functional consequences of OPA1 mutations in cells from DOA patients are poorly understood. Phosphorus MR spectroscopy in patients with the c.2708delTTAG deletion revealed a defect in oxidative phosphorylation in muscle (Lodi et al., 2004). An energetic impairment has also been shown in fibroblasts carrying the severe OPA1 R445H mutation (Amati-Bonneau et al., 2005). Our group previously reported that OPA1 mutations leading to haploinsufficiency are associated, in fibroblasts, with an oxidative phosphorylation dysfunction mainly involving respiratory complex I (Zanna et al., 2008). In this study we evaluated the energetic efficiency of a panel of skin fibroblasts derived from DOA patients, five fibroblast cell lines with OPA1 mutations causing haploinsufficiency (DOA-H) and two cell lines bearing missense amino acid substitutions (DOA-AA), and compared them with control fibroblasts. Although both types of DOA fibroblasts maintained a similar ATP content when incubated in a glucose-free medium, i.e. when forced to rely on oxidative phosphorylation alone to produce ATP, the mitochondrial ATP synthesis through complex I, measured in digitonin-permeabilized cells, was significantly reduced only in cells with OPA1 haploinsufficiency, whereas it was similar to controls in cells with the missense substitutions. Furthermore, evaluation of the mitochondrial membrane potential (ΔΨm) in the two DOA-AA fibroblast lines and in two DOA-H fibroblast lines, namely those bearing the c.2819-2A>C mutation and the c.2708delTTAG microdeletion, revealed an anomalous depolarizing response to oligomycin in the DOA-H cell lines only.
This finding clearly supports the hypothesis that these mutations cause a significant alteration in respiratory chain function, which can be unmasked only when the operation of the ATP synthase is prevented. Noticeably, oligomycin-induced depolarization in these cells was almost completely prevented by preincubation with cyclosporin A, a well-known inhibitor of the permeability transition pore (PTP). This result is very important because it suggests for the first time that the voltage threshold for PTP opening is altered in DOA-H fibroblasts. Although this issue has not yet been addressed in the present study, several mechanisms have been proposed to lead to PTP deregulation, including in particular increased reactive oxygen species production and altered Ca2+ homeostasis, whose role in PTP opening in DOA fibroblasts is currently under investigation. Identification of the mechanisms leading to an altered threshold for PTP regulation will not only help our understanding of the pathophysiology of DOA, but also provide a strategy for therapeutic intervention.
Abstract:
DNA topology is an important modifier of DNA function. Torsional stress is generated when right-handed DNA is either over- or underwound, producing structural deformations that drive or are driven by processes such as replication, transcription, recombination and repair. DNA topoisomerases are molecular machines that regulate the topological state of the DNA in the cell. These enzymes accomplish this task either by passing one strand of the DNA through a break in the opposing strand or by passing a region of the duplex from the same or a different molecule through a double-stranded cut generated in the DNA. Because of their ability to cut one or two strands of DNA, they are also targets of some of the most successful anticancer drugs used in standard combination therapies of human cancers. An effective anticancer drug is camptothecin (CPT), which specifically targets DNA topoisomerase 1 (TOP 1). The research project of the present thesis has focused on the role of human TOP 1 during transcription and on the transcriptional consequences associated with TOP 1 inhibition by CPT in human cell lines. Previous findings demonstrated that TOP 1 inhibition by CPT perturbs RNA polymerase II (RNAP II) density at promoters and along transcribed genes, suggesting an involvement of TOP 1 in RNAP II promoter-proximal pausing. Within the transcription cycle, promoter pausing is a fundamental step whose importance as a means of coupling elongation to RNA maturation has been well established. By measuring nascent RNA transcripts bound to chromatin, we demonstrated that TOP 1 inhibition by CPT can enhance RNAP II escape from the promoter-proximal pausing site of the human Hypoxia Inducible Factor 1 (HIF-1) and c-MYC genes in a dose-dependent manner. This effect depends on Cdk7/Cdk9 activities, since it can be reversed by the kinase inhibitor DRB. Since CPT affects RNAP II by promoting the hyperphosphorylation of its Rpb1 subunit, these findings suggest that TOP 1 inhibition by CPT may increase the activity of Cdks, which in turn phosphorylate the Rpb1 subunit of RNAP II, enhancing its escape from pausing. Interestingly, the transcriptional consequences of CPT-induced topological stress are wider than expected. CPT increased co-transcriptional splicing of exons 1 and 2 and markedly affected alternative splicing at exon 11. Surprisingly, despite its well-established transcription inhibitory activity, CPT can trigger the production of a novel long RNA (5’aHIF-1) antisense to the human HIF-1 mRNA and of a known antisense RNA at the 3’ end of the gene, while decreasing mRNA levels. These effects require TOP 1 and are independent of CPT-induced DNA damage. Thus, when the supercoiling imbalance promoted by CPT occurs at a promoter, it may trigger deregulation of RNAP II pausing, increased chromatin accessibility and activation/derepression of antisense transcripts in a Cdk-dependent manner. A changed balance of antisense transcripts and mRNAs may regulate the activity of HIF-1 and contribute to the control of tumor progression. After focusing our TOP 1 investigations at the single-gene level, we extended the study to the whole genome by developing the “Topo-Seq” approach, which generates a genome-wide map of TOP 1 activity sites in human cells. The preliminary data revealed that TOP 1 preferentially localizes to intragenic regions and, in particular, to the 5’ and 3’ ends of genes.
Surprisingly, upon TOP 1 downregulation, which reduces protein expression by 80%, the remaining TOP 1 molecules are mostly localized around the 3’ ends of genes, suggesting that its activity is essential at these regions and can be compensated for at 5’ ends. The developed procedure is a pioneering tool for the detection of TOP 1 cleavage sites across the genome and can open the way to further investigation of the enzyme's roles in different nuclear processes.
Abstract:
The assessment of the RAMS (Reliability, Availability, Maintainability and Safety) performance of a system generally includes the evaluation of the “importance” of its components and/or of the basic parameters of the model through the use of Importance Measures. The analytical equations proposed in this study allow the estimation of the first-order Differential Importance Measure from the Birnbaum measures of the components, under the hypothesis of uniform percentage changes of the parameters. Aging phenomena are introduced into the model by assuming exponential-linear or Weibull distributions for the failure probabilities. An algorithm based on a combination of Monte Carlo simulation and Cellular Automata is applied in order to evaluate the performance of a networked system made up of source nodes, user nodes and directed edges subject to failure and repair. Importance Sampling techniques are used for the estimation of the first- and total-order Differential Importance Measures through a single simulation of the system's “operational life”. All the output variables are computed simultaneously on the basis of the same sequence of involved components, event types (failure or repair) and transition times. The failure/repair probabilities are forced to be the same for all components; the transition times are either sampled from the unbiased probability distributions or can be forced, for instance, by ensuring the occurrence of at least one failure within the system's operational life. The algorithm allows different types of maintenance actions to be considered: corrective maintenance, which can be performed either immediately upon component failure or, for hidden failures that are not detected until an inspection, upon finding that the component has failed; and preventive maintenance, which can be performed at a fixed interval. It is possible to use a restoration factor to determine the age of the component after a repair or any other maintenance action.
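As an illustration of the quantity estimated above, here is a minimal Python sketch of one common closed form for the first-order Differential Importance Measure under the hypothesis of uniform percentage changes of the parameters, computed from the Birnbaum measures; the numbers in the example are purely illustrative and unrelated to the case study.

```python
# First-order Differential Importance Measure under uniform percentage changes:
# DIM_i = B_i * x_i / sum_j (B_j * x_j), where B_i is the Birnbaum measure
# (partial derivative of the risk/performance metric with respect to parameter x_i).

def differential_importance(birnbaum, params):
    contributions = [b * x for b, x in zip(birnbaum, params)]
    total = sum(contributions)
    return [c / total for c in contributions]

# Three components: hypothetical Birnbaum measures and failure probabilities.
dim = differential_importance(birnbaum=[0.30, 0.12, 0.05],
                              params=[0.01, 0.05, 0.10])
print(dim, sum(dim))  # the DIMs are additive and sum to 1 by construction
```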
Abstract:
In recent decades, the building materials and construction industry has contributed to a great extent to the high impact we have on our environment. As it is considered one of the key areas in which to act in order to significantly reduce our environmental footprint, it is widely believed that particular attention now has to be paid, and specific measures taken, to limit the use of non-renewable resources. The aim of this thesis is therefore to study and evaluate sustainable alternatives to commonly used building materials, mainly those based on ordinary Portland cement, and to find a viable path towards reducing CO2 emissions and promoting the re-use of waste materials. More specifically, this research explores different solutions for replacing cementitious binders in distinct fields of application, particularly where special and more restrictive requirements apply, such as the restoration and conservation of architectural heritage. Emphasis was thus placed on aspects and implications closely related to the concepts of non-invasiveness and environmental sustainability. The first part of the research addressed the study and development of sustainable inorganic matrices, based on lime putty, for the pre-impregnation and on-site binding of continuous carbon fiber fabrics for structural rehabilitation and heritage restoration. Moreover, with the aim of further limiting the exploitation of non-renewable resources, the synthesis of chemically activated silico-aluminate materials, such as metakaolin, ladle slag and fly ash, was successfully achieved. New sustainable binders were hence proposed as novel building materials, suitable for use as the primary component of construction and repair mortars, as bulk materials in high-temperature applications, or as matrices for high-toughness fiber-reinforced composites.
Abstract:
The skeleton is a dynamic tissue, capable of adapting to functional demands through remodelling phenomena and thanks to its peculiar regenerative capacity. These processes take place through the coordinated action of osteoclasts and osteoblasts. These cell populations cooperate in order to maintain the equilibrium that is essential to guarantee skeletal homeostasis. The loss of this equilibrium can lead to a decrease in bone mass and to a greater susceptibility to fractures, as occurs in osteoporosis. It is known that endocrine and paracrine factors play a crucial role in bone physiopathology. Recent data suggest that bone remodelling could be influenced by the nervous system. This hypothesis is supported by the presence, in the vicinity of bone, of sensory nerve fibres responsible for the release of several neuropeptides, among which is substance P. Moreover, the direct involvement of the nervous system in the maintenance of bone homeostasis has been demonstrated in animal models: rats subjected to denervation showed a loss of the equilibrium existing between osteoblasts and osteoclasts. For these reasons, research in this field has intensified in recent years, with the aim of understanding the role of neuropeptides in the osteogenic differentiation of mesenchymal precursors. Adult mesenchymal stromal cells (MSCs) are undifferentiated multipotent cells that reside predominantly in the bone marrow, but can also be isolated from adipose tissue, umbilical cord and dental pulp. In these districts, MSCs remain in a non-proliferative state until they are required for local processes of tissue repair and regeneration. When appropriately stimulated, MSCs can differentiate into several types of connective tissue, such as bone, cartilage and adipose tissue. The research activity was aimed at optimising an ex vivo expansion protocol and at evaluating the influence of substance P, a neuropeptide present in the sensory nerve endings in the vicinity of bone, on the process of osteogenic commitment.
Abstract:
The primary goals of this study were to develop a cell-free in vitro assay for the assessment of nonthermal electromagnetic field (EMF) bioeffects and to develop theoretical models in accord with current experimental observations. Based upon the hypothesis that EMF effects operate by modulating Ca2+/CaM binding, an in vitro nitric oxide (NO) synthesis assay was developed to assess the effects of a pulsed radiofrequency (PRF) signal used for the treatment of postoperative pain and edema. No effects of PRF on NO synthesis were observed. Effects of PRF on Ca2+/CaM binding were also assessed using a Ca2+-selective electrode, likewise yielding no EMF effect on Ca2+/CaM binding. However, a PRF effect was observed on the interaction of hemoglobin (Hb) with tetrahydrobiopterin, leading to the development of an in vitro Hb deoxygenation assay, which showed a reduction in the rate of Hb deoxygenation for exposures to both PRF and a static magnetic field (SMF). Structural studies using pyranine fluorescence, Gd3+ vibronic sideband luminescence and attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy were conducted in order to ascertain the mechanism of this EMF effect on Hb. The effect of SMF on Hb oxygen saturation (SO2) was also assessed under gas-controlled conditions. These studies showed no definitive changes in protein/solvation structure or SO2 under equilibrium conditions, suggesting the need for real-time instrumentation or other means of observing out-of-equilibrium Hb dynamics. Theoretical models were developed for EMF transduction, effects on ion binding, neuronal spike timing, and the dynamics of Hb deoxygenation. The EMF sensitivity and simplicity of the Hb deoxygenation assay suggest a new tool with which to further establish basic biophysical EMF transduction mechanisms. If an EMF-induced increase in the rate of deoxygenation can be demonstrated in vivo, then enhancement of oxygen delivery may be a new therapeutic method by which clinically relevant EMF-mediated enhancement of growth and repair processes can occur.