823 results for Pipeline Failure
Abstract:
Scale is defined as chemical compounds of inorganic origin, initially soluble in salt solutions, which may precipitate and accumulate in production columns and surface equipment. This work aimed to quantify the crystalline phases of scale through the Rietveld method. The study was conducted on scale derived from the production columns of wells under development and from pig receivers. After collection, the scale samples underwent separation of the inorganic and organic phases and were prepared for analysis at the X-ray Laboratory. The XRD and XRF techniques were used to identify and quantify the crystalline phases present in the deposits. The SEM technique was used to visualize the morphology of the scales and to assess their homogeneity after the milling process. XRD measurements were performed with and without milling and with and without the spinner accessory. The DBWStools program was used to quantify the crystalline phases. The refinement procedure first fit the instrumental parameters, then the structural parameters of the phases in the sample, and finally the parameters of the profile function. Among the diffraction patterns of the scale samples, the best measurements were those from samples that were milled and measured with the spinner accessory. The results showed that quantitative analysis of scale samples is feasible when a particular crystalline phase needs to be monitored in a well, pipeline, or oil field. In routine practice, quantification of phases by the Rietveld method is laborious because in many scales it was very difficult to identify the crystalline phases present.
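For reference, the phase quantification in such a Rietveld refinement follows from the refined scale factors through the standard Hill and Howard relation (stated here for context; the notation is generic, not taken from the thesis):

```latex
% Weight fraction W_i of crystalline phase i, obtained from the refined
% Rietveld scale factor S_i, the number of formula units per cell Z, the
% formula mass M, and the unit-cell volume V of each phase:
W_i = \frac{S_i \,(Z M V)_i}{\sum_j S_j \,(Z M V)_j}
```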
Abstract:
With the increasing demand for natural gas, the consequent growth of pipeline networks, and the importance of transporting and transferring oil products by pipeline, monitoring the internal corrosion of the pipe plays an important role in product quality and pipeline integrity. This study assesses corrosion in three pipelines that operate with different products, using gravimetric and electrical resistance techniques. Chemical analysis of residues originating in the pipelines helps to identify the mechanism of the corrosion process. Internal corrosion monitoring of the pipelines was carried out between 2009 and 2010 using weight-loss coupons and electrical resistance probes. The physico-chemical techniques of X-ray diffraction and X-ray fluorescence were used to characterize the corrosion products of the pipelines. The corrosion rate by weight loss was analyzed for every pipeline; localized corrosion rates were analyzed only for those that revealed corrosive attack. The corrosion potential was classified as low for the gas pipeline and ranged from low to severe for the oil and oil-product pipelines. The corrosion products were identified as iron carbonate, iron oxide, and iron sulfide.
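As an illustration of the gravimetric technique described above, a minimal sketch of the standard coupon weight-loss calculation (the ASTM G31/G1 form of the formula; the coupon values below are hypothetical, not the study's data):

```python
def corrosion_rate_mm_per_year(mass_loss_g, area_cm2, hours, density_g_cm3):
    """Corrosion rate from coupon weight loss (ASTM G31/G1 form):
    rate [mm/y] = K * W / (A * T * D), with K = 8.76e4 for these units."""
    K = 8.76e4
    return K * mass_loss_g / (area_cm2 * hours * density_g_cm3)

# Hypothetical carbon-steel coupon: 25 mg lost over a 90-day exposure.
rate = corrosion_rate_mm_per_year(mass_loss_g=0.025, area_cm2=11.4,
                                  hours=90 * 24, density_g_cm3=7.86)
print(f"{rate:.4f} mm/y")  # ~0.0113 mm/y: "low" on the NACE SP0775 scale
```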
Abstract:
Radiation in the first days of supernova explosions contains rich information about the physical properties of the exploding stars. In the past three years, I used the intermediate Palomar Transient Factory (iPTF) to conduct one-day-cadence surveys, in order to systematically search for infant supernovae. I show that the one-day cadences in these surveys were strictly controlled, that the real-time image subtraction pipeline managed to deliver transient candidates within ten minutes of images being taken, and that we were able to undertake follow-up observations with a variety of telescopes within hours of transients being discovered. So far iPTF has discovered over a hundred supernovae within a few days of explosion, forty-nine of which were spectroscopically classified within twenty-four hours of discovery.
Our observations of infant Type Ia supernovae provide evidence for both the single-degenerate and double-degenerate progenitor channels. On the one hand, a low-velocity Type Ia supernova, iPTF14atg, revealed a strong ultraviolet pulse within four days of its explosion. I show that the pulse is consistent with the emission expected from a collision between the supernova ejecta and a companion star, providing direct evidence for the single-degenerate channel. By comparing the distinct early-phase light curves of iPTF14atg to an otherwise similar event, iPTF14dpk, I show that the viewing-angle dependence of the supernova-companion collision signature is probably responsible for the difference between the early light curves. I also show evidence for a dark period between the supernova explosion and the first light of the radioactively powered light curve. On the other hand, a peculiar Type Ia supernova, iPTF13asv, revealed strong near-UV emission and an absence of iron in its spectra within the first two weeks after explosion, suggesting a stratified ejecta structure with iron-group elements confined to the slow-moving part of the ejecta. With its total ejecta mass estimated to exceed the Chandrasekhar limit, I show that the stratification and large mass of the ejecta favor the double-degenerate channel.
In a separate approach, iPTF found the first progenitor system of a Type Ib supernova, iPTF13bvn, in pre-explosion HST archival images. Independently, I used the early-phase optical observations of this supernova to constrain its progenitor radius to be no larger than several solar radii. I also used its early radio detections to derive a mass-loss rate of 3×10⁻⁵ solar masses per year for the progenitor right before the supernova explosion. These constraints on the physical properties of the iPTF13bvn progenitor provide a comprehensive data set against which to test theories of Type Ib supernovae. A recent HST revisit to the iPTF13bvn site two years after the supernova explosion has confirmed the progenitor system.
Moving forward, the next frontier in this area is to extend these single-object analyses to a large sample of infant supernovae. The upcoming Zwicky Transient Facility, whose fast survey speed is expected to yield one infant supernova every night, is well positioned to carry out this task.
Abstract:
One of the most exciting discoveries in astrophysics of the last decade is the sheer diversity of planetary systems. These include "hot Jupiters", giant planets so close to their host stars that they orbit once every few days; "super-Earths", planets with sizes intermediate between those of Earth and Neptune, of which no analogs exist in our own solar system; multi-planet systems with planets ranging from smaller than Mars to larger than Jupiter; planets orbiting binary stars; free-floating planets flying through the emptiness of space without any star; and even planets orbiting pulsars. Despite these remarkable discoveries, the field is still young, and there are many areas about which precious little is known. In particular, we do not know the planets orbiting the Sun-like stars nearest to our own solar system, and we know very little about the compositions of extrasolar planets. This thesis provides developments in those directions through two instrumentation projects.
The first chapter of this thesis concerns detecting planets in the Solar neighborhood using precision stellar radial velocities, also known as the Doppler technique. We present an analysis determining the most efficient way to detect planets considering factors such as spectral type, wavelengths of observation, spectrograph resolution, observing time, and instrumental sensitivity. We show that G and K dwarfs observed at 400-600 nm are the best targets for surveys complete down to a given planet mass and out to a specified orbital period. Overall we find that M dwarfs observed at 700-800 nm are the best targets for habitable-zone planets, particularly when including the effects of systematic noise floors caused by instrumental imperfections. Somewhat surprisingly, we demonstrate that a modestly sized observatory, with a dedicated observing program, is up to the task of discovering such planets.
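The quantity such survey trade-off studies must ultimately resolve is the stellar reflex radial-velocity semi-amplitude; for completeness, its standard form (generic notation, not reproduced from the thesis):

```latex
% Radial-velocity semi-amplitude K induced by a planet of mass M_p with
% orbital period P, inclination i, and eccentricity e, orbiting a star
% of mass M_*:
K = \left(\frac{2\pi G}{P}\right)^{1/3}
    \frac{M_p \sin i}{(M_* + M_p)^{2/3}}
    \frac{1}{\sqrt{1 - e^2}}
% Jupiter induces K ~ 12.5 m/s on the Sun; Earth induces only ~0.09 m/s.
```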
We present just such an observatory in the second chapter, called the "MINiature Exoplanet Radial Velocity Array," or MINERVA. We describe the design, which uses a novel multi-aperture approach to increase stability and performance through lower system étendue, while keeping costs and time to deployment down. We present calculations of the expected planet yield, along with data showing system performance from our testing and development at Caltech's campus. We also present the motivation, design, and performance of a fiber coupling system for the array, critical for efficiently and reliably bringing light from the telescopes to the spectrograph. We finish by presenting the current status of MINERVA, operational at Mt. Hopkins Observatory in Arizona.
The second part of this thesis concerns a very different method of planet detection: direct imaging, which involves the discovery and characterization of planets by collecting and analyzing their light. Directly analyzing planetary light is the most promising way to study planetary atmospheres, formation histories, and compositions. Direct imaging is extremely challenging, as it requires a high-performance adaptive optics system to correct the atmospheric blurring of the parent star's point-spread function, a coronagraph to suppress stellar diffraction, and image post-processing to remove non-common-path "speckle" aberrations that can overwhelm any planetary companions.
To this end, we present the "Stellar Double Coronagraph," or SDC, a flexible coronagraphic platform for use with the 200-inch Hale Telescope. It has two focal and pupil planes, allowing for a number of different observing modes, including multiple vortex phase masks in series for improved contrast and inner working angle behind the obscured aperture of the telescope. We present the motivation, design, performance, and data reduction pipeline of the instrument. In the following chapter, we present some early science results, including the first image of a companion to the star delta Andromedae, which had been previously hypothesized but never seen.
A further chapter presents a wavefront control code developed for the instrument, based on the technique of "speckle nulling," which removes optical aberrations from the system via the deformable mirror of the adaptive optics system. This code allows for improved contrasts and inner working angles, and was written in a modular style so as to be portable to other high-contrast imaging platforms. We present its performance on optical, near-infrared, and thermal infrared instruments at the Palomar and Keck telescopes, showing how it can improve contrasts by a factor of a few in fewer than ten iterations.
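As a toy illustration of how one speckle-nulling iteration works (a self-contained numerical sketch, not the instrument code; the probe amplitude, phases, and gain are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Unknown coherent speckle field at one focal-plane pixel (toy model).
E_speckle = 0.8 * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))

def intensity(probe):
    """Detector intensity when the DM injects a known probe field."""
    return np.abs(E_speckle + probe) ** 2

# Step a fixed-amplitude DM probe through four phases.
a = 0.5
phases = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
I = [intensity(a * np.exp(1j * p)) for p in phases]

# I(theta) = |E|^2 + a^2 + 2a * [Re(E) cos(theta) + Im(E) sin(theta)],
# so the four measurements give the speckle field directly:
E_est = (I[0] - I[2]) / (4 * a) + 1j * (I[1] - I[3]) / (4 * a)

# Inject the anti-speckle; a gain below 1 hedges against estimation error.
gain = 0.9
print("before:", intensity(0.0), "after:", intensity(-gain * E_est))
```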
One of the large challenges in direct imaging is sensing and correcting the electric field in the focal plane to remove scattered light that can be much brighter than any planets. In the last chapter, we present a new method of focal-plane wavefront sensing, combining a coronagraph with a simple phase-shifting interferometer. We present its design and implementation on the Stellar Double Coronagraph, demonstrating its ability to create regions of high contrast by measuring and correcting for optical aberrations in the focal plane. Finally, we derive how it is possible to use the same hardware to distinguish companions from speckle errors using the principles of optical coherence. We present results observing the brown dwarf HD 49197b, demonstrating the ability to detect it despite it being buried in the speckle noise floor. We believe this is the first detection of a substellar companion using the coherence properties of light.
Abstract:
Petroleum production pipeline networks are inherently complex, usually decentralized systems. Strict operational constraints are applied in order to prevent serious problems such as environmental disasters or production losses. This paper describes an intelligent system to support decisions in the operation of these networks, proposing a staggering scheme for the pumps of the transfer stations that compose them. The intelligent system is formed by interconnected blocks that process the information and generate suggestions for the operator. The main block of the system uses fuzzy logic to provide rule-based control, incorporating knowledge from experts. Tests performed in a simulation environment provided good results, indicating the applicability of the system in a real oil production environment. The staggering scheme proposed by the system allows prioritization of transfers in the network and flow scheduling.
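As an illustration of the kind of fuzzy, rule-based block the abstract describes (a minimal sketch with invented membership functions and rules; the real system's variables and rule base are not given in the abstract):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def suggest_pumps_on(tank_level_pct):
    """Map a transfer station's tank level to a suggested number of
    active pumps using three fuzzy rules and a weighted-average
    (Sugeno-style) defuzzification over singleton outputs."""
    low = tri(tank_level_pct, -1.0, 0.0, 50.0)
    med = tri(tank_level_pct, 25.0, 50.0, 75.0)
    high = tri(tank_level_pct, 50.0, 100.0, 101.0)
    # Rules: IF level is LOW THEN run 1 pump; MEDIUM -> 2; HIGH -> 3.
    weights, pumps = [low, med, high], [1.0, 2.0, 3.0]
    return np.dot(weights, pumps) / sum(weights)

print(round(suggest_pumps_on(82.0)))  # high tank level -> run 3 pumps
```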
Abstract:
Biotechnological approaches currently represent a challenge in the oil industry, in particular with regard to metal structures affected by electrochemical corrosion processes and by the interference of microorganisms (biocorrosion), which alter the kinetics of the environment/metal interface. The drive to reduce economic and environmental impacts has led to the use of natural products as an alternative to toxic synthetic inhibitors. This study pursues green chemistry by evaluating stem bark extracts (EHC, hydroalcoholic extract) and leaf extracts (ECF, chloroform extract) of the plant species Croton cajucara Benth as corrosion inhibitors. In addition, the corrosion inhibition effectiveness of the bioactive trans-clerodane dehydrocrotonin (DCTN), isolated from the stem bark of this Croton, was also evaluated. For this purpose, AISI 1020 carbon steel was immersed in saline media (3.5% NaCl) in the presence and absence of a microorganism recovered from a pipeline oil sample. Corrosion inhibition efficiency and its mechanisms were investigated by linear sweep voltammetry and electrochemical impedance spectroscopy. Culture-dependent and molecular biology techniques were used to characterize and identify the bacterial species present in the oil samples. In an abiotic environment, the tested natural products EHC, ECF, and DCTN (with DMSO as solvent) presented corrosion inhibition efficiencies of 57.6% (500 ppm), 86.1% (500 ppm), and 54.5% (62.5 ppm), respectively. Adsorption analysis showed that EHC best fit the Frumkin isotherm and ECF the Temkin isotherm. The EHC extract (250 ppm) dissolved in a polar microemulsion system (MES-EHC) showed a significant maximum inhibition efficiency (93.8%), fitting the Langmuir isotherm. In the presence of the isolated Pseudomonas sp., EHC and ECF were able to form eco-compatible organic films with anti-corrosive properties.
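For context, inhibition efficiencies like those quoted above are commonly computed from charge-transfer resistances measured by electrochemical impedance spectroscopy; a minimal sketch (the resistance values are hypothetical, chosen only to reproduce the 86.1% figure):

```python
def inhibition_efficiency(R_ct_blank, R_ct_inhibited):
    """IE% from charge-transfer resistances measured by EIS:
    IE = (1 - R_ct,blank / R_ct,inhibited) * 100."""
    return (1.0 - R_ct_blank / R_ct_inhibited) * 100.0

# Hypothetical values: bare steel vs. steel with 500 ppm of extract.
print(f"{inhibition_efficiency(120.0, 865.0):.1f} %")  # ~86.1 %
```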
Abstract:
Analyzing large-scale gene expression data is a labor-intensive and time-consuming process. To make data analysis easier, we developed a set of pipelines for rapid processing and analysis of poplar gene expression data for knowledge discovery. Of the pipelines developed, the differentially expressed genes (DEGs) pipeline is designed to identify biologically important genes that are differentially expressed at one or multiple time points or conditions. The pathway analysis pipeline was designed to identify differentially expressed metabolic pathways. The protein domain enrichment pipeline can identify the protein domains enriched in the DEGs. Finally, the Gene Ontology (GO) enrichment analysis pipeline was developed to identify the GO terms enriched in the DEGs. Our pipeline tools can analyze both microarray and high-throughput sequencing gene expression data, which are obtained by two different technologies. Microarray technology measures gene expression levels via microarray chips, collections of microscopic DNA spots attached to a solid (glass) surface, whereas high-throughput sequencing, also called next-generation sequencing, measures gene expression levels by directly sequencing mRNAs and obtaining each mRNA's copy number in cells or tissues. We also developed a web portal (http://sys.bio.mtu.edu/) to make all pipelines publicly available so that users can analyze their own gene expression data. In addition to the analyses mentioned above, the portal can also perform GO hierarchy analysis, i.e., construct GO trees from a list of GO terms given as input.
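A minimal sketch of the core statistical test a DEG pipeline of this kind typically performs (illustrative only: Welch's t-test per gene with Benjamini-Hochberg correction and a fold-change cutoff; the portal's actual method is not specified in the abstract):

```python
import numpy as np
from scipy import stats

def find_degs(control, treated, alpha=0.05, min_log2_fc=1.0):
    """Flag differentially expressed genes between two log2-scale
    expression matrices (genes x replicates) using Welch's t-test
    with Benjamini-Hochberg FDR control and a fold-change cutoff."""
    _, p = stats.ttest_ind(control, treated, axis=1, equal_var=False)
    log2_fc = treated.mean(axis=1) - control.mean(axis=1)
    # Benjamini-Hochberg: adjusted p = p * n / rank, made monotone.
    order = np.argsort(p)
    raw = p[order] * len(p) / np.arange(1, len(p) + 1)
    adj = np.empty_like(p)
    adj[order] = np.minimum.accumulate(raw[::-1])[::-1]
    return (adj < alpha) & (np.abs(log2_fc) >= min_log2_fc)

rng = np.random.default_rng(1)
control = rng.normal(8.0, 0.3, (1000, 6))   # 1000 genes, 6 replicates
treated = control + rng.normal(0.0, 0.3, control.shape)
treated[:30] += 3.0                         # spike in 30 true DEGs
print(find_degs(control, treated).sum())    # recovers ~30 spiked genes
```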
Abstract:
Given the recent advent of NGS technologies, capable of sequencing entire human genomes in reduced time and at reduced cost, the ability to extract information from the data plays a fundamental role in the advancement of research. The computational problems associated with such analyses currently fall within the topic of Big Data, with databases containing several types of experimental data of ever-increasing size. This thesis deals with the implementation and benchmarking of the QDANet PRO algorithm, developed by the Biophysics group of the University of Bologna: the method processes high-dimensional data to extract a low-dimensional signature of features with high classification performance, through an analysis pipeline that includes dimensionality reduction algorithms. The method also generalizes to the analysis of non-biological data that are nevertheless characterized by high volume and complexity, typical traits of Big Data. The QDANet PRO algorithm evaluates the performance of all possible pairs of features, estimating their discriminating power with a quadratic naive Bayes classifier and then ranking them. Once a performance threshold is selected, a network of the features is built, from which the connected components are determined. Each subgraph is analyzed separately and reduced using methods based on network theory until the final signature is extracted. The method, previously tested with positive results on datasets available to the research group, was compared against results obtained on omics databases available in the literature, which constitute a reference in the field, and against existing algorithms that perform similar tasks. To reduce computation time, the algorithm was implemented in C++ on HPC systems, with the most critical parts parallelized using OpenMP libraries.
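A condensed sketch of the pairwise-scoring and network-construction steps described above (using scikit-learn's Gaussian naive Bayes, a quadratic-boundary classifier, and networkx as stand-ins; an illustration of the idea, not the thesis's C++ implementation):

```python
import itertools
import networkx as nx
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def qdanet_like_signature(X, y, threshold=0.75):
    """Score every feature pair with a Gaussian naive Bayes classifier,
    keep pairs scoring above a performance threshold, and return the
    connected components of the resulting feature network."""
    G = nx.Graph()
    for i, j in itertools.combinations(range(X.shape[1]), 2):
        score = cross_val_score(GaussianNB(), X[:, [i, j]], y, cv=3).mean()
        if score >= threshold:
            G.add_edge(i, j, weight=score)
    return list(nx.connected_components(G))

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 12))
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # only features 0 and 3 are informative
print(qdanet_like_signature(X, y))        # components should contain 0 and 3
```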
Abstract:
Event extraction from texts aims to detect structured information such as what happened, to whom, where, and when. Event extraction and visualization are typically treated as two separate tasks. In this paper, we propose a novel approach based on probabilistic modelling to jointly extract and visualize events from tweets, where each task benefits from the other. We model each event as a joint distribution over named entities, a date, a location, and event-related keywords. Moreover, both tweets and event instances are associated with coordinates in the visualization space. The manifold assumption, that the intrinsic geometry of tweets is a low-rank, non-linear manifold within the high-dimensional space, is incorporated into the learning framework through a regularization term. Experimental results show that the proposed approach deals effectively with both event extraction and visualization and performs remarkably better than both a state-of-the-art event extraction method and a pipeline approach to event extraction and visualization.
Abstract:
Given the continued emergence of resistance to all known classes of antibiotics, a paradigm shift in approaches toward antifungal therapeutics is required. Well characterized in a broad spectrum of bacterial and fungal pathogens, biofilms are a key factor in limiting the effectiveness of conventional antibiotics. Therefore, therapeutics such as small molecules that prevent or disrupt biofilm formation would render pathogens susceptible to clearance by existing drugs. This is the first report describing the effect of the Pseudomonas aeruginosa alkylhydroxyquinolone interkingdom signal molecules 2-heptyl-3-hydroxy-4-quinolone and 2-heptyl-4-quinolone on biofilm formation in the important fungal pathogen Aspergillus fumigatus. Decoration of the anthranilate ring on the quinolone framework resulted in significant changes in the capacity of these chemical messages to suppress biofilm formation. Addition of methoxy or methyl groups at the C5–C7 positions led to retention of anti-biofilm activity, in some cases dependent on the alkyl chain length at position C2. In contrast, halogenation at either the C3 or C6 positions led to loss of activity, with one notable exception. Microscopic staining provided key insights into the structural impact of the parent and modified molecules, identifying lead compounds for further development.
Abstract:
The primary objective of this work was to establish a technology roadmap for the application of Carbon Capture, Utilization and Sequestration (CCUS) technologies in Portugal. To this end, the largest stationary industrial sources of CO2 emissions were identified, adopting as criteria a minimum of 1×10⁵ t CO2/year and a scope limited to the mainland territory. Based on the information collected, which refers to the most recent official data (2013), it was estimated that the volume of industrial CO2 emissions that could be captured in Portugal corresponds to about 47% of total industrial emissions, originating from three industrial sectors: cement production, paper pulp production, and coal-fired power plants. Most of the large industrial emission sources are located along the coast, concentrated between Aveiro and Sines. Given the country's geographical constraints and, above all, the advantage of an existing natural gas pipeline network with its associated support infrastructure, the most favorable scenario for transporting the captured CO2 was taken to be the creation of a dedicated CO2 pipeline system. As a criterion for matching CO2 emission sources to potential sites for geological storage of the captured streams, a maximum distance of 100 km was adopted, considered adequate given the size of the national territory and the characteristics of the national industrial fabric. The CO2 capture technologies available, both commercially and at advanced demonstration levels, were reviewed, and an exploratory analysis was carried out on the suitability of these different capture methods for each of the industrial sectors previously identified as having capturable CO2 emissions. With a view to the best process integration, this preliminary analysis took into account the characteristics of the gas mixtures as well as the corresponding industrial context and the production process from which they originate. The possibilities for industrial utilization of the CO2 captured in the country were treated generically, since identifying real opportunities for the use of captured CO2 streams requires matching the actual CO2 needs of potential industrial users, which in turn requires prior characterization of the properties of those streams. This is a very specific type of analysis that presupposes the mutual interest of different stakeholders: CO2 emitters, transport operators and, above all, potential CO2 users, for applications such as a raw material for the synthesis of compounds, a supercritical extraction solvent in the food or pharmaceutical industries, a pH-correction agent in effluent treatment, biofixation by photosynthesis, or other possible uses identified for the captured CO2. The final stage of this study evaluated the possibilities for geological storage of the captured CO2 and involved identifying, in the national sedimentary basins, geological formations with characteristics recognized as good indications for storing CO2 permanently and safely. The methodology recommended by international organizations was followed, applying well-established selection and safety criteria to the national situation.
The suitability of the pre-selected geological formations for CO2 storage will have to be confirmed by additional studies that complement the existing data on their geological characteristics and, more importantly, by laboratory tests and CO2 injection trials that can provide concrete information with which to estimate the CO2 sequestration and retention capacity of these formations and to establish geological storage models that make it possible to identify and estimate, concretely and objectively, the risks associated with CO2 injection and storage.
Abstract:
The migratory endoparasitic nematode Bursaphelenchus xylophilus, the causal agent of pine wilt disease, has phytophagous and mycetophagous phases during its life cycle. This highly unusual feature distinguishes it from other plant-parasitic nematodes and requires profound changes in biology between modes. During the phytophagous stage, the nematode migrates within pine trees, feeding on the contents of parenchymal cells. Like other plant pathogens, B. xylophilus secretes effectors from pharyngeal gland cells into the host during infection. We provide the first description of changes in the morphology of these gland cells between juvenile and adult life stages. Using a comparative transcriptomics approach and an effector identification pipeline, we identify numerous novel parasitism genes that may be important in mediating the interactions of B. xylophilus with its host. In-depth characterization of all parasitism genes using in situ hybridization reveals two major categories of detoxification proteins, expressed specifically in either the pharyngeal gland cells or the digestive system. These data suggest that B. xylophilus deploys effectors in a multilayer detoxification strategy in order to protect itself from host defence responses during phytophagy.
Abstract:
This study develops and applies a methodology suited to the reality of the country for zoning the vulnerability of structures and of the population exposed to a technological hazard, in this case a possible emergency caused by a fuel spill from the oil-products pipeline. One of the aims of the study is to zone this vulnerability as an element to be considered in determining risk, in drawing up municipal contingency plans, and in territorial planning proposals.
Keywords: pipeline (poliducto), RECOPE, vulnerability, risk, fuel spill, technological hazard.