823 results for Pipeline Failure


Relevance:

10.00%

Publisher:

Abstract:

In this project, a preliminary design of a dehydration unit for domestic gas is outlined. The unit under study belongs to the Gorgon project, currently being developed by Chevron on Barrow Island, Australia. To produce a proper design of the unit, the characteristics of the natural gas being extracted are detailed, together with the specifications of the pipeline that the gas will supply. Different techniques for dehydrating the gas are then evaluated; the technique that best fits this project is glycol absorption, which is therefore selected. More precisely, the most suitable type of glycol for this particular unit is triethylene glycol (TEG), since it best matches the conditions of the project. Once the method is chosen, a simulation is carried out to determine the number of stages required for the correct operation of the unit, the glycol circulation rate and its purity. Its operating pressure and temperature, and from these the dimensions of the unit, also need to be estimated. In addition, pressures and temperatures are estimated for the glycol regeneration process, together with the dimensions of some of its units, and the pressure and temperature at which the natural gas leaves the dehydration unit are determined. Both the compression needed to secure the flow through the pipeline and the resulting pressure at reception are also studied. Finally, an economic study is carried out to conclude whether or not this specific project is feasible.
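
As an illustration of the kind of sizing estimate involved (a common rule of thumb, not the simulation actually used in the project), TEG units are often sized for roughly 2 to 5 US gallons of glycol circulated per pound of water removed; the gas flow and water contents below are hypothetical placeholders.

    # Rough TEG circulation estimate (rule-of-thumb sketch; illustrative numbers only).
    GAL_TEG_PER_LB_WATER = 3.0  # commonly quoted rule of thumb: 2-5 gal TEG per lb H2O removed

    def teg_circulation_rate(gas_flow_mmscfd, water_in_lb_mmscf, water_out_lb_mmscf):
        """Return the TEG circulation rate in gal/hr for a given water-removal duty."""
        water_removed_lb_per_day = gas_flow_mmscfd * (water_in_lb_mmscf - water_out_lb_mmscf)
        return water_removed_lb_per_day * GAL_TEG_PER_LB_WATER / 24.0

    # Hypothetical example: 100 MMscfd of gas dried from 90 to 7 lb H2O/MMscf.
    print(round(teg_circulation_rate(100, 90, 7), 1), "gal TEG per hour")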

Relevance:

10.00%

Publisher:

Abstract:

The structural connectivity of the brain is considered to encode species-wise and subject-wise patterns that will unlock large areas of understanding of the human brain. Currently, diffusion MRI of the living brain makes it possible to map the microstructure of tissue and to track the pathways of the fiber bundles connecting cortical regions across the brain. These bundles are summarized in a network representation called the connectome, which is analyzed using graph theory. Extracting the connectome from diffusion MRI requires a long processing flow including image enhancement, reconstruction, segmentation, registration, diffusion tracking, etc. Although a concerted effort has been devoted to the definition of standard pipelines for connectome extraction, it is still crucial to define quality assessment protocols for these workflows. The definition of quality control protocols is hindered by the complexity of the pipelines under test and the absolute lack of gold standards for diffusion MRI data. Here we characterize the impact on structural connectivity workflows of the geometrical deformation typically shown by diffusion MRI data due to the inhomogeneity of magnetic susceptibility across the imaged object. We propose an evaluation framework to compare the existing methodologies for correcting these artifacts, including whole-brain realistic phantoms. Additionally, we design and implement an image segmentation and registration method that avoids the correction task altogether and enables processing in the native space of the diffusion data. We release PySDCev, an evaluation framework for the quality control of connectivity pipelines, specialized in the study of susceptibility-derived distortions. In this context, we propose Diffantom, a whole-brain phantom that provides a solution to the lack of gold-standard data. The three correction methodologies under comparison performed reasonably, and it is difficult to determine which method is the most advisable. We demonstrate that susceptibility-derived correction is necessary to increase the sensitivity of connectivity pipelines, at the cost of specificity. Finally, with the registration and segmentation tool called regseg, we demonstrate how the problem of susceptibility-derived distortion can be overcome, allowing data to be used in their original coordinates. This is crucial to increase the sensitivity of the whole pipeline without any loss in specificity.
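
As a minimal illustration of the graph-theoretical analysis mentioned above (not part of PySDCev or of the pipelines under test), a connectome stored as a weighted adjacency matrix of streamline counts can be loaded into a graph library and summarized with standard network metrics; the file name and matrix format are assumptions.

    # Minimal sketch: summarizing a structural connectome with graph metrics (illustrative only).
    import numpy as np
    import networkx as nx

    # Assumed input: an N x N symmetric matrix of streamline counts between cortical regions.
    adjacency = np.loadtxt("connectome_streamline_counts.txt")  # hypothetical file name
    graph = nx.from_numpy_array(adjacency)

    density = nx.density(graph)                                  # fraction of possible edges present
    strength = dict(graph.degree(weight="weight"))               # node strength (summed streamline counts)
    clustering = nx.average_clustering(graph, weight="weight")   # weighted clustering coefficient

    print(f"density={density:.3f}, mean strength={np.mean(list(strength.values())):.1f}, "
          f"clustering={clustering:.3f}")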

Relevance:

10.00%

Publisher:

Abstract:

There are different methods for the construction of outfall pipelines, all of which have to solve the problem of placing a tube at a known location on the sea bed. This process sometimes has to be carried out in difficult conditions, such as waves, currents or depths greater than 30 metres, beyond which a diver cannot safely go. The placement of the pipeline must also be performed without any damage to the tube, so the deflections and stresses in the structure must be closely controlled. The importance of this control should not be underestimated, because damage during construction would imply a very difficult and expensive repair, which should be avoided with a proper design of the construction process. This paper focuses on the analysis of the tube during its placement according to a well-known construction method in which the tube is laid from a boat, where all the connections between consecutive tube segments are made and the whole process is controlled. This method is used for outfall as well as offshore pipelines, and it is described in Section 2.
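
For reference (a standard elastic-beam relation, not a result taken from the paper), the bending stress induced in a tube bent to a local radius of curvature R during laying is often estimated as

    \sigma_b = \frac{E\,D}{2R}

where E is the Young's modulus of the tube material and D its outer diameter; keeping this stress, together with the axial and pressure stresses, below the allowable value is what the control of deflections during placement must guarantee.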

Relevance:

10.00%

Publisher:

Abstract:

New cloud-based technologies, the Internet of Things and "as a service" trends are based on storing and processing data on remote servers. To guarantee the security of that data, both while it is communicated to the remote server and while it is handled there, different cryptographic schemes are used. Traditionally, these cryptographic schemes focus on encrypting the data whenever it does not need to be processed, i.e. during communication and storage. However, once the remote server has to operate on the encrypted data, it must first decrypt it, at which point an intruder on that server could access the users' sensitive data. Moreover, the whole traditional scheme rests on the assumption that the server is trustworthy, since it is given enough credentials to decipher the data in order to process it. Fully homomorphic encryption (FHE) schemes arise as a possible solution to these problems. A fully homomorphic scheme does not require decrypting the data to operate on it; instead, it performs the operations over the ciphertexts, maintaining a homomorphism between the ciphertext ring and the plaintext ring. In this way, an intruder in the system could steal nothing more than ciphertexts, making it impossible to obtain the sensitive data without also stealing the encryption keys. However, homomorphic encryption (HE) schemes are currently drastically slower than classical encryption schemes: one operation in the plaintext ring can entail numerous operations in the ciphertext ring. For this reason, different approaches are emerging on how to accelerate these schemes for practical use. One of these proposals is the use of High-Performance Computing (HPC) with FPGAs (Field Programmable Gate Arrays). An FPGA is a semiconductor device containing logic blocks whose interconnection and functionality can be reprogrammed; it is an integrated circuit designed to be configured after manufacturing, hence "field-programmable". Compiling for an FPGA generates a hardware circuit specific to the given algorithm, instead of issuing instructions to a universal machine, which is a great advantage over CPUs. FPGAs therefore differ clearly from CPUs:
- Pipelined architecture, which allows successive outputs to be obtained in constant time.
- The possibility of having multiple pipes for concurrent/parallel computation.
Accordingly, in this project:
- We present different implementations of homomorphic schemes on FPGA-based systems.
- We analyse and study the advantages and drawbacks of these cryptographic schemes on FPGA-based systems.
- We compare the implementations with related work.
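
To make the homomorphism between plaintext and ciphertext operations concrete, the following is a toy, insecure sketch in the spirit of integer-based somewhat-homomorphic schemes; it is not the scheme implemented on the FPGAs in this project. Bits are encrypted with noise, and adding or multiplying ciphertexts corresponds to XOR and AND of the hidden bits while the noise grows.

    # Toy somewhat-homomorphic encryption over the integers (insecure, illustrative only).
    import random

    P = 10_000_019  # odd secret key (toy size; real schemes use far larger parameters)

    def encrypt(bit):
        q = random.randint(1, 1_000)   # random multiple of the key
        r = random.randint(0, 10)      # small noise
        return q * P + 2 * r + bit

    def decrypt(c):
        return (c % P) % 2             # correct while the accumulated noise stays well below P

    m1, m2 = 1, 0
    c1, c2 = encrypt(m1), encrypt(m2)

    # Operations on ciphertexts act on the hidden bits.
    assert decrypt(c1 + c2) == (m1 ^ m2)   # ciphertext addition       -> XOR of plaintext bits
    assert decrypt(c1 * c2) == (m1 & m2)   # ciphertext multiplication -> AND of plaintext bits
    print("homomorphic XOR and AND verified on encrypted bits")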

Relevance:

10.00%

Publisher:

Abstract:

This thesis is the result of a project whose objective has been to develop and deploy a dashboard for sentiment analysis of football on Twitter, based on web components and D3.js. To do so, a visualisation server has been developed to present the data obtained from Twitter and analysed with Senpy. This visualisation server has been built with Polymer web components and D3.js. Data mining has been done with a pipeline between Twitter, Senpy and ElasticSearch. Luigi has been used in this process because it helps build complex pipelines of batch jobs, so it has analysed all the tweets and stored them in ElasticSearch. D3.js has then been used to create interactive widgets that make the data easily accessible; these widgets allow users to interact with them and filter the data most interesting to them. Polymer web components have been used to build the dashboard according to Google's Material Design and to show dynamic data in the widgets. As a result, this project allows an extensive analysis of the social network, pointing out the influence of players and teams and the emotions and sentiments that emerge over a period of time.
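
A minimal sketch of the kind of Luigi batch pipeline described above (the Senpy endpoint, file names and field names are assumptions, not the project's actual configuration); indexing the annotated tweets into ElasticSearch would follow as a further task of the same shape.

    # Illustrative Luigi pipeline: fetch tweets, then annotate them with a Senpy-like service.
    import json
    import luigi
    import requests

    class FetchTweets(luigi.Task):
        query = luigi.Parameter(default="football")

        def output(self):
            return luigi.LocalTarget(f"tweets_{self.query}.json")

        def run(self):
            # Placeholder for the Twitter API call used in the real pipeline.
            tweets = [{"id": 1, "text": "What a great match!"}]
            with self.output().open("w") as f:
                json.dump(tweets, f)

    class AnalyzeSentiment(luigi.Task):
        query = luigi.Parameter(default="football")

        def requires(self):
            return FetchTweets(query=self.query)

        def output(self):
            return luigi.LocalTarget(f"sentiment_{self.query}.json")

        def run(self):
            with self.input().open() as f:
                tweets = json.load(f)
            # Hypothetical Senpy endpoint returning sentiment annotations for each tweet.
            analyzed = [requests.post("http://localhost:5000/api/sentiment",
                                      data={"input": t["text"]}).json() for t in tweets]
            with self.output().open("w") as f:
                json.dump(analyzed, f)

    if __name__ == "__main__":
        luigi.build([AnalyzeSentiment()], local_scheduler=True)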

Relevance:

10.00%

Publisher:

Abstract:

Gas and petroleum products are transported through pipelines, known as oil or gas pipelines, which require high levels of mechanical strength and corrosion resistance combined with good fracture toughness and fatigue resistance. Alloying elements such as Ti, V and Nb are added to these steels to reach these strength levels after thermomechanical processing of the plates used to manufacture the pipes, and the API 5L standard of the American Petroleum Institute (API) is used to classify these steels. The addition of alloying elements, combined with thermomechanical processing, aims at grain refinement of the austenitic microstructure, which is inherited by the resulting ferritic structure. Brazil holds the world's largest niobium reserves, and Nb has been reported to refine the microstructure more efficiently than other elements such as V and Ti. In this work two steels, designated Normal Nb and High Nb, were studied. The API standard requires that the combined concentration of niobium, vanadium and titanium in the steel be below 0.15%. The high-Nb steel contains 0.107%, against 0.082% for the normal composition, so both meet the value specified by the API standard. However, the steels are intended for pipeline use by PETROBRÁS, which imposes its own limits on microalloying elements for pipeline steels. Studies were therefore carried out to verify whether the tensile strength, ductility, impact toughness and fatigue crack growth resistance comply with API 5L grade X70 and agree with results that other researchers have reported for steels of this class. In addition, since pipeline strings are assembled by girth welding the pipes to one another, the fatigue study was extended to the weld metal and heat-affected zone (HAZ). It is concluded that the Nb-modified API 5L X70 steel, produced by the process developed by ArcelorMittal - Tubarão, presents tensile strength and ductility, impact resistance and fatigue crack growth resistance similar to API 5L X70 steels with Nb = 0.06 wt% and to those reported in the literature with Nb+Ti+V < 0.15 wt%. The base metal, weld metal and heat-affected zone showed similar da/dN vs. ΔK curves, with the Paris-equation material parameters m and C in the ranges 3.3 - 4.2 and 1.3x10^-10 - 5.0x10^-10 [(mm/cycle)/(MPa·m^1/2)^m], respectively.
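
For reference, the Paris equation mentioned above relates the fatigue crack growth rate per cycle, da/dN, to the stress-intensity factor range ΔK through the material parameters C and m:

    \frac{da}{dN} = C\,(\Delta K)^{m}

Similar da/dN vs. ΔK curves for base metal, weld metal and heat-affected zone therefore mean that the fitted C and m values fall within the narrow ranges quoted above.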

Relevance:

10.00%

Publisher:

Abstract:

Bentonite clay is widely used in the transport of solids produced during well drilling. The objective of this work was to study the flow of bentonite-water mixtures and to determine their rheological properties and the hydraulic parameters useful in the design of solid-liquid pumping installations. A closed pipe loop was assembled to obtain head-loss data and velocity profiles. Tests were carried out with bentonite-water mixtures at various concentrations, some of them carrying sand. It was observed that the rheology of the bentonite-water mixture is best described by the Herschel-Bulkley formulation for non-Newtonian fluids. The friction coefficient describing the head loss of the bentonite-water mixture observed in the laboratory pipes falls between the predictions of Tomita (1959) and Szilas et al. (1981). The velocity distribution over the pipe cross-section is best approximated by the Bogue-Metzner (1963) equation.
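
For reference, the Herschel-Bulkley model referred to above expresses the shear stress τ as a function of the shear rate γ̇:

    \tau = \tau_0 + K\,\dot{\gamma}^{\,n}

where τ0 is the yield stress, K the consistency index and n the flow behaviour index; the model reduces to a Bingham plastic for n = 1 and to a power-law fluid for τ0 = 0.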

Relevance:

10.00%

Publisher:

Abstract:

Neglected tropical diseases (NTDs) cause immense suffering to those affected and in many cases can be fatal. They represent a devastating obstacle to health and remain a serious impediment to poverty reduction and socioeconomic development. Among the 17 diseases in this group, leishmaniasis, including cutaneous leishmaniasis, stands out because of its high incidence, the cost of treatment and the complications arising from co-infection. Even more troubling, investment in control, treatment and, above all, innovation in new products is still very limited. Academia currently plays an important role in the fight against these diseases through the search for new therapeutic targets and for new molecules with therapeutic potential. In this context, the goal of this project was to implement a platform for identifying molecules with leishmanicidal activity. As the therapeutic target we chose the enzyme dihydroorotate dehydrogenase from Leishmania (Viannia) braziliensis (LbDHODH), an enzyme of central importance in the de novo synthesis of pyrimidine nucleotides, whose main function is to convert dihydroorotate into orotate. This enzyme was successfully cloned, expressed and purified in our laboratory. The studies allowed the enzyme to be characterized kinetically and structurally by X-ray crystallography. The first inhibition assays were carried out with orotate, the product of catalysis and a natural inhibitor of the enzyme. The inhibitory potential of orotate was measured by estimating the IC50, and the protein-ligand interaction was characterized by crystallographic studies. In silico and in vitro strategies were used in the search for ligands, through which inhibitors of LbDHODH were identified. Cross-validation assays using the homologous human enzyme identified the ligands with the highest selectivity index, whose leishmanicidal potential was evaluated in vitro against the promastigote and amastigote forms of Leishmania braziliensis. This project led to the identification of a class of ligands with selective activity against LbDHODH, which will be used in the design of future generations of molecules with therapeutic activity for the treatment of leishmaniasis. In addition, the optimized assay platform will allow new groups of molecules to be evaluated, an important strategy in the search for new treatments against leishmaniasis.
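
As an illustration of how an IC50 is typically estimated from a dose-response curve (a generic procedure, not necessarily the exact protocol used in this work), the fractional remaining enzyme activity can be fitted with a Hill-type equation; the concentrations and activities below are hypothetical.

    # Generic IC50 estimation by fitting a Hill-type dose-response curve (illustrative data only).
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(conc, ic50, hill_slope):
        """Fractional remaining activity as a function of inhibitor concentration."""
        return 1.0 / (1.0 + (conc / ic50) ** hill_slope)

    # Hypothetical inhibitor concentrations (uM) and measured fractional activities.
    conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
    activity = np.array([0.97, 0.92, 0.80, 0.55, 0.30, 0.12, 0.05])

    (ic50, slope), _ = curve_fit(hill, conc, activity, p0=[5.0, 1.0])
    print(f"IC50 ~ {ic50:.1f} uM, Hill slope ~ {slope:.2f}")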

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this study was to explore the leadership capacities and practices of assistant principals. The research also sought to determine what relationships existed between capacity and practice and to see if there was a difference based on experience, context and personal characteristics. Since the majority of principals first serve as assistant principals, their work and experiences as assistant principals will have significant consequences (Kwan, 2009). The literature has long held, and continues to challenge, the notion that the role of assistant principal is adequate preparation for the principalship (Chan, Webb, & Bowen, 2003; Harris, Muijs, & Crawford, 2003; Kwan, 2009; Mertz, 2000; Webb & Vulliamy, 1995). Based on empirical findings, this study has affirmed the need to further research and refine the role of the assistant principal. The results indicate that, in addition to strengths, there are explicit gaps and missed opportunities in the leadership practices of assistant principals that affect the potential for building a leadership pipeline within schools. The work of the assistant principal is characterized by a proliferation of duties rather than a strategic set of practices that support distributed leadership and sustainability.

Relevance:

10.00%

Publisher:

Abstract:

This work addresses the application of data reconciliation to the balance of natural gas movements in an unprocessed-gas gathering network, and also develops a fast method to calculate the inventory of a pipeline. Volumetric reconciliation at standard measurement conditions and mass reconciliation were applied separately, the results were compared against the original balance, and the resulting energy balance in terms of higher heating value was verified. Two sets of weights were applied: one assigned according to prior knowledge of the quality of the measurement system at each point, and another based on the inverse of the variance of the daily volumes recorded in the period. Both gave good results, and the second was considered the more appropriate. Using a thermodynamic approach, the potential impact on the balance of the condensation of part of the gas phase along the flow path, and of the injection of unstabilized natural gas condensate by one of the sources, was evaluated. Both tend to affect the balance, the expected result being a smaller volume, mass and energy of the gas phase at the outlet. Other factors with considerable impact on data quality and on the final reconciliation result are the quality of the outlet measurement and how representative the gas composition is at that point. The inventory is calculated from a regression based on steady-state flow, which may show larger deviations when strong transients occur on the last day of the month; however, the inventory variation over the month has little impact on the balance. It was concluded that volumetric reconciliation is the most appropriate for this system, since the reconciled data bring the mass and energy (heating value) balances of the gas phase within the expected behaviour profile. Although a null volumetric balance of the gas phase alone is not, by itself, the expected behaviour when the effects described are considered, developing a more robust balance requires accounting for the liquid fractions present in the system, which adds further difficulty to data acquisition and quality.
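
A minimal sketch of the weighted least-squares idea behind the reconciliation described above (a generic linear formulation, not the thesis' actual network model; the incidence matrix, measurements and variances are hypothetical): the measured flows are adjusted as little as possible, with weights such as the inverse variance of the daily volumes, so that the adjusted values close the balance exactly.

    # Generic linear data reconciliation: minimize (x - m)^T W (x - m) subject to A x = 0.
    import numpy as np

    # Hypothetical node balance: inlet1 + inlet2 - outlet = 0 (e.g., standard-condition volumes).
    A = np.array([[1.0, 1.0, -1.0]])
    m = np.array([100.0, 52.0, 149.0])      # raw measurements (imbalance of 3 units)
    variances = np.array([1.0, 4.0, 0.25])  # weights are taken as 1/variance, so W = diag(1/var)
    W_inv = np.diag(variances)              # inverse of the weight matrix

    # Closed-form solution via Lagrange multipliers: x = m - W^-1 A^T (A W^-1 A^T)^-1 A m.
    correction = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, A @ m)
    x = m - correction

    print("reconciled flows:", np.round(x, 2), "residual imbalance:", (A @ x).item())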

Relevance:

10.00%

Publisher:

Abstract:

Gasoline Distribution Generally Available Control Technology (GD GACT) is a Federal environmental regulation that is specifically written and enforced to reduce HAP emissions from gasoline distribution (GD) facilities. The regulation targets four specific types of GD facilities: bulk gasoline terminals, bulk gasoline plants, pipeline breakout stations, and pipeline pumping stations. A GD GACT compliance plan was developed for a particular, representative example of each type of GD facility affected by the regulation. Each facility in the study is owned and operated by a single company. The compliance plans were developed to meet the regulatory requirements contained within GD GACT. The compliance plans will be implemented at each facility prior to the January 10, 2011 compliance date.

Relevance:

10.00%

Publisher:

Abstract:

Feature vectors can be anything from simple surface normals to more complex feature descriptors. Feature extraction is important for solving various computer vision problems, e.g. registration, object recognition and scene understanding. Most of these techniques cannot be computed online due to their complexity and the context in which they are applied; therefore, computing these features in real time for many points in the scene is impossible. In this work, a hardware-based implementation of 3D feature extraction and 3D object recognition is proposed to accelerate these methods and therefore the entire pipeline of RGBD-based computer vision systems where such features are typically used. The use of a GPU as a general-purpose processor can achieve considerable speed-ups compared with a CPU implementation. In this work, advantageous results are obtained using the GPU to accelerate the computation of a 3D descriptor based on the calculation of 3D semi-local surface patches of partial views. This allows the descriptor to be computed at several points of a scene in real time. The benefits of the accelerated descriptor have been demonstrated in object recognition tasks. The source code will be made publicly available as a contribution to the open-source Point Cloud Library.
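
As a minimal CPU-side illustration of the kind of per-point computation being accelerated (not the GPU/PCL implementation of this work), surface normals can be estimated from the eigenvectors of the covariance matrix of each point's local neighbourhood:

    # Naive surface-normal estimation via PCA of the k nearest neighbours (CPU sketch, illustrative only).
    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_normals(points, k=20):
        """points: (N, 3) array. Returns (N, 3) unit normals (orientation not disambiguated)."""
        tree = cKDTree(points)
        normals = np.empty_like(points)
        for i, p in enumerate(points):
            _, idx = tree.query(p, k=k)              # indices of the k nearest neighbours
            neigh = points[idx] - points[idx].mean(axis=0)
            cov = neigh.T @ neigh                    # 3x3 covariance of the neighbourhood
            eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
            normals[i] = eigvecs[:, 0]               # direction of least variance = surface normal
        return normals

    cloud = np.random.rand(1000, 3)                  # random point cloud as a stand-in for a depth frame
    print(estimate_normals(cloud)[:3])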

Relevance:

10.00%

Publisher:

Abstract:

Context. Young massive clusters are key to mapping the Milky Way's structure, and near-infrared large-area sky surveys have contributed strongly to the discovery of new obscured massive stellar clusters. Aims. We present the third article in a series of papers focused on young and massive clusters discovered in the VVV survey. This article is dedicated to the physical characterization of VVV CL086, using part of its OB stellar population. Methods. We physically characterized the cluster using JHKs near-infrared photometry from ESO public survey VVV images, processed with the VVV-SkZ pipeline, and near-infrared K-band spectroscopy, following the methodology presented in the first article of the series. Results. Individual distances for two observed stars indicate that the cluster is located at the far edge of the Galactic bar. These stars, which are probable cluster members from the statistically field-star decontaminated CMD, have spectral types between O9 and B0 V. According to our analysis, this young cluster (1.0 Myr < age < 5.0 Myr) is located at a distance of 11^{+5}_{-6} kpc, and we estimate a lower limit for the cluster total mass of (2.8^{+1.6}_{-1.4}) × 10^3 M⊙. It is likely that the cluster contains even earlier and more massive stars.

Relevance:

10.00%

Publisher:

Abstract:

The Gaia-ESO Survey is a large public spectroscopic survey that aims to derive radial velocities and fundamental parameters of about 10^5 Milky Way stars in the field and in clusters. Observations are carried out with the multi-object optical spectrograph FLAMES, using simultaneously the medium-resolution (R ~ 20 000) GIRAFFE spectrograph and the high-resolution (R ~ 47 000) UVES spectrograph. In this paper we describe the methods and the software used for the data reduction, the derivation of the radial velocities, and the quality control of the FLAMES-UVES spectra. Data reduction has been performed using a workflow specifically developed for this project. This workflow runs the ESO public pipeline, optimizing the data reduction for the Gaia-ESO Survey; it automatically performs sky subtraction, barycentric correction and normalisation, and calculates radial velocities and a first guess of the rotational velocities. The quality control is performed using the output parameters from the ESO pipeline, by a visual inspection of the spectra and by the analysis of the signal-to-noise ratio of the spectra. Using the observations of the first 18 months, specifically targets observed multiple times at different epochs, stars observed with both GIRAFFE and UVES, and observations of radial velocity standards, we estimated the precision and the accuracy of the radial velocities. The statistical error on the radial velocities is σ ~ 0.4 km s⁻¹ and is mainly due to uncertainties in the zero point of the wavelength calibration. However, we found a systematic bias with respect to the GIRAFFE spectra (~0.9 km s⁻¹) and to the radial velocities of the standard stars (~0.5 km s⁻¹) retrieved from the literature. This bias will be corrected in future data releases, once a common zero point for all the set-ups and instruments used in the survey is established.
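
As an illustration of one of the reduction steps mentioned above (the barycentric correction), the per-exposure velocity shift can be computed from the target coordinates, the observation time and the observatory location; the coordinates and time below are placeholders, and this is not the Gaia-ESO workflow code itself.

    # Illustrative barycentric radial-velocity correction (placeholder target and epoch).
    from astropy.coordinates import SkyCoord, EarthLocation
    from astropy.time import Time
    import astropy.units as u

    target = SkyCoord(ra=150.0 * u.deg, dec=-60.0 * u.deg)   # hypothetical target
    paranal = EarthLocation(lat=-24.627 * u.deg, lon=-70.405 * u.deg, height=2635 * u.m)  # approx. VLT site
    obstime = Time("2013-01-15T03:20:00")                    # hypothetical exposure mid-time

    v_bary = target.radial_velocity_correction(kind="barycentric",
                                                obstime=obstime,
                                                location=paranal)
    print(v_bary.to(u.km / u.s))  # correction to add to the measured radial velocity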

Relevance:

10.00%

Publisher:

Abstract:

In many regions, seawater desalination is a growing industry that has an impact on benthic communities. This study analyses the effect on benthic communities of a mitigation measure applied to a brine discharge, using polychaete assemblages as an indicator. An eight-year study was conducted at San Pedro del Pinatar (SE Spain), establishing a grid of 12 sites at a depth range of 29–38 m sampled during autumn. The brine discharge started in 2006 and produced a significant decrease in the abundance, richness and diversity of polychaete families at the location closest to the discharge, where salinity reached 49. In 2010, a diffuser was deployed at the end of the pipeline to increase mixing and reduce the impact on benthic communities. After implementation of this mitigation measure, the salinity measured close to the discharge was less than 38.5, and a significant recovery in polychaete richness and diversity was detected, to levels similar to those before the discharge. A less evident recovery in abundance was also observed, probably due to the different recovery rates of polychaete families. Some families, like Paraonidae and Magelonidae, were more tolerant to this impact; others, like Syllidae and Capitellidae, recovered quickly, although still affected by the discharge, while families such as Sabellidae and Cirratulidae appeared to recover more slowly.