300 results for noncovariant gauges
Abstract:
The need for continuous recording rain gauges makes it difficult to determine the rainfall erosivity factor (R-factor) of the (R)USLE model in areas without good temporal data coverage. In mainland Spain, the Nature Conservation Institute (ICONA) determined the R-factor at only a few selected pluviographs, so simple estimates of the R-factor are of great practical interest. The objectives of this study were: (1) to identify a readily available estimate of the R-factor for mainland Spain; (2) to discuss the applicability of a single (global) estimate based on analysis of regional results; (3) to evaluate the effect of record length on estimate precision and accuracy; and (4) to validate an available regression model developed by ICONA. Four estimators based on monthly precipitation were computed at 74 rainfall stations throughout mainland Spain. The regression analysis conducted at the global level clearly showed that the modified Fournier index (MFI) ranked first among all assessed indexes. The applicability of this preliminary global model across mainland Spain was evaluated by analyzing regression results obtained at the regional level. It was found that three contiguous regions of eastern Spain (Catalonia, Valencian Community and Murcia) could have a different rainfall erosivity pattern, so a new regression analysis was conducted by dividing mainland Spain into two areas: eastern Spain and the plateau-lowland area. A comparative analysis concluded that the bi-areal regression model based on MFI for a 10-year record length provided a simple, precise and accurate estimate of the R-factor in mainland Spain. Finally, validation of the regression model proposed by ICONA showed that the R-ICONA index overpredicted the R-factor by approximately 19%.
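As an illustration of the index underlying the global model, the sketch below computes the modified Fournier index from twelve monthly precipitation totals and plugs it into a generic power-law regression for the R-factor; the coefficients and the example data are placeholders, not the values fitted in this study.

    # Minimal sketch: modified Fournier index (MFI) from monthly precipitation,
    # fed into a generic power-law regression R = a * MFI**b. The coefficients
    # a and b are placeholders, NOT the values fitted for mainland Spain here.
    def modified_fournier_index(monthly_precip_mm):
        """MFI = sum(p_i^2) / P, with p_i monthly and P annual precipitation."""
        annual = sum(monthly_precip_mm)
        if annual == 0:
            return 0.0
        return sum(p * p for p in monthly_precip_mm) / annual

    def r_factor_estimate(mfi, a=1.0, b=1.5):
        """Hypothetical power-law estimate of the (R)USLE R-factor from the MFI."""
        return a * mfi ** b

    monthly = [35, 40, 52, 48, 55, 30, 12, 15, 60, 80, 70, 45]  # example year (mm)
    mfi = modified_fournier_index(monthly)
    print(f"MFI = {mfi:.1f}, R estimate = {r_factor_estimate(mfi):.1f}")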
Abstract:
Tide gauge data are identified as legacy data, given the radical transition in observation method and required output format associated with tide gauges over the 20th century. Observed water level variation through tide-gauge records is regarded as the only significant basis for determining recent historical variation (decade to century) in mean sea level and storm surge. Few tide gauge records cover the 20th century, so the Belfast (UK) Harbour tide gauge would provide a strategic long-term (110 year) record if the full paper-based records (marigrams) were digitally restructured to allow consistent data analysis. This paper presents the methodology for extracting a consistent time series of observed water levels from the 5 different Belfast Harbour tide gauge positions/machine types, starting in late 1901. Tide-gauge data were digitally retrieved from the original analogue (daily) records by scanning the marigrams and then extracting the sequential tidal elevations with graph-line seeking software (Ungraph™). This automation of signal extraction allowed the full Belfast series to be retrieved quickly, relative to any manual x–y digitisation of the signal. Restructuring the variable-length tidal data sets into a consistent daily, monthly and annual file format was undertaken by project-developed software: Merge&Convert and MergeHYD allow consistent water level sampling at both 60 min (past standard) and 10 min intervals, the latter enhancing surge measurement. The Belfast tide-gauge data have been rectified, validated and quality controlled (IOC 2006 standards). The result is a consistent annual-based legacy data series for Belfast Harbour that includes over 2 million tidal-level observations.
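The sketch below illustrates, in Python, the generic step of putting irregularly digitised water-level points onto fixed 60 min and 10 min grids by linear interpolation; it is an assumption-laden illustration and not the project's Merge&Convert or MergeHYD software.

    # Generic resampling of irregular digitised water levels onto a fixed grid.
    # Illustration only; not the project's Merge&Convert / MergeHYD tools.
    import numpy as np

    def resample_levels(times_min, levels_m, step_min=60):
        """Linearly interpolate (time [min], level [m]) samples onto a regular grid."""
        t = np.asarray(times_min, dtype=float)
        z = np.asarray(levels_m, dtype=float)
        grid = np.arange(t.min(), t.max() + step_min, step_min)
        return grid, np.interp(grid, t, z)

    # Example: unevenly spaced points extracted from a marigram trace
    t_obs = [0, 37, 95, 160, 240, 305, 360]
    z_obs = [1.20, 1.55, 2.10, 2.35, 1.90, 1.40, 1.05]
    grid60, z60 = resample_levels(t_obs, z_obs, 60)   # past standard sampling
    grid10, z10 = resample_levels(t_obs, z_obs, 10)   # finer sampling for surges
    print(len(grid60), len(grid10))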
Abstract:
Identifying 20th-century periodic coastal surge variation is strategic for 21st-century coastal surge estimates, as surge periodicities may amplify or reduce future MSL-enhanced surge forecasts. Extreme coastal surge data from the Belfast Harbour (UK) tide gauges are available for 1901–2010 and provide the potential for decadal-plus periodic coastal surge analysis. Annual extreme surge-elevation distributions (sampled every 10 min) are analysed using PCA and cluster analysis to decompose variation within and between years, to assess the similarity of years in terms of Surge Climate Types, and to establish the significance of any transitions in Type occurrence over time using non-parametric Markov analysis. Annual extreme surge variation is shown to be periodically organised across the 20th century. Extreme surge magnitude and distribution show a number of significant cyclone-induced multi-annual (2, 3, 5 & 6 year) cycles, as well as dominant multi-decadal (15–25 year) cycles of variation superimposed on an 80 year fluctuation in atmospheric–oceanic variation across the North Atlantic (relative to NAO/AMO interaction). The top 30 extreme surge events show some relationship with the NAO per se: 80% are associated with westerly dominant atmospheric flows (+NAO), but 20% are associated with blocking air masses (−NAO). Although 20% of the top 30 ranked positive surges occurred within the last twenty years, there is no unequivocal evidence of recent acceleration in extreme surge magnitude beyond the scale of natural periodic variation.
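A minimal sketch of the analysis pattern described above, assuming synthetic data in place of the Belfast record: PCA scores of binned annual surge distributions are grouped with k-means (one possible clustering choice) and an empirical year-to-year transition matrix between the resulting types is tabulated. The study's actual clustering method and non-parametric Markov test are not reproduced here.

    # Sketch: PCA + clustering of annual extreme-surge distributions and an
    # empirical transition matrix between the resulting "types". Synthetic data
    # stand in for the 1901-2010 Belfast record; k-means is an assumed choice.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    years, bins = 110, 20
    X = rng.gamma(shape=2.0, scale=1.0, size=(years, bins))  # binned surge heights per year

    scores = PCA(n_components=3).fit_transform(X)            # within/between-year variation
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

    # Year-to-year transitions between surge types (rows normalised where possible)
    k = labels.max() + 1
    T = np.zeros((k, k))
    for a, b in zip(labels[:-1], labels[1:]):
        T[a, b] += 1
    rows = T.sum(axis=1, keepdims=True)
    T = np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)
    print(np.round(T, 2))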
Abstract:
The dynamic interaction of vehicles and bridges results in live loads being induced into bridges that are greater than the vehicle's static weight. To limit this dynamic effect, the Iowa Department of Transportation (DOT) currently requires that permitted trucks slow to five miles per hour and straddle the roadway centerline when crossing bridges. However, this practice has other negative consequences, such as the potential for crashes, impracticality for bridges with high traffic volumes, and higher fuel consumption. The main objective of this work was to provide information and guidance on the allowable speeds for permitted vehicles and loads on bridges. A field test program was implemented on five bridges (i.e., two steel girder bridges, two pre-stressed concrete girder bridges, and one concrete slab bridge) to investigate the dynamic response of bridges due to vehicle loadings. The important factors taken into account during the field tests included vehicle speed, entrance conditions, vehicle characteristics (i.e., empty dump truck, full dump truck, and semi-truck), and bridge geometric characteristics (i.e., long span and short span). Three entrance conditions were used: as-is, plus Level 1 and Level 2, which simulated rough entrance conditions with a fabricated ramp placed 10 feet from the joint between the bridge end and approach slab and directly next to the joint, respectively. The researchers analyzed and utilized the field data to derive the dynamic impact factors (DIFs) for all gauges installed on each bridge under the different loading scenarios.
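One common way to express the dynamic impact factor is the ratio of peak dynamic to peak static (crawl-speed) response for a given gauge; the sketch below assumes that definition and uses made-up readings, since the report's exact DIF definition and data processing are not given here.

    # Hedged sketch: DIF as (peak dynamic response) / (peak static response)
    # for one gauge. The report's exact definition and filtering may differ;
    # the readings below are illustrative only.
    def dynamic_impact_factor(dynamic_trace, static_trace):
        return max(dynamic_trace) / max(static_trace)

    # Strain readings (microstrain) from one gauge: crawl-speed pass vs full-speed pass
    static_pass = [0, 42, 85, 120, 118, 60, 5]
    dynamic_pass = [0, 50, 96, 139, 131, 72, 8]
    print(f"DIF = {dynamic_impact_factor(dynamic_pass, static_pass):.2f}")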
Abstract:
In industrial plants, oil and oil compounds are usually transported by closed pipelines with circular cross-section. The use of radiotracers in oil transport and processing facilities allows calibrating flowmeters, measuring mean residence time in cracking columns, locating points of obstruction or leakage in underground ducts, and investigating flow behavior in industrial processes such as distillation towers. Inspection techniques using radiotracers are non-destructive, simple, economical and highly accurate. Among them, Total Count, which uses a small amount of radiotracer with known activity, is acknowledged as an absolute technique for flow rate measurement. To conduct the research, a viscous fluid transport system was designed and assembled, composed of four PVC pipelines of 13 m length (12 m horizontal and 1 m vertical) and ½, ¾, 1 and 2 inch diameters, respectively, interconnected by maneuvering valves. This system was used to simulate different flow conditions of petroleum compounds and for experimental studies of the flow profile in the horizontal and upward directions. As 198Au presents a single photopeak (411.8 keV), it was the radioisotope chosen for oil labeling; a small amount (6 ml), with an activity of around 200 kBq, was injected into the oil transport lines. A 2″ × 2″ NaI scintillation detector with well-defined geometry was used to measure the total activity and determine the calibration factor F and, positioned after a homogenization distance and connected to a standardized set of nuclear instrumentation modules (NIM), to detect the radioactive cloud.
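As a sketch of the Total Count principle, the flow rate follows from Q = A·F / S, where A is the injected activity, F the detector calibration factor and S the area under the count-rate curve recorded as the tracer cloud passes; the numbers below are illustrative, not measurements from this work.

    # Total Count flow-rate sketch: Q = A * F / S, with S the area under the
    # count-rate curve. Values are illustrative placeholders.
    import numpy as np

    def total_count_flow_rate(activity_bq, calib_factor, times_s, count_rate_cps):
        # Trapezoidal integration of the count-rate curve (total counts S)
        S = np.sum(0.5 * (count_rate_cps[1:] + count_rate_cps[:-1]) * np.diff(times_s))
        return activity_bq * calib_factor / S

    t = np.linspace(0.0, 60.0, 121)                          # s
    counts = 400.0 * np.exp(-0.5 * ((t - 30.0) / 6.0) ** 2)  # cps, passing cloud
    Q = total_count_flow_rate(2.0e5, 5.0e-4, t, counts)
    print(f"Estimated flow rate ~ {Q:.4f} (units fixed by the calibration factor F)")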
Abstract:
Carbon anodes are consumable elements that serve as electrodes in the electrochemical reaction of a Hall-Héroult cell. They are mass-produced on a production line in which forming is one of the critical steps, since it defines part of their quality. The current forming process is not fully optimized: large density gradients inside the anodes reduce their performance in the electrolysis cells. Even today, carbon anodes are produced with their overall density and final mechanical properties as the only quality criteria, and anode manufacturing is optimized empirically, directly on the production line. However, anode quality ultimately comes down to a uniform electrical conductivity, which minimizes the current concentrations that have several detrimental effects on anode performance and on aluminium production costs. This thesis is based on the hypothesis that the electrical conductivity of the anode is influenced only by its density, assuming a uniform chemical composition. The objective is to characterize the parameters of a model in order to feed a constitutive law that will make it possible to model the forming of anode blocks. Numerical modelling is used to analyse the behaviour of the paste during forming; it thus becomes possible to predict the density gradients inside the anodes and to optimize the forming parameters in order to improve anode quality. The selected model is based on the actual mechanical and tribological properties of the paste. The thesis begins with a behavioural study aimed at improving the understanding of the constitutive behaviours of the paste observed during preliminary pressing tests. This study is based on pressing tests of hot carbon paste produced in a rigid mould and on pressing tests of dry aggregates in the same mould, instrumented with a piezoelectric sensor to record acoustic emissions. This analysis preceded the characterization of the paste properties, in order to better interpret its mechanical behaviour given the complex nature of this carbonaceous material, whose mechanical properties evolve with density. A first experimental setup was developed specifically to characterize the Young's modulus and Poisson's ratio of the paste. The same setup was also used to characterize the viscosity (time-dependent behaviour) of the paste; no existing test is suited to characterizing these properties for this type of material heated to 150°C. A deformable-wall mould instrumented with strain gauges was used to carry out the tests. A second setup was developed to characterize the static and kinetic friction coefficients of the paste, also heated to 150°C. The model was used to characterize the mechanical properties of the paste by inverse identification and to simulate the forming of laboratory anodes. The mechanical properties of the paste obtained through experimental characterization were compared with those obtained by the inverse identification method. The density maps extracted from the simulations were also compared with the maps of anodes pressed in the laboratory; computed tomography was used to produce these density maps.
The simulation results confirm that there is major potential in using numerical modelling as a tool for optimizing the carbon paste forming process. Numerical modelling makes it possible to evaluate the influence of each forming parameter without interrupting production or implementing costly changes in the production line. This tool therefore allows avenues to be explored, such as modulating the frequency parameters, modifying the initial distribution of the paste in the mould, or moulding the anode upside down, in order to optimize the forming process and increase anode quality.
Abstract:
Fiber optic sensors have played an important role in monitoring the health of civil infrastructures such as bridges, oil rigs, and railroads. Owing to the falling cost of fiber-optic components and systems, fiber optic sensors have been studied extensively for their higher sensitivity, precision and immunity to electrical interference compared with their electrical counterparts. A fiber Bragg grating (FBG) strain sensor has been employed in this study to detect and distinguish normal and lateral loads on rail tracks. A theoretical analysis of the relationship between strain and displacement under vertical and horizontal loading of an aluminum beam has been performed, and the results are in excellent agreement with the measured strain data. A single FBG sensor system with an erbium-doped fiber amplifier broadband source was then implemented. Force and temperature applied to the system resulted in changes of 0.05 nm per 50 με and 0.094 nm per 10 °C at the center wavelength of the FBG. Furthermore, a low-cost fiber-optic sensor system with a distributed feedback (DFB) laser as the light source has been implemented, and we show that it has superior noise and sensitivity performance compared with strain gauge sensors. The design has been extended to accommodate multiple sensors with negligible cross talk. When two cascaded sensors on a rail track section are tested, strain readings of the sensor 20 inches away from the position of the applied force decay to one seventh of those of the sensor at the applied force location. The two FBG sensor systems can detect 1 ton of vertical load with a square wave pattern and 0.1 ton of lateral load (3 tons and 0.5 ton, respectively, for strain gauges). Moreover, a single FBG sensor has been found capable of detecting and distinguishing lateral and normal strains applied at different frequencies. FBG sensors are promising alternatives to electrical sensors for their high sensitivity, ease of installation, and immunity to electromagnetic interference.
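Using only the sensitivities quoted above (0.05 nm per 50 με and 0.094 nm per 10 °C), a wavelength shift can be converted back to strain or temperature as sketched below; the sketch assumes only one quantity varies at a time, whereas combined loading would need a reference grating or a second FBG to separate the two effects.

    # Convert an FBG centre-wavelength shift to strain or temperature using the
    # sensitivities reported above. Assumes a single varying quantity.
    K_EPS = 0.05 / 50   # nm per microstrain (0.05 nm per 50 ue)
    K_T = 0.094 / 10    # nm per degree Celsius (0.094 nm per 10 C)

    def strain_from_shift(d_lambda_nm):
        return d_lambda_nm / K_EPS        # microstrain

    def temperature_from_shift(d_lambda_nm):
        return d_lambda_nm / K_T          # degrees Celsius

    print(strain_from_shift(0.10))        # 100 microstrain for a 0.10 nm shift
    print(temperature_from_shift(0.047))  # ~5 degrees C for a 0.047 nm shift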
Abstract:
The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rainfall amounts. The optical sensors used in that study are designed for operating windscreen wipers and showed promising results for rainfall measurement purposes. Their measurement accuracy has been quantified in laboratory experiments. Taking those errors explicitly into account, the main objective of this study is to investigate the benefit of using RCs for estimating areal rainfall. To that end, computer experiments are carried out in which radar rainfall is considered as the reference and the other sources of data, i.e. RCs and rain gauges, are extracted from the radar data. Comparing the quality of areal rainfall estimation by RCs with that of rain gauges and the reference data helps to quantify the benefit of the RCs. The value of this additional source of data is assessed not only for areal rainfall estimation performance but also for use in hydrological modeling. The results show that RCs with measurement errors derived from the laboratory experiments provide useful additional information for areal rainfall estimation as well as for hydrological modeling. Even when uncertainties higher than those obtained in the laboratory are assumed for the RCs, their use remains practical up to a certain level.
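A generic way to turn scattered point observations (rain gauges or RCs) into an areal estimate is spatial interpolation over a grid; the inverse-distance-weighting sketch below is only an assumed stand-in for the interpolation and RC error model actually used in the study.

    # Assumed illustration: inverse-distance weighting of scattered point rainfall
    # onto a grid, then an areal mean. Not the interpolation used in the study.
    import numpy as np

    def idw_field(xy_obs, z_obs, xy_grid, power=2.0):
        d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
        w = 1.0 / np.maximum(d, 1e-9) ** power
        return (w * z_obs).sum(axis=1) / w.sum(axis=1)

    rng = np.random.default_rng(1)
    xy_obs = rng.uniform(0, 10, size=(25, 2))     # sensor locations (km)
    z_obs = rng.gamma(2.0, 2.0, size=25)          # point rainfall (mm)
    gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    field = idw_field(xy_obs, z_obs, grid)
    print(f"areal mean rainfall ~ {field.mean():.2f} mm")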
Abstract:
For derived flood frequency analysis based on hydrological modelling, long continuous precipitation time series with high temporal resolution are needed. Often, the observation network of recording rainfall gauges is poor, especially regarding the limited length of the available rainfall time series. Stochastic precipitation synthesis is a good alternative either to extend or to regionalise rainfall series and so provide adequate input for long-term rainfall-runoff modelling with subsequent estimation of design floods. Here, a new two-step procedure for stochastic synthesis of continuous hourly space-time rainfall is proposed and tested for the extension of short observed precipitation time series. First, a single-site alternating renewal model is presented to simulate independent hourly precipitation time series for several locations. The alternating renewal model describes wet spell durations, dry spell durations and wet spell intensities using univariate frequency distributions, separately for two seasons. The dependence between wet spell intensity and duration is accounted for by 2-copulas. For disaggregation of the wet spells into hourly intensities a predefined profile is used. In the second step, a multi-site resampling procedure is applied to the synthetic point rainfall event series to reproduce the spatial dependence structure of rainfall. Resampling is carried out successively on all synthetic event series using simulated annealing with an objective function considering three bivariate spatial rainfall characteristics. In a case study, synthetic precipitation is generated for locations with short observation records in two mesoscale catchments of the Bode river basin in northern Germany. The synthetic rainfall data are then applied for derived flood frequency analysis using the hydrological model HEC-HMS. The results show good performance in reproducing average and extreme rainfall characteristics as well as observed flood frequencies. The presented model has the potential to be used for ungauged locations through regionalisation of the model parameters.
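The single-site core of such a generator can be sketched as alternating dry and wet spells with a simple within-spell profile, as below; the distributions are placeholders, and the copula-based intensity-duration dependence and the multi-site simulated-annealing resampling are deliberately left out.

    # Minimal single-site alternating renewal sketch: alternate dry and wet spells,
    # draw a mean wet-spell intensity, and spread it over the spell with a simple
    # triangular profile. Distribution choices are placeholders; 2-copulas and
    # multi-site resampling are not included.
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_hours(n_hours):
        rain = []
        while len(rain) < n_hours:
            dry = int(rng.exponential(scale=30)) + 1          # dry spell duration (h)
            rain.extend([0.0] * dry)
            wet = int(rng.exponential(scale=5)) + 1           # wet spell duration (h)
            mean_intensity = rng.gamma(shape=1.5, scale=1.0)  # mm/h
            profile = np.bartlett(wet + 2)[1:-1]              # predefined triangular shape
            profile = profile / profile.sum() * mean_intensity * wet
            rain.extend(profile.tolist())
        return np.array(rain[:n_hours])

    series = simulate_hours(24 * 365)
    print(f"annual total ~ {series.sum():.0f} mm, wet hours = {(series > 0).sum()}")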
Abstract:
Precise force measurement is required in many applications, including the determination of the mechanical strength of materials, quality control during production, weighing, and personal safety. Given the widespread need for force measurement, various techniques and instruments have been developed over time for this purpose. Among them, force sensors, also known as load cells, stand out for their simplicity, precision and versatility. The most common example is based on resistive strain gauges which, combined with a mechanical structure, form a load cell. This type of sensor has low sensitivity and a non-zero offset at rest, which makes its signal conditioning complex. This work presents a solution for signal conditioning and data acquisition for load cells which, as far as could be ascertained, is innovative. The device performs signal conditioning, digitization and communication in a single unit. The idea follows the smart-sensor paradigm, in which a single electronic device associated with a load cell carries out a set of signal-processing and data-transmission operations. In particular, it allows the creation of an ad-hoc network using the IIC communication protocol. The system is intended to be integrated into a load platform developed at the Escola Superior de Tecnologia e Gestão de Bragança, where it will be deployed. Because the platform was designed to read forces along three axes, it contains four load cells with two outputs each, for a total of eight outputs. The existing signal-conditioning hardware is analogue and requires a board of considerable size for each output. From a functional point of view it has several problems, notably that gain and offset adjustment must be done manually, so a circuit with better performance in handling an array of sensors of this type is essential.
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Ciências Humanas, Departamento de Geografia, 2015.
Abstract:
Originally aimed at operational objectives, the continuous measurement of well bottomhole pressure and temperature recorded by permanent downhole gauges (PDG) finds vast applicability in reservoir management. It contributes to the monitoring of well performance and makes it possible to estimate reservoir parameters over the long term. However, notwithstanding its unquestionable value, data from PDGs are characterized by a large noise content, and the presence of outliers within valid signal measurements is a major problem as well. In this work, the initial treatment of PDG signals is addressed, based on curve smoothing, self-organizing maps and the discrete wavelet transform. Additionally, a system based on the coupling of fuzzy clustering with feed-forward neural networks is proposed for transient detection. The results obtained were considered quite satisfactory for offshore wells and met practical requirements for utilization.
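One of the listed ingredients, discrete-wavelet denoising of a noisy pressure record, can be sketched as below with soft-thresholding of the detail coefficients (requires the PyWavelets package); this is a generic illustration of that single stage, not the authors' full pipeline of smoothing, self-organizing maps and fuzzy-clustering/neural-network transient detection.

    # Generic wavelet-denoising sketch for a synthetic PDG-like pressure signal.
    # Illustrates only the discrete-wavelet-transform stage mentioned above.
    import numpy as np
    import pywt  # PyWavelets

    rng = np.random.default_rng(7)
    t = np.linspace(0, 100, 2048)                    # hours
    pressure = 250 - 0.1 * t - 15 * (t > 40)         # drawdown plus a step transient
    noisy = pressure + rng.normal(0, 1.5, t.size)    # noisy gauge record

    coeffs = pywt.wavedec(noisy, "db4", level=5)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale from finest details
    thr = sigma * np.sqrt(2 * np.log(noisy.size))    # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, "db4")[: noisy.size]
    print(f"residual std ~ {np.std(denoised - pressure):.2f}")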
Abstract:
The main objective of blasting is to produce optimum fragmentation for downstream processing. Fragmentation is usually considered optimum when the average fragment size is at a minimum and the fragment size distribution is as uniform as possible. One of the parameters believed to affect blast fragmentation is the time delay between holes of the same row. Although a significant number of studies in the literature examine the relationship between time delay and fragmentation, their results have often been controversial. The purpose of this work is to increase the level of understanding of how the time delay between holes of the same row affects fragmentation. Two series of experiments were conducted for this purpose. The first series involved tests on small-scale grout and granite blocks to determine the moment of burden detachment. The instrumentation used for these experiments consisted mainly of strain gauges and piezoelectric sensors, and some experiments were also recorded with a high-speed camera. It was concluded that the time of detachment for this specific setup is between 300 and 600 μs. The second series of experiments involved blasting of a 2 m high granite bench, and its purpose was to determine the hole-to-hole delay that provides optimum fragmentation. The fragmentation results were assessed with image analysis software. Moreover, vibration was measured close to the blast and the experiments were recorded with high-speed cameras. The results suggest that fragmentation was optimum when delays between 4 and 6 ms were used for this specific setup. It was also found that the moment at which gases first appear to vent from the face was consistently around 6 ms after detonation.
Abstract:
Axle bearing damage, with possible catastrophic failures, can cause severe disruption or even dangerous derailments, potentially causing loss of human life and leading to significant costs for railway infrastructure managers and rolling stock operators. Consequently, the axle bearing damage process has safety and economic implications for the operation of railway systems, and it has therefore been the object of intense attention by railway authorities, as demonstrated by the selection of this topic by the European Commission in calls for research proposals. The MAXBE project (http://www.maxbeproject.eu/), an EU-funded project, appears in this context, and its main goal is to develop and demonstrate innovative and efficient technologies for the onboard and wayside condition monitoring of axle bearings. The MAXBE (interoperable monitoring, diagnosis and maintenance strategies for axle bearings) project focuses on detecting axle bearing failure modes at an early stage by combining new and existing monitoring techniques and on characterizing the axle bearing degradation process. The consortium for the MAXBE project comprises 18 partners from 8 member states, representing operators, railway administrations, axle bearing manufacturers, key players in the railway community and experts in the field of monitoring, maintenance and rolling stock. The University of Porto coordinated this research project, which kicked off in November 2012 and was completed in October 2015. Both onboard and wayside systems are explored in the project, since there is a need to define the requirements for the onboard equipment and the range of working temperatures of the axle bearing for the wayside systems. The developed monitoring systems use strain gauges, high-frequency accelerometers, temperature sensors and acoustic emission. To obtain a robust technology that supports the decision making of the responsible stakeholders, synchronized measurements from onboard and wayside monitoring systems are integrated into a single platform. Extensive laboratory tests were also performed to correlate the in situ measurements with the status of the axle bearing life. With the MAXBE project concept it will be possible: to contribute to the early detection of axle bearing failures; to create conditions for the operational and technical integration of axle bearing monitoring and maintenance in different European railway networks; and to contribute to the standardization of the requirements for axle bearing monitoring, diagnosis and maintenance. Demonstrations of the developed condition monitoring systems were performed in Portugal on the Northern Railway Line, with freight and passenger traffic at a maximum speed of 220 km/h, in Belgium on a tram line, and in the UK. Within the project, a tool for optimal maintenance scheduling and a smart diagnostic tool were also developed. This paper presents a synthesis of the most relevant results attained in the project. The success of the project and the developed solutions have a positive impact on the reliability, availability, maintainability and safety of rolling stock and infrastructure, with the main focus on axle bearing health.
Abstract:
In Colombia, coffee production is facing risks due to an increase in the variability and amount of rainfall, which may alter hydrological cycles and negatively influence yield quality and quantity. Shade trees in coffee plantations, however, are known to produce ecological benefits, such as intercepting rainfall and lowering its velocity, resulting in reduced net rainfall and higher water infiltration. In this case study, we measured throughfall and soil hydrological properties in four land use systems in Cauca, Colombia, that differed in stand structural parameters: shaded coffee, unshaded coffee, secondary forest and pasture. We found that throughfall was influenced more by stand structural characteristics than by rainfall intensity. Lower throughfall was recorded in the shaded coffee compared to the other systems when rain gauges were placed at a distance of 1.0 m from the shade tree. The variability of throughfall was high in the shaded coffee, owing to different canopy characteristics and irregular arrangements of shade tree species. Shaded coffee and secondary forest resembled each other in soil structural parameters, with an increase in saturated hydraulic conductivity and microporosity and a decrease in bulk density and macroporosity compared to the unshaded coffee and pasture. In this context, tree-covered systems indicate a stronger resilience towards changing rainfall patterns, especially in mountainous areas where coffee is cultivated.