994 results for Piedmonts (Geology)
Abstract:
We present the first image of the Madeira upper crustal structure, using ambient seismic noise tomography. Sixteen months of ambient noise, recorded by a dense network of 26 seismometers deployed across Madeira, allowed the reconstruction of Rayleigh-wave Green's functions between receivers. Dispersion analysis was performed in the short period band from 1.0 to 4.0 s. Group velocity measurements were regionalized to obtain 20 tomographic images, with a lateral resolution of 2.0 km in central Madeira. Afterwards, the dispersion curves, extracted from each cell of the 2D group velocity maps, were inverted as a function of depth to obtain a 3D shear wave velocity model of the upper crust, from the surface to a depth of 2.0 km. The obtained 3D velocity model reveals features throughout the island that correlate well with surface geology and island evolution. (C) 2015 Elsevier B.V. All rights reserved.
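The key step in the abstract above, reconstructing Green's functions from noise cross-correlations, can be sketched numerically. This is a minimal illustration with invented parameters (sampling, delay, trace length), not the study's actual processing chain:

```python
import numpy as np

rng = np.random.default_rng(0)
n, delay = 5_000, 40   # samples of "ambient noise" and inter-station delay

# The same diffuse noise field reaches station B `delay` samples after A.
noise = rng.standard_normal(n)
sta_a = noise[:-delay]
sta_b = noise[delay:]

# Cross-correlating the two records peaks at the inter-station travel time,
# which is the quantity the dispersion analysis then works with.
xcorr = np.correlate(sta_a, sta_b, mode="full")
lag = np.argmax(xcorr) - (len(sta_b) - 1)
print(lag)  # 40, i.e. the imposed travel-time delay
```

In real ambient-noise tomography the correlations are stacked over months of data and the delay is read per frequency band, yielding the dispersion curves mentioned above.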
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
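The linear mixing model and orthogonal subspace projection described above can be sketched in a few lines. This is a toy example with synthetic signatures and abundances (none of the numbers come from the chapter):

```python
import numpy as np

rng = np.random.default_rng(1)
bands, n_end = 50, 3

# Synthetic endmember signature matrix M (one column per endmember).
M = rng.random((bands, n_end))

# Abundance fractions: nonnegative and summing to one.
a = np.array([0.6, 0.3, 0.1])
pixel = M @ a + 0.001 * rng.standard_normal(bands)  # linear mixing plus noise

# Orthogonal subspace projection: suppress the "undesired" endmembers
# (columns 1 and 2) and detect the signature of interest (column 0).
U = M[:, 1:]                                  # undesired signatures
P = np.eye(bands) - U @ np.linalg.pinv(U)     # projector orthogonal to span(U)
d = M[:, 0]                                   # desired signature
score = (d @ P @ pixel) / (d @ P @ d)         # abundance estimate for endmember 0
print(float(score))  # close to 0.6, the true abundance
```

Projecting with P annihilates the undesired signatures exactly, so what survives is the desired endmember's contribution plus projected noise.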
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
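The dependence induced by the constant-sum constraint, which is the obstacle to ICA argued above, is easy to verify numerically. A small illustrative check on simulated abundances (not real data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw abundance vectors uniformly on the simplex (Dirichlet with unit
# concentration): each row is one pixel's fractions, summing to one.
A = rng.dirichlet([1.0, 1.0, 1.0], size=100_000)

# Full additivity forces the fractions to co-vary: the covariance between any
# two distinct fractions is negative, so they cannot be mutually independent,
# which is exactly the assumption ICA needs.
cov = np.cov(A, rowvar=False)
print(cov[0, 1])  # negative (the theoretical value for this case is -1/36)
```

Independence would require zero off-diagonal covariance; the strictly negative value shows why the sum-to-one constraint conflicts with the ICA source model.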
Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, the source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
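The simplex geometry exploited by these algorithms can be illustrated with a PPI-style random-projection search. This is a toy sketch under the pure-pixel assumption (synthetic data; the published PPI includes further refinements):

```python
import numpy as np

rng = np.random.default_rng(3)
bands, n_end, n_pix = 30, 3, 500

M = rng.random((bands, n_end))                  # endmember signatures (columns)
A = rng.dirichlet([1.0, 1.0, 1.0], size=n_pix)  # abundances on the simplex
A[:n_end] = np.eye(n_end)                       # plant one pure pixel per endmember
X = A @ M.T                                     # observed pixels (rows)

# PPI-style search: project the data onto many random directions ("skewers");
# pixels that are extreme most often are candidate pure pixels, because the
# extremes of any linear projection sit at vertices of the data simplex.
counts = np.zeros(n_pix, dtype=int)
for _ in range(2000):
    proj = X @ rng.standard_normal(bands)
    counts[np.argmax(proj)] += 1
    counts[np.argmin(proj)] += 1

purest = {int(i) for i in np.argsort(counts)[-n_end:]}
print(sorted(purest))  # [0, 1, 2]: the planted pure pixels
```

Every mixed pixel is a convex combination of the vertices, so it can never be the extreme of a linear projection; only the pure pixels accumulate counts.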
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
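The dimensionality reduction step mentioned above rests on the fact that, under the linear model, the signal occupies a low-dimensional subspace. A PCA-via-SVD sketch on synthetic mixtures (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
bands, n_end, n_pix = 100, 4, 1000

M = rng.random((bands, n_end))                    # endmember signatures
A = rng.dirichlet(np.ones(n_end), size=n_pix)     # abundances (sum to one)
X = A @ M.T + 0.01 * rng.standard_normal((n_pix, bands))  # noisy mixtures

# PCA via SVD of the mean-centred data: with n_end endmembers whose fractions
# sum to one, the centred signal spans only n_end - 1 dimensions, so the
# singular values collapse to the noise floor after the first n_end - 1.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
print(s[:n_end] / s[0])  # the last ratio is far smaller than the first three
```

Keeping only the leading components both shrinks the data and discards directions dominated by noise, which is the SNR improvement the text refers to.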
This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
Abstract:
Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra, called endmember signatures, and a set of corresponding abundance fractions for the respective spatial coverage. This paper introduces vertex component analysis (VCA), an unsupervised algorithm to unmix linear mixtures of hyperspectral data. VCA exploits the fact that endmembers occupy the vertices of a simplex, and assumes the presence of pure pixels in the data. VCA performance is illustrated using simulated and real data. VCA competes with state-of-the-art methods at much lower computational complexity.
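The core VCA iteration, projecting the data onto a direction orthogonal to the endmembers found so far and taking the extreme pixel as a new simplex vertex, can be sketched as follows. This simplified toy version (no SNR-dependent subspace handling, synthetic data) illustrates the idea rather than reproducing the published algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)
bands, p, n_pix = 40, 3, 400

M = rng.random((bands, p))                 # true endmember signatures
A = rng.dirichlet(np.ones(p), size=n_pix)  # abundances on the simplex
A[:p] = np.eye(p)                          # make sure pure pixels exist
X = (A @ M.T).T                            # pixels as columns

# Simplified VCA-style loop: project the data onto a direction orthogonal to
# the endmembers found so far; the pixel with the largest |projection| is a
# new vertex of the simplex.
E, idx = [], []
for _ in range(p):
    if E:
        B = np.stack(E, axis=1)
        P = np.eye(bands) - B @ np.linalg.pinv(B)  # project out found vertices
    else:
        P = np.eye(bands)
    f = P @ rng.standard_normal(bands)             # random orthogonal direction
    idx.append(int(np.argmax(np.abs(f @ X))))
    E.append(X[:, idx[-1]])

print(sorted(idx))  # [0, 1, 2]: the pure pixels are recovered
```

Because each direction is orthogonal to the vertices already found, their projections vanish and the extreme pixel is always a new vertex, which is what keeps the loop cheap compared with minimum-volume fitting.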
Abstract:
This work focuses on the expected exploitability of the rock mass of the Curviã No. 2 quarry (Joane, Vila Nova de Famalicão, N Portugal), through the definition of a typical unit block that provides guidance for exploiting the geological resource for industrial and/or ornamental purposes. It investigates whether, within a given geotechnical zone of the rock mass, it is feasible to obtain blocks of suitable size, assessed after processing, namely for riprap in maritime works or ballast in railway works. Several outcrops were selected, and the scanline sampling technique was applied to the exposed surfaces of the rock mass. This technique is one of the most expeditious ways of collecting geological-geotechnical data on discontinuities. A statistical treatment of the discontinuities was also carried out, as well as of their associated geological-geotechnical and geomechanical parameters, as proposed by the International Society for Rock Mechanics (ISRM). All data were mapped on a platform supported by Geographic Information Systems (GIS), using tools from structural geology, morphotectonic analysis, digital terrain modelling and geotechnical zoning cartography. The geotechnical zoning of the granitic rock mass was always performed in close connection with knowledge of the rock mass characteristics "in situ". This methodology is intended to contribute to a better understanding of the compartmentalisation of rock masses in general and, in particular, of the behavioural geotechnical model of the Curviã No. 2 rock mass.
Abstract:
The existence of satellite images of the West Iberian Margin allowed a comparative study of images as a tool applied to structural geology. Interpretation of LANDSAT images of the Lusitanian Basin domain showed the existence of a not previously described WNW-ESE trending set of lineaments. These lineaments are persistent and only observable on small scale images (e.g. approx. 1:200 000 and 1:500 000) with various radiometric characteristics. They are approximately 20 km long, trend 120°±15° and cross cut any other families of lineaments. The fact that these lineaments are perpendicular to the Quaternary thrusts of the Lower Tagus Valley, and also that they show no off-set across them, suggests that they resulted from intersection of large tensile fractures on the earth's surface. It is proposed in this work that these lineaments formed on a crustal flexure tens of km long, associated with the Quaternary WNW-ESE oriented maximum compressive stress on the West Iberian Margin. The maximum compressive stress rotated anticlockwise from a NW-SE orientation to approximately WNW-ESE, from Late Miocene to Quaternary times (RIBEIRO et al., 1996). Field inspection of the lineaments revealed zones of normal faulting and cataclasis, which are coincident with the lineaments and affect sediments of upper Miocene up to Quaternary age. These deformation structures show localized extension perpendicular to the lineaments, i.e. perpendicular to the maximum compressive direction, after recent stress data along the West Portuguese Margin (CABRAL & RIBEIRO, 1989; RIBEIRO et al., 1996). Also, on a first approach, the geographical distribution of these lineaments correlates well with earthquake epicenters and areas of largest Quaternary vertical movements within the inverted Lusitanian Basin (CABRAL, 1995).
Abstract:
Geothermal energy use has been increasing significantly worldwide, with the United States of America being the largest producer of this energy from the Earth's interior, with about 3,187 MW of installed capacity. Portugal has a total installed capacity of 29 MW; however, "high enthalpy" use, that is, geothermal energy for electricity generation, exists only in the Azores archipelago, on the island of S. Miguel, where two geothermal power plants are installed and operating, with a total capacity of 23 MW and an energy production of 185 GWh. In mainland Portugal, electricity cannot be generated because of the available temperatures, which restrict use to low-enthalpy applications (a maximum of 76 ºC). This use is normally made in cascade, predominantly for heating sanitary water, for space conditioning and for spas, using thermomineral waters. To exploit this renewable resource, it is necessary to know the country's hydrogeology and to relate it to fracturing and tectonic accidents. Mainland Portugal is divided into four distinct hydrogeological units: the Maciço Antigo, the Orla Ocidental, the Bacia Tejo-Sado and the Orla Meridional. Any geothermal development in Portugal must take these characteristics into account, while also fostering new geothermal operations oriented towards people and respecting social, cultural and environmental values. In this context, some geothermal complexes are in operation, others abandoned, and many others under study for application in the near future. A successful example of geothermal heat use is the Chaves complex, which has evolved since 1985 to the present day and remains in operation and expanding to better serve the local population.
Two boreholes, soon to be joined by a third, supply a swimming pool, a hotel, the spa and balneotherapy. Given the available temperatures and flow rates and the existing energy needs, this complex has a payback period of about 7 years, which is generally considered acceptable for public-purpose investments, as is the case here. The investigations now carried out showed that these projects can bear some hydrogeological uncertainty, given the significant existing demand.
Abstract:
The interruption of vectorial transmission of Chagas disease in Venezuela is attributed to the combined effects of ongoing entomoepidemiological surveillance, ongoing house spraying with residual insecticides and the concurrent building and modification of rural houses in endemic areas during almost five decades. The original endemic areas, which totaled 750,000 km², have been reduced to 365,000 km². During 1958-1968, initial entomological evaluations showed that the house infestation index ranged between 60-80%, the house infection index between 8-11%, and the house density index at 30-50 triatomine bugs per house. By 1990-98, these indexes had been reduced to 1.6-4.0%, 0.01-0.6% and 3-4 bugs per house, respectively. The overall rural population seroprevalence has declined from 44.5% (95% C.I.: 43.4-45.3%) to 9.2% (95% C.I.: 9.0-9.4%) for successive grouped periods from 1958 to 1998. The annual blood donor prevalence is firmly established below 1%. The population at risk of infection has been estimated to be less than four million. Given that prevalence rates are stable and appropriate for public health programmes, consideration has been given to potential biases that may distort results, such as: a) geographical differences in illness or longevity of patients; b) variations in levels of ascertainment; c) variations in diagnostic criteria; and d) variations in population structure, mainly due to appreciable population migration. The endemic areas with continuous transmission are now mainly confined to piedmonts, as well as patchy foci in higher mountainous ranges, where the exclusive vector is Rhodnius prolixus. There is also an unstable area, whose landscapes are made up of grasslands with scattered broad-leaved evergreen trees and coastal plains, where transmission is very low and occasional outbreaks are reported.
Abstract:
In recent years the cost of electrical energy (EE) has risen steadily, with a strong impact after 2012 due to the change in the applicable VAT bracket. At the same time, the tariff deficit has kept growing as a result of a set of measures and strategic decisions that are currently being paid for by all energy consumers. The introduction of the microgeneration programme, followed by minigeneration, by the Direção Geral de Energia e Geologia (DGEG), allowed small and large EE consumers to produce EE locally from renewable sources. However, under the "limitations" of these programmes, new small producers were only allowed to inject all the electricity produced into the grid, with no benefit at the level of local energy consumption. Year after year the DGEG has revised downwards the tariffs paid for the energy produced by these systems, which significantly shook a sector that until then had been growing rapidly. Given this new reality, more viable alternatives must be sought. The proposed alternative is nothing more than an "efficient revision" of the current schemes, allowing small producers to offset their energy consumption and inject surplus energy into the grid. Self-consumption thus revolutionises the existing mechanisms, ensuring that EE consumers can reduce their electricity bills through local energy generation.
Abstract:
Industry stands out as one of the largest consumers of final energy in Portugal, representing about 30% of consumption. To address this situation, and within the scope of the National Energy Strategy, Decree-Law No. 71/2008 created the Sistema de Gestão dos Consumos Intensivos de Energia (SGCIE), a regulation that classifies as Energy-Intensive Consumers (CIE) those industries with an annual consumption above 500 toe, and that provides for Energy Consumption Rationalisation Plans (PREn), with rationalisation agreements established with the Direção Geral de Energia e Geologia (DGEG) [1]. By acting on energy efficiency, industrial energy consumption can fall significantly; to that end, energy audits must be carried out and the most appropriate solutions determined in order to reduce the waste and costs associated with energy consumption. This dissertation presents an energy audit of a commercial facility, which rests essentially on four stages: planning the intervention, field work, processing and analysing the information collected, and preparing the audit report. Applying this methodology is of great help in carrying out energy audits, lending greater quality to their execution. To validate the methodology used in the energy audits, a study was conducted of a commercial facility that in 2013 recorded an energy consumption below 500 toe but nevertheless joined the SGCIE voluntarily, becoming obliged to rationalise its energy consumption in accordance with the targets established in the SGCIE.
Abstract:
In a universe devoid of perfect geometric shapes, where irregular surfaces that are difficult to represent and measure proliferate, fractal geometry has proved a powerful instrument for treating natural phenomena hitherto considered erratic, unpredictable and random. However, not everything in nature is fractal, which means Euclidean geometry remains useful and necessary, making the two geometries complementary. This work focuses on the study of fractal geometry and its application to several scientific areas, namely engineering. Notions of self-similarity (exact, approximate), shapes, dimension, area, perimeter, volume, complex numbers, similarity of figures, and sequences and iterations related to fractal figures are addressed. Examples are presented of the application of fractal geometry in several fields of knowledge, such as physics, biology, geology, medicine, architecture, painting, electrical engineering and financial markets, among others. It is concluded that fractals are an important tool for understanding phenomena in the most diverse areas of science. The importance of studying this new geometry is overwhelming, thanks to its deep relationship with nature and the advanced technological development of computers.
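The notion of self-similarity dimension mentioned above has a compact formula: a set made of N copies of itself, each scaled down by a factor s, has dimension D = log N / log s. Two classic values (standard textbook results, not specific to this work):

```python
import math

# Similarity dimension D = log(N) / log(s) for a set made of N copies of
# itself, each scaled down by a factor s.
koch = math.log(4) / math.log(3)        # Koch curve: 4 copies at scale 1/3
sierpinski = math.log(3) / math.log(2)  # Sierpinski triangle: 3 copies at 1/2
print(round(koch, 3), round(sierpinski, 3))  # 1.262 1.585
```

The non-integer values are what distinguishes fractal sets from the lines (D = 1) and surfaces (D = 2) of Euclidean geometry.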
Abstract:
The present work aims to develop a hydrogeomechanical approach to the Caldas da Cavaca hydromineral system rock mass (Aguiar da Beira, NW Portugal), and to contribute to a better understanding of the hydrogeological conceptual site model. Several types of data, namely geology, hydrogeology, rock and soil geotechnics, borehole hydraulics and hydrogeomechanics, were retrieved from three rock slopes (Lagoa, Amores and Cancela). To accomplish a comprehensive analysis and rock engineering conceptualisation of the site, a multi-technical approach was used, including field and laboratory techniques, hydrogeotechnical mapping, hydrogeomechanical zoning, and hydrogeomechanical classification schemes and indexes. In addition, hydrogeomechanical data analysis and assessment techniques, such as the Hydro-Potential (HP) Value technique, the JW Joint Water Reduction index and the Hydraulic Classification (HC) System, were applied to the rock slopes. The hydrogeomechanical zone HGMZ 1 of the Lagoa slope yielded the highest hydraulic conductivities with the poorest rock mass quality, followed by the hydrogeomechanical zone HGMZ 2 of the Lagoa slope, with poor to fair rock mass quality and lower hydraulic parameters. The Amores slope had a fair to good rock mass quality and the lowest hydraulic conductivity. The hydrogeomechanical zone HGMZ 3 of the Lagoa slope and the hydrogeomechanical zones HGMZ 1 and HGMZ 2 of the Cancela slope had a fair to poor rock mass quality but were completely dry. Geographical Information Systems (GIS) mapping technologies were used to integrate the overall hydrogeological and hydrogeomechanical data in order to improve the hydrogeological conceptual site model.
Abstract:
Increasingly pressing environmental questions make energy efficiency indispensable, and it has been gaining importance worldwide. Energy efficiency is thus well established in the industrial world, whether as a legal obligation (due to consumption levels) or as a matter of market standing (the image of an environmentally friendly company). Portugal's energy situation shows a strong external dependence, above 70%. In 2012, according to information available from the Direção Geral de Energia e Geologia (DGEG), total primary energy consumption was 21,482 ktoe, more than 55% of which came from fossil sources. The sectors with the highest electricity consumption by activity are Industry and Services, where buildings account for close to 33% of consumption. To promote energy efficiency and implement the rational use of energy, programmes, strategies and legislation were created to encourage the reduction of energy consumption in service buildings. One of the methodologies implemented is energy management; to act, it is necessary to know a building's energy flows. Energy audits allow these flows to be surveyed and analysed, with the aim of identifying opportunities to rationalise energy consumption. This dissertation presents a study of the reduction of the energy consumption of a municipal swimming pool based on data from an energy audit. The audit made it possible to obtain the results of an energy survey, to characterise the real profile of electricity use and to characterise the energy-consuming equipment installed. It also allowed a thermal survey, establishing the building's temperatures and relative humidity.
Finally, five measures and some energy-efficiency recommendations are presented, which allow a reduction of about 20% in the facility's annual consumption. The audit data were also analysed in order to identify further opportunities for the rationalisation of energy use.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Estuaries and other transitional waters are complex ecosystems critically important as nursery and shelter areas for organisms. Humans also depend on estuaries for multiple socio-economic activities such as urbanism, tourism, heavy industry (taking advantage of shipping), fisheries and aquaculture, the development of which led to strong historical pressures, with emphasis on pollution. The degradation of estuarine environmental quality implies ecological, economic and social harm, hence the importance of evaluating environmental quality through the identification of stressors and impacts. The Sado Estuary (SW Portugal) has the characteristics of industrialized estuaries, which results in multiple adverse impacts. Still, it has recently been considered moderately contaminated. In fact, many studies have been conducted in the past few years, albeit scattered, owing to the absence of true biomonitoring programmes. As such, there is a need to integrate the information in order to obtain a holistic perspective of the area able to assist management and decision-making. Accordingly, a geographical information system (GIS) was created based on sediment contamination and biomarker data collected from a decade-long time-series of publications. Four impacted areas and a reference area were identified, characterized by distinct sediment contamination patterns related to different hot spots and diffuse sources of toxicants. The potential risk of sediment-bound toxicants was determined by contrasting the levels of pollutants with available sediment quality guidelines, followed by their integration through the Sediment Quality Guideline Quotient (SQG-Q). The SQG-Q estimates per toxicant or class were then subjected to georeferencing and statistical analyses across the five distinct areas and seasons. Biomarker responses were integrated through the Biomarker Consistency Index and likewise georeferenced through GIS.
Overall, in spite of the multiple biological traits surveyed, the biomarker data (from several organisms) accord with the sediment contamination. The most impacted areas were the shipyard area and the adjacent industrial belt, followed by urban and agricultural grounds. It is evident that the estuary, although globally moderately impacted, is very heterogeneous and affected by a cocktail of contaminants, especially metals and polycyclic aromatic hydrocarbons. Although some elements (like copper, zinc and even arsenic) may originate from the geology of the hydrographic basin of the Sado River, the majority of the remaining contaminants result from human activities. The present work revealed that the estuary should be divided into distinct biogeographic units in order to implement effective measures to safeguard environmental quality.
Abstract:
The Keystone XL pipeline has a major role in transporting Canadian oil to the USA. Its purpose is to decrease the dependency of the American oil industry on other countries, and it will help to limit external debt. The proposed pipeline seeks the most suitable route, one that does not damage agricultural land and natural water resources such as the Ogallala Aquifer. Using Geographic Information System (GIS) techniques, the path suggested in this study achieved highly accurate results, which will support the use of least cost analysis in similar future studies. The route analysis used several weighted overlay surfaces, each influenced by different criteria (slope, geology, population and land use). The resulting least cost path for each weighted overlay surface was compared with the originally proposed pipeline, and each was more effective than the proposed Keystone XL route.
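The weighted-overlay and least-cost-path procedure described above can be sketched on a toy raster. Everything here (criteria values, weights, grid size) is invented for illustration; real analyses use GIS cost-distance tools over large rasters:

```python
import heapq

# Each criterion is a small raster; a weighted overlay combines them into a
# single cost surface, then Dijkstra finds the cheapest route across it.
slope    = [[1, 3, 1], [1, 9, 1], [1, 3, 1]]
land_use = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]
weights  = (0.6, 0.4)   # relative importance of each criterion (illustrative)

rows, cols = 3, 3
cost = [[weights[0] * slope[r][c] + weights[1] * land_use[r][c]
         for c in range(cols)] for r in range(rows)]

def least_cost_path(start, goal):
    """Dijkstra over 4-connected cells; moving into a cell pays its cost."""
    dist, pq = {start: 0.0}, [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

print(least_cost_path((0, 0), (2, 2)))  # routes around the expensive centre
```

Changing the weights changes the cost surface and hence the optimal route, which is why the study compares least cost paths from several weighted overlays.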