Abstract:
Climatic impacts of energy-peat extraction are of increasing concern due to EU emissions trading requirements. A new excavation-drier peat extraction method has been developed to reduce the climatic impact and increase the efficiency of peat extraction. To quantify and compare the soil GHG fluxes of the excavation-drier and the traditional milling methods, as well as of the areas from which energy peat is planned to be extracted in the future (extraction reserve area types), soil CO2, CH4 and N2O fluxes were measured during 2006–2007 at three sites in Finland. Within each site, fluxes were measured from drained extraction reserve areas, from the extraction fields and stockpiles of both methods, and additionally from the biomass driers of the excavation-drier method. Life Cycle Assessment (LCA), described at a principal level in ISO Standards 14040:2006 and 14044:2006, was used to assess the long-term (100 years) climatic impact of peatland utilisation with respect to land use and energy production chains in which the use of coal was replaced with peat. Coal was used as a reference since in many cases peat and coal can replace each other in the same power plants. According to this study, the peat extraction method used was of less significance for the climatic impact than the extraction reserve area type. However, the excavation-drier method seems to cause a slightly smaller climatic impact than the prevailing milling method.
Abstract:
Factors affecting the determination of PAHs by capillary GC/MS were studied. The effect of the initial column temperature and the injection solvent on the peak areas and heights of sixteen PAHs, considered as priority pollutants, was examined using crosslinked methyl silicone (DB1) and 5% diphenyl, 94% dimethyl, 1% vinyl polysiloxane (DB5) columns. The possibility of using high-boiling-point alcohols, especially butanol, pentanol, cyclopentanol, and hexanol, as injection solvents was investigated. Studies were carried out to optimize the initial column temperature for each of the alcohols. It was found that the optimum initial column temperature depends on the solvent employed. The peak areas and heights of the PAHs are enhanced when the initial column temperature is 10-20 °C above the boiling point of the solvent using the DB5 column, and the same as or 10 °C above the boiling point of the solvent using the DB1 column. Comparing the peak signals of the PAHs using the alcohols, p-xylene, n-octane, and nonane as injection solvents, hexanol gave the greatest peak areas and heights of the PAHs, particularly for the late-eluting peaks. The detection limits were at low pg levels, ranging from 6.0 pg for fluorene to 83.6 pg for benzo(a)pyrene. The effect of the initial column temperature on the peak shape and the separation efficiency of the PAHs was also studied using the DB1 and DB5 columns. Fronting or splitting of the peaks was observed at very low initial column temperatures. When a high initial column temperature was used, tailing of the peaks appeared. A large difference between the DB1 and DB5 columns was observed in the range of initial column temperatures over which symmetrical PAH peaks can be obtained, with the DB5 column showing wider ranges. Resolution of closely eluting PAHs was also affected by the initial column temperature, depending on the stationary phase employed. In the case of DB5, only the early-eluting PAHs were affected, whereas with DB1 all PAHs were affected. An analytical procedure utilizing solid-phase extraction with bonded-phase silica (C8) cartridges combined with GC/MS was developed to analyze PAHs in water as an alternative to methods based on extraction with organic solvents. This simple procedure involved passing 50 ml of spiked water sample through C8 bonded-phase silica cartridges at 10 ml/min, drying the cartridges with a gentle flow of nitrogen at 20 ml/min for 30 s, and eluting the trapped PAHs with 500 µl of p-xylene at 0.3 ml/min. The recoveries of PAHs were greater than 80%, with relative standard deviations of less than 10% over nine determinations. No major contaminants were present that could interfere with the recognition of the PAHs. It was also found that these bonded-phase silica cartridges can be re-used for the extraction of PAHs from water.
Abstract:
Supervised learning of large-scale hierarchical networks is currently enjoying tremendous success. Despite this momentum, many researchers still regard unsupervised learning as a key element of Artificial Intelligence, where agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses several research topics related to the density estimation problem through Boltzmann machines (BMs), probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition function estimation, optimization, and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm, which automatically adjusts the temperature of the simulated Markov chains in order to maintain a high convergence speed throughout learning. When used in the context of stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate as well as faster convergence. Our results are presented for BMs, but the method is general and applicable to the training of any probabilistic model that relies on Markov chain sampling. While the maximum likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast with traditional approaches, which treat a given model as a black box, we propose instead to exploit the learning dynamics by estimating the successive changes in the log-partition function incurred at each parameter update. The estimation problem is reformulated as an inference problem similar to Kalman filtering, but over a two-dimensional graph whose dimensions correspond to the time axis and the temperature parameter. On the optimization side, we also present an algorithm for applying the natural gradient efficiently to Boltzmann machines with thousands of units. Until now, its adoption was limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Its implementation unfortunately remains inefficient in wall-clock time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of spike-and-slab restricted Boltzmann machines (ssRBM), which we modify so that they can model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into increased invariance of the representation and a better classification rate when few labelled data are available.

We conclude this thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of pooling in complementary vector subspaces.
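As a rough illustration of the metric-free natural gradient idea mentioned above, the sketch below solves F x = g with conjugate gradient using only Fisher-vector products, so the Fisher information matrix is never formed or inverted explicitly. The per-sample gradient matrix, damping term, and sizes are assumptions for illustration (plain NumPy); this is a minimal sketch, not the MFNG implementation from the thesis.

import numpy as np

def fisher_vector_product(per_sample_grads, v, damping=1e-3):
    # Empirical Fisher-vector product F v = (1/N) G^T (G v) + damping * v,
    # where each row of G is the gradient for one sample; F is never formed.
    Gv = per_sample_grads @ v
    return per_sample_grads.T @ Gv / len(per_sample_grads) + damping * v

def conjugate_gradient(matvec, b, iters=50, tol=1e-10):
    # Solve A x = b for symmetric positive-definite A, given only x -> A x.
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Hypothetical per-sample gradients for a mini-batch (128 samples, 500 parameters).
rng = np.random.default_rng(0)
G = rng.normal(size=(128, 500))
grad = G.mean(axis=0)                      # ordinary stochastic gradient
nat_grad = conjugate_gradient(lambda v: fisher_vector_product(G, v), grad)
print(nat_grad.shape)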
Abstract:
Solid phase extraction (SPE) is a powerful technique for the preconcentration/removal or separation of trace and ultra-trace amounts of toxic and nutrient elements. SPE effectively simplifies labour-intensive sample preparation, increases its reliability, and eliminates the clean-up step by using more selective extraction procedures. The synthesis of sorbents by a simplified procedure, with a reduced risk of errors, is of interest in the areas of environmental monitoring, geochemical exploration, the food, agricultural, pharmaceutical and biochemical industries, the design of high-purity metals, etc. There is no universal SPE method, because the sample pretreatment depends strongly on the analytical demand; but there is always an increasing demand for more sensitive, selective, rapid and reliable analytical procedures. Among the various materials, chelate-modified naphthalene, activated carbon and chelate-functionalized highly cross-linked polymers are the most important. In the biological and environmental fields, large numbers of samples have to be analysed within a short span of time. Hence, online flow injection methods are preferred, as they allow extraction, separation, identification and quantification of a large number of analytes. The flow injection online preconcentration flame AAS procedure developed allows the determination of as little as 0.1 µg/l of nickel in soil and cobalt in human hair samples. The developed procedure is precise and rapid and allows the analysis of 30 samples per hour with a loading time of 60 s. The online FI manifold used in the present study permits high sampling and loading rates, resulting in high preconcentration/enrichment factors of approximately 725 and 600 for cobalt and nickel respectively, with a 1 min preconcentration time, compared to the conventional FAAS signal. These enrichment factors are far superior to those of previously developed online preconcentration procedures for inorganics. The instrumentation adopted in the present study requires much simpler equipment and has lower maintenance costs than the costlier ICP-AES or ICP-MS instruments.
Abstract:
Speech processing and consequent recognition are important areas of Digital Signal Processing, since speech allows people to communicate more naturally and efficiently. In this work, a speech recognition system is developed for recognizing digits in Malayalam. For recognizing speech, features have to be extracted from the signal, and hence the feature extraction method plays an important role in speech recognition. Here, front-end processing for extracting the features is performed using two wavelet-based methods, namely Discrete Wavelet Transforms (DWT) and Wavelet Packet Decomposition (WPD). A Naive Bayes classifier is used for classification. After classification using the Naive Bayes classifier, DWT produced a recognition accuracy of 83.5% and WPD produced an accuracy of 80.7%. This paper is intended to devise a new feature extraction method that improves the recognition accuracy. So, a new method called Discrete Wavelet Packet Decomposition (DWPD) is introduced, which utilizes the hybrid features of both DWT and WPD. The performance of this new approach is evaluated, and it produced an improved recognition accuracy of 86.2% with the Naive Bayes classifier.
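As a rough sketch of this kind of hybrid wavelet front end, the snippet below computes sub-band energies from both a DWT and a WPD decomposition, concatenates them, and feeds them to a Naive Bayes classifier. The wavelet family ('db4'), decomposition levels, energy features, and the synthetic signals are assumptions for illustration (using PyWavelets and scikit-learn), not the exact DWPD features of the paper.

import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def dwt_features(signal, wavelet="db4", level=4):
    # Energy of each DWT sub-band (approximation plus detail coefficients).
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def wpd_features(signal, wavelet="db4", level=3):
    # Energy of each terminal node of the wavelet packet tree.
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.sum(np.asarray(n.data) ** 2) for n in nodes])

def hybrid_features(signal):
    # Concatenate both energy vectors: the assumed "hybrid" DWT + WPD feature set.
    return np.concatenate([dwt_features(signal), wpd_features(signal)])

# Hypothetical synthetic "utterances": four examples for each of the ten digits.
rng = np.random.default_rng(1)
X = np.array([hybrid_features(rng.normal(size=4096)) for _ in range(40)])
y = np.repeat(np.arange(10), 4)
clf = GaussianNB().fit(X, y)
print(clf.predict(X[:5]))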
Extraction of tidal channel networks from aerial photographs alone and combined with laser altimetry
Abstract:
Tidal channel networks play an important role in the intertidal zone, exerting substantial control over the hydrodynamics and sediment transport of the region and hence over the evolution of the salt marshes and tidal flats. The study of the morphodynamics of tidal channels is currently an active area of research, and a number of theories have been proposed which require, for their validation, measurement of channels over extensive areas. Remotely sensed data provide a suitable means for such channel mapping. The paper describes a technique that may be adapted to extract tidal channels from either aerial photographs or LiDAR data separately, or from both types of data used together in a fusion approach. Application of the technique to channel extraction from LiDAR data has been described previously. However, aerial photographs of intertidal zones are much more commonly available than LiDAR data, and most LiDAR flights now involve acquisition of multispectral images to complement the LiDAR data. In view of this, the paper investigates the use of multispectral data for semi-automatic identification of tidal channels, firstly from aerial photographs or linescanner data alone, and secondly from fused linescanner and LiDAR data sets. A multi-level, knowledge-based approach is employed. The algorithm based on aerial photography can achieve a useful channel extraction, though it may fail to detect some of the smaller channels, partly because the spectral response of parts of the non-channel areas may be similar to that of the channels. The algorithm for channel extraction from fused LiDAR and spectral data gives increased accuracy, though only slightly higher than that obtained using LiDAR data alone. The results illustrate the difficulty of developing a fully automated method, and justify the semi-automatic approach adopted.
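A minimal sketch of the data-fusion idea, assuming a gridded LiDAR DEM co-registered with a single optical intensity band: candidate channels are taken as pixels that are both locally depressed in the DEM (morphological closing residual) and spectrally dark, then filtered by connected-component size. The thresholds, window size, and synthetic inputs are illustrative assumptions and do not reproduce the knowledge-based multi-level algorithm of the paper.

import numpy as np
from scipy import ndimage

def channel_mask(dem, intensity, depth_thresh=0.15, dark_thresh=0.35, window=15, min_size=25):
    # Candidate channels from LiDAR: pixels locally depressed in the DEM,
    # measured as the residual of a grey-scale morphological closing.
    closed = ndimage.grey_closing(dem, size=(window, window))
    depression = (closed - dem) > depth_thresh
    # Candidate channels from the optical band: pixels darker than a threshold.
    dark = intensity < dark_thresh
    fused = depression & dark
    # Keep only connected components large enough to be channel segments.
    labels, n = ndimage.label(fused)
    sizes = ndimage.sum(fused, labels, index=np.arange(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))

# Hypothetical inputs: a flat DEM with a carved channel and a matching dark strip.
dem = np.zeros((200, 200))
dem[:, 95:105] -= 0.3
intensity = np.full((200, 200), 0.6)
intensity[:, 95:105] = 0.2
print(channel_mask(dem, intensity).sum())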
Abstract:
This paper examines the interaction of spatial and dynamic aspects of resource extraction from forests by local people. Highly cyclical and varied across space and time, the patterns of resource extraction resulting from the spatial–temporal model bear little resemblance to the patterns drawn from focusing either on spatial or temporal aspects of extraction alone. Ignoring this variability inaccurately depicts villagers’ dependence on different parts of the forest and could result in inappropriate policies. Similarly, the spatial links in extraction decisions imply that policies imposed in one area can have unintended consequences in other areas. Combining the spatial–temporal model with a measure of success in community forest management—the ability to avoid open-access resource degradation—characterizes the impact of incomplete property rights on patterns of resource extraction and stocks.
The impact of buffer zone size and management on illegal extraction, park protection and enforcement
Abstract:
Many protected areas or parks in developing countries have buffer zones at their boundaries to achieve the dual goals of protecting park resources and providing resource benefits to neighbouring people. Despite the prevalence of these zoning policies, few behavioural models of people’s buffer zone use inform the sizing and management of those zones. This paper uses a spatially explicit resource extraction model to examine the impact of buffer zone size and management on extraction by local people, both legal and illegal, and the impact of that extraction on forest quality in the park’s core and buffer zone. The results demonstrate trade-offs between the level of enforcement, the size of a buffer zone, and the amount of illegal extraction in the park; and describe implications for “enrichment” of buffer zones and evaluating patterns of forest degradation.
Abstract:
Very high-resolution Synthetic Aperture Radar (SAR) sensors represent an alternative to aerial photography for delineating floods in built-up environments, where flood risk is highest. However, even with currently available SAR image resolutions of 3 m and higher, signal returns from man-made structures hamper the accurate mapping of flooded areas. Enhanced image processing algorithms and a better exploitation of image archives are required to facilitate the use of microwave remote sensing data for monitoring flood dynamics in urban areas. In this study a hybrid methodology combining radiometric thresholding, region growing and change detection is introduced as an approach enabling automated, objective and reliable flood extent extraction from very high-resolution urban SAR images. The method is based on the calibration of a statistical distribution of "open water" backscatter values inferred from SAR images of floods. SAR images acquired during dry conditions enable the identification of areas (i) that are not "visible" to the sensor (i.e. regions affected by layover and shadow) and (ii) that systematically behave as specular reflectors (e.g. smooth tarmac, permanent water bodies). Change detection with respect to a pre- or post-flood reference image thereby reduces over-detection of inundated areas. A case study of the July 2007 Severn River flood (UK), observed by the very high-resolution SAR sensor on board TerraSAR-X as well as by airborne photography, highlights the advantages and limitations of the proposed method. We conclude that even though the fully automated SAR-based flood mapping technique overcomes some limitations of previous methods, further technological and methodological improvements are necessary for SAR-based flood detection in urban areas to match the flood mapping capability of high quality aerial photography.
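The following is a minimal sketch of the threshold / region-growing / change-detection combination described above, assuming calibrated backscatter images in dB and using SciPy's connected-component labelling as a stand-in for region growing. The threshold values and synthetic images are illustrative assumptions, not the calibrated "open water" distribution used in the study.

import numpy as np
from scipy import ndimage

def flood_extent(flood_db, reference_db, seed_thresh=-18.0, grow_thresh=-15.0):
    # Seed "open water" where flood-image backscatter (dB) is very low, then
    # grow into moderately low pixels: keep candidate components with a seed.
    seeds = flood_db < seed_thresh
    candidates = flood_db < grow_thresh
    labels, _ = ndimage.label(candidates)
    seeded = np.unique(labels[seeds])
    grown = np.isin(labels, seeded[seeded > 0])
    # Change detection: drop pixels that were already dark in the dry
    # reference image (permanent water, smooth tarmac, shadow).
    already_dark = reference_db < grow_thresh
    return grown & ~already_dark

# Hypothetical backscatter images (dB): a permanent lake plus a newly flooded strip.
rng = np.random.default_rng(2)
ref = rng.normal(-8.0, 2.0, (300, 300))
ref[250:, :] = -20.0
flood = ref.copy()
flood[100:180, :] = -19.0
print(flood_extent(flood, ref).mean())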
Abstract:
Keyphrases are added to documents to help identify the areas of interest they contain. However, in a significant proportion of papers the author-selected keyphrases are not appropriate for the document they accompany: for instance, they can be classificatory rather than explanatory, or they are not updated when the focus of the paper changes. As such, automated methods for improving the use of keyphrases are needed, and various methods have been published. However, each method was evaluated using a different corpus, typically one relevant to the field of study of the method's authors. This not only makes it difficult to incorporate the useful elements of algorithms in future work, but also makes comparing the results of each method inefficient and ineffective. This paper describes the work undertaken to compare five methods across a common baseline of corpora. The methods chosen were Term Frequency, Inverse Document Frequency, the C-Value, the NC-Value, and a Synonym-based approach. These methods were analysed to evaluate performance and quality of results, and to provide a future benchmark. It is shown that Term Frequency and Inverse Document Frequency were the best algorithms, with the Synonym approach following them. Following these findings, a study was undertaken into the value of using human evaluators to judge the outputs. The Synonym method was compared to the original author keyphrases of the Reuters' News Corpus. The findings show that authors of Reuters' news articles provide good keyphrases, but that more often than not they do not provide any keyphrases at all.
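As a rough illustration of the frequency-based scoring that the best-performing methods rely on, the sketch below extracts short candidate phrases and ranks them by term frequency weighted by inverse document frequency. The candidate generation, tokenisation, and toy corpus are assumptions for illustration; the paper evaluates TF and IDF (and the other methods) separately rather than as this combined score.

import math
import re
from collections import Counter

def candidate_phrases(text, max_len=3):
    # Naive candidate generation: contiguous runs of one to three word tokens.
    words = re.findall(r"[a-z]+", text.lower())
    return [" ".join(words[i:i + n])
            for n in range(1, max_len + 1)
            for i in range(len(words) - n + 1)]

def keyphrases(doc, corpus, top_k=5):
    # Score each candidate by term frequency weighted by inverse document frequency.
    tf = Counter(candidate_phrases(doc))
    n_docs = len(corpus)
    def idf(phrase):
        df = sum(phrase in other.lower() for other in corpus)
        return math.log((1 + n_docs) / (1 + df)) + 1.0
    scores = {p: count * idf(p) for p, count in tf.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Hypothetical toy corpus of three short documents.
docs = ["Tidal channel networks control sediment transport in the intertidal zone.",
        "Flood extent extraction from SAR images of urban areas.",
        "Keyphrase extraction methods are compared on a common corpus of documents."]
print(keyphrases(docs[2], docs))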
Abstract:
Traffic control signs and destination boards on roadways offer significant information for drivers. Regulation signs tell you things such as speed limits and turns; warning signs warn drivers of conditions ahead to help them avoid accidents; destination signs show distances and directions to various locations; service signs display the locations of hospitals, gas stations, rest areas, etc. Because the signs are so important, and there is always a certain distance between them and the driver, drivers should be able to get this information clearly and easily even in bad weather or other difficult situations. The idea is to develop software that collects useful information from a camera mounted at the front of a moving car, extracts the important information, and finally shows it to the driver. For example, when a frame contains a destination sign board with text such as "Linkoping 50", the software should extract every character of "Linkoping 50" and compare each one with the known character data in the database; if an extracted character matches "k" in the database, the destination name is output and shown to the driver. In this project, C++ will be used to write the code for this software.
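A minimal sketch of the character-matching step, assuming characters have already been segmented and binarised: each extracted glyph is compared against stored templates by normalised cross-correlation and labelled with the closest match. The templates, image sizes, and the use of Python rather than the project's C++ are assumptions for illustration.

import numpy as np

def best_match(glyph, templates):
    # Compare a segmented, binarised character image against stored templates
    # using normalised cross-correlation and return the closest label.
    g = (glyph - glyph.mean()) / (glyph.std() + 1e-9)
    scores = {}
    for label, tmpl in templates.items():
        t = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-9)
        scores[label] = float((g * t).mean())
    return max(scores, key=scores.get)

# Hypothetical 8x8 templates for two characters, and a noisy query glyph.
rng = np.random.default_rng(3)
templates = {"k": rng.integers(0, 2, (8, 8)).astype(float),
             "5": rng.integers(0, 2, (8, 8)).astype(float)}
query = templates["k"] + rng.normal(0.0, 0.2, (8, 8))
print(best_match(query, templates))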
Abstract:
This article proposes a method for 3D road extraction from a stereopair of aerial images. The dynamic programming (DP) algorithm is used to carry out the optimization process in object space, instead of in image space as in traditional DP methodologies. This means that road centerlines are traced directly in object space, implying that a mathematical relationship is necessary to connect road points in object and image space. This allows the integration of radiometric information from the images into the associated mathematical road model. As the approach depends on an initial approximation of each road, a few seed points are necessary to coarsely describe the road. Usually, the proposed method allows good results to be obtained, but large anomalies along the road can disturb its performance. Therefore, the method can be used in practical applications, although some local manual editing of the extracted road centerline is to be expected.
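As a simplified illustration of dynamic-programming optimisation for tracing linear features, the sketch below runs a Viterbi-style DP over a cost image, choosing one row per column so as to minimise summed pixel cost plus a smoothness penalty. It works in image space on a synthetic cost image, so it is only an analogy to the object-space formulation of the paper; all names and parameters are assumptions.

import numpy as np

def trace_road_dp(cost, smooth_penalty=2.0):
    # Viterbi-style DP: choose one row per column so that the path minimises
    # the summed pixel cost plus a penalty on row jumps between columns.
    n_rows, n_cols = cost.shape
    acc = cost[:, 0].astype(float).copy()
    back = np.zeros((n_rows, n_cols), dtype=int)
    rows = np.arange(n_rows)
    jump = smooth_penalty * np.abs(rows[:, None] - rows[None, :])  # (prev, cur)
    for c in range(1, n_cols):
        total = acc[:, None] + jump
        back[:, c] = np.argmin(total, axis=0)
        acc = total[back[:, c], rows] + cost[:, c]
    # Backtrack the optimal row sequence from the last column.
    path = np.empty(n_cols, dtype=int)
    path[-1] = int(np.argmin(acc))
    for c in range(n_cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path

# Hypothetical cost image: a dark (low-cost) road meandering across bright terrain.
img = np.ones((60, 120))
road_rows = (30 + 10 * np.sin(np.linspace(0, 3, 120))).astype(int)
img[road_rows, np.arange(120)] = 0.0
print(trace_road_dp(img)[:10])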
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
This article proposes a semi-automatic method for road extraction that combines a stereopair of low-resolution aerial images with a polyhedron generated from a digital terrain model (DTM). The problem is formulated in object space through an objective function that models the 'road' object as a smooth curve lying on a polyhedral surface. The proposed objective function also depends on radiometric information, which is accessed in image space via the collinearity relationship between road points in object space and their counterparts in the image spaces of the stereopair. The polyline that best models the selected road is obtained by optimizing the objective function in object space using the dynamic programming algorithm. The optimization process is iterative and depends on an operator supplying an initial approximation of the selected road. The results obtained showed that the method is robust against anomalies along the roads, such as obstructions caused by shadows and trees.
Abstract:
Four lignin samples were extracted from sugar cane bagasse using four different alcohols (methanol, ethanol, n-propanol, and 1-butanol) via the organosolv-CO2 supercritical pulping process. Langmuir films were characterized by surface pressure vs mean molecular area (π-A) isotherms to obtain information at the molecular level, carrying out stability tests, compression/expansion cycles (hysteresis), subphase temperature variations, and measurements with metallic ions dissolved in the water subphase at different concentrations. Briefly, it was observed that these lignins are relatively stable on the water surface when compared to those obtained via different extraction processes. Besides, the π-A isotherms are shifted to smaller molecular areas at higher subphase temperatures and to larger molecular areas when the metallic ions are dissolved in the subphase. The results are related to the formation of stable aggregates (domains) on the water subphase by these lignins, as shown in the π-A isotherms. It was also found that the most stable lignin monolayer on the water subphase is that extracted with 1-butanol. Homogeneous Langmuir-Blodgett (LB) films of this lignin could be produced, as confirmed by UV-vis absorption spectroscopy and the cumulative transfer parameter. In addition, FTIR analysis showed that this lignin LB film is structured in such a way that the phenyl groups are organized preferentially parallel to the substrate surface. Further, these LB films were deposited onto gold interdigitated electrodes and ITO and applied in studies involving the detection of Cd2+ ions in aqueous solutions at low concentration levels through impedance spectroscopy and electrochemical measurements. FTIR spectroscopy was carried out before and after soaking the thin films in Cd2+ aqueous solutions, revealing a possible physical interaction between the lignin phenyl groups and the heavy metal ions. The importance of using nanostructured systems is also demonstrated by comparing LB and cast films.