997 results for Building Extraction


Relevance: 60.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 60.00%

Publisher:

Abstract:

In this paper, a methodology is proposed for the geometric refinement of laser scanning building roof contours using high-resolution aerial images and Markov Random Field (MRF) models. The proposed methodology assumes that the 3D description of each building roof reconstructed from the laser scanning data (i.e., a polyhedron) is topologically correct and that it is only necessary to improve its accuracy. Since roof ridges are accurately extracted from laser scanning data, our main objective is to use high-resolution aerial images to improve the accuracy of roof outlines. To meet this goal, the available roof contours are first projected onto the image-space. After that, the projected polygons and the straight lines extracted from the image are used to establish an MRF description, which is based on relations (relative length, proximity, and orientation) between the two sets of straight lines. The energy function associated with the MRF is minimized using a modified version of the brute-force algorithm, resulting in the grouping of straight lines for each roof object. Finally, each grouping of straight lines is topologically reconstructed based on the topology of the corresponding laser scanning polygon projected onto the image-space. The preliminary results showed that the proposed methodology is promising, since most sides of the refined polygons are geometrically better than the corresponding projected laser scanning straight lines.
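A minimal sketch of the line-grouping idea described above, assuming simple weighted terms for the three relations named in the abstract (relative length, proximity, orientation); the weights, the candidate scan, and the function names are illustrative and not the authors' implementation, and the candidate pruning of the modified brute-force search is omitted:

```python
# Illustrative sketch: score candidate image line segments against a projected
# roof polygon side using relative length, proximity, and orientation, then
# pick the best candidate per side by exhaustive (brute-force) search.
# All weights are assumed for illustration.
import math

def _length(seg):
    (x1, y1), (x2, y2) = seg
    return math.hypot(x2 - x1, y2 - y1)

def _angle(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi  # undirected orientation

def _midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def energy(side, candidate, w_len=1.0, w_prox=1.0, w_ori=1.0):
    """Lower energy = better agreement between a projected side and an image line."""
    rel_len = abs(_length(side) - _length(candidate)) / max(_length(side), 1e-9)
    (mx1, my1), (mx2, my2) = _midpoint(side), _midpoint(candidate)
    proximity = math.hypot(mx2 - mx1, my2 - my1)
    orientation = abs(_angle(side) - _angle(candidate))
    orientation = min(orientation, math.pi - orientation)
    return w_len * rel_len + w_prox * proximity + w_ori * orientation

def group_lines(projected_sides, image_lines):
    """For each projected side, keep the image line with minimum energy."""
    return [(side, min(image_lines, key=lambda seg: energy(side, seg)))
            for side in projected_sides]
```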

Relevance: 60.00%

Publisher:

Abstract:

Semi-automatic building detection and extraction is a topic of growing interest due to its potential application in areas such as cadastral information systems, cartographic revision, and GIS. One of the existing strategies for building extraction is to use a digital surface model (DSM), represented by a cloud of known points on the visible surface and comprising features such as trees and buildings. Conventional surface modeling using stereo-matching techniques has its drawbacks, the most obvious being the effects of building height on perspective, shadows, and occlusions. The laser scanner, a recently developed technological tool, can collect accurate DSMs with high spatial frequency. This paper presents a methodology for the semi-automatic modeling of buildings that combines a region-growing algorithm with line-detection methods applied over the DSM.
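As an illustration of the region-growing step over a DSM raster (the line-detection part is omitted), the following sketch grows a region from a seed cell using an assumed height-similarity threshold; it is not the methodology's actual algorithm:

```python
# Minimal seeded region growing over a DSM raster (illustrative only).
from collections import deque
import numpy as np

def region_grow(dsm, seed, height_tol=0.5):
    """Grow a region from `seed` (row, col), adding 4-connected neighbours
    whose height differs from the current cell by less than `height_tol`."""
    rows, cols = dsm.shape
    visited = np.zeros_like(dsm, dtype=bool)
    visited[seed] = True
    region, queue = [], deque([seed])
    while queue:
        r, c = queue.popleft()
        region.append((r, c))
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not visited[nr, nc]:
                if abs(dsm[nr, nc] - dsm[r, c]) < height_tol:
                    visited[nr, nc] = True
                    queue.append((nr, nc))
    return region

# Toy example: segment an elevated blob (e.g. a building footprint) in a 5x5 DSM
dsm = np.array([[0, 0, 0, 0, 0],
                [0, 5, 5, 0, 0],
                [0, 5, 5, 0, 0],
                [0, 0, 0, 0, 0],
                [0, 0, 0, 0, 0]], dtype=float)
building_cells = region_grow(dsm, seed=(1, 1))
```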

Relevance: 60.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 60.00%

Publisher:

Abstract:

In this paper, a method is proposed to geometrically refine 3D roof contours derived from laser scanning data using a high-resolution aerial image and Markov Random Field (MRF) models. To this end, an MRF description for grouping straight lines is developed, assuming that each projected side contour and ridge is topologically correct and that only its accuracy needs to be improved. Although combining laser data with image data is most justified for refining roof contours, the ridge structure can add robustness to the topological description of the roof. The MRF model is formulated based on relations (length, proximity, and orientation) between the straight lines extracted from the image and the projected polygon, as well as on rectangularity and corner constraints. The energy function associated with the MRF is minimized using a genetic algorithm, resulting in the grouping of straight lines for each roof object. Finally, each grouping of straight lines is topologically reconstructed based on the topology of the corresponding laser scanning polygon projected onto the image-space. The results obtained were satisfactory: the method provided refined building roof polygons in which most contour sides and ridges were geometrically improved.
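A hedged sketch of how a genetic algorithm can minimize such an energy over line groupings: a chromosome assigns one candidate image line to each projected roof side, and the energy function (built elsewhere from the length, proximity, orientation, rectangularity, and corner terms) is treated as a black box. Population size, rates, and operators below are assumed for illustration:

```python
# Illustrative genetic-algorithm minimizer for an MRF-style grouping energy.
# A chromosome is a list of candidate-line indices, one per projected roof side.
import random

def genetic_minimize(energy, n_sides, n_candidates,
                     pop_size=50, generations=200,
                     crossover_rate=0.8, mutation_rate=0.05, seed=0):
    """energy(chromosome) -> float; returns the best chromosome found."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_candidates) for _ in range(n_sides)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if energy(a) < energy(b) else b

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:        # one-point crossover
                cut = rng.randrange(1, n_sides)
                child = p1[:cut] + p2[cut:]
            else:
                child = list(p1)
            for i in range(n_sides):                 # per-gene mutation
                if rng.random() < mutation_rate:
                    child[i] = rng.randrange(n_candidates)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=energy)
```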

Relevance: 40.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 40.00%

Publisher:

Abstract:

This paper proposes a method for the automatic extraction of building roof contours from a LiDAR-derived digital surface model (DSM). The method is based on two steps. First, to detect aboveground objects (buildings, trees, etc.), the DSM is segmented through a recursive splitting technique followed by a region merging process. Vectorization and polygonization are used to obtain polyline representations of the detected aboveground objects. Second, building roof contours are identified from among the aboveground objects by optimizing a Markov-random-field-based energy function that embodies roof contour attributes and spatial constraints. Preliminary results have shown that the proposed methodology works properly.
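For illustration, a minimal quadtree-style recursive splitting of the DSM on height variance with a trivial mean-height merge test; the thresholds are assumed, and the paper's actual splitting/merging criteria and the MRF-based roof identification are not reproduced:

```python
# Sketch of recursive splitting of a DSM (variance-based quadtree) plus a
# simple merge test between adjacent leaf windows. Illustrative only.
import numpy as np

def split(dsm, r0, r1, c0, c1, var_tol=0.25, min_size=4, leaves=None):
    """Recursively split [r0:r1, c0:c1] until height variance falls below
    var_tol or the window reaches min_size; return the leaf windows."""
    if leaves is None:
        leaves = []
    block = dsm[r0:r1, c0:c1]
    if block.var() <= var_tol or min(r1 - r0, c1 - c0) <= min_size:
        leaves.append((r0, r1, c0, c1))
        return leaves
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    for (a, b, c, d) in ((r0, rm, c0, cm), (r0, rm, cm, c1),
                         (rm, r1, c0, cm), (rm, r1, cm, c1)):
        split(dsm, a, b, c, d, var_tol, min_size, leaves)
    return leaves

def can_merge(dsm, w1, w2, mean_tol=0.5):
    """Merge two adjacent leaves if their mean heights are similar."""
    b1 = dsm[w1[0]:w1[1], w1[2]:w1[3]]
    b2 = dsm[w2[0]:w2[1], w2[2]:w2[3]]
    return abs(b1.mean() - b2.mean()) < mean_tol
```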

Relevance: 30.00%

Publisher:

Abstract:

With the growth of the Internet and the Semantic Web, together with increasing communication speeds and the rapid growth of storage capacity, the volume of data and information rises considerably every day. Because of this, in recent years there has been growing interest in formal representation structures with suitable characteristics, such as the ability to organize data and information and to reuse their contents for the generation of new knowledge. Controlled vocabularies, and ontologies in particular, stand out as representation structures with high potential: they allow not only data representation but also the reuse of such data for knowledge extraction, together with its subsequent storage through relatively simple formalisms. However, to ensure that the knowledge in an ontology remains up to date, ontologies need maintenance. Ontology Learning is the area that studies the update and maintenance of ontologies. The literature already presents first results on the automatic maintenance of ontologies, but these are still at a very early stage; human-based processes are still the current way to update and maintain an ontology, which makes this a cumbersome task. New knowledge for ontology growth can be generated with Data Mining techniques, an area that studies data processing, pattern discovery, and knowledge extraction in information systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources using Data Mining techniques, namely pattern discovery, focused on improving the precision of the concepts and semantic relations present in an ontology. To verify the applicability of the proposed method, a proof of concept was developed and its results are presented, applied to the building and construction sector.
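As a rough illustration of the pattern-discovery idea (the abstract does not detail the actual pipeline), the sketch below counts sentence-level co-occurrences of known ontology concepts in unstructured text and proposes frequent pairs as candidate semantic relations for human review; the function name and the example sentences are hypothetical:

```python
# Hedged sketch: propose candidate concept relations from co-occurrence patterns.
from collections import Counter
from itertools import combinations

def candidate_relations(sentences, concepts, min_support=2):
    """sentences: list of lowercase strings; concepts: iterable of concept labels."""
    concepts = [c.lower() for c in concepts]
    pair_counts = Counter()
    for sentence in sentences:
        present = sorted({c for c in concepts if c in sentence})
        for pair in combinations(present, 2):
            pair_counts[pair] += 1
    return [(a, b, n) for (a, b), n in pair_counts.most_common()
            if n >= min_support]

# Hypothetical building-sector text
sentences = [
    "the concrete slab rests on the foundation",
    "the foundation transfers loads from the slab to the soil",
    "a retaining wall supports the soil",
]
print(candidate_relations(sentences, ["slab", "foundation", "soil", "wall"]))
```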

Relevance: 30.00%

Publisher:

Abstract:

A composting Heat Extraction Unit (HEU) was designed to utilise waste heat from decaying organic matter for a variety of heating applications. The aim was to construct an insulated, sealed, small-scale container filled with organic matter. In this vessel, a process fluid within embedded pipes would absorb thermal energy from the hot compost and transport it to an external heat exchanger. Experiments were conducted on the constituent parts, and the final design comprised a 2046 litre container insulated with polyurethane foam and Kingspan, with two arrays of Qualpex piping embedded in the compost to extract heat. The thermal energy was used in horticultural trials by heating polytunnels with a radiator system during a winter/spring period. The compost-derived energy was compared with conventional and renewable energy in the form of an electric fan heater and a solar panel. The compost-derived energy was able to raise polytunnel temperatures 2-3°C above the control, with the solar panel contributing no thermal energy during the winter trial and the electric heater being the most effective, maintaining its preset temperature of 10°C. Plants cultivated as performance indicators showed no significant difference in growth rates between the heat sources. A follow-on experiment using growing mats to distribute compost thermal energy directly under the plants (radish, cabbage, spinach, and lettuce) showed more successful growth patterns than the control. The compost HEU was also used for more traditional space-heating and hot-water applications. A test space was successfully heated over two trials with varying insulation levels: maximum internal temperature increases of 7°C and 13°C were recorded for building U-values of 1.6 and 0.53 W/m²K respectively. The HEU also heated a 60 litre hot water cylinder for 32 days, with a maximum water temperature increase of 36.5°C recorded. The total energy recovered from the 435 kg of compost within the HEU during the polytunnel growth trial was 76 kWh, or 3 kWh/day for the 25 days when the HEU was activated. With a mean coefficient of performance of 6.8 calculated for the HEU, the technology is energy efficient. The compost HEU developed here could therefore be a useful renewable energy technology, particularly for small-scale rural dwellers and growers with access to significant quantities of organic matter.
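A quick arithmetic check of the figures quoted above (the thesis's exact energy accounting is not given in the abstract, so the implied electrical input is only a rough inference from the usual COP definition):

```python
# Back-of-the-envelope check of the reported HEU figures.
total_energy_kwh = 76.0        # recovered during the polytunnel trial
active_days = 25
compost_mass_kg = 435.0
cop = 6.8                      # mean coefficient of performance reported

energy_per_day = total_energy_kwh / active_days        # ~3.0 kWh/day, as quoted
energy_per_kg = total_energy_kwh / compost_mass_kg     # ~0.17 kWh per kg compost
# COP = useful heat delivered / electrical input, so the implied input
# (circulation pump etc.) over the trial would be roughly:
implied_electrical_input = total_energy_kwh / cop      # ~11 kWh

print(round(energy_per_day, 2), round(energy_per_kg, 3),
      round(implied_electrical_input, 1))
```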

Relevance: 30.00%

Publisher:

Abstract:

Biomedical natural language processing (BioNLP) is a subfield of natural language processing, an area of computational linguistics concerned with developing programs that work with natural language: written texts and speech. Biomedical relation extraction concerns the detection of semantic relations such as protein-protein interactions (PPI) from scientific texts. The aim is to enhance information retrieval by detecting relations between concepts, not just individual concepts as with a keyword search. In recent years, events have been proposed as a more detailed alternative for simple pairwise PPI relations. Events provide a systematic, structural representation for annotating the content of natural language texts. Events are characterized by annotated trigger words, directed and typed arguments and the ability to nest other events. For example, the sentence “Protein A causes protein B to bind protein C” can be annotated with the nested event structure CAUSE(A, BIND(B, C)). Converted to such formal representations, the information of natural language texts can be used by computational applications. Biomedical event annotations were introduced by the BioInfer and GENIA corpora, and event extraction was popularized by the BioNLP'09 Shared Task on Event Extraction. In this thesis we present a method for automated event extraction, implemented as the Turku Event Extraction System (TEES). A unified graph format is defined for representing event annotations and the problem of extracting complex event structures is decomposed into a number of independent classification tasks. These classification tasks are solved using SVM and RLS classifiers, utilizing rich feature representations built from full dependency parsing. Building on earlier work on pairwise relation extraction and using a generalized graph representation, the resulting TEES system is capable of detecting binary relations as well as complex event structures. We show that this event extraction system has good performance, reaching the first place in the BioNLP'09 Shared Task on Event Extraction. Subsequently, TEES has achieved several first ranks in the BioNLP'11 and BioNLP'13 Shared Tasks, as well as shown competitive performance in the binary relation Drug-Drug Interaction Extraction 2011 and 2013 shared tasks. The Turku Event Extraction System is published as a freely available open-source project, documenting the research in detail as well as making the method available for practical applications. In particular, in this thesis we describe the application of the event extraction method to PubMed-scale text mining, showing how the developed approach not only shows good performance, but is generalizable and applicable to large-scale real-world text mining projects. Finally, we discuss related literature, summarize the contributions of the work and present some thoughts on future directions for biomedical event extraction. This thesis includes and builds on six original research publications. The first of these introduces the analysis of dependency parses that leads to development of TEES. The entries in the three BioNLP Shared Tasks, as well as in the DDIExtraction 2011 task are covered in four publications, and the sixth one demonstrates the application of the system to PubMed-scale text mining.
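To make the graph representation concrete, the sketch below encodes the abstract's example CAUSE(A, BIND(B, C)) as a typed graph of entity and trigger nodes with typed argument edges; the class and role names are illustrative and not the actual TEES interchange format:

```python
# Illustrative typed event graph for CAUSE(A, BIND(B, C)). Extraction then
# decomposes into classification tasks: which tokens are triggers, and which
# node pairs are connected by which argument type.
from dataclasses import dataclass, field

@dataclass
class Node:
    ident: str
    kind: str            # "entity" or "trigger"
    label: str           # e.g. "Protein", "Cause", "Binding"

@dataclass
class Edge:
    source: str
    target: str
    role: str            # e.g. "Cause", "Theme"

@dataclass
class EventGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

# "Protein A causes protein B to bind protein C" -> CAUSE(A, BIND(B, C))
g = EventGraph(
    nodes=[Node("A", "entity", "Protein"),
           Node("B", "entity", "Protein"),
           Node("C", "entity", "Protein"),
           Node("E1", "trigger", "Cause"),
           Node("E2", "trigger", "Binding")],
    edges=[Edge("E1", "A", "Cause"),
           Edge("E1", "E2", "Theme"),   # nesting: the caused event
           Edge("E2", "B", "Theme"),
           Edge("E2", "C", "Theme")],
)
```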

Relevance: 30.00%

Publisher:

Abstract:

Effective processes to fractionate the main compounds in biomass, such as wood, are a prerequisite for an effective biorefinery. Water is environmentally friendly and widely used in industry, which makes it a potential solvent also for forest biomass. At temperatures above 100 °C, water can readily hydrolyse and dissolve hemicelluloses from biomass. In this work, birch sawdust was extracted using pressurized hot water extraction (PHWE) in flow-through systems. The hypothesis of the work was that it is possible to obtain polymeric, water-soluble hemicelluloses from birch sawdust using flow-through PHW extractions at both laboratory and large scale. Extraction temperatures in the range 140–200 °C were evaluated to determine the effect of temperature on the xylan yield. The extracts were analysed for sugar ratios, acetyl group content, furfural content, and xylan yield. Higher extraction temperatures increased the xylan yield but decreased the molar mass of the dissolved xylan. As the extraction temperature increased, more acetic acid was released from the hemicelluloses, further decreasing the pH of the extract. Only trace amounts of furfurals were present after the extractions, indicating that the treatment was mild enough not to degrade the sugars further. The sawdust extraction density was increased by packing more sawdust into the laboratory-scale extraction vessel, with the aim of obtaining extracts of higher concentration than at typical extraction densities. The extraction times and water flow rates were kept constant during these extractions. The higher sawdust packing degree decreased water use, and the extracts had higher hemicellulose concentrations than extractions with lower packing degrees. The molar masses of the hemicelluloses at higher packing degrees were similar to those at the packing degrees used in typical PHWE flow-through extractions. The structure of the extracted sawdust was investigated using small-angle (SAXS) and wide-angle (WAXS) X-ray scattering, and the cell wall topography of birch sawdust and extracted sawdust was compared using X-ray tomography. The results showed that the cell wall structure of the extracted birch sawdust was preserved, but the cell walls were thinner after the extractions; larger pores were opened inside the fibres, and the cellulose microfibrils were more tightly packed after the extraction. Acetate buffers were used to control the pH of the extracts during the extractions. The pH control prevented excessive xylan hydrolysis and increased the molar masses of the extracted xylans. The yields of buffered extractions were lower than those of plain-water extractions at 160–170 °C, but at 180 °C the yields were similar for plain water and pH buffers. The pH can thus be controlled during extraction with an acetate buffer to obtain xylan with a higher molar mass than is obtainable with plain water. Birch sawdust was extracted at both laboratory and pilot scale. The performance of the PHWE flow-through system was evaluated at laboratory and pilot scale using vessels of the same shape but different volumes, with the same relative water flow through the sawdust bed and at the same extraction temperature. Pre-steaming improved the extraction efficiency and the water flow through the sawdust bed. The extracted birch sawdust and the extracted xylan were similar at both laboratory and pilot scale. The PHWE system was successfully scaled up by a factor of 6000 from laboratory to pilot scale, and extractions performed equally well at both scales. The results show that a flow-through system can be further scaled up and used to extract water-soluble xylans from birch sawdust. Extracted xylans can be concentrated, purified, and then used in e.g. films and barriers, or as building blocks for novel material applications.

Relevance: 30.00%

Publisher:

Abstract:

The present work presents a new method for activity extraction and reporting from video based on the aggregation of fuzzy relations. Trajectory clustering is first employed, mainly to discover the points of entry and exit of mobile objects appearing in the scene. In a second step, proximity relations between the resulting clusters of detected mobile objects and contextual elements of the scene are modeled using fuzzy relations, which can then be aggregated using typical soft-computing algebra. A clustering algorithm based on the transitive closure of the fuzzy relations builds the structure of the scene and characterises its different ongoing activities. Discovered activity zones can be reported as activity maps with different granularities thanks to the analysis of the transitive closure matrix. Taking advantage of the soft-relation properties, activity zones and related activities can be labeled in a more human-like language. We present results obtained on real videos corresponding to apron monitoring at Toulouse airport in France.
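A small sketch of the max-min transitive closure and α-cut clustering of a fuzzy proximity relation, as used above to build the scene structure; the membership values and the cut level are made up for illustration:

```python
# Max-min transitive closure of a fuzzy relation, followed by an alpha-cut
# that reads off activity-zone clusters. Illustrative values only.
import numpy as np

def transitive_closure(rel, tol=1e-9):
    """Iterate R <- max(R, R o R) with max-min composition until stable."""
    closure = rel.copy()
    while True:
        composed = np.maximum(closure,
                              np.max(np.minimum(closure[:, :, None],
                                                closure[None, :, :]), axis=1))
        if np.allclose(composed, closure, atol=tol):
            return composed
        closure = composed

def clusters(closure, alpha=0.5):
    """Cut the closure at level alpha and collect the resulting groups."""
    n = closure.shape[0]
    groups, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        group = {j for j in range(n) if closure[i, j] >= alpha} | {i}
        seen |= group
        groups.append(sorted(group))
    return groups

# Fuzzy proximity between three trajectory clusters / contextual elements
R = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
print(clusters(transitive_closure(R), alpha=0.5))   # e.g. [[0, 1], [2]]
```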

Relevance: 30.00%

Publisher:

Abstract:

Phenol and cresols are a good example of primary chemical building blocks, of which 2.8 million tons are currently produced in Europe each year. These primary phenolic building blocks are produced by refining processes from fossil hydrocarbons: 5% of worldwide production comes from coal (which contains 0.2% phenols) through distillation of the tar residue after coke production, while 95% of current world production of phenol comes from the distillation and cracking of crude oil. In nature, phenolic compounds are present in terrestrial higher plants and ferns in several different chemical structures, while they are essentially absent in lower organisms and in animals. Biomass (which contains 3-8% phenols) represents a substantial, presently underexploited source of secondary chemical building blocks. These phenolic derivatives are currently used in tens of thousands of tons to produce high-value products such as food additives and flavours (e.g. vanillin), fine chemicals (e.g. non-steroidal anti-inflammatory drugs such as ibuprofen or flurbiprofen), and polymers (e.g. poly p-vinylphenol, a photosensitive polymer for electronic and optoelectronic applications). European agrifood waste represents a low-cost, abundant raw material (250 million tons per year) that does not subtract land use or processing resources from necessary sustainable food production. The class of phenolic compounds essentially comprises simple phenols, phenolic acids, hydroxycinnamic acid derivatives, flavonoids, and lignans. As in the case of coke production, removing the phenolic content from biomass also upgrades the residual biomass. Focusing on the phenolic component of agrifood wastes, huge processing and marketing opportunities open up, since phenols are used as chemical intermediates for a large number of applications, ranging from pharmaceuticals and agricultural chemicals to food ingredients. Following this approach, we developed a biorefining process to recover the phenolic fraction of wheat bran, based on commercial enzymatic biocatalysts in a completely water-based process and on polymeric resins, with the aim of substituting secondary chemical building blocks with the same compounds naturally present in biomass. We characterized several industrial enzymatic products for their ability to hydrolyse the different molecular features present in wheat bran cell wall structures, focusing on the hydrolysis of polysaccharide chains and phenolic cross-links. These industrial biocatalysts were tested on wheat bran, and the optimized process liquefied up to 60% of the treated matter. The enzymatic treatment was also able to solubilise up to 30% of the alkali-extractable ferulic acid. An extraction process for the phenolic fraction of the hydrolysed wheat bran was developed, based on adsorption/desorption on the styrene-polyvinylbenzene weak cation-exchange resin Amberlite IRA 95. The efficiency of the resin was tested on different model systems containing ferulic acid, and the adsorption and desorption parameters were optimized for the crude enzymatic wheat bran hydrolysate. The extraction process developed had an overall yield of 82% and produced concentrated extracts containing up to 3000 ppm of ferulic acid. The crude enzymatic wheat bran hydrolysate and the concentrated extract were finally used as substrates in a bioconversion of ferulic acid into vanillin through resting-cell fermentation. The bioconversion process gave vanillin yields of 60-70% within 5-6 hours of fermentation. Our findings are a first step towards demonstrating the economic feasibility of recovering biophenols from agrifood wastes through a whole-crop approach in a sustainable biorefining process.
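A back-of-the-envelope combination of the quoted yields, under the assumption (not stated in the abstract) that the extraction and bioconversion yields compose multiplicatively:

```python
# Rough overall ferulic-acid-to-vanillin yield implied by the quoted figures.
extraction_yield = 0.82             # phenolic extraction step
bioconversion_yield = (0.60, 0.70)  # ferulic acid -> vanillin

overall = [extraction_yield * y for y in bioconversion_yield]
print([round(v, 2) for v in overall])   # ~0.49 to 0.57 overall
```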