986 results for named inventories


Relevance:

20.00%

Publisher:

Abstract:

Cities and urban regions are undertaking efforts to quantify greenhouse gas (GHG) emissions from their jurisdictional boundaries. Although inventorying methodologies are beginning to standardize for GHG sources, carbon sequestration is generally not quantified. This article describes the methodology and quantification of gross urban carbon sinks. Sinks are categorized into direct and embodied sinks. Direct sinks generally incorporate natural processes, such as humification in soils and photosynthetic biomass growth (in urban trees, perennial crops, and regional forests). Embodied sinks include activities associated with consumptive behavior that result in the import and/or storage of carbon, such as landfilling of waste, concrete construction, and utilization of durable wood products. Using methodologies based on the Intergovernmental Panel on Climate Change 2006 guidelines (for direct sinks) and peer-reviewed literature (for embodied sinks), carbon sequestration for 2005 is calculated for the Greater Toronto Area. Direct sinks are found to be 317 kilotons of carbon (kt C) and are dominated by regional forest biomass. Embodied sinks are calculated to be 234 kt C based on one year's consumption, though a complete life-cycle accounting of emissions would likely transform this sum from a carbon sink to a source. There is considerable uncertainty associated with the methodologies used, which could be addressed with city-specific stock-change measurements. Further options for enhancing carbon sink capacity within urban environments are explored, such as urban biomass growth and carbon capture and storage.
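The direct/embodied split described above can be sketched as a simple tally. The category totals (317 and 234 kt C) are the figures reported for the Greater Toronto Area; the per-category breakdown below is hypothetical, chosen only so the sums match.

```python
# Toy urban carbon-sink tally in the direct/embodied split described above.
# Category totals are the reported GTA figures; component values are hypothetical.
direct_sinks_kt_c = {
    "regional forest biomass": 280.0,    # hypothetical split; direct sinks are
    "urban tree and crop growth": 25.0,  # reported to be dominated by regional
    "soil humification": 12.0,           # forest biomass
}
embodied_sinks_kt_c = {
    "landfilled waste": 120.0,           # hypothetical split
    "concrete construction": 70.0,
    "durable wood products": 44.0,
}

direct_total = sum(direct_sinks_kt_c.values())      # 317 kt C (reported)
embodied_total = sum(embodied_sinks_kt_c.values())  # 234 kt C (reported)
gross_sink = direct_total + embodied_total          # 551 kt C gross
```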


This paper presents an approach for assisting low-literacy readers in accessing online information on the Web. The "Educational FACILITA" tool is a Web content adaptation tool that provides innovative features and follows more intuitive interaction models with regard to accessibility concerns. In particular, we propose an interaction model and a Web application that explore the natural language processing tasks of lexical elaboration and named entity labeling to improve Web accessibility. We report the results of a pilot usability study carried out with low-literacy users. The preliminary results show that Educational FACILITA improves the comprehension of text elements, although the assistance mechanisms may also confuse users when word sense ambiguity is introduced by gathering, for a complex word, a list of synonyms with multiple meanings. This points to a future solution in which the correct sense of a complex word in a sentence is identified, addressing this pervasive characteristic of natural languages. The pilot study also showed that experienced computer users find the tool more useful than novice computer users do.


We present an input-output analysis of the life-cycle labor, land, and greenhouse gas (GHG) requirements of alternative options for three case studies: investing money in a new vehicle versus in repairs of an existing vehicle (labor), passenger transport modes for a trip between Sydney and Melbourne (land use), and renewable electricity generation (GHG emissions). These case studies were chosen to demonstrate the possibility of rank crossovers in life-cycle inventory (LCI) results as system boundaries are expanded and upstream production inputs are taken into account. They demonstrate that differential convergence can cause crossovers in the ranking of inventories for alternative functional units occurring at second- and higher-order upstream production layers.
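The layer-by-layer expansion behind such a crossover can be sketched with a tiny two-sector input-output model. The coefficient matrix, emission intensities, and demand vectors below are hypothetical; they only illustrate how cumulative inventories over production layers (f, Af, A²f, ...) can reverse an initial ranking at the second-order layer.

```python
# Rank crossover in a hypothetical 2-sector input-output model.
def matvec(A, x):
    """Multiply a list-of-lists matrix by a vector."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cumulative_layers(e, A, f, n_layers):
    """Cumulative inventory after including upstream layers 0..n_layers."""
    x, total, out = list(f), 0.0, []
    for _ in range(n_layers + 1):
        total += dot(e, x)   # inventory contributed by this production layer
        out.append(total)
        x = matvec(A, x)     # inputs needed to produce the previous layer
    return out

A = [[0.1, 0.0], [0.0, 0.3]]   # technical coefficients (hypothetical)
e = [1.0, 0.8]                 # GHG intensity per unit of sector output
option_a = [1.0, 0.0]          # functional unit supplied by sector 1
option_b = [0.0, 1.0]          # functional unit supplied by sector 2

cum_a = cumulative_layers(e, A, option_a, 3)  # ~[1.0, 1.1, 1.11, 1.111]
cum_b = cumulative_layers(e, A, option_b, 3)  # ~[0.8, 1.04, 1.112, 1.1336]
```

At layer 0, option B looks preferable (0.8 versus 1.0); once second-order upstream inputs are included, the ranking crosses over, mirroring the paper's point about differential convergence.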


Life-cycle assessment (LCA) is a method for evaluating the environmental impacts of products holistically, including direct and supply-chain impacts. Current LCA methodologies and the standards of the International Organization for Standardization (ISO) impose practical difficulties in drawing system boundaries; decisions on the inclusion or exclusion of processes in an analysis (the cutoff criteria) are typically not made on a scientific basis. In particular, the requirement of deciding which processes may be excluded from the inventory can be difficult to meet, because many excluded processes have often never been assessed by the practitioner, and their negligibility therefore cannot be guaranteed. LCA studies utilizing economic input-output analysis have shown that, in practice, excluded processes can contribute as much to the product system under study as included processes; thus, a subjective determination of the system boundary may lead to invalid results. System boundaries in LCA are discussed herein, with particular attention to outlining hybrid approaches as methods for resolving the boundary selection problem in LCA. An input-output model can be used to describe at least a part of a product system, and an ISO-compatible system boundary selection procedure can be designed by applying hybrid input-output-assisted approaches. Several hybrid input-output-based LCA methods can be implemented in practice to broaden the system boundary and to achieve ISO compliance.
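The hybrid idea can be sketched in a few lines: flows inside the boundary are covered by process data, and the cutoff (excluded inputs) is estimated from purchase values multiplied by sectoral input-output intensities. All flows, spend values, and intensities below are hypothetical.

```python
# Sketch of a tiered hybrid inventory: process data for the foreground,
# an input-output estimate for the cutoff. All numbers are hypothetical.
process_kg_co2 = {            # foreground flows with process LCI data
    "electricity use": 12.0,
    "steel input": 30.0,
}
cutoff_spend_usd = {          # inputs excluded from the process inventory
    "business services": 200.0,
    "plastic parts": 50.0,
}
io_intensity_kg_per_usd = {   # sector intensities from an IO model
    "business services": 0.05,
    "plastic parts": 0.40,
}

truncated_total = sum(process_kg_co2.values())          # process-only result
cutoff_estimate = sum(spend * io_intensity_kg_per_usd[sector]
                      for sector, spend in cutoff_spend_usd.items())
hybrid_total = truncated_total + cutoff_estimate        # boundary-complete
```

The gap between `truncated_total` and `hybrid_total` is exactly the contribution a purely process-based study would have cut off by its subjective boundary.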


In named entity recognition (NER) for biomedical literature, approaches based on combined classifiers have demonstrated great performance improvements compared to a single (best) classifier. This is mainly owed to a sufficient level of diversity among the classifiers, which is a selective property of the classifier set. Given a large number of classifiers, how to select which classifiers to put into a classifier ensemble is a crucial issue in multiple classifier-ensemble design. With this observation in mind, we propose a generic genetic classifier-ensemble method for classifier selection in biomedical NER. Various diversity measures and majority voting are considered, and disjoint feature subsets are selected to construct the individual classifiers. Support Vector Machine (SVM) classifiers are adopted as the base classifiers, forming an SVM-classifier committee. A multi-objective genetic algorithm (GA) is employed as the classifier selector, enabling the ensemble to improve overall sample classification accuracy. The proposed approach is tested on the benchmark GENIA version 3.02 corpus and compared with the individual best SVM classifier, an SVM-classifier ensemble algorithm, and other machine learning methods such as CRF, HMM, and MEMM. The results show that the proposed approach outperforms the other classification algorithms and can be a useful method for the biomedical NER problem.
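Two of the building blocks named above, majority voting over a committee and a diversity measure between members, can be sketched directly; the GA-based selection itself is omitted here, and the labels and predictions are hypothetical.

```python
# Majority voting over a committee of base classifiers, plus pairwise
# disagreement as a simple diversity measure. A real system would train
# SVMs on disjoint feature subsets and let a multi-objective GA pick the
# committee; the predictions below are hypothetical.
from collections import Counter

def majority_vote(committee_preds):
    """Per-token majority label; ties broken alphabetically for determinism."""
    voted = []
    for column in zip(*committee_preds):
        counts = Counter(column)
        top = max(counts.values())
        voted.append(sorted(l for l in counts if counts[l] == top)[0])
    return voted

def disagreement(p, q):
    """Fraction of tokens on which two classifiers disagree (diversity)."""
    return sum(a != b for a, b in zip(p, q)) / len(p)

preds = [
    ["B-protein", "O", "O",         "B-dna"],  # committee member 1
    ["B-protein", "O", "B-protein", "B-dna"],  # committee member 2
    ["O",         "O", "B-protein", "B-dna"],  # committee member 3
]

ensemble = majority_vote(preds)            # ['B-protein', 'O', 'B-protein', 'B-dna']
div_12 = disagreement(preds[0], preds[1])  # 0.25
```

A GA fitness function would then score candidate committees on held-out accuracy (and, in the multi-objective setting, diversity) rather than the fixed committee shown here.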


Named entity recognition (NER) is an essential step in the process of information extraction within text mining. This paper proposes a technique to extract drug named entities from unstructured and informal medical text using a hybrid model of lexicon-based and rule-based techniques. In the proposed model, a lexicon is first used as the initial step to detect drug named entities. Inference rules are then deployed to further extract undetected drug names. The designed rules employ part-of-speech tags and morphological features for drug name detection. The proposed hybrid model is evaluated using a benchmark data set from the i2b2 2009 medication challenge, and achieves an f-score of 66.97%.
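The lexicon-then-rules cascade can be sketched as two passes over POS-tagged tokens. The lexicon entries, the suffix list, and the POS rule below are hypothetical stand-ins for the lexicon and inference rules described above.

```python
# Minimal sketch of a lexicon-then-rules cascade for drug NER.
# Lexicon, suffixes, and the POS rule are hypothetical.
DRUG_LEXICON = {"aspirin", "ibuprofen", "warfarin"}
DRUG_SUFFIXES = ("cillin", "mycin", "azole")  # morphological cues

def extract_drugs(tagged_tokens):
    drugs = []
    for word, pos in tagged_tokens:
        if word.lower() in DRUG_LEXICON:      # pass 1: lexicon lookup
            drugs.append(word)
        elif pos.startswith("NN") and word.lower().endswith(DRUG_SUFFIXES):
            drugs.append(word)                # pass 2: POS + morphology rule
    return drugs

tagged = [("Patient", "NN"), ("took", "VBD"), ("aspirin", "NN"),
          ("and", "CC"), ("amoxicillin", "NN"), ("today", "RB")]
found = extract_drugs(tagged)  # ['aspirin', 'amoxicillin']
```

Here "aspirin" is caught by the lexicon and "amoxicillin" only by the rule pass, which is exactly the division of labor the hybrid model relies on.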


Objective: The objective of this paper is to formulate an extended segment representation (SR) technique to enhance named entity recognition (NER) in medical applications.

Methods: An extension of the IOBES (Inside/Outside/Begin/End/Single) SR technique is formulated. In the proposed extension, a new class is assigned to words that do not belong to a named entity (NE) in one context but appear as part of an NE in other contexts. Ambiguity in such cases can negatively affect the results of classification-based NER techniques. Assigning a separate class to words that can potentially cause ambiguity allows a classifier to detect NEs more accurately, thereby increasing classification accuracy.

Results: The proposed SR technique is evaluated using the i2b2 2010 medical challenge data set with eight different classifiers. Each classifier is trained separately to extract three different medical NEs, namely treatment, problem, and test. Across the three experiments, the extended SR technique improves the average F1-measure of seven of the eight classifiers. The kNN classifier shows an average reduction of 0.18% across the three experiments, while the C4.5 classifier records an average improvement of 9.33%.
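The core relabeling step can be sketched as follows: scan the training data for words observed both inside and outside NEs, then give their outside occurrences a separate "ambiguous outside" class instead of plain O. The tag set and tokens below are simplified, hypothetical stand-ins for full IOBES tags over the i2b2 2010 corpus.

```python
# Sketch of the extended SR idea: words seen both inside and outside NEs
# get a separate class ("OA", a hypothetical label) for their O occurrences.
def extend_sr(tagged_tokens, ambiguous_tag="OA"):
    inside = {w for w, t in tagged_tokens if t != "O"}
    outside = {w for w, t in tagged_tokens if t == "O"}
    ambiguous = inside & outside  # words seen as both entity and non-entity
    return [(w, ambiguous_tag if t == "O" and w in ambiguous else t)
            for w, t in tagged_tokens]

train = [("blood", "B"), ("pressure", "E"),  # "blood pressure" as a test NE
         ("was", "O"), ("high", "O"),
         ("blood", "O"), ("sample", "O")]    # "blood" outside any NE

extended = extend_sr(train)
# [('blood', 'B'), ('pressure', 'E'), ('was', 'O'), ('high', 'O'),
#  ('blood', 'OA'), ('sample', 'O')]
```

A classifier trained on the extended tags can then treat "blood" in "blood sample" differently from an unambiguous non-entity word, which is the source of the accuracy gains reported above.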


Accurate Named Entity Recognition (NER) is important for knowledge discovery in text mining. This paper proposes an ensemble machine learning approach to recognise Named Entities (NEs) in unstructured and informal medical text. Specifically, Conditional Random Field (CRF) and Maximum Entropy (ME) classifiers are applied individually to the test data set from the i2b2 2010 medication challenge. Each classifier is trained using a different set of features: the first set focuses on the contextual features of the data, while the second concentrates on the linguistic features of each word. The results of the two classifiers are then combined. The proposed approach achieves an f-score of 81.8%, a considerable improvement over the individual CRF and ME classifiers, which achieve f-scores of 76% and 66.3%, respectively, on the same data set.
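The combination step can be sketched with a simple merge of the two taggers' outputs. The abstract does not spell out the combination rule, so the policy below, keep the first tagger's label and fall back to the second wherever the first predicts O, is a hypothetical illustration.

```python
# Hypothetical combination policy for two NER taggers' label sequences:
# trust the primary tagger, but recover entities it missed (labeled O)
# from the secondary tagger.
def combine(primary, secondary):
    return [s if p == "O" and s != "O" else p
            for p, s in zip(primary, secondary)]

crf_out = ["B-treatment", "O", "O",           "O"]  # contextual-feature model
me_out  = ["B-treatment", "O", "B-treatment", "O"]  # linguistic-feature model

merged = combine(crf_out, me_out)
# ['B-treatment', 'O', 'B-treatment', 'O']
```

Because the two models are trained on different feature sets, their errors are partly independent, which is why a merge like this can beat either model alone.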


Named Entity Recognition (NER) is a crucial step in text mining. This paper proposes a new graph-based technique for representing unstructured medical text. The new representation is used to extract discriminative features that enhance NER performance. To evaluate the usefulness of the proposed graph-based technique, the i2b2 medication challenge data set is used. Specifically, the 'treatment' named entities are extracted for evaluation using six different classifiers. The F-measure results of five of the classifiers are enhanced, with an average improvement of up to 26% in performance.
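One common way to build such a representation is a co-occurrence graph over the token sequence, from which per-word structural features (here, node degree) can be read off. The windowing scheme and the degree feature are illustrative assumptions, not necessarily the exact representation used in the paper.

```python
# Sketch of a co-occurrence graph over a token sequence; node degree is
# then usable as a per-word feature. Window size and feature choice are
# illustrative assumptions.
from collections import defaultdict

def build_graph(tokens, window=1):
    adj = defaultdict(set)
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            adj[w].add(tokens[j])   # undirected edge within the window
            adj[tokens[j]].add(w)
    return adj

tokens = ["start", "aspirin", "twice", "daily"]
graph = build_graph(tokens)
degree = {w: len(nbrs) for w, nbrs in graph.items()}
# {'start': 1, 'aspirin': 2, 'twice': 2, 'daily': 1}
```

Features like these complement token-level features (spelling, POS) because they capture how a word is connected across the whole document rather than in a single context.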