1000 results for query extraction


Relevance:

60.00%

Publisher:

Abstract:

Information portals are seen as an appropriate platform for personalised healthcare and wellbeing information provision. Efficient content management is a core capability of a successful smart health information portal (SHIP), and domain expertise is a vital input to content management when it comes to matching user profiles with the appropriate resources. The rate of generation of new health-related content far exceeds what can be manually examined by domain experts for relevance to a specific topic and audience. In this paper we investigate automated content discovery as a plausible solution to this shortcoming, one that capitalises on the existing database of expert-endorsed content as an implicit store of knowledge to guide such a solution. We propose a novel content discovery technique based on a text analytics approach that utilises an existing content repository to acquire new and relevant content. We also highlight the contribution of this technique towards the realisation of smart content management for SHIPs.
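The core idea above, scoring new content against an expert-endorsed repository, can be sketched with a simple bag-of-words similarity. This is a minimal illustration, not the paper's actual text-analytics pipeline; the threshold, tokenisation, and example documents are all assumptions.

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term counts: a simple stand-in for richer text analytics.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def discover(candidates, endorsed, threshold=0.2):
    # Score each candidate document against the centroid of the
    # expert-endorsed repository; keep only the ones above a threshold.
    centroid = Counter()
    for doc in endorsed:
        centroid.update(vectorize(doc))
    return [d for d in candidates if cosine(vectorize(d), centroid) >= threshold]

endorsed = ["diabetes diet and exercise advice",
            "managing blood sugar through diet"]
candidates = ["new diet advice for diabetes patients",
              "celebrity gossip and movie news"]
print(discover(candidates, endorsed))
```

The endorsed repository thus acts as an implicit relevance model: no expert needs to label the candidates directly.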

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a new active learning query strategy for information extraction, called Domain Knowledge Informativeness (DKI). Active learning is often used to reduce the amount of annotation effort required to obtain training data for machine learning algorithms. A key component of an active learning approach is the query strategy, which is used to iteratively select samples for annotation. Knowledge resources have been used in information extraction as a means to derive additional features for sample representation. DKI is, however, the first query strategy that exploits such resources to inform sample selection. To evaluate the merits of DKI, in particular with respect to the reduction in annotation effort that the new query strategy allows us to achieve, we conduct a comprehensive empirical comparison of active learning query strategies for information extraction within the clinical domain. The clinical domain was chosen for this work because of the availability of extensive structured knowledge resources which have often been exploited for feature generation. In addition, the clinical domain offers a compelling use case for active learning because of the high costs and hurdles associated with obtaining annotations in this domain. Our experimental findings demonstrate that 1) amongst existing query strategies, the ones based on the classification model’s confidence are a better choice for clinical data, as they perform equally well with a much lighter computational load, and 2) significant reductions in annotation effort are achievable by exploiting knowledge resources within active learning query strategies, with up to 14% fewer tokens and concepts to manually annotate than with state-of-the-art query strategies.
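The confidence-based query strategies the paper compares against can be illustrated with a least-confidence selector: query the samples whose top predicted class probability is lowest. This is a generic sketch of that baseline family, not DKI itself; the toy probability table is invented for illustration.

```python
def least_confidence(pool, predict_proba, batch_size=2):
    # Rank unlabelled samples by the model's confidence in its top
    # prediction; the least confident ones are queried for annotation.
    scored = [(max(predict_proba(x)), x) for x in pool]
    scored.sort(key=lambda pair: pair[0])
    return [x for _, x in scored[:batch_size]]

# Toy probability model: a hypothetical stand-in for a trained classifier.
fake_proba = {"a": [0.95, 0.05], "b": [0.55, 0.45], "c": [0.60, 0.40]}
picked = least_confidence(["a", "b", "c"], lambda x: fake_proba[x])
print(picked)  # the two samples the model is least sure about
```

Strategies like this are computationally light because they reuse the model's own output, which is why the paper finds them attractive for clinical data.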

Relevance:

40.00%

Publisher:

Abstract:

A rapidly increasing number of Web databases have now become accessible via their HTML form-based query interfaces. Query result pages are dynamically generated in response to user queries; they encode structured data and are displayed for human use. Query result pages usually contain other types of information in addition to query results, e.g., advertisements and navigation bars. The problem of extracting structured data from query result pages is critical for web data integration applications, such as comparison shopping and meta-search engines, and has been intensively studied; a number of approaches have been proposed. As the structures of Web pages become more and more complex, the existing approaches start to fail, and most of them do not remove irrelevant content that may affect the accuracy of data record extraction. We propose an automated approach for Web data extraction. First, it makes use of visual features and query terms to identify data sections and extracts data records in these sections. We also represent several content and visual features of visual blocks in a data section, and use them to filter out noisy blocks. Second, it measures similarity between data items in different data records based on their visual and content features, and aligns them into different groups so that the data in the same group have the same semantics. The results of our experiments with a large set of Web query result pages in different domains show that our proposed approaches are highly effective.
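The noisy-block filtering step can be sketched as a rule over simple visual features of each block. The feature names (`width`, `record_count`) and thresholds below are illustrative assumptions, not the paper's actual feature set.

```python
def filter_noise_blocks(blocks, min_width=200, min_records=2):
    # Keep only visual blocks that look like data sections: wide enough
    # and containing several repeated, record-like children. Blocks such
    # as ads and navigation bars fail one of the two tests.
    return [b for b in blocks
            if b["width"] >= min_width and b["record_count"] >= min_records]

page_blocks = [
    {"id": "results", "width": 600, "record_count": 10},
    {"id": "ads", "width": 160, "record_count": 1},
    {"id": "navbar", "width": 600, "record_count": 0},
]
kept = filter_noise_blocks(page_blocks)
print([b["id"] for b in kept])
```

A real system would derive such features from the rendered page layout rather than hand-written dictionaries.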

Relevance:

40.00%

Publisher:

Abstract:

The central objective of research in Information Retrieval (IR) is to discover new techniques to retrieve relevant information in order to satisfy an Information Need. The Information Need is satisfied when relevant information can be provided to the user. In IR, relevance is a fundamental concept that has changed over time, from popular to personal: what was considered relevant before was information for the whole population, whereas what is considered relevant now is specific information for each user. Hence there is a need to connect the behavior of the system to the condition of a particular person and their social context; from this need the interdisciplinary field of Human-Centered Computing was born. For the modern search engine, the information extracted for the individual user is crucial. In Personalized Search (PS), two different techniques are necessary to personalize a search: contextualization (the interconnected conditions that occur in an activity) and individualization (the characteristics that distinguish an individual). This shift of focus to the individual's need undermines the rigid linearity of the classical model, which was overtaken by the ``berry picking'' model: search terms change thanks to the informational feedback received from the search activity, introducing the concept of the evolution of search terms. The development of Information Foraging theory, which observed the correlations between animal foraging and human information foraging, also contributed to this transformation through attempts to optimize the cost-benefit ratio. This thesis arose from the need to satisfy human individuality when searching for information, and it develops a synergistic collaboration between the frontiers of technological innovation and the recent advances in IR.
The search method developed exploits what is relevant for the user by radically changing the way in which an Information Need is expressed: it is now expressed through the generation of the query and its own context. The method was conceived to improve the quality of search by rewriting the query based on contexts automatically generated from a local knowledge base. Furthermore, the idea of optimizing each IR system led to developing it as a middleware of interaction between the user and the IR system. The system therefore has just two possible actions: rewriting the query and reordering the results. Equivalent actions are described in PS, which generally exploits information derived from the analysis of user behavior, while the proposed approach exploits knowledge provided by the user. The thesis goes further and generates a novel assessment procedure, following the "Cranfield paradigm", to evaluate this type of IR system. The results achieved are interesting considering both the effectiveness attained and the innovative approach undertaken, together with the several applications it inspires for a local knowledge base.
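The query-rewriting action described above can be sketched as expanding a query with context terms drawn from a local knowledge base. The KB structure and example terms below are assumptions for illustration; the thesis's actual context generation is richer than a term lookup.

```python
def rewrite_query(query, knowledge_base):
    # Middleware-style rewrite: expand the user's query with context
    # terms from a local knowledge base, so the Information Need carries
    # its own context to the underlying IR system.
    terms = query.split()
    expanded = list(terms)
    for t in terms:
        for ctx in knowledge_base.get(t, []):
            if ctx not in expanded:
                expanded.append(ctx)
    return " ".join(expanded)

# Hypothetical local knowledge base disambiguating an ambiguous term.
kb = {"jaguar": ["animal", "feline"]}
print(rewrite_query("jaguar habitat", kb))
```

Because the rewrite happens before the query reaches the IR system, any search engine can sit behind the middleware unchanged.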

Relevance:

30.00%

Publisher:

Abstract:

Data processing for information extraction is of growing importance for Web databases. Due to the sheer size and volume of databases, retrieval of the relevant information needed by users has become a cumbersome process. Information seekers face information overload: too many result sets are returned for their queries. Moreover, too few or no results are returned if a very specific query is asked. This paper proposes a ranking algorithm that gives higher preference to a user's current search and also utilizes profile information in order to obtain the relevant results for a user's query.
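A ranking of this shape, current query weighted above the stored profile, can be sketched as a weighted combination of two overlap scores. The weights, scoring function, and example data are illustrative assumptions, not the paper's algorithm.

```python
def rank_results(results, query_terms, profile_terms, w_query=0.7, w_profile=0.3):
    # Score each result by term overlap with the current query (weighted
    # higher) and with the stored user profile, then sort descending.
    def overlap(text, terms):
        words = set(text.lower().split())
        return len(words & terms) / len(terms) if terms else 0.0
    q, p = set(query_terms), set(profile_terms)
    return sorted(results,
                  key=lambda r: w_query * overlap(r, q) + w_profile * overlap(r, p),
                  reverse=True)

results = ["cheap laptop deals", "laptop repair guide", "gaming laptop reviews"]
ranked = rank_results(results, ["cheap", "laptop"], ["gaming", "reviews"])
print(ranked)
```

Giving the live query the larger weight keeps the profile from overriding what the user is asking for right now.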

Relevance:

30.00%

Publisher:

Abstract:

The quality of the features discovered in relevance feedback (RF) is the key issue for effective search queries. Most existing feedback methods do not carefully address the issue of selecting features for noise reduction. As a result, extracted noisy features can easily degrade effectiveness. In this paper, we propose a novel feature extraction method for query formulation. This method first extracts term association patterns in RF as knowledge for feature extraction. Negative RF is then used to improve the quality of the discovered knowledge. A novel information filtering (IF) model is developed to evaluate the proposed method. The experimental results, conducted on Reuters Corpus Volume 1 and TREC topics, confirm that the proposed model achieves encouraging performance compared to state-of-the-art IF models.
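Using negative RF to de-noise discovered features can be illustrated with a Rocchio-style term weighting: terms frequent in positive feedback are promoted, terms frequent in negative feedback are penalised. This is a classic sketch of the idea, not the paper's pattern-mining method; the weights and documents are assumptions.

```python
from collections import Counter

def feedback_terms(positive_docs, negative_docs, top_k=3, beta=1.0, gamma=1.0):
    # Weight each term by its frequency in positive feedback minus its
    # frequency in negative feedback, then keep the top positive-scoring
    # terms as query features.
    pos, neg = Counter(), Counter()
    for d in positive_docs:
        pos.update(d.lower().split())
    for d in negative_docs:
        neg.update(d.lower().split())
    scores = {t: beta * pos[t] - gamma * neg.get(t, 0) for t in pos}
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]
    return [t for t, s in ranked if s > 0]

pos_docs = ["stock market rally", "market gains on stock news"]
neg_docs = ["football match news"]
print(feedback_terms(pos_docs, neg_docs))
```

Note how "news" is suppressed: it appears in both feedback sets, so the negative evidence cancels it out.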

Relevance:

30.00%

Publisher:

Abstract:

A building information model (BIM) provides a rich representation of a building's design. However, there are many challenges in getting construction-specific information from a BIM, limiting the usability of BIM for construction and other downstream processes. This paper describes a novel approach that utilizes ontology-based feature modeling, automatic feature extraction based on ifcXML, and query processing to extract information relevant to construction practitioners from a given BIM. The feature ontology generically represents construction-specific information that is useful for a broad range of construction management functions. The software prototype uses the ontology to transform the designer-focused BIM into a construction-specific feature-based model (FBM). The formal query methods operate on the FBM to further help construction users to quickly extract the necessary information from a BIM. Our tests demonstrate that this approach provides a richer representation of construction-specific information compared to existing BIM tools.

Relevance:

30.00%

Publisher:

Abstract:

Building information modeling (BIM) is an emerging technology and process that provides rich and intelligent design information models of a facility, enabling enhanced communication, coordination, analysis, and quality control throughout all phases of a building project. Although there are many documented benefits of BIM for construction, identifying essential construction-specific information out of a BIM in an efficient and meaningful way is still a challenging task. This paper presents a framework that combines feature-based modeling and query processing to leverage BIM for construction. The feature-based modeling representation implemented enriches a BIM by representing construction-specific design features relevant to different construction management (CM) functions. The query processing implemented allows for increased flexibility to specify queries and rapidly generate the desired view from a given BIM according to the varied requirements of a specific practitioner or domain. Central to the framework is the formalization of construction domain knowledge in the form of a feature ontology and query specifications. The implementation of our framework enables the automatic extraction and querying of a wide range of design conditions that are relevant to construction practitioners. The validation studies conducted demonstrate that our approach is significantly more effective than existing solutions. The research described in this paper has the potential to improve the efficiency and effectiveness of decision-making processes in different CM functions.
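Querying a feature-based model (FBM) extracted from a BIM can be sketched as filtering a list of typed features by attribute conditions. The feature schema, type names, and thresholds below are simplified assumptions for illustration, not the framework's actual ontology or query language.

```python
def query_features(feature_model, feature_type=None, min_value=None, attr=None):
    # Filter an FBM by feature type and an optional numeric attribute
    # threshold, returning the practitioner-specific view.
    out = []
    for f in feature_model:
        if feature_type and f["type"] != feature_type:
            continue
        if attr is not None and min_value is not None and f.get(attr, 0) < min_value:
            continue
        out.append(f)
    return out

# Hypothetical features extracted from a BIM (ids and values invented):
fbm = [
    {"id": "W1", "type": "wall_opening", "height_m": 2.1},
    {"id": "W2", "type": "wall_opening", "height_m": 0.8},
    {"id": "S1", "type": "slab_penetration", "height_m": 0.3},
]
tall_openings = query_features(fbm, "wall_opening", min_value=1.0, attr="height_m")
print([f["id"] for f in tall_openings])
```

Different CM functions would issue different queries over the same FBM, which is the flexibility the framework aims for.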

Relevance:

30.00%

Publisher:

Abstract:

We identify relation completion (RC) as a recurring problem that is central to the success of novel big data applications such as Entity Reconstruction and Data Enrichment. Given a semantic relation, RC attempts to link entity pairs between two entity lists under the relation. To accomplish the RC goals, we propose to formulate search queries for each query entity α based on some auxiliary information, so as to detect its target entity β from the set of retrieved documents. For instance, a pattern-based method (PaRE) uses extracted patterns as the auxiliary information in formulating search queries. However, high-quality patterns may decrease the probability of finding suitable target entities. As an alternative, we propose the CoRE method, which uses context terms learned from the surroundings of a relation's expression as the auxiliary information in formulating queries. The experimental results, based on several real-world web data collections, demonstrate that CoRE reaches a much higher accuracy than PaRE for the purpose of RC.
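CoRE-style query formulation can be sketched as pairing the query entity with learned context terms. The context terms below are invented examples standing in for terms learned from a corpus; the quoting convention is an assumption about the downstream search engine.

```python
def formulate_query(entity, context_terms):
    # Build a search query from the query entity plus context terms
    # learned around the relation's textual expression; the retrieved
    # documents are then scanned for the target entity.
    return f'"{entity}" ' + " ".join(context_terms)

# Hypothetical context terms for a "capital-of" relation:
ctx = ["capital", "city", "of"]
q = formulate_query("France", ctx)
print(q)
```

Compared with rigid extraction patterns, loose context terms keep recall high: a document need not contain an exact pattern to be retrieved.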

Relevance:

30.00%

Publisher:

Abstract:

Identifying symmetry in scalar fields is a recent area of research in scientific visualization and computer graphics communities. Symmetry detection techniques based on abstract representations of the scalar field use only limited geometric information in their analysis. Hence they may not be suited for applications that study the geometric properties of the regions in the domain. On the other hand, methods that accumulate local evidence of symmetry through a voting procedure have been successfully used for detecting geometric symmetry in shapes. We extend such a technique to scalar fields and use it to detect geometrically symmetric regions in synthetic as well as real-world datasets. Identifying symmetry in the scalar field can significantly improve visualization and interactive exploration of the data. We demonstrate different applications of the symmetry detection method to scientific visualization: query-based exploration of scalar fields, linked selection in symmetric regions for interactive visualization, and classification of geometrically symmetric regions and its application to anomaly detection.
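The local-evidence voting idea can be illustrated in one dimension: every pair of samples with (near-)equal scalar value votes for the mirror axis midway between them, and the axis with the most votes wins. This is a deliberately reduced sketch; the paper's method operates on full scalar fields with geometric transformations.

```python
def mirror_symmetry_votes(field, tolerance=1e-6):
    # Accumulate votes for candidate mirror axes: each pair of samples
    # with near-equal scalar values votes for the axis halfway between
    # their indices. Return the axis with the most votes.
    votes = {}
    n = len(field)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(field[i] - field[j]) <= tolerance:
                axis = (i + j) / 2
                votes[axis] = votes.get(axis, 0) + 1
    return max(votes, key=votes.get) if votes else None

# A field symmetric about index 2:
print(mirror_symmetry_votes([1.0, 4.0, 9.0, 4.0, 1.0]))  # → 2.0
```

The tolerance parameter is what lets the vote accumulate "local evidence" rather than demanding exact equality in noisy real-world data.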

Relevance:

30.00%

Publisher:

Abstract:

The goal of this work was to develop a query processing system using software agents. The Open Agent Architecture framework is used for system development. The system supports queries in both Hindi and Malayalam, two prominent regional languages of India. Natural language processing techniques are used to extract meaning from the plain query, and information from the database is given back to the user in his native language. The system architecture is designed in a structured way so that it can be adapted to other regional languages of India. This system can be effectively used in application areas like e-governance, agriculture, rural health, education, national resource planning, disaster management, and information kiosks, where people from all walks of life are involved.

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the challenge of bridging the semantic gap between the rich meaning users desire when they query to locate and browse media and the shallowness of the media descriptions that can be computed in today's content management systems. To facilitate high-level semantics-based content annotation and interpretation, we tackle the problem of automatic decomposition of motion pictures into meaningful story units, namely scenes. Since a scene is a complicated and subjective concept, we first propose guidelines from film production to determine when a scene change occurs. We then investigate different rules and conventions followed as part of Film Grammar that would guide and shape an algorithmic solution for determining a scene. Two different techniques using intershot analysis are proposed as solutions in this paper. In addition, we present different refinement mechanisms, such as film-punctuation detection founded on Film Grammar, to further improve the results. These refinement techniques demonstrate significant improvements in overall performance. Furthermore, we analyze errors in the context of film-production techniques, which offer useful insights into the limitations of our method.
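A common building block for intershot analysis is shot-boundary detection from frame-to-frame colour histogram differences. The sketch below shows that generic baseline, not the paper's scene-level techniques; the histograms and threshold are invented for illustration.

```python
def shot_boundaries(frame_histograms, threshold=0.5):
    # Flag a shot boundary wherever consecutive frame colour histograms
    # differ by more than a threshold (L1 distance on normalised
    # histograms). Scene detection then groups the resulting shots.
    boundaries = []
    for i in range(1, len(frame_histograms)):
        prev, cur = frame_histograms[i - 1], frame_histograms[i]
        dist = sum(abs(a - b) for a, b in zip(prev, cur))
        if dist > threshold:
            boundaries.append(i)
    return boundaries

frames = [
    [0.8, 0.1, 0.1],  # shot A
    [0.7, 0.2, 0.1],  # shot A, small lighting change
    [0.1, 0.1, 0.8],  # hard cut to shot B
]
print(shot_boundaries(frames))  # → [2]
```

Grouping the detected shots into scenes is the harder, Film-Grammar-informed step that the paper's intershot analysis addresses.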

Relevance:

30.00%

Publisher:

Abstract:

The tight relation between arterial oxygen content and maximum oxygen uptake (VO2max) within a given person at sea level is diminished with altitude acclimatization. An explanation often suggested for this mismatch is impairment of the muscle O2 extraction capacity with chronic hypoxia, which is the focus of the present study. We studied six lowlanders during maximal exercise at sea level (SL) and with acute (AH) exposure to 4,100 m altitude, and again after 2 (W2) and 8 weeks (W8) of altitude sojourn, where eight high-altitude native (Nat) Aymaras were also studied. Fractional arterial muscle O2 extraction at maximal exercise was 90.0+/-1.0% in the Danish lowlanders at sea level, and remained close to this value in all situations. In contrast, fractional arterial O2 extraction was 83.2+/-2.8% in the high-altitude natives, and did not change with the induction of normoxia. The capillary oxygen conductance of the lower extremity, a measure of oxygen diffusing capacity, was decreased in the Danish lowlanders after 8 weeks of acclimatization, but was still higher than the value obtained from the high-altitude natives. The values were (in ml min(-1) mmHg(-1)) 55.2+/-3.7 (SL), 48.0+/-1.7 (W2), 37.8+/-0.4 (W8) and 27.7+/-1.5 (Nat). However, when correcting oxygen conductance for the observed reduction in maximal leg blood flow with acclimatization, the effect diminished. When calculating a hypothetical leg VO2max at altitude using either the leg blood flow or the O2 conductance values obtained at sea level, the former values were almost completely restored to sea level values. This suggests that the major reason VO2max does not increase with acclimatization is the observed reduction in maximal leg blood flow and O2 conductance.
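The fractional O2 extraction figures quoted above follow from the standard relation (CaO2 - CvO2) / CaO2, the share of delivered arterial oxygen taken up across the limb. The worked numbers below are illustrative round values, not the study's measurements.

```python
def o2_extraction_fraction(arterial_o2, venous_o2):
    # Fractional O2 extraction across a limb: (CaO2 - CvO2) / CaO2,
    # where CaO2 and CvO2 are arterial and venous O2 contents.
    return (arterial_o2 - venous_o2) / arterial_o2

# Illustrative contents in ml O2 per litre of blood (not study data):
print(round(o2_extraction_fraction(200.0, 20.0), 2))  # → 0.9
```

A value of 0.9 corresponds to the ~90% extraction the lowlanders showed at sea level.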

Relevance:

30.00%

Publisher:

Abstract:

During maximal whole body exercise, VO2 peak is limited by O2 delivery. In turn, it is thought that blood flow at near-maximal exercise must be restrained by the sympathetic nervous system to maintain mean arterial pressure. To determine whether enhancing vasodilation across the leg results in higher O2 delivery and leg VO2 during near-maximal and maximal exercise in humans, seven men performed two maximal incremental exercise tests on the cycle ergometer. In random order, one test was performed with and one without (control exercise) infusion of ATP (8 mg in 1 ml of isotonic saline solution) into the right femoral artery at a rate of 80 microg.kg body mass-1.min-1. During near-maximal exercise (92% of VO2 peak), the infusion of ATP increased leg vascular conductance (+43%, P<0.05), leg blood flow (+20%, 1.7 l/min, P<0.05), and leg O2 delivery (+20%, 0.3 l/min, P<0.05). No effects were observed on leg or systemic VO2. Leg O2 fractional extraction decreased from 85+/-3 (control) to 78+/-4% (ATP) in the infused leg (P<0.05), while it remained unchanged in the left leg (84+/-2 and 83+/-2%; control and ATP; n=3). ATP infusion at maximal exercise increased leg vascular conductance by 17% (P<0.05), while leg blood flow tended to be elevated by 0.8 l/min (P=0.08). However, neither systemic nor leg peak VO2 values were enhanced, owing to a reduction of O2 extraction from 84+/-4 to 76+/-4% in the control and ATP conditions, respectively (P<0.05). In summary, the VO2 of the skeletal muscles of the lower extremities is not enhanced by limb vasodilation at near-maximal or maximal exercise in humans.
The fact that ATP infusion resulted in a reduction of O2 extraction across the exercising leg suggests a vasodilating effect of ATP on less-active muscle fibers and other noncontracting tissues, and that under normal conditions these regions are under high vasoconstrictor influence to ensure the most efficient distribution of the available cardiac output to the most active muscle fibers of the exercising limb.

Relevance:

30.00%

Publisher:

Abstract:

This article describes the work performed over the database of questions belonging to the different opinion polls carried out during the last 50 years in Spain. Approximately half of the questions are provided with a title while the other half remain untitled. The work and the techniques implemented to automatically generate titles for the untitled questions are described. This process is performed over very short texts, and the generated titles are subject to strong stylistic conventions and should be fully grammatical pieces of Spanish.