946 results for user data
Abstract:
OBJECTIVE: To determine whether algorithms developed for the World Wide Web can be applied to the biomedical literature to identify articles that are important as well as relevant. DESIGN AND MEASUREMENTS: A direct comparison of eight algorithms: simple PubMed queries, clinical queries (sensitive and specific versions), vector cosine comparison, citation count, journal impact factor, PageRank, and machine learning based on polynomial support vector machines. The objective was to prioritize important articles, defined as those included in a pre-existing bibliography of important literature in surgical oncology. RESULTS: Citation-based algorithms were more effective than noncitation-based algorithms at identifying important articles. The most effective strategies were simple citation count and PageRank, which on average identified over six important articles in the first 100 results, compared to 0.85 for the best noncitation-based algorithm (p < 0.001). The authors saw similar differences between citation-based and noncitation-based algorithms at 10, 20, 50, 200, 500, and 1,000 results (p < 0.001). Citation lag affects the performance of PageRank more than that of simple citation count; in spite of citation lag, however, citation-based algorithms remain more effective than noncitation-based algorithms. CONCLUSION: Algorithms that have proved successful on the World Wide Web can be applied to biomedical information retrieval. Citation-based algorithms can help identify important articles within large sets of relevant results. Further studies are needed to determine whether citation-based algorithms can effectively meet actual user information needs.
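The two citation-based strategies the abstract compares can be sketched on a toy citation graph. This is a minimal illustration, not the study's implementation: the article IDs and citation edges below are hypothetical, and the damping factor 0.85 is the conventional PageRank default.

```python
def pagerank(edges, damping=0.85, iterations=50):
    """Iterative PageRank over a directed citation graph.
    edges: list of (citing, cited) article-ID pairs."""
    nodes = {n for e in edges for n in e}
    out_links = {n: [] for n in nodes}
    for citing, cited in edges:
        out_links[citing].append(cited)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for citing, targets in out_links.items():
            if targets:
                share = damping * rank[citing] / len(targets)
                for cited in targets:
                    new_rank[cited] += share
            else:  # dangling article (cites nothing): spread its rank uniformly
                for n in nodes:
                    new_rank[n] += damping * rank[citing] / len(nodes)
        rank = new_rank
    return rank

# Hypothetical citation edges: (citing article, cited article)
edges = [("A", "C"), ("B", "C"), ("D", "C"), ("C", "E"), ("D", "E")]

# Simple citation count: how often each article is cited
citation_count = {}
for _, cited in edges:
    citation_count[cited] = citation_count.get(cited, 0) + 1

pr = pagerank(edges)
top_by_count = max(citation_count, key=citation_count.get)
top_by_pagerank = max(pr, key=pr.get)
```

The two rankings can disagree: citation count treats every citation equally, while PageRank weights citations by the rank of the citing article, which is one reason the study evaluates them separately.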
Abstract:
People often use tools to search for information. To improve the quality of an information search, it is important to understand how internal information, stored in the user’s mind, and external information, represented by the tool’s interface, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between interface types and search task types in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models. The taxonomy clarifies the legitimate operations for each type of search task over relational data. Based on the model and taxonomy, I have also developed interface prototypes for the search tasks of relational data. These prototypes were used for experiments. The experiments described in this study are of a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and one-dimensional nominal, ordinal, interval, and ratio search tasks over table and graph displays. Participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory has been adopted as a theoretical framework for analyzing and predicting the search performance of relational data. It has been shown that the representation dimensions and data scales, as well as the search task types, are the main factors determining search efficiency and effectiveness.
In particular, the more external representations are used, the better the search task performance; the results suggest that ideal search performance occurs when the question type and the corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.
Abstract:
This paper presents a multifactor approach for performance assessment of Water Users Associations (WUAs) in Uzbekistan, aimed at identifying the drivers of improved and efficient WUA performance. The study was carried out in the Fergana Valley, where the WUAs were created along the South Fergana Main Canal during the last 10 years. The farmers and employees of 20 WUAs were questioned about the WUAs’ activities, and quantitative and qualitative data were obtained. These data became the basis for the calculation of 36 indicators divided into 6 groups: water supply, technical conditions, economic conditions, social and cultural conditions, organizational conditions, and information conditions. All the indicators were assessed with a differentiated point system, adjusted for the subjectivity of several of them, giving a maximum total of 250 points per association. The WUAs of the Fergana Valley scored between 145 and 219 points, which reflects a highly diverse level of WUA performance in the region. The analysis of the indicators revealed that the key points of a WUA’s success are the organizational and institutional conditions, including participatory factors and the awareness of both the farmers and the employees about the work of the WUA. The research showed that low performance of the WUAs is consistently explained by poor technical and economic conditions along with weak organizational and information dissemination conditions. It is complicated to improve technical and economic conditions immediately because they are cost-based and cost-induced. However, it is possible to improve the organizational conditions and to strengthen the institutional basis via formal and informal institutions, which will gradually lead to improvement of the economic and technical conditions of WUAs. Farmers should be involved in WUA governance and in the process of making common decisions and solving common problems together via proper institutions.
Their awareness can also be improved by conducting additional trainings to increase farmers’ agronomic and irrigation knowledge, teaching them water-saving technologies, and acquainting them with the use of water-measuring equipment, which can bring reliable water supply, transparent budgeting, and adequate as well as equitable water allocation to the water users.
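The 6-group, 250-point scoring scheme described above can be sketched as a simple capped aggregation. This is an illustrative sketch only: the per-group maxima and the example indicator scores below are hypothetical placeholders (chosen to sum to the stated 250-point ceiling), not the study's actual weights.

```python
# Hypothetical maximum points per indicator group; the six groups sum to 250,
# matching the total described in the study (actual weights not published here).
MAX_POINTS = {
    "water supply": 50,
    "technical": 45,
    "economic": 45,
    "social and cultural": 35,
    "organizational": 40,
    "information": 35,
}

def wua_score(scores):
    """Sum per-group indicator scores, capping each group at its maximum."""
    total = 0
    for group, max_pts in MAX_POINTS.items():
        total += min(scores.get(group, 0), max_pts)
    return total

# Hypothetical assessment of one association
example = {"water supply": 42, "technical": 30, "economic": 28,
           "social and cultural": 30, "organizational": 36, "information": 25}
print(wua_score(example))  # falls within the observed 145-219 range
```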
Abstract:
This paper proposes an automated 3D lumbar intervertebral disc (IVD) segmentation strategy for MRI data. Starting from two user-supplied landmarks, the geometrical parameters of all lumbar vertebral bodies and intervertebral discs are automatically extracted from a mid-sagittal slice using a graphical-model-based approach. A three-dimensional (3D) variable-radius soft tube model of the lumbar spine column is then built to guide the 3D disc segmentation. The disc segmentation is achieved as a multi-kernel diffeomorphic registration between a 3D template of the disc and the observed MRI data. Experiments on 15 patient data sets showed the robustness and accuracy of the proposed algorithm.
Abstract:
A feasibility study by Pail et al. (Can GOCE help to improve temporal gravity field estimates? In: Ouwehand L (ed) Proceedings of the 4th International GOCE User Workshop, ESA Publication SP-696, 2011b) shows that GOCE (‘Gravity field and steady-state Ocean Circulation Explorer’) satellite gravity gradiometer (SGG) data, in combination with GPS-derived orbit data (satellite-to-satellite tracking: SST-hl), can be used to stabilize a bi-monthly GRACE (‘Gravity Recovery and Climate Experiment’) gravity field estimate and reduce its striping pattern. In this study, several monthly (and bi-monthly) combinations of GRACE with GOCE SGG and GOCE SST-hl data on the basis of normal equations are investigated. Our aim is to assess the role of the gradients alone in the combination, and whether one month of GOCE observations already provides sufficient data to have an impact on the combination. The estimation of clean and stable monthly GOCE SGG normal equations at high resolution (> d/o 150) is found to be difficult, and the SGG component alone does not add significant value to monthly and bi-monthly GRACE gravity fields. Comparisons of GRACE-only and combined monthly and bi-monthly solutions show that the striping pattern can only be reduced when using both GOCE observation types (SGG, SST-hl), and mainly between d/o 45 and 60.
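The combination "on the basis of normal equations" mentioned above follows the standard least-squares scheme; as a generic sketch (the relative weights w are hypothetical placeholders, not the study's calibrated values):

```latex
% Combined solution from GRACE, GOCE SGG and GOCE SST-hl normal equations.
% (N_i, b_i): normal matrix and right-hand side per observation type;
% w_i: relative weights (hypothetical placeholders).
\hat{x} = \left( N_{\mathrm{GRACE}} + w_{\mathrm{SGG}} N_{\mathrm{SGG}}
        + w_{\mathrm{SST}} N_{\mathrm{SST}} \right)^{-1}
        \left( b_{\mathrm{GRACE}} + w_{\mathrm{SGG}} b_{\mathrm{SGG}}
        + w_{\mathrm{SST}} b_{\mathrm{SST}} \right)
```

where the estimate collects the spherical harmonic coefficients up to the chosen maximum degree/order (d/o).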
Abstract:
The article proposes granular computing as a theoretical, formal and methodological basis for the newly emerging research field of human–data interaction (HDI). We argue that the ability to represent and reason with information granules is a prerequisite for data legibility. As such, it allows the research agenda of HDI to be extended to encompass the topic of collective intelligence amplification, which is seen as an opportunity afforded by today’s increasingly pervasive computing environments. As an example of collective intelligence amplification in HDI, we introduce a collaborative urban planning use case in a cognitive city environment and show how an iterative process of user input and human-oriented automated data processing can support collective decision making. As a basis for automated human-oriented data processing, we use the spatial granular calculus of granular geometry.
Abstract:
Various flavours of a new research field on (socio-)physical or personal analytics have emerged, with the goal of deriving semantically rich insights from people's low-level physical sensing combined with their (online) social interactions. In this paper, we argue for more comprehensive data sources, including environmental (e.g. weather, infrastructure) and application-specific data, to better capture the interactions between users and their context, in addition to those among users. To illustrate our proposed concept of synergistic user <-> context analytics, we first provide some example use cases. Then, we present our ongoing work towards a synergistic analytics platform: a testbed based on mobile crowdsensing and the Internet of Things (IoT), a data model for representing the different sources of data and their connections, and a prediction engine for analyzing the data and producing insights.
Abstract:
This paper proposes an automated three-dimensional (3D) lumbar intervertebral disc (IVD) segmentation strategy for Magnetic Resonance Imaging (MRI) data. Starting from two user-supplied landmarks, the geometrical parameters of all lumbar vertebral bodies and intervertebral discs are automatically extracted from a mid-sagittal slice using a graphical-model-based template matching approach. Based on the estimated two-dimensional (2D) geometrical parameters, a 3D variable-radius soft tube model of the lumbar spine column is built by model fitting to the 3D data volume. Taking the geometrical information from the 3D lumbar spine column as constraints and as segmentation initialization, the disc segmentation is achieved by a multi-kernel diffeomorphic registration between a 3D template of the disc and the observed MRI data. Experiments on 15 patient data sets showed the robustness and accuracy of the proposed algorithm.
Abstract:
A wide variety of spatial data collection efforts are ongoing throughout local, state and federal agencies, private firms and non-profit organizations. Each effort is established for a different purpose, but organizations and individuals often collect and maintain the same or similar information. The United States federal government has undertaken many initiatives, such as the National Spatial Data Infrastructure, the National Map and Geospatial One-Stop, to reduce duplicative spatial data collection and promote the coordinated use, sharing, and dissemination of spatial data nationwide. A key premise in most of these initiatives is that no national government will be able to gather and maintain more than a small percentage of the geographic data that users want. Thus, national initiatives typically depend on the cooperation of those already gathering spatial data and those using GIS to meet specific needs to help construct and maintain these spatial data infrastructures and geo-libraries for their nations (Onsrud 2001). Some of the impediments to widespread spatial data sharing are well known from directly asking GIS data producers why they are not currently involved in creating datasets in common or compatible formats, documenting their datasets in a standardized metadata format, or making their datasets more readily available to others through data clearinghouses or geo-libraries. The research described in this thesis addresses the impediments to wide-scale spatial data sharing faced by GIS data producers and explores a new conceptual data-sharing approach, the Public Commons for Geospatial Data, which supports user-friendly metadata creation, open access licenses, archival services, and documentation of the parent lineage of the contributors and value-adders of digital spatial data sets.