997 results for data elements


Relevance:

100.00%

Publisher:

Abstract:

Cover title.

Relevance:

100.00%

Publisher:

Abstract:

Cover title.

Relevance:

100.00%

Publisher:

Abstract:

Includes bibliographical references.

Relevance:

100.00%

Publisher:

Abstract:

This report evaluates the use of remotely sensed images in implementing the Iowa DOT linear referencing system (LRS), which is currently in the system architecture stage. The Iowa Department of Transportation is investing a significant amount of time and resources in the creation of an LRS. A significant portion of the implementation effort will be the creation of a datum, which involves geographically locating anchor points and then measuring the anchor section distances between those anchor points. Currently, system architecture and the evaluation of different data collection methods for establishing the LRS datum are being performed for the DOT by an outside consulting team.
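
To make the datum concept concrete, here is a minimal sketch of how anchor points and anchor sections might be represented; the class and field names are illustrative assumptions, not structures taken from the Iowa DOT report.

```python
from dataclasses import dataclass

@dataclass
class AnchorPoint:
    """A geographically located reference point in the LRS datum (illustrative)."""
    point_id: str
    latitude: float
    longitude: float

@dataclass
class AnchorSection:
    """A stretch of roadway between two anchor points (illustrative)."""
    section_id: str
    from_point: AnchorPoint
    to_point: AnchorPoint
    measured_length_m: float  # anchor section distance measured between the anchor points

def locate(section: AnchorSection, offset_m: float) -> tuple:
    """Validate an offset along an anchor section and return a simple linear reference."""
    if not 0.0 <= offset_m <= section.measured_length_m:
        raise ValueError("offset falls outside the anchor section")
    return (section.section_id, offset_m)
```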

Relevance:

70.00%

Publisher:

Abstract:

This study was undertaken by UKOLN on behalf of the Joint Information Systems Committee (JISC) in the period April to September 2008. Application profiles are metadata schemata consisting of data elements drawn from one or more namespaces, optimized for a particular local application. They offer a way for particular communities to base the interoperability specifications they create and use for their digital material on established open standards. This offers the potential for digital materials to be accessed, used and curated effectively both within and beyond the communities in which they were created. The JISC recognized the need to undertake a scoping study to investigate metadata application profile requirements for scientific data in relation to digital repositories, specifically concerning descriptive metadata to support resource discovery and other functions such as preservation. This followed on from the development of the Scholarly Works Application Profile (SWAP), undertaken within the JISC Digital Repositories Programme and led by Andy Powell (Eduserv Foundation) and Julie Allinson (RRT UKOLN) on behalf of the JISC.

Aims and Objectives: 1. To assess whether a single metadata AP for research data, or a small number thereof, would improve resource discovery or discovery-to-delivery in any useful or significant way. 2. If so, then to: a. assess whether the development of such AP(s) is practical and, if so, how much effort it would take; b. scope a community uptake strategy that is likely to be successful, identifying the main barriers and key stakeholders. 3. Otherwise, to investigate how best to improve cross-discipline, cross-community discovery-to-delivery for research data, and make recommendations to the JISC and others as appropriate.

Approach: The Study used a broad conception of what constitutes scientific data, namely data gathered, collated, structured and analysed using a recognizably scientific method, with a bias towards quantitative methods. The approach taken was to map out the landscape of existing data centres, repositories and associated projects, and to conduct a survey of the discovery-to-delivery metadata they use or have defined, alongside any insights they have gained from working with this metadata. This was followed up by a series of unstructured interviews discussing use cases for a Scientific Data Application Profile and how widely a single profile might be applied. On the latter point, matters of granularity, the experimental/measurement contrast, the quantitative/qualitative contrast, the raw/derived data contrast, and the homogeneous/heterogeneous data collection contrast were discussed. The Study report was loosely structured according to the Singapore Framework for Dublin Core Application Profiles and in turn considered: the possible use cases for a Scientific Data Application Profile; existing domain models that could either be used or adapted for use within such a profile; and a comparison of existing metadata profiles and standards to identify candidate elements for inclusion in the description set profile for scientific data. The report also considered how the application profile might be implemented, its relationship to other application profiles, the alternatives to constructing a Scientific Data Application Profile, the development effort required, and what could be done to encourage uptake in the community. The conclusions of the Study were validated through a reference group of stakeholders.
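
As a rough illustration of what "data elements drawn from one or more namespaces" can look like in practice, the sketch below assembles a toy description set that mixes Dublin Core terms with a hypothetical local namespace. The element names, obligations and the validate helper are invented for illustration and are not part of SWAP or of the profile discussed in the Study.

```python
# A toy application profile: each local element is bound to a term from an
# established namespace (here Dublin Core) or a hypothetical local one.
DC = "http://purl.org/dc/terms/"
LOCAL = "http://example.org/scidata/terms/"   # hypothetical project namespace

application_profile = {
    "title":         {"uri": DC + "title",            "obligation": "mandatory"},
    "creator":       {"uri": DC + "creator",          "obligation": "mandatory"},
    "dateCollected": {"uri": LOCAL + "dateCollected", "obligation": "optional"},
    "instrument":    {"uri": LOCAL + "instrument",    "obligation": "recommended"},
}

def validate(record: dict) -> list:
    """Report mandatory elements missing from a metadata record."""
    return [name for name, spec in application_profile.items()
            if spec["obligation"] == "mandatory" and name not in record]

print(validate({"title": "Ocean temperature series"}))  # ['creator']
```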

Relevance:

70.00%

Publisher:

Abstract:

In Information Visualization, adding and removing data elements can strongly impact the underlying visual space. We have developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout that uses pairwise similarity from grid neighbors, as defined in incBoard, to reposition elements on the visual space, free from the constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together while dissimilar neighbors are moved apart, the layout supports users in identifying clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements, even for fully renewed sets. The algorithm considers relative positions for the initial placement of elements and raw dissimilarity to fine-tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V. Thus, a data set of size N can be sequentially displayed in O(N) time, reaching O(N²) only if the complete set is displayed simultaneously.
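
The following sketch conveys the general flavour of repositioning an element using pairwise dissimilarity to its grid neighbours; it is a simplified illustration under an assumed weighting scheme, not the incBoard/incSpace algorithm itself.

```python
import numpy as np

def reposition(element_pos, neighbor_pos, dissimilarities):
    """Pull an element toward its grid neighbours, weighting similar ones more.

    Sketch of the general idea only: weights decrease with dissimilarity,
    and the update is damped so the layout changes incrementally.
    """
    neighbor_pos = np.asarray(neighbor_pos, dtype=float)
    d = np.asarray(dissimilarities, dtype=float)
    weights = 1.0 / (1.0 + d)            # similar neighbours weigh more
    weights /= weights.sum()
    target = (weights[:, None] * neighbor_pos).sum(axis=0)
    return 0.5 * np.asarray(element_pos, dtype=float) + 0.5 * target

# Example: three grid neighbours with increasing dissimilarity.
print(reposition([0.0, 0.0],
                 [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]],
                 [0.1, 0.5, 3.0]))
```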

Relevance:

70.00%

Publisher:

Abstract:

Clinical Research Data Quality Literature Review and Pooled Analysis. We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 errors per 10,000 fields to 5,019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70–5,019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers. Error rates for data processed with single entry in the presence of on-screen checks were comparable to those for double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis. Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality, which builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractors' Perceptions of Factors Impacting the Accuracy of Abstracted Data. Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses. Factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess abstractors' perceptions of these factors. The Delphi process identified 9 factors that were not found in the literature and differed from the literature on 5 factors in the top 25%. The Delphi results refuted 7 factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms. Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates. Distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support employed in a sample of clinical research data collection forms. We show that the cognitive load required for abstraction in 61% of the sampled data elements was high, exceedingly so in 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping or calculation. The representational analysis used here can be applied to identify data elements with high cognitive demands.
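
For readers unfamiliar with the "errors per 10,000 fields" convention, the short sketch below shows how such a rate, and a pooled rate across studies, would be computed; the error and field counts used are illustrative, not figures from the pooled analysis.

```python
def errors_per_10k(n_errors: int, n_fields: int) -> float:
    """Express an observed error count as errors per 10,000 fields."""
    return 10_000 * n_errors / n_fields

# Illustrative study-level counts only (errors, fields inspected):
studies = [(37, 12_500), (210, 48_000), (9, 6_000)]

pooled_errors = sum(e for e, _ in studies)
pooled_fields = sum(f for _, f in studies)
print(errors_per_10k(pooled_errors, pooled_fields))  # pooled rate across studies
```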

Relevance:

70.00%

Publisher:

Abstract:

Visual cluster analysis provides valuable tools that help analysts understand large data sets in terms of representative clusters and the relationships among them. Often, the clusters found are to be understood in the context of the categorical, numerical or textual metadata associated with the data elements. While often not part of the clustering process, such metadata play an important role and need to be considered during interactive cluster exploration. Traditionally, linked views allow analysts to relate (or, loosely speaking, correlate) clusters with metadata or other properties of the underlying cluster data. Manually inspecting the distribution of metadata for each cluster in a linked-view approach is tedious, especially for large data sets, where a large search problem arises. Fully interactive search for potentially useful or interesting cluster-to-metadata relationships may constitute a cumbersome and lengthy process. To remedy this problem, we propose a novel approach for guiding users in discovering interesting relationships between clusters and associated metadata. Its goal is to guide the analyst through the potentially huge search space. In this work we focus on metadata of categorical type, which can be summarized for a cluster in the form of a histogram. We start from a given visual cluster representation and compute measures of interestingness defined on the distribution of metadata categories for the clusters. These measures are used to automatically score and rank the clusters for potential interestingness regarding the distribution of categorical metadata. Identified interesting relationships are highlighted in the visual cluster representation for easy inspection by the user. We present a system implementing an encompassing, yet extensible, set of interestingness scores for categorical metadata, which can also be extended to numerical metadata. Appropriate visual representations are provided for showing the visual correlations as well as the calculated ranking scores. Focusing on clusters of time series data, we test our approach on a large real-world data set of time-oriented scientific research data, demonstrating how specific interesting views are automatically identified, supporting the analyst in discovering interesting and visually understandable relationships.
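
One plausible way to score and rank clusters by the skew of their categorical metadata, in the spirit of the interestingness measures described above, is a normalized-entropy score over the per-cluster category histogram; the measure and the toy clusters below are illustrative assumptions, not the paper's exact scoring functions.

```python
import math
from collections import Counter

def interestingness(categories: list) -> float:
    """Score how skewed a cluster's categorical metadata is.

    0 means a uniform category distribution, 1 means a single dominating
    category. Illustrative measure: inverted normalized entropy.
    """
    counts = Counter(categories)
    n = sum(counts.values())
    if len(counts) < 2:
        return 1.0
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return 1.0 - entropy / math.log(len(counts))

clusters = {"A": ["lab", "lab", "lab", "field"],
            "B": ["lab", "field", "survey", "simulation"]}
ranking = sorted(clusters, key=lambda k: interestingness(clusters[k]), reverse=True)
print(ranking)  # cluster A, dominated by one category, ranks first
```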

Relevance:

70.00%

Publisher:

Abstract:

Advances in communication, navigation and imaging technologies are expected to fundamentally change the methods currently used to collect data. Electronic data interchange strategies will also minimize data handling and automatically update files at the point of capture. This report summarizes the outcome of using a multi-camera platform as a method to collect roadway inventory data. It defines basic system requirements as expressed by users who applied these techniques, and examines how the application of the technology met those needs. A sign inventory case study was used to determine the advantages of creating and maintaining such a database, which also provides the capability to monitor performance criteria for a Safety Management System. The project identified that at least 75 percent of the data elements needed for a sign inventory can be gathered by viewing a high-resolution image.

Relevance:

60.00%

Publisher:

Abstract:

From 2014, QUT will be adopting a life-cycle approach to Course Quality Assurance, informed by a wider and richer range of historic, 'live' and 'predictive' course data. Key data elements continue to be grouped according to the three broad categories – Viability, Quality of Learning Environment and Outcomes – and are further supported with analytic data presented within tables and charts. Course Quality Assurance and this Consolidated Courses Performance Report illuminate aspects of courses from a data evidence base, highlighting the strengths and weaknesses of our courses. It provides the framework and tools to achieve QUT's commitment to excellent graduate outcomes by drawing attention and focus to the quality of our courses and by providing a structured approach for bringing about change. Our portfolio of courses forms a vital part of QUT, generating almost $600 million in 2013 alone. Real world courses are fundamental to the strength of the Institution; they are what our many thousands of current and future students are drawn to and invest their time and aspirations in. As we move through a period of regulatory and deregulatory uncertainty, there is a greater need for QUT to monitor and respond to the needs and expectations of our students. The life-cycle approach, with its rich and predictive data, provides the best source of evidence we have had to date to assure the quality of our courses and their relevance in a rapidly changing higher education context.

Relevance:

60.00%

Publisher:

Abstract:

A number of problems in network operations and engineering call for new methods of traffic analysis. While most existing traffic analysis methods are fundamentally temporal, there is a clear need for the analysis of traffic across multiple network links — that is, for spatial traffic analysis. In this paper we give examples of problems that can be addressed via spatial traffic analysis. We then propose a formal approach to spatial traffic analysis based on the wavelet transform. Our approach (graph wavelets) generalizes the traditional wavelet transform so that it can be applied to data elements connected via an arbitrary graph topology. We explore the necessary and desirable properties of this approach and consider some of its possible realizations. We then apply graph wavelets to measurements from an operating network. Our results show that graph wavelets are very useful for our motivating problems; for example, they can be used to form highly summarized views of an entire network's traffic load, to gain insight into a network's global traffic response to a link failure, and to localize the extent of a failure event within the network.
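
A crude way to convey the flavour of wavelet-style analysis on graph-structured traffic data is to compare neighbourhood averages around a node at successive hop scales; the sketch below is a simplified stand-in under that assumption, not the graph wavelet construction defined in the paper.

```python
def graph_wavelet(values, adjacency, node, scale):
    """Toy graph-wavelet-style coefficient at `node`.

    Difference between the mean value within `scale` hops and the mean within
    `scale - 1` hops; illustrative only, not the paper's transform.
    """
    def hop_ball(radius):
        frontier, ball = {node}, {node}
        for _ in range(radius):
            frontier = {m for n in frontier for m in adjacency[n]} - ball
            ball |= frontier
        return ball

    outer, inner = hop_ball(scale), hop_ball(scale - 1)
    mean = lambda nodes: sum(values[n] for n in nodes) / len(nodes)
    return mean(outer) - mean(inner)

# Toy 4-node path graph with a per-node traffic load.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
load = {0: 1.0, 1: 5.0, 2: 1.0, 3: 1.0}
print(graph_wavelet(load, adj, 0, 1))  # positive: the 1-hop neighbourhood is busier than node 0
```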

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: This series of guidance documents on cough, which will be published over time, is a hybrid of two processes: (1) evidence-based guidelines and (2) trustworthy consensus statements based on a robust and transparent process.

METHODS: The CHEST Guidelines Oversight Committee selected a nonconflicted Panel Chair and jointly assembled an international panel of experts in each clinical area with few, if any, conflicts of interest. PICO (population, intervention, comparator, outcome)-based key questions and parameters of eligibility were developed for each clinical topic to inform the comprehensive literature search. Existing guidelines, systematic reviews, and primary studies were assessed for relevance and quality. Data elements were extracted into evidence tables and synthesized to provide summary statistics. These, in turn, are presented to support the evidence-based graded recommendations. A highly structured consensus-based Delphi approach was used to provide expert advice on all guidance statements. Transparency of process was documented.

RESULTS: Evidence-based guideline recommendations and consensus-based suggestions were carefully crafted to provide direction to health-care providers and investigators who treat and/or study patients with cough. Manuscripts and tables summarize the evidence in each clinical area supporting the recommendations and suggestions.

CONCLUSIONS: The resulting guidance statements are based on a rigorous methodology and transparency of process. Unless otherwise stated, the recommendations and suggestions meet the guidelines for trustworthiness developed by the Institute of Medicine and can be applied with confidence by physicians, nurses, other health-care providers, investigators, and patients.

Relevance:

60.00%

Publisher:

Abstract:

Traumatic brain injury is the silent epidemic of our time, generating health-care costs in countries such as the United States of close to 60 billion dollars annually, and around 400 billion in rehabilitation of the disabled. The mainstay of medical management of moderate or severe traumatic brain injury is osmotherapy, mainly with agents such as mannitol and hypertonic saline solutions. A review of 14 databases was carried out, initially retrieving 4,657,754 articles; after an exhaustive analysis, 40 articles related to the management of intracranial hypertension and osmotic therapy remained. Results: The studies compared showed great variability; the lack of homogeneity in the statistical analyses and their limited rigor prevented pooled data collection and comparison across studies, so a meta-analysis could not be performed and a systematic review of the literature was undertaken instead. Three main findings emerged: first, the limited rigor with which clinical studies are conducted; second, that much more research is still needed, mainly multicentre randomized clinical trials capable of providing the solid evidence and scientific validity required, despite the clear evidence from clinical practice; and third, the safety of hypertonic saline solutions, with few reported complications.

Relevance:

60.00%

Publisher:

Abstract:

Clustering with the agglomerative Information Bottleneck (aIB) algorithm suffers from a sub-optimality problem: it cannot guarantee that as much relevant information as possible is preserved. To handle this problem, we introduce a density connectivity chain, by which we consider not only the information between two data elements but also the information among the neighbors of a data element. Based on this idea, we propose DCIB, a Density Connectivity Information Bottleneck algorithm that applies the Information Bottleneck method to quantify the relevant information during the clustering procedure. As a hierarchical algorithm, DCIB produces a pruned clustering tree structure and obtains clustering results of different sizes in a single execution. Experimental results on document clustering indicate that the DCIB algorithm preserves more relevant information and achieves higher precision than the aIB algorithm.
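
For context, the agglomerative Information Bottleneck merge step is commonly formulated as merging the pair of clusters whose fusion loses the least relevant information, measured by a weighted Jensen-Shannon divergence between their relevance distributions. The sketch below illustrates that standard merge cost; the DCIB density connectivity chain itself is not reproduced here, and the input numbers are purely illustrative.

```python
import numpy as np

def merge_cost(p_i, p_j, py_given_i, py_given_j):
    """Relevant-information loss from merging clusters i and j (standard aIB form).

    Cost = (p_i + p_j) * JS divergence between p(Y|i) and p(Y|j),
    with mixture weights proportional to the cluster probabilities.
    """
    pi, pj = p_i / (p_i + p_j), p_j / (p_i + p_j)
    py_i = np.asarray(py_given_i, dtype=float)
    py_j = np.asarray(py_given_j, dtype=float)
    m = pi * py_i + pj * py_j                     # mixture distribution
    kl = lambda p, q: float(np.sum(np.where(p > 0, p * np.log(p / q), 0.0)))
    js = pi * kl(py_i, m) + pj * kl(py_j, m)      # Jensen-Shannon divergence
    return (p_i + p_j) * js

# Two candidate clusters with word distributions over a 3-term vocabulary.
print(merge_cost(0.4, 0.6, [0.7, 0.2, 0.1], [0.1, 0.2, 0.7]))
```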