985 results for data elements
Abstract:
A workshop was convened to discuss best practices for the assessment of drug-induced liver injury (DILI) in clinical trials. In a breakout session, attendees discussed the data elements and standards needed to measure accurately the DILI risk associated with new therapeutic agents in clinical trials. There was agreement that achieving this goal requires the systematic acquisition of protocol-specified clinical measures and laboratory specimens from all study subjects. In addition, standard DILI terms that address the diverse clinical and pathologic signatures of DILI were considered essential. There was strong consensus that the clinical and laboratory analyses needed to evaluate cases of acute liver injury should be consistent with the US Food and Drug Administration (FDA) guidance on pre-marketing risk assessment of DILI in clinical trials issued in 2009. It was recommended that liver injury case review and management be guided by clinicians with hepatologic expertise. Of note, there was agreement that emerging DILI signals should prompt the systematic collection of candidate pharmacogenomic, proteomic and/or metabonomic biomarkers from all study subjects. The use of emerging standardized clinical terminology, case report forms (CRFs) and graphic tools for data review to enable harmonization across clinical trials was strongly encouraged. Many of the recommendations made in the breakout session align with those made in the parallel sessions on methodology to assess clinical liver safety data, causality assessment for suspected DILI, and liver safety assessment in special populations (hepatitis B, C, and oncology trials). Nonetheless, a few outstanding issues remain for future consideration.
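As an illustrative sketch only (not part of the workshop recommendations): the commonly cited Hy's Law screen discussed in connection with the 2009 FDA guidance combines aminotransferase and bilirubin elevations. The field names and upper limits of normal below are hypothetical.

```python
# Illustrative sketch only: flags a potential Hy's Law case from protocol lab values.
# Thresholds follow the commonly cited criteria (ALT/AST > 3x ULN with total
# bilirubin > 2x ULN); field names and ULN defaults are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class LiverPanel:
    alt: float                  # alanine aminotransferase, U/L
    ast: float                  # aspartate aminotransferase, U/L
    total_bilirubin: float      # mg/dL
    alt_uln: float = 40.0       # assumed upper limits of normal
    ast_uln: float = 40.0
    bilirubin_uln: float = 1.2


def potential_hys_law_case(p: LiverPanel) -> bool:
    """Return True when aminotransferase and bilirubin elevations co-occur."""
    transaminase_signal = p.alt > 3 * p.alt_uln or p.ast > 3 * p.ast_uln
    bilirubin_signal = p.total_bilirubin > 2 * p.bilirubin_uln
    return transaminase_signal and bilirubin_signal


if __name__ == "__main__":
    print(potential_hys_law_case(LiverPanel(alt=150, ast=60, total_bilirubin=2.8)))  # True
```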
Abstract:
This report evaluates the use of remotely sensed images in implementing the Iowa DOT linear referencing system (LRS), which is currently in the system architecture stage. The Iowa Department of Transportation is investing significant time and resources in creating the LRS. A significant portion of the implementation effort will be the creation of a datum, which includes geographically locating anchor points and then measuring anchor section distances between those anchor points. Currently, system architecture work and the evaluation of different data collection methods to establish the LRS datum are being performed for the DOT by an outside consulting team.
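For readers unfamiliar with LRS datums, here is a minimal sketch of anchor points and anchor sections as a data structure; the class and field names are hypothetical and are not the Iowa DOT schema.

```python
# Hypothetical sketch of an LRS datum: anchor points are geographically located;
# anchor sections carry a measured distance between two anchor points.
# Names and fields are illustrative, not the Iowa DOT data model.
from dataclasses import dataclass


@dataclass
class AnchorPoint:
    point_id: str
    latitude: float
    longitude: float
    description: str  # e.g. "centerline of intersection US 30 / IA 17"


@dataclass
class AnchorSection:
    section_id: str
    from_point: AnchorPoint
    to_point: AnchorPoint
    measured_distance_m: float  # field-measured length, not straight-line distance


def locate(section: AnchorSection, offset_m: float) -> tuple[str, float]:
    """Express an event as (anchor section, offset) -- the core LRS reference."""
    if not 0 <= offset_m <= section.measured_distance_m:
        raise ValueError("offset falls outside the anchor section")
    return (section.section_id, offset_m)
```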
Abstract:
This thesis describes the creation of a pipework data structure for design system integration. The work was carried out in a pulp and paper plant delivery company with its global engineering network operations in mind. A use case spanning process design to 3D pipework design is introduced, including the role of subcontracted engineering offices. A company data element list was gathered through key person interviews, and the results were processed into a pipework data element list. Inter-company co-operation took place within a standardization association, and a common standard for pipework data elements was found. As a result, an inter-company pipework data element list is introduced. Further use and development of the list, and its relations to design software vendors, are evaluated.
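A minimal sketch of what a single row of such a pipework data element list might look like; the attribute names are hypothetical and do not reproduce the company's or the standardization association's actual list.

```python
# Hypothetical pipework data element record; attribute names are illustrative
# and do not reproduce any company's or association's actual element list.
from dataclasses import dataclass, asdict


@dataclass
class PipeworkDataElement:
    line_id: str              # process line identifier from the P&ID
    nominal_diameter_mm: int
    pressure_class: str       # e.g. "PN16"
    material: str             # e.g. "EN 1.4404"
    insulation: bool
    design_temperature_c: float
    source_discipline: str    # e.g. "process design" vs. "3D pipework design"


if __name__ == "__main__":
    # Exchange between the process-design tool and the 3D design tool could then
    # be a simple serialization of these records.
    element = PipeworkDataElement("L-1001", 100, "PN16", "EN 1.4404", True, 180.0, "process design")
    print(asdict(element))
```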
Abstract:
In Information Visualization, adding and removing data elements can strongly impact the underlying visual space. We have developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout that uses pairwise similarity from grid neighbors, as defined in incBoard, to reposition elements on the visual space, free from constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together, while dissimilar neighbors are moved apart, it supports users in the identification of clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements, even for fully renewed sets. The algorithm considers relative positions for the initial placement of elements, and raw dissimilarity to fine-tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V. Thus, a data set of size N can be sequentially displayed in O(N) time, reaching O(N²) only if the complete set is simultaneously displayed.
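A toy sketch of the general incremental idea (not the published incBoard/incSpace algorithm): place each new element near its most similar already-placed neighbor, then nudge positions so that visual distances better reflect pairwise dissimilarity. Per-insertion cost depends only on the number of currently placed elements.

```python
# Toy sketch of incremental placement driven by pairwise dissimilarity.
# This illustrates the general idea only; it is not the published
# incBoard/incSpace algorithm.
import math
import random


def dissimilarity(a, b):
    """Euclidean distance in feature space (stand-in for any dissimilarity metric)."""
    return math.dist(a, b)


def place_incrementally(features, jitter=0.1, step=0.05, refine_iters=20):
    """Assign 2D positions one element at a time.

    Each new element starts next to its most similar already-placed element;
    a few refinement steps then pull similar pairs together and push dissimilar
    pairs apart. Cost per insertion depends only on the elements placed so far.
    """
    positions = []
    for i, f in enumerate(features):
        if not positions:
            positions.append((0.0, 0.0))
            continue
        # Start near the most similar placed element.
        nearest = min(range(len(positions)), key=lambda j: dissimilarity(f, features[j]))
        x, y = positions[nearest]
        positions.append((x + random.uniform(-jitter, jitter),
                          y + random.uniform(-jitter, jitter)))
        # Crude force-based refinement against already-placed elements.
        for _ in range(refine_iters):
            xi, yi = positions[i]
            fx = fy = 0.0
            for j in range(i):
                xj, yj = positions[j]
                d_vis = math.hypot(xi - xj, yi - yj) or 1e-9
                d_data = dissimilarity(f, features[j])
                force = (d_data - d_vis) * step  # expand if too close, contract if too far
                fx += (xi - xj) / d_vis * force
                fy += (yi - yj) / d_vis * force
            positions[i] = (xi + fx, yi + fy)
    return positions


if __name__ == "__main__":
    data = [(random.random(), random.random()) for _ in range(50)]
    print(place_incrementally(data)[:3])
```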
Abstract:
Clinical Research Data Quality Literature Review and Pooled Analysis. We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 errors per 10,000 fields to 5,019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70–5,019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers. Error rates for data processed with single entry in the presence of on-screen checks were comparable to double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis. Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality, which builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractors' Perceptions of Factors Impacting the Accuracy of Abstracted Data. Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses. Factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess abstractors' perceptions of those factors. The Delphi process identified 9 factors that were not found in the literature and differed from the literature by 5 factors in the top 25%. The Delphi results refuted seven factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms. Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates. Distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support employed in a sample of clinical research data collection forms. We show that the cognitive load required for abstraction in 61% of the sampled data elements was high, exceedingly so in 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping or calculation. The representational analysis used here can be used to identify data elements with high cognitive demands.
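For concreteness, the error rates quoted above are simple field-level proportions scaled to 10,000 fields; the counts in the sketch below are illustrative, not taken from the pooled analysis.

```python
# Illustrative arithmetic for field-level error rates, expressed per 10,000 fields.
def errors_per_10k(error_count: int, fields_inspected: int) -> float:
    """Scale an observed error proportion to the 'errors per 10,000 fields' convention."""
    return 10_000 * error_count / fields_inspected


if __name__ == "__main__":
    # e.g. 37 discrepant fields found while re-abstracting 5,200 fields (hypothetical counts)
    print(round(errors_per_10k(37, 5_200)))  # ~71 errors per 10,000 fields
```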
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-04
Abstract:
Visual cluster analysis provides valuable tools that help analysts understand large data sets in terms of representative clusters and the relationships among them. Often, the found clusters are to be understood in the context of associated categorical, numerical or textual metadata given for the data elements. While often not part of the clustering process, such metadata play an important role and need to be considered during interactive cluster exploration. Traditionally, linked views allow analysts to relate (or, loosely speaking, correlate) clusters with metadata or other properties of the underlying cluster data. Manually inspecting the distribution of metadata for each cluster in a linked-view approach is tedious, especially for large data sets, where a large search problem arises. Fully interactive search for potentially useful or interesting cluster-to-metadata relationships can be a cumbersome and long process. To remedy this problem, we propose a novel approach for guiding users in discovering interesting relationships between clusters and associated metadata. Its goal is to guide the analyst through the potentially huge search space. We focus on metadata of categorical type, which can be summarized for a cluster in the form of a histogram. We start from a given visual cluster representation and compute measures of interestingness defined on the distribution of metadata categories for the clusters. These measures are used to automatically score and rank the clusters for potential interestingness with regard to the distribution of categorical metadata. Identified interesting relationships are highlighted in the visual cluster representation for easy inspection by the user. We present a system implementing an encompassing, yet extensible, set of interestingness scores for categorical metadata, which can also be extended to numerical metadata. Appropriate visual representations are provided for showing the visual correlations as well as the calculated ranking scores. Focusing on clusters of time series data, we test our approach on a large real-world data set of time-oriented scientific research data, demonstrating how specific interesting views are automatically identified, supporting the analyst in discovering interesting and visually understandable relationships.
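One plausible instance of such an interestingness measure, assumed here purely for illustration (it is not claimed to be the paper's score set), is one minus the normalized entropy of a cluster's category histogram, which ranks skewed metadata distributions above uniform ones.

```python
# Illustrative interestingness score for categorical metadata per cluster:
# 1 - normalized Shannon entropy of the category histogram. Values near 1 mean
# the cluster is dominated by few categories (potentially interesting); values
# near 0 mean categories are evenly spread. This is one plausible score,
# not the measure set described in the paper.
import math
from collections import Counter


def interestingness(categories: list[str]) -> float:
    counts = Counter(categories)
    n = sum(counts.values())
    k = len(counts)
    if k <= 1:
        return 1.0  # a single category is maximally skewed
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return 1.0 - entropy / math.log(k)


def rank_clusters(cluster_metadata: dict[str, list[str]]) -> list[tuple[str, float]]:
    """Rank clusters by descending interestingness of their metadata histograms."""
    scores = {cid: interestingness(cats) for cid, cats in cluster_metadata.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    print(rank_clusters({
        "cluster A": ["lab X"] * 18 + ["lab Y"] * 2,   # skewed -> high score
        "cluster B": ["lab X", "lab Y", "lab Z"] * 7,  # uniform -> low score
    }))
```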
Abstract:
Advances in communication, navigation and imaging technologies are expected to fundamentally change the methods currently used to collect data. Electronic data interchange strategies will also minimize data handling and automatically update files at the point of capture. This report summarizes the outcome of using a multi-camera platform as a method to collect roadway inventory data. It defines basic system requirements as expressed by the users who applied these techniques, and examines how the application of the technology met those needs. A sign inventory case study was used to determine the advantages of creating and maintaining such a database, which provides the capability to monitor performance criteria for a Safety Management System. The project identified that at least 75 percent of the data elements needed for a sign inventory can be gathered by viewing a high-resolution image.
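A hypothetical sketch of the kind of sign inventory record involved, split into attributes that imagery can typically supply and those that usually need field data; the field names are assumptions, not the report's element list.

```python
# Hypothetical sign inventory record. The split into image-derivable and
# field-only attributes is illustrative; field names are not the report's list.
from dataclasses import dataclass, field


@dataclass
class SignRecord:
    # elements typically recoverable from high-resolution roadway imagery
    mutcd_code: str            # e.g. "R1-1" for a stop sign
    legend: str
    route: str
    approximate_station: float
    mounting: str              # post, overhead, ...
    # elements usually needing a field visit or other records
    install_date: str | None = None
    retroreflectivity: float | None = None
    notes: list[str] = field(default_factory=list)
```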
Abstract:
The purpose of this thesis was to investigate how to create and improve category purchasing visibility for corporate procurement by utilizing financial information. The thesis was part of the global category-driven spend analysis project of Konecranes Plc. To build a general understanding of category-driven corporate spend visibility, the IT architecture and the purchasing parameters needed for spend analysis are described. In the case part of the study, three manufacturing plants from the Konecranes Standard Lifting, Heavy Lifting and Services business areas were examined. This included investigating the operative IT system architecture and the processes needed for building corporate spend visibility. The key finding of this study is the identification of the processes needed for gathering purchasing data elements while creating corporate spend visibility in a fragmented source system environment. As an outcome of the study, a roadmap presenting further development areas was introduced for Konecranes.
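A minimal sketch of the category-driven aggregation step across fragmented source systems; the system names, categories and amounts are hypothetical and are not Konecranes data.

```python
# Minimal sketch of category-driven spend aggregation across fragmented
# source systems. System names, categories, and amounts are hypothetical.
from collections import defaultdict


def aggregate_spend(purchase_lines):
    """Sum spend per procurement category across all source systems."""
    by_category = defaultdict(float)
    for line in purchase_lines:
        by_category[line["category"]] += line["amount_eur"]
    return dict(by_category)


if __name__ == "__main__":
    lines = [
        {"source_system": "ERP plant 1", "category": "steel structures", "amount_eur": 120_000.0},
        {"source_system": "ERP plant 2", "category": "steel structures", "amount_eur": 95_500.0},
        {"source_system": "legacy system", "category": "electrical components", "amount_eur": 40_250.0},
    ]
    print(aggregate_spend(lines))
```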
Abstract:
Traumatic brain injury is the silent epidemic of our time, generating healthcare costs in countries such as the United States of close to 60 billion dollars annually, plus roughly 400 billion for the rehabilitation of those left disabled. The mainstay of medical management of moderate or severe traumatic brain injury is osmotherapy, principally with agents such as mannitol and hypertonic saline solutions. A review of 14 databases was performed, initially retrieving 4,657,754 articles; after exhaustive screening, 40 articles related to the management of intracranial hypertension and osmotic therapy remained. Results: the studies compared showed great variability, without homogeneous statistical analyses; this lack of rigor prevented data pooling and comparison across studies, so a meta-analysis could not be performed and a systematic review of the literature was carried out instead. Three main findings emerged: first, the limited rigor with which the clinical studies were conducted; second, that considerably more research is still needed, chiefly multicenter randomized clinical trials able to provide the solid evidence and scientific validity that is required, despite the clear evidence from clinical practice; and third, the safety of this therapy, with few complications reported for hypertonic saline solutions.