946 results for injury data quality


Relevance:

90.00%

Publisher:

Abstract:

Nowadays, with the expansion of reference station networks, several positioning techniques have been developed and/or improved. Among them, the VRS (Virtual Reference Station) concept has been widely used. The goal of this paper is to generate VRS data with a modified technique. In the proposed methodology the DD (double-difference) ambiguities are not computed; the network correction terms are obtained using only atmospheric (ionospheric and tropospheric) models. The experiments used data from five reference stations of the GPS Active Network of the West of São Paulo State and one extra station. The VRS data quality was evaluated with three different strategies: PPP (Precise Point Positioning), Relative Positioning in static and kinematic modes, and DGPS (Differential GPS). Furthermore, the VRS data were generated at the position of a real reference station. The results provided by the VRS data agree quite well with those of the real data files.
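
As a rough illustration of the VRS concept summarized above, and not the authors' implementation, the following Python sketch generates a virtual pseudorange by shifting a real master-station observation to the virtual position and adding modeled differential atmospheric terms, with no double-difference ambiguity resolution; all coordinates and correction values are invented for the example.

```python
# Minimal sketch (illustrative only): a VRS pseudorange built from a real
# master-station observation plus a geometric displacement term and modeled
# differential ionospheric/tropospheric corrections, with no DD ambiguities.
import numpy as np

def geometric_range(receiver_xyz, satellite_xyz):
    """Euclidean distance between receiver and satellite (metres)."""
    return np.linalg.norm(np.asarray(satellite_xyz) - np.asarray(receiver_xyz))

def vrs_pseudorange(p_master, master_xyz, vrs_xyz, sat_xyz,
                    d_iono_model, d_tropo_model):
    """
    p_master      : pseudorange observed at the real (master) station (m)
    master_xyz    : ECEF coordinates of the master station (m)
    vrs_xyz       : ECEF coordinates of the virtual station (m)
    sat_xyz       : ECEF coordinates of the satellite (m)
    d_iono_model  : modeled ionospheric delay difference, VRS minus master (m)
    d_tropo_model : modeled tropospheric delay difference, VRS minus master (m)
    """
    # Geometric displacement: move the observation from the master position
    # to the virtual position.
    d_geom = geometric_range(vrs_xyz, sat_xyz) - geometric_range(master_xyz, sat_xyz)
    # Atmospheric terms come from network models only.
    return p_master + d_geom + d_iono_model + d_tropo_model

# Illustrative numbers only.
sat = (15600e3, 7540e3, 20140e3)
master = (3687624.0, -4620818.0, -2386880.0)
vrs = (3687900.0, -4620500.0, -2386700.0)
print(vrs_pseudorange(22_300_000.0, master, vrs, sat, 0.12, -0.05))
```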

Relevance:

90.00%

Publisher:

Abstract:

In geophysics and seismology, raw data need to be processed to generate useful information that can be turned into knowledge by researchers. The number of sensors acquiring raw data is increasing rapidly. Without good data management systems, more time can be spent querying and preparing datasets for analysis than acquiring the raw data. Also, a lot of good-quality data acquired at great effort can be lost forever if they are not correctly stored. Local and international cooperation will probably be reduced, and much data will never become scientific knowledge. For this reason, the Seismological Laboratory of the Institute of Astronomy, Geophysics and Atmospheric Sciences at the University of São Paulo (IAG-USP) has concentrated its efforts on its data management system. This report describes the efforts of the IAG-USP to set up a seismology data management system that facilitates local and international cooperation. © 2011 by the Istituto Nazionale di Geofisica e Vulcanologia. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

Wireless Sensor Networks (WSNs) can be used to monitor hazardous and inaccessible areas. In these situations, the power supply (e.g. battery) of each node cannot be easily replaced. One solution to deal with the limited capacity of current power supplies is to deploy a large number of sensor nodes, since the lifetime and dependability of the network increase through cooperation among nodes. Applications on WSNs may also have other concerns, such as meeting temporal deadlines on message transmissions and maximizing the quality of information. Data fusion is a well-known technique that can be useful both for enhancing data quality and for maximizing WSN lifetime. In this paper, we propose an approach that allows the implementation of parallel data fusion techniques in IEEE 802.15.4 networks. One of the main advantages of the proposed approach is that it enables a trade-off between different user-defined metrics through the use of a genetic machine learning algorithm. Simulations and field experiments performed in different communication scenarios show significant improvements when compared with, for instance, the Gur Game approach or the implementation of conventional periodic communication techniques over IEEE 802.15.4 networks. © 2013 Elsevier B.V. All rights reserved.
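
The following is a minimal, self-contained sketch of the general idea of a genetic algorithm trading off user-defined metrics, not the algorithm proposed in the paper; the energy and quality models, the weights, and all parameters are placeholder assumptions.

```python
# Minimal sketch: a genetic search over per-node reporting probabilities that
# trades off two user-defined metrics -- an energy-cost proxy for lifetime and
# an information-quality proxy -- via a weighted fitness function.
import random

N_NODES = 16
POP, GENS, MUT = 30, 40, 0.1
w_energy, w_quality = 0.5, 0.5   # user-defined trade-off weights (assumed)

def fitness(probs):
    energy_cost = sum(probs) / N_NODES                            # more reporting -> more energy
    quality = 1.0 - 1.0 / (1.0 + 10.0 * sum(probs) / N_NODES)     # diminishing returns
    return w_quality * quality - w_energy * energy_cost

def mutate(ind):
    return [min(1.0, max(0.0, p + random.gauss(0, 0.1))) if random.random() < MUT else p
            for p in ind]

def crossover(a, b):
    cut = random.randrange(1, N_NODES)
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(N_NODES)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 4))
```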

Relevance:

90.00%

Publisher:

Abstract:

A. Introduction -- B. Summary of evaluation.

Relevance:

90.00%

Publisher:

Abstract:

The Primary Care Information System (SIAB) concentrates basic healthcare information from all regions of Brazil. The information is collected by primary care teams through a paper-based procedure that degrades the quality of the information provided to the healthcare authorities and slows down the decision-making process. To overcome these problems we propose a new data-gathering application, to be used by primary care teams for collecting families' data, that runs on a mobile device connected to a 3G network and equipped with a GPS. A prototype was developed in which a digital version of one SIAB form is made available on the mobile device. The prototype was tested in a basic healthcare unit located in a suburb of São Paulo. The results obtained so far show that the proposed process is a better alternative for data collection in primary care, both in terms of data quality and of the shorter time needed to deliver the data to the healthcare authorities.
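
A minimal sketch of how a collected household record might be packaged on the mobile device, with GPS coordinates and a timestamp, before transmission over the 3G link; the field names are hypothetical and do not reproduce the actual SIAB form.

```python
# Minimal sketch (hypothetical field names): packaging one household record,
# including the device GPS position and a collection timestamp, as JSON for
# transmission to the central server.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class HouseholdRecord:
    family_id: str
    residents: int
    water_supply: str          # e.g. "public network", "well"
    latitude: float            # from the device GPS
    longitude: float
    collected_at: str          # ISO 8601 timestamp

record = HouseholdRecord(
    family_id="F-0001",
    residents=4,
    water_supply="public network",
    latitude=-23.5505,
    longitude=-46.6333,
    collected_at=datetime.now(timezone.utc).isoformat(),
)

payload = json.dumps(asdict(record))   # body of the upload over the 3G link
print(payload)
```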

Relevance:

90.00%

Publisher:

Abstract:

Human movement analysis (HMA) aims to measure the ability of a subject to stand or to walk. In the field of HMA, tests are performed daily in research laboratories, hospitals and clinics to diagnose a disease, distinguish between disease entities, monitor the progress of a treatment and predict the outcome of an intervention [Brand and Crowninshield, 1981; Brand, 1987; Baker, 2006]. To achieve these purposes, clinicians and researchers use measurement devices such as force platforms, stereophotogrammetric systems, accelerometers, baropodometric insoles, etc. This thesis focuses on the force platform (FP) and in particular on the quality assessment of FP data. The principal objective of our work was the design and the experimental validation of a portable system for the in situ calibration of FPs. The thesis is structured as follows:
Chapter 1. Description of the physical principles underlying the functioning of an FP and how these principles are used to build force transducers, such as strain gauges and piezoelectric transducers; description of the two categories of FPs (three- and six-component), of signal acquisition (hardware structure) and of signal calibration; finally, a brief description of the use of FPs in HMA for balance or gait analysis.
Chapter 2. Description of inverse dynamics, the most common method used in the field of HMA. This method uses the signals measured by an FP to estimate kinetic quantities, such as joint forces and moments. These variables cannot be measured directly without very invasive techniques; consequently they can only be estimated using indirect techniques such as inverse dynamics. Finally, a brief description of the sources of error present in gait analysis.
Chapter 3. State of the art in FP calibration. The selected literature is divided into sections describing: systems for the periodic control of FP accuracy; systems for the reduction of errors in FP signals; and systems and procedures for the construction of an FP. In particular, a calibration system designed by our group, based on a previously proposed theoretical method, is described in detail. This system was the starting point for the new system presented in this thesis.
Chapter 4. Description of the new system, divided into its parts: 1) the algorithm; 2) the device; and 3) the calibration procedure required to carry out the calibration process correctly. The characteristics of the algorithm were optimized using a simulation approach, and the results are presented. In addition, the different versions of the device are described.
Chapter 5. Experimental validation of the new system, achieved by testing it on 4 commercial FPs. The effectiveness of the calibration was verified by measuring, before and after calibration, the accuracy of the FPs in measuring the center of pressure of an applied force. The new system can estimate local and global calibration matrices; using these matrices, the non-linearity of the FPs was quantified and locally compensated. Furthermore, a non-linear calibration is proposed that compensates for the non-linear effect in the FP functioning due to the bending of its upper plate. The experimental results are presented.
Chapter 6. Influence of FP calibration on the estimation of kinetic quantities with the inverse dynamics approach.
Chapter 7. The conclusions of this thesis: the need for FP calibration and the consequent improvement in kinetic data quality.
Appendix. Calibration of the LC used in the presented system. Different calibration set-ups of a 3D force transducer are presented, and the optimal set-up is proposed, with particular attention to the compensation of non-linearities. The optimal set-up is verified by experimental results.
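
As an illustration of the calibration-matrix idea discussed in Chapter 5, and not the system described in the thesis, the sketch below estimates a 6x6 force-platform calibration matrix by least squares from simulated known applied loads and distorted raw outputs; the distortion model and noise level are assumptions.

```python
# Minimal sketch: estimate a 6x6 calibration matrix C so that corrected = C @ raw
# best reproduces known applied loads [Fx, Fy, Fz, Mx, My, Mz].
import numpy as np

rng = np.random.default_rng(0)
n_loads = 200

applied = rng.uniform(-500, 500, size=(6, n_loads))              # known applied loads
true_distortion = np.eye(6) + 0.02 * rng.standard_normal((6, 6)) # simulated cross-talk
raw = true_distortion @ applied + 0.5 * rng.standard_normal((6, n_loads))

# Solve raw.T @ C.T ~= applied.T in the least-squares sense.
C = np.linalg.lstsq(raw.T, applied.T, rcond=None)[0].T

corrected = C @ raw
rms_before = np.sqrt(np.mean((raw - applied) ** 2))
rms_after = np.sqrt(np.mean((corrected - applied) ** 2))
print(f"RMS error before calibration: {rms_before:.2f}, after: {rms_after:.2f}")
```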

Relevance:

90.00%

Publisher:

Abstract:

Aimee Guidera, Director of the National Data Quality Campaign, delivered the second annual Lee Gurel '48 Lecture in Education, "From Dartboards to Dashboards: The Imperative of Using Data to Improve Student Outcomes." Aimee Rogstad Guidera is the Founding Executive Director of the Data Quality Campaign. She manages a growing partnership among national organizations collaborating to improve the quality, accessibility and use of education data to improve student achievement. Working with 10 Founding Partners, Aimee launched the DQC in 2005 with the goal of every state having a robust longitudinal data system in place by 2009. The Campaign is now in the midst of its second phase, which focuses on State Actions to ensure effective data use. Aimee joined the National Center for Educational Accountability as Director of its Washington, DC office in 2003. During her eight previous years in various roles at the National Alliance of Business, Aimee supported the corporate community's efforts to increase achievement at all levels of learning. As NAB Vice President of Programs, she managed the Business Coalition Network, comprised of over 1,000 business-led coalitions focused on improving education in communities across the country. Prior to joining the Alliance, Aimee focused on school readiness, academic standards, education goals and accountability systems while at the Center for Best Practices at the National Governors Association. She taught for the Japanese Ministry of Education in five Hiroshima high schools, where she interviewed educators and studied the Japanese education system, immediately after receiving her AB from Princeton University's Woodrow Wilson School of Public & International Affairs. Aimee also holds a master's degree in Public Policy from Harvard's John F. Kennedy School of Government.

Relevance:

90.00%

Publisher:

Abstract:

The key players in the brain swelling seen after severe human head injury have only been partly determined. We used our human head injury database to determine relationships between potassium, glutamate, lactate and cerebral blood flow (CBF). A total of 70 severely head-injured patients (GCS ≤ 8) were studied using intracerebral microdialysis to measure extracellular glutamate, potassium and lactate. Xenon CT was used to determine regional cerebral blood flow (rCBF). The mean ± SEM of the r values across all patients between potassium and glutamate, and between potassium and lactate, was 0.25 ± 0.04 (p < 0.0001) and 0.17 ± 0.06 (p = 0.006), respectively, demonstrating in both cases a positive relationship. rCBF was negatively correlated with potassium with marginal significance (r = -0.35, p = 0.08). When separated into two groups, patients with contusion had higher potassium levels than patients without contusion (1.55 ± 0.03 mmol/l versus 1.26 ± 0.02 mmol/l, respectively). These results in severely head-injured patients confirm previous in vitro and animal studies in which relationships between potassium, glutamate, lactate and CBF were found. Potassium efflux is a major determinant of cell swelling, leading to clinically significant cytotoxic edema due to increased glutamate release during reduced cerebral blood flow.
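
A minimal sketch, on synthetic data only, of the per-patient correlation analysis summarized above: a Pearson r between microdialysis potassium and glutamate is computed for each patient and then summarized as mean ± SEM across patients.

```python
# Minimal sketch (synthetic data, not the patient data set): per-patient
# Pearson correlations summarized as mean +/- SEM across patients.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
r_values = []
for _ in range(70):                         # 70 patients in the study
    n = rng.integers(20, 60)                # samples per patient (assumed)
    potassium = rng.normal(1.4, 0.2, n)
    glutamate = 5 + 2 * potassium + rng.normal(0, 1.5, n)   # weak positive link
    r, _p = stats.pearsonr(potassium, glutamate)
    r_values.append(r)

mean_r = np.mean(r_values)
sem_r = stats.sem(r_values)
print(f"mean r = {mean_r:.2f} +/- {sem_r:.2f} (SEM)")
```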

Relevance:

90.00%

Publisher:

Abstract:

Increasing amounts of clinical research data are collected by manual data entry into electronic source systems and directly from research subjects. For such manually entered source data, common data cleaning methods such as post-entry identification and resolution of discrepancies and double data entry are not feasible. However, the data accuracy rates achieved without these mechanisms may be lower than desired for a particular research use. We evaluated a heuristic usability method as a tool to independently and prospectively identify data collection form questions associated with data errors. The method showed a promising sensitivity of 64% and a specificity of 67%. The method was used as described in the usability literature, with no further adaptation or specialization for predicting data errors. We conclude that usability evaluation methodology should be investigated further for use in data quality assurance.
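
For reference, sensitivity and specificity in this setting are computed from a simple confusion matrix of flagged versus truly error-prone form questions; the counts below are hypothetical and chosen only to land close to the figures reported.

```python
# Minimal sketch (made-up counts): sensitivity and specificity of a
# prospective error-prediction method against observed data errors.
def sensitivity_specificity(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # flagged questions that really had errors
    specificity = tn / (tn + fp)   # clean questions correctly left unflagged
    return sensitivity, specificity

# Hypothetical confusion-matrix counts for form questions.
sens, spec = sensitivity_specificity(tp=16, fp=10, fn=9, tn=20)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```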

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND: We describe the setup of a neonatal quality improvement tool and list which peer-reviewed requirements it fulfils and which it does not. We report on the effects observed so far, on how the units can identify quality improvement potential, and on how they can measure the effect of changes made to improve quality. METHODS: Application of a prospective longitudinal national cohort data collection that uses algorithms to ensure high data quality (i.e. checks for completeness, plausibility and reliability) and to generate data charts (Plsek's p-charts and standardized mortality or morbidity ratio (SMR) charts). The collected data allow monitoring of a study collective of very-low-birth-weight (VLBW) infants born from 2009 to 2011 by applying a quality cycle following the steps 'guideline - perform - falsify - reform'. RESULTS: 2025 VLBW live births from 2009 to 2011, representing 96.1% of all VLBW live births in Switzerland, display a similar mortality rate but better morbidity rates when compared with other networks. Data quality in general is high but subject to improvement in some units. Seven measurements display quality improvement potential in individual units. The methods used fulfil several international recommendations. CONCLUSIONS: The Quality Cycle of the Swiss Neonatal Network is a helpful instrument to monitor and gradually help improve the quality of care in a region with high quality standards and low statistical discrimination capacity.
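
As a small illustration of one quantity behind the SMR charts mentioned above (the counts are invented, not Swiss Neonatal Network data), the sketch below computes a standardized mortality/morbidity ratio for one unit with an exact Poisson confidence interval.

```python
# Minimal sketch: SMR = observed cases / cases expected from network-wide
# rates, with an exact (Garwood) Poisson 95% confidence interval.
from scipy.stats import chi2

def smr_with_ci(observed, expected, alpha=0.05):
    smr = observed / expected
    lo = chi2.ppf(alpha / 2, 2 * observed) / 2 / expected if observed > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2 / expected
    return smr, lo, hi

# Hypothetical unit: 12 observed cases versus 9.5 expected from network rates.
print("SMR = %.2f (95%% CI %.2f-%.2f)" % smr_with_ci(12, 9.5))
```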

Relevance:

90.00%

Publisher:

Abstract:

Systems for the identification and registration of cattle have gradually been receiving attention for use in syndromic surveillance, a relatively recent approach to the early detection of infectious disease outbreaks. Real-time or near real-time monitoring of deaths or stillbirths reported to these systems offers an opportunity to detect temporal or spatial clusters of increased mortality that could be caused by an infectious disease epidemic. In Switzerland, such data are recorded in the "Tierverkehrsdatenbank" (TVD). To investigate the potential of the Swiss TVD for syndromic surveillance, 3 years of data (2009-2011) were assessed in terms of data quality, including timeliness of reporting and completeness of geographic data. Two time series, consisting of reported on-farm deaths and stillbirths, were retrospectively analysed to define and quantify the temporal patterns that result from non-health-related factors. Geographic data were almost always present in the TVD data, often at different spatial scales. On-farm deaths were reported to the database by farmers in a timely fashion; stillbirths were reported less promptly. Timeliness and geographic coverage are two important features of disease surveillance systems, highlighting the suitability of the TVD for use in a syndromic surveillance system. Both time series exhibited distinct temporal patterns associated with non-health-related factors. To avoid false positive signals, these patterns need to be removed from the data, or accounted for in some way, before aberration detection algorithms are applied in real time. Evaluating mortality data reported to systems for the identification and registration of cattle is of value for comparing national data systems and as a first step towards a Europe-wide early detection system for emerging and re-emerging cattle diseases.
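
A minimal sketch, on simulated counts, of the general approach described above: estimate and remove a weekly reporting pattern before running a simple aberration detector (here a one-sided CUSUM), so that non-health-related temporal patterns do not trigger false alarms. The baseline model and thresholds are assumptions, not the study's method.

```python
# Minimal sketch: remove a day-of-week reporting pattern from a mortality
# time series, then run a one-sided CUSUM on the standardized residuals.
import numpy as np

rng = np.random.default_rng(2)
days = np.arange(365)
weekday = days % 7
weekly_effect = np.array([1.0, 1.1, 1.1, 1.0, 1.0, 0.6, 0.5])  # fewer weekend reports
baseline = 20 * weekly_effect[weekday]
counts = rng.poisson(baseline)
counts[200:207] += 15                       # injected outbreak-like excess

# Step 1: estimate the day-of-week pattern from the data itself.
expected = np.array([counts[weekday == d].mean() for d in range(7)])[weekday]
residuals = (counts - expected) / np.sqrt(expected)   # roughly standardized

# Step 2: one-sided CUSUM on the residuals.
k, h = 0.5, 4.0
s, alarms = 0.0, []
for t, r in enumerate(residuals):
    s = max(0.0, s + r - k)
    if s > h:
        alarms.append(t)
        s = 0.0                             # reset after an alarm
print("alarm days:", alarms)
```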

Relevance:

90.00%

Publisher:

Abstract:

Introduction: The Texas Occupational Safety & Health Surveillance System (TOSHSS) was created to collect, analyze and interpret occupational injury and illness data in order to decrease the impact of occupational injuries within the state of Texas. This process evaluation was performed midway through the 4-year grant to assess the efficiency and effectiveness of the surveillance system's planning and implementation activities [1].
Methods: Two evaluation guidelines published by the Centers for Disease Control and Prevention (CDC) were used as the theoretical models for this process evaluation. The Framework for Program Evaluation in Public Health was used to examine the planning and design of TOSHSS using logic models. The Framework for Evaluating Public Health Surveillance Systems was used to examine the implementation of approximately 60 surveillance activities, including uses of the data obtained from the surveillance system.
Results/Discussion: TOSHSS planning activities omitted the creation of a scientific advisory committee and specific activities designed to maintain contacts with stakeholders; proposed activities should be reassessed and aligned with ongoing performance measurement criteria, including the role of collaborators in helping the surveillance system achieve each proposed activity. TOSHSS implementation activities are substantially meeting expectations and received an overall score of 61% for all activities being performed. TOSHSS is considered a surveillance system that is simple, flexible, acceptable, fairly stable, timely and moderately useful, with good data quality and a predictive value positive (PVP) of 86%.
Conclusions: Through its third year of implementation, TOSHSS has made a considerable contribution to the collection of occupational injury and illness information within the state of Texas. Implementation of the nine recommendations provided in this process evaluation is expected to increase the overall usefulness of the surveillance system and assist TDSHS in reducing occupational fatalities, injuries, and diseases within the state of Texas.
[1] Disclaimer: The Texas Occupational Safety and Health Surveillance System is supported by Grant/Cooperative Agreement Number U60 OH008473-01A1. The content of this evaluation is solely the responsibility of the authors and does not necessarily represent the official views of the Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health.
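
For reference, the PVP attribute from the CDC surveillance-evaluation framework is simply the proportion of reported cases that are true cases; the counts below are hypothetical and chosen only to reproduce the 86% figure cited above.

```python
# Minimal sketch (hypothetical counts): predictive value positive (PVP) of a
# surveillance system -- the share of reported cases that are true cases.
def predictive_value_positive(true_cases_reported, total_cases_reported):
    return true_cases_reported / total_cases_reported

# e.g. 86 confirmed cases out of 100 reports gives the 86% PVP cited above.
print(f"PVP = {predictive_value_positive(86, 100):.0%}")
```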

Relevance:

90.00%

Publisher:

Abstract:

Clinical Research Data Quality Literature Review and Pooled Analysis
We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 errors per 10,000 fields to 5019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70-5019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers. Error rates for data processed with single entry in the presence of on-screen checks were comparable to those for double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.
Defining Data Quality for Clinical Research: A Concept Analysis
Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality, which builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.
Medical Record Abstractor's Perceptions of Factors Impacting the Accuracy of Abstracted Data
Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses. Factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess abstractors' perceptions of these factors. The Delphi process identified 9 factors that were not found in the literature and differed from the literature on 5 of the factors in the top 25%. The Delphi results refuted seven factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.
Distributed Cognition Artifacts on Clinical Research Data Collection Forms
Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates. Distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support employed in a sample of clinical research data collection forms. We show that the cognitive load required for abstraction was high for 61% of the sampled data elements, and exceedingly high for 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping or calculation. The representational analysis used here can be applied to identify data elements with high cognitive demands.
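
As a small illustration of the pooling step in the literature review above, the sketch below normalizes discrepancy counts from hypothetical studies to a common errors-per-10,000-fields scale and computes a pooled rate; the study data are invented.

```python
# Minimal sketch (made-up study data): convert reported discrepancy counts to
# errors per 10,000 fields so heterogeneous studies can be compared and pooled.
studies = [
    # (data processing method, errors found, fields inspected) -- illustrative
    ("double data entry",               12,  90_000),
    ("single entry + on-screen checks", 25, 110_000),
    ("medical record abstraction",     310,  15_000),
]

def errors_per_10k(errors, fields):
    return 10_000 * errors / fields

for method, errors, fields in studies:
    print(f"{method:32s} {errors_per_10k(errors, fields):8.1f} errors / 10,000 fields")

# Pooled rate across the listed studies (simple fixed-effect style pooling).
total_err = sum(e for _, e, _ in studies)
total_fields = sum(f for _, _, f in studies)
print(f"{'pooled':32s} {errors_per_10k(total_err, total_fields):8.1f} errors / 10,000 fields")
```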

Relevance:

90.00%

Publisher:

Abstract:

The main objective of this study was to determine the external validity of a clinical prediction rule developed by the European Multicenter Study on Human Spinal Cord Injury (EM-SCI) to predict ambulation outcomes 12 months after traumatic spinal cord injury. Data from the North American Clinical Trials Network (NACTN) registry, with approximately 500 SCI cases, were used for this validity study. The predictive accuracy of the EM-SCI prognostic model was evaluated using calibration and discrimination based on 231 NACTN cases. The area under the receiver operating characteristic (ROC) curve was 0.927 (95% CI 0.894-0.959) for the EM-SCI model when applied to the NACTN population. This is lower than the AUC of 0.956 (95% CI 0.936-0.976) reported for the EM-SCI population, but it suggests that the EM-SCI clinical prediction rule distinguished well between those patients in the NACTN population who achieved independent ambulation and those who did not. The calibration curve suggests that the higher the prediction score, the higher the probability of walking, with the best predictions for AIS D patients. In conclusion, the EM-SCI clinical prediction rule was found to be generalizable to the adult NACTN SCI population.
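
A minimal sketch, on simulated data, of the two external-validation checks used above: discrimination via the area under the ROC curve and calibration via predicted-versus-observed ambulation rates. The score-to-probability model is an assumption for illustration, not the EM-SCI rule.

```python
# Minimal sketch (simulated data): discrimination (AUC) and calibration
# (predicted vs. observed outcome rates) for a prediction rule.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(3)
n = 231                                       # size of the validation set
score = rng.uniform(0, 40, n)                 # hypothetical prediction scores
p_walk = 1 / (1 + np.exp(-(score - 20) / 4))  # assumed score-to-probability link
walked = rng.binomial(1, p_walk)              # simulated observed ambulation

auc = roc_auc_score(walked, score)                                 # discrimination
obs_rate, pred_rate = calibration_curve(walked, p_walk, n_bins=5)  # calibration

print(f"AUC = {auc:.3f}")
for o, p in zip(obs_rate, pred_rate):
    print(f"predicted {p:.2f}  observed {o:.2f}")
```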