962 results for Subsequential Completeness
Abstract:
BACKGROUND We describe the setup of a neonatal quality improvement tool, list which peer-reviewed requirements it fulfils and which it does not, and report on the effects observed so far, how units can identify quality improvement potential, and how they can measure the effect of changes made to improve quality. METHODS Application of a prospective longitudinal national cohort data collection that uses algorithms to ensure high data quality (i.e. checks for completeness, plausibility and reliability) and to display the data graphically (Plsek's p-charts and standardized mortality or morbidity ratio (SMR) charts). The collected data allow monitoring of a study collective of very low birth-weight (VLBW) infants born from 2009 to 2011 by applying a quality cycle with the steps 'guideline - perform - falsify - reform'. RESULTS The 2,025 VLBW live births from 2009 to 2011, representing 96.1% of all VLBW live births in Switzerland, display a similar mortality rate but better morbidity rates when compared with other networks. Data quality is generally high but open to improvement in some units. Seven measurements reveal quality improvement potential in individual units. The methods used fulfil several international recommendations. CONCLUSIONS The Quality Cycle of the Swiss Neonatal Network is a helpful instrument to monitor, and gradually help improve, the quality of care in a region with high quality standards and low statistical discrimination capacity.
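For context, the control limits of a p-chart like those mentioned above are derived from the baseline event proportion; the following is a minimal sketch (our own function and variable names, not the network's implementation):

```python
import math

def p_chart_limits(p_bar: float, n: int, z: float = 3.0):
    """3-sigma control limits for a proportion (p-) chart.

    p_bar -- baseline proportion of the event (e.g. network-wide mortality)
    n     -- number of infants observed in the unit
    """
    se = math.sqrt(p_bar * (1.0 - p_bar) / n)  # binomial standard error
    return max(0.0, p_bar - z * se), min(1.0, p_bar + z * se)

# A unit whose observed proportion falls outside these limits signals
# potential quality-improvement (or data-quality) issues.
lower, upper = p_chart_limits(p_bar=0.10, n=120)
print(lower, upper)
```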
Abstract:
OBJECTIVES Accurate trial reporting facilitates evaluation and better use of study results. The objective of this article is to investigate the quality of reporting of randomized controlled trials (RCTs) in leading orthodontic journals and to explore potential predictors of improved reporting. METHODS The 50 most recent issues of 4 leading orthodontic journals up to November 2013 were electronically searched. Reporting quality was assessed using the modified CONSORT statement checklist. The relationship between potential predictors and the modified CONSORT score was assessed using linear regression modeling. RESULTS 128 RCTs were identified, with a mean modified CONSORT score of 68.97% (SD = 11.09). The Journal of Orthodontics (JO) ranked first in completeness of reporting (modified CONSORT score 76.21%, SD = 10.1), followed by the American Journal of Orthodontics and Dentofacial Orthopedics (AJODO) (73.05%, SD = 10.1). Journal of publication (AJODO: β = 10.08, 95% CI: 5.78, 14.38; JO: β = 16.82, 95% CI: 11.70, 21.94; EJO: β = 7.21, 95% CI: 2.69, 11.72, compared to Angle), year of publication (β = 0.98, 95% CI: 0.28, 1.67 for each additional year), region of authorship (Europe: β = 5.19, 95% CI: 1.30, 9.09, compared to Asia/other), statistical significance (significant: β = 3.10, 95% CI: 0.11, 6.10, compared to non-significant) and methodologist involvement (involvement: β = 5.60, 95% CI: 1.66, 9.54, compared to non-involvement) were all significant predictors of improved modified CONSORT scores in the multivariable model. Additionally, the median overall Jadad score was 2 (IQR = 2) across journals, with JO (median = 3, IQR = 1) and AJODO (median = 3, IQR = 2) presenting the highest values. CONCLUSION The reporting quality of RCTs published in leading orthodontic journals is suboptimal in various CONSORT areas. This may have a bearing on the interpretation of trial results and their use in clinical decision making and evidence-based orthodontic treatment.
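A regression of this kind can be reproduced in outline as follows; the data frame and column names below are invented stand-ins for the 128 trials, and only two of the reported predictors are shown:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for the 128 trials; column names are ours.
df = pd.DataFrame({
    "score":   [62, 70, 78, 66, 81, 59, 74, 69, 77, 64],
    "journal": ["Angle", "AJODO", "JO", "EJO", "JO",
                "Angle", "AJODO", "EJO", "JO", "Angle"],
    "year":    [2009, 2010, 2011, 2011, 2012,
                2010, 2012, 2013, 2013, 2009],
})

# 'Angle' as the reference level, matching the comparisons above; the
# full model would also add region, significance and methodologist
# involvement as predictors.
fit = smf.ols("score ~ C(journal, Treatment('Angle')) + year", data=df).fit()
print(fit.params)      # beta estimates
print(fit.conf_int())  # 95% confidence intervals
```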
Abstract:
OBJECTIVE To describe a novel CONsolidated Standards of Reporting Trials (CONSORT) adherence strategy implemented by the American Journal of Orthodontics and Dentofacial Orthopedics (AJO-DO) and to report its impact on the completeness of reporting of published trials. STUDY DESIGN AND SETTING The AJO-DO CONSORT adherence strategy, initiated in June 2011, involves active assessment of randomized clinical trial (RCT) reporting during the editorial process. The completeness of reporting of CONSORT items was compared between trials submitted and published during the implementation period (July 2011 to September 2013) and trials published between August 2007 and July 2009. RESULTS Of the 42 RCTs submitted between July 2011 and September 2013, 23 were considered for publication and assessed for completeness of reporting, seven of which were eventually published. For the RCTs published between 2007 and 2009 (n = 20), completeness of reporting by CONSORT item ranged from 0% to 100% (median = 40%, interquartile range = 60%). All trials published in 2011-2013 reported 33 of the 37 CONSORT (sub)items. Four CONSORT 2010 checklist items remained problematic even after implementation of the adherence strategy: changes to methods (3b) and changes to outcomes (6b) after the trial commenced, interim analysis (7b), and trial stopping (14b), which are typically reported only when applicable. CONCLUSION Trials published following implementation of the AJO-DO CONSORT adherence strategy completely reported more CONSORT items than those published or submitted previously.
Abstract:
AIM Abstracts of randomized clinical trials (RCTs) are extremely important, as trial appraisal is often based on the information they contain. The objective of this study was to assess the quality of reporting of RCT abstracts in journals of Oral Implantology. MATERIAL AND METHODS Six leading Implantology journals were screened for RCTs published between 2008 and 2012. A 21-item modified CONSORT for Abstracts checklist was used to examine the completeness of abstract reporting. Descriptive statistics and linear regression modeling were employed for data analysis. RESULTS One hundred and sixty-three RCT abstracts were included in this study. The majority of the RCTs were published in Clinical Oral Implants Research (42.9%). The mean overall reporting quality score was 58.6% (95% CI: 57.6-59.7). The highest score was noted in the European Journal of Oral Implantology (63.8%; 95% CI: 61.8-65.8). Multivariate analysis demonstrated that abstract quality score was related to publication journal and the number of research centers involved. Most abstracts adequately reported interventions (89.0%), objectives (77.9%) and conclusions (74.8%), but failed to report randomization procedures, allocation concealment, effect estimates, confidence intervals, and funding. Trial registration was not reported in any of the abstracts. CONCLUSIONS The reporting quality of abstracts of RCTs published in Oral Implantology journals needs to be improved. Editors and authors should be encouraged to endorse the CONSORT for Abstracts guidelines in order to achieve optimal quality in abstract reporting.
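Scoring against such a checklist reduces to the share of items adequately reported; a minimal sketch (item names are illustrative, not the actual checklist wording):

```python
def checklist_score(items: dict[str, bool]) -> float:
    """Percentage of checklist items adequately reported in one abstract."""
    return 100.0 * sum(items.values()) / len(items)

# Nine of the 21 items, for illustration; the rest are omitted here.
abstract_items = {
    "interventions": True, "objectives": True, "conclusions": True,
    "randomization": False, "allocation_concealment": False,
    "effect_estimate": False, "confidence_interval": False,
    "funding": False, "trial_registration": False,
}
print(f"{checklist_score(abstract_items):.1f}%")  # 33.3% for this toy case
```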
Abstract:
Companion animals closely share their domestic environment with people and have the potential to act as sources of zoonotic diseases. They also have the potential to be sentinels of infectious and noninfectious diseases. With the exception of rabies, there has been minimal ongoing surveillance of companion animals in Canada. We developed customized data extraction software, the University of Calgary Data Extraction Program (UCDEP), to automatically extract and warehouse the electronic medical records (EMR) from participating private veterinary practices to make them available for disease surveillance and knowledge creation for evidence-based practice. It was not possible to build generic data extraction software; the UCDEP required customization to meet the specific software capabilities of the veterinary practices. The UCDEP, tailored to the participating veterinary practices' management software, was capable of extracting data from the EMR with greater than 99% completeness and accuracy. The experiences of the people developing and using the UCDEP and the quality of the extracted data were evaluated. The electronic medical record data stored in the data warehouse may be a valuable resource for surveillance and evidence-based medical research.
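Completeness and accuracy figures of the kind reported above (greater than 99%) can be quantified against the source systems or a manual audit; a minimal sketch under that assumption:

```python
def completeness(extracted_ids: set, source_ids: set) -> float:
    """Fraction of source records present in the data warehouse."""
    return len(extracted_ids & source_ids) / len(source_ids)

def accuracy(extracted: dict, audited: dict) -> float:
    """Fraction of extracted field values that match a manual audit."""
    shared = extracted.keys() & audited.keys()
    return sum(extracted[k] == audited[k] for k in shared) / len(shared)

print(completeness({"r1", "r2", "r3"}, {"r1", "r2", "r3", "r4"}))  # 0.75
print(accuracy({"species": "canine", "weight": "12kg"},
               {"species": "canine", "weight": "12.4kg"}))         # 0.5
```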
Abstract:
BACKGROUND Urinary creatinine excretion is used as a marker of the completeness of timed urine collections, which are a keystone of several metabolic evaluations in clinical investigations and epidemiological surveys. The current reference values for 24-hour urinary creatinine excretion rely on observations performed in the 1960s and 1970s in relatively small and mostly selected groups, and may thus fit poorly to the present-day general European population. The aim of this study was to establish and validate anthropometry-based age- and sex-specific reference values for the 24-hour urinary creatinine excretion in adult populations with preserved renal function. METHODS We used data from two independent Swiss cross-sectional population-based studies with standardised 24-hour urinary collection and measured anthropometric variables. Only data from adults of European descent, with estimated glomerular filtration rate (eGFR) ≥60 ml/min/1.73 m² and reported completeness of the urinary collection, were retained. A linear regression model was developed to predict centiles of the 24-hour urinary creatinine excretion in 1,137 participants from the Swiss Survey on Salt and validated in 994 participants from the Swiss Kidney Project on Genes in Hypertension. RESULTS The mean urinary creatinine excretion was 193 ± 41 μmol/kg/24 hours in men and 151 ± 38 μmol/kg/24 hours in women in the Swiss Survey on Salt. The values were inversely correlated with age and body mass index (BMI). Based on current reference values (177 to 221 μmol/kg/24 hours in men and 133 to 177 μmol/kg/24 hours in women), 56% of the urinary collections in the whole population, and 67% in people >60 years old, would have been considered inaccurate. A linear regression model with sex, BMI and age as predictor variables was found to provide the best prediction of the observed values and showed a good fit when applied to the validation population. CONCLUSIONS We propose a validated prediction equation for 24-hour urinary creatinine excretion in the general European population, based on readily available variables such as age, sex and BMI, together with a few derived nomograms to ease its clinical application. This should help healthcare providers to interpret the completeness of a 24-hour urine collection in daily clinical practice and in epidemiological population studies.
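The shape of such a prediction equation can be sketched as follows; the coefficients below are invented for illustration only and are not the validated values from the study:

```python
def predicted_creatinine(sex: str, age: float, bmi: float) -> float:
    """Illustrative linear predictor of 24-hour urinary creatinine
    excretion (umol/kg/24 h): higher in men, decreasing with age and
    BMI. Coefficients are made up; use the paper's validated equation.
    """
    intercept = 230.0 if sex == "male" else 190.0  # hypothetical
    return intercept - 0.7 * age - 1.0 * bmi       # hypothetical slopes

# A collection falling far below a low centile of its predicted value
# would be flagged as possibly incomplete.
print(predicted_creatinine("male", age=50, bmi=25))  # ~170 umol/kg/24 h
```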
Abstract:
This article reviews the status of the exciting and rapidly evolving field of dark matter research as of summer 2013, when it was discussed at ICRC 2013 in Rio de Janeiro. It focuses on the three main avenues to detect WIMP dark matter: direct detection, indirect detection and collider searches. The article is based on the dark matter rapporteur talk summarizing the presentations given at the conference, filling some gaps for completeness.
Abstract:
In this article, we introduce the probabilistic justification logic PJ, a logic in which we can reason about the probability of justification statements. We present its syntax and semantics, and establish a strong completeness theorem. Moreover, we investigate the relationship between PJ and the logic of uncertain justifications.
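Schematically, a strong completeness theorem for a logic such as PJ takes the following form (our rendering; the paper's precise probabilistic semantics is not reproduced here):

```latex
% For every set of formulas $T$ and every formula $\varphi$:
\[
  T \vdash_{\mathsf{PJ}} \varphi
  \quad\Longleftrightarrow\quad
  T \models_{\mathsf{PJ}} \varphi ,
\]
% i.e. derivability in the axiom system coincides with semantic
% consequence over the intended (probabilistic) models, for arbitrary
% -- not merely finite -- premise sets $T$.
```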
Abstract:
INTRODUCTION Every joint registry aims to improve patient care by identifying implants with inferior performance. For this reason, each registry records the implant name used in the individual patient. Most registries have used a paper-based approach for this purpose. However, in addition to being time-consuming, this approach does not account for the fact that failure patterns are not necessarily implant specific but can be associated with design features shared by a number of implants. Therefore, we aimed to develop and evaluate an implant product library that allows both time-saving barcode scanning on site in the hospital for the registration of implant components and a detailed description of implant specifications. MATERIALS AND METHODS A task force consisting of representatives of the German Arthroplasty Registry, industry, and computer specialists agreed on a solution that allows barcode scanning of implant components and also uses a detailed standardized classification describing arthroplasty components. The manufacturers classified all their components sold in Germany according to this classification. The completeness of the implant database was analyzed using algorithms and real-time registry data. RESULTS The implant library was set up successfully. At this point, the implant database includes more than 38,000 items, all of which were classified by the manufacturers according to the predefined scheme. Using patient data from the German Arthroplasty Registry, several errors in the database were detected, all of which were corrected by the respective implant manufacturers. CONCLUSIONS The implant library developed for the German Arthroplasty Registry not only allows on-site barcode scanning for the registration of implant components; its classification tree also allows sophisticated analysis of implant characteristics, regardless of brand or manufacturer. The database is maintained by the implant manufacturers, allowing registries to focus their resources on other areas of research. It might serve as a global model, encouraging harmonization between joint replacement registries and enabling comparisons between them.
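One way to picture such a product library is a barcode-keyed map of components carrying their classification-tree path; a minimal sketch (field names are ours, not the registry's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class ImplantComponent:
    """One entry of an implant product library."""
    barcode: str                  # scanned on site in the hospital
    manufacturer: str
    brand: str
    classification: list[str] = field(default_factory=list)
    # classification tree path, e.g. ["hip", "stem", "cementless", "modular"]

library: dict[str, ImplantComponent] = {}

def register_scan(barcode: str) -> ImplantComponent:
    """Resolve a scanned barcode against the library; missing entries
    reveal completeness gaps to report back to the manufacturer."""
    component = library.get(barcode)
    if component is None:
        raise KeyError(f"unclassified component: {barcode}")
    return component
```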
Abstract:
We present a probabilistic justification logic, PPJ, to study rational belief, degrees of belief and justifications. We establish soundness and completeness for PPJ and show that its satisfiability problem is decidable. In the last part we use PPJ to provide a solution to the lottery paradox.
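The lottery paradox that PPJ is applied to can be stated schematically with probability operators (our rendering, not the paper's formalization):

```latex
% A lottery sells $n$ tickets and exactly one wins; ticket $i$ loses
% with probability $(n-1)/n$. For any belief threshold $s \le (n-1)/n$:
\[
  P_{\ge s}(\neg w_i) \ \text{holds for each } i \in \{1,\dots,n\},
  \qquad\text{yet}\qquad
  \Pr\!\Big(\bigwedge_{i=1}^{n} \neg w_i\Big) = 0 ,
\]
% so the threshold licenses each individual belief while the
% conjunction of all of them is certainly false.
```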
Abstract:
Systems for the identification and registration of cattle have gradually been receiving attention for use in syndromic surveillance, a relatively recent approach for the early detection of infectious disease outbreaks. Real-time or near real-time monitoring of deaths or stillbirths reported to these systems offers an opportunity to detect temporal or spatial clusters of increased mortality that could be caused by an infectious disease epidemic. In Switzerland, such data are recorded in the "Tierverkehrsdatenbank" (TVD). To investigate the potential of the Swiss TVD for syndromic surveillance, 3 years of data (2009-2011) were assessed in terms of data quality, including timeliness of reporting and completeness of geographic data. Two time series consisting of reported on-farm deaths and stillbirths were retrospectively analysed to define and quantify the temporal patterns that result from non-health-related factors. Geographic data were almost always present in the TVD data, often at different spatial scales. On-farm deaths were reported to the database by farmers in a timely fashion; stillbirths were reported less promptly. Timeliness and geographic coverage are two important features of disease surveillance systems, highlighting the suitability of the TVD for use in a syndromic surveillance system. Both time series exhibited distinct temporal patterns associated with non-health-related factors. To avoid false positive signals, these patterns need to be removed from the data or otherwise accounted for before applying aberration detection algorithms in real time. Evaluating mortality data reported to systems for the identification and registration of cattle is of value for comparing national data systems and is a first step towards a European-wide early detection system for emerging and re-emerging cattle diseases.
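Removing a known reporting pattern before running an aberration rule can be sketched as follows (synthetic series and a simple three-sigma rule; the actual algorithms applied to the TVD data are not specified here):

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic daily counts of reported on-farm deaths with a weekly
# reporting pattern, plus one injected aberration.
values = [5, 7, 6, 8, 9, 6, 5] * 8
values[30] += 25
deaths = pd.Series(values,
                   index=pd.date_range("2009-01-01", periods=56, freq="D"))

# Strip the weekly (non-health-related) pattern first ...
residuals = seasonal_decompose(deaths, model="additive",
                               period=7).resid.dropna()

# ... then flag what remains unusually high.
threshold = residuals.mean() + 3 * residuals.std()
print(residuals[residuals > threshold])  # the injected spike
```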
Abstract:
This paper estimates the aggregate demand for private health insurance coverage in the U.S. using an error-correction model, recognizing that people are without private health insurance for voluntary, structural, frictional, and cyclical reasons, as well as because of public alternatives. Insurance coverage is measured both by the percentage of the population enrolled in private health insurance plans and by the completeness of the insurance coverage. Annual data for the period 1966-1999 are used, and both short- and long-run price and income elasticities of demand are estimated. The empirical findings indicate that both private insurance enrollment and completeness are relatively inelastic with respect to changes in price and income in the short and long run. Moreover, private health insurance enrollment is found to be inversely related to the poverty rate, particularly in the short run. Finally, our results suggest that an increase in the number of cyclically uninsured generates less of a welfare loss than an increase in the structurally uninsured.
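The error-correction form referred to above ties short-run changes in coverage to deviations from a long-run equilibrium; schematically (our notation, with variables in logs so the coefficients are elasticities):

```latex
\[
  \Delta y_t \;=\; \gamma\,\Delta x_t
  \;+\; \alpha\,\bigl(y_{t-1} - \beta x_{t-1}\bigr)
  \;+\; \varepsilon_t , \qquad \alpha < 0 ,
\]
% $y_t$: insurance enrollment (or completeness); $x_t$: price or
% income; $\gamma$ is the short-run elasticity, $\beta$ the long-run
% one, and $\alpha$ the speed of adjustment back to equilibrium.
```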
Abstract:
Medication reconciliation, with the aim to resolve medication discrepancy, is one of the Joint Commission patient safety goals. Medication errors and adverse drug events that could result from medication discrepancy affect a large population. At least 1.5 million adverse drug events and $3.5 billion in yearly financial burden associated with medication errors could be prevented by interventions such as medication reconciliation. This research was conducted to answer the following research questions: (1a) What are the frequency range and type of measures used to report outpatient medication discrepancy? (1b) Which effective and efficient strategies for medication reconciliation in the outpatient setting have been reported? (2) What are the costs associated with medication reconciliation practice in primary care clinics? (3) What is the quality of medication reconciliation practice in primary care clinics? (4) Is medication reconciliation practice in primary care clinics cost-effective from the clinic perspective? Study designs used to answer these questions included a systematic review, cost analysis, quality assessments, and cost-effectiveness analysis. Data sources were published articles in the medical literature and data from a prospective workflow study, which included 150 patients and 1,238 medications. The systematic review confirmed that the prevalence of medication discrepancy was high in ambulatory care and higher in primary care settings. Effective strategies for medication reconciliation included the use of pharmacists, letters, a standardized practice approach, and partnership between providers and patients. Our cost analysis showed that costs associated with medication reconciliation practice were not substantially different between primary care clinics using or not using electronic medical records (EMR) ($0.95 per patient per medication in EMR clinics vs. $0.96 per patient per medication in non-EMR clinics, p=0.78). Even though medication reconciliation was frequently practiced (97-98%), the quality of such practice was poor (0-33% process completeness, measured by concordance of medication numbers, and 29-33% accuracy, measured by concordance of medication names) and negatively (though not significantly) associated with medication regimen complexity. The incremental cost-effectiveness ratios for concordance of medication number per patient per medication and concordance of medication names per patient per medication were both 0.08, favoring EMR. Future studies including potential cost savings from medication features of the EMR and potential benefits of minimizing the severity of harm to patients from medication discrepancy are warranted.
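The incremental cost-effectiveness ratios quoted above follow the standard definition (our notation; effectiveness here is a concordance measure per patient per medication):

```latex
\[
  \mathrm{ICER}
  \;=\;
  \frac{C_{\text{EMR}} - C_{\text{non-EMR}}}
       {E_{\text{EMR}} - E_{\text{non-EMR}}}
\]
% The abstract reports a ratio of 0.08 for both concordance outcomes,
% favoring EMR.
```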
Abstract:
Background: The failure rate of health information systems is high, partially due to fragmented, incomplete, or incorrect identification and description of specific and critical domain requirements. In order to systematically transform the requirements of work into a real information system, an explicit conceptual framework is essential to summarize the work requirements and guide system design. Recently, Butler, Zhang, and colleagues proposed a conceptual framework called Work Domain Ontology (WDO) to formally represent users' work. This WDO approach has been successfully demonstrated in a real-world design project on aircraft scheduling. However, as a top-level conceptual framework, this WDO has not defined an explicit and well-specified schema (WDOS), and it does not have a generalizable and operationalized procedure that can be easily applied to develop WDOs. Moreover, WDO has not been developed for any concrete healthcare domain. These limitations hinder the utility of WDO in real-world information systems in general and in health information systems in particular. Objective: The objective of this research is to formalize the WDOS, operationalize a procedure to develop WDOs, and evaluate the WDO approach using the Self-Nutrition Management (SNM) work domain. Method: Concept analysis was implemented to formalize the WDOS. Focus group interviews were conducted to capture concepts in the SNM work domain. Ontology engineering methods were adopted to model the SNM WDO. Some of the concepts under the primary goal "staying healthy" for SNM were selected and transformed into a semi-structured survey to evaluate the acceptance, explicitness, completeness, consistency, and experience dependency of the SNM WDO. Result: Four concepts, "goal, operation, object and constraint", were identified and formally modeled in the WDOS with definitions and attributes. 72 SNM WDO concepts under the primary goal were selected and transformed into semi-structured survey questions. The evaluation indicated that the major concepts of the SNM WDO were accepted by 41 overweight subjects. The SNM WDO is generally independent of user domain experience but partially dependent on SNM application experience. 23 of 41 paired concepts had significant correlations. Two concepts were identified as ambiguous. Eight additional concepts were recommended to improve the completeness of the SNM WDO. Conclusion: The preliminary WDOS is ready, with an operationalized procedure. The SNM WDO has been developed to guide future SNM application design. This research is an essential step towards Work-Centered Design (WCD).
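The four WDOS concepts can be pictured as a small schema; a minimal sketch (attribute names are illustrative, not the formal WDOS definitions):

```python
from dataclasses import dataclass, field

@dataclass
class WorkObject:
    name: str                    # e.g. "food item", "nutrient intake"

@dataclass
class Constraint:
    description: str             # e.g. "daily sodium below 2.3 g"

@dataclass
class Operation:
    name: str                    # e.g. "log a meal"
    acts_on: list[WorkObject] = field(default_factory=list)

@dataclass
class Goal:
    name: str                    # e.g. the primary goal "staying healthy"
    operations: list[Operation] = field(default_factory=list)
    constraints: list[Constraint] = field(default_factory=list)
    subgoals: list["Goal"] = field(default_factory=list)
```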
Abstract:
The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans for natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection operate on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power to discover genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and increasing completeness of many gene network and protein-protein interaction databases accompanying the genomic era open an avenue for enhancing the power of discovering genes under natural selection. The aim of this thesis is to explore and develop normal mixture model based methods for leveraging gene network information to enhance the power of natural selection target gene discovery. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model and the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover genes under moderate or weak selection that bridge genes under strong selection, and it helps our understanding of the biology underlying complex diseases and related natural selection phenotypes.
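On the log-odds scale the naïve Bayes combination described above is additive; schematically (our notation):

```latex
% $S_g$: gene $g$ is under selection; $D_g$: its selection-test
% statistic; $N_g$: its network neighborhood (GBA evidence).
\[
  \log\frac{\Pr(S_g \mid D_g, N_g)}{\Pr(\neg S_g \mid D_g, N_g)}
  \;=\;
  \log\frac{\Pr(S_g \mid D_g)}{\Pr(\neg S_g \mid D_g)}
  \;+\;
  \log\frac{\Pr(N_g \mid S_g)}{\Pr(N_g \mid \neg S_g)} ,
\]
% assuming $D_g$ and $N_g$ are conditionally independent given the
% selection status: the mixture-model posterior log odds plus a
% Guilt-By-Association likelihood-ratio term.
```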