896 results for Open Government Data
Abstract:
This report presents the results of a study exploring the law and practice of mandatory reporting of child abuse and neglect in the Northern Territory. Government administrative data over a decade (2003-2012) were accessed and analysed to map trends in reporting of different types of child abuse and neglect (physical abuse, sexual abuse, emotional abuse, and neglect) by different reporter groups (e.g., police, teachers, doctors, and nurses, as distinct from family members and neighbours), and the outcomes of these reports (whether investigated, and whether substantiated or not). The study was funded by the Australian Government and administered through the Government of Victoria.
Abstract:
This report presents the results of a study exploring the law and practice of mandatory reporting of child abuse and neglect in Queensland. Government administrative data over a decade (2003-2012) were accessed and analysed to map trends in reporting of different types of child abuse and neglect (physical abuse, sexual abuse, emotional abuse, and neglect) by different reporter groups (both mandated reporters e.g., teachers, doctors, nurses; and non-mandated reporters e.g., family members, neighbours), and the outcomes of these reports (whether investigated, and whether substantiated or not). The study was funded by the Australian Government and administered through the Government of Victoria.
Abstract:
This report presents the results of a study exploring the law and practice of mandatory reporting of child abuse and neglect in South Australia. Government administrative data over a decade (2003-2012) were accessed and analysed to map trends in reporting of different types of child abuse and neglect (physical abuse, sexual abuse, emotional abuse, and neglect) by different reporter groups (both mandated reporters e.g., police, teachers, doctors, nurses; and non-mandated reporters e.g., family members, neighbours), and the outcomes of these reports (whether investigated, and whether substantiated or not). The study was funded by the Australian Government and administered through the Government of Victoria.
Abstract:
This report presents the results of a study exploring the law and practice of mandatory reporting of child abuse and neglect in Tasmania. Government administrative data over a nine-year period (2004-2012) were accessed and analysed to map trends in reporting of different types of child abuse and neglect (physical abuse, sexual abuse, emotional abuse, and neglect) by different reporter groups (both mandated reporters e.g., police, teachers, doctors, nurses; and non-mandated reporters e.g., family members, neighbours), and the outcomes of these reports (whether investigated, and whether substantiated or not). The study was funded by the Australian Government and administered through the Government of Victoria.
Abstract:
This report presents the results of a study exploring the law and practice of mandatory reporting of child abuse and neglect in Victoria. Government administrative data over a decade (2003-2012) were accessed and analysed to map trends in reporting of different types of child abuse and neglect (physical abuse, sexual abuse, emotional abuse, and neglect) by different reporter groups (both mandated reporters e.g., police, teachers, doctors, nurses; and non-mandated reporters e.g., family members, neighbours), and the outcomes of these reports (whether investigated, and whether substantiated or not). The study was funded by the Australian Government and administered through the Government of Victoria.
Abstract:
This report presents the results of a study exploring the law and practice of mandatory reporting of child abuse and neglect in Western Australia. Government administrative data over a decade (2003-2012) were accessed and analysed to map trends in reporting of different types of child abuse and neglect (physical abuse, sexual abuse, emotional abuse, and neglect) by different reporter groups (e.g., police, teachers, doctors, nurses, family members, neighbours), and the outcomes of these reports (whether investigated, and whether substantiated or not). The study was funded by the Australian Government and administered through the Government of Victoria.
Abstract:
Background: Long working hours might increase the risk of cardiovascular disease, but prospective evidence is scarce, imprecise, and mostly limited to coronary heart disease. We aimed to assess long working hours as a risk factor for incident coronary heart disease and stroke.
Methods: We identified published studies through a systematic review of PubMed and Embase from inception to Aug 20, 2014. We obtained unpublished data for 20 cohort studies from the Individual-Participant-Data Meta-analysis in Working Populations (IPD-Work) Consortium and open-access data archives. We used cumulative random-effects meta-analysis to combine effect estimates from published and unpublished data.
Findings: We included 25 studies from 24 cohorts in Europe, the USA, and Australia. The meta-analysis of coronary heart disease comprised data for 603 838 men and women who were free from coronary heart disease at baseline; the meta-analysis of stroke comprised data for 528 908 men and women who were free from stroke at baseline. Follow-up for coronary heart disease was 5·1 million person-years (mean 8·5 years), in which 4768 events were recorded, and for stroke was 3·8 million person-years (mean 7·2 years), in which 1722 events were recorded. In cumulative meta-analysis adjusted for age, sex, and socioeconomic status, compared with standard hours (35-40 h per week), working long hours (≥55 h per week) was associated with an increase in risk of incident coronary heart disease (relative risk [RR] 1·13, 95% CI 1·02-1·26; p=0·02) and incident stroke (1·33, 1·11-1·61; p=0·002). The excess risk of stroke remained unchanged in analyses that addressed reverse causation, multivariable adjustments for other risk factors, and different methods of stroke ascertainment (range of RR estimates 1·30-1·42). We recorded a dose-response association for stroke, with RR estimates of 1·10 (95% CI 0·94-1·28; p=0·24) for 41-48 working hours, 1·27 (1·03-1·56; p=0·03) for 49-54 working hours, and 1·33 (1·11-1·61; p=0·002) for 55 working hours or more per week, compared with standard working hours (p for trend <0·0001).
Interpretation: Employees who work long hours have a higher risk of stroke than those working standard hours; the association with coronary heart disease is weaker. These findings suggest that more attention should be paid to the management of vascular risk factors in individuals who work long hours.
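As a sketch of the statistics involved: the pooled relative risks above come from random-effects meta-analysis, and the snippet below shows a minimal DerSimonian-Laird pooling of study-level RRs with 95% CIs. This is one standard random-effects estimator, not necessarily the consortium's exact procedure, and the input numbers are invented for illustration.

```python
import numpy as np

def dersimonian_laird(rr, ci_low, ci_high):
    """Random-effects pooling of relative risks (DerSimonian-Laird).

    rr, ci_low, ci_high: per-study relative risk and 95% CI bounds.
    Returns the pooled RR with its 95% CI.
    """
    log_rr = np.log(rr)
    # Back out each study's standard error from its 95% CI width.
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2                                 # fixed-effect weights
    k = len(rr)
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)              # between-study variance
    w_star = 1.0 / (se**2 + tau2)                   # random-effects weights
    pooled = np.sum(w_star * log_rr) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    return (np.exp(pooled),
            np.exp(pooled - 1.96 * se_pooled),
            np.exp(pooled + 1.96 * se_pooled))

# Illustrative (hypothetical) study-level estimates, not the IPD-Work data.
rr = np.array([1.40, 1.25, 1.10])
lo = np.array([1.05, 0.95, 0.85])
hi = np.array([1.87, 1.64, 1.42])
print(dersimonian_laird(rr, lo, hi))
```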
Abstract:
Traditionally, the formal scientific output in most fields of natural science has been limited to peer-reviewed academic journal publications, with less attention paid to the chain of intermediate data results and their associated metadata, including provenance. In effect, this has constrained the representation and verification of the data provenance to the confines of the related publications. Detailed knowledge of a dataset’s provenance is essential to establish the pedigree of the data for its effective re-use, and to avoid redundant re-enactment of the experiment or computation involved. It is increasingly important to determine the authenticity and quality of open-access data, especially considering the growing volumes of datasets appearing in the public domain. To address these issues, we present an approach that combines the Digital Object Identifier (DOI) – a widely adopted citation technique – with existing, widely adopted climate science data standards to formally publish detailed provenance of a climate research dataset as an associated scientific workflow. This is integrated with linked-data compliant data re-use standards (e.g. OAI-ORE) to enable a seamless link between a publication and the complete trail of lineage of the corresponding dataset, including the dataset itself.
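To make the proposed linkage concrete, here is a minimal sketch (in Python, using rdflib) of an OAI-ORE resource map aggregating a DOI-identified dataset, its workflow provenance trace, and the related publication. All URIs are hypothetical placeholders; the paper's actual identifiers and standards stack are not reproduced here.

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

ORE = Namespace("http://www.openarchives.org/ore/terms/")

g = Graph()
g.bind("ore", ORE)
g.bind("dcterms", DCTERMS)

# Hypothetical identifiers for illustration only.
rem = URIRef("https://example.org/rem/climate-run-42")           # resource map
agg = URIRef("https://example.org/aggregation/climate-run-42")   # aggregation
doi = URIRef("https://doi.org/10.0000/example.dataset")          # dataset DOI
workflow = URIRef("https://example.org/provenance/workflow-42")  # workflow trace
paper = URIRef("https://doi.org/10.0000/example.paper")          # publication DOI

g.add((rem, RDF.type, ORE.ResourceMap))
g.add((rem, ORE.describes, agg))
g.add((agg, RDF.type, ORE.Aggregation))
# The aggregation ties the dataset, its provenance workflow,
# and the related publication into one citable package.
for resource in (doi, workflow, paper):
    g.add((agg, ORE.aggregates, resource))
g.add((doi, DCTERMS.provenance, workflow))
g.add((paper, DCTERMS.references, doi))

print(g.serialize(format="turtle"))
```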
Abstract:
The Open Government movement is intended to bring greater transparency and comprehensibility to public administration. Open Finance apps make financial figures and related information accessible in an understandable form and illustrate their relative magnitudes.
Abstract:
These data result from an investigation examining the interplay between dyadic rapport and consequential behavior-mirroring. Participants responded to a variety of interpersonally focused pretest measures prior to their engagement in videotaped interdependent tasks (coded for interactional synchrony using Motion Energy Analysis [17,18]). A post-task evaluation of rapport and other related constructs followed each exchange. Four studies shared these same dependent measures, but asked distinct questions: Study 1 (N_dyad = 38) explored the influence of perceived responsibility and gender-specificity of the task; Study 2 (N_dyad = 51) focused on dyad sex composition; Studies 3 (N_dyad = 41) and 4 (N_dyad = 63) examined the impact of cognitive load on the interactions. Versions of the data are structured with both individual and dyad as the unit of analysis. Our data possess strong reuse potential for theorists interested in dyadic processes and are especially pertinent to questions about dyad agreement and the association between interpersonal perception and behavior.
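Motion Energy Analysis reduces each person's video to a time series of frame-to-frame pixel change; interactional synchrony is then commonly scored with windowed, lagged cross-correlations between the two members' series. The sketch below illustrates that general approach on synthetic data; it is not the coding pipeline of [17,18], and all parameters are illustrative.

```python
import numpy as np

def windowed_synchrony(a, b, win=100, max_lag=25, step=50):
    """Mean peak windowed cross-correlation between two motion-energy series.

    a, b: equal-length 1-D arrays (one motion-energy series per dyad member).
    win: window length in frames; max_lag: maximum lead/lag searched; step: hop size.
    """
    peaks = []
    for start in range(max_lag, len(a) - win - max_lag, step):
        x = a[start:start + win]
        x = (x - x.mean()) / (x.std() + 1e-12)
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            y = b[start + lag:start + lag + win]
            y = (y - y.mean()) / (y.std() + 1e-12)
            r = float(np.mean(x * y))   # Pearson r at this lag
            best = max(best, abs(r))    # keep the strongest coupling per window
        peaks.append(best)
    return float(np.mean(peaks))

# Synthetic demo: member B loosely mirrors member A with a short delay.
rng = np.random.default_rng(0)
a = np.abs(rng.normal(size=2000)).cumsum() % 50   # fake motion-energy series
b = np.roll(a, 10) + rng.normal(scale=2.0, size=a.size)
print(windowed_synchrony(a, b))
```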
Abstract:
Clinical Research Data Quality Literature Review and Pooled Analysis
We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 errors per 10,000 fields to 5019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70–5019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers. Error rates for data processed with single entry in the presence of on-screen checks were comparable to double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis
Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This current lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality, which builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization for data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractor's Perceptions of Factors Impacting the Accuracy of Abstracted Data
Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses. Factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess abstractors' perceptions of these factors. The Delphi process identified 9 factors not found in the literature, and differed from the literature on 5 of the factors in the top 25%. The Delphi results refuted seven factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms
Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates. Distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support employed in a sample of clinical research data collection forms. We show that the cognitive load required for abstraction in 61% of the sampled data elements was high, exceedingly so in 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping, or calculation. The representational analysis used here can be used to identify data elements with high cognitive demands.
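For readers unfamiliar with the per-10,000-field convention used above, a small worked example (with hypothetical counts) shows how such rates are computed and how a pooled rate is formed:

```python
# Hypothetical field counts, illustrating how per-10,000-field
# error rates like those pooled in the review are computed.
studies = [
    {"errors": 140, "fields_checked": 20_000},  # e.g. medical record abstraction
    {"errors": 3,   "fields_checked": 15_000},  # e.g. double data entry
]

for s in studies:
    s["rate_per_10k"] = 10_000 * s["errors"] / s["fields_checked"]

# Pooled rate: total errors over total fields inspected, not a mean of rates.
pooled = 10_000 * sum(s["errors"] for s in studies) \
       / sum(s["fields_checked"] for s in studies)

print([round(s["rate_per_10k"], 1) for s in studies])  # [70.0, 2.0]
print(round(pooled, 1))                                # 40.9
```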
Abstract:
In parallel to the effort of creating Open Linked Data for the World Wide Web, there are a number of projects aimed at developing the same technologies in the context of closed environments such as private enterprises. In this paper, we present the results of research on interlinking structured data for use in Idea Management Systems - a still rare breed of knowledge management systems dedicated to innovation management. In our study, we show the process of extending an ontology that initially covers only the Idea Management System structure towards the concept of linking with distributed enterprise data and public data using Semantic Web technologies. Furthermore, we point out how the established links can help to solve the key problems of contemporary Idea Management Systems.
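As an illustration of the kind of interlinking described, the sketch below (Python with rdflib) connects a record from a hypothetical idea-management ontology to an internal enterprise resource and onward to public linked data via owl:sameAs. Every namespace, URI, and property name here is invented for illustration and is not the paper's actual ontology.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS

# Hypothetical namespaces: an idea-management ontology and enterprise data.
GI = Namespace("http://example.com/ideas/ontology#")
ENT = Namespace("http://example.com/enterprise/")

g = Graph()
g.bind("gi", GI)
g.bind("owl", OWL)

idea = URIRef("http://example.com/ideas/idea/1138")
g.add((idea, RDF.type, GI.Idea))
g.add((idea, RDFS.label, Literal("Solar-powered charging kiosk")))
# Link into distributed enterprise data...
g.add((idea, GI.relatesToProduct, ENT["products/charging-station"]))
# ...and out to public linked data, so reviewers can pull in external context.
g.add((ENT["products/charging-station"], OWL.sameAs,
       URIRef("http://dbpedia.org/resource/Charging_station")))

print(g.serialize(format="turtle"))
```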
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Computational journalism involves the application of software and technologies to the activities of journalism, and it draws from the fields of computer science, the social sciences, and media and communications. New technologies may enhance the traditional aims of journalism, or may initiate greater interaction between journalists and information and communication technology (ICT) specialists. The enhanced use of computing in news production is related in particular to three factors: larger government data sets becoming more widely available; the increasingly sophisticated and ubiquitous nature of software; and the developing digital economy. Drawing upon international examples, this paper argues that computational journalism techniques may provide new foundations for original investigative journalism and increase the scope for new forms of interaction with readers. Computational journalism provides a major opportunity to enhance the delivery of original investigative journalism, and to attract and retain readers online.