170 results for multiple data sources


Relevance: 90.00%

Abstract:

Background: Cohort studies can provide valuable evidence of cause-and-effect relationships but are subject to loss of participants over time, limiting the validity of findings. Computerised record linkage offers a passive and ongoing method of obtaining health outcomes from existing routinely collected data sources. However, the quality of record linkage relies on the availability and accuracy of common identifying variables. We sought to develop and validate a method for linking a cohort study to a state-wide hospital admissions dataset with limited availability of unique identifying variables.

Methods: A sample of 2000 participants from a cohort study (n = 41 514) was linked to a state-wide hospitalisations dataset in Victoria, Australia, using the national health insurance (Medicare) number and demographic data as identifying variables. Availability of the health insurance number was limited in both datasets; linkage was therefore undertaken both with and without this number, and agreement was tested between the two algorithms. Sensitivity was calculated for a sub-sample of 101 participants with a hospital admission confirmed by medical record review.

Results: Of the 2000 study participants, 85% were found to have a record in the hospitalisations dataset when the national health insurance number and sex were used as linkage variables, and 92% when demographic details only were used. When agreement between the two methods was tested, the disagreement fraction was 9%, mainly due to "false positive" links when demographic details only were used. A final algorithm that used multiple combinations of identifying variables resulted in a match proportion of 87%. Sensitivity of this final linkage was 95%.

Conclusions: High-quality record linkage of cohort data with a hospitalisations dataset that has limited identifiers can be achieved using combinations of a national health insurance number and demographic data as identifying variables.
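The cascading use of identifier combinations can be sketched as a simple deterministic linkage pass. This is a minimal illustration rather than the authors' implementation; the field names (`medicare_no`, `sex`, `surname`, `dob`) and the two-pass order are assumptions.

```python
# Deterministic record linkage with cascading identifier combinations.
# Field names are illustrative only, not those of the actual datasets.

def link_record(person, hospital_index_by_medicare, hospital_index_by_demo):
    """Try the strongest identifier combination first, then fall back."""
    # Pass 1: national health insurance number plus sex, when available.
    if person.get("medicare_no"):
        key = (person["medicare_no"], person["sex"])
        if key in hospital_index_by_medicare:
            return hospital_index_by_medicare[key]
    # Pass 2: demographic details only (higher match rate, but more
    # "false positive" links, as the abstract reports).
    key = (person["surname"], person["dob"], person["sex"])
    return hospital_index_by_demo.get(key)
```

Indexing the hospitalisations dataset by each key combination up front keeps every lookup O(1), which matters when linking tens of thousands of cohort members.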

Relevance: 90.00%

Abstract:

This paper demonstrates the affordances of the work diary as a data collection tool for both pilot studies and qualitative research on social interactions. Observation is the cornerstone of many qualitative, ethnographic research projects (Creswell, 2008). However, determining the activities of busy school teams through observation could be likened to joining the dots of a child's drawing to reveal a complex picture of interactions. Teachers, leaders and support personnel are in different locations within a school, performing diverse tasks for a variety of outcomes which, hopefully, achieve a common goal. As a researcher, the quest to observe these busy teams and their interactions with each other was daunting and perhaps unrealistic. The decision to use a diary as part of a wider research project was made to overcome the physical impossibility of simultaneously observing multiple team members. One reported advantage of the diary in research is its suitability as a substitute for lengthy researcher observation, because multiple data sets can be collected at once (Lewis et al., 2005; Marelli, 2007).

Relevance: 90.00%

Abstract:

This paper describes an innovative platform that facilitates the collection of objective safety data around occurrences at railway level crossings, using data sources including forward-facing video, telemetry from trains, and geo-referenced asset and survey data. The platform is being developed with support from the Australian rail industry and the Cooperative Research Centre for Rail Innovation. The paper describes the underlying accident causation model, the development methodology and refinement process, and the data collection platform itself. It concludes with a brief discussion of the benefits this project is expected to provide to the Australian rail industry.

Relevance: 90.00%

Abstract:

The accumulated evidence from more than four decades of education research strongly suggests that parent involvement in schools carries significant benefits for students as well as for the success of schools (e.g., Henderson & Mapp, 2002). Governments in Australia and overseas have supported parent involvement in schools with a range of initiatives, while parent groups have indicated a strong desire for expanded school roles that include participation in formal educational processes, namely curriculum, pedagogy, and assessment. Research has also signalled the need for teachers to engage parents rather than adopt traditional parent-school involvement practices, so that parents can participate as joint educators in their children's schooling alongside teachers (Pushor, 2001). Actually improving the quality of contact and relationships between parents and teachers to enable engagement, however, remains problematic. Coteaching and cogenerative dialoguing originally emerged as an innovative approach in the context of teaching secondary school science. Coteaching brings together the collective expertise of several individuals to expand learning opportunities for students, while cogenerative dialogues refer to sessions in which participants talk, listen, and learn from one another about the process (Roth & Tobin, 2002a). Coteaching and cogenerative dialoguing reportedly benefit students academically and socially while rewarding educators professionally and emotionally through the support and collaboration they receive from fellow coteachers. These benefits ensue because coteaching theoretically positions teachers at one another's elbows, providing new and different understandings about teaching based on first-hand perspectives and shared goals for assisting students to learn. This thesis proposes that coteaching and cogenerative dialoguing may provide a vehicle for improving the quality of contact and relationships between parents and teachers.
To investigate coteaching and cogenerative dialoguing as a parent-teacher engagement mechanism, interpretive ethnographic case study research was conducted involving two parents and a secondary school teacher. Sociological ideas, namely Bourdieu's (1977) fields, habitus, and capitals, together with multiple dialectical concepts such as agency|structure (Sewell, 1992) and agency|passivity (Roth, 2007b, 2010) were assembled into a conceptual framework to examine parent-teacher relationships by describing and explaining cultural production and identity construction throughout the case study. Video and audio recordings of cogenerative dialogues and cotaught lessons comprised the chief data sources. Data were analysed using qualitative techniques such as discourse and conversation analysis to identify patterns and contradictions (Roth & Tobin, 2002a). The use of quality criteria detailed by Guba and Lincoln (2005) gives credence to the way in which ethical considerations infused the planning and conduct of this research. From the processes of data collection and analyses, three broad assertions were proffered. The findings highlight the significance of using multiple coordinated dialectical concepts for analysing the affordances and challenges of coteaching and cogenerative dialogues that include parents and teachers. Adopting the principles and purposes of coteaching and cogenerative dialoguing promoted trusting respectful relationships that generated an equitable culture. The simultaneous processes and tensions between logistics and ethics (i.e., the logistics|ethics dialectic) were proposed as a new way to conceptualise how power was redistributed among the participants. 
Knowledge of positive emotional energy and of ongoing capital exchange, conceived dialectically as the reciprocal interaction among cultural, social, and symbolic capitals (i.e., the dialectical relationship of cultural|social|symbolic capital), showed how coteaching and cogenerative dialoguing facilitated mutual understandings, joint decision-making, and group solidarity. The notion of passivity as the dialectical partner of agency explained how traditional roles and responsibilities were reconfigured and individual and collective agency expanded. Complexities that surfaced when implementing the coteaching and cogenerative dialoguing approach were outweighed by the multiple benefits that accrued for all involved. These benefits included the development of community-relevant and culturally-significant curricula that increased student agency and learning outcomes, heightened parent self-efficacy for participating in and contributing to formal educational processes, and enhanced teacher professionalism. This case study contributes to existing theory, knowledge and practice, and methodology in the research areas of parent-teacher relationships, specifically in secondary schools, and coteaching and cogenerative dialoguing. The study is particularly relevant given the challenges schools and teachers increasingly face to meaningfully connect with parents to better meet the needs of educational stakeholders in times of continual, complex, and rapid societal change.

Relevance: 90.00%

Abstract:

This paper reports on a study that demonstrates how to apply pattern matching as an analytical method in case-study research. Case-study design is appropriate for the investigation of highly-contextualized phenomena that occur within the social world. Case-study design is considered a pragmatic approach that permits employment of multiple methods and data sources in order to attain a rich understanding of the phenomenon under investigation. The findings from such multiple methods can be reconciled in case-study analysis, specifically through a pattern-matching technique. Although this technique is theoretically explained in the literature, there is scant guidance on how to apply the method practically when analyzing data. This paper demonstrates the steps taken during pattern matching in a completed case-study project that investigated the influence of cultural diversity in a multicultural nursing workforce on the quality and safety of patient care. The example highlighted in this paper contributes to the practical understanding of the pattern-matching process, and can also make a substantial contribution to case-study methods.

Relevance: 90.00%

Abstract:

Long-term autonomy in robotics requires perception systems that are resilient to unusual but realistic conditions that will eventually occur during extended missions. For example, unmanned ground vehicles (UGVs) need to be capable of operating safely in adverse and low-visibility conditions, such as at night or in the presence of smoke. The key to a resilient UGV perception system lies in the use of multiple sensor modalities, e.g., operating at different frequencies of the electromagnetic spectrum, to compensate for the limitations of a single sensor type. In this paper, visual and infrared imaging are combined in a Visual-SLAM algorithm to achieve localization. We propose to evaluate the quality of data provided by each sensor modality prior to data combination. This evaluation is used to discard low-quality data, i.e., data most likely to induce large localization errors. In this way, perceptual failures are anticipated and mitigated. An extensive experimental evaluation is conducted on data sets collected with a UGV in a range of environments and adverse conditions, including the presence of smoke (obstructing the visual camera), fire, extreme heat (saturating the infrared camera), low-light conditions (dusk), and at night with sudden variations of artificial light. A total of 240 trajectory estimates are obtained using five different variations of data sources and data combination strategies in the localization method. In particular, the proposed approach for selective data combination is compared to methods using a single sensor type or combining both modalities without preselection. We show that the proposed framework allows for camera-based localization resilient to a large range of low-visibility conditions.
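The pre-combination quality gating described above can be sketched as a simple per-frame filter. The score functions and threshold below are hypothetical stand-ins, not the paper's actual evaluation procedure.

```python
# Sketch: gate each sensor modality on a per-frame quality score before
# data combination, so that low-quality frames (e.g. a visual camera
# obscured by smoke, or an infrared camera saturated by heat) are
# discarded rather than allowed to corrupt the localization estimate.
# Score functions and the 0.5 threshold are illustrative assumptions.

def select_frames(visual_score, infrared_score, threshold=0.5):
    """Return the modalities whose current frame passes the quality gate."""
    usable = []
    if visual_score >= threshold:    # low under smoke or at night
        usable.append("visual")
    if infrared_score >= threshold:  # low under saturating heat
        usable.append("infrared")
    return usable
```

The point of pre-selection is that a frame likely to induce a large localization error is cheaper to anticipate and drop than to recover from after the SLAM estimate has diverged.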

Relevance: 90.00%

Abstract:

This paper proposes an experimental study of quality metrics that can be applied to visual and infrared images acquired from cameras onboard an unmanned ground vehicle (UGV). The relevance of existing metrics in this context is discussed and a novel metric is introduced. Selected metrics are evaluated on data collected by a UGV in clear and challenging environmental conditions, represented in this paper by the presence of airborne dust or smoke. An example of application is given with monocular SLAM estimating the pose of the UGV while smoke is present in the environment. It is shown that the proposed novel quality metric can be used to anticipate situations where the quality of the pose estimate will be significantly degraded due to the input image data. This leads to decisions of advantageously switching between data sources (e.g. using infrared images instead of visual images).
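As a rough illustration of this class of metric, global RMS contrast of a grayscale frame is one simple proxy that collapses when airborne dust or smoke washes out the image. It is a generic stand-in for the idea, not the novel metric the paper proposes.

```python
# Illustrative image-quality proxy: global RMS contrast of a grayscale
# frame, given as a 2-D list of intensities in [0, 255]. Dense smoke
# flattens the intensity distribution, driving this value toward zero.

def rms_contrast(image):
    pixels = [p for row in image for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    return (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5

def prefer_infrared(visual_frame, min_contrast=10.0):
    """Switch to the infrared source when the visual frame is washed out.
    The threshold is an assumed tuning parameter."""
    return rms_contrast(visual_frame) < min_contrast
```

In practice such a score would be computed per frame and fed to the switching decision before the SLAM front end consumes the image.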

Relevance: 90.00%

Abstract:

Loop detectors are the oldest and most widely used traffic data source. On urban arterials, they are mainly installed for signal control. Recently, state-of-the-art Bluetooth MAC Scanners (BMS) have captured significant stakeholder interest for area-wide traffic monitoring. Loop detectors provide flow, a fundamental traffic parameter, whereas BMS provide individual vehicle travel times between BMS stations. Hence, these two data sources complement each other and, if integrated, should increase the accuracy and reliability of traffic state estimation. This paper proposes a model that integrates loop and BMS data for seamless travel time and density estimation on urban signalised networks. The proposed model is validated using both real and simulated data, and the results indicate that its accuracy is over 90%.
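The complementarity of the two sources can be illustrated with the fundamental traffic relation q = k·v: loops supply flow q directly, while a BMS travel time over a known link length supplies speed v, yielding density k. A minimal sketch (variable names and units are assumptions, not the paper's model):

```python
# Fuse loop-detector flow with BMS travel time via q = k * v.
# This illustrates only the underlying identity, not the paper's
# full integration model for signalised networks.

def estimate_density(flow_veh_per_h, link_length_km, travel_time_h):
    """Density (veh/km) from flow (veh/h) and a BMS link travel time (h)."""
    speed_km_per_h = link_length_km / travel_time_h  # speed from BMS
    return flow_veh_per_h / speed_km_per_h           # k = q / v
```

For example, a flow of 1200 veh/h over a 1 km link traversed in 2 minutes implies a speed of 30 km/h and hence a density of 40 veh/km.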

Relevance: 90.00%

Abstract:

In market economies the built environment is largely the product of private-sector property development. Property development is a high-risk entrepreneurial activity: executing expensive projects with long gestation periods in an uncertain environment and into an uncertain future. Risk lies at the core of development: the developer manages the multiple risks of development, and it is the capital injection and financing that are placed at risk. From the developer's perspective the search for development capital is a quest: to access more finance, over a longer term, with fewer conditions and at lower rates. From the supply angle, capital from various sources - banks, insurance companies, superannuation funds, accumulated firm profits, retail investors and private equity - is always seeking above-market returns for limited risk. Property development presents one potentially lucrative, but risky, investment opportunity. Competition for returns on capital produces a continual dynamic evolution of methods for funding property developments, and thus the relationship between capital and development, and the outcomes for the built environment, are in restless, continual evolution. Little is documented about the ways development is financed in Australia, and even less about the consequences for cities. Using publicly available data sources and examples of different development financing from Australian practice, this paper argues that different methods of financing development have different outcomes and consequences for the built environment. The paper also presents an agenda for further research into these themes.

Relevance: 90.00%

Abstract:

Objective: To synthesise recent research on the use of machine learning approaches to mining textual injury surveillance data.

Design: Systematic review.

Data sources: The electronic databases searched included PubMed, Cinahl, Medline, Google Scholar, and Proquest. The bibliography of every relevant article was examined, and associated articles were identified using a snowballing technique.

Selection criteria: For inclusion, articles were required to meet the following criteria: (a) used a health-related database, (b) focused on injury-related cases, and (c) used machine learning approaches to analyse textual data.

Methods: The papers identified through the search were screened, resulting in 16 papers selected for review. Articles were reviewed to describe the databases and methodology used, the strengths and limitations of different techniques, and the quality assurance approaches used. Due to heterogeneity between studies, meta-analysis was not performed.

Results: Occupational injuries were the focus of half of the machine learning studies, and the most common methods described were Bayesian probability or Bayesian network based methods, used either to predict injury categories or to extract common injury scenarios. Models were evaluated through comparison with gold-standard data, content expert evaluation, or statistical measures of quality. Machine learning was found to provide high precision and accuracy when predicting a small number of categories, and was valuable for visualisation of injury patterns and prediction of future outcomes. However, difficulties related to generalizability, source data quality, complexity of models, and integration of content and technical knowledge were discussed.

Conclusions: The use of narrative text for injury surveillance has grown in popularity, complexity and quality over recent years. With advances in data mining techniques, increased capacity for analysis of large databases, involvement of computer scientists in the injury prevention field, and more comprehensive use and description of quality assurance methods in text mining approaches, it is likely that we will see continued growth and advancement in knowledge of text mining in the injury field.
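A minimal sketch of the Bayesian classification approach most commonly reported in the reviewed studies is multinomial naive Bayes with Laplace smoothing over narrative text. The toy injury narratives and category labels below are invented for illustration only.

```python
# Multinomial naive Bayes over short injury narratives (toy example).
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label). Returns priors, likelihood counts, vocab."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return label_counts, word_counts, vocab

def classify(text, label_counts, word_counts, vocab):
    """Pick the label maximising log P(label) + sum log P(word | label)."""
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label, n in label_counts.items():
        lp = math.log(n / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            # Laplace (add-one) smoothing handles unseen words.
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Real surveillance systems classify into many more categories, which is where the review notes precision degrades; the mechanics, however, are as above.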

Relevance: 90.00%

Abstract:

Research on attrition has focused on the economic significance of low graduation rates in terms of costs to students (fees that do not culminate in a credential) and impact on future income. For a student who fails a unit and repeats it multiple times, the financial impact is significant and lasting (Bexley, Daroesman, Arkoudis & James, 2013). There are obvious advantages to the timely completion of a degree, both for the student and the institution. Advantages to students include fee minimisation, enhanced engagement opportunities, an effective pathway to employment, and benefits to sense of worth, morale and cohort identity. Work undertaken by the QUT Analytics Project in 2013 and 2014 explored student engagement patterns, capturing a variety of data sources and, specifically, LMS use among students in 804 undergraduate units in one semester. Units with high failure rates were given further attention, and it was found that students who were repeating a unit were less likely to pass it than students attempting it for the first time. In this repeating cohort, academic and behavioural variables were consistently more significant in the modelling than any demographic variables, indicating that a student's performance at university is far more affected by what they do once they arrive than by where they come from. The aim of this poster session is to examine the findings and commonalities of a number of case studies that articulated the engagement activities of repeating students (collating data from Individual Unit Reports, academic and peer advising programs, and engagement with virtual learning resources). Understanding the profile of the repeating student cohort is therefore as important as considering the characteristics of successful students, so that the institution might be better placed to target repeating students and make proactive interventions as early as possible.

Relevance: 90.00%

Abstract:

Huge amounts of data are generated from a variety of information sources in healthcare, originating from a variety of clinical information systems and corporate data warehouses. The data derived from these sources are used for analysis and trending purposes, playing an influential role as a real-time decision-making tool. The unstructured, narrative data provided by these sources qualify as healthcare big data, and researchers argue that the application of big data in healthcare might improve accountability and efficiency.

Relevance: 90.00%

Abstract:

The concept of big data has already outperformed traditional data management efforts in almost all industries. In other instances it has succeeded in obtaining promising results that provide value from large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics has become increasingly important for describing the data sets and analytical techniques in software applications that are so large and complex, owing to its significant advantages including better business decisions, cost reduction and delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems that contain a large number of data sources with interrelated and interconnected data attributes. These have resulted in challenging and highly dynamic environments, leading to the creation of big data with its innumerable complexities, for instance the sharing of information subject to the security requirements expected by stakeholders. Compared with other sectors, big data analysis in the health sector is still in its early stages. Key challenges include accommodating the volume, velocity and variety of healthcare data in the current deluge of exponential growth. Given the complexity of big data, it is understood that while data storage and accessibility are technically manageable, the application of Information Accountability measures to healthcare big data might be a practical solution in support of information security, privacy and traceability measures. Transparency is one important measure that can demonstrate integrity, a vital factor in healthcare services. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity and controversy about interpretation and, finally, liability [2].

According to current studies [3], Electronic Health Records (EHRs) are key information resources for big data analysis and are also composed of varied co-created values [3]. Common healthcare information originates from, and is used by, different actors and groups, which facilitates understanding of its relationship to other data sources. Consequently, healthcare services often serve as an integrated service bundle. Although a critical requirement in healthcare services and analytics, it is difficult to find a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements. As a remedy, therefore, this research focuses on a systematic approach containing comprehensive guidelines, with the accurate data that must be provided, to apply and evaluate big data analysis until the necessary decision-making requirements are fulfilled to improve the quality of healthcare services. We believe that this approach would subsequently improve quality of life.

Relevance: 90.00%

Abstract:

The identification of safety hazards and risks and their associated control measures provides the foundation for any safety program and essentially determines the scope, content and complexity of an effective occupational health and safety management system. In the case of work-related road safety (WRRS), there is a gap in current knowledge, research and practice regarding the holistic assessment of WRRS safety systems and practice. To mitigate this gap, a multi-level process tool for assessing WRRS safety systems was developed from extensive consultation and practice, informed by theoretical models and frameworks. Data collection for the Organisational Driving Safety Systems Analysis (ODSSA) tool used a case study methodology and included multiple information sources, such as documents, archival records, interviews, direct observations, participant observations, and physical artefacts. Previous trials and applications of the ODSSA have indicated that the tool is applicable to a wide range of organisational fleet environments and settings. This paper reports on the research results and the effectiveness of the ODSSA tool in assessing WRRS systems across a large organisation that recently underwent considerable organisational change, including the amalgamation of multiple organisations. The outcomes of this project identified considerable differences in the degree to which the organisation addressed WRRS across its vehicle fleet operations and provided guidelines for improving organisations' WRRS systems. The ODSSA tool was pivotal in determining WRRS system deficiencies and provided a platform to inform mitigation and improvement strategies.

Relevance: 90.00%

Abstract:

This article presents and evaluates Quantum Inspired models of Target Activation using Cued-Target Recall Memory Modelling over multiple sources of Free Association data. Two questions were evaluated: whether Quantum Inspired models of Target Activation provide a better framework than their classical psychological counterparts, and how robust these models are across different sources of Free Association data. In previous work, a formal model of cued-target recall did not exist, and as such Target Activation could not be assessed directly. Furthermore, the data source used was suspected of suffering from temporal and geographical bias. As a consequence, Target Activation was measured against cued-target recall data as an approximation of performance. Since then, a formal model of cued-target recall (PIER3) has been developed [10], and alternative sources of data have become available. This allowed us to model target activation in cued-target recall directly with human cued-target recall pairs and to use multiple sources of Free Association data. Featural characteristics known to be important to Target Activation were measured for each of the data sources to identify any major differences that might explain variations in performance for each of the models. Each of the activation models was used in the PIER3 memory model for each of the data sources and was benchmarked against cued-target recall pairs provided by the University of South Florida (USF). Two methods were used to evaluate performance: the first measured the divergence between the sets of results using the Kullback-Leibler (KL) divergence, and the second utilized a previous statistical analysis of the errors [9]. Of the three sources of data, two were sourced from human subjects: the USF Free Association Norms and the University of Leuven (UL) Free Association Networks. The third was derived from a new method put forward by Galea and Bruza (2015), in which pseudo Free Association Networks (Corpus-Based Association Networks - CANs) are built using co-occurrence statistics over a large text corpus. It was found that the Quantum Inspired models of Target Activation not only outperformed the classical psychological model but were also more robust across a variety of data sources.
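The KL divergence used in the first evaluation method compares two discrete probability distributions, D_KL(P || Q) = Σ p_i · log(p_i / q_i). A minimal sketch (the toy distributions in the usage note are invented; this is the standard definition, not the article's full evaluation pipeline):

```python
# Kullback-Leibler divergence between two discrete distributions with
# matching support, given as parallel lists of probabilities. Terms with
# p_i = 0 contribute nothing, by the usual convention 0 * log 0 = 0.
import math

def kl_divergence(p, q):
    """D_KL(P || Q); asymmetric, zero iff the distributions are equal."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Note that KL divergence is not symmetric, so the direction of comparison (model output against the USF recall data, or vice versa) matters when reporting results.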