105 results for research data management


Relevance:

90.00%

Publisher:

Abstract:

Public health decision making is critically dependent on accurate, timely and reliable information. There is a widespread belief that most national and sub-national health information systems fail to provide the information support needed for evidence-based health planning and interventions. This situation is more acute in developing nations, where resources are stagnant or decreasing and where demographic transition coincides with a double burden of disease. The literature abounds with accounts of misguided health interventions in developing nations that led to failure and wasted resources, and health information system failure is widely blamed for this situation. Nevertheless, there is a dearth of comprehensive evaluations of existing national or sub-national health information systems, especially in the South-East Asia region. This study attempts to bridge this knowledge gap by evaluating a regional health information system in Sri Lanka. It explores the strengths and weaknesses of the current health information system and related causative factors in a decentralised health system, and then proposes strategic recommendations for reform. A mixed-methods, phased approach was adopted to reach the objectives. An initial self-administered questionnaire survey was conducted among health managers to study their perceptions of the regional health information system and its management support. The survey findings were used to establish the presence of health information system failure in the region and served as a precursor to the more in-depth case study that followed. The sources of data for the case study were a literature review, document analysis and key stakeholder interviews. Health information system resources, health indicators, data sources, data management, data quality, and information dissemination were the six major components investigated.
The study findings reveal that accurate, timely and reliable health information is unavailable in the studied health region and that evidence-based health planning is therefore lacking. Strengths and weaknesses of the current health information system were identified and strategic recommendations were formulated accordingly. It is anticipated that this research will make a significant, multi-fold contribution to health information management in developing countries. First, it attempts to bridge an existing knowledge gap by presenting the findings of a comprehensive case study revealing the strengths and weaknesses of a decentralised health information system in a developing country. Second, it enriches the literature by providing an assessment tool and a research method for the evaluation of regional health information systems. Third, it makes a practical contribution by presenting guidelines for improving health information systems in regional Sri Lanka.

Relevance:

90.00%

Publisher:

Abstract:

- Speeding and crash involvement in Australia
- Speeding recidivist research in Queensland
- Challenges from an Australian perspective
- Auditor-General reviews of speed camera programs
- Implications for future speed management

Relevance:

90.00%

Publisher:

Abstract:

IT-supported field data management benefits on-site construction management by improving access to information and promoting efficient communication between project team members. However, most on-site safety inspections still rely heavily on subjective judgment and manual reporting processes, so observers' experience often determines the quality of risk identification and control. This study aims to develop a methodology for efficiently retrieving safety-related information so that safety inspectors can easily access relevant site safety information for safer decision making. The proposed methodology consists of three stages: (1) development of a comprehensive safety database containing information on risk factors, accident types, the impact of accidents and safety regulations; (2) identification of relationships among different risk factors using statistical analysis methods; and (3) user-specified information retrieval using data mining techniques for safety management. This paper presents the overall methodology and preliminary results of the first stage, conducted with 101 accident investigation reports.
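Stage (2), identifying relationships among risk factors, can be sketched as a simple co-occurrence analysis over accident reports. The records and factor names below are hypothetical, and a chi-square statistic over 2x2 co-occurrence tables stands in for whichever statistical tests the study actually applied.

```python
from itertools import combinations

# Hypothetical accident records: the risk factors noted in each report.
# Factor names are illustrative, not taken from the study's database.
reports = [
    {"working_at_height", "no_guardrail"},
    {"working_at_height", "no_guardrail", "wet_surface"},
    {"wet_surface", "poor_lighting"},
    {"working_at_height", "no_guardrail"},
    {"poor_lighting"},
    {"working_at_height", "wet_surface"},
]

def cooccurrence_chi2(reports, a, b):
    """Chi-square statistic of the 2x2 co-occurrence table for factors a and b."""
    n = len(reports)
    n11 = sum(1 for r in reports if a in r and b in r)
    n10 = sum(1 for r in reports if a in r and b not in r)
    n01 = sum(1 for r in reports if a not in r and b in r)
    n00 = n - n11 - n10 - n01
    row1, row0 = n11 + n10, n01 + n00
    col1, col0 = n11 + n01, n10 + n00
    chi2 = 0.0
    for obs, row, col in ((n11, row1, col1), (n10, row1, col0),
                          (n01, row0, col1), (n00, row0, col0)):
        expected = row * col / n
        if expected:
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Rank factor pairs by strength of association (in either direction).
factors = {f for r in reports for f in r}
ranked = sorted(combinations(sorted(factors), 2),
                key=lambda p: -cooccurrence_chi2(reports, *p))
```

In a full system the top-ranked pairs would be the candidates fed into the stage (3) retrieval step.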

Relevance:

90.00%

Publisher:

Abstract:

Background: When large-scale trials investigate the effects of interventions on appetite, it is paramount to monitor large amounts of human data efficiently. The original hand-held Electronic Appetite Ratings System (EARS) was designed to facilitate the administration and data management of visual analogue scales (VAS) of subjective appetite sensations. The purpose of this study was to validate a novel hand-held method (EARS II (HP® iPAQ)) against the standard pen-and-paper (P&P) method and the previously validated EARS. Methods: Twelve participants (5 male, 7 female, aged 18-40) took part in a fully repeated measures design. Participants were randomly assigned, in a crossover design, to either high-fat (>48% fat) or low-fat (<28% fat) meal days, one week apart, and completed ratings using the three data capture methods ordered according to a Latin square. The first set of appetite sensations was completed in a fasted state, immediately before a fixed breakfast. Thereafter, appetite sensations were completed every thirty minutes for 4 h. An ad libitum lunch was provided immediately before a final set of appetite sensations. Results: Repeated measures ANOVAs were conducted for ratings of hunger, fullness and desire to eat. There were no significant differences between P&P and either EARS or EARS II (p > 0.05). Correlation coefficients between P&P and EARS II, controlling for age and gender, were computed on area under the curve (AUC) ratings. R² values for hunger (0.89), fullness (0.96) and desire to eat (0.95) were statistically significant (p < 0.05). Conclusions: EARS II was sensitive to the impact of a meal and the recovery of appetite during the postprandial period, and is therefore an effective device for monitoring appetite sensations. This study provides evidence and support for further validation of the novel EARS II method for monitoring appetite sensations during large-scale studies. The system's added versatility means that future uses have the potential to monitor a range of other behavioural and physiological measures often important in clinical and free-living trials.
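The AUC and correlation analysis described in the results can be sketched as follows. The times and VAS ratings are invented for illustration, and the plain Pearson correlation here omits the age/gender control used in the study.

```python
# Trapezoidal area under an appetite-rating curve, then a correlation between
# two capture methods. All ratings below are hypothetical illustrations.

def auc_trapezoid(times_min, ratings):
    """Area under a VAS rating curve (mm x minutes) by the trapezoid rule."""
    area = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        area += dt * (ratings[i] + ratings[i - 1]) / 2.0
    return area

def pearson_r(xs, ys):
    """Plain Pearson correlation (the study additionally controlled for age/gender)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

times = [0, 30, 60, 90, 120]   # minutes after the fixed breakfast
pp    = [72, 35, 41, 55, 68]   # hunger on the pen-and-paper method (mm VAS)
ears2 = [70, 33, 44, 52, 70]   # the same sensations on the hand-held device

auc_pp = auc_trapezoid(times, pp)
auc_ears2 = auc_trapezoid(times, ears2)
r = pearson_r(pp, ears2)
```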

Relevance:

90.00%

Publisher:

Abstract:

While undertaking the ANDS RDA Gold Standard Record Exemplars project, we discussed research data sharing with many QUT researchers. Our experiences provided rich insight into researcher attitudes towards their data and its sharing. Generally, we found that traditional altruistic motivations for research data sharing did not inspire researchers, whereas an explanation of the more achievement-oriented benefits was more compelling.

Relevance:

90.00%

Publisher:

Abstract:

Neighbourhood liveability, like liveability generally, is usually measured either by subjective indicators, using surveys of residents' perceptions, or by objective means, using secondary data or relative weights for objective indicators of the urban environment. Rarely have objective and subjective indicators been related to one another in order to understand what constitutes a liveable urban neighbourhood both spatially and behaviourally. This paper explores the use of qualitative (diaries, in-depth interviews) and quantitative (Global Positioning Systems, Geographical Information Systems mapping) liveability research data to examine the perceptions and behaviour of 12 older residents living in six high-density urban areas of Brisbane. Older urban Australians are one of the two principal groups strongly attracted to high-density urban living. The strength of the relationship between the qualitative and quantitative measures was examined, and the results indicate a weak relationship between subjective and objective indicators. Linking the two methods (quantitative and qualitative) is important in obtaining a greater understanding of human behaviour and the lived world of older urban Australians, and in providing a wider picture of the urban neighbourhood.
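One way to quantify the strength of the subjective-objective relationship examined above is a rank correlation between the two sets of indicators. The scores below are hypothetical, not the study's data for the 12 Brisbane residents.

```python
# Spearman rank correlation between a survey-based (subjective) liveability
# score and a GIS-derived (objective) index. All values are invented.

def ranks(xs):
    """1-based ranks of xs (no tie handling, for simplicity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman_rho(xs, ys):
    """Spearman rho computed as the Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

subjective = [7.5, 6.0, 8.0, 5.5, 6.5, 7.0]        # perceived liveability (survey)
objective  = [0.62, 0.70, 0.55, 0.58, 0.66, 0.61]  # hypothetical GIS index

rho = spearman_rho(subjective, objective)
```

A value of rho near zero, as here, would be consistent with the weak relationship the study reports.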

Relevance:

90.00%

Publisher:

Abstract:

This case study explores alternative and experimental methods of research data acquisition through an emerging research methodology, 'Guerrilla Research Tactics' [GRT]. The premise is that the researcher develops covert tactics for attracting and engaging with research participants. These methods range from simple analogue interventions to bespoke physical artefacts containing an embedded digital link to a live, interactive data collecting resource, such as an online poll or survey. These artefacts are purposefully placed in environments where the researcher anticipates an encounter with, and a response from, the potential research participant; the choice of design and placement of artefacts is specific and intentional. This case study assesses the application of the GRT methodology as an alternative, engaging and interactive method of data acquisition for higher degree research. Extending Gauntlett's definition of 'new creative methods… an alternative to language driven qualitative research methods' (2007), it contributes to the existing body of literature addressing creative and interactive approaches to HDR data collection. The case study was undertaken with Master of Architecture and Urban Design research students at QUT in 2012. Typically, students within these creative disciplines view research as a taxing and boring process that distracts them from their studio design focus, and an obstacle many students face is acquiring data from their intended participant groups. In response to these challenges, the authors worked with students to develop research methods that were creative, fun and engaging for both the students and their research participants.
GRT is influenced by and developed from a combination of participatory action research (Kindon, 2008) and unobtrusive research methods (Kellehear, 1993) to enhance social research, taking unobtrusive research in a new direction beyond typical social research methods. The Masters research students developed alternative methods for acquiring data that relied on a combination of analogue design interventions and online platforms, commonly distributed through social networks. They identified critical issues requiring action by the community, and the processes they developed focused on engaging with communities to propose solutions. Key characteristics shared between GRT and guerrilla activism are notions of political issues, the unexpected, the unconventional, and being interactive, unique and thought provoking. The trend of guerrilla activism has been adapted to marketing, communication, gardening, craftivism, theatre, poetry and art. Focusing on the action element and examining current trends within guerrilla marketing, we believe that GRT can be applied to a range of research areas within various academic disciplines.

Relevance:

90.00%

Publisher:

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to obtain in real-life situations due to poor data management, effective preventive maintenance, and the small populations of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset: condition indicators reflect the level of degradation of an asset, while operating environment indicators accelerate or decelerate its lifetime. When these data are available, an alternative to traditional reliability analysis is to model the condition indicators, the operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all based on the principle of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise the three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) in a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data: condition indicators act as response (dependent) variables, whereas operating environment indicators act as explanatory (independent) variables. Nevertheless, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both types of indicator should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also captures the relationship between actual asset health and both condition measurements and operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and the condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators: they are included in the covariate function of EHM and may increase or decrease the hazard relative to the baseline. These indicators arise from the environment in which an asset operates and are not explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of the operating environment indicators may be nil in EHM, the condition indicators are always present, because they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (population characteristics, condition indicators, and operating environment indicators) to predict hazard and reliability effectively. Another is that EHM explicitly investigates the relationship between the condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption from which most covariate-based hazard models suffer does not exist in EHM. Depending on the sample size of failure/suspension times, EHM takes two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) for the baseline hazard. However, in many industry applications failure event data are sparse, and their analysis often involves complex distributional shapes about which little is known. Therefore, to avoid the semi-parametric EHM's restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in these two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of the other existing covariate-based hazard models. The comparison demonstrates that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified: a new parameter estimation method for the case of time-dependent covariate effects and missing data, application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
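The core idea of EHM, a baseline hazard that depends on both time and a condition indicator, scaled by operating environment covariates, can be sketched as follows. The Weibull form echoes the semi-parametric variant described above, but every functional choice and parameter value here is an illustrative assumption, not the thesis's fitted model.

```python
import math

def ehm_hazard(t, condition, environment,
               beta=2.0, eta=1000.0, alpha=0.8, gamma=(0.5, 0.3)):
    """Hazard at time t (hours) given condition and environment readings.

    Baseline: Weibull in time, reformed by the condition indicator
    (e.g. a normalised vibration level), so a degraded asset carries a
    higher baseline. Environment covariates (e.g. load, thermal stress)
    then accelerate or decelerate failure multiplicatively.
    All parameter values are placeholders for illustration.
    """
    baseline = (beta / eta) * (t / eta) ** (beta - 1) * math.exp(alpha * condition)
    covariate = math.exp(sum(g * z for g, z in zip(gamma, environment)))
    return baseline * covariate

def ehm_reliability(t, condition, environment, steps=1000):
    """R(t) = exp(-cumulative hazard), by trapezoidal integration,
    holding the covariates fixed at their current values."""
    dt = t / steps
    cum = 0.0
    prev = ehm_hazard(1e-9, condition, environment)
    for i in range(1, steps + 1):
        cur = ehm_hazard(i * dt, condition, environment)
        cum += (prev + cur) / 2.0 * dt
        prev = cur
    return math.exp(-cum)
```

A worse condition reading raises the hazard at every time, and reliability decays monotonically, which is the qualitative behaviour the model above is built to express.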

Relevance:

90.00%

Publisher:

Abstract:

Due to the development of XML and other data models such as OWL and RDF, sharing data is an increasingly common task, since these data models allow simple syntactic translation of data between applications. However, for data to be shared semantically, there must be a way to ensure that concepts are the same. One approach is to employ commonly used schemas, called standard schemas, which help guarantee that syntactically identical objects have semantically similar meanings. As a result of the spread of data sharing, standard schemas have been widely adopted in a broad range of disciplines and for a wide variety of applications within a very short period of time. However, standard schemas are still in their infancy and have not yet matured or been thoroughly evaluated. It is imperative that the data management research community take a closer look at how well these standard schemas have fared in real-world applications, to identify not only their advantages but also the operational challenges that real users face. In this paper, we examine the usability of standard schemas in a comparison spanning multiple disciplines, and describe our first step at resolving some of these issues in our Semantic Modeling System. We evaluate the Semantic Modeling System through a careful case study of the use of standard schemas in architecture, engineering, and construction, conducted with domain experts. We discuss how our Semantic Modeling System can help with the broader problem, and also discuss a number of challenges that remain.
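The semantic gap described above can be seen in a tiny example: two fragments that look syntactically alike share no semantics unless their tags are qualified by a shared schema vocabulary. The namespace URIs below are made up for illustration.

```python
import xml.etree.ElementTree as ET

# Two documents whose local tag "title" means entirely different things:
# a book title in one vocabulary, an honorific in the other.
doc_a = '<book xmlns="http://example.org/library"><title>Dune</title></book>'
doc_b = '<person xmlns="http://example.org/hr"><title>Dr.</title></person>'

def qualified_tags(xml_text):
    """Namespace-qualified tag names; the namespace carries the semantics."""
    root = ET.fromstring(xml_text)
    return {elem.tag for elem in root.iter()}

tags_a = qualified_tags(doc_a)
tags_b = qualified_tags(doc_b)

# Syntactically both documents contain a <title> element, but once tags are
# qualified against their (standard) schema vocabularies, nothing is shared,
# so a schema-aware application will not conflate them.
shared = tags_a & tags_b
```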

Relevance:

90.00%

Publisher:

Abstract:

This paper explores what we are calling "Guerrilla Research Tactics" (GRT): research methods that exploit emerging mobile and cloud-based digital technologies. We examine case studies in the use of this technology to generate research data directly from the physical fabric and the people of the city. We argue that GRT is a new and novel way of engaging public participation in urban, place-based research because it facilitates the co-creation of knowledge with city inhabitants 'on the fly'. This paper discusses the potential of these new research techniques and what they have to offer researchers operating in the creative disciplines and beyond. This work builds on and extends Gauntlett's "new creative methods" (2007) and contributes to the existing body of literature addressing creative and interactive approaches to data collection.

Relevance:

90.00%

Publisher:

Abstract:

While social enterprises have gained increasing policy attention as vehicles for generating innovative responses to complex social and environmental problems, surprisingly little is known about them. In particular, the social innovation produced by social enterprises (Mulgan, Tucker, Ali, & Sander, 2007) has been presumed rather than demonstrated, and remains under-investigated in the literature. While social enterprises are held to be inherently innovative as they seek to respond to social needs (Nicholls, 2010), there has been conjecture that the collaborative governance arrangements typical of social enterprises may be conducive to innovation (Lumpkin, Moss, Gras, Kato, & Amezcua, in press), as members and volunteers provide a source of creative ideas and are unfettered in such thinking by responsibility for delivering organisational outcomes (Hendry, 2004). However, this is complicated by the sheer array of governance arrangements that exist in social enterprises, ranging from flat participatory democratic structures through to hierarchical arrangements. In continental Europe there has been a stronger focus on democratic participation as a characteristic of social enterprises than in, for example, the USA. In response to this gap in knowledge, a research project was undertaken to identify the population of social enterprises in Australia. The size and composition of this population, and the social innovations initiated by these enterprises, have been reported elsewhere (see Barraket, 2010). The purpose of this paper is to examine innovation in social enterprises more closely, particularly how the collaborative governance of social enterprises might influence innovation.
Given the pre-paradigmatic state of social entrepreneurship research (Nicholls, 2010), and the importance of drawing on established theories in order to advance theory (Short, Moss, & Lumpkin, 2009), a number of conceptual steps are needed to examine how collaborative governance might influence innovation in social enterprises. In this paper, we commence by advancing a definition of what a social enterprise is. In light of our focus on the potential role of collaborative governance in social innovation amongst social enterprises, we go on to consider the collaborative forms of governance prevalent in the Third Sector. Then, collaborative innovation is explored. Drawing on this information and our research data, we finally consider how collaborative governance might affect innovation amongst social enterprises.

Relevance:

90.00%

Publisher:

Abstract:

Big Data is a rising IT trend, similar to cloud computing, social networking and ubiquitous computing. Big Data can offer beneficial scenarios in the e-health arena; however, to gain benefits such as finding cures for infectious diseases while protecting patient privacy, Big Data must be kept secure over long periods of time. It is therefore valuable to be able to analyse Big Data, extracting meaningful information, while the data remains securely stored, which makes the analysis of database encryption techniques essential. In this study, we simulated three technical environments, namely plain text, Microsoft built-in encryption, and a custom Advanced Encryption Standard implementation using a Bucket Index in Data-as-a-Service (AES-DaaS). The results showed that the custom AES-DaaS has a faster range query response time than the Microsoft built-in encryption. Furthermore, the scalability tests showed that there are performance thresholds that depend on the physical IT resources. Therefore, for efficient Big Data management in e-health it is worth examining these scalability limits, even in a cloud computing environment. In addition, when designing an e-health database, both patient privacy and system performance need to be treated as top priorities.
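The bucket-index idea behind the custom AES-DaaS environment can be sketched as follows: each row stores a ciphertext plus a coarse, opaque bucket label, so the server can answer range queries over ciphertext approximately and the client filters the small candidate set after decryption. A real deployment would use AES for the ciphertexts; the XOR "cipher" below is only a stand-in to keep the sketch self-contained, and the bucket width and ages are invented.

```python
import hashlib

BUCKET_WIDTH = 10  # hypothetical: patient age grouped into 10-year buckets

def bucket_label(value):
    """Deterministic, opaque label for the bucket containing value."""
    b = value // BUCKET_WIDTH
    return hashlib.sha256(f"bucket:{b}".encode()).hexdigest()[:12]

def toy_encrypt(value, key=0x5A):
    # Stand-in cipher so the sketch is self-contained; a real system uses AES.
    return bytes(b ^ key for b in str(value).encode())

def toy_decrypt(blob, key=0x5A):
    return int(bytes(b ^ key for b in blob).decode())

# "Server-side" table: each row holds only ciphertext plus its bucket label.
table = [(toy_encrypt(age), bucket_label(age)) for age in [23, 27, 34, 41, 45, 58]]

def range_query(lo, hi):
    """Fetch candidate buckets from the server, then filter after decryption."""
    wanted = {bucket_label(v) for v in range(lo, hi + 1, BUCKET_WIDTH)}
    wanted.add(bucket_label(hi))
    candidates = [toy_decrypt(ct) for ct, lbl in table if lbl in wanted]
    return sorted(v for v in candidates if lo <= v <= hi)
```

The server never sees plaintext values, only labels; the trade-off is that wider buckets leak less but force the client to decrypt and discard more candidates, which is one source of the performance thresholds noted above.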

Relevance:

90.00%

Publisher:

Abstract:

The continuous growth of XML data poses a great concern in the area of XML data management. The need to process large amounts of XML data complicates many applications, such as information retrieval and data integration. One way of simplifying this problem is to break the massive amount of data into smaller groups by applying clustering techniques. However, XML clustering is an intricate task that may involve processing both the structure and the content of XML data in order to identify similar XML data. This research presents four clustering methods: two that utilise the structure of XML documents and two that utilise both the structure and the content. The two structural clustering methods have different data models, one based on a path model and the other on a tree model. These methods employ rigid similarity measures which aim to identify corresponding elements between documents with different or similar underlying structures. The two clustering methods that utilise both structural and content information differ in how the structure and content similarities are combined. One calculates document similarity using a linear weighted combination of structure and content similarities, with the content similarity based on a semantic kernel; the other calculates the distance between documents by a non-linear combination of the structure and content of XML documents, also using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the one based on the path model, as the tree similarity measure does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of content information on most test document collections.
To further the research, the structural clustering method based on the tree model is extended and employed in XML transformation. The experimental results show that the proposed transformation process is faster than a traditional transformation system that translates and converts the source XML documents sequentially. Also, the schema matching step of the XML transformation produces a better matching result in a shorter time.
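The linear weighted combination strategy, paired with a path-model structural similarity, can be shown in miniature as follows. The Jaccard measure over root-to-node paths and the bag-of-words cosine are deliberate simplifications: the actual methods use rigid structural measures and a semantic kernel.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def tag_paths(xml_text):
    """Set of root-to-node tag paths (a simple path-model representation)."""
    root = ET.fromstring(xml_text)
    paths = set()
    def walk(node, prefix):
        path = prefix + "/" + node.tag
        paths.add(path)
        for child in node:
            walk(child, path)
    walk(root, "")
    return paths

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def content_sim(xml_a, xml_b):
    """Cosine similarity over bag-of-words text content (no semantic kernel)."""
    def bag(t):
        return Counter(" ".join(ET.fromstring(t).itertext()).lower().split())
    ca, cb = bag(xml_a), bag(xml_b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sum(v * v for v in ca.values()) ** 0.5
    nb = sum(v * v for v in cb.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def combined_sim(xml_a, xml_b, w_struct=0.5):
    """Linear weighted combination of structure and content similarity."""
    return (w_struct * jaccard(tag_paths(xml_a), tag_paths(xml_b))
            + (1 - w_struct) * content_sim(xml_a, xml_b))

doc_a = "<paper><title>xml clustering</title><body>tree model</body></paper>"
doc_b = "<paper><title>xml clustering</title><body>path model</body></paper>"
sim = combined_sim(doc_a, doc_b)
```

Pairwise similarities like `sim` would then feed any standard clustering algorithm over the document collection.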

Relevance:

90.00%

Publisher:

Abstract:

Lyngbya majuscula is a cyanobacterium (blue-green alga) occurring naturally in tropical and subtropical coastal areas worldwide. Deception Bay, in northern Moreton Bay, Queensland, has a history of Lyngbya blooms and forms the case study for this investigation. The South East Queensland (SEQ) Healthy Waterways Partnership, a collaboration between government, industry, research and the community, was formed to address issues affecting the health of the river catchments and waterways of South East Queensland. The Partnership coordinated the Lyngbya Research and Management Program (2005-2007), which culminated in a Coastal Algal Blooms (CAB) Action Plan for harmful and nuisance algal blooms such as Lyngbya majuscula. This first phase of the project was predominantly scientific in nature and also facilitated the collection of additional data to better understand Lyngbya blooms. The second phase, the SEQ Healthy Waterways Strategy 2007-2012, is now underway to implement the CAB Action Plan and as such is more management focussed. As part of the first phase, a science model for the initiation of a Lyngbya bloom was built using a Bayesian Network (BN). The structure of the science BN was designed by the Lyngbya Science Working Group (LSWG), which was drawn from diverse disciplines, and the BN was then quantified with annual data and expert knowledge. Scenario testing confirmed the expected temporal nature of bloom initiation, and it was recommended that the next version of the BN be extended to take this into account. Elicitation for this BN thus occurred at three levels: design, quantification and verification. The first level involved construction of the conceptual model itself, definition of the nodes within the model and identification of sources of information to quantify the nodes. The second level included elicitation of expert opinion and representation of this information in a form suitable for inclusion in the BN.
The third and final level concerned the specification of the scenarios used to verify the model. The second phase of the project provides the opportunity to update the network with the more detailed data collected during the first phase. The temporal nature of Lyngbya blooms is of particular interest, as management efforts need to be directed to the periods when the Bay is most vulnerable to bloom initiation. To model the temporal aspects of Lyngbya we use Object Oriented Bayesian Networks (OOBNs) to create 'time slices' for each period of interest during the summer. OOBNs provide a framework that simplifies knowledge representation and facilitates the reuse of nodes and network fragments. An OOBN is more hierarchical than a traditional BN, with any sub-network able to contain other sub-networks; connectivity between sub-networks is an important feature and allows information to flow between the time slices. This study demonstrates a more sophisticated use of expert information within Bayesian networks, combining expert knowledge with data (categorised using expert-defined thresholds) within an expert-defined model structure. Based on the results of the verification process, the experts are able to target areas requiring greater precision and those exhibiting temporal behaviour. Each time slice incorporates the data for its own period for each of the temporal nodes (instead of the annual data used in the previous static science BN) and includes lag effects that allow the effect of one time slice to flow to the next. We demonstrate a concurrent steady increase in the probability of initiation of a Lyngbya bloom and conclude that the inclusion of temporal aspects in the BN model is consistent with the perceptions of Lyngbya behaviour held by the stakeholders.
This extended model provides a more accurate representation of the increased risk of algal blooms in the summer months and shows that the opinions elicited to inform a static BN can be readily extended to a dynamic OOBN, providing more comprehensive information for decision makers.
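The time-sliced lag effect described above can be illustrated with a toy one-node-per-slice model, where the probability of bloom initiation in a slice depends on that slice's conditions and on the previous slice's bloom state. The conditional probabilities and variable names are invented placeholders, not the elicited values of the Lyngbya network.

```python
# P(bloom_t = True | high_nutrients_t, bloom_{t-1}): hypothetical CPT.
P_BLOOM = {
    (True,  True):  0.70,
    (True,  False): 0.40,
    (False, True):  0.30,
    (False, False): 0.10,
}

def bloom_probabilities(nutrient_series, p_initial=0.05):
    """Propagate P(bloom) through the time slices, marginalising the lag node.

    nutrient_series: per-slice booleans for "high nutrient availability".
    Each slice's probability feeds the next via the lag effect.
    """
    p_prev = p_initial
    out = []
    for high_nutrients in nutrient_series:
        p = (P_BLOOM[(high_nutrients, True)] * p_prev
             + P_BLOOM[(high_nutrients, False)] * (1 - p_prev))
        out.append(p)
        p_prev = p
    return out

# Hypothetical scenario: nutrient availability rising over early summer.
probs = bloom_probabilities([False, True, True, True])
```

Under this scenario the bloom probability rises slice by slice, the same qualitative "steady increase" over summer that the extended OOBN is reported to show.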